Posts tagged ‘facebook’
As just about everyone is already aware, Facebook has been up to a bunch of big-brotherly stuff lately, including “instant personalization” (making your identity and data available to third-party sites you visit), arguing that ToS violations should be treated as criminal offenses, and forcing you to make your “interests” public (or delete them). Overall, it looks like they’re making a bold move to take control of everyone’s identity and connections, privacy be damned.
- The questionable feature was linking your statuses to “Connections” pages. The outrage was based on the meme “if your status contains the word FBI then the FBI will have a record of it,” which appears to have started here. That article is full of hyperbole and, understandably, appears to have been widely misunderstood to be claiming that even private statuses appear on Connections pages (they don’t). There’s really nothing new in terms of the visibility of your statuses: Facebook already had real-time search for public statuses, and the only difference is that someone can now click on the “FBI” page instead of having to type “FBI” into the search box.
- The minor bug was that Facebook started listing Connect-enabled websites you visit in the “Applications” tab in your privacy settings. The sites didn’t get your identity or any of your data, nor did they have privileges to post to your wall. The fact that you visited them was not visible to anyone else. No actual harm was done. And yet an article titled “Facebook’s new features secretly add apps to your profile” alleged all of these things without making any real effort to check with Facebook. Facebook quickly fixed the bug and contacted the authors, and they updated the story, but that did little to quell the rumors, which took on a life of their own.
- The non-issue was Facebook leaking your IP address in email notifications. This is normal behavior: most webmail providers, except Gmail, put the sender’s IP into the message header as a spam-prevention technique. This kicked up another shitstorm.
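As an aside, this is easy to see for yourself: when present, the originating IP sits in an ordinary header of the raw message. Here is a minimal sketch using Python's standard email module; the raw message and the X-Originating-IP value are made up for illustration, and the exact header name varies by provider:

```python
import email

# Toy raw message showing how a webmail provider might expose the
# sender's IP as an ordinary header. The header name and address
# below are illustrative examples, not any specific provider's output.
raw = (
    "From: notifications@example.com\n"
    "To: you@example.net\n"
    "X-Originating-IP: [203.0.113.42]\n"
    "Subject: You have a new notification\n"
    "\n"
    "Body text here.\n"
)

msg = email.message_from_string(raw)
# The value is conventionally bracketed; strip the brackets.
ip = msg.get("X-Originating-IP", "").strip("[]")
print(ip)  # 203.0.113.42
```

Anyone you email can read this header, which is why it is a non-issue rather than a leak.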
In spite of these unfair accusations, it is hard for me to feel any sympathy for the beleaguered company. This is how public opinion works, and they can’t claim not to have seen it coming. As this fantastic visualization by Matt McKeon shows, Facebook has been on a long and consistent path to make all of your information public, essentially pulling a giant bait-and-switch on their users. They stepped up the pace recently, asked their users to give up too much too fast, and something just snapped.
I think Facebook underestimated the extent to which privacy correlates with trust. They were forgiven for Beacon and other problems in the past, but after the most recent series of privacy violations, it became clear that these were not missteps but deliberate actions. I believe that Facebook’s relationship with its users has changed fundamentally, and isn’t going to mend any time soon. Perhaps Facebook’s reckoning is that they are now big enough that it doesn’t matter any more. That remains to be seen.
On a personal note, someone pretty high up at Facebook emailed me a couple of months ago (although “not in an official capacity”) to have a discussion about privacy issues with some of their upcoming product launches. Unfortunately I was traveling at the time, and when I got back they were no longer interested. I guess by then it was too close to f8 and all the important decisions had been made. I can’t help wondering if the outcome might have been different if I’d been able to meet with them — perhaps they might have eased off just a little bit on their world-domination plans and avoided the straw that broke the camel’s back. But I suspect that that’s just wishful thinking, given that the imperative for their current push in all likelihood came from the very top.
I was on a panel at the second FTC privacy roundtable in Berkeley on Thursday. Meeting a new community of people is always a fascinating experience. As a computer scientist, I’m used to showing up to conferences in jeans and a T-shirt; instead I found myself dressing formally and saying things like “oh, not at all, the honor is all mine!”
This post will also be the start of a new direction for this blog. So far, I’ve mostly confined myself to “doing the math” and limiting myself to factual exposition. That’s going to change, for two reasons:
- The central theme of this blog and of my Ph.D. dissertation — the failure of data anonymization — now seems to be widely accepted in policy circles. This is due in large part to Paul Ohm’s excellent paper, which is a must-read for anyone interested in this topic. I no longer have to worry about the acceptance of the technical idea being “tainted” by my opinions.
- I’ve been learning about the various facets of privacy — legal, economic, etc. — for long enough to feel confident in my views. I have something to contribute to the larger discussion of where technological society is heading with respect to privacy.
Underrepresentation of scientists
As it turned out, I was the only academic computer scientist among the 35 panelists. I found this very surprising. The underrepresentation is not because computer scientists have nothing to contribute — after all, there were other CS Ph.Ds from industry groups like Mozilla. Rather, I believe it is a consequence of the general attitude of academic scientists towards policy issues. Most researchers consider it not worth their time, and a few actively disdain it.
The problem is even deeper: academics have the same disdainful attitude towards the popular exposition of science. The underlying reason is that the goal in academia is to impress one’s peers; making the world better is merely a side-effect, albeit a common one. The incentive structure in academia needs to change. I will pick up this topic in future posts.
The FTC has an admirable approach to regulation
As I found out in the course of the day’s panels, the FTC is not about prescribing or mandating what to do. Pushing a specific privacy-enhancing technology isn’t the kind of thing they are interested in doing at all. Rather, they see their role as getting the market to function better and the industry to self-regulate. The need to avoid harming innovation was repeatedly emphasized, and there was a lot of talk about not throwing the baby out with the bathwater.
The following were the potential (non-baby-hurting) initiatives that were most talked about:
- Market transparency. Markets can only work well when there is full information, and when it comes to privacy the market has failed horribly. Users have no idea what happens to their data once it’s collected, and no one reads privacy policies. Regulation that promotes transparency can help the market fix itself.
- Consumer education. This is a counterpart to the previous point. Education about privacy dangers as well as privacy technologies can help.
- Enforcement. A few bad apples have been responsible for the most egregious privacy SNAFUs. The larger players are by and large self-regulating. The FTC needs to work with law enforcement to punish the offenders.
- Carrots and sticks. Even the specter of regulation, corporate representatives said, is enough to get the industry to self-regulate. Many would disagree, but I think a carrots-and-sticks approach can be made to work.
- Incentivizing adoption of PETs (privacy enhancing technologies) in general. The question of how the FTC can spur the adoption of PETs was brought up on almost every panel, but I don’t think there were any halfway convincing answers. Someone mentioned that the government in general could go into the market for PETs, which seems reasonable.
As a libertarian, I think the overall non-interventionist approach here is exactly right. I’m told that the FTC is rather unusual among US regulatory agencies in this regard (which makes sense, considering that the FCC, for example, spends its time protecting children from breasts when it is not making up lists of words).
Facebook’s two faces
Facebook public policy director Tim Sparapani, who was previously with the ACLU, made a variety of comments on the second panel that were bizarre, to put it mildly. Take a look (my comments are in sub-bullets):
- “We absolutely compete on privacy.”
- That’s a weird definition of “compete.” Facebook has a history of rolling out privacy-infringing updates, such as Beacon, the ToS changes, and the recent update that made the graph public. Then they wait to see if there’s an outcry and roll back some of the changes. It is hard to think of another company that has had such a cavalier approach.
- “There are absolutely no barriers to entry to create a new social network.”
- Except for that little thing called the network effect, which is the mother of all barriers to entry. In a later post I will analyze why Facebook has reached a critical level of penetration in most markets which makes it nearly unassailable as a general-purpose social network.
- “Our users have learned to trust us.”
- I don’t even know what to say about this one.
- “We are a walled garden.”
- Sparapani is confusing two different senses of “walled garden” here. This was said in response to a statement by the Google rep about Google’s features to let users migrate their data to other services (which I find very commendable). In this sense, Facebook is indeed a walled garden, and doesn’t allow migration, which is a bad thing. But Sparapani said he meant it in the sense that Facebook doesn’t sell user data wholesale to other companies. That sounds like good news, except that third party app developers end up sharing user data with other entities, because enforcement of the application developer Terms of Service is virtually non-existent.
- “If you delete the data it’s gone.” (in the context of deleting your account)
- That might be true in a strict sense, but it is misleading. Deleting all your data is actually impossible to achieve because most pieces of data belong to more than one user. Each of your messages will live on in the other person’s inbox (and it would be improper to delete it from theirs). Similarly, photos in which you appear, which you would probably like gone when you delete your account, still live on in the album of whoever took the picture. The same goes for your pokes, likes and other multi-user interactions. These are the very things that make a social network social.
- “We now have controls on privacy at the moment you share data. This is an extraordinary innovation and our engineers are really proud of it.”
- The first part of that statement is true: you can now change the privacy controls on each of your Facebook status messages independently. The second part is downright absurd. It is completely trivial to implement from an engineering perspective (LiveJournal, for instance, has had it for a decade).
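To put the “trivial” claim in perspective, here is a toy sketch of per-post audience controls; the function, names, and friendship model are my own illustrations, not Facebook's actual design:

```python
# Toy per-post privacy check: each post carries its own audience
# setting, and visibility is a simple lookup at read time.
# All names and the friendship model here are illustrative.

FRIENDS = {"alice": {"bob"}, "bob": {"alice"}}

def can_view(post, viewer):
    if post["audience"] == "public":
        return True
    if post["audience"] == "friends":
        return viewer == post["author"] or viewer in FRIENDS[post["author"]]
    return viewer == post["author"]  # "only me"

post = {"author": "alice", "audience": "friends", "text": "hello"}
print(can_view(post, "bob"))    # True
print(can_view(post, "carol"))  # False
```

The hard parts of a real system are scale and UI, not the access check itself, which is why calling it an “extraordinary innovation” rings hollow.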
There were more absurd statements, but you get the picture. It’s not just the fact that Sparapani’s comments were unhinged from reality that bothers me — the general tone was belligerent and disturbing. I missed a few minutes of the panel, during which he apparently responded to a criticism from Chris Conley of the ACLU by saying “I was at the ACLU longer than you’ve been there.” That is unprofessional, undignified, and a non-answer. Amusingly, he claimed that Facebook was “very proud” of various aspects of their privacy track record at least half a dozen times in the course of the panel.
Contrast all this with Mark Zuckerberg’s comments in an interview with Michael Arrington, which can be summed up as “the age of privacy is over.” That article goes on to say that Facebook’s actions caused the shift in social norms (to the extent that they have shifted at all) rather than merely responding to them. Either way, it is unquestionable that Facebook’s actual behavior at present pays no more than lip service to privacy, and Zuckerberg’s statement is a more-or-less honest reflection of that. On the other hand, as I have shown, the company sings a completely different tune when the FTC is listening.
Engaging privacy skeptics
Aside from Facebook’s shenanigans, I feel that there are two groups in the privacy debate who are talking past each other. One side is represented by consumer advocates, and is largely echoed by the official position of the FTC. The other side’s position can be summed up as “yeah, whatever.” When expressed coherently, this position has three tenets (with the caveat that not all privacy skeptics adhere to all three):
- Users don’t care about privacy any more
- Even if they do, privacy is impossible to achieve in the digital age, so get over it
- There are no real harms arising from privacy breaches
An illustrative example is the mainstream-media representative who was at the workshop, covering it on Twitter through the lens of his preconceptions.
Privacy scholars never engage with the skeptics because the skeptical viewpoint appears obviously false to anyone who has done some serious thinking about privacy. However, it is crucial to engage the opponents, because (1) the skeptical view is extremely common, and (2) many of the startups coming out of the Valley fall into this group, and they are going to have control over increasing amounts of user data in the years to come.
The “privacy is dead” view was most famously voiced by Scott McNealy. In its extreme form it is easy to argue against: “start streaming yourself live on the Internet 24/7, and then we’ll talk.” (To be sure, a few people did this 10 years ago as a publicity stunt, but it is obvious that the vast majority of people aren’t ready for that level of monitoring and data collection.) But engaging with skeptics isn’t about refutation; it’s about dealing with a different way of thinking and getting the message across to the other side. Unfortunately, real engagement hasn’t been happening.
I have a double life in academia and the startup world, and I think this puts me in a somewhat unusual position of being able to appreciate both sides of the argument. My own viewpoint is somewhere in the middle; I will expand on this theme in future blog posts.