Posts tagged ‘policy’

Unlikely Outcomes? A Distributed Discussion on Decentralized Personal Data Architectures

In recent years there has been a mushrooming of decentralized social networks, personal data stores and other such alternatives to the current paradigm of centralized services. In the academic paper A Critical Look at Decentralized Personal Data Architectures, published last year, my coauthors and I challenged the feasibility and desirability of these alternatives (I also gave a talk about this work). Based on the feedback, we realized it would be useful to explicate some of our assumptions and implicit viewpoints, add context to our work, clarify some points that were unclear, and engage with our critics on some of the more contentious claims.

We found the perfect opportunity to do this via an invitation from Unlike Us Reader, produced by the Institute of Network Cultures — it’s a magazine run by a humanities-oriented group of people, with a long-standing interest in digital culture, but they also attract some politically oriented developers. The Unlike Us conference, from which this edited volume stems, is also very interesting. [1]

Three of the five original authors — Solon, Vincent and I — teamed up with the inimitable Seda Gürses for an interview-style conversation (PDF). Seda is unique among privacy researchers — one of her interests is to understand and reconcile the often maddeningly divergent viewpoints of the different communities that study privacy, so she was the ideal person to play the role of interlocutor. Seda solicited feedback from about two dozen people in the hobbyist, activist and academic communities, and synthesized the responses into major themes. Then the three of us took turns responding to the prompts, which Solon, with Seda’s help, organized into a coherent whole. A majority of the commenters consented to making their feedback public, and Seda has collected the discussion into an online appendix.

This was an unusual opportunity, and I’m grateful to everyone who made it happen, particularly Seda and Solon who put in an immense amount of work. My participation was very enjoyable. Research proceeds at such a pace that we rarely have the opportunity to look back and cogitate about the process; when we do, we’re often surprised by what we find. For example, here’s something I noted with amusement in one of my responses:

My interest in decentralized social networking apparently dates to 2009, as I just discovered by digging through my archives. I’d signed up to give a talk on pitfalls of social networking privacy at a Stanford workshop, and while preparing for it I discovered the rich academic literature and the various hobbyist efforts in the decentralized model. My slides from that talk seem to anticipate several of the points we made about decentralized social networking in the paper (albeit in bullet-point form), along with the conclusion that they were “unlikely to disrupt walled gardens.” Funnily enough, I’d completely forgotten about having given this talk when we were writing the paper.

I would recommend reading this text as a companion to our original paper. Read it for extra context and clarifications, a discussion of controversial points, and as a way of stimulating thinking about the future prospects of alternative architectures. It may also be an interesting read as an example of how people writing an article together can have different views, and as a bit of a behind-the-scenes look at the research process.

[1] In particular, the latest edition of the conference that just concluded had a panel titled “Are you distributed? The Federated Web Show” moderated by Seda, with Vincent as one of the participants. It touched upon many of the same themes as our work.

To stay on top of future posts, subscribe to the RSS feed or follow me on Twitter or Google+.

March 27, 2013 at 7:44 am 1 comment

A Critical Look at Decentralized Personal Data Architectures

I have a new paper with the above title, currently under peer review, with Vincent Toubiana, Solon Barocas, Helen Nissenbaum and Dan Boneh (the Adnostic gang). We argue that distributed social networking, personal data stores, vendor relationship management, etc. — movements that we see as closely related in spirit, and which we collectively term “decentralized personal data architectures” — aren’t quite the panacea that they’ve been made out to be.

The paper is only a synopsis of our work so far — in our notes we have over 80 projects, papers and proposals that we’ve studied, so we intend to follow up with a more complete analysis. For now, our goal is to kick off a discussion and give the community something to think about. The paper was a lot of fun to write, and we hope you will enjoy reading it. We recognize that many of our views and conclusions may be controversial, and we welcome comments.


While the Internet was conceived as a decentralized network, the most widely used web applications today tend toward centralization. Control increasingly rests with centralized service providers who, as a consequence, have also amassed unprecedented amounts of data about the behaviors and personalities of individuals.

Developers, regulators, and consumer advocates have looked to alternative decentralized architectures as the natural response to threats posed by these centralized services.  The result has been a great variety of solutions that include personal data stores (PDS), infomediaries, Vendor Relationship Management (VRM) systems, and federated and distributed social networks.  And yet, for all these efforts, decentralized personal data architectures have seen little adoption.

This position paper attempts to account for these failures, challenging the accepted wisdom in the web community on the feasibility and desirability of these approaches. We start with a historical discussion of the development of various categories of decentralized personal data architectures. Then we survey the main ideas to illustrate the common themes among these efforts. We tease apart the design characteristics of these systems from the social values that they (are intended to) promote. We use this understanding to point out numerous drawbacks of the decentralization paradigm, some inherent and others incidental. We end with recommendations for designers of these systems for working towards goals that are achievable, but perhaps more limited in scope and ambition.


February 21, 2012 at 8:27 am 3 comments

Bad Internet Law: What Techies Can Do About It

From the dangerous copyright lobby-sponsored PROTECT IP to a variety of misguided social networking safety laws, the spectre of bad Internet law is rearing its ugly head with increasing frequency. And at the e-G8 forum, Sarkozy and others talked about even more ambitious plans to “civilize” the Internet that will surely have repercussions in the U.S. as well. Three things are common to these efforts: a general ignorance of technological reality, an attempt to preserve pre-Internet era norms and business models that don’t necessarily make sense anymore, and severe chilling effects on free speech and innovation.

The bad news is that fighting specific laws as they come up is an uphill battle. What has changed in the last ten years is that the Internet has thoroughly permeated society, and therefore the interest groups pushing these laws are much more determined to get their way. The good news is that lawmakers are reasonably receptive to arguments from both sides. So far, however, they are not hearing nearly enough of our side of the story. It’s time for techies to step up and get more actively involved in policy if we hope to preserve what we’ve come to see as our way of life. Here’s how you can make a difference.

1. Stick to your strengths—explain technology. The primary reason Washington is prone to making bad tech law is that policymakers don’t understand tech, and don’t understand how bits are different from atoms. Not only is educating policymakers on tech more effective; as a technologist you’ll also have more credibility if you stick to doing that, rather than opining on specific policy measures.

2. Don’t go it alone. Giving equal weight to every citizen’s input on individual issues may or may not be a good idea in theory, but it certainly doesn’t work that way in practice. Money, expertise, connections and familiarity with the system all count. You’ll find it much easier to be heard and to make a difference if you combine your efforts with an existing tech policy group. You’ll also learn the ropes much more quickly by networking. Organizations like the EFF are always looking for help from outside technologists.

3. Charity begins at home—talk to your policy people. If you work at a large tech company, you’re already in a great position: your company has a policy group, a.k.a. lobbyists. Help them with their understanding of tech and business constraints, and have them explain the policy issues they’re involved in. Engineers often view the in-house policy and legal groups as a bunch of lawyers trying to impose arbitrary rules. This attitude hurts in the long run.

4. Learn to navigate the Three Letter Agencies. “The Government” is not a monolithic entity. To a first approximation there are the two Houses, a variety of Agencies, Departments and Commissions, the state legislatures and the state Attorneys General. They differ in their responsibilities, agendas, means of citizen participation and their receptiveness to input on technology. It can be bewildering at first but don’t worry too much about it; you can pick it up as you go along. Weird but true: most Internet issues in the House are handled by the “Energy and Commerce” subcommittee!

While I have focused on bad Internet laws, since that is where the tech/politics disconnect is most obvious, there are certainly many laws and regulations that have a largely positive, or at least mixed, reception in technology circles. Net neutrality is a prominent example; I am myself involved in the Do Not Track project. These are good opportunities to get involved as well, since there is always a shortage of technical folks. I would suggest picking one or two issues, even though it might be tempting to speak out about everything you have an opinion on.

To those of you who are about to post something like, “What’s the point? Congresspeople are all bought and paid for and aren’t going to listen to us anyway,” I have two things to say:

  • Tech policy is certainly hard because of the huge chasm, but cynicism is unwarranted. Lawmakers are willing to listen and you will have an impact if you stick with it.
  • If you’re not interested, that’s your prerogative. But please refrain from discouraging others who’re fighting for your rights. Defeatism and apathy are part of the problem.

Finally, here are some tech policy blogs and resources if you feel like “lurking” before you’re ready to jump in.

Thanks to Pete Warden and Jonathan Mayer for comments on a draft.

June 7, 2011 at 4:56 pm Leave a comment

Insights on fighting “Protect IP” from a Q&A with Congresswoman Lofgren

Summary. Appeals to free speech and chilling effects are at best temporary measures in the fight against Protect IP and domain seizures. Even if we win this time it will keep coming back in modified form; the only way to defeat it for good is to convince Washington that artists are in fact thriving, that piracy is not the real problem, and that takedown efforts are not in the interest of society. We in the tech world know this, but we are doing a poor job of making ourselves heard in Washington, and this needs to change.

As most of you know, the Protect IP Act is a horrendous piece of proposed legislation sponsored by the “content industry” that gives branches of the Government powers to seize domain names at will, force websites to remove links, etc. Congresswoman Zoe Lofgren has been one of the very few legislators fighting the good fight, speaking out against this grave threat to free speech.

I was invited to a brown bag lunch with Rep. Lofgren at Mozilla today. (Mozilla has gotten involved in this because of the events surrounding the Mafiaafire add-on and Homeland Security.) I asked the Congresswoman this question (paraphrased):

“Does the strategy of domain-name seizures even have a prayer of achieving the intended outcome, or is it going to lead to something similar to the Streisand effect, as we’ve seen happen repeatedly on the Internet? Tools for circumvention of censorship in dictatorial regimes, that we can all get behind and that the U.S. government has often funded, may be morally different from tools for circumvention of anti-infringement efforts, but they are technologically identical.” [Princeton professor and now FTC chief technologist Ed Felten has pointed this out in a related context.]

In response, Rep. Lofgren pivoted to the point that seemed to be her favorite theme of the day—the tech world needs to come up with ways to monetize online content, she said. Unless that happens, it’s not looking good for our side in the long run.

At first I was slightly annoyed by her not addressing my question, but after she pivoted a couple more times to the same point in answer to other questions I started to pay close attention.

What the Congresswoman was saying was this:

  1. The only way to convince Washington to drop this issue for good is to show that artists and musicians can get paid on the Internet.
  2. Currently they are not seeing any evidence of this. The Congresswoman believes that new technology needs to be developed to let artists get paid. I believe she is entirely wrong about this; see below.
  3. The arguments that have been raised by tech companies and civil liberties groups in Washington all center around free speech; there is nothing wrong with that but it is not a viable strategy in the long run because the issue is going to keep coming back.

Let’s zoom in on point 2 above. We techies all say we have the answers. New technology is not needed, we say. The dinosaurs of the content industries need to adapt their business models. Piracy is not correlated with a decrease in sales. Piracy happens not because it is cheaper, but because it is more convenient. Businesses need to compete with piracy rather than trying to outlaw it. Artists who’ve understood this are already thriving.

Washington is willing to listen to this. But no one is telling it to them.

There are a million blog posts that make the points above. But those don’t have an impact in Congress. “You vote up articles on Reddit all day,” Rep. Lofgren said. “Guess what, we don’t check Reddit in Washington.” Yes, she actually said that. The exact wording might be off, but that was essentially what she said. She also pointed out that the tech industry spends by far the least amount of effort on lobbying. The entire industry has fewer representatives, apparently, than individual companies from many other sectors do.

A lot of information that we consider common knowledge is not available in Washington. It needs to be in a digestible form; for example, academic studies with concrete numbers that can be cited will be particularly useful. But a simple and important first step is to start communicating with policymakers. In my dealings with them, I’ve found them more willing to listen than I would have thought. So here’s my plea to the community to redirect some of the energy that we expend writing blog posts and expressing outrage into something more constructive.


May 19, 2011 at 10:50 pm Leave a comment

An Academic Wanders into Washington D.C.

I was on a Do Not Track panel in Washington D.C. last week. I spent a day in the city and had many informal conversations with policy people. It was fascinating to learn from close range how various parts of the Government work. If I could sum it up in a single phrase, it would be “so many smart people, so many systemic problems.”

What follows is obviously the account of an outsider, and I’m sure there are many nuances I’ve missed. That said, an outsider’s view can sometimes provide fresh perspective. So without further ado, here are some of my observations.

A deep chasm. Techies are by and large oblivious of what goes on in D.C., and have a poor mental picture of what regulators are or aren’t involved in. For example, I attended part of a talk on antitrust concerns around the Google search algorithm, and it blew my mind to realize that something that techies think of as their playground comes under serious regulatory scrutiny. (I hear the Google antitrust issue is really big in the EU, and the US is catching up.) Equally, the policy world is quite lacking in tech expertise.

Libertarian influence. While the libertarian party is not mainstream in the US, libertarian think tanks and lobbying groups exercise significant influence in D.C. While that gladdens me as a libertarian, one unfortunate thing that appears to be common to all think tanks is toeing the party line at the expense of critical thinking. I’m not sure there can be a market failure so complete that libertarian groups will consider acknowledging the need for some government intervention.

A new kind of panel. The panel I attended was very different from what I’m used to. In a scientific or technical panel, there is an underlying truth even if the participants may disagree about some things. Policy panels seem to be very different: each participant represents a group that has an entrenched position and there is no scope for actual debate or any possibility of changing one’s mind. The panel is instead a forum for the speakers to state their respective positions for the benefit of the media and the public. There is nothing wrong with this, but it took me a while to grasp.

Lobbyists. The American public is deeply concerned about the power of lobbyists. But lobbyists perform the valuable function of providing domain expertise to legislators and regulators. Of course, the problem is that they also have the role of trying to get favorable treatment for the industry groups they represent, and these roles cannot be disentangled.

The solution is to increase the power of the counterweights to lobbyists, i.e., consumer advocates, environmental groups etc. A loose analogy is that if we’re worried about wealthy individuals getting better treatment from the judicial system, the answer is not to get rid of lawyers, but to improve the quality of public prosecutors and defenders. I don’t know if the lobbyist imbalance can ever be completely eliminated, but I think it can be drastically mitigated.

A humble suggestion. Generalizing my experiences in the tech field, I suspect that the Government lacks domain expertise in virtually every area, hence the dependence on lobbyists. If only more academics were to get involved in policy, it seems to me that it would solve both problems mentioned above — it would address the lack of expertise and it would shift the balance of advocacy in favor of consumers. (There are certainly many law scholars involved in policy, but I’m thinking more of scientists and social scientists here — those who have domain knowledge.)

To reiterate, I believe that a greater involvement of academics in policy has the potential to hugely improve how government works. But how do we make that happen? I have a couple of suggestions. First, Government people seem to have a tendency to listen to whoever talks the loudest in Washington; instead, they should make an effort to seek out people with actual expertise. Second, I hope academics will take into account benefits like increased visibility and consider moonlighting in policy circles.

Thanks to Joe Calandrino for comments on a draft.


December 6, 2010 at 7:38 pm 8 comments

Web Crawlers and Privacy: The Need to Reboot Robots.txt

This is a position paper I co-authored with Pete Warden and will be discussing at the upcoming IAB/IETF/W3C Internet privacy workshop this week.

Privacy norms, rules and expectations in the real world go far beyond the “public/private” dichotomy. Yet in the realm of web crawler access control, we are tied to this binary model via the robots.txt allow/deny rules. This position paper describes some of the resulting problems and argues that it is time for a more sophisticated standard.

The problem: privacy of public data. The first author has argued that individuals often expect privacy constraints on data that is publicly accessible on the web. Some examples of such constraints relevant to the web-crawler context are:

  • Data should not be archived beyond a certain period (or at all).
  • Crawling a small number of pages is allowed, but large-scale aggregation is not.
  • “Linkage of personal information to other databases is prohibited.

Currently there is no way to specify such restrictions in a machine-readable form. As a result, sites resort to hacks such as identifying and blocking crawlers whose behavior they don’t like, without clearly defining acceptable behavior. Other sites specify restrictions in the Terms of Service and bring legal action against violators. This is clearly not a viable solution — for operators of web-scale crawlers, manually interpreting and encoding the ToS restrictions of every site is prohibitively expensive.
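To make the gap concrete, here is a sketch of what machine-readable restrictions of this kind might look like as extensions to robots.txt, along with a trivial parser. The directive names (Max-Retention, Max-Pages, No-Linkage) are hypothetical, invented purely for illustration — no such standard exists:

```python
# Sketch of a hypothetical privacy-extended robots.txt and a minimal
# parser for it. The privacy directives below are invented for
# illustration; they are not part of any real standard.

ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
# Hypothetical privacy directives:
Max-Retention: 30d
Max-Pages: 1000
No-Linkage: true
"""

def parse_robots(text):
    """Return (field, value) pairs, ignoring comments and blank lines."""
    pairs = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip trailing comments
        if not line or ":" not in line:
            continue
        field, value = line.split(":", 1)
        pairs.append((field.strip(), value.strip()))
    return pairs

rules = dict(parse_robots(ROBOTS_TXT))
print(rules["Max-Pages"])  # -> 1000
```

The parsing is easy; the hard part, as argued below, is pinning down the semantics — what exactly a compliant crawler must do when it sees a retention or aggregation limit.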

There are two reasons why the problem has become pressing: first, there is an ever-increasing quantity of behavioral data about users that is valuable to marketers — in fact, there is even a black market for this data — and second, crawlers have become very cheap to set up and operate.

The desire for control over web content is by no means limited to user privacy concerns. Publishers concerned about copyright are equally in search of a better mechanism for specifying fine-grained restrictions on the collection, storage and dissemination of web content. Many site owners would also like to limit the acceptable uses of data for competitive reasons.

The solution space. Broadly, there are three levels at which access/usage rules may be specified: site-level, page-level and DOM element-level. Robots.txt is an example of a site-level mechanism, and one possible solution is to extend robots.txt. A disadvantage of this approach, however, is that the file may grow too large, especially on sites with user-generated content that may wish to specify per-user policies.

A page-level mechanism thus sounds much more suitable. While there is already a “robots” attribute to the META tag, it is part of the robots.txt specification and has the same limitations on functionality. A different META tag is probably an ideal place for a new standard.

Taking it one step further, tagging at the DOM element-level using microformats to delineate personal information has also been proposed. A possible disadvantage of this approach is the overhead of parsing pages that crawlers will have to incur in order to be compliant.
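To make the parsing overhead concrete, here is a minimal sketch of a crawler honoring a hypothetical DOM-level privacy marker. The class name `x-no-index` is invented for illustration; actual microformat proposals differ in their vocabularies:

```python
# Sketch: extracting page text while skipping subtrees marked with a
# hypothetical "x-no-index" class (an invented privacy microformat).
from html.parser import HTMLParser

HTML = """
<p>Public bio text.</p>
<p class="x-no-index">Phone: 555-0100</p>
"""

class PrivacyAwareExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.skip_depth = 0  # >0 while inside a marked subtree
        self.kept = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.skip_depth or "x-no-index" in classes:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.kept.append(data.strip())

extractor = PrivacyAwareExtractor()
extractor.feed(HTML)
print(extractor.kept)  # -> ['Public bio text.']
```

Even this toy example shows the cost shift: a compliant crawler can no longer treat pages as opaque blobs to fetch and store, but must run an HTML parse before deciding what it may keep.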

Conclusion. While the need to move beyond the current robots.txt model is apparent, it is not yet clear what should replace it. The challenge in developing a new standard lies in accommodating the diverse requirements of website operators and precisely defining the semantics of each type of constraint without making it too cumbersome to write a compliant crawler. In parallel with this effort, the development of legal doctrine under which the standard is more easily enforceable is likely to prove invaluable.


December 5, 2010 at 7:54 pm 5 comments

In which I come out: Notes from the FTC Privacy Roundtable

I was on a panel at the second FTC privacy roundtable in Berkeley on Thursday. Meeting a new community of people is always a fascinating experience. As a computer scientist, I’m used to showing up to conferences in jeans and a T-shirt; instead I found myself dressing formally and saying things like “oh, not at all, the honor is all mine!”

This post will also be the start of a new direction for this blog. So far, I’ve mostly confined myself to “doing the math” and factual exposition. That’s going to change, for two reasons:

  • The central theme of this blog and of my Ph.D dissertation — the failure of data anonymization — now seems to be widely accepted in policy circles. This is due in large part to Paul Ohm’s excellent paper, which is a must-read for anyone interested in this topic. I no longer have to worry about the acceptance of the technical idea being “tainted” by my opinions.
  • I’ve been learning about the various facets of privacy — legal, economic, etc. — for long enough to feel confident in my views. I have something to contribute to the larger discussion of where technological society is heading with respect to privacy.

Underrepresentation of scientists

Living up to the stereotype

As it turned out, I was the only academic computer scientist among the 35 panelists. I found this very surprising. The underrepresentation is not because computer scientists have nothing to contribute — after all, there were other CS Ph.Ds from industry groups like Mozilla. Rather, I believe it is a consequence of the general attitude of academic scientists towards policy issues. Most researchers consider it not worth their time, and a few actively disdain it.

The problem is even deeper: academics have the same disdainful attitude towards the popular exposition of science. The underlying reason is that the goal in academia is to impress one’s peers; making the world better is merely a side-effect, albeit a common one. The incentive structure in academia needs to change. I will pick up this topic in future posts.

The FTC has an admirable approach to regulation

As I found out in the course of the day’s panels, the FTC is not about prescribing or mandating what to do. Pushing a specific privacy-enhancing technology isn’t the kind of thing they are interested in doing at all. Rather, they see their role as getting the market to function better and the industry to self-regulate. The need to avoid harming innovation was repeatedly emphasized, and there was a lot of talk about not throwing the baby out with the bathwater.

The following were the potential (non-baby-hurting) initiatives that were most talked about:

  • Market transparency. Markets can only work well when there is full information, and when it comes to privacy the market has failed horribly. Users have no idea what happens to their data once it’s collected, and no one reads privacy policies. Regulation that promotes transparency can help the market fix itself.
  • Consumer education. This is a counterpart to the previous point. Education about privacy dangers as well as privacy technologies can help.
  • Enforcement. A few bad apples have been responsible for the most egregious privacy SNAFUs. The larger players are by and large self-regulating. The FTC needs to work with law enforcement to punish the offenders.
  • Carrots and sticks. Even the specter of regulation, corporate representatives said, is enough to get the industry to self-regulate. Many would disagree, but I think a carrots-and-sticks approach can be made to work.
  • Incentivizing adoption of PETs (privacy enhancing technologies) in general. The question of how the FTC can spur the adoption of PETs was brought up on almost every panel, but I don’t think there were any halfway convincing answers. Someone mentioned that the government in general could go into the market for PETs, which seems reasonable.

As a libertarian, I think the overall non-interventionist approach here is exactly right. I’m told that the FTC is rather unusual among US regulatory agencies in this regard (which makes sense, considering that the FCC, for example, spends its time protecting children from breasts when it is not making up lists of words.)

Facebook’s two faces

Facebook public policy director Tim Sparapani, who was previously with the ACLU, made a variety of comments on the second panel that were bizarre, to put it mildly. Take a look (my comments are in sub-bullets):

  • “We absolutely compete on privacy.”
    • That’s a weird definition of “compete.” Facebook has a history of rolling out privacy-infringing updates, such as Beacon, the ToS changes, and the recent update that made the graph public. Then they wait to see if there’s an outcry and roll back some of the changes. It is hard to think of another company that has had such a cavalier approach.
  • “There are absolutely no barriers to entry to create a new social network.”
    • Except for that little thing called the network effect, which is the mother of all barriers to entry. In a later post I will analyze why Facebook has reached a critical level of penetration in most markets, which makes it nearly unassailable as a general-purpose social network.
  • “Our users have learned to trust us.”
    • I don’t even know what to say about this one.
  • “We are a walled garden.”
    • Sparapani is confusing two different senses of “walled garden” here. This was said in response to a statement by the Google rep about Google’s features to let users migrate their data to other services (which I find very commendable). In this sense, Facebook is indeed a walled garden, and doesn’t allow migration, which is a bad thing.  But Sparapani said he meant it in the sense that Facebook doesn’t sell user data wholesale to other companies. That sounds like good news, except that third party app developers end up sharing user data with other entities, because enforcement of the application developer Terms of Service is virtually non-existent.
  • “If you delete the data it’s gone.” (in the context of deleting your account)
    • That might be true in a strict sense, but it is misleading. Deleting all your data is actually impossible to achieve because most pieces of data belong to more than one user. Each of your messages will live on in the other person’s inbox (and it would be improper to delete it from theirs). Similarly, photos in which you appear, which you would probably like gone when you delete your account, still live on in the album of whoever took the picture. The same goes for your pokes, likes and other multi-user interactions. These are the very things that make a social network social.
  • “We now have controls on privacy at the moment you share data. This is an extraordinary innovation and our engineers are really proud of it.”
    • The first part of that statement is true: you can now change the privacy controls on each of your Facebook status messages independently. The second part is downright absurd. It is completely trivial to implement from an engineering perspective (and LiveJournal for instance has had it for a decade).

There were more absurd statements, but you get the picture. It’s not just the fact that Sparapani’s comments were unhinged from reality that bothers me — the general tone was belligerent and disturbing. I missed a few minutes of the panel, during which he apparently responded to a criticism from Chris Conley of the ACLU by saying “I was at the ACLU longer than you’ve been there.” This is unprofessional, undignified and a non-answer. Amusingly, he claimed that Facebook was “very proud” of various aspects of their privacy track record at least half a dozen times in the course of the panel.

Contrast all this with Mark Zuckerberg’s comments in an interview with Michael Arrington, which can be summed up as “the age of privacy is over.” That article goes on to say that Facebook’s actions caused the shift in social norms (to the extent that they have shifted at all) rather than merely responding to them. Either way, it is unquestionable that Facebook’s current behavior pays no more than lip service to privacy, and Zuckerberg’s statement is a more-or-less honest reflection of that. On the other hand, as I have shown, the company sings a completely different tune when the FTC is listening.

Engaging privacy skeptics

Aside from Facebook’s shenanigans, I feel that there are two groups in the privacy debate who are talking past each other. One side is represented by consumer advocates, and is largely echoed by the official position of the FTC. The other side’s position can be summed up as “yeah, whatever.” When expressed coherently, there are three tenets of this position (with the caveat that not all privacy skeptics adhere to all three):

  • Users don’t care about privacy any more.
  • Even if they do, privacy is impossible to achieve in the digital age, so get over it.
  • There are no real harms arising from privacy breaches.


An illustrative example: a mainstream-media representative at the workshop covered it on Twitter through the lens of his preconceived prejudices.

Privacy scholars never engage with the skeptics because the skeptical viewpoint appears obviously false to anyone who has done some serious thinking about privacy. However, it is crucial to engage the opponents, because (1) the skeptical view is extremely common, and (2) many of the startups coming out of the valley fall into this group, and they are going to have control over increasing amounts of user data in the years to come.

The “privacy is dead” view was most famously voiced by Scott McNealy. In its extreme form it is easy to argue against: “start streaming yourself live on the Internet 24/7, and then we’ll talk.” (To be sure, a few people did this 10 years ago as a publicity stunt, but it is obvious that the vast majority of people aren’t ready for this level of invasiveness of monitoring/data collection.) But engaging with skeptics isn’t about refutation, it’s about dealing with a different way of thinking and getting the message across to the other side. Unfortunately, such engagement hasn’t really been happening.

I have a double life in academia and the startup world, and I think this puts me in a somewhat unusual position of being able to appreciate both sides of the argument. My own viewpoint is somewhere in the middle; I will expand on this theme in future blog posts.

January 31, 2010 at 3:49 am 13 comments


I'm an assistant professor of computer science at Princeton. I research (and teach) information privacy and security, and moonlight in technology policy.

This is a blog about my research on breaking data anonymization, and more broadly about information privacy, law and policy.

For an explanation of the blog title and more info, see the About page.

