I've done a bit more online experimentation with Google's “famous or not” algorithm, first described here. Unfortunately, one of the risks of experimentation is that it may raise more questions than it answers. That's what happened to me. So I'll simply report the results.
In short, the use of quotation marks in name searches seems to affect when Google.co.uk displays the warning tag that it uses for non-famous people. Here are the results so far for several different searches on my name (quotation marks are part of the search). Remember that Google inserts the tag, warning that some entries may have been deleted due to EU data protection law, when it concludes that someone is not famous:
stewart baker = no tag (i.e., Google-famous)
stewart a. baker = no tag (i.e., Google-famous)
“stewart a. baker” = no tag (i.e., Google-famous)
“stewart baker” = tag (i.e., not Google-famous)
stewart baker steptoe = no tag (i.e., Google-famous)
stewart baker nsa = no tag (i.e., Google-famous)
“stewart baker” nsa = tag (i.e., not Google-famous)
Just to see how Google treats a genuinely famous person, I tried Robyn Rihanna Fenty (aka Rihanna):
robyn fenty = no tag (i.e., Google-famous)
robyn rihanna fenty = no tag (i.e., Google-famous)
“robyn fenty” = tag (i.e., not Google-famous)
“robyn rihanna fenty” = tag (i.e., not Google-famous)
rihanna = no tag (i.e., Google-famous)
“rihanna” = no tag (i.e., Google-famous)
So there's clearly something about the quotation marks that changes Google's fame algorithm, but not always, as witness the searches for “rihanna” or “stewart a. baker.” I also checked to see whether the tag shows up when Google puts a Wikipedia entry at the top of the results or when it autosuggests a name search in Google News. No joy.
So I haven't quite broken the code. But if you're checking your Google-fame status, be sure to search google.co.uk with and without quotation marks around your name and let us know what you find.
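If you'd rather automate the experiment, a few lines of Python will run the same checks. This is a minimal sketch, not an official interface: the notice text and the browser-style User-Agent below are assumptions, Google's HTML changes without notice, and automated querying may violate its terms of service.

    # Run name searches on google.co.uk, with and without quotation marks,
    # and report whether the data protection notice appears on the page.
    import requests

    NOTICE = "Some results may have been removed under data protection law"

    def has_tag(query):
        resp = requests.get(
            "https://www.google.co.uk/search",
            params={"q": query},
            headers={"User-Agent": "Mozilla/5.0"},  # bare scripts are often refused
            timeout=10,
        )
        resp.raise_for_status()
        return NOTICE in resp.text

    for name in ['stewart baker', '"stewart baker"', 'rihanna', '"rihanna"']:
        verdict = "tag (not Google-famous)" if has_tag(name) else "no tag (Google-famous)"
        print(name, "->", verdict)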
Three months ago, I tried hacking Google's implementation of Europe's “right to be forgotten.” For those of you who haven't followed recent developments in censorship, the right to be forgotten is a European requirement that “irrelevant or outdated” information be excluded from searches about individuals. The doctrine extends even to true information that remains on the internet. And it is enforced by the search engines themselves, operating under a threat of heavy liability. That makes the rules particularly hard to determine, since they're buried in private companies' decisionmaking processes.
So to find out how this censorship regime works in practice, I sent several takedown requests to Google's British search engine, google.co.uk. (Europe has not yet demanded compliance from US search engines, like Google.com, but there are persistent signs that it wants to.)
I've now received three answers from Google, all denying my requests. Here's what I learned.
The first question was whether Google would rule on my requests at all. I didn't hide that I was an American. Google's “right to be forgotten” request form requires that you provide ID, and I used my US driver's license. Would Google honor a takedown request made by a person who wasn't a UK or EU national?
The answer appears to be yes. Google's response does not mention my nationality as a reason for denying my requests. This is consistent with Europe's preening view that its legal "mission civilisatrice" is to confer privacy rights on all mankind. And it may be the single most important point turned up by this first set of hacks, because it means that lawyers all around the world can start cranking out takedown requests for Belorussian and Saudi clients who don't like the way they look online.
But will the requests succeed? The reasons Google gave for denying my requests tell us something about that as well.
1. I had asked that Google drop a link to a book claiming that in 2007 I had the “dubious honor” of being named the world's “Worst Public Official” by Privacy International, beating out Vladimir Putin on the strength of my involvement with NSA and the USA Patriot Act. It's true that Privacy International announced I had won the award, but I argued that the book was inaccurate because in fact, I “had very little to do with either domestic surveillance activities at NSA or with the USA Patriot Act, and the trophy is a 'dubious' honor only in the sense that Privacy International never actually awarded it.” (All true: I've been trying to collect the trophy for years but Privacy International has refused to deliver it.)
Google refused to drop the link, saying, “In this case, it appears that the URL(s) in question relate(s) to matters of substantial interest to the public regarding your professional life. For example, these URLs may be of interest to potential or current consumers, users, or participants of your services. Information about recent professions or businesses you were involved with may also be of interest to potential or current consumers, users, or participants of your services. Accordingly, the reference to this document in our search results for your name is justified by the interest of the general public in having access to it.”
So it looks as though Google has adopted a rule that “information about recent professions or businesses you were involved with” is always relevant to consumers. It would be impressive if the poor paralegal stuck with answering my email did enough online research to realize that I sell legal services, but I fear he or she may have thought that being the world's worst public official was just one of the gigs I had tried my hand at in the last decade.
2. My second takedown request was a real long shot. In an effort to see whether Google would let me get away with blatant censorship of my critics, I asked for deletion of a page from Techdirt that seems to be devoted to trashing me and my views; I claimed that it was “inappropriate” under European law to include the page in a list of links about me because it contains “many distorted claims about my political views, a particularly sensitive form of personal data. The stories are written by men who disagree with me, and they are assembled for the purpose of making money for a website, a purpose that cannot outweigh my interest in controlling the presentation of sensitive data about myself.”
To American ears, such a claim is preposterous, but under European law, it's not. Google, thank goodness, still has an American perspective: “Our conclusion is that the inclusion of the news article(s) in Google’s search results is/are – with regard to all the circumstances of the case we are aware of – still relevant and in the public interest.”
If I had to bet, I'd say that this rather vague statement is the one Google uses when other, more pointed reasons to deny relief don't work. But the reference to this page as a “news article” suggests that Google may be using a tougher standard in evaluating takedown requests for news media, a term that applies, at least loosely, to Techdirt.
3. The third denial was a little less interesting. I tried to get Google to take down an image showing me with a beard, arguing that it was out of date: “I don't have a beard now. If you look at the picture, you'll see why.”
But Google just gave me the same “professional life” rejection it gave to my “Worst Public Official” request. I suspect that's because the article that accompanies the picture is without question about my professional life; it's published by the Blog of the Legal Times. I can understand why Google would want to evaluate the complete link, not just the image, for this purpose but that's going to make deletion of images harder, especially when a bad photo accompanies an unexceptionable article.
What next? With these results in hand, I'm preparing a second round of hacks to further explore the boundaries of the right to be forgotten, and I'll resubmit my "does this search engine make me look fat?" request that Google take down a fourteen-year-old photo (unattached to a story) on the grounds that I weigh less now.
But to tell the truth, I'm having trouble finding stuff in my search history that is sufficiently inaccurate or outdated, especially now that we know Google is treating professional activities and news as per se relevant (at least if it's “recent,” whatever that means). So I hope that others will make their own searches and their own takedown requests and report what they find. In fact, my second effort has shed some light on how Google decides someone is famous, but I'll write that up separately, since this post is already long enough.
I am not a big fan of the EU's "right to be forgotten," but it has one silver lining. I was noodling around with Google's ever-more-baroque implementation of the principle this weekend, and I discovered that it offers a quick and cheap way to discover just how famous Google thinks you are.
Understanding how Google got into the "famous or not" business requires a dive into the search engine's stutter-step implementation of the EU requirement. In China, of course, when Google is required to suppress a link, it includes a warning on the results page, saying in essence that the results have been censored. Google originally planned to do the same in response to European censorship. But the European data protection censors didn't like that kind of transparency. They thought that the notice, even if it didn't actually say what had been suppressed, would stigmatize Europeans who invoked the right to be forgotten. (That, and it might remind searchers that their access to data was being restricted by European law.)
Google caved, mostly. But it left in place a vestige of its original policy. Now, it includes the following warning on its European results pages whenever any name is searched for: "Some results may have been removed under data protection law in Europe. Learn more."
But that policy isn't implemented across the board. As Google's global privacy counsel explained a month ago, “Most name queries are for famous people and such searches are very rarely affected by a removal, due to the role played by these persons in public life, we have made a pragmatic choice not to show this notice by default for known celebrities or public figures.”
So there you have it. Somewhere, Google has an algorithm for deciding who is a celebrity or public figure and who is not. To find out whether you made the grade, all you have to do is go to Google.co.uk, and type in your name. Then look at the bottom of the page for the tag that says, "Some results may have been removed" etc. If it's not there, apparently you're a public figure in Google's eyes. If it is, well, you'd better get working on your SEO techniques.
I found this when I searched for myself and didn't see the "some results" tag-of-ignominy. I thought that was weird, so I ran a few other names. And it looks as though Google is making a cut based on number of name searches, but as Google's counsel more or less admitted in his letter, the system is still pretty rough. Maybe it will get better. But why wait until it comes out of beta? Knowing Google, that could be years.
Let's ask now who makes it past Google's equivalent of the red velvet rope. Here's my quick census:
Google-Famous: Stewart Baker, Ben Wittes, Eugene Volokh, Jack Goldsmith, Orin Kerr, Kent Walker, Nicole Wong, Declan McCullagh, Peter Swire, Annie Anton, Dan Geer (cybersecurity guru), Jim Lewis (ditto), Raj De (NSA's GC), Dianne Feinstein (Senate intelligence committee chair), David Hoffman (upcoming guest on the Steptoe Cyberlaw Podcast), Chris Soghoian, James X. Dempsey (CDT senior counsel, member of Privacy and Civil Liberties Oversight Board).
Not Google-Famous: Nuala O'Connor (head of CDT), Michael Daniel (White House cybersecurity czar), Bob Litt (DNI's general counsel), John P. Carlin (Assistant AG for National Security), Michael J. Rogers (chair of House intelligence committee), David Medine (chair of Privacy and Civil Liberties Oversight Board), Michael Vatis (cohost of the Steptoe Cyberlaw Podcast), Jason Weinstein (ditto), Ellen Nakashima (astonishingly prolific Washington Post national security reporter).
It's pretty clear that Google is struggling with the old saw, "On the Internet, everyone is famous for fifteen people." But it's still hard to see exactly where the line is being drawn.
For further irony, consider Max Mosley, who is internet-famous mainly for the video of his multi-hour, multi-hooker, sadomasochistic orgy and for his successful campaign to force Google to suppress links to those pictures. His search results are being censored. But he's now so famous that Google gives us no warning -- not even that they might be bowdlerized. That can't make sense.
But why should I have all the fun? Why not google yourself first (don't pretend you won't) and then your friends and acquaintances? Then list any additional surprises in the comments.
The evidence is mounting that Edward Snowden and his journalist allies have helped al Qaeda improve its security against NSA surveillance. In May, Recorded Future, a web intelligence firm, published a persuasive timeline showing that Snowden's revelations about NSA's capabilities were followed quickly by a burst of new, robust encryption tools from al-Qaeda and its affiliates.
This is hardly a surprise for those who live in the real world. But it was an affront to Snowden's defenders, who've long insisted that journalists handled the NSA leaks so responsibly that no one can identify any damage that they have caused.
In damage control mode, Snowden's defenders first responded to the Recorded Future analysis by pooh-poohing the terrorists' push for new encryption tools. Bruce Schneier declared that the change might actually hurt al Qaeda: “I think this will help US intelligence efforts. Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool is slight.”
Schneier is usually smarter than this. In fact, the product al Qaeda had been recommending until the leaks, Mujahidin Secrets, probably did qualify as “home-brew encryption.” Indeed, Bruce Schneier dissed Mujahidin Secrets in 2008 on precisely that ground, saying “No one has explained why a terrorist would use this instead of PGP.”
But as a second Recorded Future post showed, the products that replaced Mujahidin Secrets relied heavily on open-source and proven encryption software. Indeed, one of them uses Schneier's own, well-tested encryption algorithm, Twofish.
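Schneier's underlying point is sound, and it's easy to make concrete: with a well-studied open-source library, strong authenticated encryption takes only a few lines, which is exactly why proven tools beat home-brew designs. Here is a minimal sketch using the pyca/cryptography package's Fernet recipe; it is purely illustrative and has nothing to do with any group's actual software.

    # Vetted open-source crypto (AES plus HMAC, via the Fernet recipe) in a
    # handful of lines. A home-brew design is unlikely to match primitives
    # that have survived years of public review.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # 32 random bytes, base64-encoded
    box = Fernet(key)

    token = box.encrypt(b"trust the math")   # ciphertext + timestamp + HMAC tag
    assert box.decrypt(token) == b"trust the math"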
Faced with facts that contradicted his original defense of Snowden, Schneier was quick to offer a new reason why Snowden's leaks and al Qaeda's response to them still wouldn't make any difference:
Whatever the reason, Schneier says, al-Qaida's new encryption program won't necessarily keep communications secret, and the only way to ensure that nothing gets picked up is to not send anything electronically. Osama bin Laden understood that. That's why he ended up resorting to couriers.
Upgrading encryption software might mask communications for al-Qaida temporarily, but probably not for long, Schneier said...."It is relatively easy to find vulnerabilities in software," he added. "This is why cybercriminals do so well stealing our credit cards. And it is also going to be why intelligence agencies are going to be able to break whatever software these al-Qaida operatives are using."
So, if you were starting to think that Snowden and his band of journalist allies might actually be helping the terrorists, there's no need to worry, according to Schneier, because all encryption software is so bad that NSA will still be able to break the terrorists' communications and protect us. Oddly, though, that's not what he says when he isn't on the front lines with the Snowden Defense Corps. In a 2013 Guardian article entitled “NSA surveillance: A guide to staying secure,” for example, he offers very different advice, quoting Snowden:
"Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on."
Schneier acknowledges that hacking of communication endpoints can defeat even good encryption, but he's got an answer for that, too:
Try to use public-domain encryption that has to be compatible with other implementations. ...Since I started working with Snowden's documents, I have been using GPG, Silent Circle, Tails, OTR, TrueCrypt, BleachBit, and a few other things I'm not going to write about.…
The NSA has turned the fabric of the internet into a vast surveillance platform, but they are not magical. They're limited by the same economic realities as the rest of us, and our best defense is to make surveillance of us as expensive as possible.
Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That's how you can remain secure even in the face of the NSA.
It sounds as though al Qaeda took Bruce Schneier's advice to heart, thanks to leaks from Edward Snowden -- even if Schneier is still doing everything he can to avoid admitting it.
UPDATE: The description of Recorded Future was changed at the request of the company, which said, "While this may seem like splitting hairs, in the world of data analysis software "predictive analytics" has specific technical meaning which implies something different. We use the term web intelligence to reduce this confusion."
I've long been an advocate for fewer restraints on how the private sector responds to hacking attacks. If the government can't stop and can't punish such attacks, in my view the least it could do is not threaten the victims with felony prosecution for taking reasonable measures in self-defense. I debated the topic with co-blogger Orin Kerr here. I'm pleased to note that my side of the debate continues to attract support, at least from those not steeped in the "leave this to the professionals" orthodoxy of the US Justice Department.
The members of the 9/11 Commission, who surely define bipartisan respectability on questions of national security, have issued a tenth anniversary update to the Commission's influential report. The update repeats some of the Commission's earlier recommendations that have not been implemented. But it also points to new threats, most notably the risk of attacks on the nation's computer networks. No surprise there, but I was heartened to see the commissioners' tentative endorsement of private sector "direct action" as a response to attacks on private networks:
Congress should also consider granting private companies legal authority to take direct action in response to attacks on their networks.
This "should consider" formulation avoids a full embrace of particular measures, and in that respect it parallels another establishment endorsement of counterhacking. The Commission on Theft of American Intellectual Property, said in its 2013 report:
Finally, new laws might be considered for corporations and individuals to protect themselves in an environment where law enforcement is very limited. Statutes should be formulated that protect companies seeking to deter entry into their networks and prevent exploitation of their own network information while properly empowered law-enforcement authorities are mobilized in a timely way against attackers. Informed deliberations over whether corporations and individuals should be legally able to conduct threat-based deterrence operations against network intrusion, without doing undue harm to an attacker or to innocent third parties, ought to be undertaken.
If repeated tentative embraces are the way new policy ideas become respectable, "direct action" is well on its way. The 9/11 commission deserves credit, not just for moving the debate but for contributing a label that gives counterhacking a kind of anarcho-lefty frisson.
HIPAA is an arguably well-intentioned privacy law that seems to yield nothing but "unintended" consequences. I put "unintended" in quotes because the consequences are often remarkably convenient, at least for those with power. I'm not sure you can call something that convenient "unintended."
The problem has gotten so bad that even National Public Radio and ProPublica -- hotbeds of bien pensant liberalism -- have started to notice. This story, for example, could be mined for a host of Privy nominations for Dubious Achievements in Privacy Law:
In the name of patient privacy, a security guard at a hospital in Springfield, Mo., threatened a mother with jail for trying to take a photograph of her own son.
In the name of patient privacy, a Daytona Beach, Fla., nursing home said it couldn't cooperate with police investigating allegations of a possible rape against one of its residents.
In the name of patient privacy, the U.S. Department of Veterans Affairs allegedly threatened or retaliated against employees who were trying to blow the whistle on agency wrongdoing.
When the federal Health Insurance Portability and Accountability Act passed in 1996, its laudable provisions included preventing patients' medical information from being shared without their consent and other important privacy assurances.
But as a litany of recent examples show, HIPAA, as the law is commonly known, is open to misinterpretation — and sometimes provides cover for health institutions that are protecting their own interests, not patients'.
"Sometimes it's really hard to tell whether people are just genuinely confused or misinformed, or whether they're intentionally obfuscating," said Deven McGraw, partner in the healthcare practice of Manatt, Phelps & Phillips and former director of the Health Privacy Project at the Center for Democracy & Technology.
At this point, we've seen a boatload of stories in which HIPAA produces stupid or bad results. The real question is whether there are any stories in which HIPAA has produced unequivocally good results -- things that wouldn't have happened without the law. Otherwise, we're looking at a law passed to prevent nonexistent abuses that has become a source of abuse itself. In my view, that's a recipe for repeal -- and pretty much the story of most privacy law.
When you're in the business of pointing out how often privacy law ends up protecting power and privilege, you never run out of material.
Everyone remembers Lois Lerner, the IRS official who pleaded the fifth amendment and refused to testify about her role in the agency's scrutiny of Tea Party nonprofits. And everyone remembers the mysterious 2011 computer crash that made years of her emails unavailable.
Could the messages be recovered with advanced forensics? We'll never know, because the IRS so systematically nuked Lerner's drives that no one could ever recover anything from them.
Why? According to The Hill, "the agency said in court filings Friday that the hard drive was destroyed in 2011 to protect confidential taxpayer information."
I'm sure the taxpayers will find a way to show their gratitude.
It's time once again to point out that privacy laws, with their vague standards and selective enforcement, are more likely to serve privilege than to protect privacy. The latest to learn that lesson are patients mistreated by the Veterans Administration and the whistleblowers who sought to help them. According to the Washington Post:
Citing patient privacy, managers have threatened VA employees or retaliated against those who complain about agency misconduct, according to a key congressman and the union that represents most of the department’s employees.
“VA routinely uses HIPAA as an excuse to punish into submission employees who dare to speak out,” said Rep. Jeff Miller (R-Fla.), chairman of the House Committee on Veterans’ Affairs. He is leading a probe into the coverup of long wait times for VA patients.
David Borer, the American Federation of Government Employees’ top lawyer, listed a number of cases involving a VA claim of patient privacy used to stifle whistleblowers in a June letter to the department. The Office of Special Counsel (OSC), which investigates whistleblower retaliation cases, is “very concerned about the misuse of HIPAA,” said Eric Bachman, an OSC deputy special counsel. “The potential chilling effect of even a small number of these HIPAA retaliation cases is a serious issue and one that should be addressed by the VA in short order.”...
Valerie Riviello is one VA employee who felt the lash of the department’s culture of retaliation.
A registered nurse at the Albany Stratton VA Medical Center in Upstate New York, Riviello said she was threatened with suspension and stripped of managerial duties after she complained last November about how a veteran was treated. Riviello said the vet was unnecessarily restrained, with an arm and leg strapped to bedposts.
“They scared the hell out of me,” Riviello said with worry clear in her voice. “They sent me a letter saying I could go to jail.”
That threat came in the form of an e-mail to Riviello’s lawyer, Cheri L. Cannon, a partner with the Tully Rinckey law firm. The VA e-mail said that information Riviello provided Cannon “unlawfully includes medical records of a VA patient” and noted that violating HIPAA “is a felony offense subject to imprisonment and a fine of up to $250,000.”
Misuse of privacy law is now so common that I've begun issuing annual awards for the worst offenders -- the Privies. The Veterans Administration has officially earned a nomination for a 2015 Privy under the category "We All Got To Serve Someone: Worst Use of Privacy Law to Serve Power and Privilege." The Department is in good company; here are the 2014 nominees.
China seems to have found a reliable legal tool for suppressing dissent. A prominent Chinese human rights lawyer, Pu Zhiqiang, has been arrested after a meeting in a private home to commemorate the 25th anniversary of the killings at Tiananmen Square. The charge? “Illegal access to the personal information of citizens,” a crime punishable by three years in prison.
Clearly, China is on its way to earning a second Privy nomination for “Worst Use of Privacy Law to Protect Power and Privilege.”
But where are EFF and EPIC and CDT and the ACLU? This is not the first time China has brought privacy charges against politically disfavored defendants. Why haven't these advocates of more privacy law vocally condemned China's use of privacy law to foster oppression?
The same question might be asked of the Article 29 Working Party in the European Union, along with a second one: How is China’s law different from the data protection laws that Europe has been urging the world to adopt?
Vodafone put out a highly informative report on the intercept practices of the countries where it does business. The greatest news interest was spurred by its statement that some countries tap directly into the provider's infrastructure and take what they want without notice to the provider:
In a “small number” of countries, Vodafone said in the report, the company “will not receive any form of demand for communications data access as the relevant agencies and authorities already have permanent access to customer communications via their own direct link.”
Vodafone refused to name the countries. But I can't help thinking that the report provides some pretty clear clues about two of them. I suspect we'll soon discover that they are France and Belgium.
The reason is buried in the footnotes to the report. The report gives reasons when it does not disclose the number of lawful intercept warrants the company received in a particular country. Sometimes reporting on wiretaps is prohibited by law.
But in eight cases, the report doesn't cite legal restrictions on disclosure. Instead, it says that it has no intercept numbers because there is “no technical implementation” of lawful intercept capabilities in those countries. In one country, Kenya, there's no implementation because Kenyan law prohibits operators from deploying wiretap capabilities. In the other seven, though, the reason the company gives is murkier: “We have not implemented the technical requirements necessary to enable lawful interception and therefore have not received any agency or authority demands for lawful interception assistance.”
You might think those are countries that have simply decided not to do lawful intercepts, perhaps because intercept equipment is expensive or technically demanding. For five of the seven, that's plausible. They are Mozambique, Ghana, Lesotho, Tanzania, and Fiji.
But the other two on the list are France and Belgium. Does anyone think that these two countries lack the resources, the technical skills, or the will to conduct lawful intercepts? Hardly. France is second to none in its enthusiasm for state intelligence collection, and especially for wiretaps. And Francophone Belgium is often heavily influenced by the governing style of French institutions.
It is inconceivable that these two countries lack a robust wiretapping capability. It is also inconceivable that they would fail to tap mobile phone systems, including Vodafone's. Yet Vodafone says that it has not received any demands for lawful interception from these countries and has not implemented the technical requirements to enable lawful interception.
The answer may be in how the Vodafone report seems to define lawful interception -- as requiring that the operator carry out the wiretap:
“In most countries, governments have powers to order communications operators to allow the interception of customers’ communications. This is known as ‘lawful interception’. ... Lawful interception requires operators to implement capabilities in their networks to ensure they can deliver, in real time, the actual content of the communications (for example, what is being said in a phone call, or the text and attachments within an email) plus any associated data to the monitoring centre operated by an agency or authority.”
So if Vodafone had been ordered simply to give the governments of France and Belgium direct access to Vodafone switches, I'm guessing that Vodafone's report would say that it had not implemented lawful interception in those countries. Which is what it does say.
Maybe I'm wrong about this, but my money is on Belgium and France as two of the countries with "direct access" to telecom switches.
I'm sure the European Commission will investigate, as soon as they finish their proposals for reforming NSA, FBI, DHS, Treasury, CIA, and the Bureau of Land Management.
UPDATE: Corrected Vodaphone to Vodafone. Thanks, Bart (Amon_RA)
Just how dumb is the “right to be forgotten”? Google will make it easy to find out. That's because Google has automated the process for making takedown requests under the European Court of Justice's “right to be forgotten” ruling. If you've got a piece of personal data that you'd like forgotten, all you have to do is fill out Google's handy online form.
Anyone can make a request (though you'll need to take a digital photo of a piece of ID as proof of identity). You then need to find a link (using a European version of Google) and explain why the personal data at the link is inaccurate, outdated, or inappropriate. The opportunity for abuse is obvious.
I feel bad for Google, which is stuck trying to administer this preposterous ruling. But that shouldn't prevent us from showing quite concretely how preposterous it is.
I propose a contest. Let's all ask for takedowns. The person who makes the most outrageous (and successful) takedown request will win a “worst abuse of privacy law” prize, otherwise known as a Privy.
To get you started, here are the four requests I've already filed.
1. Ban this book!
Reason this link violates the right to be forgotten:
The book claims that in 2007 I narrowly defeated Vladimir Putin and Tony Blair in a contest to win Privacy International's title “Worst Public Official.” The book also states that the prize was awarded because of NSA's controversial domestic surveillance activities, combined with fallout from the USA Patriot Act, and that the award is a “dubious” honor. This is inaccurate. In fact, I had little or nothing to do with either domestic surveillance activities at NSA or the USA Patriot Act, and the trophy is a “dubious” honor only in the sense that Privacy International never actually gave me the promised trophy, despite repeated requests on my part.
2. Louis Brandeis memorial wuss fit over a photo
Reason this link violates the right to be forgotten:
This image is outdated. It shows me with a beard.
I don't have a beard now. And if you look at the picture, you'll realize why.
3. You won't believe how much weight you can lose with this simple European Court of Justice trick
Reason this link violates the right to be forgotten:
This image is outdated. It is 14 years old. I've lost weight since then.
4. Oh hell, let's just censor people we don't like
Reason this link violates the right to be forgotten:
This link is inappropriate. It compiles stories making many distorted claims about my political views. Political views are a particularly sensitive form of personal data. The stories are written by men who disagree with me, and they are assembled for the purpose of making money for a website, a purpose that cannot outweigh my interest in controlling the presentation of sensitive data about myself.
Think you can do better? Enter as often as you like and send the results to firstname.lastname@example.org. If you don't want your name listed as a winner, tell me, or use a pseudonym.
Remember, though, that prizes will be awarded only for takedown requests that succeed. Please send details demonstrating that the link you identify was in fact taken down.
I'll be testifying tomorrow afternoon before the Senate Select Committee on Intelligence, talking about the bill that bans NSA's bulk collection of metadata. It passed the House after small amendments that privacy groups are now complaining loudly about.
I don't like the bill for quite different reasons. My prepared testimony is here: Stewart Baker Testimony, June 5, 2014, to the Senate Intelligence Committee.
After explaining why bulk collection should not be banned, here's what I say about the privacy groups' objections:
Everyone recognizes that if bulk collection requests are foreclosed, then the government must make individualized requests for data. And to do that, it has to give the companies specific search terms to use. Before amendment, the House bill said that the government could only ask the companies to use three kinds of search terms. They could only ask the companies to look for a suspicious “person, entity, or account.”
This was foolish. Clues come in many forms. What if the agency doesn’t know the suspect’s name but does know his internet address, or the unique identifier of his tablet? Those are proper and specific search terms, and they are likely to be of value to terrorism investigators. So the bill was revised; now it allows the agency to use search terms “such as … a person, entity, account, address or device.”
Some opponents have a beef with the addition of “address or device.” They claim that these words are too open-ended and ambiguous. But when asked to identify the ambiguity they fear, the critics offer only strained and unlikely interpretations. Senator Ron Wyden has said that the law as adopted by the House “could be used to collect all of the phone records in a particular area code, or all of the credit card records from a particular state.” This apparently rests on the remarkable view that a state or an area code is the same as an “address.” Wow. I knew Oregon was a big state, but I’m still surprised to hear that they’re using area codes as addresses out there.
Let’s be realistic. If that can be called an ambiguity, then no words will ever satisfy the critics. Why not object to the word “person,” for there are cases treating entire cities or counties as persons?
And dropping those words from the bill creates obvious and dangerous gaps in our ability to investigate terrorism. Take these examples:
- Suppose that attackers use a VOIP phone, as the Mumbai attackers did. The phone might have only an IP address. If we drop “address” from the list, the government can’t serve a 215 order asking for information about the online activities of that phone.
- Or suppose that in a Mumbai-style attack the terrorists keep changing their SIM cards (and thus phone number); we would need to search not for the phone number but for the IMEI number that identifies the actual phone. Drop “device” from the list and the government can’t ask for that information.
Other opponents aim their fire at the words “such as.” They would drop those words, capping the list of search terms at five, or even three. This too is foolish and dangerous. Can the proponents of this change predict with perfect foresight which clues we’ll need to uncover the next conspiracy? I doubt it.
Again, a few examples show why we cannot foreclose the use of other specific search terms:
I suppose that the practical answer to some of these questions is that the government won’t use its counterterrorism or national security authorities. It will rely on criminal subpoenas, which don’t come with any of these restrictions. But if that is so, if the intent is to let our intelligence agencies have all this information as long as they rely on law enforcement authorities, then we’re back to privacy theater.
Or privacy farce, since the result of the legislative changes is to put the United States Congress on record as giving fewer tools to those investigating terrorism and national security threats than to those pursuing muggers and embezzlers.
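To make the “address or device” point concrete: identifiers come in recognizably different shapes, and a selector that isn't a name can still be perfectly specific. Here is a hypothetical sketch; the patterns are simplified illustrations, not anything drawn from the bill or from agency practice.

    # Non-name selectors (device IDs, network addresses) are just as specific
    # as names. The patterns are deliberately simplified illustrations.
    import re

    PATTERNS = {
        "IMEI (device)": re.compile(r"^\d{15}$"),
        "phone number": re.compile(r"^\+\d{8,15}$"),
        "IPv4 address": re.compile(r"^(\d{1,3}\.){3}\d{1,3}$"),
        "email account": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    }

    def classify(selector):
        for kind, pattern in PATTERNS.items():
            if pattern.match(selector):
                return kind
        return "unknown"

    for s in ["490154203237518", "+12025550123", "203.0.113.7", "alice@example.com"]:
        print(s, "->", classify(s))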
The ACLU and EPIC have campaigned long and hard against surveillance cameras in public spaces, and they've had considerable success -- despite a paucity of actual serious privacy abuses. So it's worth remembering that all this privacy theater imposes real costs on crime victims.
This story, headlined "After Boy and Girl Are Stabbed, Anger Over a Lack of Cameras" is only surprising because it appears in the New York Times:
The 7-year-old girl is hospitalized in critical condition, the only witness to a crime that so far defies explanation: A man stabbed two young children in the elevator of a public-housing project and escaped into the late-spring evening. Her best friend, a 6-year-old boy, is dead.
Though residents of the Brooklyn housing project saw a man fleeing through the development after the attack, he remained at large on Monday, the search made more difficult because the building has no surveillance cameras.
Living in housing projects in East New York means living with the daily threat of violence, and Boulevard Houses is no exception. But until Sunday night, parents felt safe taking their children downstairs to play....
The lack of cameras raised questions on Monday as elected officials accused the New York City Housing Authority, which manages the building, of being slow to install the cameras.
To be fair, I haven't seen reports suggesting that privacy groups opposed installation of surveillance cameras in these particular public spaces. Maybe they think that city-owned public housing should be as freely surveilled as private housing. But I wouldn't take bets on it.
The NBC interview with Edward Snowden was instructive in several ways. He continues to present himself as a reasonable man who tried to stop illegal programs but was left with no option but to go public. But the more closely you listen, especially when he says things that can be checked against the record, the more dubious his claim begins to seem.
In fact, the NBC interview, and the exchange with NSA that followed, reveal a lot about Snowden’s style of truth-telling, which turns out to be hard to distinguish from, well, lying.
When questioned about his claim to have raised concerns inside the NSA before breaking his promises of confidentiality, Snowden said, “I actually did go through channels, and that is documented. The NSA has records, they have copies of emails right now to their Office of General Counsel, to their oversight and compliance folks, from me raising concerns about the NSA’s interpretations of its legal authorities.”
This time, remarkably, NSA was not caught flat-footed. Showing an impressive grasp of the news cycle, the agency quickly released the only email that Snowden sent to the NSA GC. It was clearly the message Snowden described, but it was nothing like a blown whistle.
Instead, it asked a question straight out of high school civics. Pointing to training materials about the agency’s sources of legal authority, starting with the Constitution, Snowden noted that the materials listed “Federal Statutes/Presidential Executive Orders” on a single line. He asked what in retrospect is a gotcha question with a phony humility worthy of Uriah Heep: “I'm not entirely certain, but this does not seem correct, as it seems to imply Executive Orders have the same precedence as law. My understanding is that EOs may be superseded by federal statute, but EO's may not override statute. Am I incorrect in this?”
NSA’s lawyer responded promptly, and correctly, saying that EOs were indeed subordinate to statutes. And there the matter rested.
Only the delusional would view that exchange as “raising concerns” about NSA’s programs. But Snowden isn’t delusional. He’s deliberately misleading us. Because when we parse his answer, it turns out not to say what we thought it said. What he actually said was that NSA had emails “from me raising concerns about NSA’s interpretations of its legal authorities.” (Emphasis added.)
And sure enough, that’s exactly what his email did. What it didn't do was raise concerns about the lawfulness or wisdom of NSA’s programs – which was of course the impression he meant to leave. And, quite probably, it was an impression he thought he could get away with. He didn't think NSA was capable of conducting a wild-goose chase through its email records and quickly declassifying what it found.
When NSA showed how slippery his original answer had been, Snowden got angry, and his intemperate defense was equally revealing. He insisted that this was not his only communication, that he had sent other messages to other offices. But now we know how to read his claims, and it’s likely that those other emails, if they exist, are just more vague expressions of concern about “NSA’s interpretations of its legal authorities.” Because, on a really close reading, that's all he promised us.
Snowden’s second line of defense was to accuse the NSA of having lied earlier, when it said it found no record of his past objections. Wrong again. What NSA said at the time was, “we have not found any evidence to support Mr Snowden's contention that he brought these matters to anyone's attention." That’s still true, since the much-touted email doesn’t bring anything (other than Snowden's skill as sea lawyer and proofreader) to anyone's attention, nor does it raise any objection to any program.
In short, Snowden has revealed a lot about himself in this exchange. In his original statement he worked hard to avoid an outright lie. But he worked equally hard to leave a deeply false impression, and on a point where he thought the security agencies couldn't contradict him -- either because they were unable to search their records completely or because they were unable to declassify the truth. I'm willing to bet that Edward Snowden didn't invent his approach to the truth just for this interview.
More likely, it is something he's done before, such as when he talked about NSA committing economic espionage. He didn't quite say that NSA steals commercial secrets for American companies, but you have to parse him like the Talmud to realize that. Most listeners, and most headline writers, came away convinced that he had confirmed their worst suspicions.
But if deploying technical truth in support of deliberate misrepresentation is Snowden's style, and I think it is, then there's one simple lesson. The public can't trust him. At least not when he offers hints and teases and “interpretations,” rather than factual statements backed by unequivocal documentary evidence.
And, come to think of it, hints and teases and interpretations of the ambiguous are pretty much all the public has been offered by Snowden and his journalist allies since, oh, about June of last year.
When the Justice Department indicted five People's Liberation Army hackers, it directly accused the PLA of stealing "privileged attorney-client communications related to SolarWorld's ongoing trade litigation with China."
This is not a surprise to knowledgeable observers. Chinese attacks on large U.S. law firms have been widely acknowledged, and last summer the American Bar Association condemned "unauthorized, illegal intrusions into the computer systems and networks utilized by lawyers and law firms." But the ABA flinched from actually mentioning China or the PLA in the resolution, and as far as I can see, ABA President Jim Silkenat has still said nothing about Chinese hacking of US law firms.
Contrast that silence with Silkenat's rush to demand answers from the NSA about more attenuated allegations. On February 15 of this year, the New York Times published a Snowden-inspired article claiming that Australia had intercepted an American law firm's advice to Indonesia on a piece of trade litigation. The article was full of anti-NSA spin but it made no claim that NSA itself was spying on privileged communications.
Nonetheless, five days after that story appeared, Silkenat sent a two-page letter to the head of NSA. "Whether or not those press reports are accurate," Silkenat wrote, he sought the NSA's director's support "in preserving fundamental attorney-client privilege protections for all clients and ensuring that the proper policies and procedures are in place at NSA to prevent the erosion of this important legal principle."
Fair enough. But it's now been three days since we saw a much more direct accusation that the PLA was spying on privileged attorney-client communications in the US.
Who's taking bets on whether the American Bar Association will be as quick to call out the Chinese government as it was to call out its own?
Was Snowden's document theft actually an espionage operation for a foreign power? That's the possibility raised by Edward Jay Epstein in a (paywalled) Wall Street Journal op-ed. Epstein offers some new evidence for his theory. In particular, he says that NSA investigators now know that Snowden's tactics included breaking into two dozen compartments using forged or stolen passwords. Once there, Snowden loosed an automated "spider" with instructions to scrape the compartments for particular information. In most cases, US officials have said, the data Snowden took was overwhelmingly of military and intelligence value to our adversaries and had little or nothing to do with privacy or whistleblowing.
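For readers who don't know the term, a "spider" is just an automated crawler: give it a starting point, and it follows links and harvests whatever matches its instructions. The toy sketch below shows the generic technique against public web pages; it is emphatically not Snowden's code, tools, or targets.

    # Toy "spider": breadth-first link-following from a seed page, keeping
    # the URLs of pages that match a keyword. Generic technique only.
    import re
    from collections import deque

    import requests

    def spider(seed, keyword, limit=50):
        seen, hits, queue = {seed}, [], deque([seed])
        while queue and len(seen) <= limit:
            url = queue.popleft()
            try:
                page = requests.get(url, timeout=5).text
            except requests.RequestException:
                continue
            if keyword in page:
                hits.append(url)
            for link in re.findall(r'href="(https?://[^"]+)"', page):
                if link not in seen:
                    seen.add(link)
                    queue.append(link)
        return hits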
It's entirely possible that Snowden is a spy. But it's also possible that he stole the military data to make sure he could find a safe foreign haven after his disclosures. That would fit the pattern of his disclosures over the past year. Dozens of recent Snowden leaks have revealed nothing about "mass surveillance" -- but they have consistently advanced Russian geopolitical interests.
In support of the "documents for asylum" theory, remember that, during his unsuccessful campaign to stay in Hong Kong, Snowden was quick to display stolen documents detailing the Chinese computers NSA had hacked. Here's the South China Morning Post from June 13, 2013:
Snowden said that according to unverified documents seen by the Post, the NSA had been hacking computers in Hong Kong and on the mainland since 2009. None of the documents revealed any information about Chinese military systems, he said.
One of the targets in [Hong Kong], according to Snowden, was Chinese University and public officials, businesses and students in the city. The documents also point to hacking activity by the NSA against mainland targets.
Snowden believed there had been more than 61,000 NSA hacking operations globally, with hundreds of targets in Hong Kong and on the mainland.
Interestingly, now that Snowden has lost his bid for haven in the Chinese-dominated city, he's stopped leaking information about NSA's hacking of Chinese computers.
Epstein argues that Snowden's whistle-blowing was just a cover for espionage. Maybe so, but there's at least one hole in his argument. "Contrary to Mr. Snowden's account," Epstein writes,
the document he stole about the NSA's domestic surveillance couldn't have been part of any whistleblowing plan when he transferred to Booz Allen Hamilton in March of 2013. Why? Among other reasons, because the order he took was only issued by the FISA court on April 26, 2013.
The problem for this claim is that the April 26 order was just one in a long line of 90-day FISA court orders. (Here's one from April of 2011, for example.) Snowden could easily have been motivated by those earlier orders, even though he ultimately stole the most recent one. In fact he has said that "the breaking point," when he finally decided to reveal the program, was DNI Clapper's "least untruthful answer" to Sen. Wyden's question about mass collection in March of 2013.
That doesn't make Snowden a truth-teller. He was almost certainly lying when he claimed he didn't steal passwords from his co-workers. And if he was watching Sen. Wyden in March 2013 as he claims, then he must have known that Wyden's question was part of a years-long campaign to end the Patriot Act interpretation that sustained NSA's program. For several years, the senator had been making coded attacks on the FISA court's interpretation of section 215. Examples here, here, here, and here. Anyone who knew about the NSA program, and that included Snowden, would have had no trouble understanding what Sen. Wyden was complaining about.
That's important because Snowden has tried to portray himself as a whistleblower with nowhere to go to challenge a plainly unlawful program. But if he were following Sen. Wyden's campaign, he knew that there already was a debate about the program, and at the highest levels of government. He also knew that the program had been ruled lawful and that Sen. Wyden had not yet persuaded his colleagues to end it.
Snowden revealed classified information, in short, not because he lacked an outlet for his complaints but because he didn't like the decisions that the executive, congressional, and judicial branches had made.
So the best case for Snowden is that he's an egoist who thinks his views should triumph over the country's leadership -- that he leaked classified documents not to start the debate but to end it.
The worst case is that he's a spy.
And in between is the theory I find most plausible, at least today: He's an egoist who wanted to kill the program -- but without paying too heavy a personal price. So he stole all the other secrets in hopes that they would give America's adversaries an incentive to protect him.
And so far, they have.
Earlier, I promised a post that would make the positive case for the third-party doctrine and Smith v. Maryland.
The case against it seems pretty obvious. Privacy advocates are glad to tell us that the pace of technological change requires that we expand fourth amendment protections. “We're putting our entire lives online,” they say. “The government's ability to collect and analyze data is growing. Only by expanding the fourth amendment can we even the balance that protects our privacy.” Or more colloquially, “Some new technologies are just plain creepy, especially in the hands of the government, and we want the fourth amendment to save us from them.”
The problem with that argument is that definitions of “creepy” change pretty fast.
Brandeis wrote his seminal article on privacy because he thought the Kodak camera was creepy, and he wanted the law to prevent the hoi polloi from taking his picture. In the 1970s, the FBI's ability to maintain clippings files on prominent Americans was a creepy source of power for J. Edgar Hoover. And the Attorney General actually imposed a fourth-amendment-style “predicate” requirement on future FBI clippings files about individuals. Today, though, Google has democratized the clippings file, and it's too common to be creepy.
Much as we may regret what we said to a reporter back in 1997, there's no point in feeling violated every time it shows up in search results. So we don't.
We adjust. The line between “creepy” and "not creepy" isn't fixed. It creeps.
This makes it very dangerous to build a fourth amendment doctrine on the relative creepiness of new technologies. To start, even if we thought the law should restrict creepy new technologies, why would we ask nine cloistered quasi-academics with an average age pushing 70 to tell us where the “creepy line” is today? And why would we put their answer largely beyond reconsideration – enshrined as precedent in the Constitution? There's a good chance that if we'd done that in the last century, we'd still be waiting for the Court to reconsider the rule that governments must have probable cause and a warrant before taking pictures of people or before running Google searches on them.
If you want to know what information Americans really value, and what technologies they really find creepy, Smith v. Maryland turns out to be a pretty good proxy – and certainly better than consulting a panel of nine Baby Boomers. When Americans share certain data, they are voting with their feet – giving up some privacy for the sake of something they value more. By now everyone understands the social media business model; we're getting the service because we are giving up the data. And most of us have been occasionally surprised and disconcerted by the ways in which the data has been used. Sometimes we decide that we value our privacy more than the service, and we quit. More often, we don't. And our “creepy line” moves a bit. The more often it moves, the less surprising and the less offensive we find it when the government gets access to the same data we've already given to Twitter or Google or Facebook or AT&T.
Viewed another way, the decision to share certain data with a third party is part of a predictable journey. It's a sign that we care about the privacy of the data a little less than we once did. And once shared, the data slowly becomes less sensitive. It's the journey from Brandeis to Kodak to Flickr.
If I had to guess, that's the journey we're on with location data. The Supreme Court clearly thinks that routine government access to location data is kind of creepy, and it's tempted to give location some special constitutional status, notwithstanding Smith. But if it does, I predict, it's going to end up looking as foolish and out of touch as Brandeis does today. Why? Because more and more kids are getting smart phones today, sometimes as early as elementary school, and practically every parent who buys one is installing an app that relays the kid's location to the parents. Which means that kids are already beginning to graduate from high school without any sense that their location can or should be hidden from the ultimate authorities, their parents. They will never share the current Supreme Court's instinct that their location is uniquely private.
Saying that I'd rather trust the verdict of millions of Americans than the instincts of nine Supreme Court justices is not the same as saying that there should be no special privacy rules for third-party data. It just means that those rules should not be written by the Court.
In fact, the introduction of new third-party services is routinely used by privacy advocates to call for new restrictions on government access to those services' data. And the new services themselves are eager to deflect their customers' privacy concerns toward regulating the government and not their service. So there's a built-in lobby for legislation that tinkers with the default Smith rule. As a result, Congress has been active in setting special rules for government access to some third-party data.
Under Smith's default rule, for example, electronic communications could be obtained without a warrant, just like everything else we share with third parties. But in practice Smith doesn't govern electronic communications. Instead, government access to those records is governed by the Electronic Communications Privacy Act, enacted nearly thirty years ago and revised a dozen times since then. ECPA is remarkably fine-tuned, setting several different standards for government access to different kinds of private communications, all of them higher than the default that Smith offers. (There is an active campaign right now to further raise those standards.)
Or take the most famous collection of third party data, the one that got this debate rolling – NSA's collection of the metadata for all calls touching the United States. Even there, actual intrusions into privacy were strictly limited. The government held a lot of data, but it conducted searches on fewer than 500 identifiers a year. All three branches of government imposed limits on NSA's actual access to the data. And Congressional reforms of the program are already being debated, with some changes nearly certain.
None of this suggests a failure of democracy that requires the Supreme Court to step in and impose its own Procrustean definition of “creepy” on the country.
It turns out that Smith v. Maryland provides a good first-order estimate of Americans' evolving expectations of privacy. And where it's wrong about those expectations, it provides a powerful incentive for Congress and the Executive to bring the law into accord with Americans' expectations.
The third-party doctrine of Smith v. Maryland, 442 U.S. 735 (1979), is getting a bad rap from libertarians of the left and the right. Smith holds that the police don't need a search warrant to get information about me from a third party. If I keep a diary in my desk drawer, the police must get a search warrant based on probable cause if they want to read it. If I leave the diary with my mother for safekeeping, though, the third party doctrine says that the police only need to serve her with a subpoena to get it. The same is true if I store the diary in the cloud with Google Drive or Dropbox. If it were on my computer, the police would need a warrant to read it; in the cloud, they don't.
The theory of Smith is that I have a reduced privacy expectation in things I've shared with others. Life teaches us the same lesson. By the third grade we've all discovered the dangers of telling our deepest secrets to a friend. The Founders knew it too. As Ben Franklin famously said in Poor Richard's Almanack, “Three can keep a secret, if two of them are dead.” And, less famously but even more to the point for Smith, “If you would keep your secret from an enemy, tell it not to a friend.”
Why should we rethink a doctrine so grounded in human experience? Advocates point to the mass of data that we increasingly share on the internet, especially via smart phones. If all that data can be obtained without a search warrant, they ask, what is left of the fourth amendment's protection of our privacy? Surely there must be a limiting principle, a point where the intrusions are so great that the fourth amendment kicks in, no matter what Ben Franklin says. For example, Randy Barnett, my co-blogger, asks me whether the Smith doctrine means that the government could sweep up all the data that Americans have given to credit card companies, doctors, and accountants – without implicating the fourth amendment.
I'll answer that by turning the question around. Smith is the law today; so when should it not apply? Randy seems to argue that Smith should recede and the fourth amendment should kick in whenever government starts gathering “too much” information. My first, quick answer to Randy pointed out that he has his own problem finding a limiting principle for that approach: We can agree that, today, a park policeman standing on the steps of the Lincoln Memorial does not need a warrant to observe the behavior of a tourist walking by. After all, the tourist knows that his appearance and actions there are available to the public, and he has no fourth amendment protection from observation. Why should the constitutional analysis change if the same policeman stands in the same spot on Inauguration Day, when he can observe half a million people? And if it should, when is he looking at "too many" people? 200? 20,000?
To be fair, Randy isn't required to extend his "too much" argument to other fourth amendment exceptions. So let me try here to address the question whether Smith must be reconsidered if it allows the government to collect enormous amounts of information about Americans simply because the data is stored in third-party computers. Randy is sure that large-scale government scrutiny of third-party financial and medical records will be deeply shocking to American sensibilities, and the Court's. I'm skeptical. A few simple Google searches turn up stories suggesting that the government is already scrutinizing 4.5 million medical transactions a day:
A provider-screening process is able to capture critical attributes that may help identify fraudulent health care providers. CMS is then able to use software to sift through the 4.5 million claims CMS receives a day and run them through algorithms that search for patterns of fraud.
Another search discloses that, pressed by Congress, the SEC is already using Big Data to scrutinize financial trading patterns as well as the transactions of regulated entities. Yet another shows that financial institutions already send the government 11 million reports a year about customer transactions that look suspicious. And as for phone metadata, I've estimated in the past that American law enforcement serves over a million metadata subpoenas a year on telephone companies, and that the practice is probably a hundred years old.
It seems a little late to decide that all this is “too much,” especially in the absence of evidence that Smith has been seriously abused. Privacy advocates are going to have trouble explaining where the “too much” line should be drawn, and exactly how many existing regulatory regimes the Supreme Court should overturn by undoing Smith.
That's the argument against changing the Smith doctrine, and especially against the claim that "too much" use of Smith requires that the fourth amendment kick in. But I also think that there is a positive case for Smith. Since this post is already too long, I'll cover the positive case in a second post, hopefully later today.
Apart from the word "property," what is it about modern intellectual property law that should appeal to conservatives? The free-floating liability to plaintiffs' lawyers? The income transfers to people who mostly hate middle America? The capture of lawmakers and regulators by a rent-seeking minority? The enshrining of those lobbyists' victories in international law -- enforced in Geneva and immune to democratic change in this country? The law's dramatic turn from the original understanding of the Framers of the Constitution?
Despite these features, only a handful of conservatives seem ready to rethink intellectual property law. One young conservative in that camp is Derek Khanna, whose just-released R Street Policy Paper makes the conservative case for copyright reform. Here's a sample:
As with other enumerated powers of the federal government, Congress has expanded copyright far beyond what was originally intended. Just as Congress frequently neglects to abide the Origination Clause and the Commerce Clause, it likewise has ignored the Copyright Clause’s requirement that these monopoly instruments be granted only for “limited times.” Contributing greatly to this distortion has been the influence of a persistent army of special interest lobbyists, usually representing media companies, rather than the interests of creators and the general public.
In order to restore the original public meaning of copyright, copyright’s term must be shortened. We must reconsider existing international treaties on copyright and not sign any treaty that either would lock in existing terms or extend terms even longer (such as the Trans Pacific Partnership Treaty). Finally, copyright terms must not be extended to “life+100” when the next copyright extension bill is expected to come up in 2018.
I don't know how the Supreme Court will decide ABC v. Aereo, argued last week. But however the case is decided, I suspect there's a real risk that the Court will screw up the law.
Why? Three reasons.
1. The case requires interpretation of a complicated statutory regime that the Court rarely construes. Aereo is exploiting a seam in copyright law that implicates fair use, performance rights, and how these rules apply to cloud computing. Intervening occasionally in complex statutory schemes is a high-risk endeavor for the Court. The justices are very smart lawyers, but if they don’t get a run of cases in the same area, they often lack a feel for how all the pieces fit together.
2. That is surely true in Aereo, where the court is genuinely at sea. Oral argument revealed a widespread disposition to view Aereo's business model as too clever by half -- using thousands of tiny "personal" antennas to collect and transmit broadcast television without paying the fees that apply to cable companies who do the same. The justices seem to be struggling to find a way to slap Aereo down without damaging the legal framework that today protects cloud companies like Dropbox from the copyright plaintiff's bar. The Court seems to be reaching for a creative way out of this predicament. That's not good, for the third reason:
3. This was an April argument. In fact it was so late in April that the opinion probably wasn’t assigned until the last Friday of the month. That’s important because the court aims to finish its business and go on recess by the end of June. And since there will surely be concurring or dissenting opinions, simple fairness and tradition require that the justices in the minority see the majority opinion by June 1. That means the justice drafting the Court's opinion has only five weeks both to figure out how to reach the desired result and to produce a detailed opinion that scans the field and explains how its decision fits into that context. That’s a big task, and it can’t be done by starting from scratch, even with the help of a law clerk. The justice assigned the case will have to fall back on briefs filed by interested parties, all of whom drafted their submissions to teach the Court all the copyright law and facts that fit their interests.
It will be very difficult for the Court to see past those interests as it drafts, especially if it's trying to chart a new path on a tight deadline. Sometimes gaps and mistakes can be cured in the back and forth between the majority and the dissenters. But not in June. By the time the dissent is drafted, there may only be a week or two before recess. The opinions are more likely to talk past each other than to engage in a dialogue.
This is a recipe for error. The errors may not be obvious, of course. They could be no more than a misguided footnote, but that footnote could easily make law for a generation if the Court never returns to this dusty corner of the US Code. As Justice Jackson once said of the Court, "We are not final because we are infallible, but we are infallible only because we are final."
It is also a recipe for a splintered Court. If a justice isn’t sure the assigned drafter will produce an acceptable rationale, or fears there won’t be time to hone the draft into something he or she supports, the temptation will be to begin writing a separate opinion soon, just in case. And, once written, the draft is likely to seem more persuasive to its author than someone else’s work. That's how separate opinions proliferate, so that the lower courts must figure out what the Court actually held by counting noses rather than construing text. A divided Court has the advantage, I suppose, of avoiding error, since none of the justices’ opinions is authoritative. But it often leaves the law less certain than before the Court spoke. Which means that the Court will have to take more cases to clear up the confusion.
I could be completely wrong, but my money is on a decision in Aereo that leaves copyright law worse off than it is right now.
The latest NSA data dump is a set of declassified pleadings on the 215 metadata program. The program has been upheld by all the Foreign Intelligence Surveillance Court (FISC) judges and by district judge William Pauley in New York. It was, however, ruled unlawful once -- by DC district judge Richard Leon. Judge Leon's opinion was colorful, but it hasn't proved especially persuasive.
The declassified documents may tell us why. They disclose the latest court fight over the program. A telecom company that received a 215 order (identified by Ellen Nakashima of the Washington Post as Verizon) asked the FISC to reconsider the program in light of Judge Leon's ruling. Judge Rosemary Collyer of the FISC did so, and made short work of it, laying out and rejecting each of Judge Leon's reasons for treating the program as a fourth amendment violation. No surprise, really.
But what has to hurt is Judge Collyer's dead-pan takedown of Judge Leon's hyperventilating prose. And in a parenthetical, no less. Summarizing Judge Leon's reasons for not following Smith v. Maryland, Judge Collyer writes:
The NSA program, on the other hand, "involves the creation and maintenance of a historical database containing five years' worth of data" and might "go on for as long as America is combating terrorism, which realistically could be forever!" Id. (italics and exclamation point in original).
Yes, indeed they were.
Which raises two questions about Judge Leon's opinion:
1. Is this the first judicial opinion rendered unpersuasive by its punctuation?
2. Was his CAPS LOCK key broken?
UPDATED to correct a typo in Judge Pauley's name. Thanks, Matt!
Edward Snowden cleared up a lot when he appeared on Vladimir Putin's "town hall" video program. https://www.youtube.com/watch?v=w1yH554emkY
His question for Putin was familiar to anyone who's followed Snowden's remarks in recent months: spying isn't bad, but "the mass surveillance of online communications and the bulk collection of private records" is evil. He trashes the US for programs that "unreasonably intrude on the private lives of ordinary citizens" and asks, "Does Russia intercept, store or analyse in any way the communications of millions of individuals?"
I've prepared and answered a lot of questions at hearings, and a compound question like that is almost always a setup: It begs for a categorical "No." And that's what it got. It sure looks as though Snowden is playing the Kremlin's game here, serving up a pre-arranged softball on demand.
Equally interesting is the Russian government's implicit endorsement of the Snowden "mass surveillance" talking point. This television program is tightly scripted, and Snowden's question must have been approved at the highest levels of the Russian government to get past the screeners. So this is clearly a message that the Russian government wants to promote.
I've suspected for a while that Snowden's "mass surveillance" objection was a phony. It doesn't explain most of the stories Snowden has fathered or most of the documents he has compromised. Is it mass surveillance for NSA to monitor the communications of the Syrian military, or to join with Norway in scrutinizing Russia's activities in the Arctic, or to modify a USB cable so it can extract the secrets of a single computer -- to name just three programs that the Snowdenistas have disclosed?
Now we can see not just that the "mass surveillance" justification for Snowden's leaks is false but where the falsehood came from: it was almost certainly manufactured by the same Russian government that has now embraced it.
Why does Russia want this particular lie in circulation? Putin's answer tells us that too. After making the laughable claim that Russian surveillance is controlled by Russian law and Russian courts, Putin lets his mask slip just a bit: "there is no mass scale .... We do not have as much money and as many devices as the US to do that."
The Russians can't match NSA in money or technology (or in allies, he might have added). So Russia wants to drastically erode the American advantage in these things. And that, of course, is exactly the effect that Snowden's disclosures have had. If he persuades Americans to turn against NSA's foreign intelligence methods, or if he induces our allies to trim NSA's wings, or if he gets American technology companies to refuse to help their country, then Russia's lack of money, allies, and technology won't matter as much.
To sum up, for the last several months, while living in Russia, Snowden has been putting forward a justification for his acts (a) that he knows is not true, since it doesn't explain his actions, (b) that is approved at the highest levels by the Russian government and (c) that gravely harms the US and helps Russia in its confrontations with the US around the world.
I've said for a while that I thought the jury was out on whether Snowden is a traitor.
Now I think I hear it filing in.
Who says you can't learn anything watching Russia's propaganda programs?
An army of researchers recently published a short study of a weakness that NSA is alleged to have introduced into a public security standard. Joseph Menn of Reuters gave the study lengthy and largely uncritical coverage; here's the gist:
Security industry pioneer RSA adopted not just one but two encryption tools developed by the U.S. National Security Agency, greatly increasing the spy agency's ability to eavesdrop on some Internet communications, according to a team of academic researchers. Reuters reported in December that the NSA had paid RSA $10 million to make a now-discredited cryptography system the default in software used by a wide range of Internet and computer security programs. The system, called Dual Elliptic Curve, was a random number generator, but it had a deliberate flaw - or "back door" - that allowed the NSA to crack the encryption. A group of professors from Johns Hopkins, the University of Wisconsin, the University of Illinois and elsewhere now say they have discovered that a second NSA tool exacerbated the RSA software's vulnerability.
The allegation that NSA weakened the dual elliptic curve random number generator has been floating around for some time, and it has already had some policy impact. The President’s Review Group was reacting to the story when it declared that the US Government should "fully support and not undermine efforts to create encryption standards [and] not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software."
A careful reading of the actual study, though, suggests that there’s been more than a little hype in the claim that NSA has somehow made us all less safe by breaking internet security standards. I recognize that this is a technical paper, and that I’m not a cryptographer. So I welcome technical commentary and corrections.
With that disclaimer, however, it seems to me that the paper makes two points that take a lot of the air out of the "NSA wrecks internet security" balloon:
1. If there’s a backdoor in the standard, no one has found it.
It’s an article of faith among academic cryptographers (and something the Reuters article just assumes) that there is a backdoor in the dual elliptic curve standard. In 2007, some Microsoft researchers explained how a backdoor might have been implanted in the standard. Researchers have been looking for ways to exploit the backdoor – and thus prove its existence – ever since. Yet the paper concedes that the researchers can’t confirm the existence of a flaw. Instead, they had to make up a different flawed protocol of their own and show how quickly they could exploit that vulnerability. The artificiality of that exercise probably should have made Reuters a little more skeptical about the study's results, but there's a more important point in the researchers' concession.
Seven years is a lifetime in cryptanalytic attacks, so it’s quite a surprise that no backdoor has been proved in all this time. It raises the possibility that there really is no flaw – or that NSA has introduced a flaw that only NSA can exploit. That’s important because the press and a lot of cryptographers have been saying that NSA weakened internet security for everyone. But if there is no flaw, or if it’s a flaw only NSA can exploit, then at worst internet security has been weakened for adversaries and intelligence targets of the United States.
Call me old-fashioned, but that sounds like a good thing to me. Of course, academic cryptographers may still argue that it's not, but only by flirting with a moral relativism that most Americans don’t share.
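For readers who want to see the shape of the alleged trick, here's a deliberately crude sketch. It is emphatically not Dual_EC_DRBG -- it swaps ordinary modular arithmetic for elliptic curves, skips the output truncation the real generator performs, and every constant in it is invented -- but it shows how a designer who picks two "random-looking" public constants with a hidden relationship can predict all of a generator's future output from a single observed value:

```python
# A toy "trapdoor" random number generator, in the spirit of the alleged
# Dual EC backdoor. NOT the real Dual_EC_DRBG: modular exponentiation
# stands in for elliptic-curve point multiplication, outputs are not
# truncated, and every constant below is made up for illustration.

p = 2**127 - 1              # a Mersenne prime; the public group modulus
Q = 3                       # first public constant
d = 0x5EC12E7               # the designer's secret trapdoor exponent
P = pow(Q, d, p)            # second public constant: P = Q^d (mod p)

def step(state):
    """One generator step: emit an output and advance the state."""
    output = pow(Q, state, p)        # handed to the application
    new_state = pow(P, state, p)     # kept inside the generator
    return output, new_state

# An honest user draws three "random" values.
state, outputs = 123456789, []
for _ in range(3):
    out, state = step(state)
    outputs.append(out)

# An eavesdropper who knows d sees only outputs[0], yet recovers the
# next internal state:  outputs[0]^d = (Q^s)^d = (Q^d)^s = P^s.
stolen = pow(outputs[0], d, p)
predicted = []
for _ in range(2):
    out, stolen = step(stolen)
    predicted.append(out)

assert predicted == outputs[1:]      # every later output is predictable
```

Note what the sketch also shows: without the secret exponent d, the outputs are as opaque as ever. A flaw of this design, if it exists, is exploitable only by whoever holds the trapdoor -- which is exactly the point made above.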
2. If there’s a backdoor in the standard, it’s had no discernible effect on internet security.
Talk about burying the lede. After measuring how fast their fake standard’s contrived flaw could be exploited, the researchers decided to go looking for examples of the flawed elliptic curve standard in the wild. What they found seems to cast doubt on the news value of the whole flap.
It turns out that you can scan more or less every public-facing server on the internet in less than an hour. ZMap, a free, open-source scanner built by academic researchers, will do the job. The researchers used ZMap, and they found a total of 21.8 million servers offering secure web (HTTPS) connections of the sort that the controversial elliptic curve standard is accused of subverting. And how many of those 21.8 million servers were clearly using the controversial standard?
Just 720. Let me say that again: 720 out of 21,800,000 secure servers used the standard that is accused, without conclusive proof, of weakening security on the internet.
In a fit of understatement, the researchers note that this is “much less than 1%.” Well, yes. In fact, it is less than one percent in the same way that the weight of your cat is less than that of a bull African elephant – hundreds of times less.
Put another way, only about 0.003% of the secure servers on the internet were identified as running code that is subject to the famous flaw, if it is a flaw. And it’s likely that the vast majority of those servers are of no interest to the United States government, so the backdoor would never be used against them. If you assume that NSA has a real interest in maybe 1% of internet traffic, that’s about seven servers on the internet whose security might be put at risk by the standard -- and then only if they harbor information of intelligence interest to the United States government.
Big whoop. That's not even table stakes in the world of computer security.
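The arithmetic is easy to check. The 21.8 million and the 720 come from the study; the 1% is my own admittedly arbitrary assumption:

```python
# Back-of-the-envelope check of the figures above.
https_servers = 21_800_000      # HTTPS servers the ZMap scan found
flagged = 720                   # servers seen using the Dual EC standard

share = flagged / https_servers
print(f"{100 * share:.4f}% of secure servers")            # 0.0033%

# If NSA cares about roughly 1% of internet traffic (my assumption):
print(f"{0.01 * flagged:.1f} servers possibly at risk")   # 7.2
```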
When other researchers went looking for devices on the internet that were open to attack because of flawed plug and play protocols, they found 40 or 50 million online devices with the security flaw, a flaw that some manufacturers have simply refused to fix. And there are between 300 and 500 million computers running Windows XP that will get their last security updates from Microsoft this weekend; after that, it's open season on those machines.
So when it comes to weakening internet security, there are a lot of people and companies that are way, way ahead of NSA. Though you wouldn't know it from the credulous press coverage given to academic cryptographers' attack on the elliptic curve number generator.
Academic cryptographers have seen NSA as their adversary for fifty years, and press coverage so far has simply treated their worst assumptions about the agency as received truth. Despite that, the academic cryptographers' campaign against NSA's role in standards has not attracted widespread public support or serious legislative proposals. Nor did the Obama expert group's recommendation gain much traction inside the administration.
If I’m right about the two lessons to be learned from this academic paper, that is just about the right response.
Notes: When I did my calculations, I didn’t count SChannel servers, which account for 12% of secure servers. That’s because the researchers admit that, while the controversial protocol is an option in SChannel, it is not the default. Similarly, ZMap could only identify servers running the Java version of the controversial protocol, not the C++ version. But even assuming that there are twice as many, or ten times as many, C++ implementations as Java implementations, the possible flaw in the protocol is dwarfed in its impact by many known security flaws that no one seems to be especially exercised about – suggesting that the flap over NSA’s role in the standard grows out of an agenda other than security.
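Run the numbers under even the most generous of those assumptions and the share barely moves:

```python
# Sensitivity check: scale the 720 Java-based hits to allow for C++
# deployments that ZMap could not see.
java_hits, total = 720, 21_800_000
for cxx_multiple in (2, 10):             # 2x or 10x as many C++ servers
    affected = java_hits * (1 + cxx_multiple)
    print(f"x{cxx_multiple}: {affected:,} servers, "
          f"{100 * affected / total:.3f}% of the scan")
# x2:  2,160 servers, 0.010% of the scan
# x10: 7,920 servers, 0.036% of the scan
```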
UPDATE: Dropped an erroneous zero from my percentage calculation. There's no greater honor than having Dorothy Denning correct your math.
According to the New York Times, the President has decided to kill the existing NSA phone metadata program and come up with a substitute that leaves the metadata with the phone companies. The decision will limit the government's ability to find older connections, since few companies hold records for three or more years; it will also be hard to construct a social graph that combines customers of different carriers.
This may have been inevitable when large swaths of the Republican party decided to treat NSA as though it were an arm of Organizing for America. But even so, the President's decision is disappointing for other reasons. The key passage for the future is this one from the NYT story:
In recent days, attention in Congress has shifted to legislation developed by leaders of the House Intelligence Committee. That bill, according to people familiar with a draft proposal, would have the court issue an overarching order authorizing the program, but allow the N.S.A. to issue subpoenas for specific phone records without prior judicial approval.
The Obama administration proposal, by contrast, would retain a judicial role in determining whether the standard of suspicion was met for a particular phone number before the N.S.A. could obtain associated records.
The administration’s proposal would also include a provision clarifying whether Section 215 of the Patriot Act, due to expire next year unless Congress reauthorizes it, may in the future be legitimately interpreted as allowing bulk data collection of telephone data.
The House intelligence committee has been working to produce a bipartisan replacement for the metadata program. The President had a chance, rare for him, to embrace bipartisanship and work with the House committee. This certainly looks doable, since it appears from press coverage that the differences between the White House and the House approach are modest.
Instead, the White House just couldn't resist sniping at the House and positioning itself as a hair more privacy-protective than the bipartisan House approach. This is a sadly familiar story; the White House did the same thing on CISPA, the cybersecurity information sharing bill. There the White House tacked left at the last minute, threatening to veto a bipartisan House bill because it lacked privacy protections that the President's own bill hadn't included.
So which approach is better? Looking at the press coverage, the White House is highlighting two differences in approach. One seems completely symbolic -- deciding how section 215 should be interpreted between the time the new bill passes and the time section 215 expires. But there may be no such interim, since legislation takes a long time to pass, and in any event the new bill is likely to repeal the current program.
The other difference, requiring the FISA court to evaluate each request for phone data, is a bigger deal. It's also problematic. First, it is inconsistent with criminal practice, where subpoenas are routinely served by investigators without court involvement. Does the administration think that stopping cross-border terror attacks is less urgent than investigating bank robberies?
Second, I'm not aware of any circumstances where judges make "reasonable articulable suspicion" determinations in advance. In fact the whole point of the "articulable" part of that test is that the government needs to be able to explain itself later to a judge. What does judicial review of such a standard look like? Do the judges have to decide that the phone number also looks suspicious to them or just that it's reasonable for the government to be suspicious?
Third, the metadata program is needed mainly to speed up a cumbersome process of mapping contacts more or less by hand, but the administration's proposal adds new delays by injecting the court into the front end of the process. No one knows how or whether that will work, because we've never put the courts into that stage.
Finally, there is at least some reason to worry that the administration is going to inject the court into every request for data from the carriers. I hope not, because that would be completely unworkable. Remember, in the new system, all the data remains with the phone companies, so assembling one suspicious character's social graph means first assembling a list of all the people he calls, which is easy -- just serve his phone company with the request -- and then assembling a list of his contacts' contacts. That's the second hop. To collect second-hop records means obtaining records from every carrier whose customers showed up on the first hop. Right now, NSA can move from the first hop to the second with the click of a mouse. But under the proposed new system, every hop requires a batch of new subpoenas to a batch of carriers. That's going to slow the process quite a bit. Adding the courts to the process, though, will turn it into a morass. I hope that's not what the administration has in mind.
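To make the hop mechanics concrete, here's a minimal sketch of two-hop chaining with the records left at the carriers. Every phone number and carrier name in it is invented, and a real request is legal process rather than a dictionary lookup, but it shows why each hop fans out into a fresh batch of per-carrier requests:

```python
# Two-hop contact chaining with records left at the carriers.
# All data here is invented; a real query is legal process, not a lookup.

# Hypothetical per-carrier call-detail records: number -> set of contacts.
CARRIERS = {
    "carrier_a": {"202-555-0101": {"301-555-0155", "410-555-0132"}},
    "carrier_b": {"301-555-0155": {"703-555-0110"}},
    "carrier_c": {"410-555-0132": {"202-555-0101", "571-555-0188"}},
}

def carrier_of(number):
    """Find which carrier holds records for a number -- itself a step
    the government must take before it can even address a request."""
    for name, records in CARRIERS.items():
        if number in records:
            return name
    return None

def subpoena(carrier, number):
    """One request to one carrier for one number's contacts."""
    return CARRIERS[carrier].get(number, set())

def chain(seed, hops=2):
    frontier, seen, requests = {seed}, {seed}, 0
    for _ in range(hops):
        next_frontier = set()
        for number in frontier:
            carrier = carrier_of(number)
            if carrier is None:
                continue                  # no carrier holds this number
            requests += 1                 # a separate request every time
            next_frontier |= subpoena(carrier, number) - seen
        seen |= next_frontier
        frontier = next_frontier
    return seen, requests

graph, n_requests = chain("202-555-0101")
print(len(graph), "numbers found;", n_requests, "separate requests")
# With the data centralized at NSA, the same two hops were one query.
```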
At best, this is an opportunity missed. The President seems genuinely convinced that his efforts to build bridges to Republicans have failed because of right-wing intransigence. Sorry, Mr. President, it's stupid point-scoring by your staff, like this leak, that makes you look like someone who either can't do Congress or doesn't care to.
For some reason, debates about Snowden are thick on the ground these days, and I've joined a couple of them. The most fun was the Oxford Union, which has been preparing future Parliamentarians (and Prime Ministers) all around the British Commonwealth since 1823. The Oxford Union debate was "This House would call Edward Snowden a Hero." My argument to the contrary is here.
Highlights of the debate included the arguments of Jeffrey Toobin, with whom I agree on nothing but Snowden, and P.J. Crowley, lately of the Clinton State Department -- both of them well worth watching. I also thought Chris Huhne and Chris Hedges did particularly well in support of the motion. And Charlie Vaughan, the Aussie student who stepped in to support our side, already shows signs of being a formidable politician. They can all be found here.
The motion carried, but narrowly (something like 212-175), which I thought a moral victory with a university audience outside the United States. (And an audience that thinks very highly of itself: even at Harvard I would have expected a laugh when I declared that being a toady was the key to debating success and then immediately told the audience that it was the most intelligent I had ever appeared before. At Oxford, no one saw anything remotely humorous in the suggestion.)
UCLA also held a debate, on "Snowden -- Patriot or Traitor," a choice I wasn't fond of, since I think there's an element of intent in being a traitor that is hard to judge from this distance. Luckily the school left room for a third choice, "Neither," so I encouraged the audience to vote for anything but patriot. I was paired with Judge James Carr of the N.D. Ohio, formerly of the FISA court. Our opponents included Jesselyn Radack and Trevor Timm. Bruce Fein argued for "neither," though his attack on the government was unrelenting.
UCLA took two votes, one before and one after the debate. Gratifyingly, the room flipped after hearing the argument: "Patriot" led 43-33 at the outset but trailed 34-51 when the debate was done. Here's the (rather long) UCLA debate from beginning to end. (I show up at 29:00 and again at 1:26:20.)
I've also started to take straw polls of audiences on the question "Snowden, Good or Bad?" Snowden doesn't do well in that binary choice. He lost about 10:1 at a Suits and Spooks conference for civil liberties and security researchers three weeks ago, and he lost about 4:1 at a conference of minority corporate counsel where I spoke a week ago.
All this suggests that Snowden is wearing out his welcome with the American public as he compromises intelligence program after intelligence program without producing anything more shocking than the fact that NSA is an aggressive, effective collector of intelligence in a dangerous world.