Episode 40 of the Steptoe Cyberlaw Podcast is done. Our guest this week is Bob Litt, the General Counsel of the Office of the Director of National Intelligence. Bob has had a distinguished career in government, from his clerkship with Justice Stewart to his time as a prosecutor in the Southern District of New York and at Main Justice, and more than five years in the ODNI job.

This week in NSA: The latest fad in news coverage of the agency is a hunt for possible conflicts of interest in its leadership. And it’s having an effect. Two high-ranking NSA seniors, the CTO and the head of signals intelligence, have recently left positions that drew scrutiny for getting too close to private industry.

I ask Bob whether we should be pleased or worried about the trend toward individual converts to Islam carrying out random attacks with whatever weapon comes to hand. Prudently, he refuses to be drawn into my comparison of Islamists to the Manson Family. We debate whether the USA Freedom Act has a chance of passage in the lame duck Congress – and whether it should, focusing among other things on how the act’s FISA civil liberties advocates would function and what ethical rules would govern their day jobs. And we explore another ODNI project – implementing the President’s directive on protecting the privacy of foreign nationals while gathering intelligence. Are the nation’s spies really required to wait until a foreign target’s speech goes beyond what the first amendment protects before they collect and analyze the remarks? Will the requirement for advance justification for collection projects institutionalize risk aversion at NSA? And can government officials look forward to intelligence reports that read like this: “[SYRIAN NATIONAL 1] asked [IRAQI NATIONAL 1] to kill [US PERSON 1]”?
Our news roundup begins with the sudden press interest in possible conflicts of interest in NSA’s leadership. The Supreme Court takes another privacy case – one with no obvious federal connection. Lots of city ordinances require hotels to keep guest registries – and to let the police inspect those registries on demand. But the Ninth Circuit recently held en banc that these laws touch the privacy interests of the hotel owner, not just the guests, and that the laws are unconstitutional if they offer no opportunity for prior judicial review of the police demand. Just what we need: another opportunity for the Roberts Court to pad a narrow ruling with a lot of ill-considered dicta about Smith v. Maryland.
Harking back to last week’s interview with Tom Finan about insurance coverage for cyber incidents, we discover that where there’s insurance coverage there are also insurance coverage disputes. The head of Steptoe’s insurance coverage practice explains the P.F. Chang dispute with Travelers Insurance and hints that it’s in the first wave of what could be thirty years of litigation. Not that there’s anything wrong with that.
FBI Director Comey isn’t alone in complaining about Silicon Valley’s reluctance to help law enforcement. Leslie Caldwell, the new head of the Justice Department’s criminal division, has joined the chorus.
According to the Stored Communications Act, companies like Google may not provide the contents of emails in response to subpoenas. So what do civil litigants do when they need access to Gmail accounts in, say, divorce cases? The usual solution is for the court with jurisdiction over the civil suit to order the litigants to “consent” to the disclosure of their email messages. But is court-ordered consent really consent? According to a California appeals court, it is. Michael explains.
Whoa! The FCC really is taking cybersecurity seriously. It’s proposing $10 million in fines for two carriers who stored hundreds of thousands of “Obamaphone” beneficiaries’ personal data on a server accessible by anyone on the internet.
Confusion over when you need a warrant to get third party information continues to roil the courts. The Florida Supreme Court raises the bar for cell-site location data. And the NJ AG plots a counter-attack on a billing record warrant requirement in the Garden State. Michael suggests a new feature to keep all the litigation straight: This Week in Smith v. Maryland.
Lawyers with banks for clients have a new reason to upgrade their cybersecurity. As the banks struggle with increasingly sophisticated intrusions, they’re sharing the pain, demanding that their contractors and suppliers adopt stronger cybersecurity. Law firms are expressly included, since they’ve been targeted frequently for what inevitably will be called “bank shot” intrusions.
As I mentioned, I have been doing a weekly podcast on security, privacy, government and law with a couple of my partners, Michael Vatis and Jason Weinstein. This week, in episode 39, our guest is Tom Finan, Senior Cybersecurity Strategist and Counsel at DHS’s National Protection and Programs Directorate (NPPD), where he is currently working on policy issues related to cybersecurity insurance and cybersecurity legislation. Marc Frey asks him why DHS, specifically NPPD, is interested in cybersecurity insurance, what trends they are seeing in this space for carriers and other stakeholders, and what is next for their role in this space. He is forthcoming in his responses and even asks listeners to email him with their feedback.
This week in NSA: The House and Senate Judiciary chairs call for action on USA Freedom Act. And nobody cares. We conclude that the likelihood of action before the election is zero, and the likelihood of action in a lame duck is close to zero. But next week we’ll be interviewing Bob Litt, one of the prime negotiators for the intelligence community on this issue, and he may have a different view.
The Great Cable Unbundling seems finally upon us, as several content providers announce that they’re willing to sell content direct to consumers over the Internet. Does that mean more support for net neutrality? Not necessarily. Stephanie Roy explains.
Are parents responsible for what their adolescent kids do and say on Facebook? That makes sense, if you’ve never had adolescent kids. Maybe that explains why Michael Vatis sees merit in the Georgia appellate court decision finding potential liability. It reversed the trial court, which had granted summary judgment in favor of the parents of a kid who set up a fake and defamatory Facebook page in the name of a classmate he hated. The facts are a little odd. The kid who set up the page never took it down, even after he’d been caught and punished by school and parents. The appeals court thought that the parents had a “supervisory” obligation to make their child delete the fake account, and that they could be held liable for negligently failing to do so. It’s quite possible, though, that everyone in this case is a Privacy Victim; the issue could have been hashed out with a phone call from the parents of the victim to the parents of the perpetrator, but according to the press, “the child’s parents didn’t immediately confront the boy’s parents because their school refused to identify the culprit.” Because privacy.
FBI Director Comey comes out swinging for CALEA reform, saying in a speech at Brookings that the law needs to be updated to require cooperation from makers of new communications systems when the FBI has a court order granting access to those systems.
When it comes to regulating on other topics, though, the Justice Department is a little less restrained; it has opened the door to a round of new disability claims against websites, offering a roadmap to what it thinks the law requires.
The right to be forgotten is attracting more flak in Europe, as the BBC announces a competing “right to remember” website devoted to publicizing stories that Google has delinked. It’s Auntie BBC v. Nanny Europe. Cue popcorn. Unhappily, a “progressive” group most famous for relentlessly sliming Google on privacy issues has urged the search engine to bring the right to be forgotten to the United States. Sigh.
In breach news, TD Bank pays $850,000 to the state AGs over a “breach” that may never have happened. TD lost a backup tape in transit, and the data wasn’t encrypted. Was anyone’s data actually compromised by the loss of the tape? The AGs don’t say. They just want their money. And they get it.
The Russians are getting sloppy, or maybe they’re taking a leaf from China’s book – figuring it doesn’t matter if they get caught. And caught they have been, by iSight Partners, which reports that Russian hackers used a Microsoft zero-day to target Western governments and Ukraine. Meanwhile, the FBI is warning about another and even more sophisticated set of Chinese government hackers. And hackers are now adding a new form of targeted attack to their arsenal: a tactic that combines spearphishing with watering hole attacks. They’re targeting ads at users that take them to a compromised website that serves malware.
And, in good news for privacy skeptics, the Video Privacy Protection Act gets a narrow reading.
We remind everyone that the Steptoe Cyberlaw Podcast welcomes feedback, either by email (CyberlawPodcast@steptoe.com) or voicemail (+1 202 862 5785), and that the views expressed by the participants are their own, not the firm's.
I've spent much of this year doing a weekly podcast on security, privacy, government and law with a couple of my partners, Michael Vatis and Jason Weinstein. (The RSS feed is here.) I thought readers of this blog might like a taste of the podcast, which has attracted a substantial audience in Washington. This week, in episode 38, our guest is Shaun Waterman, editor of POLITICO Pro Cybersecurity. Shaun is an award-winning journalist who has worked for the BBC and United Press International, and an expert on counterterrorism and cybersecurity.
We begin as usual with the week’s NSA news. NSA has released its second privacy transparency report. We’ve invited Becky Richards, NSA’s privacy and civil liberties watchdog, on the program to talk about it, so I’m using this post to lobby her to become a guest soon: Come on in, Becky, it’s a new day at the NSA!
Laura Poitras’s new film about Snowden gets a quick review. We question the hyped claim that there’s a “second leaker” at NSA; most of the leaked information described in the film was already pretty widely known.
Two more post-Snowden pieces of litigation are also in the news. We dig into the Justice Department’s botched handling of the notice that must be given to parties on the receiving end of FISA taps and section 702 of FISA. As often turns out to be the case, the Justice Department develops a limp, and all the other agencies have to put stones in their shoes: It looks as though OFAC is going to be dragged into this comedy of errors.
The second piece of litigation began as a humdrum piece of FOIA litigation (though with a bit of Glomar for spice). It has now produced a much more interesting result: Judge Pauley, ordinarily a good friend to the government, declares that he has lost confidence in the Justice Department’s representations about the risks of releasing FISA opinions; he insists on reviewing the FIS court’s opinions himself in camera to decide what can be released.
In other national security litigation, we all know that a canary can emit a twitter, but can Twitter emit a canary? The social media giant is going to court to get approval for its “warrant canary,” claiming a first amendment right to list the orders it has not (yet) received under national security surveillance laws. Meanwhile, on the opposite coast, the government’s authority to issue gag orders in national security letters is argued before the Ninth Circuit, which seems to find the issue at least a little troubling.
Maybe it’s a coincidence, but just as Europol is raising the possibility that the internet might be used to kill people, the FDA is trying to do something about it, issuing cybersecurity guidelines for manufacturers. We damn them with faint praise, note that our refrigerators have been trying to kill us slowly for years, and wonder when the National Highway Traffic Safety Administration will issue security guidelines for self-driving cars.
The pendulum may be swinging toward privacy in the US but it swings hard the other way in the Southern Hemisphere. First New Zealand gives Snowden a swift kick and now the Australian government is enacting surveillance reforms that increase government authority to conduct national security intercepts.
There’s a bit of good news in our update on the right to be forgotten. The European Commission has poured cold water on the European Court of Justice, hinting strongly that the court’s enthusiasm for sacrificing free expression is a bad idea. Sad to say, though, the notion seems as communicable as Ebola; even Japan is getting in on the act, as a Tokyo court orders Google to take down search links at the request of an individual.
The prize for Dumbest Judicial Opinion of the Month goes (where else?) to the Ninth Circuit, which expressed shock and dismay over the idea that a Navy investigator conducted “surveillance of all the civilian computers in an entire state” in the course of looking for military personnel trading child porn. Turns out that the investigator in question simply looked at images being shared publicly online using a common file-sharing program, Gnutella. And when he had the IP address of someone sharing child porn images he checked to see if the suspect worked for the military. When that turned out not to be the case, he turned the information over to civilian law enforcement, giving the Ninth Circuit a severe case of the vapors and ultimately leading to exclusion of the evidence. Because posse comitatus. You won’t want to miss my translation from the Latin.
We unpack the controversy over Ross Ulbricht and how the FBI managed to captcha him. And we congratulate the FCC for a regulatory action near and dear to anyone who’s ever paid too much for bad Wi-Fi in a good hotel.
Finally, we remind everyone that the Steptoe Cyberlaw Podcast welcomes feedback, either by email (CyberlawPodcast[at]steptoe.com) or voicemail (+1 202 862 5785). And to prove it, I read a message from Dick Mills, a libertarian blogger who started out tagging me as the Great Satan of statism but ended by admitting that the podcast occasionally changed his mind. We can’t ask for more than that.
Apple is a lot like a teenager getting Edward Snowden's name tattooed up her arm. The excitement will die, but the regrets will last. For all of us.
Most Americans believe in privacy from government searches, but not for criminals. The Constitution protects a citizen's “houses, papers and effects” only until a judge finds probable cause that the citizen has committed a crime. This year, the Supreme Court ruled that the police need a warrant to search cellphones seized at the time of arrest. But with Apple's new encryption, probable cause and a warrant will be of little help to the police who seize a suspect’s iPhone and want to search it.
That decision should not be left to Apple alone. And it won't be.
Companies do not want to give their employees the power to roam corporate networks in secrecy. And even if they did, their regulators wouldn't let them. If Apple wants to sell iPhones for business use, it will have to give companies a way to read their employees’ business communications. Corporate IT departments won’t welcome a technology that could help workers hide misdeeds from their employer.
And as a global company, Apple is subject to regulation and market pressure everywhere. If China doesn't like Apple's new policy, it can ban the iPhone or simply encourage China's mobile carriers to slow Apple's already weak sales there. Even democracies like India, and U.S. allies like the United Arab Emirates, have shown the determination and the clout to force changes in phone makers' security choices.
So if Apple wants to sell its iPhone everywhere, it will have to compromise. But then what? Will it really give China's authoritarian regime more access to iPhone data than it gives to American police trying to stop crimes in this country?
And if so, how will its management sleep at night?
If you think Edward Snowden and Glenn Greenwald have stopped attacking NSA, you haven't been following them closely enough. While American media have largely lost interest in Snowden and Greenwald, the pair continue to campaign outside the United States against the intelligence agency.
Their most ambitious effort was in New Zealand, a member of the “Five Eyes” intelligence alliance with the U.S. and U.K. The center-right New Zealand government has been embroiled in accusations of illegal surveillance of Kim Dotcom, who grew wealthy running a file-sharing site and is now fighting extradition to the United States for copyright violations. As part of that fight, Dotcom dove into New Zealand's national elections, hoping to unseat the two-term government and, in his words, "to close one of the Five Eyes."
Snowden and Greenwald dove in with him, joining eagerly in campaign events sponsored by Dotcom. Greenwald used his new Omidyar-funded news site to release a lengthy article in the last week of the campaign; it accused New Zealand of working with NSA to conduct mass surveillance. When the prime minister denied the accusation, Snowden called him a liar.
The combination of carefully timed Snowden leaks and Dotcom's millions looked potent. Dotcom even funded a new Internet Party, aligned with the small Mana party, which already had a seat in New Zealand's Parliament.
Well, New Zealanders went to the polls today, and the results are in.
The biggest losers? Snowden, Greenwald, and Dotcom.
The prime minister whom Snowden accused of lying won an "overwhelming" victory that may give him the first outright majority for any New Zealand party in nearly twenty years.
Meanwhile, Dotcom's Internet Party bombed, even costing its tiny ally the only seat it held in Parliament.
I've done a bit more online experimentation with Google's “famous or not” algorithm, first described here. Unfortunately, one of the risks of experimentation is that it may raise more questions than it answers. That's what happened to me. So I'll simply report the results.
In short, the use of quotations in name searches seems to have an effect on when Google.co.uk displays the warning tag that it uses for non-famous people. Here are the results so far for several different searches on my name (quotation marks are part of the search). Remember that Google inserts the tag, warning that some entries may have been deleted due to EU data protection law, when it concludes that someone is not famous:
stewart baker = no tag (i.e., Google-famous)
stewart a. baker = no tag (i.e., Google-famous)
“stewart a. baker” = no tag (i.e., Google-famous)
“stewart baker” = tag (i.e., not Google-famous)
stewart baker steptoe = no tag (i.e., Google-famous)
stewart baker nsa = no tag (i.e., Google-famous)
“stewart baker” nsa = tag (i.e., not Google-famous)
Just to see how Google treats a genuinely famous person, I tried Robyn Rihanna Fenty (aka Rihanna):
robyn fenty = no tag (i.e., Google-famous)
robyn rihanna fenty = no tag (i.e., Google-famous)
“robyn fenty” = tag (i.e., not Google-famous)
“robyn rihanna fenty” = tag (i.e., not Google-famous)
rihanna = no tag (i.e., Google-famous)
“rihanna” = no tag (i.e., Google-famous)
So there's clearly something about the quotation marks that changes Google's fame algorithm, but not always, as witness the searches for "rihanna" and "stewart a. baker." I also checked to see if the tag shows up when Google puts a Wikipedia entry at the top of the results or when it autosuggests a name search in Google News. No joy.
So I haven't quite broken the code. But if you're checking your Google-fame status, be sure to search google.co.uk with and without quotation marks around your name and let us know what you find.
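For anyone who wants to automate the check, here is a minimal sketch. The notice wording, the google.co.uk URL format, and the rule that the notice's absence signals Google-fame are taken from the posts above, but Google's actual markup may differ and automated queries may well be blocked, so treat this as illustrative only:

```python
import urllib.parse
import urllib.request

# The warning Google shows on European results pages for "non-famous" names
# (wording taken from the post above; Google may change it at any time).
REMOVAL_NOTICE = "Some results may have been removed under data protection law in Europe"

def is_google_famous(html: str) -> bool:
    """A name counts as 'Google-famous' when the removal notice is absent."""
    return REMOVAL_NOTICE not in html

def fetch_results(query: str) -> str:
    """Fetch a google.co.uk results page (may be blocked for scripted access)."""
    url = "https://www.google.co.uk/search?q=" + urllib.parse.quote(query)
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Offline check against canned pages, so no live queries are needed:
tagged = "<html>...results..." + REMOVAL_NOTICE + "...</html>"
untagged = "<html>...results only...</html>"
print(is_google_famous(tagged))    # False -> not Google-famous
print(is_google_famous(untagged))  # True  -> Google-famous
```

Remember, per the results above, to run each name both with and without quotation marks, since the quoted and unquoted forms can land on opposite sides of the line.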
Three months ago, I tried hacking Google's implementation of Europe's “right to be forgotten.” For those of you who haven't followed recent developments in censorship, the right to be forgotten is a European requirement that “irrelevant or outdated” information be excluded from searches about individuals. The doctrine extends even to true information that remains on the internet. And it is enforced by the search engines themselves, operating under a threat of heavy liability. That makes the rules particularly hard to determine, since they're buried in private companies' decisionmaking processes.
So to find out how this censorship regime works in practice, I sent several takedown requests to Google's British search engine, google.co.uk. (Europe has not yet demanded compliance from US search engines, like Google.com, but there are persistent signs that it wants to.)
I've now received three answers from Google, all denying my requests. Here's what I learned.
The first question was whether Google would rule on my requests at all. I didn't hide that I was an American. Google's “right to be forgotten” request form requires that you provide ID, and I used my US driver's license. Would Google honor a takedown request made by a person who wasn't a UK or EU national?
The answer appears to be yes. Google's response does not mention my nationality as a reason for denying my requests. This is consistent with Europe's preening view that its legal "mission civilisatrice" is to confer privacy rights on all mankind. And it may be the single most important point turned up by this first set of hacks, because it means that lawyers all around the world can start cranking out takedown requests for Belarusian and Saudi clients who don't like the way they look online.
But will the requests succeed? The reasons Google gave for denying my requests tell us something about that as well.
1. I had asked that Google drop a link to a book claiming that in 2007 I had the “dubious honor” of being named the world's “Worst Public Official” by Privacy International, beating out Vladimir Putin on the strength of my involvement with NSA and the USA Patriot Act. It's true that Privacy International announced I had won the award, but I argued that the book was inaccurate because in fact, I “had very little to do with either domestic surveillance activities at NSA or with the USA Patriot Act, and the trophy is a 'dubious' honor only in the sense that Privacy International never actually awarded it.” (All true: I've been trying to collect the trophy for years but Privacy International has refused to deliver it.)
Google refused to drop the link, saying, “In this case, it appears that the URL(s) in question relate(s) to matters of substantial interest to the public regarding your professional life. For example, these URLs may be of interest to potential or current consumers, users, or participants of your services. Information about recent professions or businesses you were involved with may also be of interest to potential or current consumers, users, or participants of your services. Accordingly, the reference to this document in our search results for your name is justified by the interest of the general public in having access to it.”
So it looks as though Google has adopted a rule that “information about recent professions or businesses you were involved with” is always relevant to consumers. It would be impressive if the poor paralegal stuck with answering my email did enough online research to realize that I sell legal services, but I fear he or she may have thought that being the world's worst public official was just one of the gigs I had tried my hand at in the last decade.
2. My second takedown request was a real long shot. In an effort to see whether Google would let me get away with blatant censorship of my critics, I asked for deletion of a page from Techdirt that seems to be devoted to trashing me and my views; I claimed that it was “inappropriate” under European law to include the page in a list of links about me because it contains “many distorted claims about my political views, a particularly sensitive form of personal data. The stories are written by men who disagree with me, and they are assembled for the purpose of making money for a website, a purpose that cannot outweigh my interest in controlling the presentation of sensitive data about myself.”
To American ears, such a claim is preposterous, but under European law, it's not. Google, thank goodness, still has an American perspective: “Our conclusion is that the inclusion of the news article(s) in Google’s search results is/are – with regard to all the circumstances of the case we are aware of – still relevant and in the public interest.”
If I had to bet, I'd say that this rather vague statement is the one Google uses when other, more pointed reasons to deny relief don't work. But the reference to this page as a “news article” suggests that Google may be using a tougher standard in evaluating takedown requests for news media, a term that applies, at least loosely, to Techdirt.
3. The third denial was a little less interesting. I tried to get Google to take down an image showing me with a beard, arguing that it was out of date: “I don't have a beard now. If you look at the picture, you'll see why.”
But Google just gave me the same “professional life” rejection it gave to my “Worst Public Official” request. I suspect that's because the article that accompanies the picture is without question about my professional life; it's published by the Blog of the Legal Times. I can understand why Google would want to evaluate the complete link, not just the image, for this purpose, but that's going to make deletion of images harder, especially when a bad photo accompanies an unexceptionable article.
What next? With these results in hand, I'm preparing a second round of hacks to further explore the boundaries of the right to be forgotten, and I'll resubmit my "does this search engine make me look fat?" request that Google take down a fourteen-year-old photo (unattached to a story) on the grounds that I weigh less now.
But to tell the truth, I'm having trouble finding stuff in my search history that is sufficiently inaccurate or outdated, especially now that we know Google is treating professional activities and news as per se relevant (at least if it's “recent,” whatever that means). So I hope that others will make their own searches and their own takedown requests and report what they find. In fact, my second effort has shed some light on how Google decides someone is famous, but I'll write that up separately, since this post is already long enough.
I am not a big fan of the EU's "right to be forgotten," but it has one silver lining. I was noodling around with Google's ever-more-baroque implementation of the principle this weekend, and I discovered that it offers a quick and cheap way to discover just how famous Google thinks you are.
To understand how Google got in the "famous or not" business requires a dive into the search engine's stutter-step implementation of the EU requirement. In China, of course, when Google is required to suppress a link, it includes a warning on the results page, saying in essence that the results have been censored. Google originally planned to do the same in response to European censorship. But the European data protection censors didn't like that kind of transparency. They thought that the notice, even if it didn't actually say what had been suppressed, would stigmatize Europeans who invoked the right to be forgotten. (That, and it might remind searchers that their access to data was being restricted by European law.)
Google caved, mostly. But it left in place a vestige of its original policy. Now, it includes the following warning on its European results pages whenever any name is searched for: "Some results may have been removed under data protection law in Europe. Learn more."
But that policy isn't implemented across the board. As Google's global privacy counsel explained a month ago, “Most name queries are for famous people and such searches are very rarely affected by a removal, due to the role played by these persons in public life, we have made a pragmatic choice not to show this notice by default for known celebrities or public figures."
So there you have it. Somewhere, Google has an algorithm for deciding who is a celebrity or public figure and who is not. To find out whether you made the grade, all you have to do is go to Google.co.uk, and type in your name. Then look at the bottom of the page for the tag that says, "Some results may have been removed" etc. If it's not there, apparently you're a public figure in Google's eyes. If it is, well, you'd better get working on your SEO techniques.
I found this when I searched for myself and didn't see the "some results" tag-of-ignominy. I thought that was weird, so I ran a few other names. And it looks as though Google is making a cut based on number of name searches, but as Google's counsel more or less admitted in his letter, the system is still pretty rough. Maybe it will get better. But why wait until it comes out of beta? Knowing Google, that could be years.
Let's ask now who makes it past Google's equivalent of the red velvet rope. Here's my quick census:
Google-Famous: Stewart Baker, Ben Wittes, Eugene Volokh, Jack Goldsmith, Orin Kerr, Kent Walker, Nicole Wong, Declan McCullagh, Peter Swire, Annie Anton, Dan Geer (cybersecurity guru), Jim Lewis (ditto), Raj De (NSA's GC), Dianne Feinstein (Senate intelligence committee chair), David Hoffman (upcoming guest on the Steptoe Cyberlaw Podcast), Chris Soghoian, James X. Dempsey (CDT senior counsel, member of Privacy and Civil Liberties Oversight Board).
Not Google-Famous: Nuala O'Connor (head of CDT), Michael Daniel (White House cybersecurity czar), Bob Litt (DNI's general counsel), John P. Carlin (Assistant AG for National Security), Michael J. Rogers (chair of House intelligence committee), David Medine (chair of Privacy and Civil Liberties Oversight Board), Michael Vatis (cohost of the Steptoe Cyberlaw Podcast), Jason Weinstein (ditto), Ellen Nakashima (astonishingly prolific Washington Post national security reporter).
It's pretty clear that Google is struggling with the old saw, "On the Internet, everyone is famous for fifteen people." But it's still hard to see exactly where the line is being drawn.
For further irony, consider Max Mosley, who is internet-famous mainly for the video of his multi-hour, multi-hooker, sadomasochistic orgy and for his successful campaign to force Google to suppress links to those pictures. His search results are being censored. But he's now so famous that Google gives us no warning -- not even that they might be bowdlerized. That can't make sense.
But why should I have all the fun? Why not google yourself first (don't pretend you won't) and then your friends and acquaintances? Then list any additional surprises in the comments.
The evidence is mounting that Edward Snowden and his journalist allies have helped al Qaeda improve their security against NSA surveillance. In May, Recorded Future, a predictive analytics web intelligence firm, published a persuasive timeline showing that Snowden's revelations about NSA's capabilities were followed quickly by a burst of new, robust encryption tools from al-Qaeda and its affiliates:
This is hardly a surprise for those who live in the real world. But it was an affront to Snowden's defenders, who've long insisted that journalists handled the NSA leaks so responsibly that no one can identify any damage that they have caused.
In damage control mode, Snowden's defenders first responded to the Recorded Future analysis by pooh-poohing the terrorists' push for new encryption tools. Bruce Schneier declared that the change might actually hurt al Qaeda: “I think this will help US intelligence efforts. Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool is slight.”
Schneier is usually smarter than this. In fact, the product al Qaeda had been recommending until the leaks, Mujahidin Secrets, probably did qualify as “home-brew encryption.” Indeed, Bruce Schneier dissed Mujahidin Secrets in 2008 on precisely that ground, saying “No one has explained why a terrorist would use this instead of PGP.”
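Schneier's 2008 point about home-brew crypto is easy to demonstrate. Here is a minimal Python sketch (my own toy illustration, not code from Mujahidin Secrets or any other tool discussed here): a repeating-key XOR scheme, a classic amateur design, leaks plaintext structure that a well-studied algorithm like Twofish or AES is built to hide.

```python
# Toy "home-brew" cipher: XOR the plaintext against a repeating key.
# This is a hypothetical example of a weak amateur design, shown only
# to illustrate why vetted, open-source algorithms matter.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """'Encrypt' (or decrypt -- XOR is its own inverse) with a repeating key."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret!!"            # 8-byte key, so the keystream repeats every 8 bytes
msg = b"ATTACK AATTACK A"    # two identical 8-byte plaintext blocks

ct = xor_cipher(msg, key)

# Identical plaintext blocks aligned with the key produce identical
# ciphertext blocks -- an analyst sees the repetition without the key.
assert ct[:8] == ct[8:16]

# And applying the cipher twice recovers the plaintext.
assert xor_cipher(ct, key) == msg
```

A real block cipher in a sound mode of operation would produce unrelated-looking ciphertext for those two identical blocks, which is exactly the property decades of public cryptanalysis are meant to guarantee.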
But as a second Recorded Future post showed, the products that replaced Mujahidin Secrets relied heavily on open-source and proven encryption software. Indeed, one of them uses Schneier's own, well-tested encryption algorithm, Twofish.
Faced with facts that contradicted his original defense of Snowden, Schneier was quick to offer a new reason why Snowden's leaks and al Qaeda's response to them still wouldn't make any difference:
Whatever the reason, Schneier says, al-Qaida's new encryption program won't necessarily keep communications secret, and the only way to ensure that nothing gets picked up is to not send anything electronically. Osama bin Laden understood that. That's why he ended up resorting to couriers.
Upgrading encryption software might mask communications for al-Qaida temporarily, but probably not for long, Schneier said. ... "It is relatively easy to find vulnerabilities in software," he added. "This is why cybercriminals do so well stealing our credit cards. And it is also going to be why intelligence agencies are going to be able to break whatever software these al-Qaida operatives are using."
So, if you were starting to think that Snowden and his band of journalist allies might actually be helping the terrorists, there's no need to worry, according to Schneier, because all encryption software is so bad that NSA will still be able to break the terrorists' communications and protect us. Oddly, though, that's not what he says when he isn't on the front lines with the Snowden Defense Corps. In a 2013 Guardian article entitled “NSA surveillance: A guide to staying secure,” for example, he offers very different advice, quoting Snowden:
"Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on."
Schneier acknowledges that hacking of communication endpoints can defeat even good encryption, but he's got an answer for that, too:
Try to use public-domain encryption that has to be compatible with other implementations. ...Since I started working with Snowden's documents, I have been using GPG, Silent Circle, Tails, OTR, TrueCrypt, BleachBit, and a few other things I'm not going to write about.…
The NSA has turned the fabric of the internet into a vast surveillance platform, but they are not magical. They're limited by the same economic realities as the rest of us, and our best defense is to make surveillance of us as expensive as possible.
Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That's how you can remain secure even in the face of the NSA.
It sounds as though al Qaeda took Bruce Schneier's advice to heart, thanks to leaks from Edward Snowden -- even if Schneier is still doing everything he can to avoid admitting it.
UPDATE: The description of Recorded Future was changed at the request of the company, which said, "While this may seem like splitting hairs, in the world of data analysis software "predictive analytics" has specific technical meaning which implies something different. We use the term web intelligence to reduce this confusion."
I've long been an advocate for fewer restraints on how the private sector responds to hacking attacks. If the government can't stop and can't punish such attacks, in my view the least it could do is not threaten the victims with felony prosecution for taking reasonable measures in self-defense. I debated the topic with co-blogger Orin Kerr here. I'm pleased to note that my side of the debate continues to attract support, at least from those not steeped in the "leave this to the professionals" orthodoxy of the US Justice Department.
The members of the 9/11 Commission, who surely define bipartisan respectability on questions of national security, have issued a tenth anniversary update to the Commission's influential report. The update repeats some of the Commission's earlier recommendations that have not been implemented. But it also points to new threats, most notably the risk of attacks on the nation's computer networks. No surprise there, but I was heartened to see the commissioners' tentative endorsement of private sector "direct action" as a response to attacks on private networks:
Congress should also consider granting private companies legal authority to take direct action in response to attacks on their networks.
This "should consider" formulation avoids a full embrace of particular measures, and in that respect it parallels another establishment endorsement of counterhacking. The Commission on the Theft of American Intellectual Property said in its 2013 report:
Finally, new laws might be considered for corporations and individuals to protect themselves in an environment where law enforcement is very limited. Statutes should be formulated that protect companies seeking to deter entry into their networks and prevent exploitation of their own network information while properly empowered law-enforcement authorities are mobilized in a timely way against attackers. Informed deliberations over whether corporations and individuals should be legally able to conduct threat-based deterrence operations against network intrusion, without doing undue harm to an attacker or to innocent third parties, ought to be undertaken.
If repeated tentative embraces are the way new policy ideas become respectable, "direct action" is well on its way. The 9/11 commission deserves credit, not just for moving the debate but for contributing a label that gives counterhacking a kind of anarcho-lefty frisson.
HIPAA is an arguably well-intentioned privacy law that seems to yield nothing but "unintended" consequences. I put "unintended" in quotes because the consequences are often remarkably convenient, at least for those with power. I'm not sure you can call something that convenient "unintended."
The problem has gotten so bad that even National Public Radio and ProPublica -- hotbeds of bien pensant liberalism -- have started to notice. This story, for example, could be mined for a host of Privy nominations for Dubious Achievements in Privacy Law:
In the name of patient privacy, a security guard at a hospital in Springfield, Mo., threatened a mother with jail for trying to take a photograph of her own son.
In the name of patient privacy, a Daytona Beach, Fla., nursing home said it couldn't cooperate with police investigating allegations of a possible rape against one of its residents.
In the name of patient privacy, the U.S. Department of Veterans Affairs allegedly threatened or retaliated against employees who were trying to blow the whistle on agency wrongdoing.
When the federal Health Insurance Portability and Accountability Act passed in 1996, its laudable provisions included preventing patients' medical information from being shared without their consent and other important privacy assurances.
But as a litany of recent examples show, HIPAA, as the law is commonly known, is open to misinterpretation — and sometimes provides cover for health institutions that are protecting their own interests, not patients'.
"Sometimes it's really hard to tell whether people are just genuinely confused or misinformed, or whether they're intentionally obfuscating," said Deven McGraw, partner in the healthcare practice of Manatt, Phelps & Phillips and former director of the Health Privacy Project at the Center for Democracy & Technology.
At this point, we've seen a boatload of stories in which HIPAA produces stupid or bad results. The real question is whether there are any stories in which HIPAA has produced unequivocally good results -- things that wouldn't have happened without the law. Otherwise, we're looking at a law passed to prevent nonexistent abuses that has become a source of abuse itself. In my view, that's a recipe for repeal -- and pretty much the story of most privacy law.
When you're in the business of pointing out how often privacy law ends up protecting power and privilege, you never run out of material.
Everyone remembers Lois Lerner, the IRS official who pleaded the Fifth Amendment and refused to testify about her role in the agency's scrutiny of Tea Party nonprofits. And everyone remembers the mysterious 2011 computer crash that made years of her emails unavailable.
Could the messages be recovered with advanced forensics? We'll never know, because the IRS so systematically nuked Lerner's drives that no one could ever recover anything from them.
Why? According to The Hill, "the agency said in court filings Friday that the hard drive was destroyed in 2011 to protect confidential taxpayer information."
I'm sure the taxpayers will find a way to show their gratitude.
It's time once again to point out that privacy laws, with their vague standards and selective enforcement, are more likely to serve privilege than to protect privacy. The latest to learn that lesson are patients mistreated by the Veterans Administration and the whistleblowers who sought to help them. According to the Washington Post:
Citing patient privacy, managers have threatened VA employees or retaliated against those who complain about agency misconduct, according to a key congressman and the union that represents most of the department’s employees.
“VA routinely uses HIPAA as an excuse to punish into submission employees who dare to speak out,” said Rep. Jeff Miller (R-Fla.), chairman of the House Committee on Veterans’ Affairs. He is leading a probe into the coverup of long wait times for VA patients.
David Borer, the American Federation of Government Employees’ top lawyer, listed a number of cases involving a VA claim of patient privacy used to stifle whistleblowers in a June letter to the department.
The Office of Special Counsel (OSC), which investigates whistleblower retaliation cases, is “very concerned about the misuse of HIPAA,” said Eric Bachman, an OSC deputy special counsel. “The potential chilling effect of even a small number of these HIPAA retaliation cases is a serious issue and one that should be addressed by the VA in short order.”...
Valerie Riviello is one VA employee who felt the lash of the department’s culture of retaliation.
A registered nurse at the Albany Stratton VA Medical Center in Upstate New York, Riviello said she was threatened with suspension and stripped of managerial duties after she complained last November about how a veteran was treated. Riviello said the vet was unnecessarily restrained, with an arm and leg strapped to bedposts.
“They scared the hell out of me,” Riviello said with worry clear in her voice. “They sent me a letter saying I could go to jail.”
That threat came in the form of an e-mail to Riviello’s lawyer, Cheri L. Cannon, a partner with the Tully Rinckey law firm. The VA e-mail said that information Riviello provided Cannon “unlawfully includes medical records of a VA patient” and noted that violating HIPAA “is a felony offense subject to imprisonment and a fine of up to $250,000.”
Misuse of privacy law is now so common that I've begun issuing annual awards for the worst offenders -- the Privies. The Veterans Administration has officially earned a nomination for a 2015 Privy under the category "We All Got To Serve Someone: Worst Use of Privacy Law to Serve Power and Privilege." The Department is in good company; here are the 2014 nominees.
China seems to have found a reliable legal tool for suppressing dissent. A prominent Chinese human rights lawyer, Pu Zhiqiang, has been arrested after a meeting in a private home to commemorate the 25th anniversary of the killings at Tiananmen Square. The charge? “Illegal access to the personal information of citizens,” a crime punishable by three years in prison.
Clearly, China is on its way to earning a second Privy nomination for “Worst Use of Privacy Law to Protect Power and Privilege.”
But where are EFF and EPIC and CDT and the ACLU? This is not the first time China has brought privacy charges against politically disfavored defendants. Why haven't these advocates of more privacy law vocally condemned China's use of privacy law to foster oppression?
The same question might be asked of the Article 29 Working Party in the European Union, along with a second one: How is China’s law different from the data protection laws that Europe has been urging the world to adopt?
Vodafone put out a highly informative report on the intercept practices of the countries where it does business. The greatest news interest was spurred by its statement that some countries tap directly into the provider's infrastructure and take what they want without notice to the provider:
In a “small number” of countries, Vodafone said in the report, the company “will not receive any form of demand for communications data access as the relevant agencies and authorities already have permanent access to customer communications via their own direct link.”
Vodafone refused to name the countries. But I can't help thinking that the report provides some pretty clear clues about two of them. I suspect we'll soon discover that they are France and Belgium.
The reason is buried in the footnotes to the report. The report gives reasons when it does not disclose the number of lawful intercept warrants the company received in a particular country. Sometimes reporting on wiretaps is prohibited by law.
But in eight cases, the report doesn't cite legal restrictions on disclosure. Instead, it says that it has no intercept numbers because there is “no technical implementation” of lawful intercept capabilities in those countries. In one country, Kenya, there's no implementation because Kenyan law prohibits operators from deploying wiretap capabilities. In the other seven, though, the reason the company gives is murkier: “We have not implemented the technical requirements necessary to enable lawful interception and therefore have not received any agency or authority demands for lawful interception assistance.”
You might think those are countries that have simply decided not to do lawful intercepts, perhaps because intercept equipment is expensive or technically demanding. For five of the seven, that's plausible. They are Mozambique, Ghana, Lesotho, Tanzania, and Fiji.
But the other two on the list are France and Belgium. Does anyone think that these two countries lack the resources, the technical skills, or the will to conduct lawful intercepts? Hardly. France is second to none in its enthusiasm for state intelligence collection, and especially for wiretaps. And Francophone Belgium is often heavily influenced by the governing style of French institutions.
It is inconceivable that these two countries lack a robust wiretapping capability. It is also inconceivable that they would fail to tap mobile phone systems, including Vodafone's. Yet Vodafone says that it has not received any demands for lawful interception from these countries and has not implemented the technical requirements to enable lawful interception.
The answer may be in how the Vodafone report seems to define lawful interception -- as requiring that the operator carry out the wiretap:
“In most countries, governments have powers to order communications operators to allow the interception of customers’ communications. This is known as ‘lawful interception.’ ... Lawful interception requires operators to implement capabilities in their networks to ensure they can deliver, in real time, the actual content of the communications (for example, what is being said in a phone call, or the text and attachments within an email) plus any associated data to the monitoring centre operated by an agency or authority.”
So if Vodafone had been ordered simply to give the governments of France and Belgium direct access to Vodafone switches, I'm guessing that Vodafone's report would say that it had not implemented lawful interception in those countries. Which is what it does say.
Maybe I'm wrong about this, but my money is on Belgium and France as two of the countries with "direct access" to telecom switches.
I'm sure the European Commission will investigate, as soon as they finish their proposals for reforming NSA, FBI, DHS, Treasury, CIA, and the Bureau of Land Management.
UPDATE: Corrected Vodaphone to Vodafone. Thanks, Bart (Amon_RA)
Just how dumb is the “right to be forgotten”? Google will make it easy to find out. That's because Google has automated the process for making takedown requests under the European Court of Justice's “right to be forgotten” ruling. If you've got a piece of personal data that you'd like forgotten, all you have to do is fill out Google's handy online form.
Anyone can make a request (though you'll need to take a digital photo of a piece of ID as proof of identity). You then need to find a link (using a European version of Google) and explain why the personal data at the link is inaccurate, outdated, or inappropriate. The opportunity for abuse is obvious.
I feel bad for Google, which is stuck trying to administer this preposterous ruling. But that shouldn't prevent us from showing quite concretely how preposterous it is.
I propose a contest. Let's all ask for takedowns. The person who makes the most outrageous (and successful) takedown request will win a “worst abuse of privacy law” prize, otherwise known as a Privy.
To get you started, here are the four requests I've already filed.
1. Ban this book!
Reason this link violates the right to be forgotten:
The book claims that in 2007 I narrowly defeated Vladimir Putin and Tony Blair in a contest to win Privacy International's title “Worst Public Official.” The book also states that the prize was awarded because of NSA's controversial domestic surveillance activities, combined with fallout from the USA Patriot Act, and that the award is a “dubious” honor. This is inaccurate. In fact, I had little or nothing to do with either domestic surveillance activities at NSA or the USA Patriot Act, and the trophy is a “dubious” honor only in the sense that Privacy International never actually gave me the promised trophy, despite repeated requests on my part.
2. Louis Brandeis memorial wuss fit over a photo
Reason this link violates the right to be forgotten:
This image is outdated. It shows me with a beard.
I don't have a beard now. And if you look at the picture, you'll realize why.
3. You won't believe how much weight you can lose with this simple European Court of Justice trick
Reason this link violates the right to be forgotten:
This image is outdated. It is 14 years old. I've lost weight since then.
4. Oh hell, let's just censor people we don't like
Reason this link violates the right to be forgotten:
This link is inappropriate. It compiles stories making many distorted claims about my political views. Political views are a particularly sensitive form of personal data. The stories are written by men who disagree with me, and they are assembled for the purpose of making money for a website, a purpose that cannot outweigh my interest in controlling the presentation of sensitive data about myself.
Think you can do better? Enter as often as you like and send the results to firstname.lastname@example.org. If you don't want your name listed as a winner, tell me, or use a pseudonym.
Remember, though, that prizes will be awarded only for takedown requests that succeed. Please send details demonstrating that the link you identify was in fact taken down.
I'll be testifying tomorrow afternoon before the Senate Select Committee on Intelligence, talking about the bill that bans NSA's bulk collection of metadata. It passed the House after small amendments that privacy groups are now complaining loudly about.
I don't like the bill for quite different reasons. My prepared testimony is here: Download Stewart Baker Testimony June 5 2014 to Senate Intelligence Committee.
After explaining why bulk collection should not be banned, here's what I say about the privacy groups' objections:
Everyone recognizes that if bulk collection requests are foreclosed, then the government must make individualized requests for data. And to do that, it has to give the companies specific search terms to use. Before amendment, the House bill said that the government could only ask the companies to use three kinds of search terms. They could only ask the companies to look for a suspicious “person, entity, or account.”
This was foolish. Clues come in many forms. What if the agency doesn’t know the suspect’s name but does know his internet address, or the unique identifier of his tablet? Those are proper and specific search terms, and they are likely to be of value to terrorism investigators. So the bill was revised; now it allows the agency to use search terms “such as … a person, entity, account, address or device.”
Some opponents have a beef with the addition of “address or device.” They claim that these words are too open-ended and ambiguous. But when asked to identify the ambiguity they fear, the critics offer only strained and unlikely interpretations. Senator Ron Wyden has said that the law as adopted by the House “could be used to collect all of the phone records in a particular area code, or all of the credit card records from a particular state.” This apparently rests on the remarkable view that a state or an area code is the same as an “address.” Wow. I knew Oregon was a big state, but I’m still surprised to hear that they’re using area codes as addresses out there.
Let’s be realistic. If that can be called an ambiguity, then no words will ever satisfy the critics. Why not object to the word “person,” for there are cases treating entire cities or counties as persons?
And dropping those words from the bill creates obvious and dangerous gaps in our ability to investigate terrorism. Take these examples:
- Suppose that attackers use a VOIP phone as in Mumbai. The phone might have only an IP address. If we drop “address” from the list, the government can’t serve a 215 order asking for information about the online activities of that phone.
- Or suppose that in a Mumbai-style attack the terrorists keep changing their SIM cards (and thus phone number); we would need to search not for the phone number but for the IMEI number that identifies the actual phone. Drop “device” from the list and the government can’t ask for that information.
Other opponents aim their fire at the words “such as.” They would drop those words, capping the list of search terms at five, or even three. This too is foolish and dangerous. Can the proponents of this change predict with perfect foresight which clues we’ll need to uncover the next conspiracy? I doubt it.
Again, a few examples show why we cannot foreclose the use of other specific search terms:
I suppose that the practical answer to some of these questions is that the government won’t use its counterterrorism or national security authorities. It will rely on criminal subpoenas, which don’t come with any of these restrictions. But if that is so, if the intent is to let our intelligence agencies have all this information as long as they rely on law enforcement authorities, then we’re back to privacy theater.
Or privacy farce, since the result of the legislative changes is to put the United States Congress on record as giving fewer tools to those investigating terrorism and national security threats than to those pursuing muggers and embezzlers.
The ACLU and EPIC have campaigned long and hard against surveillance cameras in public spaces, and they've had considerable success -- despite a paucity of actual serious privacy abuses. So it's worth remembering that all this privacy theater imposes real costs on crime victims.
This story, headlined "After Boy and Girl Are Stabbed, Anger Over a Lack of Cameras" is only surprising because it appears in the New York Times:
The 7-year-old girl is hospitalized in critical condition, the only witness to a crime that so far defies explanation: A man stabbed two young children in the elevator of a public-housing project and escaped into the late-spring evening. Her best friend, a 6-year-old boy, is dead.
Though residents of the Brooklyn housing project saw a man fleeing through the development after the attack, he remained at large on Monday, the search made more difficult because the building has no surveillance cameras.
Living in housing projects in East New York means living with the daily threat of violence, and Boulevard Houses is no exception. But until Sunday night, parents felt safe taking their children downstairs to play....
The lack of cameras raised questions on Monday as elected officials accused the New York City Housing Authority, which manages the building, of being slow to install the cameras.
To be fair, I haven't seen reports suggesting that privacy groups opposed installation of surveillance cameras in these particular public spaces. Maybe they think that city-owned public housing should be as freely surveilled as private housing. But I wouldn't take bets on it.
The NBC interview with Edward Snowden was instructive in several ways. He continues to present himself as a reasonable man who tried to stop illegal programs but was left with no option but to go public. But the more closely you listen, especially when he says things that can be checked against the record, the more dubious his claim begins to seem.
In fact, the NBC interview, and the exchange with NSA that followed, reveal a lot about Snowden’s style of truth-telling, which turns out to be hard to distinguish from, well, lying.
When questioned about his claim to have raised concerns inside the NSA before breaking his promises of confidentiality, Snowden said, “I actually did go through channels, and that is documented. The NSA has records, they have copies of emails right now to their Office of General Counsel, to their oversight and compliance folks, from me raising concerns about the NSA’s interpretations of its legal authorities.”
This time, remarkably, NSA was not caught flat-footed. Showing an impressive grasp of the news cycle, the agency quickly released the only email that Snowden sent to the NSA GC. It was clearly the message Snowden described, but it was nothing like a blown whistle.
Instead, it asked a question straight out of high school civics. Pointing to training materials about the agency’s sources of legal authority, starting with the Constitution, Snowden noted that the materials listed “Federal Statutes/Presidential Executive Orders” on a single line. He asked what in retrospect is a gotcha question with a phony humility worthy of Uriah Heep: “I'm not entirely certain, but this does not seem correct, as it seems to imply Executive Orders have the same precedence as law. My understanding is that EOs may be superseded by federal statute, but EO's may not override statute. Am I incorrect in this?”
NSA’s lawyer responded promptly, and correctly, saying that EO’s were indeed subordinate to statutes. And there the matter rested.
Only the delusional would view that exchange as “raising concerns” about NSA’s programs. But Snowden isn’t delusional. He’s deliberately misleading us. Because when we parse his answer, it turns out not to say what we thought it said. What he actually said was that NSA had emails “from me raising concerns about NSA’s interpretations of its legal authorities.” (Emphasis added.)
And sure enough, that’s exactly what his email did. What it didn't do was raise concerns about the lawfulness or wisdom of NSA’s programs – which was of course the impression he meant to leave. And, quite probably, it was an impression he thought he could get away with. He didn't think NSA was capable of conducting a wild-goose chase through its email records and quickly declassifying what it found.
When NSA showed how slippery his original answer had been, Snowden got angry, and his intemperate defense was equally revealing. He insisted that this was not his only communication, that he had sent other messages to other offices. But now we know how to read his claims, and it’s likely that those other emails, if they exist, are just more vague expressions of concern about “NSA’s interpretations of its legal authorities.” Because, on a really close reading, that's all he promised us.
Snowden’s second line of defense was to accuse the NSA of having lied earlier, when it said it found no record of his past objections. Wrong again. What NSA said at the time was, “we have not found any evidence to support Mr Snowden's contention that he brought these matters to anyone's attention." That’s still true, since the much-touted email doesn’t bring anything (other than Snowden's skill as a sea lawyer and proofreader) to anyone's attention, nor does it raise any objection to any program.
In short, Snowden has revealed a lot about himself in this exchange. In his original statement he worked hard to avoid an outright lie. But he worked equally hard to leave a deeply false impression, and on a point where he thought the security agencies couldn't contradict him -- either because they were unable to search their records completely or because they were unable to declassify the truth. I'm willing to bet that Edward Snowden didn't invent his approach to the truth just for this interview.
More likely, it is something he's done before, such as when he talked about NSA committing economic espionage. He didn't quite say that NSA steals commercial secrets for American companies, but you have to parse him like the Talmud to realize that. Most listeners, and most headline writers, came away convinced that he had confirmed their worst suspicions.
But if deploying technical truth in support of deliberate misrepresentation is Snowden's style, and I think it is, then there's one simple lesson. The public can't trust him. At least not when he offers hints and teases and “interpretations,” rather than factual statements backed by unequivocal documentary evidence.
And, come to think of it, hints and teases and interpretations of the ambiguous are pretty much all the public has been offered by Snowden and his journalist allies since, oh, about June of last year.
When the Justice Department indicted six People's Liberation Army hackers, it directly accused the PLA of stealing "privileged attorney-client communications related to SolarWorld's ongoing trade litigation with China."
This is not a surprise to knowledgeable observers. Chinese attacks on large U.S. law firms have been widely acknowledged, and last summer the American Bar Association condemned "unauthorized, illegal intrusions into the computer systems and networks utilized by lawyers and law firms." But the ABA flinched from actually mentioning China or the PLA in the resolution, and as far as I can see, ABA President Jim Silkenat has still said nothing about Chinese hacking of US law firms.
Contrast that silence with Silkenat's rush to demand answers from the NSA about more attenuated allegations. On February 15 of this year, the New York Times published a Snowden-inspired article claiming that Australia had intercepted an American law firm's advice to Indonesia on a piece of trade litigation. The article was full of anti-NSA spin but it made no claim that NSA itself was spying on privileged communications.
Nonetheless, five days after that story appeared, Silkenat sent a two-page letter to the head of NSA. "Whether or not those press reports are accurate," Silkenat wrote, he sought the NSA's director's support "in preserving fundamental attorney-client privilege protections for all clients and ensuring that the proper policies and procedures are in place at NSA to prevent the erosion of this important legal principle."
Fair enough. But it's now been three days since we saw a much more direct accusation that the PLA was spying on privileged attorney-client communications in the US.
Who's taking bets on whether the American Bar Association will be as quick to call out the Chinese government as it was to call out its own?
Is Edward Snowden a spy? That's the possibility raised by Edward Jay Epstein in a (paywalled) Wall Street Journal op-ed. Epstein offers some new evidence for his theory. In particular, he says that NSA investigators now know that Snowden's tactics included breaking into two dozen compartments using forged or stolen passwords. Once there, Snowden loosed an automated "spider" with instructions to scrape the compartments for particular information. In most cases, US officials have said, the data Snowden took was overwhelmingly of military and intelligence value to our adversaries and had little or nothing to do with privacy or whistleblowing.
It's entirely possible that Snowden is a spy. But it's also possible that he stole the military data to make sure he could find a safe foreign haven after his disclosures. That would fit the pattern of his disclosures over the past year. Dozens of recent Snowden leaks have revealed nothing about "mass surveillance" -- but they have consistently advanced Russian geopolitical interests.
In support of the "documents for asylum" theory, remember that, during his unsuccessful campaign to stay in Hong Kong, Snowden was quick to display stolen documents detailing the Chinese computers NSA had hacked. Here's the South China Morning Post from June 13, 2013:
Snowden said that according to unverified documents seen by the Post, the NSA had been hacking computers in Hong Kong and on the mainland since 2009. None of the documents revealed any information about Chinese military systems, he said.
One of the targets in [Hong Kong], according to Snowden, was Chinese University and public officials, businesses and students in the city. The documents also point to hacking activity by the NSA against mainland targets.
Snowden believed there had been more than 61,000 NSA hacking operations globally, with hundreds of targets in Hong Kong and on the mainland.
Interestingly, now that Snowden has lost his bid for haven in the Chinese-dominated city, he's stopped leaking information about NSA's hacking of Chinese computers.
Epstein argues that Snowden's whistle-blowing was just a cover for espionage. Maybe so, but there's at least one hole in his argument. "Contrary to Mr. Snowden's account," Epstein writes,
the document he stole about the NSA's domestic surveillance couldn't have been part of any whistleblowing plan when he transferred to Booz Allen Hamilton in March of 2013. Why? Among other reasons, because the order he took was only issued by the FISA court on April 26, 2013.
The problem for this claim is that the April 26 order was just one in a long line of 90-day FISA court orders. (Here's one from April of 2011, for example.) Snowden could easily have been motivated by those earlier orders, even though he ultimately stole the most recent one. In fact he has said that "the breaking point," when he finally decided to reveal the program, was DNI Clapper's "least untruthful answer" to Sen. Wyden's question about mass collection in March of 2013.
That doesn't make Snowden a truth-teller. He was almost certainly lying when he claimed he didn't steal passwords from his co-workers. And if he was watching Sen. Wyden in March 2013 as he claims, then he must have known that Wyden's question was part of a years-long campaign to end the Patriot Act interpretation that sustained NSA's program. For several years, the senator had been making coded attacks on the FISA court's interpretation of section 215. Examples here, here, here, and here. Anyone who knew about the NSA program, and that included Snowden, would have had no trouble understanding what Sen. Wyden was complaining about.
That's important because Snowden has tried to portray himself as a whistleblower with nowhere to go to challenge a plainly unlawful program. But if he were following Sen. Wyden's campaign, he knew that there already was a debate about the program, and at the highest levels of government. He also knew that the program had been ruled lawful and that Sen. Wyden had not yet persuaded his colleagues to end it.
Snowden revealed classified information, in short, not because he lacked an outlet for his complaints but because he didn't like the decisions that the executive, congressional, and judicial branches had made.
So the best case for Snowden is that he's an egoist who thinks his views should triumph over the country's leadership -- that he leaked classified documents not to start the debate but to end it.
The worst case is that he's a spy.
And in between is the theory I find most plausible, at least today: He's an egoist who wanted to kill the program -- but without paying too heavy a personal price. So he stole all the other secrets in hopes that they would give America's adversaries an incentive to protect him.
And so far, they have.
Earlier, I promised a post that would make the positive case for the third-party doctrine and Smith v. Maryland.
The case against it seems pretty obvious. Privacy advocates are glad to tell us that the pace of technological change requires that we expand fourth amendment protections. “We're putting our entire lives online,” they say. “The government's ability to collect and analyze data is growing. Only by expanding the fourth amendment can we even the balance that protects our privacy.” Or, more colloquially, “Some new technologies are just plain creepy, especially in the hands of the government, and we want the fourth amendment to save us from them.”
The problem with that argument is that definitions of “creepy” change pretty fast.
Brandeis wrote his seminal article on privacy because he thought the Kodak camera was creepy, and he wanted the law to prevent the hoi polloi from taking his picture. In the 1970s, the FBI's ability to maintain clippings files on prominent Americans was a creepy source of power for J. Edgar Hoover. And the Attorney General actually imposed a fourth-amendment-style “predicate” requirement on future FBI clippings files about individuals. Today, though, Google has democratized the clippings file, and it's too common to be creepy.
Much as we may regret what we said to a reporter back in 1997, there's no point in feeling violated every time it shows up in search results. So we don't.
We adjust. The line between “creepy” and “not creepy” isn't fixed. It creeps.
This makes it very dangerous to build a fourth amendment doctrine on the relative creepiness of new technologies. To start, even if we thought the law should restrict creepy new technologies, why would we ask nine cloistered quasi-academics with an average age pushing 70 to tell us where the “creepy line” is today? And why would we put their answer largely beyond reconsideration – enshrined as precedent in the Constitution? There's a good chance that if we'd done that in the last century, we'd still be waiting for the Court to reconsider the rule that governments must have probable cause and a warrant before taking pictures of people or before running Google searches on them.
If you want to know what information Americans really value, and what technologies they really find creepy, Smith v. Maryland turns out to be a pretty good proxy – and certainly better than consulting a panel of nine Baby Boomers. When Americans share certain data, they are voting with their feet – giving up some privacy for the sake of something they value more. By now everyone understands the social media business model; we're getting the service because we are giving up the data. And most of us have been occasionally surprised and disconcerted by the ways in which the data has been used. Sometimes we decide that we value our privacy more than the service, and we quit. More often, we don't. And our “creepy line” moves a bit. The more often it moves, the less surprising and the less offensive we find it when the government gets access to the same data we've already given to Twitter or Google or Facebook or AT&T.
Viewed another way, the decision to share certain data with a third party is part of a predictable journey. It's a sign that we care about the privacy of the data a little less than we once did. And once shared, the data slowly becomes less sensitive. It's the journey from Brandeis to Kodak to Flickr.
If I had to guess, that's the journey we're on with location data. The Supreme Court clearly thinks that routine government access to location data is kind of creepy, and it's tempted to give location some special constitutional status, notwithstanding Smith. But if it does, I predict, it's going to end up looking as foolish and out of touch as Brandeis does today. Why? Because more and more kids are getting smart phones today, sometimes as early as elementary school, and practically every parent who buys one is installing an app that relays the kid's location to the parents. Which means that kids are already beginning to graduate from high school without any sense that their location can or should be hidden from the ultimate authorities, their parents. They will never share the current Supreme Court's instinct that their location is uniquely private.
Saying that I'd rather trust the verdict of millions of Americans than the instincts of nine Supreme Court justices is not the same as saying that there should be no special privacy rules for third-party data. It just means that those rules should not be written by the Court.
In fact, the introduction of new third-party services is routinely used by privacy advocates to call for new restrictions on government access to those services' data. And the new services themselves are eager to deflect their customers' privacy concerns toward regulating the government and not their service. So there's a built-in lobby for legislation that tinkers with the default Smith rule. As a result, Congress has been active in setting special rules for government access to some third-party data.
Under Smith, for example, electronic communications could be obtained without a warrant, just like everything else we share with third parties. In practice, though, Smith doesn't set the rules for electronic communications. Instead, government access to those records is governed by the Electronic Communications Privacy Act, enacted nearly thirty years ago and revised a dozen times since then. ECPA is remarkably fine-tuned, setting several different standards for government access to different kinds of private communications, all of them higher than the default that Smith offers. (There is an active campaign right now to raise those standards even further.)
Or take the most famous collection of third party data, the one that got this debate rolling – NSA's collection of the metadata for all calls touching the United States. Even there, actual intrusions into privacy were strictly limited. The government held a lot of data, but it conducted searches on fewer than 500 identifiers a year. All three branches of government imposed limits on NSA's actual access to the data. And Congressional reforms of the program are already being debated, with some changes nearly certain.
None of this suggests a failure of democracy that requires the Supreme Court to step in and impose its own Procrustean definition of “creepy” on the country.
It turns out that Smith v. Maryland provides a good first-order estimate of Americans' evolving expectations of privacy. And where it's wrong about those expectations, it provides a powerful incentive for Congress and the Executive to bring the law into accord with Americans' expectations.
The third-party doctrine of Smith v. Maryland, 442 U.S. 735 (1979), is getting a bad rap from libertarians of the left and the right. Smith holds that the police don't need a search warrant to get information about me from a third party. If I keep a diary in my desk drawer, the police must get a search warrant based on probable cause if they want to read it. If I leave the diary with my mother for safekeeping, though, the third party doctrine says that the police only need to serve her with a subpoena to get it. The same is true if I store the diary in the cloud with Google Drive or Dropbox. If it were on my computer, the police would need a warrant to read it; in the cloud, they don't.
The theory of Smith is that I have a reduced privacy expectation in things I've shared with others. Life teaches us the same lesson. By the third grade we've all discovered the dangers of telling our deepest secrets to a friend. The Founders knew it too. As Ben Franklin famously said in Poor Richard's Almanack, “Three can keep a secret, if two of them are dead.” And, less famously but even more to the point for Smith, “If you would keep your secret from an enemy, tell it not to a friend.”
Why should we rethink a doctrine so grounded in human experience? Advocates point to the mass of data that we increasingly share on the internet, especially via smart phones. If all that data can be obtained without a search warrant, they ask, what is left of the fourth amendment's protection of our privacy? Surely there must be a limiting principle, a point where the intrusions are so great that the fourth amendment kicks in, no matter what Ben Franklin says. For example, Randy Barnett, my co-blogger, asks me whether the Smith doctrine means that the government could sweep up all the data that Americans have given to credit card companies, doctors, and accountants – without implicating the fourth amendment.
I'll answer that by turning the question around. Smith is the law today; so when should it not apply? Randy seems to argue that Smith should recede and the fourth amendment should kick in whenever government starts gathering “too much” information. My first, quick answer to Randy pointed out that he has his own problem finding a limiting principle for that approach: We can agree that, today, a park policeman standing on the steps of the Lincoln Memorial does not need a warrant to observe the behavior of a tourist walking by. After all, the tourist knows that his appearance and actions there are available to the public, and he has no fourth amendment protection from observation. Why should the constitutional analysis change if the same policeman stands in the same spot on Inaugural Day, when he can observe half a million people? And if it should, when is he looking at "too many" people? 200? 20,000?
To be fair, Randy isn't required to extend his "too much" argument to other fourth amendment exceptions. So let me try here to address the question whether Smith must be reconsidered if it allows the government to collect enormous amounts of information about Americans simply because the data is stored in third-party computers. Randy is sure that large-scale government scrutiny of third-party financial and medical records will be deeply shocking to American sensibilities, and the Court's. I'm skeptical. A few simple Google searches turn up stories suggesting that the government is already scrutinizing 4.5 million medical transactions a day:
A provider-screening process is able to capture critical attributes that may help identify fraudulent health care providers. CMS is then able to use software to sift through the 4.5 million claims CMS receives a day and run them through algorithms that search for patterns of fraud.
Another search discloses that, pressed by Congress, the SEC is already using Big Data to scrutinize financial trading patterns as well as the transactions of regulated entities. Yet another shows that financial institutions already send the government 11 million reports a year about customer transactions that look suspicious. And as for phone metadata, I've estimated in the past that American law enforcement serves over a million metadata subpoenas a year on telephone companies, and that the practice is probably a hundred years old.
It seems a little late to decide that all this is “too much,” especially in the absence of evidence that Smith has been seriously abused. Privacy advocates are going to have trouble explaining where the “too much” line should be drawn, and exactly how many existing regulatory regimes the Supreme Court should overturn by undoing Smith.
That's the argument against changing the Smith doctrine, and especially against the claim that "too much" use of Smith requires that the fourth amendment kick in. But I also think that there is a positive case for Smith. Since this post is already too long, I'll cover the positive case in a second post, hopefully later today.
Apart from the word "property," what is it about modern intellectual property law that should appeal to conservatives? The free-floating liability to plaintiffs' lawyers? The income transfers to people who mostly hate middle America? The capture of lawmakers and regulators by a rent-seeking minority? The enshrining of those lobbyists' victories in international law -- enforced in Geneva and immune to democratic change in this country? The law's dramatic turn from the original understanding of the Framers of the Constitution?
Despite these features, only a handful of conservatives seem ready to rethink intellectual property law. One young conservative in that camp is Derek Khanna, whose just-released R Street Policy Paper makes the conservative case for copyright reform. Here's a sample:
As with other enumerated powers of the federal government, Congress has expanded copyright far beyond what was originally intended. Just as Congress frequently neglects to abide the Origination Clause and the Commerce Clause, it likewise has ignored the Copyright Clause’s requirement that these monopoly instruments be granted only for “limited times.” Contributing greatly to this distortion has been the influence of a persistent army of special interest lobbyists, usually representing media companies, rather than the interests of creators and the general public.
In order to restore the original public meaning of copyright, copyright’s term must be shortened. We must reconsider existing international treaties on copyright and not sign any treaty that either would lock in existing terms or extend terms even longer (such as the Trans Pacific Partnership Treaty). Finally, copyright terms must not be extended to “life+100” when the next copyright extension bill is expected to come up in 2018.
I don't know how the Supreme Court will decide ABC v. Aereo, argued last week. But however the case is decided, I suspect there's a real risk that the Court will screw up the law.
Why? Three reasons.
1. The case requires interpretation of a complicated statutory regime that the Court rarely construes. Aereo is exploiting a seam in copyright law that implicates fair use, performance rights, and how these rules apply to cloud computing. Intervening occasionally in complex statutory schemes is a high-risk endeavor for the Court. The justices are very smart lawyers, but if they don't get a run of cases in the same area, they often lack a feel for how all the pieces fit together.
2. That is surely true in Aereo, where the Court is genuinely at sea. Oral argument revealed a widespread disposition to view Aereo's business model as too clever by half -- using thousands of tiny "personal" antennas to collect and transmit broadcast television without paying the fees that apply to cable companies who do the same. The justices seem to be struggling to find a way to slap Aereo down without damaging the legal framework that today protects cloud companies like Dropbox from the copyright plaintiffs' bar. The Court seems to be reaching for a creative way out of this predicament. That's not good, for the third reason:
3. This was an April argument. In fact it was so late in April that the opinion probably wasn’t assigned until the last Friday of the month. That’s important because the court aims to finish its business and go on recess by the end of June. And since there will surely be concurring or dissenting opinions, simple fairness and tradition require that the justices in the minority see the majority opinion by June 1. That means the justice drafting the Court's opinion has only five weeks both to figure out how to reach the desired result and to produce a detailed opinion that scans the field and explains how its decision fits into that context. That’s a big task, and it can’t be done by starting from scratch, even with the help of a law clerk. The justice assigned the case will have to fall back on briefs filed by interested parties, all of whom drafted their submissions to teach the Court all the copyright law and facts that fit their interests.
It will be very difficult for the Court to see past those interests as it drafts, especially if it's trying to chart a new path on a tight deadline. Sometimes gaps and mistakes can be cured in the back and forth between the majority and the dissenters. But not in June. By the time the dissent is drafted, there may only be a week or two before recess. The opinions are more likely to talk past each other than to engage in a dialogue.
This is a recipe for error. The errors may not be obvious, of course. They could be no more than a misguided footnote, but that footnote could easily make law for a generation if the Court never returns to this dusty corner of the US Code. As Justice Jackson once said of the Court, "We are not final because we are infallible, but we are infallible only because we are final."
It is also a recipe for a splintered Court. If a justice isn’t sure the assigned drafter will produce an acceptable rationale, or fears there won’t be time to hone the draft into something he or she supports, the temptation will be to begin writing a separate opinion soon, just in case. And, once written, the draft is likely to seem more persuasive to its author than someone else’s work. That's how separate opinions proliferate, so that the lower courts must figure out what the Court actually held by counting noses rather than construing text. A divided Court has the advantage, I suppose, of avoiding error, since none of the justices’ opinions is authoritative. But it often leaves the law less certain than before the Court spoke. Which means that the Court will have to take more cases to clear up the confusion.
I could be completely wrong, but my money is on a decision in Aereo that leaves copyright law worse off than it is right now.