John Yoo, Mark MacCarthy, and I kick off episode 329 by jumping with both feet into the cyberspace equivalent of a dumpster fire. There is probably a pretty good national security case for banning TikTok. In fact, China made the case a lot better than the Trump administration when it declared, "You know that algorithm that tells all your kids what to watch all day? That's actually a secret national security asset of the People's Republic of China." But the administration's process for addressing the national security issue was unable to keep up with President Trump's eagerness to announce some kind of deal. The haphazard and easily stereotyped process probably also contributed to the casual decision of a magistrate in San Francisco to brush aside US national security interests in the WeChat case, postponing the order on dubious first amendment grounds that John Yoo rightly takes to task.
Megan Stifel tells us that the bill for decoupling from China is going to be high – up to $50 billion just for chips if you listen to the Semiconductor Industry Association.
Speaking of big industry embracing big government, Pete Jeydel explains IBM's slightly jarring suggestion that the government should slap export controls on a kind of face recognition technology that Big Blue doesn't sell any more. Actually, when you put it like that, it kind of explains itself.
Megan tells us that the House has passed a bill on the security of IoT devices. The bill, which has also moved pretty far in the Senate, is modest, setting standards only for what the federal government will buy, but Megan has hopes that it will prove to be the start of a broader movement to address IoT security.
I reprise the latest demonstrations that Silicon Valley hates conservatives, and how far it will go to suppress their speech. My favorite is Facebook deciding that a political ad that criticizes transwomen competing in women's sports must be taken down because it "lacks context". Unlike every other political ad since the beginning of time, apparently. Although Twitter's double standard for a "manipulated media" label is pretty rich too: Turns out that in the Twitterverse, splicing Trump's remarks to make him say what the Biden camp is sure he meant is perfectly fair, but splicing a Biden interview so he says what the Trump camp is sure he meant is Evil Incarnate.
Finally, Megan rounds out the week with a host of hacker news. The North Koreans are in bed with Russian cybercrime gangs. (I can't help wondering which one wakes up with fleas.) The Iranians are stealing 2FA codes and some of them have now been indicted by the US Justice Department, though not apparently for the 2FA exploit. A long-running Chinese cybergang has also been indicted. That won't actually stop them, but it will be hard on their Malaysian accomplices, who are already in jail.
Our interview this week is with Michael Brown, a remarkably influential defense technologist. He's been CEO of Symantec, co-wrote the report that led to the passage of FIRRMA and the transformation of CFIUS, and he now runs the Defense Innovation Unit in Silicon Valley. He explains what DIU does and some of the technological successes it has already made possible.
Oh, and we have new theme music, courtesy of Ken Weissman of Weissman Sound Design. Hope you like it!
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
The Belfer Center has produced a distinctly idiosyncratic report ranking the world's cyber powers – though they should have called it Jane's Fighting Nerds. Bruce Schneier (@schneierblog) and I puzzle over its rankings, but at least the authors provided the underlying assessments that led them, among other oddities, to rank the Netherlands No. 5, and Israel nowhere in the top ten. The US is number one, but that's partly due to the Center's insistence that the US ranking should be boosted because we're a norms superpower. In my book, that should have cost us a 20% discount off our offensive capabilities ranking. Don't agree? Download the report and pick your own fight!
Our interview today is with Cory Doctorow, diving deep on his pamphlet/book, "How to Destroy Surveillance Capitalism." It's a robust and entertaining three-cornered fight – me, Cory, and the absent Shoshana Zuboff, whose 700-page tome launched the surveillance capitalism meme. You'll enjoy hearing me ask Cory, a Red Diaper Baby born to Trotskyists, to explain why his solution to tech's overreach is so similar to Attorney General Bill Barr's.
Elsewhere in the news roundup, Nate Jones (@n8jones81) and I unpack the Pandora's Box of pain loosed by the European Court of Justice in Schrems II. Facebook is fighting a multilevel rearguard action – in the courts, in two capitals, and in its terms of service – to try to salvage its current business model.
I cover the latest Tok in the TikTok saga. Oracle has won … something or other. Sultan Meghji (@sultanmeghji) and I puzzle over how the TikTok algorithm can stay in China while the dataset it's training on remains in the United States.
The Justice Department's antitrust lawsuit against Google is getting nearer and nearer, judging from the thrashing in the underbrush. But we still don't have a good idea what part of Google's business will be targeted. Sultan explains the state of play.
In a news flash as surprising as a report that the weather in San Diego will be sunny and fair, Microsoft has confirmed that the Chinese, Iranians, and Russians have launched cyberattacks on the Biden and Trump campaigns. For reasons unknown, the press can't get enough of this thin gruel.
Bruce and Sultan chart the reasons and tactics behind the rise of ransomware and the importance of being a reliable criminal if you want to make money in extortion.
Nate unpacks China's global data security initiative so you don't have to waste your time. The tl;dr is that other countries shouldn't do any of the things China is doing or aspiring to do.
Speaking of things you don't have to read because we took the hit, Bruce tells us what's in the new White House cyber-security policy for space systems. Really, it's all "shoulds" and puts nobody in charge of enforcement. It would be kind to call it the beta version of a space cybersecurity policy.
Sultan argues that there may after all be a limit to the EU's ability to get every part of the internet economy to enforce EU speech codes, and the domain name registries hope they're on the other side of that line.
You probably saw the "op-ed" that an AI "wrote," explaining why humans need not fear it. Bruce, Sultan, and I have plenty of fun mocking Open AI's penchant for Open Hype. But Bruce reminds us that sooner or later the hype will be real, and more than half of Twitter will be machines talking to other machines. Judging from my Twitter feed, that will be an improvement.
Finally, This Week in Sore Losing: In honor of AWS's brief complaining that it should have beaten Microsoft to the lucrative JEDI contract, I update an old lawyer's motto: If you've got the law on your side, pound the law. If you've got the facts, pound the facts. And if you've got neither, pound the Orange Man.
327: “I’ll Take Hacking Tesla for One Million Dollars, Alex”
In the 327th episode of the Cyberlaw Podcast, Stewart is joined by Nick Weaver (@ncweaver), David Kris (@DavidKris), and Dave Aitel (@daveaitel). We are back from hiatus, with a one-hour news roundup to cover the big stories of the last month.
Pride of place goes to the WeChat/TikTok mess, which just gets messier as the deadline draws near. TikTok is getting all the attention but WeChat is by far the thornier policy and technical problem. I predict delays as Commerce wrestles with them. Nick Weaver predicts that TikTok’s lawsuit will push resolution of its situation into January. I’ve got fifty bucks that says it won’t. Lawfare wins either way.
Dave Aitel digs into the attempted Tesla hack. Second best question in the segment: Who’s the insider who enabled an attack on his employer and is still working there three years later? Best question: How many CSOs can say with confidence that none of their employees would take $1 million to plug a USB stick into the company network?
This Month in Overhyped Judicial Decisions about FISA: David Kris lays out the seven-years-late Ninth Circuit decision that has been billed as striking at the FISA warrantless surveillance law. Talk about overtaken by events. The opinion grumbles about the fourth amendment but doesn’t actually rule on that ground (and its analysis is so partial that it isn’t even persuasive dicta). It boldly finds that the collection violated a statute that has been repealed anyway. And then it says that doesn’t matter because suppression of the evidence isn’t a remedy and the violation didn’t taint the trial. The only really good news for the libertarian left is that Justice can’t appeal to the Supreme Court because, well, it won.
David also takes on the other overhyped FISA decision, a lengthy FISA court review of agencies’ minimization practices with respect to Americans’ data collected under section 702. The court approved practically everything but was predictably and not improperly upset at the FBI’s inability to design social and IT systems that prevent dumb violations of the rules.
Speaking of FISA, important national security provisions remain unsettled, in large part because of Trump’s misguided opposition. Who, David asks, could possibly persuade GOP members that there’s a FISA reform that responds to their sense of grievance over the Russian collusion investigation? I volunteer, with lengthy testimony to the PCLOB and a shorter piece in Lawfare.
Dave Aitel asks why we’re surprised that Iranian hackers are monetizing access to networks that don’t offer national security value to their government. Or that hackers are following their targets into specialized software markets. If you know your target is a law firm, he suggests, you’d be better off looking for flaws in Relativity than in Windows…. Uh, excuse me, but I just felt someone walk over my grave.
Nick and Dave are both critical of the Justice Department’s indictment of Joe Sullivan for obstruction of justice and misprision of felony. That is beginning to look like a case Sullivan can win, and one he just might take to trial.
Nick thinks the Justice Department is playing a long game in pretending it can seize 280 cryptocurrency accounts used by hackers. It can’t get the funds, but it sure can make it hard for the hackers to get them.
It's been four years since the FBI began its national security investigation of the Trump campaign, and Americans remain deeply divided over the probe. Democrats think the investigation was more than warranted by the number of suspicious contacts between Team Trump and the Russian government. Republicans think the investigation was a partisan hit job on an anti-establishment candidate.
They're both right.
It would have been national security malpractice not to investigate possible Russian influence over the Trump campaign. Hostile foreign governments will always be tempted to use the openness of American presidential contests to boost their favored candidates or sabotage others. More such investigations will be needed in the future. After spending four years advertising the success of Russia's interference campaign, the U.S. should not be surprised if other countries get the message and launch their own. Given the risks, national security agencies can't be gun-shy about probing foreign government efforts to infiltrate the U.S. political system.
At the same time, there is a lot more evidence than many people realize that the 2016 investigation was pervasively tainted by hostility to Donald Trump. In part, that comes with the territory. Any time government officials order national security surveillance of people who want to kick them out of office, they will be suspected of partisan motives. Put charitably, the Obama administration bungled this dimension; it failed to recognize just how partisan its investigation of a political rival would look, and it did far too little to avoid the appearance of partisanship. Less charitably, there is reason to believe that the Obama administration milked the investigation for partisan advantage.
That less charitable view deserves respect. First because it's backed by considerable evidence. And second because it's unpersuasive to tell half the country that their suspicions are mere conspiracy theories that they should just get over. The U.S. needs a national security system that the whole country has confidence in.
Especially now. The United States has spent nearly 50 years guarding against one kind of intelligence abuse—the government turning its intelligence machinery against individual rights and unpopular minorities. It hasn't had to worry much about a different kind of abuse—employing national security surveillance to achieve partisan political ends.
It's not that it can't happen here, as anyone would know who studied J. Edgar Hoover's collection of dirt on politicians—or his willingness to share that dirt with presidents when they felt the need. The United States has been lucky in recent decades. Divided government and a narrow range of political differences discouraged incumbents from using intelligence capabilities against the opposition.
Now, not so much. If the party in power sees members of the other party not just as wrong but as borderline treasonous, why wouldn't it use national security authorities against them? As that temptation grows, institutional reforms are needed to keep officials from yielding to it and, just as important, to show skeptics that the reforms actually worked.
The Obama administration clearly flunked the second requirement. It quite possibly flunked the first one too. Here are the most salient facts in support of that view—a much more detailed accounting of which is available, complete with footnotes, in my forthcoming testimony to the Privacy and Civil Liberties Oversight Board.
The DNC and the Steele "Dossier"
A major part of the Crossfire Hurricane investigation and the public disclosures it produced was the "dossier" created by Christopher Steele. We all now know that it was a salacious and unverifiable hit job assembled not by a network of intelligence sources but by a mix of Steele's friends, their drinking buddies, and probably a few disinformation specialists from GRU (Russia's military intelligence agency). Worse, Steele assembled that hit piece as a subcontractor to the Democratic National Committee, and judging by his conduct, he thought his role was to lobby the FBI to use its formidable national security powers against the Republican campaign—and to leak both the investigation and the now "FBI-validated" dossier in hopes of ruining Trump's candidacy.
There are reasons to suspect that, despite its denials, the DNC intended that outcome: It hid its ties to Steele behind multiple cutouts and a dubious claim of attorney-client privilege, then falsely denied its connection to Steele for months after the story broke. In the end, Steele's work didn't pay off for Democrats until after the election. But during the transition it stoked the Russia collusion narrative that put a cloud of illegitimacy over the first two years of the Trump administration. That is a remarkable, if unseemly, achievement for a partisan hit job. Other political actors will learn the lesson and can be expected to use cutouts in the future to lobby the national security agencies against their domestic enemies.
Partisan Bias and the Carter Page FISA Application
The one really detailed examination of how the Crossfire Hurricane investigators treated the evidence against the Trump campaign is the inspector general's dissection of the Carter Page wiretap application. That story does not exactly rebut the suspicion that partisanship tainted the probe. The application was full of errors and omissions, and all of them cut against Page and the Trump administration. Almost no one in the Justice Department or FBI stopped to ask if it was wise to pursue a surveillance order against a prominent member of the opposing party without taking a hard look at the evidence. As a result, the investigators left out—or even lied about—a raft of information that would have raised doubts about whether Page was a legitimate surveillance target.
For a while, it was possible to put these errors down to a different cause—not partisanship but a complete collapse in the Foreign Intelligence Surveillance Act (FISA) fact-gathering process. That comforting line of thinking rested on two findings by Inspector General Michael Horowitz—first that he found no evidence of bias and second that he found pervasive errors in 29 unrelated FISA applications. On closer examination, neither of those findings offers much support to the "FISA is broken" hypothesis.
First, on partisan motivation in Crossfire Hurricane, what the inspector general actually found was that no one at the FBI was foolish enough to say in writing or in testimony that they or others at the FBI were operating with a partisan bias. As the inspector general acknowledged in his Senate testimony, the absence of bias evidence didn't prove an absence of bias. In fact, the inspector general did find written evidence of bias—in the texts of Peter Strzok, which are full of animus toward Trump. Strzok had great influence over the Crossfire Hurricane investigation, but the inspector general decided that Strzok's bias didn't count because Strzok never acted completely alone in the investigation. Really, that's it. If I'm ever accused of a crime, I want Michael Horowitz on my jury.
Second, the errors he found in 29 other FISA applications evaporated on a closer look. They were, it turns out, almost all failures to properly footnote the FBI's sources. When the FISA court ordered a review of all 29, the Justice Department found only two material errors, and neither of them cast doubt on the issuance of the wiretap order. That contrasts starkly with the Carter Page application, where the department has admitted that the errors were so serious that at least two and perhaps all four FISA orders should never have been issued.
In short, the only FISA application that targeted a partisan opponent of the administration was corrupted by numerous material omissions and errors and at least one false statement, one of the most influential investigators was a voluble Trump hater, and others may have harbored a bias against Trump that they were too prudent to articulate. Since the FISA process in general now seems to be careful and accurate, if not perfect, the deviation from the norm in the case of Carter Page strongly supports the view that anti-Trump bias was at work.
A Conveyor Belt from Press Reports to Surveillance
Actually, there's more. The inspector general passed over in silence the remarkable reliance of the Page application on media reporting. Fully a third of the core FISA case against Page consists of summaries of news stories. By itself, relying on media reports was a likely source of bias against anyone associated with Trump. (If you want to argue about that, all I can say is that I want you on my jury too.) But we don't have to argue about media bias in the abstract. It can be found in the Page application itself, which relies on a Washington Post opinion piece, without disclosing to the court either the source or the fact that it isn't, strictly speaking, a news report at all. Almost as bad, the opinion piece claims that the Trump campaign diluted the GOP platform on Ukraine in ways that favored Russia. (In fact, the campaign accepted a mildly diluted version of an amendment offered by a Ted Cruz delegate, which is a lot more accommodation than delegates for defeated candidates usually get at conventions.) The claim has been investigated extensively, including by Robert Mueller and the Senate Intelligence Committee, without finding any wrongdoing. The nicest thing you can say about the article in retrospect is that it was slanted to take the worst view of the Trump operation. An equally fair summary would be that the story became part of an FBI conveyor belt for turning media bias into a wiretap order. If that doesn't worry you, imagine today's Justice Department obtaining a FISA order against Biden campaign advisers by relying on an article from Breitbart, and simply telling the court, as the Page application does, that the information comes from "an identified news source."
Targeting Michael Flynn
That's not the worst of it. Viewed from the standpoint of partisan abuse, the Michael Flynn story is especially troubling. He had been investigated and cleared by the FBI on Jan. 3, 2017. But two days later, on Jan. 5, the White House obtained a wiretap of Flynn talking to Russian Ambassador Sergey Kislyak about Russia's response to the Obama administration's recent sanctions. The wiretap of Flynn's remarks was legal, because the "target" of the tap was Kislyak not Flynn. But the legality of the collection does not fully resolve what you might call an analytical reverse-targeting after the fact. That's because the White House was only really interested in Flynn's side of the call.
After an Oval Office meeting about Flynn's remarks, Obama administration officials began a concerted campaign to use those remarks against him. Within three weeks, he'd face leaks accusing him of violating the criminal Logan Act, he'd be reinvestigated under an implausible counterintelligence theory, and he'd find himself ambushed by the FBI in a perjury-trap interview. He'd also become the first American to have a FISA-tapped conversation leaked to the press by political rivals. Within four weeks, he'd be gone from government, disgraced and facing criminal prosecution.
By any measure, this was a political use of a FISA wiretap that targeted an American. It may have been a lawful political use of a FISA tap, but that's not something people should be comfortable with. The Obama administration, however, had gotten comfortable with it a few years earlier. When Israel was fighting Obama's Iran nuclear deal in Washington, it worked closely with Hill Republicans. The U.S. apparently tapped the Israelis, again legally, since they were foreign government officials. And the taps may have offered some national security insights; any time a government, however friendly, lobbies Congress against the American president, we ought to know what it's up to. But the foreign intelligence value of understanding what the Israelis were saying paled next to the political value of getting real-time intelligence on the GOP's Hill strategy for stopping the Iran deal. The unfortunate lesson the Obama administration learned in that battle was that the president can use FISA taps against his political enemies as long as he checks the right legal boxes. If it worked against the congressional Republicans, why wouldn't it work against Team Trump?
But turning FISA into just another partisan weapon means it's going to be used like one. If it hurts the other side, it's going to be leaked. Which is what happened with Flynn's conversation. The leak was unprecedented in national security circles, but in Washington politics, it was just another Thursday. More than 40 years had elapsed before the first FISA tap of an American was leaked to the press. I doubt it'll be that long before the second.
The Need for Reform
To be clear, apart from the Flynn leak, none of this was plainly illegal, and no one should want the government to ignore indications that a prominent political figure is working with a hostile government. But the Republicans who were on the receiving end of these intelligence operations have every reason to doubt the good faith of the administration that carried them out. And that in itself will prove fatal to the bipartisan support the intelligence community needs as it responds to foreign influence operations. What's needed are reforms that will prevent future administrations from using the intelligence community against the opposition in this way.
Unfortunately, most of the reform proposals are warmed-up leftovers beloved of individual rights advocates—more paperwork and audits and amici curiae for all FISA applications, not just the ones that pose partisan risk. Others could make things worse, such as the measures to require that the attorney general be briefed on FISA taps with partisan risk. Is there anyone on the GOP side who would be relieved to hear that the Flynn matter was overseen by Sally Yates, who chose partisanship over Justice Department tradition in refusing to defend the new administration's immigration policy in court? Is there anyone on Team Biden who'll be comforted to hear that William Barr will decide whether to investigate the former vice president for ties to Ukraine or China? It's fine for the case to get high-level review; top officials often have better instincts than those in the ranks. But it's not enough. We need to create a career position for a nonpartisan FBI agent or lawyer to challenge the FISA application and every other stage of the investigation. (The attorney general's supplemental reforms memo of Aug. 31, 2020, takes a good step in this direction by requiring that politically sensitive surveillance and search applications be reviewed by a special agent from a field office not involved in the investigation.) The career official should also take the lead in reporting on the investigation to majority and minority congressional leadership, not after the fact but as it proceeds.
And when an operation has both political and national security value, the intelligence it produces needs special and far more limited handling, especially when it goes to political appointees. Every one of them should be required to sign a receipt explaining why he or she needs to read it, and the intelligence community should routinely include tags on some reports that will disclose which copy was leaked.
Other measures are simple. The FBI should offer media reports to the FISA court only rarely, and it should disclose their source and any credible claims of bias that have been leveled against the news outlet. Anyone who pays a third party—directly or indirectly—to try to influence the FBI or other national security agency should disclose that fact, just as lobbyists trying to influence Congress or political appointees must.
There's plenty of room to argue about which safeguards will best limit the partisan misuse of the United States's security machinery. I hope that this piece—and my longer testimony to the Privacy and Civil Liberties Oversight Board—are at least sufficient to establish that, without new safeguards, the United States will slowly lose its ability to respond as it must to foreign influence operations.
The Chairman of the U.S. Privacy and Civil Liberties Oversight Board has, perhaps unwisely, invited me to provide written input on ways to reform the country’s intelligence collection authorities. I have commented publicly on the topic many times since my tenure as general counsel of the National Security Agency in the early 1990s. Because recent commentators have failed to address the most pressing need for intelligence reform, I am grateful for the opportunity to do so again. I am posting my testimony here, in advance of the PCLOB's publication so that readers of my shorter piece on the same topic will have the benefit of the more detailed analysis and sourcing in the full testimony.
Our interview this week focuses on section 230 of the Communications Decency Act and features Lauren Willard, counsel to the Attorney General and a moving force behind the well-received Justice Department report on section 230 reform. Among the surprises: Just how strong the case is for FCC rule-making jurisdiction over section 230.
In the news, David Kris and Paul Rosenzweig talk through the fallout from Schrems II, the Court of Justice decision that may yet cut off all data flows across the Atlantic.
Nick Weaver draws our attention to a remarkable lawsuit against Apple. Actually, it’s not the lawsuit, it’s the conduct by Apple that is remarkable, and not in a good way. Apple gift cards are being used to cash out scams that defraud consumers in the US, and Apple’s position is that, gee, it sucks to be a scam victim, but that’s not Apple’s problem, even though Apple is in a position to stop these scams and actually keeps 30% of the proceeds. I point out that Western Union – on better facts than Apple's – ended up paying hundreds of millions of dollars in an FTC enforcement action and still faced harsh criminal sanctions.
Paul and David talk us through the 2021 National Defense Authorization Act, which is shaping up to make a lot of cybersecurity law, particularly law recommended by the Cyberspace Solarium Commission. On one of its recommendations – legislatively creating a White House cyber coordinator – we all end up lukewarm at best.
David analyzes the latest criminal indictment of Chinese hackers, and I try to popularize the concept of crony cyberespionage.
Paul does a post-mortem on the Twitter hack. And speaking only for myself, I can’t wait for Twitter to start charging for subscriptions to the service, for reasons you can probably guess.
David digs into the story that gives this episode its title – an academic study claiming that face recognition systems can be subverted by poisoning the training data with undetectable bits of cloaking data that wreck the AI model behind the system. How long, I wonder, before Facebook and Instagram start a “poisoned for your protection” service on their platforms?
In quick takes, I ask Nick to comment on the claim that US researchers will soon be building an “unhackable” quantum Internet. Remarkably his response is both pithy and printable.
The decision of the European Court of Justice (CJEU) in Schrems II is gobsmacking in its mix of judicial imperialism and Eurocentric hypocrisy. The decision invalidates the Privacy Shield agreement between the U.S. and the EU on the ground that U.S. protections for individual rights are not "adequate," by which the court means not "essentially equivalent" to the rights provided to individuals under European law. It manages to do this while acknowledging that the court and the EU have no authority to elaborate or enforce these rights against any of the EU's member states. That, the court says, is "irrelevant." It is making the rules for benighted foreign lands like Canada and the United States, not for Europeans. Freed from the prospect that any of the governments that appoint them will have to live with these rules, the judges of the CJEU declare that large chunks of U.S. intelligence law—including some of America's most productive and essential authorities, such as Section 702 of the Foreign Intelligence Surveillance Act (FISA)—are beyond the pale.
In theory, this means that the United States is a privacy-inadequate nation, and any company sending personal data here may be fined under the General Data Protection Regulation (GDPR) up to four percent of gross global income. (Yes, the court left open the question whether a special set of corporate contract clauses remained a legal basis for transferring data to the U.S., but very few lawyers think those clauses will actually provide any protection when challenged, since no private contract can undo the obligations of Section 702.)
It is astonishing that a European court would assume it has authority to kill or cripple critical American intelligence programs by raising the threat of massive sanctions on American companies. In so doing, the court overrode a formal executive agreement reached by the EU with the U.S.; it also rejected the view of the European Commission that U.S. law was adequate to protect individual rights.
Still, the court clearly does think it can force its views on not just the United States but the rest of the world as well. It has already told the Canadians that they don't measure up. Australia and India have been kept in limbo for a decade due to doubts about whether their democracies dance sufficiently to the justices' tune.
Perhaps, had the court been less stiff-necked, it might have forced a change in the laws of these countries. But now the entire project is bound for disaster. China, which is already a great power when it comes to personal data, has signaled to Europe that it will not tolerate interference with its internal affairs. Yet rather than confront a country that clearly lacks protections for individual rights, European bureaucrats have spent 20 years chivvying the United States over data transfers, signing and breaking half a dozen agreements, always asking for more and usually getting additional concessions—including appointment of a special U.S. "ombudsperson" to hear European complaints; enforcement of European law by U.S. agencies like the Federal Trade Commission and Commerce Department; and a special Judicial Redress Act, passed for Europe in 2015, that grants Europeans the right to file FOIA petitions. None of that was good enough for the CJEU. This history shows that, even if the U.S. again tried to modify its law to meet the court's rigid demands in Schrems II, more litigation and more demands—not peace—would be the result.
The time for American concessions is over. Throughout the emergence of this issue, the U.S. has insisted—and the EU has agreed—that data flows across the Atlantic should not be interrupted. Indeed, the World Trade Organization (WTO) agreement signed by Europe makes clear that data flows may not be regulated in the name of privacy if the regulation is a means of "arbitrary or unjustifiable discrimination between countries where like conditions prevail." Nothing could be more discriminatory or arbitrary than 20 years of pursuing the United States for the privacy equivalent of parking tickets while ignoring similar infractions by the member states and an endless series of privacy felonies by the People's Republic of China. It's time for the U.S. to get serious about ending this campaign of harassment.
What can the United States do? Plenty. Here are a few options that belong on the table in the interagency process.
1. Rescind the concessions the U.S. made to get the now-broken deal. This is a no-brainer. Europe has broken the deal it made, and it cannot keep the parts of the deal it likes. The U.S. attorney general should withdraw the special status of European nationals under the Freedom of Information Act and the Judicial Redress Act. The Office of the Director of National Intelligence should abolish the office of the ombudsperson created to give Europeans comfort that their complaints about intelligence collection would be heard. President Trump should rescind PPD-28, the Obama-era set of politically correct limitations on intelligence community activities, which has been kept alive as part of the Privacy Shield negotiations.
2. Prepare to retaliate in a way that shows the U.S. is serious. Americans have never paid much attention to periodic eruptions of the data transfer issue. We are always a little inclined to think that maybe Europeans have something to teach us about privacy and human rights, so righteous American anger about intrusion on our sovereignty has been slow to ignite. But now is the time to show Europe that the U.S. is serious about keeping in place effective counterterrorism measures—and keeping the right to write U.S. laws without getting permission from European governments.
Because this decision violates U.S. rights under the WTO, the executive branch has authority under Section 301 of the Trade Act of 1974 to impose tariffs and other import restrictions on the countries of the European Union. And it should. If the U.S. wants to get Europe's attention, it needs to get Germany's attention, which probably means heavy tariffs on German cars and perhaps car parts. Airplanes and airplane parts are also a touchpoint. As usual, the list of retaliation candidates will need to include something of great value to each member state—Irish whiskey, say, or French wines.
The retaliation process will take a few months. The goal is not to impose the tariffs but to put an end to the crisis—and to Europe's peculiar arrogance about imposing its personal data law on the rest of the world.
3. Make common cause with the U.K., Canada, Australia and perhaps India. The U.S. doesn't have to stand alone. The EU has been threatening the U.K. with an "inadequacy" determination as punishment for Brexit. Its court has already struck at Canadian law. And Australia and India surely know they are next. The U.S. should include these nations in any negotiation, but only if they join America in preparing sanctions against Europe.
4. Find a stopgap solution in one of the member states. The CJEU's admission that it doesn't have anything to say about how member states protect personal data isn't just a confession of hypocrisy. It could be an opportunity to do an end run on the whole mess created by the court. If any one of the member states—Poland, say, or Ireland or Hungary—were willing to sign a national security agreement with the United States, it would be acting within the national security authority conferred on it by Article 4(2) of the Treaty of the European Union.
Suppose, in the pursuit of its national security interests, Poland agreed to allow personal data to flow to the United States without restriction, in exchange for which the United States agreed to share with Poland any counterterrorism data it was able to obtain by virtue of its worldwide intelligence collection. That would only apply to data transferred from Poland, of course, but companies could set up subsidiaries in Warsaw, transfer their data holdings there from elsewhere in Europe—after all, the EU is a single market—and then let them move to the United States.
Or suppose that Poland's government and data protection authority agreed that data exports to the United States could be challenged on the ground that protections for Europeans from U.S. intelligence were inadequate—but only by a plaintiff who could demonstrate concrete economic injury. Since the European objection to U.S. law has been almost entirely theoretical, this has the double advantage of providing redress for actual human rights violations while exposing the fact that, by and large, no one in Europe can point to any.
Whether these one-country solutions would withstand the inevitable legal wrangling, I don't know, but the court left no time for companies to adjust. Getting a Polish exit visa for data from that country would give them breathing room even if the shelter doesn't ultimately survive its journey through the courts.
5. Negotiate an agreement that ends the threat to American companies. If the U.S. can get European governments to take seriously American objections to the notion that Europe can write U.S. law, there is a simple solution to this problem. The CJEU's opinion, though written as though grounded in the rights of man, is in fact based on a European regulation and a European treaty. As a matter of international law, both of those can be overridden by a newer treaty. Indeed, the U.S. entered into a binding executive agreement—the international equivalent of a treaty—when it bargained for the adequacy determination that the court overturned.
How could the court overturn a binding agreement, then? The Americans who negotiated the deal under the Obama administration gave a lot of binding promises about how they would handle European data, but they didn't get a binding promise in return that U.S. law would be deemed adequate and that data flows of compliant companies would not be restricted. Maybe they got snookered. Maybe they couldn't muster the will to draw a line in the sand. Whatever the reason, the agreement is utterly one-sided—all American concessions, plus a little European mood music.
So the U.S. should ask for the concessions it should have gotten last time: a binding assurance that U.S. protections for individual rights are not in need of European editing and that data flows will never be threatened again over this issue.
As democracies with long histories of protecting civil liberties—histories that stand up well next to those of most EU members—the United Kingdom, Australia and Canada should get the same assurances. The CJEU's only source of power to undo the deal is the GDPR and the Treaty of the European Union (which is also the source of the Charter of Fundamental Rights of the European Union). All of those instruments must yield to a binding international agreement with the United States and other democratic nations.
The big news of the week was the breathtakingly arrogant decision of the European Court of Justice, announcing that it would set the rules for how governments could use personal data in fighting crime and terrorism.
Even more gobsmacking, the court decided to impose those rules on every government on the planet – except the members of the European Union, which are beyond its reach. Oh, and along the way the court blew up the Privacy Shield, exposing every transatlantic business to massive liability, and put the EU on a collision course with China over China's most sensitive domestic security operations. This won’t end well. It's the CJEU's version of our Supreme Court's Dred Scott ruling. Paul Hughes helps me make sense of the decision.
In the interview, I talk with Darrell West, co-author of Turning Point: Policymaking in the Era of Artificial Intelligence. We mostly agree on where AI is already making a difference, where it's still hype, and how it will transform war. Where we disagree is over the policy prescriptions for avoiding the worst outcomes. I disagree with the relentless focus of the book (and every other book in recent years) on the questionable claim of AI bias, and Darrell and I have a spirited disagreement over my claim that his prescription will hide numerical racial and gender quotas in every aspect of life that AI touches.
Iranian cyberspies make pretty good training videos, Sultan Meghji tells us, but they’re not taking any bows after leaving the videos exposed online.
If you thought Twitter’s content resembled middle school, wait until you see their security measures in action. Nate Jones has the details, but my takeaway is that middle school science projects are usually handled a lot more responsibly than Twitter’s “god mode” dashboard.
BIPA, the Illinois biometric privacy act, has inspired lawsuits against users of a database assembled to reduce AI bias. Mark MacCarthy explains that the law prohibits use of biometrics (like pictures of your face) without consent. I observe that this makes BIPA the COVID-19 of privacy law. Anyone who touches this database will be infected with liability, at least if the plaintiff’s surprisingly plausible theory holds up.
Sultan reminds us that the PRC has now been caught twice requiring companies in China to use tax software with built-in malware. You know what they say: “Once is happenstance. Twice is coincidence. Three times is enemy action.” I don’t think we’ll need to wait long to see number three.
Nate gives us a former government lawyer’s take on the CIA’s new authority to conduct cyber covert action. (Yahoo, Lawfare) Ordinarily he’d be skeptical of keeping those decisions away from the White House, but in this case, he’ll make an exception. My take: If unshackling the CIA has produced the APT34 and FSB hacks and data dumps, what’s not to like?
In short takes, I mock the Justice Department spokesperson who claimed that Ghislaine Maxwell was engaged in “a misguided effort to evade detection” when she wrapped her cellphone in tin foil. And Mark and I cross swords over Reddit’s capture by the Intolerant Left. You make the call: When Reddit declares that exposing fake hate crimes as hoaxes is a form of hate speech, is that anecdotal evidence of left-wing bias or stone-cold proof of epistemic closure?
Download the 325th Episode (mp3).
Our interview today is with Bruce Schneier, who has coauthored a paper about how to push security back up the Internet-of-things supply chain: The Reverse Cascade: Enforcing Security on the Global IoT Supply Chain. His solution is hard on IoT affordability and hard on big retailers and other middlemen, who will face new liabilities, but we conclude that it’s achievable and maybe necessary. In fact, the real question is who’ll get there first, a combination of DHS’s CISA and the FTC or the California Secretary of State.
In the News Roundup Megan Stifel (@MeganStifel), Nate Jones (@n8jones81), and David Kris (@DavidKris) and I discuss TikTok's unenviable position -- holding the ball at the wrong end of the court as the clock winds down to 00:00. Every week seems to bring a new administration initiative that could hurt or kill TikTok's US business. The government’s options include a simple ban on TikTok sales to US buyers based on a finding that the company is a threat to national security or the security of Americans. That’s the applicable legal standard under Executive Order 13873; it's brand-new (the regs aren’t even final yet) but it relies on tools that have long been used under the International Emergency Economic Powers Act (IEEPA). A straightforward application of IEEPA remedies would cut TikTok off from the US market, I argue.
Meanwhile, another little-advertised but equally sweeping rule for government contractors is on its way to implementation. It will deny federal contracts not just to contractors who want to deliver certain Chinese products but also to contractors who merely use those products themselves.
Not to be outdone by the contracting officers, the Federal Trade Commission and Justice Department are attacking TikTok from a different direction – investigating claims that the company failed to live up to last year’s consent decree on the privacy of children using the app.
And, on top of everything, private sector CISOs are drawing a bead on the app, too, as Wells Fargo and (briefly) Amazon told their employees to take the app off their work phones.
It’s no surprise in the face of these developments that TikTok is working overtime to decouple itself in the public’s mind from China, including going so far as to join the rest of Silicon Valley in signaling discomfort with Hong Kong’s new security rules (and ruler). Megan and I question whether this strategy will succeed.
If Chief Justice Roberts were running for office, he couldn’t have produced a better platform than the Court’s latest tech decision – upholding most of a law that makes robocalls illegal while striking down the one part that authorizes robocalls for collection of government debt. David Kris explains.
Nate unpacks a new Florida DNA privacy law prohibiting life, disability and long-term care insurance companies from using genetic tests for coverage purposes. I express skepticism.
Nate also explains the mysteriously quiet launch of the UK-US Bilateral Data Access Agreement. Four years in the making, and neither side wanted to announce that it had taken effect – what are they worried about, I wonder?
FBI Director Wray gives a compelling speech on the counterintelligence and economic espionage threat from China. He says the bureau opens a new such case every ten hours. And right on schedule comes the prosecution of a professor charged with taking $4M in US grant money to conduct research -- for China.
David and I puzzle over the surprisingly lenient sentence handed to a former Yahoo engineer for hacking the personal accounts of more than 6,000 Yahoo Mail users looking for sexually explicit images and videos.
For This Week in Silicon Valley Speech Suppression, I out Reddit as a particularly fanatical convert to SJW orthodoxy in censoring the right, as the service apparently tells its moderators that it’s hate speech to post stories or video showing a person of color as the aggressor in a confrontation.
And Nate closes us out by drawing again from a bottomless well of problems faced by technological contact tracing.
In the News Roundup, Dave Aitel (@daveaitel), Mark MacCarthy (@Mark_MacCarthy), and Nick Weaver (@ncweaver) and I discuss how French and Dutch investigators pulled off the coup of the year this April, when they totally pwned a shady “secure phone” system used by large numbers of European criminals. Nick Weaver explains that hacking the phones of EncroChat users gave the police access to big troves of remarkably candid criminal text conversations. And, I argue, it shows a flaw in the argument of encryption defenders who say that restricting Silicon Valley encryption will send criminals to less savory companies. That's true, but sleazeball companies are inherently more prone to compromise, as happened here.
This week the EARN IT Act went from Washington-controversial to Washington consensus in the usual way. It was amended into mush. Indeed, there’s an argument that, by guaranteeing that nothing bad will happen to social platforms that adopt end-to-end encryption, the successful Leahy amendment actually makes e2e crypto more attractive than it already is under current law. That’s my view, but Mark MacCarthy still thinks the twitching corpse of EARN IT might cause harm by allowing states to adopt stricter liability for child sex abuse material. He also thinks that it won’t pass. I have ten bucks that says it will, and by the end of the year.
Dave Aitel, new to the news roundup, discusses the bad week TikTok had in its second biggest market. India has banned the app. And judging from some of the teardowns of the code, its days may be numbered elsewhere as well. Dave points to reports that Angry Birds was used to collect user information as well when it was at the height of its popularity. We wax philosophic about why advertising and not national security agencies are breaking new ground in building our Brave New World.
Mark once worked for a credit card association, so he’s the perfect person to comment on the next story, in which the founder of Gab discovers that being labeled a “hate speech” platform won’t just get you boycotted by Silicon Valley but by the credit card associations as well. Once we’re in this vein, we mine it, covering Silicon Valley’s concerted campaign to make sure Donald Trump can’t possibly repeat 2016 in 2020. He’s been deplatformed at Twitch this week for something he said in 2016. And Reddit dumped his enormous subreddit for failure to observe its censorship rules – which I point out are designed to censor only people in "the majority." I argue it’s time to defund the speech police.
Nick takes us to a remarkable Washington story. He thinks it’s about a questionable Trump administration effort to redirect $10 million in “freedom tools” funding from cryptolibertarians to Falun Gong coders. I point out that US government funds going to the cryptolibertarians were paying the salary of the notorious Jacob Appelbaum and buying tools like TAILS that have protected appalling sextortionist criminals. Really, taking the money away from those projects would be a good idea if all we did with it was to burn the bills on cold days to warm the homeless on the Mall.
Returning to This Week in Hacked Phones, Nick explains the latest "man in the middle" attack that works as soon as the phone user visits a website. Any website. Dave sets out the strikingly sophisticated and massive international surveillance system China is now aiming at Uighurs all around the world. And Nick warns of two bugs that, if you haven’t spent the weekend fixing, may already be compromising your network.
In quick hits, I mock MIT for thinking that “pedophile” is a racial or ethnic slur but confess that its researchers must know more bad words than I do. What, I ask, is a c****e, anyway? If MIT was cheating on the number of asterisks, we have an idea, but that really is cheating. If you know, please don’t tweet the answer; send it to our email.
For the first time in twenty years, the Justice Department is finally free to campaign for the encryption access bill it has always wanted. Sens. Lindsey Graham (R-S.C.), Tom Cotton (R-Ark.), and Marsha Blackburn (R-Tenn.) introduced the Lawful Access To Encrypted Data Act. (Ars Technica, Press Release) As Nick Weaver points out in the news roundup, this bill is not a compromise. It’s exactly what DOJ wants – a mandate that every significant service provider or electronic device maker build in the ability, when served with a warrant, to decrypt any data it has encrypted.
In our interview, Under Secretary Chris Krebs, head of the Cybersecurity and Infrastructure Security Agency, drops in for a chat on election security, cyberespionage aimed at coronavirus researchers, why CISA needs new administrative subpoena authority, the value of secure DNS, and how cybersecurity has changed in the three years since he took his job.
Germany’s highest court has ruled that the German competition authority can force Facebook to obtain user consent for internal data sharing, to prevent abuse of a dominant position in the social networking market. Maury Shenk and I are dubious about the use of competition law for privacy enforcement. Those doubts could also send the ruling to a still higher forum – the European Court of Justice.
You might think that NotPetya is three years in the rear-view mirror, but the idea of spreading malware via tax software, pioneered by the GRU with NotPetya, seems to have inspired a copycat in China. Maury reports that a Chinese bank is requiring foreign firms to install a tax app that, it turns out, has a covert backdoor. (Ars Technica, Report, NBC)
The Assange prosecution is looking less like a first amendment case and more like a garden variety hacking conspiracy thanks to the government’s amended indictment. (DOJ, Washington Post) And, as usual, the more information we have about Assange, the worse he looks.
Jim Carafano, new to the podcast, argues that face recognition is coming no matter how hard the press and NGOs work to demonize it. And working hard they are. The ACLU has filed a complaint against the Detroit police, faulting them for arresting the wrong man based on a faulty match provided by facial recognition software. (Ars Technica, Complaint)
The Facebook advertiser moral panic is gaining adherents, including Unilever and Verizon, but Nick and I wonder if the reason is politics or a collapse in ad budgets. Whatever the cause, it’s apparently led Mark Zuckerberg to promise more enforcement of Facebook’s policies.
In short hits, the U.S. Department of Homeland Security sent a letter to chief executives of five large tech companies asking them to ensure social media platforms are not used to incite violence. Twitter has permanently suspended the account of leak publisher DDoSecrets. (Ars Technica, Cyber Scoop). Rep. Devin Nunes (R-Calif.) was told what he must have known when he filed his case: he cannot sue Twitter for defamation over tweets posted by a parody account posing as his cow. (Ars Technica, Ruling) Nick explains why it’s good news all around as Comcast partners with Mozilla to deploy encrypted DNS lookups on the Firefox browser. And Burkov gets a nine-year sentence for his hacking.
This is the week when the movement to reform Section 230 of the Communications Decency Act got serious. The Justice Department released a substantive report suggesting multiple reforms. I was positive about many of them (my views here). Meanwhile, Sen. Josh Hawley (R-MO) has proposed a somewhat similar set of changes in his bill, introduced this week. Nate Jones and I dig into the provisions, and both of us expect interest from Democrats as well as Republicans.
The National Security Agency has launched a pilot program to provide secure DNS resolver services for US defense contractors. If that’s such a good idea, I ask, why doesn’t everybody do it? Nick Weaver tells us they can: Phil Reitinger’s Global Cyber Alliance offers Quad9 for exactly this purpose.
Gus Hurwitz brings us up to date on a host of European cyberlaw developments, from terror takedowns (Reuters, Tech Crunch) to competition law to the rise of a disturbingly unaccountable and self-confident judiciary. Microsoft’s Brad Smith, meanwhile, wins the prize for best marriage of business self-interest and Zeitgeist in the twenty-first century.
Hackers used LinkedIn’s private messaging feature to send documents containing malicious code which defense contractor employees were tricked into opening. Nick points out just what a boon LinkedIn is for cyberespionage (including his own), and I caution listeners not to display their tats on LinkedIn.
Speaking of fools who kind of have it coming, Nick tells the story of the now-former eBay executives who have been charged with sustained and imaginatively over-the-top harassment of the owners of a newsletter that had not been deferential to eBay. (Wired, DOJ)
It’s hard to like the defendants in that case, I argue, but the law they’ve been charged under is remarkably sweeping. Apparently it’s a felony to intentionally use the internet to cause substantial emotional distress. Who knew? Most of us who use Twitter thought that was its main purpose. I also discover that the law’s special protections against internet threats and harassment extend not only to service animals but also to horses of any kind. Other livestock are apparently left unprotected. PETA, call your office.
Child abusers cheered when Zoom buckled to criticism of its limits on end-to-end encryption, but Nick insists that the new policy offers safeguards for policing misuse of the platform. (Ars Technica, Zoom)
I take a minute to roast Republicans in Congress who have announced that no FISA reauthorization will be adopted until John Durham’s investigation of FISA abuses is done, which makes sense until you realize that the FISA provisions up for reauthorization have nothing to do with the abuses Durham is investigating. So we’re giving international terrorists a break from scrutiny simply because the President can’t keep the difference straight.
Nate notes that a story previewed in April has now been confirmed: Team Telecom is recommending the blocking of a Hong Kong-US undersea cable over national security concerns.
Nick and I mourn the complete meltdown of mobile phone contact tracing. I argue that from here on out, some portion of coronavirus deaths should be classified as mechanogenic (caused by engineering malpractice). Nick proposes instead a naming convention built around the Therac-25.
And we close with a quick look at the latest data dump from Distributed Denial of Secrets. Nick thinks it’s strikingly contemporaneous but also surprisingly unscandalizing.
Section 230 of the Communications Decency Act seems to inspire bipartisan antipathy.
Critics in both parties are right, at least directionally. Section 230, which dates to 1996, shields platforms from civil liability stemming from third-party content on their sites. It has been central to the success of crowdsourced platforms like YouTube, Twitter and Facebook, protecting them from potentially staggering liability for the online misbehavior of their users. But by exempting the platforms from the usual rules of liability, Section 230 is also a kind of subsidy, and one that protects some of the biggest companies in the world from expensive litigation.
Such a subsidy made more sense in 1996, when there were only 36 million internet users in the world. Now that 4.6 billion people regularly go online, it’s fair to ask why the U.S. should give internet platforms a sweeping exemption from the laws that govern everyone else. In recent years, more and more politicians on either side of the aisle have been pointedly asking this question, though perhaps for different reasons: Some Democrats still blame social media for making Trump’s election possible, while Republicans fault the industry for how publicly it has regretted its role in the 2016 campaign.
But revoking Section 230 is not a good idea. Mad as people may be at the platforms, those companies will need at least some liability protection if they’re going to keep giving us the crowdsourced content we all consume today. Platforms particularly need protection from defamation liability, which falls on both the author and the publisher. No social media company can police its content for libelous posts. The platforms simply are not equipped to evaluate which statements are true and which are false and defamatory.
That doesn’t end the debate, though. The industry may need an exemption from defamation liability, but why should it be immune if it ignores user misconduct that is entirely predictable and largely preventable? That question led Congress, on an overwhelmingly bipartisan basis, to amend Section 230 by adopting FOSTA, the Allow States and Victims to Fight Online Sex Trafficking Act of 2017. FOSTA withdrew immunity from online platforms that knowingly let their users facilitate sex trafficking. And it turned out that most platforms didn’t need protection for facilitating sex trafficking. A few online sites that depended on prostitution ad revenue went out of business, but Big Social Media continued to thrive.
So the next questions for Section 230 skeptics on both sides of the aisle should be, “Where else has Congress given social media an immunity it didn’t need? And how can policymakers chip away at this overgenerous subsidy without putting at risk the survival of social media?”
The most thoughtful answers I’ve seen come from a Justice Department report released this month. Without grandstanding, it offers several proposals that ought to have bipartisan appeal.
The report begins by acknowledging that social media companies still need protection from liability for things they can’t be expected to police, like defamation. But the report sees a vast difference between being unable to stop criminal behavior and actively promoting it, as the sex trafficking sites did before FOSTA. It draws a simple lesson from FOSTA’s success: the online platforms worth propping up don’t need immunity for facilitating or soliciting illegal conduct.
The Justice Department also suggests that platforms should be required to go further in regulating user conduct: they should face liability if they fail to take reasonable steps to prevent the distribution of child sex abuse materials and terrorist and cyberstalking content. The only thing surprising about this proposal is that Section 230 doesn’t already demand it; the law confers its immunity without asking the platforms to do anything at all in return. It’s past time to spell out exactly what is expected of platforms in exchange for the subsidy they receive. Reasonable efforts to stop things like child sex abuse are certainly not too much to ask.
Among its other suggestions for trimming Section 230 immunity, the report rejects the extreme applications of the law that have gained currency since 1996. Most notably, online platforms have argued that Section 230 creates an immunity from antitrust claims. This is outrageous. If today’s monolithic platforms use their control of the national discourse to suppress criticism of their power or praise for their competitors, they don’t deserve immunity; they deserve an injunction and treble damages. Similarly, there’s no justification for extending platforms’ defamation immunity to the point where they can ignore court libel rulings without consequences (as, for example, Yelp has done).
So far, so bipartisan. If a reformed Section 230 forces social media to be more cautious about facilitating criminal conduct online, neither party will weep. There’s less unanimity about reforming a second immunity granted by Section 230. (Yes, there are two!) The second immunity protects the platforms not when they allow speech but when they suppress it. Conservatives think (rightly, in my view) that Silicon Valley tilts against them in these decisions, whether the result is a takedown or a warning label or a “shadow ban.”
The second immunity protects online platforms from liability when they take down content that is sexual, violent, harassing or “otherwise objectionable,” as long as they act in “good faith.” In an age when everyone objects to everything, this language invites weaponization. It is only prudent for Congress to narrow the definition of “otherwise objectionable” speech so that the provision gives special protection to the platforms mainly when they’re taking down speech that violates the law.
Republicans who think they’ve been victimized by social media censorship will of course find something to support here, but so too can anyone else uncomfortable with letting a handful of Silicon Valley monoliths decide what can and can’t be said online. For starters, unlike the first immunity, which protects against the clear risk of ruinous defamation liability, it’s not even fully clear what lurking liability the second immunity is needed to head off. Successful lawsuits for refusing to publish someone else’s work are not exactly thick on the ground.
The Justice Department’s other recommendation here is to attach some more tangible standards to the statute’s requirement that content be policed in “good faith.” To meet this requirement, the department urges, content moderation policies should be stated “plainly and with particularity,” and users should receive timely notice of takedown decisions that explains “with particularity the factual basis for the restriction.” This will certainly be popular on the right, which thinks it’s unfairly targeted for suppression. But plenty of speakers on the left feel the same way. The only obvious cure for the widespread mistrust of platforms is for them to embrace greater transparency and candor. They have resisted, sometimes with good reason, sometimes without; but “trust us” is no longer a persuasive argument. This proposal would encourage them to move away from their largely opaque content moderation practices.
In short, and surely surprising to some, this Justice Department has made a real contribution toward bipartisan reform of Section 230. The temptation among Democrats will be to score partisan points by dismissing the report.
That would be a mistake.
Because, having embraced a candidate tied to the unrealistic position that “section 230 should be revoked, immediately,” Democrats are going to need more workable solutions that keep the essential core of Section 230 while cutting back the platforms’ now-unjustifiable government subsidy.
And, when they go looking for those ideas, they’re going to find that a big chunk of them are already in this report.
Our interview this week is with Chris Bing, a cybersecurity reporter with Reuters, and John Scott-Railton, Senior Researcher at Citizen Lab and PhD student at UCLA. John coauthored Citizen Lab’s report last week on BellTroX and Indian hackers for hire, and Chris reported for Reuters on the same organization’s activities – and criminal exposure – in the United States.
The most remarkable aspect of the story is how thoroughly normalized the hacking of legal and lobbying opponents seems to have become, at least in parts of the US legal and investigative ecosystem. I suggest that instead of a long extradition battle, the US should give the head of BellTroX a ticket to the US and a guaranteed income for the next few years as a witness against his customers.
In the news roundup, Nick Weaver tells the remarkable story of how Facebook funded an exploit aimed at taking down a particularly vile online abuser of young girls -- one who was rendered nearly invulnerable by his use of TAILS, the secure, thumb drive-based operating system (Vice, Gizmodo). This is a great story because it really doesn’t conform to any of the stilted narratives into which most internet security stories are usually jammed.
Well, another week, another Zoom bomb. Now the company is taking heat because it terminated several Tiananmen Square commemorative Zoom sessions after China complained (NYT, Zoom). David Kris and I don’t think Zoom had much choice about cutting off the Chinese customers. Terminating the US account holder who organized a session, however, was a bad move – and one that’s since been corrected by the company.
Nate Jones and I square off again for Round 545 on content moderation, spurred this time by reports that Sen. Josh Hawley is drafting legislation inspired by the Trump Administration’s Section 230 Executive Order. Meanwhile, several Republican senators are pushing the FCC to act on the order. Nate and I find rare bipartisan common ground on the proposal that Congress require social media companies to take down foreign government online propaganda – and maybe work with the US government to stop it at the source.
David reports on a (deservedly) obscure EU cloud independence project. It seems to have been embraced by Microsoft, which I accuse of going full AT&T – embracing government regulation as a competitive differentiator. As if to prove my point, Microsoft announces that it’s getting out of the business of doing facial recognition for the police – until it can persuade Congress to regulate its competitors.
Why are spies targeting vaccine research? Nate has the answer; he draws on the excellent Risky Biz newsletter analysis of what drives COVID-19 cyberespionage.
Nick flags the potential significance of ARM wrestling, as the UK chip designer ARM fights its JV partner for control of its Chinese joint venture. In a story that made the cut because of Twitter and LinkedIn feedback, Nick assigns a “moderate” threat label to the latest Universal Plug n Pwn exploit. (It’s only moderate because there are so many pwned IOT devices already in a position to DDOS targets of opportunity.)
In quick hits, I note that Israel has halted its controversial use of intelligence capabilities to monitor the spread of the coronavirus, but the government reserves the right to revive monitoring if a second wave shows up (JPost, Yahoo). Poor Brewster Kahle is looking like an internet hippie who fell asleep at Woodstock and woke up at Altamont. The Internet Archive is ending its program of offering free, unrestricted copies of e-books, but the publishers who sued over that program may decide to keep suing until they’ve broken his entire “digital library” model, and maybe the Internet Archive as well (NYT, Ars Technica). That would be a shame. Finally, even if you have a thousand talents, honesty may not be one of them. Charles Lieber, the Harvard University professor arrested for lying about his lucrative China thousand-talents contracts, has now been indicted on false statement charges.
Our interview with Ben Buchanan begins with his report on how artificial intelligence may influence national security and cybersecurity. Ben’s quick takes: AI is better for defense than offense, and probably even better for propaganda. The fun part of the interview, in my view, is Ben’s explanation of how to poison the AI that’s trying to hack you – and the scary possibility that China is already experimenting with poisoning Silicon Valley’s content moderation AI.
By popular request, we revisited a story we skipped last week; this time we do a pretty deep dive on the ruling that Capital One can’t claim attorney-client work product privilege in an intrusion response report that Mandiant prepared for the bank after the breach. Steptoe litigator Charles Michael and I talk about how IR firms and CISOs should respond to the decision, assuming it stands up on appeal.
Maury Shenk notes the latest of about a hundred warnings, this time from Christopher Krebs, the director of DHS’s cybersecurity agency, and the head of Britain’s GCHQ, that China’s intelligence service – and every other intelligence service on the planet – seem to be targeting COVID-19 research. I ask whether sauce for the Western goose should be sauce for the Chinese gander.
Maury takes us through the week in internet copyright fights. The most overdetermined takedown in history comes when a Trump-hating social media company combines with ideological copyright enforcement and the world’s dumbest content bots to remove a Trump campaign video tribute to George Floyd. The video is still available on Trump’s YouTube channel.
Maury and I puzzle over Instagram’s failure to provide a license to users of its embedding API. This could mean an unwelcome surprise for users who believed that embedding images, rather than hosting them directly, provides insulation against copyright claims.
Finally, much as I love Brewster Kahle, I’m afraid that his latest campaign marks a transition from internet hippie to “holy fool” – and maybe a broke one at that. His Internet Archive, the online library best known for maintaining the Internet Wayback Machine, makes scanned copies of books available to the public on terms that resemble a library’s -- one person gets one copy for a few weeks and then it goes to the next reader. The setup was arguably legal – and no one was suing – until Kahle decided to respond to COVID-19 by letting people download more books than his company had paid for. Now he faces an ugly copyright lawsuit.
Speaking of ugly lawsuits, Mark MacCarthy and Paul Rosenzweig comment on the Center for Democracy and Technology’s complaint that Trump violated tech companies’ right to free speech with his executive order on section 230. (Reuters – NYT) I doubt this lawsuit will get far.
This Week in Working the Ref: Facebook and Mark Zuckerberg are facing harsh criticism from users, competitors, and civil rights organizations for failing to censor people those groups hate. (Ars Technica – Politico). Meanwhile, Snap scores points by ending promotion of Trump’s account after concluding that his tweets about official action were incitements to violence. I can’t help wondering what Snap would have done with FDR’s December 8 “day that will live in infamy” speech.
Where is Nate Jones when you need him? He would love this story: A Twitter user sacrificed a Twitter account to show that Trump is treated differently than others by the platform. Of course, the panel notes, that’s pretty much what Twitter says it does.
In quick hits, I serve notice that no one should be surprised if Justice brings an adtech antitrust suit against Google. The Israeli government announces an attack on its infrastructure -- long after it retaliated against Iran for launching the attack. And a pretty good state-level hacker – probably not the Russians, I argue – is targeting industrial firms.
Listen to Episode 319 here: https://www.steptoe.com/podcasts/TheCyberlawPodcast-319.mp3
This episode features an in-depth (and occasionally contentious) interview with Bart Gellman about his new book, Dark Mirror: Edward Snowden and the American Surveillance State, which can be found on his website and on Amazon. I’m tagged in the book as having been sharply critical of Gellman’s Snowden stories, and I live up to the billing in this interview. He responds to my critique in good part. Gellman offers detailed insights into Edward Snowden’s motives and relationships to foreign governments, as well as how journalism – and journalistic lawyering – is done in the Big Leagues.
Our news roundup focuses heavily on the Trump Administration’s executive order on section 230 of the Communications Decency Act (Wall Street Journal – Washington Post). I end up debating all three of my co-panelists – Nate Jones, Nick Weaver, and Evelyn Douek, rejoining us on a particularly good day, given her expertise. We agree to disagree on whether Silicon Valley applies its rules in a fashion that discriminates against conservatives. More interesting is the rough consensus that Silicon Valley’s heavy influence over our speech is worth worrying about and that transparency is one of the better ways to discipline that influence. No one but me is willing to consider the possibility that the executive order represents a good step toward transparency.
Nate and I find much room to agree, though, on the tragicomedy emerging from the reauthorization of three relatively straightforward FISA provisions. Stay tuned for a House-Senate conference, plus heavy lobbying of the President.
Nate and I cover the latest in US-China decoupling – the FCC and Justice Department enthusiasm for kicking Chinese telecom firms out of the country and, in a possible new front, heavy scrutiny being given to Chinese-built transformers.
Evelyn tells us that, as a visa holder, she’s definitely hoping that the courts overturn US rules forcing visa applicants to disclose their social media handles. I predict that her hopes will be dashed.
Finally, Nick explains who needs a “quantum holographic catalyzer” to protect against 5G telecom emissions. Quick answer: No one. It’s a fake cure for a fake malady.
In the 2020s, one fears, everyone will feature in a conspiracy theory for fifteen minutes. In an effort to get in front of this development, and the inevitable Twitter mob to follow, I will now disclose the secret symbol that I anticipate will drive future conspiracy theories about the Cyberlaw Podcast.
Many readers are familiar with the podcast's logo, shown to the right.
Less familiar to readers under 70 is the image to the left. It is a 1957 cartoon published in Pravda, the Soviet Union's dominant newspaper, to mark the surprise launch of a Soviet satellite into earth orbit -- well ahead of anything the United States was able to do.
It triumphantly shows little Soviet Sputnik beaming its signal back to a smiling world (well, as close to smiling as anyone in the Soviet Union ever seemed to get in public). It was a remarkable achievement, and one that the Soviets turned into a great propaganda coup.
The similarities could be a complete coincidence uncovered by a listener who's also a Soviet history buff, but where's the fun in that? Future conspiracy buffs will surely find a secret message hidden in the podcast's choice of logo. But what message? Some may think it's a dog whistle to rally Russian revanchists to our audience. Those who believe I'm an egregious statist will no doubt see it as confirmation of my secret plan to collectivize American agriculture and liquidate the kulaks. Other theories are welcome.
Thanks to Jacob Nelson for the find.
For all the passion it has unleashed, President Trump's executive order on section 230 of the Communications Decency Act is pretty modest in impact. It doesn't do anything to undermine the part of section 230 that protects social media from liability for the things that its users say. That's paragraph (1) of section 230(c), and the order practically ignores it.
Instead, the order is all about paragraph (2), which protects platforms from liability when they remove or restrict certain content: "No provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."
This makes some sense in terms of the President's grievance. He isn't objecting to Twitter's willingness to give a platform to people he disagrees with. He objects to Twitter's decision to cordon off his speech with a fact-check warning, as well as all the other occasions on which Twitter and other social media platforms have taken action against conservative speech. So it makes sense for him to focus on the provision that seems to immunize biased and pretextual decisions to downgrade viewpoints unpopular in the Valley.
(I note here that the existence of a liberal bias in the application of social media content moderation is heavily contested, especially by commentators on the left. They point out, correctly, that the evidence of a left-leaning bias is anecdotal and subjective. Of course the same could be said of left-leaning bias in media outlets like the Washington Post or the New York Times. I'm friends with many reporters who deny such a bias exists. Yet most readers of these and other traditional media recognize that there is bias at work there -- rarely in reporting the facts, but often in deciding which stories are newsworthy, how the facts are presented, or how past events are summarized. If you are sure there's no bias at work in the mainstream press, then I can't persuade you that the same dynamic is at work on social media's content moderation teams. But if you have seen even a glimmer of liberal bias in the New York Times, you might ask yourself why there would be less in the decisions of Silicon Valley's content police, whose decisions are often made in secret by unaccountable young people who have not been inculcated in a journalistic ethic of objectivity.)
What's interesting and useful in the order's focus on content derogation is that it addresses precisely the claim that anticonservative bias isn't real. For it is aimed at bringing speech suppression decisions into the light, where we can all evaluate them.
In fact, that's pretty much all it's aimed at. The order really only has two and a half substantive provisions, and they're all designed to increase the transparency of takedown decisions.
The first provision tells NTIA (the executive branch's liaison to the FCC) to suggest a rulemaking to the FCC. The purpose of the rule is to spell out what it means for the tech giants to carry out their takedown policies "in good faith." The order makes clear the President's view that takedowns are not taken "in good faith" if they are "deceptive, pretextual, or inconsistent with a provider's terms of service" or if they are "the result of inadequate notice, the product of unreasoned explanation, or [undertaken] without a meaningful opportunity to be heard." This is not a Fairness Doctrine for the internet; it doesn't mandate that social media show balance in their moderation policies. It is closer to a Due Process Clause for the platforms. They may not announce a neutral rule and then apply it pretextually. And the platforms can't ignore the speech interests of their users by refusing to give users even notice and an opportunity to be heard when their speech is suppressed.
The second substantive provision is similar. It asks the FTC, which has a century of practice disciplining the deceptive and unfair practices of private companies, to examine social media takedown decisions through that lens. The FTC is encouraged (as an independent agency it can't be told) to determine whether entities relying on section 230 "restrict speech in ways that do not align with those entities' public representations about those practices."
(The remaining provision is an exercise of the President's sweeping power to impose conditions on federal contracting. It tells federal agencies to take into account the "viewpoint-based speech restrictions imposed by each online platform" in deciding whether the platform is an "appropriate" place for the government to post its own speech. It's hard to argue with that provision in the abstract. Federal agencies have no business advertising on, say, Pornhub. In application, of course, there are plenty of improper or unconstitutional ways the policy could play out. But as a vehicle for government censorship it lacks teeth; one doubts that the business side of these companies cares how many federal agencies maintain their own Facebook pages or Twitter accounts. And in any event, we'll have time to evaluate this sidecar provision when it is actually applied.)
That's it. The order calls on social media platforms to explain their speech suppression policies and then to apply them honestly. It asks them to provide notice, a fair hearing, and an explanation to users who think they've been treated unfairly or worse by particular moderators.
I've had many conversations with participants in the debate over the risks arising from social media's sudden control of what ordinary Americans (or Brazilians or Germans) can say to their friends and neighbors about the issues of the day. That is a remarkable and troubling development for those of us who hoped the internet would bring a flowering of views free from the intermediation of traditional sources. But you don't have to be a conservative to worry about how this unprecedented power could be abused.
In another context, I have offered a rule of thumb for evaluating new technology: You don't really know how evil a technology can be until the engineers who depend on it for employment begin to fear for their jobs. Today, social media's power is treated by the companies themselves as a modest side benefit of their astounding rise to riches; they can stamp out views they hate as a side gig while tending to the real business of extending their reach and revenue. But every one of us should wonder, "How will they use that power when the ride ends and their jobs are at risk?" And, more to the point, "How will we discover what they've done?"
Such questions explain why even those who don't lean to the right think that the companies' control of our discourse needs more scrutiny. There are no easy ways to discipline the power of Big Tech in a country that has a first amendment, but the answer most observers offer is more transparency.
We need, in short, to know more about when and how and why the big platforms decide to suppress our speech.
This executive order is a good first step toward finding out.
Our interview is with Mara Hvistendahl, investigative journalist at The Intercept and author of a new book, The Scientist and the Spy: A True Story of China, the FBI, and Industrial Espionage, as well as a deep WIRED article on the least known Chinese AI champion, iFlytek. Mara’s book raises questions about the expense and motivations of the FBI’s pursuit of commercial spying from China.
In the News Roundup, Gus Hurwitz, Nick Weaver, and I wrestle with whether Apple’s lawsuit against Corellium is really aimed at the FBI. The answer looks to be affirmative, since an Apple victory would make it harder for contractors to find hackable flaws in the iPhone.
Germany’s top court ruled that German intelligence can no longer freely spy on foreigners – or share intelligence with other western countries. The court seems to be trying to leave the door open to something that looks like intelligence collection, but the hurdles are many. Which reminds me that I somehow missed the 100th anniversary of the Weimar Republic.
There’s Trouble Right Here in Takedown City. Gus lays out all the screwy and maybe even dangerous takedown decisions that came to light last week. YouTube censored epidemiologist Knut Wittkowski for opposing lockdown. It suspended and then reinstated a popular Android podcast app for the crime of cataloging COVID-19 content. Thanks to Google, anyone can engage in a self-help right to be forgotten with a bit of backdating and a plagiarism claim. And classical musicians are taking it on the chin in their battle with aggressive copyright enforcement bots and a sluggish Silicon Valley response.
In that climate, who can blame the Supreme Court for ducking cases asking for a ruling on the scope of Section 230? They’ve dodged one from the 2d Circuit already, and we predict the same outcome in the next one, from the 9th.
Finally, Gus unpacks the recent report on the DMCA from the Copyright Lobby Off, er, the Copyright Office.
With relief, we turn to Matthew Heiman for more cyber and less law. It sure looks like Israel launched a disruptive cyberattack on an Iranian port facility. It was probably a response to Iranian cybermeddling with Israeli water systems.
Nick covers Bizarro-world cybersecurity: It turns out malware authors now can hire their own black-market security pentesters.
I ask about open-source security and am met with derisive laughter, which certainly seems fair after flaws were found in dozens of applications.
I also cover a Turing Test for the 21st Century: Can you sext successfully with an AI and not know it’s an AI? And the news from AI speech imitation is that Presidents Trump and Obama have fake-endorsed Lyrebird.
Gus reminds us that most of privacy law is about unintended consequences, like telling Grandma she’s violating GDPR by posting her grandchildren's photos without their parents' consent.
BEERINT at last makes its appearance, as it turns out that military and intelligence personnel can be tracked with a beer enthusiast app.
Finally, in the wake of Joe Rogan’s deal with Spotify, I offer assurances that the Cyberlaw Podcast is not going to sell out for $100 million.
Our interview guest, Peter Singer, continues to write (with August Cole) what he calls "useful fiction"– thrillers that explore the real-world implications of emerging technologies. His latest is Burn-In: A Novel of the Real Robotic Revolution, to be released May 26, 2020. The thoroughly researched (and footnoted!) book is a painless way to understand the social and economic changes new AI and robotic technologies will make possible and their impact on actual human beings. The interview ranges widely over these policy implications, plus a few plot spoilers.
In the News Roundup, David Kris covers the latest Congressional FISA Follies, leading me into a rant on the utter irresponsibility of subjecting national security authorities to regular expiration – and equally regular ransom demands from the least responsible elements of Congress. Speaking of FISA, it turns out that the December Pensacola shootings were hatched by al-Qaeda's Yemen franchise. Why are we only learning this in May? Because the evidence comes from an iPhone whose security Apple refused to find a way around. The FBI's self-help solution worked in the end, but not until the trail had gone cold.
US-China decoupling is in overdrive this week. Nick Weaver talks about the move by the Trump Administration to achieve semiconductor self-sufficiency – and probably-not-coincidental announcements that TSMC will build a chip factory in Arizona and that the Commerce Department has drafted a new export rule aimed at making it much harder for TSMC to build chips for Huawei. In response, China is preparing a list of unreliable US suppliers of technology. I wonder whether putting companies on the list for diversifying their supply chain out of China will have the long-term effect of making companies more reluctant to open new supply relationships with Chinese companies.
David and I note that recent US accusations of Chinese and Iranian cyber intrusions on COVID-19 research may be more than just the usual imprecations.
And Nick explains why so many US professors are going to jail for undisclosed China ties. The key word is "undisclosed."
Mark MacCarthy previews France's (and Germany's and the EU's and the UK's) increasingly tough sanctions for US social media firms that fail to remove "hate speech" and other bad content within 24 hours (or sometimes one hour). More and more, it seems, Section 230 immunity is just a local US ordinance.
Mark and Nick review the latest trial balloon from Europe's technocrats: How about a Chinese firewall for Europe, ask apparently respectable policy thinkers working for the European Parliament.
David and Nick find themselves agreeing with the latest release from DHS's CISA pouring cold water on online voting.
In quick hits, David notes the Trump administration's now routine extension of the "telecom national security" Executive Order, Nick brings us This Week in NSO Bashing, I touch on a ransomware and doxing threat that has tripped up a celebrity law firm, and Nick and I muse on why cell phone contact tracing seems about to jump the shark.
We close with a surprising catfishing story that leads us into a discussion of the relative hotness of recent NSA directors and whether it's true that being dual-hatted makes you irresistible to women.
J.P. Morgan once responded to Teddy Roosevelt’s charge that his railway trust violated federal law by telling the President, “If we have done anything wrong, send your man to see my man, and we’ll fix it up.” That used to be the gold standard for monopolist arrogance in dealing with government, but Google and Apple have put J.P. Morgan in the shade with their latest instruction to the governments of the world: You can’t use our app to trace COVID-19 infections unless you promise not to use it for quarantine or law enforcement purposes.
The two companies are able to dictate this policy because between them they have about 99% of the phone OS market. That’s more control than Morgan had of US railways, and their dominance apparently gives them the clout to send a message that improves on Morgan's: “If you think we’ve done something wrong, don’t bother to send your man; ours is too busy to meet.”
Nate Jones and I discuss Silicon Valley overreach in this episode. (In that vein, I apologize unreservedly to John D. Rockefeller, to whom I mistakenly attributed the Morgan quote.) The sad result is that what began as a promising technological adjunct to COVID-19 contact tracing has been delayed and muddled by ideological engineers in the Valley to the point where it isn’t likely to be deployed and used in a timely way.
Another lesson we draw in today’s episode is for authoritarian governments: Worry less about Cyber Command and more about NGOs. Citizen Lab has released a great paper making the case that WeChat monitors its users outside China, not to suppress their speech but to flag documents and images for later suppression inside China. Ironically, Matthew Heiman notes, Western users of WeChat who circulate human rights material are giving China’s censors the ability to hash and block that material as soon as it crosses the Great Firewall -- where it's really needed.
Meanwhile, Nate points out, Bellingcat has done for Russia’s GRU what Citizen Lab did for China. Perhaps inspired by Germany’s indictment of Dmitry Badin for hacking the Bundestag, Bellingcat doxes him to a fare-thee-well, finding his phone number, car registration, wife's home page, GRU office address, and preposterously bad password.
David Kris, meanwhile, explains the intersection of export control law and the Law of Unintended Consequences, as the US Commerce Department finds that its efforts to isolate Huawei may be excluding US firms from some standards bodies.
Anthony Anscombe joins us from Steptoe’s class action practice to unpack the recent Seventh Circuit decision on Article III standing and the second dumbest privacy law in the country – Illinois’s Biometric Information Privacy Act.
I note that Israel’s passive-aggressive Supreme Court, meanwhile, has found a second way to say, “Meh,” to the Israeli government’s use of intelligence capabilities to do contact tracing.
Matthew lays out what’s at stake as the Senate rewards the House’s “corona-cation” by trying again to pass its FISA bill. That may happen as early as today.
In short hits, every government's hackers are adding COVID-19 to their targets, going after everyone from the WHO to coronavirus researchers. And I make an effort to explain why Apple has brought a DMCA copyright lawsuit against Corellium. It’s all about the “chilling effect” on security research. And maybe one particular Five Eyes researcher, Azimuth. I make the case for Justice Department intervention on Corellium’s behalf – or at least Azimuth’s. Following up on last week's story, Banjo’s CEO finds himself canceled for his (bad) acts as a 17-year-old. And, finally, where is Jean-Paul Sartre when you need him? He’s the only one who can resolve the odd dispute over “authenticity” between Twitter and the US State Department.
Download the 315th Episode (mp3).
We begin the news with a US measure to secure the supply chain for a piece of critical infrastructure – the bulk power grid. David Kris unpacks a new Executive Order restricting purchases of foreign equipment for the grid. As with all these measures, China is the unspoken target.
Nick Weaver, meanwhile, explains the remarkable extent of surveillance built into Xiaomi phones and questions the company's claim that it was merely acquiring pseudonymous ad-related data like others in the industry.
It wouldn't be the Cyberlaw Podcast if we didn't wrangle over using mobile phones to combat the coronavirus. Mark MacCarthy says that several countries – Australia, the UK, and perhaps France – are deviating from the Gapple model for contact tracing. Several others, though, have bought in. India, meanwhile, is planning a much more government-driven approach to using phone apps to deal with the pandemic.
Mark ventures into even more contested territory in response to an article in The Atlantic by Jack Goldsmith and Andrew Woods, who argue that China has won the debate with John Perry Barlow over whether the Internet will be a force for free speech. Mark and I more or less agree, which sends me off on a rant about the growing self-confidence and ham-handedness of Big Tech as they get comfortable in their role as Guardians of What You Can't Say on the Internet. Things you can't say include plausible arguments about the still highly unsettled question of how best to deal with COVID-19 and descriptions of treatment options that have been entertained by President Trump without establishment approval, not to mention "unverified" statements (not, notably, false ones) that could cause "social unrest." Just reading such things, it turns out, will lead at least Facebook to track you down and tell you that it knows what you did and wants to correct your flirtation with thoughtcrime – a practice that earned it praise from Rep. Adam Schiff.
Nick and I note the difficulty Facebook is having getting out of FOSTA cases in Texas, and I ask why FOSTA hasn't already spelled doom for end-to-end encryption since it basically does what the EARN IT Act does, and all right-thinking Americans have been told that that act Spells Doom For End-to-End Encryption.
David explains why Amazon is facing tough new scrutiny from both parties: A Wall Street Journal article that questioned the accuracy of Amazon testimony before Congress has generated claims of perjury, a demand that Jeff Bezos testify, and suggestions that the administration open a criminal antitrust probe.
"You can't decouple from me! I'm decoupling from you!" That's the sentiment from Chinese officials, anyway, as they push forward with their own remarkably familiar supply chain security regulations. David explains that while the rules are similar to those in the United States, they're tougher and more likely to be implemented in a slow, inexorable way. And, of course, the United States is the unspoken target of them all.
In today’s interview, I spar with Harriet Moynihan over the application of international law to cyberattacks, a topic on which she has written with clarity and in detail. We disagree politely but profoundly. I make the case that international law is distinct from what works in cyberspace and is inconsistent with either clarity or effectiveness in deterring cyberattacks. Harriet argues that international law has been a central principle of the post-1945 international system and one that has helped to keep a kind of peace among nations. It’s a good exchange.
In the News Roundup, David Kris and I discuss the state of Team Telecom, which is taking unwonted (but probably not unwelcome) fire for not being tough enough on state-owned Chinese telecom firms. Predictably, Team Telecom is going with the flow, and reportedly seeking to knock four such firms out of the US market.
Maury Shenk reports that Vietnam is suspected of hacking Chinese health authorities. In response to the accusations, the Vietnamese released what looks to me like a word-for-word clone of Chinese cyberespionage boilerplate denials. Sauce for the goose is sauce for the panda.
Gapple’s design for a COVID-19 tracing app isn’t the best way to track infections, I argue, but it’s all that Google and Apple are willing to let governments do, apparently because of Silicon Valley's exquisitely refined and self-evidently superior sense of privacy. Nick Weaver disagrees, arguing that the Gapple system preserves privacy and allows health authorities all the information that they really need. Governments are mostly falling in line with Gapple's demands, either because they buy Nick’s argument or because they have decided that Silicon Valley resistance has the ability to wreck any more centralized system. France is still fighting for its vision of contact tracing. Australia seems to be adopting a lightly tweaked version of the Gapple model. And Germany seems to be surrendering.
Several senators want Cyber Command and CISA to do more to deter coronavirus hackers, David reports. More importantly, he points out that asking a military organization to attack a civilian criminal gang raises a host of legal issues that should be sorted out before rather than after the attack begins.
Failure to protect your client from Chinese government hackers might be malpractice, a DC court rules. But as Maury points out, there’s a long road from winning a motion to dismiss to winning at trial, so the lesson to be drawn from this case won’t be certain for some time.
Three years later, the Shadow Brokers leak is making news, and still providing challenges for private security researchers. Nick reports on how a three-year-old leak led to the latest revelation of an unknown APT group.
Nick and I touch on confused reporting about the latest filing in the mud fight between Facebook and NSO Group over NSO’s hacks of WhatsApp customers. NSO, Facebook says, has used a lot of US servers in those attacks. That matters for the technical question of whether NSO can be sued in the United States, but the volume (several hundred instances) also suggests to Nick that NSO did more than throw exploits over the wall to its customers – it was arguably offering espionage as a service.
David dings IBM for its handling of a researcher’s disclosure of four zero-days – and that leads to a dive into what a good bug bounty program can and can’t do.
Maury notes that Amazon is getting new scrutiny for its handling of third-party sales data, including suspicions on Congress’s part that it may have been lied to. This isn’t the last we’ll hear of this story.
In quick hits, I am nonplussed by Vimeo’s willingness to outsource its definition of “hate group” to, well, a left-wing hate group, the Southern Poverty Law Center.
Nick celebrates the end to what he calls the “Asshat meets BlackHat” affair: Crown Sterling’s “defamation” lawsuit against BlackHat has been settled.
And Nick and I mark the surprising ouster of Marc Rotenberg, EPIC’s long-time director, over what might be called excessive attention to his own COVID-19 privacy.
In this episode, I interview Thomas Rid about his illuminating study of Russian disinformation, Active Measures: The Secret History of Disinformation and Political Warfare. It lays out a century of Soviet, East European, and Russian disinformation, beginning with an elaborate and successful operation against the White Russian expatriate resistance to Bolshevik rule in the 1920s. Rid has dug into recently declassified material using digital tools that enable him to tell previously untold tales – the Soviets' remarkable success in turning opposition to US nuclear missiles in Europe into a mass movement (and the potential shadow it casts on the legendary Adm. Hyman Rickover, father of the US nuclear navy), the unimpressive record of US disinformation campaigns compared to the ruthless Soviet versions, and the fake American lobbyist (and real East German agent) who persuaded a West German conservative legislator to save Willy Brandt's leftist government. We close with two very different predictions about the kind of disinformation we'll see in the 2020 campaign.
In the news, David Kris, Nick Weaver, and I trade perspectives on the Supreme Court's grant of certiorari on the question of when it's a crime to access a computer “in excess of authority.” I predict that the Justice Department's reading of the Computer Fraud and Abuse Act will lose, but it's far from clear what will replace the Justice Department's interpretation.
Remember when the House left town without acting on FISA renewal? That's looking like a worse and worse decision, as Congress goes weeks without returning and Justice is left unable to use utterly uncontroversial capabilities in more and more cases. Matthew Heiman explains.
In Justice Department briefs, all the most damaging admissions are down in the footnotes, and it looks like that's true for the inspector general's report on the Carter Page FISA. Recently declassified footnotes from the report make the FBI's pursuit of the FISA order look even worse, in my view. But at the end of the day, the footnotes don't add much to suspicions of a partisan motivation in the imbroglio.
Speaking of IG reports, the DOD inspector general manages first to raise the possibility that Amazon was the victim of political skullduggery in the big DOD cloud computing award and then to find a way to stick it to Amazon anyway. Meanwhile, the judge overseeing the bid protest gives the Pentagon a chance for a do-over.
Matthew covers intel warnings about China-linked ‘Electric Panda’ hackers and the Syrian government spreading malware via coronavirus apps. And David notes that a Zoom zero-day is being offered for $500,000.
Nick and I mix it up, first over the Gapple infection tracing plan and their fight with the UK National Health Service and then over Facebook’s decision to suppress posts about anti-lockdown demonstrations that violate the lockdown. I think that's highly questionable and not something Facebook would be doing if the first demonstrations had been Black Lives Matter activists in Detroit – or regime protestors during the Arab Spring for that matter. Nick thinks it's the best way to treat a "zombie death cult serving haterade." So, all in all, exactly the restrained and civil exchange of views you've come to expect from the Cyberlaw Podcast.
Google and Apple have released specifications for how to use a mobile phone to track coronavirus infections. That’s good news. As the country moves toward at least partial resumption of normal life, we’re likely to need good tracking capabilities to avoid a second peak in infections, and that can’t be done without the cooperation of Google and Apple.
But the more I study the design that these companies are promoting, the less attractive it looks. To be blunt, I think the companies were so eager to avoid criticism from privacy groups and Silicon Valley libertarians that they produced a design that raises far too many barriers to effectively tracing infections. The good news, though, is that Google and Apple won’t have the last word. The two companies are creating an absolutely essential set of tools, or APIs, that will allow other tracking apps to interact with phone operating systems. They’ve also sketched what might be described as the default tracking system that they intend to implement “while maintaining strong protections around user privacy.” This default system is less essential, and a good thing too. The Google/Apple default tracking system is seriously flawed, mainly because it elevates privacy over effectiveness. Luckily, national health systems will be free to write better, more workable tracking apps that can still plug into Google and Apple operating systems without buying into the questionable choices those companies seem to favor.
Public health agencies have tracked infections as a way of stopping the spread of disease for more than a century, and not just for pandemics. It’s a routine part of the public health response to syphilis and other sexually transmitted diseases. The process is straightforward. If someone tests positive for an infectious disease, health authorities ask for a list of all his contacts while he had the disease. They track those people down, treat or quarantine them, and then get a list of their contacts. Eventually, everyone in the chain of infection is accounted for, and the chain is broken. COVID-19 is a challenge to this model because asymptomatic infection is so easy, which makes it hard to recreate all of a COVID-positive person’s contacts. Which is what makes mobile phone tracking so attractive. An app that knows where you’ve been for the last two weeks allows a reconstruction of your contacts that your memory can’t match. That’s why there’s been such an emphasis in many countries on using location data for infection tracking. But as the initial enthusiasm encountered reality, public health officials realized that location data on phones was not well-suited to the task. It wasn’t detailed enough, and gathering everyone’s locations over the length of the emergency felt unnecessarily intrusive. Singapore came up with a better approach: using not location data but an exchange of Bluetooth signals between the phones of people who were actually close to each other for a period of time. Then if one of them tested positive, the health authorities could collect the contact data and send alerts to everyone who had exchanged signals with the infected party.
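To make the mechanics concrete, here is a toy sketch in Python of the century-old chain-tracing process described above: a breadth-first walk over reported contacts until the whole chain of exposure is enumerated. The names and the dictionary standing in for interview data are made up for illustration; this is not any health agency's actual system.

```python
from collections import deque

def trace_chain(index_case, contacts_of):
    """Walk outward from one confirmed case over reported contacts,
    interviewing each newly identified person in turn, until the whole
    chain of exposure has been enumerated."""
    seen = {index_case}
    queue = deque([index_case])
    while queue:
        person = queue.popleft()
        for contact in contacts_of.get(person, []):
            if contact not in seen:  # notify or quarantine, then interview them too
                seen.add(contact)
                queue.append(contact)
    return seen

# Made-up interview data: Alice named Bob and Carol; Carol also named Dan.
contacts = {"Alice": ["Bob", "Carol"], "Carol": ["Dan"]}
print(trace_chain("Alice", contacts))  # {'Alice', 'Bob', 'Carol', 'Dan'}
```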
But Singapore’s system did not work well with the Android and iOS operating systems. It only worked if Bluetooth was actively seeking connections all the time, which often required that the phone be continually unlocked. The app still hasn’t achieved more than about twenty percent market share.
Google and Apple soon realized that for such a system to work, they would have to adapt the operating system to the needs of a disease tracking app. That’s why they are developing new APIs. At the same time, their engineers seem to have decided that they could make an app that worked like Singapore’s but had more privacy protections. That idea is the source of the default tracking app they are promoting.
It’s also the source of most of the problems with the default app. Probably the biggest mistake the default app makes is trying to build a tracing system that will be completely independent of any central health authority. That’s not realistic or wise. Real-world disease tracing systems like Singapore’s (and like those of the United States for a century) all rely on the public health authorities to identify infected people, collect their contacts, and notify those at risk. But Silicon Valley is in love with a “trust no one” approach to security that is grounded in an assumption that centralized systems can be abused if authoritarian governments get access to the data. That’s always possible, but the risk is pretty modest in this case since the only data at issue is two weeks’ worth of contacts. And any authoritarian government worth its salt could get far more location and contact data simply by subpoenaing Google’s adtech files. Nonetheless, Google and Apple are pushing a default app that does not depend on centralized administration and therefore is guarded against that particular abuse.
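For readers who want a picture of what the decentralized approach looks like under the hood, here is a deliberately simplified Python sketch. The real Apple/Google design derives rotating Bluetooth identifiers from daily keys; in this toy version each phone simply broadcasts random tokens, remembers the tokens it hears, and later checks a published list locally, so no central server ever learns who met whom.

```python
import secrets

class Phone:
    """Toy model of a phone in a decentralized proximity-tracing scheme."""
    def __init__(self):
        self.my_tokens = []        # tokens this phone has broadcast
        self.heard_tokens = set()  # tokens heard from nearby phones

    def broadcast(self):
        token = secrets.token_hex(16)
        self.my_tokens.append(token)
        return token

    def hear(self, token):
        self.heard_tokens.add(token)

def encounter(a, b):
    """Two phones within Bluetooth range exchange tokens."""
    b.hear(a.broadcast())
    a.hear(b.broadcast())

alice, bob, carol = Phone(), Phone(), Phone()
encounter(alice, bob)    # Alice and Bob were near each other
encounter(bob, carol)    # Bob and Carol were near each other

# Alice tests positive and, only if she consents, publishes her tokens.
published = set(alice.my_tokens)

# Each phone does the matching locally; nothing upstream records who met whom.
print(bool(published & bob.heard_tokens))    # True: Bob gets an exposure alert
print(bool(published & carol.heard_tokens))  # False: Carol does not
```

The sketch makes the point in the text: everything that matters happens on the handsets, which is exactly why the health authorities are left with so little to work with.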
But in preventing abuses from the center, such an app would invite many abuses from the edge. The design seems to envision a regime in which testing results are known to health authorities, but the place and identities of those who’ve come into contact with each other are never disclosed. That means the health authorities can tell someone who tests positive to notify his contacts, but they will have no way of knowing whether he follows their advice. Predictably, some people won’t. Maybe they’ll forget. Maybe they’ll be worried about retaliation from those they’ve exposed. And maybe they’ll just be freeloaders who wanted to be notified if they were at risk but have no interest in notifying others. By leaving the decision to the user, and even creating additional barriers to notification with a separate “consent to notify” hurdle, the default design from Google and Apple compromises public health in the name of privacy.
Other abuses are made possible by the designers’ preference for keeping information from the authorities. In the default design, those getting a notification are told only that they’ve been near an infected person, but not who and not where. Anonymity breeds irresponsibility, as many Zoom users learned in recent weeks. Surely some irresponsible app users will be delighted to cause random grief by sending out false infection alerts to everyone with whom they’ve been in contact. Google and Apple say that they can prevent that by having public health authorities verify test results. But a verification process, such as cryptographic signing of test results, is likely to add substantially to notification friction.
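What would that verification look like? Something like the following sketch, which uses Python's cryptography library to have a health authority sign a made-up positive-test record and an app refuse to publish exposure tokens for unsigned reports. The record format and the workflow are my own assumptions for illustration; the point is simply that every signature check is one more step between a positive test and a warning reaching anyone.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The health authority's long-term signing key; the public half would be
# baked into the app.
authority_key = Ed25519PrivateKey.generate()
authority_pub = authority_key.public_key()

# A made-up positive-test record; a real scheme would use a structured token.
record = b"test_id=12345;result=positive;date=2020-06-01"
signature = authority_key.sign(record)

# Before letting the user publish exposure tokens, the app checks the lab's
# signature, which blocks prank "I'm infected" reports from unverified users.
try:
    authority_pub.verify(signature, record)
    print("verified: release exposure notifications")
except InvalidSignature:
    print("rejected: unverified report")
```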
The same effort to keep data out of the hands of a central administrator leads the Google and Apple engineers to store and process all contact data locally on the phone. This is bad news for anyone who loses or has to reset their phone. It looks as though, under the default Google/Apple design, those already unfortunate people also lose their contact records and thus run the risk of missing a notice that they may have been exposed. Indeed, as far as I can see, the design doesn’t even allow people to store a backup of their contacts in the cloud. Similarly, suppose a person using the app comes to the health system’s attention by collapsing in public, or dying before they make it to the hospital, as all too many victims have. In that case, it would be impossible to trace the person’s contacts or notify those affected. Apple’s famously law-enforcement-hostile phone design will ensure that the authorities cannot open either the phone or the app.
That’s a lot of dysfunction to suffer just to avoid the theoretical risks of a centralized infection tracing system like the ones we’ve been using for the last hundred years.
A related problem is that Google and Apple seem to assume that infection tracing needs the same kinds of user consent as a weather or traffic app. (And, to be fair, the designers may believe that more privacy features will induce more users to download the app.) But it’s increasingly clear that to be effective, a tracing app is going to require a market share well over 50%. That can’t be achieved by throwing something into the app store and waiting for users to find it. Even though Singapore’s app has substantial privacy protections and its populace is generally compliance-minded, the app is being used by less than a fifth of the population. In fact, as I’ve said before, it’s likely that mobile phone infection tracking will only work if Google and Apple are required to install such an app automatically on Androids and iPhones, the way Apple Maps or iTunes updates are auto-downloaded. (Apple is, after all, famous for having automatically sent all its users a U2 album that none of them ordered.) Google and Apple have said that they intend to add a contact tracing platform to their phone operating systems, though it’s not clear that the feature will be on by default.
At the end of the day, the purpose of infection tracing is to notify people who may have been infected. Unfortunately, without a lot of changes, the Google/Apple system will make notification a lot less likely. First, in an effort to reduce the role of central authorities, the app design separates testing from contact tracing, so the health authorities who do the testing can’t confirm that the contacts got notice of the result. Instead, the design imagines that someone who tests positive will be given the option of sending notice. That means friction every step of the way -- a user who tests positive must remember to send the notice, open the app, navigate the “do you really consent?” screen, and wait for the notification to upload. And all of those steps depend entirely on the user’s sense of social responsibility. That’s just a bad idea. At a minimum, public health authorities need to be able to tell who tested positive but didn’t send notifications. That would allow the authorities to ping those who didn’t notify others, reminding them to do so. It would also allow the health authority to impose fines or other sanctions on those who use the system to protect themselves but not others.
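As a rough illustration of the minimum visibility argued for here, the sketch below assumes the health authority keeps two lists -- confirmed positives and notifications actually sent -- and works out who needs a reminder. The names are hypothetical, and nothing like this exists in the default design.

```python
# Hedged sketch of the visibility the paragraph above argues for: the health
# authority compares confirmed positives against notifications actually sent
# and reminds (or sanctions) the gap. All identifiers are hypothetical.
def needs_reminder(confirmed_positives, notifications_sent):
    """People who tested positive but never triggered an alert to their contacts."""
    return sorted(set(confirmed_positives) - set(notifications_sent))

positives = ["patient-17", "patient-23", "patient-31"]
notified  = ["patient-23"]
print(needs_reminder(positives, notified))  # ['patient-17', 'patient-31']
```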
Finally, Google and Apple’s assumption that tracking apps should keep location data away from health authorities leads to absurd results. It’s quite true that in most cases, there’s no need to record the exact location where each contact occurred. What matters most is just whether one person was in close proximity to another. But there are times when location matters too. Someone who gets a notice and can’t figure out how the contact occurred might want more data before quarantining himself; maybe there was a wall that Bluetooth didn’t recognize between him and the infected party, or perhaps they were standing outside with a breeze blowing between them. I suspect that few people will thank Google and Apple if we end up with an American tracking app that assumes that it’s better for them to endure an unnecessary two-week quarantine than for them to tell health authorities the location of their potential infection.
Being able to get precise location data in at least some cases will be especially important if commerce resumes and the app is not universally adopted. Suppose that an infected person spends time at a Starbucks, where a quarter of the people in the shop have installed the Google/Apple app. When the infected person sends out a notice, only a quarter of the customers will get it. But some of them will suspect that the contact was at Starbucks. Without knowing the infected person's location data, authorities would have no idea where the exposures occurred. What if health authorities want to find the other customers, plus the barista? It wouldn’t be that hard if they knew the place and time of the infected person's visit. The barista’s hours are already recorded by Starbucks, and customers who were there at the time could be found through payment records. But it appears that the Google/Apple app makes no provision for public health authorities being able to identify hotspots by time and location. In fact, in the world the two companies envision, potentially infected parties themselves apparently won’t know for sure where their exposure occurred.
These are a lot of problems, most of them stemming from design assumptions that start in the wrong place, privileging decentralization and anonymity over the goal of defeating the virus. Don’t get me wrong. Google and Apple deserve a lot of credit for having stepped up to the challenge of providing essential APIs. We’re going to need their help to get a tracing app onto phones all over the country. But the companies also have blind spots, and a fear of getting crosswise with privacy advocates is one of them. That means we can’t simply trust them to set the parameters for a tracing app that does what we want in the real world.
And we don’t have to. Pandemics force us to trust elected leaders with great power over individuals and businesses. State governments have ordered people to stay home even at the cost of their jobs. Governors have been given authority to seize private supplies of ventilators and to close otherwise lawful businesses and gatherings -- even church services on Easter. And as I’ve written before at greater length, 40 states have adopted a post-9/11 public health emergency law that gives governors broad authority to intervene in private industry to respond to crises. This authority extends even to companies that deal in “communication devices.” If they can mandate that people sacrifice jobs and access to in-person religious services, surely state governors can also order companies like Google and Apple to build an interface that works with tracking apps that actually fit society’s needs and not Silicon Valley’s conventional wisdom.