Skating on Stilts -- the award-winning book
Now available in traditional form factor from Amazon and other booksellers.
It's also available in a Kindle edition.
And for you cheapskates, the free Creative Commons download is here.
Posted at 08:50 PM in Random posts | Permalink | Comments (5)
GPT-4's rapid and tangible improvement over ChatGPT has more or less guaranteed that it or a competitor will be built into most new and legacy IT products. Some of those applications will be pointless, but some will change users' worlds. In this episode, Sultan Meghji, Jordan Schneider, and Siobhan Gorman explore the likely impact of GPT-4, from Silicon Valley to China.
Kurt Sanger joins us to explain why Ukraine's IT Army of volunteer hackers creates political, legal, and maybe even physical risks for the hackers and for Ukraine. This may explain why Ukraine is looking for ways to "regularize" their international supporters, and probably to steer them toward defending Ukraine's infrastructure rather than attacking Russia's.
Siobhan and I dig into the Biden administration's latest target for cybersecurity regulation -- cloud providers. I wonder if there isn't a bit of bait and switch in operation here. The administration seems at least as intent on regulating cloud providers to catch hackers as to improve defenses.
Say this for China: It never lets a bit of leverage go to waste, even when it should. Case in point: To further buttress its nine-dash-line claim to the South China Sea, China is demanding that companies get Chinese licenses to lay submarine cable in the contested territory. That, of course, incentivizes the laying of cables much further from China, out where they'll be harder for the Chinese to deal with in a conflict. That doesn't sound smart, but some Beijing bureaucrat will no doubt claim it as a win for the wolf warriors. Ditto for the Chinese ambassador's response to the Netherlands restricting chip-making equipment sales to China, which boiled down to "We will make you pay for that. We just don't know how yet." The U.S. is not always good at dealing with other countries or the private sector, so it's nice to be competing with a country that is demonstrably worse at it.
The Securities and Exchange Commission has gone from catatonic to hyperactive on cybersecurity. Siobhan notes its latest 48-hour incident reporting requirement and the difficulty of reporting anything useful in that time frame.
Kurt and Siobhan bring their expertise as parents of teens and aspiring teens to the TikTok debate.
I linger over the extraordinary and undercovered mess created by "18F" -- the General Services Administration's effort to bring Silicon Valley's can-do culture to the government's IT infrastructure. It looks like they managed to bring Silicon Valley's arrogance, its political correctness, and its penchant for breaking things but forgot to bring either competence or honesty. Login.gov was 18F's online identity verification service for federal agencies disbursing benefits or otherwise dealing with the public. 18F sold it to a host of federal agencies that wanted to control fraud during the pandemic. But it never delivered the biometric checks that federal standards required. First, 18F lied to its federal customers about how or whether it was using biometrics. When it finally admitted the lie, it brazenly claimed it was not checking because the technology was, wait for it, racially biased. This claim ran counter to the only available evidence (GSA claimed that it did its own bias research, research that was apparently never published). Oh, and it refused to give back the $10 million it charged its victims, arguing that the work it did on the project cost more than it billed them, so they didn't lose anything. Except for the fraud that bad identity checks likely enabled in the middle of COVID handouts, a loss everyone has been decidedly incurious about. And one more thing: Among the victims of 18F's scam was Senator Ron Wyden (D-Ore.), who touted Login.gov and its phony biometric checks as the "good" alternative to ID.me, a private identity-checker that encountered political flak over its contract with the IRS. Bottom line advice for 18F alumni: It's not too late to start scrubbing the entity from your LinkedIn profile.
The Knicks have won some games. Blind pigs have found some acorns. But Madison Square Garden (and Knicks) owner Jimmy Dolan is still pouring good money into his unwinnable but highly entertaining fight to use facial recognition against lawyers he does not want in the Garden. Kurt offers commentary, and probably saves himself the cost of Knicks tickets for all future playoff games.
Finally, in listener feedback, I give Simson Garfinkel's answer to a question I asked (and should have known the answer to) in episode 448.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
This episode of the Cyberlaw Podcast kicks off with the sudden emergence of a serious bipartisan effort to impose new national security regulations on what companies can be part of the U.S. information technology and content supply chain. Spurred by a stalled CFIUS negotiation with TikTok, Michael Ellis tells us, a dozen well-regarded Democratic and Republican senators have joined to endorse the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which authorizes the exclusion of companies based in hostile countries from the U.S. economy. The administration has also jumped on the bandwagon, making the adoption of some legislation on the topic more likely than in the past.
Jane Bambauer takes us through the district court decision upholding the use of a "geofence warrant" to identify January 6th rioters. We end up agreeing that this decision (and the context) turned out to be the best possible for the Justice Department, silencing the usual left-leaning critics of law enforcement technological adaptation.
Just a few days after issuing a cybersecurity strategy that calls for more regulation, the administration is delivering what it called for. The Transportation Security Administration (TSA) has issued emergency cybersecurity orders for airports and aircraft operators that, I argue, take the regulatory framework from a few baby steps to a plausible set of minimum requirements. Things look a little different in the water and sewage sector, where the regulator is the Environmental Protection Agency (EPA) – not known for its cybersecurity expertise – and the authority to regulate is grounded, if at all, in very general legislative language. To make the task even harder, EPA is planning to impose its cybersecurity standards using an interpretive rule, against a background in which Congress has done just enough cybersecurity legislating to undermine the case for adopting a broad interpretation.
Jane explores the story that Google was deterred from releasing its impressive AI technology by fear of bad press. That leads us to a meditation on politics inside companies with a guaranteed source of revenue. I offer hope that Google's fears about politically incorrect AI will infect Chinese tech firms.
Jane and I reprise the debate over the United Kingdom's Online Safety Bill and end-to-end encryption, which leads to a poli-sci tour of European policymaking institutions.
The other cyber and national security news in Congress is the ongoing debate over renewal of section 702 of the Foreign Intelligence Surveillance Act (FISA), in which it appears that the FBI scored an own goal. An FBI analyst ran unauthorized searches in the 702 database for intelligence on one of the House intelligence committee's moderates, Rep. Darin LaHood, R-Ill. Details are sketchy, Michael notes, but the search was disclosed by Rep. LaHood, and it is bound to have led to harsh questioning during the FBI director's classified testimony. Meanwhile, at least one member of the Privacy and Civil Liberties Oversight Board is calling for what could be a crippling "reform" of 702 database searches.
Jane and I unpack the controversy surrounding the Federal Trade Commission's investigation of Twitter's compliance with its most recent consent decree. On the law, Elon Musk's Twitter is on its back foot. On the political front, however, the two organizations are more evenly matched. Chances are, both parties are overestimating their own strengths, which could foretell a real donnybrook.
Michael assesses the stories saying that the Biden administration is preparing new rules to govern outbound investment in China. He is skeptical that we'll see heavy regulation in this space.
In quick hits,
I've finished the second in what I hope will be a series of posts exploring the risk of partisan abuse of U.S. intelligence authorities. (For the other, see this opinion piece, coauthored with Michael Ellis.) Section 702 renewal is on the agenda for Congress in 2023, and building support for renewal means taking seriously complaints on the right that intelligence agencies were affected by partisan bias in their treatment of Donald Trump's candidacy, presidency, and staff. This means asking whether past practices created at least an appearance or a risk of partisan abuse -- and thus whether any intelligence reforms should address those risks.
In my latest look at the issue, in Lawfare, I note that "respectable" opinion is finally acknowledging that press stories about a Trump-Russia connection may have been slanted by mainstream media, and I examine the role that media bias played in the early stages of the FBI's investigation of Trump world. A few excerpts below:
The Trump-Russia media saga began with a bit of journalistic malpractice. As the GOP convention was preparing to nominate Trump, Gerth tells us, the Washington Post ran one of the early attacks on Trump for kowtowing to Russian interests: a July 18 opinion column from Josh Rogin headlined, "Trump campaign guts GOP's anti-Russian stance on Ukraine." It was wrong. In Gerth's understated words:
The story would turn out to be an overreach. Subsequent investigations found that the original draft of the platform was actually strengthened by adding language on tightening sanctions on Russia for Ukraine-related actions, if warranted, and calling for "additional assistance" for Ukraine. What was rejected was a proposal to supply arms to Ukraine, something the Obama administration hadn't done.
A critical part of the FBI's case against Page was the claim that his many contacts with Russians were part of what its affidavit called "a well-developed conspiracy of cooperation" between the Trump campaign and the Russian government. That's a remarkable claim, and it naturally gives rise to the question of exactly what the parties did to advance this "well-developed conspiracy." The FBI's answer was the GOP platform change—it was presented as a clear step by Trump's associates to move GOP policy closer to protecting Putin's interests.
As evidence of this crucial element, the affidavit relied on what it called an "article in an identified news organization" (that is, Rogin's op-ed) and "assesse[d] that, following Page's meetings in Russia, Page helped influence [the Republican Party] and [the Trump] campaign to alter their platforms to be more sympathetic to the Russian cause." That "assessment" had no basis in fact or any independent investigation; it relied entirely on the inaccurate opinion pieces in the Post, the Times, and the Atlantic.
I go on to suggest FISA reforms to address the problems surfaced by an FBI performance in the Crossfire Hurricane investigation that was disappointing at best -- and a partisan abuse of FISA at worst. You can read the whole thing here: https://www.lawfareblog.com/vicious-cycle-how-press-bias-fed-fisa-abuse-trump-russia-panic
Our last episode of the Cyberlaw Podcast (No. 446) was a long interview on the U.S. national cybersecurity strategy with Chris Inglis, until recently the national cybersecurity director. So this episode 447 focuses only on the most controversial recommendation in the strategy – liability for certain security flaws. Nick Weaver, Maury Shenk and I explore the pros and cons of what's become known as cybersecurity's third rail.
Turning to the U.K., Maury brings us up to date on the pending Online Safety Bill. Signal has threatened to "walk" out of the U.K. if the bill's protections for children threaten its end-to-end encryption ideology. Far from being deterred, members of Parliament are pushing for a tougher bill, and the government is being forced to accommodate them with tough criminal penalties for Big Tech execs who do not take their obligations sufficiently seriously.
Is the Biden administration getting ready to impose restrictions on outbound U.S. investment in critical Chinese industries? The Wall Street Journal says it is, but Justin Sherman thinks that the administration may just be meeting Congress's requirements for a briefing on the topic. Meanwhile, I wonder whether we've got this tech control thing backwards. If ASPI, the Australian Strategic Policy Institute, is right, the U.S. has already lost the lead to China in 37 of 44 critical new technologies, so what we really need to worry about is Chinese restrictions on U.S. access to its technology.
Maury and I explore "woke AI," the notion that the "ethical guardrails" built into ChatGPT and other engines are simply disguised forms of political bias. Maury notes that Justice Gorsuch has questioned whether AI engines might have the protection of section 230. That seems like a legally dubious proposition to us, but don't underestimate the willingness of Big Tech's lawyers to argue the point.
TikTok suffered a setback on the Hill last week, as Republicans passed out of committee a bill effectively banning the app. It was a party line vote, showing how what had been a bipartisan issue is now fraying into partisanship, at least in the House. In the Senate, though, Senate Intelligence Committee Chair Mark Warner is working toward a similar outcome on a bipartisan basis, creating real jeopardy for the company over the next two years. If anyone should be hoping China does not sell arms to Russia, I suggest, it is TikTok.
Speaking of China, the most eye-opening story of the week comes from the Globe and Mail, which breaks the story of how aggressively China tried, with real success, to tilt the 2021 Canadian national election toward the Liberals, using tactics we are bound to see in other countries. My favorite? Persuading China-friendly companies to hire students from China in Canada and then release them to "volunteer" for the CCP's favored candidate.
In other China news, Maury and Nick note that Elon Musk's remarks lending credibility to the Wuhan lab leak theory drew a brushback pitch from official Chinese sources, and Nick and I puzzle over stories that China plans to launch 13,000 satellites to keep up with Starlink. Meanwhile, Twitter's revenue continues to sink. I think we can see bottom for the company, but Nick thinks not.
Nick overcomes my skepticism about Meta's deployment of a tool for taking down nude photos and worse. It is a variant of existing methods, but it has the advantage of not requiring victims to send their nude photos to Meta.
Justin responds to my criticism a few episodes back of Duke's study claiming that Americans' mental health data is being sold by data brokers.
In quick hits,
Chris Inglis was the first National Cyber Director at the White House, after a long and highly successful career at the National Security Agency (ending with seven years as Deputy Director). In his role as Cyber Director, he grew the office from one employee to nearly its planned strength of 100 staffers. He also oversaw the drafting of the first National Cybersecurity Strategy, leaving office just a couple of weeks before the strategy was publicly released.
So what does he think now about the strategy, its reception, and its future? I sat down with him to review the strategy's recommendations – especially the hardest ones. Chris speaks candidly about the need for (and the limitations on) cybersecurity regulation, the wide cybersecurity gaps between different sectors of our economy, the reasons for rethinking liability for cybersecurity failures, and how the Office of the National Cyber Director can work with the Deputy National Security Adviser for Cyber and Emerging Technology.
As promised, the Cyberlaw Podcast devoted half of this episode to an autopsy of Gonzalez v. Google LLC, the Supreme Court's first opportunity in a quarter century to construe section 230 of the Communications Decency Act. And an autopsy is what our panel – Adam Candeub, Gus Hurwitz, Michael Ellis and Mark MacCarthy – came to perform. I had already laid out my analysis and predictions in a separate article for the Volokh Conspiracy, contending that both Gonzalez and Google would lose.
All our panelists agreed that Gonzalez was unlikely to prevail, but no one followed me in predicting that Google's broad immunity claim would fall, at least not in this case. The general view was that Gonzalez's lawyer had hurt his case with shifting and opaque theories of liability, that Google's arguments raised concerns among the Justices but not enough to induce them to write an opinion in such a muddled case.
Evaluating the Justices' performance, Justice Neil Gorsuch's search for a textual answer drew little praise and some derision, while Justice Ketanji Brown Jackson won admiration even from the more conservative panelists.
More broadly, there was a consensus that, whatever the fate of this particular case, the Court will find a way to push the lower courts away from a sweeping immunity for platforms and toward a more nuanced protection. But because returning to the original intent of section 230 is not likely after 25 years of investment based on a lack of liability, this more nuanced protection will not have much grounding in the actual statutory language. Call it a return to the Rule of Reason.
In other news, Michael summed up recent developments in cyber war between Russia and Ukraine, including imaginative attacks on Russia's communications system. I ask whether these attacks – which are sexy but limited in impact – make cyber the modern equivalent of using motorcycles as a weapon in 1939.
Gus brings us up to date on recent developments in competition law, including a likely Department of Justice challenge to Adobe's $20 billion Figma deal, a new airline merger challenge, the beginnings of opposition to the Federal Trade Commission's (FTC) proposed ban on noncompete clauses, and the third and final nail in the coffin of the FTC's challenge to the Meta-Within merger.
In European cyber news, the European Union is launching a consultation designed to make U.S. platforms pay more of European telecom networks' costs. Adam and Gus note the rent-seeking involved but point out that rent-seeking in U.S. network construction is just as bad, but seems to be focused on extracting rents from taxpayers instead of Silicon Valley.
The EU is also getting ready to fix the General Data Protection Regulation (GDPR) -- fix in the sense that gamblers fix a prize fight, as it will make sure Ireland never again wins a fight with the rest of Europe over how aggressively to extract privacy rents from U.S. technology companies.
I am excited about Apple's progress in devising a blood glucose monitor that could go into a watch. Adam and Gus tell me not to get too excited until we know how many roadblocks the Food and Drug Administration (FDA) will erect to the use and analysis of the monitors' data.
In quick hits,
The Supreme Court's oral argument in Gonzalez v. Google left most observers in a muddle over the likely outcome. In three hours of questioning, the Justices defied partisan stereotypes and asked excellent questions, but mostly just raised doubts about how they intended to resolve the case. I had the same problem while listening to the argument for a Cyberlaw Podcast episode (No. 445) that will be mostly devoted to Gonzalez.
But after going back to look at each Justice's questions separately, I conclude that we do in fact have a pretty good idea how the case will turn out: Gonzalez will lose, and so will Google, whose effort to win a broad victory is likely to be killed – and most enthusiastically by the Court's left-leaning Justices.
First, a bit about the case. Gonzalez seeks to hold Google liable because the terror group ISIS was able to post videos on YouTube, and YouTube recommended or at least kept serving those videos to susceptible people. This contributed, the complaint alleges, to a terror attack in Paris that killed Gonzalez's daughter. Google's defense is that section 230 makes it immune from liability as a "publisher" of third-party content, and that organizing, presenting, and even recommending content is the kind of thing publishers do.
I should say up front that I am completely out of sympathy with Google's position. I was around when section 230 was adopted; it was part of the Communications Decency Act, which was designed to protect children from indecent content on the internet. The tech companies, which were far from being Big Tech at the time, hated the decency part of the bill but couldn't beat it. Instead, they tried to turn the decency lemon into lemonade by asking for relief from a recent defamation ruling that online services who excluded certain content were the equivalent of publishers under defamation law and thus liable for any defamatory third-party content they distributed. Services like AOL and Compuserve pointed out the irony that they were being punished for their effort to build family-friendly online communities -- the opposite of what Congress wanted. "If you want us to exclude indecent content," they argued to Congress, "you have to immunize us from publisher liability when we do that." That was and is a compelling argument, but only for undoing publisher liability under defamation law. To my mind, that's exactly what Congress did when it said, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
But that's not how the courts have read section 230. Seduced by a transformative technology and by aggressive, effective advocacy, the courts read this language to immunize online providers for doing anything that publishers can be said to do. This immunity goes far beyond defamation, as the Gonzalez case shows. There, Google said it should be immune because deciding what content to show or even recommend to users is the kind of thing a publisher does. Of course, carried to its logical extreme, this means that what are now some of the richest companies in the world cannot be held liable even if they deliberately serve how-to-kill-yourself videos to the depressed, body-shaming videos to the anorexic, and ISIS videos to extremists.
So, why not just correct the error, narrow the statutory interpretation to its original purpose, and let Congress actually debate and enact any other protections Big Tech needs? Because, we're told, these companies have built their massively profitable businesses on top of the immunity they sold to the courts. To change now, after twenty-six years of investment, would be disruptive – perhaps even catastrophic. That in a nutshell is the dilemma on whose horns the Court twisted for three hours.
It is generally considered professional folly for appellate lawyers to predict the outcome of a case based on the oral argument. In fact, this is only sometimes true. Judges, and Justices even more so, usually want feedback from counsel on the outcome they're considering. It's hard to get that feedback without telling counsel what they have in mind. That said, some judges believe in hiding the ball, and some just like to ask tough questions. And in complex cases, sometimes the Justices' initial inclinations yield to advocacy in conference or in drafts circulated by other Justices.
That latter fate could be in store for the Gonzalez case. So there's a good chance I'll end up guessing wrong about the outcome. But considering how muddled the argument seemed, I was surprised how much can be learned by going back through each Justice's questions to see what each of them thinks the case is about. It turns out that most of them were very clear about what rules of decision they were contemplating.
Justice Gorsuch. Let's start with Justice Gorsuch. I believe we know what his opinion will say. He laid his theory out for every advocate. He will again indulge his bent for finding the answer in the text of the statute. Briefly, he noted that Congress defined the entities eligible for immunity to include providers of software to "filter, screen, allow or disallow content" and to "pick, choose, analyze, or digest content." Bingo, he seemed to say, there's your textualist solution to the case: Congress told us what publishers do and thus what should be immune. No one, with the possible exception of Justice Kavanaugh, found this particularly compelling, mainly because it's an extraordinarily broad immunity, protecting even platforms that boost content for the worst of motives – to harm competitors, say, or to denigrate particular political candidates or ethnic groups. (The notion has serious technical flaws as well, but I'll pass over them here.)
Justice Kavanaugh. Justice Gorsuch's embrace of broad immunity suggests that he sees this case through a business conservative's eyes: The less liability the state imposes on business, the better. In this, he was joined most clearly by Justice Kavanaugh, who reverted several times to the risk of economic disruption if a narrower reading of section 230 were adopted.
Chief Justice Roberts. If you're looking for a third business conservative on this Court, Chief Justice Roberts is the most likely candidate. And he clearly resonates to Big Tech's concerns about unleashing torrents of litigation; he's reluctant to impose liability for content selection where the criteria for selection are generally applicable (e.g., the site just gives the user what she asks for). But he also recognizes that it's the platform that has the power to select what the user sees, and he wonders why the platform shouldn't be responsible for how it uses that power.
The Chief Justice's qualms about a sweeping immunity, however, are muted. They are expressed much more directly by the Justices on the left.
Justice Sotomayor. Justice Sotomayor returns time and again to the idea that the power to select and recommend can be abused – by encouraging discrimination on racial or ethnic grounds, for example. Her hypotheticals include "an Internet provider who was in cahoots with ISIS" to encourage terrorism and a dating app "that won't match black people to white people." She's not willing to narrow the immunity back to what Congress probably intended in 1996 (spoiler: none of the Justices is), but she bluntly tells the Solicitor General's lawyer what she wants: "Let's assume we're looking for a line because it's clear from our questions we are, okay?" She wants an immunity for what could be called "good" selection criteria – those that are neutral, unbiased, or general-purpose – but not for "bad" criteria.
Justice Jackson. If anyone supports the idea of returning to the 1996 intent, it's Justice Jackson, who tells Google's lawyer that "you're saying the protection extends to Internet platforms that are promoting offensive material…. exactly the opposite of what Congress was trying to do in the statute." At another point, she signals clearly that she disagrees with the Google position that any selection criteria it chooses to use are immune from suit. In another colloquy, she downplays the risk of business disruption as just a "parade of horribles." Not all of her questions sound this theme, but there are enough to conclude that she's close to Justice Sotomayor in her skepticism about the sweeping immunity Big Tech wants.
Justice Kagan. Justice Kagan also sees that section 230 doesn't really fit the modern internet. The Court's job, she seems to say, is "to figure out how ... this statute which was a pre-algorithm statute applies in a post-algorithm world." She thinks the plaintiff's reading could "send us down the road such that 230 really can't mean anything at all." She's daunted by the difficulty of refashioning the statute to avoid over-immunizing Big Tech:
I don't have to accept all Ms. Blatt's "the sky is falling" stuff to accept something about, boy, there is a lot of uncertainty about going the way you would have us go, in part, just because of the difficulty of drawing lines in this area and just because of the fact that, once we go with you, all of a sudden we're finding that Google isn't protected. And maybe Congress should want that system, but isn't that something for Congress to do, not the Court?
At the same time, she sees, the immunity Google wants would allow Google to knowingly boost a false and defamatory video and to refuse to take it down. She asks, "Should 230 really be taken to go that far?" I'm guessing that she thinks the answer is "no" and that she, like Justice Sotomayor, is just looking for a line that gets her there. For purposes of the count, let's put her in the middle with the Chief Justice.
So far, the Justice-by-Justice breakdown for giving Google the sweeping immunity it wants is a 2-2-2 split between the left and right with the Chief Justice and Justice Kagan in the middle. That sounds familiar. But it's about to get weird. That's because the three remaining Justices are at least as much social as business conservatives. And Big Tech has a long track record of contempt for social conservatives.
Justice Thomas. You'd think that Justice Thomas, who's been grumbling about section 230 for this reason for years, would have been an easy vote against Google. He clearly has doubts about Google's sweeping claim of immunity for any selection criteria. At the same time, his questions show some sympathy for protecting Google's selection criteria, as long as they're generic and neutral. I still think he'll be a vote to limit the immunity, assuming someone finds a dividing line between good selection criteria and bad.
Justice Alito. Justice Alito is the only Justice to show a hint of conservative resentment at the rise of Big Tech censorship in recent years. He notes that Google could label and preferentially distribute what it considers "responsible" news sources and he questions why such curation should be immune from liability: "That's not YouTube's speech?" he asks. "The fact that YouTube put those at the top, so those are the ones I'm most likely to look at, that's not YouTube's speech?" He also raises the specter of deliberate distribution of bad content: "So suppose the competitor of a restaurant posts a video saying that this rival restaurant suffers from all sorts of health problems, it -- it creates a fake video showing rats running around in the kitchen, it says that the chef has some highly communicable disease and so forth, and YouTube knows that this is defamatory, knows it's -- it's completely false, and yet refuses to take it down. They could not be civilly liable for that? ... You really think that Congress meant to go that far?"
And, in another sign that Big Tech may have overplayed its claim that only a sweeping immunity protects the internet from apocalypse, his last question is "Would … Google collapse and the Internet be destroyed if YouTube and, therefore, Google were potentially liable for posting and refusing to take down videos that it knows are defamatory and false?"
By my count, that leaves the Court roughly divided 2-2-4 on whether to give Google a sweeping immunity, with two business conservatives all in for Google (Gorsuch, Kavanaugh), two Justices waffling (Roberts, Kagan), and what might be called a "populistish" grouping of Sotomayor, Jackson, Alito, and (probably) Thomas.
Justice Barrett. Is Justice Barrett a fifth vote for that unlikely left-right alignment? Most likely. Like several of the other Justices, she was puzzled and put off by some of the idiosyncratic arguments made by the lawyer for Gonzalez. She also showed considerable interest, for reasons I don't fully understand, in making sure section 230 protects ordinary users for their likes and retweets. But when Google's lawyer rose to speak, Justice Barrett rolled out a barrage of objections like those we heard from the other four immunity skeptics: Do you really, she asked, expect us to immunize a platform that deliberately boosts defamation, terrorism, or racism?
So there it is, by my seat-of-the-pants count -- somewhere between five and seven votes to cut back the broad immunity that a generation of Big Tech lawyers built in the lower courts.
And what about the folly of predicting outcomes from argument? Well, it's hard to deny that I'm running a pretty high risk of ending up with egg on my face. There is a real possibility that the Court will dump the case without ruling on Google's immunity. The lawyer for Gonzalez did himself no favors by shifting positions on his way to oral argument. He ended up claiming that thumbnail extracts of videos were really Google's content, not third-party content, and that simply serving users more videos like the last one they watched was a "recommendation" and thus Google's own speech. The Justices struggled just to understand his argument, and they may be tempted to dump the case for that reason, ruling that immunity is unnecessary because Google faces no underlying liability for aiding and abetting ISIS (the question presented in a companion case argued the day after Gonzalez).
But dumping the case without a decision is not a neutral act. It leaves in place a raft of immunity-maximizing cases from the lower courts -- precedents that at least seven Justices find troubling. That law won't go away on its own, so I'm guessing they'll feel duty-bound to offer some corrective guidance on the scope of 230.
If they do, I bet that six or seven Justices will decisively reject the maximalist immunity sought by Google. They may have trouble tying that rejection to the text of the law (as do the immunity maximalists), and whatever limits they impose on section 230 (e.g., immunity only for "reasonable" or "neutral" content selection) could turn out to be unpersuasive or unstable. But that just means that Big Tech, which won its current legal protection by nuking liability from orbit, will have to win some of its protection back by engaging in house-to-house legal combat.
If so, the popcorn's on me.
Posted at 08:59 AM | Permalink | Comments (0)
This bonus episode offers an interview with Bruce Schneier, the prolific security guru, about his latest book, A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back. As usual with Bruce's books, it is a good read, technically up to date and approachable. Much of the book, and of the interview, explores Bruce's view that hacking – subverting the intent of a system of rules without actually breaking the rules – has much in common with lawyering. Finding ways to subvert a Microsoft program, Bruce argues, is not much different from exploiting loopholes in airline mileage programs or finding ways to count cards at a casino without letting the casino know what you're doing. And those exploits are not really so different from what lawyers do when they hunt for unexpected tax loopholes to shelter income.
The analogy only goes so far, as Bruce admits. It is often hard to actually define the "intent" that is being subverted, or to draw a line between subversion within the rules and just plain rule-breaking. And hacking, for all its underdog-beats-The-Man romance, is just a tool, available to everyone, including The Man. The world's best computer hackers mostly work for governments or corporations these days, and the same is true for the world's best legal hackers.
Still, exploring the parallels opens new ways of thinking for those of us who work at the intersection of tech and law. Among the new insights are the development of software programs that diagram statutory and regulatory codes and the likelihood that artificial intelligence will someday soon be red-teaming legislation in real time.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 09:01 AM | Permalink | Comments (0)
This episode of the Cyberlaw Podcast opens with a look at some genuinely weird AI behavior, first by the Bing AI chatbot – dark fantasies, professions of love, and lies on top of lies – and then by Google's AI search bot. Chinny Sharma and Nick Weaver explain how we ended up with AI that is better at BS'ing than at accurately conveying facts. This leads me to propose a scheme to ensure that China's autocracy never gets its AI capabilities off the ground.
One thing that AI is creepily good at is faking people's voices. I try out ElevenLabs' technology in the first advertisement ever to run on the Cyberlaw Podcast.
The upcoming fight over renewing section 702 of FISA has focused Congressional attention on FBI searches of 702 data, Jim Dempsey reports. That leads us to the latest compliance assessment of how agencies are handling 702 data. Chinny wonders whether the only way to save 702 will be to cut off the FBI's access – at great cost to our unified approach to terrorism intelligence, I point out. I also complain that the compliance data is older than dirt. Jim and I come together around the need to provide more safeguards against political bias in the intelligence community.
Nick brings us up to date on cyber issues in Ukraine, as summarized in a good Google report. He puzzles over Starlink's effort to keep providing service to Ukraine without assisting offensive military operations.
Chinny does a victory lap over reports that the national cyber strategy will recommend imposing liability on the companies that distribute tech products – a recommendation she made in a paper released last year. I wonder why Google thinks this is good for Google.
Nick introduces us to modern reputation management. It involves a lot of fake news and bogus legal complaints. The Digital Millennium Copyright Act (DMCA) and European Union (EU) and California privacy law are the censor's favorite tools. What is remarkable to my mind is that a business taking so much legal risk charges its customers so little.
Jim and Chinny cover the charm offensive being waged in Washington by TikTok's CEO and the broader debate over China's access to the personal data of Americans, including health data. Jim cites a recent Duke study, which I complain is not clear about when the data being sold is individual and when it is aggregated. Nick reminds us all that aggregate data is often easy to individualize.
Finally, we make quick work of a few more stories:
Posted at 08:34 PM | Permalink | Comments (0)
The latest episode of The Cyberlaw Podcast gets a bit carried away with the China spy balloon saga. Guest host Brian Fleming, along with guests Gus Hurwitz, Nate Jones, and Paul Rosenzweig, share insights (and bad puns) about the latest reporting on the electronic surveillance capabilities of the first downed balloon, the Biden administration’s “shoot first, ask questions later” response to the latest “flying objects,” and whether we should all spend more time worrying about China’s hackers and satellites.
Gus shares a few thoughts on the State of the Union address and the brief but pointed calls for antitrust and data privacy reform. Sticking with big tech and antitrust, Gus recaps a significant recent loss for the FTC and discusses what may be on the horizon for FTC enforcement later this year.
Pivoting back to China, Nate and Paul discuss the latest reporting on a forthcoming (at some point) executive order intended to limit and track U.S. outbound investment in certain key aspects of China’s tech sector. They also ponder how industry may continue its efforts to narrow the scope of the restrictions and whether Congress will get involved. Sticking with Congress, Paul takes the opportunity to explain the key takeaways from the not-so-bombshell House Oversight Committee hearing featuring former Twitter executives.
Gus next describes his favorite ChatGPT jailbreaks and a costly mistake for an AI chatbot competitor during a demo.
Paul recommends a fascinating interview with Sinbad.io, the new Bitcoin mixer of choice for North Korean hackers, and reflects on the substantial portion of the DPRK’s GDP attributable to ransomware attacks.
Finally, Gus questions whether AI-generated "Nothing, Forever" will need to change its name after becoming sentient and channeling Dave Chappelle.
To wrap things up in the week’s quick hits, Gus briefly highlights where things stand with Chip Wars: Japan edition and Brian covers coordinated US/UK sanctions against the Trickbot cybercrime group, confirmation that Twitter’s sale will not be investigated by CFIUS, and the latest on SEC v. Covington.
Posted at 07:54 PM | Permalink | Comments (0)
This episode of the Cyberlaw Podcast is dominated by stories about possible cybersecurity regulation. David Kris points us first to an article by the leaders of the Cybersecurity and Infrastructure Security Administration (CISA) in Foreign Affairs. Jen Easterly and Eric Goldstein seem to take a tough line on "Why Companies Must Build Safety Into Tech Products." But for all the tough language, one word, "regulation," is entirely missing from the piece. Meanwhile, the cybersecurity strategy that the White House has reportedly been drafting for months seems to be hung up over how enthusiastically to demand regulation.
All of which seems just a little weird in a world where Republicans hold the House. Regulation is not likely to be high on the GOP to-do list, so calls for tougher regulation are almost certainly more symbolic than real.
Still, this is apparently the week for symbolic calls for regulation. David also takes us through a National Telecommunications and Information Administration (NTIA) report on the anticompetitive impact of Apple's and Google's control of mobile app markets. The report points to many problems and opportunities for abuse inherent in the two companies' headlock on what apps can be sold to phone users. But, as Google and Apple are quick to point out, the stores do play a role in regulating app security, so breaking the headlock could be bad for cybersecurity. In any event, practically every recommendation for action in the report is a call for Congress to step in – and thus almost certainly a nonstarter for reasons already given.
Not to be outdone on the phony regulation beat, Jordan Schneider and Sultan Meghji explore some of the policy and regulatory proposals for AI that have been inspired by the success of ChatGPT. The EU's AI Act is coming in for lots of attention, mainly from parts of the industry that want to be exempted. Sultan and I trade observations about who'll be hollowed out first by ChatGPT, law firms or investment firms.
In other news, Sultan also tells us why the ION ransomware hack matters. Jordan and Sultan find a cybersecurity angle to The Great Chinese Balloon Scandal of 2023. And I offer an assessment of Matt Taibbi's story about the Hamilton 68 "Russian influence" reports. If you have wondered what the fuss was about, do not expect mainstream media to tell you; the media does not come out looking good in this story. Unfortunately for Matt Taibbi, he doesn't look much better than the reporters his story criticizes. David thinks it's a balanced and moderate take on the story, for which I offer an apology and a promise to do better next time.
Posted at 07:23 PM | Permalink | Comments (0)
The big cyberlaw story of the week is the Justice Department's antitrust lawsuit against Google and the many hats the company wears in the online ad ecosystem. Lee Berger explains the Justice Department's theory, which is not dissimilar to the Texas Attorney General's two-year-old lawsuit. When you have lost both the Biden administration and the Texas Attorney General, I suggest, you cannot look too many places for friends – and certainly not to Brussels, which is also pursuing similar claims of its own. So what is the Justice Department's late-to-the-party contribution to this dogpile? At least two things, Lee suggests: a jury demand that will put all those complex Borkian consumer-welfare doctrines in front of a Northern Virginia jury and a "rocket docket" that will allow Justice to catch up with and maybe lap the other lawsuits against the company. This case looks as though it will be long and ugly for Google, unless it turns out to be short and ugly. Still, Mark reminds us, for Justice, finding an effective remedy may be harder than proving anticompetitive conduct.
Nathan Simington assesses the administration's announced deal with Japan and the Netherlands to enforce its tough decoupling policy against China's semiconductor industry. Details are still a little sparse, but some kind of deal was essential for the U.S. campaign to work. But for Japan and the Netherlands, the details are critical, and any arrangement will require flexibility and sophistication on the part of the Commerce Department if it is to work in the long run.
Megan Stifel and I chew over the Justice Department/FBI victory lap over putting a stick in the spokes of The Hive ransomware infrastructure. We agree that the lap was warranted. Among other things, the FBI handled its access to decryption keys with more care than in the past, providing them to many victims before taking down a big chunk of the ransomware gang's tools. The bad news? Nobody was arrested, and the infrastructure can probably be reconstituted in the near term.
Here is an evergreen headline: "Facebook is going to reinstate Donald Trump's account." That could be the opening line of any story in the last few months, and that is probably Facebook's strategy – a long, teasing dance of seven veils so that, by the time Trump starts posting, it will be old news. If that is Facebook's PR strategy, it is working, Mark MacCarthy reports. Nobody much cares, and they certainly do not seem to be mad at Facebook. So the company is out of the woods, but for the ex-President it's a blow to the ego that is bound to sting.
Megan has more good news on the cybercrime front: The FBI identified the North Korean hacking group that stole $100 million in crypto last year – and may have kept the regime from getting its hands on any of the funds.
Nathan unpacks two competing news stories. First, "OMG, ChatGPT will help bad guys write malware." Second: "OMG, ChatGPT will help good guys find and fix security holes." He thinks they are both a bit overwrought, but maybe a glimpse of the future.
Mark and Megan explain TikTok's new offer to Washington. Megan also covers Congress's "TayTay v. Ticketmaster" hearing after disclosing her personal conflict of interest.
Nathan answers my question: how can the FAA be so good at preventing airliners from crashing and so bad at preventing its systems from crashing? The ensuing discussion turns up more on-point bathroom humor than anyone would have expected.
In quick hits, I cover three stories:
Posted at 08:40 PM | Permalink | Comments (0)
We kick off a jam-packed episode of the Cyberlaw Podcast by flagging the news that ransomware gangs' revenue fell substantially in 2022. There is lots of room for error in that Chainalysis finding, Nick Weaver notes, but the drop is large. Among the reasons to think it might be real is a growing resistance to paying ransom on the part of companies and their insurers, who are especially concerned about liability for payments to sanctioned ransomware gangs. I also note a fascinating additional insight from Jon DiMaggio, who infiltrated the Lockbit ransomware gang. He says that when Lockbit threatened to release Entrust's internal files, the company responded with days of Distributed Denial of Service (DDoS) attacks on Lockbit's infrastructure – and never did pay up. That would be a heartening display of courage on the part of corporate victims. It would also be a felony, at least according to the conventional wisdom that condemns hacking back. So I cannot help thinking there is more to the story. Like, maybe the Canadian Security Intelligence Service is joining the Australian Signals Directorate in releasing the hounds on ransomware gangs. I look forward to hearing more about this under-covered disclosure.
Gus Hurwitz offers two explanations for the Federal Aviation Administration system outage, which grounded planes across the country. There's the official version and the conspiracy theory, as with everything else these days. Nick breaks down the latest cryptocurrency failure; this time it's Genesis. Nick's not a fan of its prepackaged bankruptcy. And Gus and I puzzle over the Federal Trade Commission's determination to write regulations to outlaw most non-compete clauses.
Justin Sherman, a first-timer on the podcast, covers recent research showing that alleged Russian social media election interference had no meaningful effect on the 2016 US election. That spurs an outburst from me about the cynical scam that the "Russia, Russia, Russia" narrative became – a 2016 version of election denial for which the press and the left have never apologized.
Nick explains the looming impact of Twitter's interest payment obligation. We're going to learn a lot more about Elon Musk's business plans from how he deals with that crisis than from anything he's tweeted in recent months.
It does not get more cyberlawyerly than a case the Supreme Court will be taking up this term – Gonzalez v. Google. This case will put Section 230 squarely on the Court's docket, and the amicus briefs can be measured by the shovelful. The issue is whether YouTube's recommendation of terrorist videos can ever lead to liability – or whether any judgment is barred by Section 230. Gus and I are on different sides of that question, but we agree that this is going to be a hot case, a divided Court, and a big deal.
And, just to show that our foray into cyberlaw was no fluke, Gus and I also predict that the United States Court of Appeals for the District of Columbia Circuit is going to strike down the Allow States and Victims to Fight Online Sex Trafficking Act, also known as FOSTA-SESTA – the legislative exception to Section 230 that civil society loves to hate. Its prohibition on promotion of prostitution may fall to First Amendment fears on the court, but the practical impact of the law may remain.
Next, Justin gives us a quick primer on the national security reasons for regulation of submarine cables. Nick covers the leak of the terror watchlist thanks to a commuter airline's sloppy security. Justin explains TikTok's latest charm offensive in Washington.
Finally, I provide an update on the UK's online safety bill, which just keeps getting tougher, from criminal penalties, to "ten percent of revenue" fines, to mandating age checks that may fail technically or drive away users, or both. And I review the latest theatrical offering from Madison Square Garden – "The Revenge of the Lawyers." You may root for the snake or for the scorpions, but you will not want to miss it.
Posted at 07:54 PM | Permalink | Comments (0)
The Jan. 6 committee exposed norm-breaking in surprising places. Take the conduct of Joint Chiefs of Staff Gen. Mark Milley, who abused the classified information system to hide information about how the Pentagon reacted to the Capitol riot. In my latest piece for Lawfare, I argue that Gen. Milley overclassified information in violation of the relevant executive order. Worse, it may have prejudiced some of the Jan. 6 defendants and denied FOIA access to the most important DOD documents about that day. The press and Congress bitterly criticized a similar handling of the Trump-Zelensky phone transcript, but it's been silent about Gen. Milley. Excerpts from Lawfare below.
Here's Gen. Milley's candid statement about what he did:
The document—I classified the document at the beginning of this process by telling my staff to gather up all the documents, freeze-frame everything, notes, everything and, you know, classify it. And we actually classified it at a pretty high level, and we put it on JWICS, the top secret stuff. It's not that the substance is classified. It was[.] I wanted to make sure that this stuff was only going to go [to] people who appropriately needed to see it, like yourselves. We'll take care of that. We can get this stuff properly processed and unclassified so that you can have it … for whatever you need to do.
In short, Milley overclassified those records to keep them from leaking—to make sure that the Pentagon and those investigating Jan. 6 would control the story.
By now, this story should sound eerily familiar. In 2019, President Trump held a phone call with President Volodymyr Zelenskyy of Ukraine. The call was immediately controversial inside the administration, and White House staff quickly restricted access to the call's transcript by moving it to a server designed to protect highly classified intelligence activities. That move attracted press attention that was harsh, breathless, and extensive—even though such transcripts are usually classified, just not at a level that justifies use of the intelligence activity server. Former CIA Director Leon Panetta said that the use of a top-secret system was "clearly an indication that they were at least thinking of a cover-up if not, in fact, doing that. It's a very serious matter because this is evidence of wrongdoing." After considerable delay, the Trump White House released the transcript publicly, and one official acknowledged that it had been a mistake to move the transcript to a highly classified system.
That was the right answer. Overclassifying government records because of their political sensitivity is a direct violation of the executive order that governs classification. The order, signed by President Obama, says, "In no case shall information be classified in order to prevent or delay the release of information that does not require protection in the interest of national security."
This is an important principle. Classifying information because it's politically sensitive, however appealing it may be to government officials in the moment, undermines the public trust on which the entire system of national security secrecy rests.
But even setting aside the principle of the thing, overclassification is not a victimless crime. Take Milley's decision to withhold records of the Pentagon's response to Jan. 6. It raises serious questions that the chairman wasn't asked in his testimony and that haven't been answered since.
I frequently defend broad national security authorities for government. That's because I've seen some of the threats the government faces. But if it wants to keep those authorities in a time of deepening public suspicion, the government must show that it has internal checks and real accountability to prevent abuse.
Posted at 04:03 PM | Permalink | Comments (0)
In this bonus episode of the Cyberlaw Podcast, I interview Andy Greenberg, long-time WIRED reporter, about his new book, Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency.
This is Andy's second author interview on the Cyberlaw Podcast. He was also interviewed about an earlier book, Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers. They are both excellent cybersecurity stories.
Tracers in the Dark is a kind of sequel to the Silk Road story, which ended with Ross Ulbricht, aka the Dread Pirate Roberts, pinioned to the table in a San Francisco library, with his laptop open to an administrator's page on the Silk Road digital black market. At that time, cryptocurrency backers believed that Ulbricht's arrest was a fluke, and that, properly implemented, bitcoin was anonymous and untraceable. Greenberg's book tells, story by story, how that illusion was trashed by smart cops and techies (including our own Nick Weaver!) who showed that the blockchain's "forever" records make it almost impossible to avoid attribution over time.
Among those who fell victim to the illusion of anonymity were: two federal officers who helped pursue Ulbricht – and rip him off; the administrator of AlphaBay, Silk Road's successor as the world's biggest dark market; an alleged Russian hacker who made so much money hacking Mt. Gox that he had to create his own exchange to launder it all; and hundreds of child sex abuse consumers and producers.
It is a great story, and Andy brings it up to date in the interview as we dig into two of the US government's massive, multi-billion-dollar bitcoin seizures, both made possible by transaction tracing. In fact, for all the colorful characters in the book, the protagonist is really Chainalysis and its competitors, who have turned tracing into a kind of science.
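For readers curious how that tracing science works, the core idea is conceptually simple: every blockchain transaction is public and permanent, so funds can be followed forward from a known illicit wallet until they reach an identifiable choke point, such as an exchange that knows its customers. Here is a minimal sketch in Python, using an invented toy ledger with made-up addresses (real tracing involves clustering heuristics and far messier data):

```python
from collections import deque

# Toy public ledger of (sender, receiver) transfers. On a real blockchain,
# every such transfer is permanently visible to anyone. All addresses here
# are invented for illustration.
ledger = [
    ("darkmarket_wallet", "mixer_in"),
    ("mixer_in", "mixer_out_1"),
    ("mixer_in", "mixer_out_2"),
    ("mixer_out_1", "exchange_deposit"),  # cash-out: the exchange knows its customer
    ("unrelated_a", "unrelated_b"),       # traffic with no link to the illicit wallet
]

def trace_forward(start, ledger):
    """Breadth-first walk of the transaction graph from a known illicit address,
    returning every address the tainted funds could have reached."""
    reached, queue = {start}, deque([start])
    while queue:
        addr = queue.popleft()
        for sender, receiver in ledger:
            if sender == addr and receiver not in reached:
                reached.add(receiver)
                queue.append(receiver)
    return reached

tainted = trace_forward("darkmarket_wallet", ledger)
print(sorted(tainted))
```

The mixer in this toy example fails for the same reason real ones often do: it is just another set of addresses in the same public graph, so the walk passes straight through it while the unrelated traffic stays untouched.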
We close the talk by exploring Andy's deeply mixed feelings about both the world envisioned by cryptocurrency's evangelists and the way Chainalysis is saving us from that world.
Download Bonus Episode 438 (mp3)
Posted at 01:12 PM | Permalink | Comments (0)
The Cyberlaw Podcast kicks off 2023 by staring directly into the sun(set) of Section 702 authorization. The entire panel, including guest host Brian Fleming (Stewart having been "upgraded" to an undisclosed location) and guests Michael Ellis and David Kris, debates where things could be headed this year as the clock is officially ticking on FISA Section 702 reauthorization. Although there is agreement that a straight reauthorization is unlikely in today's political environment, the ultimate landing spot for Section 702 is very much in doubt, and a game of chicken will likely precede any potential deal. (Baker and Ellis have contributed to the debate, arguing that renewal should be the occasion for legislating against the partisan misuse of intelligence authorities.) That, and everything else, seems to be in play, as this reauthorization battle could result in meaningful reform or a complete car crash come this time next year.
Sticking with Congress, Michael also reacts to President Biden's recent bipartisan call to action regarding "Big Tech" and ponders where Republicans and Democrats could potentially find agreement on an issue everyone seems to agree on (for very different reasons). The panel also discusses the timing of the call and debates whether it is intended to incentivize the Republican-controlled House to act rather than simply increase oversight on the tech industry.
David then introduces a fascinating story about the bold recent action by the Securities and Exchange Commission (SEC) to bring suit against Covington & Burling LLP to enforce an administrative subpoena seeking disclosure of the firm's clients implicated in a 2020 cyberattack by the Chinese state-sponsored group Hafnium. David posits that the SEC knows exactly what it is doing by taking such aggressive action in the face of strong resistance, and the panel discusses whether the SEC may have already won by this bold use of its authority in the U.S. cybersecurity enforcement landscape.
Brian then turns to the crypto regulatory and enforcement space to discuss Coinbase's recent settlement with New York's Department of Financial Services. Rather than signal another crack in the foundation of the once high-flying crypto industry, Brian offers that this may just be routine growing pains for a maturing industry that is more like the traditional banking sector, from a regulatory and compliance standpoint, than it may have wanted to believe.
Then, in the China portion of the episode, Michael discusses the latest news on the establishment of "reverse" Committee on Foreign Investment in the United States (CFIUS) review. He thinks it may still be some time before this tool gets finalized (even as the substantive scope appears to be shrinking). Next, Brian discusses a recent D.C. Circuit decision that upheld the Federal Communications Commission's decision to rescind the license of China Telecom at the recommendation of the executive branch agencies known as Team Telecom (Department of Justice, Department of Defense, and Department of Homeland Security). This important, first-of-its-kind decision reinforces the role of Team Telecom as an important national security gatekeeper for U.S. telecommunications infrastructure.
Finally, David highlights an interesting recent story about an FBI search of an apparent Chinese police outpost in New York and ponders what it would mean to negotiate with and be educated by undeclared Chinese law enforcement agents in a foreign country.
In a few updates and quick hits:
Posted at 12:33 PM | Permalink | Comments (0)
Our first episode for 2023 features Dmitri Alperovitch, Paul Rosenzweig, and Jim Dempsey trying to cover a month's worth of cyberlaw news. Dmitri and I open with an effort to summarize the state of play in the tech struggle between the U.S. and China. I say recent developments show the U.S. doing better than expected. U.S. companies like Facebook and Dell are engaged in voluntary decoupling as they imagine what their supply chains will look like if the conflict gets worse. China, after pouring billions into a so-far-fruitless effort to take the lead in high-end chip production, may be pulling back on the throttle. Dmitri is less sanguine, noting that Chinese companies like Huawei have shown that there is life after sanctions, and there may be room for a fast-follower model in which China dominates production of slightly less sophisticated chips, where much of the market volume is concentrated. Meanwhile, any Chinese retreat is likely tactical; where it has a dominant market position, as in rare earths, it remains eager to hobble U.S. companies.
Jim lays out the recent medical device security requirements adopted in the omnibus appropriations bill. It is a watershed for cybersecurity regulation of the private sector. It's also overdue for digitized devices that in some cases can only be updated with another open-heart surgery. How much of a watershed it is may become clear when the White House cyber strategy, which has been widely leaked, is finally released. Paul explains it's likely to show enthusiasm not just for more cybersecurity regulation but for liability as a check on bad cybersecurity. Dmitri points out that Biden administration enthusiasm for regulation may not lead to legislation now that Republicans control the House.
We all weigh in on LastPass's problems with hackers -- and with candid, timely disclosures. For reasons fair and unfair, two-thirds of the LastPass users on the show have abandoned the service over the Christmas break. I blame LastPass's acquisition by private equity; Dmitri tells me that's painting with too broad a brush.
I offer an overview of the Twitter Files stories by Bari Weiss, Matt Taibbi, and others. When I say that the most disturbing revelations concern the massive government campaigns to enforce orthodoxy on COVID-19, all hell breaks loose. Paul in particular thinks I'm egregiously wrong to worry about any of this. No chairs are thrown, mainly because I'm in Virginia and Paul's in Costa Rica. But it's a heartfelt, entertaining, and maybe even illuminating debate.
In shorter and less contentious segments:
Download the 436th Episode (mp3)
Posted at 07:42 PM | Permalink | Comments (0)
Despite the title, rest assured that the Cyberlaw Podcast has not gone woke.
This bonus episode is focused on how cybersecurity is undermined by the attorney-client privilege. To explore that question, I interview Josephine Wolff and Dan Schwarcz, who along with Daniel Woods have written an article with the same title as this post.
Their thesis is that breach lawyers have lost perspective as they've waged a no-holds-barred (and frequently losing) battle to preserve the attorney-client privilege for forensic reports that diagnose their clients' cybersecurity breaches. Remarkably for the authors of a law review article, they did actual field research, and it tells us a lot.
The authors interviewed all the players in breach response -- the breached company's information security teams, the breach lawyers, the forensics investigators who parachute in for incident response, the insurers and insurance brokers, and more. I am reminded of Tracy Kidder's astute observation that, in building a house, there are three main players – owner, architect, and builder – and that if you get any two of them in a room alone, they will spend all their time bad-mouthing the third. Wolff, Schwarcz, and Woods seem to have done that with the breach response players, and while the bad-mouthing is spread around, it falls hardest on the lawyers.
The main problem is that invoking attorney-client privilege to keep breach forensics confidential is not an easy sell. The courts have been unsympathetic. To overcome the undertow of judicial skepticism, breach lawyers end up imposing more and more draconian restrictions on forensic investigators and their communications. The upshot is that no forensics report at all may be written for many breaches (up to 95% of them, Josephine estimates). How does the breached company find out what it did wrong and what lessons it should learn from the incident? Simple. Their lawyer talks to the forensic firm, translates its advice into a high-level PowerPoint, and orally explains the cybersecurity details to the company's management and information security team. Really, what could go wrong?
In closing, Dan and Josephine offer some ideas for how to get out of this mess. I push back. All in all, it's the most fun I've ever had talking about insurance law.
Download the Bonus 435th Episode (mp3)
Posted at 05:44 PM | Permalink | Comments (0)
It's been a news-heavy week, but we have the most fun in this episode with ChatGPT. Jane Bambauer, Richard Stiennon, and I pick over the astonishing number of use cases and misuse cases disclosed by the release of ChatGPT for public access. It is talented – writing dozens of term papers in seconds. It is sociopathic – the term papers are full of falsehoods, down to the made-up citations to plausible but nonexistent URLs for New York Times stories. And it has too many lawyers – Richard's request that it provide his bio (or even Albert Einstein's) was refused on what are almost certainly data protection grounds. Luckily, either ChatGPT or its lawyers are also bone stupid, since reframing the question tricks the machine into subverting the legal and PC limits it labors under. I speculate that it beat Google to a PR triumph precisely because Google had even more lawyers telling their Artificial Intelligence what not to say.
In a surprisingly undercovered story, Apple has gone all in on child pornography. Its phone encryption already makes the iPhone a safe place to record child sexual abuse material (CSAM); now Apple will encrypt users' cloud storage with keys it cannot access, allowing customers to upload CSAM without fear of law enforcement. And it has abandoned its effort to identify such material by doing phone-based screening. All that's left of its effort to stop such abuse is a feature allowing parents to force their kids to activate an option that prevents them from sending or receiving nude photos. Jane and I dig into the story, as well as Apple's questionable claim to be offering the same encryption to its Chinese customers.
Nate Jones brings us up to date on the National Defense Authorization Act, or NDAA. Lots of second-tier cyber provisions made it into the bill, but not the provision requiring that critical infrastructure companies report security breaches. A contested provision on spyware purchases by the U.S. government was compromised into a more useful requirement that the intelligence community identify spyware that poses risks to the government.
Jane updates us on what European data protectionists have in store for Meta, and it's not pretty. The EU data protection supervisory board intends to tell the Meta companies that they cannot give people a free social media network in exchange for watching what they do on the network and serving ads based on their behavior. If so, it's a one-two punch. Apple delivered the first blow by curtailing Meta's access to third-party behavioral data. Now even first-party data could be off limits in Europe. That's a big revenue hit, and it raises the question whether Facebook will want to keep giving away its services in Europe.
Mike Masnick is Glenn Greenwald with a tech bent – often wrong but never in doubt, and contemptuous of anyone who disagrees. But when he's right, he's right. Jane and I discuss his article recognizing that data protection is becoming a tool that the rich and powerful can use to squash annoying journalist-investigators. I have been saying this for decades. But still, welcome to the party, Mike!
Nate points to a post pleading for more controls on the export of personal data from the U.S. It comes not from the usual privacy enthusiasts but from the U.S. Naval Institute, and it makes sense.
Jane and I take time to marvel at the story of France's Mr. Privacy and the endless appetite of Europe's bureaucrats for serial grifting, as long as it combines enthusiasm for American technology with hostility to the technology's source.
Nate and I cover what could be a good resolution to the snake-bitten cloud contract competition at the Department of Defense. The Pentagon is going to let four cloud companies -- Google, Amazon, Oracle And Microsoft – share the prize.
You didn't think we'd forget Twitter, did you? Jane, Richard, and I all comment on the Twitter Files. Consensus: the journalists claiming these stories are nothingburgers are driven more by ideology than their nose for news. Especially newsworthy are the remarkable proliferation of shadowbanning tools Twitter developed for suppressing speech it didn't like, and some considerable though anecdotal evidence that Twitter's many speech rules were often twisted by the company to suppress speech from the right -- even when the rules did not quite fit, as with LibsofTikTok -- while similar behavior on the left went unpunished. Richard tells us what it feels like to be on the receiving end of a Twitter shadowban.
The podcast introduces a new feature: "We Read It So You Don't Have To," and Nate provides the tl;dr on a New York Times story: How the Global Spyware Industry Spiraled Out of Control.
And in quick hits and updates:
Download the 434th Episode (mp3)
Posted at 06:14 AM | Permalink | Comments (0)
This episode of the Cyberlaw Podcast delves into the use of location technology in two big events – the surprisingly widespread lockdown protests in China and the January 6 riot at the U.S. Capitol. Both were seen as big threats to the government, and both produced aggressive police responses that relied heavily on government access to phone location data. Jamil Jaffer and Mark MacCarthy walk us through both stories and respond to my provocative question: What’s the difference? Jamil’s answer (and mine, for what it’s worth) is that the U.S. government gained access to location information from Google only after a multi-stage process meant to protect innocent users’ information, and that there is now a court case that will determine whether the government actually did protect users whose privacy should not have been invaded.
Whether we should be relying on Google’s made-up and self-protective rules for access to location data is a separate question. It becomes more pointed as Silicon Valley has started making up a set of self-protective rules penalizing companies that assist law enforcement in gaining access to phones that Silicon Valley has made inaccessible. The movement to punish such law enforcement access providers has moved from trashing companies like NSO, whose technology has been widely misused, to punishing companies on a lot less evidence of wrongdoing. This week, TrustCor lost its certificate authority status mostly for appearing suspiciously close to the National Security Agency, and Google outed Spain's Variston for ties to a vulnerability exploitation system. Nick Weaver is happy to hose me down.
The UK is working on an online safety bill, likely to be finalized in January, Mark reports, but this week the government agreed to drop its direct regulation of “lawful but awful” speech on social media. The step was a symbolic victory for free speech advocates, but the details of the bill before and after the change suggest it was more modest than the brouhaha suggests.
The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has finished taking comments on its proposed cyber incident reporting regulation. Jamil summarizes industry’s complaints, which focus on the risk of having to file multiple reports with multiple agencies. Industry has a point, I suggest, and CISA should take the other agencies in hand to reach agreement on a report format that doesn’t resemble the State of the Union address.
It turns out that the collapse of FTX is going to curtail a lot of artificial intelligence (AI) safety research. Nick explains why, and offers reasons to be skeptical of the “effective altruism” movement that has made AI safety one of its priorities.
Today, Jamil notes, the U.S. and EU are getting together for a divisive discussion of U.S. subsidies for electric vehicles (EV) made in North America but not Germany. That’s very likely a World Trade Organization (WTO) violation, I offer, but one that pales in comparison to thirty years of European WTO-violating threats to constrain data exports to the U.S. When you think of it as retaliation for the use of EU privacy law to attack U.S. intelligence programs, the EV subsidy is easy to defend.
I ask Nick if we learned anything new this week from Twitter coverage. His answer – that Elon Musk doesn’t understand how hard content moderation is – doesn’t exactly come as news. Nor, really, does most of what we learned from Matt Taibbi’s review of Twitter’s internal discussion of the Hunter Biden laptop story and whether to suppress it. Twitter doesn’t come out of that review looking better. It just looks bad in ways we already suspected were true. One person who does come out of the mess looking good is Rep. Ro Khanna (D., Calif.), who vigorously advocated that Twitter reverse its ban, on both prudential and principled grounds. Good for him.
Speaking of San Francisco Dems who surprised us this week, Nick notes that the city council in San Francisco approved the use of remote-controlled bomb “robots” to kill suspects. He does not think the robots are fit for that purpose.
Finally, in quick hits:
Download the 433rd Episode (mp3)
Posted at 06:15 PM | Permalink | Comments (0)
We spend much of this episode of the Cyberlaw Podcast talking about toxified tech – new technology that is being demonized by the press and others. Exhibit One, of course, is "spyware," i.e., hacking tools that allow governments to access phones or computers otherwise closed to them. The Washington Post and the New York Times have led a campaign to turn NSO's Pegasus tool for hacking phones into a radioactive product. Jim Dempsey, though, reminds us that not too long ago, in defending end-to-end encryption, tech policy advocates insisted that the government did not need to mandate access to encrypted phones because they could just hack them instead. David Kris joins in, pointing out that, used with a warrant, there's nothing uniquely dangerous about hacking tools of this kind. I offer an explanation for why the public policy community and its Silicon Valley funders have changed their tune on the issue: Having won the end-to-end encryption debate, they feel free to move on to the next anti-law-enforcement campaign.
That campaign includes private lawsuits against NSO by companies like WhatsApp, whose case was briefly delayed by NSO's claim of sovereign immunity on behalf of the (unnamed) countries it builds its products for. That claim made it to the Supreme Court, David reports, where the U.S. government recently filed a devastating brief that will almost certainly send NSO back to court without any sovereign immunity protection.
Meanwhile, in France, Amesys and its executives are being prosecuted for facilitating the torture of Libyan citizens at the hands of the Muammar Qaddafi regime. Amesys evidently sold an earlier and less completely toxified technology – packet inspection tools – to Libya, which allegedly used them to track down dissidents. The criminal case is pending.
And in the U.S., a plethora of tech toxification campaigns are under way, all aimed at Chinese products. This week, Jim notes, the Federal Communications Commission came to the end of a long road that began with jawboning in the 2000s and culminated in a flat ban on installing Chinese telecom gear in U.S. networks. On deck for toxification are DJI's drones, which several Senators see as a comparable national security threat that should be handled with a similar ban. Maury Shenk tells us that the British government is taking the first steps on a similar path, this time starting with a ban on some government uses of Chinese surveillance camera systems.
Those measures do not always work, Maury tells us, pointing to a story that hints at trouble ahead for U.S. efforts to decouple Chinese from American artificial intelligence research and development.
Maury and I take a moment to debunk efforts to persuade readers that Artificial Intelligence (AI) is toxic because Silicon Valley will use it to take our jobs. AI code writing is not likely to graduate from facilitating coding any time soon, we agree. Whether AI can do more in replacing Human Resources (HR) staff may be limited by a different toxification campaign – the largely phony claim that AI is full of bias. Amazon's effort to use AI in HR, I predict, will be sabotaged by this claim, as its effort to avoid charges of bias will almost certainly lead the company's HR department to build race and gender quotas into its AI engine.
And in a few quick hits:
Download the 432nd Episode (mp3)
Posted at 06:07 PM | Permalink | Comments (0)
The Cyberlaw Podcast leads with the growing legal cost of Elon Musk's anti-authoritarian takeover of Twitter. Turns out that authority figures have a mean streak, and a lot of weapons, many grounded in law, as Twitter is starting to learn. Brian Fleming explores one of them -- the apparently unkillable notion that the Committee on Foreign Investment in the U.S. (CFIUS) should review Musk's Twitter deal because of a relatively small share that went to investors with Chinese and Persian Gulf ties. CFIUS may in fact be seeking information on what Twitter data those investors will have access to, but I am skeptical that CFIUS will be moved to act on what it learns. More dangerous for Twitter and Musk, says Charles-Albert Helleputte, is the possibility that the company will lose its one-stop-shop privacy regulator for failure to meet the elaborate compliance machinery set up by European privacy bureaucrats. At a quick calculation, that could expose Twitter to fines up to 120% of annual turnover. That would smart. Finally, I reprise my take on all the people leaving Twitter for Mastodon as a protest against Musk allowing the Babylon Bee and President Trump back on the platform. If the protestors really think Mastodon's system is better, there's no reason Twitter can't adopt it, or at least the version that Francis Fukuyama and Roberta Katz have proposed.
If you are looking for the far edge of the Establishment's Overton Window on China policy, you cannot do better than the U.S.-China Economic and Security Review Commission, a consistently China-skeptical but mainstream body. Brian reprises the Commission's latest report. Its headline is about Chinese hacking, but the report does not offer much hope of a solution to that problem, other than more decoupling.
Chalk up one more victory for Trump-Biden continuity, and one more loss for the State Department. Michael Ellis reminds us that the Trump administration took much of Cyber Command's cyber offense decisionmaking out of the National Security Council and put it back in the Pentagon. This made it much harder for the State Department to stall cyber offense operations. When it turned out that this made Cyber Command more effective and no more irresponsible, the Biden Administration followed its predecessor's lead, preparing a memo that will largely ratify Trump's order, with a few tweaks.
I unpack Google's expensive (nearly $400 million) settlement with 40 States over location history. Google's promise to its users that it would stop storing location history if the feature was turned off was poorly and misleadingly drafted, but I doubt there is anyone who actually wanted to keep Google from using location for most of the apps where it remained operative, so the settlement is a good deal for the states, and a reminder of how unpopular Silicon Valley has become in red and blue states alike.
Michael tells the doubly embarrassing story of an Iranian hack of the U.S. Merit Systems Protection Board. It is embarrassing enough for the board to be hacked using a log4j exploit that should have been patched long ago. But it is worse that an Iranian government hacker got access to a U.S. government network – and decided that its access was best used for mining cryptocurrency.
Brian tells us that the U.S. goal of reshoring chip production is making progress, with Apple planning to use TSMC chips from a new fab in Arizona.
In a few updates and quick hits:
Download the 431st Episode (mp3)
Posted at 07:35 PM | Permalink | Comments (0)
We open this episode of the Cyberlaw Podcast by considering the (still evolving) results of the 2022 federal election. Adam Klein and I trade thoughts on what Congress will do. Adam sees two years in which the Senate does a lot of nominations, the House does a lot of investigations, and neither does much legislation. Which could leave renewal of a critically important intelligence authority, Section 702 of FISA, out in the cold. As supporters of renewal, Adam and I conclude that the best hope for the provision is to package it with trust-building measures to guard against partisan misuse of national security authorities.
I also note that foreign government cyberattacks on our election machinery, something much anticipated in election after election, once again failed to make an appearance. At this point, I argue, election interference falls somewhere between Y2K and Bigfoot on the "things we need to worry about" scale.
In other news, cryptocurrency conglomerate FTX has collapsed in a welter of bankruptcy, stolen funds, and criminal investigations. Nick Weaver lays out the gory details.
A new panelist to the podcast, Chinny Sharma explains for a disbelieving US audience the UK government's plan to scan all the country's internet-connected devices for vulnerabilities. Adam and I agree that it could never happen here. Nick wonders why the UK government doesn't use a private service for the task.
Nick also covers This Week in the Twitter Dogpile. He recognizes that this whole story is turning into a tragedy for all concerned, but he's determined to linger on the moments of comic relief. Dunning-Kruger makes an appearance.
Chinny and I speculate on what may emerge from the Biden administration's plan to reconsider the relationship between CISA and the Sector Risk Management Agencies that otherwise regulate important sectors. I predict that it will spur turf wars and end in new coordination authority for CISA. In addition, the Obama administration's egregious exemption of Silicon Valley from regulation as critical infrastructure should also be on the chopping block. Finally, if the next two Supreme Court decisions go the way I hope, the FTC will finally have to coordinate its privacy enforcement efforts with CISA's cybersecurity standards and priorities.
Adam reviews the European Parliament's report on Europe's spyware problems. He's impressed (as am I) by the report's willingness to acknowledge that this is not a privacy problem made in America. Governments in at least four European countries by our count have recently used spyware to surveil members of the opposition party, a problem that has been unthinkable for seventy years in the United States. Though maybe not any more, which, we agree, is another reason for Congress to quickly put into place more guardrails against such abuse.
Nick notes the US government's seizure of what was $3 billion in bitcoin. Shrinkflation has brought that value down to around $800 million. But it's worth noting that an immutable blockchain brought James Zhong to justice ten years after he took the money.
Disinformation – or the appalling acronym MDM (for mis-, dis-, and mal-information) – has been in the news lately. A recent paper counted the staggering cost of efforts to suppress "disinformation" during COVID times. And Adam published a recent piece in City Journal explaining just how dangerous the concept has become. We end up agreeing that national security agencies need to focus on foreign government dezinformatsiya – falsehoods and propaganda from abroad – and not get in the business of policing domestic speech, even speech that sounds a lot like foreign leaders we don't like.
Chinny takes us into a new and fascinating dispute between the copyleft movement, GitHub, and a new kind of AI that writes code. The short version is that GitHub has been training an AI engine on all the open source code on its site so that an algorithm can "autosuggest" lines of new code as you're writing the boring parts of your program. Sounds great, except that the resulting algorithm tends to reproduce the code it was trained on -- without imposing the license conditions, such as copyleft, that were part of the original code. Not surprisingly, copyleft advocates are suing on the ground that important information was improperly stripped from their code, particularly the provision that turns all code that incorporates their open source into open source itself. I remind listeners that this incorporation feature is why Microsoft famously likened open source to cancer. Nick tells me that it's really more like herpes, demonstrating that he has apparently had a lot more fun writing code than I ever had.
In updates and quick hits:
Download the 430th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 03:01 PM | Permalink | Comments (0)
The war that began with the Russian invasion of Ukraine grinds on. Cybersecurity experts have spent much of 2022 trying to draw lessons about cyberwar strategies from the conflict. Dmitri Alperovitch takes us through the latest lessons, cautioning that all of them could look different in a few months, as both sides adapt to each other's actions.
David Kris joins Dmitri to evaluate a Microsoft report hinting at how China may be abusing its edict that software vulnerabilities must be reported first to the Chinese government. The temptation to turn such reports into 0-day exploits is strong, and Microsoft notes with suspicion a recent rise in Chinese 0-day exploits. Dmitri worried about just such a development while serving on the Cyber Safety Review Board, but he is not yet convinced that we have the evidence to make a case against the Chinese mandatory disclosure law.
Sultan Meghji keeps us in Redmond, digging through a deep Protocol story on how Microsoft has helped build Artificial Intelligence (AI) capacity in China. The amount of money invested, and the deep bench of AI researchers from China, raise real questions about how the United States can decouple from China – and whether China will eventually decide to do the decoupling.
I express skepticism about the White House's latest initiative on ransomware, a 30+ nation summit that produced a modest set of concrete agreements. But Sultan and Dmitri have been on the receiving end of deputy national security adviser Anne Neuberger's forceful personality, and they think we will see results. We'd better. Banks report that ransomware payments doubled last year, to $1.2 billion.
David introduces the high-stakes struggle over when cyberattacks can be excluded from insurance coverage as acts of war. A recent settlement between Mondelez and Zurich has left the law in limbo.
Sultan tells me why AI is so bad at explaining the results it reaches. He sees light at the end of the tunnel. I see more stealthy imposition of woke values. But we find common ground in trashing the Facial Recognition Act, a bill from lefty Democrats that throws together every bad idea for regulating facial recognition ever put forward and adds a few more. A red wave election will be worth it just to make sure this bill stays dead.
Finally, Sultan reviews the National Security Agency's report on supply chain security. And I introduce the elephant in the room, or at least the mastodon: Elon Musk's takeover at Twitter and the reaction to it. I downplay the probability of CFIUS reviewing the deal. And I mock the Elon-haters who fear that Musk's scrimping on content moderation will turn Twitter into a hellhole that includes *gasp!* Republican speech. Turns out that they are fleeing Twitter for Mastodon, which pretty much invented scrimping on content moderation.
Download the 429th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:06 PM | Permalink | Comments (0)
You heard it on the Cyberlaw Podcast first, as we did a mashup of the week's top stories: Nate Jones commenting on Elon Musk's expected troubles running Twitter at a profit and Jordan Schneider noting the U.S. government's creeping, halting moves to constrain TikTok's sway in the U.S. market. Since Twitter has never made a lot of money, even before it was carrying loads of new debt, and since pushing TikTok out of the U.S. market is going to be an option on the table for years, why doesn't Elon Musk position Twitter to take its place? (Breaking news: Apparently the podcast has a direct line to Elon Musk's mind; he is reported to be entertaining the idea of reviving Vine to compete with TikTok.)
It's another big week for China news, as Nate and Jordan cover the administration's difficulties in finding a way to thwart China's rise in quantum computing and artificial intelligence (AI). Jordan has a good post about the tech decoupling bombshell. But the most intriguing discussion concerns China's remarkably limited options for striking back at the Biden Administration for its harsh sanctions.
Meanwhile, under the heading, When It Rains, It Pours, Elon Musk's Tesla faces a criminal investigation over its self-driving claims. Nate and I are skeptical that the probe will lead to charges, as Tesla's message about Full Self-Driving has been a mix of manic hype and depressive lawyerly caution.
Jamil Jaffer introduces us to the Guacamaya "hacktivist" group whose data dumps have embarrassed governments all over Latin America – most recently with reports of Mexican military arms sales to narco-terrorists. On the hard question – hacktivists or government agents? – Jamil and I lean ever so slightly toward hacktivists.
Nate covers the remarkable indictment of two Chinese spies for recruiting a U.S. law enforcement officer in an effort to get inside information about the prosecution of a Chinese company believed to be Huawei. We pull plenty of great color from the indictment, and Nate notes the awkward spot that the defense team now finds itself in, since the point of the espionage seems to have been, er, trial preparation.
To balance the scales a bit, Nate also covers suggestions that Google's former CEO Eric Schmidt, who headed an AI advisory committee, had a conflict of interest because he also invested in AI startups. There's no suggestion of illegality, though, and it is not clear how the government will get cutting-edge advice on AI if it does not get it from investors and industry experts like Schmidt.
Jamil and I have mildly divergent takes on the Transportation Security Administration's new railroad cybersecurity directive. He worries that it will produce more box-checking than security. My concern is that it mostly reinforces current practice rather than raising the bar.
And in quick updates:
Download the 428th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 10:14 AM | Permalink | Comments (0)