Skating on Stilts -- the award-winning book
Now available in traditional form factor from Amazon and other booksellers.
It's also available in a Kindle edition.
And for you cheapskates, the free Creative Commons download is here.
Posted at 08:50 PM in Random posts | Permalink | Comments (5)
In this bonus episode of the Cyberlaw Podcast, I interview Jimmy Wales, the cofounder of Wikipedia. Wikipedia is a rare survivor from the Internet Hippie Age, coexisting like a great herbivorous dinosaur with Facebook, Twitter, and the other carnivorous mammals of Web 2.0. Perhaps not coincidentally, Jimmy is the most prominent founder of a massive internet institution not to become a billionaire. We explore why that is, and how he feels about it.
I ask Jimmy whether Wikipedia's model is sustainable, and what new challenges lie ahead for the online encyclopedia. We explore the claim that Wikipedia has a lefty bias, and whether a neutral point of view can be maintained by including only material from trusted sources. I ask Jimmy about a concrete example -- what looks to me like an idiosyncratically biased entry in Wikipedia for "Communism."
We close with an exploration of the opportunities and risks posed for Wikipedia by ChatGPT and other large language AI models.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 10:15 AM | Permalink | Comments (0)
This episode of the Cyberlaw Podcast features the second half of my interview with Paul Stephan, author of The World Crisis and International Law. But it begins the way many recent episodes have begun, with the latest AI news. And, since the story is squarely in scope for a cyberlaw podcast, we devote some time to the so-appalling-you-have-to-laugh-to-keep-from-crying story of the lawyer who relied on ChatGPT to write his brief. As Eugene Volokh noted in his post on the story, the AI returned exactly the case law the lawyer wanted – because it made up the cases, the citations, and even the quotes. The lawyer said he had no idea that AI would do such a thing.
I cast a skeptical eye on that excuse, since when challenged by the court to produce the cases he relied on, the lawyer turned not to Lexis-Nexis or Westlaw but to ChatGPT, which this time made up eight cases on point. And when the lawyer asked ChatGPT, "Are the other cases you provided fake?" the model denied it. Well, all right then. Who among us has not asked Westlaw, "Are the cases you provided fake?" and accepted the answer without checking? Somehow, I can't help suspecting that the lawyer's claim to be an innocent victim of ChatGPT is going to get a closer look before this story ends. So if you're wondering whether AI poses existential risk, the answer for at least one law license is almost certainly "yes."
But the bigger stories of the week were the cries from Google and Microsoft leadership for government regulation of their new AI tools. Microsoft's president, Brad Smith, has, as usual, written a thoughtful policy paper on what AI regulation might look like. Jeffery Atik and Richard Stiennon point out that, as usual, Brad Smith is advocating for a process that Microsoft could master pretty easily. Google's Sundar Pichai also joins the "regulate me" party, but a bit half-heartedly. I argue that the best measure of Silicon Valley's confidence in the accuracy of AI is easy to find: Just ask when Google and Apple will let their AI models identify photos of gorillas. Because if there's anything close to an extinction event for those companies, it would be rolling out an AI that once again fails to differentiate between people and apes.
Moving from policy to tech, Richard and I talk about Google's integration of AI into search; I see some glimmer of explainability and accuracy in Google's willingness to provide citations (real ones, I presume) for its answers. And on the same topic, the National Academy of Sciences has posted research suggesting that explainability might not be quite as impossible as researchers once thought.
Jeffery takes us through the latest chapters in the U.S.-China decoupling story. China has retaliated, surprisingly weakly, for U.S. moves to cut off high-end chip sales to China: It has banned sales of memory chips from U.S.-based Micron to critical infrastructure companies. In the long run, the chip wars may be the disaster that Nvidia's CEO foresees. Certainly, Jeffery and I agree, Nvidia has much to fear from a Chinese effort to build a national champion in AI chipmaking. Meanwhile, the Biden administration is building a new model for international agreements in an age of decoupling and industrial policy. Whether the effort to build a China-free IT supply chain will succeed is an open question, but we agree that it marks an end to the old free-trade agreements rejected by both former President Trump and President Biden.
China, meanwhile, is overplaying its hand in Africa. Richard notes reports that Chinese hackers attacked the Kenyan government when Kenya looked like it wouldn't be able to repay China's infrastructure loans. As Richard points out, lending money to a friend rarely works out. You are likely to lose both the money and the friend, even if you don't hack him.
Finally, Richard and Jeffery both opine on Ireland's imposing – under protest – a $1.3 billion fine on Facebook for sending data to the United States despite the Court of Justice of the European Union's (CJEU) two Schrems decisions. We agree that the order simply sets a deadline for the U.S. and the EU to close their third deal to satisfy the CJEU that U.S. law is "adequate" to protect the rights of Europeans. Speaking of which, anyone who's enjoyed my rants about the EU will want to tune in for a June 15 Teleforum in which Max Schrems and I will debate the latest privacy framework. If we can, we'll release it as a bonus episode of this podcast, but listening live should be even more fun!
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:41 PM | Permalink | Comments (0)
This episode of the Cyberlaw Podcast features part 1 of our two-part interview with Paul Stephan, author of The World Crisis and International Law – a deeper and more entertaining read than the title suggests. Paul lays out the long historical arc that links the 1980s to the present day. It's not a pretty picture, and it gets worse as he ties those changes to the demands of the Knowledge Economy. How will these profound political and economic clashes resolve themselves? We'll cover that in part 2.
Meanwhile, in the news roundup, I tweak Sam Altman for his relentless embrace of regulation for his industry during testimony last week in the Senate. I compare him to another Sam with a similar regulation-embracing approach to Washington, but Chinny Sharma thinks it's more accurate to say he was simply doing the opposite of everything Mark Zuckerberg did in past testimony. Chinny and Sultan Meghji unpack some of Altman's proposals, from a new government agency to license large AI models, to safety standards and audits.
I mock Sen. Blumenthal for his complaint that “Europe is ahead of us” in industry-killing regulation. That earns him immortality in the form of a new Cybertoon, below (as before, a hat tip to Bing Image Creator for the graphic help).
Speaking of Cybertoonz, I note that an earlier Cybertoon scooped a prominent Wall Street Journal article covering bias in AI models – by two weeks.
Paul explains the Supreme Court’s ruling on social media liability for assisting ISIS, and why it didn’t tell us anything of significance about section 230.
Chinny and I analyze reports that the FBI misused its access to a section 702 database. All of the access mistakes came before the latest round of procedural reforms, and, on reflection, I think the fault lies less with the FBI and more with DOJ and the DNI, who came up with access rules that all but guaranteed mistakes and didn’t ensure that the database could be searched when security requires it.
Chinny reviews a bunch of privacy scandal wannabe stories.
Download the 458th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 10:53 AM | Permalink | Comments (0)
Maury Shenk opens this episode with an exploration of three efforts to overcome notable gaps in the performance of large language AI models. OpenAI has developed a tool meant to address the models' lack of explainability. It uses, naturally, another large language model to identify what makes individual neurons fire the way they do. Maury is skeptical that this is a path forward, but it's nice to see someone trying. Another effort, Anthropic's creation of an explicit "constitution" of rules for its models, is more familiar and perhaps more likely to succeed. We also look at the use of "open source" principles to overcome the massive cost of developing new models and then training them. That has proved to be a surprisingly successful fast-follower strategy thanks to a few publicly available models and datasets. The question is whether those resources will continue to be available as competition heats up.
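For the curious, here is a minimal sketch of what "using one large language model to explain another" might look like in practice. The activation data, prompt wording, and scoring step are my own illustrative assumptions rather than OpenAI's actual pipeline, and the API call uses the 2023-era OpenAI Python client.

```python
# Sketch: ask GPT-4 to explain what makes a single neuron in a smaller
# model fire. The data and prompt are illustrative assumptions, not
# OpenAI's published pipeline.
import openai  # 2023-era client; reads OPENAI_API_KEY from the environment

# Hypothetical activation records for one neuron:
# (token, activation strength) pairs gathered from the subject model.
activation_records = [
    ("Marvel", 9.2), ("superhero", 8.7), ("Avengers", 8.1),
    ("the", 0.1), ("comics", 7.4), ("Tuesday", 0.0),
]

prompt = (
    "Below are tokens with one neuron's activation strength on each. "
    "In one sentence, what does this neuron appear to detect?\n\n"
    + "\n".join(f"{tok}: {act}" for tok, act in activation_records)
)

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)

# A full pipeline would then score the explanation by having the
# explainer model predict activations on held-out text and comparing
# its guesses to the neuron's real activations.
```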
The European Union has to hope that open source will succeed, because the entire continent is a desert when it comes to institutions making the big investments that look necessary to compete in the field. Despite having no AI companies to speak of (or maybe because of it), the EU is moving forward with its AI Act, an attempt to do for AI what the EU did for privacy with GDPR. Maury and I doubt the AI Act will have the same impact, at least outside Europe. Partly that's because Europe doesn't have the same jurisdictional hooks in AI as in data protection. It is essentially regulating what AI can be sold inside the EU, and companies are likely to be quite willing to develop their products for the rest of the world and bolt on European use restrictions as an afterthought. In addition, the AI Act, which started life as a coherent if aggressive policy about high-risk models, has collapsed into a welter of half-thought-out improvisations in response to the unanticipated success of ChatGPT.
Anne-Gabrielle Haie is more friendly to the EU's data protection policies, and she takes us through a group of legal rulings that will shape liability for data protection violations. She also notes the potentially protectionist impact of a recent EU proposal to say that U.S. companies cannot offer secure cloud computing in Europe unless they partner with a European cloud provider.
Paul Rosenzweig introduces us to one of the U.S. government's most impressive technical achievements in cyberdefense – tracking down, reverse engineering, and then killing Snake, possibly Russia's best hacking tool.
Paul and I chew over China's most recent self-inflicted wound in attracting global investment – the raid on Capvision. I agree that it's going to discourage investors who need information before they part with their cash. But I also offer a lukewarm justification for China's fear that Capvision's business model encourages leaks.
Maury reviews Chinese tech giant Baidu's ChatGPT-like search add-on. I wonder whether we can ever trust any such models for search, given their love affair with plausible falsehoods.
Paul reviews the technology that will be needed to meet what's looking like a national trend to require social media age verification.
Maury reviews the ruling upholding the lawfulness of the UK's interception of Encrochat users. And Paul describes the latest crimeware for phones, this time centered in Italy.
Finally, in quick hits:
Download the 457th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 08:56 PM | Permalink | Comments (0)
The willingness of Lina Khan's FTC to pursue untested -- and sometimes unlikely -- legal theories has been the subject of much sober commentary. But really, what fun is sober commentary? So here's the Cybertoonz take on the FTC's new litigation strategy. And, again, many thanks to Bing's Image Creator, which draws way better than I do.
Posted at 10:34 AM | Permalink | Comments (0)
The "godfather of AI" has left Google, offering warnings about the existential risks for humanity of the technology. Mark MacCarthy calls those risks a fantasy, and a debate breaks out between Mark, Nate Jones, and me. There's more agreement on the White House summit on AI risks, which seems to have followed Mark's "let's worry about tomorrow tomorrow" prescription. I think existential risks are a real concern, but I am deeply skeptical about other efforts to regulate AI, especially for bias, as readers of Cybertoonz know. I revert to my past view that regulatory efforts to eliminate bias are an ill-disguised effort to impose quotas, which provokes lively pushback from both Jim Dempsey and Mark.
Other prospective AI regulators, from the FTC's Lina Khan to the Italian data protection agency, come in for commentary. I'm struck by the caution both have shown, perhaps a sign they recognize the difficulty of applying old regulatory frameworks to this new technology. It's not, I suspect, because Lina Khan's FTC has lost its enthusiasm for pushing the law further than it can reasonably be pushed. This week's examples of litigation overreach at the FTC include a dismissed complaint in a location data case against Kochava and a wildly disproportionate "remedy" for what look like Facebook foot faults in complying with an earlier FTC order.
Jim brings us up to date on a slew of new state privacy laws in Montana, Indiana, and Tennessee. Jim sees them as business-friendly alternatives to the EU's General Data Protection Regulation (GDPR) and California's privacy law.
Mark reviews Pornhub's reaction to the Utah law on kids' access to porn. He thinks age verification requirements are due for another look by the courts.
Jim explains the state appellate court decision ruling that the NotPetya attack on Merck was not an act of war and thus not excluded from its insurance coverage.
Nate and I recommend Kim Zetter's revealing story on the SolarWinds hack. The details help to explain why the Cyber Safety Review Board hasn't examined SolarWinds – and why it absolutely has to. The reason is the same for both: Because the full story is going to embarrass a lot of powerful institutions.
In quick hits,
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 09:54 PM | Permalink | Comments (0)
Lawfare has published an op-ed on this topic by Rick Salgado and me. The gist is that the government has been adapting FISA section 702 to thwart cyberspies and ransomware gangs. We argue that this gives CISOs a stake in the debate over renewing 702:
For Section 702 to be an effective weapon against cyberattacks, CISOs must become informed participants in the debate. If you are one of the many CISOs who think the government should do more to thwart attacks on your networks, your voice in defense of 702 is critical. But you should also hold the government's feet to the fire to make 702's potential real, through effective real-time threat sharing.
Perhaps the easiest way for corporate CISOs to get started is by educating company government affairs staff. Once you've explained what Section 702 could do to protect the company—especially if the government adopts measures to quickly share information with CISOs—you just need to ask that the company's public stance on Section 702 take into account the big contribution the law could make toward protecting the company's own networks.
Posted at 08:29 AM | Permalink | Comments (0)
We open this episode of the Cyberlaw Podcast with some actual news about the debate over renewing section 702 of FISA. That's the law that allows the government to target foreigners for a national security purpose and to intercept their communications in and out of the U.S. A lot of attention has been focused on what happens to those communications after they've been intercepted and stored, with some arguing that the FBI should get a second court authorization -- maybe even a warrant based on probable cause -- to search for records about an American. Michael J. Ellis reports that the Office of the Director of National Intelligence has released new data on such FBI searches. Turns out, they've dropped from almost 3 million last year to nearly 120 thousand this year. In large part the drop reflects the tougher restrictions imposed by the FBI on such searches. Those restrictions were made public this week. It has also emerged that the government is using the database millions of times a year to identify the victims of cyberattacks. That's the kind of problem 702 is made for: some foreign hackers are a national security threat, and their whole business model is to use U.S. infrastructure to communicate (in a very special way) with U.S. networks. So it turns out that all those civil libertarians who want to make it hard for the government to search the 702 database for the names of Americans are actually proposing ways to slow down and complicate the process of warning hacking victims. Thanks a bunch, folks!
Justin Sherman covers China's plans to attack and even take over enemy (i.e., U.S.) satellites. The story is apparently drawn from the Discord leaks, and it has the ring of truth. I opine that DOD has gotten a little too comfortable waging war against people who don't really have an army, and that the Ukraine conflict shows how much tougher things get when there's an organized military on the other side. (Again, credit for our artwork goes to Bing Image Creator.)
Adam Candeub flags the next Supreme Court case to nibble away at the problem of social media and the law. The Court will hear argument next year on the constitutionality of public officials blocking people who post mean comments on the officials' Facebook pages.
Justin and I break down a story about whether Twitter is complying with more government demands now that Elon Musk is in charge. The short answer is yes. This leads me to ask why we expect social media companies to spend large sums fighting government takedown and surveillance requests when it's so much cheaper just to comply. So far, the answer has been that mainstream media and Good People Everywhere will criticize companies that don't fight. But with criticism of Elon Musk's Twitter already turned up to 11, that's not likely to persuade him.
Adam and I are impressed by Citizen Lab's report on search censorship in China. We'd both like to see Citizen Lab do the same thing for U.S. censorship, which somehow gets less attention. If you suspect that's because there's more U.S. censorship than U.S. companies want to admit, here's a bit of supporting evidence: Citizen Lab reports that the one American company still providing search services in China, Microsoft Bing, is actually more aggressive about stifling Chinese political speech than China's main search engine, Baidu. This jibes with my experience, when Bing's Image Creator refused to construct an image using Taiwan's flag. (It was OK using U.S. and German flags, but it also balked at China's.) To be fair, though, Microsoft has fixed that particular bit of overreach: You can now create images with both Taiwanese and Chinese flags.
Adam covers the EU's enthusiasm for regulating other countries' companies. It has designated 19 tech giants as subject to its online content rules. Of the 19, one is a European company, and two are Chinese (counting TikTok). The rest are American.
I introduce a case that I think could be a big problem for the Biden administration as it ramps up its campaign for cybersecurity regulation. Iowa and a couple of other states are suing to block the EPA's effort to impose cybersecurity requirements on public water systems. The problem from EPA's standpoint is that it used an "interpretation" of a statute that doesn't actually say much about cybersecurity.
Michael Ellis and I cover a former NSA director's business ties to Saudi Arabia – and confess our unease at the number of generals and admirals moving from command of U.S. forces abroad to a consulting gig with the countries where they just served. Recent restrictions on the revolving door for intelligence officers get a mention.
Adam covers the Quebec decision awarding $500 thousand to a man who couldn't get Google to consistently delete a false story portraying him as a pedophile and conman.
Justin and I debate whether Meta's Reels feature has what it takes to be a plausible TikTok competitor. Justin is skeptical. I'm a little less so. Meta's claims about the success of Reels aren't entirely persuasive, but I think it's too early to tell.
The D.C. Circuit has killed off the state antitrust case trying to undo Meta's long-ago acquisition of WhatsApp and Instagram. The states waited too long, the court held. That doctrine doesn't apply the same way to the FTC, which will get to pursue the same lonely battle against long odds for years. If the FTC is going to keep sending its lawyers into dubious battles as though they were conscripts in Bakhmut, I ask, when will the Commission start recruiting in Russian prisons?
Well, that was fast. Adam tells us that the Brazil court order banning Telegram because it wouldn't turn over information on neo-Nazi groups has been overturned on appeal. But Telegram isn't out of the woods. The appeal court left in place fines of $200 thousand a day for noncompliance. That seems unsustainable for Telegram.
And in another regulatory walkback, Italy's privacy watchdog is letting ChatGPT return to the country. I suspect the Italian government is cutting a deal to save face as it abandons its initial position that ChatGPT violated data protection principles when it scraped public data to train the model.
Finally, in policies I wish they would walk back, four U.S. regulatory agencies claimed (plausibly) that they had authority to bring bias claims against companies using AI in a discriminatory fashion. Since I don't see any way to bring those claims without arguing that any deviation from proportional representation constitutes discrimination, this feels like a surreptitious introduction of quotas into several new parts of the economy, just as the Supreme Court seems poised to cast doubt on such quotas in higher education.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:58 PM | Permalink | Comments (0)
For those who are interested in the Canadian Ski Marathon, here's my very informal introduction to the 2023 event.
Posted at 09:03 PM | Permalink | Comments (0)
Every government on the planet -- or nearly so -- announced last week an ambition to regulate artificial intelligence. Nate Jones and Jamil Jaffer take us through the announcements. What's particularly discouraging is the lack of imagination, as governments mostly dusted off their old prejudices to handle this new problem. Europe is obsessed with data protection, the Biden administration just wants to talk and wait and talk some more, while China must have asked an AI chatbot to assemble every regulatory proposal for AI ever made by anyone and translate it into Chinese law.
Meanwhile, companies trying to satisfy everyone are imposing weird limits on their AI, such as Microsoft's rule that asking for an image of Taiwan's flag is a violation of its terms of service. (For the record, asking for China's flag is too; asking for an American or German flag is not.)
Matthew Heiman and Jamil take us through the strange case of the airman who leaked classified secrets on Discord. Jamil thinks we brought this on ourselves by not taking past leaks sufficiently seriously.
Jamil and I cover the imminent Montana statewide ban on TikTok. He thinks it's a harbinger; I think it may be a distraction that, like Trump's ban, produces more hostile judicial rulings.
Nate unpacks the California Court of Appeal's unpersuasive opinion on law enforcement use of geofencing warrants.
Matthew and I dig into the unanimous Supreme Court decision that should have independent administrative agencies like the FTC and SEC trembling. The court held that litigants don't need to wend their way through years of proceedings in front of the agencies before they can go to court and challenge the agencies' constitutional status. We both think that this is just the first shoe to drop. The next will be a full-bore challenge to the constitutionality of agencies beholden neither to the executive nor to Congress. If the FTC loses that one, I predict, the old socialist realist statue "Man Controlling Trade" that graces its entry may be replaced by one that both PETA and the Chamber of Commerce would probably like better. My thanks to Bing's Image Creator for the artwork.
In quick hits:
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 08:13 PM | Permalink | Comments (0)
In this episode, we dive into some of the AI safety reports that have been issued in recent weeks. Jeffery Atik first takes us through the basics of attention-based AI, and then into reports on AI safety from OpenAI and Stanford. Exactly what AI safety covers remains opaque (and toxic, in my view, after the ideological purges committed in the name of "trust and safety" by Silicon Valley's content suppression bureaucracies). But there's no doubt that a potential existential issue lurks below the surface of the most ambitious AI projects.
Whether or not ChatGPT's stochastic parroting will ever pose a threat to humanity, Nick Weaver reports, it clearly poses a threat to a lot of people's reputations.
I confess that there's surprisingly little cyberlaw in the biggest intel leak of the last decade. It turns out that leakers can do as much damage as cyberspies, just by folding, carrying, and photographing classified documents. While there's some evidence that the Russian government may have piggybacked on the leak to sow disinformation, Nick says, the real puzzle is the leaker's motivation. That leads us to the question whether being a griefer is grounds for losing your clearance.
Paul Rosenzweig educates us about the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which would empower the administration to limit or ban TikTok. He highlights the most prominent argument against the bill, which is, no surprise, the discretion the act would confer on the executive branch. The bill's authors, Sen. Mark Warner (D-VA) and Sen. John Thune (R-SD), have responded to this criticism, but it looks as though they'll be offering substantive limits on executive discretion only in the heat of Congressional action.
Nick is impressed by the law enforcement operation that shuttered Genesis Market, where credentials were widely sold to hackers. The data seized by the FBI in the operation will pay dividends for years.
I give a warning to anyone who has left a sensitive intelligence job to work in the private sector: If your new employer has ties to a foreign government, the Director of National Intelligence has issued a new directive that (sort of) puts you on notice that you could be violating federal law. The directive has detailed provisions for how the intelligence community will tell its current employees about the new post-employment restrictions, but it offers very little guidance to intelligence community alumni who have already moved to the private sector.
Nick is enthusiastic about the tough tone taken by the Treasury in its report on the illicit finance risk in decentralized finance.
Paul and I cover Utah's bill requiring teens to get parental approval to join social media sites. After twenty years of mocking red states and their Congressional delegations for trying to control the internet's impact on kids, it looks to me as though Knowledge Class parents are getting worried about their own children. When the idea of age-checking internet users gets endorsed by the UK, Utah, and The New Yorker, I suggest, those arguing against the proposal may have a tougher time than they did in the 90s.
And in quick hits:
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:51 PM | Permalink | Comments (0)
Dmitri Alperovitch joins the Cyberlaw Podcast to discuss the state of semiconductor decoupling between China and the West. It's a broad movement, fed by both sides. China has announced that it's investigating Micron to see if its memory chips should still be allowed into China's supply chain (spoiler: almost certainly not). Japan has tightened up its chip-making export control rules, aligning them with U.S. and Dutch restrictions, all with the aim of slowing China's ability to make the most powerful chips. Meanwhile, South Korea is boosting its chipmakers with new tax breaks, and Huawei is reporting a profit squeeze.
The Biden administration spent much of last week on spyware policy, Winnona DeSombre Berners reports. How much it actually accomplished isn't clear. The spyware executive order restricts U.S. government purchases of surveillance tools that threaten U.S. security or that have been misused against civil society targets. And a group of like-minded nations have set forth the principles they think should govern sales of spyware. But it's not as though countries that want spyware are going to have a tough time finding it, I observe, despite all the virtue signaling. Case in point: Iran is getting plenty of new surveillance tech from Russia these days. And spyware campaigns continue to proliferate.
Winnona and Dmitri nominate North Korea for the title "Most Innovative Cyber Power," acknowledging its creative use of social engineering to steal cryptocurrency and gain access to U.S. policy influencers.
Dmitri covers the TikTok beat, including the prospects of the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which he still rates high despite criticism from the right. Winnona and I debate the need for another piece of legislation given the breadth of CFIUS review and International Emergency Economic Powers Act sanctions.
Dmitri and I note the arrival of GPT-4-powered cybersecurity, as Microsoft introduces "Security Copilot." We question whether this will turn out to be a game changer, but it does suggest that bespoke AI tools could play a role in cybersecurity (and pretty much everything else).
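For a sense of what "bespoke AI tools in cybersecurity" might mean in practice, here is a minimal sketch of the generic pattern – first-pass triage of a suspicious log event by a large language model. The prompt, log line, and model choice are my assumptions; this is emphatically not how Security Copilot is actually built.

```python
# Sketch of LLM-assisted log triage -- the generic pattern behind tools
# like Security Copilot, not Microsoft's actual implementation.
import openai  # 2023-era client; reads OPENAI_API_KEY from the environment

# A made-up Windows security log event suggesting password spraying.
suspicious_event = (
    "EventID 4625: An account failed to log on. Account: admin. "
    "Source IP: 203.0.113.7. 412 failures in the last 60 seconds."
)

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": ("You are a SOC analyst. Classify the event, rate its "
                     "severity from 1 to 5, and suggest one next step.")},
        {"role": "user", "content": suspicious_event},
    ],
)
print(resp.choices[0].message.content)

# Anything the model says should be checked against real telemetry
# before anyone acts on it -- the hallucination risk discussed elsewhere
# applies to security advice too.
```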
In other AI news, Dmitri and I wonder at Italy's decision to cut itself off from access to ChatGPT by claiming that it violates Italian data protection law. That may turn out to be a hard case to prove, especially since the regulator has no clear jurisdiction over OpenAI, which is now selling nothing in Italy. In the same vein, there may be a safety reason to be worried by how fast AI is proceeding these days, but the letter proposing a six-month pause for more safety review is hardly persuasive – especially in a world where "safety" seems to mostly be about stamping out bad pronouns.
In news Nick Weaver will kick himself for missing, Binance is facing a bombshell complaint from the Commodity Futures Trading Commission (CFTC). (The Binance response is here.) The CFTC clearly had access to the suicidally candid messages exchanged among Binance's compliance team. I predict criminal indictments in the near future and wonder if the CFTC's taking the lead on the issue has given it a jurisdictional leg up on the SEC in the turf fight over who regulates cryptocurrency.
Finally, we close with a review of a book arguing that pretty much anyone who ever uttered the words "China's peaceful rise" was the victim of a well-planned and highly successful Chinese influence operation.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:17 PM | Permalink | Comments (0)
The Capitol Hill hearings featuring TikTok's CEO lead off episode 450 of the Cyberlaw Podcast. The CEO handled the endless stream of Congressional accusations and suspicion about as well as could have been expected. And it did him as little good as a cynic would have expected. Jim Dempsey and Mark MacCarthy think Congress is moving toward action on Chinese IT products – probably in the form of the bipartisan Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act. But passing legislation and actually doing something about China's IT successes are two very different things.
The FTC is jumping into the policy arena on cloud services, Mark tells us, and it can't escape its DNA; it's dwelling on possible industry concentration and lock-in and not asking much about the national security implications of knocking off a bunch of American cloud providers when the alternatives are largely Chinese cloud providers. The FTC's myopia means that the administration won't get as much help as it could from the FTC on cloud security measures. I reissue my standard objection to the FTC's refusal to follow the FCC's lead in deferring on national security to executive branch concerns. Mark and I disagree about whether the FTC Act requires the Commission to limit itself to consumer protection.
Jim Dempsey reviews the latest AI releases, including Google's Bard, which seems to have many of the same hallucination problems as OpenAI's. Jim and I debate what I consider the wacky and unjustified fascination in the press with catching AI engaging in wrongthink. I believe it's just a mechanism for justifying the imposition of left-wing values on AI output – which already scores left/libertarian on 14 of 15 standard tests for identifying ideological affiliation. Similarly, I question the effort to stop AI from hallucinating footnotes in support of its erroneous facts. If ever there were a case for a separate AI citechecker – generative AI correcting AI errors – the fake citation problem seems like a natural.
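Since we float the citechecker idea, here is a minimal sketch of its deterministic half: pull citation strings out of generated text and verify each against a real case-law database before trusting it. The regex is deliberately simplified, and lookup_citation() is a hypothetical stand-in for a Westlaw- or Lexis-style service, not a real API.

```python
# Sketch of an "AI citechecker": extract citation strings from model
# output and flag any that cannot be verified. The regex is simplified;
# lookup_citation() is a hypothetical stand-in for a real citator API.
import re

# Matches simple reporter citations like "410 U.S. 113" or "558 F.3d 1012".
CITE_RE = re.compile(r"\b\d{1,3}\s+[A-Z][\w.]*\s+\d{1,4}\b")

def lookup_citation(cite: str) -> bool:
    """Hypothetical: query a citator service; True if the case exists."""
    raise NotImplementedError("wire up a real citation database here")

def check_brief(text: str) -> list[tuple[str, bool]]:
    """Return every citation found in the text, paired with a verified flag."""
    return [(c, lookup_citation(c)) for c in CITE_RE.findall(text)]

# Usage: any citation that comes back False gets pulled before filing --
# a step that would have saved at least one law license this spring.
```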
Speaking of Silicon Valley's lying problem, Mark reminds us that social media is absolutely immune for false user speech, even after it gets notice that the speech is harmful and false. He reminds us of his thoughtful argument in favor of tweaking section 230 to more closely resemble the notice and action obligations found in the Digital Millennium Copyright Act (DMCA). I argue that the DMCA has not so much solved the incentives for overcensoring speech as it has surrendered to them.
Jim introduces us to an emerging trend in state privacy law: privacy bills that industry supports. Iowa's new law is the exemplar; Jim questions whether it will satisfy users in the long run.
I summarize Hachette v. Internet Archive, in which Judge John G. Koeltl delivers a harsh rebuke to internet hippies everywhere, ruling that the Internet Archive violated copyright in its effort to create a digital equivalent to public library lending. The judge's lesson for the rest of us: You might think fair use is a thing, but it's not. Get over it.
In quick hits,
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:35 PM | Permalink | Comments (0)
GPT-4's rapid and tangible improvement over ChatGPT has more or less guaranteed that it or a competitor will be built into most new and legacy IT products. Some of those applications will be pointless; but some will change users' world. In this episode, Sultan Meghji, Jordan Schneider, and Siobhan Gorman explore the likely impact of GPT-4, from Silicon Valley to China.
Kurt Sanger joins us to explain why Ukraine's IT Army of volunteer hackers creates political, legal, and maybe even physical risks for the hackers and for Ukraine. This may explain why Ukraine is looking for ways to "regularize" their international supporters, and probably to steer them toward defending Ukraine's infrastructure rather than attacking Russia's.
Siobhan and I dig into the Biden administration's latest target for cybersecurity regulation -- cloud providers. I wonder if there isn't a bit of bait and switch in operation here. The administration seems at least as intent on regulating cloud providers to catch hackers as to improve defenses.
Say this for China: It never lets a bit of leverage go to waste, even when it should. Case in point: To further buttress its nine-dash-line claim to the South China Sea, China is demanding that companies get Chinese licenses to lay submarine cable in the contested territory. That, of course, incentivizes the laying of cables much further from China, out where they'll be harder for the Chinese to deal with in a conflict. That doesn't sound smart, but some Beijing bureaucrat will no doubt claim it as a win for the wolf warriors. Ditto for the Chinese ambassador's response to the Netherlands restricting chip-making equipment sales to China, which boiled down to "We will make you pay for that. We just don't know how yet." The U.S. is not always good at dealing with other countries or the private sector, so it's nice to be competing with a country that is demonstrably worse at it.
The Securities and Exchange Commission has gone from catatonic to hyperactive on cybersecurity. Siobhan notes its latest 48-hour incident reporting requirement and the difficulty of reporting anything useful in that time frame.
Kurt and Siobhan bring their expertise as parents of teens and aspiring teens to the TikTok debate.
I linger over the extraordinary and undercovered mess created by "18F" -- the General Services Administration's effort to bring Silicon Valley's can-do culture to the government's IT infrastructure. It looks like they managed to bring Silicon Valley's arrogance, its political correctness, and its penchant for breaking things but forgot to bring either competence or honesty. Login.gov was 18F's online identity verification service for federal agencies disbursing benefits or otherwise dealing with the public. 18F sold it to a host of federal agencies that wanted to control fraud during the pandemic. But it never delivered the biometric checks that federal standards required. First, 18F lied to its federal customers about how or whether it was using biometrics. When it finally admitted the lie, it brazenly claimed it was not checking because the technology was, wait for it, racially biased. This claim ran counter to the only available evidence (GSA claimed that it did its own bias research, research that was apparently never published). Oh, and it refused to give back the $10 million it charged its victims, arguing that the work it did on the project cost more than it billed them, so they didn't lose anything. Except for the fraud that bad identity checks likely enabled in the middle of COVID handouts, a loss everyone has been decidedly incurious about. And one more thing: Among the victims of 18F's scam was Senator Ron Wyden (Ore.), who touted login.gov and its phony biometric checks as the "good" alternative to ID.me, a private identity-checker that encountered political flak over its contract with the IRS. Bottom line advice for 18F alumni: It's not too late to start scrubbing the entity from your LinkedIn profile.
The Knicks have won some games. Blind pigs have found some acorns. But Madison Square Garden (and Knicks) owner Jimmy Dolan is still pouring good money into his unwinnable but highly entertaining fight to use facial recognition against lawyers he does not want in the Garden. Kurt offers commentary, and probably saves himself the cost of Knicks tickets for all future playoff games.
Finally, in listener feedback, I give Simson Garfinkel's answer to a question I asked (and should have known the answer to) in episode 448.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 09:21 AM | Permalink | Comments (0)
This episode of the Cyberlaw Podcast kicks off with the sudden emergence of a serious bipartisan effort to impose new national security regulations on what companies can be part of the U.S. information technology and content supply chain. Spurred by a stalled CFIUS negotiation with TikTok, Michael Ellis tells us, a dozen well-regarded Democrat and Republican Senators have joined to endorse the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which authorizes the exclusion of companies based in hostile countries from the U.S. economy. The administration has also jumped on the bandwagon, making the adoption of some legislation on the topic more likely than in the past.
Jane Bambauer takes us through the district court decision upholding the use of a "geofence warrant" to identify January 6th rioters. We end up agreeing that this decision (and the context) turned out to be the best possible for the Justice Department, silencing the usual left-leaning critics of law enforcement technological adaptation.
Just a few days after issuing a cybersecurity strategy that calls for more regulation, the administration is delivering what it called for. The Transportation Security Administration (TSA) has issued emergency cybersecurity orders for airports and aircraft operators that, I argue, take the regulatory framework from a few baby steps to a plausible set of minimum requirements. Things look a little different in the water and sewage sector, where the regulator is the Environmental Protection Agency (EPA) – not known for its cybersecurity expertise – and the authority to regulate is grounded, if at all, in very general legislative language. To make the task even harder, EPA is planning to impose its cybersecurity standards using an interpretive rule, against a background in which Congress has done just enough cybersecurity legislating to undermine the case for adopting a broad interpretation.
Jane explores the story that Google was deterred from releasing its impressive AI technology by fear of bad press. That leads us to a meditation on politics inside companies with a guaranteed source of revenue. I offer hope that Google's fears about politically incorrect AI will infect Chinese tech firms.
Jane and I reprise the debate over the United Kingdom's Online Safety Bill and end-to-end encryption, which leads to a poli-sci tour of European policymaking institutions.
The other cyber and national security news in Congress is the ongoing debate over renewal of section 702 of the Foreign Intelligence Surveillance Act (FISA), in which it appears that the FBI scored an own-goal. An FBI analyst did unauthorized searches in the 702 database for intelligence on one of the House intelligence committee's moderates, Rep. Darin LaHood, R-Ill. Details are sketchy, Michael notes, but the search was disclosed by Rep. LaHood, and it is bound to have led to harsh questioning during the FBI director's classified testimony. Meanwhile, at least one member of the Privacy and Civil Liberties Oversight Board is calling for what could be a crippling "reform" of 702 database searches.
Jane and I unpack the controversy surrounding the Federal Trade Commission's investigation of Twitter's compliance with its most recent consent decree. On the law, Elon Musk's Twitter is on its back foot. On the political front, however, the two organizations are more evenly matched. Chances are, both parties are overestimating their own strengths, which could foretell a real donnybrook.
Michael assesses the stories saying that the Biden administration is preparing new rules to govern outbound investment in China. He is skeptical that we'll see heavy regulation in this space.
In quick hits,
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 08:27 PM | Permalink | Comments (0)
I've finished the second in what I hope will be a series of posts exploring the risk of partisan abuse of U.S. intelligence authorities. (For the other, see this opinion piece, coauthored with Michael Ellis.) Section 702 renewal is on the agenda for Congress in 2023, and building support for renewal means taking seriously complaints on the right that intelligence agencies were affected by partisan bias in their treatment of Donald Trump's candidacy, presidency, and staff. This means asking whether past practices created at least an appearance or a risk of partisan abuse -- and thus whether any intelligence reforms should address those risks.
In my latest look at the issue, in Lawfare, I note that "respectable" opinion is finally acknowledging that press stories about a Trump-Russia connection may have been slanted by mainstream media, and I examine the role that media bias played in the early stages of the FBI's investigation of Trump world. A few excerpts below:
The Trump-Russia media saga began with a bit of journalistic malpractice. As the GOP convention was preparing to nominate Trump, Gerth tells us, the Washington Post ran one of the early attacks on Trump for kowtowing to Russian interests: a July 18 opinion column from Josh Rogin headlined, "Trump campaign guts GOP's anti-Russian stance on Ukraine." It was wrong. In Gerth's understated words:
The story would turn out to be an overreach. Subsequent investigations found that the original draft of the platform was actually strengthened by adding language on tightening sanctions on Russia for Ukraine-related actions, if warranted, and calling for "additional assistance" for Ukraine. What was rejected was a proposal to supply arms to Ukraine, something the Obama administration hadn't done.
A critical part of the FBI's case against Page was the claim that his many contacts with Russians were part of what its affidavit called "a well-developed conspiracy of cooperation" between the Trump campaign and the Russian government. That's a remarkable claim, and it naturally gives rise to the question of exactly what the parties did to advance this "well-developed conspiracy." The FBI's answer was the GOP platform change—it was presented as a clear step by Trump's associates to move GOP policy closer to protecting Putin's interests.
As evidence of this crucial element, the affidavit relied on what it called an "article in an identified news organization" (that is, Rogin's op-ed) and "assesse[d] that, following Page's meetings in Russia, Page helped influence [the Republican Party] and [the Trump] campaign to alter their platforms to be more sympathetic to the Russian cause." That "assessment" had no basis in fact or any independent investigation; it relied entirely on the inaccurate opinion pieces in the Post, the Times, and the Atlantic.
I go on to suggest FISA reforms to address the problems surfaced by an FBI performance in the Crossfire Hurricane investigation that was disappointing at best -- and a partisan abuse of FISA at worst. You can read the whole thing here: https://www.lawfareblog.com/vicious-cycle-how-press-bias-fed-fisa-abuse-trump-russia-panic
Posted at 11:40 AM | Permalink | Comments (0)
Our last episode of the Cyberlaw Podcast (No. 446) was a long interview on the U.S. national cybersecurity strategy with Chris Inglis, until recently the national cybersecurity director. So this episode 447 focuses only on the most controversial recommendation in the strategy – liability for certain security flaws. Nick Weaver, Maury Shenk and I explore the pros and cons of what's become known as cybersecurity's third rail.
Turning to the U.K., Maury brings us up to date on the pending Online Safety Bill. Signal has threatened to "walk" out of the U.K. if the bill's protections for children threaten its end-to-end encryption ideology. Far from being deterred, members of Parliament are pushing for a tougher bill, and the government is being forced to accommodate them with tough criminal penalties for Big Tech execs who do not take their obligations sufficiently seriously.
Is the Biden administration getting ready to impose restrictions on outbound U.S. investment in critical Chinese industries? The Wall Street Journal says it is, but Justin Sherman thinks that the administration may just be meeting Congress's requirements for a briefing on the topic. Meanwhile, I wonder whether we've got this tech control thing backwards. If ASPI, the Australian think tank, is right, the U.S. has already lost the lead to China in 37 of 44 critical new technologies, so what we really need to worry about is Chinese restrictions on U.S. access to its technology.
Maury and I explore "woke AI," the notion that the "ethical guardrails" built into ChatGPT and other engines are simply disguised forms of political bias. Maury notes that Justice Gorsuch has questioned whether AI engines might have the protection of section 230. That seems like a legally dubious proposition to us, but don't underestimate the willingness of Big Tech's lawyers to argue the point.
TikTok suffered a setback on the Hill last week, as Republicans passed out of committee a bill effectively banning the app. It was a party line vote, showing how what had been a bipartisan issue is now fraying into partisanship, at least in the House. In the Senate, though, Senate Intelligence Committee Chair Mark Warner is working toward a similar outcome on a bipartisan basis, creating real jeopardy for the company over the next two years. If anyone should be hoping China does not sell arms to Russia, I suggest, it is TikTok.
Speaking of China, the most eye-opening story of the week comes from the Globe and Mail. It breaks a story about how aggressively China tried (and with real success) to tilt the 2021 Canadian national election toward the Liberals, using tactics we are bound to see in other countries. My favorite? Persuading China-friendly companies to hire students from China in Canada and then release them to "volunteer" for the CCP's favored candidate.
In other China news, Maury and Nick note that Elon Musk's remarks lending credibility to the Wuhan lab leak theory drew a brushback pitch from official Chinese sources, and Nick and I puzzle over stories that China plans to launch 13,000 satellites to keep up with Starlink. Meanwhile, Twitter's revenue continues to sink. I think we can see bottom for the company, but Nick thinks not.
Nick overcomes my skepticism about Meta's deployment of a tool for taking down nude photos and worse. It is a variant of existing methods, but it has the advantage of not requiring victims to send their nude photos to Meta.
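The design choice Nick likes is client-side perceptual hashing: only a fingerprint of the image leaves the victim's device, never the photo itself. Here is a minimal sketch of the idea using the open-source imagehash library; the matching logic is mine, and Meta's actual system (built with StopNCII) surely differs in detail.

```python
# Sketch of client-side perceptual hashing: the platform sees only a
# short fingerprint, never the photo. Uses the open-source imagehash
# library; the matching step is illustrative, not Meta's actual system.
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash of an image on the user's own device."""
    return imagehash.phash(Image.open(path))

# Run locally by the victim; only the hex digest is submitted.
submitted = fingerprint("private_photo.jpg")
print(str(submitted))  # e.g. 'c3e0e1f0d8c4b2a1'

# Later, the platform compares hashes of new uploads against stored
# digests. A small Hamming distance counts as a match, since phash
# survives resizing and recompression.
candidate = fingerprint("reuploaded_copy.jpg")
if submitted - candidate <= 5:  # ImageHash subtraction = Hamming distance
    print("Likely match -- block the upload.")
```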
Justin responds to my criticism a few episodes back of Duke's study claiming that Americans' mental health data is being sold by data brokers.
In quick hits,
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 12:06 PM | Permalink | Comments (0)
Chris Inglis was the first National Cyber Director at the White House, after a long and highly successful career at the National Security Agency (ending with seven years as Deputy Director). In his role as Cyber Director, he built the office from a single employee to nearly its planned strength of 100 staffers. He also oversaw the drafting of the first National Cybersecurity Strategy, leaving office just a couple of weeks before the strategy was publicly released.
So what does he think now about the strategy, its reception, and its future? I sat down with him to review the strategy's recommendations – especially the hardest ones. Chris speaks candidly about the need for (and the limitations on) cybersecurity regulation, the wide cybersecurity gaps between different sectors of our economy, the reasons for rethinking liability for cybersecurity failures, and how the Office of the National Cyber Director can work with the Deputy National Security Adviser for Cyber and Emerging Technology.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 11:48 AM | Permalink | Comments (0)
As promised, the Cyberlaw Podcast devoted half of this episode to an autopsy of Gonzalez v. Google LLC, the Supreme Court's first opportunity in a quarter century to construe section 230 of the Communications Decency Act. And an autopsy is what our panel – Adam Candeub, Gus Hurwitz, Michael Ellis and Mark MacCarthy – came to perform. I had already laid out my analysis and predictions in a separate article for the Volokh Conspiracy, contending that both Gonzalez and Google would lose.
All our panelists agreed that Gonzalez was unlikely to prevail, but no one followed me in predicting that Google's broad immunity claim would fall, at least not in this case. The general view was that Gonzalez's lawyer had hurt his case with shifting and opaque theories of liability, and that Google's arguments raised concerns among the Justices, but not enough concern to induce them to write an opinion in such a muddled case.
Evaluating the Justices' performance, Justice Neil Gorsuch's search for a textual answer drew little praise and some derision, while Justice Ketanji Brown Jackson won admiration even from the more conservative panelists.
More broadly, there was a consensus that, whatever the fate of this particular case, the Court will find a way to push the lower courts away from a sweeping immunity for platforms and toward a more nuanced protection. But because returning to the original intent of section 230 is not likely after 25 years of investment based on a lack of liability, this more nuanced protection will not have much grounding in the actual statutory language. Call it a return to the Rule of Reason.
In other news, Michael sums up recent developments in the cyber war between Russia and Ukraine, including imaginative attacks on Russia's communications system. I ask whether these attacks – which are sexy but limited in impact – make cyber the modern equivalent of using motorcycles as a weapon in 1939.
Gus brings us up to date on recent developments in competition law, including a likely Department of Justice challenge to Adobe's $20 billion Figma deal, a new airline merger challenge, the beginnings of opposition to the Federal Trade Commission's (FTC) proposed ban on noncompete clauses, and the third and final nail in the coffin of the FTC's challenge to the Meta-Within merger.
In European cyber news, the European Union is launching a consultation designed to make U.S. platforms pay more of European telecom networks' costs. Adam and Gus note the rent-seeking involved but point out that rent-seeking in U.S. network construction is just as bad; it is simply focused on extracting rents from taxpayers instead of Silicon Valley.
The EU is also getting ready to fix the General Data Protection Regulation (GDPR) -- fix in the sense that gamblers fix a prize fight, as it will make sure Ireland never again wins a fight with the rest of Europe over how aggressively to extract privacy rents from U.S. technology companies.
I am excited about Apple's progress in devising a blood glucose monitor that could go into a watch. Adam and Gus tell me not to get too excited until we know how many roadblocks the Food and Drug Administration (FDA) will erect to the use and analysis of the monitors' data.
In quick hits,
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 11:04 AM | Permalink | Comments (0)
The Supreme Court's oral argument in Gonzalez v. Google left most observers in a muddle over the likely outcome. In three hours of questioning, the Justices defied partisan stereotypes and asked excellent questions, but mostly just raised doubts about how they intended to resolve the case. I had the same problem while listening to the argument for a Cyberlaw Podcast episode (No. 445) that will be mostly devoted to Gonzalez.
But after going back to look at each Justice's questions separately, I conclude that we do in fact have a pretty good idea how the case will turn out: Gonzalez will lose, and so will Google, whose effort to win a broad victory is likely to be killed – and most enthusiastically by the Court's left-leaning Justices.
First, a bit about the case. Gonzalez seeks to hold Google liable because the terror group ISIS was able to post videos on YouTube, and YouTube recommended or at least kept serving those videos to susceptible people. This contributed, the complaint alleges, to a terror attack in Paris that killed Gonzalez's daughter. Google's defense is that section 230 makes it immune from liability as a "publisher" of third-party content, and that organizing, presenting, and even recommending content is the kind of thing publishers do.
I should say up front that I am completely out of sympathy with Google's position. I was around when section 230 was adopted; it was part of the Communications Decency Act, which was designed to protect children from indecent content on the internet. The tech companies, which were far from being Big Tech at the time, hated the decency part of the bill but couldn't beat it. Instead, they tried to turn the decency lemon into lemonade by asking for relief from a recent defamation ruling holding that online services that excluded certain content were the equivalent of publishers under defamation law and thus liable for any defamatory third-party content they distributed. Services like AOL and Compuserve pointed out the irony that they were being punished for their effort to build family-friendly online communities -- the opposite of what Congress wanted. "If you want us to exclude indecent content," they argued to Congress, "you have to immunize us from publisher liability when we do that." That was and is a compelling argument, but only for undoing publisher liability under defamation law. To my mind, that's exactly what Congress did when it said, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
But that's not how the courts have read section 230. Seduced by a transformative technology and by aggressive, effective advocacy, the courts read this language to immunize online providers for doing anything that publishers can be said to do. This immunity goes far beyond defamation, as the Gonzalez case shows. There, Google said it should be immune because deciding what content to show or even recommend to users is the kind of thing a publisher does. Of course, carried to its logical extreme, this means that what are now some of the richest companies in the world cannot be held liable even if they deliberately serve how-to-kill-yourself videos to the depressed, body-shaming videos to the anorexic, and ISIS videos to extremists.
So, why not just correct the error, narrow the statutory interpretation to its original purpose, and let Congress actually debate and enact any other protections Big Tech needs? Because, we're told, these companies have built their massively profitable businesses on top of the immunity they sold to the courts. To change now, after twenty-six years of investment, would be disruptive – perhaps even catastrophic. That in a nutshell is the dilemma on whose horns the Court twisted for three hours.
It is generally considered professional folly for appellate lawyers to predict the outcome of a case based on the oral argument. In fact, this is only sometimes true. Judges, and Justices even more so, usually want feedback from counsel on the outcome they're considering. It's hard to get that feedback without telling counsel what they have in mind. That said, some judges believe in hiding the ball, and some just like to ask tough questions. And in complex cases, sometimes the Justices' initial inclinations yield to advocacy in conference or in drafts circulated by other Justices.
That latter fate could be in store for the Gonzalez case. So there's a good chance I'll end up guessing wrong about the outcome. But considering how muddled the argument seemed, I was surprised how much can be learned by going back through each Justice's questions to see what each of them thinks the case is about. It turns out that most of them were very clear about what rules of decision they were contemplating.
Justice Gorsuch. Let's start with Justice Gorsuch. I believe we know what his opinion will say. He laid his theory out for every advocate. He will again indulge his bent for finding the answer in the text of the statute. Briefly, he noted that Congress defined the entities eligible for immunity to include providers of software to "filter, screen, allow or disallow content" and to "pick, choose, analyze, or digest content." Bingo, he seemed to say: there's your textualist solution to the case. Congress told us what publishers do and thus what should be immune. No one, with the possible exception of Justice Kavanaugh, found this particularly compelling, mainly because it's an extraordinarily broad immunity, protecting even platforms that boost content for the worst of motives – to harm competitors, say, or to denigrate particular political candidates or ethnic groups. (The notion has serious technical flaws as well, but I'll pass over them here.)
Justice Kavanaugh. Justice Gorsuch's embrace of broad immunity suggests that he sees this case through a business conservative's eyes: The less liability the state imposes on business, the better. In this, he was joined most clearly by Justice Kavanaugh, who reverted several times to the risk of economic disruption if a narrower reading of section 230 were adopted.
Chief Justice Roberts. If you're looking for a third business conservative on this Court, Chief Justice Roberts is the most likely candidate. And he is clearly sympathetic to Big Tech's concerns about unleashing torrents of litigation; he's reluctant to impose liability for content selection where the criteria for selection are generally applicable (e.g., the site just gives the user what she asks for). But he also recognizes that it's the platform that has the power to select what the user sees, and he wonders why the platform shouldn't be responsible for how it uses that power.
The Chief Justice's qualms about a sweeping immunity, however, are muted. They are expressed much more directly by the Justices on the left.
Justice Sotomayor. Justice Sotomayor returns time and again to the idea that the power to select and recommend can be abused – by encouraging discrimination on racial or ethnic grounds, for example. Her hypotheticals include "an Internet provider who was in cahoots with ISIS" to encourage terrorism and a dating app "that won't match black people to white people." She's not willing to narrow the immunity back to what Congress probably intended in 1996 (spoiler: none of the Justices is), but she bluntly tells the Solicitor General's lawyer what she wants: "Let's assume we're looking for a line because it's clear from our questions we are, okay?" She wants an immunity for what could be called "good" selection criteria – those that are neutral, unbiased, or general-purpose – but not for "bad" criteria.
Justice Jackson. If anyone supports the idea of returning to the 1996 intent, it's Justice Jackson, who tells Google's lawyer that "you're saying the protection extends to Internet platforms that are promoting offensive material…. exactly the opposite of what Congress was trying to do in the statute." At another point, she signals clearly that she disagrees with the Google position that any selection criteria it chooses to use are immune from suit. In another colloquy, she downplays the risk of business disruption as just a "parade of horribles." Not all of her questions sound this theme, but there are enough to conclude that she's close to Justice Sotomayor in her skepticism about the sweeping immunity Big Tech wants.
Justice Kagan. Justice Kagan also sees that section 230 doesn't really fit the modern internet. The Court's job, she seems to say, is "to figure out how ... this statute which was a pre-algorithm statute applies in a post-algorithm world." She thinks the plaintiff's reading could "send us down the road such that 230 really can't mean anything at all." She's daunted by the difficulty of refashioning the statute to avoid over-immunizing Big Tech:
I don't have to accept all Ms. Blatt's "the sky is falling" stuff to accept something about, boy, there is a lot of uncertainty about going the way you would have us go, in part, just because of the difficulty of drawing lines in this area and just because of the fact that, once we go with you, all of a sudden we're finding that Google isn't protected. And maybe Congress should want that system, but isn't that something for Congress to do, not the Court?
At the same time, she sees, the immunity Google wants would allow Google to knowingly boost a false and defamatory video and to refuse to take it down. She asks, "Should 230 really be taken to go that far?" I'm guessing that she thinks the answer is "no" and that she, like Justice Sotomayor, is just looking for a line that gets her there. For purposes of the count, let's put her in the middle with the Chief Justice.
So far, the Justice-by-Justice breakdown for giving Google the sweeping immunity it wants is a 2-2-2 split between the left and right with the Chief Justice and Justice Kagan in the middle. That sounds familiar. But it's about to get weird. That's because the three remaining Justices are at least as much social as business conservatives. And Big Tech has a long track record of contempt for social conservatives.
Justice Thomas. You'd think that Justice Thomas, who's been grumbling about section 230 for this reason for years, would have been an easy vote against Google. He clearly has doubts about Google's sweeping claim of immunity for any selection criteria. At the same time, his questions show some sympathy for protecting Google's selection criteria, as long as they're generic and neutral. I still think he'll be a vote to limit the immunity, assuming someone finds a dividing line between good selection criteria and bad.
Justice Alito. Justice Alito is the only Justice to show a hint of conservative resentment at the rise of Big Tech censorship in recent years. He notes that Google could label and preferentially distribute what it considers "responsible" news sources, and he questions why such curation should be immune from liability: "That's not YouTube's speech?" he asks. "The fact that YouTube put those at the top, so those are the ones I'm most likely to look at, that's not YouTube's speech?" He also raises the specter of deliberate distribution of bad content: "So suppose the competitor of a restaurant posts a video saying that this rival restaurant suffers from all sorts of health problems, it -- it creates a fake video showing rats running around in the kitchen, it says that the chef has some highly communicable disease and so forth, and YouTube knows that this is defamatory, knows it's -- it's completely false, and yet refuses to take it down. They could not be civilly liable for that? … You really think that Congress meant to go that far?"
And, in another sign that Big Tech may have overplayed its claim that only a sweeping immunity protects the internet from apocalypse, his last question is "Would … Google collapse and the Internet be destroyed if YouTube and, therefore, Google were potentially liable for posting and refusing to take down videos that it knows are defamatory and false?"
By my count, that leaves the Court roughly divided 2-2-4 on whether to give Google a sweeping immunity, with two business conservatives all in for Google (Gorsuch, Kavanaugh), two Justices waffling (Roberts, Kagan), and what might be called a "populistish" grouping of Sotomayor, Jackson, Alito, and (probably) Thomas.
Justice Barrett. Is Justice Barrett a fifth vote for that unlikely left-right alignment? Most likely. Like several of the other Justices, she was puzzled and put off by some of the idiosyncratic arguments made by the lawyer for Gonzalez. She also showed considerable interest, for reasons I don't understand, in making sure section 230 protects ordinary users for their likes and retweets. But when Google's lawyer rose to speak, Justice Barrett rolled out a barrage of objections like those we heard from the other four immunity skeptics: Do you really, she asked, expect us to immunize a platform that deliberately boosts defamation, terrorism, or racism?
So there it is, by my seat-of-the-pants count -- somewhere between five and seven votes to cut back the broad immunity that a generation of Big Tech lawyers built in the lower courts.
And what about the folly of predicting outcomes from argument? Well, it's hard to deny that I'm running a pretty high risk of ending up with egg on my face. There is a real possibility that the Court will dump the case without ruling on Google's immunity. The lawyer for Gonzalez did himself no favors by shifting positions on his way to oral argument. He ended up claiming that thumbnail extracts of videos were really Google's content, not third-party content, and that simply serving users more videos like the last one they watched was a "recommendation" and thus Google's own speech. The Justices struggled just to understand his argument, and they may be tempted to dump the case for that reason, ruling that immunity is unnecessary because Google faces no underlying liability for aiding and abetting ISIS (the question presented in a companion case argued the day after Gonzalez).
But dumping the case without a decision is not a neutral act. It leaves in place a raft of immunity-maximizing cases from the lower courts -- precedents that at least seven Justices find troubling. That law won't go away on its own, so I'm guessing they'll feel duty-bound to offer some corrective guidance on the scope of 230.
If they do, I bet that six or seven Justices will decisively reject the maximalist immunity sought by Google. They may have trouble tying that rejection to the text of the law (as do the immunity maximalists), and whatever limits they impose on section 230 (e.g., immunity only for "reasonable" or "neutral" content selection) could turn out to be unpersuasive or unstable. But that just means that Big Tech, which won its current legal protection by nuking liability from orbit, will have to win some of its protection back by engaging in house-to-house legal combat.
If so, the popcorn's on me.
Posted at 08:59 AM | Permalink | Comments (0)
This bonus episode offers an interview with Bruce Schneier, the prolific security guru, about his latest book, A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back. As usual with Bruce's books, it is a good read, technically up to date and approachable. Much of the book, and of the interview, explores Bruce's view that hacking – subverting the intent of a system of rules without actually breaking the rules – has much in common with lawyering. Finding ways to subvert a Microsoft program, Bruce argues, is not much different from exploiting loopholes in airline mileage programs or finding ways to count cards at a casino without letting the casino know what you're doing. And those exploits are not really so different from what lawyers do when they hunt for unexpected tax loopholes to shelter income.
The analogy only goes so far, as Bruce admits. It is often hard to actually define the "intent" that is being subverted, or to draw a line between subversion within the rules and just plain rule-breaking. And hacking, for all its underdog-beats-The-Man romance, is just a tool, available to everyone, including The Man. The world's best computer hackers mostly work for governments or corporations these days, and the same is true for the world's best legal hackers.
Still, exploring the parallels opens new ways of thinking for those of us who work at the intersection of tech and law. Among the possibilities: software programs that diagram statutory and regulatory codes, and artificial intelligence that may soon be red-teaming legislation in real time. A toy sketch of the first idea follows.
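To make the "diagramming" idea concrete, here is a minimal sketch, entirely my own illustration rather than any existing product: it scans abbreviated, hypothetical section text for cross-references and builds a directed graph of which provisions cite which -- the raw material for a statutory diagram.

```python
# Toy statutory "diagrammer" (my own sketch, not an existing tool):
# find cross-references in section text and print the citation graph.
import re
from collections import defaultdict

# Hypothetical, heavily abbreviated section text keyed by section number
sections = {
    "230(c)(1)": "No provider shall be treated as the publisher of "
                 "information provided by another, see subsection (f)(3).",
    "230(c)(2)": "No provider shall be held liable for actions taken "
                 "in good faith as described in subsection (c)(1).",
    "230(f)(3)": "The term 'information content provider' means any "
                 "person responsible for the creation of information.",
}

# Matches references like "subsection (c)(1)"
ref_pattern = re.compile(r"subsection \(([a-z])\)\((\d+)\)")

graph = defaultdict(list)
for sec, text in sections.items():
    for letter, num in ref_pattern.findall(text):
        graph[sec].append(f"230({letter})({num})")

for src, targets in sorted(graph.items()):
    print(f"{src} -> {', '.join(targets)}")
# 230(c)(1) -> 230(f)(3)
# 230(c)(2) -> 230(c)(1)
```

Real statutory text is far messier than this, of course; the point is only that citation structure is mechanical enough for software to extract and draw.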
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 09:01 AM | Permalink | Comments (0)
This episode of the Cyberlaw Podcast opens with a look at some genuinely weird AI behavior, first by the Bing AI chatbot – dark fantasies, professions of love, and lies on top of lies – and then by Google's AI search bot. Chinny Sharma and Nick Weaver explain how we ended up with AI that is better at BS'ing than at accurately conveying facts. This leads me to propose a scheme to ensure that China's autocracy never gets its AI capabilities off the ground.
One thing that AI is creepily good at is faking people's voices. I try out ElevenLabs' technology in the first advertisement ever to run on the Cyberlaw Podcast.
The upcoming fight over renewing section 702 of FISA has focused Congressional attention on FBI searches of 702 data, Jim Dempsey reports. That leads us to the latest compliance assessment of how agencies are handling 702 data. Chinny wonders whether the only way to save 702 will be to cut off the FBI's access – at great cost to our unified approach to terrorism intelligence, I point out. I also complain that the compliance data is older than dirt. Jim and I come together around the need to provide more safeguards against political bias in the intelligence community.
Nick brings us up to date on cyber issues in Ukraine, as summarized in a good Google report. He puzzles over Starlink's effort to keep providing service to Ukraine without assisting offensive military operations.
Chinny does a victory lap over reports that the national cyber strategy will recommend imposing liability on the companies that distribute tech products – a recommendation she made in a paper released last year. I wonder why Google thinks this is good for Google.
Nick introduces us to modern reputation management. It involves a lot of fake news and bogus legal complaints. The Digital Millennium Copyright Act (DMCA) and European Union (EU) and California privacy law are the censor's favorite tools. What is remarkable to my mind is that a business taking so much legal risk charges its customers so little.
Jim and Chinny cover the charm offensive being waged in Washington by TikTok's CEO and the broader debate over China's access to the personal data of Americans, including health data. Jim cites a recent Duke study, which I complain is not clear about when the data being sold is individual and when it is aggregated. Nick reminds us all that aggregate data is often easy to individualize.
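Nick's point is easy to demonstrate. Here is a toy sketch, with entirely made-up data and my own column names, of the classic re-identification move: join a "de-identified" dataset against a public record on quasi-identifiers like ZIP code, birth date, and sex, and the names come right back.

```python
# Toy re-identification demo (hypothetical data, my own sketch):
# joining on quasi-identifiers re-attaches names to "anonymous" records.
import pandas as pd

# "De-identified" health records of the kind a broker might sell
health = pd.DataFrame({
    "zip":       ["20001", "20001", "22314"],
    "birthdate": ["1980-03-02", "1992-07-15", "1980-03-02"],
    "sex":       ["F", "M", "F"],
    "diagnosis": ["depression", "diabetes", "anxiety"],
})

# A public dataset with names, such as a voter roll
voters = pd.DataFrame({
    "name":      ["Alice Smith", "Bob Jones"],
    "zip":       ["20001", "20001"],
    "birthdate": ["1980-03-02", "1992-07-15"],
    "sex":       ["F", "M"],
})

# An inner join on the quasi-identifiers does the damage
reidentified = health.merge(voters, on=["zip", "birthdate", "sex"])
print(reidentified[["name", "diagnosis"]])
#           name   diagnosis
# 0  Alice Smith  depression
# 1    Bob Jones    diabetes
```

The combination of ZIP code, birth date, and sex is famously close to unique for most Americans, which is why "aggregated" in a broker's marketing copy deserves skepticism.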
Finally, we make quick work of a few more stories:
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 08:34 PM | Permalink | Comments (0)
The latest episode of The Cyberlaw Podcast gets a bit carried away with the China spy balloon saga. Guest host Brian Fleming, along with guests Gus Hurwitz, Nate Jones, and Paul Rosenzweig, share insights (and bad puns) about the latest reporting on the electronic surveillance capabilities of the first downed balloon, the Biden administration’s “shoot first, ask questions later” response to the latest “flying objects,” and whether we should all spend more time worrying about China’s hackers and satellites.
Gus shares a few thoughts on the State of the Union address and the brief but pointed calls for antitrust and data privacy reform. Sticking with big tech and antitrust, Gus recaps a significant recent loss for the FTC and discusses what may be on the horizon for FTC enforcement later this year.
Pivoting back to China, Nate and Paul discuss the latest reporting on a forthcoming (at some point) executive order intended to limit and track U.S. outbound investment in certain key aspects of China’s tech sector. They also ponder how industry may continue its efforts to narrow the scope of the restrictions and whether Congress will get involved. Sticking with Congress, Paul takes the opportunity to explain the key takeaways from the not-so-bombshell House Oversight Committee hearing featuring former Twitter executives.
Gus next describes his favorite ChatGPT jailbreaks and a costly mistake for an AI chatbot competitor during a demo.
Paul recommends a fascinating interview with Sinbad.io, the new Bitcoin mixer of choice for North Korean hackers, and reflects on the substantial portion of the DPRK’s GDP attributable to ransomware attacks.
Finally, Gus questions whether AI-generated “Nothing, Forever” will need to change its name after becoming sentient and channeling Dave Chappelle.
To wrap things up in the week’s quick hits, Gus briefly highlights where things stand with Chip Wars: Japan edition, and Brian covers coordinated U.S./U.K. sanctions against the Trickbot cybercrime group, confirmation that Twitter’s sale will not be investigated by CFIUS, and the latest on SEC v. Covington.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter.
Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:54 PM | Permalink | Comments (0)
This episode of the Cyberlaw Podcast is dominated by stories about possible cybersecurity regulation. David Kris points us first to an article by the leaders of the Cybersecurity and Infrastructure Security Agency (CISA) in Foreign Affairs. Jen Easterly and Eric Goldstein seem to take a tough line on "Why Companies Must Build Safety Into Tech Products." But for all the tough language, one word, "regulation," is entirely missing from the piece. Meanwhile, the cybersecurity strategy that the White House has reportedly been drafting for months seems to be hung up over how enthusiastically to demand regulation.
All of which seems just a little weird in a world where Republicans hold the House. Regulation is not likely to be high on the GOP to-do list, so calls for tougher regulation are almost certainly more symbolic than real.
Still, this is apparently the week for symbolic calls for regulation. David also takes us through a National Telecommunications and Information Administration (NTIA) report on the anticompetitive impact of Apple's and Google's control of mobile app markets. The report points to many problems and opportunities for abuse inherent in the two companies' headlock on what apps can be sold to phone users. But, as Google and Apple are quick to point out, the stores do play a role in regulating app security, so breaking the headlock could be bad for cybersecurity. In any event, practically every recommendation for action in the report is a call for Congress to step in – and thus almost certainly a nonstarter for reasons already given.
Not to be outdone on the phony regulation beat, Jordan Schneider and Sultan Meghji explore some of the policy and regulatory proposals for AI that have been inspired by the success of ChatGPT. The EU's AI Act is coming in for lots of attention, mainly from parts of the industry that want to be exempted. Sultan and I trade observations about who'll be hollowed out first by ChatGPT, law firms or investment firms.
In other news, Sultan also tells us why the ION ransomware hack matters. Jordan and Sultan find a cybersecurity angle to The Great Chinese Balloon Scandal of 2023. And I offer an assessment of Matt Taibbi's story about the Hamilton 68 "Russian influence" reports. If you have wondered what the fuss was about, do not expect mainstream media to tell you; the media does not come out looking good in this story. Unfortunately for Matt Taibbi, he doesn't look much better than the reporters his story criticizes. David thinks it's a balanced and moderate take on the story, for which I offer an apology and a promise to do better next time.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:23 PM | Permalink | Comments (0)