Skating on Stilts -- the award-winning book
Now available in traditional form factor from Amazon and other booksellers.
It's also available in a Kindle edition.
And for you cheapskates, the free Creative Commons download is here.
Posted at 08:50 PM in Random posts | Permalink | Comments (5)
The big cyberlaw story of the week is the Justice Department's antitrust lawsuit against Google and the many hats the company wears in the online ad ecosystem. Lee Berger explains the Justice Department's theory, which is not dissimilar to the Texas Attorney General's two-year-old lawsuit. When you have lost both the Biden administration and the Texas Attorney General, I suggest, you cannot look too many places for friends – and certainly not to Brussels, which is also pursuing similar claims of its own. So what is the Justice Department's late-to-the-party contribution to this dogpile? At least two things, Lee suggests: a jury demand that will put all those complex Borkian consumer-welfare doctrines in front of a Northern Virginia jury and a "rocket docket" that will allow Justice to catch up with and maybe lap the other lawsuits against the company. This case looks as though it will be long and ugly for Google, unless it turns out to be short and ugly. Still, Mark reminds us, for Justice, finding an effective remedy may be harder than proving anticompetitive conduct.
Nathan Simington assesses the administration's announced deal with Japan and the Netherlands to enforce its tough decoupling policy against China's semiconductor industry. Details are still a little sparse, but some kind of deal was essential if the U.S. campaign was to work. For Japan and the Netherlands, though, the details are critical, and any arrangement will require flexibility and sophistication on the part of the Commerce Department if it is to work in the long run.
Megan Stifel and I chew over the Justice Department/FBI victory lap over putting a stick in the spokes of the Hive ransomware gang's infrastructure. We agree that the lap was warranted. Among other things, the FBI handled its access to decryption keys with more care than in the past, providing them to many victims before taking down a big chunk of the gang's tools. The bad news? Nobody was arrested, and the infrastructure can probably be reconstituted in the near term.
Here is an evergreen headline: "Facebook is going to reinstate Donald Trump's account." That could be the opening line of any story in the last few months, and that is probably Facebook's strategy – a long, teasing dance of seven veils so that, by the time Trump starts posting, it will be old news. If that is Facebook's PR strategy, it is working, Mark MacCarthy reports. Nobody much cares, and they certainly do not seem to be mad at Facebook. So the company is out of the woods, but for the ex-President it's a blow to the ego that is bound to sting.
Megan has more good news on the cybercrime front: The FBI identified the North Korean hacking group that stole $100 million in crypto last year – and may have kept the regime from getting its hands on any of the funds.
Nathan unpacks two competing news stories. First, "OMG, ChatGPT will help bad guys write malware." Second: "OMG, ChatGPT will help good guys find and fix security holes." He thinks they are both a bit overwrought, but maybe a glimpse of the future.
Mark and Megan explain TikTok's new offer to Washington. Megan also covers Congress's "TayTay v. Ticketmaster" hearing after disclosing her personal conflict of interest.
Nathan answers my question: how can the FAA be so good at preventing airliners from crashing and so bad at preventing its systems from crashing? The ensuing discussion turns up more on-point bathroom humor than anyone would have expected.
In quick hits, I cover three stories:
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 08:40 PM | Permalink | Comments (0)
We kick off a jam-packed episode of the Cyberlaw Podcast by flagging the news that ransomware gangs' revenue fell substantially in 2022. There is lots of room for error in that Chainalysis finding, Nick Weaver notes, but the drop is large. Among the reasons to think it might be real is a growing resistance to paying ransom on the part of companies and their insurers, who are especially concerned about liability for payments to sanctioned ransomware gangs. I also note a fascinating additional insight from Jon DiMaggio, who infiltrated the Lockbit ransomware gang. He says that when Lockbit threatened to release Entrust's internal files, the company responded with days of Distributed Denial of Service (DDoS) attacks on Lockbit's infrastructure – and never did pay up. That would be a heartening display of courage on the part of corporate victims. It would also be a felony, at least according to the conventional wisdom that condemns hacking back. So I cannot help thinking there is more to the story. Like, maybe the Canadian Security Intelligence Service is joining the Australian Signals Directorate in releasing the hounds on ransomware gangs. I look forward to hearing more about this undercovered disclosure.
Gus Hurwitz offers two explanations for the Federal Aviation Administration system outage, which grounded planes across the country. There's the official version and the conspiracy theory, as with everything else these days. Nick breaks down the latest cryptocurrency failure; this time it's Genesis. Nick's not a fan of its prepackaged bankruptcy. And Gus and I puzzle over the Federal Trade Commission's determination to write regulations to outlaw most non-compete clauses.
Justin Sherman, a first-timer on the podcast, covers recent research showing that alleged Russian social media election interference had no meaningful effect on the 2016 U.S. election. That spurs an outburst from me about the cynical scam that the "Russia, Russia, Russia" narrative became – a 2016 version of election denial for which the press and the left have never apologized.
Nick explains the looming impact of Twitter's interest payment obligation. We're going to learn a lot more about Elon Musk's business plans from how he deals with that crisis than from anything he's tweeted in recent months.
It does not get more cyberlawyerly than a case the Supreme Court will be taking up this term – Gonzalez v. Google. This case will put Section 230 squarely on the Court's docket, and the amicus briefs can be measured by the shovelful. The issue is whether YouTube's recommendation of terrorist videos can ever lead to liability – or whether any judgment is barred by Section 230. Gus and I are on different sides of that question, but we agree that this is going to be a hot case, a divided Court, and a big deal.
And, just to show that our foray into cyberlaw was no fluke, Gus and I also predict that the United States Court of Appeals for the District of Columbia Circuit is going to strike down the Allow States and Victims to Fight Online Sex Trafficking Act, also known as FOSTA-SESTA – the legislative exception to Section 230 that civil society loves to hate. Its prohibition on promotion of prostitution may fall to First Amendment concerns on the court, but the practical impact of the law may remain.
Next, Justin gives us a quick primer on the national security reasons for regulation of submarine cables. Nick covers the leak of the terror watchlist thanks to a commuter airline's sloppy security. Justin explains TikTok's latest charm offensive in Washington.
Finally, I provide an update on the UK's online safety bill, which just keeps getting tougher, from criminal penalties, to "ten percent of revenue" fines, to mandating age checks that may fail technically or drive away users, or both. And I review the latest theatrical offering from Madison Square Garden – "The Revenge of the Lawyers." You may root for the snake or for the scorpions, but you will not want to miss it.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:54 PM | Permalink | Comments (0)
The Jan. 6 committee exposed norm-breaking in surprising places. Take the conduct of Chairman of the Joint Chiefs of Staff Gen. Mark Milley, who abused the classified information system to hide information about how the Pentagon reacted to the Capitol riot. In my latest piece for Lawfare, I argue that Gen. Milley overclassified information in violation of the relevant executive order. Worse, his decision may have prejudiced some of the Jan. 6 defendants and denied FOIA access to the most important DOD documents about that day. The press and Congress bitterly criticized a similar handling of the Trump-Zelensky phone transcript, but they have been silent about Gen. Milley. Excerpts from the Lawfare piece are below.
Here's Gen. Milley's candid statement about what he did:
The document—I classified the document at the beginning of this process by telling my staff to gather up all the documents, freeze-frame everything, notes, everything and, you know, classify it. And we actually classified it at a pretty high level, and we put it on JWICS, the top secret stuff. It's not that the substance is classified. It was[.] I wanted to make sure that this stuff was only going to go [to] people who appropriately needed to see it, like yourselves. We'll take care of that. We can get this stuff properly processed and unclassified so that you can have it … for whatever you need to do.
In short, Milley overclassified those records to keep them from leaking—to make sure that the Pentagon and those investigating Jan. 6 would control the story.
By now, this story should sound eerily familiar. In 2019, President Trump held a phone call with President Volodymyr Zelenskyy of Ukraine. The call was immediately controversial inside the administration, and White House staff quickly restricted access to the call's transcript by moving it to a server designed to protect highly classified intelligence activities. That move attracted press attention that was harsh, breathless, and extensive—even though such transcripts are usually classified, just not at a level that justifies use of the intelligence activity server. Former CIA Director Leon Panetta said that the use of a top-secret system was "clearly an indication that they were at least thinking of a cover-up if not, in fact, doing that. It's a very serious matter because this is evidence of wrongdoing." After considerable delay, the Trump White House released the transcript publicly, and one official acknowledged that it had been a mistake to move the transcript to a highly classified system.
That was the right answer. Overclassifying government records because of their political sensitivity is a direct violation of the executive order that governs classification. The order, signed by President Obama, says, "In no case shall information be classified in order to prevent or delay the release of information that does not require protection in the interest of national security."
This is an important principle. Classifying information because it's politically sensitive, however appealing it may be to government officials in the moment, undermines the public trust on which the entire system of national security secrecy rests.
But even setting aside the principle of the thing, overclassification is not a victimless crime. Take Milley's decision to withhold records of the Pentagon's response to Jan. 6. It raises serious questions that the chairman wasn't asked in his testimony and that haven't been answered since.
I frequently defend broad national security authorities for government. That's because I've seen some of the threats the government faces. But if it wants to keep those authorities in a time of deepening public suspicion, the government must show that it has internal checks and real accountability to prevent abuse.
Posted at 04:03 PM | Permalink | Comments (0)
In this bonus episode of the Cyberlaw Podcast, I interview Andy Greenberg, long-time WIRED reporter, about his new book, Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency.
This is Andy's second author interview on the Cyberlaw Podcast. He was also interviewed about an earlier book, Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers. They are both excellent cybersecurity stories.
Tracers in the Dark is a kind of sequel to the Silk Road story, which ended with Ross Ulbricht, aka the Dread Pirate Roberts, pinioned to the table in a San Francisco library, with his laptop open to an administrator's page on the Silk Road digital black market. At that time, cryptocurrency backers believed that Ulbricht's arrest was a fluke, and that, properly implemented, bitcoin was anonymous and untraceable. Greenberg's book tells, story by story, how that illusion was trashed by smart cops and techies (including our own Nick Weaver!) who showed that the blockchain's "forever" records make it almost impossible to avoid attribution over time.
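For readers who want a feel for why the blockchain's "forever" records matter, here is a minimal sketch of the one-hop graph walk at the heart of transaction tracing. It assumes blockchain.info's public rawaddr JSON endpoint and its response shape; real tracing outfits like Chainalysis layer clustering heuristics and exchange attribution on top of walks like this.

```python
import requests

def downstream_addresses(address: str, max_txs: int = 10) -> set[str]:
    """Collect addresses that received coins from `address` (one hop)."""
    # Every bitcoin transaction is public forever, so this needs no permission.
    url = f"https://blockchain.info/rawaddr/{address}"
    data = requests.get(url, timeout=30).json()
    receivers = set()
    for tx in data.get("txs", [])[:max_txs]:
        for out in tx.get("out", []):
            addr = out.get("addr")
            if addr and addr != address:
                receivers.add(addr)
    return receivers

# Repeating the walk hop by hop builds the transaction graph investigators
# follow until the coins land at an exchange that knows its customers.
```

That last comment is the whole game: the trail only has to reach one regulated exchange with know-your-customer records, and the "anonymous" money suddenly has a name attached.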
Among those who fell victim to the illusion of anonymity were: two federal officers who helped pursue Ulbricht – and then ripped him off; the administrator of AlphaBay, Silk Road's successor as the world's biggest dark market; an alleged Russian hacker who made so much money hacking Mt. Gox that he had to create his own exchange to launder it all; and hundreds of child sex abuse consumers and producers.
It is a great story, and Andy brings it up to date in the interview as we dig into two of the US government's massive, multi-billion-dollar bitcoin seizures, both made possible by transaction tracing. In fact, for all the colorful characters in the book, the protagonist is really Chainalysis and its competitors, who have turned tracing into a kind of science.
We close the talk by exploring Andy's deeply mixed feelings about both the world envisioned by cryptocurrency's evangelists and the way Chainalysis is saving us from that world.
Download Bonus Episode 438 (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 01:12 PM | Permalink | Comments (0)
The Cyberlaw Podcast kicks off 2023 by staring directly into the sun(set) of Section 702 authorization. The entire panel, including guest host Brian Fleming (Stewart having been "upgraded" to an undisclosed location) and guests Michael Ellis and David Kris, debates where things could be headed this year as the clock is officially ticking on FISA Section 702 reauthorization. Although there is agreement that a straight reauthorization is unlikely in today's political environment, the ultimate landing spot for Section 702 is very much in doubt, and a game of chicken will likely precede any potential deal. (Baker and Ellis have contributed to the debate, arguing that renewal should be the occasion for legislating against the partisan misuse of intelligence authorities.) That, and everything else, seems to be in play, as this reauthorization battle could result in meaningful reform or a complete car crash come this time next year.
Sticking with Congress, Michael also reacts to President Biden's recent bipartisan call to action regarding "Big Tech" and ponders where Republicans and Democrats could potentially find agreement on an issue everyone seems to agree on (for very different reasons). The panel also discusses the timing of the call and debates whether it is intended to incentivize the Republican-controlled House to act rather than simply increase oversight on the tech industry.
David then introduces a fascinating story about bold recent action by the Securities and Exchange Commission (SEC): a suit against Covington & Burling LLP to enforce an administrative subpoena seeking disclosure of the firm's clients implicated in a 2020 cyberattack by the Chinese state-sponsored group Hafnium. David posits that the SEC knows exactly what it is doing by taking such aggressive action in the face of strong resistance, and the panel discusses whether the SEC may have already won by this bold use of its authority in the U.S. cybersecurity enforcement landscape.
Brian then turns to the crypto regulatory and enforcement space to discuss Coinbase's recent settlement with New York's Department of Financial Services. Rather than signal another crack in the foundation of the once high-flying crypto industry, Brian offers that this may just be routine growing pains for a maturing industry that is more like the traditional banking sector, from a regulatory and compliance standpoint, than it may have wanted to believe.
Then, in the China portion of the episode, Michael discusses the latest news on the establishment of "reverse" Committee on Foreign Investment in the United States (CFIUS) review. He thinks it may still be some time before this tool gets finalized (even as its substantive scope appears to be shrinking). Next, Brian discusses a recent D.C. Circuit decision that upheld the Federal Communications Commission's decision to rescind the license of China Telecom at the recommendation of the executive branch agencies known as Team Telecom (the Departments of Justice, Defense, and Homeland Security). This first-of-its-kind decision reinforces Team Telecom's role as an important national security gatekeeper for U.S. telecommunications infrastructure.
Finally, David highlights an interesting recent story about an FBI search of an apparent Chinese police outpost in New York and ponders what it would mean to negotiate with and be educated by undeclared Chinese law enforcement agents in a foreign country.
In a few updates and quick hits:
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 12:33 PM | Permalink | Comments (0)
Our first episode for 2023 features Dmitri Alperovitch, Paul Rosenzweig, and Jim Dempsey trying to cover a month's worth of cyberlaw news. Dmitri and I open with an effort to summarize the state of play in the tech struggle between the U.S. and China. I say recent developments show the U.S. doing better than expected. U.S. companies like Facebook and Dell are engaged in voluntary decoupling as they imagine what their supply chains will look like if the conflict gets worse. China, after pouring billions into a so-far-fruitless effort to take the lead in high-end chip production, may be pulling back on the throttle. Dmitri is less sanguine, noting that Chinese companies like Huawei have shown that there is life after sanctions, and there may be room for a fast-follower model in which China dominates production of slightly less sophisticated chips, where much of the market volume is concentrated. Meanwhile, any Chinese retreat is likely tactical; where it has a dominant market position, as in rare earths, it remains eager to hobble U.S. companies.
Jim lays out the recent medical device security requirements adopted in the omnibus appropriations bill. It is a watershed for cybersecurity regulation of the private sector. It's also overdue for digitized devices that in some cases can only be updated with another open-heart surgery. How much of a watershed it is may become clear when the White House cyber strategy, which has been widely leaked, is finally released. Paul explains it's likely to show enthusiasm not just for more cybersecurity regulation but for liability as a check on bad cybersecurity. Dmitri points out that Biden administration enthusiasm for regulation may not lead to legislation now that Republicans control the House.
We all weigh in on LastPass's problems with hackers – and with candid, timely disclosures. For reasons fair and unfair, two-thirds of the LastPass users on the show have abandoned the service over the Christmas break. I blame LastPass's acquisition by private equity; Dmitri tells me that's painting with too broad a brush.
I offer an overview of the Twitter Files stories by Bari Weiss, Matt Taibbi, and others. When I say that the most disturbing revelations concern the massive government campaigns to enforce orthodoxy on COVID-19, all hell breaks loose. Paul in particular thinks I'm egregiously wrong to worry about any of this. No chairs are thrown, mainly because I'm in Virginia and Paul's in Costa Rica. But it's a heartfelt, entertaining, and maybe even illuminating debate.
In shorter and less contentious segments:
Download the 436th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:42 PM | Permalink | Comments (0)
Despite the title, rest assured that the Cyberlaw Podcast has not gone woke.
This bonus episode is focused on how cybersecurity is undermined by the attorney-client privilege. To explore that question, I interview Josephine Wolff and Dan Schwarcz, who, along with Daniel Woods, have written an article with the same title as this post.
Their thesis is that breach lawyers have lost perspective as they've waged a no-holds-barred (and frequently losing) battle to preserve the attorney-client privilege for forensic reports that diagnose their clients' cybersecurity breaches. Remarkably for the authors of a law review article, they did actual field research, and it tells us a lot.
The authors interviewed all the players in breach response -- the breached company's information security teams, the breach lawyers, the forensics investigators who parachute in for incident response, the insurers and insurance brokers, and more. I am reminded of Tracy Kidder's astute observation that, in building a house, there are three main players – owner, architect, and builder – and that if you get any two of them in a room alone, they will spend all their time bad-mouthing the third. Wolff, Schwarcz, and Woods seem to have done that with the breach response players, and while the bad-mouthing is spread around, it falls hardest on the lawyers.
The main problem is that invoking attorney-client privilege to keep breach forensics confidential is not an easy sell. The courts have been unsympathetic. To overcome the undertow of judicial skepticism, breach lawyers end up imposing more and more draconian restrictions on forensic investigators and their communications. The upshot is that no forensics report at all may be written for many breaches (up to 95% of them, Josephine estimates). How does the breached company find out what it did wrong and what lessons it should learn from the incident? Simple. Their lawyer talks to the forensic firm, translates its advice into a high-level PowerPoint, and orally explains the cybersecurity details to the company's management and information security team. Really, what could go wrong?
In closing, Dan and Josephine offer some ideas for how to get out of this mess. I push back. All in all, it's the most fun I've ever had talking about insurance law.
Download the Bonus 435th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 05:44 PM | Permalink | Comments (0)
It's been a news-heavy week, but we have the most fun in this episode with ChatGPT. Jane Bambauer, Richard Stiennon, and I pick over the astonishing number of use cases and misuse cases disclosed by the release of ChatGPT for public access. It is talented – writing dozens of term papers in seconds. It is sociopathic – the term papers are full of falsehoods, down to the made-up citations to plausible but nonexistent URLs for New York Times stories. And it has too many lawyers – Richard's request that it provide his bio (or even Albert Einstein's) was refused on what are almost certainly data protection grounds. Luckily, either ChatGPT or its lawyers are also bone stupid, since reframing the question tricks the machine into subverting the legal and PC limits it labors under. I speculate that it beat Google to a PR triumph precisely because Google had even more lawyers telling their Artificial Intelligence what not to say.
In a surprisingly undercovered story, Apple has gone all in on child pornography. Its phone encryption already makes the iPhone a safe place to record child sexual abuse material (CSAM); now Apple will encrypt users' cloud storage with keys it cannot access, allowing customers to upload CSAM without fear of law enforcement. And it has abandoned its effort to identify such material by doing phone-based screening. All that's left of its effort to stop such abuse is a feature allowing parents to force their kids to activate an option that prevents them from sending or receiving nude photos. Jane and I dig into the story, as well as Apple's questionable claim to be offering the same encryption to its Chinese customers.
Nate Jones brings us up to date on the National Defense Authorization Act, or NDAA. Lots of second-tier cyber provisions made it into the bill, but not the provision requiring that critical infrastructure companies report security breaches. A contested provision on spyware purchases by the U.S. government was compromised into a more useful requirement that the intelligence community identify spyware that poses risks to the government.
Jane updates us on what European data protectionists have in store for Meta, and it's not pretty. The European Data Protection Board intends to tell the Meta companies that they cannot give people a free social media network in exchange for watching what they do on the network and serving ads based on their behavior. If so, it's a one-two punch. Apple delivered the first blow by curtailing Meta's access to third-party behavioral data. Now even first-party data could be off limits in Europe. That's a big revenue hit, and it raises questions about whether Facebook will want to keep giving away its services in Europe.
Mike Masnick is Glenn Greenwald with a tech bent – often wrong but never in doubt, and contemptuous of anyone who disagrees. But when he's right, he's right. Jane and I discuss his article recognizing that data protection is becoming a tool that the rich and powerful can use to squash annoying journalist-investigators. I have been saying this for decades. But still, welcome to the party, Mike!
Nate points to a post pleading for more controls on the export of personal data from the U.S. It comes not from the usual privacy enthusiasts but from the U.S. Naval Institute, and it makes sense.
Jane and I take time to marvel at the story of France's Mr. Privacy and the endless appetite of Europe's bureaucrats for serial grifting, as long as it combines enthusiasm for American technology with hostility to the technology's source.
Nate and I cover what could be a good resolution to the snake-bitten cloud contract competition at the Department of Defense. The Pentagon is going to let four cloud companies – Google, Amazon, Oracle, and Microsoft – share the prize.
You didn't think we'd forget Twitter, did you? Jane, Richard, and I all comment on the Twitter Files. Consensus: the journalists claiming these stories are nothingburgers are driven more by ideology than their nose for news. Especially newsworthy are the remarkable proliferation of shadowbanning tools Twitter developed for suppressing speech it didn't like, and some considerable though anecdotal evidence that Twitter's many speech rules were often twisted to suppress speech from the right – even when the rules did not quite fit, as with LibsOfTikTok – while similar behavior on the left went unpunished. Richard tells us what it feels like to be on the receiving end of a Twitter shadowban.
The podcast introduces a new feature: "We Read It So You Don't Have To," and Nate provides the tl;dr on a New York Times story: "How the Global Spyware Industry Spiraled Out of Control."
And in quick hits and updates:
Download the 434th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 06:14 AM | Permalink | Comments (0)
This episode of the Cyberlaw Podcast delves into the use of location technology in two big events – the surprisingly widespread lockdown protests in China and the January 6 riot at the U.S. Capitol. Both were seen as big threats to the government, and both produced aggressive police responses that relied heavily on government access to phone location data. Jamil Jaffer and Mark MacCarthy walk us through both stories and respond to my provocative question: What’s the difference? Jamil’s answer (and mine, for what it’s worth) is that the U.S. government gained access to location information from Google only after a multi-stage process meant to protect innocent users’ information, and that there is now a court case that will determine whether the government actually did protect users whose privacy should not have been invaded.
Whether we should be relying on Google’s made-up and self-protective rules for access to location data is a separate question. It becomes more pointed as Silicon Valley has started making up a set of self-protective rules penalizing companies that assist law enforcement in gaining access to phones that Silicon Valley has made inaccessible. The movement to punish such law enforcement access providers has moved from trashing companies like NSO, whose technology has been widely misused, to punishing companies on a lot less evidence of wrongdoing. This week, TrustCor lost its certificate authority status mostly for looking suspiciously close to the National Security Agency, and Google outed Variston of Spain for ties to a vulnerability exploitation system. Nick Weaver is happy to hose me down.
The UK is working on an online safety bill, likely to be finalized in January, Mark reports, but this week the government agreed to drop its direct regulation of “lawful but awful” speech on social media. The step was a symbolic victory for free speech advocates, but the details of the bill before and after the change suggest it was more modest than the brouhaha implied.
The Department of Homeland Security’s Cyber Security and Infrastructure Security Agency (CISA) has finished taking comments on its proposed cyber incident reporting regulation. Jamil summarizes industry’s complaints, which focus on the risk of having to file multiple reports with multiple agencies. Industry has a point, I suggest, and CISA should take the other agencies in hand to reach agreement on a report format that doesn’t resemble the State of the Union address.
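To make the one-format idea concrete, here is a purely hypothetical sketch of what a harmonized "file once, share everywhere" report might look like. The field names are invented for illustration and have no connection to CISA's actual proposed rule.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class IncidentReport:
    entity: str            # victim organization
    sector: str            # critical infrastructure sector
    incident_type: str     # e.g., "ransomware"
    discovered_utc: str    # ISO 8601 timestamp
    systems_affected: list[str] = field(default_factory=list)
    ransom_demanded: bool = False

# A single structured filing that one agency could collect and
# share with every other interested regulator.
report = IncidentReport(
    entity="Example Pipeline Co.",   # invented victim for illustration
    sector="energy",
    incident_type="ransomware",
    discovered_utc="2023-01-15T08:30:00Z",
    systems_affected=["billing", "scheduling"],
    ransom_demanded=True,
)
print(json.dumps(asdict(report), indent=2))
```

The design point is simply that the victim fills this out once and the collecting agency handles distribution, instead of each agency demanding its own paperwork in its own format.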
It turns out that the collapse of FTX is going to curtail a lot of artificial intelligence (AI) safety research. Nick explains why, and offers reasons to be skeptical of the “effective altruism” movement that has made AI safety one of its priorities.
Today, Jamil notes, the U.S. and EU are getting together for a divisive discussion of U.S. subsidies for electric vehicles (EVs) made in North America but not Germany. That’s very likely a World Trade Organization (WTO) violation, I offer, but one that pales in comparison to thirty years of European WTO-violating threats to constrain data exports to the U.S. When you think of it as retaliation for the use of EU privacy law to attack U.S. intelligence programs, the EV subsidy is easy to defend.
I ask Nick if we learned anything new this week from Twitter coverage. His answer – that Elon Musk doesn’t understand how hard content moderation is – doesn’t exactly come as news. Nor, really, does most of what we learned from Matt Taibbi’s review of Twitter’s internal discussion of the Hunter Biden laptop story and whether to suppress it. Twitter doesn’t come out of that review looking better. It just looks bad in ways we already suspected were true. One person who does come out of the mess looking good is Rep. Ro Khanna (D., Calif.), who vigorously advocated that Twitter reverse its ban, on both prudential and principled grounds. Good for him.
Speaking of San Francisco Dems who surprised us this week, Nick notes that San Francisco’s Board of Supervisors approved the use of remote-controlled bomb “robots” to kill suspects. He does not think the robots are fit for that purpose.
Finally, in quick hits:
Download the 433rd Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 06:15 PM | Permalink | Comments (0)
We spend much of this episode of the Cyberlaw Podcast talking about toxified tech – new technology that is being demonized by the press and others. Exhibit One, of course, is "spyware," i.e., hacking tools that allow governments to access phones or computers otherwise closed to them. The Washington Post and the New York Times have led a campaign to turn NSO's Pegasus tool for hacking phones into a radioactive product. Jim Dempsey, though, reminds us that not too long ago, in defending end-to-end encryption, tech policy advocates insisted that the government did not need to mandate access to encrypted phones because they could just hack them instead. David Kris joins in, pointing out that, used with a warrant, there's nothing uniquely dangerous about hacking tools of this kind. I offer an explanation for why the public policy community and its Silicon Valley funders have changed their tune on the issue: Having won the end-to-end encryption debate, they feel free to move on to the next anti-law-enforcement campaign.
That campaign includes private lawsuits against NSO by companies like WhatsApp, whose case was briefly delayed by NSO's claim of sovereign immunity on behalf of the (unnamed) countries it builds its products for. That claim made it to the Supreme Court, David reports, where the U.S. government recently filed a devastating brief that will almost certainly send NSO back to court without any sovereign immunity protection.
Meanwhile, in France, Amesys and its executives are being prosecuted for facilitating the torture of Libyan citizens at the hands of the Muammar Qaddafi regime. Amesys evidently sold an earlier and less completely toxified technology – packet inspection tools – to Libya, which is alleged to have used them to track down dissidents. The criminal case is pending.
And in the U.S., a plethora of tech toxification campaigns are under way, all aimed at Chinese products. This week, Jim notes, the Federal Communications Commission came to the end of a long road that began with jawboning in the 2000s and culminated in a flat ban on installing Chinese telecom gear in U.S. networks. On deck for toxification are DJI's drones, which several Senators see as a comparable national security threat that should be handled with a similar ban. Maury Shenk tells us that the British government is taking the first steps on a similar path, this time starting with a ban on some government uses of Chinese surveillance camera systems.
Those measures do not always work, Maury tells us, pointing to a story that hints at trouble ahead for U.S. efforts to decouple Chinese from American artificial intelligence research and development.
Maury and I take a moment to debunk efforts to persuade readers that Artificial Intelligence (AI) is toxic because Silicon Valley will use it to take our jobs. AI code writing is not likely to graduate beyond facilitating human coders any time soon, we agree. Whether AI can do more in replacing Human Resources (HR) staff may be limited by a different toxification campaign – the largely phony claim that AI is full of bias. Amazon's effort to use AI in HR, I predict, will be sabotaged by this claim, as its effort to avoid charges of bias will almost certainly lead the company's HR department to build race and gender quotas into its AI engine.
And in a few quick hits:
Download the 432nd Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 06:07 PM | Permalink | Comments (0)
The Cyberlaw Podcast leads with the growing legal cost of Elon Musk's anti-authoritarian takeover of Twitter. Turns out that authority figures have a mean streak, and a lot of weapons, many grounded in law, as Twitter is starting to learn. Brian Fleming explores one of them – the apparently unkillable notion that the Committee on Foreign Investment in the U.S. (CFIUS) should review Musk's Twitter deal because of a relatively small share that went to investors with Chinese and Persian Gulf ties. CFIUS may in fact be seeking information on what Twitter data those investors will have access to, but I am skeptical that CFIUS will be moved to act on what it learns. More dangerous for Twitter and Musk, says Charles-Albert Helleputte, is the possibility that the company will lose its one-stop-shop privacy regulator for failure to meet the elaborate compliance machinery set up by European privacy bureaucrats. At a quick calculation, that could expose Twitter to fines of up to 120% of annual turnover. That would smart. Finally, I reprise my take on all the people leaving Twitter for Mastodon as a protest against Musk allowing the Babylon Bee and President Trump back on the platform. If the protestors really think Mastodon's system is better, there's no reason Twitter can't adopt it, or at least the version that Francis Fukuyama and Roberta Katz have proposed.
If you are looking for the far edge of the Establishment's Overton Window on China policy, you cannot do better than the U.S.-China Economic and Security Review Commission, a consistently China-skeptical but mainstream body. Brian reprises the Commission's latest report. Its headline is about Chinese hacking, but the report does not offer much hope of a solution to that problem, other than more decoupling.
Chalk up one more victory for Trump-Biden continuity, and one more loss for the State Department. Michael Ellis reminds us that the Trump administration took much of Cyber Command's cyber offense decisionmaking out of the National Security Council and put it back in the Pentagon. This made it much harder for the State Department to stall cyber offense operations. When it turned out that this made Cyber Command more effective and no more irresponsible, the Biden Administration followed its predecessor's lead, preparing a memo that will largely ratify Trump's order, with a few tweaks.
I unpack Google's expensive (nearly $400 million) settlement with 40 States over location history. Google's promise to its users that it would stop storing location history if the feature was turned off was poorly and misleadingly drafted, but I doubt there is anyone who actually wanted to keep Google from using location for most of the apps where it remained operative, so the settlement is a good deal for the states, and a reminder of how unpopular Silicon Valley has become in red and blue states alike.
Michael tells the doubly embarrassing story of an Iranian hack of the U.S. Merit Systems Protection Board. It is embarrassing enough for the board to be hacked using a log4j exploit that should have been patched long ago. But it is worse that an Iranian government hacker got access to a U.S. government network – and decided that the access was best used for mining cryptocurrency.
Brian tells us that the U.S. goal of reshoring chip production is making progress, with Apple planning to use TSMC chips from a new fab in Arizona.
In a few updates and quick hits:
Download the 431st Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:35 PM | Permalink | Comments (0)
We open this episode of the Cyberlaw Podcast by considering the (still evolving) results of the 2022 federal election. Adam Klein and I trade thoughts on what Congress will do. Adam sees two years in which the Senate does a lot of nominations, the House does a lot of investigations, and neither does much legislation. Which could leave renewal of a critically important intelligence authority, Section 702 of FISA, out in the cold. As supporters of renewal, Adam and I conclude that the best hope for the provision is to package it with trust-building measures to guard against partisan misuse of national security authorities.
I also note that foreign government cyberattacks on our election machinery, something much anticipated in election after election, once again failed to make an appearance. At this point, I argue, election interference falls somewhere between Y2K and Bigfoot on the "things we need to worry about" scale.
In other news, cryptocurrency conglomerate FTX has collapsed in a welter of bankruptcy, stolen funds, and criminal investigations. Nick Weaver lays out the gory details.
A new panelist on the podcast, Chinny Sharma, explains for a disbelieving U.S. audience the UK government's plan to scan all the country's internet-connected devices for vulnerabilities. Adam and I agree that it could never happen here. Nick wonders why the UK government doesn't use a private service for the task.
Nick also covers This Week in the Twitter Dogpile. He recognizes that this whole story is turning into a tragedy for all concerned, but he's determined to linger on the moments of comic relief. Dunning-Kruger makes an appearance.
Chinny and I speculate on what may emerge from the Biden administration's plan to reconsider the relationship between CISA and the Sector Risk Management Agencies that otherwise regulate important sectors. I predict that it will spur turf wars and end in new coordination authority for CISA. In addition, the Obama administration's egregious exemption of Silicon Valley from regulation as critical infrastructure should also be on the chopping block. Finally, if the next two Supreme Court decisions go the way I hope, the FTC will finally have to coordinate its privacy enforcement efforts with CISA's cybersecurity standards and priorities.
Adam reviews the European Parliament's report on Europe's spyware problems. He's impressed (as am I) by the report's willingness to acknowledge that this is not a privacy problem made in America. Governments in at least four European countries by our count have recently used spyware to surveil members of the opposition party, a problem that has been unthinkable for seventy years in the United States. Though maybe not any more, which, we agree, is another reason for Congress to quickly put into place more guardrails against such abuse.
Nick notes the US government's seizure of what was $3 billion in bitcoin. Shrinkflation has brought that value down to around $800 million. But it's worth noting that an immutable blockchain brought James Zhong to justice ten years after he took the money.
Disinformation – or the appalling acronym MDM (for mis-, dis-, and mal-information) – has been in the news lately. A recent paper counted the staggering cost of efforts to suppress "disinformation" during COVID times. And Adam published a recent piece in City Journal explaining just how dangerous the concept has become. We end up agreeing that national security agencies need to focus on foreign government dezinformatsiya – falsehoods and propaganda from abroad – and not get in the business of policing domestic speech, even speech that sounds a lot like foreign leaders we don't like.
Chinny takes us into a new and fascinating dispute between the copyleft movement, GitHub, and a new kind of AI that writes code. The short version is that GitHub has been training an AI engine on all the open source code on its site so that an algorithm can "autosuggest" lines of new code as you're writing the boring parts of your program. Sounds great, except that the resulting algorithm tends to reproduce the code it was trained on – without imposing the license conditions, such as copyleft, that were part of the original code. Not surprisingly, copyleft advocates are suing on the ground that important information was improperly stripped from their code, particularly the provision that turns all code that incorporates their open source into open source itself. I remind listeners that this incorporation feature is why Microsoft famously likened open source to cancer. Nick tells me that it's really more like herpes, demonstrating that he has apparently had a lot more fun writing code than I ever had.
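To see what "stripped" means in practice, here is a schematic and entirely invented example of the complaint – not actual Copilot output. The original snippet carries a copyleft notice that is supposed to travel with every copy; the suggested code reproduces the substance without it.

```python
# What the open source author published: code plus the copyleft notice
# that the license requires every copy and derivative to carry.

# SPDX-License-Identifier: GPL-3.0-or-later
# Copyright (C) 2020 Example Author (a made-up name for illustration)
# Derivative works must be distributed under the GPL as well.
def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(value, high))


# What the plaintiffs say the AI autosuggests: functionally identical
# code, minus the license identifier and attribution above.
def clamp_suggested(value: float, low: float, high: float) -> float:
    return max(low, min(value, high))
```

Strip the notice and the copyleft obligation silently disappears from the derivative work, which is the "important information improperly removed" theory of the suit in miniature.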
In updates and quick hits:
Download the 430th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 03:01 PM | Permalink | Comments (0)
The war that began with the Russian invasion of Ukraine grinds on. Cybersecurity experts have spent much of 2022 trying to draw lessons about cyberwar strategies from the conflict. Dmitri Alperovitch takes us through the latest learning, cautioning that all of it could look different in a few months, as both sides adapt to the other's actions.
David Kris joins Dmitri to evaluate a Microsoft report hinting at how China may be abusing its edict that software vulnerabilities must be reported first to the Chinese government. The temptation to turn such reports into 0-day exploits is strong, and Microsoft notes with suspicion a recent rise in Chinese 0-day exploits. Dmitri worried about just such a development while serving on the Cyber Safety Review Board, but he is not yet convinced that we have the evidence to make a case against the Chinese mandatory disclosure law.
Sultan Meghji keeps us in Redmond, digging through a deep Protocol story on how Microsoft has helped build Artificial Intelligence (AI) capacity in China. The amount of money invested, and the deep bench of AI researchers from China, raise real questions about how the United States can decouple from China – and whether China will eventually decide to do the decoupling.
I express skepticism about the White House's latest initiative on ransomware, a 30+ nation summit that produced a modest set of concrete agreements. But Sultan and Dmitri have been on the receiving end of deputy national security adviser Anne Neuberger's forceful personality, and they think we will see results. We'd better. Banks report that ransomware payments doubled last year, to $1.2 billion.
David introduces the high-stakes struggle over when cyberattacks can be excluded from insurance coverage as acts of war. A recent settlement between Mondelez and Zurich has left the law in limbo.
Sultan tells me why AI is so bad at explaining the results it reaches. He sees light at the end of the tunnel. I see more stealthy imposition of woke values. But we find common ground in trashing the Facial Recognition Act, a bill from lefty Democrats that throws together every bad idea for regulating facial recognition ever put forward and adds a few more. A red wave election will be worth it just to make sure this bill stays dead.
Finally, Sultan reviews the National Security Agency's report on supply chain security. And I introduce the elephant in the room, or at least the mastodon: Elon Musk's takeover at Twitter and the reaction to it. I downplay the probability of CFIUS reviewing the deal. And I mock the Elon-haters who fear that Musk's scrimping on content moderation will turn Twitter into a hellhole that includes *gasp!* Republican speech. Turns out that they are fleeing Twitter for Mastodon, which pretty much invented scrimping on content moderation.
Download the 429th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:06 PM | Permalink | Comments (0)
You heard it on the Cyberlaw Podcast first, as we did a mashup of the week's top stories: Nate Jones commenting on Elon Musk's expected troubles running Twitter at a profit and Jordan Schneider noting the U.S. government's creeping, halting moves to constrain TikTok's sway in the U.S. market. Since Twitter has never made a lot of money, even before it was carrying loads of new debt, and since pushing TikTok out of the U.S. market is going to be an option on the table for years, why doesn't Elon Musk position Twitter to take its place? (Breaking news: Apparently the podcast has a direct line to Elon Musk's mind; he is reported to be entertaining the idea of reviving Vine to compete with TikTok.)
It's another big week for China news, as Nate and Jordan cover the administration's difficulties in finding a way to thwart China's rise in quantum computing and artificial intelligence (AI). Jordan has a good post about the tech decoupling bombshell. But the most intriguing discussion concerns China's remarkably limited options for striking back at the Biden Administration for its harsh sanctions.
Meanwhile, under the heading, When It Rains, It Pours, Elon Musk's Tesla faces a criminal investigation over its self-driving claims. Nate and I are skeptical that the probe will lead to charges, as Tesla's message about Full Self-Driving has been a mix of manic hype and depressive lawyerly caution.
Jamil Jaffer introduces us to the Guacamaya "hacktivist" group whose data dumps have embarrassed governments all over Latin America – most recently with reports of Mexican military arms sales to narco-terrorists. On the hard question – hacktivists or government agents? – Jamil and I lean ever so slightly toward hacktivists.
Nate covers the remarkable indictment of two Chinese spies for recruiting a U.S. law enforcement officer in an effort to get inside information about the prosecution of a Chinese company believed to be Huawei. We pull plenty of great color from the indictment, and Nate notes the awkward spot that the defense team now finds itself in, since the point of the espionage seems to have been, er, trial preparation.
To balance the scales a bit, Nate also covers suggestions that Google's former CEO Eric Schmidt, who headed an AI advisory committee, had a conflict of interest because he also invested in AI startups. There's no suggestion of illegality, though, and it is not clear how the government will get cutting edge advice on AI if it does not get it from investors and industry experts like Schmidt.
Jamil and I have mildly divergent takes on the Transportation Security Administration's new railroad cybersecurity directive. He worries that it will produce more box-checking than security. My concern is that it mostly reinforces current practice rather than raising the bar.
And in quick updates:
Download the 428th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 10:14 AM | Permalink | Comments (0)
This episode features Nick Weaver, Dave Aitel, and me exploring a ProPublica story (and forthcoming book) on the FBI's difficulties in seeking to become the nation's principal resource on cybercrime and cybersecurity. We end up concluding that, for all its strengths, the bureau's structural weaknesses in addressing cybersecurity are going to thwart its ambitions for years to come.
Speaking of being thwarted for years, the effort to decouple U.S. and Chinese tech sectors continues apace. Nick and Dave weigh in on the latest (rumored) initiative -- cutting off China's access to U.S. quantum computing and AI technology -- and what that could mean for U.S. semiconductor companies, among others.
We could not stay away from the Elon Musk-Twitter story, which briefly had a national security dimension, due to news that the Biden Administration was considering a Committee on Foreign Investment in the United States (CFIUS) review of the deal. That's not a crazy idea, but in the end, we are skeptical that it will amount to much.
Dave and I exchange views on whether it is logical for the Administration to pursue cybersecurity labels for cheap Internet of Things (IoT) devices. He thinks it makes less sense than I do, but we agree that the end result will be to crowd the cheapest competitors out of the market.
Nick and I discuss the news that Kanye West is buying Parler. Neither of us thinks much of the deal as an investment.
And in updates and quick takes:
Download the 427th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 09:52 PM | Permalink | Comments (0)
David Kris opens this episode of the Cyberlaw Podcast by laying out some of the massive disruption that the Biden Administration has kicked off in China's semiconductor industry – and among its Western suppliers. The reverberations of the administration's new measures will be felt for years, and the Chinese government's response, not to mention the ultimate consequences, remains uncertain.
Richard Stiennon, our industry analyst, gives us an overview of the cybersecurity market, where tech and cyber companies have taken a beating but cybersecurity startups continue to gain funding.
Mark MacCarthy reviews the industry from the viewpoint of the trustbusters. Google is facing what looks like a serious adtech platform challenge from many directions – the EU, the Justice Department, and several states. Facebook, meanwhile, is lucky to be a target of the Federal Trade Commission, which rather embarrassingly had to withdraw claims that Facebook's acquisition of Within would remove an actual (as opposed to a hypothetical) competitor from the market. No one seems to have challenged Google's acquisition of Mandiant, meanwhile. Richard suspects that is because Google is not likely to do anything much with the company.
David walks us through the new White House national security strategy – and puts it in historical context.
Mark and I cross swords over PayPal's determination to take my money for saying things PayPal doesn't like. Visa and Mastercard are less upfront about their willingness to boycott businesses they consider beyond the pale, but all money transfer companies have rules of this kind, he says. We end up agreeing that transparency, the measure usually recommended for platform speech suppression, makes sense for PayPal and its ilk, especially since they're already subject to extensive government regulation.
Richard and I dive into the market for identity security. It's hot, thanks to zero trust computing. Thoma Bravo is leading a rollup of identity companies. I predict security troubles ahead for the merged portfolio.
In updates and quick hits:
Download the 426th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:41 PM | Permalink | Comments (0)
It's been a jam-packed week of cyberlaw news, but the big debate of the episode is triggered by the White House blueprint for an AI 'bill of rights'. I've just released a long post about the campaign to end "AI bias" in general, and the blueprint in particular. In my view, the bill of rights will end up imposing racial and gender (not to mention intersex!) quotas on a vast swath of American life. Nick Weaver argues that AI is in fact a source of secondhand racism and sexism, something that will not be fixed until we do a better job of forcing the algorithm to explain how it arrives at the outcomes it produces. We do not agree on much, but we do agree that lack of explainability is a big problem for the new technology.
President Biden has issued an executive order meant to resolve the U.S.-EU spat over transatlantic data flows -- at least for a few years, until the anti-American EU Court of Justice finds it wanting again. Nick and I explore some of the mechanics created by the executive order. I argue that masking the identities of foreign intelligence targets will be bad for the comprehensibility of U.S. intelligence reports and for the privacy of U.S. persons. On the other hand, the quasi-judicial system the order creates is cleverly designed to discourage litigant grandstanding.
Matthew Heiman covers the biggest CISO news of the week, the month, and the year – the criminal conviction of Uber's CSO, Joe Sullivan, for failure to disclose a data breach to the Federal Trade Commission. Matthew is less surprised by the verdict than others, but we agree that it will change the way CISOs do their job and relate to their fellow corporate officers.
Brian Fleming joins us to cover an earthquake in U.S.-China tech trade – the sweeping new export restrictions on U.S. chips and technology. This will be a big deal for all U.S. tech companies, we agree, and probably a disaster for them in the long run if U.S. allies don't join the party.
I go back to dig a little deeper on a story we covered with just a couple of hours' notice last week – the Supreme Court's grant of review in two cases touching on Big Tech's liability for hosting the content of terror groups. It turns out that only one of the cases is likely to turn on section 230. That's Google's almost laughable claim that holding YouTube liable for recommending terrorist videos is holding it liable as a publisher. The other case will almost certainly turn on when distribution of terrorist content can be punished as "material assistance" to terror groups.
Brian walks us through the endless negotiations between TikTok and the U.S. over a security deal. We are both puzzled over the partisanization of the TikTok security issue, although I suggest one reason why that might be happening.
Matthew catches us up on a little-covered Russian hack and leak operation aimed at former MI6 boss Richard Dearlove and British Prime Minister Boris Johnson. Matthew gives Dearlove's security awareness a low grade.
Finally, two updates:
Download the 425th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 06:37 AM | Permalink | Comments (0)
You probably haven't given much thought recently to the wisdom of racial and gender quotas that allocate jobs and other benefits to racial and gender groups based on their proportion of the population. That debate is pretty much over. Google tells us that discussion of racial quotas peaked in 1980 and has been declining ever since. While still popular with some on the left, they have been largely rejected by the country as a whole. Most recently, in 2019 and 2020, deep blue California voted to keep in place a ban on race and gender preferences. So did equally left-leaning Washington state.
So you might be surprised to hear that quotas are likely to show up everywhere in the next ten years, thanks to a growing enthusiasm for regulating technology – and a large contingent of Republican legislators. That, at least, is the conclusion I've drawn from watching the movement to find and eradicate what's variously described as algorithmic discrimination or AI bias.
Claims that machine learning algorithms disadvantage women and minorities are commonplace today. So much so that even centrist policymakers agree on the need to remedy that bias. It turns out, though, that the debate over algorithmic bias has been framed so that the only possible remedy is widespread imposition of quotas on algorithms and the job and benefit decisions they make.
To see this phenomenon in action, look no further than two very recent efforts to address AI bias. The first is contained in a privacy bill, the American Data Privacy and Protection Act (ADPPA). The ADPPA was embraced almost unanimously by Republicans as well as Democrats on the House Energy and Commerce Committee; it has stalled a bit, but still stands the best chance of enactment of any privacy bill in a decade (its supporters hope to push it through in a lame-duck session). The second is part of the AI Bill of Rights released last week by the Biden White House.
Dubious claims of algorithmic bias are everywhere
I got interested in this issue when I began studying claims that algorithmic face recognition was rife with race and gender bias. That narrative has been pushed so relentlessly by academics and journalists that most people assume it must be true. In fact, I found, claims of algorithmic bias are largely outdated, false, or incomplete. They've nonetheless been sold relentlessly to the public. Tainted by charges of racism and sexism, the technology has been slow to deploy, at a cost to Americans of massive inconvenience, weaker security, and billions in wasted tax money – not to mention driving our biggest tech companies from the field and largely ceding it to Chinese and Russian competitors.
The attack on algorithmic bias in general may have even worse consequences. That's because, unlike other antidiscrimination measures, efforts to root out algorithmic bias lead almost inevitably to quotas, as I'll try to show in this article.
Race and gender quotas are at best controversial in this country. Most Americans recognize that there are large demographic disparities in our society, and they are willing to believe that discrimination has played a role in causing the differences. But addressing disparities with group remedies like quotas runs counter to a deep-seated belief that people are, and should be, judged as individuals. Put another way, given a choice between fairness to individuals and fairness on a group basis, Americans choose individual fairness. They condemn racism precisely for its refusal to treat people as individuals, and they resist remedies grounded in race or gender for the same reason.
The campaign against algorithmic bias seeks to overturn this consensus – and to do so largely by stealth. The ADPPA that so many Republicans embraced is a particularly instructive example. It begins modestly enough, echoing the common view that artificial intelligence algorithms need to be regulated. It requires an impact assessment to identify potential harms and a detailed description of how those harms have been mitigated. Chief among the harms to be mitigated is race and gender bias.
So far, so typical. Requiring remediation of algorithmic bias is a nearly universal feature of proposals to regulate algorithms. The White House blueprint for an artificial intelligence bill of rights, for example, declares, "You should not face discrimination by algorithms and systems should be used and designed in an equitable way."
All roads lead to quotas
The problems begin when the supporters of these measures explain what they mean by discrimination. In the end, it always boils down to "differential" treatment of women and minorities. The White House defines discrimination as "unjustified different treatment or impacts disfavoring people" based on their "race, color, ethnicity, [and] sex," among other characteristics. While the White House phrasing suggests that differential impacts on protected groups might sometimes be justified, no such justification is in fact allowed in its framework. Any disparities that could cause meaningful harm to a protected group, the document insists, "should be mitigated."
The ADPPA is even more blunt. It lists among the harms to be mitigated any "disparate impact" an algorithm may have on a protected class – meaning any outcome in which benefits don't flow to a protected class in proportion to its numbers in society. Put another way, first you calculate the number of jobs or benefits you think is fair to each group, and any algorithm that doesn't produce that number has a "disparate impact."
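For the technically curious, here is a minimal sketch of that proportionality test as I've just described it. The group names, the numbers, and the strict shortfall comparison are illustrative assumptions of mine, not anything drawn from the bill's text:

```python
# A hypothetical illustration of the "disparate impact" test described
# above: benefits are expected to flow to each group in proportion to its
# share of the population, and any shortfall counts as disparate impact.
# Group names and numbers are invented for the example.

def disparate_impact(population_share, benefit_share):
    """Return the groups whose share of benefits falls short of their
    share of the population."""
    return {
        group: {"population": pop, "benefits": benefit_share.get(group, 0.0)}
        for group, pop in population_share.items()
        if benefit_share.get(group, 0.0) < pop
    }

# Group B is 40% of applicants but receives only 25% of the loans, so the
# algorithm that produced this outcome is flagged as biased -- regardless
# of why the disparity exists.
print(disparate_impact({"A": 0.60, "B": 0.40},
                       {"A": 0.75, "B": 0.25}))
# {'B': {'population': 0.4, 'benefits': 0.25}}
```

Note what the test does not ask: nothing in the computation looks at causes or individual qualifications. The only way to clear the flag is to move the output toward proportionality.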
Neither the White House nor the ADPPA distinguishes between correcting disparities caused directly by intentional and recent discrimination and disparities resulting from a mix of history and individual choices. Neither asks whether eliminating a particular disparity will work an injustice on individuals who did nothing to cause the disparity. The harm is simply the disparity, more or less by definition.
Defined that way, the harm can only be cured in one way. The disparity must be eliminated. For reasons I'll discuss in more detail shortly, it turns out that the disparity can only be eliminated by imposing quotas on the algorithm's outputs.
The sweep of this new quota mandate is breathtaking. The White House bill of rights would force the elimination of disparities "whenever automated systems can meaningfully impact the public's rights, opportunities, or access to critical needs" – i.e., everywhere it matters. The ADPPA in turn expressly mandates the elimination of disparate impacts in "housing, education, employment, healthcare, insurance, or credit opportunities."
And quotas will be imposed on behalf of a host of interest groups. The bill demands an end to disparities based on "race, color, religion, national origin, sex, or disability." The White House list is far longer; it would lead to quotas based on "race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law."
Blame the machine and send it to reeducation camp
By now, you might be wondering why so many Republicans embraced this bill. The best explanation was probably offered years ago by Sen. Alan Simpson (R-WY): "We have two political parties in this country, the Stupid Party and the Evil Party. I belong to the Stupid Party." That would explain why GOP committee members didn't read this section of the bill, or didn't understand what they read.
To be fair, it helps to have a grasp of the peculiarities of machine learning algorithms. First, they are often uncannily accurate. In essence, machine learning exposes a neural network computer to massive amounts of data and then tells it what conclusion should be drawn from the data. If we want it to recognize tumors from a chest x-ray, we show it millions of x-rays, some with lots of tumors, some with barely detectable tumors, and some with no cancer at all. We tell the machine which x-rays belong to people who were diagnosed with lung cancer within six months. Gradually the machine begins to find not just the tumors that specialists find but subtle patterns, invisible to humans, that it has learned to associate with a future diagnosis of cancer. This oversimplified example illustrates how machines can learn to predict outcomes (such as which drugs are most likely to cure a disease, which websites best satisfy a given search term, and which borrowers are most likely to default) far better and more efficiently than humans.
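For readers who want to see those mechanics in miniature, here is a toy sketch of the supervised-learning loop just described, with synthetic data standing in for the x-rays. Everything in it -- the feature count, the hidden rule, the choice of logistic regression -- is an illustrative assumption; real diagnostic models train deep neural networks on millions of images:

```python
# A toy version of supervised learning: show the model many labeled
# examples and let it discover the predictive pattern on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 20))  # stand-ins for features extracted from x-rays
# A hidden rule the machine must discover: a subtle combination of two
# features drives the diagnosis, much as subtle patterns do in real scans.
y = (0.8 * X[:, 3] - 0.5 * X[:, 7] + rng.normal(scale=0.5, size=n)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")  # well above chance
```

The point of the sketch is simply that the model is judged by predictive accuracy on data it has never seen; no one tells it which features matter.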
Second, the machines that do this are famously unable to explain how they achieve such remarkable accuracy. This is frustrating and counterintuitive for those of us who work with the technology. But it remains the view of most experts I've consulted that the reasons for the algorithm's success cannot really be explained or understood; the machine can't tell us what subtle clues allow it to predict tumors from an apparently clear x-ray. We can only judge it by its outcomes.
Still, those outcomes are often much better than any human can match, which is great, until they tell us things we don't want to hear, especially about racial and gender disparities in our society. I've tried to figure out why the claims of algorithmic bias have such power, and I suspect it's because machine learning seems to show a kind of eerie sentience.
It's almost human. If we met a human whose decisions consistently treated minorities or women worse than others, we'd expect him to explain himself. If he couldn't, we'd condemn him as a racist or a sexist and demand that he change his ways.
To view the algorithm that way, of course, is just anthropomorphism, or maybe misanthropomorphism. But this tendency shapes the public debate; academic and journalistic studies have no trouble condemning algorithms as racist or sexist simply because their output shows disparate outcomes for different groups. By that reductionist measure, of course, every algorithm that reflects the many demographic disparities in the real world is biased and must be remedied.
And just like that, curing AI bias means ignoring all the social and historical complexities and all the individual choices that have produced real-life disparities. When those disparities show up in the output of an algorithm, they must be swept away.
Not surprisingly, machine learning experts have found ways to do exactly that. Unfortunately, for the reasons already given, they can't unpack the algorithm and separate the illegitimate from the legitimate factors that go into its decisionmaking.
All they can do is send the machine to reeducation camp. They teach their algorithms to avoid disparate outcomes, either by training the algorithm on fictional data that portrays a "fair" world in which men and women all earn the same income and all neighborhoods have the same crime rate, or simply by penalizing the machine when it produces results that are accurate but lack the "right" demographics. Reared on race and gender quotas, the machine learns to reproduce them.
All this reeducating has a cost. The quotafied output is less accurate, perhaps much less accurate, than that of the original "biased" algorithm, though it will likely be the most accurate results that can be produced consistent with the racial and gender constraints. To take one example, an Ivy League school that wanted to select a class for academic success could feed ten years' worth of college applications into the machine along with the grade point averages the applicants eventually achieved after they were admitted. The resulting algorithm would be very accurate at picking the students most likely to succeed academically. Real life also suggests that it would pick a disproportionately large number of Asian students and a disproportionately small number of other minorities.
The White House and the authors of the ADPPA would then demand that the designer reeducate the machine until it recommended fewer Asian students and more minority students. That change would have costs. The new student body would not be as academically successful as the earlier group, but thanks to the magic of machine learning, it would still accurately identify the highest achieving students within each demographic group. It would be the most scientific of quota systems.
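Here's a minimal sketch of that admissions trade-off, using synthetic scores and invented group shares rather than any real data. It contrasts picking the top scorers overall with picking the top scorers within each group in proportion to the group's share of applicants -- one common way practitioners achieve demographic parity, though not a method mandated in these terms by either document:

```python
# Synthetic illustration: quota-constrained selection matches group shares
# by construction, at the cost of passing over higher-scoring individuals.
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=1000, p=[0.6, 0.4])
# Give group A a higher score distribution, standing in for the
# real-world disparities discussed above.
scores = rng.normal(loc=np.where(groups == "A", 0.3, 0.0), scale=1.0)

def select_with_quota(scores, groups, k):
    """Pick k winners, allocating seats to each group by its population
    share (rounded), then taking the highest scorers within each group."""
    winners = []
    for g in np.unique(groups):
        idx = np.flatnonzero(groups == g)
        seats = round(k * len(idx) / len(groups))
        winners.extend(idx[np.argsort(scores[idx])[::-1][:seats]])
    return np.array(winners)

merit = np.argsort(scores)[::-1][:100]          # pure top-100 by score
quota = select_with_quota(scores, groups, 100)  # top scorers per group
print("group A share, merit-only:", np.mean(groups[merit] == "A"))
print("group A share, with quota:", np.mean(groups[quota] == "A"))
print("mean winner score, merit-only:", round(scores[merit].mean(), 2))
print("mean winner score, with quota:", round(scores[quota].mean(), 2))
```

Run it and the quota selection lands on the 60/40 group split exactly, while the mean score of the winners drops relative to merit-only selection -- the "most scientific of quota systems" in a dozen lines.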
That compromise in accuracy might well be a price the school is happy to pay. But the same cannot be said for the individuals who find themselves passed over solely because of their race. Reeducating the algorithm cannot satisfy the demands of individual fairness and group fairness at the same time.
How machine learning enables stealth quotas
But it can hide the unfairness. When algorithms are developed, all the machine learning, including the imposition of quotas, happens "upstream" from the institution that will eventually rely on it. The algorithm is educated and reeducated well before it is sold or deployed. So the scale and impact of the quotas it's been taught to impose will often be hidden from the user, who sees only the welcome "bias-free" outcomes and can't tell whether (or how much) the algorithm is sacrificing accuracy or individual fairness to achieve demographic parity.
In fact, for many corporate and government users, that's a feature, not a bug. Most large institutions support group over individual fairness; they are less interested in having the very best work force -- or freshman class, or vaccine allocation system -- than they are in avoiding discrimination charges. For these institutions, the fact that machine learning algorithms cannot explain themselves is a godsend. They get outcomes that avoid controversy, and they don't have to answer hard questions about how much individual fairness has been sacrificed. Even better, the individuals who are disadvantaged won't know either; all they will know is that "the computer" found them wanting.
If it were otherwise, of course, those who got the short end of the stick might sue, arguing that it's illegal to deprive them of benefits based on their race or gender. To head off that prospect, the ADPPA bluntly denies them any right to complain. The bill expressly states that, while algorithmic discrimination is unlawful in most cases, it's perfectly legal if it's done "to prevent or mitigate unlawful discrimination" or for the purpose of "diversifying an applicant, participant, or customer pool." There is of course no preference that can't be justified using those two tools. They effectively immunize algorithmic quotas, and the big institutions that deploy them, from charges of discrimination.
If anything like that provision becomes law, "group fairness" quotas will spread across much of American society. Remember that the bill expressly mandates the elimination of disparate impacts in "housing, education, employment, healthcare, insurance, or credit opportunities." So if the Supreme Court this term rules that colleges may not use admissions standards that discriminate against Asians, in a world where the ADPPA is law, all the schools will have to do is switch to an appropriately reeducated admissions algorithm. Once laundered through an algorithm, racial preferences that otherwise break the law would be virtually immune from attack.
Even without a law, demanding that machine learning algorithms meet demographic quotas will have a massive impact. Machine learning algorithms are getting cheaper and better all the time. They are being used to speed many bureaucratic processes that allocate benefits, from handing out food stamps and setting vaccine priorities to deciding who gets a home mortgage, a donated kidney, or admission to college. As shown by the White House AI Bill of Rights, it is now conventional wisdom that algorithmic bias is everywhere and that designers and users have an obligation to stamp it out. Any algorithm that doesn't produce demographically balanced results is going to be challenged as biased, so for companies that offer algorithms the course of least resistance is to build the quotas in. Buyers of those algorithms will ask about bias and express relief when told that the algorithm has no disparate impact on protected groups. No one will give much thought (or even, if the ADPPA passes, a day in court) to individuals who lose a mortgage, a kidney, or a place at Harvard in the name of group justice.
That's just not right. If we're going to impose quotas so widely, we ought to make that choice consciously. Their stealthy spread is bad news for democracy, and probably for fairness.
But it's good news for the cultural and academic left, and for businesses who will do anything to get out of the legal crossfire over race and gender justice. Now that I think about it, maybe that explains why the House GOP fell so thoroughly into line on the ADPPA. Because nothing is more tempting to a Republican legislator than a profoundly stupid bill that has the support of the entire Fortune 500.
Posted at 06:06 PM | Permalink | Comments (0)
We open today's episode with early news of the Supreme Court's decision to review whether section 230 protects platforms from liability for materially assisting terror groups whose speech they distribute (or even recommend). I predict that this is the beginning of the end of the house of cards that aggressive lawyering and good press have built for the platforms on the back of section 230. Why? Because Big Tech stayed out of the Supreme Court too long. Now, when section 230 finally gets to the Court, everyone hates Silicon Valley and its entitled content moderators. Jane Bambauer, Gus Hurwitz, and Mark MacCarthy weigh in admirably, despite the unfairness of having to comment on a cert grant that is less than two hours old.
Just to remind us why everyone hates Big Tech's content practices, we do a quick review of the week's news in content suppression.
For a change of pace, Mark has some largely unalloyed good news. The ITU will not be run by a Russian; instead, it has elected an American, Doreen Bogdan-Martin, to lead it.
Mark tells us that all the Sturm und Drang over tougher antitrust laws for Silicon Valley has wound down to a few modestly tougher provisions that have now passed the House. That is all that will be passed this year, and perhaps in this Administration.
Gus gives us a few highlights from FTCland:
Jane unpacks a California law prohibiting cooperation with subpoenas from other states without an assurance that the subpoenas aren't enforcing laws against abortions that would be legal in California. California is playing the role in twenty-first century federalism that South Carolina played in the nineteenth and twentieth centuries; I predict that some enterprising red state attorney general is likely to challenge the validity of California's law – and win.
Gus notes that private antitrust cases remain hard to win, especially without evidence, as Amazon and major book publishers gain the dismissal of antitrust lawsuits over book pricing.
Finally, in quick hits and updates:
Download the 424th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 09:17 PM | Permalink | Comments (0)
This episode features a much deeper, and more diverse, examination of the Fifth Circuit decision upholding Texas's social media law than we did last week. We devote the last half of this episode to a structured dialogue between Adam Candeub and Alan Rozenshtein about the decision. Both have written about it, Alan critically and Adam supportively. I lead off, arguing that, contrary to legal Twitter's dismissive reaction, the opinion is a brilliant and effective piece of Supreme Court advocacy. Alan thinks that's exactly the problem; he objects to the opinion's grating self-certainty and refusal to acknowledge the less convenient parts of past case law. Adam is closer to my view. We all seem to agree that the opinion succeeds as an audition for Judge Oldham to become Justice Oldham in the DeSantis Administration.
We walk through the opinion and what its critics don't like, touching on the competing free expression interests of social media users and of the platforms themselves, whether there's any basis for an injunction today, given the relative weakness of the overbreadth argument, and whether "exercising editorial discretion" is a fundamental right under the First Amendment or just an artifact of older technologies. Most intriguingly, we find unexpected consensus that Judge Oldham's (and Justice Thomas's) common carrier argument may turn out to be the most powerful argument in the case when it reaches the Court.
In the news roundup, we focus on the sprint to pass additional legislation before the end of the Congress. Michael Ellis explains the debate between the Cyberspace Solarium Commission alumni and business lobbyists over enacting a statutory set of obligations for systemically critical infrastructure companies.
Adam outlines a strange-bedfellows bill that has united Sens. Amy Klobuchar (D-Minn.) and Ted Cruz (R-Texas) in an effort to give small media companies and broadcasters an antitrust immunity to bargain with the big social media platforms over the use of their content. Adam is a skeptic, Alan less so.
The Pentagon, reliably braver when facing bullets than a bad Washington Post story, is performing to type in the flap over fake social media accounts. Michael tells us that the accounts pushed pro-U.S. stories but had met with little success before Meta and Twitter caught on and kicked them off their platforms. Now the Department of Defense is conducting a broad review of military information operations. I predict fewer such efforts and don't mourn their loss.
Adam and I touch on a decision of Meta's Oversight Board criticizing Facebook's automated image takedowns. I offer a new touchstone for understanding content regulation at the Big Platforms: They just don't care, so they've turned the whole effort over to second-rate AI and second-rate employees. There's a lot of explanatory power there.
Michael walks us through the Department of the Treasury's new flexibility on sending communications software and services to Iran.
And, in quick hits, I note that:
Download the 423rd Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets. Especially the pets.
Posted at 07:53 PM | Permalink | Comments (0)
The big news of the week was a Fifth Circuit decision upholding Texas's law regulating social media speech suppression. The decision was poorly received by the usual supporters of social media censorship, but I found it both remarkably well written and surprisingly persuasive. That does not mean it will survive the almost inevitable Supreme Court review, but Judge Oldham wrote an opinion that could be a model for a Supreme Court decision upholding the Texas law.
The big hacking story of the week was a brutal takedown of Uber, probably by the dreaded Advanced Persistent Teenager. Dave Aitel explains what happened and why no other large corporation should feel smug or certain that the same cannot happen to them. Nick Weaver piles on.
Maury Shenk explains a recent European court decision upholding sanctions on Google for its restriction of Android phone implementations.
Dave points to some of the less well publicized aspects of the Twitter whistleblower's testimony before Congress. We agree on the bottom line – that Twitter is utterly incapable of protecting either U.S. national security or even the security of its users' messages. If there were any doubt about that, it would be laid to rest by Twitter's dependence on Chinese government advertising revenue.
Maury and Nick tutor me on The Merge, which moves Ethereum from "proof of work" to "proof of stake," massively reducing the climate footprint of the cryptocurrency. They are both surprisingly upbeat about it.
Maury also lays out a new European proposal for regulating the internet of things – and, I point out, for massively increasing the cost of all those things.
China is getting into the attribution game. It has issued a report blaming the National Security Agency for intruding on Chinese educational institution networks. Dave is not impressed.
The Department of Homeland Security, in breaking news from 2003, has been storing the contents of phones it seizes at the border. Dave predicts that DHS will have to further pull back on its current practices. I'm less sure.
Now that China is regulating vulnerability disclosures, are Chinese companies reluctant to disclose vulnerabilities outside China? The Atlantic Council has a report on the subject, but Dave thinks the results are ambiguous at best.
In quick hits:
Download the 422nd Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:58 PM | Permalink | Comments (0)
Gus Hurwitz brings us up to speed on major tech bills in Congress. They are all dead. But some of them don't know it yet.
The big privacy bill, American Data Privacy and Protection Act, was killed by the left, but I argue that it's the right that should be celebrating, since the bill would have imposed race and gender preferences all across the economy, and the GOP members who supported the measure in the House were likely sold a bill of goods by industry lobbyists.
The big antitrust bill, American Innovation and Choice Online Act, is also a zombie, Gus argues, lurching undead toward the Senate floor but unlikely to muster the GOP votes needed to pass, mainly because content moderation has become a simple partisan issue: the GOP wants less (or fairer) moderation, Dems want more of what Silicon Valley has been dishing out for the past few years. If the bill doesn't produce viewpoint competition in the tech sector, it offers nothing for the GOP, and industry lobbyists are happily driving wedges into that divide.
The same divide also caused a stutter in the bill allowing newspapers to bargain collectively with the big platforms. It may make it to the floor, but it's already losing body parts.
Meanwhile, the White House is having a weirdly inconclusive "listening session" that might better have been called a "talking but not really proposing anything session."
When Iran launched a wiper attack on Albania for harboring the Mujahedin-e-Khalq, Albania broke relations with Iran and the U.S. promised consequences. In fact, all the U.S. seems to have done is impose meaningless sanctions on the already-sanctioned Iranian spy ministry. What was Iran's response? A second cyberattack on Albania. Nate Jones runs down the story. Jamil Jaffer and I question whether governmental sanctions on foreign intelligence agencies, which never promised much, are now delivering an appearance of haplessness rather than strength.
Jamil and I dwell on the criminal trial of Joe Sullivan over his handling of hackers who got access to personal data stored by Uber. He was the chief security officer, and he decided to pay the hackers a bug bounty in exchange for their promise to destroy the data. That allowed Uber to avoid treating (and reporting) the incident as a breach. Creative lawyering or too creative by half? I could go either way, but calling it obstruction of justice and wire fraud seems like a reach. Nonetheless, that's what the Justice Department is charging in a case that opened last week. It is heavily politicized, and all the politics – corporate and governmental – line up against Sullivan. Whether the jury will do the same is another question. Meanwhile, everyone from other CISOs to former New York Times reporter Nicole Perlroth is questioning the prosecution's merits and warning of its likely consequences. However the case comes out, I predict that the biggest loser will be the FBI, which will never again get the kind of welcome from CISOs that it has enjoyed in more innocent days.
Jamil critiques Apple's decision to support China's chip industry with new orders – and its claim that the chips it puts in its phones for the China market will stay in China.
The sanctions on Tornado Cash come back to the podcast for the second week in a row, Nate tells us, this time as litigation. Coinbase is funding an APA and constitutional challenge to Treasury's sanctioning of a pile of code rather than a person or entity. My money is on the Treasury winning in the end.
In quicker hits,
Download the 421st Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 06:51 PM | Permalink | Comments (0)
This is our return-from-hiatus episode. Jordan Schneider kicks things off by covering the passage of a major U.S. semiconductor-building subsidy bill, while new contributor Brian Fleming talks with Nick Weaver about new foreign investment restrictions on chip (and maybe other) technology companies, as well as new export controls on artificial intelligence (AI) chips going to China. Jordan also covers a big corruption scandal arising from China's big chip-building subsidy program, leading me to wonder when we'll have our version.
Brian and Nick cover the month’s biggest cryptocurrency policy story, the imposition of OFAC sanctions on Tornado Cash. They agree that, while the outer limits of sanctions aren’t entirely clear, they are likely to show that sometimes the U.S. Code actually does trump digital code. Nick points listeners to his bracing essay, OFAC Around and Find Out.
Paul Rosenzweig reprises his role as the voice of reason in the debate over location tracking and Dobbs. (Literally. Paul and I did an hour-long panel on the topic last week. It’s available here.) I reprise my role as Chief Privacy Skeptic, calling the Dobbs/location fuss an overrated tempest in a teapot.
Brian takes on one aspect of the Mudge whistleblower complaint criticizing Twitter's security: Twitter's poor record at keeping foreign spies from infiltrating its workforce and getting wide access to its customer records. Perhaps coincidentally, he notes, a former Twitter employee was just convicted of "spying lite," proving the company is just as good at national security protection as it is at content moderation.
Meanwhile, returning to onshore aspects of U.S.-China economic relations, Jordan tells us about the survival of high-level government concerns about TikTok. I note that, in the years since these concerns first surfaced in the Trump era, TikTok’s lobbying efforts have only grown more sophisticated. Speaking of which, Klon Kitchen has done a good job of highlighting DJI’s increasingly sophisticated lobbying in Washington D.C.
The Cloudflare decision to deplatform Kiwi Farms kicks off a donnybrook, with Paul and Nick on one side and me on the other. It’s a classic Cyberlaw Podcast debate.
In quick hits and updates:
Download the 420th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 03:52 PM | Permalink | Comments (0)
Just when you thought you had a month free of the Cyberlaw Podcast, it turns out that we are persisting, at least a little. This month we offer a bonus episode, in which Dave Aitel and I interview Michael Fischerkeller, one of three authors of "Cyber Persistence Theory: Redefining National Security in Cyberspace."
The book is a detailed analysis of how cyberattacks and espionage work in the real world – and a sharp critique of military strategists who have substituted their models and theories for the reality of cyber conflict. We go deep on the authors' view that conflict in the cyber realm is all about persistent contact and faits accomplis rather than compulsion and escalation risk. Dave pulls these threads with enthusiasm.
I recommend the book and interview in part because of how closely the current thinking at United States Cyber Command is mirrored in both.
Download the 419th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 07:40 PM | Permalink | Comments (0)
As Congress barrels toward an election that could see at least one house change hands, efforts to squeeze a few big bills into law are mounting. The one with the best chance (better than I expected) would drop $52 billion in cash and a boatload of tax breaks on the semiconductor industry. Michael Ellis points out that this is industrial policy without apology, and a throwback to the 1980s, when the government organized Sematech to shore up U.S. chipmaking. Thanks to a bipartisan consensus on the need to fight the Chinese challenge, and the elimination of controversial provisions that tried to hitch a ride on the bill, there now looks to be a clear path to enactment.
And if there were any doubt about how serious the Chinese challenge in chips will be, we highlight an undercovered story revealing that China's chipmaking champion, SMIC, has been making 7-nanometer chips for months without making a public announcement. That's a process node that Intel and GlobalFoundries, the main U.S. producers, have yet to reach in commercial production.
The national security implications are plain. If commercial products from China are cheap enough to sweep the market, even security-minded agencies will be forced to buy them, as it turns out the FBI and DHS have both been doing with Chinese drones. Nick Weaver recommends that policymakers read his Lawfare piece showing just how cheaply the U.S. (and Ukraine) could be making drones.
Responding to the growing political concern about national security and Chinese products, TikTok's owner, ByteDance, has increased its U.S. lobbying budget to more than $8 million a year, Christina Ayiotis tells us; that's an amount, I point out, that just about matches what Google spends on lobbying.
In the same vein, Nick and Michael question why the government hasn't come up with the extra $3 billion to fund "rip and replace" for Chinese telecom gear. That effort will certainly get a boost from reports that Chinese telecom gear was offered on especially favorable terms to carriers who service America's nuclear missile locations. I note that the Obama administration actually paid these same rural carriers to install Chinese equipment in the teens, as part of the 2009 stimulus law. I can't help wondering why U.S. taxpayers should pay those carriers both to install and to remove the same gear.
In news not tied to China, Nick tells us about the House’s serious progress on a compromise federal data privacy bill. It’s probably still doomed, given resistance from Dems (and maybe the GOP) in the Senate. I argue that that’s a good thing, given the bill's egregious effort to impose “disparate impact” quotas for race, color, religion, national origin, sex, and disability on the outcomes of every algorithm that processes even a little personal data. This is a transformative social engineering project, imposed by a single section (207) of the bill without any serious debate.
Tina grades Russian information warfare based on its latest exploit: hacking a Ukrainian radio broadcaster to spread fake news about Zelensky's health. As a hack, it gets a passing grade, but as a believable bit of information warfare, it's a bust.
Tina, Michael, and I evaluate YouTube's new policy on removing "misinformation" related to abortion, and the risk that this policy, like so many Silicon Valley speech suppression schemes, will start out sounding plausible and end up enforcing political correctness.
Nick and I celebrate DOJ’s increasing though still episodic success in seizing cryptocurrency from hackers and ransomware gangs. It may just be Darwin at work, but it’s nice to see.
Nick offers the recommended long read of the week -- Brian Krebs’s takedown of the VPN malware supplier, 911.
And in updates and quick hits:
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com.
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
* This week's title is an obscure Rhode Island tribute to the Industrial Trust Building, known to a generation of children as the "Dusty Old Trust" building until a new generation christened it the "Superman Building."
Posted at 07:34 AM | Permalink | Comments (0)