Skating on Stilts -- the award-winning book
Now available in traditional form factor from Amazon and other booksellers.
It's also available in a Kindle edition.
And for you cheapskates, the free Creative Commons download is here.
Posted at 08:50 PM in Random posts | Permalink | Comments (5)
For those who've followed the progress of a dangerous stealth quota provision in Congress, I'm pleased to report that what looked three weeks ago like a retreat on the issue has turned into a full-fledged rout.
A new discussion draft of the widely touted American Privacy Rights Act (APRA) has been released. This bill was hailed as a bipartisan and bicameral compromise with overwhelming support when it first appeared. The original version contained a detailed blueprint for imposing race, gender, and other preferences on algorithms that use personal data. After a long analysis of the risks of such an approach ran here in the Volokh Conspiracy, a second version of the bill was released that dropped most of the detail but still had troubling provisions that could have encouraged similar preferences, as pointed out in a second Volokh Conspiracy post.
Now a third discussion draft has been released, and it drops all of the algorithmic discrimination and civil rights provisions that were driving quotas. It is a complete victory for those of us who objected to the smuggling of race and gender preferences into the digital infrastructure that will govern our economy and society for the next several decades.
The bill will go to markup next week. It remains controversial. A good summary of the issues can be found in this piece by Brandon Pugh and Steven Ward of R Street. There will be some bare-fisted R-on-R fighting over the bill, a priority for the retiring chair of the House commerce committee. But at least quotas won't be part of the bargain.
On a personal note, this has been an unusual experience for me. There is no doubt that staff and members of the commerce committee have been paying attention to these posts, and modifying the bill to respond to them. But exactly which staff and which members has never been entirely clear. So I can only lift a virtual glass to the anonymous heroes who performed such effective work in the trenches. And I promise that I'll be glad to buy you an actual beer if I ever learn who you are!
Posted at 10:03 AM | Permalink | Comments (0)
There are new twists in the saga of algorithmic bias and the American Privacy Rights Act, or APRA. That's the privacy bill that would have imposed race and gender quotas on AI algorithms. I covered that effort two weeks ago in a detailed article for the Volokh Conspiracy.
A lot has happened since then. Most importantly, publicity around its quota provisions forced the drafters of APRA into retreat. A new discussion draft was released, and it dropped much of the quota-driving language. Then, a day later, a House commerce subcommittee held a markup on the new draft. It was actually more of a nonmarkup markup; member after member insisted that the new draft needed further changes and then withdrew their amendments pending further discussions. With that ambiguous endorsement, the subcommittee sent APRA to the full committee.
Still, it is good news that APRA now omits the original disparate impact and quota provisions. No explanation was offered for the change, but it seems clear that few in Congress want to be seen forcing quotas into algorithmic decisionmaking.
That said, there's reason to fear that the drafters still hope to sneak algorithmic quotas into most algorithms without having to defend them. The new version of APRA has four provisions on algorithmic discrimination. First, the bill forbids the use of data in a manner that "discriminates in or otherwise makes unavailable the equal enjoyment of goods and services" on the basis of various protected characteristics. Sec. 113(a)(1). That promising start is immediately undercut by the second provision, which allows discrimination in the collection of data either to conduct "self-testing" to prevent or mitigate unlawful discrimination or to expand the pool of applicants or customers. Id. at (a)(2). The third provision requires users to assess the potential of an algorithm "to cause a harm, including harm to an individual or group of individuals on the basis of protected characteristics." Id. at (b)(1)(B)(ix). Finally, in that assessment, users must provide details of the steps they are taking to mitigate such harms "to an individual or group of individuals." Id.
The self-assessment requirement clearly pushes designers of algorithms toward fairness not simply to individuals but to demographic groups. Algorithmic harm must be assessed and mitigated not just on an individual basis but also on a group basis. Judging an individual on his or her group identity sounds a lot like discrimination, but APRA makes sure that such judgments are immune from liability; it defines discrimination to exclude measures taken to expand a customer or applicant pool.
So, despite its cryptic phrasing, APRA can easily be read as requiring that algorithms avoid harming a protected group, an interpretation that leads quickly to quotas as the best way to avoid group harm. Certainly, agency regulators would not have trouble providing guidance that gets to that result. They need only declare that an algorithm causes harm to a "group of individuals" if it does not ensure them a proportionate share in the distribution of jobs, goods, and services. Even a private company that likes quotas because they're a cheap way to avoid accusations of bias could implement them and then invoke the two statutory defenses -- that its self-assessment required an adjustment to achieve group justice, and that the adjustment is immune from discrimination lawsuits because it is designed to expand the pool of beneficiaries.
In short, while not as gobsmackingly coercive as its predecessor, the new APRA is still likely to encourage the tweaking of algorithms to reach proportionate representation, even at the cost of accuracy.
This is a big deal. It goes well beyond quotas in academic admissions and employment. It would build "group fairness" into all kinds of decision algorithms – from bail decisions and health care to Uber trips, face recognition, and more. What's more, because it's not easy to identify how machine learning algorithms achieve their weirdly accurate results, the designers of those algorithms will be tempted to smuggle racial or gender factors into their products without telling the subjects or even the users.
This process is already well under way -- even in healthcare, where compromising the accuracy of an algorithm for the sake of proportionate outcomes can be a matter of life or death. A recent paper on algorithmic bias in health care published by the Harvard School of Public Health recommended that algorithm designers protect "certain groups" by "inserting an artificial standard in the algorithm that overemphasizes these groups and deemphasizes others."
This kind of crude intervention to confer artificial advantages by race and gender is in fact routinely recommended by experts in algorithmic bias. Thus, the McKinsey Global Institute advises designers to impose what it calls "fairness constraints" on their products to force algorithms to achieve proportional outcomes. Among the approaches it finds worthy are "post-processing techniques [that] transform some of the model's predictions after they are made in order to satisfy a fairness constraint." Another recommended approach "imposes fairness constraints on the optimization process itself." In both cases, to be clear, the model is being made less accurate in order to fit the designer's views of social justice. And in each case, the compromise will fly below the radar. The designer's social justice views are hidden by a fundamental characteristic of machine learning; the machine produces the results that the trainers reward. If they only reward results that meet certain demographic requirements, that's what the machine will produce.
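For readers who want to see how little machinery this takes, here is a rough sketch of the "post-processing" approach using made-up numbers. Nothing in it comes from McKinsey, APRA, or any real vendor; the scores, groups, and cutoffs are invented for illustration. The point is simply that a few lines applied after the model runs can move a group's selection rate to whatever target the designer chooses, and the final yes/no decisions carry no trace of the adjustment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model scores for applicants from two demographic groups, where
# group B happens to score lower on average -- the "disparate impact" scenario.
scores_a = rng.normal(0.60, 0.15, 1000)
scores_b = rng.normal(0.50, 0.15, 1000)

# Unconstrained decision rule: one cutoff for everyone.
cutoff = 0.65
rate_a = np.mean(scores_a >= cutoff)
rate_b = np.mean(scores_b >= cutoff)
print(f"single cutoff: group A selected {rate_a:.1%}, group B selected {rate_b:.1%}")

# "Post-processing" fairness constraint: after the model has scored everyone,
# lower group B's cutoff until its selection rate matches group A's. Accuracy
# is traded away for proportional outcomes, invisibly to anyone who sees only
# the final decisions.
cutoff_b = np.quantile(scores_b, 1 - rate_a)
rate_b_adjusted = np.mean(scores_b >= cutoff_b)
print(f"adjusted cutoff for group B: {cutoff_b:.2f}, selection rate now {rate_b_adjusted:.1%}")
```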
If you're wondering how far from reality such constraints wander, take a look at the "text to image" results originally produced by Google Gemini. When asked for pictures of German soldiers in the 1940s, Gemini's training required that it serve up images of black and Asian Nazis. The consequences of bringing such political correctness to healthcare decisions could be devastating – and much harder to spot.
That's why we can't afford APRA's quota-nudging approach. The answer is not to simply delete those provisions, but to address the problem of stealth quotas directly. APRA should be amended to make clear the fundamental principle that identity-based adjustments of algorithms require special justification. They should be a last resort, used only when actual discrimination has provably distorted algorithmic outcomes – and when other remedies are insufficient. They should never be used when apparent bias can be cured simply by improving the algorithm's accuracy. To take one example, face recognition software ten or fifteen years ago had difficulty accurately identifying minorities and darker skin tones. But today those difficulties can be largely overcome by better lighting, cameras, software, and training sets. Such improvements in algorithmic accuracy are far more likely to be seen as fair than forcing identity-based solutions.
Equally important, any introduction of race, gender, and other protected characteristics into an algorithm's design or training should be open and transparent. Controversial "group justice" measures should never be hidden from the public, from users of algorithms or from the individuals who are affected by those measures.
With those considerations in mind, I've taken a very rough cut at how APRA could be amended to make sure it does not encourage widespread imposition of algorithmic quotas:
"(a) Except as provided in section (b), a covered algorithm may not be modified, trained, prompted, rewarded or otherwise engineered using race, ethnicity, national origin, religion, sex, or other protected characteristic --
(1) to affect the algorithm's outcomes or
(2) to produce a particular distribution of outcomes based in whole or in part on race, ethnicity, national origin, religion, or sex.
(b) A covered algorithm may be modified, trained, prompted, rewarded or engineered as described in section (a) only:
(1) to the extent necessary to remedy a proven act or acts of discrimination that directly and proximately affected the data on which the algorithm is based and
(2) if the algorithm has been designed to ensure that any parties adversely affected by the modification can be identified and notified whenever the modified algorithm is used.
(c) An algorithm modified in accordance with section (b) may not be used to assist any decision unless parties adversely affected by the modification are identified and notified. Any party so notified may challenge the algorithm's compliance with section (b)."
It's not clear to me that such a provision will survive a Democratic Senate and a House that is Republican by a hair. But Congress's composition could change dramatically in a few months. Moreover, regulating artificial intelligence is not just a federal concern.
Left-leaning state legislatures have taken the lead in adopting laws on AI bias; last year, the Brennan Center identified seven jurisdictions with proposed or enacted laws addressing AI discrimination. And of course the Biden administration is pursuing multiple anti-bias initiatives. Many of these legal measures, along with a widespread push for ethical codes aimed at AI bias, will have the same quota-driving impact as APRA.
Conservative legislators have been slow to react to the enthusiasm for AI regulation; their silence guarantees that their constituents will be governed by algorithms written to blue-state regulatory standards. If conservative legislatures don't want to import stealth quotas, they will need to adopt their own laws restricting algorithmic race and gender discrimination and requiring transparency whenever algorithms are modified using race, gender and similar characteristics. So even if APRA is never amended or adopted, the language above, or some more artful version of it, could become an important part of the national debate over artificial intelligence.
Posted at 10:12 AM | Permalink | Comments (0)
More than two-thirds of Americans think the Supreme Court was right to hold Harvard's race-based admissions policy unlawful. But the minority who disagree have no doubt about their own moral authority, and there's every reason to believe that they intend to undo the Court's decision at the earliest opportunity.
Which could be as soon as this year. In fact, undoing the Harvard admissions decision is the least of it. Republicans and Democrats in Congress have embraced a precooked "privacy" bill that will impose race and gender quotas not just on academic admissions but on practically every private and public decision that matters to ordinary Americans. The provision could be adopted without scrutiny in a matter of weeks; that's because it is packaged as part of a bipartisan bill setting federal privacy standards -- something that has been out of reach in Washington for decades. And it looks as though the bill breaks the deadlock by giving Republicans some of the federal preemption their business allies want while it gives Democrats and left-wing advocacy groups a provision that will quietly overrule the Supreme Court's Harvard decision and impose identity-based quotas on a wide swath of American life.
This tradeoff first showed up in a 2023 bill that Democratic and Republican members of the House commerce committee approved by an overwhelming 53-2 vote. That bill, however, never won the support of Sen. Cantwell (D-WA), who chairs the Senate commerce committee. This time around, a lightly revised version of the bill has been endorsed by both Sen. Cantwell and her House counterpart, Cathy McMorris Rodgers (R-WA). The bill has a new name, the American Privacy Rights Act of 2024 (APRA), but it retains the earlier bill's core provision, which uses a "disparate impact" test to impose race, gender, and other quotas on practically every institutional decision of importance to Americans.
"Disparate impact" has a long and controversial history in employment law; it's controversial because it condemns as discriminatory practices that disproportionately affect racial, ethnic, gender, and other protected groups. Savvy employers soon learn that the easiest way to avoid disparate impact liability is to eliminate the disparity – that is, to hire a work force that is balanced by race and ethnicity. As the Supreme Court pointed out long ago, this is a recipe for discrimination; disparate impact liability can "leave the employer little choice . . . but to engage in a subjective quota system of employment selection." Wards Cove Packing Co. v. Atonio, 490 U.S. 642, 652-53 (1989), quoting Albemarle Paper Co. v. Moody, 422 U.S. 405, 448 (1975) (Blackmun, J., concurring).
In the context of hiring and promotion, the easy slide from disparate impact to quotas has proven controversial. The Supreme Court decision that adopted disparate impact as a legal doctrine, Griggs v. Duke Power Co., 401 U.S. 424 (1971), has been persuasively criticized for ignoring Congressional intent. G. Heriot, Title VII Disparate Impact Liability Makes Almost Everything Presumptively Illegal, 14 N.Y.U. J. L. & Liberty 1 (2020). In theory, Griggs allowed employers to justify a hiring rule with a disparate impact if they could show that the rule was motivated not by animus but by business necessity. A few rules have been saved by business necessity; lifeguards have to be able to swim. But in the years since Griggs, the Supreme Court and Congress have struggled to define the business necessity defense; in practice there are few if any hiring qualifications that clearly pass muster if they have a disparate impact.
And there are few if any employment qualifications that don't have some disparate impact. As Prof. Heriot has pointed out, "everything has a disparate impact on some group:"
On average, men are stronger than women, while women are generally more capable of fine handiwork. Chinese Americans and Korean Americans score higher on standardized math tests and other measures of mathematical ability than most other national origin groups….
African American college students earn a disproportionate share of college degrees in public administration and social services. Asian Americans are less likely to have majored in Psychology. Unitarians are more likely to have college degrees than Baptists.…
I have in the past promised to pay $10,000 to the favorite charity of anyone who can bring to my attention a job qualification that has made a difference in a real case and has no disparate impact on any race, color, religion, sex, or national origin group. So far I have not had to pay.
Id. at 35-37. In short, disparate impacts are everywhere in the real world, and so is the temptation to solve the problem with quotas. The difficulty is that, as the polls about the Harvard decision reveal, most Americans don't like the solution. They think it's unfair. As Justice Scalia noted in 2009, the incentives for racial quotas set the stage for a "war between disparate impact and equal protection." Ricci v. DeStefano, 557 U.S. 557, 594 (2009).
Not surprisingly, quota advocates don't want to fight such a war in the light of day. That's presumably why APRA obscures the mechanism by which it imposes quotas.
Here's how it works. APRA's quota provision, section 13, says that any entity that "knowingly develops" an algorithm for its business must evaluate that algorithm "to reduce the risk of" harm. And it defines algorithmic "harm" to include causing a "disparate impact" on the basis of "race, color, religion, national origin, sex, or disability" (plus, weirdly, "political party registration status"). APRA Sec. 13(c)(1)(B)(vi)(IV)&(V).
At bottom, it's as simple as that. If you use an algorithm for any important decision about people -- to hire, promote, advertise, or otherwise allocate goods and services -- you must ensure that you've reduced the risk of disparate impact.
The closer one looks, however, the worse it gets. At every turn, APRA expands the sweep of quotas. For example, APRA does not confine itself to hiring and promotion. It provides that, within two years of the bill's enactment, institutions must reduce any disparate impact the algorithm causes in access to housing, education, employment, healthcare, insurance, or credit.
No one escapes. The quota mandate covers practically every business and nonprofit in the country, other than financial institutions. APRA sec. 2(10). And its regulatory sweep is not limited, as you might think, to sophisticated and mysterious artificial intelligence algorithms. A "covered algorithm" is broadly defined as any computational process that helps humans make a decision about providing goods or services or information. APRA, Section 2 (8). It covers everything from a ground-breaking AI model to an aging Chromebook running a spreadsheet. In order to call this a privacy provision, APRA says that a covered algorithm must process personal data, but that means pretty much every form of personal data that isn't deidentified, with the exception of employee data. APRA, Section 2 (9).
Actually, it gets worse. Remember that some disparate impacts in the employment context can be justified by business necessity. Not under APRA, which doesn't recognize any such defense. So if you use a spreadsheet to rank lifeguard applicants based on their swim test, and minorities do poorly on the test, your spreadsheet must be adjusted until the scores for minorities are the same as everyone else's.
To see how APRA would work, let's try it on Harvard. Is the university a covered entity? Sure, it's a nonprofit. Do its decisions affect access to an important opportunity? Yes, education. Is it handling nonpublic personal data about applicants? For sure. Is it using a covered algorithm? Almost certainly, even if all it does is enter all the applicants' data in a computer to make it easier to access and evaluate. Does the algorithm cause harm in the shape of disparate impact? Again, objective criteria will almost certainly result in underrepresentation of various racial, religious, gender, or disabled identity groups. To reduce the harm, Harvard will be forced to adopt admissions standards that boost black and Hispanic applicants past Asian and white students with comparable records. The sound of champagne corks popping in Cambridge will reach all the way to Capitol Hill.
Of course, Asian students could still take Harvard to court. There is a section of APRA that seems to make it unlawful to discriminate on the basis of race and ethnicity. APRA Sec. 13(a)(1). But in fact APRA offers the nondiscrimination mandate only to take it away. It carves out an explicit exception for any covered entity that engages in self-testing "to prevent or mitigate unlawful discrimination" or to "diversify an applicant, participant, or customer pool." Harvard will no doubt say that it adopted its quotas after its "self-testing" revealed a failure to achieve diversity in its "participant pool," otherwise known as its freshman class.
Even if the courts don't agree, the Federal Trade Commission can ride to the rescue. APRA gives the Commission authority to issue guidance or regulations interpreting APRA – including issuing a report on best practices for reducing the harm of disparate impact. APRA Sec. 13(c)(5)&(6). What are the odds that a Washington bureaucracy won't endorse race-based decisions as a "best practice"?
It's worth noting that, while I've been dunking on Harvard, I could have said the same about AT&T or General Electric or Amazon. In fact, big companies with lots of personal data face added scrutiny under APRA; they must do a quasipublic "impact assessment" explaining how they are mitigating any disparate impact caused by their algorithms. That creates heavy pressure to announce publicly that they've eliminated all algorithmic harm. That will be an added incentive to implement quotas, but as with Harvard, many big companies don't really need an added incentive. They all have active internal DEI bureaucracies that will be happy to inject even more race and gender consciousness into corporate life, as long as the injection is immune from legal challenge.
And immune it will be. As we've seen, APRA provides strong legal cover for institutions that adopt quota systems. And I predict that, for those actually using artificial intelligence, there will be an added layer of obfuscation that will stop legal challenges before they get started. It seems likely that the burden of mitigating algorithmic harm will quickly be transferred from the companies buying and using algorithms to the companies that build and sell them. Algorithm vendors are already required by many buyers to certify that their products are bias-free. That will soon become standard practice. With APRA on the books, there won't be any doubt that the easiest and safest way to "eliminate bias" will be to build quotas in.
That won't be hard to do. Artificial intelligence and machine learning vendors can use their training and feedback protocols to achieve proportional representation of minorities, women, and the disabled.
During training, AI models are evaluated based on how often they serve up the "right" answers. Thus, a model designed to help promote engineers may be asked to evaluate the resumes of actual engineers who've gone through the corporate promotion process. Its initial guesses about which engineers should be promoted will be compared to actual corporate experience. If the machine picks candidates who performed badly, its recommendation will be marked wrong and it will have to try again. Eventually the machine will recognize the pattern of characteristics, some not at all obvious, that make for a promotable engineer.
But everything depends on the training, which can be constrained by arbitrary factors. A company that wanted to maximize two things -- the skill of its senior engineers and their intramural softball prowess -- could easily train its algorithm to downgrade engineers who can't throw or hit. The algorithm would eventually produce the best set of senior engineers consistent with winning the intramural softball tournament every year. Of course, the model could just as easily be trained to produce the best set of senior engineers consistent with meeting the company's demographic quotas. And the beauty from the company's point of view is that the demographic goals never need to be acknowledged once the training has been completed – probably in some remote facility owned by its vendor. That uncomfortable topic can be passed over in silence. Indeed, it may even be hidden from the company that purchases the product, and it will certainly be hidden from anyone the algorithm disadvantages.
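For the technically curious, here is a rough sketch of how a reward rule with a demographic condition baked in might look. Everything in it is hypothetical -- the candidate pool, the scores, the group labels, and the quota are invented, and a brute-force search stands in for the actual training loop -- but it illustrates the point above: the model ends up producing whatever slate the reward function is willing to mark as right.

```python
import itertools
import random

random.seed(0)

# Hypothetical engineer pool: group A candidates happen to score higher on the
# (equally hypothetical) performance measure than group B candidates.
candidates = (
    [(f"eng{i}", random.gauss(1.0, 0.5), "A") for i in range(6)]
    + [(f"eng{i}", random.gauss(0.0, 0.5), "B") for i in range(6, 12)]
)

def reward(slate, quota_for_b=None):
    """The trainer's feedback on a proposed promotion slate.

    Without a quota, the reward is simply total performance. With a quota, any
    slate short of the required number of group B promotions is marked wrong
    (negative infinity), no matter how strong its engineers are.
    """
    if quota_for_b and sum(1 for _, _, g in slate if g == "B") < quota_for_b:
        return float("-inf")
    return sum(score for _, score, _ in slate)

def best_slate(quota_for_b=None, size=4):
    # Brute-force search stands in for training: the model converges on
    # whatever slate the reward function rates highest.
    return max(itertools.combinations(candidates, size),
               key=lambda s: reward(s, quota_for_b))

print("rewarded on performance alone:  ", [name for name, _, _ in best_slate()])
print("rewarded only if 2 are group B: ", [name for name, _, _ in best_slate(quota_for_b=2)])
```

Once training is finished, the quota lives on only in the model's learned behavior; nothing in the deployed product announces it.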
To be fair, unlike its 2023 predecessor, APRA at least nods in the direction of helping the algorithm's victims. A new Section 14 requires that institutions tell people if they are going to be judged by an algorithm, provide them with "meaningful information" about how the algorithm makes decisions, and give them an opportunity to opt out.
This is better than nothing, for sure. But not by much. Companies won't have much difficulty providing a lot of information about how their algorithms work without ever quite explaining who gets the short end of the disparate-impact stick. Indeed, as we've seen, the company that's supposed to provide the information may not even know how much race or gender preference has been built into its outcomes. More likely it will be told by its vendor, and will repeat, that the algorithm has been trained and certified to be bias-free.
What if a candidate suspects the algorithm is stacked against him? How does section 14's assurance that he can opt out help? Going back to our Harvard example, suppose that an Asian student figures out that the algorithm is radically discounting his achievements because of his race. If he opts out, what will happen? He won't be subjected to the algorithm. Instead, presumably, he'll be put in a pool with other dissidents and evaluated by humans -- who will almost certainly wonder about his choice and may well presume that he's a racist. Certainly, opting out provides the applicant no protection, given the power and information imbalance between him and Harvard. Yet that is all that APRA offers.
Let's be blunt; this is nuts. Overturning the Supreme Court's Harvard admissions decision in such a sneaky way is bad enough, but imposing Harvard's identity politics on practically every part of American life -- housing, education, employment, healthcare, insurance, and credit for starters – is worse. APRA's effort to legalize, if not mandate, quotas in all these fields has nothing to do with privacy. The bill deserves to be defeated or at least shorn of sections 13 and 14.
These are the provisions that I've summarized here, and they can be excised without affecting the rest of the bill. That is the first order of business. But efforts to force quotas into new fields by claiming they're needed to remedy algorithmic bias will continue, and they deserve a solution bigger than defeating a single bill. I've got some thoughts about ways to legislate protection against those efforts that I'll save for a later date. For now, though, passage of APRA is an imminent threat, particularly in light of the complete lack of concern expressed so far by any member of Congress, Republican or Democrat.
Posted at 04:19 PM | Permalink | Comments (0)
Okay, yes, I promised to take a hiatus after episode 500. Yet here it is a week later, and I'm releasing episode 501. Here's my excuse. I read and liked Dmitri Alperovitch's book, "World on the Brink: How America Can Beat China in the Race for the 21st Century." I told him I wanted to do an interview about it. Then the interview got pushed into late April because that's when the book is actually coming out.
So sue me. I'm back on hiatus.
The conversation begins with Dmitri's background in cybersecurity and geopolitics, from his emigration from the Soviet Union as a child through the founding of CrowdStrike to his roles as a founder of Silverado Policy Accelerator and an advisor to the Defense Department. Dmitri shares his journey, including his early start in cryptography and his role in investigating the 2010 Chinese hack of Google and other companies, which he named Operation Aurora.
Dmitri opens his book with a chillingly realistic scenario of a Chinese invasion of Taiwan. He explains that this is not merely a hypothetical exercise, but a well-researched depiction based on his extensive discussions with Taiwanese leadership, military experts, and his own analysis of the terrain.
Then, we dive into the main theme of his book -- how to prevent that scenario from coming true. Dmitri stresses the similarities and differences between the US-Soviet Cold War and what he sees as Cold War II between the U.S. and China. He argues that, like Cold War I, Cold War II will require a comprehensive strategy, leveraging military, economic, diplomatic, and technological deterrence.
Dmitri also highlights the structural economic problems facing China, such as the middle-income trap and a looming population collapse. Despite these challenges, he stresses that the U.S. will face tough decisions as it seeks to deter conflict with China while maintaining its other global obligations.
We talk about diversifying critical supply chains away from China and slowing China's technological progress in areas like semiconductors. This will require continuing collaboration with allies like Japan and the Netherlands to restrict China's access to advanced chip-making equipment.
Finally, I note the remarkable role played in Cold War I by Henry Kissinger and Zbigniew Brzezinski, two influential national security advisers who were also first-generation immigrants. I ask whether it's too late to nominate Dmitri to play the same role in Cold War II. You heard it here first!
You can download episode 501 here.
The Cyberlaw Podcast is open to feedback. Send comments to @stewartbaker on Twitter or to [email protected]. Remember: The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 10:27 AM | Permalink | Comments (0)
There’s a whiff of Auld Lang Syne about episode 500 of the Cyberlaw Podcast, since after this the podcast will be going on hiatus for some time and maybe forever. (Okay, there will be an interview with Dmitri Alperovitch about his forthcoming book, but the news commentary is done for now.) Perhaps it’s appropriate, then, for our two lead stories to revive a theme from the 90s – who’s better, Microsoft or Linux? Sadly for both, the current debate is over who’s worse, at least for cybersecurity.
Microsoft’s sins against cybersecurity are laid bare in a report of the Cyber Safety Review Board, Paul Rosenzweig reports. The Board digs into the compromise of a Microsoft signing key that gave China access to U.S. government email. The language of the report is sober, and all the more devastating because of its restraint. Microsoft seems to have entirely lost the security focus it so famously pivoted to twenty years ago, and getting it back will be hard at a time when the company feels compelled to put all its effort into building AI into its offerings. The only people who come out of the report looking good are the State Department security team, whose mad cyber skillz deserve to be celebrated – not least because they’ve been questioned by the rest of government for decades.
With Microsoft down, you might think open source would be up. Think again, Nick Weaver tells us. The strategic vulnerability of open source, as well as its appeal, is that anybody can contribute code to a project they like. And in the case of the XZ backdoor, anybody did just that. A well-organized, well-financed, and knowledgeable group of hackers cajoled and bullied their way into a contributing role on an open source project that enabled various compression algorithms. Once in, they contributed a backdoored feature that used public key encryption to ensure access for the authors of the feature. It was weeks from being in every Linux distro when a Microsoft employee discovered the implant. But the people who almost pulled this off were well-practiced and well-resourced. They’ve likely done this before, and will likely do it again, making them and others like them open source's long-term strategic vulnerability.
It wouldn’t be the Cyberlaw Podcast without at least one Baker rant about political correctness. The much-touted bipartisan privacy bill threatening to sweep to enactment in this Congress turns out to be a disaster for anyone who opposes identity politics. To get liberals on board with a modest amount of privacy preemption, I charge, the bill would effectively overturn the Supreme Court’s Harvard admissions decision and impose race, gender, and other quotas on a host of other activities that have avoided them so far. Adam Hickey and I debate the language of the bill. Why, you might ask, would the Republicans who control the House go along with this bill? I offer two reasons: first, business lobbyists want both preemption and a way to avoid lawsuits over discrimination, even if it means relying on quotas; second, maybe former Wyoming Senator Alan Simpson (R) was right, and the Republican Party really is the Stupid Party.
Nick and I turn to a difficult AI story, about how Israel is using algorithms to identify and kill even low-level Hamas operatives in their homes. Far more than killer robots, this use of AI in war is likely to sweep the world. Nick is critical of Israel’s approach; I am less so. But there’s no doubt that the story forces a sober assessment of just how personal and how ugly war will soon be.
Paul takes the next story, in which Microsoft serves up leftover “AI gonna steal yer election” tales that are not much different than all the others we’ve heard since 2016. The bottom line: China is using AI to advance its interests in American social media and to probe U.S. weaknesses, but so far the effort doesn’t seem to be having much effect.
Nick answers the question, “Will AI companies run out of training data?” He thinks they already have. He invokes the Hapsburgs to explain what’s going wrong. We also touch on the likelihood that demand for training data will lead to copyright liability, or that hallucinations will lead to defamation liability. Color me skeptical about both legal risks.
Paul comments on two U.S. quasi-agreements, with the UK and the EU, on AI cooperation.
Adam breaks down the FCC’s burst of initiatives, which are a belated celebration of the long-awaited arrival of a Democratic majority on the Commission -- for the first time since President Biden’s inauguration. The commission is now ready to move out on net neutrality, on regulating cars as oddly shaped phones with benefits, and on SS7 security.
Adam covers the security researcher who responded to a North Korean hacking attack by taking down that country's internet, and he acknowledges that maybe my advocacy of hacking back wasn't quite as crazy as he thought when he was in government.
In Cyberlaw Podcast alumni news, I note that Paul Rosenzweig has been appointed an advocate at the Data Protection Review Court, where he’ll be expected to channel Max Schrems.
And Paul closes with a tribute to what has made the last 500 episodes so much fun for me, our guests, and our audience. Thanks to you all for the gift of your time and your tolerance!
Direct Download is here.
You can subscribe to The Cyberlaw Podcast using iTunes, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 12:27 PM | Permalink | Comments (0)
This episode is notable not just for cyberlaw commentary, but for its imminent disappearance from these pages and from podcast playlists everywhere. Having promised to take stock of the podcast when it reached episode 500, I've decided that I, the podcast, and the listeners all deserve a break.
So, I'll be taking one after the next episode. No final decisions have been made, so don't delete your subscription, but don't expect a new episode any time soon. It's been a great run, from the dawn of the podcast age in 2014, through the ad-fueled podcast boom, which I manfully resisted, to the podcast market correction that's still under way. It was a pleasure to engage with listeners from all over the world. (Yes, even the EU!)
As they say, in the podcast age, everyone is famous for fifteen people. That's certainly been true for me, and I'll always be grateful for listeners' support – not to mention for all the great contributors who've joined the podcast over the years.
Turning back to cyberlaw, there are a surprising number of people arguing that there's no reason to worry about existential and catastrophic risks from proliferating or runaway AI. Some of that is people seeking clever takes; a lot of it is ideological, driven by fear that talking about the end of the world will distract attention from the dire danger of face recognition. One useful antidote to this view is the Gladstone Report, written for the State Department's export control agency. David Kris gives an overview of the report for this episode of the Cyberlaw Podcast. The report explains the dynamic, and some of the evidence, behind all the doom-saying, a discussion that is more persuasive than the report's prescriptions for avoiding disaster through regulation.
Speaking of the moral panic over face recognition, Paul Stephan and I unpack a New York Times piece saying that Israel is using face recognition in its Gaza conflict. Actually, we don't so much unpack it as turn it over and shake it, only to discover it's largely empty. Apparently, the editors of the NYT thought that tying face recognition to Israel and Gaza was all their readers needed to understand that the technology is evil, evil, evil.
More interesting is this story arguing that the National Security Agency, traditionally at the forefront of computers and national security, may have to sit out the AI revolution. The reason, David tells us, is that NSA's access to mass quantities of data for training is complicated by rules and traditions against intelligence agencies accessing data about Americans. And there are few training databases not contaminated with data about and by Americans.
While we're feeling sorry for the intelligence community's struggles with new technology, Paul notes that Yahoo News has assembled a long analysis of all the ways that personalized technology is making undercover operations impossible for CIA and FBI alike.
Michael Ellis weighs in with a review of a report by the Foundation for Defense of Democracies on the need for a U.S. Cyber Force to man, train, and equip warfighting nerds for Cyber Command. It's a bit of an inside baseball solution, heavy on organizational boxology, but we're both persuaded that the current system for attracting and retaining cyberwarriors is not working. As "Yes, Minister" would tell us, we must do something, and this is something.
In contrast, it's fair to say that the latest Senate Judiciary proposal for a "compromise" 702 renewal bill is nothing, or at least nothing much – a largely phony compromise that substitutes ideological baggage for real-world solutions. David and I are unimpressed -- and surprised at how muted the Biden administration has been in trying to wrangle the Democratic Senate toward a workable bill.
Paul and Michael review the latest trouble for TikTok – a likely FTC lawsuit over privacy. And Michael and I puzzle over the stories claiming that Meta may have "wiretapped" Snapchat analytic data. The stories come from trial lawyers suing Meta, and they raise a lot of unanswered questions, such as whether users consented to the collection of the data. In the end, we can't help thinking that if Meta had 41 of its lawyers reviewing the project, they probably found a way to avoid wiretapping liability.
The most intriguing story of the week is the complex and surprising three- or four-cornered fight in northern Myanmar over hundreds of thousands of women trapped in call centers to run romance and pig-butchering scams. Angry that many of the women and many of the victims are Chinese, China persuaded a warlord to attack the call centers and free many of the women, deeply embarrassing the current Myanmar ruling junta and its warlord allies, who'd been running the scams. And we thought our southern border was a mess!
And in quick hits:
Direct Download: https://traffic.libsyn.com/steptoecyber/The_Cyberlaw_Podcast_499_.mp3
You can subscribe to The Cyberlaw Podcast using iTunes, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 09:22 AM | Permalink | Comments (0)
The Biden administration has been aggressively pursuing antitrust cases against Silicon Valley giants like Amazon, Google, and Facebook. This week it was Apple's turn. The Justice Department (joined by several state AGs) filed a gracefully written complaint accusing Apple of improperly monopolizing the market for "performance smartphones." This questionable market definition will be a weakness for the government throughout the case, but the complaint does a good job of identifying ways in which Apple has built a moat around its business without an obvious benefit for its customers. The complaint focuses on Apple's discouraging of multipurpose apps and cloud streaming games, its lack of message interoperability, the tying of Apple watches to the iPhone to make switching to Android expensive, and its insistence on restricting digital wallets on its platform. This lawsuit will continue well into the next presidential administration, so much depends on the outcome of the election this fall.
Volt Typhoon is still in the news, Andrew Adams tells us, as the government continues to sound the alarm about Chinese intent to ravage American critical infrastructure in the event of a conflict. Water systems are getting most of the attention this week. I can't help wondering how we expect the understaffed and under-resourced water and sewage companies in this country to defeat sophisticated state-sponsored attackers. This leads Cristin and me to a discussion of how the SEC's pursuit of CISO Tim Brown and its demands for more security disclosures will improve the country's cybersecurity. Short answer: It won't.
Cristin covers the legislative effort to force a divestiture of TikTok. The bill has gone to the Senate, where it is moving slowly, if at all. Speaking as a parent of teenagers and voters, Cristin is not surprised. Meanwhile, the House has sent a second bill to the Senate by a unanimous vote. This one would block data brokers from selling Americans' data to foreign adversaries. Andrew notes that the House bill covers only data brokers; other data moguls, like Google and Apple, would face a similar restriction under a new executive order, so the government will have multiple opportunities over the next few months to deal with Chinese access to American personal data.
In the wake of the Murthy argument looking at the first amendment and administration efforts to increase social media censorship of mostly right-wing posts, Andrew reports that the FBI has resumed its outreach to social media companies, at least where it identifies foreign influence campaigns. Meanwhile, the FDA, which piled on to criticize ivermectin advocates, has withdrawn its dubious and condescending tweets.
Finally, Cristin reports on the spyware agreement sponsored by the United States. It has collected several new supporters. Whether this will reduce spyware installations or simply change the countries that supply the spyware remains to be seen.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 08:10 AM | Permalink | Comments (0)
The Supreme Court is getting a heavy serving of first amendment social media cases. Gus Hurwitz covers two that made the news last week. In the first, Justice Barrett spoke for a unanimous court in spelling out the very factbound rules that determine when a public official may use a platform’s tools to suppress critics posting on his or her social media page. Gus and I agree that this might mean a lot of litigation, unless public officials wise up and simply follow the Court’s broad hint: If you don’t want your page to be treated as official, simply say up top that it isn’t official.
The second social media case making news was being argued as we recorded. Murthy v. Missouri is an appeal of a broad injunction against the US government pressuring social media companies to take down posts the government disagrees with. The Court was plainly struggling with a host of justiciability issues and a factual record that the government challenged vigorously. If the Court reaches the merits, it will likely address the question of when encouraging the suppression of particular speech slides into coerced censorship.
Gus and Jeffrey Atik review the week’s biggest news – the House has passed a bill to force the divestment of TikTok, despite the outcry of millions of influencers. Whether the Senate will be quick to follow suit is deeply uncertain.
Melanie Teplinsky covers the news that data about Americans’ driving habits is increasingly being sent to insurance companies to help them adjust their rates.
Melanie also describes the FCC’s new Cyber Trust Mark for IOT devices. Like the Commission, our commentators think this is a good idea.
Gus takes us back to more contested territory: What should be done about the use of technology to generate fake pictures, especially nude fake pictures? We also touch on a UK debate about a snippet of audio that many believe is a fake meant to embarrass a British Labour politician.
Gus tells us the latest news from the SVR’s compromise of a Microsoft network. This leads us to a meditation on the unintended consequences of the SEC’s new cyber incident reporting requirements.
Jeffrey explains the bitter conflict over app store sales between Apple and Epic Games.
Melanie outlines a possible solution to the lack of cybersecurity standards (not to mention a lack of cybersecurity) in water systems. It’s interesting but it’s too early to judge its chances of being adopted.
Melanie also tells us why JetBrains and Rapid7 have been fighting over “silent patching.”
Finally, Gus and I dig into Meta’s high-stakes fight with the FTC, and the rough reception it got from a DC district court.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 07:53 AM | Permalink | Comments (0)
This bonus episode of the Cyberlaw Podcast focuses on the national security implications of sensitive personal information. Sales of personal data have been largely unregulated as the growth of adtech has turned personal data into a widely traded commodity. This in turn has produced a variety of policy proposals – comprehensive privacy regulation, a weird proposal from Sen. Wyden (D-OR) to ensure that the US government cannot buy such data while China and Russia can, and most recently an Executive Order to prohibit or restrict commercial transactions that afford China, Russia, and other adversary nations access to Americans' bulk sensitive personal data and government-related data.
To get a deeper understanding of the executive order, and the Justice Department's plans for implementing it, I interview Lee Licata, Deputy Section Chief for National Security Data Risk.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 02:01 PM | Permalink | Comments (0)
We open this episode by exploring the first National Cybersecurity Strategy, issued almost exactly a year ago. Since the only good way to judge a strategy is by its implementation, we pull in Kemba Walden, who served first as Principal Deputy and then as Acting National Cyber Director as the strategy came together. She is generally positive and urges us to wait for the soon-to-be-released posture report from her old office. Kemba, meanwhile, has joined the Paladin Global Institute, which is designed to further her (and Paladin's) interest in aligning private investment and public security.
Turning from the strategic to the tactical, Sultan Meghji and I dig into the ransomware attack on Change Healthcare, and the heavy financial and human costs it imposed.
We also cover the sometimes overlooked response of America's adversaries to U.S. cyber strategies. I note that decoupling goes both ways, as China is slowly but surely extirpating U.S. tech from its infrastructure, and Chinese consumers have joined the campaign, at great cost to Apple. Meanwhile, Russian online disinformation, laughably overrated in 2016, is reported to be more effective in 2024, at least in countries with large Russian minorities.
The latest infrastructure supply chain concern is in U.S. ports, where Chinese-made cranes have achieved deep market penetration, despite suspicious components. Kemba, a veteran of port security debates, chronicles the history of the issue and of the U.S. response.
Brandon Pugh and Sultan remind us that even big companies with valuable secrets can be victimized by employees stealing intellectual property.
Brandon also analyzes the President's State of the Union references to protection of kids online, seen by some as a boost to the Kids Online Safety Act.
We dive deep into recommendations from Bruce Schneier on How Public AI Can Strengthen Democracy – essentially an effort to bring the healthcare "public option" model to the development of AI. Kemba is open to the idea; Sultan questions whether we need it.
Brandon reports on two bills unanimously approved by the House Commerce Committee. The first would force divestment of TikTok; the second would bar the sale of personal data to adversary nations like China and Russia. I can't resist weighing in, even though I'll be doing an entire bonus episode (496) this week on a White House executive order to restrict data transfers to adversaries.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 03:11 PM | Permalink | Comments (0)
The United States is in the process of rolling out a sweeping regulation for personal data transfers. But the rulemaking is getting limited attention, perhaps because it targets transfers to our rivals in the new Cold War – China, Russia, and their allies. Adam Hickey, whose old office is drafting the rules, explains the history of the initiative, which stems from endless CFIUS efforts to impose such controls on a company-by-company basis.
Now, with an executive order as the foundation, DOJ has published an advance notice of proposed rulemaking that promises what could be years of slow-motion regulation. Faced with a similar issue – the national security risk posed by connected vehicles, particularly those sourced in China – the Commerce Department has issued a laconic notice whose telegraphic style contrasts sharply with the highly detailed Justice draft.
I take a stab at the riskiest of ventures – predicting the results in two Supreme Court cases about social media regulations adopted by Florida and Texas. Four hours of strong appellate advocacy and a highly engaged Court make predictions risky, but here goes. I divide the Court into two camps – on one hand the Justices (Thomas, Alito, probably Gorsuch) who think that the censorship we should worry about comes from powerful speech-monopolizing platforms and on the other hand the Justices (Kavanaugh, the Chief) who see the cases through a lens that values corporate free speech. Many of the remainder (Kagan, Sotomayor, Jackson) see social media content moderation as understandable, consistent with their own biases, and justified, but they're uneasy about the power of large platforms and reluctant to grant a sweeping immunity from regulation to those companies. To my mind, this foretells a decision striking down the laws insofar as they restrict content moderation, but one that won't resolve all the issues raised by the two laws and won't overturn them entirely on the current record. There are too many provisions in those laws that some of the Justices considered reasonable for NetChoice to win a sweeping victory. So I look for an opinion that rejects regulation aimed at "private censorship" but expressly leaves open or even approves other, narrower measures disciplining platform power, leaving the lower courts to deal with them on remand.
Kurt Sanger and I dig into the SEC's amended complaint against Tim Brown and SolarWinds, alleging material misrepresentation with respect to company cybersecurity. The amended complaint tries to bolster the case against the company and its CISO, but at the end of the day it's less than fully persuasive. SolarWinds didn't have the best security, and it was slow to recognize how much harm its compromised software was causing its customers. But the SEC's case for disclosure feels like 20-20 hindsight. Unfortunately, CISOs will now have to spend the next five years trying to guess which intrusions will look material in hindsight.
I cover the National Institute of Standards and Technology's (NIST) release of version 2.0 of the Cybersecurity Framework, particularly its new governance and supply chain features.
Adam reviews the latest update on section 702 of FISA, which likely means the program will stumble zombie-style into 2025, thanks to a certification expected in April. We agree that Silicon Valley is likely to seize on the opportunity to engage in virtue-signaling litigation over the final certification.
Kurt explains the remarkable power of adtech data for intelligence purposes, and Senator Ron Wyden's (D-OR) effort to make sure such data is denied to U.S. agencies but not to China, Russia, and the rest of the world. He also pulls Adam and me into the debate over whether we need a federal backup for cyber insurance. Bruce Schneier thinks we do, but none of us is persuaded.
Finally, Adam and I consider the divide between CISA and GOP election officials. We agree that it has its roots in CISA's imprudent flirtation with election security mission creep, as it moved from assessing the cybersecurity of voting machines to trying to combat "malinformation," otherwise known as true facts that the administration found inconvenient. We wish CISA well in the vital job of protecting voting machines and processes and hope that it will manage in this cycle to stick to its cyber knitting.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 05:29 PM | Permalink | Comments (0)
This episode of the Cyberlaw Podcast kicks off with the Babylon Bee’s take on Google Gemini’s woke determination to inject a phony diversity into images of historical characters: "After decades of nothing but white Nazis, I can finally see a strong, confident black female wearing a swastika. Thanks, Google!" Jim Dempsey and Mark MacCarthy join the discussion because Gemini’s preposterous image diversity quotas deserve more than snark. In fact, I argue, they were not errors; they were entirely deliberate efforts by Google to give its users not what they want but what Google in its wisdom thinks they should want. That such bizarre results were achieved by Google’s sneakily editing user prompts to ask for, say, “indigenous” founding fathers simply shows that Google has found a unique combination of hubris and incompetence. More broadly, Mark and Jim suggest, the collapse of Google’s effort to control its users raises this question: Can we trust AI developers when they say they have installed guardrails to make their systems safe?
The same might be asked of the latest in what seems an endless stream of experts demanding that AI models defeat users by preventing them from creating “harmful” deepfake images. Later, Mark points out that most of Silicon Valley recently signed on to promises to combat election-related deepfakes. In the 2010s, we all learned to hate the tech companies; in the 2020s, it seems, they've learned to hate us.
Speaking of hubris, Michael Ellis covers the State Department’s stonewalling of a House committee trying to find out how generously the Department funded a group of ideologues trying to cut off advertising revenues for right-of-center news and comment sites. We take this story a little personally, having contributed op-eds to several of the blacklisted sites.
Michael explains just how much fun Western governments had taking down the infamous Lockbit ransomware service. I credit the Brits for the humor displayed as governments imitated Lockbit’s graphics, gimmicks, and attitude. There were arrests, cryptocurrency seizures, indictments, and more. It was fun while it lasted. But a week later, Lockbit was claiming that its infrastructure was slowly coming back online.
Jim unpacks the FTC’s case against Avast for collecting the browsing habits of its antivirus customers. He sees this as another battle in the FTC’s war against corporate claims that privacy can be preserved by “de-identifying” personal data.
Mark notes the EU’s latest investigation into TikTok. And Michael explains how the Computer Fraud and Abuse Act relates to Tucker Carlson’s ouster from the Fox network.
Mark and I take a moment to promote next week’s review of the Supreme Court oral argument over Texas and Florida social media laws. The argument was happening while we were recording, but it was already clear that the outcome will be a mixed bag. Tune in next week for more.
Jim explains why the administration has produced an executive order about cybersecurity in America’s ports, and the legal steps needed to bolster port security.
Finally, in quick hits:
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 05:28 PM | Permalink | Comments (0)
We begin this episode with Paul Rosenzweig describing major progress in teaching AI models to do text-to-speech conversions. Amazon flagged its new model as having "emergent" capabilities in handling what had been serious problems – things like speaking with emotion, or conveying foreign phrases. The key is the size of the training set, but Amazon was able to spot the point at which more data led to unexpected skills. This leads Paul and me to speculate that training AI models to perform certain tasks eventually leads the model to learn "generalization" of its skills. If so, the more we train AI on a variety of tasks – chat, text to speech, text to video, and the like – the better AI will get at learning new tasks, as generalization becomes part of its core skill set. We're lawyers holding forth on the frontiers of technology, so take it with a grain of salt.
Cristin Flynn Goodwin and Paul Stephan join Paul Rosenzweig to provide an update on Volt Typhoon, the Chinese APT that is littering Western networks with the equivalent of logical land mines. Actually, it's not so much an update on Volt Typhoon, which seems to be aggressively pursuing its strategy, as on the hyperventilating Western reaction to Volt Typhoon. There's no doubt that China is playing with fire, and that the United States and other cyber powers should be liberally sowing similar weapons in Chinese networks. Unfortunately, for all the heavy breathing, the public measures adopted by the West do not seem likely to defeat or deter China's strategy.
The group is not impressed by the New York Times' claim that China is pursuing a dangerous electoral influence campaign on U.S. social media platforms. The Russians do it better, Paul Stephan says, and even they don't do it well, I argue.
Paul Rosenzweig reviews the House China Committee report alleging a link between U.S. venture capital firms and Chinese human rights abuses. We agree that Silicon Valley VCs have paid too little attention to how their investments could undermine the system on which their billions rest, a state of affairs not likely to last much longer. Meanwhile, Paul Stephan and Cristin bring us up to date on U.S. efforts to disrupt Chinese and Russian hacking operations.
We will be eagerly waiting for resolution of the European fight over Facebook's subscription fee and the implementation by websites of "Pay or Consent" privacy terms. I predict that Eurocrats' hypocrisy will be tested by the effort to reconcile rulings for elite European media sites, which have already embraced "Pay or Consent," with a nearly foregone ruling against Facebook. Paul Rosenzweig is confident that European hypocrisy is up to the task.
Cristin and I explore the latest White House enthusiasm for software security liability. Paul Stephan explains the flap over a UN cybercrime treaty, which is and should be stalled in Turtle Bay for the next decade or more.
Cristin also covers a detailed new Google TAG report on commercial spyware.
And in quick hits,
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 06:31 PM | Permalink | Comments (0)
The latest episode of The Cyberlaw Podcast features guest host Brian Fleming, while Stewart Baker is participating in the Canadian Ski Marathon. Brian is joined for the news roundup by Jane Bambauer, Gus Hurwitz, and Nate Jones.
They begin by discussing the latest U.S. government efforts to protect sensitive personal data, including the FTC's lawsuit against data broker Kochava and the forthcoming executive order restricting certain bulk sensitive data flows to China and other countries of concern.
Nate and Brian then discuss whether Congress has a realistic path to end the Section 702 reauthorization standoff before the April expiration and debate what to make of a recent multilateral meeting in London to discuss curbing spyware abuses.
Gus and Jane then talk about the big news for cord-cutting sports fans, as well as Amazon's ad data deal with Reach, in an effort to understand some broader difficulties facing internet-based ad and subscription revenue models.
Nate considers the implications of Ukraine's "defend forward" cyber strategy in its war against Russia. Jane next tackles a trio of stories detailing challenges, of the policy and economic varieties, facing Meta on the content moderation front, as well as an emerging problem policing sexual assaults in the Metaverse.
Bringing it back to data, Gus wraps the news roundup by highlighting a novel FTC case brought against Blackbaud stemming from its data retention practices.
In this week's quick hits, Gus and Jane reflect on the FCC's ban on AI-generated voice cloning in robocalls, Nate touches on an alert from CISA and FBI on the threat presented by Chinese hackers to critical infrastructure, Gus comments on South Korea's pause on implementation of its anti-monopoly platform act and the apparent futility of nudges (with respect to climate change attitudes or otherwise), and finally Brian closes with a few words on possible broad U.S. import restrictions on Chinese EVs and how even the abundance of mediocre AI-related ads couldn't ruin Taylor Swift's Super Bowl.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 02:51 PM | Permalink | Comments (0)
It was a week of serious cybersecurity incidents and unimpressive responses. As Melanie Teplinsky reminds us, the U.S. government has been agitated for months about China's apparent strategic decision to hold U.S. infrastructure hostage to cyberattack in a crisis. Now the government has struck back at Volt Typhoon, the Chinese threat actor pursuing that strategy. It claimed recently to have disrupted a Volt Typhoon botnet by taking over a batch of compromised routers. Andrew Adams explains how the court-ordered takeover was managed. It was a lot of work, and there is reason to doubt the effectiveness of the effort. The compromised routers can be re-compromised if they are turned off and on again. And the only routers the seizure actually cleaned up are those inside the U.S., leaving open the possibility of DDoS attacks from compromised devices abroad. Finally, DDoS attacks on our critical infrastructure shouldn't exactly be an existential threat. All things considered, I argue that there's a serious disconnect between the government's hair-on-fire talk about Volt Typhoon and its business-as-usual response.
Speaking of cyberattacks we could be overestimating, Taiwan just had an election that China cared a lot about. According to one detailed report, the Chinese threw a lot of cyber at Taiwanese voters -- and failed to make much of an impression. Richard Stiennon and I mix it up over whether the Chinese will do better trying to influence the 2024 outcome here.
While we're covering humdrum responses to cyberattacks, Melanie explains U.S. sanctions on Iranian military hackers for their hack of U.S. water systems that were more or less fish in a barrel.
For comic relief, Richard lays out the latest drama around the EU AI Act, now being amended in a series of backroom deals and off-the-books promises. I predict that the effort to pile pet-rock provisions on top of anti-American protectionism will end, not in a GDPR-style triumph for Europe but in a continent-wide AI desert. The EU market is now small enough for AI companies to bypass Europe entirely at the first sign of toxic regulation.
The U.S. is not the only player whose response to cyberintrusions is looking inadequate this week. Richard explains Microsoft's recent disclosure of a Midnight Blizzard attack on the company and a number of its customers. The company's obscure explanation of how its technology contributed to the attack and, worse, its effort to turn the disaster into an upsell opportunity earned Microsoft a patented Alex Stamos spanking.
Andrew explains the recent Justice Department charges against three people who facilitated the big $400m FTX hack that coincided with the exchange's collapse. Does that mean the hack wasn't an inside job? Not so fast, Andrew cautions. The government hasn't recovered the $400m, and it isn't claiming the three SIM-swappers it has charged are the only conspirators.
Melanie explains why we've seen a sudden surge in state privacy legislation. It turns out that industry has stopped fighting the idea of state privacy laws and is now selling a light-touch model law that omits things like a private right of action.
I give a lick and a promise to a "privacy" regulation now being pursued by CFPB for consumer financial information. I put privacy in quotes, because it's really an effort to create a whole new market for personal data, one that will assure better data management while undermining the competitive advantage of big data holdings. Bruce Schneier likes the idea. So do I, in principle, but it means a massive re-engineering of a big industry by technocrats who may not be quite as smart as they think they are. Bruce, if you want to come on the podcast to explain and debate the whole thing, send me email!
Spies are notoriously nasty, and often petty, but one of the nastiest and pettiest, Joshua Schulte, was sentenced to 40 years in prison last week. Andrew has the details.
There may be some good news on the ransomware front. More victims are refusing to pay. Melanie, Richard, and I explore ways to keep that trend going. I urge consideration of a tax on ransom payments.
I also flag a few new tech regulatory measures likely to come down the pike in the next few months. The FCC will likely use the TCPA to declare the use of AI-generated voices in robocalls illegal. And Amazon is likely to find itself held liable for the safety of products sold by third parties on the Amazon platform.
Finally, a few quick hits:
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 11:29 AM | Permalink | Comments (0)
This was a big week for AI-generated deep fakes. Sultan Meghji, who's got a new AI startup of his own, walked us through four stories that illustrate how AI will lead to more confusion about what's real and what's not. First, a fake Biden robocall urged people not to vote in the New Hampshire primary. Second, a bot purporting to offer Dean Phillips's views on the issues was penalized by OpenAI because it didn't have Phillips's consent. Third, fake nudes of Taylor Swift led to a ban on Twitter searches for her image. And, finally, podcasters used AI to resurrect George Carlin and got sued by his family for violating copyrightish law. The moral panic over AI fakery meant that all of these stories were too long on "end of the world" and too short on "we'll live through this."
Regulators of AI are not doing a much better job of maintaining perspective. Mark MacCarthy reports that New York City's AI hiring law, which has punitive disparate-impact disclosure requirements for automated hiring decision engines, seems to have persuaded NYC employers, conveniently, that none of them are using automated hiring decision engines, so they don't have to do any disclosures. Not to be outdone, the European Court of Justice has decided that pretty much any tool that aids in decisions is an automated decision-making technology subject to special (and mostly nonsensical) data protection rules.
Is AI regulation beginning to suffer from backlash? Could be. Sultan and I report on a very plausible Republican plan to attack the Biden AI executive order on the ground that its main enforcement mechanism, the Defense Production Act, simply doesn't authorize the measures the order calls for.
In other Big Tech regulation, Maury Shenk explains the EU's application of the Digital Markets Act to tech companies like Apple and Google. Apple isn't used to being treated like just another tech company, and its contemptuous response to the EU's rules for its app market could easily spur regulatory sanctions. Looking at Apple's proposed compliance with the California court ruling in the Epic case and the European Digital Markets Act, Mark says it's time to think about price-regulating mobile app stores.
Even handing out big checks to technology companies turns out to be harder than it first sounds. Sultan and I talk about the slow pace of payments to chip makers, and the political imperative to get the deals done before November (and probably before March).
Senator Ron Wyden (D-OR) is still flogging NSA and the danger of government access to personal data. This time, he's on about NSA's purchases of commercial data. So far, so predictable. But he's also misrepresenting the facts by claiming flatly that NSA buys domestic metadata, ignoring NSA's clear statement that the metadata it buys is "domestic" only in the sense that it covers communications with one end inside the country. Communications with foreign countries that flow into and out of the U.S. have long been considered appropriate foreign intelligence targets, as witness the current debate over FISA section 702.
Maury and I review Jim Dempsey's effort to construct a liability regime for insecure software. His proposal looks reasonable, but Maury reminds me that he and I produced something similar twenty years ago that is still not even close to adoption anywhere in the U.S.
I can't help but rant about Amazon's arrogant, virtue-signaling, and customer-hating decision to drop a feature that makes it easy for Ring doorbell users to share their videos with the police. Whose data is it, anyway, Amazon? Sadly, I'm afraid we know the answer.
It looks as though there's only one place where hasty, ill-conceived tech regulation is being rolled back. China. Maury reports on China's decision to roll back video game regulations, to fire its video game regulator, and to start approving new games at a rapid clip -- though only after a regulatory crackdown had knocked more than $60 billion off the value of its industry.
We close the news roundup with a few quick hits:
Finally, as a listener bonus, we hear from Rob Silvers, Under Secretary for Policy at the Department of Homeland Security and Chair of the Cyber Safety Review Board (CSRB). Under Rob's leadership, DHS has proposed legislation to give the CSRB a legislative foundation. The Senate homeland security committee recently held a hearing about that idea. Rob wasn't invited, so we asked him to come on the podcast to respond to issues that the hearing raised – conflicts of interest, subpoena power, choosing the incidents to investigate, and more.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 12:50 PM | Permalink | Comments (0)
Okay, maybe past Cybertoonz have been a little hard on the FTC, hinting that it has paid no attention to national security concerns around personal data. In light of the Commission's recent ruling in the X-Mode case, it's become clear that the FTC is focusing on how personal data is used to protect national security. So to give the Commission equal time on the issue, we've turned Cybertoonz over to Chair Lina Khan to express its views.
Posted at 04:33 PM | Permalink | Comments (0)
The Supreme Court heard argument last week in two cases seeking to overturn the Chevron doctrine, which requires courts to defer to administrative agencies in interpreting the statutes that the agencies administer. The cases have nothing to do with cybersecurity, but Adam Hickey thinks they're almost certain to have a big impact on cybersecurity policy. That's because, based on the argument, Chevron is going to take a beating from the Court, if it survives at all. With Chevron weakened, it will be much tougher to repurpose existing law to deal with new regulatory problems. Given how little serious cybersecurity legislation has been passed in recent years, any new regulation is bound to require some stretching of existing law – and thus to be easier to challenge.
Case in point: Even without a new look at Chevron, the EPA was blocked in court when it tried to stretch its authorities to justify cybersecurity rules for water companies. Now, Kurt Sanger tells us, EPA, FBI, and CISA have combined to release cybersecurity guidance for the water sector. The guidance may be all that can be done under current law, but it's pretty generic, and there's no reason to think that underfunded water companies will actually take it to heart. Given Iran's demonstrated interest in causing aggravation and maybe worse in that sector, Congress is almost certainly going to feel pressure to act on the problem.
CISA's emergency cybersecurity directives to federal agencies are coming fast and furious. That's a bad sign, since they are a library of flaws that are already being exploited. As Adam points out, they also reveal just how quickly patches are being turned into attacks and deployed. I wonder how sustainable the current patch system will prove to be. (In fact, it's already unsustainable; we just don't have anything to replace it.)
Here's some good news. The Russians have been surprisingly bad at turning cybersecurity flaws into serious infrastructure problems even for a wartime enemy like Ukraine. Additional information about Russia's attack on Ukraine's largest telecom provider suggests that the cost to get infrastructure back was lower than expected and mostly consisted of spending to win the victim telco's customers back.
Companies are starting to report breaches under the new, tougher SEC rule, Adam tells us, and Microsoft is out of the gate early. Russian hackers stole the company's corporate emails, Microsoft says, but it insists the breach wasn't material. I predict we'll see a lot of such hair splitting as companies adjust to the rule. If so, Adam predicts, we're going to be drowning in 8-Ks.
Kurt notes recent FBI and CISA warnings about the national security threat posed by Chinese drones. The hard question is what's new in those warnings. A question about whether antitrust authorities might want to investigate DJI's enormous market share leads to another about the FTC's utter lack of interest in getting guidance from the executive branch when its jurisdiction overlaps with a national security concern. Case in point: After listing a boatload of "sensitive location data" that should not be sold, the FTC had nothing to say about the personal data of people serving on U.S. military bases. Nothing "sensitive" there, the FTC seems to think, at least not compared to homeless shelters and migrant camps. I'm gobsmacked, which naturally leads to a new Cybertoon.
Michael Ellis takes us through Apple's embarrassing failure to protect users of its AirDrop feature. It comes on top of Apple's decision to live down to the worst Big Tech caricature in handling the complaints of app developers about its app store. Michael explains how Apple managed to beat 9 out of 10 claims in Epic's lawsuit and still end up looking like the sorest of losers.
Adam is encouraged by a sign of maturity on the part of OpenAI, which has trimmed its overbroad rules on not assisting military projects.
Michael takes us inside a new US surveillance court just for Europeans, and we end up worrying about the risk that the Obama administration will come back to impose new law on the Biden team.
Adam explains yet another European Court of Justice decision on GDPR. This time it's a European government in the dock. The result is the same, though: national security is pushed into a corner, and the data protection bureaucracy takes center stage.
Finally, we end with a sad disclosure. While bad cyber news will continue, cyber-enabled day drinking will not. Uber has announced the end of Drizly, its liquor delivery app.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 10:20 AM | Permalink | Comments (0)
The FTC has begun a new campaign against data brokers who are collecting and selling "sensitive" location information. Cybertoonz asks the obvious question.
Posted at 08:47 PM | Permalink | Comments (0)
Matthew Heiman kicks off this episode of the podcast with a breakdown of Russia's attack on Ukraine's largest mobile operator. The attack was strikingly effective in destroying much of Kyivstar's infrastructure, and strikingly ineffective in achieving any meaningful Russian objectives, since service was quickly restored. Perhaps to even up the score, Ukraine supporters launched an even less effective cyberattack on an Iranian medical software company, presumably as retribution for Iran's supplying drones to Russia.
Hacking as an act of war may turn out to be more important in court than on the battlefield, at least when the victims file insurance claims, Jim Dempsey tells us. Merck's effort to get insurance coverage for its NotPetya losses despite an act of war exclusion has been settled. Which means that, if you want to know what cyberwar means for your insurance coverage, you need to review your current policy, which has almost certainly changed since the Merck case began.
Moving to the world of cybersecurity regulation, Cristin Flynn Goodwin recommends digging into the output of the reigning American champion for prescriptive cybersecurity rules, New York's Department of Financial Services, which recently sanctioned a cryptocurrency firm for a host of violations, including insufficient cybersecurity.
In Washington, meanwhile, the administration is promising to impose new cybersecurity requirements on hospitals, many of which have been crippled by ransomware attacks. The hospitals aren't taking it well, but Jim thinks the legal basis for regulation can be found in the Golden Rule: The feds are supplying the gold, so they will make the rules.
It's "dogpile on the SEC" week, and no one is feeling sorry for the agency. Cristin reminds us that the SEC's X/Twitter account was hacked and a market-moving tweet released last week, apparently because the SEC failed to abide by its own regulatory guidance about securing accounts with multi-factor authentication. That's also the subject of a recent Cybertoon, which asks whether the SEC should pay Elon Musk a whistleblower award for outing the agency's security failings.
The FTC's war on location data brokers continues to heat up. Jim reports on the FTC's settlement with one geolocation broker and its sweeping complaint against another. We also return to the FTC's settlement with Rite Aid over use of facial recognition, and its transformation of the settlement into a caution for users and makers of artificial intelligence products.
Speaking of AI, Cristin and I debate what should be done about the use of AI to create fake nudes of real people and other harassing tactics.
I argue that AI has bigger problems to deal with, citing Anthropic's recent report on just how hard it is to counteract malicious AI training.
Matthew and I marvel over the way that a longstanding insurgency in northern Myanmar has turned into a cybersecurity problem.
Finally, I pass on some listener feedback about an earlier episode that asked whether Apple knew about the highly sophisticated Triangulation exploit used against Kaspersky and the Russian government. It turns out that plenty of security pros find it plausible that Apple would not have been aware of the attack.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 10:43 AM | Permalink | Comments (0)
Returning from winter break, this episode of the Cyberlaw Podcast covers a lot of ground. The story I think we’ll hear the most about in 2024 is the remarkable exploit used to compromise several generations of Apple iPhone. The question we’ll be asking is simple: How could an attack like this be introduced without Apple’s knowledge and support? We don’t get to this question until near the end of the episode, and I don’t claim great expertise in exploit design, but it’s very hard to see how such an elaborate compromise could be slipped past Apple’s security team. The second question is which government created the exploit. It might be a scandal if it were done by the U.S. But it would be far more of a scandal if done by any other nation.
Jeffery Atik and I lead off the episode by covering recent AI legal developments that simply underscore the obvious: AI engines can’t get patents as “inventors.” What's more interesting is the possibility that they’ll make a whole lot of technology “obvious” and thus unpatentable. Speaking of obvious, claiming that companies violate copyright when they train AI models on New York Times content requires a combination of arrogance and cluelessness that can only be found at, well, the New York Times.
Paul Stephan joins us to note that the National Institute of Standards and Technology (NIST) has come up with some good questions about standards for AI safety.
Jeffery notes that U.S. lawmakers have finally woken up to the EU’s misuse of tech regulation to protect the continent’s failing tech sector. Even the continent’s tech sector seems unhappy with the EU’s AI Act, which was rushed to market in order to beat the competition and is therefore flawed and likely to yield unintended and disastrous consequences, a problem that inspires this week’s Cybertoon.
Paul covers a lawsuit blaming AI for the wrongful denial of medical insurance claims. As he points out, insurers have been able to wrongfully deny claims for decades without needing AI. Justin Sherman and I dig deep into a New York Times article claiming to have found a privacy problem in AI. We conclude that AI may have a privacy problem, but extracting a few email addresses from ChatGPT doesn’t prove the case.
Finally, Jeffery notes an SEC “sweep” examining the industry’s AI use.
Paul explains the competition law issues raised by app stores – and the inconsistent outcome of app store litigation against Apple and Google. Apple's app store skated free in a case tried before a judge, but Google lost before a jury and has now entered into an expensive settlement with other app makers. Yet it’s hard to say that Google’s handling of its app store monopoly is more egregiously anticompetitive than Apple’s.
We do our own research in real time to address an FTC complaint against Rite Aid for using facial recognition to identify repeat shoplifters. The FTC has clearly adopted Paul’s dictum, “The best time to kick someone is when they’re down.” And its complaint shows a lack of care consistent with that posture. I criticize the FTC for claiming without citation that Rite Aid ignored racial bias in its facial recognition software. Justin and I dig into the bias data; in my view, if FTC documents could be reviewed for unfair and deceptive marketing, this filing would lead to sanctions.
The FTC fares a little better in our review of its effort to toughen the internet rules on child privacy, though Paul isn’t on board with the whole package.
We move from government regulation of Silicon Valley to Silicon Valley regulation of government. Apple has decided that it will now require a judicial order before giving governments access to customers’ “push notifications.” And, giving the back of its hand to crime victims, Google decides to make geofence warrants impossible by blinding itself to the necessary location data. Finally, Apple decides to regulate India’s hacking of opposition politicians and runs into a Bharatiya Janata Party (BJP) buzzsaw.
Paul and Jeffery decode the EU’s decision to open a DSA content moderation investigation into X. We also dig into the welcome failure of X's lawsuit to block California’s content moderation law.
Justin takes us through the latest developments in Cold War 2.0. China is hacking our ports and utilities with intent to disrupt (as opposed to spy on) them. And the U.S. is discovering that derisking our semiconductor supply chain is going to take hard, grinding work. Justin looks at a recent report presenting actual evidence on the question of TikTok’s standards for boosting content of interest to the Chinese government.
And in quick takes,
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 10:36 AM | Permalink | Comments (0)
As covered in this week's Cyberlaw Podcast, the AI Act is getting some poor reviews -- from the U.S. Congress as well as Europe's tech sector. How could something like this happen in the home of the vaunted "Brussels Effect"? Fear not. Cybertoonz has the explanation.
Posted at 08:42 AM | Permalink | Comments (0)
This is 2023's last and probably longest episode. To lead off, Megan Stifel takes us through a batch of stories about ways that AI, and especially AI "alignment" efforts, manage to look remarkably fallible. Anthropic has released a paper showing that race, gender, and age discrimination by AI models is real but could be dramatically reduced simply by instructing the model to "really, really, really" avoid such discrimination. (The TechCrunch headline writers had fun snarking on the idea that "racist" AI could be cured by asking nicely, but in fact the discrimination identified by Anthropic was severe bias against older white men, and so was the residual bias that asking nicely didn't eliminate.) The bottom line from Anthropic seems to be, "Our technology is a really cool toy, but it can't be used for anything that matters." In keeping with that theme, Google's highly touted OpenAI competitor Gemini was released to mixed reviews; the model couldn't correctly identify recent Oscar winners or a French word with six letters (it offered "amour"). There was good news for people who hate AI's ham-handed political correctness; it turns out you can ask another AI model how to jailbreak your model, a request that can make the task go 25 times faster.
This could be the week that determines the fate of FISA section 702, David Kris reported. When we recorded, it looked as though two bills would go to the House floor, and only one would survive. (Since then, that plan has been dropped, at least for now, in favor of a short-term extension of 702 into April.) The two bills reflect a split between the two committees overseeing the program. Judiciary's bill is a grudging renewal of 702 for a mere three years, full of procedures designed to cripple the program. The intelligence committee's bill also harries the FBI for its past failures but preserves the core of 702.
Gus Hurwitz looks at the FTC's last-ditch appeal to stop the Microsoft-Activision merger. The best case for the Commission, he suspects, is that the appeal will be rejected without actually repudiating the pet theories of the FTC's hipster antitrust lawyers.
Megan and I examine the latest HHS proposal to impose new cybersecurity requirements on hospitals. David, meanwhile, looks for possible motivations behind the FBI's procedures for companies who want help in delaying SEC cyber incident disclosures. Then Megan and I consider the tough new UK rules for establishing the age of online porn consumers. I argue that, if successful, they'll undermine Pornhub's litigation campaign against American states trying to regulate children's access to porn sites.
The race to 5G is over, Gus notes, and it looks like even the winners lost. Faced with the threat of Chinese 5G domination and an industry sure that 5G was the key to the future, many companies and countries devoted massive investments to the technology, but it's now widely deployed and no one sees much benefit. There is more than one lesson here for industrial policy and the unpredictable way technologies disseminate.
23andMe gets some time in the barrel, with Megan and me both dissing its "lawyerly" response to a history of data breaches – namely changing its terms of service to make it harder for customers to sue over data breaches.
Gus reminds us that the Biden FCC, which only gained a working majority in the last month or two, is apparently determined to catch up with the FTC in advancing foolish and doomed regulatory initiatives. This week's example, remarkably, isn't net neutrality. It's worse. The Commission is building a sweeping regulatory structure on an obscure and laconic section of the 2021 infrastructure act that calls for the FCC to "facilitate equal access to broadband internet access service…." If you think we're hyperventilating, read Commissioner Brendan Carr's eloquent takedown of the whole initiative.
Senator Ron Wyden (D-OR) has a bee in his bonnet over government access to smartphone notifications. Megan and I do our best to understand his concern and how seriously to take it.
Wrapping up, Gus offers a quick take on Meta's broadening attack on the constitutionality of the FTC's current structure. David takes satisfaction from the Justice Department's patient and successful pursuit of Russian hacker Vladimir Dunaev for his role in creating TrickBot. Gus notes that South Korea's law imposing internet costs on content providers is no match for the law of supply and demand.
And finally, in quick hits we cover:
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 11:34 AM | Permalink | Comments (0)
In this episode, Paul Stephan lays out the reasoning behind U.S. District Judge Donald W. Molloy's decision enjoining Montana's ban on TikTok. There are some plausible reasons for such an injunction, and the court adopts them. There are also less plausible and redundant grounds for an injunction, and the court adopts those as well. Asked to predict the future course of the litigation, Paul demurs. It will all depend, he thinks, on the Supreme Court's effort to sort out social media and the First Amendment in the upcoming term. In the meantime, watch for bouncing rubble in the District of Montana courthouse. (Grudging credit for the graphics goes to Bing's Image Creator, which refused to accept the prompt until I said the rubble was bouncing because of a gas explosion and not a bomb. Way to discredit trust and safety, Bing!)
Jane Bambauer and Paul also help me make sense of the litigation between Meta and the FTC over children's privacy and the Commission's previous consent decrees. A recent judicial decision has opened the door for the FTC to modify an earlier court-approved order – on the surprising ground that the order was never incorporated into the judicial ruling that approved it. This in turn gave Meta a chance to make an existential constitutional challenge to the FTC's fundamental organization, a challenge that Paul thinks the Supreme Court is likely to take seriously.
Maury Shenk and Paul analyze the "AI security by design" principles drafted by the U.K. and adopted by an ad hoc group of nations that showed a split in the EU's membership and pulled in parts of the Global South. As diplomacy, it was a coup. As security policy, it's mostly unsurprising. I complain that there's little reason for special security rules to protect users of AI, since the threats are largely unformed, though Maury pushes back. What governments really seem to want is not security for users but security from users, a paradigm that diverges from decades of technology policy.
Maury requests listener comments on his recent AI research and examines Meta's divergent view on open source AI technology. He offers his take on why the company's path might be different from Google's or Microsoft's.
Jane and I are in accord in dissing California's aggressive new AI rules, which appear to demand a public notice every time a company uses a spreadsheet containing personal data to make a business decision. I predict that it will be the most toxic fount of unanticipated tech liability since Illinois's Biometric Information Privacy Act.
Maury, Jane, and I explore the surprisingly complicated questions raised by Meta's decision to offer an ad-free service for around $10 a month.
Paul and I explore the decline of global trade interdependence and the rise of a new mercantilism. Two cases in point: the U.S. decision not to trust the Saudis as partners in restricting China's AI ambitions, and China's weirdly self-defeating announcement that it intends to be an unreliable source of graphite exports to the United States in the future.
Jane and I puzzle over a rare and remarkable conservative victory in tech policy: the collapse of Biden administration efforts to warn social media about foreign election meddling.
Finally, in quick hits,
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets
Posted at 12:03 PM | Permalink | Comments (0)