Skating on Stilts -- the award-winning book
Now available in traditional form factor from Amazon and other booksellers.
It's also available in a Kindle edition.
And for you cheapskates, the free Creative Commons download is here.
Posted at 08:50 PM in Random posts | Permalink | Comments (5)
For those who've followed the progress of a dangerous stealth quota provision in Congress, I'm pleased to report that what looked three weeks ago like a retreat on the issue has turned into a full-fledged rout.
A new discussion draft of the widely touted American Privacy Rights Act (APRA) has been released. This bill was hailed as a bipartisan and bicameral compromise with overwhelming support when it first appeared. The original version contained a detailed blueprint for imposing race, gender, and other preferences on algorithms that use personal data. After a long analysis of the risks of such an approach ran here in the Volokh Conspiracy, a second version of the bill was released that dropped most of the detail but still had troubling provisions that could have encouraged similar preferences, as pointed out in a second Volokh Conspiracy post.
Now a third discussion draft has been released, and it drops all of the algorithmic discrimination and civil rights provisions that were driving quotas. It is a complete victory for those of us who objected to the smuggling of race and gender preferences into the digital infrastructure that will govern our economy and society for the next several decades.
The bill will go to markup next week. It remains controversial. A good summary of the issues can be found in this piece by Brandon Pugh and Steven Ward of R Street. There will be some bare-fisted R-on-R fighting over the bill, which is a priority for the retiring chair of the House commerce committee. But at least quotas won't be part of the bargain.
On a personal note, this has been an unusual experience for me. There is no doubt that staff and members of the commerce committee have been paying attention to these posts, and modifying the bill to respond to them. But exactly which staff and which members has never been entirely clear. So I can only lift a virtual glass to the anonymous heroes who performed such effective work in the trenches. And I promise that I'll be glad to buy you an actual beer if I ever learn who you are!
Posted at 10:03 AM | Permalink | Comments (0)
There are new twists in the saga of algorithmic bias and the American Privacy Rights Act, or APRA. That's the privacy bill that would have imposed race and gender quotas on AI algorithms. I covered that effort two weeks ago in a detailed article for the Volokh Conspiracy.
A lot has happened since then. Most importantly, publicity around its quota provisions forced the drafters of APRA into retreat. A new discussion draft was released, and it dropped much of the quota-driving language. Then, a day later, a House commerce subcommittee held a markup on the new draft. It was actually more of a nonmarkup markup; member after member insisted that the new draft needed further changes and then withdrew their amendments pending further discussions. With that ambiguous endorsement, the subcommittee sent APRA to the full committee.
Still, it is good news that APRA now omits the original disparate impact and quota provisions. No explanation was offered for the change, but it seems clear that few in Congress want to be seen forcing quotas into algorithmic decisionmaking.
That said, there's reason to fear that the drafters still hope to sneak algorithmic quotas into most algorithms without having to defend them. The new version of APRA has four provisions on algorithmic discrimination. First, the bill forbids the use of data in a manner that "discriminates in or otherwise makes unavailable the equal enjoyment of goods and services" on the basis of various protected characteristics. Sec. 113(a)(1). That promising start is immediately undercut by the second provision, which allows discrimination in the collection of data either to conduct "self-testing" to prevent or mitigate unlawful discrimination or to expand the pool of applicants or customers. Id. at (a)(2). The third provision requires users to assess the potential of an algorithm "to cause a harm, including harm to an individual or group of individuals on the basis of protected characteristics." Id. at (b)(1)(B)(ix). Finally, in that assessment, users must provide details of the steps they are taking to mitigate such harms "to an individual or group of individuals." Id.
The self-assessment requirement clearly pushes designers of algorithms toward fairness not simply to individuals but to demographic groups. Algorithmic harm must be assessed and mitigated not just on an individual basis but also on a group basis. Judging an individual on his or her group identity sounds a lot like discrimination, but APRA makes sure that such judgments are immune from liability; it defines discrimination to exclude measures taken to expand a customer or applicant pool.
So, despite its cryptic phrasing, APRA can easily be read as requiring that algorithms avoid harming a protected group, an interpretation that leads quickly to quotas as the best way to avoid group harm. Certainly, agency regulators would not have trouble providing guidance that gets to that result. They need only declare that an algorithm causes harm to a "group of individuals" if it does not ensure them a proportionate share in the distribution of jobs, goods, and services. Even a private company that likes quotas because they're a cheap way to avoid accusations of bias could implement them and then invoke the two statutory defenses -- that its self-assessment required an adjustment to achieve group justice, and that the adjustment is immune from discrimination lawsuits because it is designed to expand the pool of beneficiaries.
In short, while not as gobsmackingly coercive as its predecessor, the new APRA is still likely to encourage the tweaking of algorithms to reach proportionate representation, even at the cost of accuracy.
This is a big deal. It goes well beyond quotas in academic admissions and employment. It would build "group fairness" into all kinds of decision algorithms – from bail decisions and health care to Uber trips, face recognition, and more. What's more, because it's not easy to identify how machine learning algorithms achieve their weirdly accurate results, the designers of those algorithms will be tempted to smuggle racial or gender factors into their products without telling the subjects or even the users.
This process is already well under way -- even in healthcare, where compromising the accuracy of an algorithm for the sake of proportionate outcomes can be a matter of life or death. A recent paper on algorithmic bias in health care published by the Harvard School of Public Health recommended that algorithm designers protect "certain groups" by "inserting an artificial standard in the algorithm that overemphasizes these groups and deemphasizes others."
This kind of crude intervention to confer artificial advantages by race and gender is in fact routinely recommended by experts in algorithmic bias. Thus, the McKinsey Global Institute advises designers to impose what it calls "fairness constraints" on their products to force algorithms to achieve proportional outcomes. Among the approaches it finds worthy are "post-processing techniques [that] transform some of the model's predictions after they are made in order to satisfy a fairness constraint." Another recommended approach "imposes fairness constraints on the optimization process itself." In both cases, to be clear, the model is being made less accurate in order to fit the designer's views of social justice. And in each case, the compromise will fly below the radar. The designer's social justice views are hidden by a fundamental characteristic of machine learning; the machine produces the results that the trainers reward. If they only reward results that meet certain demographic requirements, that's what the machine will produce.
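To make that mechanism concrete, here is a minimal Python sketch of the kind of "post-processing" adjustment the McKinsey passage describes. It is purely illustrative: the function name, the data, and the target rate are all invented, and nothing in it comes from APRA or from any vendor's actual code. The point is only that a few lines of group-specific threshold shifting can equalize outcomes after the model has scored everyone, and the final yes/no decisions carry no visible trace of the adjustment.

```python
# Illustrative sketch only (invented data and function names): a crude
# "post-processing" step that picks a separate score cutoff for each group
# so every group ends up with roughly the same selection rate. Accuracy is
# traded away to hit the demographic target, and nothing in the output says so.

import numpy as np

def post_process_to_parity(scores, groups, target_rate):
    """Return accept/reject decisions with roughly equal selection rates per group.

    scores      : model scores in [0, 1], higher means more qualified
    groups      : group label for each candidate (e.g., "A", "B")
    target_rate : fraction of each group to accept
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    decisions = np.zeros(len(scores), dtype=bool)
    for g in np.unique(groups):
        mask = groups == g
        # Group-specific cutoff: roughly target_rate of this group will pass.
        cutoff = np.quantile(scores[mask], 1 - target_rate)
        decisions[mask] = scores[mask] >= cutoff
    return decisions

# Example: group B scores lower on average, so it silently gets a lower cutoff.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.6, 0.1, 100), rng.normal(0.5, 0.1, 100)]).clip(0, 1)
groups = np.array(["A"] * 100 + ["B"] * 100)
picks = post_process_to_parity(scores, groups, target_rate=0.3)
for g in ("A", "B"):
    print(g, round(picks[groups == g].mean(), 2))   # both close to 0.30
```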
If you're wondering how far from reality such constraints wander, take a look at the "text to image" results originally produced by Google Gemini. When asked for pictures of German soldiers in the 1940s, Gemini's training required that it serve up images of black and Asian Nazis. The consequences of bringing such political correctness to healthcare decisions could be devastating – and much harder to spot.
That's why we can't afford APRA's quota-nudging approach. The answer is not to simply delete those provisions, but to address the problem of stealth quotas directly. APRA should be amended to make clear the fundamental principle that identity-based adjustments of algorithms require special justification. They should be a last resort, used only when actual discrimination has provably distorted algorithmic outcomes – and when other remedies are insufficient. They should never be used when apparent bias can be cured simply by improving the algorithm's accuracy. To take one example, face recognition software ten or fifteen years ago had difficulty accurately identifying minorities and darker skin tones. But today those difficulties can be largely overcome by better lighting, cameras, software, and training sets. Such improvements in algorithmic accuracy are far more likely to be seen as fair than forcing identity-based solutions.
Equally important, any introduction of race, gender, and other protected characteristics into an algorithm's design or training should be open and transparent. Controversial "group justice" measures should never be hidden from the public, from users of algorithms or from the individuals who are affected by those measures.
With those considerations in mind, I've taken a very rough cut at how APRA could be amended to make sure it does not encourage widespread imposition of algorithmic quotas:
"(a) Except as provided in section (b), a covered algorithm may not be modified, trained, prompted, rewarded or otherwise engineered using race, ethnicity, national origin, religion, sex, or other protected characteristic --
(1) to affect the algorithm's outcomes or
(2) to produce a particular distribution of outcomes based in whole or in part on race, ethnicity, national origin, religion, or sex.
(b) A covered algorithm may be modified, trained, prompted, rewarded or engineered as described in section (a) only:
(1) to the extent necessary to remedy a proven act or acts of discrimination that directly and proximately affected the data on which the algorithm is based and
(2) if the algorithm has been designed to ensure that any parties adversely affected by the modification can be identified and notified whenever the modified algorithm is used.
(c) An algorithm modified in accordance with section (b) may not be used to assist any decision unless parties adversely affected by the modification are identified and notified. Any party so notified may challenge the algorithm's compliance with section (b). "
It's not clear to me that such a provision will survive a Democratic Senate and a House that is Republican by a hair. But Congress's composition could change dramatically in a few months. Moreover, regulating artificial intelligence is not just a federal concern.
Left-leaning state legislatures have taken the lead in adopting laws on AI bias; last year, the Brennan Center identified seven jurisdictions with proposed or enacted laws addressing AI discrimination. And of course the Biden administration is pursuing multiple anti-bias initiatives. Many of these legal measures, along with a widespread push for ethical codes aimed at AI bias, will have the same quota-driving impact as APRA.
Conservative legislators have been slow to react to the enthusiasm for AI regulation; their silence guarantees that their constituents will be governed by algorithms written to blue-state regulatory standards. If conservative legislatures don't want to import stealth quotas, they will need to adopt their own laws restricting algorithmic race and gender discrimination and requiring transparency whenever algorithms are modified using race, gender and similar characteristics. So even if APRA is never amended or adopted, the language above, or some more artful version of it, could become an important part of the national debate over artificial intelligence.
Posted at 10:12 AM | Permalink | Comments (0)
More than two-thirds of Americans think the Supreme Court was right to hold Harvard's race-based admissions policy unlawful. But the minority who disagree have no doubt about their own moral authority, and there's every reason to believe that they intend to undo the Court's decision at the earliest opportunity.
Which could be as soon as this year. In fact, undoing the Harvard admissions decision is the least of it. Republicans and Democrats in Congress have embraced a precooked "privacy" bill that will impose race and gender quotas not just on academic admissions but on practically every private and public decision that matters to ordinary Americans. The provision could be adopted without scrutiny in a matter of weeks; that's because it is packaged as part of a bipartisan bill setting federal privacy standards -- something that has been out of reach in Washington for decades. And it looks as though the bill breaks the deadlock by giving Republicans some of the federal preemption their business allies want while it gives Democrats and left-wing advocacy groups a provision that will quietly overrule the Supreme Court's Harvard decision and impose identity-based quotas on a wide swath of American life.
This tradeoff first showed up in a 2023 bill that Democratic and Republican members of the House commerce committee approved by an overwhelming 53-2 vote. That bill, however, never won the support of Sen. Cantwell (D-WA), who chairs the Senate commerce committee. This time around, a lightly revised version of the bill has been endorsed by both Sen. Cantwell and her House counterpart, Cathy McMorris Rodgers (R-WA). The bill has a new name, the American Privacy Rights Act of 2024 (APRA), but it retains the earlier bill's core provision, which uses a "disparate impact" test to impose race, gender, and other quotas on practically every institutional decision of importance to Americans.
"Disparate impact" has a long and controversial history in employment law; it's controversial because it condemns as discriminatory practices that disproportionately affect racial, ethnic, gender, and other protected groups. Savvy employers soon learn that the easiest way to avoid disparate impact liability is to eliminate the disparity – that is, to hire a work force that is balanced by race and ethnicity. As the Supreme Court pointed out long ago, this is a recipe for discrimination; disparate impact liability can "leave the employer little choice . . . but to engage in a subjective quota system of employment selection." Wards Cove Packing Co. v. Atonio, 490 U.S. 642, 652-53 (1989), quoting Albemarle Paper Co. v. Moody, 422 U.S. 405, 448 (1975) (Blackmun, J., concurring).
In the context of hiring and promotion, the easy slide from disparate impact to quotas has proven controversial. The Supreme Court decision that adopted disparate impact as a legal doctrine, Griggs v. Duke Power Co., 401 U.S. 424 (1971), has been persuasively criticized for ignoring Congressional intent. G. Heriot, Title VII Disparate Impact Liability Makes Almost Everything Presumptively Illegal, 14 N.Y.U. J. L. & Liberty 1 (2020). In theory, Griggs allowed employers to justify a hiring rule with a disparate impact if they could show that the rule was motivated not by animus but by business necessity. A few rules have been saved by business necessity; lifeguards have to be able to swim. But in the years since Griggs, the Supreme Court and Congress have struggled to define the business necessity defense; in practice there are few if any hiring qualifications that clearly pass muster if they have a disparate impact.
And there are few if any employment qualifications that don't have some disparate impact. As Prof. Heriot has pointed out, "everything has a disparate impact on some group:"
On average, men are stronger than women, while women are generally more capable of fine handiwork. Chinese Americans and Korean Americans score higher on standardized math tests and other measures of mathematical ability than most other national origin groups….
African American college students earn a disproportionate share of college degrees in public administration and social services. Asian Americans are less likely to have majored in Psychology. Unitarians are more likely to have college degrees than Baptists.…
I have in the past promised to pay $10,000 to the favorite charity of anyone who can bring to my attention a job qualification that has made a difference in a real case and has no disparate impact on any race, color, religion, sex, or national origin group. So far I have not had to pay.
Id. at 35-37. In short, disparate impacts are everywhere in the real world, and so is the temptation to solve the problem with quotas. The difficulty is that, as the polls about the Harvard decision reveal, most Americans don't like the solution. They think it's unfair. As Justice Scalia noted in 2009, the incentives for racial quotas set the stage for a "war between disparate impact and equal protection." Ricci v. DeStefano, 557 U.S. 557, 594 (2009).
Not surprisingly, quota advocates don't want to fight such a war in the light of day. That's presumably why APRA obscures the mechanism by which it imposes quotas.
Here's how it works. APRA's quota provision, section 13 of APRA, says that any entity that "knowingly develops" an algorithm for its business must evaluate that algorithm "to reduce the risk of" harm. And it defines algorithmic "harm" to include causing a "disparate impact" on the basis of "race, color, religion, national origin, sex, or disability" (plus, weirdly, "political party registration status"). APRA Sec. 13(c)(1)(B)(vi)(IV)&(V).
At bottom, it's as simple as that. If you use an algorithm for any important decision about people -- to hire, promote, advertise, or otherwise allocate goods and services -- you must ensure that you've reduced the risk of disparate impact.
The closer one looks, however, the worse it gets. At every turn, APRA expands the sweep of quotas. For example, APRA does not confine itself to hiring and promotion. It provides that, within two years of the bill's enactment, institutions must reduce any disparate impact the algorithm causes in access to housing, education, employment, healthcare, insurance, or credit.
No one escapes. The quota mandate covers practically every business and nonprofit in the country, other than financial institutions. APRA Sec. 2(10). And its regulatory sweep is not limited, as you might think, to sophisticated and mysterious artificial intelligence algorithms. A "covered algorithm" is broadly defined as any computational process that helps humans make a decision about providing goods or services or information. APRA Sec. 2(8). It covers everything from a ground-breaking AI model to an aging Chromebook running a spreadsheet. In order to call this a privacy provision, APRA says that a covered algorithm must process personal data, but that means pretty much every form of personal data that isn't deidentified, with the exception of employee data. APRA Sec. 2(9).
Actually, it gets worse. Remember that some disparate impacts in the employment context can be justified by business necessity. Not under APRA, which doesn't recognize any such defense. So if you use a spreadsheet to rank lifeguard applicants based on their swim test, and minorities do poorly on the test, your spreadsheet must be adjusted until the scores for minorities are the same as everyone else's.
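A back-of-the-envelope check shows how quickly a neutral test trips that wire. The sketch below is hypothetical; the pass rates are invented, and it borrows the "four-fifths" rule of thumb that regulators have long used in employment cases, even though APRA itself specifies no threshold. It simply shows how mechanically a facially neutral swim-test cutoff gets flagged as causing a disparate impact.

```python
# Hypothetical numbers only. A facially neutral swim test, scored the same
# way for everyone, gets flagged under the familiar "four-fifths" rule of
# thumb because one group's pass rate falls below 80% of the highest rate.

def selection_rates(results):
    """results maps group -> (passed, applied); returns group -> pass rate."""
    return {g: passed / applied for g, (passed, applied) in results.items()}

def disparate_impact(results, threshold=0.8):
    rates = selection_rates(results)
    best = max(rates.values())
    flagged = {g: rate for g, rate in rates.items() if rate < threshold * best}
    return rates, flagged

swim_test = {
    "Group A": (45, 60),   # 75% pass the timed swim
    "Group B": (30, 60),   # 50% pass
}
rates, flagged = disparate_impact(swim_test)
print(rates)    # {'Group A': 0.75, 'Group B': 0.5}
print(flagged)  # {'Group B': 0.5}: 0.5 / 0.75 is about 0.67, below the 0.8 line
```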
To see how APRA would work, let's try it on Harvard. Is the university a covered entity? Sure, it's a nonprofit. Do its decisions affect access to an important opportunity? Yes, education. Is it handling nonpublic personal data about applicants? For sure. Is it using a covered algorithm? Almost certainly, even if all it does is enter all the applicants' data in a computer to make it easier to access and evaluate. Does the algorithm cause harm in the shape of disparate impact? Again, objective criteria will almost certainly result in underrepresentation of various racial, religious, gender, or disabled identity groups. To reduce the harm, Harvard will be forced to adopt admissions standards that boost black and Hispanic applicants past Asian and white students with comparable records. The sound of champagne corks popping in Cambridge will reach all the way to Capitol Hill.
Of course, Asian students could still take Harvard to court. There is a section of APRA that seems to make it unlawful to discriminate on the basis of race and ethnicity. APRA Sec. 13(a)(1). But in fact APRA offers the nondiscrimination mandate only to take it away. It carves out an explicit exception for any covered entity that engages in self-testing "to prevent or mitigate unlawful discrimination" or to "diversify an applicant, participant, or customer pool." Harvard will no doubt say that it adopted its quotas after its "self-testing" revealed a failure to achieve diversity in its "participant pool," otherwise known as its freshman class.
Even if the courts don't agree, the Federal Trade Commission can ride to the rescue. APRA gives the Commission authority to issue guidance or regulations interpreting APRA – including issuing a report on best practices for reducing the harm of disparate impact. APRA Sec. 13(c)(5)&(6). What are the odds that a Washington bureaucracy won't endorse race-based decisions as a "best practice"?
It's worth noting that, while I've been dunking on Harvard, I could have said the same about AT&T or General Electric or Amazon. In fact, big companies with lots of personal data face added scrutiny under APRA; they must do a quasi-public "impact assessment" explaining how they are mitigating any disparate impact caused by their algorithms. That creates heavy pressure to announce publicly that they've eliminated all algorithmic harm. That will be an added incentive to implement quotas, but as with Harvard, many big companies don't really need an added incentive. They all have active internal DEI bureaucracies that will be happy to inject even more race and gender consciousness into corporate life, as long as the injection is immune from legal challenge.
And immune it will be. As we've seen, APRA provides strong legal cover for institutions that adopt quota systems. And I predict that, for those actually using artificial intelligence, there will be an added layer of obfuscation that will stop legal challenges before they get started. It seems likely that the burden of mitigating algorithmic harm will quickly be transferred from the companies buying and using algorithms to the companies that build and sell them. Algorithm vendors are already required by many buyers to certify that their products are bias-free. That will soon become standard practice. With APRA on the books, there won't be any doubt that the easiest and safest way to "eliminate bias" will be to build quotas in.
That won't be hard to do. Artificial intelligence and machine learning vendors can use their training and feedback protocols to achieve proportional representation of minorities, women, and the disabled.
During training, AI models are evaluated based on how often they serve up the "right" answers. Thus, a model designed to help promote engineers may be asked to evaluate the resumes of actual engineers who've gone through the corporate promotion process. Its initial guesses about which engineers should be promoted will be compared to actual corporate experience. If the machine picks candidates who performed badly, its recommendation will be marked wrong and it will have to try again. Eventually the machine will recognize the pattern of characteristics, some not at all obvious, that make for a promotable engineer.
But everything depends on the training, which can be constrained by arbitrary factors. A company that wanted to maximize two things -- the skill of its senior engineers and their intramural softball prowess -- could easily train its algorithm to downgrade engineers who can't throw or hit. The algorithm would eventually produce the best set of senior engineers consistent with winning the intramural softball tournament every year. Of course, the model could just as easily be trained to produce the best set of senior engineers consistent with meeting the company's demographic quotas. And the beauty from the company's point of view is that the demographic goals never need to be acknowledged once the training has been completed – probably in some remote facility owned by its vendor. That uncomfortable topic can be passed over in silence. Indeed, it may even be hidden from the company that purchases the product, and it will certainly be hidden from anyone the algorithm disadvantages.
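For readers who want to see how easily a demographic goal can be folded into training and then disappear from view, here is a toy sketch under stated assumptions: the data are synthetic, the "parity penalty" is just one crude way of rewarding demographically balanced outputs, and nothing here is any vendor's actual method. Once training ends, only the learned weights remain; the constraint that shaped them is invisible to the purchaser and to anyone the model scores.

```python
# Toy illustration (synthetic data, invented parameters): a logistic
# "promotion" model trained with an extra penalty that rewards equal average
# scores across two demographic groups. The deployed model is just a weight
# vector; the demographic constraint that shaped it leaves no visible trace.

import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 5))                                  # candidate features
group = (X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)      # group membership correlates with feature 0
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(float)   # "promotable" label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, group, lam, steps=3000, lr=0.1):
    """Minimize cross-entropy plus lam * (mean score of group 0 minus group 1)^2."""
    w = np.zeros(X.shape[1])
    a, b = group == 0, group == 1
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)                # accuracy (cross-entropy) gradient
        gap = p[a].mean() - p[b].mean()              # demographic score gap
        dgap = (X[a] * (p[a] * (1 - p[a]))[:, None]).mean(axis=0) \
             - (X[b] * (p[b] * (1 - p[b]))[:, None]).mean(axis=0)
        grad += 2 * lam * gap * dgap                 # parity-penalty gradient
        w -= lr * grad
    return w

for lam in (0.0, 10.0):
    w = train(X, y, group, lam)
    p = sigmoid(X @ w)
    accuracy = ((p > 0.5) == y).mean()
    gap = abs(p[group == 0].mean() - p[group == 1].mean())
    print(f"lam={lam:>4}: accuracy={accuracy:.2f}, group score gap={gap:.2f}")
# Expect the penalized run (lam=10) to shrink the gap while giving up some accuracy.
```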
To be fair, unlike its 2023 predecessor, APRA at least nods in the direction of helping the algorithm's victims. A new Section 14 requires that institutions tell people if they are going to be judged by an algorithm, provide them with "meaningful information" about how the algorithm makes decisions, and give them an opportunity to opt out.
This is better than nothing, for sure. But not by much. Companies won't have much difficulty providing a lot of information about how their algorithms work without ever quite explaining who gets the short end of the disparate-impact stick. Indeed, as we've seen, the company that's supposed to provide the information may not even know how much race or gender preference has been built into its outcomes. More likely, it will be told by its vendor, and will repeat, that the algorithm has been trained and certified to be bias-free.
What if a candidate suspects the algorithm is stacked against him? How does section 14's assurance that he can opt out help? Going back to our Harvard example, suppose that an Asian student figures out that the algorithm is radically discounting his achievements because of his race. If he opts out, what will happen? He won't be subjected to the algorithm. Instead, presumably, he'll be put in a pool with other dissidents and evaluated by humans -- who will almost certainly wonder about his choice and may well presume that he's a racist. Certainly, opting out provides the applicant no protection, given the power and information imbalance between him and Harvard. Yet that is all that APRA offers.
Let's be blunt; this is nuts. Overturning the Supreme Court's Harvard admissions decision in such a sneaky way is bad enough, but imposing Harvard's identity politics on practically every part of American life -- housing, education, employment, healthcare, insurance, and credit for starters – is worse. APRA's effort to legalize, if not mandate, quotas in all these fields has nothing to do with privacy. The bill deserves to be defeated or at least shorn of sections 13 and 14.
These are the provisions that I've summarized here, and they can be excised without affecting the rest of the bill. That is the first order of business. But efforts to force quotas into new fields by claiming they're needed to remedy algorithmic bias will continue, and they deserve a solution bigger than defeating a single bill. I've got some thoughts about ways to legislate protection against those efforts that I'll save for a later date. For now, though, passage of APRA is an imminent threat, particularly in light of the complete lack of concern expressed so far by any member of Congress, Republican or Democrat.
Posted at 04:19 PM | Permalink | Comments (0)
Okay, yes, I promised to take a hiatus after episode 500. Yet here it is a week later, and I'm releasing episode 501. Here's my excuse. I read and liked Dmitri Alperovitch's book, "World on the Brink: How America Can Beat China in the Race for the 21st Century." I told him I wanted to do an interview about it. Then the interview got pushed into late April because that's when the book is actually coming out.
So sue me. I'm back on hiatus.
The conversation in the episode begins with Dmitri's background in cybersecurity and geopolitics, from his emigration from the Soviet Union as a child through the founding of CrowdStrike to his roles as a founder of the Silverado Policy Accelerator and an advisor to the Defense Department. Dmitri shares his journey, including his early start in cryptography and his role in investigating the 2010 Chinese hack of Google and other companies, which he named Operation Aurora.
Dmitri opens his book with a chillingly realistic scenario of a Chinese invasion of Taiwan. He explains that this is not merely a hypothetical exercise, but a well-researched depiction based on his extensive discussions with Taiwanese leadership, military experts, and his own analysis of the terrain.
Then we dive into the main theme of his book: how to prevent that scenario from coming true. Dmitri stresses the similarities and differences between the US-Soviet Cold War and what he sees as Cold War II between the U.S. and China. He argues that, like Cold War I, Cold War II will require a comprehensive strategy, leveraging military, economic, diplomatic, and technological deterrence.
Dmitri also highlights the structural economic problems facing China, such as the middle-income trap and a looming population collapse. Despite these challenges, he stresses that the U.S. will face tough decisions as it seeks to deter conflict with China while maintaining its other global obligations.
We talk about diversifying critical supply chains away from China and slowing China's technological progress in areas like semiconductors. This will require continuing collaboration with allies like Japan and the Netherlands to restrict China's access to advanced chip-making equipment.
Finally, I note the remarkable role played in Cold War I by Henry Kissinger and Zbigniew Brzezinski, two influential national security advisers who were also first-generation immigrants. I ask whether it's too late to nominate Dmitri to play the same role in Cold War II. You heard it here first!
You can download episode 501 here.
The Cyberlaw Podcast is open to feedback. Send comments to @stewartbaker on Twitter or to [email protected]. Remember: The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Posted at 10:27 AM | Permalink | Comments (0)
Gabriel Schoenfeld of the Hudson Institute reviewed Skating On Stilts for The Wall Street Journal. Here's an excerpt:
One of the biggest privacy battles began in June 2006 after the New York Times disclosed the highly classified CIA-Treasury program to monitor al Qaeda finances by way of the Belgium-based financial clearinghouse known as Swift. The program, an intelligence-gathering operation, was perfectly legal and was conducted with court-approved warrants. It had already facilitated the capture of the chief plotter of the 2002 Bali bombing that killed 202 people.
The Times story obviously compromised the secret effort to monitor terrorist finances, and it spurred the European Commission to try to shut it down. "European privacy bureaucrats," Mr. Baker says, "crowed that they had crippled the American program." An infuriating irony, he adds, was that Europe's own efforts to collect and analyze hotel-registration data were "far more intrusive and less carefully constructed than Treasury's program."
What accounts for such behavior? Mr. Baker castigates civil libertarians of the left and right who, though blaming the government for its supposedly alarmist policies, were themselves alarmist about the policies' threat to liberty. Such people claimed, as Mr. Baker puts it, that "a frightened U.S. government . . . [launched] a seven-year attack on our privacy that a new administration is only slowly (too slowly, say the advocates) beginning to moderate." Mr. Baker tells a different story, of officials urgently trying to solve post-9/11 security problems while unfairly attacked at every turn.
Mr. Baker's argument is the more persuasive one, not least in the wake of recent events—in Fort Hood (Nidal Malik Hasan), in the skies over Detroit (Umar Abdulmutallab), in Times Square (Faisal Shahzad) and in the New York City subways (Najibullah Zazi). The Obama administration, to its credit, has left in place the policies that Mr. Baker fought for, and we are safer for it.
Click here to read the full review.
Posted at 11:28 AM | Permalink | Comments (1)
A review of Skating on Stilts was published in The Washington Times. You can read the introduction below or click here for the full review.
In "Skating on Stilts," Stewart Baker warns that the exponential growth in airplane travel, information technology and bio-technologies has been empowering new, increasingly lethal forms of terrorism against America. This was first demonstrated by al Qaeda's success on Sept. 11 when it exploited gaps in our air travel system to hijack four aircraft simultaneously and launch its catastrophic attacks.
In an important book that deserves wide recognition, Mr. Baker sounds the alarm that in the future, we will fail to defeat such lethal threats, which are escalating, unless we succeed in overcoming resistance to government policy changes in regulating them. Such resistance is being mounted by business, foreign governments and privacy groups that favor something of a laissez-faire approach to national defense matters.
Drawing on Mr. Baker's expertise as the Department of Homeland Security's first assistant secretary for policy (2005-09), as the National Security Agency's top lawyer in the 1990s and in his current practice at one of Washington's top law firms, this book, an insider's memoir, describes his efforts while in government service to tackle these threats.
Posted at 05:27 PM | Permalink | Comments (0)
Here's an excerpt from another review for Skating on Stilts, this time from Homeland Security Watch's Jessica Herrera-Flanigan:
In his upcoming book, Skating on Stilts: Why We Aren’t Stopping Tomorrow’s Terrorism, Baker offers an intriguing view of our homeland security posture that ties back to the central theme that technology is both our savior and our enemy as it empowers not only us but our foes. Coming from Baker, who has been described by the Washington Post as “one of the most techno-literate lawyers around,” the analysis of homeland security technology from a policy/legal prism is refreshing. This is not a Luddite’s view of why technology harms, but an expert’s finely woven story of “how the technologies we love eventually find new ways to kill us, and how to stop them from doing that…”
Stewart Baker provides insight into a DC perspective of homeland security and the struggle of a Department to tackle technology, privacy, and information sharing. The book provides some valuable lessons for those who are on the frontlines of homeland security policy as they attempt to tackle future threats. For an observer of homeland security development, Skating on Stilts: Why We Aren’t Stopping Tomorrow’s Terrorism is a must-read.
Posted at 09:36 AM | Permalink | Comments (0)
Here's an excerpt from Mickey McCarter's review of Skating on Stilts. You can find the full review at Homeland Security Today.
Privacy protection has its place but putting too much emphasis on it can stop counterterrorism forces in their tracks.
At least, that's a central idea explored time and again in a new book from Stewart Baker, the former assistant secretary of policy at the Department of Homeland Security (DHS). The book--Skating on Stilts: Why We Aren't Stopping Tomorrow's Terrorism--largely deals with Baker's firsthand experiences with major developments in various elements of homeland security, ranging from cybersecurity to bioterrorism to aviation security.
Yet the book tells its tales as a means of looking forward. Baker, an indefatigable intellectual, often appears in person to be an endless source of methodical musings on the challenges of DHS and counterterrorism measures generally. His book captures his ability to be proactively prescriptive while reflecting on policy debates old and new.
Despite the serious nature of the subject matter, however, Baker enhances his reputation as a raconteur by enjoying himself. He is well served by his ability to personalize the obstacles that continue to daunt homeland security efforts, making them seem relevant to the reader. This makes the book so easy to read that you may find yourself revisiting past chapters of policy battles fought and pondering their relevance to homeland security today, much like the author himself.
Posted at 12:30 PM | Permalink | Comments (0)
Hoover has agreed to release Skating on Stilts under a Creative Commons Attribution-NoDerivs License 3.0.
This means you can copy it as many times as you like, send it to your friends, and post it online. But there are a few limitations: You must give me credit as the original author, and you may not alter, transform, or build upon this work without permission.
Finally, for any reuse or distribution, you must make clear to others the license terms of this work. Since the terms are reproduced at the end of each chapter, and my name is on the cover, you are safe if you copy each chapter as you find it.
(The sticklers among you may notice that the copyright notice says that the "publisher has made an online version of this work available" under a CC license. The publisher assures me that the version I'm posting here is the online version, so you don't need to go looking for a version that says "this is the CC version." You've already found it.)
I've put the first few chapters up already. Just click on the link to the right that says "Download free chapters here." I'll post another chapter tomorrow, along with a blog post.
Posted at 10:10 PM in Excerpts from the book | Permalink | Comments (0)
“Stewart Baker's provocative book draws on his experience as a top homeland security official to raise important questions about the balance between security considerations and privacy concerns. This is a 'must read' for all concerned with striking the right balance."
- US Senator Susan M. Collins, Ranking Member of the Homeland Security and Governmental Affairs Committee
Posted at 12:57 PM in Excerpts from the book | Permalink | Comments (0)
“If you have time to read only one book on the sorry state of our homeland security—make it this one. Free from political correctness, extremely well-informed, and written with great flair—by a high-ranking former government official, who has seen it all.”
—Amitai Etzioni, author of Security First
Posted at 12:47 PM | Permalink | Comments (0)
“Stewart Baker offers the perspective of a warrior in the trenches and frames the world of exploding information and data as that terrible intersection between perseverant, cunning bad guys and vulnerable, fatigued good guys. His mix of wrenching personal life stories and policy debates challenges us to recognize the complexity of the post 9/11 security challenges.”
– Admiral James Loy, Deputy Secretary of Homeland Security (2003-2005)
Posted at 12:50 PM in Excerpts from the book | Permalink | Comments (0)
“I don't agree with all, or even most, of the perspective Stewart Baker brings to the modern security debate. But I am very glad to have read what he has to say, both for the inside details on how security policy evolved after 9/11 and to engage with the argument he makes. The book is trenchant and well-written, and anyone who cares about the balance between privacy and public safety should be familiar with it.”
-James Fallows, national correspondent for the Atlantic
Posted at 12:56 PM in Excerpts from the book | Permalink | Comments (0)
Stewart Baker's book, Skating on Stilts, is a behind-the-scenes story about how the federal government has tried to strike a delicate balance between security and liberty. This is no dry academic treatise. Baker's prose is by turns artful and provocative. He pulls no punches in his assessment of homeland security, his critics, and of his own role in the events leading up to and following the September 11 attacks. Few people have experienced this period in American history as closely as Baker, and his memoir is thoughtful and often riveting.
-- Shane Harris, author of The Watchers
Posted at 01:06 PM in Excerpts from the book | Permalink | Comments (0)
“Policy meets reality. No post-9/11 official spent more time trying to figure out how to keep America safe, free, and prosperous at the same time than Stewart Baker. His story offers important lessons for battling terrorism in the future.”
-James Jay Carafano, coauthor of Winning the Long War and director of the Heritage Foundation’s Douglas and Sarah Allison Center for Foreign Studies
Posted at 01:01 PM in Excerpts from the book | Permalink | Comments (0)
"Too much commentary about our efforts to prevent another 9/11 is based on prejudice, fear, disinformation and willfull disregard of the threats we face. Stewart Baker has courageously written an open and honest history of our recent efforts -- rare in government memoirs -- that no serious homeland security policymaker can ignore."
- Amb. John Bolton, former U.S. Ambassador to the United Nations
Posted at 01:00 PM in Excerpts from the book | Permalink | Comments (0)
“Stewart Baker makes a cogent, vivid, and persuasive case that we should protect privacy by auditing the government's use of data about individuals and punishing misuse -- but most definitely not by treating such data as private property nor by building walls around it, as we did before 9/11, that bar government-wide cooperation in fighting terrorism. This book will fundamentally change the terms of the technology-privacy debate.”
- R. James Woolsey, Director of Central Intelligence 1993-1995
Posted at 01:08 PM in Excerpts from the book | Permalink | Comments (0)
"With penetrating intellect, pragmatic sensibility, and broad counter-terrorism experience, Stewart Baker provides chilling insights into the terrifying threats presented by ever-more-high-tech terrorism, the maddening inadequacy of our defenses, and the lobbies responsible for this inadequacy. He recounts in vivid detail how the same entrenched business interests, bureaucratic turf wars, and anti-American Euro-bureaucrats that helped pave the way for 9/11 have continued to oppose policies that would make us safer. Especially devastating is Baker’s portrayal of the deeply misguided “privacy lobby” who insist on perpetuating security dangers in order to avert highly improbable governmental abuses. This despite the exponentially increasing likelihood of cyber-attacks “that could leave us without power, money, petroleum, or communications for months” and biological attacks 'equivalent to a nuclear detonation.'"
--Stuart Taylor, columnist for National Journal, Contributing Editor for Newsweek, and Nonresident Senior Fellow in Governance Studies at the Brookings Institution
Posted at 12:52 PM in Excerpts from the book | Permalink | Comments (0)
“Skating on Stilts is both a memoir and a guidebook. Baker takes us through the challenges of his days at DHS, and presents a framework for action that will stand the test of time.”
- John Hamre, former Deputy Secretary of Defense and President, Center for Strategic and International Studies
Posted at 12:46 PM in Excerpts from the book | Permalink | Comments (0)
“Stewart Baker was one of the leading thinkers in developing the architecture for Homeland Security and his insights and experience provide a unique perspective on our national security challenges going forward.”
- Michael Chertoff, former Secretary of Homeland Security
Posted at 12:44 PM in Excerpts from the book | Permalink | Comments (0)
“A most unusual memoirist, Baker is a government bureaucrat with a philosopher's bent and a passion to tell you what he didn't achieve. And this tough-minded, candid work is a cautionary tale to those who slough off hard decisions with a dismissive claim that we do not have to make choices between our values and our security. As Baker points out, security is a value and those who pretend otherwise--be they business interests, privacy advocates, or international groups--put Americans at risk. His chilling retelling of the events leading up to 9/11 seems to echo some of the events of the current day, especially as he reminds us that Mohamed el Kahtani, the one 9/11 hijacker that was actually stopped, left this country with the promise "I'll be back." Kahtani, of course, was later captured, but we need no reminder that like-minded terrorists remain to threaten us.”
-General Michael Hayden, director of the Central Intelligence Agency (2006–2009) and director of the National Security Agency (1999–2005)
Posted at 12:41 PM in Excerpts from the book | Permalink | Comments (0) | TrackBack (0)