This is another excerpt from my book on technology, terrorism, and DHS, tentatively titled "Skating on Stilts." (If you want to read the excerpts in a more coherent fashion, try the categories on the right labeled "Excerpts from the book." I'm afraid I can't fix the bug in TypePad that prevents me from putting them in the category in reverse-chronological order, but I have started putting chapters up in pdf form from time to time.) Comments and factual quibbles are welcome, either in the comments section or by email: [email protected]. If you're dying to order the book, send mail to the same address.
--Stewart Baker
But it wasn’t just our
European allies who let us down. Our own
government made plenty of errors as well.
Abdulmutallab went on to study in Dubai and then Yemen, where he made the
transition from radicalism to terrorism.
He cut ties to his father, saying that he had found the true Islam and
that “You should just forget about me, I’m never coming back.” Alarmed, the father contacted the US embassy
in Nigeria just five weeks before the attack, warning officials of his son’s
extreme views and presence in Yemen. In
the end, he was interviewed by both consular officials and CIA officers, who
prepared reports on the conversation but did not revoke Abdulmutallab’s visa –
perhaps because of an error in spelling his name.
They did enter Abdulmutallab’s
name into a lookout system in case he sought a visa in the future. Information on the Nigerian was also added to
a 550,000-name classified database on terrorism suspects. But the information was not deemed sufficient
to add Abdulmutallab to the formal Terrorist Screening Database, with its
400,000 names – let alone to the much smaller and more selective lists used to
screen air passengers, the 4,000-name no-fly list or the 16,000-name list of
“selectees” who are always screened with care before being allowed on a
plane. One reason for this decision was
a failure to connect Abdulmutallab to a separate stream of intelligence
suggesting that al Qaeda’s Yemeni arm was planning attacks, perhaps involving a
Nigerian operative.
Despite all these failures,
our border security system seems to have worked. The Transportation Security Administration, which
screens air passengers, had no clue that Abdulmutallab was a risky traveler,
and so it did nothing special as he boarded flight 253. In contrast, Customs and Border Protection,
the agency responsible for screening travelers at the border, had access to
both the 400,000-name TSDB and the State Department’s consular databases. It also very likely had information about
Abdulmutallab’s lack of baggage and his cash ticket purchase, both of which
should have been included in his travel reservation data. According to press reports, this information
had already led CBP to flag Abdulmutallab for secondary screening when the
flight landed in Detroit. There, border
agents could have inspected his passport and asked about his travel to
Posted at 09:55 PM in Excerpts from the book, Excerpts from the book -- Chapter 6A | Permalink | Comments (1) | TrackBack (0)
As I write, detailed reviews
of the incident are under way. But the
basic facts are not in dispute, and they raise serious questions about our air
security strategy.
Abdulmutallab began his
journey in Ghana, flying first to Lagos and then to Amsterdam before
transferring to flight 253. He had 80
grams (about three ounces) of plastic explosive sewn into his underwear and
carried a syringe full of acid to use as a detonator. He passed through airport screening three
times, attracting no special attention at any of the airports.
Abdulmutallab had only
carry-on luggage for a purported two-week trip, and he’d paid cash for his
round-trip ticket. None of that was
deeply suspicious by itself. Cash
purchases aren’t as rare in Africa as they are in Europe or North America. And for anyone who’s waited – and waited --
for luggage at the end of a long flight, a traveler who can carry on all the
luggage he needs for a two-week stay is cause more for envy than for
suspicion.
But there was plenty of reason
to be suspicious of Abdulmutallab, and the information was already in the hands
of the US and UK governments.
Umar Abdulmutallab began his
journey to Islamic terrorism where so many did.
In
Indeed you can. This attitude permeated European
thinking. It was the reason we had
revised the Visa Waiver Program to insist on greater information sharing about
suspected terrorists from our counterparts in Europe. Unfortunately, even the British, with whom we
had a relatively close counterterrorism relationship, had not agreed to a broad
sharing of information about Islamic radicals – even foreign radicals –
operating within their borders. In 2008,
lacking any information from the British that might have spurred a deeper
inquiry on terrorism grounds, the
Posted at 09:53 PM in Excerpts from the book, Excerpts from the book -- Chapter 6A | Permalink | Comments (2) | TrackBack (0)
If you’ve got to fly during the holidays, Christmas
Day is as good as it gets. For a brief
moment, the crowds drop off. Airports
are almost peaceful. And if you start
the day early in Europe, you can be in the United States in time for Christmas
dinner.
Nearly 300 passengers were
taking advantage of that brief respite on December 25, 2009. Northwest flight 253 from Amsterdam to
Detroit had been uneventful. No one
thought anything of the young Nigerian complaining of a stomach bug; he had
spent twenty minutes in the toilet and then covered himself with a blanket when
he returned to his window seat in the middle of the plane.
The flight was well into its
descent when Umar Abdulmutallab burst into smoke and flames. As the flames
climbed the wall of the plane, and a brave Dutch passenger struggled with the
man at the center of the fire, the passengers could be forgiven for wondering
whether they were flying on Christmas Day or Groundhog Day. For the 2009 attack bore an eerie resemblance
to another Christmas season attack eight years earlier.
It was another transatlantic
flight, another al Qaeda terrorist from outside the Middle East, and another
near miss. Once again, the solo
terrorist had trouble triggering the explosive – in his underwear this time,
instead of his shoe. Once again, he
didn’t get a second chance, as passengers and crew subdued him and extinguished
the flames.
Counting the “liquids plot” of
August 2006, this was al Qaeda’s third post-9/11 attempt to bring down
transatlantic jets. The fixation on
destroying transatlantic flights is reminiscent of an earlier fixation on the
World Trade Center. It’s safe to assume
that they’ll keep trying until they succeed.
We’d known that for
years. We’d revamped our entire Visa
Waiver Program just to make it harder for European al Qaeda members to launch
transatlantic attacks. Yet we hadn’t
managed to keep an al Qaeda operative and explosives off flight 253.
Why not?
Posted at 09:50 AM in Excerpts from the book, Excerpts from the book -- Chapter 6A | Permalink | Comments (0) | TrackBack (0)
The title says it all. Hoover Press has accepted "Skating on Stilts" for publication this spring, assuming I get it out of my computer and to the proofreaders by the end of the month.
Of course, I could tinker with it for months, but Judge Lamberth gave me a candid and helpful interview on the sources of the wall on Friday, and I've incorporated it this weekend. So this is a great time to quit tinkering and ship the damn thing. Which I am doing.
I've added a new chapter on TSA and the privacy machine. It tracks my article on the Christmas Day attack for National Review, which never put the piece on line. So I'll post the chapter in installments over the next couple of weeks.
One last thing: I need a subtitle, since "Skating on Stilts" is memorable but not informative. Working subtitle is the rather bland "Terror, Technology, and Privacy." That sort of misses the memoir element of the book, and frankly I'm not sure I'd read a book with that subtitle.
So, other candidates are still in the mix. How about "My Three Years at DHS, Trying to Keep Terrorists, Technology, and the Privacy Lobby from Killing You"? Other suggestions welcome.
Posted at 08:12 PM in Random posts | Permalink | Comments (1) | TrackBack (0)
Secretary Napolitano is doing a lightning tour of Europe, trying to build on the sense of urgency about air screening since the Christmas attack. Her tour has already produced one of the few DHS blog entries with a human touch, as DAS Koumans conveys a sense of the glamour and leisure that come with DHS international travel:
So, we took off at 6 PM from Washington, D.C., got two or three hours of sleep on the flight as the Secretary spent most of her time preparing for the next day’s meetings, and landed at 6:45 AM local time in Spain. ... We had only had 10 minutes before we began our first event. Here's hoping no one noticed we went to our first two bilateral meetings in the clothes we slept in!
The Secretary seems to have focused European attention by asking for wider deployment of whole body imaging machines, which has made Europeans realize that there may be more serious privacy issues than letting government officials see travel reservation data -- information that is already shown to airline workers on two continents:
"A good PNR system may be, at least, as efficient" as the scanners, said the Secretay's EU counterpart. So the EU, US, and European interior ministers agreed on a work program that would consider "what and how operational cooperation sharing [of PNR] could be further improved and compatible approaches could be developed among partners committed to aviation security, the rule of law, and international humans rights."
Of course putting this in a US-EU context is fraught with opportunities for delay and mischief. The EU still has no serious responsibility for actually stopping terrorists. At best, it grades the work of the member state agencies that actually look for terrorists. At worst, it finds reasons to get in their way. (Hints of that can be seen at the end of the statement above, which translates as, "Oh my goodness. We can't say anything nice about aviation security -- especially in a statement adopted with the United States -- unless we give more than equal time to the rule of law and international human rights!")
So, whether the EU can overcome its genes and actually improve security cooperation remains to be seen. If the European Parliament follows past practice, you wouldn't give high odds:
"It will be very difficult for the European Council to get a majority in parliament for this proposal," Manfred Weber, deputy head of the Christian Democratic faction of the European Parliament, was quoted as saying.
Justice spokesman for the Green party faction, Jan Philipp Albrecht, accused EU domestic commissioner Jacques Barrot and the European Council of creating facts for their own purposes. Albrecht said that the Lisbon Treaty, which came into force at the beginning of December, empowered the European Parliament to full participation in decision-making on internal affairs, with the power to block legislation.
Actually, though, I'm betting the other way. I think MEPs like Weber and Albrecht are going to eat their words.
The Lisbon Treaty means the Parliament can't sit on the sidelines carping any more. They actually have the power to kill new security measures. But that means they'll have to take responsibility for killing them.
Carping while the security measures took effect without Parliament's approval was good politics. But actually killing security measures in Parliament will turn out to be very bad politics. After all, a lot of Europeans would have died on flight 253 if the bomber had succeeded. Do Weber and Albrecht expect to be bragging after the next attack that they blocked measures that might have stopped it?
No, my guess is that in the end this could turn out to be a useful exercise in sobering up the European Parliament.
Posted at 06:36 PM in Random posts | Permalink | Comments (0) | TrackBack (0)
By the time we were done putting the report together, I realized,
we hadn’t just touched the third rail. We were tap-dancing on it. By candidly
treating the end of online anonymity and the adoption of tough security
regulation as options, we were goring some of the noisiest oxen in Washington.
Well, what the hell, I thought. Maybe the time was right for a
reconsideration of security regulation, especially after the hodge-podge the
states were making of the issue.
I was wrong.
Memories of Dick Clarke’s fate were too fresh, and by mid-2008 the
Administration was running out of time. I showed a draft of the report to the
front office and sent the Homeland Security Council a copy. Not much later I
got a call. The Council didn’t want to even raise regulation as an option in
the interagency discussions. They feared that industry and Congress would kill
the little progress that had been made if regulation was even treated as an
option. In fact, they wanted to bury the report. Instead of thinking about the
future, they’d focus only on tasks that could be done in the waning months of
the Bush Administration.
It was disappointing but I understood. Chertoff, who'd been a rock
in other disputes, was now focused only on fights he could win and changes he
could implement in six months or less.
And we had reached that point in an administration where accomplishing
even the simplest and most obvious tasks had become nearly impossible. Energy was draining out of the Bush team, and
what remained was soon focused on a cascading financial crisis that left no
time for next year’s threats.
I thought that there might be value in letting the Obama
administration consider these issues without explaining that it was reviewing
options proposed under President Bush. The new administration might have more
leeway to consider the attribution and regulation issues with an open mind.
I was wrong about that too.
The Obama administration brought a flurry of energy and
determination to the problem. As well it
should have. Barack Obama and John
McCain, after all, had been the first presidential candidates whose campaign
networks were systematically penetrated and exploited by foreign
intelligence-collectors. And candidate
Obama had pledged that cybersecurity would be a top national security priority
in his administration. Nevertheless, the
new Administration's resolution seemed to waver within weeks of the
inauguration.
The new administration did produce a cybersecurity strategy only a
few months into the term, but White House watchers learned a lot from what it
said and how it was edited. The draft
was reportedly produced on the schedule set by the President – within sixty
days of his request. But it didn't go to
him on that schedule. Instead, it went
through a new set of edits, as office after office protected itself, its
prerogatives, or its constituencies by removing controversial passages.
The result was mostly pabulum.
Pabulum of a sort that would have been familiar to the Clinton and Bush
White Houses, of course, since they too had blinked when faced with hard
choices over cybersecurity.
For example, the strategy recognizes that improving authentication
of people and machines is a key to improving cybersecurity. While much of its
attention is focused on just making sure that federal networks can properly
identify users, it acknowledges as a goal the creation of a “global, trusted
eco-system” that could form the basis of a secure network. But it calls for that
system to be built by working with “international partners” and by building an
ecosystem that is seen to protect “privacy rights and civil liberties.” Hard experience tells us that if building a
secure network depends on the full support of the international and privacy
communities, it will never happen.
Business too was fully protected from the specter of security
regulation in the Obama administration's strategy document, which mentioned regulation just once – to declare
that it would be considered only “as a last resort.”
By the time the editing was done, Washington knew that nothing
dramatic would come from the cybersecurity initiative – or the new
cybersecurity coordinator job the President had announced with fanfare. Indeed, the position remained unfilled until
the end of 2009.
Three Presidents in a row had tried to change course and head off
the worst consequences of Moore's law for our national and personal
security.
All three had failed.
The privacy and business lobbies that guard the exponential status
quo had defeated them all.
Posted at 08:44 AM in Excerpts from the book, Excerpts from the book -- Chapter 7 | Permalink | Comments (0) | TrackBack (0)
And what about the “hard” option – just plain regulating? You know, just putting network security
requirements into the Federal Register?
We couldn’t ignore that option, I thought. In fact, a lot of the
most critical industries were already subject to government regulation. These
included financial institutions, energy, and telecommunications. And some of
these industries were already subject to cybersecurity regulation. Financial
institutions, for example, must follow a unified set of cybersecurity rules.
But even financial regulators don’t require particular security measures. The
rules are largely procedural, resembling the instructions on a bottle of
shampoo: Institutions must study their
vulnerabilities, cure them, assess the effectiveness of the cure, and repeat.
It’s hard to write rules that go beyond such procedural steps,
because the attackers change tactics faster than regulations can be amended.
What's more, the cost of mandatory security would be very high; it would slow
innovation and productivity growth severely.
Even so, there's a case for mandating particular security measures
for regulated industries. It’s the
Howard Crank problem all over again. Every year, the exponential growth of
information technology makes our lives a little better, our businesses a little
more efficient and profitable. And every year it leaves us a little more
vulnerable to a military strike on our infrastructure that could leave us
without power, money, petroleum, or communications for months.
Large parts of the country could find themselves living like
post-Katrina New Orleans -- but without the National Guard over the horizon.
That risk isn’t part of most companies’ balance sheets. It’s not hard to see
that as the kind of market failure that requires regulation.
But even if there is a market failure, the government still isn’t
well-equipped to solve it. At a minimum, the regulatory agencies would have to
find a way to coordinate and issue standards much faster than they now write
regulations. Today, the practical speed
limit is eighteen months from new idea to final rule. There's not much point in replacing a
predictable market failure with an equally predictable government failure.
And what about all the vulnerable IT networks that are not in the
hands of regulated industries? If they
are compromised, the harm goes beyond the users of those networks. The
compromised machines can be used to attack others, including government
systems. To set standards in that world would certainly require new
legislation.
Industry, we knew, wouldn’t like any talk about regulation. But
they were fighting the last war. New
security legislation had in fact already been enacted, though in an odd, and
mostly unfortunate, way. Laws have been adopted in all but five states that
require companies to disclose any security breaches that lead to the disclosure
of sensitive customer data. The more the federal government has dithered over
security rules for industry, the more aggressively the states have moved into
the opening. Their breach notification laws are becoming de facto security
regulations for all companies. First, they punish bad security by forcing
companies who are compromised to admit that fact, as long as some personal data
was accessed. Second, in a crude way, they recognize that good security
measures can make notification unnecessary, and that encourages companies to
invest in technologies that are so recognized. For example, many state laws
recognize that encrypted data may be safe even if the system it is stored on
has been compromised. So, naturally, many companies have expanded their use of
encryption to avoid embarrassing breach notifications.
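For the technically minded, here is a minimal sketch of the kind of encryption at rest that earns the safe harbor, in Python. The library choice and the sample record are my own illustration; no state statute prescribes any particular scheme:

    # Minimal sketch: encrypting a customer record at rest with the Python
    # "cryptography" package (pip install cryptography). Illustrative only.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in practice, kept in a key vault,
                                       # never on the same disk as the data
    vault = Fernet(key)

    record = b"name=Jane Roe; card=4111-1111-1111-1111"
    stored = vault.encrypt(record)     # what a stolen laptop would yield

    # Without the key, "stored" is useless to a thief -- which is why many
    # state breach laws excuse notification when the lost data was encrypted.
    assert vault.decrypt(stored) == record

The design choice the statutes reward is the one in the comment: if the key never travels with the laptop, losing the laptop doesn't mean losing the data.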
The problem with these laws is that they don’t necessarily point
companies in the direction of real security improvements. Because they only
punish companies for breaches that disclose personal data, they have encouraged
the companies to lock up or discard certain kinds of customer data – rather
than focusing on keeping hackers out of their systems more broadly.
The problem is particularly acute in the area of stolen and lost
laptops. Thousands of business laptops are lost or stolen every day. Usually,
the thief wants the laptop, not the data. But if there is personal data in the
laptop, that data has technically been compromised, thus forcing companies to
send embarrassing notices to everyone whose data was in the computer. After a
few such cases, companies begin to divert their security budget to
double-locking laptop drives with passwords and encryption. Those measures
won’t keep GhostNet out of their networks, but they get the highest investment
priority because of the peculiarities of state law.
By the same token, state laws expressly recognizing encryption of
data as a defense have artificially heightened the priority that security
offices assign to the deployment of encryption, even though it too would have
done little to block a sophisticated attack. There are plenty of measures other
than encryption that may be equally effective at providing a defense in depth,
but state legislatures have not been able to draft laws that reward more
comprehensive security.
Finally, state laws vary substantially, creating great tension for
law-abiding companies, which find they cannot actually comply with all of the
different laws. For all those reasons, we noted, there is growing support for a
federal law that would set a single breach disclosure standard. We thought that
such a law could also create incentives for higher cybersecurity standards. In
fact, replacing inconsistent state notification laws with a security-minded
federal law would be a victory for both security and innovation.
Posted at 08:42 AM in Excerpts from the book, Excerpts from the book -- Chapter 7 | Permalink | Comments (0)
The softest option was to nudge industry toward security measures by offering liability
protection in exchange. This is the most comfortable form of regulation for
business, because instead of punishing bad behavior it rewards good behavior.
This is something we understood at DHS, where we administered the Safety Act.
That act provides liability protection to companies who manufacture and sell
qualified anti-terrorism technology.
The idea behind the act is simple: Some anti-terrorism
technologies work well but not perfectly; they reduce risk but don’t eliminate
it. Unfortunately, after a terrorist incident, the people who have been fully
protected by the technology will be grateful, and the people who haven't been
fully protected will sue, claiming that the technology was defective, since it
didn't protect them from all harm. That's not a recipe for encouraging the
deployment of new technology.
So, to keep fear of liability from squelching advances in
technology, the Safety Act sets a cap on liability for approved technologies.
There are a lot of conditions built into the act. Companies must, for example,
carry whatever level of liability insurance DHS considers necessary to
compensate people who may be harmed in a terrorist attack. But in return, the
threat of open-ended, company-killing liability is taken off the table.
We thought that DHS could use the Safety Act itself to encourage
companies to adopt some cybersecurity technologies. The protections of the act
aren’t limited to physical products; they also cover services and information
technology. We thought the act could even be applied to security services and
processes, vulnerability assessments, and cybersecurity standards.
But the Safety Act wasn’t perfectly adapted to
cybersecurity tools. Most hackers are not terrorists. In addition, network
security measures work in layers. There is no single magic bullet that meets
every security need. If many security products fail to prevent an attack, and
not all of them are covered by the Act, sorting out which ones caused the damage
could require endless, expensive lawsuits. And, because network threats change
so often, products designated under the Act would have to be updated
frequently. And even with regular updates, the extent to which a particular
technology provides protection will likely erode over time as attackers seek
ways around the defense. At what point should protection be modified or
withdrawn, we wondered, and who will press for that change? Finally, the insurance market for cybersecurity
products remains at best a work in progress, so it wasn't clear that adequate
coverage was available. For these reasons, we concluded, the Safety Act was
probably better as a model of what could be done without regulation than as a
tool that could be used immediately to encourage broad cybersecurity measures.
We also noted a second “soft” way to influence business --
government purchasing standards. Many critical infrastructure companies do
business with the U.S. Government. The government has great weight as a buyer
of technologies, and it can influence the market for security by the standards
it sets for its purchases. The government cannot, however, dictate terms to
suppliers of technology. The government may be the single largest buyer of some
technology, but it is far outweighed in the aggregate by private sector
purchasers. Further, without new policies, the government wouldn't really act
as a “single” buyer. IT procurement is
divided among many agencies, and these agencies would fight security standards
that raise costs or reduce competition.
We wanted the government to consider a more unified approach to
its procurement of information technologies.
We thought the government could establish government-wide contract
models that incorporated preferred technologies and security practices
requirements into federal contracts. In fact, some steps on this road had
already been taken. Federal purchases are required by law to meet certain
federal information security standards.
We knew, though, that using procurement to enhance commercial IT
security is easier said than done. The U.S. Government’s first efforts to
leverage its procurement power for IT security began in the
1970s, when the government established the Trusted Computer System Evaluation
Criteria—the “Orange Book”—and began to evaluate commercial products that were
submitted for review. The idea, then as now, was to use federal contracts as an
incentive for vendors to incorporate security measures in their products.
The scheme never had as big a security impact as hoped; the
commercial market for computers rapidly outpaced the government market, and
private purchasers came to perceive their security needs as different from
those of the government. Sellers and buyers alike complained that security
evaluation slowed adoption of current IT hardware and software.
For all those reasons, the procurement process has not so far
turned out to be an effective way to influence network security.
Posted at 08:39 AM in Excerpts from the book, Excerpts from the book -- Chapter 7 | Permalink | Comments (0)
Cybersecurity regulation had been talked about for years. The Bush
Administration had floated the possibility in 2002. Or, to be more precise,
Richard Clarke had floated the idea.
Clarke was a flamboyant bureaucratic warrior camouflaged by the
dress and haircut of a high school math teacher. A career official with a knack
for building empires -- and making enemies -- he had risen to take charge of
both cybersecurity and terrorism policy in President Clinton’s National
Security Council. He later became famous briefly for his scathing denunciation
of the Bush White House’s response to terrorism warnings. But in 2000 he was better known as the man
who had sponsored the failed Clinton Administration plan to build a monitoring
network.
Clarke was held over by the Bush Administration, with the same two
portfolios he had held under President Clinton -- terrorism and cybersecurity.
But he never seems to have gained the same support in the new Administration as
he had in the old one. After the attacks of 9/11, pushed out of the terrorism job, he poured
himself into his cybersecurity role, spending much of 2002 drafting a strategy
for the new Administration.
Always a hard-charger, Clarke had high ambitions for his new
effort. He planned a grand event to unveil the strategy in September of 2002.
Reportedly, the strategy sidled up toward new mandates for industry, calling on
technology companies to contribute to a security research fund and pressing
Internet service providers to bundle firewalls and other security technology
with their services. But just days before the event, Clarke’s wings were
publicly clipped. Industry had found more sympathetic ears at the White House,
and he had too few friends at the top. His carefully honed strategy was
unveiled, not as a final document but merely as a draft, for comment. And even
for that purpose, anything that could offend industry, anything that hinted at
government mandates, was stripped out.
For Clarke it must have been the final straw. He’d already been
pulled off the terrorism account with brutal swiftness after 9/11, and now his
year of effort on cybersecurity had ended in a public rejection of his work.
He stayed in the White House just long enough to produce a final
strategy document that was as tepid as the draft. Then he quit.
Industry had claimed another scalp in its long campaign to head
off federal mandates aimed at improving computer security. The President
(though not industry) eventually paid a heavy price for Clarke's
resentment. The one-time security
adviser became a harsh Bush critic, in testimony before the 9/11 Commission and
in his subsequent writings.
I thought of Clarke’s fate as we put together the report.
Regulation had become an electrified third rail. Especially in a generally
business-friendly administration, advocating more regulation was not likely to
be career-enhancing.
But the status quo clearly wasn’t working. Moore's law was working
against us. We had to find a way to change incentives, to get information technologists to start building
security into the foundation of our networks. It’s not that I thought
regulation was always going to be the right answer. But I was sure that it had
to be on the table. Especially because regulation didn’t have to mean classic
command-and-control Federal Register rulemaking.
Government doesn’t have to issue mandatory rules to influence
private sector behavior. It can use a variety of incentives to encourage
security. So the policy office laid out a range of approaches, from
soft to hard.
Posted at 08:37 AM in Excerpts from the book, Excerpts from the book -- Chapter 7 | Permalink | Comments (0)
By the end of the Bush Administration, DHS was used to the idea
that even the most obvious security measures would be opposed by privacy
groups. We still had an obligation to do what we could to head off the mounting
security risks. We also knew intrusion
prevention, valuable as it was, wouldn’t do that by itself.
We needed a broader strategy. In mid-2008, the Homeland Security
Council asked DHS to provide options for a set of long-term strategy questions.
The policy office was assigned to pull them together.
We found a lot of tough tactical questions that needed to be
answered, but the real problem was our strategic posture. And we found only two
ideas that offered any hope of curing our strategic vulnerabilities – attribution
and regulation.
Attribution
Here’s our strategic security problem in a nutshell: We are attacked every day by an imaginative,
highly motivated, and anonymous adversary.
We can only prevail if we mount near-perfect defenses. And, since there's no penalty for mounting an
attack, the adversary simply tries again and again until something works.
This defensive strategy is, quite simply, too hard. A wholly
passive strategy almost never works in the real world.
Take burglary. We certainly spend money on defense. A good lock on
your door can keep burglars out of your home. But the lock isn’t all that good
by itself. We take it for granted that burglars can’t sit on our doorsteps day
after day, studying our lock and trying new lockpicks every evening to see what
works. If they could, they'd find a way in sooner or later.
Burglars don’t sit on your doorstep because they're afraid of
being busted. It’s the threat of the police that makes your lock as effective
as it is.
Defending networks is the same kind of problem. Security measures
are all well and good, but unless we can also identify and deter attackers,
defense alone will never do the job.
We have a lot of ways to punish attackers once we identify them.
It's identifying them that’s hard.
We began by trying to use the tools of law enforcement to identify
the attackers. Practically all computer attacks are crimes, after all. They
usually violate fraud, extortion, and computer abuse laws. Many attacks would
be deterred if the perpetrators faced a realistic risk of arrest and
prosecution.
But crossing international boundaries on the Internet is easy.
Attackers discovered very early that they could cover their tracks by breaking
into lightly guarded computers in several countries and hopping from one to the
next before launching an attack on their real target. That way, the police
would have to track them back from country to country before discovering their
real location. And doing that would require subpoenas valid in each country.
That wasn’t easy. To get one country to enforce another country’s
subpoena requires patience and lengthy legal analysis. The country that’s being
asked to enforce the subpoena will only do so if it too views computer attacks
as crimes. It has to have the ability to carry out the search very quickly.
Otherwise the logs will be overwritten and the evidence gone. Indeed, unless
the information can be gathered nearly instantaneously, the attackers will
always have the advantage. They can compromise new machines and add new hops to
their route faster than the police can serve subpoenas to track them.
This problem has been obvious for more than two decades. The
United States began encountering it in the 1980s, and by 1989, it had persuaded
the Council of Europe to propose work on an international agreement to
streamline the identification process. Getting that far took great effort. The Justice Department had to explain why it
needed such an instrument over and over to less computer-savvy governments.
Not until late 2001 was there actual agreement in principle on a
few very basic steps – making computer hacking a crime and naming a contact
point to handle subpoena requests quickly. And that simply marked the start of
a long slow international lawmaking process. The convention didn’t come into
effect until 2004, when a grand total of three countries ratified it. As of
2009, fifteen countries had fully ratified and acceded to the convention, and
28 more were in various stages of adopting it. As international efforts go,
that is a considerable success (although the numbers are inflated by the
European Union, which has pressed its 27 members to join, along with EU
satellites like Liechtenstein).
And what does the Convention do to solve the attribution
problem? In essence, the members of the
Convention have agreed that they will adopt a common set of computer crimes and
that they will assist each other in investigating these crimes.
That’s it. A good thing, no doubt, but hardly likely to stop the
massive attacks we see today. Hackers have compromised hundreds of thousands,
sometimes millions, of machines. If they chose to hop from one of those to the
next before launching an attack, the authorities would need to serve hundreds
of thousands of subpoenas in dozens of countries – and to do it as fast as the
hackers could move from one machine to the next. The hackers can move at the
speed of light – literally. The
governments can move at the speed of paper, courts, and sealing wax. It's no contest.
At best, the Convention offers a partial solution to computer crime
as it existed in the 1980s. But building a consensus for even its limited measures took over a decade. And even then,
the consensus was distinctly limited in geographic reach. Neither Russia nor
China has shown any inclination to adopt the Convention. Nor, for that matter,
have thoroughly wired countries like South Korea, Brazil, Nigeria, Singapore,
and Australia. So even if we still lived in the 1980s, there would still be
plenty of places in the world for hackers to hide.
The only alternative to the Convention that the international
community has found is worse – and in a thoroughly predictable way. Led by Russia, the United Nations has
recently been touting the idea of “disarmament talks” for cyberspace.
There are several possible motivations for such a proposal. One possibility is that the Russians
genuinely believe that an arms control treaty for cyberspace would be good for
all concerned, demilitarizing and taking the fear of disaster out of the networks
on which the world relies. Unfortunately,
that’s not particularly likely. You
can't have a real arms control agreement unless you can verify
compliance. But as we’ve seen, a principal feature of computer attacks is
the difficulty of attribution. If
attacks continued after “disarmament,” how would we know that anyone had
disarmed?
The Russians’ model seems to be the multilateral chemical and
biological weapons conventions negotiated in Geneva during the Cold War. By the usual standards of the international
community these are wildly successful agreements, adopted by more than 150
countries. They proved wildly successful
from the Soviet point of view as well, since the United States actually
abandoned its chemical and biological weapons after signing the conventions
while the Soviets kept theirs in place.
Even more remarkably, the United States managed to get a black eye in
the process, because it had the temerity in 2001 to tell the international
community that the convention was unverifiable, that it could not prevent
proliferation of biological weapons, and that there was no point in
establishing intrusive inspection regimes that would not work.
From the Russian point of view, replaying this drama has no
downside. If an agreement is reached,
the US, with its hypercompliant legal culture now fully integrated into
military planning, will undoubtedly adhere to any ban the new agreement
imposes. But countries that want to use
the tools of cyberwarfare will be free to do so, relying on the anonymity that
cloaks attackers today. If the US sees
that trap and refuses to accept an unenforceable agreement, the international
community will replay the drama that accompanied the US refusal to negotiate an
unenforceable biological weapons protocol.
Just agreeing to consider the proposal, as the new Administration
seems to have done, allows Russia to divide us from our allies in Europe -- who
always seem eager to put new international legal limits on warfare, even if the
limits can’t actually be enforced.
In the end, then, our inability to solve the problem of
attribution and anonymity poses severe threats not just to our pocketbooks but
to our national security and our international standing. We thought it was foolish to try to solve the
problem with what Harvard law professor Larry Lessig once called “East Coast
code” – laws and treaties. Instead, we
thought, the answer would prove to be “West Coast code” – software and hardware
design. In the long run, we needed an architecture that automatically and reliably
identifies every machine and person in the network.
We knew that privacy groups would melt down if anyone proposed to
do that for the Internet. Anonymity has become (wrongly in my view) equated
with online privacy. Any effort to cut back online anonymity will be resisted
strongly by privacy groups. And they'll be able to find popular support, at
least for a time. Practically everyone
does something online that they are ashamed of.
At the same time, practically everyone spends large parts of the
day on a network where their every action is identified and monitored.
Most corporate networks have robust attribution and audit capabilities, and the
insecurity of the public networks is forcing private networks to study the
conduct of their users ever more closely in the hopes of identifying compromised
machines before they can cause damage.
In trying to chart a broad network security strategy, I thought we
needed more research and incentives to improve audit and attribution
capabilities in hardware and software. And we needed architectural and legal
innovations to encourage one secure and attributable network to link up
securely with another. In the long run, and perhaps in the short run, that sort
of organic linking among attributable systems may be the only way to build a
network on which identification is rapid and sure.
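What would "audit and attribution capabilities" look like in software? One building block is a tamper-evident log, in which each entry is chained to the one before it, so that actions can be attributed to identified users and the record can't be quietly rewritten afterward. Here's a minimal sketch in Python; the key handling and field names are invented for illustration, not a design anyone actually proposed:

    # Minimal sketch of a tamper-evident audit trail: each entry carries an
    # HMAC that covers the entry and the previous entry's HMAC, chaining the
    # log together. Hypothetical illustration only.
    import hashlib
    import hmac
    import json
    import time

    LOG_KEY = b"server-side secret"   # assumption: held only by the auditor

    def append_entry(log, user, action):
        prev = log[-1]["mac"] if log else ""
        entry = {"time": time.time(), "user": user, "action": action, "prev": prev}
        body = json.dumps(entry, sort_keys=True).encode()
        entry["mac"] = hmac.new(LOG_KEY, body, hashlib.sha256).hexdigest()
        log.append(entry)

    def verify(log):
        prev = ""
        for entry in log:
            body = {k: v for k, v in entry.items() if k != "mac"}
            expected = hmac.new(LOG_KEY, json.dumps(body, sort_keys=True).encode(),
                                hashlib.sha256).hexdigest()
            if entry["mac"] != expected or entry["prev"] != prev:
                return False          # the log was altered after the fact
            prev = entry["mac"]
        return True

    log = []
    append_entry(log, "jdoe", "login")
    append_entry(log, "jdoe", "read:payroll.xls")
    assert verify(log)

The point isn't this particular recipe; it's that attribution is an architectural property you build in, not a subpoena you serve later.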
That doesn’t mean the old, anonymous Internet has to disappear.
But I suspect we’ll have to create a new network that coexists alongside the
old one. Users who value security – who want an assurance that their financial
assets and their secrets will not be stolen by hackers – will choose the secure
alternative, at least most of the time.
The policy office at DHS put that idea forward as an option for
consideration by the Homeland Security Council.
Posted at 08:34 AM in Excerpts from the book, Excerpts from the book -- Chapter 7 | Permalink | Comments (0)
Michael Yon's claim that he was handcuffed for refusing to answer unreasonable questions at Seattle's airport is, of course, just his side of the story. I've offered some reasons in earlier posts to think that his story is a bit odd, since he seems at first to have blamed TSA for the actions of the customs and border authorities, whose justification for asking the questions is a lot easier to imagine.
But why do we have to imagine CBP's justification for the questions? Commenters here and elsewhere have complained that border officials haven't been allowed to tell their side of the story. Declan McCullagh, a journalist who wears his libertarian sympathies on his sleeve, has disclosed that one border official may have posted comments revealing that Yon has a longstanding beef with CBP . With Declan's parentheticals, here's the post by the apparent border official:
"Just refusing to answer a question about yourself and your travel does not get you handcuffed during a secondary inspection in customs. You have to do alot (sic) more. And Yon was lying in his post where he claims that the airport police rescued him. CBP does not answer to the airport police and the airport police has no authority to interfere with CBP actions. Conservatives should get their facts straight before they go off. Yon actually has a history with CBP. His Thai girlfriend got stopped and questioned by CBP and he has a hissy fit afterward." (This may be a reference to Aew, Yon's Thai friend, who he said a year ago was unreasonably hassled by Minneapolis CBP agents.)
There's a lot to chew on in that short comment, which can apparently be traced to a dhs.gov domain. It calls into question Yon's impartiality about CBP. And it raises questions about why Yon left so much doubt about which agency asked him questions and handcuffed him; after the Minneapolis event, surely Yon is quite familiar with CBP officers and what authorities they have. In fact, it's easy, after reading his account of his friend's experience, to suspect that he was eager for a confrontation over border questioning.
But most interesting is Yon's threat to sue the government for this post. McCullagh reports that Yon is saying he's "going to ask" his attorney "to examine a lawsuit for libel against the federal government for the post on BlackFive." That would be a Privacy Act lawsuit for disclosing facts about an individual without statutory justification.
I am not comfortable with the leaking of private data, even in these circumstances. But neither can I justify Yon's tactic of trying to punish the government employees who contradicted his tale of oppression. Think what we all would say about a mainstream journalist who tried to use lawsuits to prevent the disclosure of bias in a story he had written. In equity, if not in law, when Yon published his account of the encounter, he should have waived any privacy right in keeping the event out of the public record. All the Privacy Act is doing now is suppressing speech about an event that has attracted lots of public attention.
I don't agree with Declan often, but he's got this dead right. He challenged Yon to authorize the release of the CBP records about the event, so we can all evaluate what happened ourselves.
One possibility, if Yon is interested and CBP is willing, might be an authorized disclosure. The CBP cited the Privacy Act; if you read the text of it, you'll see that the law allows disclosure "pursuant to a written request by, or with the prior written consent of, the individual to whom the record pertains." So if the Privacy Act is the sole obstacle to setting the record straight, a mere written request might do the trick.
Until that happens, it looks as though the truth is a privacy victim too.
Posted at 02:51 PM in Random posts | Permalink | Comments (0)
It’s remarkable when you think about it. Right now, this minute,
agents of an authoritarian government are covertly turning on cameras and
microphones in homes and offices all across America, spying on the unsuspecting
and the innocent. They’re recording our every thought, our every keystroke, as
we prepare private documents or visit websites.
And they’re able to do that today thanks to the hard work of privacy advocates.
How did the privacy community end up facilitating surveillance and
espionage on an unprecedented scale?
History, mainly, and a lack of imagination.
The men and women who built the computer industry grew up in a
very different era from those who pioneered the air travel industry. Air travel enthusiasts first launched
commercial flights between the two world wars, when government was big and
military risks were on everyone’s mind. The pioneers were children of their
age. They foresaw a world in which air
travel was used for military and espionage purposes; they understood that
unregulated flights could lead to disaster as the skies filled up. To manage
those risks, they helped the government fashion a comprehensive regulatory
scheme for pilots, airlines, and airplanes.
Computer technology, in contrast, was born in the wake of World
War II, at a time when the challenge of totalitarianism was on everyone’s mind.
The men and women who built the earliest computers were children of a different
era. They most feared that their
machines would be misused by authoritarian governments. Unlike an earlier
generation of technologists, they struggled to limit government’s role in their
industry. And they succeeded. From electronic intercepts to information
processing practices, for the next forty years, laws on information technology
were aimed as much at regulating the government as at regulating the industry.
By the time the threat of widespread computer misuse finally
arrived, the privacy groups already had a narrative fixed in their mind. They
could not imagine any threat to computer users’ privacy that could be worse
than the one they saw in the United States government. Saying no to the
government was their default position.
Posted at 08:33 AM in Excerpts from the book, Excerpts from the book -- Chapter 7 | Permalink | Comments (0)
According to recent coverage, Michael Yon is very quietly recanting his original claim that TSA handcuffed him for refusing to answer questions about his income. A blog post in the American Thinker is a classic. The author is so determined to trash TSA that he begins his article this way:
Give me one good reason to buy a ticket on any domestic airline in the United States. Death in the family? I'll walk. Business trip? Let's try video conferencing. Vacation? Go Amtrak!
Michael Yon - a guy I've dubbed this generation's Ernie Pyle - was placed in handcuffs at the Seattle airport, not because he showed up on a terrorist's watch list but because he refused to divulge to the TSA bullies how much money he made.
With that off his chest, the author admits that the only problem with the rant is that TSA had nothing to do with Yon's experience, after which the author returns immediately to trashing TSA for something it didn't do.
Michael discovered later that it was not TSA but rather Customs that was asking questions no American should have to answer. You have to wonder how bad things would have been if the underwear bomber had succeeded.
And, Michael, for your information, Customs and Border Protection officers have been asking people about their money for, oh, a century or two. Carrying more than $10,000 across the border without declaring it is a crime, to give one example of why they might be interested. And all that time you spent in Afghanistan? I for one would like to know that people who say they spent months in Afghanistan reporting on the conflict were actually doing that, not participating in the fight on the wrong side. And I would hope they'd ask those questions no matter what last name the traveler has and no matter how Anglo the traveler looks.
After all, a chip that big probably has to be declared.
But the incident also raises questions about Michael Yon's reporting. CBP's uniforms are black, not blue, and they don't say TSA anywhere. CBP meets you when your plane lands, not before it takes off, and it doesn't put everyone's bags through an x-ray or step you through a magnetometer. Instead, the officer asks you for your passport, and says "Welcome home." It's not that hard to tell the difference between CBP and TSA officers, especially if you get close enough to them to, you know, refuse to answer their questions and get handcuffed.
I've been assuming that Yon's reportage from Afghanistan is (a) great stuff and (b) a harbinger of what journalism will become. Now I doubt (a) and fear (b). Really, this is the Internet at its worst -- recursive broadcasts of a story that gets everyone's juices flowing and turns out to be utterly bogus.
Come on, Michael, the mainstream media would have published a very clear correction by now. That's the least you can do.
Posted at 05:37 PM in Random posts | Permalink | Comments (6)
It didn't matter how obviously necessary a security measure
was. Resistance to any change was
strong. A case in point was the effort to install intrusion monitoring on the
federal government's own networks.
To succeed, most cyberattacks must do two things. The hackers
first have to get malicious code into the network they’ve targeted. Then they
have to get stolen information out. If we can detect either step, we can thwart
the attack. So one way to defend our networks is to do a thorough job of
monitoring traffic as it goes in and out.
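The mechanics are simple enough to sketch. Here is a toy version, in Python, of the signature matching such a monitor performs on traffic in both directions. The signatures and the sample traffic are invented for illustration; they are not drawn from any actual government system, which would use far richer and often classified rule sets.

# Toy sketch of two-way signature monitoring. All signatures and
# traffic below are hypothetical, invented for this example.
INBOUND_SIGNATURES = [b"\x90\x90\x90\x90", b"cmd.exe /c"]   # markers of incoming malicious code
OUTBOUND_SIGNATURES = [b"BEGIN CLASSIFIED", b"ssn="]        # markers of outgoing stolen data

def flagged(payload: bytes, direction: str) -> bool:
    """True if this payload matches a known signature for its direction."""
    sigs = INBOUND_SIGNATURES if direction == "in" else OUTBOUND_SIGNATURES
    return any(sig in payload for sig in sigs)

# Simulated traffic: (direction, payload)
traffic = [
    ("in",  b"GET /index.html HTTP/1.1"),          # ordinary request: passes
    ("in",  b"\x90\x90\x90\x90 exploit payload"),  # caught on the way in
    ("out", b"weekly status report attached"),     # ordinary reply: passes
    ("out", b"BEGIN CLASSIFIED personnel file"),   # caught on the way out
]

for direction, payload in traffic:
    if flagged(payload, direction):
        print(f"ALERT ({direction}bound): {payload!r}")

Everything hard about the real thing lies in scale and in keeping the signatures current; the point of the sketch is just the symmetry -- catch the malware going in or catch the stolen data going out, and the attack fails.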
We’ve known this for a decade. The Clinton administration’s
cybersecurity strategy, drafted in 1999 and released in early 2000, called for
a network of intrusion detection monitors that could inspect packets going into
and out of all federal government networks. President Clinton requested funds for
intrusion monitoring in his outgoing budget. But civil libertarians
quickly launched a campaign against it.
It was an odd battle for them to choose. The point of the
monitoring network was to inspect government communications. Even the most
extreme privacy zealot shouldn’t be shocked to discover that the government was
reading its own mail, much less that
it was inspecting its mail for malware.
By then, government agencies were already screening emails for spam; the
intrusion detection network simply extended that concept to other unwanted
packets. What’s more, since roughly the 1980s, these computers had been
displaying warnings to users that government systems are subject to monitoring.
But privacy groups were spoiling for a fight. They portrayed the
proposal as the second coming of Big Brother.
"I think this is a very frightening proposal," an ACLU
representative told ZDNet News.
"We feel the government should spend its resources closing
the security holes that exist, rather than to watch people trying to break
in," said a counsel for the Center for Democracy and Technology.
"I think the threats (of network vulnerability) are
completely overblown," said the general counsel for the Electronic Privacy
Information Center, adding that claims of a security threat are leading to
"a Cold War mentality" that threatens ordinary citizens' privacy.
In the end, civil liberties resistance was so strong that only the
Defense Department was allowed to build an intrusion detection network. For
years thereafter, the civilian agencies experienced intrusions that could have been thwarted by the intrusion detection network President Clinton proposed. But once burned, twice shy. The privacy groups had thoroughly tainted the idea of intrusion monitoring on the Hill, and there was real
reluctance to revisit the issue. When the Bush Administration wrote its
cybersecurity strategy, it did not even try to revive the idea.
Finally, though, five years later, the Bush Administration decided
to force the issue. Mike McConnell, the Director of National Intelligence, had
been my boss at NSA, and he had spent the years after leaving NSA building a
cybersecurity practice at a large consulting firm. A quiet, self-deprecating
Southerner with a talent for briefing higher-ups, McConnell was determined to
move cybersecurity to the front burner.
He didn’t have to work too hard to persuade DHS to take on the
challenge. We were alarmed at the ease with which attacks were being launched
against civilian agencies. With the backing of President Bush and Mike
McConnell, we again proposed an intrusion detection network for civilian
agencies. And civil libertarians once again renewed the fight to stop us – as
though nothing had changed in ten years. Without the slightest evidence of
irony, they again raised privacy objections to the government monitoring its
own communications.
We got further than President Clinton did, but not much. Congress
appropriated funds for the project, but it had not been fully implemented when
Barack Obama was elected President. Spooked by the privacy outcry, the Obama
Administration postponed full implementation of intrusion monitoring so that it
could again examine all of the privacy issues. Pilot projects are underway, but
final decisions about how, when, and whether to implement effective intrusion
monitoring are still awaiting consensus among the lawyers.
Meanwhile, attacks similar to those that compromised the Dalai Lama’s network are continuing. The privacy debate has already cost us ten years of delay, and it may yet kill effective intrusion monitoring altogether.
Posted at 08:26 AM in Excerpts from the book, Excerpts from the book -- Chapter 7 | Permalink | Comments (0)
Somehow, this problem too ended up on DHS’s plate. We were supposed
to figure out what could be done to improve the country's network security.
It was a snakebitten assignment. Two Presidential cybersecurity
strategies – one devised by the Clinton Administration and one by the Bush
Administration -- had already run aground before DHS was created.
Perhaps those who created DHS hoped that it could succeed where
two Presidents had failed. In any event, they gave the new department
responsibility for civilian cybersecurity. The National Communications System,
which ensures the availability of telecommunications in the event of an
emergency, was transferred from Defense. The FBI gave up its National
Infrastructure Protection Center, which focused on cybersecurity (and promptly
recreated the capability under another name so that it could keep fighting for
the turf). The Federal Computer Incident Response Center, which handled
computer incident response for civilian agencies, came over from the General Services Administration.
These offices fit well with other DHS missions. Two of its big components -- the Secret
Service and Immigration and Customs Enforcement -- have cybercrime units. And DHS was supposed to
protect from physical attack the critical infrastructure on which the economy
depends.
In carrying out these duties, DHS could get technical help from
the National Security Agency, which was in charge of protecting military and
classified networks. But responsibility for civilian cybersecurity left DHS on the hot seat. If we couldn’t find a way to head off
disaster, no one else in government would.
For the first few years of the department’s existence, to be
candid, we didn’t accomplish much. There were lots of reasons for that. Fixing
travel and border security was more urgent. Staff turnover was high and
expertise thin in our cybersecurity
offices. But the real reason we didn’t get far was that the same forces arrayed
against change in the travel arena were lined up against change in information
technology.
Businesses had staked their futures on continued exponential
growth in information technology. They didn’t want policy changes that might
change the slope of that curve even a little. Privacy groups instinctively
opposed anything that would give the government more information about, well,
about anything. And even when it was supportive, the international community
was so slow to change direction that it posed an obstacle to any policy that
was less than twenty years old.
Posted at 08:15 AM in Excerpts from the book, Excerpts from the book -- Chapter 7 | Permalink | Comments (0)
Here's one: Honor Canadian law. Keep the data. And take the long way 'round.
Posted at 08:46 PM in Random posts | Permalink | Comments (0)
I think the verbs tell the story. (Okay, maybe they're gerunds.) Anyway, our role as individuals is to --
(1) follow
(2) prepare to follow
(3) inform
Really, pretty much what you'd expect from any self-governing people.
Do you see any way that those three verbs could possibly lead to approval of home medkits?
Neither do I.
In fact, I think the document resolves the bureaucratic battle conclusively against DIY preparedness. It says individuals are supposed to "follow guidance" about keeping food and other materials at home. But in case you didn't understand the first time that you're only supposed to do what the government tells you, the bit about keeping materials at home gets an added and quite redundant qualifier. While you're following government guidance about keeping materials at home, remember that you're only to keep materials "as recommended by authorities."
So the bad news would appear to be that the administration isn't going to help you prepare a home medkit. No standard packaging and labels, no encouragement for doctors to prescribe the kits responsibly, no sober discussion of the risks.
The good news is that I still haven't been threatened with prosecution for promoting off-label use of antibiotics. So when you're asked why you got a medkit on your own, you can say you're just keeping the material at home "as recommended by an authority" -- Skating on Stilts.
Posted at 11:55 AM in Random posts | Permalink | Comments (0)
Stop for a moment to imagine the scene. Postal workers will be asked to drive into contaminated neighborhoods even though they can't be sure their countermeasures will work against whatever strain has been spread there. The neighborhoods are full of people desperate to get antibiotics, so for protection, the postal workers will first have to meet up with guys with guns whom they've never seen before. They'll collect antibiotics from pickup points that they may or may not have gone to before. They'll meet the guys with guns there, or someplace else that may have to be made up at the last minute. Then they'll start out on routes that almost certainly will be new to them. As they go, they will seamlessly and fairly make decisions about whether to deliver the antibiotics to homes where no one is present, to rural mailboxes that may or may not be easily rifled, to people on the street who claim to live down the way, to the guys with guns who are riding with them and have friends or family at risk, and to men in big cars who offer cash for anything that falls off the truck.
And all this will put antibiotics in the hands of every single exposed person within 48 hours, from a no-notice standing start.
Yeah, that should work.

I got the prescription.
Some public health officials may try to make you feel guilty about "hoarding" antibiotics or contributing to antibiotic resistance. Poppycock. If you buy while supplies are plentiful, you're actually making a bigger market for these products and contributing to the maintenance of production capability. And if you don't take them irresponsibly, you won't affect resistance.
In fact, you're even being socially responsible. If we do suffer an anthrax attack and the Postal Service is having trouble keeping up, a sure bet if ever there was one, you can defer your delivery in favor of someone who has no stash. You'll take a bit of strain off a system that is going to need all the relief it can get.
(In addition to the glow of virtue, you can feel a bit of that leftover 60s civil disobedience thrill. When I tried to put this home stockpile advice in a speech toward the tail end of the last administration, I was informed by the lawyers that advocating an unapproved use of prescription medicine is a criminal offense under FDA law. And, while taking antibiotics for an anthrax attack is an approved use, getting antibiotics in case of an anthrax attack is not an approved use. I think that may mean that this post is, um, a felony. If so, well, power to the people and come and get me, coppers!)
What's unfortunate about the executive order is that there's not a hint that the administration is considering the home stockpile as the first and best way to prepare for a possible attack. If that's really the government's last word on the subject, it's like telling passengers that the best response to an air hijacking is to sit tight and wait for the authorities to arrive.
It's insufferably paternalistic and it's bad advice.
The only good part about it is, no one is going to listen.
Posted at 11:26 PM in Random posts | Permalink | Comments (16)