This
is
another
excerpt from my book on technology, terrorism, and
DHS, tentatively titled "Skating on Stilts." (If you want to
read the excerpts in a more coherent fashion, try the categories on the
right labeled "Excerpts from the book." I'm afraid I can't fix the bug
in TypePad that prevents me from putting them in the category in
reverse-chronological order, but I have started putting chapters up in
pdf form from time to time.) Comments and factual quibbles
are welcome, either in the comments section or by email:
[email protected]. If you're dying to order the book, send
mail to the same address.
--Stewart Baker
By the end of the Bush Administration, DHS was used to the idea
that even the most obvious security measures would be opposed by privacy
groups. We still had an obligation to do what we could to head off the growing
security risks. We also knew intrusion
prevention, valuable as it was, wouldn’t do that by itself.
We needed a broader strategy. In mid-2008, the Homeland Security
Council asked DHS to provide options for a set of long-term strategy questions.
The policy office was assigned to pull them together.
We found a lot of tough tactical questions that needed to be
answered, but the real problem was our strategic posture. And we found only two
ideas that offered any hope of curing our strategic vulnerabilities: attribution and
regulation.
Attribution
Here’s our strategic security problem in a nutshell: We are attacked every day by an imaginative,
highly motivated, and anonymous adversary.
We can only prevail if we mount near-perfect defenses. And, since there's no penalty for mounting an
attack, the adversary simply tries again and again until something works.
This defensive strategy is, quite simply, too hard. A wholly
passive strategy almost never works in the real world.
Take burglary. We certainly spend money on defense. A good lock on
your door can keep burglars out of your home. But the lock isn’t all that good
by itself. We take it for granted that burglars can’t sit on our doorsteps day
after day, studying our lock and trying new lockpicks every evening to see what
works. If they could, they'd find a way in sooner or later.
Burglars don’t sit on your doorstep because they're afraid of
being busted. It’s the threat of the police that makes your lock as effective
as it is.
Defending networks is the same kind of problem. Security measures
are all well and good, but unless we can also identify and deter attackers,
defense alone will never do the job.
We have a lot of ways to punish attackers once we identify them.
It's identifying them that’s hard.
We began by trying to use the tools of law enforcement to identify
the attackers. Practically all computer attacks are crimes, after all. They
usually violate fraud, extortion, and computer abuse laws. Many attacks would
be deterred if the perpetrators faced a realistic risk of arrest and
prosecution.
But crossing international boundaries on the Internet is easy.
Attackers discovered very early that they could cover their tracks by breaking
into lightly guarded computers in several countries and hopping from one to the
next before launching an attack on their real target. That way, the police
would have to track them back from country to country before discovering their
real location. And doing that would require subpoenas valid in each country.
That wasn’t easy. To get one country to enforce another country’s
subpoena requires patience and lengthy legal analysis. The country that’s being
asked to enforce the subpoena will only do so if it too views computer attacks
as crimes. It has to have the ability to carry out the search very quickly.
Otherwise the logs will be overwritten and the evidence gone. Indeed, unless
the information can be gathered nearly instantaneously, the attackers will
always have the advantage. They can compromise new machines and add new hops to
their route faster than the police can serve subpoenas to track them.
This problem has been obvious for more than two decades. The
United States began encountering it in the 1980s, and by 1989, it had persuaded
the Council of Europe to propose work on an international agreement to
streamline the identification process. Getting that far took great effort. The Justice Department had to explain
over and over, to less computer-savvy governments, why it needed such an instrument.
Not until late 2001 was there actual agreement in principle on a
few very basic steps – making computer hacking a crime and naming a contact
point to handle subpoena requests quickly. And that simply marked the start of
a long, slow international lawmaking process. The convention didn’t come into
effect until 2004, when a grand total of three countries ratified it. As of
2009, fifteen countries had ratified or acceded to the convention, and
28 more were in various stages of adopting it. As international efforts go,
that is a considerable success (although the numbers are inflated by the
European Union, which has pressed its 27 members to join, along with EU
satellites like Liechtenstein).
And what does the Convention do to solve the attribution
problem? In essence, the members of the
Convention have agreed that they will adopt a common set of computer crimes and
that they will assist each other in investigating these crimes.
That’s it. A good thing, no doubt, but hardly likely to stop the
massive attacks we see today. Hackers have compromised hundreds of thousands,
sometimes millions, of machines. If they chose to hop from one of those to the
next before launching an attack, the authorities would need to serve hundreds
of thousands of subpoenas in dozens of countries – and to do it as fast as the
hackers could move from one machine to the next. The hackers can move at the
speed of light – literally. The
governments can move at the speed of paper, courts, and sealing wax. It's no contest.
At best, the Convention offers a partial solution to computer crime
as it existed in the 1980s. But building a consensus for even its limited measures took over a decade. And even then,
the consensus was geographically narrow. Neither Russia nor
China has shown any inclination to adopt the Convention. Nor, for that matter,
have thoroughly wired countries like South Korea, Brazil, Nigeria, Singapore,
and Australia. So even if we still lived in the 1980s, there would still be
plenty of places in the world for hackers to hide.
The only alternative to the Convention that the international
community has found is worse – and in a thoroughly predictable way. Led by Russia, the United Nations has
recently been touting the idea of “disarmament talks” for cyberspace.
There are several possible motivations for such a proposal. One possibility is that the Russians
genuinely believe that an arms control treaty for cyberspace would be good for
all concerned, demilitarizing and taking the fear of disaster out of the networks
on which the world relies. Unfortunately,
that’s not particularly likely. You
can't have a real arms control agreement unless you can verify
compliance. But as we’ve seen, a principal feature of computer attacks is
the difficulty of attribution. If
attacks continued after “disarmament,” how would we know that anyone had
disarmed?
The Russian model seems to be the multilateral chemical and
biological weapons conventions negotiated in Geneva during the Cold War. By the usual standards of the international
community these are wildly successful agreements, adopted by more than 150
countries. They proved wildly successful
from the Soviet point of view as well, since the United States actually
abandoned its chemical and biological weapons after signing the conventions
while the Soviets kept theirs in place.
Even more remarkably, the United States managed to get a black eye in
the process, because it had the temerity in 2001 to tell the international
community that the convention was unverifiable, that it could not prevent
proliferation of biological weapons, and that there was no point in
establishing intrusive inspection regimes that would not work.
From the Russian point of view, replaying this drama has no
downside. If an agreement is reached,
the US, with its hypercompliant legal culture now fully integrated into
military planning, will undoubtedly adhere to any ban the new agreement
imposes. But countries that want to use
the tools of cyberwarfare will be free to do so, relying on the anonymity that
cloaks attackers today. If the US sees
that trap and refuses to accept an unenforceable agreement, the international
community will replay the drama that accompanied the US refusal to negotiate an
unenforceable biological weapons protocol.
Just agreeing to consider the proposal, as the new Administration
seems to have done, allows Russia to divide us from our allies in Europe – who
always seem eager to put new international legal limits on warfare, even if the
limits can’t actually be enforced.
In the end, then, our inability to solve the problem of
attribution and anonymity poses severe threats not just to our pocketbooks but
to our national security and our international standing. We thought it was foolish to try to solve the
problem with what Harvard law professor Larry Lessig once called “East Coast
code” – laws and treaties. Instead, we
thought, the answer would prove to be “West Coast code” – software and hardware
design. In the long run, we needed an architecture that automatically and reliably
identifies every machine and person in the network.
We knew that privacy groups would melt down if anyone proposed to
do that for the Internet. Anonymity has become (wrongly in my view) equated
with online privacy. Any effort to cut back online anonymity will be resisted
strongly by privacy groups. And they'll be able to find popular support, at
least for a time. Practically everyone
does something online that they are ashamed of.
At the same time, practically everyone spends large parts of the
day on a network where their every action is identified and monitored.
Most corporate networks have robust attribution and audit capabilities, and the
insecurity of the public networks is forcing private networks to study the
conduct of their users ever more closely in the hopes of identifying compromised
machines before they can cause damage.
In trying to chart a broad network security strategy, I thought we
needed more research and incentives to improve audit and attribution
capabilities in hardware and software. And we needed architectural and legal
innovations to encourage one secure and attributable network to link up
securely with another. In the long run, and perhaps in the short run, that sort
of organic linking among attributable systems may be the only way to build a
network on which identification is rapid and sure.
That doesn’t mean the old, anonymous Internet has to disappear.
But I suspect we’ll have to create a new network that coexists alongside the
old one. Users who value security – who want an assurance that their financial
assets and their secrets will not be stolen by hackers – will choose the secure
alternative, at least most of the time.
The policy office at DHS put that idea forward as an option for
consideration by the Homeland Security Council.