Foreign Policy has published my article on how attribution can be used to deter foreign governments' cyberespionage. Excerpts below:
The Obama-Xi summit in Sunnylands ended without any Chinese concessions on cyber-espionage. This came as no surprise; cyber spying has been an indispensable accelerant for China's military and economic rise. And though Beijing may someday agree that international law governs cyberspace, that won't help the victims of espionage, which is not regulated by international law. So if negotiation won't work, what will? Not a strategy that relies entirely on defense. That's like trying to end street crime by requiring pedestrians to wear body armor.
The good news is that there has been a revolution in our ability to identify cyberspies. It turns out that the same human flaws that make it nearly impossible to completely secure our networks are at work in our attackers too. And, in the end, those flaws will compromise the anonymity of cyberspies...
But attribution is only half the battle if we want to deter cyber-espionage. The other half is retribution. Once we identify the attackers, we need to persuade them to choose another line of work. If we're serious about stopping cyberespionage, there are plenty of tools at our disposal ...
The government already uses classified information to label terrorist supporters and drug kingpins as "specially designated nationals" and to impose sanctions on them -- seizing their bank accounts and assets, for example, and prohibiting U.S. citizens from doing business with them. The United States even has such programs for sanctioning Belarusian kleptocrats and conflict diamond purveyors. Maybe it makes sense for Washington to use sanctions to punish misdeeds in Belarus or West Africa, but shouldn't it first use these measures to punish people who are invading homes and offices in, you know, the United States?
It's unclear why the president hasn't done this already -- he has all the authority he needs to impose sanctions on cyber spies and their enablers. Under the International Emergency Economic Powers Act, the president could determine that cyber spying poses "an unusual and extraordinary threat" to the United States and declare it a "national emergency." He could then publish a list of hackers who would be subject to sanctions. In keeping with past practice, he could rely heavily on classified data to make the designations -- without disclosing any of it....
But punishing individual hackers is only part of the story. What if the United States applied all of these measures not just to the hackers themselves but to companies that benefit from the data they filch from U.S. networks? There's no difference in criminal responsibility between a thief and the customer he's stealing for. But there could be all the difference in the world between hackers who do their work from the safe environs of a protective government and the hackers' customers, who can't be truly successful in today's world if they aren't part of the global marketplace. And going global means exposing their companies, executives, and assets to the legal systems of the United States, Europe, and a host of other countries that are furious at the wholesale espionage aimed at their companies. If a few big companies in China find that having a cozy relationship with hackers means criminal prosecutions and asset seizures, they're a lot more likely to say "Thanks, but no thanks" to offers of stolen data.
Of course, to bring those cases, the government will have to have those companies dead to rights, and so far it doesn't. U.S. security researchers have done a great job of tracking the thieves back home. But they've had trouble identifying the companies that ultimately benefit from cyberspying.
That too is an attribution problem -- the next one we have to solve if we want to really discourage commercial cyber-espionage. It will be difficult, but no harder than the first attribution problem looked five years ago. Given the stakes, improving cyber-attribution should be at the top of U.S. intelligence priorities. And now that private researchers have demonstrated how much attribution can be accomplished without all the resources and authorities of the CIA and NSA, those agencies should be embarrassed by their poor record to date. And they may not have much time before someone -- Iran, North Korea, Hezbollah -- causes a power outage or other control system failure in the United States. If they can't tell the president who did that, the heads of those agencies will be looking for new jobs. As part of the attribution effort the United States needs for defense, it shouldn't be that hard to identify the customers who benefit from cyber-espionage....
In recent months, the Hill has been buzzing with new ideas for identifying and punishing cyberspies and the companies that benefit from them.
At a recent hearing before the Senate Judiciary Committee's Subcommittee on Crime and Terrorism, I testified about some of these ideas. Senators Sheldon Whitehouse (D-RI) and Lindsey Graham (R-SC) expressed particular interest in measures to impose sanctions on countries that support hackers as well as potential visa restrictions.
Another example is the Deter Cyber Theft Act (S. 884), sponsored by a bipartisan group of senators that includes Carl Levin (D-MI), John McCain (R-AZ), Tom Coburn (R-OK), and Jay Rockefeller (D-WV). This bill would require intelligence agencies to report annually to Congress on countries and entities that engage in cyber-espionage and to identify intellectual property that has been stolen as a result of hacking. It would also permit the president to block the importation into the United States of products linked to foreign cyber-espionage -- articles manufactured using stolen IP, for example, or produced by companies that have benefited from it. In short, the bill would nudge the government toward broader attribution, greater naming and shaming, and some effort to deny companies the fruits of using stolen information.
If these measures result in the punishment of Chinese companies, there is no doubt that China will seek to reciprocate. But once again, asymmetry is likely to complicate its task. U.S. intelligence agencies do not steal commercial secrets for U.S. companies, so it will be hard for China to mirror these measures without faking the evidence. In short, a focus on the beneficiaries of commercial espionage could cause real pain for cyber spies and their customers.
The Director of National Intelligence issued a statement late last night about the NSA collection flap. It's the smartest thing the government has released so far, and its justification for the program in question seems to confirm my speculation in Foreign Policy yesterday.
First, large-scale collections give the government a way to screen for patterns in communications that will bring to light terrorists who are unknown to the government. As the DNI puts it, "The collection is broad in scope because more narrow collection would limit our ability to screen for and identify terrorism-related communications. Acquiring this information allows us to make connections related to terrorist activities over time."
Second, the government justifies collecting a reservoir of data because it is only allowed to consume the data a spoonful at a time. Here's the DNI:
In short, there's less difference between this "collection first" program and the usual law enforcement data search than first meets the eye. In the standard law enforcement search, the government establishes the relevance of its inquiry and is then allowed to collect the data. In the new collection-first model, the government collects the data and then must establish the relevance of each inquiry before it's allowed to conduct a search.
If you trust the government to follow the rules, both models end up in much the same place. I realize that some folks simply will not trust the government to follow those rules, but it's hard to imagine a system with more checks, restrictions, and double-checks than one that includes all three branches and both parties looking over NSA's shoulder.
In theory, you could add the check of exposing the system to the light of day, but that means wrecking much of its intelligence value. Or you could simply prohibit the collection-first model (and lose the ability to spot terrorism patterns by matching disparate bits of data). I doubt that those "solutions" are worth the price.
There may be a lot less to the NSA “scandal” than meets the eye. In an article for Foreign Policy, I explain why I am quite confident that the program underlying the FISA court order is lawful:
[T]his is not some warrantless or extra-statutory surveillance program. The government had to persuade up to a dozen life-tenured members of the federal judiciary that the order is lawful. You may not like the legal interpretation that produced this order, but you can’t say it’s lawless.
In fact, it’s a near certainty that the underlying program has been carefully examined by all three branches of government and by both parties. As the Guardian story makes clear, Senator Ron Wyden has been agitating for years about what he called an interpretation of national security law that goes beyond anything the American people understood or would support. He could easily have been talking about orders like this. So it’s highly likely that the law behind this order was carefully vetted by both intelligence committees, Democrat-led in the Senate and Republican-led in the House. (Indeed, today the leaders of both committees gave interviews defending the order.) And in the executive branch, any legal interpretations adopted by the Bush administration would have been carefully scrubbed by President Obama’s Justice Department.
The two other questions about the program are why the collection is so sweeping and how something that broad can be lawful. Here's my guess about an answer to the first question:
Imagine that the United States is intercepting al Qaeda communications in Yemen. Its leader there calls his weapons expert and says, “Our agent in the U.S. needs technical assistance constructing a weapon for an imminent operation. I’ve told him to use a throw-away cell phone to call you tomorrow at 11 a.m. on your throw-away phone. When you answer, he’ll give you the number of a second phone. You will buy a phone in the bazaar, and call him back on the second number at 2 p.m.”
Now, this is pretty good improvised tradecraft, and it would leave the government with no idea where or who the U.S.-based operative is or what phone numbers to monitor. It doesn’t have probable cause to investigate any particular American. But it surely does have probable cause to investigate any American who makes a call to Yemen at 11 a.m., Sanaa time, hangs up after a few seconds, and then gets a call from a different Yemeni number three hours later. Finding that person, however, isn’t easy, because the government can only identify the suspect by his calling patterns, not by his name.
So how does the NSA go about finding the one person in the United States whose calling pattern matches the terrorists’ plan? Well, it could ask every carrier to develop the capability to store all of their calls and to search them for patterns like this. But that would be very expensive, and its effectiveness is really only as good as the weakest, least cooperative carrier. And even then it wouldn’t work without massive, real-time information sharing -- any reasonably intelligent U.S.-based terrorist would just buy his first throwaway phone from one carrier and his second phone from a different carrier.
The only way to make the system work, and the only way to identify and monitor the one American who is plotting with al Qaeda’s operatives in Yemen, is to pool all the carriers’ data on U.S. calls to and from Yemen and to search it all together -- and for the costs to be borne by all of us, not by the carriers.
In short, the government has to do it.
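The hypothetical above boils down to a pattern search over pooled call metadata. As a toy illustration of that idea -- the record layout, phone numbers, and thresholds here are all invented for the sketch, not anything drawn from the actual program -- the matching logic might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Call:
    us_number: str      # the US phone involved
    yemen_number: str   # the Yemeni phone involved
    direction: str      # "out" = US-to-Yemen, "in" = Yemen-to-US
    start: datetime
    duration_s: int     # call length in seconds


def suspects(calls, gap=timedelta(hours=3),
             slack=timedelta(minutes=10), max_first_call_s=30):
    """Return US numbers matching the hypothesized tradecraft: a brief
    outbound call to Yemen, then an inbound call from a *different*
    Yemeni number roughly `gap` later."""
    hits = set()
    for first in calls:
        if first.direction != "out" or first.duration_s > max_first_call_s:
            continue  # only brief US-to-Yemen calls start the pattern
        for second in calls:
            if (second.direction == "in"
                    and second.us_number == first.us_number
                    and second.yemen_number != first.yemen_number
                    and abs((second.start - first.start) - gap) <= slack):
                hits.add(first.us_number)
    return hits


# Invented sample data: one number fits the pattern, one doesn't.
calls = [
    Call("202-555-0100", "967-1-111111", "out", datetime(2013, 6, 1, 11, 0), 8),
    Call("202-555-0100", "967-1-222222", "in", datetime(2013, 6, 1, 14, 2), 340),
    Call("202-555-0177", "967-1-333333", "out", datetime(2013, 6, 1, 9, 15), 600),
]
print(suspects(calls))  # {'202-555-0100'}
```

The point of the sketch is the column's argument in miniature: no single record is suspicious, and no name is known in advance; the suspect only emerges from matching disparate records against each other, which requires having all of them in one place.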
And here's my guess about how to answer the second question:
The technique that squares that circle is minimization. As long as the minimization rules require that all searches of the collected data must be justified by probable cause, Americans are protected from arbitrary searches. In the standard law enforcement model that we’re all familiar with, privacy is protected because the government doesn’t get access to the information until it presents evidence to the court sufficient to identify the suspects. In the alternative model, the government gets possession of the data but is prohibited by the court and the minimization rules from searching it until it has enough evidence to identify terror suspects based on their patterns of behavior.
That’s a real difference. Plenty of people will say that they don’t trust the government with such a large amount of data, that there’s too much risk that it will break the rules, even rules enforced by a two-party, three-branch system of checks and balances. Even I, when I first read the order, had a moment of chagrin and disbelief at its sweep.
But for those who don’t like the alternative model, the real question is “compared to what?” Those who want to push the government back into the standard law enforcement approach will have to explain how it will allow us to catch terrorists who use half-way decent tradecraft -- or why sticking with the standard approach is so fundamentally important that we should do so even if it means more acts of terror at home.
"The commission argued that American companies “ought to be able to retrieve their electronic files or prevent the exploitation of their stolen information” by designing their computer files to self-destruct if they fall into the wrong hands. But the authors of the report also say that if the damage “continues at current levels,” the government should consider allowing American companies to counterattack — essentially taking cyberwar private.
“If counterattacks against hackers were legal, there are many techniques that companies could employ that would cause severe damage to the capability” of the Chinese or other groups committing computerized theft, the report said. But it added a qualifier: “while properly empowered law enforcement authorities are mobilized.” Many in the administration have opposed such ideas, fearing that they could lead to a cycle of escalation between the United States and other nations that could easily spin out of control."
The commission also adopts another view first popularized here: that attribution of attacks should be followed by retribution, and it comes up with at least one clever bit of retribution that I'd missed: restrictions on access to US stock exchanges:
"The new report does propose specific remedies. One is to mandate that foreign companies that want to be listed on stock exchanges in the United States first pass a review by the Securities and Exchange Commission about whether they use stolen intellectual property. “They all want their shares to be traded here, so this would impose a real cost,” Mr. Blair said. Similarly, whether companies protect intellectual property would be considered by the Committee on Foreign Investment in the United States, which judges whether an investment in the United States could pose a security risk. Currently it looks only at national security implications of investments; this would add a new criterion."
UPDATE: The actual report has now been released and is available here. It is not quite as aggressive as the early press coverage suggested, though it still represents movement away from the Justice Department's conventional wisdom. Here's the recommendation, which appears under the heading "Reconcile necessary changes in the law with a changing technical environment":
When theft of valuable information, including intellectual property, occurs at network speed, sometimes merely containing a situation until law enforcement can become involved is not an entirely satisfactory course of action. While not currently permitted under U.S. law, there are increasing calls for creating a more permissive environment for active network defense that allows companies not only to stabilize a situation but to take further steps, including actively retrieving stolen information, altering it within the intruder’s networks, or even destroying the information within an unauthorized network. Additional measures go further, including photographing the hacker using his own system’s camera, implanting malware in the hacker’s network, or even physically disabling or destroying the hacker’s own computer or network. ... Finally, new laws might be considered for corporations and individuals to protect themselves in an environment where law enforcement is very limited. Statutes should be formulated that protect companies seeking to deter entry into their networks and prevent exploitation of their own network information while properly empowered law-enforcement authorities are mobilized in a timely way against attackers. Informed deliberations over whether corporations and individuals should be legally able to conduct threat-based deterrence operations against network intrusion, without doing undue harm to an attacker or to innocent third parties, ought to be undertaken.
I'm testifying today on supply chain vulnerabilities and cybersecurity. The testimony is in a hearing held by the House Commerce Committee's Subcommittee on Communications and Technology. Here's my quick diagnosis of the issue:
Intrusions on our networks have reached new heights. They have moved from penetration of government and military systems to wholesale compromises of companies, trade associations, think tanks, and law firms. Most of these attacks have been carried out for espionage purposes – stealing commercial, diplomatic, and military secrets on a massive scale.
This espionage campaign has paid dividends for our adversaries, and it’s likely to pay more, because any network that can be compromised for the purpose of espionage can be compromised for the purpose of sabotage. The next time we face the prospect of a serious military conflict, we can expect our adversaries to threaten the destruction of computer networks – and the civilian infrastructure they support – inside the United States, probably before we have fired a shot. From the American point of view, this is a new and profoundly destabilizing vulnerability. From our adversaries’ point of view, it is an exciting new weapon with enormous potential to neutralize many of our traditional military advantages.
To make things worse, one of the countries that the Obama administration has criticized most often for cyberattacks, China, is also a major supplier of increasingly sophisticated electronic equipment to the United States. Given the value of cyberespionage for waging both war and peace, it’s only reasonable to assume that every potential adversary asks itself whether it can make the job of its cyberwarriors easier by tinkering with electronic gear before it’s shipped to the United States. Or, as I put it in Skating on Stilts, a book about technology challenges to policymakers, if the “countries that [view] us as an intelligence target … could get their companies to compromise U.S. networks, they’d do it in a heartbeat.”
The remainder of the testimony discusses the limited legal authority that government has to deal with the risk of "intrusion-friendly" technology from abroad:
CFIUS is an inadequate tool for this job. It gives the government only haphazard insight and leverage over the security of telecommunications and information technology. That’s because CFIUS has jurisdiction only over corporate acquisitions. Team Telecom, which I also oversaw from a DHS perspective, adds a bit to that authority, giving national security agencies an ability to impose conditions on foreign telecommunications carriers seeking Federal Communications Commission licenses to operate in the United States. But Team Telecom has no explicit authority in law; its reach is no greater than the FCC’s. As a result, even the most dangerous and unreliable suppliers of commercial telecom and IT equipment are free to sell their products in the United States without an inquiry into the security risks the products may pose.
I close with a look at new measures emerging from the government's recent focus on this risk, from the executive order on cybersecurity to various provisions adopted under the defense authorization or the appropriations process.
Full testimony is here: Baker testimony to House Commerce on supply chain security.
I'll be testifying this morning before the Senate Judiciary Committee's subcommittee on crime and terrorism. My testimony will touch on the Attribution Revolution in cybersecurity, the need to move from attribution to creative forms of retribution, and the need to give victims more leeway to investigate the hackers who attack them. Here are some excerpts:
That is why I will focus my remarks today on what is shaping up to be an “attribution revolution.” The theory is simple. The same human flaws that have left our networks ever more exposed to attack are undermining our attackers’ anonymity. This is what I like to call Baker’s Law: “Our security may be toast. But so is theirs.”
As numerous recent reports show, attackers are only human. They make mistakes when they’re in a hurry or overconfident. They leave bits of code behind on abandoned command-and-control computers. They reuse passwords and email addresses and computers. Their remote access tools are full of vulnerabilities. These are openings that private researchers – from Mandiant and Trend Micro to SecDev and the Citizen Lab – have exploited; they’ve traced cyberattacks to the command-and-control computers used to carry them out, then to the homes and offices of the hackers who perpetrate them. These reports have identified individuals and institutions closely associated with hacking US companies and agencies. They’ve found the universities where the hackers trained. They’ve found the hackers’ names and instant message addresses. Using these clues, researchers have even tracked the hackers down and called them up for comment. They’ve found the companies that employ the hackers today. In at least one case, hacking victims in the Republic of Georgia have turned the tables and used their attackers’ malware to take an attacker’s picture with his own desktop camera.
The attribution revolution has truly begun.
But attribution is only half of the formula if we want to deter cyberespionage. The other half is retribution. Once we identify our attackers, we need to persuade them to choose another line of work.
That does not necessarily mean that we should rely exclusively or even primarily on the Department of Justice or the Federal Bureau of Investigation. We must look beyond traditional criminal prosecutions to deter cyberespionage.
This brings me, finally, to the role that private companies should play. I’ll be blunt. We can't rely exclusively on the Federal Bureau of Investigation. ... We need better ways to draw on the resources of the private sector and their investigators.
Right now, however, the Justice Department is doing more to hurt than to help companies that want to respond aggressively to the theft of their secrets and their intellectual property.
Let me give you one example. Suppose that a private investigator finds that data is being exfiltrated from his client to a particular command and control server. If the server is in the United States, the investigator may be able to persuade the owner, who is probably himself a hacking victim, to grant access to the server. This happens a lot, and it has great value, especially for attribution. The investigator may be able to identify the attackers and even recapture some of the stolen data.
But what if the hackers get wise and move the server to another location that they actually own? Can the investigator follow them to that other server and use what he knows about the gang’s passwords to get access to the evidence and the stolen data stored there?
Not according to the United States Department of Justice, which has begun actively and publicly discouraging any investigations that do not rely on the consent of the network owner, even when the network owner is the hacker himself. Recently, an anonymous Justice Department spokesman told Bloomberg BNA that intruding on an attacker’s network would be both bad policy and “likely a violation” of the Computer Fraud and Abuse Act.
This is unfortunate in so many ways that I can understand why the spokesman insisted on anonymity.
Here's a link to the whole thing: Download S Baker- Crime and Terrorism SubCommittee Testimony 5-7-13 - Attribution Revolution. (And, yes, I bowdlerized Baker's Law for the august halls of Congress.)
Most people know that China's largest telecommunications supplier, Huawei, has been largely excluded from the US market because of official allegations that it will enable Chinese cyberespionage and wiretapping. What none of us realized, apparently, is the real reason that Huawei's been forced out.
Luckily, the company's head of Cyber Security, John Suffolk, is happy to set us straight. According to his blog, it's because Huawei is too much of a civil liberties hero to be allowed into the US market: "Maybe this is why America doesn't want us to sell our equipment to American companies; maybe they will worry that we will see what they do with American Citizens personal data, monitoring and storing of everything that passes through telecommunications."
When I said in a recent post that, "The ACLU must be really popular these days in Beijing," I didn't realize how quickly China's advocates would start channeling American civil libertarians.
If you’re looking for laws of unintended consequences, you can’t do better than privacy. Take two examples plucked from last week’s front pages:
Here’s the New York Times reporting on massive fraud in the billion-dollar settlement of claims that the Agriculture Department discriminated against black, Hispanic, and female farmers:
“It was the craziest thing I have ever seen,” one former high-ranking department official said. “We had applications for kids who were 4 or 5 years old. We had cases where every single member of the family applied.” The official added, “You couldn’t have designed it worse if you had tried.”
… “[T]here was no way to refute what they said,” said Sandy Grammer, a former program analyst from Indiana who reviewed claims for three years. “Basically, it was a rip-off of the American taxpayers.”
The true dimensions of the problem are impossible to gauge. The Agriculture Department insists that the names and addresses of claimants are protected under privacy provisions.
And here’s a Boston Herald report on its attempt to find out how many benefits the Tsarnaevs received before their bombing attack on the Boston Marathon:
The Patrick administration clamped down the lid yesterday on Herald requests for details of Tamerlan Tsarnaev’s government benefits, citing the dead terror mastermind’s right to privacy.
Across the board, state agencies flatly refused to provide information about the taxpayer-funded lifestyle for the 26-year-old man and his brother and accused accomplice Dzhokhar Tsarnaev, 19.
On EBT card status or spending, state welfare spokesman Alec Loftus would only say Tamerlan Tsarnaev, his wife and 3-year-old daughter received benefits that ended in 2012. He declined further comment.
On unemployment compensation, labor department spokesman Kevin Franck refused to say whether Tamerlan Tsarnaev ever collected, saying it was “confidential and not a matter of public record.”
On Dzhokhar Tsarnaev’s college aid, University of Massachusetts Dartmouth spokesman Robert Connolly said, “It is our position — and I believe the accepted position in higher education — that student records including academic records and financial records (including financial aid) cannot under federal law be released without a student’s consent.”
On cellphones, the Federal Communications Commission would not say whether either brother had a government-paid cellphone, also citing privacy laws.
Who knew? Thanks to privacy law, people making dubious claims on a judgment fund don't have to be identified as though they were litigants; and benefit recipients are protected from embarrassment even after death has made embarrassment the least of their troubles.
Actually, privacy laws have a long history of unintended consequences. Libertarians were outraged when citizens got arrested for recording the police; but those arrests were often based on state privacy laws that prohibited "eavesdropping" on conversations without all parties' permission. And laws inspired by Louis Brandeis’s famous right to privacy have become the mechanism by which celebrities extract fees for commercial use of their photos.
These unintended consequences aren't really an accident. We think we know what we want when we pass laws protecting privacy, but it turns out that our notions of privacy are remarkably fluid and situational, so by the time the laws are actually applied they don’t actually correspond to our sense of right and wrong. It works about as well as a law codifying and punishing rude behavior in public.
But in another way, there’s nothing at all surprising about the consequences of privacy laws. From arresting citizen photographers to clamping a lid on government scandals, privacy laws almost always turn out to be remarkably convenient for the powers that be.
Again, that’s not an accident. As particular privacy laws lose their connection to evolving cultural standards, we slowly stop enforcing them (see, e.g., Brandeis, supra). But they still get dusted off and enforced in a couple of situations: (1) to punish people whom the authorities don’t like but who haven’t violated any other laws, and (2) to protect the kind of people who end up running the government.
Or, to put it another way, it looks as though privacy laws are doing for the twenty-first century what loitering laws did for the twentieth.
PHOTO: Kai Strandskov
UPDATE: I realized after posting that I had improperly lumped two unintended consequences of privacy laws together in blaming Louis Brandeis for arrests of citizens photographing the police. Instead those arrests are the heritage of privacy campaigners from the 1960s, who insisted that anti-eavesdropping law prohibit all unconsensual recording. Louis Brandeis did, however, inspire the quasi-intellectual-property "right of publicity," an equally unintended outcome of laws adopted to preserve privacy.
There's been considerable speculation about how the government handled Tamerlan Tsarnaev's return from Russia. Before Tsarnaev's return, both the FBI and the CIA had suggested that Tsarnaev belonged in the government's classified terrorist database, and according to some reports an alert for Tsarnaev was entered into the DHS border system. Yet according to Secretary Napolitano these systems "pinged" when Tsarnaev left the country but not when he returned six months later.
The lack of a ping upon Tsarnaev's return to the United States suggests a gap in US border defenses. In general, the outbound "ping" is not a big deal. It tells us that a terror risk is leaving the country, more a matter for celebration than suspicion. We don't usually inspect or question departing passengers, so it would have taken a pretty unusual notice to earn Tsarnaev much scrutiny on departure.
But his return should have been different. He was entering the country, and at the border the government's authority to stop travelers, to question them, and to search their luggage, including their electronics, is at its zenith.
If we have any doubts about the intentions of a returning green-card holder, this is the time and place to question him. When the FBI paid a visit to Tsarnaev's home, Tsarnaev had complete control of the interview. He could throw the agents out whenever he chose, and he could certainly refuse to let them look at his computer and phone.
At the border, though, he can't. We could have learned a lot more about Tsarnaev's journey into radicalism there. For example, the FBI's preliminary investigation included checking to see if Tsarnaev had posted on certain radical Islamist websites, but it couldn't know what he might have downloaded, and it's not clear that the FBI had any way to tell what he might have posted under a pseudonym. Again, the government had its best chance of discovering those things by conducting a secondary inspection of Tsarnaev when he returned from Russia.
So why didn't it? I doubt that it was a lack of DHS resources or a flood of higher priority travelers. There may be half a million people in the terror database, but on any given day, there can't be more than a couple of dozen flying into the United States. Since DHS conducts hundreds if not thousands of secondary inspections at airports every day, you'd expect it to routinely take a look at everyone who's ended up in the terrorist database.
Unless they've been cleared. There was a hint in Secretary Napolitano's testimony that Tsarnaev wasn't interviewed at the border because the FBI had closed its investigation. This may mean that the administration has adopted a policy of treating the closure of an FBI preliminary investigation as "clearing" the subject of the investigation.
There are lots of ways that such a policy could have come about. The FBI could have claimed exclusive authority to interview terror risks at the border, so when DHS calls to say "We've got this guy Tsarnaev coming in; do you want to talk to him?" and the bureau says, "Nah, we closed his case," then DHS is expected to stand down.
If that's the policy, it's dumb. The FBI may have closed its investigation because it didn't find anything using its very limited authorities, but that should not prevent other agencies from using broader authorities to explore the Russian warnings in more detail.
It's also possible that DHS itself has adopted a policy of not inspecting people in the terrorist database if the investigation has been closed. That would be equally dumb; closing an investigation for lack of evidence is not the same as clearing the traveler of all suspicion, especially given the FBI's limited ability to act on a vague tip from Russia.
In any event, this is one place where we should be seeking lessons from the Tsarnaev matter. It sure looks as though the system failed. Tsarnaev should have been given a very close look as he entered the United States. But it seems as though someone -- probably at the FBI, perhaps at DHS or elsewhere -- decided we should just say "Welcome home" and wave him through.
We should know who made that call, and we should know why.
PHOTO source: FBI, Wikipedia
This White House sure knows how to snatch defeat from the jaws of victory.
The President's threat to veto CISPA, delivered in a Statement of Administration Policy, will likely kill cybersecurity legislation for the year.
Here's the sentence that I believe will eat away at support for the legislation among its last defenders in Silicon Valley: "The Administration ... remains concerned that the bill does not require private entities to take reasonable steps to remove irrelevant personal information when sending cybersecurity data to the government or other private sector entities."
Those last four words signal a big change in the status quo. Most companies today can share information voluntarily with the government without legal constraint, though electronic service providers must demand a subpoena before sharing information. And practically all companies, including electronic service providers, may share cybersecurity information with other private companies without worrying that the government is looking over their shoulders.
So in demanding that CISPA limit sharing with "other private sector entities," the Administration is proposing a sweeping new regulatory scheme for the private sector. The scheme will actually impair cybersecurity by restricting the information-sharing companies now conduct to protect their networks.
And while the Statement of Administration Policy tries to make the new regulatory scheme sound less harsh by claiming that it only requires "reasonable" steps to remove "irrelevant" private information, those words are code for "You'll need a lawyer before you share any cybersecurity information with anyone." After all, reasonableness is a famously elastic concept in the law; you only really know whether your actions were reasonable five years after the fact, when the judge rules.
And what is "irrelevant personal data" exactly? Can an ISP identify the IP address of the computers sending DDOS packets toward a victim? Much of the time an IP address is personal -- it identifies an individual, or at least a family. So is it "relevant" under the Administration's new proposal? Maybe. Stopping a DDOS attack is often easier if the victim knows the attackers' IP addresses, but does the ISP have to verify that the IP address will actually help the victim stop the attack before handing it over? Will these quick decisions all be second-guessed at leisure by some privacy bureaucrat?
I say security, you say liability. Let's call the whole thing off.
It's hard to see any company supporting a bill that turns today's largely functional and scandal-free cybersecurity information exchanges into minefields of uncertainty. And in the absence of industry support, CISPA will be SOPA without Hollywood.
What's remarkable is that the President started this debate by asking for almost exactly what the House Intelligence Committee has delivered. Here is the Administration's original legislative proposal on information sharing. On a quick review, I don't see any limitations in the President's proposal on what data the private sector can share -- only limitations on what the government can do with the information it receives. Come to that, I don't see a lot of the things that the President is suddenly highlighting as fatal flaws in CISPA.
So the short version of this story is simple: The President says he will veto CISPA because it lacks features that he didn't even bother to include in his own version of the same bill.
This is some of the flakiest policy making I've ever seen at such a high level, and it strongly suggests that the Administration just isn't that serious about information sharing for cybersecurity.
PHOTO: Donovan Govan
NOTE: For those who complained that Steve McQueen was an anachronistic cultural reference, please note that I have taken your advice to heart and am now appealing to an entirely different generation.
Here's the scant good news on cybersecurity: it’s getting harder for attackers to hide. The same security weaknesses that bedevil our networks can be found on the systems used by our attackers. A shorter version is something I call Baker’s Law: “Our security sucks. But so does theirs.”
That’s good news because, with a little gumption, we can exploit hacker networks, gather evidence that identifies our attackers, and eventually take action that will make them regret their career choices.
Unfortunately, the United States has been sitting out this attribution revolution. Our vaunted Cyber Command may be energetically exploiting hacker networks, but it isn’t helping private victims of cyberespionage. Foreign governments are hacking US companies, law firms, activists, and individuals with abandon, but our government seems unable or unwilling to stop the attacks or identify the attackers. In fact, hacking victims who want to gather evidence against the bad guys are being warned off, told that conducting a private investigation could put them at risk of prosecution. As an anonymous Justice Department spokesman recently told the press:
“Arguments for or against hack-back efforts fall into two categories: law and policy,” the DOJ spokesman told BNA. “Both recommend against hack-back. Under current law, accessing a computer that you do not own or operate without permission is likely a violation of law. And while there might be something satisfying about the notion of hack-back on a primal level, it is not good policy either.”
Actually, the spokesman could have stated the Department’s policy even more concisely: “We don't know how to protect you, but we do know how to keep you from protecting yourselves.”
Justice wants to cut off the debate over hacking back. But it’s too late for that. Even if Justice adopts something tougher than its carefully qualified (and longstanding) statement that hackbacks are “likely a violation” of federal law, all it can really do is drive hackbacks offshore, leaving US companies more exposed to intrusions than companies in more tough-minded jurisdictions.
Exhibit A for this theory is a recent cybersecurity report from two Luxembourg entities, a private computer incident response team and iTrust Consulting. Because it turns out that, as far as hackbacks go, little Luxembourg has more cojones than the entire United States cybersecurity establishment.
The report, by Paul Rascagnères, focuses on “APT1” -- the cyberespionage gang recently identified by Mandiant as Unit 61398 of the Chinese People’s Liberation Army. For those of us who think hackback is a useful cybersecurity policy tool, the report is both informative and fun -- because Rascagnères served APT1 a double helping of what the unit has been dishing out to the rest of us for years.
Inspired by Mandiant, Rascagnères decided to go hunting for the hacking unit’s command and control infrastructure. Unlike Mandiant, though, he didn’t start with victims and track back to the controllers. Instead, he started at the other end, scanning whole networks of machines to find ones that were running Poison Ivy, the hackers’ favorite Remote Access Tool, or RAT. Poison Ivy operates in a client-server model, where the client is installed on a victim's computer and connects to the attacker’s server. The server software presents a graphical user interface for surreptitiously controlling another person’s computer. (Several screenshots of this “exploit GUI” are included in the report.)
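For readers who want a concrete picture of that kind of scan, here is a minimal Python sketch. It relies on the publicly reported Poison Ivy handshake -- the server answers any 256-byte challenge with exactly 256 encrypted bytes -- and a commonly cited default port. Those details are my assumptions from public descriptions of the tool, not anything taken from Rascagnères's report, and a real scanner would be considerably more careful:

```python
import socket

PI_DEFAULT_PORT = 3460   # commonly reported Poison Ivy default (assumption)
CHALLENGE = bytes(256)   # 256 zero bytes as a probe

def looks_like_poison_ivy(challenge: bytes, response: bytes) -> bool:
    """Heuristic fingerprint: a Poison Ivy server encrypts the 256-byte
    challenge with its connection password and echoes exactly 256 bytes
    back. A plain echo service would return the challenge unchanged."""
    return len(response) == 256 and response != challenge

def probe(host: str, port: int = PI_DEFAULT_PORT, timeout: float = 3.0) -> bool:
    """Send the challenge and apply the heuristic.
    Returns False on any connection error. A real scanner would loop
    on recv() until it had 256 bytes or hit the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(CHALLENGE)
            response = s.recv(256)
    except OSError:
        return False
    return looks_like_poison_ivy(CHALLENGE, response)
```

Run `probe()` against a whole address range and the hits are your candidate command and control servers.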
The first thing Rascagnères discovered was that APT1 only ran its Poison Ivy servers during office hours – 8 to 5 Shanghai time. That by itself was a pretty good clue for attribution, but Rascagnères was just getting started.
Building on another researcher’s identification of weaknesses in Poison Ivy, Rascagnères did what any red-blooded Luxembourger would do (someone please cover the Justice Department’s eyes): he broke into and mapped the hackers’ exploitation network.
And he collected valuable intelligence about how the Chinese unit is responding to the publicity generated by Mandiant’s report. The Mandiant report described a unit that controlled many victims through a single command and control server, often a compromised machine in the United States. This meant that when Mandiant got access to that command and control machine, Mandiant could identify dozens of other victim networks.
What Rascagnères found was more sophisticated – and partially protected from Mandiant’s technique. Now, it appears, the Chinese hacking unit is covering its tracks by assigning every victim his own dedicated proxy server connected to his own Poison Ivy server. Both machines are remotely controlled by mechanisms (Remote Desktop Protocol and VMWare remote desktop) that obscure the actual location of APT1. All of this makes it much harder to develop signatures of compromise, since exposing one exfiltration route reveals only a single “bad” IP address and no additional victims.
But Rascagnères caught the Chinese unit recycling IP addresses. When a victim realized he’d been infiltrated and started blocking his dedicated Poison Ivy IP address, the unit simply assigned that address to a different victim. So it’s still possible to assemble a list of victims and bad IP addresses, but only if each victim shares every “bad” IP address used against him, and that information is widely disseminated to other potential victims. These changes tell us a couple of things about the Chinese unit. First, they’re too cheap, too poor, or too invested to get a new IP address for every new compromise; that’s a weakness we can work. And second, given how easily their new scheme can be defeated by widespread information sharing, they must be betting against adoption of CISPA. (The ACLU must be really popular these days in Beijing.)
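The information-sharing point can be made concrete in a few lines of code: if each victim contributes the command and control addresses used against him, the overlap exposes exactly the recycled addresses that no single victim could spot alone. A minimal sketch (the report format here is hypothetical):

```python
from collections import defaultdict

def merge_sightings(reports):
    """reports: iterable of (victim, ip_address) pairs, one per blocked
    command-and-control address. Returns the pooled blocklist plus the
    addresses seen at more than one victim -- the recycled ones that
    only cross-victim sharing can expose."""
    victims_by_ip = defaultdict(set)
    for victim, ip in reports:
        victims_by_ip[ip].add(victim)
    blocklist = set(victims_by_ip)
    recycled = {ip for ip, victims in victims_by_ip.items() if len(victims) > 1}
    return blocklist, recycled
```

Trivial as a program; the hard part, as the CISPA fight shows, is the legal cover to run it at scale.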
Even these discoveries didn’t end the drama. At one point, the Chinese hackers realized that their network had been penetrated. They started searching for the intruder, but so hamhandedly that he spotted the effort. He installed a keylogger on the Poison Ivy servers that he had hacked and waited for the Chinese to log in to their proxy servers. Then he dropped his compromised connection to the Poison Ivy servers and instead hacked the proxy servers using the Chinese hackers’ credentials. Once in the proxy server, his connection to the network looked like every other victim network communicating with its controller.
That’s impressive, but Luxembourg’s finest wasn’t even close to done. While he was inside the hackers’ network, Rascagnères copied their remote access logs to map the attackers’ workstation machines. Then he rifled the Poison Ivy servers to find the tools the hackers were using -- as well as all the data they were stealing from victim networks. The data had been password-protected by the hackers, so he brute-forced their passwords. And, while the Chinese unit was probably still desperately trying to figure out whether they’d successfully locked the intruder out, he exfiltrated all their stuff out from under their noses.
For those who’ve been the victims of Unit 61398, that sure sounds familiar. And deeply satisfying. Unless you’re the United States Justice Department, in which case it sounds like a felony, and “not good policy either.”
Justice couldn’t be more wrong. This kind of tactic is absolutely essential if we want to create an effective defense against cyberespionage. Thanks to Luxembourg’s machismo, we won’t have to learn Unit 61398’s new tactics by trial and error; and we already have ways to thwart the new tactics, plus a store of tools and stolen data.
Oh and one more thing: while he was playing with their command and control system, Rascagnères discovered that it didn’t correctly parse data sent by a victim machine. Using that flaw, he wrote what looks to me like the first public zero-day exploit of the hackers’ own tool and released the code for other researchers to use.
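Rascagnères's actual flaw isn't spelled out here, so the following is only a generic illustration of the class of bug -- a controller that trusts a client-supplied length field -- with a framing format invented for the example:

```python
import struct

def naive_parse(buf: bytes) -> bytes:
    """Buggy C&C-style framing parser: a 4-byte little-endian length
    prefix, then the payload. Trusting the client's claimed length
    without checking it against what actually arrived is the kind of
    slip that lets a 'victim' machine turn the tables on its controller."""
    (claimed_len,) = struct.unpack_from("<I", buf, 0)
    return buf[4:4 + claimed_len]  # silently returns short data if client lies

def safe_parse(buf: bytes) -> bytes:
    """Same framing, but verifies the claimed length before using it."""
    (claimed_len,) = struct.unpack_from("<I", buf, 0)
    payload = buf[4:4 + claimed_len]
    if len(payload) != claimed_len:
        raise ValueError("truncated or lying length field")
    return payload
```

In a memory-unsafe language the naive version can mean reading or writing past a buffer -- which is roughly how a hackers' tool ends up with a zero-day of its own.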
Perhaps the Justice Department thinks that the government could have found all of this out on its own. Maybe the government already knows all this from its own supersecret penetrations of Chinese hacker networks, achieved without any help from vigilantes like Rascagnères. I kind of doubt it, but the more important fact is that it doesn’t really matter to all the private victims in this country what the government knows. We need to know it too. And because it wants to protect its sources and methods, the government isn’t likely ever to tell us. After all, it didn’t tell us about Unit 61398, or about Luckycat, or about Ghostnet. Everything we know about China’s hackers we owe to brave private citizens like Trend Micro and Mandiant and Citizen Lab, who went right up to the line that Justice is busily waving everyone away from.
Now we owe a lot to Paul Rascagnères, though he seems to have treated the Justice Department’s line the way Steve McQueen treated the fence in The Great Escape.
Well, God bless him, he’s showing us a new path to cybersecurity. It’s better than the old path, for sure. And no matter what the Justice Department says to American companies, the rest of the world is going to follow.
ART CREDIT: iTrust Consulting and Malware.lu
CAVEAT: As always, I welcome corrections to my understanding of technical matters.
The House intel committee is amending CISPA to address privacy criticisms. Politico's Tony Romm reports on some of the likely amendments:
Still another amendment specifies clearly that CISPA won't allow companies to "hack back" their hackers in pursuit of stolen trade secrets ...
Really? A government that can't protect us is debating new measures to make sure we can't protect ourselves?
Well, it does sound kind of familiar ...

UPDATE: To be fair, I've now seen the proposed amendment, and it tries to avoid taking a position on active defense, simply saying that CISPA doesn't give any additional authority to private actors who want to investigate their attackers. That's still a bad idea, and rather than putting forward a sponsor's amendment, the committee leadership should tell us exactly who asked them to reduce computer hacking victims to helpless computer hacking victims. This article hints that the idea came from the White House and the Justice Department's leadership.
The continuing resolution awaiting the President's signature that I wrote about yesterday could have a big impact on the federal government's procurement of IT equipment from Chinese companies. As described in an earlier post, the resolution includes a provision that bars purchases of an "information technology system" that was "produced, manufactured or assembled" by entities "owned, directed, or subsidized by the People's Republic of China" unless the head of the purchasing agency consults with the FBI and determines that the purchase is "in the national interest of the United States."
While the provision doesn't prohibit purchases of Chinese-government-influenced systems, it makes such purchases politically difficult. How will China react? Not well. China has spent years trying to curtail its own purchases of IT from outside its borders, but that won't stop it from calling the bill protectionist and claiming a violation of US WTO obligations. Legally, China may have trouble making such a claim stick. China has not signed on to the WTO's government procurement code; it is just an observer.
But China may not have to make the claim stick in its own right. That's because the provision doesn't hit China directly. Instead, it restricts purchases from Chinese-government-influenced entities, no matter where those entities manufacture their products. This means that the provision could prevent purchases of Lenovo computers manufactured in Germany, or Huawei handsets designed in Britain. Both of these countries have joined the WTO government procurement code, which obliges its members not to discriminate against other member countries in procuring data processing software and hardware. This means the US could see WTO challenges to the provision from its own allies (unless they're so sick of Chinese hacking that they decide to emulate the new provision rather than attack it).
Would such claims prevail? You might think that they would face an uphill fight; most WTO undertakings have an exemption for national security measures, and the procurement code is no exception. What's more, there's no doubt that buying commercial IT products from an untrusted source does raise serious security issues. Indeed, we can thank China's hackers for demonstrating to the world just how serious those security issues are.
But when I dug out the national security exemption, I was surprised to see that the US Trade Representative's office had negotiated a strikingly weak security exemption for the WTO procurement code. The first paragraph of the exemption (article XXIII) only allows the US to restrict procurements that are "indispensable for national security or for national defence purposes." In other words, the exemption is based on the nature of the goods being bought, and not on the nature of the threat. The US can make a good case that attacks on the Commerce Department or the Justice Department information systems threaten national security, but it's hard to argue that the IT systems those departments buy are themselves indispensable for national security.
There's a second security provision in the code that might help the US defend the provision. It allows "measures necessary to protect public morals, order or safety" but only if they are "not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination between countries where the same conditions prevail or a disguised restriction on international trade." I think the US could defend the provision on the ground that it protects order and safety, but it would have the burden of showing that in application it is not an "unjustifiable discrimination" or a "disguised restriction" on trade. These words virtually invite a highly subjective inquiry by a WTO panel, and there's no telling how that would turn out.
Having stacked the deck against security in negotiating the code, USTR is no doubt now lobbying strenuously inside the administration for an interpretation that will make the continuing resolution meaningless.
On first look there are a couple of ways it might do that. For one, it could take the provision at face value. "National interest" waivers are permitted under the law, and the President could require agencies to consider the nation's WTO obligations in determining the national interest, setting the stage for numerous waivers. That won't be attractive to the White House, though. It will expose the President to two rounds of criticism, first when he announces the national interest standard and again when each waiver is granted.
So the administration may look for another way out, perhaps by narrowing the definition of an "information technology system." Borrowing from interpretations of the Buy American Act, the administration could decide that a new information technology "system" is created whenever an English-language manual is shrinkwrapped to a Chinese-sourced router. As long as the shrinkwrapping is done by an American contractor, the newly minted "system" might fall outside the scope of the law. But that interpretation so clearly flouts the intent of the provision that it could raise serious political problems on both sides of the aisle for the administration, which could find itself painted as an apologist for Chinese cyberespionage -- something it has worked hard to avoid in the past.
Anger over Chinese cyberespionage continues to mount in Congress, and it's beginning to show in legislation -- not just in the bills Congressmen introduce, but in the ones Congress passes.
Demonstrating remarkable bipartisan angst about Chinese hacking and the risks in Chinese high tech equipment, Congress has added tough sanctions to the continuing resolution that funds the federal government and is now awaiting the President's signature. The sanctions provision bars federal government purchases of IT equipment "produced, manufactured or assembled" by entities "owned, directed, or subsidized by the People's Republic of China" unless the head of the purchasing agency consults with the FBI and determines that the purchase is "in the national interest of the United States":
Sec. 516. (a) None of the funds appropriated or otherwise made available under this Act may be used by the Departments of Commerce and Justice, the National Aeronautics and Space Administration, or the National Science Foundation to acquire an information technology system unless the head of the entity involved, in consultation with the Federal Bureau of Investigation or other appropriate Federal entity, has made an assessment of any associated risk of cyber-espionage or sabotage associated with the acquisition of such system, including any risk associated with such system being produced, manufactured or assembled by one or more entities that are owned, directed or subsidized by the People's Republic of China.
(b) None of the funds appropriated or otherwise made available under this Act may be used to acquire an information technology system described in an assessment required by subsection (a) and produced, manufactured or assembled by one or more entities that are owned, directed or subsidized by the People's Republic of China unless the head of the assessing entity described in subsection (a) determines, and reports that determination to the Committees on Appropriations of the House of Representatives and the Senate, that the acquisition of such system is in the national interest of the United States.
This could turn out to be a harsh blow for companies like Lenovo that have so far escaped the spotlight trained on Huawei and ZTE. But it may also bring some surprises for American companies selling commercial IT gear to the government. It's not clear that they even know which of their suppliers and assemblers are directed or subsidized by the Chinese government. Where the IT system is manufactured doesn't answer the question; sanctions will depend not on where the system is made but on whether the company that supplies it is tainted by close ties to China's government.
It will make life equally awkward for the Obama Administration, which has been slowly and hesitantly toughening its stance on Chinese cyberespionage. The CR language will force the pace of retaliation, probably faster than the administration would like. But the statutory alternative to implementing the ban is for the administration to certify purchases as in the national interest -- possibly over the objections of FBI analysts who mistrust the gear.
The continuing resolution passed both houses with this provision in it; the President could in theory refuse to sign it. But this is a much-anticipated funding bill that heads off a government shutdown. With Congress having for once avoided a Perils-of-Pauline crisis, it's politically impossible for the President to put Pauline back on the railroad tracks -- especially so the government can buy suspect equipment from China. A veto is even less palatable than living with the provision.
Can cyberwar be limited by international law and diplomacy? Those who believe in international "norms" for cyberwar usually argue that cyberattacks on financial institutions are beyond the pale.
For example, Harold Koh has declared the State Department's view that cyberwarriors "must distinguish military objectives ... from civilian objects, which under international law are generally protected from attack." And Richard Clarke, a former White House adviser, claimed in 2010 that "most countries would agree to sign a treaty not to attack each other’s international financial and banking system networks. They don’t want to cross that Rubicon, or the entire international banking system could go down."
I can't help noticing that, since these speeches were given, DDOS attacks on Western banks have been attributed to Iran and North Korea has been blamed for cyberattacks on banks in South Korea. If you're looking for norms in actual conflicts, as opposed to speeches, cyberattacks on the financial sector are starting to look, well, normal.
The piece is based on a blog diary kept by Wang Dong, identified in recent reports as the notorious Ugly Gorilla, whose code has been found in many successful attacks on US networks. Though it never reveals Wang's employer or his job, the blog makes clear that the life of even a talented PLA hacker is not a happy one:
With no money and little free time, he found solace on the Internet. He shopped, chatted with friends and courted a girlfriend. He watched movies and television shows. He drew particular inspiration from the Fox series "Prison Break," and borrowed its name for his blog.
Richard Bejtlich, Mandiant's security chief, said posts written by the blogger, who called himself "Rocy Bird," provided the most detailed first-person account known to date of life inside the hacking establishment. Although the blog was discontinued four years ago, the techniques described in it remain the same. "It is relevant," said Bejtlich. "Things have not changed that much."
The hacker, whose real family name is Wang, posted some 625 entries between 2006 and 2009. "Fate has made me feel that I am imprisoned," he wrote in his first entry on Sina.com. "I want to escape."
Hmm, maybe he can.
In the past, I've proposed that the US deny visas to people and institutions that contribute to cyberattacks. But sometimes carrots work better than sticks, and visas can certainly play that role as well.
The Justice Department is authorized to issue a couple of hundred "S" visas each year to foreign nationals "in possession of critical reliable information concerning a criminal organization or enterprise." The visa allows family members to enter as well, and it becomes a permanent residency if the witness's "information has substantially contributed to the success of an authorized criminal investigation."
Systematically hacking US companies and agencies surely constitutes a criminal enterprise under US law, and I note that an investigation can apparently be deemed a success without leading to a criminal conviction.
So under current law, the Justice Department could send QQ messages to all the guys we've already identified as Chinese hackers, saying "The first of you who shows up at a US consulate with a full flash drive will get an S visa and a million bucks; the second one will get an S visa and $100,000. The third will get an S visa and $10,000. And the rest of you will be indicted with the evidence supplied by the first three, making China a prison you'll never break out of."
Somehow it just seems fitting for Prison Break to meet Prisoner's Dilemma.
That might sound like breaking news from 1983, but this time we're not talking movie plots, we're talking business. Specifically, how Chinese cyberespionage could affect Hollywood's bottom line. The Hollywood Reporter asked me to talk about that impact in a guest column, out this week. Here's some of what I said:
Hollywood might be blinded by its own product. China's cyberspies aren't intrepid Jolt-drinking loners (with an occasional adoring girlfriend) navigating dangerous networks to snatch secrets and flee before they're geo-located by their opponent's giant global tracking system.
No, the hacking campaigns described by Mandiant and others have all the flash and derring-do of your latest trip to the dry cleaners. ...
It's routine. So routine, in fact, that most of the hacking is done between 8 a.m. and 5 p.m. Beijing time. ...
Hollywood might not have big secrets, but it's got plenty of little secrets that someone in China probably wants. No government on Earth is more sensitive to its depiction in mass media than China's. Why wouldn't its government want to read the earliest versions of Hollywood's scripts or have a ringside seat while studio execs debate how best to accommodate Chinese censors?
And don't rule out what might be called crony espionage, either. Any company that has juice with the central government is a candidate for the cheapest form of state aid: free access to the secrets of their competitors and joint-venture partners. China is an enormous market, with the potential for great profits. But if the other side knows just how hungry the studios are -- by reading their internal communications -- the studios won't leave the table with more than crumbs. Once you know the other side's bottom line, it's amazing how good a negotiator you can be.
Disputes that arise after the deal is done can be handled the same way. People who sue Chinese companies, along with their lawyers, are targeted by hackers. When security researchers are asked how many of the 100 largest U.S. law firms have been compromised by China, estimates range from 80 to, well, 100.
As for corruption, there's no more sensitive topic in China. If a Western company is under investigation for paying bribes to Chinese officials, as many entertainment companies are now rumored to be, it's safe to assume that the Chinese government will want to know -- ahead of time -- what the company is planning to tell the U.S. Securities and Exchange Commission.
Last fall, Orin Kerr and I engaged in an online debate over the Computer Fraud and Abuse Act -- specifically whether it is lawful for the victim of computer crime to follow his stolen data into networks controlled by the thief. The debate spread across several posts and into the comments, but it's been pulled into one place here.
Despite its length, I felt that Orin and I still hadn't closed on some important issues, so I was pleased when the Federalist Society invited us to engage in a podcast dialogue about what has been called "active" or "comprehensive" defense. The podcast is here.
The podcast reveals a surprising amount of common ground between Orin and me, especially on the policy front. We agree that law enforcement and intelligence agencies have full authority to engage in such tactics, and that private companies can "borrow" that authority by working with law enforcement agencies -- including the Alameda County Sheriff.
We also agree that the CFAA does not deal effectively with the problem of foreign government hacking, and Orin allowed that a tailored amendment to the CFAA to allow more effective responses would be worth considering. Orin pushes me to specify the limits that backhackers should observe, and I acknowledge the need for some government check on abuses, as well as some limits on backhacking (mainly restricting private parties to the collection of evidence rather than allowing self-help retribution).
The call-in questioners are an all-star team in themselves. Paul Rosenzweig of Lawfare forces me to admit that the foreign-law aspects of backhacking are particularly challenging (for the FBI as well as the private sector). And the eminent Edwin Williamson digs into the international law of investigating computer crime. Letters of marque and reprisal also make a cameo appearance.
For active defense aficionados, it's essential listening.
Well, I have to say that attribution is coming along pretty well, as witness the devastating Mandiant report and the risible Chinese response. (My personal favorite: "A spokesman for China’s Ministry of Foreign Affairs [argued] that cyberattacks were difficult to trace because they were 'often carried out internationally and are typically done so anonymously.'" Hmm, or maybe not quite so anonymously as the Ministry thought, huh?)
But attribution is only half of the formula if we want to deter cyberespionage. The other half is retribution. Somebody has to pay.
In that regard, I was challenged recently by some national security staffers to identify practical ways we could punish cyberspies, especially those attacking our private sector. They asked how to do that without compromising the classified sources and methods we’ll need to do attribution right.
Civil suits, they thought, would never work. It's next to impossible for a U.S. court to get jurisdiction over a hacker in Russia or China. And trials happen in public, after full discovery of the other side’s evidence.
The good news (if that’s what you call it) is that we deal with these sorts of limitations—lack of jurisdiction and the need to protect classified information—all the time with other kinds of bad guys. When it comes to fighting terrorists or narcotraffickers, we already use classified information to identify terrorist supporters or drug kingpins as "specially designated nationals” and to impose sanctions on them – seizing their bank accounts and assets, for example, and prohibiting U.S. citizens from doing business with them. They do have an opportunity to challenge their designation, but, in both the administrative and the judicial proceedings, classified information used in the designation or the review is protected. The most that a litigant can do is compel an in camera review of the information by a judge – and perhaps obtain an unclassified summary of the information, minus the sources and methods.
Remarkably, the President could start a cyberespionage retribution program like this tomorrow, on his own. Under the International Emergency Economic Powers Act, the President could determine that state-sponsored cyberspying poses “an unusual and extraordinary threat” to the United States and declare a “national emergency.”
Presidents have done that many times in the past. Right now, we have in place sanctions against officials in Belarus for threatening democracy in that country, purveyors of conflict diamonds, transnational criminal organizations, and drug kingpins. In some cases, Congress has followed suit and passed statutes to consolidate or support sanctions programs (e.g., conflict diamonds, drug kingpins), but the sanctions began with a declaration by the President of a “national emergency.”
Not to sell short the cause of democracy in Belarus, but it seems to me that foreign hackers using the Internet to rob our companies and our government blind is at least as “unusual and extraordinary” a threat to our national interests as many of the individuals and groups already designated under earlier programs.
You might ask, however, whether applying sanctions to an individual hacker will really do any good—after all, sanctions don’t have much of a practical effect on people who don’t do business with the United States in the first place.
There are two answers to that question. First, I'm struck by how many of the guys who've been identified as cyberspies come from a demimonde, half in government and half out. Most of them clearly yearn to become entrepreneurs. They can't do that easily without traveling. Sooner or later, they'll come here.
Second, what if we applied sanctions not just to the hackers themselves but to the companies that benefit from the data they filch from U.S. systems? Legally, there’s not much difference in criminal responsibility between a thief and the guy he’s stealing for. We won’t have to designate more than a few large companies as “cyberspies” and seize their US assets before other companies start saying “Thanks, but no thanks” to offers of stolen data.
Of course, to do that, we'd have to have those companies dead to rights, and so far we don't. US security researchers have done a great job of tracking the thieves back home. But so far researchers have had trouble identifying the companies who ultimately benefit from cyberspying.
That too is an attribution problem – the second and last attribution problem we have to solve if we want to close the loop. It looks pretty difficult, but no harder than the first attribution problem looked five years ago. Nailing the customers is going to take a major intelligence campaign, but in the end I think we can catch both the cyberspies and their spymasters red-handed. (If nothing else, we'll benefit from what I like to think of as Baker's Law: "Our security may suck, but so does theirs.")
Then, when we do catch them, it’ll be time for the toughest available sanctions. A sanctions program along these lines could raise the cost of hacking and dampen demand for hacking services. And it's not like anything else is working. The President could launch it tomorrow without additional legislative authorities.
So why doesn't he? C'mon, let's give those Belarusian kleptocrats a rest and go after a real threat for a change.
Who was it that said, "We can't wait"?
He was right.
Bloomberg Businessweek has a remarkable story about the identification of another Chinese hacker. It's a long, tangled, and fascinating tale of good sleuthing by several researchers, but the trail ends with Zhang Changhe, a digital entrepreneur and teacher -- at a People's Liberation Army school that is suspected of training PLA hackers.
In the denouement, Bloomberg actually calls the guy on his mobile phone and gets partial confirmation of the evidence assembled by security researchers:
A Chinese-language search on Google turns up a link to several academic papers co-authored by a Zhang Changhe. One, from 2005, relates to computer espionage methods. He also contributed to research on a Windows rootkit, an advanced hacking technique, in 2007. In 2011, Zhang co-authored an analysis of the security flaws in a type of computer memory and the attack vectors for it. The papers identified Zhang as working at the PLA Information Engineering University. The institution is one of China’s principal centers for electronic intelligence, where professors train junior officers to serve in operations throughout China, says Mark Stokes of the Project 2049 Institute, a think tank in Washington. It’s as if the U.S. National Security Agency had a university.
The gated campus of the PLA Information Engineering University is in Zhengzhou, about four miles north of Zhang Changhe’s mobile shop. The main entrance is at the end of a tree-lined lane, and uniformed men and women come and go, with guards checking vehicles and identification cards. Reached on a cell-phone number listed on the QQ blog, Zhang confirms his identity as a teacher at the university, adding that he was away from Zhengzhou on a work trip. Asked if he still maintained the Henan Mobile telephone business, he says: “No longer, sorry.” About his links to hacking and the command node domains, Zhang says: “I’m not sure.” About what he teaches at the university: “It’s not convenient for me to talk about that.” He denies working for the government, says he won’t answer further questions about his job, and hangs up.
"It is not clear that the use of offensive operations in response to hostile actions against private parties would in fact mitigate the threat those parties face, or that the benefits would necessarily outweigh the risks. It is certain, however, that taking such actions would raise a host of thorny domestic and international legal and policy issues."
In fact, some of the issues Herb raises aren't "thorny" at all. Should companies defending themselves be able to hire experts to assist them, he asks. Well duh. Is there anyone who thinks that they shouldn't be able to get such help?
And Herb's stance on the international issues is strikingly prescriptive:
"Finally, international forums must be identified where such issues can be discussed and agreement sought. Such forums would have to involve all stakeholders and not presume that only national governments have rights to engage." (Emphasis added.)
Why Herb thinks these things are mandatory, I can't guess. If a right of self-defense depends on getting agreement in an international forum that involves all stakeholders, it's safe to say that there won't be much left to defend by the time the negotiators are done.
That said, for a short piece, Herb's article does a good job of flagging the issues that need to be addressed by those of us who advocate a greater private role in counterhacking.
Once again, Ellen Nakashima of the Washington Post has broken a cybersecurity story:
A new intelligence assessment has concluded that the United States is the target of a massive, sustained cyber-espionage campaign that is threatening the country’s economic competitiveness, according to individuals familiar with the report.
The National Intelligence Estimate identifies China as the country most aggressively seeking to penetrate the computer systems of American businesses and institutions to gain access to data that could be used for economic gain.
The report, which represents the consensus view of the U.S. intelligence community, describes a wide range of sectors that have been the focus of hacking over the past five years, including energy, finance, information technology, aerospace and automotives, according to the individuals familiar with the report, who spoke on the condition of anonymity about the classified document.
I read the story at the ABA winter meeting, where Harvey Rishikof, Emily Frye, Steve Chabinsky, and I talked about whether private companies could do more to protect themselves than simply raise the wall around their systems:
The issue, agreed three experts who spoke on the panel, is to what extent private concerns may go to track down the intruders who break into their computer systems and where the intruders hide that data to avoid detection. The dilemma, said Steven Chabinsky, is that the federal government has the statutory authority to carry out such investigations but lacks the resources and capabilities, while the private sector has the capability but lacks clear legal authority.
The two events are tied together by something Steve Chabinsky said during the panel discussion: We're used to the idea that cybersecurity is an arms race, with defense chasing offense and vice versa, and that the US and its adversaries are constantly trying to counter the other's tactics. What we haven't absorbed is how quickly proliferation occurs.
Once a nation has found a tool that overtops America's national security defenses, the tool will only work for a while. Eventually its thrust will be parried by the Defense Department. At that point, the code isn't good for its original purpose, but it's still plenty good for breaking into private networks, and it will keep working until a good defensive tactic has spread across the entire Internet.
So as network attackers develop new tools, they have every reason to repurpose the old ones, either shifting the old attacks to softer targets or rewarding criminal allies and less talented nations like North Korea and Iran by handing them lightly used offensive tools. The Defense Department keeps building higher walls, and the Russians and Chinese keep building higher ladders.
When the US wall gets to thirteen feet, what do the Russians do with all their twelve-foot ladders? Naturally, they don't want them to go to waste; they look around for companies that still have eleven-foot walls.
There's a deeply discouraging aspect to this dynamic. It means that all of us, whether individuals, law firms, oil companies, banks, or human rights groups, are caught up in a race between governments. And if our defenses aren't good enough to keep out the most sophisticated governments, it won't be long before those governments come after us, directly or by proxy. So all those comforting lists of defensive tactics that stop 90% of attacks suddenly aren't so comforting. Adopting those tactics is like building an eleven-foot wall. It just increases the market value of used twelve-foot ladders.
I'm not sure I've fully plumbed the policy implications of the "used ladder market" effect, but some things seem clear. Using an arms race metaphor tends to trigger calls for American restraint and arms control negotiation. But that won't work here; the only way to show restraint in this contest is to stop defending against new attacks. And even then, attackers have a long-term incentive to hand off their used tools to other actors, so you're leaving your network open to more and more bad guys.
I suspect that the used ladder effect is another argument for moving from pure defense to a mixed strategy that includes attribution, punishment, and deterrence. The market for ladders wouldn't be so robust if there were a pack of Dobermans on the other side of even the ten-foot walls. But it also reveals just how much we're going to need ways for DOD to share information about the attacks it is experiencing, because those attacks are bound to be heading our way, and soon.
PHOTO credit: SOIR
Every new computing technology seems to bring with it a privacy flap. Cloud computing is going through that phase right now, at least outside the United States. Canadian and European elites fear that putting data in the cloud will somehow let the US government paw through it at will, a fear that usually centers on Section 215 of the USA PATRIOT Act.
The debate has been fed by interest groups worried about their future in a world of cloud computing. It was first raised as part of a campaign by the British Columbia Government Employees Union against the outsourcing of British Columbia's health insurance data processing. (Full disclosure: I worked on the issue for clients both at the time and more recently.)
After years of remission, the issue has recently returned even more virulently, when Europe’s small cloud providers began using the Patriot Act as a marketing tool. In November of 2011, two European companies announced the creation of a European cloud offering that they advertised as providing a “safe haven from the reaches of the U.S. Patriot Act” in a press release that goes on to say, “Under the Patriot Act, data from EU users of U.S.-owned cloud-based services can currently be shared with U.S. law enforcement agencies without the need to tell the user.”
This is pretty clearly a reference to section 215 of the Patriot Act, which once allowed the FBI to “gag” recipients of 215 orders. (That authority was substantially cut back by Congress in 2005; now recipients may challenge gag orders in court annually until they are revoked. See 50 USC 1861(f)(2)(A).)
As a competitive strategy, this line of attack has some problems. It assumes that, while US-owned companies can be compelled to produce data from around the world, European companies can safely refuse to comply. The argument that the US can compel global compliance is grounded in a line of cases ordering banks to produce records from foreign branches. Unfortunately for the European companies making this pitch, the line of cases is named after the unsuccessful party -- the Bank of, uh, Nova Scotia -- which is rather plainly not a US company and thus hardly the best case to cite if you're arguing that people can defeat American discovery orders by giving their records to companies headquartered outside the US.
Nonetheless, the argument is still shaking up customers and officials in Europe, who are understandably not comforted by the response that even European cloud companies can be compelled to produce records. For several reasons, I think this risk has been severely hyped -- there are only a couple of hundred section 215 orders a year, compared to tens of thousands of criminal subpoenas, and the Justice Department discourages foreign fishing expeditions. But those reasons have been discussed by others. Instead of digging into them, I’d like to explore a point that hasn’t been discussed as widely: the utter uselessness of serving a section 215 order on a cloud computing company.
In essence, it seems to me pretty clear that section 215, entitled “Access to certain business records,” is designed to collect a company’s business records. And a company’s business records are ordinarily viewed as the records the company uses to conduct business, not information belonging to the company’s customers.
Why does this matter for European privacy buffs? Because the records that cloud companies need to conduct business are very different from the records kept by the Bank of Nova Scotia. Banks must keep track of how much money you move in and out of your account, since that determines how much interest they owe you, how much they can charge for wire transfers and bounced checks, and so on.
Put another way, your transactions are part of the bank’s business records. But the records you store on cloud computing platforms aren’t part of the cloud company’s business records, because, among other reasons, that’s not how the company measures its costs and revenues.
By that measure, the data that cloud computing companies need to send out their bills is a lot less interesting. They need to keep records of how many CPUs the customer rented, for how many hours, with how much storage space (RAM and disk), on how fast a network. Last I looked, that is information that I already tell the world about my own computer when I visit any site on the Internet.
If that’s all the US government can get by serving a 215 order on cloud companies, it’s no wonder that we haven’t actually seen or heard of such an order in real life.
So, am I right? The best argument against this conclusion is that the title of section 215 doesn’t really tell you what the government can demand. Although the title speaks of “access to business records,” the body of the provision allows the court to order “the production of any tangible things (including books, records, papers, documents, and other items).” That sounds pretty broad, but also pretty familiar. Under the Federal Rules of Criminal Procedure, a federal grand jury “subpoena may order the witness to produce any books, papers, documents, data, or other objects.” But as broad as this language is, the government doesn’t ordinarily use grand jury subpoenas to order people to produce things that belong to other parties. That practice is prudent, given that some courts, notably the Sixth Circuit in Warshak v. United States, think the Fourth Amendment requires use of a warrant, and the Congressional authorizations for administrative subpoenas require notice to the target.
The link between section 215 and criminal investigative practice is firmed up by a sentence added to section 215 when it was renewed in 2005. The new sentence says, in essence, that section 215 can “only require the production of a tangible thing if such thing can be obtained with a [grand jury] subpoena.” See 50 U.S.C. § 1861(c)(2)(D).
It seems to me that this puts a special new burden on the Europeans who think that section 215 is a problem for American cloud providers. When it was the spooky, subterranean, and evil Patriot Act they were construing, they could plausibly say, “Who knows what the government is doing behind those gag orders in that secret court?” But now that the tie between 215 and grand jury subpoenas has been clearly written into the statute, there is no dearth of information about US practice. We have fifty years or more of criminal procedure, and tens of thousands of criminal subpoenas a year, to draw upon. If grand jury subpoenas have been used to obtain third-party records across international boundaries, especially from cloud providers, then the European merchants of FUD have a point. If not, they can safely be ignored, by customers and policymakers alike.
PHOTO: Michael Jastremski