
Mar 30, 2015



Overall, I have to agree with you that this bill isn't doing anything productive. Full disclosure: I'm an employee of Critical Stack, but my opinions are my own and not directly related to the threat intelligence feed you mention.

Before coming to Critical Stack, I was a threat intelligence analyst (among other things) at the NSA/CSS Threat Operations Center. I'd first like to refute your point about automated data being "imperfect". While some of our current feeds provide more reliable data than others, automated data like this is typically produced based upon an observed behavior. The important facet is that this data has a shelf-life: if you're not regularly updating your intelligence feeds, they become much less effective.

Areas where automated analysis can be very high confidence include C&C domains and IPs, botnet control nodes, and open mail relays used for spam. Registered domains typically have a much longer shelf life than an IP address, but even then the indicator expires, at a minimum, when the domain registration lapses. Botnets also typically turn over their bot herds very quickly to circumvent detection, so that information should expire much more quickly. When I have deployed fingerprints for things such as malware protocols or botnet activity, the results are very high confidence because the detection has a low noise ratio.
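The shelf-life idea can be sketched as a per-type expiration policy. This is a hypothetical illustration, not any vendor's actual implementation; the TTL values are assumptions chosen only to match the ordering described above (domains outlive botnet IPs):

```python
from datetime import datetime, timedelta

# Hypothetical TTLs per indicator type; real feeds tune these empirically.
TTL = {
    "domain": timedelta(days=180),   # registered domains live longer
    "botnet_ip": timedelta(days=2),  # bot herds turn over quickly
    "mail_relay": timedelta(days=30),
}

def is_expired(indicator_type, first_seen, now=None):
    """An indicator is stale once its type-specific TTL has elapsed."""
    now = now or datetime.utcnow()
    return now - first_seen > TTL[indicator_type]

# A botnet IP observed three days ago is already stale; a domain is not.
seen = datetime(2015, 3, 27)
print(is_expired("botnet_ip", seen, now=datetime(2015, 3, 30)))  # True
print(is_expired("domain", seen, now=datetime(2015, 3, 30)))     # False
```

The point of the sketch is only that expiration must be part of the data model: a feed consumer that never ages out indicators ends up blocking yesterday's infrastructure.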

Getting back to the bill, one of the primary reasons I left the Intelligence Community (IC) is part of the problem this bill purports to solve. However, there is no legal reason why the IC cannot share this information today; it's merely a policy decision, and this law doesn't fix that. In fact, this bill has explicit exceptions to prevent the DoD from receiving any threat information, which is obviously a political bow to the anti-NSA lobby. And again, there's no real reason why clearinghouses such as the Information Sharing and Analysis Centers (ISACs) cannot share information with the Federal Government. The TLP sanitization procedures used today are already quite effective, and the IC has comparable policies.

Here are the pieces that are currently missing: liability protection, source sanitization, and definitions of vague terms such as "personal information". The bill tries to address the first item. Source sanitization is necessary to get the IC, the rest of the Federal Government, and, I'm sure, private companies to share more information. If you get a domain through an intelligence-sharing clearinghouse, you should not be able to distinguish whether the NSA picked it up from a foreign adversary's networks, the FBI picked it up from an investigation, or a DIB partner was the victim of malware using it. Lastly, as you allude to, the restrictions on "personal information" are going to kill much of what is already good in the private sector. If a signature or an analyst identified an email address as malicious because it sent malicious links or attachments, that email address should be shared, even if it was hijacked. Perhaps, if there is reason to believe the email account was hijacked, we can set an expiration on that intelligence of one week; if it is not hijacked, maybe that intelligence expires after six months or a year.

Also, as a side note, the CTIIC is another political ploy. "Not more than 50 permanent positions" is not enough manpower to effectively perform the mission, given all the liaison positions required for this to work.

I really value your legal opinions on these cyber issues that confront us. I think the more *informed* legal opinions we get in the mix, the sooner we'll get back on the right track.


Thanks for the thoughtful response. One question: What is TLP sanitization?

I'm not sure we disagree much about the limitations of automated collection of threat information. I agree that such collection is sometimes very accurate and that even accurate information ages rapidly. But I can't help believing that some of the automated systems are collecting information about behavior that is almost always evidence of bad intent. But not always; mistakes and randomness happen. In fact, one of the Critical Stack sources says pretty much exactly that: "The following IP addresses have been detected performing TCP SYN to a non-listening service or daemon. No assertion is made, nor implied, that any of the below listed IP addresses are accurate, malicious, hostile, or engaged in nefarious acts."
Maybe the conduct in question is almost certainly malicious, at the 95% confidence level. We would surely be willing to block addresses if there's a 95% chance that they're bad, but a company sharing information of that kind is taking a legal risk; it doesn't have a "reasonable belief" that all the personally identifying address data it's sharing is directly related to a threat. In fact, it has a pretty good idea that 5% is not; it just doesn't know which 5%. That's going to give the company lawyer pause when the time comes to share. And this bill will introduce that pause into information-sharing forums that are working smoothly now. That's not progress.



The Traffic Light Protocol (TLP) is the US-CERT standard for information labeling used by the ISACs and others. It isn't a classification, but rather a dissemination control. Organizations would use TLP:RED, for instance, when an existing NDA-style agreement is in place to prevent disclosure of the information outside of the specified recipients; the restrictions relax from there. Here are the detailed descriptions: https://www.us-cert.gov/tlp
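As a rough sketch of TLP as a dissemination control rather than a classification: the four tiers below follow the US-CERT color names, but the audience model and the sharing check are my own hypothetical simplification, not part of the standard:

```python
# TLP tiers ordered from most to least restrictive (color names per US-CERT).
RESTRICTIVENESS = {"RED": 3, "AMBER": 2, "GREEN": 1, "WHITE": 0}

# Hypothetical audience model: each audience may receive reports up to
# a given restrictiveness level.
AUDIENCE_CEILING = {
    "named_recipients": 3,   # RED: only the specified recipients
    "own_organization": 2,   # AMBER: recipient's org, need-to-know
    "community": 1,          # GREEN: peers and partner organizations
    "public": 0,             # WHITE: unlimited disclosure
}

def may_forward(label, audience):
    """Can a report with this TLP label be passed to this audience?"""
    return RESTRICTIVENESS[label] <= AUDIENCE_CEILING[audience]

print(may_forward("GREEN", "community"))  # True
print(may_forward("AMBER", "public"))     # False
```

The useful property is that sanitization becomes mechanical: re-labeling a report (say, AMBER down to GREEN after the source is scrubbed) widens the audience without any change to the sharing logic.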

As to that specific source, packetmail.net, the network in question is a honeynet and *shouldn't* receive any legitimate traffic. The one possibility is that someone could spoof a source IP address specifically to deny that address service on a network that uses this feed as an automatic block. Other than that, there's a low probability of the listed IPs not being malicious. (Raw data: https://www.packetmail.net/iprep.txt.)

Additionally, I would argue that an IP address should not be considered "personal information". In practice, the owner of an IP address may not be the party that controls it (for instance, with Virtual Private Servers, beloved by cyber actors), and the location it maps to could vary daily.

It sounds like we probably have similar concerns here, but with language like this being proposed as law, the terms should really be defined up front by informed lawmakers and members of the cybersecurity community.


I really appreciate the dialogue you and Derek are exchanging; he's an incredibly deep thinker, and I look forward to everyone else finding out what a special team we really have over here.

In regards to the specifics of the Critical Stack Intel Marketplace, I will chime in on what I perceive as a misunderstanding. It is precisely because of the issues you raise that we transparently empower network operators to decide for themselves which set of feeds they choose to trust. Each feed operator manages, creates, and runs their feeds with dramatically different policies and procedures. Our client solves a separate but related problem: it streams the intelligence down to the sensors in real time, making it actionable. We additionally give feed consumers the ability to whitelist items if they feel it necessary.

The reality is that for many organizations "bad" itself is a matter of policy. Take one of our fact-based feeds: a live streaming feed of TOR exit nodes. Is TOR bad on your specific network? My response would be: I don't know, is it? On a large open university network, probably not; on a federal network, perhaps it is "bad."
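To make the "bad is a matter of policy" point concrete, here is a hypothetical sketch, not the Critical Stack client itself: the same exit-node feed produces different blocking decisions under different site policies, with the operator whitelist as the escape hatch mentioned above (the IPs are documentation-range examples):

```python
# Hypothetical feed contents and a local operator override.
tor_exit_nodes = {"198.51.100.7", "203.0.113.42"}  # example addresses only
whitelist = {"203.0.113.42"}                        # operator-approved

def should_block(ip, site_policy):
    """Policy, not the feed, decides whether an exit node is 'bad' here."""
    if ip in whitelist:
        return False                 # whitelist always wins
    return site_policy["block_tor"] and ip in tor_exit_nodes

university = {"block_tor": False}    # large open network: TOR tolerated
federal = {"block_tor": True}        # federal network: TOR blocked

print(should_block("198.51.100.7", university))  # False
print(should_block("198.51.100.7", federal))     # True
print(should_block("203.0.113.42", federal))     # False (whitelisted)
```

The feed itself is pure fact ("this IP is a TOR exit node"); everything judgmental lives in `site_policy`, which is exactly why the same feed can serve both networks.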


Liam Randall
