Friday, July 19, 2013

On the Misuse of Indicators

This being only my third blog post, you might think it's too early to make predictions about my most commonly-used terms, but I'm ready to go out on a limb.  You can rest assured that you'll be reading the word indicator frequently in these posts.  You would think that most of us in the DFIR field already know what an indicator is, but I'm not so sure.  There are at least two different but confusingly similar types of indicators we deal with every day, and the meaning of "indicator" has a lot to do with who is using the word, and the context in which they are using it.

What is an Indicator?

In non-technical terms, an indicator is a piece of information that points to a certain conclusion.  An individual indicator may or may not be enough to effectively support the conclusion to which it points, but as you collect more indicators that agree with each other, the conclusion becomes more likely.  Given enough of these indicators, you may end up with a defensible statement about the likelihood of your conclusion being true.  Your indicators serve as the data points by which you prove your argument. 

An indicator can be almost any type of data that you think captures some sort of repeatable occurrence or pattern.  It could be a simple domain name or IP address known to be used by a piece of malware, a piece of recurring data in a transaction, a combination of adversary actions that make up a distinct behavior, or nearly anything else.  Both of my previous posts dealt heavily with indicators, so please refer to them if you need specific examples. 

Attribution vs. Detection

In my experience, there are at least four different types of indicators:

  1. Attribution Indicators are used to distinguish activity or artifacts traceable to a specific threat actor.
  2. Detection Indicators (often called "Indicators of Compromise" or IOCs) are observables that you look for to help find security incidents.
  3. Prediction Indicators are behavior patterns that foreshadow other events ("We just announced a major new business initiative, therefore we can expect recon from this adversary within 30 days.").
  4. Profiling Indicators help predict which of your users, facilities or projects are likely to be the subject of targeted attacks.
Although all four types are definitely interesting, almost no one collects types #3 or #4, so let's ignore those for now.

Attribution indicators are used primarily for doing intelligence analysis to determine the actor behind an attack or artifact (e.g., "Who wrote this malware?" or "Which actor is most likely responsible for this set of recon scans against my web server?").  Attribution indicators attempt to answer the question "Who?"  This is a pretty difficult question, and there's a lot of ambiguity in the attribution process.  You typically need several indicators that all agree with each other pretty well to even arrive at the ballpark of a successful attribution, and even that is an oversimplification.  It leaves out the vital contribution of the human analyst to make decisions, weigh evidence and arrive at defensible conclusions despite ambiguous and possibly conflicting information.  Still, indicators are at the heart of the attribution process.

Detection indicators are linked to observable events on your hosts or network.  You can monitor for these indicators, and if you find them, you may have a security incident.  Detection indicators attempt to answer the question "Is?" (e.g., "Is this web transaction a SQL injection attack?" or "Is the XYZ trojan active on my network?").

So Which Is It?

Confusingly, both attribution and detection indicators share many of their data types.  A domain name could be an attribution indicator, a detection indicator, or both.  

For example, consider the once-notorious (but now defunct) Chinese Dynamic DNS site 3322[.]org.  In most networks I've ever monitored, any traffic to *.3322[.]org domains was at least highly suspicious, if not outright malicious.  These domains were pretty good detection indicators, because they were highly likely to serve drive-by downloads or act as C2 nodes for banking trojans (just to give two examples).  However, the simple fact that it was a very popular DDNS site made it nearly useless for attribution.  There were probably hundreds of threat actors active on those domains at any one time, and except for a few who were lazy enough to re-use their subdomains, it was basically impossible to tell who was who. Without additional information, *.3322[.]org usually wasn't a very good attribution indicator.
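The detection side of that example boils down to a domain-suffix match: flag anything that is, or sits under, a known-bad parent domain.  Here's a minimal sketch of that idea in Python (the watchlist contents are just an illustration, not a real feed):

```python
# Minimal sketch of domain-suffix matching for detection indicators.
# The watchlist below is illustrative only.
DETECTION_SUFFIXES = {"3322.org"}  # known-bad DDNS parent domains

def is_suspicious(domain: str) -> bool:
    """Return True if the domain equals, or is a subdomain of, a bad suffix."""
    labels = domain.lower().rstrip(".").split(".")
    # Check every parent domain, e.g. evil.3322.org -> 3322.org -> org
    return any(".".join(labels[i:]) in DETECTION_SUFFIXES
               for i in range(len(labels)))

print(is_suspicious("update.evil.3322.org"))  # True
print(is_suspicious("example.com"))           # False
```

Note that this matches the *detection* use case only; as the paragraph above argues, a hit on a busy DDNS parent domain tells you almost nothing about *which* actor is behind the traffic.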

The opposite situation is also a frequent problem.  Suppose the PANDA BALLS group is known to have a fondness for a particular popular grammar website, which they deface and use to serve malware drive-bys in a watering hole attack.  That domain would be a great attribution indicator; when an artifact from a confirmed compromise references that domain, it's a little piece of evidence that begins to support a conclusion about which adversary is responsible.  On the other hand, if you treat that domain name as a detection indicator, you're going to run into trouble when your IDS throws a constant stream of alerts on legitimate traffic from your technical writing staff!

What To Do About It?

An indicator is not an indicator is not an indicator.  Print that out and paste it to the monitors of your entire intel and detect staff.  There are different types of indicators, with different purposes according to the type of work at hand.  If you have an "indicator database" you're probably already in trouble, because you are likely mixing your indicator types indiscriminately.

At the very least, you should start tagging your indicators according to their purpose.  Consider how you would dump a list of all the detection indicators for signature-generation purposes, or how you could take an artifact and compare it to only your attribution indicators.  If you can't do this, you may need to rethink your indicator management strategy.
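One way to sketch that tagging approach is to store a purpose tag (or tags) alongside each indicator, so you can filter by use case.  This is only an illustration of the idea, with hypothetical names and placeholder data, not a prescription for any particular tool:

```python
# Sketch of tagging indicators by purpose; all names and data are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Indicator:
    value: str                                   # e.g. a domain, IP, or pattern
    itype: str                                   # "domain", "ip", "behavior", ...
    purposes: set = field(default_factory=set)   # {"detection", "attribution"}
    actor: Optional[str] = None                  # attribution context, if known

indicators = [
    Indicator("3322.org", "domain", {"detection"}),
    # Placeholder standing in for the unnamed grammar site in the example above:
    Indicator("grammar-site.example", "domain", {"attribution"},
              actor="PANDA BALLS"),
]

# Dump only the detection indicators, e.g. for signature generation.
detection_feed = [i.value for i in indicators if "detection" in i.purposes]

# Compare an artifact's observables against attribution indicators only.
def match_attribution(artifact_domains):
    return [i for i in indicators
            if "attribution" in i.purposes and i.value in artifact_domains]
```

With purposes tagged this way, the two workflows in the paragraph above become simple filters instead of a trawl through one undifferentiated "indicator database."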
