An Indicator of Compromise (IOC) is a piece of information that can be used to identify the potential compromise of an environment: from a simple IP address to a set of tactics, techniques and procedures used by an attacker in a campaign. Although when we speak of IOCs we tend to think of indicators such as IPs or domains, the concept goes beyond this, and depending on their granularity we can distinguish three types of indicators:
- Atomic indicators: those that cannot be broken down into smaller parts without losing their usefulness, such as an IP address or domain name.
- Calculated indicators: those derived from data involved in an incident, such as the hash of a file.
- Behavioral indicators: those that, built from processing the previous two, represent the behavior of an attacker: their tactics, techniques and procedures (TTPs).
TTPs are associated with operational intelligence, while atomic and calculated indicators are associated with tactical intelligence; it is in this latter area, with a very short lifespan, that most shared indicators fall: more than half of the IOCs exchanged on threat intelligence platforms are as simple as hashes, IPs and DNS domains. The problem? Something we have known for years and that is plainly reflected in approaches such as the Pyramid of Pain: it is trivial for an attacker to change these indicators. In fact, the three most commonly exchanged types are precisely the easiest to alter, which considerably limits their usefulness.
For an attacker, altering a hash is trivial at any point between compiling an artifact and executing it; changing the IP used by a command and control system or an exfiltration server is very simple; and for domain names the effort is also minimal. An attacker with basic capabilities can therefore evade detections based on these types of indicators, as the Pyramid of Pain diagram shows. Behavioral indicators, the TTPs, are on the other hand much harder for the attacker to modify, so if we are able to detect these modus operandi, we are more likely to identify a compromise.
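How trivially a hash-based indicator breaks can be shown with a short Python sketch (the payloads are, of course, illustrative): appending a single byte, which an attacker achieves simply by repacking or padding a binary, produces a completely different SHA-256.

```python
import hashlib

# Two payloads that differ by a single appended byte; an attacker gets
# the same effect by repacking or padding an artifact at build time.
original = b"malicious payload"
repacked = b"malicious payload\x00"

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(repacked).hexdigest()

print(h1)
print(h2)
print(h1 != h2)  # the hash-based IOC no longer matches
```

The same one-byte change would leave a behavioral indicator (what the payload *does*) untouched, which is exactly the asymmetry the Pyramid of Pain describes.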
So why, if atomic and calculated indicators are the least useful, are they the most exchanged? In my opinion, the answer is fairly simple: they can be loaded automatically into security tools, producing visible and immediate results. If we receive a tactical intelligence feed, for example one of domain names or IPs, we can load it into many perimeter systems to detect activity associated with those indicators, and we can query the SIEM to see if there has been historical activity against them -and leave the query scheduled to detect new activity in near real time-. In short, they hardly require human intervention to be useful. By contrast, if the information exchanged is associated with operational intelligence, with the attackers' TTPs, these behaviors are usually described in documentary form or are, at least, harder to automate and to turn into actionables than atomic and calculated indicators. Even a standard such as STIX, which allows TTPs to be defined through its objects, is not immediately convertible into automatable intelligence.
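The reason these feeds are so immediately actionable can be reduced to a few lines. In this sketch (feed values, field names and events are all hypothetical), "loading the feed and querying the SIEM" amounts to a simple set-membership check over log records:

```python
# Hypothetical atomic indicators (IPs) received from a tactical feed.
feed = {"198.51.100.7", "203.0.113.42"}

# Hypothetical normalized log events; field names are illustrative.
events = [
    {"ts": "2023-05-01T10:00:00", "src": "10.0.0.5", "dst": "198.51.100.7"},
    {"ts": "2023-05-01T10:01:00", "src": "10.0.0.8", "dst": "192.0.2.10"},
]

# The "query against the SIEM" is just membership matching: no analyst
# needed, which is why tactical feeds dominate intelligence exchange.
hits = [e for e in events if e["dst"] in feed]
print(hits)
```

A TTP, by contrast, cannot be matched this way: it requires relating several events to each other, which is where human analysis or more elaborate automation comes in.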
What is the situation, then? Most shared intelligence is easily evaded by an attacker and also has a very short lifespan, so it does not seem the most useful. Meanwhile, operational intelligence, linked to behavioral indicators and with greater utility and a longer lifespan, is hardly exchanged at all, most likely because it usually requires human intervention or processing. If we want to cause “damage” to the attacker, we must focus our IOCs on their TTPs, not on their low-level indicators.
In order to identify TTPs, relationships between analyzable events are required; these relationships are usually temporal, but they can also be associated with dependencies between activities, for example. To detect them automatically we need two elements. The first is an acquisition and processing capability that registers not only actions linked to alerts (to misuse or anomalies) but also “normal” activity in the environment. This capability is usually the SIEM, where security-relevant information from the different technology platforms -from the endpoint to the network- is centralized.
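A minimal sketch of such a temporal relationship, with entirely hypothetical event types, hosts and window: flag a host where data staging (archive creation) is followed by an outbound transfer within ten minutes. No single event matches an atomic indicator; it is the relationship between events that encodes the behavior.

```python
from datetime import datetime, timedelta

# Hypothetical normalized events as a SIEM might centralize them;
# event types and the 10-minute window are illustrative choices.
events = [
    {"ts": datetime(2023, 5, 1, 10, 0), "host": "ws01", "type": "archive_created"},
    {"ts": datetime(2023, 5, 1, 10, 4), "host": "ws01", "type": "outbound_transfer"},
    {"ts": datetime(2023, 5, 1, 11, 0), "host": "ws02", "type": "outbound_transfer"},
]

WINDOW = timedelta(minutes=10)

def staging_then_exfil(evts):
    """Flag hosts where staging is followed by an outbound transfer
    within the window: a behavioral (TTP-level) detection built from
    a temporal relationship, not from any single indicator."""
    flagged = set()
    for a in evts:
        if a["type"] != "archive_created":
            continue
        for b in evts:
            if (b["host"] == a["host"] and b["type"] == "outbound_transfer"
                    and timedelta(0) <= b["ts"] - a["ts"] <= WINDOW):
                flagged.add(a["host"])
    return flagged

print(staging_then_exfil(events))  # only ws01 shows the full sequence
```

Note that the attacker can change the archive tool, the destination IP and the file hash without breaking this logic; only changing the behavior itself evades it.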
With the information acquired and processed, we need an automatic analysis capability: something that allows us to interrogate the SIEM and extract TTPs (through these relationships) from the information it stores. Different vendors use different approaches: Microsoft has defined KQL, the Kusto Query Language, while Elastic also provides rules for hunting TTPs in its technology. But each of these approaches works only in its vendor's ecosystem and not in the others, which prevents an effective exchange of intelligence. SIGMA tries to provide a generic, vendor-independent approach, and may become a de facto standard in the short term. But SIGMA does not natively allow the specification of some TTPs known in advanced actors, so it should be extended or complemented with post-processing capabilities in order to expand the range of its detection capabilities.
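To make the trade-off concrete, here is a minimal SIGMA-style rule, written in SIGMA's YAML format; the title, tool name and command-line switch are illustrative values, not a rule from any published ruleset. It describes a single suspicious event (creating a password-protected archive, a common staging step) in a vendor-independent way:

```yaml
title: Password-Protected Archive Creation (illustrative example)
status: experimental
description: Possible data staging prior to exfiltration
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\rar.exe'
    CommandLine|contains: ' -hp'   # password-protected archive switch
  condition: selection
level: medium
```

This single-event pattern translates cleanly to any backend, but the *sequence* sketched earlier -staging followed by an outbound transfer within a time window- is exactly the kind of multi-event TTP that needs correlation logic layered on top of rules like this one.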
In short, we exchange the most easily usable intelligence, but not the best; in order to detect significant movements of advanced threats we must exchange more valuable intelligence, and for that we need, one way or another, for this intelligence to be processed automatically and uniformly across environments, in the form of a standard. Until we achieve this, we will continue to focus on indicators of little value. However, these indicators are also useful and we should keep sharing them as we do now: although they are not the best for detection, we should not discard them. We must “simply” expand our capabilities, not throw away the current ones and implement completely different ones.