What’s The Deal With Detection Logic?

Detection logic is used by a variety of mechanisms in modern endpoint protection software, and it goes by many different names in the cyber security industry. Similar to how laypeople use the term “virus” to describe what security people call “malware” (technically, a “virus” is a program that spreads by making a copy of itself in another program, data file, or boot sector), detection logic has been called everything from “signatures” to “fingerprints” to “patterns” to “IOCs”. So, when someone talks about a virus, they’re usually actually referring to malware. And when someone talks about signatures, they’re frequently referring to detection logic – unless they’re specifically talking about the simple detections that were used back in the 80s and 90s.

Here at F-Secure, we often refer to detection logic as, simply, “detections”.

In previous installments of this series, I wrote about scanning engines, behavioral engines, and network reputation – and how they work together to block malicious threats. Now, I’d like to explain what the detection logic used by these engines looks like and how it’s created.

This is a slightly longer article, so bear with me. I found that, in order to explain things in enough detail to make any sense, I needed to cover quite a few different areas. There are many touch-points between these different concepts, and hence I opted for one long post instead of a series of separate ones. I’ve passed a draft of this text through quite a few folks, but some people are on their summer vacation, so I’m hoping it’s okay for me to reveal all of this stuff.

There are many different types of detection logic in modern security products. I’ll cover each one in its own section.

Research poster

What was that thing again? Oh yeah, complex mathematics and artificial intelligence.

Cloud-Based Detection Logic

Most modern protection software includes one or more client components that perform queries over the Internet. Detection logic is often shared between client and server. In really simple cases, a unique object identifier (such as the object’s cryptographic hash) is sent to a server, a simple good/bad/unknown verdict is delivered back, and the client factors that result into its final decision.

In more complex cases, the client extracts a set of metadata from an object, sends that to a server, which then applies its own set of rules, and returns a verdict, or a new set of values that the client then processes before reaching a verdict.
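
To make the simple case concrete, here’s a sketch of a hash-based cloud lookup. Everything here is illustrative – the in-memory `CLOUD_REPUTATION` table, the hash entries, and the 0.8 threshold are stand-ins for a real reputation service, not our actual back end.

```python
import hashlib

# Hypothetical in-memory reputation table standing in for the real cloud
# back end -- the entries and verdicts here are made up for illustration.
CLOUD_REPUTATION = {
    # sha256 hex digest -> "good" | "bad"
}

def sha256_of(data: bytes) -> str:
    """Compute the object identifier sent to the cloud: a cryptographic hash."""
    return hashlib.sha256(data).hexdigest()

def cloud_lookup(file_bytes: bytes) -> str:
    """Simple good/bad/unknown verdict, as described above."""
    return CLOUD_REPUTATION.get(sha256_of(file_bytes), "unknown")

def client_decision(file_bytes: bytes, local_score: float) -> str:
    """The client factors the cloud verdict into its final decision."""
    verdict = cloud_lookup(file_bytes)
    if verdict == "bad":
        return "block"
    if verdict == "good":
        return "allow"
    # Unknown in the cloud: fall back on whatever local analysis produced.
    return "block" if local_score > 0.8 else "allow"
```

In the more complex case described next, the lookup would carry a bundle of extracted metadata instead of a single hash, and the server would return values rather than a bare verdict.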

Even the simplest of cloud lookups are quite powerful. Back end systems have access to our full sample storage systems as well as information and metadata acquired across our entire customer base. This additional information is something the endpoint alone wouldn’t have access to. By performing a cloud query, this metadata can be provided to the client, where it becomes available to all of the different protection technologies on the system. The use of a prevalence score to determine suspiciousness is one good example of this.

Cloud-based detections are, for the most part, generated automatically. This provides an extremely fast response time from sample discovery to protection. Turnaround times from sample to detection are on the order of seconds or minutes.

I’d be remiss if I didn’t mention cloud scanning in this section. Samples can be uploaded from the endpoint to the cloud in order to perform complex analysis operations that wouldn’t be possible on the client-side. These analysis steps can include passing the sample through multiple scanning engines, static analysis, and dynamic analysis procedures. Detections on the back end are designed to deliver a verdict by processing the metadata generated by these analyses. I’ll cover this in more detail in the section on back end detections.

Heuristic Detection Logic

Heuristic detection logic is designed to look for patterns typically present in malicious files in order to determine suspiciousness. Heuristic detections can be generated by machine learning techniques, or by hand.

A hand-generated piece of heuristic detection logic might, for instance, check how often a “+” character occurs in a script, which could be indicative of common script obfuscation techniques. More complex manually generated heuristic detections may search for multiple patterns. A suspiciousness score is generally calculated based on presence and type of these patterns, and it is then combined with other metadata in order to reach a final verdict.
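
Here’s a minimal sketch of what that “+” heuristic might look like. The weights, thresholds, and extra indicators (`eval(`, one long line) are invented for illustration – real detections combine many more signals.

```python
def plus_density(script: str) -> float:
    """Fraction of characters that are '+' -- a crude indicator of the
    string-concatenation obfuscation mentioned above."""
    if not script:
        return 0.0
    return script.count("+") / len(script)

def heuristic_score(script: str) -> float:
    """Combine several patterns into a suspiciousness score.
    Weights and thresholds here are illustrative, not real product values."""
    score = 0.0
    if plus_density(script) > 0.05:          # heavy string concatenation
        score += 0.4
    if "eval(" in script or "unescape(" in script:
        score += 0.4                         # dynamic code execution
    if len(script.splitlines()) == 1 and len(script) > 500:
        score += 0.2                         # everything crammed on one line
    return score
```

A benign `var x = 1;` scores 0.0 here, while something like `eval("h"+"t"+"t"+"p"+…)` trips both the density and `eval(` checks. The final verdict would then combine this score with other metadata, as described above.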

By training a machine learning system to recognize structural patterns that commonly occur in malicious files, but not in clean files, complex heuristic logic can be built. This is usually achieved by feeding large sets of expert-vetted malicious and clean files into a machine learning system, and then thoroughly testing the output. When the resulting bundle of detection logic is performant, free of false positives, and free of false negatives, it is deployed to the endpoint. This process should be repeated periodically in order to adapt to threat landscape changes. Again, these detection bundles work by looking for characteristics and patterns in samples and applying a set of mathematical models to calculate a level of suspiciousness.
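
As a toy stand-in for that training step, here’s a tiny perceptron trained on made-up structural features. Real systems use far richer feature sets and models; the feature names, training data, and labels below are all fabricated for illustration.

```python
# Toy illustration of the train-then-deploy loop: a perceptron learns
# weights over simple structural features. All data here is fabricated.
def train(samples, labels, epochs=50, lr=0.1):
    """samples: list of feature vectors; labels: 1 = malicious, 0 = clean."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Features: [is_packed, high_entropy, has_valid_signature] (invented)
train_x = [[1, 1, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1], [0, 0, 0]]
train_y = [1, 1, 0, 0, 0]
model = train(train_x, train_y)
```

The important part isn’t the model – it’s the loop around it: vetted training sets in, thorough testing of the output, redeploy periodically as the landscape shifts.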

The structure of a portable executable

Information contained in the structure of a PE file can sometimes be used to determine a file’s suspiciousness.

The quality of heuristic detections depends largely on how well-trained they are. This, in turn, depends on the quality of samples used in the training and testing steps, which, in turn, depends on a mixture of automation and human guidance. Great results can be achieved if you have enough expertise, time and resources to get the process right.

Behavioral Rules

Engines that behaviorally analyze code execution contain a set of rules designed to determine whether observed behavioral patterns are suspicious or malicious.

Some behavioral rules can be created by automation. Others are meticulously hand-crafted. By researching new exploit techniques and attack vectors, behavioral rules can be created before new techniques become widespread in malware. Generic behavioral rules can then be designed to catch all manner of malicious activity, allowing us to stay ahead. Our researchers in Labs are always on the lookout for future trends in the malware landscape. When they identify new methodologies, they go to work on creating new rules based on this research.

So, how do behavioral rules work? When something executes (this can mean running an executable, opening a document in its appropriate reader, etc.), various hooks in the system generate an execution trace. This trace is passed to detection routines which are designed to trigger on certain sequences of events. Examples of the types of events a behavioral rule might check include:

  • Process creation, destruction or suspension.
  • Code injection attempts.
  • File system manipulation.
  • Registry operations.

Each event passed to a behavioral rule includes a set of useful metadata relevant to that event. Global metadata for the execution trace, which includes things like PID, file path, and parent process, is also made available to rules. Metadata acquired from cloud queries and other protection components is also made available. This includes things like prevalence score and file origin tracking (a history of the file from the moment it appeared on the system, that can include things like the URL where it was downloaded from). As you can imagine, some pretty creative logic can be built with all of this information at hand.
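
Here’s a sketch of one such rule: flag a low-prevalence file whose trace shows process creation, then code injection, then a registry write, in that order. The event names, the metadata field names, and the prevalence cutoff are all invented for illustration.

```python
# Hypothetical rule: a rarely-seen file that creates a process, injects
# code, and then writes to the registry is treated as suspicious.
SUSPICIOUS_SEQUENCE = ["process_create", "code_injection", "registry_write"]

def rule_matches(trace, global_meta):
    """trace: ordered list of event names from the execution hooks.
    global_meta: trace-wide metadata (prevalence, file path, etc.)."""
    if global_meta.get("prevalence", 0) > 1000:
        return False  # widely-seen files get the benefit of the doubt here
    it = iter(trace)
    # Require the suspicious events to occur in this order; unrelated
    # events are allowed in between (shared iterator = subsequence check).
    return all(any(ev == want for ev in it) for want in SUSPICIOUS_SEQUENCE)
```

Swap in file-origin data (say, “downloaded from an unknown URL”) and you can see how the creative combinations mentioned above come together.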

Of course, we constantly update the logic and functionality of our behavioral detection components as the threat landscape changes, and as we make improvements. Who wouldn’t?

Generic Detection Logic

By creating complex programs designed to be run on the endpoint, we can automate certain reverse engineering techniques in order to detect malware. This is where things get interesting, and also a little long-winded to explain.

There are many file types out there that can contain malicious code – Windows executables (PE files), PDFs, MS Office documents, Flash objects, Android packages, and scripts such as JavaScript, to name a few.

In order to work with all of these different file formats, parsers are frequently needed. Parsers identify and extract useful embedded structures from their respective containers. Examples of embedded structures include things like resources (sounds, graphics, videos and data), PE headers, PE sections, Authenticode signatures, Android manifests, and compiled code.

In simple terms, a parser breaks an input file down into its constituent parts, and provides them as an easily accessible data structure to detection logic. A lot of file formats are complex and contain dozens of corner cases. By parsing a file and then providing a data structure to detection logic, we avoid having to replicate complex parsing code over and over. We can also make sure that the code is robust and well-performing.
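
As a minimal example of what a parser does, here’s a sketch that pulls a couple of fields out of a PE file’s headers and hands them to detection logic as a plain dictionary. It handles only a tiny, well-known corner of the real format (the DOS header’s pointer at offset 0x3C, the `PE\0\0` signature, and the first two COFF header fields).

```python
import struct

def parse_pe_header(data: bytes) -> dict:
    """Break a PE file down far enough to expose a few header fields as an
    easily accessible data structure. Real parsers handle vastly more."""
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    # The offset of the PE header lives at 0x3C in the DOS header.
    (pe_offset,) = struct.unpack_from("<I", data, 0x3C)
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")
    # COFF header: Machine, then NumberOfSections.
    machine, num_sections = struct.unpack_from("<HH", data, pe_offset + 4)
    return {"machine": hex(machine), "sections": num_sections}
```

Detection logic built on top of this never touches raw offsets – it just asks the structure questions, which is exactly the robustness and reuse win described above.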

Files (of varying type) are frequently embedded inside malicious files. For example, a malicious PDF may contain embedded flash code, executable code, shellcode, or scripts. Often these embedded files are encrypted or obfuscated. Hence, when the outermost malicious file is opened or executed, it decrypts objects contained within itself, writes them to memory or disk, and then executes them.

Malicious executables themselves often use off-the-shelf packers or protectors, such as UPX or ASPack, to obfuscate their code and structures. In most cases, multiple packers are applied on top of each other, and in many cases, custom obfuscation, written by the malware author, is also used. Getting to the actual code inside a malicious executable is akin to peeling away layers of an onion.

The problem with packed files is that they all look quite similar. And some non-malicious executables also utilize packers. In order to properly identify maliciousness, you often have to remove those layers.

Removing off-the-shelf packers is pretty straightforward – recognizing standard protectors is simple, as are the methods for decrypting them. Unpacking custom obfuscation is trickier. One way to unravel custom obfuscation is to extract the decryption code and data blocks from the malware itself, run the whole thing in a sandbox, and allow the malware’s own code to unpack the image for you. Another common trick is to write a routine that analyzes disassembly of the malware’s deobfuscation code, extracts the encryption key and location of the data to be worked on, and then extracts the image.
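
For a feel of the “extract the key and decrypt it yourself” approach, here’s a sketch against a trivial obfuscator where each layer is a single XOR key byte followed by the payload. That layout is invented – real custom packers are vastly more involved – but the onion-peeling loop is the same idea.

```python
# Trivial illustrative packer layout: one key byte, then the XOR-ed payload.
def unpack_layer(blob: bytes) -> bytes:
    key, payload = blob[0], blob[1:]
    return bytes(b ^ key for b in payload)

def unpack_all(blob: bytes, magic: bytes = b"MZ") -> bytes:
    """Peel layers like an onion until the real image (here, something
    starting with an MZ header) appears, with a sanity limit on depth."""
    for _ in range(8):
        if blob.startswith(magic):
            return blob
        blob = unpack_layer(blob)
    raise ValueError("could not unpack")
```

The sandbox approach mentioned above sidesteps key extraction entirely: run the sample’s own decryption stub in a controlled environment and let it produce the clean image for you.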

Peeling away all the defensive layers of a malicious executable allows detection logic to analyze the actual malware code hidden deep inside. From here, the sky’s the limit in terms of how you further process the resulting image. These same techniques are used to pull objects out of non-PE files, such as PDFs and MS Office documents, and similar decryption tricks can be used to unobfuscate things like malicious JavaScript.

Creating this sort of detection logic can sometimes be time-consuming. However, the rewards are definitely worth it. Generic detection logic written in this way is not only capable of catching large numbers of malicious files, it has a pretty good shelf-life. In many cases, new variants of the same malware family are detected by a single well-written generic, with no modifications. If you check out our world map, you’ll notice that most, if not all of our top 10 detections are usually manually written generics.

To be honest, what I’ve written here only scratches the surface of what can be done, and is being done, with these technologies. Bear in mind that detection logic also has access to information from things like cloud queries and file origin tracking and can work on any sort of data stream, including memory space and incoming network data, and you can probably understand why the topic is so complex. An in-depth explanation would probably fill a book.

Network Detection Logic

Blocking attacks on the network, such as exploit kits, is something we focus on pretty heavily. Stopping threats at an early point in the attack chain is an effective way of keeping machines free from infection.

As mentioned in the previous section, data streams arriving over the network can be analyzed in a similar way to how files on disk are inspected. The difference is that, as information arrives over the network, mechanisms exist to block or filter access to further attack vectors on the fly.

Network detection logic on the endpoint gets access to IP addresses, URLs, DNS queries, TLS certificates, HTTP headers, HTTP content, and a whole host of other metadata. It also gets access to network reputation information, including URL, certificate, and IP reputation (via cloud queries). With access to all of this information, you can do some interesting stuff. Here are some examples.

  • The behavior of a web site can be examined in real-time, allowing exploit kits to be detected and blocked.
  • Communication between a bot and its command and control server can be blocked while the IP address is still being queried, or based on the IP the bot is attempting to contact. We can also block C&C communications based on network traffic patterns.
  • Phishing sites can be blocked based on the content of HTTP headers, the body, or both.
  • We can block multiple types of malicious redirections, including flash redirections.
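
The phishing example might look something like this sketch, which combines URL reputation from a cloud query with simple content markers. The marker strings, field values, and two-hit threshold are illustrative only.

```python
# Illustrative network rule: reputation first, content inspection second.
# Marker strings and thresholds are invented for this sketch.
PHISHING_MARKERS = ("verify your account", "suspended", "confirm your password")

def is_phishing(url_reputation: str, http_body: str) -> bool:
    if url_reputation == "bad":
        return True   # reputation alone is enough to block
    if url_reputation == "good":
        return False
    body = http_body.lower()
    hits = sum(marker in body for marker in PHISHING_MARKERS)
    return hits >= 2  # unknown reputation: require corroborating content
```

Real rules also see DNS queries, TLS certificates, and traffic patterns, which is what makes things like C&C blocking possible.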

Network-level interception technologies allow us to extract and store metadata about objects arriving on a system. That metadata is made available to other protection components on the system, where it is factored into their decision processes.

Memory Detection Logic

Hunting for signs of malicious code in active memory is a useful technique, especially when installing onto a system that wasn’t previously protected. Certain types of malware, especially rootkits, can only be detected using this method.

Some malware actively prevents the installation of endpoint protection software onto a system. Cleaning a system prior to running our installer is, therefore, an important step. We run forensics routines that include memory scanning capabilities early in the installation phase of our product in order to remove any infections that may have occurred in the past. These same forensics routines can be run periodically or manually on the system to ensure nothing slipped past our protection layers.

As I mentioned earlier, getting through layers of obfuscation can sometimes be a complex task. By looking at the address space that malware is using, you can often find a deobfuscated image. However, this isn’t always the case. Off-the-shelf packers usually dump the program into memory and kick it off, but if the malicious program is using its own custom obfuscator, there are various tricks it can use to keep itself obfuscated, even in memory. A simple way to do this would be to obfuscate strings. In a more complex case, code is converted into a proprietary, one-off, compilation-time generated instruction set that is run under a custom virtual machine.
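
At its simplest, memory scanning is pattern matching over a captured address-space dump. Here’s a sketch; the detection names and byte patterns are placeholders, not real signatures.

```python
# Placeholder patterns standing in for real in-memory detection logic.
PATTERNS = {
    "Example.Rootkit.A": b"\xde\xad\xbe\xef",
    "Example.Bot.B": b"EVIL_MARKER",
}

def scan_memory(dump: bytes):
    """Return (name, offset) pairs for every pattern found in the dump."""
    hits = []
    for name, pattern in PATTERNS.items():
        offset = dump.find(pattern)
        while offset != -1:
            hits.append((name, offset))
            offset = dump.find(pattern, offset + 1)
    return hits
```

Against the custom-VM obfuscation described above, plain byte patterns like these aren’t enough, which is why memory detection logic can also run the same kinds of unpacking and analysis routines used against files on disk.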

Back End Detection Logic

The large amounts of processing power available in our back end systems grant us the perfect opportunity to do the sort of expensive, time-consuming examination of samples that wouldn’t be possible on the endpoint. By performing a series of analysis steps and combining the output of those operations with metadata collected from client queries and our sample storage systems, we can automatically process and categorize large volumes of samples in a way that wouldn’t be possible by hand. We also get a huge amount of good threat intelligence out of these processes.

Some of the same technologies used in our endpoint products can be re-purposed to perform rigorous static analysis procedures. This is achieved by writing special detection code designed to extract the metadata and characteristics of a sample, instead of simply delivering a verdict. This is just one method that we use to dissect samples. Other systems, designed to process particular types of files, also provide our decision logic with relevant metadata. Additionally, we process samples through systems designed to heuristically determine suspiciousness based on structural features.

Malware recognition automation

Classy and Gemmy, two malware recognition automation components developed as part of a research project with Aalto University about five years ago.

URLs and samples that can be executed are sent into sandbox environments for dynamic analysis. A trace of the sample’s behavior is obtained by instrumenting the environment where the sample is executed. That trace provides additional metadata used to categorize the sample.

Sometimes these types of analyses provide us with new samples. For instance, a malicious sample may attempt to connect to a command and control server. When this behavior is observed, the address of the server is fed back into the system for analysis. Another example, and one I mentioned earlier, relates to how some malware drop embedded payloads. When this happens, the dropped sample is fed back into the system. This is why you’ll find that we have detection logic not just for initial malicious payloads, but also for all subsequent pieces in the attack chain.

Since our products query the cloud, we are able to gather metadata over our entire customer base. This metadata can be used to determine the suspiciousness of a sample. For instance, the prevalence of a sample can give us clues as to whether it might be malicious.

Once we have processed a file or URL fully, we feed all gathered metadata into a rules engine that we call Sample Management Automation (SMA). The rules in this system are partially hand-written and partially adaptive, based on changes in the threat environment. This system is the brain of our whole operation. It does everything from categorizing and tagging samples in our storage systems to determining verdicts and alerting on new threats.
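
The gist of such a rules engine can be sketched as a prioritized list of conditions over the gathered metadata. To be clear, this is a toy in the spirit of what’s described above – the field names, rules, and thresholds are invented, and the real SMA system is vastly more capable.

```python
# Toy metadata-driven rules engine: first matching rule wins.
# Conditions, verdicts, and tags below are invented for illustration.
RULES = [
    (lambda m: m.get("sandbox_score", 0) > 8,             "malicious",  "dynamic"),
    (lambda m: m.get("prevalence", 0) > 100000
               and m.get("signed", False),                "clean",      "popular"),
    (lambda m: m.get("heuristic_score", 0) > 0.7,         "suspicious", "heuristic"),
]

def evaluate(metadata: dict):
    """Return (verdict, tag) for a fully-processed sample's metadata."""
    for condition, verdict, tag in RULES:
        if condition(metadata):
            return verdict, tag
    return "unknown", None
```

The tag would drive the categorization and storage side, while the verdict feeds back out to clients and alerting.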

Beta Detections

No good software is written without proper testing. Sometimes our researchers figure out creative ways to detect malware, but aren’t sure how they’ll work in the real world. For instance, a new type of detection method might end up being too aggressive, and trigger a lot of false positives, but we won’t really know that until it’s out in the wild. In these cases, we use beta detections.

A beta detection reports back to us what it would have done, without actually triggering any actions in the product or system itself. By collecting upstream data from these pieces of code, we can tune them to behave optimally, and release them once they’re working as intended.
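
The report-only mechanism can be sketched in a few lines: when a detection is flagged as beta, a match is recorded upstream but no action is taken. The telemetry list and field names are stand-ins for the real reporting channel.

```python
# Stand-in for the real upstream telemetry channel.
telemetry = []

def run_detection(name, matcher, sample, beta=False):
    """Run one detection over a sample. Beta detections report what they
    would have done instead of acting on the verdict."""
    if not matcher(sample):
        return "allow"
    if beta:
        telemetry.append({"detection": name, "would_have": "block"})
        return "allow"   # report-only: never triggers an action
    return "block"
```

Once the collected telemetry shows the detection behaving as intended, flipping `beta` off turns it into a live detection.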

Beta detections are often pretty cutting-edge stuff. In many cases, they’re new pieces of technology designed to deal with threats we’re already catching, only by using different, more efficient methods. We’re always looking to catch as many real samples as possible with every hand-crafted detection we deploy, so beta detections also provide us with a nice test bed for new theories on that front. We utilize beta detections both in our back ends, and on endpoints.

Conclusion

As you’ve probably gathered, detection logic comes in many forms and can be used to do a whole bunch of different things. The technologies behind all of these different detection methods are designed to work together to protect machines and users against a range of different attack vectors. The whole system is rather complex, but it’s also very powerful. And we’re evolving these technologies and methodologies constantly as the threat landscape changes. By using multiple different protection layers, and not putting all of our eggs into one basket, we make it difficult for attackers to bypass our technology stack. So the next time you hear someone talking about “signatures” in the context of modern endpoint protection products, you can be sure they’re either rather uninformed, or they’re peddling fiction.

P.S. In case you’re wondering, I was joking above when I wrote about not knowing if it was okay for me to reveal this information. We’ve been getting lots of nice feedback on this explainer series, and we’re more than happy to openly describe our technologies and processes to everyone.


