Analysis

As we explored in Chapter 3, there is a natural division of privilege and access between user and kernel execution. Even at administrative privileges, this boundary is still important to acknowledge and protect. Once an attacker gains the kernel, everything, absolutely everything, is under the attacker’s control.

Since it’s so dangerous, why would designers install software into the kernel at all (or make use of kernel software)? In order to capture events and activities across every application, as we noted in Chapter 3, security software will have to “breach” or get out of its assigned process space. The kernel has visibility to everything—all drivers and their associated activity flow through the kernel. In order to gain access to all this activity, and protect against anything abnormal or malicious, security software has to first achieve full visibility. This is typically achieved by running software within the kernel.

The user mode application must initialize and start the kernel software (usually as a driver). But after the kernel software has been started, the flow should be from kernel to user. In this way, attack from user to kernel is prevented once the entire security system has been started. That still leaves the attacker an avenue to the kernel through the initialization sequence of the kernel driver during startup. That call cannot be eliminated; kernel drivers must get started; kernel services need to be initialized and opened. This one opening call remains an attack surface that will need mitigation.

In Figure 9.1, there is only one kernel mode component, the module that “watches,” that is, catches events taking place throughout the operating system and from within applications.

The kernel driver should not allow itself to be opened from just any binary. Doing so, allowing any user mode binary to open it, opens up the attack surface to whatever code happens to get an opportunity to run on the operating system. Instead, the attack surface can be reduced if the kernel driver performs some sort of validation that allows only the one true Antivirus (AV) Engine to open it. Depending upon the operating system’s capabilities, there are a few methods to provide this authentication, such as validating a cryptographic signature over the binary, or recording a hash of the binary that can be recalculated and compared at open time. What the kernel driver must not do is simply check the name of the binary. Filenames are easily changed.

Cryptographic signatures are best, but not all operating systems can provide this capability to kernel mode processes. The solution to this problem, then, is operating system dependent. Hence, without knowing the operating system specifics, the requirement can be expressed generally. It may also be true that our endpoint security software must run under many different operating systems. We can express the requirement at a higher level, requiring authentication of the AV Engine binary, rather than specifying the authentication method. Whatever executable validation method is offered by the operating system will fulfill the requirement.
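To make the requirement concrete, here is a minimal sketch, in user-space C with OpenSSL’s EVP digest API, of what “authenticate the binary content, not the filename” might look like. All names and the digest value are hypothetical; a real implementation would run this check in kernel mode against the calling process’s executable image, using whatever validation primitive the operating system offers.

```c
/*
 * Illustrative sketch only: identify the one true AV Engine by the
 * content of its binary, never by its easily changed filename.
 * Build with: cc sketch.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

/* Known-good SHA-256 of the AV Engine binary, baked in at build time
 * (hypothetical value). */
static const unsigned char EXPECTED_DIGEST[32] = {
    0x2f, /* ... remaining 31 bytes of the real digest ... */
};

static int digest_file(const char *path, unsigned char out[32])
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    if (!ctx) { fclose(f); return -1; }
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    int err = ferror(f);
    fclose(f);

    unsigned int outlen = 0;
    EVP_DigestFinal_ex(ctx, out, &outlen);
    EVP_MD_CTX_free(ctx);
    return err ? -1 : 0;
}

/* Returns 1 only if the caller's binary matches the expected digest. */
int caller_is_av_engine(const char *caller_path)
{
    unsigned char actual[32];
    if (digest_file(caller_path, actual) != 0)
        return 0;
    return memcmp(actual, EXPECTED_DIGEST, sizeof actual) == 0;
}
```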

Previously, we examined three components that are shown in Figure 9.1. A user interface is shown that allows the user to control and configure the security software to meet the user’s needs. There is an AV Engine that performs the security analysis on files, network traffic, and other events. The events are captured through the kernel driver sitting across the kernel/user mode trust boundary. Obviously, the user interface must be accessible by the user; it must run as an application in “user mode,” like any other application. However, unlike a word processor or spreadsheet application, the user interface can set security policy, that is, the user can turn off and turn on various security functions, such as whether files are scanned as they’re opened (“realtime scanning”), or set policy such that suspicious files are only scanned at scheduled times at a periodicity of the user’s choosing. From a security perspective, this means that the user interface has the power to change how security is implemented on the machine—the power to change how well or how poorly the machine is defended by the security software.

In consumer-oriented products, the user will have the ability to turn security functions off and on. In corporate environments, usually only system administrators have this power. The user interface can take control of the security of the system. That, of course, makes the user interface component an excellent target for attack.

Likewise, the AV Engine performs the actual examination to determine whether files and traffic are malicious. If the engine can be fooled, then the attacker can execute an exploit without fear of discovery or prevention. Consequently, a denial of service (DoS) attack on the AV Engine may be a very powerful first step to compromising the endpoint. To contrast this with the user interface, if the attacker is successful in stopping the user interface the security services should continue to protect, regardless. On the other hand, if the attacker can stop the AV Engine, she or he then has access to an unprotected system. Each of these components presents an important target; the targets offer different advantages, however.

As was explained in Chapter 3, any component that runs in the kernel should be considered a target. The kernel acts as the superuser, with rights to everything in the operating environment and visibility to all events. If a kernel component contains an exploitable vulnerability, the consequences of exercising that vulnerability are catastrophic, at least in the context of the running operating system.

Testing shows the presence, not the absence of bugs.1

Despite best efforts, no software can be proved error free. With that fact in mind, the defense should be built such that the impact of a vulnerability, perhaps a vulnerability that can be used to run code of the attacker’s choosing, will be minimized. If we can’t prevent the leak into production of a dreadful vulnerability, at least we can make the exercise, the access to that vulnerability, as difficult as possible. For security software, we have already stated that rigorous assurance steps must be built into the testing for the software. Still, we shouldn’t completely depend upon the success of the testing; rather, we should make it very difficult for an attacker to access a kernel component.

If the kernel driver is written correctly to its requirements (see Requirements below), it should only accept an incoming connection at startup, exactly once, and from the validated engine component only. By meeting these requirements, the attacker must compromise the engine in order to have access to the kernel mode driver. Additionally, since the driver initialization and open happen very early in the startup sequence of the operating system, the attacker must be ready and waiting to take advantage of any vulnerability at that early moment during startup. That places another barrier to the attacker. Although this additional barrier is not insurmountable, it does mean that merely getting the attacker’s code to execute in user space does not guarantee access to the kernel driver.

On most operating systems, the kernel driver can be protected by guaranteeing that only a single open operation may occur. Further, the open call may only be made from a validated binary. An additional restriction can be that the open driver call may only take place during the startup sequence. The attacker has only one avenue through the endpoint security software to get to the kernel. And that avenue is solely through the engine, once, at a point at which there is no logged-in user.
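Pulling these three restrictions together, the open-handler policy might be sketched as follows. This is illustrative user-space C, not a real driver entry point; the startup-window and caller-validation checks are stubbed, and all names are hypothetical.

```c
/* Three gates, per the requirements above: only during the startup
 * window, only from the validated engine binary, and exactly once. */
#include <stdatomic.h>
#include <errno.h>

/* Stubs standing in for the platform-specific checks. */
static int within_startup_window(void) { return 1; } /* e.g., time since boot */
static int caller_is_valid_engine(const char *path) { (void)path; return 1; }

static atomic_flag already_opened = ATOMIC_FLAG_INIT;

int av_driver_open(const char *caller_path)
{
    if (!within_startup_window())
        return -EACCES;   /* too late: the boot window has closed */
    if (!caller_is_valid_engine(caller_path))
        return -EACCES;   /* unvalidated caller: see the hash sketch above */
    if (atomic_flag_test_and_set(&already_opened))
        return -EBUSY;    /* a second open is never allowed */
    /* ... hand the event channel to the engine ... */
    return 0;
}
```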

The foregoing implies that an attempt to get from user mode to the kernel driver will have to be attached to operating system initialization (the boot sequence) during normal operation. (Usually, such a system change requires higher privileges.) Then the attacker must either lie in wait or force a restart. A forced restart may get noticed by the user as unusual and unexpected, in response to which the user might perform a security check. This is not an easy or straightforward attack scenario. It has been done. But it’s far from ideal and fraught with possible failures.

The AV engine itself will have to be written to be as self-defensive as possible. Even if the AV engine validates the user interface before allowing itself to be configured by the user interface, still, those input values should not be entirely trusted. The user interface may have bugs in it that could allow attacker control. In addition, the user interface might pass an attack through to the engine from external configuration parameters. We’ll examine that input in a moment.

The AV engine has another input. In order to determine whether files are malicious, they may have to be opened and examined. Most certainly, in today’s malware-ridden world, a percentage of the files that are examined are going to contain attacks. There’s nothing to stop the attacker from placing attacks in suspicious files that go after any vulnerabilities that may lie within the file examination path. Thus, the AV Engine must protect itself rigorously while, at the same time, examining all manner of attack code. In fact, the entire path through which evil files and traffic pass must expect the worst and most sophisticated types of attacks. The file open, parse, and examination code must resist every imaginable type of file-based attack.

If the foregoing comes as a surprise to you, I will note that most industrial-grade, commercial antivirus and malware engine examination code must resist attack in precisely this manner; the need for rigorous self-protection has been in place for many years,[1] as of the writing of this book. Rigorous self-protection has become quite normal in the world of security software, and, especially, malware protection software.

That this code is written to be expressly self-protective and resistant doesn’t mean that bugs don’t get introduced to these sorts of engines from time to time. They do, and they will continue to be. But any vendor with some amount of integrity understands this problem and will do their best to avoid getting caught out by a bit of attack code that was in a malicious file. Still, I would count “self-protection” and “attack resistance” as security requirements for the entire examination code path. What that comes down to is careful memory handling, safe library functions, and rigorous input validation.
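As one small illustration of what careful memory handling and rigorous input validation mean at the code level, consider a parser for a hypothetical length-prefixed record format. Every length claimed by the (hostile) file is checked against the bytes actually available before any copy takes place; the format itself is invented for this sketch.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Parse one record of the form [u32 little-endian length][payload...].
 * Returns the payload length on success, -1 on any inconsistency. */
long parse_record(const uint8_t *buf, size_t avail,
                  uint8_t *out, size_t out_cap)
{
    if (avail < 4)
        return -1;                        /* truncated header */
    uint32_t len = (uint32_t)buf[0]
                 | ((uint32_t)buf[1] << 8)
                 | ((uint32_t)buf[2] << 16)
                 | ((uint32_t)buf[3] << 24);
    if (len > avail - 4)
        return -1;                        /* claims more bytes than exist */
    if (len > out_cap)
        return -1;                        /* would overflow caller's buffer */
    memcpy(out, buf + 4, len);            /* copy only what was validated */
    return (long)len;
}
```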

Implementing input validation in general-purpose, data-driven code is actually not a simple problem. This has been mentioned above (the business analytics data gathering and processing modules). For the security person to blithely declare “validate all inputs” glosses over a very real and somewhat difficult problem: If the coder doesn’t know, precisely, what the inputs are, how can the input be validated?

Although a complete solution for a data-driven, general-purpose parsing and examination engine is beyond this book, I do want to note in passing that this remains a nontrivial software design and implementation problem. The solution set is likely to contain data-determined ranges and acceptable input sets based upon each file format. In addition, in order to prove the defenses, a level of assurance may be attained through a formal and thorough set of software fuzzers[2] that become a part of the parser and the examination engine’s test plan.
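As a sketch of how such a fuzzer joins the test plan, the record parser above can be wired into a coverage-guided harness. This example uses libFuzzer’s standard entry point (compile with clang -fsanitize=fuzzer,address), one common fuzzing tool among several:

```c
#include <stdint.h>
#include <stddef.h>

/* The parser from the earlier sketch. */
extern long parse_record(const uint8_t *buf, size_t avail,
                         uint8_t *out, size_t out_cap);

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    uint8_t out[4096];
    /* The parser must survive any byte sequence whatsoever; the
     * sanitizers catch memory errors, so the return value is
     * deliberately ignored. */
    (void)parse_record(data, size, out, sizeof out);
    return 0;
}
```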

Figure 9.1 introduces a few more components to endpoint security software’s architecture. Because the capability to examine many different file types is a specialized function, it’s typical to place the file-parsing capability within its own, specialized module. That’s how our fictitious antivirus software is written, as well. The makers of the software want to examine as broad a range of file types as possible. This is so that attackers cannot simply package up their attack in some obscure file type, which then allows the attack to slip past the security software. In order to offer the customer as complete a set of protections as possible, the software needs a “can opener” that is a “jack of all trades,” readily opening and understanding the formats of just about every file type imaginable that may occur on each operating system that’s supported by the software. So, as is typical in this situation, the file-opening software is a separate module.

Suspicious files are passed to the file module to be opened and normalized. The normalized file is then passed back for examination by the engine. The file-parsing module only accepts communication from the engine and no other component. If the file-parsing module requires any configuration, it is passed through from the user interface to the engine and then passed when the file parser is started. Would you consider the delivery of configuration information an attack surface of the file-parsing module?

Another special component is the communications module. Figure 9.1 presents a standalone system, so why does it need a communications module? In Figure 9.2, we can see that the system engages in communications with automated entities beyond the endpoint itself. Even if this were not true, a communicator might be employed for intermodule communications within the system. In the simple, independently operating case, it would be a matter of design style whether to abstract communications functions into a separate module. A separate module was chosen in this case not only because the system does in fact support inbound and outbound communications (as depicted in Figure 9.2) but also for reasons of performance.

Figure 9.2 Endpoint security software with management.

The engine must react to events in as fast a manner as possible in order to stop an attack or, at the very least, recognize an attack as quickly as possible. Time to identification is a critical factor. The software maker wants to identify attacks in as little processing time as possible. Communications, and especially network communications, can take quite a lot of computer time. Although 250 ms (one quarter of a second) is hardly a blink of an eye to a human, a huge amount of processing can take place in that interval; 250 ms is almost an eon in computer time. By spinning off network communications to a specialized module, the engine code saves processing time for what’s important. The engine won’t have to block continued processing until a response has been received—the communication module does this instead. In fact, the AV engine won’t even waste any precious processing time setting up the asynchronous response handler. All of these time-intensive functions can occur outside the security processing chain. In this system, all communications have been extracted into their own module in order to remove them from performance-hungry examination code.
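A minimal sketch of this split, using a bounded ring buffer and POSIX threads (all names and sizes are illustrative): the engine performs a cheap, constant-time enqueue and returns immediately to scanning, while a dedicated communicator thread absorbs all of the network and log-file latency.

```c
#include <pthread.h>
#include <string.h>

#define QUEUE_CAP 1024

struct alert { char msg[128]; };

static struct alert queue[QUEUE_CAP];
static size_t head, tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Called from the engine's hot path: O(1), never touches the network. */
int engine_post_alert(const char *msg)
{
    pthread_mutex_lock(&lock);
    if ((tail + 1) % QUEUE_CAP == head) {   /* full: drop rather than stall */
        pthread_mutex_unlock(&lock);
        return -1;
    }
    strncpy(queue[tail].msg, msg, sizeof queue[tail].msg - 1);
    queue[tail].msg[sizeof queue[tail].msg - 1] = '\0';
    tail = (tail + 1) % QUEUE_CAP;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
    return 0;
}

/* Communicator thread: all slow I/O (network, log files) happens here,
 * outside the security processing chain. */
void *communicator_main(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)
            pthread_cond_wait(&nonempty, &lock);
        struct alert a = queue[head];
        head = (head + 1) % QUEUE_CAP;
        pthread_mutex_unlock(&lock);
        /* send_to_destinations(&a); -- network and log writes go here */
        (void)a;
    }
    return NULL;
}
```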

The engine sends events and alerts to the communications module, which then passes these along to the destination, whether that destination is local, that is, the user, or some other destination. In this system, the communicator is also responsible for any log and event files that exist on the machine.

The user interface passes the communications module’s configuration to it at system startup time. The communications module takes input from the engine in the form of alerts, which then must be passed along to any external management software and to the user, or placed into the log files on local storage. Communications also takes its configuration, and any user actions that need to be passed to any destination, from the user interface. Where communications are sent is data driven, dictated by the configuration given to the module during its initialization or when the configuration changes.
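A small sketch of what “data driven” might mean here: a routing table loaded from configuration at initialization decides where each alert goes, with nothing hard-coded. The severity names and destinations are invented for illustration.

```c
#include <stdio.h>
#include <string.h>

enum dest { DEST_USER, DEST_MGMT, DEST_LOG };

struct route { const char *severity; enum dest where; };

/* In the real system this table would be populated from the
 * configuration handed over at module initialization. */
static const struct route routing_table[] = {
    { "critical", DEST_MGMT },
    { "warning",  DEST_USER },
    { "info",     DEST_LOG  },
};

void route_alert(const char *severity, const char *msg)
{
    for (size_t i = 0;
         i < sizeof routing_table / sizeof routing_table[0]; i++) {
        if (strcmp(routing_table[i].severity, severity) == 0) {
            printf("-> dest %d: %s\n", routing_table[i].where, msg);
            return;
        }
    }
    /* Unknown severity: fail safe by logging locally. */
    printf("-> dest %d: %s\n", DEST_LOG, msg);
}
```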

If the communications module can be stopped by an attacker, malicious actions can thereby be hidden from the user or any management software monitoring events on the machine. If the module can be compromised, the attacker might have the ability to obscure, or even change, events as they’re sent onwards. Further, inbound events, such as the management software changing the security policies on the endpoint, could be changed to the attacker’s benefit. For this reason, the communications module must validate what it is given and must only allow the other modules in the system to send it messages.

Inward-bound communications, such as software updates, will flow from Communicator to the user interface manager. In this system, the user interface module is also responsible for updating software, such as a new set of malware identity signatures, new policies (and other configuration items), or even updates to the system’s modules themselves. We will take up the requirements for inbound communications below, with the introduction of management software in Figure 9.2.

As we have seen with other systems that we’ve analyzed, configuration files are a prime target. Security policy and the functioning of the system can be changed through the software’s configuration file. In a product that must operate whether it has a network connection or not, such as endpoint security software, the product must configure itself based upon files kept on local storage. That’s a thorny problem because the configuration files are a target that can change protections. Why?

The software that reads the configuration files, as we have seen, must run in user space and at lower privileges. Typically, this would be the logged-in user of the system. The logged-in user has the right to open any file available for that level of privilege. Since the software is running as the logged-in user and must have the rights to read its own configuration file, under many operating systems, the configuration file can be read by any application running as the logged-in user. That’s a problem.

Furthermore, if the user decides to make changes, the user interface software has to be able to write the configuration files back to disk, once again, as the logged-in user. Do you begin to see the problem? If the logged-in user’s configuration module (the user interface, here) can access and change the files, so can the user. This means that any of the user’s applications can change the files. This also implies that all an attacker has to do is get the user to run an application whose malicious task is to change the configuration files. Bingo! Security software would now be under the attacker’s control. And that must be prevented. It’s a thorny problem.

I’m sure it’s obvious to you that the configuration files used by any security software constitute a high-value target. This is no different from other systems that we’ve examined. And it is a standard pattern, not only for security software but for any software that provides a critical and/or sensitive function, as well. The configuration file is typically a valuable target.

Different operating systems provide various mechanisms for addressing this problem. Under the UNIX family of operating systems, an application can be started by the logged-in user, but the application can switch to another user, a specialized user that only has the capability to run that software. This non-human, application-only user will usually only have rights to its own files. The Windows family of operating systems has other mechanisms, such as slipping into a higher privilege for a moment while sensitive files are read or written and then slipping back to the logged-in user for the continuing run. For both of these mechanisms, the superuser can circumvent these protections easily. That is the way the superuser privileges are designed: The superuser can do whatever it wants within the operating system. However, it should be noted that if an attacker has gained superuser privileges, there is usually no need to mess around with configuration files, since the attacker already owns the operating system and all its functions and can do whatever is desired. Further exploits become unnecessary.
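For the UNIX mechanism, a minimal sketch of dropping to a dedicated, application-only user might look like the following. The service user name “avsvc” is hypothetical, the process must begin with sufficient privilege for the switch to succeed, and the configuration files would be owned by that user with restrictive permissions (for example, mode 0600).

```c
#include <unistd.h>
#include <pwd.h>

int drop_to_service_user(void)
{
    /* Dedicated, non-human user that owns only the software's files. */
    struct passwd *pw = getpwnam("avsvc");
    if (!pw)
        return -1;

    /* Real code would also call setgroups() to clear supplementary
     * groups. Order matters: drop the group first, then the user. */
    if (setgid(pw->pw_gid) != 0)
        return -1;
    if (setuid(pw->pw_uid) != 0)
        return -1;

    /* Verify the drop is irreversible: if we can regain root, fail. */
    if (setuid(0) == 0)
        return -1;
    return 0;
}
```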

Typically, therefore, security software protection mechanisms don’t attempt much, if any, restriction of superuser privileges. These “super” privileges are designed into modern operating systems and, as such, are very difficult to protect against. Instead, it is usual to focus on restricting what the logged-in user can do. In this way, the focus is on preventing privilege escalation, rather than preventing what escalated privileges can accomplish.

I will note that some security software in today’s market employs sophisticated file restriction mechanisms that go beyond what operating systems provide. For the sake of this analysis, we will not delve into these more extraordinary measures. It’s enough to understand that the configuration file, in and of itself, is a target and needs protection. And the inputs in the user interface that read and process the configuration file also comprise an important attack surface that requires protection. We will confine ourselves to protections that can be provided by the operating system and obvious software protections that can be built into the security software. Protecting files and secrets on local storage could fill an entire chapter, or perhaps even an entire book, devoted solely to this subject. It is enough, for this analysis, to identify the attack surfaces requiring protection: the configuration files, the input fields taken from the user, and the inputs through which the configuration file is read.

There are a few subtle conditions when writing files that may allow an attacker to misuse the output to write files of the attacker’s choice. Indeed, there are vulnerabilities in file output routines that can even allow an attacker to execute code. The point is that an attacker may misuse the user interface’s file-writing routines to play tricks on the application or the operating system, to get malicious code onto the machine, or to get malicious code executed through the security software’s file output routines.

Furthermore, the user interface will have to be written such that items taken from the user through the user interface can’t directly be output to the configuration file. That might be a convenient way to get a configuration file attack back through the user interface. The foregoing suggests, of course, that outputs to the configuration file will also require security attention. The output should be considered an attack surface.

It is perhaps obvious, and not worth mentioning except for completeness, that inputs of the user interface are, indeed, an attack surface. We’ve seen this in previous analyses. The treatment for input attack surfaces is always the same at a high level: input validation. Injections of any sort must be prevented.
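As a sketch of that treatment for one such input, consider a scheduled-scan time taken from the user interface. An allow-list check admits exactly the expected form and nothing else, so no injectable content can ever reach the configuration file. The field and its format are invented for illustration.

```c
#include <ctype.h>

/* Accept exactly "HH:MM", 24-hour, numeric, in range; reject anything
 * else. Each check short-circuits at the terminator, so a short string
 * never causes a read past the end. */
int valid_scan_time(const char *s)
{
    if (!isdigit((unsigned char)s[0]) || !isdigit((unsigned char)s[1]))
        return 0;
    if (s[2] != ':')
        return 0;
    if (!isdigit((unsigned char)s[3]) || !isdigit((unsigned char)s[4]))
        return 0;
    if (s[5] != '\0')
        return 0;
    int hh = (s[0] - '0') * 10 + (s[1] - '0');
    int mm = (s[3] - '0') * 10 + (s[4] - '0');
    return hh < 24 && mm < 60;
}
```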

Do you see an attack surface that hasn’t been covered in the preceding text? By all means, once again, check my work. Be my security architect peer review. Has anything significant been missed?

In Figure 9.2, we introduce management of the security software. If for no other reason than updating the antivirus signatures for new strains of viruses, this architecture would not be realistic without this component. Somehow, the AV Engine has to be easily and readily updated on a regular and continuing basis. New strains of computer viruses occur at an alarming rate, sometimes thousands per day. Somehow, that information has to get to our system as there won’t be timely protection until it has been updated. This is one of the responsibilities of the communications module. The update has to come from somewhere trustworthy.

If an attacker can control the update of the malware signatures, they may be able to hide an attack. For instance, imagine that the update to the signature set identifies a new strain of virus. If the attacker can prevent the update, through whatever means, then the new virus strain will not be identified and, thus, prevented. Consequently, signature (and code) updates are important targets requiring significant protections.
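One standard protection, sketched below, is to pin the vendor’s public key on the endpoint and refuse any update package whose digital signature does not verify. This uses OpenSSL’s EVP verification API; error handling is trimmed for brevity, and all names are hypothetical.

```c
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/pem.h>

/* Returns 1 if `pkg` verifies against `sig` using the vendor public key
 * stored at `pubkey_pem_path`; 0 otherwise. Only a verified package
 * should ever be unpacked and applied. */
int update_is_authentic(const unsigned char *pkg, size_t pkg_len,
                        const unsigned char *sig, size_t sig_len,
                        const char *pubkey_pem_path)
{
    int ok = 0;
    FILE *f = fopen(pubkey_pem_path, "r");
    if (!f) return 0;
    EVP_PKEY *key = PEM_read_PUBKEY(f, NULL, NULL, NULL);
    fclose(f);
    if (!key) return 0;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    if (ctx
        && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, key) == 1
        && EVP_DigestVerify(ctx, sig, sig_len, pkg, pkg_len) == 1)
        ok = 1;   /* signature checks out against the pinned key */

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(key);
    return ok;
}
```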

  • [1] Although the media may trumpet the occasional instances in which malware engines fail, it may place these into context to consider that a typical instance of a malware engine will examine tens of thousands of samples correctly and, most importantly, safely.
  • [2] Fuzzing is a software testing technique that employs random input values in order to flush out instabilities in software. Core Software Security: Security at the Source, by James Ransome and Anmol Misra (CRC Press, © 2014), has a more complete description of fuzzing and its application to security testing.
 