Chapter 4

Creating Secure Code

First Principle of Code Protection: Code Isolation and Confinement

In today’s Internet-driven world, we often find ourselves running untrusted code on our devices. This code arrives in programs, applications, extensions, plug-ins, and codecs for media players, all of which we have become accustomed to naively trusting and downloading. Malware creators take advantage of unknowing users who download or run infected code, and such code can target a variety of devices, including our computers, tablets, and smartphones. Hackers may hide their malware in any component where they see an opportunity to catch unaware users. Specialized media codecs are frequently used to conceal code; other examples include PDF viewers and widely used applications such as Microsoft Outlook. Legacy routines such as UNIX’s Sendmail, whose exploitation was chronicled in the famous book The Cuckoo’s Egg, have been targeted by hackers for decades. Furthermore, so-called honeypots, advertised Internet websites from which users frequently retrieve what appear to be useful applications or information, can contain hidden malware that downloads along with the intended software.

As standard practice, when users recognize that an application may be running untrusted code, they should kill that application immediately. Users should also run virus scanners regularly to check for and delete any infected files and malware.

In an ideal situation, modules and applications running on the same operating system (OS) should be built to function separately from each other in order to preserve their integrity and to protect the system’s software. The principle is this: if applications run together, a security threat that affects one part of the system can easily spread to all the other applications and system areas operating in that shared portion of the computer, whereas the more the applications are separated, the more difficult it becomes for untrusted code and malware to cause extensive damage. Hence, the principle of confinement, the process of separating applications from each other and from system components, ensures that misbehaving applications are not able to harm the rest of the applications and systems software of the overall system.

Code Isolation Techniques

Code isolation and confinement techniques are employed to limit potential cross infection of code, force any damage to stay within an isolated area, and protect against cross-area contamination. Within each isolation area, modularization and isolation methods are employed to minimize spread between the modules of the enclosed isolation unit. These techniques make it more difficult for malware to transmit its effects between modules in an isolation unit by limiting their interaction and information exchange, and they completely block any spread across the gap to separate confinement areas, protecting the isolated “good-code” modules.

There are four commonly employed confinement approaches designed to mitigate the effects of untrusted code and possible malware upon our isolated and protected production code. These are:

1.  Physical confinement: The most primitive of these confinement methods is physical separation. The idea of this process is to design hardware that is only partially integrated. In essence, there should be physical air space between one piece of equipment and another. The same would apply for the separation of networks, if possible, with physical space between them. Physical confinement methods provide some definite advantages. If one device is attacked, other devices or networks that are separated physically will remain unharmed. However, this confinement method has some key drawbacks. It is difficult to operate a data center when all its components are physically isolated from one another. The user terminals and PCs that access the computer systems can be physically isolated from those systems, but many of the applications and routines that run on the system need to interact with each other, making it nearly impossible to preserve this physical isolation. Physical confinement is illustrated in Figure 4.1.

2.  Virtual confinement: The second approach involves the creation of “virtual” machines as a method of confinement. This is typically employed in data centers where one computer will run many different applications. Under virtual-machine approaches, the confinement occurs within each device as applications are isolated to specific OSs and then multiple OSs are hosted on one computer, each OS running its own set of applications. For example, application A runs on Windows 10 while application B runs on Linux, with both OSs running on the same device. These OSs do not communicate with each other in any manner. This confinement method safeguards those applications running over one OS from those running over another. Furthermore, it safeguards one OS itself from anything affecting the other. However, the drawback is that such an arrangement is difficult to manage. An example of virtual confinement is shown in Figure 4.2.

Image

Figure 4.1 Separating machines by physically placing them on separate networks.

3.  Operating system confinement: As a third method of program code confinement, restrictions are imposed on OSs themselves by affecting the process by which these OSs may talk to each other. An intermediary system is employed to facilitate communication between the two different OSs as a means to enforce confinement while enabling communication. The intermediary OS performs the task of first locating the two communicating OSs and then using a standard system call to communicate information from one OS to the other. Based on the criteria identified by a set of parameters within the system call, the intermediary device can be used to isolate infected or untrusted code located in the originating OS by choosing not to enable communication between OSs if any of the parameters appear suspicious.

4.  The isolation of threads: A fourth method of confinement is to isolate threads of code. Within the same address space, one thread is made to run in parallel with another while providing software fault isolation (SFI). Because the threads are isolated from each other, they do not share the information they carry from one specific source. Instead, each treats its operation on that information as separate and independent, even though the information may originate from the same source, and the separated threads do not exchange their independently computed results.

Returning to the analysis of system calls, the Open Systems Interconnection (OSI) seven-layer model exemplifies the use of this process. The seven-layer model’s encapsulation process, under which each successive layer adds a header to the previous encapsulation packet or frame, utilizes a system of such system calls. The software at each layer communicates an addressed data packet to the next layer by calling the OS and requesting that the OS pass that addressed packet on to the next layer’s software. It is important to point out that the applications and the software implementing each layer of the model (e.g., Transmission Control Protocol [TCP] at level 4, Internet Protocol [IP] at level 3, and Ethernet at level 2) don’t know where in the computer each successive layer is located. Only the OS knows, and the OS is in charge of maintaining and using this location information. Therefore, system calls to the OS are used to relay information between each layer of the model. System calls occur as each layer of the seven-layer model needs to pass information to the next layer (e.g., TCP to IP). This system call process, carried out by passing location information through and by means of the OS, is a means of facilitating confinement and protection.
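The layered hand-off described above can be sketched in Python. The layer names are real, but the bracketed header format and the encapsulate/decapsulate helpers are purely illustrative stand-ins for real protocol encodings; they show only how each layer prepends its header without knowing where the next layer lives.

```python
# Illustrative sketch of OSI-style encapsulation: each layer prepends
# its own header, and the loop plays the role of the OS dispatcher
# that relays packets between layers via system calls. Header strings
# are hypothetical, not real protocol formats.

LAYERS = ["TCP", "IP", "Ethernet"]  # levels 4, 3, and 2 of the model

def encapsulate(payload: str) -> str:
    """Pass the payload down the stack; each layer adds a header."""
    packet = payload
    for layer in LAYERS:
        # The layers never address each other directly; only the
        # dispatcher (standing in for the OS) moves data between them.
        packet = f"[{layer}]" + packet
    return packet

def decapsulate(packet: str) -> str:
    """On the receiving side, strip headers outermost first."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert packet.startswith(header), f"expected {header} header"
        packet = packet[len(header):]
    return packet
```

A round trip through both helpers returns the original payload, mirroring how a frame is unwrapped layer by layer at the destination.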

Image

Figure 4.2 Applications in separate partitions of one computer, each with their own OS.

Implementation of the Four Code-Confinement Methods

The emerging and more popular approach for code-confinement implementation is to employ a reference monitor. However, there are also specific monitoring and code jailing (isolation) routines and methods that are remnant components of specific older OSs.

Reference Monitors

The primary key to implementing confinement techniques is increasingly to employ a reference monitor. A reference monitor is a separate program that observes the chosen isolation method and regularly checks that the technique is properly performing its mission of isolating modules of code from each other, thereby isolating malware from good code. In one possible implementation, a reference monitor might be placed between a program and an OS so that it can track the data flow between them. Reference monitors are small, with little overhead, but in order to be effective and efficient they need to be updated regularly so that they can detect the latest security threats. On the other hand, hackers cannot easily install Trojans or backdoors capable of killing or damaging these reference monitors.
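A minimal sketch can make the interposition idea concrete. The class below is a hypothetical illustration, not a real monitor: every request from a program passes through the monitor, which checks a policy table (defaulting to deny) and keeps an audit log, before anything is forwarded onward.

```python
# Minimal sketch of a reference monitor interposed between a program
# and the resources it wants to use. The policy table, operation
# names, and return strings are all illustrative assumptions.

class ReferenceMonitor:
    def __init__(self, policy):
        # policy maps an operation name to True (allow) or False (deny)
        self.policy = policy
        self.audit_log = []

    def request(self, operation, target):
        # Unlisted operations are denied by default.
        allowed = self.policy.get(operation, False)
        self.audit_log.append((operation, target, allowed))
        if not allowed:
            raise PermissionError(f"{operation} on {target} denied")
        return f"performed {operation} on {target}"

# Example policy: the monitored program may read but never write.
monitor = ReferenceMonitor({"read": True, "write": False})
```

The default-deny lookup reflects the monitor's job of blocking anything its policy does not explicitly sanction, while the audit log models its continuous observation role.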

OS Chroots

Certain kinds of confinement routines are provided with OSs themselves. Many OSs were built on the design of the original UNIX OS, and their designers adopted many of the same operations and processes used to separate areas in the original UNIX OS and incorporated them into modern OSs. One such operation, called Chroot, changes what appears to be the root directory for current running processes and their children. A running process appears to be running from a certain directory, but its view can be modified so that it looks as if it is coming from an entirely different area. In essence, Chroot allows users to create modified environments known as Chroot “jails”: a program run inside such an area cannot access files outside of the designated environment; it is locked into a narrow jail operating area. Programs are allowed to run inside this restricted environment, but their interaction with applications and facilities outside the jail area is limited. Chroot operations are typically used for guest accounts or FTP (File Transfer Protocol) sites. However, it is important to remember that simple Chroot jails do not limit network access, which is a source of infection.
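The confinement property a Chroot jail enforces can be modeled without privileges. A real chroot(2) call requires root; the helper below is only an illustrative sketch of the check itself, resolving a requested path against a jail root and refusing anything that would escape it. The function name and jail path are hypothetical.

```python
import os.path

# Sketch of the property a Chroot jail enforces: every file request is
# resolved relative to the jail root, and any path that would land
# outside the jail is refused. This models the check only; it does not
# perform a real chroot(2), which requires root privileges.

def resolve_in_jail(jail_root: str, requested: str) -> str:
    """Resolve `requested` inside `jail_root`, refusing escapes."""
    # Treat even absolute requests as relative to the jail root.
    candidate = os.path.normpath(
        os.path.join(jail_root, requested.lstrip("/")))
    # After normalization, the path must still begin with the jail root.
    if not candidate.startswith(os.path.normpath(jail_root) + os.sep):
        raise PermissionError(f"path escapes the jail: {requested}")
    return candidate
```

Note how a `../../` traversal attempt normalizes to a path outside the jail and is rejected, which is exactly the kind of escape a real jail must prevent.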

OS Jail Routines

Another routine is called a Jailkit. Jailkits are utility programs that are used in UNIX environments. Any kind of program that users want to run is placed inside a Jailkit environment, which restricts their network access and interaction with other programs and applications on the protected device. This environment is more restricted than a simple Chroot jail. It particularly affects Java and C++ object-based programs since such programs are frequently used to trigger routines in other areas of the targeted device or other networks.

There are ways in which untrusted code or malware can escape from jail. Essentially, this may occur when attacking programs open a temporary guest account and are then allowed to run as “guests.” However, jails should only be set up by a routine running as root.

There are multiple ways for a routine running as root to escape a jail:

■  Create a device node that permits rogue programs or code to access raw data on a disk

■  Send signals, though not IP packets, to a non-Chroot process, requesting an action

■  Trigger a reboot of the system while preventing additional programs or code from running

■  Bind a Chroot process to specific ports or sockets, depending on the escape’s purpose and goal

FreeBSD Jail

Another type of operation and confinement area is the FreeBSD jail. It is stronger than a traditional Chroot jail; its purpose is to confine mechanisms beyond those handled by Chroot, binding sockets to specified IP addresses and authorized ports. It allows applications to communicate only with other processes and programs located inside the jail. When a FreeBSD jail is in use, the root account is limited so that it cannot load kernel modules. The kernel is the core module used by the OS and its supported applications to communicate with the computer hardware: only the kernel talks to the hardware, so any other module that needs to reach the hardware must communicate through the OS’s kernel. Thus, the kernel is the subroutine of the OS that enables programs hosted by the OS to talk to the hardware and the hardware to talk to the OS and its supported applications.

One of the problems with the UNIX jail approach is that only specific types of programs can run effectively in jail-restricted environments. For example, web servers can run in jail, and audio, video, and media routines can also be jailed and run effectively. However, programs and applications that we use often and continuously, such as web browsers and mail clients, do not run effectively in jail environments. This is a serious drawback given how frequently we all search the web and send and receive e-mail messages.

Both Chroot and jail routines, as well as service calls to the OS, have disadvantages. The biggest drawback of Chroot and jail routines is that they tend to enforce coarse, inflexible policies. They operate on an “all or nothing” access policy and thus are not suitable for applications routinely used by the general user, particularly web browsing. Furthermore, unless a specialized procedure is used, Chroot and jails are not good at preventing malicious applications from accessing the network or communicating with other machines, and they do not prevent malware or untrusted code from trying to crash the host OS. The common employment of system calls to communicate through the OS, as well as to invoke OS routines, has been a standard for decades, but this method also has flaws. Service-call routines need their own special protection processes to prevent possible damage from malware that targets them specifically. Furthermore, every system call must be included in the monitoring process, which requires large amounts of processing resources given the frequency with which these calls are invoked; and every suspicious call that is discovered must be tracked, blocked, and denied authorization. Unfortunately, every solution to one problem brings its own disadvantages, which require their own solutions.

Linux’s Ptrace Monitor and Systrace Routines

For the Linux OS, Ptrace is a commonly used routine. With UNIX and Linux (which is derived from UNIX), Ptrace acts as a monitor to intercept calls and verify their authenticity and safety. If a particular program fails to pass the monitor’s parameters, Ptrace attempts to destroy the message’s source while keeping all states, system directories, and user IDs protected. Monitors such as Ptrace are thus continuously deciding whether code is safe, and the policies governing this monitoring routine must be stringent and inflexible.

Image

Figure 4.3 Monitor system reviewing all browser requests.

Another computer security utility, Systrace, limits each application’s access through a set of system-call policies. Systrace runs on Mac OS X, Linux, and most UNIX-like systems; in particular, it supports 64-bit Linux versions. Systrace concentrates on situations in which an application wants to open a call to the OS. Systrace monitors intercept these calls, apply the corresponding policy, and then allow or block the request. Like Ptrace, Systrace follows an “all or nothing” standard. Only monitored system calls ever go to the OS for execution under this isolation technique; if a call is not monitored, it does not get processed. Figure 4.3 illustrates the Systrace process sequence.
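The “all or nothing” filtering just described can be sketched as follows. The policy text and its syntax are illustrative inventions, not real Systrace grammar; the point is that listed calls are explicitly permitted or blocked, while unlisted calls are simply never forwarded to the OS.

```python
# Sketch of a Systrace-style system-call filter: a small policy text
# maps call names to "permit" or "deny", and any call not listed is
# never processed at all. The policy syntax here is hypothetical.

POLICY_TEXT = """
open: permit
read: permit
connect: deny
"""

def parse_policy(text):
    """Turn the policy text into a name -> allowed mapping."""
    policy = {}
    for line in text.strip().splitlines():
        name, _, decision = line.partition(":")
        policy[name.strip()] = decision.strip() == "permit"
    return policy

def filter_call(policy, syscall):
    """Decide the fate of one intercepted system call."""
    if syscall not in policy:
        return "unmonitored: not processed"  # never reaches the OS
    return "forwarded to OS" if policy[syscall] else "blocked"
```

The three possible outcomes (forwarded, blocked, never processed) mirror the behavior the text attributes to Systrace's interception step.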

Employing Applications Such as Ostia or NaCl

A more modern monitoring approach involves using applications such as Ostia, which use a delegation architecture. Under Ostia monitoring, a program might submit an “open call” rather than a standard systems call to the OS. Ostia would disallow that call and the associated transmission and would terminate the calling application. To support its monitoring mission, Ostia employs a policy file that details all appropriate activity and blocks and then terminates whatever is not allowed in the policy file. Unfortunately, defining and outlining the set of correct policies is a difficult process, and the allowed activity may change over time as new situations, new programs, and new updates to the OS are installed.
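Ostia's delegation architecture differs from simple interception: the sandboxed program never issues the sensitive call itself but hands a request to a trusted agent, which consults a policy file and either performs the operation on the program's behalf or refuses and terminates it. The sketch below is a hypothetical simplification; the policy format, function names, and exception are all illustrative.

```python
# Sketch of delegation-style monitoring in the spirit of Ostia: the
# untrusted program describes what it wants, and a trusted delegate
# either performs the operation for it or refuses and terminates it.
# Policy entries and names are illustrative assumptions.

POLICY = {("open", "/tmp"): True}  # (operation, path prefix) -> allowed

class TerminatedError(Exception):
    """Raised when the delegate refuses and kills the caller."""

def delegate(operation, path):
    for (op, prefix), allowed in POLICY.items():
        if op == operation and path.startswith(prefix) and allowed:
            # The delegate, not the sandboxed program, does the work.
            return f"delegate performed {operation} on {path}"
    # Anything outside the policy file: refuse and terminate the caller.
    raise TerminatedError(f"{operation} on {path} not permitted")
```

Because the untrusted program holds no direct path to the OS, a disallowed request leads not merely to a blocked call but to the caller's termination, matching the behavior described above.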

NaCl (Native Client) is another modern monitoring system. Like Ostia, it restricts a program’s ability to perform system calls based on a precise allowance policy. It typically does so on Intel x86 devices, which compose the large array of servers deployed in modern cloud data centers.

Isolation of Virtual Machines

Modern cloud computing starts the process of virtualizing each computer, network, and data storage in the data center and even the desktops that access the systems in that center. With virtualized computing, thousands of large-server PCs (usually with x86 processors) perform the processing functions for the systems in a distributed fashion spread over an array of such processors. In these virtualized environments, the design begins with an underlying OS (called a hypervisor) as the core enabler and a set of guest OSs running on top of the hypervisor, each running their own set of application programs and isolated from each other. These guest OSs tend to be Linux, Mac OS X, or MS Windows.

Image

Figure 4.4 Virtual machine separation and isolation architecture.

Computer Virtualization

The virtualization of a computer is a process whereby multiple OSs can run simultaneously as guest OSs on top of an overall computer OS—the hypervisor, also termed the virtual machine monitor (VMM). Each guest OS (virtual OS) can then support a specific set of applications in isolation from other guest (virtual) OSs, each with their own applications. Each virtual/guest OS and its applications operate independently from each other, and their programs are isolated and protected from another virtual/guest OS’s applications. It is the hypervisor’s job to enable each guest OS and its applications; to allow them to pass information to the computer hardware, if appropriate; and to block intercommunication between guest operating systems unless strict policy rules are met. Figure 4.4 illustrates the concept of virtual machine isolation.
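A toy model can illustrate the hypervisor's mediating role described above. The guest names, the pair-based policy, and the message-passing interface are hypothetical simplifications, not a real VMM API; the sketch shows only that guests have no channel to each other except through the hypervisor's policy check.

```python
# Toy model of a hypervisor (VMM) mediating between guest OSs: guests
# can exchange messages only through the hypervisor, which consults a
# policy of permitted (sender, receiver) pairs. All names and the
# policy format are illustrative assumptions.

class Hypervisor:
    def __init__(self, allowed_pairs):
        self.allowed_pairs = set(allowed_pairs)
        self.guests = {}

    def register(self, name):
        self.guests[name] = []  # each guest gets an isolated inbox

    def send(self, sender, receiver, message):
        # Block any cross-guest interaction not sanctioned by policy.
        if (sender, receiver) not in self.allowed_pairs:
            return False  # isolation preserved
        self.guests[receiver].append(message)
        return True

# One permitted direction of communication; the reverse stays blocked.
vmm = Hypervisor(allowed_pairs=[("guest_linux", "guest_windows")])
vmm.register("guest_linux")
vmm.register("guest_windows")
```

Note that the policy is directional: permitting one guest to reach another does not open the reverse path, reflecting the strict policy rules the hypervisor enforces.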

Keep in mind that there may be up to 1000 x86-based computers in a data center, and these isolation techniques are meant to ensure that infections cannot spread from one OS and its applications to another, whether on one machine or across all 1000 machines. VMMs (hypervisors), accompanied by other OS routines (Chroots, jails, etc.), protect the crossing points between OSs, both locally and across the complete center. The hypervisor, acting as the overall computer OS and VMM, isolates the guest OSs from each other, continuously observes all attempted cross interactions, and allows only those that meet its preestablished policy restrictions.

Threats to Computer Virtualization

As with any isolation method, there are specific threats to be aware of. Covert (hidden) channels threaten the isolation of two guest OSs by enabling unintended communication between those OSs and their components. Advances in technology have enabled monitors to catch and prevent guest OSs from using covert channels for cross-OS communication. However, other systems, such as antivirus programs, may also be affected by covert channels. Given that antivirus systems must be protected so that they can detect malware and untrusted code, it is important that they be able to detect activity at the root kernel, the part of the OS that talks to the hardware. Ensuring that the intrusion detection system (IDS) runs as part of the VMM/hypervisor is critical: this enables the monitor to detect untrusted code and malware as it attempts to communicate with the hardware (in particular, the network) and with applications under other guest OSs. Running the IDS with the VMM is the first step of the security process; the IDS operates as a virus detector that looks for certain kinds of suspect code and malware, which it identifies by their behavior patterns. The second step requires the VMM to compute a hash of user application code and compare it with the IDS’s stored hashes; if the code does not match and is thus unknown, the VMM proceeds to kill the suspect program. A third step involves ensuring the integrity of the guest OS’s kernel through trial system calls; once again, discrepancies between the system call issued and the stored data will cause the VMM to take action against the presumed threat. Finally, a basic virus signature scan may be run on each of the guest OSs and their hosted application programs.
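The second security step, hash comparison, can be sketched directly. The whitelist contents and function names below are illustrative; the mechanism is simply that any program whose code hash is not among the known-good hashes is treated as a suspect to be killed.

```python
import hashlib

# Sketch of the VMM's hash-comparison step: hash a program's code and
# compare it against hashes of known-good programs. An unknown hash
# marks the program as a suspect. The whitelist entries are
# illustrative placeholders.

def code_hash(code: bytes) -> str:
    """SHA-256 digest of a program's code, as a hex string."""
    return hashlib.sha256(code).hexdigest()

# Hashes of programs the VMM already trusts (hypothetical contents).
KNOWN_GOOD = {code_hash(b"trusted application v1")}

def check_program(code: bytes) -> str:
    """Allow known code; anything unknown is killed as a suspect."""
    return "allowed" if code_hash(code) in KNOWN_GOOD else "kill"
```

Because any change to the code, however small, produces a completely different digest, even a lightly tampered copy of a trusted program fails the comparison.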

Subverting VM Isolation

The standard name given to VMMs is the hypervisor, which operates not only as an over-system monitor of activity but as an overall OS supporting a set of guest OSs. When policy restrictions allow, the hypervisor enables communications between OSs and between guest OSs and hardware, while monitoring all such activity. Subversion of the hypervisor is a dangerous problem. If a virus can gain access to and invade the hypervisor, this threat can affect not only a particular computer but potentially all the computers in the data center and all users connected to that data center. And such malware can hide in the hypervisor and trigger damage immediately or be triggered to cause considerable damage at a later date.

VM-Based Malware

VM-based malware is malware that specifically targets hypervisors. Because of the vulnerability of the complete processing center to such malware, newer malware tends to be constructed to try new methods of penetrating a hypervisor’s defenses. The nature of this problem is one of constant change and creation. New threats and viruses are being created and distributed so frequently that Microsoft delivers a security bulletin detailing new emerging threats on a monthly basis. New virus-detector components in antivirus software play an important role in detecting the new mutations of viruses that are specifically targeting and attacking hypervisors and other protection and security tools.

Software Fault Isolation

Many types of widely deployed software can easily be corrupted and thus pose a security threat. Among these are codecs, which support and enable specific media players; each player needs, and will have, a specific codec to execute its media files. Device drivers, such as those for USB connectors and other external devices plugged into computers, can also easily be corrupted. Additionally, automated downloads create risk, as they trigger actions on devices automatically. Other threats include common but unsafe instructions: jump (JMP) instructions, load instructions, and cross-domain calls are all particularly risky, as they represent a favored target for concealed threats.

Image

Figure 4.5 Sections of application code and the data each uses in its execution process.

Given this volatile computing environment, it is extremely important to isolate software through the SFI process. This involves carrying out segment matching by running special routines to recognize secure segments of code. Insurance routines and variable address sandboxing techniques are used in combination with segment matching to ensure proper isolation. However, complete isolation is inappropriate for data processing, and this technique has other vulnerabilities. For example, shared memory issues pose a threat when two programs attempt to share memory; such programs should be prohibited from sharing common virtual memory. Performance-monitoring routines can provide hints of the existence and execution of untrusted code; a detected slowdown of around 4% might indicate that an extra set of code is operating on the machine. However, SFI routines have their own limitations, many of which extend to the x86 Intel-based machines that populate our cloud data center environments.

Figure 4.5 shows how the SFI process partitions memory into segments.
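The address sandboxing mentioned above is often implemented, in classic SFI designs, by masking every memory address so that it can only fall inside the module's own segment. The sketch below illustrates that masking idea; the segment base and size are hypothetical values chosen for the example.

```python
# Sketch of SFI-style address sandboxing: before any load or store,
# the address is masked so that it can only land inside the module's
# own memory segment. Segment base and size are illustrative values;
# the size must be a power of two for the mask to work.

SEGMENT_BASE = 0x40000000   # where this module's segment begins
SEGMENT_SIZE = 0x00010000   # 64 KiB segment
OFFSET_MASK = SEGMENT_SIZE - 1

def sandbox_address(addr: int) -> int:
    """Coerce any address into the module's segment."""
    # Keep only the in-segment offset bits, then force the segment base.
    return SEGMENT_BASE | (addr & OFFSET_MASK)
```

An address already inside the segment passes through unchanged, while any address outside it is silently folded back into the segment, so a stray or malicious access can never touch another module's memory.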

QUESTIONS

1.  Explain why one performs the code isolation technique.

2.  What are four confinement methods?

3.  Out of the four methods, which one is most important and why?

4.  Describe three OS routines and explain what they all have in common.

5.  Are there any disadvantages?
