Trusted Platform Architectural Adaptations

This section describes the adaptations that turn existing computer architectures into TPs. The ability to adapt existing architectures is essential: a TP must not prevent (and preferably should not hinder) users from doing anything that they normally do, and what they normally do is run applications that execute on existing computer architectures. As considered in Chapter 1, any hardware modifications must be as simple as possible, because the hardware cost of a platform sets a floor on its price, and price is always a major factor in the success of any mass-produced product.

In particular, the adaptations provide two important mechanisms:

  • A trustworthy way of collecting and storing information about a system, by extracting and storing measurement values called integrity metrics. Some metrics are prescribed by TCPA, but others (including application-level metrics) may be set by the OS or the owner.

  • A trustworthy way of retrieving such metrics, so that the user of a computing platform, or third parties, may compare them with expected values to decide whether a component (i.e., a software or hardware configuration) on the platform is trusted for the purpose intended by that user when using the platform.

According to the TCPA specification, to be a TP, a platform needs to include the following:

  • At least one root of trust for measuring integrity metrics

  • Exactly one root of trust for storing and reporting integrity metrics

  • At least one Trusted Platform Measurement Store

  • At least one item of TCPA Validation Data

  • Exactly one Trusted Platform Agent

The first two components have been introduced in the previous section; the others will be introduced in due course. We begin by giving more detail about the root of trust for storing and reporting integrity metrics, namely the TPM, and then move on to the other TCPA components.

The Trusted Platform Module

In this section, we consider the following topics:

  • Functional components of a TPM

  • Protected capabilities and shielded locations

  • Platform Configuration Registers

  • TPM protection

  • Integration of a TPM into a platform

  • Authorization and physical presence

  • Audit functions

Functional Components of a TPM

Figure 3-3 shows the functional elements of a typical TPM. These include a random-number generator, hashing, and asymmetric encryption capabilities. Appendix C provides a basic introduction to such security mechanisms. Further details of the standardized cryptographic capabilities provided by the TPM are given in Chapter 9.

Figure 3-3. The logical architecture of a TPM


Protected Capabilities and Shielded Locations

The TPM has capabilities and data that are protected from interference and prying. A TPM is capable of being trusted by remote parties—not just the owner of a platform.

The TCPA specification introduces the concept of a TPM because it contains all the functions that must be trusted if a platform is to be trusted. With a small number of exceptions made for practical purposes, a TPM contains only functions that are critical to trusting the platform. All TCPA non-TPM functions must work properly, of course; but if they don't, their failings can be detected via consistency checks. On the other hand, if a function in the TPM does not do what it should, it is impossible to detect that it is not doing the right thing.

TCPA does not wish to dictate the implementation of TPs, so it defines the TPM in terms of protected capabilities and shielded locations for data. These terms are the language used to describe a closed set of functions and data that are separated from other functions.

A protected capability is one whose correct operation is necessary for the operation of the platform to be trusted. Protected capabilities have privileged access to shielded locations.

A shielded location is an area in which data is protected against interference and snooping.

These TCPA protected capabilities and shielded locations are provided by the TPM. For practical reasons, only a small amount of the platform's data will be stored in a shielded location. For example, shielded locations permanently protect private TPM keys (such as the endorsement key), together with private platform data such as the key at the root of encrypted key hierarchies (the storage root key). Other private keys and data are stored outside shielded locations whenever possible, but in an encrypted form, such that they can only be decrypted via at least one key stored in a shielded location. They appear as plain text only within a shielded location in the TPM (except when the plain text is deliberately exported from the TPM to the platform).

The intention is that a TPM should have minimal cost—to promote wide deployment. The first discrete TPMs will probably be adaptations of existing smart card technologies. This means that they will be chips with a physical layout and chip passivation that makes it difficult to probe the chip, and circuitry to reduce the chance of deducing secrets by monitoring supply current or by changing voltage supply or clock frequencies, for example. The most likely implementation of a TPM will contain a processor and support for some standard security functions, such as generating unpredictable numbers (random numbers), asymmetric keys, asymmetric encryption and decryption, hashes, and signatures. However, the TPM differs from a standard cryptographic co-processor/accelerator by providing several additional mechanisms for platform integrity checks, platform identity, and Protected Storage.

It is worth repeating that a TPM must be a computing engine. If a TPM isn't implemented as a separate protected computing engine, the implication is that the entire TP is sufficiently well protected for all processing and storage in the platform to have the properties of protected capabilities and shielded locations. The entire platform is a TPM.

The functions of a TPM are central to a TP because the TPM provides the root of trust for storing and reporting integrity metrics. Later sections consider this in more detail.

Platform Configuration Registers

Possibly the most unusual aspect of a TPM is its set of Platform Configuration Registers (PCRs). These store integrity metrics in a way that prevents misrepresentation of the presented values or of the sequence in which they were presented. The detailed format and operations of PCRs are considered in Chapter 6.

PCRs are the TCPA's solution to the problem of storing summaries of integrity metrics. If values of integrity metrics and their updates must be stored individually, it is difficult to place an upper bound on the amount of memory required to store them: an unknown number of integrity metrics may be measured in a platform, and a given metric may change with time, requiring a new value to be stored. Furthermore, authenticating the source of measurements of integrity metrics is an intractable problem, so a new value of an integrity metric must not simply overwrite an existing value. (If that were permitted, a rogue could erase an existing value that indicates subversion and replace it with a benign value.)

The TCPA solution is to provide a way to store sequences of integrity metrics, rather than just individual integrity metrics. Values of integrity metrics are not “stored” individually inside a TPM; instead they are appended to a sequence whose representation has a fixed size. The states of all sequences inside a TPM are set to a known value at power-up. Each new integrity metric must then modify the value of a sequence. The actual TCPA method is to concatenate the value of a new integrity metric with the existing value of the sequence, compute a digest of the concatenation, and use that digest as the new representation of the sequence. These digests are stored in the Platform Configuration Registers inside the TPM.
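The extend operation just described can be sketched as follows. This is a simplified illustration using Python's standard library; SHA-1 is the digest algorithm used by version 1 of the TCPA specification, and the measured inputs are hypothetical stand-ins for real software images.

```python
import hashlib

def extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """Fold a new integrity metric into a PCR's sequence:
    new PCR = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr_value + measurement).digest()

# At power-up, every PCR is reset to a known value (here, 20 zero bytes).
pcr = bytes(20)

# Append metrics in order; each input stands in for a measured value.
pcr = extend(pcr, hashlib.sha1(b"boot loader image").digest())
pcr = extend(pcr, hashlib.sha1(b"OS kernel image").digest())

# The register stays 20 bytes long no matter how many metrics are
# appended, yet its value depends on every metric and on their order.
assert len(pcr) == 20
```

Because each new value is a digest over the previous value, reordering or omitting a measurement yields a different final register value, which is what prevents misrepresentation of the sequence.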

This method enables one or more sequences to represent an arbitrary number of integrity metrics and their updates. The fewer the number of sequences, the more difficult it becomes to interpret the meaning of the value of a sequence. The greater the number of sequences, the more costly it becomes to provide storage in the TPM. A particular implementation must make a trade-off between cost and difficulty of interpretation. Version 1 of the TCPA specification requires a TPM to have at least 16 PCRs, approximately half of which store types of integrity metrics defined by TCPA, leaving the remainder for software developers.

TPM Protection

The TPM requires complete protection from software attack and partial protection from hardware attack. Protection from software attack is essential to provide confidence to both local and remote users, because the TPM's job is to be the part of a TP that cannot be hacked. Protection from physical attack is mostly to provide confidence that a TPM owned by someone else is a genuine TPM, whether the host TP is local or remote. Otherwise, rogues could easily clone a TPM and produce subverted copies that would be indistinguishable from the genuine TPM. This is of particular concern when interacting with remote platforms.

The TCPA definition of the TPM's protected capabilities and shielded locations explicitly dictates logical isolation of the TPM's capabilities and data from other platform software. This is why the obvious implementation of a TPM is a processing engine that is separate from the platform's main processing environment.

As mentioned in Chapter 1, the TPM is required to have a limited degree of protection against physical attack. There is no known way of making the TPM tamper-proof (so we assume that it is not tamper-proof), and it is costly to make something very tamper-resistant. So the level of protection accorded to a TPM will depend on the intended purpose of the platform and will be reflected in the cost of the platform. Some TPMs will require high protection; others will be minimal. All TPMs are required to show at least some evidence of tampering after a physical attack.

Integrating a TPM into a Platform

Whenever it is done and whatever method is used, the attachment of a TPM to a platform must result in strong physical or logical binding of the TPM to the RTM/CRTM.

A TPM is most likely a chip, but that chip does not become a TPM proper until an endorsement key has been installed in the TPM and an endorsement certificate has been created. This certificate is attestation by someone, probably the TPM manufacturer or the platform manufacturer, that proof of possession of a particular endorsement key is proof of a genuine TPM. At some point, a TPM must be attached to a platform, but no rules exist on when this must take place. The platform design might rely on a strong physical connection between the TPM and the circuit board, or it might use a weak physical connection augmented by a strong cryptographic connection. Dedicated trusted circuit boards could be built from scratch to include TPM chips, or it might be cheaper to convert an ordinary circuit board into a trusted circuit board by adding particular software and inserting a TPM into a socket. The platform does not become a TP until someone, probably the platform manufacturer, produces a platform certificate, which is an attestation by the manufacturer that a particular TPM has been incorporated into a properly designed and built platform.

Clearly, many options are available when manufacturing a TP, depending on when the endorsement key is created for a TPM and when a TPM is added to a platform. There is even an option in the method used to create the endorsement key. A TPM is fully capable of creating an endorsement key, but asymmetric key creation is a statistical process that could take one or even two minutes. It might be too time-consuming for a production line to wait this long, so there is the option of (pre)creating a unique endorsement key in a secure environment and inserting the key into a TPM. However the key is created, it must have the same strong cryptographic properties, including a guarantee that no record of the private key exists outside of the TPM. (A TPM contains a CEKPUsed flag that indicates whether the endorsement key was created inside the TPM or was inserted into the TPM.)

In PCs, soldering the TPM to a motherboard is both an acceptable and a practical level of binding for most applications. If the TPM is mounted on a circuit board that is inserted into a slot in the platform, some extra protection is required to strengthen this link. Solutions can be physical (involving, for example, locking, strong gluing, and/or components that self-destruct on extraction) or logical (using cryptographic techniques to identify a BBB/BIOS to the TPM). If binding uses cryptography, the design must ensure that the secret used for mutual authentication cannot be discovered. Note that a cryptographic binding requires the BBB/BIOS to have some level of tamper-resistance, both physical and logical, but it becomes possible for the TPM to be physically disconnected from the platform. Although a socketed TPM has complications compared to a soldered TPM, it offers a wide range of possible configurations (covering different types of systems and security) using the same TPM model: only the motherboard would differ. Such a solution also simplifies maintenance: if a TPM malfunctions, only the TPM needs to be replaced, rather than the entire motherboard. Nevertheless, re-establishing the link between the new TPM and the motherboard without compromising security must involve the manufacturer, so it is potentially expensive. In such a case, the manufacturer must void the platform certificate, and a trusted entity must destroy the old TPM and vouch that this was done. The process of broadcasting the revocation of a certificate is generally considered to be an unsolved problem, and to date, TCPA has not addressed the issue.

Authorization and Physical Presence

Many TPM capabilities require proof of authorization via a challenge/response protocol. (They never require presentation of the authorization information itself.) The basic technique is unremarkable, in the sense that a rolling nonce is passed back and forth and is incorporated with the authorization secret to create an HMAC digest. (For further explanation of these terms, see Appendix C.) This enables a remote entity to authorize capabilities without exposing the authorization secret over a network. That protection is redundant when the TP itself holds an authorization secret and "talks" to its local TPM, but the protocol must still be used: The TPM has no capability that permits direct presentation of authorization secrets. This probably means that all TPs need software to perform the authorization protocol, whether communicating with other TPs or with the local TPM.

The authorization secret is always 20 bytes long. This length was deliberately chosen to be the same as an SHA-1 digest. But TCPA says nothing about the source of authorization secrets. This enables you to choose any method of user authentication for a TP. An authorization secret could be a 20-byte random number in a smart card (giving a very high level of security), or it could be the output of an SHA-1 hash applied to a four-character password (giving a low level of security).
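The challenge/response idea in the two preceding paragraphs can be sketched as follows. This is a simplified illustration, not the exact TCPA parameter layout; the command name and password are hypothetical.

```python
import hashlib
import hmac
import os

def prove_authorization(auth_secret: bytes, command: bytes,
                        tpm_nonce: bytes, caller_nonce: bytes) -> bytes:
    """Prove knowledge of auth_secret without transmitting it:
    an HMAC over the command and both rolling nonces."""
    return hmac.new(auth_secret, command + tpm_nonce + caller_nonce,
                    hashlib.sha1).digest()

# The secret is always 20 bytes: here, SHA-1 of a (weak) password.
auth_secret = hashlib.sha1(b"abcd").digest()
assert len(auth_secret) == 20

# Each exchange uses fresh nonces from both sides.
tpm_nonce, caller_nonce = os.urandom(20), os.urandom(20)
proof = prove_authorization(auth_secret, b"TPM_ExampleCommand",
                            tpm_nonce, caller_nonce)

# The TPM verifies by recomputing the HMAC with its stored copy of the
# secret; the secret itself never crosses the network.
expected = prove_authorization(auth_secret, b"TPM_ExampleCommand",
                               tpm_nonce, caller_nonce)
assert hmac.compare_digest(proof, expected)
```

The rolling nonces ensure that each proof is fresh, so a captured HMAC cannot be replayed against a later command.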

Two authorization protocols are available: One operates with just one specific value of authorization data; the other operates with multiple values of authorization data. As might be expected, they have different properties.

An instance of the Object-Specific Authorization Protocol (OSAP) operates on just one specific value of authorization data. Its advantage is that it can perform multiple authorizations on the same target object while requiring use of the authorization secret only once: it generates a temporary secret that is used for the remainder of the authorization session. This could be important when an authorization secret is actually entered into a platform by a user: The secret can be entered once to start the session and destroyed after generation of the session secret. This type of authorization must be used when communicating confidential information to the TPM, usually the introduction of new authorization information when creating a new TPM object (such as a new key) within the TPM. The new authorization data is combined with the secret information by modulo-2 addition (exclusive-OR) to provide confidentiality.
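The exclusive-OR encoding mentioned above can be sketched as follows. This is a simplified illustration of the idea, not the exact derivation steps in [TCPA 2001a]; the passwords are hypothetical.

```python
import hashlib
import hmac
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A session secret derived once from the parent object's authorization
# secret plus fresh nonces; the parent secret itself is used only here.
parent_auth = hashlib.sha1(b"parent password").digest()
session_nonces = os.urandom(20) + os.urandom(20)
session_secret = hmac.new(parent_auth, session_nonces,
                          hashlib.sha1).digest()

# New authorization data for a child object travels XORed with a pad
# derived from the session secret, never in the clear.
new_child_auth = hashlib.sha1(b"child password").digest()
tpm_nonce = os.urandom(20)
pad = hashlib.sha1(session_secret + tpm_nonce).digest()
ciphertext = xor_bytes(new_child_auth, pad)

# The TPM derives the same pad and recovers the plaintext by XORing again.
assert xor_bytes(ciphertext, pad) == new_child_auth
```

XOR with a one-time pad-like value is cheap for a small TPM to compute, which fits the goal of keeping TPM cost minimal.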

The Object-Independent Authorization Protocol (OIAP) operates using multiple authorization secrets. Its advantage is that the same authorization session can be used to access multiple target objects.

The authorization secret belonging to the owner of a TPM is introduced to the TPM when the owner uses the takeOwnership command. At that point, no existing user secret is available to provide confidentiality for introduction of the owner's authorization, so the secret is encrypted using the TPM's endorsement key. This makes it more difficult for a rogue to take control of a TPM, because the rogue must know the endorsement key beforehand. But the owner is not a "super-user." Every target object has its own authorization secret that must be presented in order to use that object. If the owner wants to access an object, he needs the authorization secret for that object, just like any other user. Even so, because an existing authorization secret is used to introduce a "child" (or "derived") authorization secret, there could be a suspicion that a parent is snooping on a platform and obtaining a child's authorization secret when it is introduced to the platform. To alleviate this concern, TCPA provides an optional two-step "change authorization" protocol that stops such snooping. This process requires the cooperation of the parent, and that cooperation may be withheld, because a parent may wish to retain the ability to snoop! If the parent won't cooperate, the two-step "change authorization" protocol won't work.

Suppose that an owner (who is not the user) doesn't want anyone to use the TPM, or doesn't want to take control of a TPM and hold an authorization secret, or perhaps has lost an authorization secret. A number of so-called physical presence commands allow a person to control the TPM without using an authorization secret and cryptographic techniques. These are a necessity to cope with setup situations and error conditions.

These commands are specified to require proof of physical presence (presumed to be the owner) at the actual platform. Misuse of these physical presence commands can cause a temporary "Denial of Service" that might need to be corrected with other physical presence commands or a reboot, but such misuse can't cause a security breach. All but one of these commands assume just a high probability of physical presence and can be implemented using software on the main platform. In a PC, for example, these commands could execute during the Power-On Self-Test[1] (POST), when it is unlikely that a rogue software entity can take control of a platform.

[1] The platform POST is executed in the early stages of the boot process; this is the right place to easily control most of the TPM's configuration at startup, before higher-level software is executed. This would be equivalent to today's chipset configuration on PCs.

The PhysicalEnable command is the exception to the rule. PhysicalEnable requires absolute confidence that a person is physically present at a platform and probably requires a hidden dedicated physical switch or jumper. This exception is almost certainly an inconvenience for a user and an extra expense for manufacturers, but a "turned off" TPM is a TP's ultimate privacy safeguard, so turning a TPM back on with PhysicalEnable is accorded absolute protection against software attack.

Audit Functions

TCPA auditing is very simplistic in [TCPA 2001a] and is of limited use. “Proper” TCPA auditing will have to wait until a future release of the specification.

The TPM described in version 1 of the TCPA specification has a set of audit functions that create a summary of the type of TPM commands that are used and the order in which they are used. The summary is held in the variable called auditDigest, which may be considered as a Platform Configuration Register that records the use of TPM capabilities. It is possible to select the commands that are and are not audited. Unfortunately, it is widely considered that these audit functions are insufficient: auditDigest does not record what was actually done with a command, for example, which means that parameters are not logged. So audit functionality is deliberately ignored in the TCPA specification that describes the TPM's software interface.
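The auditDigest behaves much like a PCR computed over command types, which also makes the limitation concrete: only which commands ran, and in what order, enters the digest. A simplified illustration (the command names are hypothetical stand-ins for TPM command ordinals):

```python
import hashlib

# auditDigest starts from a known reset value, like a PCR.
audit_digest = bytes(20)

for command_ordinal in (b"TPM_Extend", b"TPM_Quote", b"TPM_Extend"):
    # Only the command's identity and position in the sequence are
    # recorded; its parameters are not logged, which is exactly the
    # insufficiency described above.
    audit_digest = hashlib.sha1(audit_digest + command_ordinal).digest()
```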

Other TCPA Components

TCPA defines more functions than just the RTM and the TPM. These TCPA functions are not roots of trust, and it is necessary for a challenger to determine, via integrity metrics, whether these functions can be considered to be trustworthy. Once trusted, some of these functions are themselves used to determine the trustworthiness of other TCPA functions, and ultimately these functions may determine the trustworthiness of ordinary platform software. In this way, a transitive chain of trust is constructed from the roots of trust (the RTM and TPM) to the applications executing on an operating system. It follows that these additional TCPA functions can be (and are) ordinary platform software. They execute on the normal platform CPU, just like ordinary software. Note that there is a difference between “ordinary software” in general (i.e., software that just runs on a TP) and “ordinary software” that is part of (provides functionality for) the TSS. The only things that need to be trusted are things within the TPM. Misbehavior of anything outside the TPM, including the TSS, can be detected by consistency checks.

TCPA defines data as well as functions. Challengers use these data to interpret the measured values of integrity metrics, so these are preferably stored on TPs, in the form of digitally signed certificates.

Measurement Agents

Measurement agents are similar to the RTM and perform integrity measurements and store the results in a TPM, but they are not roots of trust. They must be measured before they can be trusted.

The RTM could measure all the integrity metrics of a platform. In most platforms, however, the software environment is booted in a series of steps. At each step, some software initializes resources and then calls more complex software that couldn't execute without that resource. If the RTM in such a platform is actually the platform itself (executing the CRTM), it is often more practical to put measurement function instructions in each successive layer of software in the boot process. Each set of measurement instructions makes the platform act as an agent of the RTM at that stage of booting the platform.

In a platform using measurement agents, the RTM does a few simple operations, measures the next software to be executed in the boot process, stores the result in the TPM, and passes control to that next software. The next software does more complex operations, measures the next software to be executed in the boot process, stores the result in the TPM, and passes control to that next software. And the process continues, forming a chain of trust from the RTM, because the RTM is implicitly trusted and other software is always measured and the result stored in the TPM before it is executed. If the boot software (or, indeed, any software) is a rogue and breaks this rule, that rogue software cannot hide itself because its measurements were stored in the TPM before the rogue software was executed.
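The measure-then-execute rule described above can be sketched as follows. The boot stages are hypothetical placeholders for real firmware and software images, and the `execute` step is shown only as a comment.

```python
import hashlib

def extend(pcr: bytes, digest: bytes) -> bytes:
    """PCR extend: new PCR = SHA-1(old PCR || digest)."""
    return hashlib.sha1(pcr + digest).digest()

# Each stage is measured, and the result stored, *before* it runs.
boot_chain = [b"<OS loader image>", b"<OS kernel image>"]

pcr = bytes(20)   # PCR reset at power-up; the RTM is implicitly trusted
event_log = []    # kept in ordinary storage (the Measurement Store)
for stage in boot_chain:
    digest = hashlib.sha1(stage).digest()
    pcr = extend(pcr, digest)    # 1. store the measurement in the TPM
    event_log.append(digest)     # 2. record it in the measurement log
    # 3. only now pass control: execute(stage)
```

A rogue stage cannot hide itself: its measurement was folded into the PCR before it gained control, and the extend operation gives it no way to rewind the register.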

The TCPA specification for PC platforms requires the main platform processor to execute a series of measurement agents during the boot process, with each agent doing integrity measurements under the control of measurement instructions that were themselves measured by the previous layer of boot software. Both the OS loader and the OS itself contain such measurement agents.

After the OS is loaded, its measurement agent stays resident and continues to measure new software and store the result in the TPM before that software executes. The question is “How much new software should be recorded?” The answer is that it is strictly necessary to record an integrity metric only when execution of the new software will cause the platform to step outside the platform's current trust policy, whatever that may be. It is essential to record the transition from a more trusted state to a less trusted state, and impossible to change from a less trusted state to a more trusted state without some drastic process (such as a reboot). So although it may be necessary to record the start of individual daemons (for example), it is unnecessary to record when the daemon stops (because “what's done is done”). Very fine resolution of trusted states will require more complex management processes (to interpret the PCRs), so many platforms will use a coarse-grained trust policy, in which very little needs to be recorded.

Trusted Platform Measurement Store

When the RTM or a measurement agent makes a measurement, it is actually making a record of the software that is about to be loaded and calculating a summary of that software. The Trusted Platform Measurement Store is the repository for the record, and the summary is stored in the TPM. The record of the software could be the actual software, but more likely it is a reference (probably just the name, supplier, and version, or URL) of the software.

As will be explained later, the Trusted Platform Measurement Store doesn't have to be trusted, but (obviously) it must work correctly. This means that ordinary software and storage can be used for the Trusted Platform Measurement Store. If the platform is working properly, the information in the Trusted Platform Measurement Store will be consistent with the summary in the TPM. (Further details are given in the section “Integrity.”) If the platform isn't working properly, the information in the Trusted Platform Measurement Store won't be consistent with the summary in the TPM. It won't be possible to find out where the inconsistency is, but at least a challenger can deduce that something is wrong in a target platform.
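The challenger's consistency check can be sketched as a replay of the measurement log against the TPM's summary. This is a simplified illustration; in practice the reported PCR value is signed by the TPM, and the log entries here are hypothetical.

```python
import hashlib

def replay(event_log: list) -> bytes:
    """Recompute the expected PCR value from a measurement log."""
    pcr = bytes(20)
    for digest in event_log:
        pcr = hashlib.sha1(pcr + digest).digest()
    return pcr

# An honest platform's log replays to the value the TPM reports.
event_log = [hashlib.sha1(s).digest() for s in (b"loader", b"kernel")]
reported_pcr = replay(event_log)
assert replay(event_log) == reported_pcr

# A tampered log no longer matches. The challenger cannot tell *where*
# the inconsistency lies, but can tell that something is wrong.
tampered = [event_log[0], hashlib.sha1(b"rogue kernel").digest()]
assert replay(tampered) != reported_pcr
```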

Repository for TCPA Validation Data

As will be explained in the section “Integrity,” validation of integrity measurements requires digitally signed assertions or statements that contain predicted values of integrity metrics. For convenience, these are stored in a repository somewhere (anywhere) in the platform so that they can be supplied with the measured integrity metrics. This simplifies the challenger's job.

Validation data is signed by the entity that vouches for some particular aspect of a platform. It isn't secret, and it isn't private (because it applies to all platforms of that class or type). It can be held anywhere on a platform, or it may be merely referenced by a platform. So it may be that software is supplied with validation data and that an option ROM includes validation data, for example. Or it may be that the platform contains just a list of URLs to web sites containing the validation data. The exact solution does not matter, as long as it can be trusted (for example, URLs must not be allowed to expire).

Trusted Platform Agent

The Trusted Platform Agent (TPA) is software that coordinates the supply of integrity metrics to a challenger and is probably part of the operating system. It must work properly, but it doesn't need to be trusted, because the consistency of integrity data depends only on the trustworthiness of the TPM. The TPA gathers signed summaries of integrity metrics from the TPM, measurement logs created by the RTM and measurement agents, and validation data from the repository. Then it sends the information to a challenger.

This integrity response process is described in more detail in the next section.
