6 Test and Evaluation of Trusted Computing

Trusted computing is irreplaceable in guaranteeing platform security and establishing the chain of trust. To protect personal, organizational and national interests, it is of great importance to ensure the security and reliability of trusted computing technology through test and evaluation methods. Test and evaluation is also useful for improving product quality, enhancing interoperability and compatibility and even promoting the development of the trusted computing industry. Research achievements in test and evaluation can also provide scientific methods and technical support for evaluating product quality and granting market access.

The main research directions of test and evaluation of trusted computing technology include compliance testing for the TPM/TCM specifications, analysis of trusted computing mechanisms and certification/evaluation of trusted computing products. The research trend in compliance testing of TPM/TCM is to raise the level of automation, so as to achieve better performance and reduce the influence of human factors on test results. Analysis of trusted computing mechanisms mainly adopts formal methods, such as theorem provers, to verify all kinds of security properties of abstract protocols. Certification and evaluation of trusted computing products focus on security evaluation of the overall quality of a product with the ideas of security assurance and process engineering.

6.1 Compliance Test for TPM/TCM Specifications

Compliance testing against the TPM/TCM specifications is a key part of the test and evaluation research of trusted computing. First, the specifications impose relatively strict requirements on the security and functional correctness of products, so compliance testing is a primary method for checking these properties. Second, compliance testing helps vendors find product defects and improve product quality, and furthermore promotes the development of the trusted computing industry. Third, TPM products from different vendors, or a TPM and other software/hardware, may interact with each other frequently, and compliance testing helps improve compatibility and interoperability and reduce the cost of deploying and updating products. Finally, research on compliance testing provides not only scientific methods for product certification and accreditation but also technical and instrumental support for test and evaluation institutions in China.

Currently, the main difficulties of compliance test research lie in the automation of test methods and the quality analysis of test results. Existing efforts, such as the work on TPM compliance testing by Ruhr University in Germany [46], are performed in a completely manual manner. Such testing is usually time-consuming and expensive and lacks an acceptable analysis of test quality. To address this problem, we have used the extended finite state machine (EFSM) as a theoretical basis and made breakthroughs in several key technologies of trusted computing test and evaluation, including the test model of the trusted computing platform, test methods and quality analysis of test results [77].

Figure 6.1: Compliance test steps for TPM/TCM.

6.1.1 Test Model

A formal test model can describe the specification accurately and explicitly. It is fundamental to raising the automation level and the quality analysis of testing. We build this model with the following steps, as depicted in Figure 6.1.

(1) Describing specification: We adopt a formal program specification language to describe the trusted computing specifications. In this way, we divide TPM functions into related sets and abstract concrete definitions into relatively simple descriptions. These descriptions avoid the incompleteness and ambiguity of natural language and eliminate the interference of inessential content with the subsequent tests.

(2) Analyzing functions of inner-module commands: Based on the description of the specification, we analyze the data flow and control flow dependencies between commands belonging to the same function module of TPM/TCM.

(3) Analyzing functions of cross-module commands: Based on the description of the specification, we analyze the data flow and control flow dependencies between commands belonging to different function modules of TPM/TCM.

(4) Defining states and transitions: According to the dependencies between the internal commands of each function module, we define the states and transitions of each TPM/TCM function module based on EFSM theory. We further combine the states and transitions of different modules to construct an overall test model of TPM/TCM (a minimal structural sketch of such a model follows this list).
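To make the resulting model concrete, the following is a minimal structural sketch of such an EFSM in Python. It is an illustration only; the class and field names are our assumptions, not the notation of the model itself.

from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass(frozen=True)
class Transition:
    source: str                                   # source state s
    target: str                                   # destination state s'
    operation: str                                # TPM/TCM command causing the transfer
    predicate: Callable[[Dict[str, Any]], bool]   # enabling condition P

@dataclass
class EFSM:
    states: List[str]
    initial: str
    transitions: List[Transition] = field(default_factory=list)

    def enabled(self, state: str, ctx: Dict[str, Any]) -> List[Transition]:
        # Transitions leaving `state` whose predicate holds in context `ctx`.
        return [t for t in self.transitions
                if t.source == state and t.predicate(ctx)]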

6.1.1.1 Describing Specification

The purpose of describing the specification is to abstract and simplify the specification content and to eliminate the ambiguity and incompleteness of natural language. This work can be divided into three steps: dividing functions, abstracting and simplifying functions and formally describing functions.

Dividing functions is the first step. The content of the TPM/TCM-related specifications is organized into chapters, but the granularities of different chapters are unbalanced and the dependency relationships between chapters are complex. Based on the specifications, we categorize the specification content into classes consisting of high-cohesion, low-coupling functions. For example, we combine the content on key management and on the basic cryptographic services in protection of platform data into a cryptography subsystem, so as to depict the TPM's basic cryptographic functions.

Based on this division, function modules and subsystems can be abstracted and simplified. The purpose of this step is to extract the core content and highlight the characteristic functions of each subsystem. Take the cryptography subsystem as an example: although there are as many as seven key types, we take only three representative types into account: encryption, storage and signing keys. We ignore the other types of keys, as they are essentially similar to encryption, storage or signing keys. After simplifying key types, the operations in the cryptography subsystem can be divided into key management and key usage: the former includes creating, loading and evicting keys, and the latter includes encrypting, sealing and signing.

Finally, we precisely describe each subsystem in the formal Z language. Z is a widely used program specification language whose advantages for formal analysis have been demonstrated in several large software engineering projects. In an analysis using Z, analyzers usually first define data structures and variables and then give function operations. For the cryptography subsystem, we introduce several abstract data types, including KeyHandle, KeyType (with three optional values) and the global variable maxcount (restricting the maximum number of keys a TPM/TCM can hold). These definitions are given in Figure 6.2.

For key management operations, we define the Z functions TPMCreateKey, TPMLoadKey and TPMEvictKey, as illustrated in Figure 6.3. For key usage operations like TPMEncrypt and TPMDecrypt, we define the input, output and critical internal processing steps, as shown in Figure 6.4. Note that we must also define the necessary system states and their initialization; for example, in Figure 6.5, key handles must be mapped onto different types by KeyHasType, and the SRK must exist when the TPM initializes.
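As a complement to the Z schemas (shown here only as figures), the following Python sketch mirrors the abstract semantics of the three key management operations under our own simplifying assumptions; it is not the book's tool code, and the constant values are arbitrary.

MAXCOUNT = 4                                     # plays the role of maxcount
SIGN, STORAGE, BIND = "sign", "storage", "bind"  # the three key types retained

class CryptoSubsystem:
    def __init__(self):
        self.keys = {}          # loaded keys: handle -> type (cf. KeyHasType)
        self.next_handle = 1

    def create_key(self, key_type):
        # TPMCreateKey: returns an opaque key blob; internal state unchanged
        return {"type": key_type}

    def load_key(self, blob):
        # TPMLoadKey: precondition #keys < maxcount
        if len(self.keys) >= MAXCOUNT:
            raise RuntimeError("key slots exhausted")
        handle = self.next_handle
        self.next_handle += 1
        self.keys[handle] = blob["type"]
        return handle

    def evict_key(self, handle):
        # TPMEvictKey: precondition handle in keys
        del self.keys[handle]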

6.1.1.2 Analyzing Functions

Based on the specification description, we can further analyze the logic relationships between variables and operations within a function module. These relationships are embodied in both data flow and control flow: data flow captures how the output parameters of one command feed into other commands, and control flow captures how changes of the chip's internal data and state constrain the execution of other commands.

Take the cryptography subsystem as an example: TPMLoadKey needs the key blob output by TPMCreateKey as its input, but these two commands may be executed in different TPM life cycles (within one power-on life cycle, TPMLoadKey can be executed without a previous execution of TPMCreateKey). Thus, there is a data flow relationship between these two commands but no control flow relationship. By comparison, TPMSeal's inputs include a key handle generated by TPMLoadKey, and any operation using a sealing key must be executed after a successful execution of TPMLoadKey. Thus, TPMSeal depends on TPMLoadKey in both data flow and control flow. The dependency relationships between key operations are illustrated in Figure 6.6.

Figure 6.2: Data types and global variables.
Figure 6.3: Key creating, loading and evicting.
Figure 6.4: Sealing and unsealing operations.
Figure 6.5: The definition of abstracted states of system and their initialization.
Figure 6.6: Dependent relationships between key operations.

6.1.1.3 Extracting State

Based on the specification description and function analysis, we can construct an EFSM model of the TPM. The key to this work is extracting the states and the transitions between them. In extracting states, we define system variables and the initial state according to the specification, and then divide the initial state into several states according to the values of the variables.

Internal Variables and Initial State. The state of the TPM is determined by the values of its internal variables, which can be classified into single-value variables and collective variables. Single-value variables are assigned and changed in a single dimension; for example, the Boolean variables indicating the activation, enabling and ownership states of the TPM can only be assigned TRUE or FALSE. Collective variables are sets ranging over multiple dimensions that consist of several independent variables; for example, the above-mentioned variable keys, which records the information of keys loaded in the TPM, consists of at most maxcount key handle variables.

Assume that a TPM subsystem has state-related single-value variables x_1, ..., x_i, ..., x_n and collective variables A_1, ..., A_j, ..., A_l, where each x_i ranges over type T_i (i = 1, ..., n) and the elements of each A_j have type TT_j (j = 1, ..., l). Then the initial state can be represented as (using Z notation)

S_{Initial} = \{x_1, \ldots, x_n, A_1, \ldots, A_l \mid x_1 \in T_1, \ldots, x_n \in T_n, A_1 \subseteq TT_1, \ldots, A_l \subseteq TT_l\}

There is only one collective variable, keys, in the Z description of the TPM cryptography subsystem, together with the constraint function KeyHasType. The initial state of the subsystem is

S_{Initial} = \{keys \mid \#keys \leq maxcount\}

Dividing State. Given the initial state, a further division can be performed according to the values of the state variables. For single-value variables, states can be divided according to a certain policy, such as boundary value analysis or category classification. A policy-based division of state S according to variable x_i is defined as follows:

Partition(S, x_i, policy) = \{AS_1, \ldots, AS_j, \ldots, AS_{m_i} \mid AS_j \stackrel{\Delta}{=} (P_j(x_i) = true) \wedge \forall AS_{j_1}, AS_{j_2} : AS_{j_1}(x_i) \cap AS_{j_2}(x_i) = \emptyset \wedge \bigcup_{j=1}^{m_i} AS_j(x_i) = T_i\}

Here, AS_j represents a substate of state S divided by variable x_i, P_j is the predicate constraining x_i (such as x_i \geq 0) and AS_{j_1}(x_i) represents the value space of variable x_i in state AS_{j_1}. For collective variables, a method based on set division is usually adopted to divide the state space. We call a family of subsets \pi a partition of set A if and only if \pi meets the following requirements: first, \emptyset \notin \pi; second, any two elements of \pi are disjoint; third, the union of all elements of \pi equals A. If a function f : T \rightarrow V maps type T to a finite type V with ran(f) = \{v_1, v_2, \ldots, v_n\}, then a division of type T by function value is as follows:

\pi = \{\{t : T \mid f(t) = v_1\}, \{t : T \mid f(t) = v_2\}, \ldots, \{t : T \mid f(t) = v_n\}\}

Thus, state S can be divided by function f according to collective variable Aj, and the division result is

Partition(S, A_j, f) = \{BS_1, \ldots, BS_{k_j} \mid \forall t \in BS_1(A_j) : f(t) = v_1, \ldots, \forall t \in BS_{k_j}(A_j) : f(t) = v_{k_j}\}

Here, BS_1(A_j) represents the set of elements of type TT_j belonging to collective variable A_j in state BS_1.

For the cryptography subsystem, the state can be divided based on its collective variable as follows:

Partition(S_{Initial}, keys, KeyHasType) = \{S_{Sign}, S_{Storage}, S_{Bind}\}

Here, all elements of S_{Sign} are signing key handles, and S_{Storage} and S_{Bind} are defined similarly. The number of state combinations is thus 7 (C_3^1 + C_3^2 + C_3^3 = 2^3 - 1 = 7), and the concrete states after division are s1 = {S_Sign}, s2 = {S_Storage}, s3 = {S_Bind}, s4 = {S_Sign, S_Storage}, s5 = {S_Storage, S_Bind}, s6 = {S_Bind, S_Sign} and s7 = {S_Bind, S_Sign, S_Storage}.

The final state space is achieved by combining the individual state divisions. For one collective variable, the count of states after combination is C_{k_j}^1 + C_{k_j}^2 + \cdots + C_{k_j}^{k_j} = 2^{k_j} - 1. If the variables of a state include both single-value variables x_1, ..., x_n and collective variables A_1, ..., A_l, then the final state space is the complete combination of all variables. The count of final states is

\prod_{i=1}^{n} m_i \cdot \prod_{j=1}^{l} \left(2^{k_j} - 1\right)

If there are too many state variables, the final state space expands dramatically. In this situation, there are two alternatives. On one hand, the state variables can be redivided at a coarser granularity. On the other hand, constraints can be imposed on the final state space according to practical demands (such as test demands). For example, if we add the following constraint to the above result, the space reduces to only four states (s1, s4, s6, s7):

P(S) = \{S \mid \exists key \in keys : KeyHasType(key) = TPM\_SIGN\}
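The division above is easy to reproduce mechanically. The following Python fragment (a sketch; the names are ours) enumerates the 2^3 - 1 = 7 combined states and then applies the constraint that a signing key is loaded, leaving the four states s1, s4, s6 and s7:

from itertools import combinations

TYPES = ("Sign", "Storage", "Bind")

states = [frozenset(c)
          for r in range(1, len(TYPES) + 1)
          for c in combinations(TYPES, r)]
assert len(states) == 2 ** len(TYPES) - 1   # 7 non-empty combinations

# Constraint P(S): some loaded key is a signing key
constrained = [s for s in states if "Sign" in s]
assert len(constrained) == 4                # s1, s4, s6, s7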

6.1.1.4 Extracting Transition

Extracting transitions is the last step in constructing an EFSM for the TPM. Transitions mainly consist of the operations that cause states to transfer: for two TPM states s_i, s_j, if there is some operation OP that makes s_i transfer to s_j, then (s_i, s_j, OP) is added to the transition set of the model. Figure 6.7 depicts the basic algorithm for extracting transitions; there, def(s) = \{key \mid key \in keys\}, S is the set of internal states and OP denotes the set of TPM operations.
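The concrete algorithm is given in Figure 6.7; the following simplified Python rendering conveys the idea (try every operation in every state and record the state pairs it connects; the function names are our assumptions):

def extract_transitions(states, operations, apply_op):
    # apply_op(s, op) returns the successor state s' if op is enabled
    # in s, or None otherwise.
    transitions = set()
    for s in states:
        for op in operations:
            s_next = apply_op(s, op)
            if s_next is not None:
                transitions.add((s, s_next, op))   # triple (s, s', OP)
    return transitions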

For the cryptography subsystem, the operations that cause transitions include TPMCreateKey, TPMLoadKey, TPMEvictKey, TPMSeal, TPMUnSeal, TPMSign, TPMVerify, TPMEncrypt and TPMDecrypt. The EFSM model of the cryptography subsystem obtained by applying the above method is illustrated in Figure 6.8, and the concrete meanings of states and transitions are shown in Tables 6.1 and 6.2.

Figure 6.7: Extraction algorithm of state transition.

Table 6.1: State description of EFSM.

Figure 6.8: EFSM model of cryptography subsystem.

6.1.2 Test Method

6.1.2.1 Automatic Generation Method of Test Cases

The automatic generation of test cases is the key and most challenging part of compliance testing for the TPM/TCM specifications. To reduce the complexity of generation, it proceeds in two phases. In the first phase, abstract test cases are generated automatically. Though not executable, an abstract case identifies an execution sequence of TPM/TCM commands and embodies the data flow and control flow relationships between commands. In the second phase, the abstract cases are converted into practical ones: by filling concrete parameters into the test commands, the internal function logic of each command is actually exercised, and the test cases become ready to execute.

Table 6.2: State transition of EFSM.1

State transition | Description
t1 | {s0 – TPM_CWK, Pt3 / <yretValue(0), yordinal(TPM_CWK), ytag(RspTag), ykeyType(xkeyType)>, 0 –> s0}. Here Pt3 = (xtag(auth1Tag), xkeyHandle ∈ keys)
t2, t6, t11 | {s2 (s5, s7) – TPM_EK, Pt2 / <yretValue(0), yordinal(TPM_EK), ytag(RspTag)>, 0 –> s0 (s2, s5)}. Here Pt2 = (xtag(reqAuth), xkeyHandle ∈ keys)
t3, t5, t10 | {s0 (s2, s5) – TPM_LK, Pt3 / <yretValue(0), yordinal(TPM_LK), ytag(rspAuth1)>, 0 –> s2 (s5, s7)}. Here Pt3 = (xtag(reqAuth1), xpkeyHandle ∈ keys, xkeyType(SignKey))
t4, t7, t12 | {s2 (s5, s7) – TPM_Sign, Pt16 / <ytag(rspAuth2), yretValue(0), yordinal(TPM_Sign)>, 0 –> s2 (s5, s7)}. Here Pt16 = (xtag(reqAuth2))
t8, t13 | {s5 (s7) – TPM_Seal, Pt10 / <ytag(rspAuth1), yretValue(0)>, 0 –> s5 (s7)}. Here Pt10 = (xtag(reqAuth1), xkeyHandle ∈ keys, xkeyType(TPM_STORAGE))
t9, t14 | {s5 (s7) – TPM_UnSeal, Pt11 / <ytag(rspAuth2), yretValue(0), yordinal(TPM_UnSeal)>, 0 –> s5 (s7)}. Here Pt11 = (xtag(reqAuth2))
t15 | {s7 – TPM_UB, Pt9 / <ytag(rspAuth1), yretValue(0), yordinal(TPM_UB)>, 0 –> s7}. Here Pt9 = (xtag(reqAuth1), xkeyHandle ∈ keys)

1 In the identifier of a transition (s – x, P/op, y –> s′), s represents the source state of the transition and s′ the destination state; x represents the input of the transition, P the condition, op the operations performed in the transition and y the output. Input variables in this table include xtag, xordinal, xkeyType and xkeyHandle; output variables include ytag, yretValue, yordinal, ykeyHandle and ykeyType. Input commands include TPM_CWK (create key), TPM_EK, TPM_LK, TPM_UB, TPM_Seal, TPM_UnSeal and TPM_Sign. The notation ytag(value) represents assigning value to ytag. Environment and global variables include keys (the set of key handles loaded in the TPM).

The basic steps of abstract case generation are depicted in the left part of Figure 6.9. For each command A, we first find its dependency relationships with other commands and then form a command sequence according to them: any command that A depends on is located before A, and any command that depends on A is located after A. Then we adjust the locations of the commands (except A) in the sequence and get an abstract test case for A. After repeating this process for all commands and merging test cases when necessary, we finally get abstract test cases of compound functions, each consisting of a series of commands.

The basic steps for practical test cases are depicted in the right part of Figure 6.9. For any command A, we first find its data flow dependency relationships with other commands. Then, according to the associated parameters between commands embodied in these relationships, we insert a representative value of each equivalence class into the abstract test cases of A. Finally, according to the input parameters of A and the relationships between input and output parameters, we supplement the necessary parameters and get the practical test cases of A.

Figure 6.9: Basic steps for test case generation.
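The abstract-case step amounts to ordering commands so that every command a given command depends on precedes it. Below is a minimal sketch using a topological sort over a toy fragment of the dependency graph of Figure 6.6 (the fragment is illustrative, not the complete graph):

from graphlib import TopologicalSorter

# command -> set of commands it depends on
deps = {
    "TPMCreateKey": set(),
    "TPMLoadKey":   {"TPMCreateKey"},            # needs the key blob
    "TPMSeal":      {"TPMLoadKey"},              # needs a loaded storage key
    "TPMUnSeal":    {"TPMSeal", "TPMLoadKey"},
}

abstract_case = list(TopologicalSorter(deps).static_order())
print(abstract_case)  # ['TPMCreateKey', 'TPMLoadKey', 'TPMSeal', 'TPMUnSeal']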

This method needs only a small amount of manual work in the initial phase of test case generation. The relatively high degree of automation reduces the testers' burden and greatly improves efficiency. The method also supports accurate analysis of test quality, which grounds the trustworthiness of the test results in verifiable algorithms rather than in default trust in the testers. This helps exclude the interference of human factors and enhances both the objectivity of the test and clients' confidence in the results.

6.1.2.2 Analysis of Test Cases

Quality analysis of test cases is a critical means of quantifying the trustworthiness of a test method. Given a relatively complete and universal test model, the quality of the test cases actually depends on the simplifications made when generating cases from the model and on the clipping of the generated cases. In the ideal situation, key quality indices such as the test coverage rate can be determined by analyzing the principles of simplification and clipping. For example, if the full-state coverage principle is adopted, testers can guarantee that every optional value of every internal variable is covered by at least one test case. In practice, however, full-state coverage is often too expensive, and a state or transition coverage rate beyond a certain threshold is satisfactory in most cases. In other words, the coverage rate should be monitored while generating test cases, and the generation process should halt as soon as the rate reaches the threshold value.

In completed test works, we typically set a threshold on the test coverage rate as the termination condition of test case generation. The quality of the test cases can then be measured by an analysis tree of reachability, a tree-structured graph describing the FSM's behavior traces. The root of the tree (the level-0 node) corresponds to the initial state of the FSM, and the nodes at the Nth level indicate the states reachable after N transitions from the initial state. Some transitions may be blocked if the transition predicate of the EFSM evaluates to false under certain test inputs. Through a depth-first or breadth-first traversal, the tree can be thoroughly traversed so as to analyze the test quality. The concrete algorithm to generate the analysis tree of reachability is as follows (a minimal traversal sketch follows the list):

(1) Set a traversal depth l, do a depth-first traversal of the EFSM model of TPM/TCM starting from a designated initial node and get the analysis tree of reachability.

(2) Put all reached nodes into a set S_traversal.

(3) Terminate the generation of the analysis tree of reachability when the traversal depth is greater than l.

(4) Every feasible path in the tree corresponds to a test case. To obtain the test coverage rate, the generated test cases should be reanalyzed according to the tree.
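A minimal traversal sketch in Python (breadth-first for brevity; step (1) above uses depth-first, and either traversal visits the same set of states up to the depth limit):

from collections import deque

def reachable(initial, succ, depth_limit):
    # succ(state) yields successor states; returns all states reached
    # within depth_limit transitions of `initial`.
    visited, frontier = {initial}, deque([(initial, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if depth == depth_limit:
            continue
        for nxt in succ(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, depth + 1))
    return visited

def state_coverage(visited, total_states):
    return len(visited) / len(total_states)   # #S_traversal / #S_total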

For the cryptography subsystem mentioned above, test quality can be analyzed by an analysis tree of reachability. The EFSM model of the cryptography subsystem is illustrated in Figure 6.8, and exemplar paths extracted from this model are shown in Figure 6.10. In that figure, nodes represent model states (the black node is the initial state) and paths represent transitions. The coverage rate is #S_traversal / #S_total, where S_total is the total state (or transition) space. To make the transition coverage rate of the test cases of the cryptography subsystem exceed 80%, we eventually generated over 70,000 test cases.

6.1.3 Test Implementation

Based on the above method, we have designed and implemented a comprehensive solution for compliance testing against the TPM/TCM specifications. The basic process of this solution is depicted in Figure 6.11. We first formally describe the TPM/TCM and establish the EFSM model. Then we automatically generate test cases and executable scripts with the test case generation tool on the EFSM, and finally get the test report by feeding the scripts to the test platform and executing them.

Figure 6.10: Analysis tree of reachability of cryptography subsystem.
Figure 6.11: Basic steps for test implementing.

Currently, we have completed compliance tests on several TPM products, including the Atmel AT97SC3203, NSC TPM 1.2 and Lenovo TPM 1.1b. We conclude that the actual product quality is not encouraging, although compliance with the TPM v1.2 specification is commonly claimed by TPM manufacturers. Some of the errors may enable attacks and lead to serious loss. These errors are as follows:

Error about resource handles: Resource handles should be generated randomly, but some products generate handles in a regular pattern, so the handles are not fresh. If an attacker is able to release a handle in the TPM/TCM, a man-in-the-middle attack becomes possible. For example, a user loads a key KEY2 into the TPM and gets the handle 0x05000001; the attacker then releases 0x05000001 and loads a new key KEY1. KEY1 may be a corrupted key that shares the same usage authorization value as KEY2, and a TPM with this error may allocate 0x05000001 to KEY1. An unsuspecting user will not perceive the change and will continue to reference the key at handle 0x05000001 (now KEY1) to encrypt secret data, whose ciphertext can finally be decrypted by the attacker.

Error about command ordinals: The ordinal is the parameter that identifies a TPM command. We found that some ordinals in tested products are not compliant with the specification. Although the exact influence of this kind of error cannot be determined now, it greatly affects the interoperability and universality of software that uses the TPM.

Error about return codes: Some products sometimes return vendor-defined internal return codes that are not in the specification. This may give attackers an opening: by analyzing these internal values, an attacker may find vulnerabilities in the TPM and even mount attacks.

6.2 Analysis of Security Mechanism of Trusted Computing

Analysis of the security mechanisms of trusted computing studies and verifies various security properties of abstract protocols and running mechanisms by theoretical and formal methods. Typically, researchers use model checking and theorem proving to analyze and verify confidentiality, authenticity, anonymity and other security properties of TPM/TCM authorization protocols (AP), DAA protocols, the PCR extension mechanism and the process of establishing the chain of trust in a trusted computing platform.

6.2.1 Analysis Based on Model Checking

Model checking models the target system as a finite state machine (FSM), describes the properties the system is supposed to have as logic formulas and uses automated means to verify whether each trace of the system's FSM satisfies these properties. The advantage of model checking is twofold. First, it is completely automatic: as long as the user gives logic descriptions of the target system and the properties, model checking completes all verification work automatically. Second, model checking can present vulnerabilities at the model layer in a relatively intuitive manner; if the modeling is reasonable, the detected vulnerabilities can often be directly exploited in practical attacks. The disadvantage of model checking is that it can analyze only targets with finitely many states. As the target system grows, the number of model states that need to be checked increases dramatically. Because the potential behavior of verification targets such as protocols is in general infinite, model checking often suffers from the state explosion problem and is inherently an imperfect method.

When using model checking to analyze protocols, we must first give a clear description of the protocol to avoid ambiguity and to prepare for modeling it. Then we make assumptions about the cryptographic algorithms and attackers, and describe the analysis target and properties in the tool's modeling language. After these preparations, we run the tool and analyze the verification result. In the following sections, we give an example in which the TCM authorization protocol is analyzed using SPIN [53], so as to illustrate the concrete steps of analyzing trusted computing protocols by model checking.

6.2.1.1 Symbolic Definition of Protocol

The TCM uses an authorization protocol to protect its internal resources. A simple description of the AP protocol is illustrated in Figure 6.12.

For a formal definition of the AP protocol, we introduce the following symbols and definitions:

AP: ordinal of the command that establishes an AP session

ET: entity type that is protected by TCM

EV: entity value that is protected by TCM

seq: sequence number for preventing replay attack

Nc: CallerNonce, nonce provided by caller

Nt: TCMNonce, nonce generated and provided by TCM

CMD: ordinal of the command to be executed (which accesses the resource)

D1, D2: data needed in executing CMD

Figure 6.12: AP authorization protocol.
Figure 6.13: Execution flow of AP authorization session.

S: identifier of authorization protocols

RS: return code representing successful execution

A concrete execution flow of AP is illustrated in Figure 6.13.

6.2.1.2 Modeling Target Protocols

The following assumptions are made when analyzing the AP protocol. First, the cryptographic algorithms used in the protocol are secure. Second, honest users and malicious attackers coexist in running the protocol. Third, the attacker cannot forge HMAC values, so HMAC computation is ignored in the formal analysis. Fourth, the attacker can intercept, tamper with and forge any message of the user.

Under the above assumptions, we adopt PROMELA (PROcess MEta LAnguage), the modeling language of SPIN, to describe the target protocol and the expected properties. We set three roles in the AP protocol: the user U who starts the protocol, the attacker A and the TCM. A acts between U and the TCM and is capable of tampering with and intercepting all plaintext messages. We introduce two half-duplex communication channels, User-to-A and A-to-TCM. Assuming the TCM supports two concurrent AP sessions, the following two global variables can be defined:

(1) byte tcm_session[2], which indicates the sessions between the TCM and the attacker.

(2) byte user_session[2], which represents the sessions between the user and the attacker.

For each session, there are two states, Success and Fail, indicating successful and failed termination of the session. The message structure shared by all participants is defined as follows.
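The concrete PROMELA listing is not reproduced in this text; a minimal sketch, with message kinds and field names assumed from the symbols of Section 6.2.1.1, might look like this:

mtype = { OpenSession, ExecCmd, CloseSession, ResultOK, ResultFail };

typedef Message {
  mtype kind;     /* assumed message kinds: open session, execute command, result */
  byte ordinal;   /* command ordinal (AP or CMD) */
  byte nc;        /* caller nonce Nc */
  byte nt;        /* TCM nonce Nt */
  byte seq;       /* sequence number for preventing replay */
  byte session;   /* authorization session identifier S */
};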

The message channels are defined as follows:
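Again the original listing is omitted here; assuming the two half-duplex channels named above, buffered channels of depth 1 could be declared as:

chan UserToA = [1] of { Message };
chan AToTCM  = [1] of { Message };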

Finally, the behavior of the user, the attacker and the TCM can be modeled. Three processes are modeled in SPIN, representing the user, the TCM and the attacker, respectively. The user process can initiate sessions, request the execution of authorized commands and receive results. The TCM process can actually open sessions, execute commands and send results. The attacker is active, and his behavior is not restricted by the specification: he can perform any operation, such as message interception, tampering and forgery. For example, the attacker can request to open a session by impersonating a user, send valid messages to the TCM or the user, store all intermediate messages, replay previous messages and terminate sessions.

Depicting Target Property. In natural language, the target property is as follows: let S be the session set, let TS represent the state of a session between the TCM and the attacker, and let US represent the state of a session between the user and the attacker. It is expected that for any s ∈ S, the states of TS and US are consistent with each other.

The target property described by PROMELA is as follows:

#define sessionProperty (tcm_session[0] == user_session[0] && tcm_session[1] == user_session[1])

Analyzing Result. Based on the above model checking method, we have found a replay attack against the AP authorization protocol, as depicted in Figure 6.14. The attack consists of three phases. The first is the attack preparation phase: the user creates an AP session according to the AP protocol specification, and the attacker intercepts and saves the concrete command information and forcibly terminates the session. The second is the user's normal execution phase: the user, completely unaware of any abnormality, opens another authorization session; in this way, the session that was not actually closed in the previous phase is kept alive. The last is the replay phase: the attacker leverages the information saved in the first phase to re-execute the TCM command the user previously wanted to execute, thereby changing the TCM's internal resource state without the user perceiving it.

6.2.2 Analysis Based on Theorem Proving

Besides model checking, research institutions have also analyzed security mechanisms of trusted computing, such as the authorization protocol, the chain of trust and the DAA protocol, using belief logic, the logic of secure systems and applied pi calculus. Methods like belief logic typically take all behaviors of the target into account and are dedicated to proving properties that these behaviors satisfy; thus, we classify such analysis work into the theorem proving category. Theorem proving methods are based on strict logical analysis, so their advantage lies in soundness: once a security property is proved, it is definitely correct (in the model set by the theorem proving system). Generally speaking, theorem proving is dedicated to proving the security properties of protocols rather than finding their defects, and it is inferior to model checking in degree of automation. But with the development of security logic theory and technology, these disadvantages have gradually been remedied.

Figure 6.14: Sequence of commands in replay attack of AP.

Currently, security mechanism analysis of trusted computing based on theorem proving is still in its infancy. First, construction methods of trusted execution environments and related security technologies are constantly emerging, so analysis work cannot keep up with the pace of technology development. Second, for concrete problems in practical applications of trusted computing, such as the combination of remote attestation, TNC and secure channels, there has been no compelling result yet. Last but not least, for complex analysis targets like DAA and trusted virtualization platforms, existing formal description methods and definitions of security properties are not mature enough. Thus, how to apply the various analysis methods, especially theorem proving, in the trusted computing area still needs thorough discussion. In the following sections, we briefly introduce several existing works that analyze the security mechanisms of trusted computing using theorem proving.

6.2.2.1 Analysis of Trusted Computing Protocols

Researchers from Saarland University in Germany have successfully described zero-knowledge proofs (ZKP) and the ZKP-based DAA protocol using the equivalence theory of applied pi calculus [51]. Based on an extension of applied pi calculus, they leverage the ProVerif tool to give the first mechanized verification result for DAA (i.e., one not relying on computational provable security) and successfully detect a security defect: authenticity in the credential issuing process cannot be ensured. In other words, an attacker may prevent the DAA credential issuer from figuring out which platforms actually hold DAA credentials.

Researchers from HP Labs in the UK found that the authorization data of a TPM key are likely to be shared in practical applications. In this situation, a malicious user could impersonate the TPM to communicate with other users, and if the malicious user (such as an administrator) knows the authorization data of the SRK, he can forge the whole storage hierarchy [129]. Though transport protection can partially avoid this situation, the authorization protocol is still vulnerable to offline dictionary attack. To handle these problems, the authors propose a new authorization protocol and prove its security against the above two attacks using ProVerif.

Chinese researchers successfully found a session substitution attack on an implementation of OSAP [130] using SVO logic (a kind of belief logic). In detail, if the implementation of OSAP does not bind a session handle to its corresponding entity, a malicious user may use an OSAP session created by himself to access unauthorized internal resources in the TPM.

6.2.2.2 Analysis of Critical Mechanisms of Trusted Computing

From the view of protocol composition, researchers from Carnegie Mellon University in the USA have proposed the Logic of Secure Systems (LS2) for system analysis. By introducing predicates and derivation formulas for describing system security functions and properties into existing protocol composition logic, LS2 gains strong descriptive ability while inheriting the local reasoning property (attacker behavior can be excluded from the reasoning) and is regarded as a powerful tool for system security analysis. Based on LS2, researchers found a defect in dynamic root of trust-based integrity collection and reporting: a program may actually not run even if it is measured and extended into a PCR [52].

Researchers from INRIA in France have analyzed state-related security mechanisms [131, 132], such as PCR extension. They found that the analysis process in the ProVerif tool may not terminate when analyzing state-related security mechanisms, and that the tool may also report false attacks. Against the first problem, they carried out a specialized theoretical analysis of the PCR extension mechanism and confirmed that its minimum complete set is finite; this conclusion makes it possible for the analysis tool to verify the global security of the system quickly by actually verifying a finite set of system variants. To solve the false attack problem, they extended ProVerif to eliminate the false attacks and proved the extension's correctness theoretically.

6.3 Evaluation and Certification of Trusted Computing

Compared with testing and formal analysis, security evaluation is more comprehensive and more expensive. Besides the security of specifications and products, security evaluation also pays close attention to the production process. Up to now, TCG has proposed directive documents for guiding evaluation organizations in carrying out security evaluation of trusted computing, and has also carried out certification projects on TPM and TNC. These projects verify the correctness and rationality of security evaluation and conclude with an authoritative and overall judgment on trusted computing products. This judgment is valuable guidance for product selection, and a positive evaluation is of great importance for enhancing users' confidence.

6.3.1 Common Criteria

The primary basis for product evaluation adopted by TCG is the Common Criteria (CC). The core purpose of CC is to introduce the principles of security engineering into evaluation: the whole life cycle of the evaluation target, including design, development and application, is conducted under security engineering principles to ensure product security.

The Common Criteria consists of two relatively independent parts, requirements on security assurance and requirements on security function, and seven evaluation assurance levels (EAL) are defined according to the security assurance requirements. Beyond these requirements, CC further allows an authority to formulate the security requirements for a specific type of product, called a "Protection Profile" (PP). A PP should fulfill the requirements of certain users, have a certain level of abstraction and be independent of any specific product. Given one or more PPs, a manufacturer can formulate concrete security requirements for its product, called a "Security Target" (ST). Evaluation organizations can evaluate the PP and ST and then complete the evaluation of the target product. In the overview of its specification architecture, TCG has explicitly specified the purpose, implementation environment and process of security evaluation and the relationship between evaluation and certification. As the most influential industry alliance of trusted computing, TCG has already drawn up a PP for the TPM in PCs.

6.3.2 TPM and TNC Certification

Based on security evaluation and with reference to compliance test results, TCG carries out TPM and TNC certification and accreditation projects. TPM certification is based on the following two aspects of evidence:

Compliance test. TCG develops its own software for TPM specification compliance testing and accepts the results of self-tests carried out by the vendors themselves.

Security evaluation. TCG develops its own PP for the TPM in PCs, admits CC evaluation results obtained by vendors and requires that the level of the results be no lower than EAL4.

Currently, the SLB9635TT1.2 from Infineon Technologies in Germany is the first TPM product certified by TCG; it is claimed to be compliant with TPM v1.2 revision 103.

TNC certification is based on the following two aspects of evidences:

Compliance test. TCG develops its own software for TNC specification compliance testing and accepts the results of self-tests carried out by the vendors themselves.

Interoperability test. The TNC compliance workgroup of TCG performs interoperability testing once or twice a year. Vendors are required to participate in and complete this test.

Currently, TNC-related products such as the IC4500 access control suite and EX4200 switches from Juniper Networks in the USA and strongSwan from Hochschule Hannover in Germany have been certified by TCG. These products implement various interfaces of the TNC specification and are typically tested as components of the TNC architecture. For example, the EX4200 switch implements the IF-PEP: RADIUS 1.0 interface and is certified as a policy enforcement point (PEP) of TNC.

6.4 Comprehensive Test and Analysis System of Trusted Computing Platform

Currently, for representative trusted computing products such as security chips and trusted software, industrial administrations urgently need effective and comprehensive methods to test and analyze product compliance and cryptography algorithm security, so as to meet practical requirements on quality inspection, market admittance and government procurement. Trusted computing vendors also need help from professional test and evaluation organizations in order to improve product quality, enhance product compliance and get professional reference for product updating and revision.

To meet the above practical requirements, we have designed and implemented a comprehensive test and analysis system for trusted computing platforms. Our system is based on the methods of compliance testing for the TPM specification presented in Section 6.1 and the research achievements on analysis of trusted computing protocols described in Section 6.2. It also references mature specifications and methods of test and analysis given by researchers all over the world. The system mainly targets trusted computing-related products such as Chinese domestic security chips and trusted computers, and implements multiple functions including compliance testing for the TPM specification, correctness and performance tests of cryptography algorithms, randomness tests and protocol simulation.

The system has been put into practice by Chinese domestic test and evaluation organizations of information security. In testing multiple Chinese products, it has shown great scalability and good test ability, and it has been well received by all application organizations. The following sections first introduce the architecture and functions of the system and then explain each function in detail, including compliance testing of the security chip, tests of cryptography algorithms, randomness tests and protocol simulation. Finally, the practical application of the system is briefly described.

6.4.1 Architecture and Functions of System

The system adopts a client/server architecture. It consists of a test controller, test agents and a supporting database, as shown in Figure 6.15. The test agent is agent software embedded into the target platform to be tested; for example, when performing compliance tests for the security chip specification, the test agent runs as a daemon on the target platform where the security chip is located and waits for test instructions from the test controller. The test controller is the center of the whole system and provides control and administration interfaces to test engineers. The supporting database independently stores the test cases and test results needed in test work. The system is designed to support concurrent tests on several trusted computing products simultaneously, and it significantly enhances the flexibility and elasticity of device deployment. All these features facilitate test work as much as possible.

Figure 6.15: Architecture and interfaces of system.

Test controller. The test controller is the center of the system. It is a cross-platform control tool that provides an administration and control GUI to test engineers. On the test controller, test engineers can carry out work such as test case generation, test scheme and task management, test result analysis and test report generation. When a test engineer assigns any of these works, the test controller conducts the concrete test activity by interacting with the test agent: sending instructions to the test agent, receiving and analyzing test results and generating the test report. The test controller is also responsible for database management, namely storing and retrieving test schemes, tasks, results and reports.

Test agent. The test agent is the module that executes concrete test instructions. It is deployed on the platform to be tested, which supports TPM/TCM security functions. On receiving the test controller's instructions, the test agent invokes TSS or TDDL interfaces, completes the actual tests and data collection on the security chip and returns the test results or collected data to the test controller.

Supporting database. The database is deployed on a dedicated database server and is responsible for storing test cases, schemes, tasks and results.

Our system is implemented according to the above architecture. It supports flexible and comprehensive test and evaluation of TPM/TCM chips and trusted computers. Its functions include the following:

Comprehensive compliance test for the TPM/TCM and TSS/TSM specifications. The system can conduct specification compliance tests on all kinds of functions of TPM/TCM and TSS/TSM according to the trusted computing specifications. The composition of test cases can be flexibly adjusted according to test demands and the desired reliability.

Correctness and performance tests of cryptography algorithms. By executing test cases that contain series of pre-defined test vectors for commands, the system can perform correctness and performance tests on the various encryption and signing algorithms of security chips or algorithm libraries, such as RSA, SM2 and SM4 (a minimal known-answer sketch follows this list).

Randomness tests of random numbers. The system can conduct tests on the SHA1 and SM3 hash algorithms implemented by products. These tests are compliant with the FIPS 140-2 specification and include the frequency test, frequency test within a block, runs test, poker test, matrix rank test, linear complexity test and approximate entropy test.

Simulation of security chips and protocols. Through simulation of security chip functions and of the environment of trusted computing protocols, the system provides an effective and convenient simulation experiment environment for designers of security chips and trusted computing protocols, which can be used to verify the feasibility, security and performance of new functions, new protocols and new interfaces.

Comprehensive analysis of test results. For the results of any test task, the system provides not only the original test logs but also various advanced statistics. Users can select demonstration styles across multiple contents and layers and export reports in different formats.
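As an illustration of the correctness tests mentioned above, the following known-answer test (KAT) sketch compares a product's SHA1 output against the published FIPS 180 test vector for the message "abc". The chip_sha1 callable stands in for the real TSS/TDDL invocation and is hypothetical:

import hashlib

def check_sha1_kat(chip_sha1):
    msg = b"abc"
    expected = "a9993e364706816aba3e25717850c26c9cd0d89d"  # FIPS 180 vector
    # Sanity check of the reference value against a trusted implementation
    assert hashlib.sha1(msg).hexdigest() == expected
    # The product passes if its output matches the known answer
    return chip_sha1(msg).hex() == expected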

6.4.2 Compliance Test for TPM/TCM Specification

Compliance testing for the TPM/TCM specifications is one of the main functions of this system. This kind of test mainly relies on the TPM-related specifications and the Chinese domestic specification, the Functionality and Interface Specification of Cryptographic Support Platform for Trusted Computing. The system is capable of testing the compliance of the input/output logic of most trusted computing commands, and it can also test the data flow and control flow relationships across multiple commands. Test cases can either be generated manually by test engineers or randomly by the test system according to a test quality index designated by the test engineers. This is a black-box test, executed by invoking TSS/TSM software interfaces or TDDL interfaces.

The concrete execution of a test task relies on the test agent deployed on the platform to be tested. Acting entirely on the controller's instructions, the test agent is mainly responsible for initializing the security chip (activating and enabling the chip and taking ownership), parsing instructions, carrying out the concrete tests and returning results. Besides, because some test cases may influence each other and some may even corrupt the TPM (for example, after the test command TPM_SetStatus, the TPM may enter an idle state and not respond to any request), the test agent must monitor the state of the test environment in real time, eliminate undesired influences between test cases and recover a corrupted test environment. If some test cases cannot be executed due to interference from external factors, the test agent must notify the controller in time to get manual intervention.

Test control and administration rely on the test controller, which is deployed on a control terminal. The test controller can generate and manage test cases, schemes and tasks, and is also responsible for demonstrating and exporting test results. Test cases can be divided into a command-level class and a function-level class: the former focuses on the compliance of output parameters under correct and erroneous input parameters of a single command, and the latter concentrates on the compliance of compound functions built from multiple commands. A test scheme consists of several test cases and typically represents a concrete technical test target. A test task is composed of a test scheme and concrete information about the test object, the test engineer and the test conditions, and depicts all aspects of an independent piece of test work. When starting a test work, the test engineer can directly retrieve a pre-stored test scheme from the database, attach additional information and execute the work using the test task as a unit. The test results are demonstrated as the test engineer has configured: the engineer can query original test logs, such as the input and output parameters of an arbitrary command in an arbitrary test case, directly look up statistics of the test results in figure or table mode, or export the test results as a standard test report.

6.4.3 Tests of Cryptography Algorithms and Random Numbers

The cryptography algorithm test is a function specially developed for verifying the security foundation of a trusted platform. According to the specifications on trusted computing and cryptography algorithms, the system can execute correctness and performance tests on the encryption and signing algorithms used by the security chip and the trusted software stack. The test is mainly composed of pre-defined test cases and is executed in black-box form. For the SM2 and SM4 algorithms in TCM/TSM, the test cases are defined according to the specifications "Public Key Cryptographic Algorithm SM2 Based on Elliptic Curves" and "SM4 Block Cipher Algorithm" published by OSCCA. The system executes the tests by invoking interfaces to perform encryption, signing and hash computation on random messages and verifying the correctness of the results. For the RSA algorithms in TPM/TSS, including RSA-OAEP encryption and signing with PKCS#1 v1.5 or PSS (Probabilistic Signature Scheme) padding, the test cases are generated according to corresponding specifications like RSA PKCS#1 and the test vectors recommended by the PKCS-related specifications. These test cases are executed in the same way as when testing TCM/TSM.

In the aspect of randomness tests of random numbers, the system mainly tests the SHA1 and SM3 hash algorithms implemented in products according to Chinese and international specifications. After retrieving 1,000 random files, each 128 KB long, by invoking the product's interfaces, the test agents execute more than ten kinds of tests on these files, as follows:

Frequency Test. This test judges whether the probability distribution of the random output is uniform: it collects statistics of the 0 and 1 bits, whose distributions are ideally identical. (A minimal sketch of this test follows this list.)

Frequency Test in Block. This test is similar to the frequency test, but its target is a subset of the random string with a designated length rather than the whole string.

Runs Test. This test examines the runs of 0 and 1 in the target string. A run of 0 or 1 consists of several consecutive identical bits, and a run of length k includes k identical bits.

Longest Runs Test in Block. This test examines the longest run in an M-bit block. Its purpose is to judge whether the longest run in the target bit string conforms to that expected of an ideal random string.

Poker Test. This test divides the target string into binary substrings of length m (m is arbitrary, and there are 2^m possible substrings) and examines whether the counts of these substrings are largely the same.

Matrix Rank Test. This test examines the ranks of disjoint submatrices built from the target string, so as to check the linear dependence of fixed-length substrings.

Discrete Fourier Test. This test checks features of the fast discrete Fourier transform of the string. In detail, it checks the periodic features of the random string, which embody the differences between an ideal random string and the target string.

Universal Statistical Test. This test examines the number of bits between matching patterns, so as to check whether the target string can be significantly compressed without loss of information. If it can, the randomness of the target string is poor.

Linear Complexity Test. This test examines the length of a linear feedback shift register, so as to judge whether the target string is complex enough to be considered random. A random string is characterized by a relatively long feedback register; a relatively short one indicates relatively poor randomness.

Approximate Entropy Test. This test examines overlapping m-bit patterns, so as to compare the frequencies of overlapping blocks of two adjacent lengths (e.g., m and m + 1).

Cumulative Sums Test. This test examines the offset of the cumulative sum relative to 0, so as to check whether the cumulative sums of substrings of the target string are too large or too small compared to those of an ideal random string.
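As announced under the frequency test, here is a minimal sketch of the monobit statistic as defined in NIST SP 800-22 (on which such test suites draw): the normalized difference between the counts of ones and zeros should be small, and the derived p-value should exceed the chosen significance level (0.01 is customary; an assumption here, not a value from this book):

import math

def monobit_pvalue(bits):
    # bits: iterable of 0/1 values
    bits = list(bits)
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)     # +1 for each 1, -1 for each 0
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))    # pass if p-value >= 0.01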

6.4.4 Simulation of Security Chip and Protocol

Research on trusted computing technology is constantly emerging, and numerous new protocols and mechanisms are published. These new research achievements often add new functions to the current security chip or change its internal function logic. Therefore, we integrate simulation of security chips and protocols into our system, so as to verify the correctness and feasibility of these new achievements in software simulation and to test their performance under different environments and parameters. The results can provide valuable reference for further research.

In the aspect of security chip simulation, the system implements a simulation and management tool for security chips. The tool aims at providing scalable customization of TPM/TCM to researchers. To test the trustworthiness and performance of new functions, researchers can obtain a new simulator with new functions by implementing the new interfaces and corresponding functions and integrating them into the simulator through appropriate configuration of the management tool. In this way, researchers can add new functions to the security chip without changing or recompiling the original code; hence, they can focus on their innovations in new protocols and functions, and the efficiency of development and research is significantly promoted.

In the aspect of protocol simulation, the system implements tests of the security chip interfaces involved in protocols as well as analysis of protocol performance.

The first is test and evaluation of the security chip interfaces used in trusted computing protocols. Some interfaces of the security chip are used in trusted computing protocols, embodying the chip's function in the computation and communication of the protocols. The security chip often plays an important role in protocols, but its resources and computation capability are rather limited; hence, the design of these interfaces directly affects the security and performance of the protocols. Currently, we have implemented various new protocol schemes for DAA, PBA, trusted channels and trusted data distribution proposed by academia in recent years. Based on these implementations, we have verified the implementation feasibility of the interfaces in the protocols and evaluated the influence of these interfaces on other functions of the security chip.

The second is performance analysis of trusted computing protocols. As a relatively theoretical exercise, performance analysis of trusted computing protocols usually considers only the quantity of pure algebraic computations and cannot embody the complex factors that influence a protocol's performance in practical environments. To solve this problem, our system integrates all kinds of parameters that influence the implementation of trusted computing protocols, including the security level, the algebraic structure (such as the elliptic curve on which the protocol's algebraic computation runs), the basic cryptographic algorithms (such as encryption, signing and bilinear mappings) and the running environment (such as the number of processing cores). Researchers can select appropriate parameters and test and simulate protocol performance conveniently and effectively. The results provide a scientific basis and guidance for the design and implementation of practical protocols.

6.4.5 Promotion and Application

The system has been promoted and applied in Chinese domestic governmental departments and by vendors of trusted computing products. After several improvements and updates, the system was formally accepted by the Chinese supervision departments of cryptography-related products and by vendors of trusted computing products in 2009. As far as we know, it is the only practically applied test system for trusted computing platforms and cryptography algorithms in China.

Up to now, the system has worked well in testing TCM products from major chip vendors and secure computers from mainstream vendors in China. In these works, a series of problems, such as poor compliance and interoperability, have been detected by our system. Through feedback and communication with the vendors, most of the problems have been fixed in time. This process is of great importance for improving product quality and avoiding the negative influence of defects on product availability and security. Problems in the Chinese domestic trusted computing specifications have also been found during testing; for example, parts of the descriptions and regulations about key management, key certification, command audit, counters and locality were found to be incomplete or ambiguous. These findings provide a concrete basis for improving and updating the specifications.

Figure 6.16: Test results of one type of Chinese domestic TCM product.

Figure 6.16 presents the test results of one type of Chinese domestic TCM product and its corresponding TSM. The left graphs are test reports exported by the system in the default format; these reports record all original information about the test tasks and the execution results of each test case. From the reports, the errors can be summarized as shown in the right graph. The detected errors cover nearly all function modules of the TCM and TSM, and most errors are caused by implementations with poor compliance (function logic not compliant with the specifications) or incompleteness (some functions defined in the specifications are not implemented). The main errors are as follows:

Incomplete support for command parameters. For example, none of the TCM products in our earliest work supported all the parameters of Tspi_TCM_GetCapability() defined in the specification. Similarly, some vendors restrict the format or length of certain input parameters, which causes valid inputs to be rejected by the TCM.

Deviation in comprehending the specification. For example, due to deviations in comprehending the specification, some vendors confuse the usage of the "rgbExternalData" and "rgbValidationData" fields of the "pValidationData" parameter of Tspi_TCM_Quote() and swap their functions in design and implementation.

Poor robustness of functions. This kind of error influences the reliability of trusted computing applications. For example, the specification states that a "NULL" value of the parameter "rgbMemory" of Tspi_Context_FreeMemory() indicates that the memory space bound to the context should be released. But some TSM products cannot handle this case and even fall into memory errors and immediate crashes.

Ambiguous descriptions and definitions in the Chinese domestic trusted computing specifications have also been found in testing. Some of these problems are listed below.

In the aspect of key management, the specification specifies that in some situations a TSM key object can be used without any authorization data. This facilitates TSM users but brings potential security vulnerabilities. We found in our test work that some vendors do not obey this rule in their implementations, for the sake of better security.

In the aspect of locality, the specification does not define the format of the parameter "localityValue" of Tspi_PcrComposite_SetPcrLocality(), and different vendors adopt different formats.

In the aspect of command audit, the specification specifies that Tspi_TCM_GetAuditDigest() should return the "number of commands to be audited." This ambiguous description leads to different interpretations: some products generate only one audit record when the same command executes several times, while others generate multiple audit records for multiple executions of the same command.

6.5 Summary

Research on test and evaluation of trusted computing mainly focuses on compliance testing for the TPM/TCM specifications, analysis of trusted computing protocols and mechanisms, and certification and evaluation of products. In the aspect of compliance testing, current works typically model the security chip based on FSM theory, automatically generate test cases from the model and then perform the test work. In the aspect of analysis of protocols and mechanisms, current works often adopt formal methods, such as model checking, programming logic, belief logic and the logic of secure systems, to verify various security properties of DAA, authorization protocols and the process of establishing the chain of trust. In the aspect of product evaluation and certification, TCG has issued projects to evaluate TPM and TNC products according to the Common Criteria and has achieved comprehensive conclusions about product security. To meet the requirements of test and evaluation of trusted computing products, we have implemented a comprehensive test and analysis system for trusted computing platforms. The system integrates functions such as compliance testing for the TPM/TCM specifications, correctness and performance tests of cryptography algorithms, randomness tests of random numbers and analysis and simulation of trusted computing protocols. The system has already been put into practice and is widely accepted.

Test and evaluation of trusted computing is of great importance for the improvement of product quality, the advance of related technology, the healthy development of the industry and the enhancement of Chinese information security. But as mentioned in Chapter 1, current research on test and evaluation of trusted computing is relatively scarce and needs improvement in both coverage and depth. First, test and evaluation lags behind trusted computing technology itself: there are few effective methods to examine the security and reliability of the latest technology, such as dynamic roots of trust and ARM TrustZone. Second, existing methods focus on relatively simple products and mechanisms, which are components rather than whole computing platforms; research on the overall security of complex targets, such as a trusted computing platform or a trusted execution environment, remains blank. Third, products and mechanisms like the chain of trust lack corresponding specifications, so they can hardly be tested or analyzed.
