Chapter Eight

PROTECTING WHAT WE DO COLLECT

WHAT INTELLIGENCE AND SENSITIVE information the United States does choose to collect or store should be carefully protected from both the Insider Threat and the External Hack. Such protection requires new risk-management approaches to personnel vetting, a change in philosophy about classified networks, and adoption of best commercial practices for highly secure private sector networks.

Our comments in this chapter deal with personnel with security clearances and classified networks throughout the US Government and not just those in the Intelligence Community. We believe that this broad scope is necessary, and we note that previous reviews have been limited to the Intelligence Community. In general, we believe that the same standards applied to government employees with security clearances and IT networks with classified information should apply to private sector contractor personnel and networks dealing with Secret and Top Secret data.

A. Personnel Vetting and Security Clearances

RECOMMENDATION 37

We recommend that the US Government should move toward a system in which background investigations relating to the vetting of personnel for security clearance are performed solely by US Government employees or by a non-profit, private sector corporation.

RECOMMENDATION 38

We recommend that the vetting of personnel for access to classified information should be ongoing, rather than periodic. A standard of Personnel Continuous Monitoring should be adopted, incorporating data from Insider Threat programs and from commercially available sources, to note such things as changes in credit ratings or any arrests or court proceedings.

RECOMMENDATION 39

We recommend that security clearances should be more highly differentiated, including the creation of “administrative access” clearances that allow for support and information technology personnel to have the access they need without granting them unnecessary access to substantive policy or intelligence material.

RECOMMENDATION 40

We recommend that the US Government should institute a demonstration project in which personnel with security clearances would be given an Access Score, based upon the sensitivity of the information to which they have access and the number and sensitivity of Special Access Programs and Compartmented Material clearances they have. Such an Access Score should be periodically updated.

In the government as in other enterprises, vast stores of information are growing in databases. Even one unreliable individual with access to parts of a database may be capable of causing incalculable damage by compromising sensitive information. Unfortunately, almost every agency with sensitive information has experienced a major incident in which a disloyal employee caused significant damage by revealing sensitive data directly or indirectly to another government or to others who would do us harm. All of the individuals involved in these cases have committed criminal acts after having been vetted by the current security clearance process and, in several well-known cases, after having been polygraphed. Although parts of the Intelligence Community have improved their personnel vetting systems and they may perform well, the general picture throughout the US Government is of an inadequate personnel vetting system.

We believe that the current security clearance personnel vetting practices of most federal departments and agencies are expensive and time-consuming, and that they may not reliably detect the potential for abuse in a timely manner.

The security clearance system should be designed to have an extremely low false-positive rate (granting or continuing a clearance when one should have been denied). Access to sensitive information should be recorded in more detail (e.g., who has access to what and when). The nature and degree of vetting procedures should be adjusted periodically and more closely tied to the sensitivity of the information to which access is granted.

1. How the System Works Now

There are essentially three levels of security clearance (Secret, Top Secret, and Top Secret/SCI). For those obtaining any level of security clearance, the fundamentals of the personnel vetting system are similar. The applicant is asked to provide the names of a score or more of contacts. An investigator attempts to meet with those people whose names have been provided by the applicant. In many agencies, the investigator is often an employee of a private sector company that is paid by the number of investigations it completes.

If the investigators are unable to meet with the contacts in person, they may in some cases accept a telephone interview. In many agencies, the investigator begins the discussion with all contacts by informing them that anything they say about the applicant can be seen by the applicant because of the requirements of privacy laws. Not surprisingly, very few contacts suggested by the applicant provide derogatory information, especially because they know that their remarks may be disclosed to their friend or acquaintance.

Investigators are required to develop interviewees in addition to those suggested by the applicant. Often the investigator will attempt to inquire of neighbors, those living in the next apartment or house. Increasingly, however, neighbors may not know each other well. Online “friends” sometimes have a better idea about someone than the people living in physical proximity.

As part of an initial security review, investigators may also access some publicly available and commercially available databases. Such database reviews are used largely to corroborate information supplied by the applicant on a lengthy questionnaire. Agencies may require a financial disclosure form to be completed, revealing the financial health and holdings of an applicant (although often those declarations are not verified). Some agencies require a polygraph for Top Secret/SCI clearances. Once a clearance has been granted, Secret-level clearances are often updated only once a decade. Top Secret/SCI clearances may be updated every five years. Random testing for drug use and random polygraphing may occur in between clearance updates.

In many agencies, the current personnel vetting system does not do well in detecting changes in a vetted individual’s status after a security clearance has been granted. In most agencies, the security clearance program office might not know if an employee between vettings had just become involved in a bankruptcy, a Driving Under the Influence arrest, a trip to a potentially hostile country, or a conversion to a radical cause such as al-Qa’ida.

Once granted a certain level of clearance because of a need to do part of their jobs, employees are often in a position to read other material at that classification, regardless of its relevance to their job. However, some sensitive projects or sensitive intelligence collection programs (“compartments”) have dissemination controls (“bigot lists”). Sometimes access to these programs may be granted based solely on job-related needs and may not trigger an updated or closer review of personnel background material.

As the system works today, the use of special compartmented access programs limiting access to data is often occasioned by the means employed to collect the information, not by the content of the information, the target of the collection, or the damage that could be done by unauthorized disclosure of content or target.

2. How the System Might Be Improved

A series of broad changes could improve the efficacy of the personnel vetting system.

First, and consistent with practical constraints, agencies and departments should move in the direction of reducing or terminating the use of “for-profit” corporations to conduct personnel investigations. When a company is paid upon completion of a case, there is a perverse incentive to complete investigations quickly. For those agencies that cannot do vetting with their own government employee staff, consideration should be given to the creation of a not-for-profit entity modeled on the Federally Funded Research and Development Centers (FFRDC), such as RAND and MITRE, to conduct background investigations and to improve the methodology for doing so. We recommend that a feasibility study be launched in the very near future.

Second, security clearance levels should be further differentiated so that administrative and technical staff who do not require access to the substance of data on a network are given a restricted level of access and security clearance that allows them to do their job, but that does not expose them to sensitive material.

Third, information should be given more restricted handling based not only on how it is collected, but also on the damage that could be created by its compromise.

Fourth, departments and agencies should institute a Work-Related Access approach to the dissemination of sensitive, classified information. While not diminishing the sharing of information between and among agencies, the government should seek to restrict distribution of data to personnel whose jobs actually require access to the information. Typically, analysts working on Africa do not need to read sensitive information about Latin America. Yet in today’s system of information-sharing, such “interesting but not essential” data is widely distributed to people who do not really need it.

Implementing this sort of Work-Related Access will necessitate greater use of Information Rights Management (IRM) software. Greater use means actually deploying the software widely, not just procuring it. It may also require a significant improvement in the state of the art of such software, as discussed later in this chapter.

Fifth, we believe that after being granted their initial clearances, all personnel with access to classified information should be included in a Personnel Continuous Monitoring Program (PCMP). The PCMP would access both internally available and commercially available information, such as credit scores, court judgments, traffic violations, and other arrests. The PCMP would include the use of anomaly information from Insider Threat software. When any of these sources of information raised a level of concern, the individual involved would be re-interviewed or subject to further review, within existing employee rights and guidelines.
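The triggering logic of such a PCMP could be sketched as a simple weighted-indicator check. This is purely illustrative: the indicator names, weights, and threshold below are hypothetical assumptions, not drawn from any actual program.

```python
# Illustrative PCMP trigger logic. All indicator names, weights,
# and the threshold are hypothetical.

REVIEW_THRESHOLD = 2  # weighted concern points that trigger further review

# Hypothetical weights for externally and internally sourced indicators.
INDICATOR_WEIGHTS = {
    "credit_score_drop": 1,       # from commercial credit data
    "court_judgment": 1,          # from public court records
    "arrest": 2,                  # from public arrest records
    "insider_threat_anomaly": 2,  # from Insider Threat software
}

def review_required(indicators):
    """Return True if the combined weight of observed indicators meets
    the threshold for a re-interview or further review. Any single
    low-weight indicator alone does not trigger review."""
    score = sum(INDICATOR_WEIGHTS.get(i, 0) for i in indicators)
    return score >= REVIEW_THRESHOLD
```

As in the text, a flagged individual would then be re-interviewed or reviewed within existing employee rights and guidelines; the code only models the flagging step.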

Sixth, ongoing security clearance vetting of individuals should use a risk-management approach and depend upon the sensitivity and quantity of the programs and information to which they are given access.

We recommend a pilot program of Access Scoring and additional screening for individuals with high scores. Everyone with a security clearance might, for example, be given a regularly updated Access Score, which would vary depending upon the number of special access programs or compartments they are cleared to be in, the sensitivity of the content of those compartments, and the damage that would be done by the compromise of that information.

It would be important that the Access Score be derived not only from the accesses granted by the individual’s parent agency, and not only from the list of intelligence programs for which the individual was accredited, but also from all of the restricted programs to which that individual has access from any department, including the Departments of Defense, Energy, Homeland Security, and others.
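As an illustration only, an Access Score of this kind might be computed by summing sensitivity weights across every program an individual can access, from whichever department granted it. The weights and the tier threshold here are invented for the sketch.

```python
# Hypothetical Access Score sketch. Sensitivity labels, weights, and
# the Additional Monitoring threshold are illustrative assumptions.

SENSITIVITY_WEIGHT = {"low": 1, "medium": 3, "high": 5}

def access_score(accesses):
    """accesses: list of (program_name, sensitivity) pairs aggregated
    from ALL granting departments, not just the parent agency."""
    return sum(SENSITIVITY_WEIGHT[sensitivity] for _, sensitivity in accesses)

def monitoring_tier(score, additional_threshold=10):
    """Map a score to a vetting tier; the threshold is illustrative.
    Higher tiers would receive more frequent and intrusive vetting."""
    if score >= additional_threshold:
        return "additional_monitoring"
    return "standard_pcmp"
```

The design point the sketch captures is that the score is a function of the whole cross-department access set, so a person with many high-sensitivity compartments lands in a more heavily monitored tier than one with a single routine clearance.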

The greater an individual’s Access Score, the more background vetting he or she would be given. Higher scores should require more frequent vetting than the standard five-year (Top Secret) or 10-year (Secret) interval. At a certain Access Score level, personnel should be entered into an Additional Monitoring Program. We recognize that such a program could be seen by some as an infringement on the privacy of federal employees and contractors who choose, on a voluntary basis, to work with highly sensitive information in order to defend our nation. But employment in government jobs with access to special intelligence or special classified programs is not a right. Permission to occupy positions of great trust and responsibility is already granted with conditions, including degrees of loss of privacy. In our view, there should be a sliding scale of such conditions depending on the number and sensitivity of the security accesses provided.

We believe that those with the greatest amount of access to sensitive programs and information should be subject to Additional Monitoring, in addition to the PCMP discussed earlier. The routine PCMP review would draw in data on an ongoing basis from commercially available data sources, such as on finances, court proceedings, and driving activity of the sort that is now available to credit scoring and auto insurance companies. Government-provided information might also be added to the database, such as publicly available information about arrests and data about foreign travel now collected by Customs and Border Protection.

Those with extremely high Access Scores might be asked to grant the government permission for review under a more intrusive Additional Monitoring Program, including random observation of the metadata related to their personal home telephone calls, e-mails, use of online social media, and web surfing. Auditing and verification of their Financial Disclosure Forms might also occur.

A data analytics program would sift through the information provided by the Additional Monitoring Program on an ongoing basis to determine whether there are correlations that indicate the advisability of additional review. Usually, no single piece of information obtained by an Additional Monitoring Program would be determinative of an individual’s suitability for special access. Such a review could involve interviewing the individual to obtain an explanation, contacting the individual’s supervisor, or initiating more intrusive vetting. For example, a bankruptcy and a DUI arrest might indicate that the individual is under stress, warranting a review of his or her suitability for sensitive program access. A failure to report a foreign trip as required might trigger a further investigation. Employees whose outside-of-work activities show up in a big data analytics scan as possibly being of concern might have their use of government computers and databases placed under additional scrutiny. We emphasize that employees with special access must not be stripped of their rights or subjected to Kafkaesque proceedings. For employees to be willing to participate in a Continuous Monitoring Program, they must know that they will have an opportunity to explain actions flagged by data review.

We have noted that in the wake of recent security violations, some agencies are considering the more extensive use of polygraphy. There are widely varying views about the efficacy of polygraphing, but there can be no disputing that it is not a continuous process; it cannot reveal events that occur after its use. The Personnel Continuous Monitoring Program, with its ongoing ingestion of information from commercial and government databases, augmented by data analytics, is more likely to reveal a change in the status of an employee between programmed security clearance reviews.

Finally, the security clearance vetting process should also protect the rights of those with access to special programs and information. The President should also ensure that security clearance status not be affected by use of Whistle-Blower, Inspector General, or Congressional Oversight programs (see Appendix D).

About five million people now have active security clearances granted by some arm of the US Government, of which almost 1.5 million have Top Secret clearance. Although we do not have the capability to determine if those numbers are excessive, they certainly seem high. We believe that an interagency committee, representing not just the Intelligence Community, should review in detail why so many personnel require clearances and examine whether there are ways to reduce the total. Such a study may find that many of those with Secret-level clearances could do with a more limited form of access.

Personnel with Security Clearances (10/12)178

                       Confidential/Secret    Top Secret
Government Employees   2,757,333              791,200
Contractors            582,524                483,263
Other                  167,925                135,506
Subtotal               3,507,782              1,409,969
Total                  4,917,751

Once a clearance has been granted, very few people have it revoked for cause. Personnel lose clearances mainly because they retire, otherwise leave government service, or change jobs. Indeed, many who leave government service manage to maintain their clearances as part-time advisors or by working with contractors. The strikingly small number of revocations may mean that the initial vetting process in all agencies does a good job and that very few people become security risks after they are initially cleared. But the numbers suggest to us that the re-vetting process, which usually occurs every five years, may in some agencies not be as rigorous as it should be. Sometimes the initial vetting is assumed to be correct and the only things checked are the “new facts” that have arisen in the preceding five years. Sometimes the reviews that are supposed to take place every five years are delayed. And many agencies have no program to obtain some kinds of important information between security updates.

Percent of Personnel Whose Security Clearances Were Revoked (FY 12)179

CIA     0.4
FBI     0.1
NGA     0.3
NRO     0.5
NSA     0.3
State   0.1

3. Information Sharing

RECOMMENDATION 41

We recommend that the “need-to-share” or “need-to-know” models should be replaced with a Work-Related Access model, which would ensure that all personnel whose role requires access to specific information have such access, without making the data more generally available to cleared personnel who are merely interested.

Classified information should be shared only with those who genuinely need to know. Beyond the use of compartments, however, the vast bulk of classified information is broadly available to people with security clearances. Analyses of the failure to prevent the September 11th, 2001 attacks concluded that information about those individuals involved in the plot had not been shared appropriately between and among agencies. Although some of that lack of sharing reflected intentional, high-level decisions, other data was not made broadly available because of a system that made it difficult to disseminate some kinds of information across agencies. Thus, after the attacks, the mantra “Need to Share” replaced the previous concept of “Need to Know.”

In some contexts, that new approach may have gone too far or been too widely misunderstood. The “Need to Share” called for the distribution of relevant information to personnel with a job/task defined requirement for such information. It did not call for the profligate distribution of classified information to anyone with a security clearance and an interest in reading the information.

The problem with the “need-to-share” principle is that it gives rise to a multitude of other risks. Consistent with the goal of risk management, the appropriate guideline is that information should be shared only with those who need to know. There is no good reason to proliferate the number of people with whom information is shared if some or many of those people do not need or use that information in their work. The principle of “need to share” can endanger privacy, heighten the risk of abuse, endanger public trust, and increase insider threats.

To be sure, the matching of one agency’s records against another agency’s records—for example, comparing fingerprints collected off of bomb fragments in Afghanistan to fingerprints culled at US border crossings—is one of the most important information tools we have in combating terrorism. Such sharing must continue, but can (and often does) take place on a machine-to-machine basis with strict control on which human beings can obtain access to the data.
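One way such machine-to-machine matching can keep humans away from the underlying records is for each side to exchange salted digests rather than raw data, so that only the intersection ever surfaces for review. The sketch below is a simplified assumption of that idea; real biometric matching is far more involved, and the record and salt formats here are invented.

```python
import hashlib

def _digest(record, salt):
    """Hash a record identifier with a shared salt so that agencies can
    exchange digests instead of raw data. Formats are illustrative."""
    return hashlib.sha256((salt + record).encode("utf-8")).hexdigest()

def machine_match(agency_a_records, agency_b_records, salt="shared-secret"):
    """Return only the record identifiers present in BOTH sets.
    Non-matching records never leave either agency's system, and no
    human sees anything except the matches."""
    a = {_digest(r, salt): r for r in agency_a_records}
    b = {_digest(r, salt) for r in agency_b_records}
    return sorted(a[d] for d in a if d in b)
```

The design point is access control: the comparison happens entirely in software, and strict controls can then govern which human beings may see the (much smaller) matched set.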

To its credit, the Intelligence Community has been taking steps to restrict the number of people who have access to confidential or classified information. We applaud these steps. We recommend that seemingly compelling arguments about the importance of information-sharing should be qualified by a recognition that information should not be shared with those who do not have a genuine need to know.

B. Network Security180

RECOMMENDATION 42

We recommend that the Government networks carrying Secret and higher classification information should use the best available cyber security hardware, software, and procedural protections against both external and internal threats. The National Security Advisor and the Director of the Office of Management and Budget should annually report to the President on the implementation of this standard. All networks carrying classified data, including those in contractor corporations, should be subject to a Network Continuous Monitoring Program, similar to the EINSTEIN 3 and TUTELAGE programs, to record network traffic for real time and subsequent review to detect anomalous activity, malicious actions, and data breaches.

RECOMMENDATION 43

We recommend that the President’s prior directions to improve the security of classified networks, Executive Order 13587, should be fully implemented as soon as possible.

RECOMMENDATION 44

We recommend that the National Security Council Principals Committee should annually meet to review the state of security of US Government networks carrying classified information, programs to improve such security, and evolving threats to such networks. An interagency “Red Team” should report annually to the Principals with an independent, “second opinion” on the state of security of the classified information networks.

RECOMMENDATION 45

We recommend that all US agencies and departments with classified information should expand their use of software, hardware, and procedures that limit access to documents and data to those specifically authorized to have access to them. The US Government should fund the development of, procure, and widely use on classified networks improved Information Rights Management software to control the dissemination of classified data in a way that provides greater restrictions on access and use, as well as an audit trail of such use.

Information technology (IT) has become so central to the functioning of the government in general and national security in particular that policy officials need to be conversant with the technology. No longer can senior officials relegate concerns about IT networks to management or administrative staff. Policy officials are ultimately responsible for the IT networks of their organizations. They need to understand the systems and issues raised by technologists. Toward that end, technologists should be part of more policy, decision-making, and oversight processes. Similarly, national security policy officials need to take the time to understand in detail how the various components of the Intelligence Community work, and especially how their collection programs operate.

The security of classified networks is, in the age of cyber war, one of the highest priorities in national security. Nonetheless, the status of security improvement and the state of the cyber defenses of our sensitive networks have not been topics for regular review by senior interagency policy officials. Department and agency leaders have also had little way to verify if the reports of their subordinates concerning the security of their classified networks are entirely accurate or complete. We recommend that there be an annual review by NSC Principals of the security of classified networks and the implementation of programmed upgrades. To inform the principals’ discussion, we also recommend that the staffs of OMB and NSC lead a process to identify issues and potential deficiencies. We also suggest that a “Red Team” be created to provide a second opinion to Principals on the security vulnerabilities of all classified networks.

The security of government networks carrying classified information has traditionally been outward looking. It was assumed that anyone who had access to the network had been subjected to extensive vetting and was therefore trustworthy and reliable.

There are two flaws in that thinking. First, as has been demonstrated, some people who have been given Top Secret/SCI clearances are not trustworthy. Second, it may be possible for unauthorized individuals to gain access to the classified networks and to assume the identity of an authorized user. The government’s classified networks require immediate internal hardening.

Beyond measures designed to control access to data on networks, there is a need to increase the security of the classified networks in general. Many of the US Government’s networks would benefit from a major technological refresh, to use newer and less vulnerable versions of operating systems, to adopt newer security software proven in the private sector, and to re-architect network designs to employ such improvements as Thin Client and air-gapped approaches.

Despite what some believe is the inherent security of classified networks, as the so-called Buckshot Yankee incident demonstrated, it is possible for foreign powers to penetrate US networks carrying classified information. Just as some foreign powers regularly attempt to penetrate private sector networks in the US to steal intellectual property and research, others are engaged in frequent attempts to penetrate US networks with secret data.

To improve the security of classified networks, we believe that such networks should be given at least as much internal and external security as the most secure, unclassified networks in the private sector. Although many US corporations have inadequate network security, some in financial services have achieved a high level of assurance through the use of a risk management approach. State-of-the-art cyber security products used in private sector companies are employed on classified US Government networks less often than we would have expected.

We believe that this inadequacy can be explained by two factors: (1) classified network administrators have traditionally focused on perimeter network defenses and (2) the procurement process in the government is too lengthy and too focused on large-scale system integrator contracts that do not easily allow for the agile adoption of new security products that keep up with the ever-changing threat. In our view, every department and agency’s IT security budget and procurement processes ought to include set-aside funding and procedures for the rapid acquisition and installation of newly developed security products that address recently appearing threats. These systems should be reviewed, and procurement decisions made, through a process that weighs cost-benefit analysis, cost-effectiveness, and risk management.

1. Executive Order 13587

In recognition of the need to improve security on government networks with classified data, President Obama issued Executive Order 13587 to improve the security of classified networks against the Insider Threat. We have found that the implementation of that directive has been at best uneven and far too slow. Every day that it remains unimplemented, sensitive data, and therefore potentially lives, are at risk. Interagency implementation monitoring was not performed at a sufficiently high level in OMB or the NSS. The Administration did not direct the re-programming of adequate funds. Officials who were tardy in compliance were not held accountable. No central staff was created to enforce implementation or share best practices and lessons learned.

The implementation of Executive Order 13587 is in marked contrast to the enforcement of compliance with a somewhat similar effort, the conversion of government networks for Y2K. The Y2K software upgrades were carried out under the aegis of Executive Order 13073, issued only 22 months before the implementation deadline. That order established an Interagency Council co-chaired by an Assistant to the President and by the Director of OMB. It required quarterly reports to the President.

We believe that the implementation of Executive Order 13587 should be greatly accelerated, that deadlines should be moved up and enforced, and that adequate funding should be made available within agency budget ceilings. A Deputy Assistant to the President might be directed to enforce implementation, and the interagency process might be co-led by the Deputy Director of OMB.

In addition to the Insider Threat measures discussed above, we believe that government classified networks could have their overall security improved by, among other steps, priority implementation of the following:

•   Network Continuous Monitoring techniques on all classified networks, similar to the EINSTEIN-TUTELAGE Program now being implemented on US Government unclassified networks and the systems of certain private sector, critical infrastructure companies;

•   A Security Operations Center (SOC) with real-time visibility on all classified US Government networks. There are now many SOCs, but no one place where fusion and total visibility take place; and

•   More severe limits on the movement of data from unclassified to classified networks. Although such data being uploaded is scanned today, the inspection is unlikely to detect a Zero Day threat (i.e., malicious software that has not been seen before).

2. Physical and Logical Separation

We believe that the most cost-effective efforts to enhance the security of IT networks carrying classified data are likely to be those that create greater physical and logical separation of data, through network segmentation, encryption, identity access management, access control to data, limitation of data storage on clients, and “air-gapping.” Among the measures we suggest be more carefully considered are:

•   The creation of Project Enclaves on networks, with firewalls, access control lists, and multi-factor (including biometric) authentication required for entry.

•   Project-based encryption for data at rest and in use. Today, most data at rest on classified networks is not encrypted (although the networks and the data in transit are). Encrypting data, whether at rest or in use, and linking that encryption with Identity Access Management (IAM) or IRM software would prevent reading by those not authorized even if they do access the data.

•   IRM. To determine and limit who has access to data in a Project Based Encryption file, agencies should be encouraged to consider the use of IRM software that specifies what groups or individuals may read, or forward, or edit, or copy, or print, or download a document. IRM is known by other terms, such as Digital Rights Management, in some agencies. The IRM software should be linked to a multi-factor Identity Access Management system so that administrative and technical staff, such as System Administrators, and others cannot access the content of the data.

•   Separation of Networks. Networks can be physically separated to varying degrees, from using separate colors on a fiber to using different fibers, to using different physical paths. In true “air-gapping,” a network shares no physical devices whatsoever with other networks. In logical separation, networks may be maintained separate by firewalls, access controls, identity access management systems, and encryption. We believe that every relevant agency should conduct a review using cost-benefit analysis, and risk-management principles to determine if it would make sense to achieve greater security by further physical and logical separation of networks carrying data of highly sensitive programs.
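A minimal sketch of how IRM-style rights lists might gate actions on a document, assuming hypothetical document names, users, and actions. The point the sketch captures is that a system administrator, absent from every rights list, can manage the platform without being able to read content.

```python
# Illustrative IRM rights model. Document names, users, and actions
# are hypothetical; a real system would link these checks to a
# multi-factor Identity Access Management system.

DOCUMENT_RIGHTS = {
    "project-x-report": {
        "read":  {"analyst-alice", "analyst-bob"},
        "edit":  {"analyst-alice"},
        "print": set(),  # nobody may print this document
    },
}

def can(user, action, document):
    """True only if the user appears on the document's rights list for
    that specific action. Anyone not listed (including system
    administrators) is denied by default."""
    rights = DOCUMENT_RIGHTS.get(document, {})
    return user in rights.get(action, set())
```

The deny-by-default structure is the essential design choice: access to content is an explicit grant per document and per action, never a side effect of holding a clearance or an administrative role.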

We have found that there are few choices, and perhaps insufficiently robust products, today among Information Rights Management software and among Insider Threat Anomaly Detection software. We believe that the government should fast-track the development of Next-Generation IRM and Next-Generation Insider Threat software, waiving the normal research and procurement rules and timetables. The development of NextGen software in these areas should not, however, be an excuse for failure to deploy the software that is available now.

Fortunately, the government itself may have developed the basis for more robust IRM software. The National Institute of Standards and Technology (NIST) of the Department of Commerce has created an open source platform for Next-Generation IRM software. Private sector developers should be granted access to that platform quickly, and should also be encouraged to develop their own systems.

The NIST open source software, like other software now being used in some agencies, prevents the downloading of sensitive data from central servers. Analysts may access the data and employ it, but may not transfer it. With the NIST software, the user sees an image of the data, but is unable to download it to a client and then to a thumb drive, CD, or other media. In general, we believe that sensitive data should reside only on servers and not on clients.

IRM systems and "data-on-server-only" policies allow for auditing of data access, but they generally presume a data-tagging system applied when data is first ingested into a network or system. We believe that additional work is needed to make that phase of data control less onerous, complex, and time-consuming. Government-sponsored development or procurement would speed solutions to those data-tagging problems.
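The interaction between ingest-time tagging and IRM-style, per-action permissions can be sketched as follows. This is a hypothetical illustration only (the tag fields, roles, and policy structure are invented for the example): each document is tagged once at ingest, and the IRM policy then maps a user's project and role to the specific actions permitted.

```python
# Actions an IRM policy can grant or withhold per document.
ACTIONS = {"read", "edit", "forward", "copy", "print", "download"}

def ingest(store, doc_id, content, classification, projects):
    # Tagging happens once, when data enters the network.
    store[doc_id] = {
        "content": content,
        "tags": {"classification": classification, "projects": set(projects)},
    }

def allowed(policy, user, doc, action):
    # The IRM policy maps (project, role) pairs to permitted actions;
    # a user accumulates grants only for projects shared with the document.
    grants = set()
    for project in doc["tags"]["projects"] & set(user["projects"]):
        grants |= policy.get((project, user["role"]), set())
    return action in grants

store = {}
ingest(store, "rpt-1", "report body", "TS", ["PROJECT-A"])

policy = {
    ("PROJECT-A", "analyst"): {"read", "edit"},
    # Administrators keep the network running but get no content actions.
    ("PROJECT-A", "sysadmin"): set(),
}

analyst = {"role": "analyst", "projects": ["PROJECT-A"]}
admin = {"role": "sysadmin", "projects": ["PROJECT-A"]}
```

Note that every decision in `allowed` depends on the tags attached at ingest; if tagging is skipped or wrong, no later control can compensate, which is why the report stresses making that phase less onerous.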

NSA, among others, is returning to the Thin Client architecture, which many agencies abandoned 15–20 years ago in favor of cheaper Commercial Off-The-Shelf (COTS) models. In a Thin Client architecture, the user may employ any screen on the network after properly authenticating. The screens, however, are "dumb terminals" with little software loaded on them. All applications and data are stored on servers, which are easier to secure and monitor than large numbers of distributed clients. The Thin Client architecture is, we believe, a more secure approach for classified networks and should be more widely used.

C. Cost-Benefit Analysis and Risk Management

RECOMMENDATION 46

We recommend the use of cost-benefit analysis and risk-management approaches, both prospective and retrospective, to orient judgments about personnel security and network security measures.

In our statement of principles, we have emphasized that in many domains, public officials rely on a careful analysis of both costs and benefits. In our view, both prospective and retrospective analysis have important roles to play in the domain under discussion, though they also present distinctive challenges, above all because of limits in available knowledge and challenges in quantifying certain variables. In particular, personnel security and network security measures should be subject to careful analysis of both benefits and costs (to the extent feasible).

Monetary costs certainly matter; public and private resources are limited. When new security procedures are put in place—for example, to reduce insider threats—the cost may well be ascertainable. It may be possible to identify a range, with upper and lower bounds. But the benefits of security procedures are likely to be more challenging to specify. It remains difficult, even today, to quantify the damage done by the recent leaks of NSA material. In principle, the question is the magnitude of the harm that is averted by new security procedures. Because those procedures may discourage insider threats from materializing, it will not be feasible to identify some averted harms.

Even so, some analysis should be possible. For example, officials should be able to assess the extent to which new security procedures help detect behavior that displays warning signs. Retrospective analysis can improve judgments by showing what is working and what is not. When detailed actuarial data are unavailable, risk-management approaches generally suggest hedging strategies for investment in preventive measures. That approach, along with breakeven analysis,181 may be necessary when considering risks that have never materialized in the past.
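Breakeven analysis sidesteps the unquantifiable-benefit problem by inverting the question: rather than estimating the expected harm averted, one asks how likely an averted incident would have to be for the measure to pay for itself. A worked example, with entirely hypothetical figures:

```python
def breakeven_probability(annual_cost, harm_if_incident):
    """Annual incident probability at which the expected harm averted
    by a security measure equals the measure's annual cost."""
    return annual_cost / harm_if_incident

# Hypothetical figures: a monitoring program costing $50 million per
# year, weighed against an incident assumed to cause $10 billion in
# damage if it occurs.
p = breakeven_probability(50e6, 10e9)

# If decision-makers judge the true annual probability of such an
# incident to exceed p (here, 0.5 percent), the measure passes the
# breakeven test even though the benefit was never estimated directly.
```

The calculation does not remove judgment from the decision; it relocates judgment to a single, more tractable question about probability, which is the essence of the approach recommended in OMB Circular A-4.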

 

 

178 Office of the Director of National Intelligence, 2012 Report on Security Clearance Determinations (January 2013), p. 3, Table 1, available at www.fas.org/sgp/othergov/intel/clear-2012.pdf.

179 Office of the Director of National Intelligence, 2012 Report on Security Clearance Determinations (January 2013), p. 7, Table 5, available at www.fas.org/sgp/othergov/intel/clear-2012.pdf.

180 Michael Morell affirmatively recused himself from Review Group discussions of network security to mitigate the insider threat due to ongoing business interests.

181 See OMB Circular A-4.
