Chapter 6. Trusting Users

It’s tempting to conflate user trust with device trust. Security-conscious organizations might deploy X.509 certificates to users’ devices to gain stronger credentials than passwords provide. One could say that the device certificate strongly identifies the user, but does it? How do we know that the intended user is actually at the keyboard? Perhaps they left their device unlocked and unattended?

Conflating user identity with device identity also runs into problems when users have multiple devices, which is increasingly becoming the norm. Credentials need to be copied between several devices, putting them at increased risk of exposure. Devices might need different credentials based on their capabilities. In networks that have kiosks, this problem becomes even more difficult.

Zero trust networks identify and trust users separately from devices. Sometimes identifying a user will use the same technology that is used to identify devices, but we must be clear that these are two separate credentials.

This chapter will explore what it means to identify a user and store their identity. We will discuss when and how to authenticate users. User trust is often stronger when multiple people are involved, so we will discuss how to create group trust and how to build a culture of security.

Identity Authority

Every user has an identity, which represents how they are known in a larger community. In the case of a networked system, the identity of a user is how they are recognized in that system.

Given the large number of individuals in the world, identifying a user can be a surprisingly hard problem. Let’s explore two types of identity:

  • Informal identity
  • Authoritative identity

Informal identity is how groups self-assemble identity. Consider a real-world situation where you meet someone. Based on how they look and act, you can build up an identity for that person. When you meet them later, you can reasonably assume that they are the same person based on these physical characteristics. You might even be able to identify them remotely—for example, by hearing their voice.

Informal identity is used in computer systems. Pseudonymous accounts—accounts that are not associated with one’s real-world name—are common in online communities. While the actual identity of an individual is not necessarily known in these communities, through repeated interactions an informal identity is created.

Informal identity works in small groups, where trust between individuals is high and the risks are relatively low. This type of identity has clear weaknesses when the stakes are higher:

  • One can manufacture a fictitious identity.
  • One can claim the identity of another person.
  • One can create several identities.
  • Multiple individuals can share a single identity.

When a stronger form of identity is required, an authority needs to create authoritative identity credentials for individuals. In the real world, this authority often falls to governments. Government-issued IDs (e.g., a driver’s license or passport) are distributed to individuals to represent their identity to others. For low-risk situations, these IDs alone are sufficient proof of one’s identity. However, for higher risk situations, cross-checking the credentials against the government database provides a better guarantee.

Computer systems often need centralized authority for user identity as well. Like in the real world, users are granted credentials (of varying strength) which identify them in the system. Based on the degree of risk, cross-checking the credentials against a centralized database may be desired. We will discuss how these systems should function later.

Credentials can be lost or stolen, so it is important that an identity authority have mechanisms for individuals to regain control of their identity. In the case of government-issued identification, a person often needs to present other identifying information (e.g., a birth certificate or fingerprint) to a government authority to have their ID reissued. Computer systems similarly need mechanisms for a user to regain control of their identity in the case of lost or stolen credentials. These systems often require presenting another form of verification, say a recovery code or alternative authentication credential. The choice of required material to reassert one’s identity can have security implications which we will discuss later.

Bootstrapping Identity in a Private System

Storing and authenticating user identity is one thing, but how do you generate the identity to begin with? Humans interacting with computer systems need a way to digitally represent their identity, and we seek to bind that digital representation as tightly to the real-world human as possible.

The genesis of a digital identity, and its initial pairing to a human, is a very sensitive operation. Controls to authenticate the human outside of your digital system must be strong in order to prevent an attacker from masquerading as a new employee, for instance. Similar controls might also be exercised for account recovery procedures where the user is unable to provide their current credentials.

Attacking Identity Recovery Systems

Users occasionally misplace or forget authentication material such as passwords or smart cards. To recover the factor (i.e., reset the password), the user must be authenticated by alternative and sometimes nontraditional means. Attacks on such systems are frequent and successful. For example, in 2012, a popular journalist’s Amazon account was broken into, and the attacker was able to recover the last four digits of the most recent credit card used. With this information, the attacker called Apple support and “proved” his/her identity using the recovered number. Be sure to carefully evaluate such reset processes—“secret” information is often less secret than it appears.

Given the sensitivity of this operation, it is important to put good thought and strong policy around how it is managed. It is essentially secure introduction for humans, and the good news is, we know how to do that pretty well!

Government-Issued Identification

It probably comes as no surprise that one of the primary recommendations for accomplishing human authentication is through the use of government-issued identification. After all, human authentication is precisely what such identification was designed for in the first place!

In some implementations, it may even be desirable to request multiple forms of ID, raising the bar for potential forgers/imposters. It goes without saying that staff must be properly trained in validating these IDs, lest the controls be easily circumvented.

Nothing Beats Meatspace

Despite our best efforts, human-based authentication schemes remain stronger than their digital counterparts. It’s always a good idea to bootstrap a human’s new digital identity in person. Email or other “blind” introductions are heavily discouraged. For instance, shipping a device configured to trust the user on first use (sometimes referred to as TOFU) is not uncommon. However, this method suffers from physical weakness since the package is vulnerable to interception or redirection.

Oftentimes, the creation of the digital identity is preceded by a lengthy human process, such as a series of interviews or the completion of a business contract. The result is that the individual has been previously exposed to already-trusted individuals who have learned some of his/her qualities along the way. This knowledge can be leveraged for further human-based authentication, as shown in Figure 6-1.

Figure 6-1. A trusted administrator relies on a trusted employee and a valid ID to add a new user to an inventory system

For instance, a hiring manager is in a good position to escort a new hire to helpdesk for human authentication, since the hiring manager is presumably already familiar with the individual and can attest to their identity. While this would be a strong signal of trust, just like anything else in a zero trust network, it should not be the only method of authentication.

Expectations and Stars

There are usually many pieces of information available prior to bootstrapping a digital identity. It is desirable to use as many pieces of information as is reasonable to assert that all of the stars line up as expected. These expectations are similar to ones set in a typical zero trust network; they are simply accrued and enforced by humans.

These expectations can range from the language(s) they speak to the home address printed on their ID, with many other creative examples in between. A thorough company may choose to even use information learned through a background check to set real-world expectations. Humans use methods like this every day to authenticate each other (both casually and officially), and as a result, these methods are mature and reliable.

Storing Identity

Since we need to bridge identity from the physical world to the virtual world, identity must be transformed into bits. These bits are highly sensitive and oftentimes need to be stored permanently. Therefore, we will discuss how to store this data to ensure its safety.

User Directories

To trust users, systems typically need centralized records of those users. One’s presence in such a directory is the basis on which all future authentication will occur. Having all this highly sensitive data stored centrally is a challenge which unfortunately cannot be avoided.

A zero trust network makes use of rich user data to make better authentication decisions. Directories will store traditional information like usernames, phone numbers, and organizational roles, as well as extended information like expected user location or the public key of an X.509 certificate the user has been issued.

Given the sensitive nature of the data being stored on users, it’s best to not store all information together in a single database. Information about users isn’t typically considered secret, but becomes sensitive when using such data to make authorization decisions. Additionally, having broad knowledge of all users in a system can be a privacy risk. For example, a system that stores the last known location of all users could be used to spy on users. Stored user data can also be a security risk, if that data can be leveraged to attack another system. Consider systems that ask users fact-based information as a means to further validate their identity.

Instead of storing all user information in a single database, consider splitting the data into several isolated databases. These databases should ideally only be exposed via a constrained API, which limits the information divulged. In the best case, raw data is never divulged, but rather assertions can be made about a user by the application that has access to the data. For example, a system that stores a user’s previous known location could expose the following APIs:

  • Is the user currently or likely to be near these coordinates?
  • How frequently does the user change locations?
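The two example questions above can be sketched as a constrained API. The following is a hypothetical service (all names and the storage shape are illustrative, not from any particular product) that answers assertions about a user's location without ever divulging the raw coordinates it stores:

```python
import math
from dataclasses import dataclass


@dataclass
class LocationFix:
    lat: float
    lon: float
    timestamp: int  # Unix seconds


class LocationAssertionService:
    """Answers questions about a user's location without divulging raw fixes."""

    def __init__(self, fixes_by_user):
        self._fixes = fixes_by_user  # {username: [LocationFix, ...]}

    def _distance_km(self, a, b):
        # Equirectangular approximation: adequate for coarse proximity checks
        x = math.radians(b.lon - a.lon) * math.cos(math.radians((a.lat + b.lat) / 2))
        y = math.radians(b.lat - a.lat)
        return math.hypot(x, y) * 6371

    def is_near(self, user, lat, lon, radius_km=50):
        """Assert proximity without revealing where the user actually is."""
        fixes = self._fixes.get(user, [])
        if not fixes:
            return False
        latest = max(fixes, key=lambda f: f.timestamp)
        probe = LocationFix(lat, lon, latest.timestamp)
        return self._distance_km(latest, probe) <= radius_km

    def moves_frequently(self, user, threshold_km=100):
        """Assert whether consecutive fixes tend to be far apart."""
        fixes = sorted(self._fixes.get(user, []), key=lambda f: f.timestamp)
        jumps = [self._distance_km(a, b) for a, b in zip(fixes, fixes[1:])]
        return bool(jumps) and sum(jumps) / len(jumps) > threshold_km
```

Note that callers receive only a boolean assertion; even a compromised caller learns far less than it would from a raw location table.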

Directory Maintenance

Keeping user directories accurate is critical for the safety of a zero trust network. Users are expected to come and go over the lifetime of a network system, so good onboarding and offboarding procedures should be created to keep the system accurate.

As much as possible, it’s best to integrate technical identity systems (LDAP or local user accounts) into organizational systems. For example, a company might have human resource systems to track employees that are joining or leaving the company. It is expected that these two sources of data are consistent with each other, but unless there is a system that has integrated the two or is checking their contents, the sets of data will quickly diverge. Creating automated processes for connecting these systems is an effort that will quickly pay dividends.
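A minimal sketch of such an automated consistency check might simply diff the two systems, treating the HR roster as authoritative (the function name and set-based representation here are illustrative):

```python
def reconcile(hr_roster: set, directory_accounts: set):
    """Compare the HR system of record against the technical directory.

    Returns accounts to provision (present in HR, missing from the
    directory) and accounts to deprovision (present in the directory,
    but no longer in HR).
    """
    to_provision = hr_roster - directory_accounts
    to_deprovision = directory_accounts - hr_roster
    return to_provision, to_deprovision
```

Running a check like this on a schedule, and alerting or auto-remediating on any nonempty result, keeps the two sources from silently diverging.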

The case of two divergent identity systems raises an important point—which system is authoritative? Clearly one system must be the system of record for identity, but that choice should be made based on the needs of the organization. It doesn’t much matter which system is chosen, only that one is authoritative and all other identity systems derive their data from the system of record.

Minimizing Data Stored Can Be Helpful

A system of record for identity does not need to contain all identity information. Based on our earlier discussion, it can be better to purposefully segment user data. The system of record needs to only store the information that is critical for identifying an individual. This could be as simple as storing a username and some personal information for the user to recover their identity should they forget it. Derivative systems can use this authoritative ID to store additional user information.

When to Authenticate Identity

Even though authentication is mandatory in a zero trust network, it can be applied in clever ways to significantly bolster security while at the same time working to minimize user inconvenience.

While it might be tempting (and even logical) to adopt a position of “It’s not supposed to be easy; it’s supposed to be secure,” user convenience is one of the most important factors in designing a zero trust network. Security technologies that present a poor user experience are often systematically weakened and undermined by their own users. A poor experience disincentivizes the user from engaging with the technology, and shortcuts to sidestep enforcement will be taken more often.

Authenticating for Trust

The act of authenticating a user is, essentially, the system seeking to validate that the user is indeed who they say they are. As you’ll learn in the next section, different authentication methods have different levels of strength, and some are strongest when combined with others. Because these authentication mechanisms are never absolute, we can assign some level of trust to the outcome of the operation.

For instance, you may need only a password to log into a subscription music service, but your investment account probably requires a password and an additional code. This is because investing is a sensitive operation: the system must trust that the user is authentic. The music service, on the other hand, is not as sensitive and chooses to not require an additional code, because doing so would be a nuisance.

By extension, a user may pass additional forms of authentication in order to raise their level of trust. This can be done specifically in a time of need. A user whose trust score has eroded below the requirements for a particular request can be asked for additional proof, which if passed will raise the trust to acceptable levels.

This is far from a foreign concept; it can be seen in common use today. Requiring users to enter their password again before performing a sensitive operation is a prime example of this concept in action. It should be noted, however, that the amount of trust one can gain through authentication mechanisms alone should be bounded. Without such a bound, the consequences of poor device security and other undesirable signals can be washed out.

Trust as the Authentication Driver

Since authentication derives trust, and it is our primary goal to not frivolously drag users through challenges, it makes sense to use trust score as the mechanism that mandates authentication requirements. This means that a user should not be asked to further authenticate if their trust score is sufficiently high and, conversely, that a user should be asked to authenticate when their score is too low. This is to say that, rather than selecting particular actions which require additional authentication, one should assign a required score and allow the trust score itself to drive the authentication flow and requirements. This gives the system the opportunity to choose a combination of methods in order to meet the goal, possibly reducing the invasiveness by having context about the level of sensitivity and knowledge of how much each method is trusted.

This approach is fundamentally different from traditional authentication design approaches, which seek to designate the most sensitive areas and actions and authenticate them the heaviest, perhaps despite previous authentication and trust accumulation. In some ways, the traditional approach can be likened to perimeter security, in which sensitive actions must pass a particular test, after which no further protections are present. Instead, leveraging the trust score to drive these decisions removes arbitrary authentication requirements and installs adaptive authentication and authorization that is only encountered when necessary.
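A greedy sketch of score-driven authentication follows. The method names and trust weights are purely illustrative; a real deployment would tune them to its own risk model:

```python
# Illustrative trust weights per method; real values would be tuned per deployment.
METHOD_TRUST = {"password": 20, "totp": 30, "hardware_token": 40, "biometric": 30}


def methods_to_request(current_score: int, required_score: int, already_used=()):
    """Pick additional authentication methods that close the trust gap.

    Greedy sketch: prefer the strongest unused method until the required
    score is met. Returns [] when no challenge is needed, or None when no
    combination of remaining methods can reach the required score.
    """
    gap = required_score - current_score
    if gap <= 0:
        return []  # the session is already trusted enough; don't bother the user
    chosen = []
    for method, weight in sorted(METHOD_TRUST.items(), key=lambda kv: -kv[1]):
        if method in already_used:
            continue
        chosen.append(method)
        gap -= weight
        if gap <= 0:
            return chosen
    return None
```

The key property is that the user is challenged only when the score demands it, and with the least invasive combination that suffices.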

The Use of Multiple Channels

When authenticating and authorizing a request, using multiple channels to reach the requestor can be very effective. One-time codes provide an additional factor, especially when the code-generating system is on a separate device. Push notifications provide a similar capability by using an active connection to a mobile device. There are many applications of this idea, and they can take different forms.

Depending on the use case, one might choose to leverage multiple channels as an integral part of a digital authentication scheme. Alternatively, those channels might be used purely as an authorization component, where a requestor might be prompted to approve a risky operation. Both uses are effective in their own right, though user experience should (as always) be kept in mind when deciding when and where to apply them.

Channel Security

Communication channels are constructed with varying degrees of authentication and trust. When leveraging multiple channels, it is important to understand how much trust should be placed on the channel itself. This will dictate which channels are selected for use and when. For instance, physical rotating code devices are only as secure as the system used to distribute them or the identification check required to physically obtain one from your administrator. Similarly, a prompt via a corporate chat system is only as strong as the credentials required to sign in to it. Be sure to use a different channel than the one you are trying to authenticate/authorize in the first place.

Leveraging multiple channels is effective not because compromising a channel is hard, but because compromising many is hard. We will talk more about these points in the next section.

Caching Identity and Trust

Session caching is a relatively mature and well-documented technology, so we won’t spend too much time on it, but it is worth highlighting some design choices that are important for secure operation in a zero trust network.

Frequent validation of the client’s authorization is critical. This is one of the only mechanisms allowing the control plane to effect changes in data plane applications as a result of changes in trust. The more frequently this can be done, the better. Some implementations authorize every request with the control plane. While this is ideal, it may not be a realistic prospect, depending on your situation.

Many applications validate SSO tokens only at the beginning of a session and set their own tokens after that. This mode of operation removes session control from the control plane and is generally undesirable. Authorizing requests with control plane tokens rather than application tokens allows us to easily revoke when trust levels fluctuate or erode.
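Per-request validation against the control plane might look like the following sketch. The `ControlPlaneClient` class is a stand-in for a real token-introspection endpoint; its shape is assumed for illustration:

```python
class ControlPlaneClient:
    """Stand-in for the control plane's token-introspection endpoint."""

    def __init__(self):
        self._revoked = set()
        self._trust = {}  # token -> trust score

    def issue(self, token, trust):
        self._trust[token] = trust

    def revoke(self, token):
        self._revoked.add(token)

    def introspect(self, token):
        if token in self._revoked or token not in self._trust:
            return None
        return {"trust": self._trust[token]}


def authorize_request(control_plane, token, required_trust):
    """Validate the control-plane token on *every* request, so revocation
    and trust-score changes take effect immediately rather than only at
    session establishment."""
    info = control_plane.introspect(token)
    return info is not None and info["trust"] >= required_trust
```

Because the application never mints its own session token, a single `revoke` call in the control plane cuts off access on the very next request.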

How to Authenticate Identity

Now that we know when to authenticate, let’s dig into how to authenticate a user. The common wisdom, which is also applicable in zero trust networks, is that there are three ways to identify a user:

Something they know

Knowledge the user alone has (e.g., a password).

Something they have

A physical credential that the user can provide (e.g., a hardware token that generates a time-sensitive code).

Something they are

An inherent trait of the user (e.g., a fingerprint or retina).

We can authenticate a user using one or more of these methods. The method or methods chosen will depend on the level of trust required. For high-risk operations, which require multiple authentication factors, it’s best to choose methods that do not fall in the same grouping of something you know, something you have, or something you are. This is because the attack vectors are generally similar within a particular grouping. For example, a hardware token (something you have) can be stolen and subsequently used by anyone. If we pair that token with a second hardware token, it’s highly likely that both devices will be kept near each other and stolen together.

Which factors to use together will vary based on the device that the user is using. For example, on a desktop computer, a password (something you know) and a hardware token (something you have) is a strong combination that should generally be preferred. For a mobile device, however, a fingerprint (something you are) and passphrase (something you know) might be preferred.
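A simple policy check can enforce this grouping rule. The factor names below are illustrative; the point is only that each factor maps to exactly one category, and a strong combination draws from distinct categories:

```python
# Each factor belongs to one category; combining factors from the same
# category adds little, since their attack vectors overlap.
FACTOR_CATEGORY = {
    "password": "know", "pin": "know",
    "hardware_token": "have", "totp": "have", "certificate": "have",
    "fingerprint": "are", "face": "are",
}


def is_diverse_combination(factors):
    """True when every supplied factor comes from a distinct category."""
    categories = [FACTOR_CATEGORY[f] for f in factors]
    return len(categories) == len(set(categories))
```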

Physical Safety Is a Requirement for Trusting Users

This section focuses on technological means to authenticate the identity of a user, but it’s important to recognize that users can be coerced to thwart those mechanisms. A user can be threatened with physical harm to force them to divulge their credentials or to grant someone access under a trusted account. Behavioral analysis and historical trending can help to mitigate such attempts, though they remain an effective attack vector.

Something You Know: Passwords

Passwords  are the most common form of authentication used in computer systems today. While often maligned due to users’ tendency to choose poor passwords, this authentication mechanism provides one very valuable benefit: when done well, it is an effective method for asserting that a user’s mind is present.

A good password has the following characteristics:

It’s long

A recent NIST password standard states a minimum of 8 characters, but 20+ character passwords are common among security-conscious individuals. Passphrases are often encouraged to help users remember a longer password.

It is difficult to guess

Users tend to overestimate their ability to pick truly random passwords, so generating passwords from random number generators can be a good mechanism for choosing a strong password, though convenience is affected if it cannot be easily committed to memory.

It is not reused

Passwords need to be validated against some stored data in a service. When passwords are reused, the confidentiality of that password is only as strong as the weakest storage in use.

Choosing long, difficult-to-guess passwords for every service or application a user interacts with is a high bar for users to meet. As a result, users are well served to make use of a password manager to store their passwords. Using this tool will allow users to pick much harder-to-guess passwords and thereby limit the damage of a data breach.

When building a service that authenticates passwords, it’s important to follow best practices. Passwords should never be directly stored or logged. Instead, a cryptographic hash of the password should be stored. The cost to brute force a password (usually expressed in time and/or memory requirements) is determined by the strength of the hashing algorithm. The NIST periodically releases standards documents that include recommended password procedures. As computers become more powerful, the current recommendations change, so it’s best to consult industry best practices when choosing algorithms.

Something You Have: TOTP

Time-based one-time password, or TOTP, is an authentication standard where a constantly changing code is provided by the user. RFC 6238 defines the standard implemented in hardware devices and software applications. Mobile applications are often used to generate the code, which works well, since users tend to have their phones close by.

Whether using an application or hardware device, TOTP requires sharing a random secret value between the user and the service. This secret and the current time are passed through a cryptographic hash and then truncated to produce the code to be entered. As long as the device and the server roughly agree on the current time, a matching code confirms that the user is in possession of the shared key.
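The computation just described can be sketched directly from RFC 6238 and RFC 4226 (which defines the underlying HOTP truncation step). This minimal implementation covers the default SHA-1, 30-second-step mode:

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 TOTP code for the given Unix timestamp."""
    # The moving factor is the number of time steps since the Unix epoch
    counter = timestamp // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # RFC 4226 dynamic truncation: low 4 bits of the last byte pick an offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server performs the same computation; as long as the clocks roughly agree (implementations typically also accept the adjacent time step), a matching code proves possession of the shared key.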

The storage of the shared key is critical, both on the device and on the authenticating server. Losing control of that secret will permanently break this authentication mechanism. The RFC recommends encrypting the key using a hardware device like a TPM, and then limiting access to the encrypted data.

Exposing the shared key to a mobile device places it in greater danger than it is on a server. The device could connect to a malicious endpoint that might be able to extract the key. To mitigate this vector, an alternative to TOTP is to send the user’s mobile phone a random code over an encrypted channel. This code is then entered on another device to authenticate that the user is in possession of their mobile phone.

SMS Is Not a Secure Communication Channel

Sending the user a random code for authentication requires that the authentication code is reliably delivered to the intended device and is not exposed during transit. Systems have previously sent random codes as an SMS message, but the SMS system does not make sufficient guarantees to protect the random code in transit. Using SMS for this system is therefore not recommended.

Something You Have: Certificates

Another method to authenticate users is to generate per-user X.509 certificates. The certificate is derived from a strong private key and then signed using the private key of the organization that provided the certificate. The certificate cannot be modified without invalidating the organization’s signature, so the certificate can be used as a credential with any service that is configured to trust the signature of the organization.

Since an X.509 certificate is meant for consumption by a computer, not by humans, it can provide much richer details when presented to a service for authentication. As an example, a system could encode metadata about the user in the certificate and then trust that data since it has been signed by a trusted organization. This can alleviate the need to create a trusted user directory in less mature networks.

Using certificates to identify users relies heavily on those certificates being securely stored. It is strongly preferred to both generate and store the private key component on dedicated hardware so as to prevent digital theft. We’ll talk more about that in the next section.

Something You Have: Security Tokens

Security tokens are hardware devices that are used primarily for user authentication, but they have additional applications. These devices are not mass storage devices storing a credential that was provisioned elsewhere. Instead, the hardware itself generates a private key. This credential information never leaves the token. The user’s device interacts with the hardware’s APIs to perform cryptographic operations on behalf of the user, proving that they are in possession of the hardware.

As the security industry progresses, organizations are increasingly turning toward hardware mechanisms for authenticating user identity. Devices like smart cards or Yubikeys can provide a 1:1 assertion of a particular identity. By tying identity to hardware, the risk that a particular user’s credentials can be duplicated and stolen without their knowledge is greatly mitigated, as physical theft would be required.

Storing a private key in hardware is by far the most secure storage method we have today. The stored private key can then be used as the backing for many different types of authentication schemes. Traditionally, they are used in conjunction with X.509, but a new protocol called Universal 2nd Factor (U2F) is gaining rapid adoption. U2F provides an alternative to full-blown PKI, offering a lightweight challenge-response protocol that is designed for use by web services. Regardless of which authentication scheme you choose, if it relies on asymmetric cryptography, you should probably be using a security token.

While these hardware tokens can provide strong protections against credential theft, they cannot guarantee that the token itself isn’t stolen or misused. Therefore, it’s important to recognize that while these tokens are great tools in building a secure system, they cannot be a complete replacement for a user asserting their identity. If we want the strongest guarantee that a particular user is who they claim to be, using a security key with additional authentication factors (e.g., a password or biometric sensor) is still strongly recommended.

Something You Are: Biometrics

Asserting identity by recognizing physical characteristics of the user is called biometrics. Biometrics is becoming more common as advanced sensors make their way into the devices we use every day. This authentication approach offers better convenience and potentially stronger security, provided biometric signals such as the following are used wisely:

  • Fingerprints
  • Handprints
  • Retina scans
  • Voice analysis
  • Face recognition

Using biometrics might seem like the ideal authentication method. After all, authenticating a user is validating that they are who they say they are. What could be better than measuring physical characteristics of a user? While biometrics is a useful addition to system security, there are some downsides that should not be forgotten.

Authenticating via biometrics relies on accurate measurement of a physical characteristic. If an attacker is able to trick the scanner, they are able to gain entry. Fingerprints, being a common biometric, are left on everything a person touches. Attacks against fingerprint readers have been demonstrated—attackers obtain pictures of a latent fingerprint and then 3D print a fake one, which the scanner accepts.

Additionally, biometric credentials cannot be rotated, since they’re a physical characteristic. They can also present an accessibility issue if, for example, an individual is born without fingerprints (a condition known as adermatoglyphia) or if they lost their fingers in an accident.

Finally, biometrics can present surprising legal challenges when compared against other authentication mechanisms. In the United States, for example, a citizen can be compelled by a court to provide their fingerprint to authenticate to a device, but they cannot be compelled to divulge their password, owing to their Fifth Amendment right against self-incrimination.

Out-of-Band Authentication

Out-of-band authentication purposefully uses a communication channel separate from the one that carried the original request in order to validate that request. For example, a user logging into a website for the first time on a device might receive a phone call to validate the request. By using an out-of-band check, a service is able to raise the difficulty of breaking into an account, since the attacker would need control of the out-of-band communication channel as well.

Out-of-band checks can come in many forms. These forms should be chosen based on the desired level of strength needed for each interaction:

  • A passive email can inform users of potentially sensitive actions that have recently taken place.
  • A confirmation can be required before a request is completed. Confirmation could be a simple “yes,” or it could involve entering a TOTP code.
  • A third party could be contacted to confirm the requested action.
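The confirmation pattern above can be sketched as a small pending-action store. Everything here is illustrative: the `notify` callback stands in for whatever second channel is used (push notification, phone call, chat prompt), and the token format is an arbitrary choice:

```python
import secrets


class PendingActions:
    """Holds sensitive requests until confirmed over a second channel."""

    def __init__(self, notify):
        self._pending = {}
        self._notify = notify  # e.g., a push-notification or phone-call sender

    def request(self, user, action):
        token = secrets.token_urlsafe(16)
        self._pending[token] = (user, action)
        # The confirmation token travels over a channel other than the
        # one that carried the original request.
        self._notify(user, f"Confirm '{action}' with token {token}")
        return token

    def confirm(self, token):
        """Complete the action only if the out-of-band token matches.

        Returns the (user, action) pair once, or None for an unknown or
        already-used token.
        """
        return self._pending.pop(token, None)
```

Popping the token on confirmation makes each code single-use, so a replayed or intercepted confirmation cannot authorize a second action.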

When used well, out-of-band authentication can be a useful tool to increase the security of the system. As with all authentication mechanisms, some level of taste is required to choose the right authentication mechanism and frequency, based on the request taking place.

Single Sign On

Given the large number of services users interact with, the industry would prefer to decouple authentication from end services. Having authentication decoupled provides benefits to both the service and the user:

  • Users only need to authenticate with a single service.
  • Authentication material is stored in a dedicated service, which can have more stringent security standards.
  • Storing security credentials in fewer locations means less risk and easier rotation.

Single sign-on (SSO) is a fairly mature concept. Under SSO, users authenticate with a centralized authority, after which they will typically be granted a token of sorts. This token is then used in further communication with secured services. When the service receives a request, it contacts the authentication authority over a secure channel to validate the token provided by the client.

This is in contrast to decentralized authentication. A zero trust network employing decentralized authentication will use the control plane to push credentials and access policy into the data plane. This empowers the data plane to carry out authentication on its own, whenever and wherever necessary, while still being backed by control plane policy and concern. This approach is sometimes favored over the more mature SSO-based approach because it does not require running an additional service, but the complexity it introduces means it is generally not recommended.

SSO tokens should be validated against the centralized authority as often as possible. Every call to the control plane to authorize an SSO token provides an opportunity to revoke access or alter the trust level (as known to the caller).
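A minimal sketch of this per-request validation pattern follows. The names here (`ControlPlane`, `handle_request`, the numeric trust scores) are invented for illustration; the point is that the service never caches a verdict, so the control plane can revoke a token or lower its trust between any two requests:

```python
class ControlPlane:
    """Hypothetical centralized authority tracking issued SSO tokens."""

    def __init__(self):
        self._tokens = {}  # token -> current trust score

    def issue(self, token, trust):
        self._tokens[token] = trust

    def revoke(self, token):
        self._tokens.pop(token, None)

    def validate(self, token):
        """Return the token's *current* trust score, or None if unknown/revoked."""
        return self._tokens.get(token)

def handle_request(control_plane, token, trust_required):
    # Revalidate on every request rather than trusting a prior verdict.
    trust = control_plane.validate(token)
    if trust is None or trust < trust_required:
        return "403 Forbidden"
    return "200 OK"
```

Because the check happens on each request, revocation takes effect immediately instead of waiting for a session to expire.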

A popular mode of operation involves the service performing its own sign in, backed by SSO authentication. The primary drawback of this approach is that it allows the control plane to authorize the request only once, and leaves the application to make all further decisions. Trust variance and invalidation is a key aspect of a zero trust network, so decisions to follow this pattern should not be taken lightly.

Existing Options

SSO has been around for a long time, and as such, there are many mature protocols/technologies to support it, including these popular ones:

  • SAML
  • Kerberos
  • CAS

It is critical that authentication remain a control plane concern in a zero trust network. As such, when designing authentication systems in a zero trust network, aim for as much control plane responsibility as possible, and validate authorization with the control plane as often as is reasonably possible.

Moving Toward a Local Auth Solution

Another authentication mechanism that is increasingly viable is local authentication extended out to remote services. In this system, users authenticate their presence with a trusted device, and the device then attests to that identity with a remote service. Open standards like the FIDO Alliance’s UAF standard use asymmetric cryptography and local device authentication systems (e.g., passwords and biometrics) to move trust away from a large number of services to relatively few user-controlled endpoints.

UAF, in a way, looks a lot like a password manager. However, instead of storing passwords, it stores private keys. The authenticating service is then given the user’s public key and is thereby able to confirm that the user is in possession of the private key.

By moving authentication into a smart local device, a number of benefits emerge:

  • Replay attacks can be mitigated via a challenge-and-response system.
  • Man-in-the-middle attacks can be thwarted by having the authentication service refuse to sign the challenge unless it originated from the same domain the user is visiting.
  • Credential reuse is nonexistent, since per-service credentials can be trivially generated.
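The challenge/response flow with origin binding can be sketched as follows. Real UAF authenticators sign with a per-service asymmetric private key; this toy uses an HMAC shared secret as a stand-in for the signature purely so the example runs on the standard library, and all names are hypothetical:

```python
import hashlib
import hmac
import os

class Authenticator:
    """Toy local authenticator holding per-service credentials."""

    def __init__(self):
        self._keys = {}  # origin -> per-service secret, created at registration

    def register(self, origin):
        self._keys[origin] = os.urandom(32)
        return self._keys[origin]  # a real device would return a *public* key

    def sign_challenge(self, origin, challenge):
        # Refusing to respond for an unknown origin is what thwarts
        # MITM/phishing: a look-alike domain never gets a valid response.
        if origin not in self._keys:
            raise ValueError("unknown origin; refusing to sign")
        return hmac.new(self._keys[origin], challenge, hashlib.sha256).hexdigest()

def service_verify(key, challenge, response):
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

The fresh, service-chosen challenge is what defeats replay: a captured response is useless against any other challenge.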

Authenticating and Authorizing a Group

Nearly every system has a small set of actions or requests that must be closely guarded. The amount of risk one is willing to tolerate in this area will vary from application to application, though there is practically no lower limit.

One of the risks you reduce as you approach zero is the amount of trust placed in any single human being. Just like in real life, there are many times when it is desirable to gain the consent of multiple individuals in order to authorize a particularly sensitive action. There are a couple of ways this can be achieved in the digital realm, and the cool part is, we can cryptographically guarantee it!

Shamir’s Secret Sharing

Shamir’s Secret Sharing is a scheme for distributing a single secret among a group of individuals. The algorithm breaks the original secret into n parts, which can then be distributed (Figure 6-2). Depending on how the algorithm was configured when the parts were generated, k parts are needed to recalculate the original secret value.

When protecting large amounts of data using Shamir’s Secret Sharing, a symmetric encryption key is usually split and distributed instead of applying the algorithm directly to the data. This is because the secret being split must be smaller than the prime modulus that defines the scheme’s finite field, which limits it to a small, fixed size.

Figure 6-2. An example ssss session

A Unix/Linux version of this algorithm is called ssss. Similar applications and libraries exist for other operating systems or programming languages.
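To make the scheme concrete, here is a minimal pure-Python sketch of k-of-n splitting over a prime field. It is an illustration only; in practice, use a vetted implementation such as ssss:

```python
import random

# The 12th Mersenne prime. The secret must be smaller than this modulus,
# which is why large payloads are encrypted with a symmetric key and only
# the (small) key itself is split.
PRIME = 2**127 - 1

def _eval_poly(coeffs, x):
    # Horner's rule over GF(PRIME); coeffs[0] is the secret itself.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def split(secret, n, k):
    """Split secret into n shares; any k of them recover it."""
    if not 0 <= secret < PRIME:
        raise ValueError("secret must fit in the field")
    # A random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With fewer than k shares, every candidate secret remains equally likely, which is what makes the scheme information-theoretically secure rather than merely computationally hard.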

Red October

Cloudflare’s Red October project is another approach to implementing group authentication to access shared data. This web service uses layered asymmetric cryptography to encrypt data such that a certain number of users need to come together to decrypt the data. Encrypted data isn’t actually stored on the server. Instead, only user public/private key pairs (encrypted with a user chosen password) are stored.

When data is submitted to be encrypted, a random encryption key is generated to encrypt the data. This encryption key is then itself encrypted using unique combinations of user-specific encryption keys, based on an unlock policy that the user requests. In the simplest case, a user might encrypt some data such that two people in a larger group need to collaborate to decrypt the data. In this scenario, the original encrypted data’s encryption key is therefore doubly encrypted with each unique pair of user encryption keys.
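The "any two of N" wrapping described above can be sketched as follows. This toy uses a SHA-256 XOR keystream as a stand-in cipher purely so the example is self-contained and runnable; Red October itself uses real authenticated encryption with per-user asymmetric key pairs, and all names here are invented:

```python
import hashlib
import os
from itertools import combinations

def _keystream_xor(key, data):
    # Toy stream cipher: XOR with a SHA-256 counter keystream.
    # Illustration only -- unauthenticated, not for real use.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_for_pairs(data, user_keys):
    """Encrypt data under a random key, then wrap that key under every
    unique pair of user keys -- a 2-of-N unlock policy."""
    data_key = os.urandom(32)
    wrapped = {}
    for (u1, k1), (u2, k2) in combinations(sorted(user_keys.items()), 2):
        wrapped[(u1, u2)] = _keystream_xor(k2, _keystream_xor(k1, data_key))
    return {"ciphertext": _keystream_xor(data_key, data),
            "wrapped_keys": wrapped}

def decrypt_with_pair(blob, user_a, key_a, user_b, key_b):
    wrapped = blob["wrapped_keys"][tuple(sorted([user_a, user_b]))]
    # XOR layers commute, so unwrap order does not matter in this toy;
    # with real layered encryption the order would be significant.
    data_key = _keystream_xor(key_a, _keystream_xor(key_b, wrapped))
    return _keystream_xor(data_key, blob["ciphertext"])
```

Note that only the wrapped keys and ciphertext need to be stored; no single user's key (or the server alone) can recover the data key.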

See Something, Say Something

Users in a zero trust network, like devices, need to be active participants in the security of the system. Organizations have traditionally formed dedicated teams to focus on the security of the system. Those teams, more often than not, took that mandate to mean that they were solely responsible for the system’s security. Changes needed to be vetted by them to ensure that the system’s security was not compromised. This approach produces an antagonistic relationship between the security team and the rest of the organization, and as a result, reduces security.

A better approach is to build a culture of collaboration toward the security of the system. Users should be encouraged to speak up if something they do or witness looks odd or dangerous, even if it’s small. This sharing of knowledge will give much better context on the threats that the security team is working to defend against. Reporting phishing emails, even when users did not interact with them, can let the security team know if a determined attacker is attempting to infiltrate the network.

Devices which are lost or stolen should be reported immediately. Security teams might consider providing ways for users to alert them day or night in the event that their device has gone missing.

When responding to tips or alerts from users, security teams should be mindful of how their response to the incident affects the organization more broadly. A user who is shamed for losing a device will be less willing to report the loss in a timely manner in the future. Similarly, a late-night false alarm should be met with thanks to ensure that reporters don’t second-guess themselves. As much as possible, try to bias the organization toward over-reporting.

Trust Signals

Historical user activity is a rich source of data for determining the trustworthiness of a user’s current actions. A system can be built which mines user activity to build up a model of expected behavior. This system will then compare current behavior against that model as a method for calculating a trust score of a user.

Humans tend to have predictable access patterns. Most people will not try to authenticate multiple times a second. They also are unlikely to try to authenticate hundreds of times. These types of access patterns are extremely suspicious and are often mitigated via active measures like CAPTCHAs (automated challenges that only a human should be able to answer) or account lockouts. To reduce false positives, the bar for such active interventions must be set fairly high. Including this activity in an overall threat assessment score can help catch suspicious, but not obviously bad, behavior.
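One way to fold authentication rate into a score rather than a hard ban is a sliding-window counter. The class name, window size, and thresholds below are all invented for illustration:

```python
import collections

class AuthRateSignal:
    """Hypothetical signal: suspicion grows with recent auth attempts."""

    def __init__(self, window_seconds=60, expected_max=5):
        self.window = window_seconds
        self.expected_max = expected_max
        self.attempts = collections.defaultdict(collections.deque)

    def record(self, user, timestamp):
        q = self.attempts[user]
        q.append(timestamp)
        # Drop attempts that have aged out of the window.
        while q and q[0] < timestamp - self.window:
            q.popleft()

    def suspicion(self, user):
        """0.0 for normal rates, rising toward 1.0 as attempts pile up."""
        n = len(self.attempts[user])
        if n <= self.expected_max:
            return 0.0
        return min(1.0, (n - self.expected_max) / (10 * self.expected_max))
```

A trust engine can then weigh this fractional score alongside other signals instead of making a binary allow/deny decision on this signal alone.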

Looking at access patterns doesn’t need to be restricted to authentication attempts. Users’ application usage patterns can also reveal malicious intent. Most users tend to have fairly limited roles in an organization and therefore might only need to access a subset of data that is available to them. In an attempt to increase security, organizations will begin removing access rights from employees unless they definitely need the access to do their job. However, this type of restrictive access control can impact the ability of the organization to respond quickly to unique events. System administrators are a class of users which are given broad access, thereby weakening this approach as a defense mechanism. Instead of choosing between these two extremes, we can score the user’s activity in aggregate and then use their score to determine if they are still trusted to access a particularly sensitive resource. Having hard stops in the system is still important—it’s the less clear cases where the system should trust users, but verify their trustworthiness via logged activity.

Lists of known bad traffic sources, like the one provided by Spamhaus, can be another useful signal for the trustworthiness of a user. Traffic that is originating from these addresses and is attempting to use a particular user’s identity can point toward a potentially compromised user.

Geolocation can be another useful signal for determining trust of a user. We can compare the user’s current location against previously visited locations to determine if it is out of the ordinary. Has the user’s device suddenly appeared in a new location in a timeframe that they couldn’t reasonably travel? If the user has multiple devices, are they reporting conflicting locations? Geolocation can be wrong or misleading, so systems shouldn’t weight it too strongly. Sometimes users forget devices at home or geolocation databases are simply incorrect.
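The "impossible travel" check reduces to a great-circle distance and an implied speed. The 900 km/h threshold below (roughly jet cruising speed) is an illustrative assumption, not a standard value:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(prev, curr, max_speed_kmh=900.0):
    """prev/curr are (lat, lon, unix_time); flag faster-than-jet movement."""
    dist = haversine_km(prev[0], prev[1], curr[0], curr[1])
    hours = max((curr[2] - prev[2]) / 3600.0, 1e-9)
    return dist / hours > max_speed_kmh
```

Given the caveats above (forgotten devices, stale geolocation databases), a positive result is best treated as a trust-score penalty or a prompt for re-authentication rather than an automatic lockout.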

Summary

This chapter focused on how to establish trust in users in a system. We talked about how identity is defined and the importance of having an authority to reference when checking the identity of a user in the system. Users need to be entered into a system to have an identity, so we talked about some ideal ways to bootstrap their identity.

Identity needs to be stored somewhere, and that system is a very valuable target for attackers. We talked about how to store the data safely, the importance of limiting the breadth of data being stored in a single location, and how to keep stored identity up to date as users come and go.

With authoritative identity defined and stored, we turned our attention to authenticating users that claim to have a particular identity. Authentication can be an annoyance for users, so we discussed when to authenticate users. We don’t want users to be inundated with authentication requests, since that will increase the likelihood that they accidentally authenticate against a malicious service. Therefore, finding the right balance is critical.

There are many ways that users can be authenticated, so we dug into the fundamental concepts. We discussed several authentication mechanisms that are in use today. We also looked at some authentication mechanisms that are on the horizon as system security practices are responding to threats.

Oftentimes, increasing trust in a system of users involves creating procedures where multiple users play a role to accomplish a goal. We discussed group authentication and authorization systems like “two person rules,” which can be used to secure extremely sensitive data. We also talked about building a culture of awareness in an organization by encouraging users to report any suspicious activity.

Finally, zero trust networks can leverage user activity logs to build a profile of users to compare against when evaluating new actions. We enumerated some useful signals which can be used to build that profile.

The next chapter looks at how trust in applications can be built.
