Because Zero Trust is a strategic initiative, it's important to benchmark your Zero Trust journey and measure your improvement over time. The Zero Trust Maturity Model tracks the improvements made to each of your Zero Trust environments. Designed around a standard Capability Maturity Model, it follows the five-step methodology for implementing Zero Trust and should be used to measure the maturity of an individual protect surface containing a single DAAS element.
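As a minimal sketch of how such a benchmark might be tracked, the snippet below scores a single protect surface against the five steps using the 1–5 maturity scale from the table that follows. The data structure and the averaging approach are illustrative assumptions, not part of the model itself.

```python
# Hypothetical sketch: scoring one protect surface on the 1-5
# Capability Maturity Model scale, per methodology step.
from statistics import mean

# Maturity level per step (1 = Initial .. 5 = Optimized); scores are illustrative.
protect_surface = {
    "define_protect_surface": 3,
    "map_transaction_flows": 2,
    "architect_environment": 2,
    "create_policy": 1,
    "monitor_and_maintain": 2,
}

def maturity_score(scores: dict) -> float:
    """Average maturity across the five steps, rounded to one decimal place."""
    return round(mean(scores.values()), 1)

print(maturity_score(protect_surface))  # 2.0
```

Averaging is one simple choice; an organization might instead report the minimum step score, since a protect surface is only as mature as its weakest step.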
The five maturity levels follow the standard Capability Maturity Model:

- **Initial (1):** The initiative is undocumented and performed on an ad hoc basis, with processes undefined. Success depends on individual efforts.
- **Repeatable (2):** The process is documented and predictably repeatable, using lessons learned in the initial phase.
- **Defined (3):** Processes for success have been defined and documented.
- **Managed (4):** Processes are monitored and controlled; efficacy is measurable.
- **Optimized (5):** The focus is on continuous optimization.

| Step | Initial (1) | Repeatable (2) | Defined (3) | Managed (4) | Optimized (5) |
|---|---|---|---|---|---|
| 1. Define your protect surface. Determine which single DAAS element will be protected inside the defined protect surface. | The DAAS element is unknown or discovered manually; data classification is not done or is incomplete. | The use of automated tools to discover and classify DAAS elements has begun but is not standardized. | Data classification training and processes have been introduced and are maturing; protect surface discovery is becoming automated. | New or updated DAAS elements are immediately discovered, classified, and assigned to the correct protect surface in an automated manner. | Discovery and classification processes are fully automated. |
| 2. Map the transaction flows. Mapping the transaction flows to and from the protect surface shows how the DAAS components interact with other resources on your network and, therefore, where to place the proper controls. | Flows are conceptualized based on interviews and workshops. | Traditional scanning tools and event logs are used to construct approximate flow maps. | A flow-mapping process is in place; automated tools are beginning to be deployed. | Automated tools create precise flow maps, and all flow maps are validated with system owners. | Transaction flows are automatically mapped across all locations in real time. |
| 3. Architect a Zero Trust environment. A Zero Trust architecture is designed based on the protect surface and the interaction of resources shown in the flow maps. | With little visibility and an undefined protect surface, the architecture cannot be properly designed. | The protect surface is established based on current resources and priorities. | Basic protect surface enforcement is complete, including placing segmentation gateways in the appropriate places. | Additional controls are added to evaluate multiple variables (e.g., endpoint, SaaS, and API controls). | Controls are enforced using a combination of hardware and software capabilities. |
| 4. Create Zero Trust policy. Create Zero Trust policy following the Kipling Method of Who, What, When, Where, Why, and How. | Policy is written at Layer 3. | Additional "who" statements are starting to be identified to address business needs; user IDs of applications and resources are known, but access rights are unknown. | The team works with the business to determine who or what should have access to the protect surface. | Custom user-specific elements are created and defined by policy, reducing policy space and the number of users with access. | Layer 7 policy is written for granular enforcement; only known traffic and legitimate application communication are allowed. |
| 5. Monitor and maintain. Telemetry from all controls in the protection chain is captured, analyzed, and used to stop attacks in real time and to enhance defenses over time. | Visibility into what is happening on the network is low. | Traditional SIEM or log repositories are available, but the process is still mostly manual. | Telemetry is gathered from all controls and sent to a central data lake. | Machine learning tools are applied to the data lake for context into how traffic is used in the environment. | Data is incorporated from multiple sources and used to refine steps 1–4; alerts and analyses are automated. |
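To make step 2 concrete, here is a minimal sketch of flow mapping at roughly the Repeatable level: aggregating connection records (as a scanning tool or log export might produce them) into a map of which sources talk to a DAAS element, over which ports. The record format and names are illustrative assumptions.

```python
# Hypothetical sketch of transaction-flow mapping (step 2): group
# observed connections into {destination: {(source, port): count}}.
from collections import defaultdict

# (src, dst, dst_port) tuples; the format and values are illustrative.
connections = [
    ("10.0.1.5", "hr-db", 5432),
    ("10.0.1.5", "hr-db", 5432),
    ("10.0.2.9", "hr-db", 443),
]

def build_flow_map(records):
    """Aggregate raw connection records into an approximate flow map."""
    flows = defaultdict(lambda: defaultdict(int))
    for src, dst, port in records:
        flows[dst][(src, port)] += 1
    return flows

flow_map = build_flow_map(connections)
print(dict(flow_map["hr-db"]))  # {('10.0.1.5', 5432): 2, ('10.0.2.9', 443): 1}
```

A flow map like this is what tells you where a segmentation gateway and its controls belong; at higher maturity levels, the same aggregation would run continuously against live telemetry rather than a static export.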
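The Kipling Method in step 4 can be sketched as data: each policy statement answers Who, What, When, Where, Why, and How, and a request is allowed only when every enforced dimension matches. The field names, values, and matching logic below are illustrative assumptions, not any vendor's policy syntax.

```python
# Hypothetical sketch of a Kipling Method policy statement (step 4).
from dataclasses import dataclass

@dataclass(frozen=True)
class KiplingRule:
    who: str    # asserted user identity (e.g., a user group)
    what: str    # application allowed (a Layer 7 application name)
    when: str    # time window in which the rule is valid
    where: str   # destination protect surface
    why: str     # business reason, recorded for audit (not matched)
    how: str     # required inspection/handling of the traffic

def is_allowed(rule: KiplingRule, request: dict) -> bool:
    """Permit only requests that match every enforced Kipling dimension."""
    return all(
        request.get(field) == getattr(rule, field)
        for field in ("who", "what", "when", "where", "how")
    )

rule = KiplingRule(
    who="hr-analysts", what="workday", when="business-hours",
    where="hr-data-protect-surface", why="payroll processing",
    how="tls-decrypt-and-inspect",
)
print(is_allowed(rule, {
    "who": "hr-analysts", "what": "workday", "when": "business-hours",
    "where": "hr-data-protect-surface", "how": "tls-decrypt-and-inspect",
}))  # True
```

Note that "why" is carried for audit rather than matched at enforcement time; because every enforced dimension must match, anything not explicitly described by a rule is denied by default, which mirrors the Optimized-level goal of allowing only known, legitimate application communication.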