© Sreejith Keeriyattil 2019
S. Keeriyattil, Zero Trust Networks with VMware NSX, https://doi.org/10.1007/978-1-4842-5431-8_5

5. Bird’s-Eye View of a Zero Trust Network

Sreejith Keeriyattil
Bengaluru, Karnataka, India

You have learned the basics of how to create distributed firewall rules, security groups, service composers, and so on. It's time to apply them to real-world scenarios. The intention here is to explain how you can implement a Zero Trust policy-based network on a real production system.

Introduction

Like any design, the architecture may differ according to the business requirements. A financial services company requires tighter security control than a company handling less critical business solutions. Every company needs a top-class security model deployed on its network. No company is looking for second-class solutions, but you'll often face a trade-off between cost and functionality. This forces you to choose a design that meets your company's greatest needs. For instance, a landlocked country doesn't need a heavy navy presence; instead, it focuses its defense strategies on building a more dependable army and air force. You can always consider real-life examples when designing security models.

A wide range of complicated attacks threaten Internet-facing companies today. Most fend off such threats only through sheer luck. Sometimes companies are not even aware that data theft happened at all; they become aware only when the hackers threaten to release the data unless their demands are met. You can argue that even cutting-edge security models fail to protect some companies from being embarrassed in front of the world when they are exposed by hackers.

The Zero Trust model cannot completely prevent such attacks. Consider Google's way of designing systems: Google always designs with failure in mind. For its infrastructure, the rule of thumb is that hardware can fail at any time, so strategies are built around how the system will react when there is a failure. Applications are grouped according to business criticality, and more money is spent on monitoring the critical ones.

You can model your security infrastructure based on business requirements. It is important to discuss this with various stakeholders before committing to any design approach. If you are passionate about cars and want to buy a new one, you would naturally go for the best available car on the market. What you might not consider, due to your passion, are the budget, affordability, mileage, and many other factors.

As a security consultant, it is always easier to suggest the top-selling and latest security tool available in the market. There is an old saying in IT groups that states, “Nobody gets fired for buying IBM”. There are many real examples where projects start with high expectations and, at the end, the budget skyrockets and the entire project wraps up before getting anything production-ready. This can happen when designing security systems as well if you are not careful.

As a security consultant, your job is to determine the requirements and priorities. You have to defend each extra penny the customer is paying. Setting up a cutting-edge defense system only to protect a blog or a less critical gaming application is not a good use of money or time.

This chapter focuses on how to design a general system based on the Zero Trust network.

It is important to understand the basic requirements and why you do the things you do in NSX. This knowledge carries over to other areas.

Stakeholder Meetings

Before going ahead with your design, it is important to have a detailed discussion with each of the application owners. As a security consultant, your job is to make sure all the relevant policies are applied to the system. For that, you might also need to know the kind of application you are protecting. There are multiple design approaches and deployment models available in the market and there are situations where models will differ across each team. You need to know:
  • Kind of database and the data the application stores

  • How the application is connected to other applications

  • Whether the application is web facing

  • How load balancing will be handled

  • Application upgrade model

Application Architecture

Application architecture can be quite complex and demanding. At first, it might seem to have nothing to do with the security model and its implementation. However, the application architecture defines how code reacts to certain events and how the servers are organized at different levels, which gives you an important piece of information: the data flow.

In Zero Trust networks, it is vital to understand the packet flow inside the data center. Application dependency mapping documents are sometimes the most difficult thing to achieve in any infrastructure project. As it happens, each application owner is very confident about how their application works in their isolated system. They have factored in all the possibilities and edge cases to make sure all the possible issues are addressed. When it’s deployed to production, many are unaware of how the performance or changes in other applications impact their servers. You need to have a clear understanding of the business logic and application flow to segment these into firewall policies. There are different architecture models available in the market, several of which are discussed in the following sections.

Layered Architecture

Figure 5-1 shows one of the most commonly used web application architectures, the layered model. There are two-tiered and three-tiered variants. The logic is separated into different layers and each layer has a particular purpose. The most common initial layer is the web tier, which ingests the user requests coming from the Internet. This is the only layer that is Internet-facing, and its responsibility is to cater to web requests.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig1_HTML.jpg
Figure 5-1

Layered model

As most of the user hits happen at this layer, load balancing will have to be applied here for extreme bandwidth scenarios. The web layer is mainly composed of stateless servers, which makes scaling webservers easy.

Next is the application tier, which contains the servers that run the business logic. Here, you take the user requests, run the logic, and generate the expected result. This is one of the core components and is designed and optimized for different scenarios.

The third layer is the database tier. You have already applied the logic to the user request, so the output might be a CRUD (Create, Read, Update, Delete) operation on a database that sits in the database tier.

Layered architecture can be quite useful for quick software development, as the responsibilities are divided across different tiers. But there are drawbacks to this design. Over time, there is a chance that the codebase will end up as a monolithic design and become difficult to manage and upgrade. A tiered architecture, in some cases, can make it difficult for anyone to understand the overall architecture, thereby making the entire system more complex.

Layered Architecture Security Modeling Approach

While designing a layered application architecture (generally known as the three-tier model), isolation is important. The application is designed so that each layer has a specific purpose, and the security design should follow the same approach. This doesn't mean you have to go back to a traditional perimeter model to deploy the solution. The flow from layer to layer should be properly identified and protected. The webserver should connect only to the application tier that serves the business logic, and all other flows should be denied. In addition, the application that performs the CRUD operations on the database should be allowed to connect only to those specific database ports. This helps achieve a truly layer-based security model. As you can see, understanding the architecture helps you keep the application and security architecture in sync, which provides more scalable and reliable security over time.
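As a sketch, this default-deny, tier-to-tier policy can be expressed as data, with everything not explicitly listed rejected. The tier names and port numbers below are illustrative assumptions, not NSX objects:

```python
# Allowed tier-to-tier flows in a three-tier model; anything not listed
# is denied. Tier names and port numbers are illustrative assumptions.
ALLOWED_FLOWS = {
    ("web", "app"): {8080},   # web tier reaches the business logic only
    ("app", "db"): {3306},    # app tier performs CRUD on MySQL only
}

def is_allowed(src_tier, dst_tier, port):
    """Default DENY: a flow passes only if explicitly whitelisted."""
    return port in ALLOWED_FLOWS.get((src_tier, dst_tier), set())

# The web tier may not talk to the database directly:
print(is_allowed("web", "db", 3306))   # False
print(is_allowed("app", "db", 3306))   # True
```

In NSX, each allowed pair would become a DFW rule scoped to the corresponding security groups, with the default rule set to DENY.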

Event-Driven Architecture

Event-driven architecture, as the name suggests, waits for an event and takes specific actions when the event is generated. This type of architecture isn’t the best or most generic type. But it’s used with many modern web application architectures, notably e-commerce. For example, consider the communication between two web services in a typical e-commerce setup. You have an order application service listening for user requests. Once it receives a request, it will update its registry and publish an event to another application service.

The inventory application service will be subscribed to the order events. It receives the request, checks the database for inventory, and publishes a no_stock or in_stock reply. This kind of communication logic between services is quite common nowadays, particularly in cloud-based architectures.
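The order/inventory exchange just described can be sketched as a minimal in-process publish/subscribe model; the topic name, event fields, and stock data are assumptions for illustration:

```python
# Minimal in-process publish/subscribe sketch of the order/inventory
# exchange. Topic names, event fields, and stock data are assumptions.
class EventBus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        # Deliver the event to every subscriber; collect their replies.
        return [handler(event) for handler in self.subscribers.get(topic, [])]

bus = EventBus()
stock = {"tshirt-red": 3}   # stand-in for the inventory database

def inventory_service(event):
    """Check the database and reply with in_stock or no_stock."""
    return "in_stock" if stock.get(event["sku"], 0) > 0 else "no_stock"

bus.subscribe("order.created", inventory_service)
print(bus.publish("order.created", {"sku": "tshirt-red"}))   # ['in_stock']
```

In production this bus would be a broker such as RabbitMQ, and each handler a separate service; the security model has to permit exactly these broker-to-service flows.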

Event-Driven Architecture Security Modeling Approach

Along with the generic Zero Trust approaches, event-driven architecture comes with a more intricate communication flow between services. There has to be a clear understanding of the requirements and of which services publish and subscribe to which events. A fundamental study should be made of the required flows. Fortunately, VMware NSX has well-designed tools to take care of the exhaustive task of identifying the flows. Using a combination of Application Rule Manager (ARM) and Log Insight, you can gain a fair understanding of the flows and create firewall rules based on them.

There are third-party tools like Tufin that do a similar job. Depending on the complexity of the setup, more in-depth analysis and care should be taken in the preparation phase of configuring NSX security.

Microservices Architecture

Microservices architecture is the new default cloud-based architecture. Designing microservices means splitting a monolithic application into smaller services. It's based on the UNIX philosophy of “do one thing and do it well.” You could argue that the microservices design pattern is a natural progression as we have moved toward containers and cloud-based deployments. Both user expectations and load have increased in the current Internet era. With everything online, there is constant pressure to add new features frequently. A decade back, there wasn't such an urgent need to add new features; the emphasis was on security and optimization of resources. But as the Internet landscape has changed, the new generation is more willing to move from one product to another without hesitation. If the product they are using doesn't meet their expectations, they quickly switch from one e-commerce website to another, especially if the website is slow or not easy to use.

As a result, there are plenty of changes happening on the infrastructure and application sides. We can’t consider security as isolated from all the changes happening outside. The security model has to incorporate the design changes happening elsewhere.

Microservices Architecture Security Model Approach

Designing a security model for a microservices-based architecture is time-consuming. There can be multiple REST API calls happening between applications at any given time. With Zero Trust, it is important to take every flow into account. This is where automation and ease of configuration come to assist; in such cases, you need to make heavy use of automation. NSX has a good feature set for automation in the form of security tags, Service Composer, and REST API integration. All these features can be used to add and remove virtual machines into and out of security groups based on the application's needs. Given the complex nature of microservice call flows, automation is a must when implementing security.
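As a hedged sketch of what such automation might look like, the following builds the REST call that moves a VM into a security group. The manager hostname and object IDs are placeholders, and the endpoint path follows the NSX-v style API; verify it against the NSX API guide for your version before use:

```python
# Build (but do not send) the NSX-v style REST call that adds a VM to a
# security group. Hostname and IDs are placeholders; verify the endpoint
# against the NSX API guide for your version.
NSX_MANAGER = "nsx-manager.example.local"   # placeholder hostname

def add_vm_to_security_group(sg_id, vm_id):
    """Return the (method, url) pair for the membership update call."""
    url = (f"https://{NSX_MANAGER}/api/2.0/services/securitygroup/"
           f"{sg_id}/members/{vm_id}")
    return ("PUT", url)

method, url = add_vm_to_security_group("securitygroup-10", "vm-1234")
print(method, url)
# With the requests library, this could then be sent as, for example:
#   requests.put(url, auth=(user, password), verify=ca_bundle)
```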

Managing and Monitoring Servers

Take great care with the placement and security of your management servers. All your infrastructure management servers should be well protected, with access restricted to LDAP users. Allowing admin and default credential access is probably the most irresponsible thing anyone can do. A data center without well-defined processes and policies for commissioning hardware will fall into this trap. Companies that are building new data centers usually end up missing some of these key points.

For instance, access to each computer or switch should be restricted to valid users only. All these servers should be in a separate management VLAN, but that is not a valid reason for allowing default credentials. The password-management policy should be followed across all hardware.

I propose a jump server-based management setup. This simplifies applying all your firewall and access controls to a single server or group. The jump server can be properly deployed with tightened security controls and the latest patches applied, so you can be confident that there are no loopholes in it. Users with LDAP access log in to the jump server and, from there, navigate further to other servers for management purposes. Even then, only the flows that are required should be allowed from these servers. A jump server shouldn't have full control over the infrastructure. You can divide jump server access according to the organization's divisions. The monitoring team accessing the jump server shouldn't have full control over the servers; their job is to monitor and report any issues. Likewise, the security team, patching team, and so on should have only the permissions required to do their roles. Figure 5-2 shows the access model view.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig2_HTML.jpg
Figure 5-2

Access model view

With the introduction of DevOps, IT teams often have a blend of development and operations skillsets. In such models, you need to take extra care when giving diverse teams control over the infrastructure. Deployment models like infrastructure as code are excellent at simplifying manual processes. At the same time, they give more responsibility to the person who executes them; a manual error or grave mistake can lead to disaster in these cases.

Application Grouping

This chapter discussed the major application design architectures. There are other varieties of architecture that can be used for system design, such as microkernel architecture and space-based architecture. The point is that you need an initial understanding of the application architecture before you jump into the Zero Trust security model design.

Once you have the information you need and have held the initial discussions with the stakeholders, you can move on to the design of the Zero Trust security model.

It is advantageous to use an application-by-application, horizontal approach instead of trying to tackle everything at once. Start with a single application. Register all the servers in that application group. Once you have the list of servers, you are ready to start your work on that particular application.

In the next steps, you need to determine how each application interacts with other blocks in the data center, such as with other applications, management servers, logging servers, and monitoring and audit servers. It is important to consider the management applications, as well. By design, you should not allow anyone or any groups to have free access to any part of the infrastructure they don’t need.

Security Groups

Security groups form a vital part of the NSX server grouping. You already read about how to create security groups. One of the initial steps in any design is to identify on what basis you will group the servers. This is important, as it forms a core part of the security system’s design. The grouping has to be scalable and reliable for a long time. Changing the grouping policies every time you add a new application or extend an existing application is not sound design.

The approach you use here can also depend on the application architecture in general. For a three-tier architecture, it is common to group webservers into one security domain and app servers into another. This method might not work if the application architecture doesn’t strictly follow a layered model.

Naming is also important in modeling. Many organizations use a naming convention but aren't detailed in the descriptions of the names. This can work if the information is well documented and updated regularly. Manual errors often begin with outdated documents. A new employee won't be able to identify the purpose of a security group if the name is not well defined.

Security groups (SGs) and service names have to be meaningful and should include the purpose and the application to which they apply. Also take care that you don't end up with SG sprawl: if you start creating a security group for each VM, you'll soon have too many SGs and it will be difficult to identify which ones need to be applied. A rule of thumb is to limit security groups to the web, app, and database application tiers.
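One lightweight way to keep names meaningful and consistent is to generate them from their components and validate them against the convention. The SG-org-app-tier-env pattern here is an assumed example; adapt it to your own standard:

```python
import re

# Illustrative naming helper enforcing SG-<org>-<app>-<tier>-<env>.
# The pattern is an assumption; adapt it to your organization's standard.
SG_NAME_RE = re.compile(r"^SG-[A-Z]+-[A-Za-z0-9]+-(Web|App|DB)-(Prod|Dev)$")

def sg_name(org, app, tier, env):
    """Build a security group name and reject non-compliant ones."""
    name = f"SG-{org}-{app}-{tier}-{env}"
    if not SG_NAME_RE.match(name):
        raise ValueError(f"non-compliant security group name: {name}")
    return name

print(sg_name("TSC", "Catalog", "Web", "Prod"))   # SG-TSC-Catalog-Web-Prod
```

Running such a check in your automation pipeline catches badly named groups before they reach NSX.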

The T-Shirt Company Example

You have read about the prerequisites and general recommendations to follow before trying to deploy the Zero Trust networks. Now it’s time to delve more deeply into how all these pieces work together in a live production setup.

You have learned about security groups, services, Service Composer, DFW, third-party integrations, and other useful features. In a live production setup, these act as an arsenal in the hands of a security consultant. Consultants have to use the right tools to get the right defense model per application. As discussed, there is no one-size-fits-all approach in security. Even though you need to give equal importance to all applications, it is a fact that some applications are more critical than others.

This section uses a fictional T-shirt company to illustrate this process. It’s an online e-commerce store that sells T-shirts.

Here are the assumptions about this example:
  • This model is by no means a standard that can be applied everywhere.

  • The use case creates some scenarios to help you understand how Zero Trust solves these problems.

  • The problems can be solved using many techniques, but this example mentions only one of the best ways.

  • This is by no means a standard design practice from VMware. For that, refer to the VMware designing guidelines.

  • The Zero Trust model and VMware NSX design are evolving in each version. The versions have to be checked for any improvements or outdated features.

  • This use case is security-focused and ignores other critical infrastructure components.

  • The application architecture model mentioned here is generic and is by no means a standard e-commerce application architecture.

The first question is often, “Why can't we put everything into a public cloud?” You could use AWS, Google Cloud, or Azure to achieve this. The question seems reasonable to a normal user, as the main advantage that public cloud vendors profess is that you don't need to manage your infrastructure any more; that is done by the service provider.

If you learn one thing and only one thing from this book, it's this: all the servers, storage, applications, databases, programming languages, and everything else used to build an end-to-end web application must generate revenue for the funding company. No one uses a shiny new tool just because it is new and achieves some features that weren't there before. The real question almost all CTOs will ask is how it will affect the bottom line. Can the company generate some percentage increase in revenue? Only if there is a financial benefit will most organizations go forward with the expenditure.

The choice between a public cloud and an on-premises system depends on the business requirements and challenges. There is no one-size-fits-all solution.

Infrastructure for the T-Shirt Company

As portrayed in Figure 5-3, the T-Shirt company infrastructure is a scalable three-tier application. The application contains modules and services that do a specific job and help the overall system work seamlessly.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig3_HTML.jpg
Figure 5-3

T-Shirt company architecture

From an individual component's point of view, each service can act and run independently. For example, the Catalog application runs in a separate virtual machine and uses a separate database (or a database shared across the application). The isolation you need depends purely on the applications.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig4_HTML.jpg
Figure 5-4

T-Shirt company process flow

As detailed in Figure 5-4, the user request flow looks like this:
  1. A user requests the IP address of www.tshirtcompany.com from the DNS server.

  2. The DNS server responds with the public IP address of the load balancer, which is the entry point for the infrastructure.

  3. Once the user has the IP address, the browser connects to that public IP address and thereby to the load balancer.

  4. The load balancer redistributes the request to the backend servers according to their load.

  5. The request hits the frontend webserver and the user is welcomed with a web page with various options.

  6. The user can search through the catalog and filter the products based on her interests. All the catalog entries are served by the Catalog application, which has a MySQL DB backend.

  7. Once the user has decided on the T-shirt to buy, she can add the product via the Cart application.

  8. After all the required products are added to the cart, the user proceeds with ordering using the Order application. The Order application has a MongoDB backend.

  9. The checkout includes different payment options that the user can use to place the order.

  10. Once the order is complete, it is shown on the Order application page. This also confirms the status of the request.

  11. Separate applications can be used to track the shipping status and the successful completion of the request.

  12. These can be used in conjunction with a messaging queue, which can subscribe to any events about the order status and display them accordingly.

  13. The order is successfully shipped and marked as completed.

  14. The user can see the ordered products in her account, including relevant details like the shipment address.

  15. Options to create a new user, apply coupon codes, and so on, can be added to the design as needed.
These points represent the request flow of an e-commerce application. According to the requirements, you can scale and add complexity to the application as you want. It is up to the application developers and business owners to decide how many features they need to attract new customers. In the modern e-commerce landscape, customers are looking for new and improved user experiences to keep them on the same website.

Say a competitor comes up with amazing new product recommendations based on order history, using AI and machine learning. Users might be tempted to gradually move toward those sites, as the competitors show them products they have an affinity for. This reduces the competitors' advertising costs and improves their margins.

The T-shirt company should be scalable and reliable enough to add such features with minimal effort. In a monolithic application, adding new features is a difficult task.

In this case study, the whole setup runs in a VMware SDDC. You are tasked with designing the security model to protect these applications.

The T-shirt company purchased VMware NSX and a related toolset to better aid everyone involved in this project. The toolset includes the following:
  • VMware vSphere license

  • VMware NSX license

  • VMware Log Insight

  • VMware Network Insight

  • TrendMicro/Panorama Checkpoint third-party product

License requirements and related information can easily be obtained from the VMware website. Refer to the VMware official website for information about licensing.

The coming sections analyze each application and the security rules required. You already read about creating rules and security groups. The focus here is on the type of rules that need to be created.

The setup here assumes that this is a greenfield deployment. That means that this is a new project and is not in production yet. Brownfield deployments are what you do on top of existing applications. Both have their challenges.

In a greenfield deployment, you are dealing with unknowns. The setup is new, and even the application developers might not be totally sure how their application behaves.

In a brownfield deployment, you have an existing setup that you need to migrate, and the activity will be done on a live production setup. You need to be sure about the DENY rules and packet flows. Migrating the existing firewall rules from a traditional firewall to NSX DFW is a challenge; you might need to rewrite the entire ruleset in some cases.

The next step looks into the firewall rules that are required for the different services, such as those shown in Figure 5-5:
  • Load balancer

  • Frontend webservers

  • Catalog application

  • Order application

  • User application

  • Cart application

  • Payment application

  • Shipping application

  • MySQL databases

  • MongoDB databases

  • RabbitMQ messaging queue

  • Management and monitoring servers

../images/483938_1_En_5_Chapter/483938_1_En_5_Fig5_HTML.jpg
Figure 5-5

T-shirt company server inventory

Load Balancer

This is the first entry point for all requests that hit the T-shirt company (TSC) infrastructure. The load balancer can be physical or virtual. NSX has built-in load balancer functionality, with an option to enable multiple load balancer types. Inline and one-arm load balancer models can be deployed within an NSX setup without any additional hassle. The choice always depends on the use case. As discussed, some infrastructure needs to be extremely scalable and support resource-intensive operations like SSL offloading or complex health checks of the server pools. These options are shown in Figure 5-6.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig6_HTML.jpg
Figure 5-6

NSX load balancer options

If the frontend application demands resource-intensive operations, physical appliance-based load balancers like Citrix NetScaler or F5 have to be used. This resolves one problem, but keep in mind that managing and maintaining these physical appliances is outside the scope of NSX, and you'll need to take care of them individually. There are separate command sets and management tools for administering such load balancers.

For end-to-end infrastructure automation, this might be a roadblock. In an immutable infrastructure setup, the automation tool should be capable of deploying the entire infrastructure stack using scripted methods; a hardware appliance won't be able to take part in that process.

All these points need to be kept in mind when you select your load balancing tools. The virtual appliances of popular load balancers can also provide most of the feature sets available in their physical counterparts. Figure 5-7 shows the backend.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig7_HTML.jpg
Figure 5-7

Load balancer backend

The load balancer has to forward requests using an algorithm best suited to the application. You can use a round-robin algorithm, where each request is forwarded in turn to the backend pool of servers. There is usually a health check, which can be based on a specific port, ICMP, or an HTTP request. Its purpose is to identify whether the backend pool servers are available and healthy enough to receive packets for further processing.
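The round-robin selection over healthy backends described above might be sketched like this; the server names and the health predicate are stand-ins for a real port, ICMP, or HTTP probe:

```python
import itertools

# Round-robin over only the healthy members of a backend pool. The
# health check is a stand-in predicate; in practice it would probe a
# port, ICMP, or an HTTP endpoint.
class RoundRobinLB:
    def __init__(self, backends, health_check):
        self.backends = backends
        self.health_check = health_check
        self._cycle = itertools.cycle(backends)

    def next_server(self):
        # Try each backend at most once per selection.
        for _ in range(len(self.backends)):
            server = next(self._cycle)
            if self.health_check(server):
                return server
        raise RuntimeError("no healthy backends in the pool")

healthy = {"web-01": True, "web-02": False, "web-03": True}
lb = RoundRobinLB(["web-01", "web-02", "web-03"], lambda s: healthy[s])
picks = [lb.next_server() for _ in range(4)]
print(picks)   # ['web-01', 'web-03', 'web-01', 'web-03']
```

Note how the unhealthy server is skipped transparently; once its probe succeeds again, it rejoins the rotation without reconfiguration.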

If the load balancer sits outside of NSX, the firewall rules have to be configured in the customer’s firewall. In this case, you can configure the rule in the edge device, as the packets to a particular transport zone must enter through the edge appliances.

The idea is to drop the packet as early as you can, before allowing it to enter into the DC.

Edge Firewall Rules
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig8_HTML.jpg
Figure 5-8

NSX edge firewall

The rules you define in the edge firewall (see Figure 5-8) are matched against the packets that enter the NSX environment. Packets that don't match any rule, as well as malformed packets, are discarded at this stage.

Here are the required steps:
  1. Identify the flow from the load balancer to the backend pools.

  2. Define the ports on which the frontend webservers will be listening.

  3. Create a service for those ports.

  4. Apply the rule in the edge firewall.

  5. There is a default DENY at the end, so any traffic that is not destined for the frontend servers will be dropped.

This example has only considered the production web frontend servers. If there are other physical servers or monitoring servers outside the VMware NSX environment, they have to be included in the rule. Otherwise, the flow will be blocked by default.
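The evaluation order implied by these steps, first match wins with an implicit DENY closing the list, can be sketched as follows; the rule fields and ports are illustrative assumptions:

```python
# First-match rule evaluation with a final default DENY, mirroring the
# steps above. Rule fields and port numbers are illustrative assumptions.
RULES = [
    {"dst": "frontend-web", "port": 443, "action": "ALLOW"},
    {"dst": "frontend-web", "port": 80,  "action": "ALLOW"},
]

def evaluate(dst, port):
    """Walk the ordered rule list; the first match decides the action."""
    for rule in RULES:
        if rule["dst"] == dst and rule["port"] == port:
            return rule["action"]
    return "DENY"   # the implicit default DENY at the end of the list

print(evaluate("frontend-web", 443))   # ALLOW
print(evaluate("db-server", 3306))     # DENY
```

This is why any flow from servers outside the NSX environment must be covered by an explicit rule: anything that falls through to the end of the list is dropped.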

Frontend Webserver Pool

Webservers sitting at the frontend are often the most targeted servers: to attack the application, an intruder first has to get inside, and can then spread through the system. Webservers are often placed in the demilitarized zone with high levels of security. A hacker might target these webservers in order to deface the website, causing the company international embarrassment. There may not be any intention to crash systems or steal assets, but as you know, negative news can be just as harmful. If the incident happens to a financial company, it can be a big hit to its trust factor.

In most cases, you only have to focus on the ingress and egress flows for the webserver pool. Frontend servers have to be extremely scalable by design. When the load increases, the stateless nature of the application should aid in creating multiple webserver copies according to the incoming request bandwidth. Security should scale along with this; autoscaling is one of the most used words nowadays in distributed system design.

If you enable this kind of feature in your webserver design, be sure to enable it in the security modeling too. For instance, if five additional webservers have to be created to meet bandwidth demands, make sure all of them are automatically included in the webserver security groups, so that the policies are applied to the respective vNICs.
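A dynamic-membership criterion of that kind might be sketched as follows; the naming pattern and inventory are assumptions, and in NSX this would typically be a security group with dynamic membership based on VM name or security tag:

```python
# Any VM whose name matches the criterion is included automatically, so
# autoscaled copies inherit the web-tier policies without manual edits.
# The "web-" naming pattern and the inventory list are assumptions.
def web_sg_members(vm_inventory):
    return [vm for vm in vm_inventory if vm.startswith("web-")]

inventory = ["web-01", "web-02", "app-01", "db-01"]
print(web_sg_members(inventory))   # ['web-01', 'web-02']

# After autoscaling adds two more webservers, no policy change is needed:
inventory += ["web-03", "web-04"]
print(web_sg_members(inventory))   # ['web-01', 'web-02', 'web-03', 'web-04']
```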

In doing so, you ensure that the security standards and policies are met. Figure 5-9 shows the frontend flow.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig9_HTML.jpg
Figure 5-9

Frontend webserver’s connectivity flows

You can apply the same process to a distributed firewall.

These are also micro-segmentation/Zero Trust policy rules.

Figure 5-10 shows the frontend rules.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig10_HTML.jpg
Figure 5-10

Frontend webserver’s DFW rules

Once a rule is published on all the hosts, the vNIC will start filtering packets based on the ruleset.

Catalog Application Server

Catalog application servers are responsible for listing inventory from the MySQL database in the backend. All inventory should be recorded in the database.

Inventory can be broken down into multiple sections. For example:
  • New items

  • T-shirts on discount

  • Regular polo T-shirts

This can be designed and pushed into the schema of the database table. The webserver-to-application-to-database connectivity is a generic three-tier architecture, so you need to understand which other applications need to access the catalog. Users need to log in and connect to the catalog server to access the products. Shipping agents need to be able to update the inventory and mark a product as out of stock when that is the case.

These applications have to talk to each other to keep one another updated. As mentioned, the catalog has to be updated based on availability and quantity. If this synchronization does not happen seamlessly, the whole system will break down sooner or later. Figure 5-11 shows the catalog flow.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig11_HTML.jpg
Figure 5-11

Catalog server’s connectivity flows

The same rules can be deployed in DFW as micro-segmentation/Zero Trust policy rules.

Figure 5-12 shows the catalog server rules.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig12_HTML.jpg
Figure 5-12

Catalog server’s DFW rules

The catalog server has direct connectivity to the database that stores all the inventory details. This can be treated as one of the critical connection points. Even with Zero Trust policy rules, all the required hardening has to be done at the database level to ensure that there are no other vulnerable ports exposed in the system that can be accessed through the catalog application.
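One simple way to check for the exposed ports mentioned above is to audit a database server's listening ports against an allowlist. This is a hedged sketch: the allowlist (3306 for MySQL) is an assumed default, and in practice the listening ports would come from a scan or an inventory tool rather than a hard-coded list.

```python
# Sketch: audit a database server's listening ports against an allowlist.
# The allowlist (3306 for MySQL) is an assumption; adjust it to the
# services the catalog tier is actually permitted to reach.

ALLOWED_DB_PORTS = {3306}  # MySQL only

def exposed_ports(listening_ports, allowed=ALLOWED_DB_PORTS):
    """Return ports that are open but not on the allowlist."""
    return sorted(set(listening_ports) - allowed)

# e.g., ports discovered via a scan: MySQL plus a forgotten debug service
print(exposed_ports([3306, 8081]))  # [8081]
```

Anything the audit returns is a candidate for hardening at the database level, independent of the DFW rules already in place.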

Cart Application Server

Once the user determines which product she wants to buy, she usually adds the item to the cart. Users can add multiple items to their carts. Users then check out once they are finished shopping.

This means the cart needs to be stored somewhere. You can use a MongoDB database for this purpose. MongoDB is a document-store database with its own advantages over MySQL, although MySQL would work for this purpose as well.

There are multiple checkpoints to consider before adding an item to the cart. The system must first check the inventory/catalog for stock availability, then determine how long the product will take to reach the user. This means there is a bidirectional connection between the cart and catalog applications. You can add other features, like cart expiry, to the system; since they don't impact the security model discussed here, this section skips those details. Figure 5-13 shows the cart application's server connectivity flows.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig13_HTML.jpg
Figure 5-13

Cart application’s server connectivity flows

These are also micro-segmentation/Zero Trust policy rules.

Figure 5-14 shows the cart application’s server DFW rules.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig14_HTML.jpg
Figure 5-14

Cart application’s server DFW rules

User Application Server

Once the user decides to check out, she has to verify the details before starting the order process. Users have to be registered at the site. If they are not logged in yet, they have to log in and access the cart. Users also have to update their address information if necessary. The user application server can save the user details on the MongoDB backend.

As discussed, there are many advanced features you can enable on a per-user basis. You can apply a discount for loyal users who place regular orders, or encourage new users to visit regularly. All this falls under the application design. From a security perspective, safeguarding the user credentials is important. All relevant measures have to be taken to ensure the database is hardened enough.

If the user is not registered, she has to be redirected to a registration page to enter the details.

Figure 5-15 shows the user application’s server connectivity flows.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig15_HTML.jpg
Figure 5-15

User application’s server connectivity flows

These are also micro-segmentation/Zero Trust policy rules.

Figure 5-16 shows the user application’s DFW rules.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig16_HTML.jpg
Figure 5-16

User application’s DFW rules

Order Application Server

The next step is to place the order, and the necessary application checks have to be performed here as well. This is a business-critical application. A user who casually browses the catalog may not buy something every time, but a user who decides to check out is going to place an order. The IT infrastructure has to be flawless and completely secure during this operation; this is where the company earns its revenue and the employees get paid.

The longer the user has to wait before ordering a product, the more frustrated she will become. No business wants to lose a purchase at this point.

All the order information is again saved in a MongoDB database.

Figure 5-17 shows the order application’s server connectivity flows.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig17_HTML.jpg
Figure 5-17

Order application’s server connectivity flows

These are also micro-segmentation/Zero Trust policy rules.

Figure 5-18 shows the order application’s server DFW rules.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig18_HTML.jpg
Figure 5-18

The order application’s server DFW rules

Payment Application Server

The payment application usually forwards the request to third-party banking interfaces. If the payment application saves credit card details, it must employ strong security and encryption methods so that the user data is secure. There have been cases where user information (such as credit card details) was stolen from websites. If you are going to save this data, make sure you have strict methods in place to secure it. The purpose of the payment application is to interact with the banking interface, perform the transaction, and update the response.

The responses of the banking interfaces are not in your control, but the necessary rules and ports need to be opened for the transaction to happen.

This section doesn’t list all the details required for payment applications. Only general connectivity flows are listed, as shown in Figure 5-19.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig19_HTML.jpg
Figure 5-19

Payment application’s server connectivity flows

These are also micro-segmentation/Zero Trust policy rules.

Figure 5-20 shows the payment server’s DFW rules.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig20_HTML.jpg
Figure 5-20

Payment server’s DFW rules

Shipping Application

Once the order has been placed, the request has to be updated to the shipping application. It is the responsibility of the shipping application to keep track of the request updates. For this, you can use a messaging queue system as an asynchronous communication system for updates (see Figure 5-21).
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig21_HTML.jpg
Figure 5-21

The shipping application uses the RabbitMQ messaging queue

A messaging queue is used in distributed systems for asynchronous communication. In this example, each application subscribes to a particular queue for changes. When any connected application publishes a change to the queue, all subscribers are updated. Figure 5-22 shows the shipping application's server connectivity flows.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig22_HTML.jpg
Figure 5-22

Shipping application’s server connectivity flows

These are also micro-segmentation/Zero Trust policy rules.

Figure 5-23 shows the shipping application’s DFW rules.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig23_HTML.jpg
Figure 5-23

Shipping application’s DFW rules

Database and O&M Servers

Operations and management server connectivity can be complicated, and port connectivity has to be application-specific. It might seem convenient to allow management systems full access, but this would most likely create a big loophole in your Zero Trust system. Hackers normally look for a management server as an easy way into a system, so this particular loophole could be dangerous and could act as a single point of failure.

Make security groups specific to the management and monitoring servers and apply only the required rules.

You have learned about each component and its respective flows. This gives you the requirements for designing better security solutions.

All servers can be added to the nested security groups. Another rule can be created for the traffic to management and monitoring servers (see Figures 5-24 and 5-25).
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig24_HTML.jpg
Figure 5-24

O&M DFW rules

../images/483938_1_En_5_Chapter/483938_1_En_5_Fig25_HTML.jpg
Figure 5-25

Default DENY rule

Horizon VDI

In addition to the infrastructure servers, what if the T-shirt company wanted to use an end-to-end VMware solution with a VMware Horizon desktop infrastructure? You would need to add similar rulesets to make sure you are covering all the RDP sessions and scrutinizing the flows before requests enter the server farm.

An additional security group with Horizon clients can be included in the design; the required flows have to be decided based on the LDAP user types. For example, a normal user shouldn’t have unrestricted access to all the servers in the system, but an administrator should be able to access the servers and perform maintenance tasks.
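The role-based access described above can be sketched as a mapping from LDAP group to reachable security groups. The group and role names here (`vdi-users`, `sg-mgmt`, etc.) are hypothetical; in NSX this mapping would be expressed through Identity Firewall rules rather than code.

```python
# Sketch: map LDAP user types to the security groups their Horizon VDI
# sessions may reach. All group and role names are hypothetical.

ROLE_ACCESS = {
    "vdi-users":  {"sg-web"},                          # normal users: frontend only
    "vdi-admins": {"sg-web", "sg-app", "sg-db", "sg-mgmt"},  # maintenance access
}

def allowed_destinations(ldap_groups):
    """Union of security groups reachable via the user's LDAP groups."""
    allowed = set()
    for group in ldap_groups:
        allowed |= ROLE_ACCESS.get(group, set())
    return allowed

print(sorted(allowed_destinations(["vdi-users"])))  # ['sg-web']
```

An unknown LDAP group yields no access at all, which matches the Zero Trust default of denying anything not explicitly allowed.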

As discussed at the beginning of the chapter, most attacks originate from an internal network. Nowadays, most companies have mandatory security courses. But because this involves people from many backgrounds, not everyone is sufficiently aware of the risks of clicking an attachment in a spam email. Most people think their security infrastructure or anti-virus software will take care of everything.

It is not a single desktop system you should be worried about. In most cases, modern malware is created to have maximum impact. Unlike the previous generation of virus programs, new programs try to spread through the network first instead of crashing a single system.

Zero Trust policies and security groups, along with their automated security, have to be prepared for these attacks.

Handling Scalability

With any modern infrastructure, the need to add servers for scalability can arise almost instantly. In the T-shirt application, you have to take this into account when creating security groups. Each application can be added to a security group that carries the appropriate firewall rules.

If you use the service composer in conjunction with this approach, the results can be phenomenal. When you create a new webserver or a new application server, you can add specific tags that place the servers automatically in the security groups. The DFW policies are added to the vNIC automatically as well.

This practice has to be carried out across all the application services and security groups. As the number of firewall policies grows with the number of servers, this will help you reduce manual tasks. Designing with automation in mind from the start gives you an advantage in later phases.

Brownfield Setup

In many cases, a customer wants to migrate from a traditional setup to a software-defined network architecture. The advantage here is that you know the existing setup, including its strengths, its challenges, and the problems you are trying to solve. But a good amount of preparation is needed to start the project. There is no plug-and-play solution, so you need to plan the phases and design changes.

Understanding the Current Architecture

One key point: make sure that all the relevant documentation is available and up to date before you even think about changing things. Changing part of the setup and then trying to roll back, only to find there are no references or documentation on how it was configured before, is a disaster. Up-to-date documentation is key.

Before tearing down the design, try to understand which parts of the application work well and what advantages you get compared to a traditional perimeter appliance-based design. A hardware appliance running on a dedicated physical server can deliver great performance, and firewall vendors offer proprietary feature sets that can be helpful in solving problems. These features may not be available in NSX, but knowing what you don't have is as important as understanding what you do have. You are not looking for a feature-for-feature replacement. Identify the weaknesses in the design. When you have a comprehensive understanding of the strengths and weaknesses, you are in a good place to start planning.

AS-IS and TO-BE detailing is always helpful at the beginning of a project. Even a SWOT analysis of the two approaches will give you a fair idea of where you stand and where you are going to end up. This helps you convey a clear picture of what to expect, even to non-technical managers.

You could be bombarded with a lot of marketing terms and end up believing that the tool can do almost everything. In that case, you will end up being disappointed, even when the newly deployed stack is a considerable improvement in the design. Avoid such pitfalls by including all the stakeholders and ensuring that they understand what they are going to get at the end.

Be sure to back up the current configuration using the export options available in the firewall appliance. VMware NSX has very little support for importing rules from other firewall appliances, but an exported ruleset in a familiar format will help when you recreate the distributed firewall rules.
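Working from such an export can be sketched as below. CSV is a common export format, but the column names here are assumptions; NSX will not import this file directly, so the parsed rules only serve as a reference while recreating the DFW ruleset by hand or via the API.

```python
# Sketch: parse an exported legacy ruleset into dictionaries you can
# work from when recreating DFW rules. The CSV columns are assumptions
# about a typical firewall export; NSX itself will not import this file.
import csv
import io

legacy_export = """name,source,destination,service,action
web-in,any,sg-web,HTTPS,allow
app-db,sg-app,sg-db,MySQL,allow
"""

def parse_legacy_rules(text):
    """Return each exported rule as a dict keyed by column name."""
    return list(csv.DictReader(io.StringIO(text)))

rules = parse_legacy_rules(legacy_export)
print(rules[1]["destination"], rules[1]["service"])  # sg-db MySQL
```

Having the old rules in a structured form also makes it easier to spot overly broad entries (such as `any` sources) that should be tightened in the new Zero Trust design.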

Record the current auditing and logging requirements. The same have to be implemented in the new setup, as the requirements for security audits are generic across the industry.

The latter part of the project can follow the greenfield approach to design and model the security architecture based on Zero Trust networks.

How Zero Trust Helps: An Example

This section covers the kinds of attacks that can happen to any Internet company. Distributed denial of service attacks happen daily all over the world. To coordinate such attacks, hackers typically take control of network infrastructure; sometimes they get into an IoT network and use the IoT sensors to launch an attack.

They then use your network illegally. If a botnet were installed on your network without your knowledge, it could be used to launch attacks against other Internet companies. It is always crucial to have a defense mechanism against these common attack types.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig26_HTML.jpg
Figure 5-26

Example infrastructure

Consider Figure 5-26. You can even take the example of the T-shirt company’s infrastructure.

Without Zero Trust

Without Zero Trust, nothing blocks lateral movement between the machines. Once a server or workstation is infected, the virus or ransomware can easily spread throughout the network (see Figure 5-27).
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig27_HTML.jpg
Figure 5-27

Attacker gains access to the infrastructure

Because the network is not segmented, the attacker can move easily across the network until he hits a firewall policy (see Figure 5-28).
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig28_HTML.jpg
Figure 5-28

Malware is spreading through the network

Gradually, the malware looks for any shared service or credential it can use to hop onto another server or workstation and launch attacks. This continues until it meets a policy that prevents it. In a network with no segmentation, this type of attack can cause widespread destruction.

As you saw in the T-shirt company application, you deploy the firewall policies on all the vNICs. If the network is affected by malware, the spread is contained: the DFW filters out this traffic as unwanted and drops it at the vNIC itself, as shown in Figure 5-29.
../images/483938_1_En_5_Chapter/483938_1_En_5_Fig29_HTML.jpg
Figure 5-29

DFW identifies the attack and stops the spread
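The difference segmentation makes can be illustrated with a small reachability sketch. The hosts and permitted flows below are made up; the point is only that malware spread is a graph-reachability problem, and per-vNIC rules shrink the reachable set.

```python
# Sketch: breadth-first reachability from a compromised host, with and
# without per-vNIC Zero Trust rules. Hosts and flows are illustrative.
from collections import deque

HOSTS = ["web", "app", "db", "mgmt"]

def reachable(start, allowed_flows):
    """Return all hosts malware could hop to given the permitted flows."""
    seen, todo = {start}, deque([start])
    while todo:
        src = todo.popleft()
        for dst in HOSTS:
            if dst not in seen and (src, dst) in allowed_flows:
                seen.add(dst)
                todo.append(dst)
    return seen

# Flat network: everything can talk to everything.
flat = {(a, b) for a in HOSTS for b in HOSTS if a != b}
# Zero Trust: only the explicitly required flows are allowed.
zero_trust = {("web", "app"), ("app", "db")}

print(sorted(reachable("mgmt", flat)))        # ['app', 'db', 'mgmt', 'web']
print(sorted(reachable("mgmt", zero_trust)))  # ['mgmt']
```

A compromised management host on the flat network reaches everything; under the Zero Trust flow set, it reaches nothing, because no rule permits outbound traffic from it.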

Summary

This chapter explained the core ideas behind building a Zero Trust network. These ideas can be refined according to your project requirements and implemented across the VMware environment. The benefit of this design is very noticeable, as you have much more control over your network. The next chapter discusses the tools that can aid you in this process, covering the REST API and the automation options available with NSX.
