Security for Cloud and Wireless Networks

As technology has advanced, there has been a constant drive to reduce cost and maintenance effort while increasing efficiency, reliability, performance, and security. This drive shaped the evolution of the technology stack and gave rise to technologies such as cloud computing and wireless connectivity. Today, a large share of attacks on the corporate side are targeted at cloud instances, while unprotected wireless networks remain textbook entry points for threat actors looking to gain access to an organization's infrastructure.

In this chapter, we will analyze how each segment of a cloud or wireless network can be protected and the various strategies that can be implemented to defend them. To do this, we will use examples from cloud providers and security vendors such as AWS, Microsoft Azure, and CipherCloud.

The following topics will be covered in this chapter:

  • An introduction to secure cloud computing 
  • Amazon Web Services
  • Microsoft Azure security technologies
  • CipherCloud
  • Securing cloud computing
  • Wireless network security
  • Security assessment approaches
  • Software-defined radio attacks

Technical requirements

To get the most out of this chapter, you should be familiar with the following tools and platforms:

  • AWS cloud security components such as Amazon CloudFront, AWS Key Management Service, Amazon Inspector, AWS CloudHSM, AWS CloudTrail, Amazon CloudWatch, Amazon GuardDuty, Amazon Cognito, AWS Shield, AWS Artifact, and Amazon Macie.
  • Applications such as the Cisco Wireless Security Suite, WatchGuard Wi-Fi Security, SonicWall Distributed Wi-Fi Solution, Acunetix, aircrack-ng, Cain and Abel, Ettercap, Metasploit, Nessus, Nmap, Kismet, and Wireshark.

You are also encouraged to explore the services offered by leading cloud security providers to gain an understanding of the overall offerings in the market, including Bitglass, Skyhigh Networks, Netskope, CipherCloud, and Okta.

An introduction to secure cloud computing

Cloud computing is the latest technological advancement fueling digital transformation for most organizations, almost all of which now have at least some service delivered by leveraging the cloud. Most are driven to do so by the various benefits associated with moving to the cloud, such as the following:

  • Resource scalability
  • Reduced operational cost 
  • Reduced infrastructure maintenance cost and effort
  • Storage efficiency and accessibility
  • Efficient BCP/DR
  • Control retention options such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)
  • Security features such as encryption, VPNs, and API keys
  • Regulatory and compliance requirements

While we embark on the journey toward cloud security, do keep in mind the following considerations:

  • Understand the existing technology stack: A good foundational understanding of the existing stack is essential because, as you move to the cloud, most of the foundational processes and IT operations will remain the same. Hence, any contradictions should be clearly articulated and understood.
  • Understand what changes are required and how to deal with them: While moving to the cloud, many operations might change in terms of how you exert control over them. Understand these changes well so that you have steady-state operations from day one.
  • Security apparatus: Ensure that the information security, Identity and Access Management (IAM), Privileged Access Management (PAM), testing, vulnerability management, and compliance teams, along with everyone else, are on board with the decision and know the part they each need to play.
  • Moving to the cloud is not inherently secure by itself: You will need to fine-tune your monitoring and alerts to be adequately aware of ongoing activities. Make sure your team is ready with the requisite knowledge and skills, ensure you have clear communication with the vendor, and understand the shared responsibility model.

Besides keeping the preceding points in mind, there are a few steps you need to take in creating a secure architecture:

  • Firstly, understand the organization's business goals and objectives, as these will be the primary driver of cloud adoption and, consequently, of its security.
  • Secondly, understand the IT strategy and align your plan to it.
  • Thirdly, make a clear distinction as to how the cloud structure will be constructed. What are the trust areas and relationships? Do you see a zero-trust model as feasible? Is the business ready for it?
  • Lastly, what are the regulatory and compliance requirements, and are there any other internal or external factors that may influence your plan? Take account of them and plan accordingly.

While trying to understand how to secure your cloud deployment, besides adopting the preceding steps, it is also important to make a clear distinction between your responsibility as an organization and the responsibility of the cloud service provider. AWS has come out with a shared responsibility model that demonstrates this. Let's take a closer look.

AWS' shared responsibility model

AWS' shared responsibility model for the cloud is an industry-standard responsibility model that demonstrates who is responsible for what in a cloud service engagement. It helps everyone to understand and conceptually separate out critical aspects such as compliance, security management, and accountability in the cloud service engagement. The following diagram demonstrates the AWS shared responsibility model (https://aws.amazon.com/compliance/shared-responsibility-model/):

Before we dive into details of the different vendors that provide us with cloud solutions, let's quickly take a look at some of the security concepts and attributes associated with AWS cloud services and how we can implement and fine-tune these for a better security posture.

Major cybersecurity challenges with the cloud

Cloud computing has changed the game in the computing world and in how we deliver and receive services at the click of a button over the web. This has contributed to various gains from a business perspective, such as massive cost savings, speed of service delivery, and the ease of scaling infrastructure and other services up or down as business needs change.

However, the picture is not all rosy and there are various challenges from a security perspective that haunt cloud solutions. Let's see a snapshot view of some of these issues:

  • Shortage of skilled workers: As an industry, we are struggling to fill positions that demand proficiency in both information security and cloud technology. Hence, organizations should look at hiring candidates with a strong skillset in one domain and then training them in the other as per business requirements. Alternatively, they can outsource one of the functions to a third-party service provider, or hire specialists in each domain and build teams that work in conjunction.
  • Privacy challenges: Keeping track of your data in real-time is a big concern for all organizations that are using such services. Now with the implementation of the various data privacy laws, it is more important than ever to pay sufficient attention to ensuring that you can track and restrict how your data is stored, processed, and decommissioned. Multi-Factor Authentication (MFA) and encryption can help with such concerns to some extent, but the real answer is to carry out a full tactical review of the cloud engagement and the business processes associated with your data.
  • Insecure APIs and integrations: Apart from how APIs are developed and used in cross-functional operations and integrations, it's also important to ensure that API tokens and keys are managed securely. Access control and authentication, along with the monitoring of activities, will ensure coverage against these issues (a minimal secrets-management sketch follows this list).
  • Compliance: Regulatory and compliance requirements, when translated into the cloud domain, can be a different beast to tackle. Hence, it is important for organizations to understand how the cloud service provider ensures compliance with all the regulatory and compliance requirements that you need to adhere to and what evidence they can provide for this being the case. To achieve this, you can task an internal team to assess and verify that these controls work effectively and are complementary to the compliance requirement. Also, keep tabs on the platform and service changes by the service provider and how they impact your compliance status. 
  • Visibility: Organizations often feel a lack of control over their data, assets, and services once they've been outsourced to a cloud provider. Hence it's important, before the actual engagement, that you spend some time to understand how they operate, respond, and act when a certain deviation from business-as-usual happens. Also, ensure that you run through common scenarios and conduct tabletop exercises to measure and plan how things will go down in an actual scenario. This should be complemented with good compliance and security practices.
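
As a small illustration of the API keys point from the preceding list, the following is a minimal Python sketch (using boto3) that retrieves an API key from AWS Secrets Manager at runtime instead of hardcoding it; the secret name and key field are hypothetical placeholders:

import json
import boto3

# Hypothetical secret identifier; replace with one that exists in your account.
SECRET_ID = "prod/payments/api-key"

def get_api_key(secret_id: str = SECRET_ID) -> str:
    """Fetch an API key from AWS Secrets Manager rather than hardcoding it."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    secret = response["SecretString"]
    try:
        # Secrets are often stored as JSON documents.
        return json.loads(secret)["api_key"]
    except (ValueError, KeyError):
        # Fall back to treating the secret as a plain string.
        return secret

if __name__ == "__main__":
    print("Retrieved key of length:", len(get_api_key()))

Rotating the secret then requires no code change, and access to it can be audited via CloudTrail.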

Besides the aforementioned challenges, there are a few other concerns that surround the move to cloud transformation. Some of the major ones are as follows:

  • The management of sensitive data.
  • Sustained levels of compliance across the board.
  • Deploying proprietary technology over the cloud.
  • Shared cloud resources may be under stress on occasion.
  • Existing applications might require modification to be compatible with a distributed cloud architecture.

Other top threats to the cloud include data breaches; insufficient identity, credential, and access management; insecure interfaces and APIs; system vulnerabilities; account or service hijacking by using stolen passwords; malicious insiders; data loss; abuse and nefarious use of cloud services – and many more. Wire19 wrote an article on these threats, along with remediation steps for them, which can be found at https://wire19.com/10-biggest-threats-to-cloud-computing-2019-report/.

Now that we have talked about the preludes to a good cloud service engagement, let's take a look at one of the most widely used cloud service providers and understand what they offer. 

Amazon Web Services (AWS)

"When dealing with cloud vendors: trust, but verify."
Russian proverb

Amazon Web Services (AWS) is one of the largest cloud service providers in the world, leading the charts with about 32.3% of market share according to a report from Canalys. According to AWS, five foundational pillars together form the AWS Well-Architected framework. These pillars are important to discuss as they are universal principles that can and should be included in any operational setup. They are as follows:

  • Operational excellence: Focuses on the capability to operationalize and monitor infrastructure in order to deliver business services
  • Security: Focuses on the capability required for protecting business assets such as information and systems via risk modeling, assessment, and implementation of mitigation strategies
  • Reliability: Focuses on the ability of a system to recover from disruption by acquiring computing resources and mitigating the loss of service
  • Performance efficiency: Focuses on the effective utilization of computing resources for delivering services as demand changes
  • Cost optimization: Focuses on the ability to minimize operational costs by resources and process optimization

AWS has a security-focused approach toward cloud services, encapsulating key attributes. For each of these attributes, it offers a number of services as seen in the following table:

Attribute | Services offered
Infrastructure security | Built-in firewalls – Amazon VPC and AWS WAF
DDoS mitigation | Amazon CloudFront, Auto Scaling, and Route 53
Data encryption | Storage and DB services such as EBS and S3, with AWS Key Management Service (KMS) and AWS CloudHSM
Inventory and configuration | Security assessment services such as Amazon Inspector, and AWS Config to track and manage changes in the environment
Monitoring and logging | AWS CloudTrail and Amazon CloudWatch
IAM controls | AWS IAM, AWS MFA for privileged accounts, and AWS Directory Service
Performing vulnerability/penetration testing | Any compatible VAPT platform/tool

Next, we will take a deep dive into the various security features offered by AWS Cloud Security and see how they enhance the security posture for cloud implementation. 

AWS security features

AWS follows the security-by-design principle, some of the key aspects of which involve the following: 

  • Architecture: This is a centrally managed platform that runs a hardened Windows Server image. It also uses Hyper-V and runs Windows Server and Linux on guest VMs for platform services.
  • Patch management: Apply cyclic scheduled updates and comprehensive reviews of all changes.
  • Monitoring and logging: Perform alerting and monitoring of all security events, coupled with granular identity and access management.
  • Antivirus/anti-malware: Perform real-time protection, on-demand scanning, and monitoring on the cloud.
  • Threat defense: Perform big data analysis for intrusion detection and prevention, DoS protection, encryption, and cyclic penetration testing.
  • Network isolation: Restricted internet access by default, along with the use of network security groups, data segregation, and isolated VPNs.

Besides the aforementioned benefits, AWS provides a whole lot of advantages, some of which we will discuss in detail in the subsequent subsections. 

Well-defined identity capabilities

The idea is to ensure that only authorized and authenticated users are able to access applications and services. This would result in strategies such as the following:

  • Define a management policy for rolling out to users and groups (where a group is a logical grouping of users to which a group policy is applied), services (least privilege and granular controls), and roles (used for instances and functions).
  • Implementation of least privilege as a principle.
  • MFA on important accounts and services.
  • Usage of temporary credentials (when applicable) via AWS STS (a minimal sketch follows this list).
  • Utilize Access Advisor.
  • Usage of credentials management tools such as AWS Systems Manager, Secrets Manager, Amazon Cognito (for mobile and web applications), and AWS Trusted Advisor.
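
As a minimal sketch of the temporary-credentials point above, the following uses AWS STS to assume a hypothetical read-only role and build a short-lived boto3 session; the role ARN is a placeholder:

import boto3

# Hypothetical role ARN; substitute a role that exists in your account.
ROLE_ARN = "arn:aws:iam::123456789012:role/ReadOnlyAuditor"

def get_temporary_session(role_arn: str = ROLE_ARN, duration: int = 900) -> boto3.Session:
    """Exchange long-lived credentials for short-lived ones via AWS STS."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="audit-session",
        DurationSeconds=duration,  # the credentials expire automatically
    )["Credentials"]
    # Build a session scoped to the assumed role's (least-privilege) permissions.
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

if __name__ == "__main__":
    session = get_temporary_session()
    print(session.client("sts").get_caller_identity()["Arn"])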

Traceability

Traceability is the capability to track activity in the environment. It is achieved by capturing data logs and applying analytics to them, with the help of the following (a minimal log-query sketch follows the list):

  • Streamlined asset management
  • API-driven log analysis with CloudWatch
  • Automated responses with Lambda
  • Change monitoring with AWS Config and Amazon Inspector
  • Active threat detection with Amazon GuardDuty
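
As a minimal sketch of API-driven log analysis, the following boto3 snippet queries CloudTrail for recent console logins; ConsoleLogin is a standard CloudTrail event name, and the time window is arbitrary:

from datetime import datetime, timedelta
import boto3

def recent_console_logins(hours: int = 24) -> None:
    """List ConsoleLogin events recorded by CloudTrail in the last N hours."""
    cloudtrail = boto3.client("cloudtrail")
    start = datetime.utcnow() - timedelta(hours=hours)
    pages = cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
        ],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            print(event["EventTime"], event.get("Username", "unknown"))

if __name__ == "__main__":
    recent_console_logins()

In practice, the same query logic would feed a CloudWatch alarm or a Lambda-based automated response rather than a print statement.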

Defense in depth

This is a crucial security concept that should be adopted across the board with verifiable efficiency. In the context of AWS, this involves the following:

  • Physical elements, such as AWS compliance, and third-party attestations
  • Creating network and host-level boundaries via the use of Virtual Private Clouds (VPCs), Security Groups (SGs), Network Access Control Lists (NACLs), subnets, router tables, and gateways
  • Ensuring system security via hardened Amazon Machine Images (AMIs) and OS instances, patch management, and well-defined IAM roles
  • Protecting data via user authentication, access controls, and encryption
  • Protecting infrastructure via network and host-level boundaries, system security configuration and management, OS firewalls, vulnerability management, Endpoint Detection and Response (EDR); and the removal of unnecessary applications, services, and default configurations and credentials

Automation of security best practices

As a security practice, various scopes of automation can be embedded in the service model, some of which include the following (an auto-remediation sketch follows the list):

  • Utilization of CloudFormation to recreate clean/updated environments easily for production or investigation purposes
  • Utilization of Terraform for building, changing, and versioning infrastructure safely and efficiently
  • Utilization of Continuous Integration and Continuous Deployment (CI/CD) pipelines and automating the remediation action and response for non-compliant infrastructure and sub-components
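
To make the remediation idea concrete, here is a minimal sketch of a Lambda handler that could be wired to an AWS Config rule via EventBridge to close security groups left open to the world on SSH. The rule, the event wiring, and the event field names are assumptions and should be checked against your own rule's payload:

import boto3

def handler(event, context):
    """Hypothetical auto-remediation: remove a world-open SSH rule from the
    security group reported as non-compliant by an AWS Config rule."""
    # Assumes the compliance-change event carries the security group ID here.
    group_id = event["detail"]["resourceId"]
    ec2 = boto3.client("ec2")
    ec2.revoke_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # the offending rule
        }],
    )
    return {"remediated": group_id}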

Continuous data protection

It is important to understand the sensitivity of the data that is being processed and classify it accordingly. We can classify data based on the business and financial impacts that the given data carries. This is how the required level of confidentiality can be accurately gauged.

Most organizations have classifications such as Public, Private, and Restricted. However, based on the sensitivity and the operational model, further classifications can be considered. This can subsequently be clubbed with the IAM policy for a streamlined approach.

AWS provides a service called Amazon Macie, which offers an automated approach to discovering, classifying, and protecting sensitive data through machine learning. For data in transit, security features such as VPN connectivity to the VPC, TLS application communication, and ELB or CloudFront with ACM should be considered. Likewise, encryption and tokenization should be considered for data at rest. Beyond this, we can leverage AWS Certificate Manager (ACM), AWS KMS, AWS CloudHSM, and so on.
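
For example, encryption at rest can be requested explicitly when writing objects to S3. The following minimal boto3 sketch uses SSE-KMS with a hypothetical bucket and key alias:

import boto3

# Hypothetical bucket and KMS key alias; replace with your own resources.
BUCKET = "example-sensitive-data"
KMS_KEY_ALIAS = "alias/data-at-rest"

def put_encrypted_object(key: str, body: bytes) -> None:
    """Store an object so that S3 encrypts it at rest with a KMS-managed key."""
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=body,
        ServerSideEncryption="aws:kms",  # SSE-KMS rather than SSE-S3
        SSEKMSKeyId=KMS_KEY_ALIAS,
    )

if __name__ == "__main__":
    put_encrypted_object("reports/q1.csv", b"classified,content\n")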

Security event response

This is where the flavor of incident response in the cloud comes in. It is important to classify the severity of incidents and escalate when necessary. This would encompass the following steps: 

  1. Preparation: It is essential to have an adequately trained Incident Response (IR) capability that can respond to cloud-specific threats, with appropriate logging via CloudTrail and VPC logs, correlation and analysis in a central repository protected with encryption (KMS), and account isolation and segmentation.
  2. Identification: Identify threats and breaches by utilizing User and Entity Behavior Analytics (UEBA)-based detection rules.
  3. Containment: Utilize the AWS CLI or SDK to implement a restrictive group policy (a brief containment sketch follows below). 
  4. Investigation: Analyze and correlate threat and activity timelines to establish the chain of events.
  5. Eradication: Ensure that files are wiped securely, and KMS data is deleted.
  6. Recovery: Reinstate network access and configuration to the native state.
  7. Follow-up: Validate data deletion and resolution.

McAfee has a comprehensive guide of 51 best practices for AWS, which is highly recommended reading for any security professional. Visit https://www.skyhighnetworks.com/cloud-security-blog/aws-security-best-practices/ for more information.
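
To make the containment step more concrete, the following is a minimal boto3 sketch that quarantines a compromised EC2 instance by swapping its security groups for a deny-all group; the instance and group IDs are placeholders, and the quarantine group is assumed to allow no traffic:

import boto3

INSTANCE_ID = "i-0123456789abcdef0"     # hypothetical compromised instance
QUARANTINE_SG = "sg-0aaaabbbbccccdddd"  # hypothetical group with no open rules

def isolate_instance(instance_id: str = INSTANCE_ID) -> None:
    """Containment: replace the instance's security groups with a deny-all group."""
    ec2 = boto3.client("ec2")
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
    # Tag the instance so responders can track it during the investigation.
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[{"Key": "ir-status", "Value": "quarantined"}],
    )

if __name__ == "__main__":
    isolate_instance()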

Similar to AWS, there are several other major cloud service providers whose offerings are comparable with AWS in terms of the broader security framework. A top competitor to AWS is Microsoft's Azure. Let's see what it has to offer in the next section.

Microsoft Azure security technologies

Microsoft Azure, one of AWS' closest competitors, offers a similarly well-architected framework. Azure comes with a collection of guiding principles that can be used to enhance the quality of a workload. The framework consists of the following five key pillars of architectural excellence:

  • Cost optimization: This involves managing costs to maximize the value delivered. In order to achieve this principle, we can adopt various strategies including reviewing cost principles, developing a cost model, creating budgets and alerts, reviewing the cost optimization checklist, and using monitoring and analytics to gain cost insights.
  • Operational excellence: This includes operations processes that keep a system running in production. The focus is on instrumentation, generating the raw data from the application log, collection and storage, analysis and diagnosis, visualization, and alerting. It also provides the ability to design, build, and orchestrate with modern practices such as using monitoring and analytics to gain operational insights, using automation to reduce effort and errors, and so on.
  • Performance efficiency: This refers to the ability of a system to adapt to changes in load. Here the focus is on enabling true cloud scaling, elastic horizontal scaling, automated scaling, cheaper and increased resiliency, and redundancy in scaling. This results in the ability to leverage scaling up and scaling out, optimize network performance, optimize storage performance, and identify performance bottlenecks in your applications.
  • Reliability: This refers to the ability of a system to recover from failures and continue functioning. Here, the focus is on built-in data replication, measures to counter hardware failures, increased reliability, and the resilience of VMs. This results in the ability to build a highly available architecture that can recover from failures.
  • Security: This refers to protecting applications and data from threats. In order to achieve this, the following services are offered by Azure: improved identity management with Azure AD, protecting infrastructure via Role-Based Access Control (RBAC) and trust relationships in the Azure AD tenant, and application security measures such as using SSL everywhere, protecting against CSRF and XSS attacks, preventing SQL injection attacks, and so on. Also included are data sovereignty and encryption for Azure Storage, Azure SQL Database, Azure Synapse Analytics, and Cosmos DB. This allows us to use strategies such as defense in depth (via data, applications, VM/compute, networking, perimeter, policies and access, and physical security). Azure also provides protection from common attacks on all layers of the OSI model.

Next, we will learn how to incorporate security into your architecture design, and discover the tools that Azure provides to help you create a secure environment through all the layers of your architecture.

The Zero Trust model

The Zero Trust model is guided by the principle that you should never simply assume trustworthiness, but should always verify it. In traditional models, users' devices inside the network are inherently trusted by the security apparatus, which makes it easy for an attacker to leverage that trust for smooth lateral movement and elevation of privileges. 

With the change in the dynamics of work brought about by constant digital transformation and unforeseen events such as the COVID-19 pandemic, organizations now allow users to bring their own devices (BYOD), which means that many components of the network are no longer under the control of the organization. The Zero Trust model relies on verifiable user and device trust claims to grant access to organizational resources; trust is no longer assumed based on location inside an organization's perimeter.

This model has forced security researchers, engineers, and architects to rethink the approach applied to security. Hence, now we utilize a layered strategy to protect our resources, called defense in depth.

Security layers

Defense in depth can be visualized as a set of concentric rings with the data to be secured at the center. Each ring adds an additional layer of security around the data. This approach removes reliance on any single layer of protection, acts to slow down an attack, and provides alert telemetry that can be acted upon, either automatically or manually. Each layer can address one or more of the confidentiality, integrity, and availability (CIA) concerns:

Layer | Example | Principle
1. Data | Data encryption at rest in Azure Blob storage | Integrity
2. Application | SSL/TLS encrypted sessions | Integrity
3. Compute | Regularly apply OS and layered software patches | Availability
4. Network | Network security rules | Confidentiality
5. Perimeter | DDoS protection | Availability
6. Identity and Access | Azure AD user authentication | Integrity
7. Physical Security | Azure data center biometric access controls | Confidentiality

With every additional layer, the security of your network is improved, so that it becomes difficult for threat actors to reach the innermost layer where your precious and confidential data is stored.

Identity management using Azure

In the previous section, we saw how identity management can act as a security layer to protect our data. In this section, we will look at identity from a different perspective: as a security layer for internal and external applications. As part of this, we'll understand the benefits of single sign-on (SSO) and MFA in providing identity security, and why you should consider replicating on-premises identities to Azure AD.

Today, organizations are looking at ways they can bring the following capabilities into their applications:

  • Provide SSO to application users.
  • Enhance the legacy application to use modern authentication with minimal effort.
  • Enforce MFA for all logins outside the company's network.
  • Develop an application to allow patients to enroll and securely manage their account data.

Azure Application Proxy can be used to quickly, easily, and securely allow the application to be accessed remotely without any code changes. Azure AD Application Proxy is composed of two components: a connector agent that sits on a Windows server within your corporate network, and an external endpoint, either the MyApps portal or an external URL. When a user navigates to the endpoint, they authenticate with Azure AD and are routed to the on-premises application via the connector agent.
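
The modern authentication mentioned above typically means OAuth 2.0/OpenID Connect against Azure AD. As a minimal sketch, the following uses the MSAL for Python library's client credentials flow with a hypothetical app registration; in production, the client secret would itself come from a vault rather than source code:

import msal

# Hypothetical tenant and app registration values.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "replace-with-a-secret-retrieved-from-a-vault"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# Acquire a token for Microsoft Graph using the client credentials flow.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    print("Token acquired; expires in", result["expires_in"], "seconds")
else:
    print("Authentication failed:", result.get("error_description"))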

Infrastructure protection using Azure

Here, we will explore how infrastructure outages can be avoided by utilizing the capabilities of Azure to protect access to the infrastructure.

Criticality of infrastructure

Cloud infrastructure is becoming a critical part of many businesses, so it is essential to ensure that people and processes have only the rights they need to get their job done. Assigning incorrect access can result in data loss and data leakage, or cause services to become unavailable.

System administrators can be responsible for a large number of users, systems, and permission sets. Correctly granting access can quickly become unmanageable and can lead to a "one size fits all" approach. This approach can reduce the complexity of administration, but makes it far easier to inadvertently grant more permissive access than required.

RBAC offers a slightly different approach. Roles are defined as collections of access permissions. On Azure, users, groups, and roles are all stored in Azure AD. The Azure Resource Manager API uses RBAC to secure all resource access management within Azure, and it can be combined with Azure AD Privileged Identity Management (PIM) for auditing member roles.

Here are some of the key features of PIM:

  • Providing just-in-time privileged access to Azure AD and Azure resources
  • Assigning time-bound access to resources by using start and end dates
  • Requiring approval to activate privileged roles
  • Enforcing Azure MFA when activating any role
  • Understanding users' activity in a larger context
  • Getting notifications when privileged roles are activated
  • Conducting access reviews to ensure that users still need their roles
  • Downloading an audit history for an internal or external audit

To use PIM, you need one of the following paid or trial licenses:

  • Azure AD Premium P2
  • Enterprise Mobility + Security (EMS) E5

It's often valuable for services to have identities. Often, and against best practices, the credential information is embedded in configuration files. With no security around these configuration files, anyone with access to the systems or repositories can access these credentials, which exposes the organization to risk.

Azure AD addresses this problem through two methods: service principals and managed identities for Azure resources. With a managed identity, Azure manages the identity's credentials for you, so no secret ever needs to be stored in code or configuration files.
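
As a minimal sketch of the managed identity approach, the following assumes a hypothetical Key Vault URL and secret name. When the code runs on an Azure resource that has a managed identity, DefaultAzureCredential obtains a token from Azure AD automatically, so no secret needs to live in configuration:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical vault URL and secret name; replace with your own.
VAULT_URL = "https://contoso-app-vault.vault.azure.net"

# Picks up a managed identity in Azure (or developer credentials locally),
# so no connection string or password is stored in configuration files.
credential = DefaultAzureCredential()
client = SecretClient(vault_url=VAULT_URL, credential=credential)

db_password = client.get_secret("database-password").value
print("Secret retrieved; length:", len(db_password))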

Encryption

Data is an organization’s most valuable and irreplaceable asset, and encryption serves as the last and strongest line of defense in a layered security strategy. Here, we'll take a look at what encryption is, how to approach the encryption of data, and what encryption capabilities are available on Azure. This includes both data at rest and data in transit.  

Identifying and classifying data

It is critical that we have an active process of identifying and classifying the types of data we are storing, and that we align this with the business and regulatory requirements surrounding the storage of that data. It's beneficial to classify this data according to the impact that its exposure would have on the organization, its customers, and its partners. An example classification could be as follows:

Data classification | Explanation | Examples
Restricted | Data classified as restricted poses a significant risk if exposed, altered, or deleted. Strong levels of protection are required for this data. | Data containing social security numbers, credit card numbers, and personal health records
Private | Data classified as private poses a moderate risk if exposed, altered, or deleted. Reasonable levels of protection are required for this data. Data that is not classified as restricted or public will be classified as private. | Personal records containing information such as an address, phone number, academic records, and customer purchase records
Public | Data classified as public poses no risk if exposed, altered, or deleted. No protection is required for this data. | Public financial reports, public policies, and product documentation for customers

By taking an inventory of the types of data being stored, we can get a better picture of where sensitive data may be stored and where existing encryption policies may or may not be employed.

Encryption on Azure

Azure Storage Service Encryption (SSE) can be used to protect data to meet the essential information security and compliance requirements. SSE automatically encrypts all data with 256-bit AES encryption where the encryption, decryption, and key management are optimized by default.

This encompasses encrypting VMs with Azure Disk Encryption (ADE), encrypting databases with Transparent Data Encryption (TDE), encrypting secrets with Azure Key Vault's cloud service, and encrypting backups with Azure Backup for on-premises machines and Azure VMs.
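
Conceptually, the AES-256 encryption that SSE, ADE, and TDE apply on your behalf resembles the following client-side sketch using the Python cryptography library. It illustrates the primitive only and is not how the Azure services are implemented internally:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In a managed service, key generation and storage are handled for you
# (for example, by Key Vault or an HSM); here we generate a key locally.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

plaintext = b"customer record: 4111-1111-1111-1111"
nonce = os.urandom(12)  # must be unique per encryption operation
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption requires the same key and nonce; tampering raises an exception.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
print("ciphertext length:", len(ciphertext))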

Network security

Network security involves protecting the communication of resources within and outside of your network. The goal is to limit exposure at the network layer across your services and systems. By limiting this exposure, you decrease the likelihood that your resources can be attacked. In the realm of network security, efforts can be focused on the following areas:

  • Securing traffic flow between applications and the internet: This focuses on limiting exposure outside your network. Network attacks will most frequently start outside your network, so by limiting your network's exposure to the internet and securing the perimeter, the risk of being attacked can be reduced.
  • Securing traffic flow among applications: This focuses on data between applications and their tiers, between different environments, and in other services within your network. By limiting exposure between these resources, you reduce the effect a compromised resource can have. This can help reduce further propagation within a network.
  • Securing traffic flow between users and the application: This focuses on securing the network flow for your end users. It limits the exposure your resources have to outside attacks and provides a secure mechanism for users to utilize your resources.

A common thread throughout this chapter has been taking a layered approach to security, and this approach is no different at the network layer. It's not enough to just focus on securing the network perimeter or focusing on the network security between services inside a network. A layered approach provides multiple levels of protection so that if an attacker gets through one layer, there are further protections in place to limit further attacks.

Let's take a look at how Azure can provide the tools for a layered approach to securing your network footprint.

Internet protection

If we start on the perimeter of the network, we're focused on limiting and eliminating attacks from the internet. A great first place to start is to assess the resources that are internet-facing, and only allow inbound and outbound communication where necessary. Identify all resources that allow inbound network traffic of any type, and ensure they are necessary and restricted to only the ports/protocols required. Azure Security Center is a great place to look for this information, as it will identify internet-facing resources that don't have network security groups associated with them, as well as resources that are not secured behind a firewall.
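
As a rough illustration of such an assessment, the following sketch uses the azure-mgmt-network SDK to flag NSG rules that allow inbound traffic from any source. The subscription ID is a placeholder, and the model attribute names should be verified against your SDK version:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Flag inbound allow rules whose source is effectively "anywhere".
for nsg in client.network_security_groups.list_all():
    for rule in nsg.security_rules or []:
        if (
            rule.direction == "Inbound"
            and rule.access == "Allow"
            and rule.source_address_prefix in ("*", "0.0.0.0/0", "Internet")
        ):
            print(f"{nsg.name}: rule '{rule.name}' is open to the internet "
                  f"on port(s) {rule.destination_port_range}")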

There are a couple of ways to provide inbound protection at the perimeter:

  • Using a web application firewall (WAF) to provide advanced security for your HTTP-based services. The WAF is based on rules from the OWASP core rule set (CRS) 3.0 or 2.2.9, and provides protection from commonly known vulnerabilities such as cross-site scripting and SQL injection.
  • For the protection of non-HTTP-based services or for increased customization, network virtual appliances (NVAs) can be used to secure your network resources. NVAs are similar to firewall appliances you might find in on-premises networks and are available from many of the most popular network security vendors. NVAs can provide greater customization of security for those applications that require it, but can come with increased complexity, so careful consideration of requirements is advised.

To mitigate DDoS attacks, Azure DDoS Protection provides basic protection across all Azure services and enhanced protection for further customization of your resources.

Virtual networks

Network security groups are entirely customizable and provide the ability to fully lock down network communication to and from your VMs. By using network security groups, you can isolate applications between environments, tiers, and services.

To isolate Azure services to only allow communication from virtual networks, use virtual network service endpoints. This reduces the attack surface for your environment, reduces the administration required to limit communication between your virtual network and Azure services, and provides optimal routing for this communication.

Network integrations

Network infrastructure often requires integration to provide communication over Azure. We can utilize a VPN to initiate secure communication channels.

In order to provide dedicated, private connections, we can use services such as ExpressRoute, which provides secure communication over a private circuit rather than the public internet.

To easily integrate multiple virtual networks in Azure, virtual network peering establishes a direct connection between designated virtual networks. Once established, you can use network security groups to provide isolation between resources in the same way you secure resources within a virtual network. This integration gives you the ability to provide the same fundamental layer of security across any peered virtual networks. Communication is only allowed between directly connected virtual networks.

With this, we come to the end of our discussion on Microsoft Azure. For a detailed deep dive into the features of Azure and their implementation, please see the Microsoft Azure documentation at https://docs.microsoft.com/en-us/azure/security/azure-security, which is a great learning resource. I recently also came across an article that talks about addressing cloud security with the help of Azure Sentinel and an existing Security Information and Event Management (SIEM) solution. It can be found at https://www.peerlyst.com/posts/uplift-the-capability-of-your-existing-enterprise-siem-with-azure-sentinel-to-address-cloud-security-arun-mohan. While you are at it, do check out the Azure Sentinel design as well.

So far in this chapter, we have covered two of the most popular cloud providers – Amazon's AWS and Microsoft's Azure. Moving on next, we will take a look at CipherCloud and some of its key features.

CipherCloud

Established in 2010, CipherCloud operates across PaaS, SaaS, and IaaS. It provides cloud security solutions for a vast range of providers and supports compliance with a mix of global privacy and compliance regulations, including GDPR and PCI DSS. We will not discuss CipherCloud exhaustively; however, we will look at some of its important platforms and features that make it a notable mention:

  • CASB+ Platform: This is CipherCloud's flagship security deployment framework that encompasses the best security practices to deliver comprehensive visibility, data security, threat protection, and compliance for cloud-based assets.
  • Data Loss Prevention (DLP) demonstrates the following capabilities: 
    • Granular policy controls to detect, remediate, and prevent potential breaches
    • Multi-cloud protection across the widest range of cloud apps
    • Out-of-the-box compliance policies for many global regulations
    • On-demand scanning of new files or content going to the cloud
    • Historical data scans to detect sensitive data already in the cloud
    • Integration with enterprise DLP systems to extend corporate policies to cloud apps
  • Adaptive Access Control (AAC): Demonstrates protection against threats such as unauthorized access from restricted geographic locations, and integrates with enterprise IAM and MDM to extend access policies to cloud apps.
  • Shadow IT Discovery demonstrates the following capabilities:
    • Discover all cloud applications in use.
    • Identify risky cloud applications.
    • Leverage the risk knowledge base to uncover various external factors that might have an impact on the organization.
  • Encryption demonstrates the following capabilities: 
    • Persistent end-to-end encryption of cloud data
    • Exclusive control over the encryption process and keys
    • Granular policy controls to selectively encrypt any type of data
    • Format-preserving encryption to preserve cloud functionality
    • Mobile and endpoint apps enabling file decryption by an authorized user
    • Standards-based AES 256-bit encryption with FIPS 140-2 validation
  • Tokenization demonstrates the following capabilities: 
    • Securing sensitive data at rest and in motion within the enterprise
    • Local storage of sensitive data and token mapping in a secure database
    • Highly scalable solutions with the least latency
  • Activity Monitoring demonstrates the following capabilities: 
    • Real-time monitoring of users, data, devices, and clouds
    • Detailed reporting on logins, downloads, and policy violations
    • Anomaly detection using advanced machine learning
    • Monitoring of privileged user activities and security controls
    • Intuitive drill-down functionality for dashboards and reporting
  • Key Management demonstrates the following capabilities: 
    • Exclusive control over the encryption process and keys
    • Standards-based key management
    • Integration with external KMIP-compliant key management
    • Split keys between multiple custodians
    • Key rotation and expiration without affecting legacy data
  • Multi-mode protection demonstrates the following capabilities: 
    • Active encryption: Ironclad data protection safeguards critical data against malicious access by parties without the appropriate keys
    • Customer key management: Encryption keys are held within the customer environment, hence averting unintended exposure
    • FIPS validated standards-based encryption: Uses AES 256-bit encryption, NIST-approved key management, FIPS 140-2 validation of cryptographic modules
    • Format and function preserving: Supports searching, sorting, reporting, indexing, and charting, while data remains encrypted or tokenized
    • Tokenization: Complies with data residency requirements by substituting sensitive data with arbitrarily generated token values
    • High performance: Provides highly scalable distributed architecture and minimal latency
  • Digital Rights Management demonstrates the following capabilities: 
    • Persistent end-to-end encryption of cloud data
    • Secure access to sensitive files by authorized users on iOS, Android, OS X, and Windows devices
    • Local decryption of sensitive content by authorized, authenticated users
    • Integrated support for multiple file sharing apps including Box, Dropbox, OneDrive, SharePoint, Google Drive, and others
    • Support for internal and external collaborators
    • Remote, real-time key revocation for lost or compromised devices
    • Mobile and endpoint apps to enable file decryption by an authorized user
    • Standards-based AES 256-bit encryption with FIPS 140-2 validation
    • Highly scalable solutions with minimal latency
  • Malware Detection demonstrates the following capabilities:
    • Detection of viruses, spyware, ransomware, worms, bots, and more
    • Automatic detection, quarantine, and removal of infected content
    • Anomaly detection and machine learning to detect suspicious activity
    • Real-time updates for zero-day malware protection

This concludes our section on CipherCloud, which is one of the most competitive next-gen CASB solutions available on the market. You can also explore various different vendors in the space at https://www.csoonline.com/article/3104981/what-is-a-cloud-access-security-broker-and-why-do-i-need-one.html and https://www.gartner.com/reviews/market/cloud-access-security-brokers.

Similarly, we have other vendors with a suite of security functions as part of their overall cloud security offering. Some of the prominent ones are Palo Alto Networks, Cisco, Sophos, Proofpoint, Skyhigh Networks, and ZScaler. Apart from these, we can also look at dedicated vendors for specific security solutions such as Centrify Cloud for PAM, Boxcryptor for end-to-end encryption, and so on.

Securing cloud computing

Organizations should have a clear understanding of the potential security benefits and risks associated with cloud computing in the context of their decision to move the business to the cloud and set realistic expectations with their cloud service provider. They must understand the pros and cons of the various service delivery models such as IaaS, PaaS, and SaaS, as each model has its own uniquely diverse security requirements and responsibilities.

In this section, we will go over some of the security threats that organizations face after moving to the cloud, as well as the corresponding countermeasures.

With the constant push of digital transformation, coupled with the introduction of cost-effective cloud solutions, organizations now understand that their critical information and processes no longer reside in one location. As a result, threat actors have started customizing their tactics, techniques, and procedures to better target cloud-based services.

Security threats

Cloud computing is a very dynamic environment in terms of growth and service offerings, and it carries several security threats and risks that need to be accounted for at the planning and implementation stage itself. Some of the major factors are as follows:

  • Loss of governance oversight
  • Lack of clarity in the responsibility matrix
  • Vendor lock‐in agreements
  • Risks associated with regulatory, legal, and compliance issues
  • Lack of visibility into the handling of security incidents and issues associated with data protection
  • Malicious behavior of insiders
  • Operational failures of providers, and the resulting downtime
  • Challenges associated with data deletion

These points provide an outline of the commonly faced threats related to cloud implementation; however, depending on the deployment, different issues may surface. Hence, it is important to demonstrate due diligence and due care through the entire life cycle and conduct cyclic reviews and audits.

Countermeasures

Since we talked about the risk factors, let's also take a look at the mitigating steps that should be taken by organizations to accurately assess and manage the security of their cloud environment to mitigate risks:

  • Focus on an efficient governance, risk, and compliance (GRC) process and adhere to it.
  • Establish security and IT audits for business operations and processes.
  • Identify, categorize, and manage identity and access management processes.
  • Ensure the implementation of adequate data classification and protection processes.
  • Have a comprehensive third-party vendor and service provider assessment process.
  • Ensure cloud network connections are securely implemented and operationalized.
  • Validate security controls on physical infrastructure and facilities.
  • Comprehend requirements pertaining to the exit process and data deletion.

In this section, we took a deep look at the different aspects of cloud computing, AWS, and how to protect your cloud with the use of various techniques. Irrespective of how secure an organization's cloud environment is, it's imperative that the internal network environment is secured as well. This is why, in the next section, we will be taking a look at the different aspects of wireless security.

Wireless network security

Today, wireless technology is widely used in corporate offices, factories, businesses, government agencies, and educational institutions. There are various books (from both Packt and other publishers) that cover foundational and practical approaches to wireless penetration testing and analysis.

In this section, we will take a look at a few attack surface analysis and exploitation techniques, along with a few best practices while using Wi-Fi.

Check out the following wireless security wiki if you already have a good grip on the basics of wireless security. The wiki caters to the red/blue team perspective and is available at https://www.peerlyst.com/posts/a-wireless-security-wiki-peerlyst.

Wi-Fi attack surface analysis and exploitation techniques

Wi-Fi technology has been around since approximately 1997. Since then, there has been considerable improvement in both connectivity and security. However, wireless technology is still susceptible to several attack vectors that often lead to unauthorized access to network devices. Commonly seen wireless threats include rogue access points, man-in-the-middle attacks, DoS attacks, security misconfigurations, Caffe Latte attacks, and network injection, to name a few.

In order to conduct a Wi-Fi security assessment, security professionals often use tools such as aircrack-ng, NetStumbler (see the following information box for details on both), Cain and Abel, AirSnort, AirJack, Kismet, and inSSIDer. No matter which tool you use, the most important thing is to understand your requirements first. This is exactly what we will be doing in the upcoming subsections.

Aircrack-ng: A password-cracking utility that uses statistical techniques to crack WEP and can perform dictionary attacks against WPA and WPA2 after capturing the WPA handshake.

NetStumbler: Used for wardriving, detection, and verifying network configurations and rogue access points.

Wi-Fi data collection and analysis

The collection and analysis of data are important for successfully conducting Wi-Fi attacks. This data includes information such as MAC addresses, IP addresses, location, login history, the model and OS version of your device, browsing data, the email servers you're connected to, usernames, installed apps, and so on. If a sniffer is installed, things can get nasty: all your incoming and outgoing packets can be captured to gather sensitive information.
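
As a small illustration of how easily such data can be collected, the following scapy sketch passively records the SSID and BSSID of nearby access points. It assumes a wireless adapter in monitor mode (the interface name is a placeholder), requires root privileges, and should only be run against networks you are authorized to assess:

from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt

seen = {}

def handle(pkt):
    """Record the SSID and BSSID of every beacon frame observed."""
    if pkt.haslayer(Dot11Beacon):
        bssid = pkt[Dot11].addr2
        ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<hidden>"
        if bssid not in seen:
            seen[bssid] = ssid
            print(f"{bssid}  {ssid}")

# "wlan0mon" is a placeholder for an interface already placed in monitor mode
# (for example, with airmon-ng).
sniff(iface="wlan0mon", prn=handle, store=False, timeout=60)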

The Wi-Fi logs can be used for performance analysis as well as security purposes. For example, if you are a big retail store, then your Wi-Fi data can be used to do the following:

  • Check/gather identities and contact data.
  • Gather zip/postal codes to know where people travel from.
  • Use a MAC ID tracker to understand the locations that people travel to and from.
  • Identify the patterns of customers visiting the stores, such as at certain times or days of the week.
  • Gather information on websites visited to check the interests of people.

On the other hand, for security purposes in an organization, this data can be used to create user behavior profiles (this is considered unethical and non-compliant in some countries) that do the following:

  • Track activities performed by users.
  • Track the devices connected to the network (both legitimate and illegitimate).
  • Monitor Wi-Fi infrastructure health.

There are many tools out there that offer these features and functionality, one of which is Acrylic Wi-Fi Professional. It has the following features:

  • Wi-Fi analyzer: Used to gather information on Wi-Fi networks including hidden networks, to view connected devices, and so on
  • Monitor mode: Used for capturing packets from connected devices, identifying device position with GPS, and so on
  • Troubleshooting: Used to gather health checks, performance metrics, quality assessments, and so on
  • Export data: Used to export data such as reports that it might have generated

Make sure the Wi-Fi infrastructure is safe and secure: provide admin access only to those who need it, and implement both physical and logical security measures on your Wi-Fi infrastructure.

As an activity, you can perform a regular survey by walking around the campus of your organization with a laptop or mobile to identify whether there are any open network connections available. If found, then do your assessment and provide awareness to the employees working in your organization as deemed fit. If you can identify who has enabled the open insecure Wi-Fi connection, then caution them to disable it.

Wi-Fi attack and exploitation techniques 

A frequently used security mitigation technique to counter such attacks is the use of a Wireless Intrusion Prevention System (WIPS). This is even recommended by the Payment Card Industry Data Security Standard (PCI DSS). Besides this, you can use MAC filtering, network cloaking, an implementation of WPA, Temporal Key Integrity Protocol (TKIP), Extensible Authentication Protocol (EAP), VPNs, and end-to-end encryption.

A few of the common modes of attack and exploitation that impact Wi-Fi networks are shown in the following table, along with the tools commonly used to carry them out (a short rogue-AP check sketch follows the table):

Attack | Description | Tools
Wardriving | Discovering wireless LANs by listening to beacons or sending probe requests. Once found, the network will be used for further attacks. | NetStumbler, KisMAC, and so on
Rogue Access Points (APs) | Creating a rogue AP within the network to gain access. It acts as a backdoor to a trusted network. | Hardware or software APs
MAC spoofing | An attacker's MAC address is re-crafted to pose as an authorized AP. | –
Eavesdropping | Capturing and deciphering unprotected application traffic to gather sensitive information. | Kismet, Wireshark, and so on
WEP key cracking | Active and passive methods are used to recover the WEP key by capturing/sniffing the data. | aircrack-ng, AirSnort, and so on
Beacon flood | Crafting and generating so many fake 802.11 beacons that it becomes almost impossible to find a legitimate AP. | FakeAP
TKIP MIC exploit | Generating false/invalid TKIP data so that the target AP's MIC threshold is exceeded and the service is suspended. | –
AP theft | Physically removing the AP from a public area so that it is no longer available. | –
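
As a simple defensive illustration of the rogue AP problem from the preceding table, the following sketch compares survey results (for example, exported from Kismet or Acrylic Wi-Fi) against an allowlist of authorized BSSIDs to flag devices impersonating the corporate SSID; all identifiers are hypothetical:

# Authorized BSSIDs for the corporate SSID (hypothetical values).
CORPORATE_SSID = "CorpNet"
AUTHORIZED_BSSIDS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

def find_rogue_aps(observations):
    """observations: iterable of (ssid, bssid) pairs from a wireless survey.
    Returns BSSIDs advertising our SSID that are not on the allowlist."""
    return sorted({
        bssid.lower()
        for ssid, bssid in observations
        if ssid == CORPORATE_SSID and bssid.lower() not in AUTHORIZED_BSSIDS
    })

if __name__ == "__main__":
    survey = [
        ("CorpNet", "AA:BB:CC:DD:EE:01"),   # legitimate AP
        ("CorpNet", "DE:AD:BE:EF:00:01"),   # possible rogue/evil-twin AP
        ("GuestNet", "AA:BB:CC:DD:EE:10"),
    ]
    print("Possible rogue APs:", find_rogue_aps(survey))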

For additional insights into wireless attacks and safeguards, please refer to Wireless Exploitation and Mitigation Techniques, by Gianfranco Di Santo:

http://csc.csudh.edu/cae/wp-content/uploads/sites/2/2013/11/Research-Paper.pdf

In this section, we learned about the various attack and exploitation techniques that can be used to test and verify the security controls. In the next section, we will take a look at some of the best practices that will help you take your security posture to the next level.

Best practices

With the dynamic requirement for internet access anywhere, Wi-Fi has now become an integral part of life. The ease of connectivity and availability anywhere makes it very attractive to users. But everything has its pros and cons.

While Wi-Fi offers ease of connectivity, it also brings in security issues and the possibility of hacking and eavesdropping if not configured and used appropriately. The open Wi-Fi networks in public areas are the most vulnerable ones. Therefore, it is always advisable to use Wi-Fi networks with the utmost precaution. Wi-Fi networks can be at enterprise/personal or public level and can be secured or unsecured.

Here are some of the don’ts for a Wi-Fi network:

  • Do not connect to any unsecured public Wi-Fi network.
  • Do not access banking and sensitive information, including personal information, while on a public network.
  • Do not leave auto-connect (Wi-Fi/Bluetooth) enabled for any available network on your laptop, phone, or tablet.
  • Do not use Wired Equivalent Privacy (WEP) as it is an old method and can be deciphered easily.
  • Do not use the Pre-Shared Key (PSK) option as it is not secure at the enterprise level.
  • Do not trust hidden SSIDs, and limit the SSIDs to which users can connect in the enterprise environment.

Here are some of the do's or best practices while using a Wi-Fi network:

  • Do change the default password.
  • Do change the SSID and make it hidden.
  • Do limit the range of Wi-Fi signals (recommended for home users).
  • Do use strong encryption methods.
  • Do deploy a firewall/WIDS/WIPS/NAC (recommended for enterprise networks).
  • Do secure 802.1X client settings (for example, by using certificates).
  • Do use 802.1X (for example, as specified in 802.11i), which uses Extensible Authentication Protocol (EAP) authentication instead of a PSK. To do this, you will require a RADIUS/AAA server.

A few of the products used for Wi-Fi security are the Cisco Wireless Security Suite, WatchGuard Wi-Fi Security, the SonicWall Distributed Wi-Fi Solution, and Check Point UTM-1 Edge W.

The following is a list of some of the key security solutions and their desired capabilities:

  • Data loss prevention:
    • The ability to fine-tune policy controls to detect, prevent, and remediate likely breaches
    • Multi-cloud protection across cloud apps
    • Default availability of compliance policies for global regulatory requirements
    • Proactive and on-demand scanning of new files
    • Proactive scans for existing data on the cloud
    • Integration with enterprise DLP to carry over existing policies to cloud apps
  • User and entity behavior analytics: 
    • Monitor user behavior via real-time dashboards.
    • Integrate data from other sources to create a correlation map of activities conducted by the user and match it against the known patterns of the user.
    • Use AI and ML to assist in the detection of anomalous behaviors.
    • Generate alerts in case of threats being detected based on use cases and patterns.
    • The ability to provide detailed audit trails for forensic investigations.
  • Shadow IT discovery: 
    • CSA methodology-based risk knowledge base.
    • Identify potentially risky cloud applications.
    • Identify and discover all cloud apps in use.
  • Encryption and tokenization
    • Persistent end-to-end cloud data encryption
    • Control over the encryption process and keys used
    • The ability to encrypt and decrypt any type of data across mobile and endpoints
    • The availability of AES 256-bit encryption with FIPS 140-2 validation
    • Minimal latency and highly scalable solutions
    • Encryption at rest, in transit, and in use
    • SaaS and IaaS apps should enable file- and field-level encryption
    • Integration with digital rights management solutions
    • A secure JDBC-compliant database for storage of data.
  • Digital rights management
    • Secure access to sensitive files on mobile devices
    • Security checks to validate whether actions are performed by authorized and authenticated users
    • Readily available integrations for third-party file-sharing apps
    • Real-time, remote wipe functionality for compromised devices
  • Adaptive access control:
    • IAM and MDM integration to carry over the organization's access policies to cloud apps
    • Concurrent login protection
    • Device access protection
    • Context-aware policies
    • On-demand scanning of existing cloud data
    • Dynamic remediation
  • Cloud security posture management
    • Monitor the cloud environment for new services and misconfigurations.
    • Enforcement of security policies, compliance and regulatory requirements, and industry standards.
  • Some other key aspects: 
    • Active monitoring of users, data, devices, and cloud apps
    • The use of cyclic reporting and dashboards displaying user activity, policy violations, and security threats
    • An industry-standard-compliant KMS with the integration of an external Key Management Interoperability Protocol (KMIP)
    • The option for multiple custodians for splitting keys, key rotation, and expiration policy
    • The ability to detect malicious content and applications using zero-day protection
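
As a concrete illustration of the cloud security posture management capability referenced above, the following minimal sketch (assuming the boto3 SDK is installed and AWS credentials are configured) flags S3 buckets that do not have all public access blocks enabled. It is an illustrative check under those assumptions, not a full CSPM product:

# Minimal CSPM-style sketch: flag S3 buckets without a full Public Access Block.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

def find_unblocked_buckets() -> list:
    """Return bucket names whose public access is not fully blocked."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            settings = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"]
            # Flag the bucket if any of the four public-access settings is disabled.
            if not all(settings.values()):
                flagged.append(name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in find_unblocked_buckets():
        print(f"Review public access settings for s3://{name}")

In a real deployment, checks like this would run continuously and feed their findings into the monitoring, reporting, and alerting capabilities listed above.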

If interested, you can also check out Xiaopan OS, which is a penetration-testing distribution for wireless security enthusiasts and can be found at https://sourceforge.net/projects/xiaopanos/.

In this section, we have taken a look at the best practices that will enable you to quickly ensure that you have a robust and secure deployment. Next, we will take a look at the approach needed for a security assessment.

Security assessment approach

In order to truly understand the organization's level of security maturity and the appropriateness of its security posture, it is important to conduct an in-depth assessment. A comprehensive security or risk assessment requires us to do the following:

  • Conduct exercises with key stakeholders.
  • Review all related policy and service documentation.
  • Perform a risk assessment and determine the risk profile.
  • Conduct a cybersecurity maturity assessment.

Based on the risk assessment, we will be able to provide recommendations and an action plan that clearly outlines the steps needed to fix the security gaps, bring the organization up to the desired level of maturity, and meet the regulatory requirements that protect it. The key steps will be as follows:

  1. Cyber risk assessment: Plan the detailed risk assessment, including the step-by-step approach, the tools to be used, and the expected results. Confirm and document the approach, scope, and goals of the engagement. Create a detailed plan of what needs to be done, who needs to be interviewed, which documents need to be reviewed, and what follow-ups are needed after the first engagement to verify and validate the findings.
  2. Risk assessment: Determine and assess the risks and threats faced by the organization. This may include conducting personal interviews with the process owners and subject matter experts in order to understand the process better. It will also require studying the documented process and any past audit reports, among other documentation that shows how the process is supposed to be followed and how it is actually done on the ground. Also take into account what is mandated by the regulators and how well the teams adhere to those mandates.
  3. Cybersecurity maturity assessment: Conduct a gap assessment and use the NIST CSF scoring guidelines to calculate the organization's level of cybersecurity maturity (a minimal scoring sketch follows this list). This may require additional insight from the various other cybersecurity frameworks available in the market. It is always better to reference more than one framework, as it reinforces the importance and relevance of the findings and shows how they correlate with various regulatory and compliance requirements. The basic idea is to identify the risks and gaps, and map them to all the related recommendations from the different frameworks.
  4. Recommendations: Leverage the results from the risk assessment and the maturity scoring to develop recommendations. These should describe the security issues being addressed and how they will be remediated, along with any residual risk and a cost/benefit analysis. If there is more than one solution, mention the alternatives, but make sure to prioritize them and call out each one's pros and cons.
  5. Documentation: Prioritize the risk assessment results and recommendations into an action plan with a time frame that complies with the required mandates. This will likely be reviewed by senior leadership, so prepare an executive summary and a report that speaks broadly to the major issues. For the technical team, there can be separate documentation with tactical and technical walkthroughs and your detailed findings.
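
To make the maturity scoring step more concrete, here is a minimal sketch that rolls illustrative gap-assessment ratings up into an average score per NIST CSF function. The 0-4 rating scale, the sample scores, and the target level are illustrative assumptions agreed upon with stakeholders, not values prescribed by NIST:

# Minimal sketch: roll up illustrative gap-assessment ratings (0-4 scale assumed)
# into an average maturity score and gap per NIST CSF function.
from statistics import mean

# Hypothetical ratings collected during interviews and document reviews.
ratings = {
    "Identify": [2, 3, 2, 1],
    "Protect":  [3, 3, 2, 2],
    "Detect":   [1, 2, 2, 1],
    "Respond":  [2, 2, 1, 2],
    "Recover":  [1, 1, 2, 1],
}

TARGET = 3.0  # Illustrative target maturity level agreed upon with leadership.

for function, scores in ratings.items():
    score = mean(scores)
    gap = max(TARGET - score, 0)
    print(f"{function:<8} current={score:.2f} target={TARGET:.1f} gap={gap:.2f}")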

The overall aim is to conduct a risk assessment, evaluate the organization's cybersecurity program, and develop recommendations along with a high-level action plan to address the cybersecurity requirements. Failure to meet these requirements will increase compliance, operational, and reputational risk. The NIST Cybersecurity Framework, along with any other industry-recognized frameworks, can be used as a guideline alongside any applicable regulatory requirements. The key actions should include (but not be limited to) the following:

  • A complete risk-based assessment of the organization's business and dependent technologies
  • Assessment and auditing of the program, process, people, and participation
  • Using the document containing the identified gaps to create an action plan to fix/remediate the issues exposed in the findings

More specifically for the cloud, we can also use the Consensus Assessments Initiative Questionnaire, available on the CSA website: https://cloudsecurityalliance.org/artifacts/consensus-assessments-initiative-questionnaire-v3-0-1/

Software-defined radio attacks

Software-defined radios (SDRs) are setups in which the traditional hardware components of a radio are replaced by software that produces the same results. They can be half- or full-duplex, depending on their configuration. Examples of modern SDR platforms include HackRF and Ubertooth. They are often used by researchers to analyze signal transmissions to and from IoT devices. In this section, we'll look at some common radio attacks and the techniques used to mitigate them.

Types of radio attacks

In the following subsections, we discuss three of the most common attacks that exploit radio signal transmissions: the replay attack, the cryptanalysis attack, and the reconnaissance attack.

Replay attacks

The most common type of attack is based on capturing a command sequence and re-transmitting it later. This is fairly easy to do using an SDR. Here's how it's done:

  1. The first step is to find out the central frequency of transmission.
  2. After the central frequency is obtained, the attacker can listen on that frequency for new data whenever a command is sent by one device to another.
  3. Once the data is captured, the attacker can use open source software such as Universal Radio Hacker (URH) to isolate a single command sequence.

Remember that to execute the actual exploit, the attacker must transmit the isolated command sequence on the same frequency within range of the IoT device, which then acts on the replayed command. URH and a few other SDR tools can replay captured signals without much manual intervention; a minimal capture-and-replay sketch follows.
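
To illustrate this workflow when testing devices you are authorized to assess, here is a rough sketch that wraps the hackrf_transfer command-line tool (shipped with the HackRF host utilities) to record raw samples and later retransmit them. The frequency, sample rate, gain, capture length, and file name are placeholder assumptions, and flag behavior may vary between tool versions:

# Rough capture-and-replay sketch around the hackrf_transfer CLI (HackRF host tools).
# Frequency, sample rate, gain, capture length, and file name are assumptions.
import subprocess

CENTER_FREQ_HZ = 433_920_000   # Common ISM-band frequency (assumption)
SAMPLE_RATE_HZ = 2_000_000
CAPTURE_FILE = "capture.iq"

def capture(seconds: int = 5) -> None:
    """Record raw IQ samples around the target frequency for a few seconds."""
    num_samples = SAMPLE_RATE_HZ * seconds
    subprocess.run(
        ["hackrf_transfer", "-r", CAPTURE_FILE,
         "-f", str(CENTER_FREQ_HZ), "-s", str(SAMPLE_RATE_HZ),
         "-n", str(num_samples)],
        check=True,
    )

def replay() -> None:
    """Retransmit the previously captured samples on the same frequency."""
    subprocess.run(
        ["hackrf_transfer", "-t", CAPTURE_FILE,
         "-f", str(CENTER_FREQ_HZ), "-s", str(SAMPLE_RATE_HZ),
         "-x", "20"],
        check=True,
    )

The captured file can also be opened in URH to isolate a single command sequence before replaying it.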

Cryptanalysis attacks

This type of attack is much more sophisticated and can be used to exploit devices that resist a simple replay. This is how it is done:

  1. The first step in this attack is the same as for the previous attack – capturing a sample command signal.
  2. Once that signal is obtained, it is analyzed in URH. A noise threshold is applied to separate the actual transmission from the background noise of the environment.
  3. After that, the signal is demodulated, which requires knowledge of the modulation scheme used by the communication system (a simplified demodulation sketch follows these steps).
  4. Now the protocol is reverse-engineered and the actual command sequence is obtained. This can then be used to craft messages directly and send them to other devices of the same type.
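
As a simplified illustration of the demodulation step, the following sketch (assuming NumPy and an on-off keying (OOK) transmission recorded as interleaved 8-bit IQ samples) recovers a bit string by thresholding the amplitude envelope. The sample rate, symbol rate, and file name are assumptions; real captures usually need the tuning that a tool such as URH provides:

# Simplified OOK/ASK demodulation sketch: recover bits from an 8-bit IQ capture
# by thresholding the amplitude envelope. Sample rate, symbol rate, and file
# name are placeholder assumptions.
import numpy as np

SAMPLE_RATE = 2_000_000      # Samples per second (assumption)
SYMBOL_RATE = 2_000          # Symbols per second (assumption)

def demodulate_ook(path: str = "capture.iq") -> str:
    raw = np.fromfile(path, dtype=np.int8).astype(np.float32)
    iq = raw[0::2] + 1j * raw[1::2]           # Interleaved I/Q samples
    envelope = np.abs(iq)                     # Amplitude envelope
    threshold = envelope.mean()               # Crude noise threshold
    samples_per_symbol = SAMPLE_RATE // SYMBOL_RATE
    bits = []
    for start in range(0, len(envelope) - samples_per_symbol, samples_per_symbol):
        symbol = envelope[start:start + samples_per_symbol]
        bits.append("1" if symbol.mean() > threshold else "0")
    return "".join(bits)

if __name__ == "__main__":
    print(demodulate_ook())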

Replay attacks do not always work across multiple devices because the communication protocol often uses device identification numbers. Cryptanalysis attacks require in-depth knowledge of both cryptography and communication theory, neither of which is needed for a replay attack.

Wearable devices have been gaining prominence, both for individuals who use them to monitor their health and for insurance companies that use them to gauge what incentives to offer. Wearable devices often use Bluetooth for near-field communication. Until now, these devices have been highly vulnerable to both replay and cryptanalysis attacks (devices using versions older than Bluetooth 4.2 still are). If a rogue SDR is installed in a public setting such as a gym, these devices can be manipulated to show false health reports and harm both the users and the businesses that depend on them.

Reconnaissance attacks

This type of attack complements the cryptanalysis attack. It is generally not feasible to guess the modulation scheme or the protocol used just from a captured communication sample; this information can often be obtained from the device's spec sheet instead.

All devices that use the RF spectrum must be certified by the relevant authority in their country (such as the FCC in the USA), and these authorities publish test reports about such devices publicly. Manufacturers often try to thwart attackers attempting this type of analysis by removing identification markings from the chips. Attackers then probe the chips with a multimeter, map out the various pins, and compare them against the public schematics of similar chips to determine the product ID.

Mitigation techniques

We have just seen some common radio attacks, but is there any way to mitigate them? Yes. Modern IoT devices implement a number of countermeasures against SDR attacks. Some of the techniques used are described as follows:

  • Encrypting the signals: This is the most important precaution. All systems should be engineered with the assumption that they will operate in a hostile environment. While the modulation scheme can be figured out through reconnaissance, reverse-engineering an encrypted protocol is a much harder problem.
  • Using rolling commands: Using the same command every time exposes the device to replay attacks. Modern IoT devices use commands that work on a rolling-window basis, so a command used once is never used again, and each command is specific to a particular device (a minimal rolling-code sketch follows this list). Vulnerable implementations of this scheme use a small keyspace that can be brute-forced by an attacker with some patience.
  • Using preamble and synchronization nibbles: Protocols that do not use preamble and synchronization nibbles to separate commands are vulnerable to brute-force attacks using De Bruijn sequence reduction, which reduces the number of bits that need to be replayed to cover multiple command sequences by overlapping their common bits.
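
To make the rolling-command idea more concrete, here is a minimal sketch of a receiver that accepts counter-based codes authenticated with HMAC-SHA256 within a bounded look-ahead window, so a captured code cannot simply be replayed. The key, window size, and message format are illustrative assumptions rather than any specific vendor's scheme:

# Minimal rolling-code sketch: each command carries an increasing counter
# authenticated with HMAC-SHA256. The receiver accepts a code only if its
# counter is ahead of the last accepted one and within a bounded window,
# so a replayed (already used) code is rejected. Key and window are assumptions.
import hmac
import hashlib

DEVICE_KEY = b"per-device-secret-key"   # Provisioned per device (placeholder)
WINDOW = 16                             # Look-ahead window for missed presses

def make_code(counter: int, command: bytes) -> bytes:
    mac = hmac.new(DEVICE_KEY, counter.to_bytes(4, "big") + command, hashlib.sha256)
    return counter.to_bytes(4, "big") + command + mac.digest()

class Receiver:
    def __init__(self) -> None:
        self.last_counter = -1

    def accept(self, code: bytes) -> bool:
        counter = int.from_bytes(code[:4], "big")
        command = code[4:-32]                     # Payload between counter and MAC
        fresh = self.last_counter < counter <= self.last_counter + WINDOW
        if fresh and hmac.compare_digest(code, make_code(counter, command)):
            self.last_counter = counter           # Burn the counter so replays fail
            return True
        return False

if __name__ == "__main__":
    rx = Receiver()
    code = make_code(0, b"UNLOCK")
    print(rx.accept(code))   # True: first use is accepted
    print(rx.accept(code))   # False: the replayed code is rejected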

IoT security is a game of cat and mouse, with both sides constantly finding ways to outsmart the other. Now that vehicles and industrial machines are also being equipped with IoT capabilities, the security aspect has never been more important. Attackers have already demonstrated attacks on multiple IoT devices using affordable SDRs. Awareness among manufacturers is increasing, but a lot more work still needs to be done in this area.

Summary

In this chapter, we took a look at concepts surrounding cloud computing, wireless security, and SDR attacks. We briefly touched upon the cloud shared responsibility model, which defines who is responsible for what in a cloud service engagement. Next, we took a look at the various security attributes and components of AWS and its close competitor, Microsoft Azure, and we also touched on other cloud security solutions such as CipherCloud. We then discussed the need for securing the wireless network, the tools and techniques used by threat actors, and how to defend against them. Toward the end of the chapter, we also discussed radio attacks and their corresponding mitigations.

This chapter enabled you to understand the key aspects and attributes required to securely implement and operationalize a cloud deployment.

In the next chapter, we will discuss the top network threats that organizations face and how you, as a security professional, can mitigate them using a variety of techniques. We will also discuss how your organization can keep up with the evolving threat landscape, address new vulnerabilities, and establish a continuous monitoring process.

Questions

As we conclude, here is a list of questions for you to test your knowledge regarding this chapter's material. You will find the answers in the Assessments section of the Appendix:

  1. Which of the following is a correct statement?  
    • The cloud service provider will inherently provide the required security features.
    • To define the security needed, you need to do a comprehensive assessment of the cloud service and the application.
    • A cloud application security control mirrors the controls in native applications.
    • All of the above.
  2. Which of the following is the standard for interoperable cloud-based key management? 
    • KMIP
    • PMIK
    • AIMK
    • CMIL
  3. Which of the following is one of the most actively developing and important areas of cloud computing technology?
    • Logging
    • Auditing
    • Regulatory compliance
    • Authentication
  4. AWS supports ________ Type II audits.
    • SAS70
    • SAS20
    • SAS702
    • SAS07
  5. Security methods such as private encryption, VLANs, and firewalls come under the __________ subject area.
    • Accounting management
    • Compliance
    • Data privacy
    • Authorization
  6. For the _________ model, the security boundary may be defined by the vendor to include the software framework and middleware layer.
    • SaaS
    • PaaS
    • IaaS
    • All of the above
  7. Which of the following types of cloud does not require mapping?
    • Public
    • Private
    • Hybrid
    • Community cloud
  8. Which of the following offers the strongest wireless security?
    • WEP
    • WPA
    • WPA2
    • WPA3
  9. _______________ is the central node of 802.11 wireless operations.
    • WPA
    • An access point
    • WAP
    • An access port
  10. ___________ is the process of wireless traffic analysis that may be helpful for forensic investigations or when troubleshooting any wireless issue.
    • Wireless traffic sniffing
    • Wi-Fi traffic sniffing
    • Wireless traffic checking
    • Wireless transmission sniffing

Further reading
