Chapter 3. Windows Azure

Enterprises today run on several platforms, such as Windows, UNIX, and mainframes. As the business grows, enterprises have to expand their data and processing capacities by buying more servers and operating system licenses to support the added capacity. Typically, businesses have to plan for growth well in advance to budget the expenses. The tipping point, where investments in server systems no longer justify the value they provide for business growth, is not far away. This is because server systems are expensive to maintain, and before they start providing real value to the business, they may become obsolete. As a result, businesses constantly struggle to justify upgrades to new server systems.

By moving to a cloud operating system, businesses can outsource their server growth to a cloud service provider like Microsoft. Microsoft can manage the server growth for businesses by providing compute, storage, and management capabilities in the cloud. When businesses need to grow or scale back their server capacities, they simply buy more capacity or reduce their current capacity in the cloud according to business demand. Microsoft, on the other hand, can provide cloud services to multiple customers and transfer the savings achieved through economies of scale to businesses. Typical on-premise systems are designed and deployed to handle maximum capacity, but with the hardware elasticity offered by the cloud, businesses can deploy their systems to handle minimum capacity and then dynamically scale up as demand increases.

Microsoft offers computing, storage, and management capabilities to businesses through Windows Azure. Windows Azure runs in Microsoft's data centers as a massively scalable operating system spanning multiple virtualized and physical platforms.

In this chapter, I will discuss Windows Azure architecture and its computational and service management components in detail. In the next chapter, I will cover Windows Azure storage.

Windows Azure Architecture

Windows Azure is the operating system that manages not only servers but also services. Under the hood, Windows Azure runs on 64-bit Windows Server 2008 R2 operating systems with Hyper-V support. You can think of Windows Azure as a virtual operating system composed of multiple virtualized servers running on massively scalable but abstracted hardware. The abstraction between the Windows Azure core services and the hardware is managed by the Fabric Controller. The Fabric Controller manages end-to-end automation of Windows Azure services, from hardware provisioning to maintaining service availability. It reads the configuration information of your services and adjusts the deployment profile accordingly. Figure 3-1 illustrates the role of the Fabric Controller in the Windows Azure architecture.

Fabric Controller

Figure 3.1. Fabric Controller

Windows Azure is designed to be massively scalable and available. In Figure 3-1, the Fabric Controller reads the service configuration information provided by the cloud service and accordingly spawns the server virtual machines required to deploy the cloud service. The deployment of cloud services and spawning of virtual instances of servers are transparent to the developer. The developer just sees the status of the cloud service deployment on the Windows Azure developer portal. Once the cloud service is deployed, it is managed entirely by Windows Azure. You just have to specify the end state of the cloud service in its configuration file, and Windows Azure will provision the necessary hardware and software to achieve it. Deployment, scalability, availability, upgrades, and hardware server configurations are managed by Windows Azure for the cloud service.

In the previous chapter, you saw that Windows Azure consists of three main services: Compute, Storage, and Management. The Compute service provides scalable hosting for IIS web applications and .NET background processes. The web application role is called the Web role, and the background process role is called the Worker role. The Worker role is analogous to a Windows service and is designed specifically for background processing. A Windows Azure cloud service comprises a Web role and/or a Worker role and the service definition of the service.

The Storage service in Windows Azure supports three types of storage: blobs, queues, and tables. These storage types support access from within Windows Azure as well as direct access from external applications through a REST API. Table 3-1 illustrates the commonalities and differences among the three storage types in Windows Azure.

Table 3.1. Windows Azure Storage

  • URL schema
    Blob: http://[StorageAccount].blob.core.windows.net/[ContainerName]/[BlobName]
    Queue: http://[StorageAccount].queue.core.windows.net/[QueueName]
    Table: http://[StorageAccount].table.core.windows.net/[TableName]?$filter=[Query]

  • Maximum size
    Blob: 50GB
    Queue: 8KB per message (string)
    Table: Designed for terabytes of data

  • Recommended usage
    Blob: Large binary data types
    Queue: Cross-service message communication
    Table: Storing smaller structured objects, like user state across sessions

  • API reference
    Blob: http://msdn.microsoft.com/en-us/library/dd135733.aspx
    Queue: http://msdn.microsoft.com/en-us/library/dd179363.aspx
    Table: http://msdn.microsoft.com/en-us/library/dd179423.aspx

Even though the Storage service makes it easy for Windows Azure cloud services to store data within the cloud, you can also access data directly from client applications using the REST API. For example, you could write a music storage application that uploads all your MP3 files from your client machine to blob storage, completely bypassing the Windows Azure Compute service. Compute and Storage services can be used independently of each other in Windows Azure.
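
As a sketch of this kind of direct access, the following client-side snippet uses the Microsoft.WindowsAzure.StorageClient library (discussed later in this chapter) to upload a local MP3 file to blob storage. The account name, account key, container name, and file path are placeholders.

// A client-side uploader sketch: pushes a local MP3 file to blob storage
// over the REST API via the StorageClient library, bypassing the Compute
// service entirely. Credentials and paths are placeholders.
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class Mp3Uploader
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[YourKey]");

        // The blob client wraps the underlying REST calls.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("music");
        container.CreateIfNotExist();

        // Upload the local file as a blob named song.mp3.
        CloudBlob blob = container.GetBlobReference("song.mp3");
        blob.UploadFile(@"C:\Music\song.mp3");
    }
}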

The Management service exposes the features of the Windows Azure developer portal as REST API calls, so you can manage your applications and storage in Windows Azure dynamically by calling the Service Management API over the REST interface.

Figure 3-2 illustrates the Windows Azure architecture.

Windows Azure

Figure 3.2. Windows Azure

In Figure 3-2, Compute and Storage services run as independent services in Windows Azure. The Web and Worker roles run in the Compute service, and the blob, queue, and table services run in the Storage service of Windows Azure. The Fabric Controller abstracts the underlying infrastructure components, like virtualized servers, network components, DNS, and load balancers, from the Compute and Storage services. When a request from the Internet comes in for a Windows Azure Web role application, it passes through the load balancer to the Web role of the Compute service. If a request for a Storage service comes in, it passes through the load balancer to the appropriate Storage service component. Even when a Web or Worker role wants to communicate with the Storage service, it has to use the same REST APIs that other client applications use. Finally, the Compute and Storage services can be managed by the Service Management API.

Let's consider an example of your own media storage system in the cloud. In the past decade, there has been a data explosion due to the exponential rise in the amount of digital assets in an individual's life. These assets are in the form of music, video, pictures, and documents. Individuals face the challenge of storing all this content in one place locally. I personally have three hard drives with a combined capacity of 1TB, and I have almost 700GB full. Web sites like Flickr.com and Shutterfly.com can help manage pictures, and sites like YouTube.com and MSN Videos can manage videos. But what if you want a personal hard drive in the cloud with some backup capabilities and functionality, so that you don't have to maintain terabytes of digital assets in your house or scattered over multiple web sites? Maybe you would also like to access these assets from anywhere you are. To resolve the digital asset storage problem, you could build a media storage service for yourself on Windows Azure, as shown in Figure 3-3.

A Windows Azure Media Server

Figure 3.3. A Windows Azure Media Server

Figure 3-3 is a solution-specific illustration of Figure 3-2. The Web role is a web application interface for viewing and managing the digital assets stored in the Storage service. It also provides application services like uploading, deleting, updating, and listing the digital assets stored in the Storage service. The Web role application also provides a built-in Silverlight Media Player for accessing the digital assets from your client machine or mobile phone browser. All the digital assets are stored in the Storage service as blobs. The Worker role periodically indexes and cleans up the digital assets in the background. Note that this service uses neither tables nor queues, because it does not need them.

To keep the discussion simple, I have kept this example at the conceptual, rather than at the physical or logical design, level. In the next section, I will discuss the development environment for Windows Azure so that you can start building your own Windows Azure cloud services like the media storage service discussed here.

Again, the three core services, Compute, Storage, and Management, combine to form the Windows Azure cloud operating system. All three services abstract the underlying hardware and operating system infrastructure required for deploying applications in the cloud. The Compute service provides the Web and Worker roles that enable web applications and background processes, respectively, to run in Windows Azure. The Storage service offers blob storage, queuing, and table storage capabilities for storing files, messages, and structured data, respectively, in the cloud. The service management interface provides management capabilities for all of your Windows Azure deployments through a single interface. From an architect's perspective, Windows Azure provides most of the features required for designing distributed applications in the cloud.

The Compute Service

As you now know, Compute is one of the core services of Windows Azure; it is also called a Hosted Service in Windows Azure portal terminology. In this section, I will cover the Windows Azure Compute service and the developer experience associated with it. The Compute service gives you the ability to develop and deploy Windows Azure cloud services. The environment consists of the underlying .NET Framework 3.5 (SP1) and IIS 7 running on 64-bit Windows Server 2008. You can also enable Full Trust in Windows Azure services for developing native applications.

The Windows Azure Compute service is based on a role-based design. To implement a service in Windows Azure, you have to implement one or more roles supported by the service. The current version of Windows Azure supports two roles: Web and Worker.

Web Role

A Web role is a web site or web service that can run in an IIS 7 environment. Most commonly, it will be an ASP.NET web application or a Windows Communication Foundation (WCF) service with HTTP and/or HTTPS endpoints.

Note

Supported inbound protocols are HTTP and HTTPS; outbound traffic can be over any TCP socket. The UDP outbound protocol is not supported at this time in Windows Azure services.

The Web role also supports the FastCGI extension module for IIS 7.0. This allows developers to develop web applications in interpreted languages like PHP and native languages like C++. Windows Azure supports Full Trust execution, which enables you to run FastCGI web applications in the Windows Azure Web role. To run FastCGI applications, you have to set the enableNativeCodeExecution attribute of the Web role to true in the ServiceDefinition.csdef file. In support of FastCGI in the Web role, Windows Azure also introduces a new configuration file called Web.roleconfig. This file should exist in the root of the web project and should contain a reference to the FastCGI hosting application, like php.exe. For more information on enabling FastCGI applications in Windows Azure, please visit the Windows Azure SDK site at http://msdn.microsoft.com/en-us/library/dd573345.aspx.

Warning

Even though Windows Azure supports native code execution, the code still runs with Windows user, not administrator, privileges, so some Win32 APIs that require system administrator privileges will not be accessible.

Worker Role

The Worker role gives you the ability to run a continuous background process in the cloud. The Worker role can expose internal and external endpoints and also call external interfaces. A Worker role can also communicate with the queue, blob, and table Windows Azure storage services. A Worker role instance runs independently of the Web role instance, even though both of them may be part of the same service; a Worker role runs on a different virtual machine than the Web role in the same service. In some Windows Azure services, you may require communication between a Web role and a Worker role. Even though the Web and Worker roles expose endpoints for communication among roles, the recommended mode of reliable communication is Windows Azure queues. Both Web and Worker roles can access Windows Azure queues for communicating runtime messages. I will cover Windows Azure queues in the next chapter.

A Worker role class must inherit from the Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint class. RoleEntryPoint is an abstract class that defines functions for initializing, starting, and stopping the Worker role service. A Worker role can stop either when it is redeployed to another server or when you execute the Stop action from the Windows Azure developer portal. Figure 3-4 illustrates the sequence diagram for the life cycle of a Worker role.

Sequence diagram for a Worker role service

Figure 3.4. Sequence diagram for a Worker role service

In Figure 3-4, there are three objects: the Fabric Controller, RoleEntryPoint, and a Worker role implementation of your code. The Fabric Controller is a conceptual object; it represents the calls that the Windows Azure Fabric Controller makes to a Worker role application. The Fabric Controller calls the Initialize() method on the RoleEntryPoint object. RoleEntryPoint is an abstract class, so it does not have its own instance; it is inherited by the Worker role instance to receive calls. The OnStart() method is a virtual method, so it does not need to be implemented in the Worker role class. Typically, you would write initialization code, like starting the diagnostics service or subscribing to role events, in this method. If the role starts successfully, the OnStart() method returns true to the Fabric Controller; otherwise, it returns false. The Worker role starts its application logic in the Run() method. The Run() method should have a continuous loop for continuous operation; if the Run() method returns, the role is recycled and restarted. The Fabric Controller calls the Stop() method to shut down the role when the role is redeployed to another server or when you execute a Stop action from the Windows Azure developer portal.
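
To make this life cycle concrete, here is a minimal Worker role sketch; the ten-second sleep interval and the log message are arbitrary choices for illustration.

// A minimal Worker role. The Fabric Controller calls OnStart() and then
// Run(); OnStop() is called when the role shuts down or is redeployed.
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Initialization code (e.g., starting the diagnostics service,
        // subscribing to role events) goes here.
        return true; // false tells the Fabric Controller the start failed
    }

    public override void Run()
    {
        // The continuous loop; if Run() returns, the role is recycled.
        while (true)
        {
            Trace.WriteLine("Working...", "Information");
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }

    public override void OnStop()
    {
        // Cleanup code before shutdown or redeployment goes here.
        base.OnStop();
    }
}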

Windows Azure API Structure

The Windows Azure SDK provides a set of APIs to complement the core services offered by Windows Azure. These APIs are installed as part of the Windows Azure SDK and can be used locally for developing Windows Azure applications. The Microsoft.WindowsAzure.ServiceRuntime assembly and namespace contain classes used for developing applications in the Compute service.

The Microsoft.WindowsAzure.StorageClient assembly and namespace contain classes used for developing applications that interact with the Storage service. The assembly makes REST calls to the Storage service REST interface.

The Service Management API is exposed as a REST interface, and the csmanage.exe application in the Windows Azure code samples (http://code.msdn.microsoft.com/windowsazuresamples) can be used to call the Service Management APIs.

Developer Environment

The development environment of Windows Azure consists of two main components: Windows Azure Tools for Visual Studio and the Windows Azure SDK. In this section, I will cover these in detail.

Windows Azure Tools for Visual Studio

Windows Azure Tools for Visual Studio is a Visual Studio extension supporting Windows Azure development. You can download it from the Azure SDK web site http://www.microsoft.com/azure/sdk.mspx.

Visual Studio Project Types

The Windows Azure Tools for Visual Studio creates a project type named Cloud Service, containing project templates for the Web role and the Worker role. After you install Windows Azure Tools for Visual Studio, open Visual Studio and create a new project by selecting File ➤ New ➤ Project, as shown in Figure 3-5.

New Project

Figure 3.5. New Project

The Windows Azure Cloud Service template defines the cloud service project. Click OK to choose from the available roles (see Figure 3-6).

Cloud Service roles

Figure 3.6. Cloud Service roles

The available cloud service roles are as follows:

  • ASP.NET Web role: As the name suggests, this role consists of an ASP.NET project. You can build or migrate any ASP.NET-compatible project for deployment to the cloud.

  • WCF Service Web role: This role consists of a WCF project. You can build or migrate WCF services in this project for deployment to the cloud.

  • Worker role: The Worker role project is a background process application. It is analogous to a Windows service. A Worker role has start and stop methods in its superclass and can expose internal and external endpoints for direct access.

  • CGI Web role: The CGI Web role is a FastCGI-enabled Web role. It does not consist of a Cloud Service project.

Choose the roles you want, as shown in Figure 3-7. I have selected a Web role, a WCF Web role, and a Worker role.

Selected roles

Figure 3.7. Selected roles

Click OK to create the cloud service project, as shown in Figure 3-8.

Empty cloud service project

Figure 3.8. Empty cloud service project

In Figure 3-8, the HelloAzureCloud cloud service project holds references in the Roles subfolder to all the role projects in the solution. The cloud service project also contains the ServiceDefinition.csdef and ServiceConfiguration.cscfg files that define the configuration settings for all the roles in the cloud service.

The WCF service Web role project includes a sample service and its associated configuration in the web.config file. The WebRole.cs file implements the start and configuration-changing events fired by the Windows Azure platform; this file is created for all the roles with default start and configuration-changing event handlers. You can handle additional events like StatusCheck and Stopping depending on your application needs. The WebRole class inherits the RoleEntryPoint class from the Microsoft.WindowsAzure.ServiceRuntime namespace. The WebRole.cs file is analogous to the Global.asax file in a traditional ASP.NET application.

The ASP.NET Web role project consists of a Default.aspx file and its associated code-behind and web.config file.

Finally, the Worker role project consists of a WorkerRole.cs file and its associated app.config file. In addition to inheriting the RoleEntryPoint class, it also overrides the Run() method, in which you add your continuous processing logic. Because a Worker role is not designed to have any external interface by default, it does not contain any ASP.NET or WCF files.

In summary, the cloud service defined in this project consists of a WCF service, an ASP.NET web application, and a Worker role service. The entire package constitutes a Windows Azure cloud service.

Note

In the interest of keeping this book conceptual, I will not be covering FastCGI applications.

Role Settings and Configuration

In the cloud service project, you can configure each role's settings by double-clicking the role reference in the Roles subdirectory of the cloud service project. Figure 3-9 shows the role settings page in Visual Studio.

Role settings (the default is Configuration)

Figure 3.9. Role settings (the default is Configuration)

The role settings page has five tabs: Configuration, Settings, Endpoints, Local Storage, and Certificates.

The Configuration tab is selected by default and displays the following configuration options:

  • .NET Trust Level: The .NET Trust Level specifies the trust level under which this particular role runs. The two options are Full Trust and Windows Azure Partial Trust. The Full Trust option gives the role privileges to access certain machine resources and execute native code. Even in full trust, the role still runs in the standard Windows Azure user's context, not the administrator's context. With the partial trust option, the role runs in a partially trusted environment and does not have privileges for accessing machine resources or executing native code.

  • Instances: The instance count defines the number of instances of each role you want to run in the cloud. For example, you can run two instances of the ASP.NET Web role and one instance of the Worker role for background processing. The two instances of the ASP.NET Web role give you automatic load balancing across the instances. By default, all roles run as a single instance. This option gives you the ability to scale your role instances up and down on demand.

The VM size option gives you the ability to choose from a list of virtual machines preconfigured in the Windows Azure virtual machine pool. You can choose from the following list of predefined virtual machines depending on your deployment needs:

  • Small: 1 core processor, 1.7GB RAM, 250GB hard disk

  • Medium: 2 core processors, 3.5GB RAM, 500GB hard disk

  • Large: 4 core processors, 7GB RAM, 1000GB hard disk

  • Extra large: 8 core processors, 15GB RAM, 2000GB hard disk

The Web roles have a startup action that defines the endpoint on which the browser should launch. This setting is not a cloud service setting but a project setting for launching the Web role in the development fabric.

The Settings tab, shown in Figure 3-10, defines any custom settings you can add to the role configuration.

Settings

Figure 3.10. Settings

These custom name-value pairs are analogous to the name-value appSettings in an app.config or web.config file. You can retrieve the values of these settings in your code by calling RoleEnvironment.GetConfigurationSettingValue(). By default, a DiagnosticsConnectionString setting is present; it is used for logging from your roles. Do not remove this setting.
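
For example, assuming you defined a hypothetical custom setting named MyCustomSetting on this tab, you could read its value at run time as follows:

// Reading a custom setting defined on the Settings tab.
// "MyCustomSetting" is a hypothetical setting name.
string value = RoleEnvironment.GetConfigurationSettingValue("MyCustomSetting");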

The Endpoints tab contains the endpoints your role will create when it is deployed. Figure 3-11 shows the Endpoints tabs for a Web role and a Worker role, respectively.

Endpoints tab for Web and Worker roles

Figure 3.11. Endpoints tab for Web and Worker roles

The Web role can have InputEndpoints and an internal endpoint. InputEndpoints are HTTP or HTTPS endpoints exposed externally; the port number defines the port you will use when accessing the default web page or service in this Web role. In the case of an HTTPS endpoint, you can upload the X.509 certificate for accessing the web page or service over an HTTPS encrypted connection.

The internal endpoint is the endpoint accessible to other roles within the cloud service. For example, a Worker role can get a reference to the internal endpoint of a Web role in the same cloud service for making web service method calls to it.

Unlike a Web role, a Worker role has no endpoints defined by default, because it is intended to be used as a background process. To define an endpoint, you have to add one to the list and select its type (input or internal), protocol (tcp, http, or https), port, and, optionally, an SSL certificate name.

Note that a Web role can have only HTTP or HTTPS endpoints, but a Worker role can have an HTTP, HTTPS, or TCP endpoint.

The Local Storage tab defines local directories that will be created on the server machine of the role for storing files locally. Figure 3-12 shows the settings on the Local Storage tab.

Local storage

Figure 3.12. Local storage

The names of the local storage entries become the names of the directories created on the server. The size column defines the maximum size of the folder contents, and the "Clean on Role Recycle" column defines whether you want the contents of the directory cleaned up when a role recycles. You can use this option for creating sticky storage that maintains the state of the role across reboots and failures. Local storage can be used effectively for temporary caching and session management applications.

The Certificates tab is used for referencing the certificates in your role. At the time of this writing, you still had to use the Windows Azure developer portal or the Service Management API to upload the certificate to the server and then reference the certificate in the settings, as shown in Figure 3-13.

Certificate configuration

Figure 3.13. Certificate configuration

Note

Some of the role settings directly modify the ServiceDefinition.csdef and ServiceConfiguration.cscfg files, and you can achieve the same configuration effect by directly modifying these files instead.

Visual Studio Project Actions

Once you have created a Windows Azure cloud service project, you can work with the cloud service roles, work with storage services, or debug and deploy the cloud service.

Working with Cloud Service Roles

You can associate an existing Web role or a Worker role from a solution to the cloud service project, or create a new role by right-clicking on the Roles subdirectory and selecting Add, as shown in Figure 3-14.

Adding associate roles to cloud service

Figure 3.14. Adding associate roles to cloud service

By selecting New Web Role or New Worker Role project, you can create a new role project in the solution that is associated with the cloud service project. By selecting an existing Web role or Worker role project in the solution, you can associate that project with the cloud service project. Figure 3-15 shows the options for adding a new role to the existing cloud service.

Adding new roles

Figure 3.15. Adding new roles

Working with Storage Services

The Windows Azure development fabric includes a local storage environment that resembles the cloud Storage service. It has development-specific blob, queue, and table services that simulate the ones in the Windows Azure cloud. These services depend on a SQL Server 2005 or 2008 database, so you need to have SQL Server 2005 or 2008 installed on your machine to work with the storage services development environment (also called development storage).

To start the development storage:

  • Select Start ➤ All Programs ➤ Windows Azure SDK ➤ Development Storage (see Figure 3-16).

Development Storage

Figure 3.16. Development Storage

When you debug your service within Visual Studio, it starts the development storage, which you can access by right-clicking the Windows Azure system tray icon and selecting Show Development Storage UI. Figures 3-17 and 3-18 illustrate the system tray options and the development storage user interface.

Windows Azure System Tray Options

Figure 3.17. Windows Azure System Tray Options

Developer Storage User Interface

Figure 3.18. Developer Storage User Interface

Debugging in Visual Studio .NET

In the Windows Azure cloud environment, no direct debugging is available; your only option is logging. In the development fabric, however, you can debug by adding breakpoints in the code and by viewing the logging information in the development fabric user interface. As with any typical .NET application, Visual Studio .NET attaches the debugger to the application when it runs in debug mode in the development fabric, and the debugger breaks at the breakpoints set in the Web and Worker role projects. Because the Visual Studio debugging environment is not available in the Windows Azure cloud, the best option is to log. I will discuss diagnostics and logging later in this chapter. Figure 3-19 illustrates the development fabric UI used for logging.

Development Fabric Logging

Figure 3.19. Development Fabric Logging

Tip

I recommend inserting logging statements into your Windows Azure application right from the beginning. This way, you can debug the application in the development fabric as well as in the Windows Azure cloud without making any code changes.

To enable native code debugging in a Web role project, right-click the Web role project, select Properties, go to the Web tab, and select the "Native code" check box in the Debuggers section, as shown in Figure 3-20.

Web role unmanaged code debugging

Figure 3.20. Web role unmanaged code debugging

To enable native code debugging in a Worker role project, right-click the Worker role project, select Properties, go to the Debug tab, and select the "Enable unmanaged code debugging" check box, as shown in Figure 3-21.

Worker role unmanaged code debugging

Figure 3.21. Worker role unmanaged code debugging

Packaging the Service

To deploy the Windows Azure cloud service in the cloud, you have to package it into a .cspkg file containing all the assemblies and components, and upload the package to Windows Azure developer portal. To package a service, right-click the cloud service project, and select Publish, as shown in Figure 3-22.

Packaging a Windows Azure service

Figure 3.22. Packaging a Windows Azure service

When you select Publish, Visual Studio .NET creates two files: [Service Name].cspkg and ServiceConfiguration.cscfg. It also opens Internet Explorer and takes you to the LiveID sign-in screen to sign in to the Windows Azure developer portal. The [Service Name].cspkg file is the service package containing all the service components required by Windows Azure to run the service in the cloud. The .cspkg file is a zip archive, and you can explore its contents by renaming it to .zip and extracting it. The ServiceConfiguration.cscfg file is the configuration file for the service instances; it is a copy of the ServiceConfiguration.cscfg file from the cloud service project.

Windows Azure SDK Tools

The Windows Azure SDK tools are located in the directory C:\Program Files\Windows Azure SDK\v1.0\bin for a default Windows Azure installation. Table 3-2 lists the tools included in the Windows Azure SDK.

Table 3.2. Windows Azure SDK Tools

  • CSPack.exe: This tool packages a service for deployment. It takes a ServiceDefinition.csdef file as input and outputs a .cspkg file.

  • CSRun.exe: This tool deploys a service into the local development fabric. You can also control the run state of the development fabric from this tool. It depends on the service directory structure created by the CSPack.exe /copyonly option.

  • DSInit.exe: This tool initializes the development storage environment. It is called automatically by Visual Studio .NET and DevelopmentStorage.exe when you run a cloud application in the development fabric for the first time.

Service Models

The service model of a Windows Azure cloud service consists of two main configuration files: ServiceDefinition.csdef and ServiceConfiguration.cscfg. ServiceDefinition.csdef defines the metadata and configuration settings for the service, and ServiceConfiguration.cscfg sets the values of those configuration settings for the runtime instances of the service. The overall service model defines the metadata, the configuration parameters, and the end state of the service. Windows Azure reads these files when deploying instances of your service in the cloud. You can also modify the service model settings by right-clicking each role in the cloud service project and selecting Properties, which is the recommended way of configuring your service manually.

ServiceDefinition.csdef

The ServiceDefinition.csdef file defines the overall structure of the service: the roles available to the service, the service input endpoints, and the configuration settings for the service. The values of these configuration settings are set in the ServiceConfiguration.cscfg configuration file. Listing 3-1 shows the contents of a ServiceDefinition.csdef file.

Example 3.1. ServiceDefinition.csdef

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudService1"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
  </WebRole>
  <WebRole name="WCFServiceWebRole1">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="8080" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="L1" cleanOnRoleRecycle="false" sizeInMB="10" />
    </LocalResources>
    <Certificates>
      <Certificate name="C1" storeLocation="LocalMachine" storeName="My" />
    </Certificates>
    <InternalEndpoint name="InternalHttpIn" protocol="http" />
  </WebRole>
  <WorkerRole name="WorkerRole1" enableNativeCodeExecution="true">
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="tcp" port="10000" />
      <InternalEndpoint name="Endpoint2" protocol="tcp" />
    </Endpoints>
    <Certificates>
      <Certificate name="C1" storeLocation="LocalMachine" storeName="My" />
    </Certificates>
  </WorkerRole>
</ServiceDefinition>

Listing 3-1 is the service definition for CloudService1. It defines two Web roles, WebRole1 and WCFServiceWebRole1, and a Worker role named WorkerRole1, along with their endpoints, local storage, and certificates. WCFServiceWebRole1 has a <LocalStorage> element that defines the local storage space for the service role. The <ConfigurationSettings> element defines a DiagnosticsConnectionString configuration setting for each service role; the value for this setting is set in ServiceConfiguration.cscfg.

Note

The ServiceDefinition.csdef file of a service cannot be changed at run time because it defines the shape and non-changeable parameters of the service. You have to republish the service after changing its ServiceDefinition.csdef for the changes to take effect. For more details on the ServiceDefinition.csdef schema, please visit http://msdn.microsoft.com/en-us/library/dd179395.aspx.

Endpoints

Windows Azure roles can have two types of endpoints: internal and input. The internal endpoints are used for interrole communications within the same cloud service, whereas the input endpoints can be accessed from anywhere. Figure 3-23 illustrates some internal endpoints of Web and Worker roles.

Internal endpoints for interrole communication

Figure 3.23. Internal endpoints for interrole communication

In Figure 3-23, there are two Web roles with one instance each and two instances of a Worker role. The Worker role exposes an internal endpoint that is consumed by both Web roles. Note that each Web role communicates with a specific instance of the Worker role. The Web role also exposes an internal HTTP endpoint that can be consumed by any of the roles in the cloud service. The endpoint only publishes the IP address and port of the instance; you still have to write TCP or HTTP code to send and receive requests. You can get a reference to the internal endpoint of an instance as follows:

IPEndPoint internalEndpoint = RoleEnvironment.Roles["HelloWorkerRole"].Instances[0]
    .InstanceEndpoints["MyInternalEndpoint"].IPEndpoint;

where HelloWorkerRole is the name of the Worker role and MyInternalEndpoint is the name of the endpoint. You can get the IP address of an internal instance endpoint in the following manner:

string ipAddress = RoleEnvironment.Roles["HelloWorkerRole"].Instances[0]
    .InstanceEndpoints["MyInternalEndpoint"].IPEndpoint.ToString();

Figure 3-24 illustrates the input endpoints of a Web role and a Worker role.

Input endpoints for external communication

Figure 3.24. Input endpoints for external communication

The Web role instances have default HTTP input endpoints for accepting Internet requests. Windows Azure also allows Worker roles to have HTTP and TCP input endpoints for accepting connections over the Internet. Unlike an internal endpoint, access to an input endpoint is not limited to the cloud service; any external application can communicate with the input endpoint of the role. In Figure 3-24, Web Role 1 and Worker Role 1 have input endpoints available for communication, so any application can communicate with the endpoints of these roles over the Internet. Because the input endpoints are exposed externally, they are automatically load balanced by Windows Azure across instances. In some documentation, input endpoints are also referred to as external endpoints. You can get a reference to the input endpoint of an instance as follows:

IPEndPoint inputEndpoint = RoleEnvironment.Roles["HelloWorkerRole"].Instances[0]
    .InstanceEndpoints["MyInputEndpoint"].IPEndpoint;

where HelloWorkerRole is the name of the Worker role and MyInputEndpoint is the name of the endpoint. Once you have the IPEndPoint object, you can get the IP address and port number of the endpoint to initiate communications.
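
For example, a Worker role could open a listener on its own TCP input endpoint. The following sketch assumes the Endpoint1 input endpoint defined for WorkerRole1 in Listing 3-1; the connection-handling logic is omitted.

// Opening a listener on the role instance's own TCP input endpoint.
// "Endpoint1" matches the Worker role input endpoint in Listing 3-1.
using System.Net;
using System.Net.Sockets;
using Microsoft.WindowsAzure.ServiceRuntime;

IPEndPoint endpoint = RoleEnvironment.CurrentRoleInstance
    .InstanceEndpoints["Endpoint1"].IPEndpoint;

TcpListener listener = new TcpListener(endpoint);
listener.Start();
// Accept and process incoming connections here.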

Local Storage

The <LocalStorage> element defines a temporary local storage space for the service role instance on the server running the role instance. It has three attributes: name, cleanOnRoleRecycle, and sizeInMB. The Fabric Controller reserves some space on the file system of the server on which the role instance of the service is running. The name attribute refers to the directory allocated by the Fabric Controller for storage, and cleanOnRoleRecycle specifies whether to clean the contents of the local storage across instance reboots. The sizeInMB attribute refers to the amount of space allocated for local storage; its minimum value is 1, meaning 1MB is the smallest amount of storage that can be allocated. The local storage does not have any relationship with the Windows Azure Storage service; it is a feature of the Compute service that provides temporary storage space for Web and Worker roles.

The Windows Azure runtime provides a static method, LocalResource GetLocalResource(string localResourceName), in the Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment class for getting a reference to a LocalResource object, which represents the local storage space reserved for the service. The localResourceName parameter is the name of the storage space, as defined in the name attribute of the <LocalStorage> element. In Listing 3-1, I allocate 10MB for the storage space named L1 on the local machine of the service role instance. I can then get a reference to the local storage space by calling LocalResource resource = RoleEnvironment.GetLocalResource("L1");.
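
For example, the following sketch writes a temporary file into the L1 storage space allocated in Listing 3-1; the file name and contents are arbitrary.

// Writing a temporary file into the "L1" local storage space.
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

LocalResource resource = RoleEnvironment.GetLocalResource("L1");
string filePath = Path.Combine(resource.RootPath, "cache.txt");
File.WriteAllText(filePath, "temporary data");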

Warning

Local storage space allocated on the local machine is local to that instance. Even if the cleanOnRoleRecycle attribute is set to false, the data in the local directory can be lost, for example, when the role instance is moved to a different server on recycle. So, while developing applications, you should treat local storage purely as an unreliable cache and build data loss checks into the application.

Full Trust Execution

By default, Windows Azure applications run under full trust in the cloud environment. When running under partial trust, the code has access only to limited resources and libraries. When running under full trust, cloud services can access certain system resources and can call managed assemblies as well as native code. To enable Full Trust in your application, set the enableNativeCodeExecution attribute of the <WebRole> or <WorkerRole> element in the ServiceDefinition.csdef file to true:

<WebRole name="<role name>" enableNativeCodeExecution="true|false">

Table 3-3 lists the permissions for a cloud application role running in partial and full trust execution modes.

Table 3.3. Partial and Full Trust Permissions

  • Call managed code assemblies
    Partial trust: Assemblies with the AllowPartiallyTrustedCallers attribute
    Full trust: All assemblies

  • System registry
    Partial trust: No access
    Full trust: Read access to HKEY_CLASSES_ROOT, HKEY_LOCAL_MACHINE, HKEY_USERS, and HKEY_CURRENT_CONFIG

  • 32-bit P/Invoke
    Partial trust: Not supported
    Full trust: Not supported

  • 64-bit P/Invoke
    Partial trust: Not supported
    Full trust: Supported

  • 32-bit native subprocess
    Partial trust: Not supported
    Full trust: Supported

  • 64-bit native subprocess
    Partial trust: Not supported
    Full trust: Supported

  • Local storage
    Partial trust: Full access
    Full trust: Full access

  • System root and its subdirectories
    Partial trust: No access
    Full trust: No access

  • Windows directory (e.g., C:\Windows) and its subdirectories
    Partial trust: No access
    Full trust: Read access

  • Machine configuration files
    Partial trust: No access
    Full trust: No access

  • Service configuration file (ServiceConfiguration.cscfg)
    Partial trust: Read access
    Full trust: Read access

Note

You can find more information on the Windows Azure partial trust policy in the Windows Azure SDK documentation at http://msdn.microsoft.com/en-us/library/dd573355.aspx .

Table 3-3 clearly shows that in partial trust you cannot call native code, and access to machine resources is limited. Even in full trust execution, access is limited to prevent system-related damage. Partial trust application roles can call only managed code assemblies that have the AllowPartiallyTrustedCallers attribute, whereas a full trust application role can call any managed code assembly. A partial trust application role cannot make any P/Invoke native calls. A full trust application role can make P/Invoke calls to a 64-bit library. P/Invoke calls to a 32-bit library are not directly supported in Windows Azure; instead, you could spawn a 32-bit subprocess from your application role and make P/Invoke calls to the 32-bit library from within that subprocess, as shown in the sketch after this paragraph. The system root directory (usually C:\Windows\system32) is not accessible in Windows Azure. A full trust application role has only read access to the Windows directory (usually C:\Windows). Both full and partial trust roles have full access to local storage, which is the recommended temporary file and data storage for Windows Azure applications.
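
For example, a full trust role could package a 32-bit helper executable with its deployment and delegate the 32-bit P/Invoke work to it. The helper name Host32.exe below is hypothetical.

// Spawning a hypothetical 32-bit helper process (Host32.exe) packaged
// with the role; the helper, not the role, makes the 32-bit P/Invoke calls.
using System.Diagnostics;

Process host = Process.Start(new ProcessStartInfo
{
    FileName = "Host32.exe", // hypothetical 32-bit helper shipped with the role
    UseShellExecute = false
});
host.WaitForExit();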

Warning

Resource access works differently in the Windows Azure cloud and the development fabric. In the Windows Azure cloud, the application role runs under the privileges of a standard Windows Azure account, whereas in the development fabric it runs under the logged-in user account. So, an application role running in the local development fabric may behave differently from the same application role running in the Windows Azure cloud environment.

Certificate Management

In Windows Azure, you can use certificates not only for encrypting the HTTPS endpoints of your Web and Worker roles but also for custom message-level encryption. You can upload X.509 certificates to your Windows Azure service either from the Windows Azure developer portal or by using the Service Management API. You can upload any number of certificates for the service, and these certificates will be installed in the Windows certificate stores of the role instances.

Once a certificate is uploaded to the service, it can be referenced in the ServiceDefinition.csdef and ServiceConfiguration.cscfg files. ServiceDefinition.csdef defines the name, store location, and store name of the certificate on the instance, as shown here:

<Certificate name="C1" storeLocation="LocalMachine" storeName="My" />

The ServiceConfiguration.cscfg file defines the thumbprint and the thumbprint algorithm of the certificate, as shown here:

<Certificate name="Certificate1" thumbprint="5CA27AF00E1759396Cxxxxxxxxxxxxxx"
    thumbprintAlgorithm="sha1" />
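
Because the uploaded certificate is installed in the Windows certificate store of each role instance, your code can load it with the standard .NET X509Store API. The following sketch looks the certificate up by thumbprint; the thumbprint value is a placeholder.

// Loading an uploaded certificate from the instance's LocalMachine\My
// store by thumbprint. The thumbprint string is a placeholder.
using System.Security.Cryptography.X509Certificates;

X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
X509Certificate2Collection matches = store.Certificates.Find(
    X509FindType.FindByThumbprint, "[Thumbprint]", false);
X509Certificate2 certificate = matches.Count > 0 ? matches[0] : null;
store.Close();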

ServiceConfiguration.cscfg

The ServiceConfiguration.cscfg file contains the values for the configuration parameters that apply to one or more instances of the service. You can have one configuration file per instance of the service, or multiple instances can share the same configuration file. Listing 3-2 shows the contents of the ServiceConfiguration.cscfg file corresponding to the ServiceDefinition.csdef file from Listing 3-1.

Example 3.2. ServiceConfiguration.cscfg

<?xml version="1.0"?>
<ServiceConfiguration serviceName="HelloAzureCloud" xmlns="
http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString"
        value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
  <Role name="WCFServiceWebRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString"
        value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="C1" thumbprint="xxxx"
        thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString"
       value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="C1"
        thumbprint="xxxx" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>

In Listing 3-2, there are three roles defined, two Web roles and a Worker role. Each role has only one instance.

Note

For more details on the ServiceConfiguration.cscfg schema, please visit http://msdn.microsoft.com/en-us/library/dd179389.aspx.

Configuration Settings

If you are a .NET developer, you should be familiar with the web.config and app.config files for .NET applications. These files define different kinds of runtime settings for the application. In these files, you can also create custom configuration settings, like database connection strings and web service URLs, to avoid hard-coding values. Similarly, Windows Azure allows you to create configuration settings in the ServiceDefinition.csdef and ServiceConfiguration.cscfg files. In ServiceDefinition.csdef, you define the configuration setting names for the entire service, and in ServiceConfiguration.cscfg, you set the values for these settings. Therefore, for every ConfigurationSetting defined in ServiceDefinition.csdef, there should be an equivalent value in the ServiceConfiguration.cscfg file. Listing 3-3 shows the definition of the ConfigurationSettings in ServiceDefinition.csdef, and Listing 3-4 shows the values of those ConfigurationSettings in ServiceConfiguration.cscfg.

Example 3.3. ConfigurationSettings Definition in ServiceDefinition.csdef

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole">
    <ConfigurationSettings>
      <Setting name="AccountName"/>
      <Setting name="AccountSharedKey"/>
      <Setting name="BlobStorageEndpoint"/>
      <Setting name="QueueStorageEndpoint"/>
      <Setting name="TableStorageEndpoint"/>
      <Setting name="ContainerName"/>
    </ConfigurationSettings>
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
  </WebRole>
</ServiceDefinition>

Example 3.4. ConfigurationSettings Values in ServiceConfiguration.cscfg

<?xml version="1.0"?>
<ServiceConfiguration serviceName="MyCloudService" xmlns="http://
schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="1"/>
    <ConfigurationSettings>
      <Setting name="AccountName" value="devstoreaccount1" />
      <Setting name="AccountSharedKey" value="Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzF" />
      <Setting name="BlobStorageEndpoint" value="http://127.0.0.1:10000"/>
      <Setting name="QueueStorageEndpoint" value="http://127.0.0.1:10001"/>
      <Setting name="TableStorageEndpoint" value="http://127.0.0.1:10002/" />
      <Setting name="ContainerName" value="XXXgallery"/>
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

In Listing 3-3, I have defined six configuration setting names, AccountName, AccountSharedKey, BlobStorageEndpoint, QueueStorageEndpoint, TableStorageEndpoint, and ContainerName, for the WebRole service, and in Listing 3-4, I set the values of these settings.
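
At run time, the role can read these values through the service runtime API. For example, this sketch reads two of the settings defined in Listings 3-3 and 3-4:

// Reading configuration setting values defined in ServiceConfiguration.cscfg.
using Microsoft.WindowsAzure.ServiceRuntime;

string accountName = RoleEnvironment.GetConfigurationSettingValue("AccountName");
string blobEndpoint = RoleEnvironment.GetConfigurationSettingValue("BlobStorageEndpoint");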

Development Fabric

The development fabric simulates the Windows Azure cloud runtime environment on your local machine. It is specifically designed for development and testing in your local environment; you cannot attach the development fabric to a Windows Azure cloud service. The development fabric user interface can be started in any of the following ways:

  • By debugging or running a cloud service from within Visual Studio.NET

  • By running CSRun.exe from the command line with valid parameters

  • By running DFUI.exe from the Windows Azure SDK bin directory

  • From the Windows Azure SDK programs Start menu

Once the development fabric starts, you can access it from the development fabric system tray icon. Figure 3-25 illustrates the development fabric user interface hosting a cloud service.

Development fabric UI

Figure 3.25. Development fabric UI

The development fabric UI shows the service deployments in the local environment and allows you to alter the state of a running service. You can run, suspend, restart or remove a service deployment from within the development fabric UI.

Figure 3-25 shows the CloudService4 service running with two instances of the Web role and two instances of the Worker role. The console windows on the right-hand side correspond to the instances of the deployed service. Each console window depicts the state and health of the instance and displays any logging information that the instance outputs. The Service Details node displays the service name, interface type, URL, and IP address for the service. When the development fabric starts a service, it first tries to use the TCP port for the input endpoint mentioned in the ServiceDefinition.csdef file, but if that port is not available, it uses the next available TCP port. Therefore, in some cases, the port on which the input endpoint is running may be different from the port in the ServiceDefinition.csdef file.

In the development fabric, you can attach a debugger to a running instance at runtime by right-clicking one of the instances and selecting Attach Debugger, as shown in Figure 3-26.

Development fabric's Attach Debugger button

Figure 3.26. Development fabric's Attach Debugger button

The development fabric UI will give you the option of selecting among the available debuggers on the local machine. It also allows you to set the logging levels at the service, role, and instance levels; the logging levels are accessible either from the Tools menu or by right-clicking the appropriate node.


Development Storage

Development storage simulates the Windows Azure blob, queue, and table storage services on your local computer. The development storage environment is specifically designed for development and testing on the local machine and therefore has several limitations compared to the Windows Azure Storage services in the cloud. Development storage provides a user interface to start, stop, reset, and view the local storage services, as shown in Figure 3-27.

Development storage UI

Figure 3.27. Development storage UI

Figure 3-27 shows the name of each service, its status, and the endpoint it is listening on.


The development storage environment depends on the SQL Server 2005/2008 database instance on the local machine and by default is configured for SQL Server Express 2005/2008 databases. You can change the development storage to point to another database using the DSInit.exe tool that you saw in Table 3-2, with a /sqlInstance parameter.
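
For example, based on the /sqlInstance parameter just described, you could point development storage at the default local SQL Server instance by running the following from the devstore directory of the SDK:

DSInit.exe /sqlInstance:.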

Note

Use the SQL Server instance name without the server qualifier, or use . (a period) for the default instance. To see all the parameters for DSInit.exe, go to the devstore directory of the Windows Azure SDK installation, and run DSInit.exe /? from the command prompt.

Table 3-4 lists some key limitations of development storage compared to Windows Azure cloud storage.

Table 3.4. Development Storage Limitations

  • Authentication: Development storage supports only a single fixed developer account with a well-known authentication key (see below).

  • Encryption: Development storage does not support HTTPS.

  • Scalability: Development storage is not designed to support a large number of concurrent clients. You should use development storage only for functional testing, not for performance or stress testing.

  • Flexibility: In the CTP version of Windows Azure, development table storage required a fixed schema to be created before the table service could be used, a constraint the cloud table service never had. Development storage no longer requires a fixed schema, but string properties in a development table cannot exceed 1,000 characters.

  • Size: The development blob service supports blobs of up to 2GB, whereas the cloud blob service supports blobs of up to 50GB.

In the case of authentication, the account name and account key are as follows:

Account name: devstoreaccount1

Account key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

Warning

Before deploying a storage service application to Windows Azure cloud, please make sure to change the development account information to your cloud account. You should not use the development storage account to access the Windows Azure storage service in the cloud.

Diagnostics

Logging support in the cloud is one of the biggest concerns of the developer community. With highly interactive integrated development environment (IDE) tools like Visual Studio .NET and runtime environments like the .NET Framework, you can pinpoint problems in your code even in deployed environments when applications are running on-premise. However, Visual Studio .NET is limited by the access it has to the application's runtime environment: it communicates with the runtime environment to gather debug information, and the application needs to have debug symbols loaded at runtime for Visual Studio .NET to debug it. The Windows Azure development fabric has access to the local runtime environment, so you can debug your local Windows Azure application like any other .NET application by adding breakpoints.

Unfortunately, the Windows Azure cloud environment is inaccessible to the local Visual Studio .NET environment. Once the service is deployed to Windows Azure, it is totally managed by Windows Azure, and you do not have access to its runtime. The Windows Azure team realized this and added logging capabilities to the Windows Azure runtime. The diagnostics service runs along with your role instance, collects diagnostics data as per the configuration, and can save the data to your Windows Azure Storage account if configured to do so. You can also communicate with the diagnostics service remotely from an on-premise application or configure it to persist the diagnostics data on a periodic basis. The diagnostics service supports logging of the following data types from your cloud service:

  • Windows Azure logs: These are the application logs that you dump from your application. These can be any messages emitted from your code.

  • Diagnostic monitor logs: These logs are about the diagnostics service itself.

  • Windows event logs: These are the Windows event logs generated on the machine on which the role instance is running.

  • Windows performance counters: These refer to subscriptions to the performance counters on the machine on which the role instance is running.

  • IIS logs and failed request traces: These are the IIS logs and the IIS failed request traces generated on the Web role instance.

  • Application crash dumps: These are the crash dumps generated when an application crashes.

The diagnostics gathering model in Windows Azure consists of two fundamental steps: configuration and management. In the configuration step, you configure the diagnostics service with the data types you want to collect diagnostics information on, and the diagnostics service starts collecting data for the configured data types accordingly. In the management step, you use the diagnostics management API provided by the Windows Azure SDK to change the configuration of an already running diagnostics service, and the diagnostics service reconfigures itself to collect the appropriate data. You can use the diagnostics management API from outside of the Windows Azure cloud environment (e.g., on-premise) to interact with the diagnostics service on your role instance. Using the same API, you can also perform scheduled or on-demand transfers of the diagnostics information from role instance machines to your Windows Azure storage account.

In this book, I will focus only on the Windows Azure logs data type, because it relates directly to the development of Windows Azure services. I will provide some examples of the rest of the data types but will not discuss them in detail. For more information on these data types, please refer to the diagnostics API in the Windows Azure SDK documentation. The diagnostics API is present in the Microsoft.WindowsAzure.Diagnostics assembly.
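
As a quick example of configuring some of the other data types, the following sketch (my own illustration, not code from the chapter's projects) subscribes to a processor performance counter and the Application event log before starting the diagnostics monitor:

DiagnosticMonitorConfiguration dmc =
    DiagnosticMonitor.GetDefaultInitialConfiguration();
//Sample total CPU utilization every 5 seconds
dmc.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
{
    CounterSpecifier = @"\Processor(_Total)\% Processor Time",
    SampleRate = TimeSpan.FromSeconds(5)
});
dmc.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
//Collect all entries written to the Application event log
dmc.WindowsEventLog.DataSources.Add("Application!*");
dmc.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc);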

Note

You can find more information about the Windows Azure Runtime API at the Windows Azure MSDN reference site: http://msdn.microsoft.com/en-us/library/dd179380.aspx.

Logging

The Windows Azure Runtime API consists of a managed code library and an unmanaged code library. In this book, I will cover only the managed code library. The managed code library namespace for diagnostics is Microsoft.WindowsAzure.Diagnostics. Associating diagnostics with your cloud service is a three-step process:

  1. Configure the trace listener.

  2. Define the storage location for the diagnostics service.

  3. Start the diagnostics service.

Configuring the Trace Listener

When you create a new role using the role templates in Visual Studio.NET, the web.config (for a Web role) or app.config (for a Worker role) file is created automatically in the role project and includes a trace listener provider, as shown in Listing 3-5.

Example 3.5. Diagnostics Trace Listener Configuration

<system.diagnostics>
  <trace>
    <listeners>
      <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener,
                 Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral,
                 PublicKeyToken=31bf3856ad364e35"
           name="AzureDiagnostics">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>

The DiagnosticMonitorTraceListener enables you to use the .NET Tracing API for logging within the code. You can use the Write() and WriteLine() methods of the System.Diagnostics.Trace class for logging from your code as shown here:

Trace.WriteLine("INFORMATION LOG", "Information");
Trace.WriteLine("CRITICAL LOG", "Critical");

Defining the Storage Location for the Diagnostics Service

In the ServiceDefinition.csdef and ServiceConfiguration.cscfg files, you have to define the diagnostics connection string pointing to the storage location of your choice (development storage or cloud storage). Visual Studio automatically generates this configuration for you as shown in Listing 3-6.

Example 3.6. Diagnostics Connection String Configuration

For development storage:

<ConfigurationSettings>
  <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true"/>
</ConfigurationSettings>

For cloud storage:

<ConfigurationSettings>
  <Setting name="DiagnosticsConnectionString"
      value="DefaultEndpointsProtocol=https;AccountName=proazurestorage;AccountKey=[YOURKEY]"/>
</ConfigurationSettings>

Starting the Diagnostics Service

Next, you have to start the diagnostics service in your role by passing in the connection string name you defined in step 2. If you create a role using the Visual Studio role templates, the WebRole.cs and WorkerRole.cs files contain the code for starting the diagnostics service in the OnStart() method: DiagnosticMonitor.Start("DiagnosticsConnectionString");.

Once started, the diagnostics monitoring service can start collecting the logged data. You can also choose to further configure the diagnostics service through the DiagnosticMonitorConfiguration class, as shown in Listing 3-7.

Example 3.7. Programmatically Changing the Diagnostics Configuration

//Get the default configuration
DiagnosticMonitorConfiguration dmc =
    DiagnosticMonitor.GetDefaultInitialConfiguration();
//Set the schedule to transfer logs every 10 mins to the storage
dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(10);
//Start Diagnostics Monitor with the storage account configuration
DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc);

In Listing 3-7, the diagnostics monitor is started with the configuration option to transfer logs to the defined storage every 10 minutes automatically.

Tip

When designing cloud applications, it is important to design diagnostics and log reporting right from the beginning. This will save you a lot of debugging time and help you create a high-quality application.

Developing Windows Azure Services with Inter-role Communication

In this example, you will learn to develop Windows Azure services in the local development fabric and in the cloud environment. I will also show you how to communicate across roles using internal endpoints. You will also learn to use your own configuration settings in ServiceDefinition.csdef and ServiceConfiguration.cscfg.

Objectives

The objectives of this example are as follows:

  • Understand inter-role communication in Windows Azure cloud services.

  • Access local machine resources.

  • Understand configuration settings for configuring cloud services.

Adding Diagnostics and Inter-role Communication

In this section, I will guide you through the code for adding diagnostics, configuration, and inter-role communication to Windows Azure services.

  1. Open Ch3Solution.sln from Chapter 3's source code directory.

  2. Expand the HelloService folder as shown in Figure 3-28.

HelloService folder

Figure 3.28. HelloService folder

The folder contains one Web role, one Worker role, and one cloud service project: HelloWebRole, HelloWorkerRole, and the HelloAzureCloud cloud service, respectively.

Service Model

The ServiceDefinition.csdef and ServiceConfiguration.cscfg files define the service model and configuration values for the service. Listing 3-8 shows the ServiceDefinition.csdef for the HelloAzureCloud service.

Example 3.8. ServiceDefinition.csdef for the HelloAzureCloud Service

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="HelloAzureCloud" xmlns="http://schemas.
microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="HelloWebRole" enableNativeCodeExecution="true">
<LocalResources>
    <LocalStorage name="HelloAzureWorldLocalCache" sizeInMB="10" />
  </LocalResources>
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <!--This is the current logging level of the service -->
      <Setting name="LogLevel" />
      <Setting name="ThrowExceptions" />
      <Setting name="EnableOnScreenLogging" />
    </ConfigurationSettings>
  </WebRole>
  <WorkerRole name="HelloWorkerRole" enableNativeCodeExecution="true">
    <Endpoints>
      <!-- Defines an internal endpoint for inter-role communication
that can be used to communicate between worker or Web role instances -->
      <InternalEndpoint name="MyInternalEndpoint" protocol="tcp" />
      <!-- This is an external endpoint that allows a role to listen
 on external communication, this could be TCP, HTTP or HTTPS -->
      <InputEndpoint name="MyExternalEndpoint" port="9001" protocol="tcp" />
    </Endpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>

The service model defines an external HTTP endpoint (input endpoint) for the HelloWebRole listening on port 80, and both an internal and an external endpoint for the HelloWorkerRole. HelloWebRole also defines a local storage resource named HelloAzureWorldLocalCache with a maximum size of 10MB. Both roles define a configuration setting named DiagnosticsConnectionString for diagnostics.
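
The local storage resource is reachable from code through the runtime API. A minimal sketch, assuming the HelloAzureWorldLocalCache resource defined in Listing 3-8 (the file name is purely an illustration):

//RootPath points at the instance-local directory backing the
//<LocalStorage> element; contents do not survive redeployment
LocalResource cache = RoleEnvironment.GetLocalResource("HelloAzureWorldLocalCache");
string filePath = Path.Combine(cache.RootPath, "scratch.txt"); //hypothetical file
File.WriteAllText(filePath, "cached on this instance only");

Listing 3-9 shows the ServiceConfiguration.cscfg file for the HelloAzureCloud service.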

Example 3.9. ServiceConfiguration.cscfg for the HelloAzureCloud Service

<?xml version="1.0"?>
<ServiceConfiguration serviceName="HelloAzureCloud" xmlns="http://
schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="HelloWebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="Default
EndpointsProtocol=https;AccountName=proazurestorage;AccountKey=Ry " />
      <!--This is the current logging level of the service -->
      <!--Supported Values are Critical,
      Error,Warning,Information,Verbose-->
      <Setting name="LogLevel" value="Information" />
      <Setting name="ThrowExceptions" value="true" />
<Setting name="EnableOnScreenLogging" value="true" />
    </ConfigurationSettings>
  </Role>
  <Role name="HelloWorkerRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="Default
EndpointsProtocol=https;AccountName=proazurestorage;AccountKey=Ry " />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

The ServiceConfiguration.cscfg file defines the values for the model you defined in ServiceDefinition.csdef. In addition, it also allows you to define the number of instances for your role in the Instances element. In Listing 3-9, two instances of the Web role and one instance of the Worker role are running. Also note that the diagnostics connection string points to a cloud storage account named proazurestorage. You must replace this with your own storage account or simply set its value to UseDevelopmentStorage=true to use the development storage.
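
The custom settings (LogLevel, ThrowExceptions, and EnableOnScreenLogging) are read at runtime with the same API as the built-in ones; a minimal sketch:

//Values come from the <ConfigurationSettings> section of ServiceConfiguration.cscfg
string logLevel = RoleEnvironment.GetConfigurationSettingValue("LogLevel");
bool throwExceptions =
    bool.Parse(RoleEnvironment.GetConfigurationSettingValue("ThrowExceptions"));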

Worker Role

HelloWorkerRole implements two methods, OnStart() and Run(). In the OnStart() method, it also subscribes to the role Changing event to catch any configuration changes.

Note

In both the Web and the Worker roles, you need to add references to the following assemblies: Microsoft.WindowsAzure.ServiceRuntime.dll and Microsoft.WindowsAzure.Diagnostics.dll. And you need to add the following using statements in code: using Microsoft.WindowsAzure.ServiceRuntime; and using Microsoft.WindowsAzure.Diagnostics;.

Listing 3-10 shows the code for the HelloWorkerRole class.

Example 3.10. HelloWorkerRole

public override void Run()
{
    Trace.WriteLine("HelloWorkerRole entry point called", "Information");
    var internalEndpoint =
        RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["MyInternalEndpoint"];
    var wcfAddress = new Uri(String.Format("net.tcp://{0}",
        internalEndpoint.IPEndpoint.ToString()));
    Trace.WriteLine(wcfAddress.ToString());
    var wcfHost = new ServiceHost(typeof(HelloServiceImpl), wcfAddress);
    var binding = new NetTcpBinding(SecurityMode.None);
    wcfHost.AddServiceEndpoint(typeof(IHelloService), binding, "helloservice");
    try
    {
        wcfHost.Open();
        while (true)
        {
            Thread.Sleep(10000);
            Trace.WriteLine("Working", "Information");
        }
    }
    finally
    {
        wcfHost.Close();
    }
}

public override bool OnStart()
{
    //Get the default configuration
    DiagnosticMonitorConfiguration dmc =
        DiagnosticMonitor.GetDefaultInitialConfiguration();
    //Set the schedule to transfer logs every 10 mins to the storage
    dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(10);
    //Start Diagnostics Monitor with the storage account configuration
    DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc);

    RoleEnvironment.Changing += RoleEnvironmentChanging;
    return base.OnStart();
}

private void RoleEnvironmentChanging(object sender,
    RoleEnvironmentChangingEventArgs e)
{
    if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        e.Cancel = true;
}

The OnStart() method starts the diagnostics service with a scheduled log transfer to the storage and also subscribes to the Changing event of the Windows Azure runtime to detect any changes to the configuration. You can handle the RoleEnvironment.Changing event to capture the following changes:

  • RoleEnvironmentConfigurationSettingChange to detect the changes in the service configuration.

  • RoleEnvironmentTopologyChange to detect the changes to the role instances in the service.

In addition, you can remove a role instance from the load balancer after the service has started by subscribing to the RoleEnvironment.StatusCheck event and calling the SetBusy() method on the RoleInstanceStatusCheckEventArgs. You can also request a recycle of the role instance on demand by calling the RoleEnvironment.RequestRecycle() method. For more information on the runtime API, please see the Microsoft.WindowsAzure.ServiceRuntime namespace in the Windows Azure SDK class documentation.
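
For illustration, here is a minimal sketch of taking an instance out of rotation during a status check; the drainRequested field is a hypothetical flag that your own code would maintain:

private static volatile bool drainRequested = false; //hypothetical flag

public override bool OnStart()
{
    RoleEnvironment.StatusCheck += (sender, e) =>
    {
        //While the instance reports busy, the load balancer stops
        //routing new requests to it
        if (drainRequested)
            e.SetBusy();
    };
    return base.OnStart();
}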

Because the diagnostics service is configured to save all the logs to the Windows Azure storage, all the Trace.WriteLine() statements will be sent to the storage periodically. The Run() method gets a reference to the internal endpoint named MyInternalEndpoint from the service definition, retrieves its IP address, and creates a WCF service host for HelloServiceImpl. Once the WCF host is open on the internal IP address and port, any role in the service can make WCF method calls to it. Listing 3-11 shows the code for IHelloService and HelloServiceImpl.

Example 3.11. Hello Contract

[ServiceContract(Namespace = "http://proazure/helloservice")]
interface IHelloService
{
    [OperationContract]
    string GetMyIp();

    [OperationContract]
    string GetHostName();
}

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class HelloServiceImpl : IHelloService
{
    #region IHelloService Members

    public string GetMyIp()
    {
        IPAddress[] ips = Dns.GetHostAddresses(Dns.GetHostName());
        if (ips != null)
        {
            foreach (IPAddress i in ips)
            {
                if (i.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork)
                    return i.ToString();
            }
        }
        return "";
    }

    #endregion

    public string GetHostName()
    {
        return Dns.GetHostName();
    }
}

The IHelloService interface defines only two methods, which retrieve the IP address and the host name of the machine on which the Worker role is running. The HelloServiceImpl class implements these two methods.

Web Role

The user interface for the Web role is in Default.aspx. It is designed to perform a few operations when you click the Get Machine Info button. Figure 3-29 illustrates the user interface design of the Default.aspx page.

Default.aspx User Interface Design

Figure 3.29. Default.aspx User Interface Design

When you click the Get Machine Info button, the page retrieves the machine name, host address, and local storage path, and calls the HelloWorkerRole service through an internal endpoint. You can also upload a file to the local storage using the Upload File button. All the functions use traditional .NET APIs for retrieving local file and network information of the machine. If you are developing the service from scratch, you will have to add a reference to the HelloWorkerRole WCF service. In the HelloWebRole project, the reference has already been added for you in the ClientProxy.cs file. Listing 3-12 shows the code for calling the HelloWorkerRole WCF service.

Example 3.12. Call Worker Role WCF Service

string wrIp = RoleEnvironment.Roles["HelloWorkerRole"].Instances[0]
    .InstanceEndpoints["MyInternalEndpoint"].IPEndpoint.ToString();
lblWREndpointAddress.Text = wrIp;
var serviceAddress = new Uri(String.Format("net.tcp://{0}/{1}", wrIp, "helloservice"));
var endpointAddress = new EndpointAddress(serviceAddress);
var binding = new NetTcpBinding(SecurityMode.None);
var client = new ClientProxy(binding, endpointAddress);
lblWRHostName.Text = client.GetHostName();
lblWRIp.Text = client.GetMyIp();

In Listing 3-12, the Web role gets a reference to the internal endpoint of a Worker role instance and instantiates the ClientProxy object to call the IHelloService methods GetHostName() and GetMyIp().

Note

An important point to note here is that endpoints are exposed only as IP addresses of the instances; you still have to build your own server in the form of a TcpListener, WCF service, or HTTP service on that IP address.
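
For example, if you did not want WCF, a raw TcpListener bound to the same internal endpoint would do; a minimal sketch, assuming the MyInternalEndpoint definition from Listing 3-8:

//The runtime hands you only an IP address and port; the server is yours to build
IPEndPoint ep = RoleEnvironment.CurrentRoleInstance
    .InstanceEndpoints["MyInternalEndpoint"].IPEndpoint;
TcpListener listener = new TcpListener(ep); //requires System.Net.Sockets
listener.Start();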

Running the HelloAzureCloud Service

To build and run the solution, press F5 with HelloAzureCloud as the startup project to start it in debug mode. Click the Get Machine Info button. Figure 3-30 illustrates the HelloAzureCloud Web role application running on the local machine.

HelloAzureCloud on local machine

Figure 3.30. HelloAzureCloud on local machine

Open the development fabric UI by clicking the development fabric icon in the system tray. Figure 3-31 shows the development fabric UI running two instances of the Web role and one instance of the Worker role.

HelloAzureCloud development fabric two instances

Figure 3.31. HelloAzureCloud development fabric two instances

The information is logged in the console of either instance 0 or instance 1, depending on where the load balancer sends the request. If you click the Get Machine Info button in quick succession, you will see the requests being load balanced across both instances. Figure 3-32 shows the load-balanced requests across the two instances of the Web role application.

Load Balance across two instances of HelloAzureCloud service

Figure 3.32. Load Balance across two instances of HelloAzureCloud service

In Figure 3-32, observe the logs in the consoles of both the instances of HelloAzureCloud Web roles.

Now that you have tested the cloud service in the development fabric, you can deploy it to the Windows Azure cloud. When you deploy the application in the cloud, the consoles you see in the development fabric are not available for viewing logs, so let's see how you can access and view these logs in the cloud.

To deploy the HelloAzureCloud service to Windows Azure, right-click the HelloAzureCloud project, and select Publish to create the service package HelloAzureCloud.cspkg, as shown in Figure 3-33.

Publish to Windows Azure

Figure 3.33. Publish to Windows Azure

To upload the service package to Windows Azure, you need to log in to the Windows Azure developer portal using your Live ID. Once you've logged in, go to the HelloWindowsAzure project that you created in Chapter 2. If you did not create a project in Chapter 2, create a new one by following the steps from the Chapter 2 example.

On the project page, click the Deploy button in the staging environment as shown in Figure 3-34.

Deploy to Windows Azure Staging

Figure 3.34. Deploy to Windows Azure Staging

On the Staging Deployment page, browse to the HelloAzureCloud.cspkg that was created when you published the cloud service from Visual Studio.NET. Next, browse to the ServiceConfiguration.cscfg file that was created along with HelloAzureCloud.cspkg.

Label the deployment as Hello Azure Service, as shown in Figure 3-35.

Deploying the service package to staging

Figure 3.35. Deploying the service package to staging

When the package gets deployed, the staging environment cube image will change its color to blue, as shown in Figure 3-36.

Deployed staging environment

Figure 3.36. Deployed staging environment

The diagnostics features in Windows Azure allow you to store the logs in Windows Azure storage or in the local development storage. The Windows Azure logs generated using the System.Diagnostics.Trace class are stored in Windows Azure table storage by default. In the example above, the logs are configured to be automatically copied to table storage every 10 minutes. To view your application's logs in Windows Azure table storage, you need a storage account. You can create a new storage project from the Windows Azure developer portal and configure the diagnostics connection string in the HelloAzureCloud service to point to that storage account.

If you don't have a storage service already created in Windows Azure, go to the All Services page, and click New Service, as shown in Figure 3-37.

Create a new service

Figure 3.37. Create a new service

On the "Create a new service component" page, select Storage Account, as shown in Figure 3-38.

Create a new storage account

Figure 3.38. Create a new storage account

On the Project Properties page, label the project Pro Azure Storage, give it some description, and click Next, as shown in Figure 3-39.

ProAzureStorage Project Properties

Figure 3.39. ProAzureStorage Project Properties

Next create a storage account URL, as shown in Figure 3-40.

Create a storage account URL

Figure 3.40. Create a storage account URL

Note

The storage account name must be globally unique. If there is already an account named proazurestorage, you should choose a different name; you can click Check Availability to verify that the name you want is available.

Once you create the storage account, you will be taken to a page that displays the endpoints for blob, queue, and table storage, as shown in Figure 3-41. For the purposes of this example, you don't need these URLs.

ProAzureStorage Storage Page

Figure 3.41. ProAzureStorage Storage Page

Now that you have the storage created, you can configure the HelloAzureCloud service diagnostics to point to this storage account.

To start the HelloAzureCloud service, go back to the HelloAzureCloud Windows Azure page, and click the Run button in the staging environment to run the web application.

When the state of the staging deployment changes from Initializing to Started, click the Web Site URL to test the web application, as shown in Figure 3-42.

Start the staging application

Figure 3.42. Start the staging application

The Default.aspx for the web application shows up in a separate browser window, as shown in Figure 3-43.

HelloAzureCloud with Logging

Figure 3.43. HelloAzureCloud with Logging

Compare the values generated by the page when you click Get Machine Info with the values generated in the development environment.

Tip

Keep a watch on the machine name as you test your web application. It changes depending on the instance that your request goes to. Currently, the web application is running with two instances.

Now that the logs are copied to table storage, you should be able to access them using the Windows Azure Table Storage API. There is also a nice Windows client application called Azure Storage Explorer, which I usually use to explore storage accounts. Azure Storage Explorer is free and is available on CodePlex.
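
If you prefer code to a GUI tool, a minimal sketch of reading the log table with the StorageClient library might look like the following; the WadLogEntry class is my own illustration and models only a few of the columns the diagnostics service writes:

public class WadLogEntry : TableServiceEntity
{
    public string Role { get; set; }
    public string RoleInstance { get; set; }
    public string Message { get; set; }
}

CloudStorageAccount account =
    CloudStorageAccount.Parse("UseDevelopmentStorage=true");
TableServiceContext context =
    account.CreateCloudTableClient().GetDataServiceContext();
//Pull a handful of entries from the standard WAD log table
foreach (WadLogEntry entry in
    context.CreateQuery<WadLogEntry>("WADLogsTable").Take(20))
{
    Console.WriteLine("{0}/{1}: {2}", entry.Role, entry.RoleInstance, entry.Message);
}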

Note

Azure Storage Explorer can be downloaded at http://azurestorageexplorer.codeplex.com/.

Download, install, and run Azure Storage Explorer using the executable AzureStorageExplorer.exe. From the Tools menu, add your storage account by entering the account name, as shown in Figure 3-44.

Enter account Name in Azure Storage Explorer

Figure 3.44. Enter account Name in Azure Storage Explorer

The account name is the first segment of the storage endpoint URL, http://<accountName>.table.core.windows.net. In Figure 3-44, proazurestorage is the account name.

Warning

A common mistake is to enter the account label instead of the account name. The account label is the label of the project. In many cases, the label and the name may be the same, depending on how the account was created. However, the account name is always the first segment of the storage endpoint URL.

Note that the first account in the Accounts list is the local development storage account.

To see the logs in development storage, open development storage in Azure Storage Explorer, click the Tables tab, and click WADLogsTable to see the list of log entries, as shown in Figure 3-45.

Explore logs

Figure 3.45. Explore logs

Developing a Web Role to Worker Role Message Exchange

In this example, you will develop a Worker role service that calls a Windows Communication Foundation (WCF) service hosted in a Web role. Even though there is interaction between the two roles, I have kept the cloud service at a conceptual level and attempted to cover most of the topics discussed in the chapter so far.

Objectives

The objectives of this example are as follows:

  • Understand Windows Azure Worker role development.

  • Understand configuration settings for Windows Azure applications.

  • Work with local storage.

  • Host a WCF service in a Web role.

  • Call a Web role WCF service from a Worker role.

Service Architecture

The message exchange service between the Web and Worker roles is a role-monitoring service. It displays the system properties of the Windows Azure role instances running in your Windows Azure cloud service. For example, if your service has two Worker role instances and one Web role instance, all three instances register with the central Web role application, which displays the system properties of all three. Figure 3-46 illustrates the application architecture for this example.

Application architecture

Figure 3.46. Application architecture

The service consists of a cloud service project with one Web role application and one Worker role application. The Web role has two logical components—the web application and the SystemInfo WCF service. The SystemInfo WCF service receives system information messages from the Web role instance as well as Worker role instances. The SystemInfo WCF service saves these messages to the local storage of the Web role. The ASP.NET page reads the stored messages from the local storage and displays them in a GridView control. The SystemInfo WCF service and the ASP.NET web application both run in the same Web role instance and thus share the same local storage. The web page has an AJAX timer that refreshes the web page periodically for displaying the latest information from the local storage.

The Worker role has one main logical component, the Worker role service. The Worker role service reads the system information from the underlying operating system it is running on and calls the SystemInfo WCF service in the Web role periodically to send the latest information.

System Information Message

The system information message is the data contract between the Windows Azure roles and the SystemInfo WCF service: a dataset object exchanged between the role instances and the service. Any role instance running in Windows Azure can send a system information message to the SystemInfo WCF service by calling the WCF method SendSystemInfo(SystemMessageExchange ds). Figure 3-47 illustrates the dataset table schema exchanged between the Windows Azure roles and the SystemInfo WCF service.

System Information Message

Figure 3.47. System Information Message

The system information dataset consists of the fields described in Table 3-5.

Table 3.5. System Information Dataset Fields

  • MachineName: This is the name of the underlying machine the role instance is running on. I am using the System.Environment.MachineName property for retrieving the machine name.

  • OSVersion: This is the version of the underlying operating system the role instance is running on. I am using the System.Environment.OSVersion.VersionString property for retrieving the version of the operating system.

  • LocalStoragePath: This is the actual path of the local storage in the underlying operating system. I am using the LocalResource.RootPath property (obtained via RoleEnvironment.GetLocalResource()) to retrieve the local storage path.

  • WindowsDirectory: This is the Windows operating system directory. Usually, this maps to C:\Windows on most operating system installations. I am using the System.Environment.GetEnvironmentVariable("windir") method for retrieving the Windows directory path of the underlying operating system instance.

  • SystemDirectory: This is the system directory of the underlying Windows operating system. I am using the System.Environment.SystemDirectory property for retrieving the system directory path of the underlying operating system instance.

  • CurrentDirectory: This is the path of the current working directory in the underlying Windows operating system. I am using the System.Environment.CurrentDirectory property for retrieving the current working directory in the underlying operating system instance.

  • UserDomainName: This is the network domain name associated with the currently logged-in user. I am using the System.Environment.UserDomainName property for retrieving the user domain name.

  • UserName: This is the user name of the currently logged-in user. I am using the System.Environment.UserName property for retrieving the user name.

  • Role: This is the role type of the service instance (i.e., Web or Worker).

  • Timestamp: This is the creation timestamp of the system message object.

The Components of the Solution

In this section, I will go over the different components of the solution and some key methods.

The Visual Studio.NET solution for this example is called Ch3Solution.sln. It can be found in the Chapter 3 source directory. Open the Ch3Solution.sln file in Visual Studio.NET. The projects that are referenced in this example are ProAzureCommonLib, WebWorkerExchange, WebWorkerExchange_WebRole, and WebWorkerExchange_WorkerRole. Table 3-6 describes the role of each project in the service architecture.

Table 3.6. Visual Studio.NET Projects

  • ProAzureCommonLib: This is a class library project that consists of helper classes and methods for logging, configuration, and local storage. It also contains the definition of the SystemMessageExchange.xsd dataset for sending system information to the SystemInfo WCF service.

  • WebWorkerExchange: This is the cloud service project that has the ServiceDefinition.csdef and ServiceConfiguration.cscfg files. This project also contains references to the Web role and Worker role projects in the solution.

  • WebWorkerExchange_WebRole: This is the Web role project that contains the SystemInfo WCF service and the ASP.NET web page for displaying the system information of the role instances.

  • WebWorkerExchange_WorkerRole: This is the Worker role project that calls the SystemInfo WCF service for sending the system information.

Figure 3-48 illustrates the four projects in Visual Studio.NET Solution Explorer.

Visual Studio.NET projects

Figure 3.48. Visual Studio.NET projects

Now, let's go over each project to explore the implementation of classes and functions.

Creating the ProAzureCommonLib Class Library Project

The ProAzureCommonLib project is a class library containing two files relevant to this example: WindowsAzureSystemHelper.cs and SystemMessageExchange.xsd. WindowsAzureSystemHelper.cs contains the WindowsAzureSystemHelper class, which consists entirely of static helper methods. SystemMessageExchange.xsd is the dataset that defines the contract between the role instances and the WCF service; it was discussed previously in the "System Information Message" section. The WindowsAzureSystemHelper class has four main sections of helper methods: logging, configuration, local storage, and system information.

Logging

The logging section has two helper methods wrapping the Trace.WriteLine() function. Listing 3-13 shows the LogError() and LogInfo() helper methods used in the application.

Example 3.13. Logging Helper Methods

public static void LogError(string message)
{
    Trace.WriteLine(String.Format("{0} on machine {1}", message,
        Environment.MachineName), "Error");
}

public static void LogInfo(string message)
{
    Trace.WriteLine(String.Format("{0} on machine {1}", message,
        Environment.MachineName), "Information");
}

The LogError() and LogInfo() methods accept a string message parameter that is logged using the Trace.WriteLine() method. I also append the System.Environment.MachineName property to the message for readability when you are looking at logs from multiple machines.

Note

You need native code access to read the System.Environment.MachineName property from either the Web or the Worker role. In the ServiceDefinition.csdef file, set enableNativeCodeExecution="true" for both roles.

Configuration

The configuration section consists of helper methods for reading configuration values from the ServiceConfiguration.cscfg file. The helper methods act as wrappers over the RoleEnvironment.GetConfigurationSettingValue() method and return data type-specific values. Listing 3-14 shows the configuration helper method for retrieving a Boolean configuration value.

Example 3.14. Configuration Helper Method

public static bool GetBooleanConfigurationValue(string configName)
{
    try
    {
        bool ret;
        if (bool.TryParse(
            RoleEnvironment.GetConfigurationSettingValue(configName), out ret))
        {
            return ret;
        }
        else
        {
            LogError(String.Format(
                "Could not parse value for configuration setting {0}", configName));
            throw new Exception(String.Format(
                "Could not parse value for configuration setting {0}", configName));
        }
    }
    catch (Exception ex)
    {
        LogError(ex.Message);
        throw; //rethrow without resetting the stack trace
    }
}

The GetBooleanConfigurationValue() method accepts a configuration name and converts the string value returned by the RoleEnvironment.GetConfigurationSettingValue() method to a Boolean value. The method also logs an error if the conversion fails. The configuration section has similar helper functions for retrieving string, integer, and double configuration values, as sketched below.
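
For instance, the integer variant might look like the following sketch, which mirrors the Boolean helper (the actual implementation in ProAzureCommonLib may differ slightly):

public static int GetIntConfigurationValue(string configName)
{
    int ret;
    if (int.TryParse(
        RoleEnvironment.GetConfigurationSettingValue(configName), out ret))
    {
        return ret;
    }
    //Same error handling pattern as the Boolean helper
    LogError(String.Format(
        "Could not parse value for configuration setting {0}", configName));
    throw new Exception(String.Format(
        "Could not parse value for configuration setting {0}", configName));
}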

Local Storage

The local storage section contains helper methods for accessing Windows Azure local storage. Local storage is a file system directory on the server on which a particular role instance is running. The contents of the local storage are not persistent across instance failures and restarts, because the role may be redeployed to another virtual machine after a failure or a restart. Listing 3-15 shows the two static helper methods used for accessing the local storage.

Example 3.15. Local Storage Access Methods

public static string GetLocalStorageRootPath(string localStorageName)
{
    try
    {
        LocalResource resource = RoleEnvironment.GetLocalResource(localStorageName);
        return resource.RootPath;
    }
    catch (Exception ex)
    {
        LogError(String.Format("Error in GetLocalStorageRootPath of {0}. {1}",
            "WindowsAzureSystemHelper", ex.Message));
        throw; //rethrow without resetting the stack trace
    }
}

public static bool CanAccessLocalStorage(string localStorageName)
{
    WindowsAzureSystemHelper.LogInfo("Can access Local Storage?");
    bool ret = false;
    try
    {
        string fp = WindowsAzureSystemHelper
            .GetLocalStorageRootPath(localStorageName) + "proazure.txt";
        using (StreamWriter sw = File.CreateText(fp))
        {
            WindowsAzureSystemHelper.LogInfo("Created File " + fp);
            sw.WriteLine("This is a Pro Azure file.");
            WindowsAzureSystemHelper.LogInfo("Wrote in File " + fp);
        }
        string fpNew = WindowsAzureSystemHelper
            .GetLocalStorageRootPath(localStorageName) + "proazure2.txt";
        File.Copy(fp, fpNew);
        string fpNew2 = WindowsAzureSystemHelper
            .GetLocalStorageRootPath(localStorageName) + "proazure3.txt";
        File.Move(fp, fpNew2);
        WindowsAzureSystemHelper.LogInfo("Deleting File " + fpNew2);
        File.Delete(fpNew2);
        WindowsAzureSystemHelper.LogInfo("Deleted File " + fpNew2);
        WindowsAzureSystemHelper.LogInfo("Deleting File " + fpNew);
        File.Delete(fpNew);
        WindowsAzureSystemHelper.LogInfo("Deleted File " + fpNew);
        ret = true;
    }
    catch (Exception ex)
    {
        WindowsAzureSystemHelper.LogError("Error in CanAccessSystemDir " + ex.Message);
    }
    return ret;
}

Two methods are defined in Listing 3-15: GetLocalStorageRootPath() and CanAccessLocalStorage(). The GetLocalStorageRootPath() method calls the RoleEnvironment.GetLocalResource() method and returns the root path property of the local storage. The CanAccessLocalStorage() method executes a test to check whether the role instance can access the local storage by creating, copying, moving, and deleting files. The method returns true if all the tests pass; otherwise, it returns false. Both methods accept as a parameter the local storage name defined in the ServiceDefinition.csdef file.

The other two methods in the local storage section are WriteLineToLocalStorage() and ReadAllLinesFromLocalStorage(). WriteLineToLocalStorage() appends a line of text to the specified file in the local storage, creating the file if it does not exist; the writeDuplicateEntries parameter specifies whether duplicate entries are allowed in the file. The ReadAllLinesFromLocalStorage() method reads all the lines of text from the specified file in the local storage and returns them as an IList<string> data structure. Listing 3-16 shows the code for both methods.

Example 3.16. Write and Read Text to Local Storage

public static void WriteLineToLocalStorage(string fileName,
    string localStorageName, string message, bool writeDuplicateEntries)
{
    LogInfo(message);
    string path = GetLocalStorageRootPath(localStorageName);
    path = Path.Combine(path, fileName);
    string entry = String.Format("{0}{1}", message, Environment.NewLine);
    bool write = true;
    if (!writeDuplicateEntries)
    {
        if (!File.Exists(path))
        {
            using (StreamWriter sw = File.CreateText(path))
            {
            }
        }
        string[] lines = File.ReadAllLines(path, Encoding.UTF8);
        if (lines != null && lines.Length > 0)
        {
            if (lines.Contains<string>(message))
            {
                write = false;
            }
        }
    }
    if (write)
    {
        File.AppendAllText(path, entry, Encoding.UTF8);
    }
}

public static IList<string> ReadAllLinesFromLocalStorage(string fileName,
    string localStorageName)
{
    List<string> messages = new List<string>();
    string path = Path.Combine(GetLocalStorageRootPath(localStorageName), fileName);
    if (File.Exists(path))
    {
        using (FileStream stream = File.Open(
            path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            StreamReader reader = new StreamReader(stream, Encoding.UTF8);
            while (true)
            {
                string line = reader.ReadLine();
                if (line == null) break;
                messages.Add(line);
            }
        }
    }
    return messages;
}

System Information

The system information section contains only one helper method, GetSystemInfo(), for retrieving the system information and returning the SystemMessageExchange dataset. Listing 3-17 shows the code for the GetSystemInfo() method.

Example 3.17. GetSystemInfo() Method

public static SystemMessageExchange GetSystemInfo(string localStorageName,
    string role)
{
    try
    {
        SystemMessageExchange ds = new SystemMessageExchange();
        SystemMessageExchange.SystemInfoRow row = ds.SystemInfo.NewSystemInfoRow();
        row.CurrentDirectory = Environment.CurrentDirectory;
        try
        {
            row.LocalStoragePath = GetLocalStorageRootPath(localStorageName);
        }
        catch (Exception ex1)
        {
            LogError(ex1.Message);
        }
        row.MachineName = Environment.MachineName;
        row.OSVersion = Environment.OSVersion.VersionString;

        string dir;
        if (CanAccessSystemDir(out dir))
        {
            row.SystemDirectory = dir;
        }
        if (CanAccessWindowsDir(out dir))
        {
            row.WindowsDirectory = dir;
        }
        row.UserDomainName = Environment.UserDomainName;
        row.UserName = Environment.UserName;
        row.Role = role;
        row.Timestamp = DateTime.Now.ToString("s");
        ds.SystemInfo.AddSystemInfoRow(row);

        return ds;
    }
    catch (Exception ex)
    {
        LogError("GetSystemInfo " + ex.Message);
    }
    return null;
}

The GetSystemInfo() method accepts the local storage name and the role name of the instance as parameters. It then creates an instance of the SystemMessageExchange dataset and a new row for the SystemInfo DataTable. The data row is filled with the appropriate system information values and added to the SystemInfo DataTable using the AddSystemInfoRow() method. The dataset is then returned from the function. Note that this method can be called from either the Web or the Worker role to retrieve the system information from the underlying operating system.

WebWorkerExchange Cloud Service Project

This is the cloud service project created using the Web and Worker Cloud Service project template in Visual Studio.NET. It contains the ServiceDefinition.csdef and ServiceConfiguration.cscfg files. The Roles folder in the project contains references to the Web role (WebWorkerExchange_WebRole) and the Worker role (WebWorkerExchange_WorkerRole) projects from the same solution.

ServiceDefinition.csdef

The ServiceDefinition.csdef file defines the service model for the project. Listing 3-18 shows the contents of the ServiceDefinition.csdef file.

Example 3.18. ServiceDefinition.csdef

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WebWorkerExchange" xmlns="http://schemas.
microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole" enableNativeCodeExecution="true">
    <LocalStorage name="SystemInfoWebLocalCache" sizeInMB="10"/>
    <InputEndpoints>
<!-- Must use port 80 for http and port 443 for https when running in the cloud -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
     <Setting name="DiagnosticsConnectionString" />
      <!--This is the current logging level of the service -->
      <Setting name="LogLevel"/>
      <Setting name="ThrowExceptions"/>
      <Setting name="EnableOnScreenLogging"/>
    </ConfigurationSettings>
  </WebRole>
  <WorkerRole name="WorkerRole" enableNativeCodeExecution="true">
    <LocalStorage name="SystemInfoWorkerLocalCache" sizeInMB="10"/>
<ConfigurationSettings>
     <Setting name="DiagnosticsConnectionString" />
      <!--This is the current logging level of the service -->
      <Setting name="LogLevel"/>
      <Setting name="ThreadSleepTimeInMillis"/>
      <Setting name="SystemInfoServiceURL"/>
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>

In Listing 3-18, a Web role and a Worker role are defined. Both have native code execution enabled for accessing System.Environment properties, and both define local storage folders. The configuration settings defined in the Web role are mainly related to logging and exception handling. In the Worker role, the ThreadSleepTimeInMillis setting defines the sleep time for the worker thread, which corresponds to the interval at which the SendSystemInfo() WCF method is called. The SystemInfoServiceURL setting is the URL of the SystemInfo WCF service to which the system information is sent. The values of these settings are set in the ServiceConfiguration.cscfg file. You have to change the SystemInfoServiceURL value when you deploy the service to the cloud staging and production environments.

ServiceConfiguration.cscfg

The ServiceConfiguration.cscfg file contains the values of the configuration settings defined in ServiceDefinition.csdef. Listing 3-19 shows the contents of the ServiceConfiguration.cscfg file.

Example 3.19. ServiceConfiguration.cscfg

<?xml version="1.0"?>
<ServiceConfiguration serviceName="WebWorkerExchange" xmlns="
http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="1"/>
    <ConfigurationSettings>
<Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <!--Supported Values are Critical,
      Error,Warning,Information,Verbose-->
      <Setting name="LogLevel" value="Information"/>
      <Setting name="ThrowExceptions" value="true"/>
      <Setting name="EnableOnScreenLogging" value="true"/>
    </ConfigurationSettings>
  </Role>
  <Role name="WorkerRole">
    <Instances count="2"/>
    <ConfigurationSettings>
<Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <!--Supported Values are Critical,
      Error,Warning,Information,Verbose-->
        <Setting name="LogLevel" value="Information"/>
      <Setting name="ThreadSleepTimeInMillis" value="5000"/>
<Setting name="SystemInfoServiceURL"
value="http://localhost:81/SystemInfo.svc"/>
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

In Listing 3-19, the values for the configuration settings defined in ServiceDefinition.csdef are set. Note that the Web role has only one instance defined, whereas the Worker role has two. The SystemInfoServiceURL value points to the SystemInfo WCF service in the local development fabric, because testing is done in the development fabric first.
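
For example, after deploying to the cloud, the Worker role setting might look like the following; the host name is a placeholder for whatever URL your hosted service is assigned:

<!-- A sketch only; replace <yourservice> with your hosted service name -->
<Setting name="SystemInfoServiceURL"
    value="http://<yourservice>.cloudapp.net/SystemInfo.svc"/>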

Creating the WebWorkerExchange_WebRole Web Role Project

This is the Web role project in the cloud service that defines the SystemInfo WCF service (SystemInfo.svc) and the ASP.NET Page (Default.aspx) for displaying the system information of role instances.

SystemInfo.svc

SystemInfo.svc is a WCF service that implements the ISystemInfo interface. The ISystemInfo interface has only one method, void SendSystemInfo(SystemMessageExchange ds), which accepts an instance of the SystemMessageExchange dataset. Listing 3-20 shows the implementation of SendSystemInfo() in the SystemInfo class.

Example 3.20. SystemInfo.cs

public const string LOCAL_STORAGE_NAME = "SystemInfoWebLocalCache";
public const string SYSTEM_INFO_MACHINE_NAMES = "machines.txt";
public const string SYS_INFO_CACHE_XML = "SystemInfoCache.xml";
public static readonly SystemMessageExchange sysDS = new SystemMessageExchange();

public void SendSystemInfo(SystemMessageExchange ds)
{
    if (ds != null && ds.SystemInfo.Rows.Count > 0)
    {
        string machineName = ds.SystemInfo[0].MachineName;
        string machineLocalStoragePath = ds.SystemInfo[0].LocalStoragePath;
        //Log the message
        WindowsAzureSystemHelper.LogInfo(machineName + ">" + ds.GetXml());

        //Add machine names
        WindowsAzureSystemHelper.WriteLineToLocalStorage(
            SYSTEM_INFO_MACHINE_NAMES, LOCAL_STORAGE_NAME, machineName, false);

        //Copy the file to LocalStorage
        string localStoragePath =
            WindowsAzureSystemHelper.GetLocalStorageRootPath(LOCAL_STORAGE_NAME);
        try
        {
            string query = String.Format(
                "MachineName = '{0}' AND LocalStoragePath = '{1}'",
                machineName, machineLocalStoragePath);
            WindowsAzureSystemHelper.LogInfo("Query = " + query);
            System.Data.DataRow[] dRows = sysDS.SystemInfo.Select(query);

            if (dRows != null && dRows.Length > 0)
            {
                sysDS.SystemInfo.Rows.Remove(dRows[0]);
            }

            sysDS.SystemInfo.Merge(ds.SystemInfo);
            sysDS.AcceptChanges();
            sysDS.WriteXml(Path.Combine(localStoragePath, SYS_INFO_CACHE_XML));
            WindowsAzureSystemHelper.LogInfo(
                "SystemInfoCache.xml -- " + sysDS.GetXml());
        }
        catch (Exception ex)
        {
            WindowsAzureSystemHelper.LogError("SendSystemInfo():" + ex.Message);
        }
    }
    else
    {
        WindowsAzureSystemHelper.LogInfo("SendSystemInfo(): null message received");
    }
}

In Listing 3-20, I define a read-only static instance of the SystemMessageExchange dataset for storing all the requests coming from the WCF clients. When the SendSystemInfo() method is called with a new system information dataset, I read the machine name from the request and write a line to the machines.txt file in the local storage using the call WindowsAzureSystemHelper.WriteLineToLocalStorage(SYSTEM_INFO_MACHINE_NAMES, LOCAL_STORAGE_NAME, machineName, false);.

Next, I query the class instance of the SystemMessageExchange dataset (sysDS) to check whether an entry for this machine name already exists. I query on both the machine name and the local storage path, because the same machine may host different roles with different local storage names; my assumption is that the machine name and local storage pair is unique for the purpose of this lab. If a row with the specified parameters already exists in the dataset, I delete it and then merge the received dataset (ds) with the class instance of the dataset (sysDS) to insert the latest information received from the role instance. In the system information dataset, the timestamp is the only field that will vary with time if the role instance keeps running on the same underlying virtual machine. Finally, I serialize the dataset (sysDS) to the local storage of the Web role using the sysDS.WriteXml() method. Every time the WCF method is called, the system information file stored in the local storage is updated with the latest information. Once the file is saved to the local storage, it is available to the other objects running in the same Web role instance.

For the sake of simplicity, I am using basicHttpBinding in the web.config file for the SystemInfo WCF service as shown in Listing 3-21.

Example 3.21. ServiceInfo WCF Binding

<services>
  <service behaviorConfiguration="WebWorkerExchange_WebRole.SystemInfoBehavior"
           name="WebWorkerExchange_WebRole.SystemInfo">
    <endpoint address="" binding="basicHttpBinding"
              contract="WebWorkerExchange_WebRole.ISystemInfo">
      <identity>
        <dns value="localhost"/>
      </identity>
    </endpoint>
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
  </service>
</services>

Default.aspx

The Default.aspx file is the home page for the Web role application. On a periodic basis, the Default.aspx page reads the machines.txt file and the serialized system information dataset from the local storage and displays them to the end user. Figure 3-49 illustrates the design of Default.aspx.

Default.aspx

Figure 3.49. Default.aspx

The Default.aspx page has three main controls: an ASP.NET ListBox, a GridView, and an ASP.NET AJAX Timer control. The ListBox displays the contents of the machines.txt file stored in the local storage by the SystemInfo WCF service. The GridView displays the contents of the system information dataset file stored in the local storage by the SystemInfo WCF service. The ListBox and the GridView are both placed on an AJAX UpdatePanel, and the Timer control refreshes the UpdatePanel every 10 seconds. Listing 3-22 shows the Tick event of the Timer control in Default.aspx.cs.
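
The markup side of this arrangement might look roughly like the following sketch; the control IDs match the code-behind in Listing 3-22, but the exact markup in the project may differ:

<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:Timer ID="Timer1" runat="server" Interval="10000" OnTick="Timer1_Tick" />
<asp:UpdatePanel ID="UpdatePanel1" runat="server">
  <ContentTemplate>
    <asp:ListBox ID="lbMachines" runat="server" />
    <asp:GridView ID="GridView1" runat="server" />
  </ContentTemplate>
  <Triggers>
    <!-- Each timer tick triggers a partial refresh of the panel -->
    <asp:AsyncPostBackTrigger ControlID="Timer1" EventName="Tick" />
  </Triggers>
</asp:UpdatePanel>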

Example 3.22. Default.aspx.cs

protected void Timer1_Tick(object sender, EventArgs e)
{
    ExecuteExchange();
    ListMachines();
}

private void ListMachines()
{
    try
    {
        IList<string> messages = WindowsAzureSystemHelper.ReadAllLinesFromLocalStorage(
            SystemInfo.SYSTEM_INFO_MACHINE_NAMES, SystemInfo.LOCAL_STORAGE_NAME);
        lbMachines.Items.Clear();
        foreach (string message in messages)
        {
            lbMachines.Items.Add(message);
        }
        string sysInfoPath = Path.Combine(
            WindowsAzureSystemHelper.GetLocalStorageRootPath(
                SystemInfo.LOCAL_STORAGE_NAME),
            SystemInfo.SYS_INFO_CACHE_XML);
        if (File.Exists(sysInfoPath))
        {
            string sysInfoFileContents = File.ReadAllText(sysInfoPath);
            if (!string.IsNullOrEmpty(sysInfoFileContents))
            {
                SystemMessageExchange ds = new SystemMessageExchange();
                ds.ReadXml(new StringReader(sysInfoFileContents));
                GridView1.DataSource = ds.SystemInfo;
                GridView1.DataBind();
            }
        }
    }
    catch (Exception ex)
    {
        WindowsAzureSystemHelper.LogError(ex.Message);
    }
}

In Listing 3-22, I am using the helper methods from the WindowsAzureSystemHelper class. The first method, WindowsAzureSystemHelper.ReadAllLinesFromLocalStorage(), reads the machine names from machines.txt in the local storage; the retrieved machine names are then added to the list box. Similarly, the dataset method ds.ReadXml() deserializes the dataset from the local storage, and the result is set as the data source of the GridView. So, every 10 seconds, you will see the timestamps refreshed on the Default.aspx page.

Creating the WebWorkerExchange_WorkerRole Worker Role Project

This is the Worker role project for sending continuous system information to the SystemInfo WCF service in the Web role. A Worker role in the Windows Azure cloud is analogous to a Windows service on a Windows Server system. As discussed earlier in this chapter, the Worker role class must inherit from the RoleEntryPoint abstract class. In this example, I have implemented the OnStart() and Run() methods; Listing 3-23 shows the Run() method.

Example 3.23. Worker Role Run() Method

public override void Run()
{
    Trace.WriteLine("WebWorkerExchange_WorkerRole entry point called", "Information");
    WindowsAzureSystemHelper.LogInfo("Worker Process entry point called");
    ThreadSleepInMillis = WindowsAzureSystemHelper.GetIntConfigurationValue(
        "ThreadSleepTimeInMillis");
    while (true)
    {
        ExecuteExchange();
        Thread.Sleep(ThreadSleepInMillis);
        WindowsAzureSystemHelper.LogInfo("Working");
    }
}

In Listing 3-23, the Run() method has a continuous while loop that invokes the WCF method by calling a local method, ExecuteExchange(), and then sleeps for a configured number of milliseconds. The core logic of the application is in the ExecuteExchange() method, shown in Listing 3-24.

Example 3.24. ExecuteExchange Method

private void ExecuteExchange()
{
    try
    {
        SystemMessageExchange ds =
            WindowsAzureSystemHelper.GetSystemInfo(LOCAL_STORAGE_NAME, "Worker");
        if (ds == null)
        {
            WindowsAzureSystemHelper.LogError(
                "ExecuteExchange():SystemMessageExchange DataSet is null");
        }
        else
        {
            WindowsAzureSystemHelper.LogInfo(ds.GetXml());
            string url = WindowsAzureSystemHelper.GetStringConfigurationValue(
                "SystemInfoServiceURL");
            CallSystemInfoService(url, ds);
        }
    }
    catch (Exception ex)
    {
        WindowsAzureSystemHelper.LogError("ExecuteExchange():" + ex.Message);
    }
}

In Listing 3-24, there are two main method calls: WindowsAzureSystemHelper.GetSystemInfo() and CallSystemInfoService(). WindowsAzureSystemHelper.GetSystemInfo() is a helper method that returns the SystemMessageExchange dataset, and CallSystemInfoService() is a private method that calls the SystemInfo WCF service in the Web role. Listing 3-25 shows the code for the CallSystemInfoService() method.

Example 3.25. CallSystemInfoService() Method

private void CallSystemInfoService(string url, SystemMessageExchange ds)
{
    SystemInfoService.SystemInfoClient client = null;
    BasicHttpBinding bind = new BasicHttpBinding();
    try
    {
        EndpointAddress endpoint = new EndpointAddress(url);
        client = new SystemInfoService.SystemInfoClient(bind, endpoint);
        client.SendSystemInfo(ds);
        WindowsAzureSystemHelper.LogInfo(
            String.Format("Sent message to Service URL {0}", url));
    }
    catch (Exception ex)
    {
        WindowsAzureSystemHelper.LogError("CallSystemInfoService():" + ex.Message);
    }
    finally
    {
        if (client != null)
        {
            if (client.State == CommunicationState.Faulted)
                client.Abort();
            else
                client.Close();
        }
    }
}

In Listing 3-25, I initialize a BasicHttpBinding object and an EndpointAddress object using the URL passed as a parameter to the method (the value comes from ServiceConfiguration.cscfg). Next, I instantiate the SystemInfo WCF service proxy class, SystemInfoClient, and call the SendSystemInfo() method with the SystemMessageExchange dataset as the parameter. At this point, the WCF method is invoked and the dataset is sent to the SystemInfo WCF service in the Web role.

To run the application in the development fabric, press F5, or right-click the WebWorkerExchange project and select Debug ➤ Start new instance.

Note

Make sure that you have added the correct URL of the SystemInfo WCF service in the ServiceConfiguration.cscfg file for the setting name SystemInfoServiceURL.

When the web application starts, it starts the Web role as well as the Worker role instances. The Web role opens a new browser window and loads Default.aspx; it also starts the SystemInfo WCF service. As specified in ServiceConfiguration.cscfg, the development fabric starts two instances of the Worker role.
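
For reference, a ServiceConfiguration.cscfg along the following lines would produce that behavior. This is only a sketch: the role names, service URL, port, and setting values shown here are illustrative assumptions, so keep the values Visual Studio generated for your project.

<?xml version="1.0"?>
<ServiceConfiguration serviceName="WebWorkerExchange"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="1" />
  </Role>
  <Role name="WorkerRole">
    <!-- Two Worker role instances, as described above -->
    <Instances count="2" />
    <ConfigurationSettings>
      <!-- Illustrative URL of the SystemInfo WCF service hosted by the Web role -->
      <Setting name="SystemInfoServiceURL" value="http://localhost:81/SystemInfo.svc" />
      <!-- Polling interval used by the Run() loop in Listing 3-23 -->
      <Setting name="ThreadSleepTimeInMillis" value="10000" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>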

Open the development fabric UI from the system tray icon or by going to All Programs ➤ Windows Azure SDK ➤ Development Fabric.
WebWorkerExchange in development fabric

Figure 3.50. WebWorkerExchange in development fabric

In Figure 3-50, there are two Worker role instances and one Web role instance. The logs for all the instances are displayed in their respective consoles. You can click one of the green instance nodes to enlarge the console of that instance for more detailed log viewing.

Figure 3-51 illustrates the Default.aspx page with the system information from all the service instances.

Default.aspx showing system information

Figure 3.51. Default.aspx showing system information

If you observe Default.aspx, you will see the Timestamp values change every 10 seconds. You will also find it interesting to see the different system properties displayed in the GridView. Next, let's deploy the WebWorkerExchange cloud service to the Windows Azure cloud. The steps for deploying the WebWorkerExchange cloud service to Windows Azure are the same as those you saw earlier in this chapter:

  1. Right-click the WebWorkerExchange project, and select Publish.

  2. Sign in to the Windows Azure developer portal.

  3. In Windows Azure developer portal, select the Windows Azure project you want to deploy the service to or create a new Windows Azure project.

  4. Go to the project page, and click Deploy in the Staging section.

  5. On the Staging Deployment page, in the App Package section, browse and add the WebWorkerExchange.cspkg package.

  6. Next, add the ServiceConfiguration.cscfg configuration file in the Configuration Settings section.

  7. Give the project a name (e.g., Web Worker Exchange).

  8. Click Deploy. Windows Azure will read the ServiceDefinition.csdef file to provision the Web and Worker roles.

  9. Make sure the state of the Web and Worker roles changes to Allocated, so you are ready to run the service.

  10. Before running the service, though, you need to configure the SystemInfoServiceURL configuration setting in the Worker role to point to the URL of the SystemInfo service. Click Configure, and change the SystemInfoServiceURL setting to point to your staging environment Web Site URL, as shown in Figure 3-52.

    Configuring SystemInfoServiceURL

    Figure 3.52. Configuring SystemInfoServiceURL

  11. Click the Run button to start the service. Windows Azure now starts the Web and Worker role instances, as shown in Figure 3-53.

    WebWorkerExchange in the staging environment

    Figure 3.53. WebWorkerExchange in the staging environment

  12. When all roles are in the Started state, click the Web Site URL to open the Default.aspx page. Figure 3-54 shows the Default.aspx file loaded from the Windows Azure staging environment.

Default.aspx in the staging environment

Figure 3.54. Default.aspx in the staging environment

In Figure 3-54, note that all three instances of the service are provisioned on different underlying servers. Also note the different system information sent by each instance, including the local storage directory, user domain, and user name. The user names under which the instances run are GUIDs, which makes sense in an automatically provisioned infrastructure. Once you have tested the service in the staging environment, you can swap the staging environment with the production environment to deploy the service in production. Figure 3-55 shows the production deployment of the WebWorkerExchange in Windows Azure.

WebWorkerExchange in production

Figure 3.55. WebWorkerExchange in production

Geolocation

Windows Azure is already available in multiple data centers within the United States, and going forward, Microsoft plans to expand into data centers around the world. In today's enterprise, as well as consumer, applications, a common pain point is designing a globally available service. The service needs to be physically deployed into data centers around the world for business continuity, performance, network latency, compliance, or geopolitical reasons. For example, in one project I had the responsibility for architecting the global deployment of a business-critical application for a Fortune 100 company. Even though I did not need to travel around the world, I had to plan and coordinate deployment efforts across five data centers around the world. The effort took six months of rigorous planning and coordination. With geolocation support in Windows Azure, you can choose the geolocation of the storage and the compute host. Table 3-7 lists some of the common geolocation advantages.

Table 3.7. Geolocation Advantages

Business Continuity and Planning: With geolocation features, enterprise data can be replicated across multiple data centers around the world as an insurance shield against natural and political disasters.

Performance and Network Latency: One of the architectural tenets and best practices of cloud services is keeping data close to the application to optimize performance and the end-user experience. With geolocation support, a cloud service application can run in close proximity to its data for improved performance.

Compliance: Compliance laws differ from country to country, and multinational organizations have to deal with the compliance regulations of every country they do business in. With Windows Azure, companies can move data closer to their country offices to adhere to country-specific compliance regulations.

Geopolitical Requirements: Some countries impose restrictions and constraints on where enterprises can store their data. Geolocation features can help enterprises better align with such geopolitical requirements.

Geolocation support gives you the ability to choose the affinity of storage and compute services to a particular geographic location.

Enabling Geographic Affinity

When you create a new storage account or a hosted services project, you can specify the location and affinity group for your project. The steps for creating a geographic affinity between a hosted service project and a storage account follow:

  1. From the Create a new service component page, create a new Hosted Services project.

  2. Give the project a name and a label. I have named my project Pro Azure. Click Next.

  3. Select a hosted service URL on the Create a Project page; you can also check its availability.

  4. On the Create a Project page, you will see a Hosted Service Affinity Group section, as shown in Figure 3-56.

    Hosted Service Affinity Group

    Figure 3.56. Hosted Service Affinity Group

  5. The Hosted Service Affinity group section starts with the question "Does this service need to be hosted in the same region as some of your other hosted services or storage accounts?"

    • If your answer is No, you can just choose a region for your service from the Region drop-down list and click Create. By default, USA-Anywhere is selected, which does not give you a choice of where the service will be located.

    • If your answer is Yes, you have two choices, as shown in Figure 3-57: use an existing affinity group and region, or create a new affinity group and region that you can reuse across multiple projects.

    Creating a new affinity group

    Figure 3.57. Creating a new affinity group

  6. I will create a new affinity group called Pro Azure NW and assign it the USA-Northwest geographic location.

  7. Click the Create button to create the project.

The new project shows the affinity group and its geographic location, as shown in Figure 3-58.

Pro Azure NW Affinity group

Figure 3.58. Pro Azure NW Affinity group

Next, let's create a new storage account and specify the same affinity group, so Windows Azure will know to provision the hosted service and the storage account as close to each other as possible for maximizing the bandwidth and lowering the network latency.

  1. To create a Storage Account, create a new project of type Storage Account in the Windows Azure developer portal.

  2. Give the project a name (e.g., ProAzure NW Storage), and click Next.

  3. On the Create a Project page, create a storage account name (e.g., proazuregeostorage), and check its availability.

  4. Next, in the Storage Account Affinity Group section, choose "Use existing Affinity Group" and select Pro Azure NW, or whatever affinity group you created for the hosted services project (see Figure 3-59).

    Selecting a storage account affinity group

    Figure 3.59. Selecting a storage account affinity group

    Note that the geographic region gets automatically populated and cannot be edited.

  5. Click Create to create the storage account project with the same affinity group and region as the hosted services account.

Content Delivery Network

Content Delivery Network (CDN) is a Windows Azure blob replication and caching service that makes your blobs available globally at strategic locations close to the blob consumers. For example, if your media-heavy web site has its media files centrally located in the United States but your users are spread across all continents, users in distant locations will experience degraded performance. Windows Azure CDN pushes content closer to the users at several data center locations in Asia, Australia, Europe, South America, and the United States. At the time of this writing, there were 18 locations (or edges) across these continents providing caching service to Windows Azure blob storage via CDN. So, if you enable CDN on your media files in Windows Azure blob storage, they will automatically be available across these locations locally, thus improving performance for the users. Currently, the only restriction on enabling CDN is that the blob containers must be public. This makes CDN extremely useful for e-commerce, news media, social networking, and interactive media web sites.

When you enable a storage account with CDN, the portal creates a unique URL with the following format for CDN access to the blobs in that storage account: http://<guid>.vo.msecnd.net/.

This URL is different from the blob storage URL format, http://<storageaccountname>.blob.core.windows.net/, because the blob storage URL is not designed to resolve to CDN locations. Therefore, to get the benefit of CDN, you must use the URL generated by CDN for the blob storage. You can also register a custom domain name for the CDN URL from the Windows Azure developer portal.
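
To make the mapping concrete, here is a small, purely illustrative C# fragment showing the same public blob addressed through both hosts. The storage account name, CDN host, container, and blob name are all made up for this sketch; the actual CDN host is assigned when you enable CDN on your storage account.

// The same blob, addressed through the storage endpoint and through CDN.
// All names below are hypothetical.
string blobStorageUrl =
    "http://proazurestorage.blob.core.windows.net/media/logo.jpg";
string cdnUrl =
    "http://az1234.vo.msecnd.net/media/logo.jpg"; // served from the nearest CDN edge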

To enable CDN on a storage account, follow these steps:

  1. Go to your Windows Azure Developer Portal storage account.

  2. Click Enable CDN on the storage account page, as shown in Figure 3-60.

    Enabling CDN

    Figure 3.60. Enabling CDN

  3. The portal provides a CDN endpoint to the storage by creating a CDN URL of the format http://<guid>.vo.msecnd.net/.

You can use the CDN endpoint URL for accessing your public containers. The portal also creates a record for the CDN endpoint in the Custom Domains list. To create a custom domain name, you can click on the Manage link for the CDN endpoint in the Custom Domain list and follow the instructions. I will cover Windows Azure storage in the next chapter, but I have covered CDN in this section, because it aligns very well with geographic affinity capabilities of Windows Azure.

Windows Azure Service Management

Unlike on-premise applications, the deployment of a cloud service involves only software provisioning from the developer's perspective. You saw in the earlier examples how hardware provisioning was abstracted away from you in the deployment process. In a scalable environment where enterprises may need to provision multiple services across thousands of instances, you need programmatic control over the provisioning process rather than configuring services in the Windows Azure developer portal. Manually uploading service packages and then starting and stopping services from the portal interface works well for smaller services, but it becomes a time-consuming and error-prone process when deploying multiple large-scale services. The Windows Azure Service Management API allows you to perform most of the provisioning functions programmatically via a REST-based interface to your Windows Azure cloud account. Using the Service Management API, you can script your provisioning and deprovisioning process end to end in an automated manner. In this section, I will cover some important functions from the Service Management API and also demonstrate some source code for you to build your own cloud service provisioning process.

Service Management API Structure

The Service Management API provides most of the functions you can perform on storage services and hosted services from the Windows Azure developer portal. The API categorizes its operations into three primary sections: storage accounts, hosted services, and affinity groups. Operations on storage accounts mainly cover listing accounts and generating access keys. Operations on hosted services cover listing services, deploying services, removing services, swapping between staging and production, and upgrading services. The affinity group operations are limited to listing and getting the properties of the affinity groups in your account.

Note

You can find the Service Management API reference at http://msdn.microsoft.com/en-us/library/ee460799.aspx.

The Service Management API uses X.509 client certificates for authenticating calls between the client and the server.

Warning

The source code in the following section is based on an early CTP version (released October 10, 2009) of the Service Management API and its associated client assembly, Microsoft.Samples.WindowsAzure.ServiceManagement. You can download the latest version of the assembly from the Windows Azure Code Samples page at http://code.msdn.microsoft.com/windowsazuresamples. Even though the API may change in the future, the concepts used in this section will remain the same past its final release. You may have to slightly modify the source code to make it work with the latest available API.

Programming with the Service Management API

To start programming with the Service Management API, you must first create a valid X.509 certificate (or work with an existing one). You can use makecert.exe to create a self-signed certificate:

makecert -r -pe -a sha1 -n "CN=Windows Azure Authentication Certificate" -ss My -len 2048 -sp "Microsoft Enhanced RSA and AES Cryptographic Provider" -sy 24 proazureservicemgmt.cer

Next, go to the Account section of the Windows Azure developer portal, and upload the certificate from the Manage API Certificates section.

Upload the API certificate

Figure 3.61. Upload the API certificate

Once the certificate is uploaded, you can call the Service Management REST API in one of three ways: by adding the certificate to the ClientCertificates collection of a System.Net.HttpWebRequest object, by using the csmanage.exe application from the Service Management API samples, or by building your own application. In Ch3Solution, I have created a sample Windows application that makes REST calls to the Service Management API. It uses the Microsoft.Samples.WindowsAzure.ServiceManagement.dll file from the service management code samples; csmanage.exe uses the same assembly to make its API calls. Eventually, the API assembly may become part of the Windows Azure SDK. Figure 3-62 illustrates the Service Management API Windows application.
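
If you want to see what a raw REST call looks like without the sample assembly, the following is a minimal sketch that lists the hosted services in a subscription. The endpoint URI and x-ms-version value follow the API reference cited earlier; the subscription ID is a placeholder you must supply, and the certificate subject name assumes the makecert command shown previously.

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class ListHostedServices
{
    static void Main()
    {
        string subscriptionId = "<subscription-id>"; // from the Account page of the portal

        // Load the management certificate (with its private key) from the
        // CurrentUser\My store, where makecert -ss My placed it.
        X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2 cert = store.Certificates.Find(
            X509FindType.FindBySubjectName,
            "Windows Azure Authentication Certificate", false)[0];
        store.Close();

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            string.Format(
                "https://management.core.windows.net/{0}/services/hostedservices",
                subscriptionId));
        request.Method = "GET";
        request.Headers.Add("x-ms-version", "2009-10-01"); // required API version header
        request.ClientCertificates.Add(cert);

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // The response body is an XML list of the hosted services in the subscription.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}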

The Service Management API windows application

Figure 3.62. The Service Management API windows application

In Figure 3-62, the Service Management Operations section lists the operations that you can invoke on the Service Management API. The output text box prints the output from the operations. The right-hand side of the user interface contains the input parameters, which are as follows:

  • Subscription Id: You can get the subscriptionId from the Account page of the developer portal. This parameter is required by all the Service Management API operations.

  • Certificate Path: This text box points to the API certificate file on the local machine. This certificate must match the one you uploaded to the portal.

  • Resource Type: This drop-down lists the types of resource you want to access: Hosted Service, Storage Account, or Affinity Group.

  • Resource Name: Type the name of the resource you want to access (e.g., storage account name, hosted service name, or affinity group name).

The remaining input parameters are operation dependent. Choose an operation from the Service Management operations list, enter its input parameters, and click Execute Operation. For example, to create a deployment in your hosted service account, you can:

  1. Select the Create Deployment operation.

  2. Enter your Account SubscriptionId.

  3. Select the API certificate from the local machine.

  4. Select Hosted Service Name as the Resource Type.

  5. Enter the name of the Hosted Service you want to deploy your service to in the Resource Name text box.

  6. Select the slot type (staging or production).

  7. Choose a deployment name.

  8. Choose a deployment label.

  9. Then point to a service package (.cspkg) in blob storage using the Package Blob URL text box.

  10. Select the path to the ServiceConfiguration.cscfg file of the cloud service.

  11. Click Execute Operation.

The OP-ID field shows the operation ID returned by the method call, which you can use to track the operation status. To check the status of the deploy operation, select the Get Operation Status method, and click Execute Operation. The status is displayed in the bottom window. Once the deployment is complete, you can run it by selecting the Update Deployment Status method and choosing the "running" option from the deployment status drop-down. Similarly, you can execute the other operations from the Service Management API.
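
Under the covers, Get Operation Status is just another REST GET against the operations resource. Here is a hedged sketch that reuses the subscriptionId and cert variables from the previous example; requestId is a placeholder for the OP-ID value returned in the earlier call.

// Poll the status of a previously submitted operation.
string requestId = "<op-id>"; // the operation ID (OP-ID) from the earlier call
HttpWebRequest statusRequest = (HttpWebRequest)WebRequest.Create(
    string.Format(
        "https://management.core.windows.net/{0}/operations/{1}",
        subscriptionId, requestId));
statusRequest.Headers.Add("x-ms-version", "2009-10-01");
statusRequest.ClientCertificates.Add(cert);

using (WebResponse statusResponse = statusRequest.GetResponse())
using (StreamReader statusReader = new StreamReader(statusResponse.GetResponseStream()))
{
    // The body is an <Operation> element whose <Status> is
    // InProgress, Succeeded, or Failed.
    Console.WriteLine(statusReader.ReadToEnd());
}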

Windows Azure Service Life Cycle

The objective of Windows Azure is to automate the service life cycle as much as possible. The Windows Azure service life cycle has five distinct phases and four different roles, as shown in Figure 3-63.

The Windows Azure service life cycle

Figure 3.63. The Windows Azure service life cycle

The five phases are as follows:

  • Design and development: In this phase, the on-premise team plans, designs, and develops a cloud service for Windows Azure. The design includes quality attribute requirements for the service and the solution to fulfill them. This phase is conducted completely on-premise, unless there is some proof of concept (POC) involved. The key roles involved in this phase are on-premise stakeholders. For the sake of simplicity, I have combined these on-site design roles into a developer role.

  • Testing: In this phase, the quality attributes of the cloud service are tested. This phase involves on-premise as well as Windows Azure cloud testing. The tester role is in charge of this phase and tests end-to-end quality attributes of the service deployed into cloud testing or staging environment.

  • Provisioning: Once the application is tested, it can be provisioned to the Windows Azure cloud. The deployer role deploys the cloud service to the Windows Azure cloud. The deployer is in charge of service configuration and makes sure the service definition of the cloud service is achievable through production deployment in the Windows Azure cloud. The configuration settings are defined by the developer, but the production values are set by the deployer. In this phase, the role responsibilities transition from on-premise to the Windows Azure cloud. The fabric controller in Windows Azure assigns the allocated resources as per the service model defined in the service definition, and load balancers and virtual IP addresses are reserved for the service.

  • Deployment: In the deployment phase, the fabric controller commissions the allocated hardware nodes into the end state and deploys services on these nodes as defined in the service model and configuration. The fabric controller also has the capability of upgrading a service in running state without disruptions. The fabric controller abstracts the underlying hardware commissioning and deployment from the services. The hardware commissioning includes commissioning the hardware nodes, deploying operating system images on these nodes, and configuring switches, access routers, and load-balancers for the externally facing roles (e.g., Web role).

  • Maintenance: Windows Azure is designed with the assumption that failures will occur in hardware and software. Any service on a failed node is redeployed automatically and transparently, and the fabric controller automatically restarts any failed service roles. The fabric controller allocates new hardware in the event of a hardware failure. Thus, the fabric controller always maintains the desired number of roles irrespective of any service, hardware, or operating system failures. The fabric controller also provides a range of dynamic management capabilities, like adding capacity, reducing capacity, and upgrading services, without any service disruption. Figure 3-64 illustrates the fabric controller architecture.

Fabric controller architecture

Figure 3.64. Fabric controller architecture

In Figure 3-64, the fabric controller abstracts the underlying Windows Server 2008 operating system and the hardware from the service role instances, and it performs the following high-level tasks:

  • Allocates the nodes

  • Starts operating system images on the nodes

  • Configures the settings as per the service model described by the service creator

  • Starts the service roles on allocated nodes

  • Configures load balancers, access routers, and switches

  • Maintains the desired number of role instances of the service irrespective of any service, hardware or operating system failures

Table 3-8 lists the quality attribute requirements for Windows Azure and describes how Windows Azure satisfies them.

Table 3.8. Quality Attributes

High availability: Windows Azure provides built-in redundancy with access routers, load balancers, and switches. Load balancers are automatically provisioned for externally facing roles (e.g., Web roles).

Service isolation: Every service operates within the parameters of its service model. Services can access only the resources declared in the service model configuration: endpoints, local storage, and local machine resources.

Security: Every service role instance runs in a standard Windows user context. The instance does not have administrative privileges and has only limited native execution access when native access is enabled.

Automatic provisioning: The fabric controller automates service deployment from bare-metal hardware to service role deployment. The service model and the configuration information act as the instruction set for the fabric controller to provision the appropriate hardware and virtual machine instances. The fabric controller can also upgrade a running service without disruption.

Architectural Advice

Finally, here is a list of some practical advice that should serve you well going forward.

  • Clearly separate the functionality of the Web role from the Worker role. Do not use a Worker role to perform web functions by exposing HTTP (or HTTPS) endpoints.

  • Maintaining stateless role interfaces is important for load balancing and fault tolerance. Keep the roles stateless.

  • Use internal endpoints only for unreliable communications. For reliable communications, use Windows Azure queues (discussed in the next chapter).

  • Use Worker roles effectively for batch and background processing.

  • Use the Service Management API prudently for commissioning and decommissioning role instances. Do not keep instances running idle for long periods of time, because you are consuming server resources and will be charged for them.

  • Do not use local storage for reliable storage; use Windows Azure storage as a reliable storage for storing data from roles.

  • Design the system for fault tolerance and always account for failure of role instances.

  • Finally, do not deploy your cloud service for maximum capacity; deploy for minimum or optimum capacity, and dynamically provision more instances as demand increases and vice versa.

Summary

In this chapter, we dove deeply into the computational features of Microsoft's Windows Azure cloud operating system. Through the examples, you were exposed to deploying Windows Azure Web role and Worker role instances, not only in the development fabric but also in the Windows Azure cloud. In the examples, you also learned how to access configuration settings and local storage. Then, I briefly covered the geolocation, CDN, and service management features of Windows Azure. The examples in this chapter stored and retrieved data from local storage, which is machine dependent: the data is lost as soon as the underlying machine is rebooted or the service is redeployed. Windows Azure storage, by contrast, provides persistent storage for highly available data that can be accessed from anywhere using a REST-based API. In the next chapter, you will learn about Windows Azure storage components and their programming APIs in detail.

