Chapter 8. Cloud and Service Programming with F#

Cloud services provide online resources to host a company’s business platform. The cloud solution is cost effective because there is no need for a company to maintain and manage the standalone hardware and software associated with hosting a solution. Because the data shared on the cloud can be accessed anywhere in the world, you can easily coordinate work with offshore teams or provide the data to different devices and geographic locations without incurring the capital expenditure that would normally be required.

Cloud computing also provides a scalable framework. Consider, for example, a company in the US that provides tax-preparation services. From March to April is the busiest time of year for the company. During this period, the company needs more computing power than at any other time of the year. A cloud service provider can offer computing power to the business in a more flexible way. Because the cloud lowers a company’s computing expenses, the company can focus more on innovation within its core business.

To understand the importance of cloud computing, just look at the investment in it by big companies such as Microsoft and Google—it is clear that cloud computing is the next big thing. For a global company that has data distributed at different locations, the cloud allows computations to span different CPU cores as well as different virtual machines. It is well known that successful distributed or parallel computations need a combination of novel algorithm design, computation abstraction, and programming implementation. This requires a language that can work well on all of these fronts. F# is perfect for this task. Let’s start with the basics of Microsoft Windows Azure.

Introducing Windows Azure

Windows Azure is Microsoft’s cloud-computing platform. Developers can use Windows Azure to develop, publish, and manage applications. The applications are then hosted at Microsoft datacenters across the world. The application running on Windows Azure can be implemented with different programming languages. This section covers a few key concepts in Windows Azure application development, including cloud queue, WCF service, blob storage, and SQL Azure. More detailed information about Azure can be found at http://www.windowsazure.com.

Setting Up Your Environment

To develop a Windows Azure application, you need a Windows Azure account, which can be acquired from the aforementioned website. Figure 8-1 shows the initial page you see when signing up for a Windows Azure account.

Signing up for a Windows Azure account
Figure 8-1. Signing up for a Windows Azure account

Once you sign in with your Windows Live ID, you can take advantage of the free trial offer, as shown in Figure 8-2.

Free trial offer from Windows Azure
Figure 8-2. Free trial offer from Windows Azure

After the account is set up, you can log in to the management portal where most of the configuration and management tasks are performed. The login page is shown in Figure 8-3.

Logging in to the management portal
Figure 8-3. Logging in to the management portal

From the management portal, shown in Figure 8-4, you can perform management tasks such as configuring a database. Now you have a place to host your application when it is ready.

Windows Azure management portal
Figure 8-4. Windows Azure management portal

You are now ready to start development-related tasks. There are a few options for interacting with Azure through code. The easiest approach for a .NET developer is to use the Windows Azure Software Development Kit (SDK) for .NET, which can be downloaded from the .NET section of the download page (http://www.windowsazure.com/en-us/downloads/), shown in Figure 8-7. The Windows Azure SDK for .NET works with Microsoft Visual Studio 2010 and Visual Studio 2012. In this chapter, I will use Visual Studio 2012 to demonstrate how to develop Azure applications. The Azure SDK is independent of Visual Studio, so all the functionality is the same in Visual Studio 2010. Figure 8-5 shows the installation page for the Windows Azure SDK for .NET.

The Windows Azure SDK for .NET is installed by using Microsoft Web Platform Installer 4.5. If you installed Visual Studio 2012, the Web Platform Installer is already on your computer. The standalone version can be downloaded from http://www.microsoft.com/web/downloads/platform.aspx.

Windows Azure SDK for .NET installation in progress
Figure 8-5. Windows Azure SDK for .NET installation in progress

The installation process adds the following components to your system, as shown in Figure 8-6:

  • ASP.NET Web Pages 2

  • Microsoft SQL Server 2012 Data-Tier Application Framework (DACFx) for x86

  • SQL Server Data Tools - Build Utilities

  • Microsoft Web Tooling 1.1 for Visual Studio 2012

  • Windows Azure Emulator 1.8

  • Windows Azure Tools 1.8 for Microsoft Visual Studio 2012

  • Windows Azure Authoring Tools 1.8

  • Windows Azure Libraries for .NET 1.8

  • LightSwitch Azure Publishing 2.0 add-on for Visual Studio 2012

Windows Azure SDK for .NET installation completed
Figure 8-6. Windows Azure SDK for .NET installation completed
Downloading Windows Azure SDK for .NET for Visual Studio
Figure 8-7. Downloading Windows Azure SDK for .NET for Visual Studio

Developing a Windows Azure Application

After successfully installing the .NET Azure SDK, you can start to create your first Windows Azure application. The Azure SDK supports creating several types of projects. In this sample, you will create a Windows Azure cloud service. The location to create a Windows Azure Cloud Service project is under the Visual C# language node, as shown in Figure 8-8.

Creating a Windows Azure Cloud Service project in Visual Studio
Figure 8-8. Creating a Windows Azure Cloud Service project in Visual Studio

After you select the project and click OK, a new dialog box (shown in Figure 8-9) is displayed, which you can use to add roles to the Azure Cloud Service project. As part of the cloud service, you need a worker role: the project that hosts the background-processing code. Select the F# Worker Role project in this dialog box. You can change the project name by clicking WorkerRole1 once. For the purpose of this demo, let's use the default name, WorkerRole1.

Creating an F# worker role project
Figure 8-9. Creating an F# worker role project

Your first Azure application now looks like Figure 8-10. The solution has two projects. The WindowsAzure1 project contains the configuration settings for the Azure solution, and the WorkerRole1 project is where the real program logic resides.

Windows Azure solution with an F# worker role project
Figure 8-10. Windows Azure solution with an F# worker role project

The complete code for the worker-role processing logic is shown in Example 8-1. The WorkerRole class inherits from RoleEntryPoint and overrides two methods. The Run method, as its name suggests, hosts the execution logic. Because the service is expected to run indefinitely, it is not surprising that the default code in the Run method contains an infinite while loop. The OnStart method configures the service before it starts. The default code sets the connection limit to 12.

Example 8-1. Default F# worker role code
namespace WorkerRole1

open System
open System.Collections.Generic
open System.Diagnostics
open System.Linq
open System.Net
open System.Threading
open Microsoft.WindowsAzure
open Microsoft.WindowsAzure.Diagnostics
open Microsoft.WindowsAzure.ServiceRuntime
open Microsoft.WindowsAzure.StorageClient

type WorkerRole() =
    inherit RoleEntryPoint()

    // This is a sample worker implementation. Replace with your logic.
    let log message kind = Trace.WriteLine(message, kind)

    override wr.Run() =
        log "WorkerRole1 entry point called" "Information"
        while(true) do
            Thread.Sleep(10000)
            log "Working" "Information"

    override wr.OnStart() =
        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit <- 12

        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
        base.OnStart()

You can press F5 to run the application in the local compute emulator. A simple program like this requires little debugging effort.

You are now ready to deploy your application to the Windows Azure environment. Right-click the WindowsAzure1 project, and select Publish. This action deploys the current solution to Azure. If this is the first time you have published an Azure application, you will need to download and import your credentials file. Figure 8-11 shows the Publish Windows Azure Application Wizard.

Publish Windows Azure Application Wizard
Figure 8-11. Publish Windows Azure Application Wizard

Note

Once an application is published to the Azure environment, the application is considered to be running and accrues compute time, even if the program does nothing. Therefore, it is recommended that you debug and fully test the application on your local computer before publishing.

Your HelloWorld-like application is ready. In the next section, you will go through several F# Azure examples.

Azure Cloud Queue

A queue is an abstract linear data structure with first-in, first-out (FIFO) semantics: the newest item is inserted at the end of the queue, and an item can be removed only from the front. The Azure Cloud Queue service is designed to store large numbers of messages. The queue can be accessed from anywhere in the world using an HTTP or HTTPS connection.
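The FIFO behavior is easy to see with an in-memory queue before moving to the cloud version. This short snippet, my own illustration using .NET's Queue<'T> rather than anything from the Azure SDK, shows items leaving in the order they arrived:

```fsharp
open System.Collections.Generic

// A plain in-memory FIFO queue: items leave in the order they arrived.
let queue = Queue<string>()
queue.Enqueue "first"
queue.Enqueue "second"
queue.Enqueue "third"

let head = queue.Dequeue()   // the oldest item, "first", comes out first
let next = queue.Peek()      // Peek inspects the front without removing it
printfn "dequeued %s, next is %s, %d left" head next queue.Count
```

The cloud queue offers the same model, with PeekMessage and GetMessage playing the roles of Peek and Dequeue.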

According to the MSDN documentation (http://msdn.microsoft.com/en-us/library/windowsazure/hh767287.aspx), the cloud queue should be used when any of the following scenarios apply to your application:

  • You need to store messages of a combined size greater than or equal to 5 GB in a queue and the messages need to be kept in the queue for no more than one week.

  • You need flexible leasing to process messages. This allows the worker process or processes to come back after an extended period of time to resume the processing of messages. This would not be possible with a short lease time. Additionally, worker processes can extend the lease time if the processing time is longer than expected.

  • You need to track the progress of message processing. This enables other worker processes to resume message processing when another process is interrupted.

  • You need server-side log information about all transaction activity on the queue.

In this section, I am going to implement a simple queue. The sample solution AzureQueueSample contains two projects in addition to the standard Azure configuration project: a worker project, which inserts messages into the queue, and a consumer project, which takes messages from the queue. Because there is no UI work, two F# worker role projects are sufficient. I create the two projects and name them WorkerRole and ConsumerRole. The project creation dialog box is shown in Figure 8-12.

Creating the WorkerRole and ConsumerRole projects
Figure 8-12. Creating the WorkerRole and ConsumerRole projects

After you create these two projects, you can go to the AzureQueueSample Roles folder to configure the queue connection strings. Right-click the WorkerRole node, and select Properties to show the configuration page. The queue connection string can be set in the Settings area. Create the connection string by clicking the Add Setting button. Figure 8-13 shows the default string that is created after you click this button. You can now rename the default string to MyConnectionString and set its type to Connection String.

Adding a string to the project settings
Figure 8-13. Adding a string to the project settings

You can click the button with the ellipsis (...) to set the connection string. You should first debug the program locally rather than targeting the real Azure environment from the start. Figure 8-14 shows the Storage Account Connection String dialog box. By default, the Azure project is set to use the Windows Azure storage emulator.

Storage Account Connection String dialog box
Figure 8-14. Storage Account Connection String dialog box

Accept this default setting by clicking OK. Figure 8-15 shows the WorkerRole project settings; the connection string is set to use development storage. The setting is declared in the project's .csdef file, and its value is stored in the .cscfg files in XML format. Although these XML files can be modified manually, beginners should use the Settings dialog box.

Note

Because both worker and consumer projects access the queue, the setting needs to be made on both the WorkerRole and ConsumerRole projects.

Project settings for the WorkerRole project
Figure 8-15. Project settings for the WorkerRole project

These are some of the Azure cloud queue operations used in the sample:

  • Create a queue

    let queueClient = storageAccount.CreateCloudQueueClient()
    let queue = queueClient.GetQueueReference("myqueue")
    
    queue.CreateIfNotExist() |> ignore
  • Add a message to the queue

    let message = CloudQueueMessage("Hello, World")
    queue.AddMessage(message)
  • Peek at the message from the queue, and show the message as a string

    let peekedMessage = queue.PeekMessage()
    if (peekedMessage <> null) then
        // GetMessage dequeues the message (PeekMessage does not)
        let msg = queue.GetMessage()
        log msg.AsString "Information"
  • Dequeue a message

    let msg = queue.GetMessage()
    queue.DeleteMessage(msg)

To make this sample more interesting, I use a variable to specify the sleep time for the Thread.Sleep function. The sleep time is set dynamically from the current message count in the queue. If the message count is greater than 3, which means there are plenty of messages for the consumer, the sleep time is increased by 100 ms. Otherwise, the sleep time is cut in half. The WorkerRole code is shown in Example 8-2.
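The adjustment rule can be factored out as a small pure function, which makes the feedback behavior easy to check in isolation. This helper is my own sketch; Example 8-2 inlines the same logic:

```fsharp
/// Compute the next sleep time (in ms) from the current one and the
/// approximate queue length: back off by 100 ms when the queue is filling
/// up, and halve the sleep time (never below 1 ms) when it is draining.
let nextSleepTime currentSleep queueSize =
    if queueSize > 3 then currentSleep + 100
    else max (currentSleep / 2) 1

printfn "%d" (nextSleepTime 1000 5)  // backlog: backs off to 1100
printfn "%d" (nextSleepTime 1000 2)  // draining: halves to 500
printfn "%d" (nextSleepTime 1 0)     // floor: stays at 1
```

The asymmetry is deliberate: producers slow down gradually but speed up aggressively, so the queue never stays empty for long.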

Example 8-2. Code for changing the sleep time
namespace WorkerRole1

open System.Diagnostics
open System.Net
open System.Threading
open Microsoft.WindowsAzure
open Microsoft.WindowsAzure.ServiceRuntime
open Microsoft.WindowsAzure.StorageClient

type WorkerRole() =
    inherit RoleEntryPoint()

    // This is a sample worker implementation. Replace with your logic.
    let log message kind = Trace.WriteLine(message, kind)

    override wr.Run() =

        log "WorkerRole1 entry point called" "Information"

        let connectionStringName = "MyConnectionString"
        let storageAccount =
            CloudStorageAccount.Parse(
                CloudConfigurationManager.GetSetting(connectionStringName))
        let queueClient = storageAccount.CreateCloudQueueClient()
        let queue = queueClient.GetQueueReference("myqueue")

        ignore <| queue.CreateIfNotExist()
        let mutable sleepTime = 1000
        while(true) do
            Thread.Sleep(sleepTime)

            let message = CloudQueueMessage("Hello, World")
            queue.AddMessage(message)
            log "add new message" "Information"
            let queueSize = queue.RetrieveApproximateMessageCount()
            sleepTime <- if queueSize > 3 then
                             sleepTime + 100
                         else
                             max (sleepTime / 2) 1

            let msg = sprintf "Current Sleep time is %A" sleepTime
            log msg "Information"

    override wr.OnStart() =

        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit <- 12

        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
        base.OnStart()

Note

To prevent sleepTime from being set to 0, a max function is used to make sure sleepTime’s minimum value is 1.

Microsoft.WindowsAzure.Configuration.dll needs to be added to the project references.

Compared to the WorkerRole code, the ConsumerRole code is simpler. The consumer project’s job is to check the message queue for messages every 200 milliseconds (ms). If there is a message, the consumer project will dequeue and print the message. The complete code for ConsumerRole is shown in Example 8-3.

Example 8-3. ConsumerRole code
namespace ConsumerRole

open System.Diagnostics
open System.Net
open System.Threading
open Microsoft.WindowsAzure
open Microsoft.WindowsAzure.ServiceRuntime
open Microsoft.WindowsAzure.StorageClient

type WorkerRole() =
    inherit RoleEntryPoint()

    // This is a sample worker implementation. Replace with your logic.

    let log message kind = Trace.WriteLine(message, kind)

    override wr.Run() =
        log "ConsumerRole entry point called" "Information"
        let connectionStringName = "MyConnectionString"
        let storageAccount =
            CloudStorageAccount.Parse(
                CloudConfigurationManager.GetSetting(connectionStringName))
        let queueClient = storageAccount.CreateCloudQueueClient()
        let queue = queueClient.GetQueueReference("myqueue")
        ignore <| queue.CreateIfNotExist()

        while(true) do
            Thread.Sleep(200)
            // Peek at the next message
            let peekedMessage = queue.PeekMessage()
            if (peekedMessage <> null) then
                let msg = queue.GetMessage()
                log msg.AsString "Information"
                queue.DeleteMessage(msg)
            else
                log "no message" "Information"
                ()

    override wr.OnStart() =
        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit <- 12

        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
        base.OnStart()

Note

Microsoft.WindowsAzure.Configuration.dll needs to be added to the project references.

Before executing the code, you need to set the instance number for ConsumerRole and WorkerRole in the Configuration area in the project settings. For test purposes, the ConsumerRole project instance count is set to 1, so there is only one consumer removing the messages from the queue. The WorkerRole project instance count is set to 3. Other deployment settings such as VM Size can also be set. For this sample, default values are used. Figure 8-16 shows the settings for the WorkerRole project.

Setting the instance count number for the WorkerRole project
Figure 8-16. Setting the instance count number for the WorkerRole project

When the project is executed, the emulator is started first. You can display the emulator by clicking the blue emulator icon in the Windows system tray, which is shown in Figure 8-17.

Emulator icon
Figure 8-17. Emulator icon

You can display the Compute Emulator UI and Storage Emulator UI options by right-clicking the emulator icon, as shown in Figure 8-18.

Displaying the emulator UIs
Figure 8-18. Displaying the emulator UIs

The Compute Emulator UI shows current running instances. You can see in the left panel in Figure 8-19 that there is 1 consumer role instance and 3 worker role instances. The right panel shows the output from these instances.

An example of the Compute Emulator UI
Figure 8-19. An example of the Compute Emulator UI

The Storage Emulator UI, shown in Figure 8-20, displays the blob, queue, and table status.

An example of the Storage Emulator UI
Figure 8-20. An example of the Storage Emulator UI

Because there are three workers generating messages and only one consumer, which takes one message every 200 ms, I expect the sleep time to settle around 600. The execution result verifies this expectation: after several iterations, the sleep time of the WorkerRole instances settles at 622 on my computer. The WorkerRole activity information can be viewed in the console window output, which you saw in Figure 8-19.

Note

The actual execution result, which is a number, will vary from execution to execution.

Azure WCF Service

Windows Communication Foundation (WCF) is a programming framework for building secure and reliable communication between applications. It is part of the Microsoft .NET Framework and is a unified programming model used to build service-oriented applications. In this section, I am going to present a mortgage calculator, which is a WCF service built to be hosted on the Azure cloud. The complete solution can be downloaded from the F# team sample archive, F# 3.0 Sample pack, at http://fsharp3sample.codeplex.com/.

You can access the project by clicking the Source Code tab, AzureSamples, and WcfInWorkerRole, as shown in Figure 8-21.

WcfInWorkerRole project in the F# 3.0 Sample Pack project on Codeplex
Figure 8-21. WcfInWorkerRole project in the F# 3.0 Sample Pack project on Codeplex

The solution structure is shown in Figure 8-22. There are four projects in the WcfInWorkerRole solution:

  • The LoanCalculatorContracts project defines service contracts.

  • The WCFWorker project is the F# worker role project that starts and hosts the WCF service.

  • The WCFWorkerAzure project is the Azure configuration project.

  • The WPFTestApplication project is a C# WPF application used to test the WCF service. This project is a client-side application.

WCF service sample solution structure
Figure 8-22. WCF service sample solution structure

The LoanCalculatorContracts project contains the WCF service and data contract interfaces. Example 8-4 defines the data contract and service contract interfaces by which the client and service exchange the loan and payment information and perform the computations.

Example 8-4. Mortgage calculator data contract and service contract interfaces

Data contract interfaces for loan and payment information

/// Record for LoanInformation
[<DataContract>]
type LoanInformation =
    { [<DataMember>] mutable Amount : double
      [<DataMember>] mutable InterestRateInPercent : double
      [<DataMember>] mutable TermInMonth : int }

/// Record for PaymentInformation
[<DataContract>]
type PaymentInformation =
    { [<DataMember>] mutable MonthlyPayment : double
      [<DataMember>] mutable TotalPayment : double }

Service contract interface

[<ServiceContract>]
type public ILoanCalculator =
    /// Use Record to send and receive data
    [<OperationContract>]
    abstract Calculate : a:LoanInformation -> PaymentInformation

Note

All the interface methods must be given a parameter name—for example, a:LoanInformation. Otherwise, the service host will fail to start.

The F# worker role project named WCFWorker implements the ILoanCalculator interface. Example 8-5 shows the implementation code.

Example 8-5. ILoanCalculator interface implementation
namespace WCFWorker

open System
open System.Collections.Generic
open System.Linq
open System.Text
open LoanCalculatorContracts
open System.ServiceModel
open System.Runtime.Serialization

[<ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)>]
type public LoanCalculatorImplementation() =

    member this.Calculate(loan : LoanInformation) =
        (this :> ILoanCalculator).Calculate loan

    interface ILoanCalculator with
        override this.Calculate(loan : LoanInformation) =
            let monthlyInterest =
                Math.Pow((1.0 + loan.InterestRateInPercent / 100.0), 1.0 / 12.0) - 1.0
            let num = loan.Amount * monthlyInterest
            let den =
                1.0 - (1.0 / (Math.Pow(1.0 + monthlyInterest, (double)loan.TermInMonth)))
            let monthlyPayment = num / den

            let totalPayment = monthlyPayment * (double)loan.TermInMonth
            let paymentInformation  = {MonthlyPayment = monthlyPayment;
                                            TotalPayment = totalPayment}

            paymentInformation
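The payment formula can also be exercised outside WCF. The following standalone sketch is mine (the sample loan figures are illustrative, not from the original sample); it applies the same arithmetic to a 30-year, $100,000 loan at 6 percent annual interest:

```fsharp
open System

/// Monthly and total payment for a loan, using the same arithmetic as the
/// service implementation: the annual rate is converted to an effective
/// monthly rate, then the standard annuity formula gives the payment.
let calculatePayment amount interestRateInPercent termInMonth =
    let monthlyInterest =
        Math.Pow(1.0 + interestRateInPercent / 100.0, 1.0 / 12.0) - 1.0
    let monthlyPayment =
        amount * monthlyInterest
        / (1.0 - 1.0 / Math.Pow(1.0 + monthlyInterest, float termInMonth))
    monthlyPayment, monthlyPayment * float termInMonth

let monthly, total = calculatePayment 100000.0 6.0 360
printfn "monthly = %.2f, total = %.2f" monthly total
```

For these inputs the monthly payment comes out to roughly $589, so the borrower repays a little over $212,000 across the 360 months.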

The worker role execution code is relatively simple. It starts the WCF service and then executes an infinite loop. The WCF service runs and listens for incoming requests, calculates the loan payment, and returns the result. The worker role code is shown in Example 8-6.

Example 8-6. Worker role code that starts and hosts the WCF service
namespace WCFWorker

open System
open System.Collections.Generic
open System.Diagnostics
open System.Linq
open System.Net
open System.Threading
open Microsoft.WindowsAzure
open Microsoft.WindowsAzure.Diagnostics
open Microsoft.WindowsAzure.ServiceRuntime
open Microsoft.WindowsAzure.StorageClient
open System.ServiceModel
open System.Runtime.Serialization
open LoanCalculatorContracts

type WorkerRole() as this =
    inherit RoleEntryPoint()

    [<DefaultValue>]
    val mutable serviceHost : ServiceHost

    member private this.CreateServiceHost() =

        this.serviceHost <- new ServiceHost(typeof<LoanCalculatorImplementation>)
        let binding = new NetTcpBinding(SecurityMode.None)
        let externalEndPoint =
                 RoleEnvironment.CurrentRoleInstance.InstanceEndpoints.["WCFEndpoint"]
        let endpoint = String.Format("net.tcp://{0}/LoanCalculator",
                                             externalEndPoint.IPEndpoint)
        this.serviceHost.AddServiceEndpoint(typeof<ILoanCalculator>, binding, endpoint)
        |> ignore

        this.serviceHost.Open()

    override wr.Run() =
        while (true) do
            Thread.Sleep(10000)
            Trace.WriteLine("Working", "Information")

    override wr.OnStart() =

        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit <- 12

        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
        this.CreateServiceHost()
        base.OnStart()

Azure Blob Storage

The Azure blob storage service is used to store unstructured data that can be hundreds of gigabytes (GB) in size. A storage account can hold up to 100 TB of blob data. The typical use for blob storage is to store and share unstructured data such as image, video, and audio files. You can also use it as a way to back up your data. Like the Azure Cloud Queue service, it can be accessed from anywhere by using HTTP or HTTPS.

The sample thumbnails in this section can be downloaded from http://fsharp3sample.codeplex.com/. You can access the project by clicking the Source Code tab, AzureSamples, and the Thumbnails_Dev11 folder. Figure 8-23 shows the project.

Thumbnails project in F# 3.0 Sample Pack on Codeplex
Figure 8-23. Thumbnails project in F# 3.0 Sample Pack on Codeplex

The project demonstrates how to use the Azure cloud queue and blob service. The queue is used to transfer the image file path. The blob is used to store the image data. The F# worker role gets the path from the queue, generates a thumbnail, and stores the thumbnail file on the blob. The solution structure is shown in Figure 8-24.

Thumbnails solution structure
Figure 8-24. Thumbnails solution structure

The following basic blob operations are available:

  • Create blob client

    let storageAccount =
        CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"))
    let blobStorage = storageAccount.CreateCloudBlobClient()
  • Create a container, and set permissions

    let container = blobStorage.GetContainerReference("photogallery")
    container.CreateIfNotExist() |> ignore
    let mutable permissions = container.GetPermissions()
    permissions.PublicAccess <- BlobContainerPublicAccessType.Container
    container.SetPermissions(permissions)
  • Upload blob to container

    let thumbnail = container.GetBlockBlobReference("thumbnails/" + thumbnailName)
    thumbnail.Properties.ContentType <- "image/jpeg"
    thumbnail.UploadFromStream(this.CreateThumbnail(image))
  • Download blob

    let content = container.GetBlockBlobReference(path)
    let image = new MemoryStream()
    content.DownloadToStream(image)

The solution contains three projects. The Thumbnails project is the Azure configuration project. The Thumbnails_WebRole project is a C# web role project for uploading images and displaying the thumbnails generated by the Thumbnails_WorkerRole project. Thumbnails_WorkerRole is an F# worker role project that does the heavy lifting of the thumbnail image processing.

Example 8-7 shows the F# worker role code. When the role starts and executes the OnStart function, it sets up an event handler to process configuration changes. The Run function first sets up the blob container and queue before going into the infinite loop. Within the loop, the queue is repeatedly checked to see whether there is a message that needs to be processed. If there is, the message is taken and the image it specifies is turned into a thumbnail. The thumbnail image is then uploaded to blob storage, where a web role can take it and display it.

Example 8-7. F# worker role code
namespace Microsoft.Samples.ServiceHosting.Thumbnails

open System
open System.Collections.Generic
open System.Configuration
open System.Diagnostics
open System.Drawing
open System.IO
open System.Text
open System.Linq
open System.Net
open Microsoft.WindowsAzure
open Microsoft.WindowsAzure.Diagnostics
open Microsoft.WindowsAzure.ServiceRuntime
open Microsoft.WindowsAzure.StorageClient

type public WorkerRole() =
    inherit RoleEntryPoint()
    [<DefaultValue>]
    val mutable width : int

    [<DefaultValue>]
    val mutable height : int

    [<DefaultValue>]
    val mutable configSetter : string * bool -> unit

    // function to create thumbnail from a stream
    member private this.CreateThumbnail( input : Stream ) =
        let orig = new Bitmap(input)

        if (orig.Width > orig.Height) then
            this.width <- 128
            this.height <- 128 * orig.Height / orig.Width
        else
            this.height <- 128
            this.width <- 128 * orig.Width / orig.Height

        let thumb = new Bitmap(this.width,this.height)

        use graphic = Graphics.FromImage(thumb)
        graphic.InterpolationMode <-
            System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic
        graphic.SmoothingMode <- System.Drawing.Drawing2D.SmoothingMode.AntiAlias
        graphic.PixelOffsetMode <- System.Drawing.Drawing2D.PixelOffsetMode.HighQuality

        graphic.DrawImage(orig, 0, 0, this.width, this.height)

        let ms = new MemoryStream()
        thumb.Save(ms,System.Drawing.Imaging.ImageFormat.Jpeg)
        ms.Seek(0L,SeekOrigin.Begin) |> ignore
        ms
    override this.OnStart() =
        // This code sets up a handler to update CloudStorageAccount
        // instances when their corresponding
        // configuration settings change in the service configuration file.
        CloudStorageAccount.SetConfigurationSettingPublisher(
            new Action<string, Func<string, bool>>(
                fun (configName : string) (configSetter : Func<string, bool>) ->
                    // Provide the configSetter with the initial value
                    configSetter.Invoke(
                        RoleEnvironment.GetConfigurationSettingValue(configName))
                    |> ignore
                    RoleEnvironment.Changed.Add(
                        fun (arg : RoleEnvironmentChangedEventArgs) ->
                            let c =
                                arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                            if (c.Any(fun n -> n.ConfigurationSettingName = configName)) then
                                let cN =
                                    RoleEnvironment.GetConfigurationSettingValue(configName)
                                if (not (configSetter.Invoke(cN))) then
                                    // In this case, the change to storage account
                                    // credentials in service configuration is significant
                                    // enough that the role needs to be recycled in order
                                    // to use the latest settings
                                    // (for example, the endpoint has changed).
                                    RoleEnvironment.RequestRecycle()
                        )
        ))

        base.OnStart()

    override this.Run() =
        let storageAccount =
            CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"))
        let blobStorage = storageAccount.CreateCloudBlobClient()
        let container = blobStorage.GetContainerReference("photogallery")

        let queueStorage = storageAccount.CreateCloudQueueClient()
        let queue = queueStorage.GetQueueReference("thumbnailmaker")

        Trace.TraceInformation("Creating container and queue...")

        // If the Start() method throws an exception, the role recycles.
        // If this sample is run locally and the development storage tool has
        // not been started, this can cause a number of exceptions to be thrown
        // because roles are restarted repeatedly.
        // Let's try to create the queue and the container and
        // check whether the storage services are running at all.
        let mutable containerAndQueueCreated = false
        while(not containerAndQueueCreated) do
            try
                container.CreateIfNotExist() |> ignore
                let mutable permissions = container.GetPermissions()
                permissions.PublicAccess <- BlobContainerPublicAccessType.Container

                container.SetPermissions(permissions)
                permissions <- container.GetPermissions()
                queue.CreateIfNotExist() |> ignore
                containerAndQueueCreated <- true

            with
            | :? StorageClientException as e ->
                    if (e.ErrorCode = StorageErrorCode.TransportError) then
                        Trace.TraceError(
                            String.Format(
                                "Connect failure! The most likely reason is that the " +
                                "local Development Storage tool is not running or your " +
                                "storage account configuration is incorrect. " +
                                "Message: '{0}'", e.Message))
                        System.Threading.Thread.Sleep(5000)

                    else
                        raise e

        Trace.TraceInformation("Listening for queue messages...")

        // Now that the queue and the container have been created
        // in the preceding initialization process, get messages
        // from the queue and process them individually.
        while (true) do
            try
                let msg = queue.GetMessage()
                if (box(msg) <> null) then
                    let path = msg.AsString
                    let thumbnailName =
                        System.IO.Path.GetFileNameWithoutExtension(path) + ".jpg"
                    Trace.TraceInformation(String.Format("Dequeued '{0}'", path))
                    let content = container.GetBlockBlobReference(path)
                    let thumbnail =
                        container.GetBlockBlobReference("thumbnails/" + thumbnailName)
                    let image = new MemoryStream()

                    content.DownloadToStream(image)

                    image.Seek(0L, SeekOrigin.Begin) |> ignore
                    thumbnail.Properties.ContentType <- "image/jpeg"
                    thumbnail.UploadFromStream(this.CreateThumbnail(image))

                    Trace.TraceInformation(String.Format("Done with '{0}'", path))

                    queue.DeleteMessage(msg)
                else
                    System.Threading.Thread.Sleep(1000)

            // Explicitly catch all exceptions of type StorageException here
            // because we should be able to
            // recover from these exceptions next time the queue message,
            // which caused this exception,
            // becomes visible again.
            with
            | e ->
                System.Threading.Thread.Sleep(5000)
                Trace.TraceError(
                    String.Format(
                        "Exception when processing queue item. Message: '{0}'",
                        e.Message))

Azure SQL Database

In addition to blob storage, Azure also provides SQL database support. You can create a database from the management portal: select the SQL DATABASES tab, click one of the servers, and then click the New button in the lower-left portion of the page to create a database on that server, as shown in Figure 8-25.

Creating a database on the Azure cloud
Figure 8-25. Creating a database on the Azure cloud

After you click the New button, the dialog box for creating a database is displayed, as shown in Figure 8-26. Click the Custom Create option to create the database.

Creating a database dialog box
Figure 8-26. Creating a database dialog box

Figure 8-27 shows the dialog box that is displayed when you click Custom Create. It contains options for specifying the database name and edition, as well as other settings. There are two editions, Web and Business, which are identical except for their capacity: the Web edition scales from 1 GB to 5 GB, while the Business edition scales from 10 GB to 50 GB in 10-GB increments.

Creating a database dialog box with database parameters
Figure 8-27. Creating a database dialog box with database parameters

After you click the button in the lower-right corner of the dialog box, your new database is created, as shown in Figure 8-28. All the created databases are listed, and the connection information for the database appears in the Connect To Your Database area.

The setup page for the new database created in the Azure environment
Figure 8-28. The setup page for the new database created in the Azure environment

After the database is created, you can log in to it to create a table. Figure 8-29 shows the database management UI. In the pane on the lower-right side of this UI, you can find information about the connection string. Example 8-8 demonstrates how to create the Log table.

Example 8-8. Creating the Log table
create table Log (
    ID int not null primary key,
    Event varchar(50),
    Description varchar(2500))

insert into Log Values(1,'Info', 'starting...')
insert into Log Values(2,'Info', 'working...')
insert into Log Values(3,'Info', 'end')
UI for managing the database
Figure 8-29. UI for managing the database

It is easy for F# to access an Azure SQL database by using a type provider. Because the SQL Connection type provider works only against an on-premises SQL Server database, you must use the Entity type provider to access an Azure SQL database. Example 8-9 shows how to use the Entity type provider to connect to the database, query it, and print the query results.

Example 8-9. Connecting to and querying an Azure SQL database using the Entity type provider
#if INTERACTIVE
#r "System.Data"
#r "System.Data.Entity"
#r "FSharp.Data.TypeProviders"
#endif

open System.Data
open System.Data.Entity
open Microsoft.FSharp.Data.TypeProviders

[<Literal>]
let conString =
    """Server=tcp:<server name>.database.windows.net,1433;Database=SetupLog;User ID=<user id>;Password=<password>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30"""

// You can use Server Explorer to build your ConnectionString.
type internal SqlConnection =
    Microsoft.FSharp.Data.TypeProviders.SqlEntityConnection<ConnectionString = conString>
let internal db = SqlConnection.GetDataContext()
// query the log table
let internal table = query {
    for r in db.Log do
    select r
    }

// print the log information
for p in table do
    printfn "[%d] %s: %s" p.ID p.Event p.Description

Execution result

[1] Info: starting...
[2] Info: working...
[3] Info: end

Code Snippet for Azure Development

You can find some Azure F# code snippets at http://fsharpcodesnippet.codeplex.com/. Table 8-1 lists the name and default code of these Azure code snippets.

Table 8-1. Code snippet for Azure development

Name

Default code

Create Azure cloud queue

let connectionStringName = "MyConnectionString"
let storageAccount =
    CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting(connectionStringName))
let queueClient = storageAccount.CreateCloudQueueClient()
let queue = queueClient.GetQueueReference("myqueue")
ignore <| queue.CreateIfNotExist()

Create Azure Service Bus Queue

let QueueName = "GAQueue"
let dp = QueueDescription(QueueName)
let connectionString =
    CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString")
let namespaceManager =
    NamespaceManager.CreateFromConnectionString(connectionString)
if not <| namespaceManager.QueueExists(QueueName) then
    ignore <| namespaceManager.CreateQueue(dp)

Create blob and set permission

let container = blobStorage.GetContainerReference("ref")
container.CreateIfNotExist() |> ignore
let mutable permissions = container.GetPermissions()
permissions.PublicAccess <- BlobContainerPublicAccessType.Container
container.SetPermissions(permissions)

Create blob storage

let storageAccount =
    CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(""))
let blobStorage = storageAccount.CreateCloudBlobClient()

Receive message from Azure Service Bus Queue

let QueueName = "GAQueue"
let dp = QueueDescription(QueueName)
let connectionString =
    CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString")
let factory = MessagingFactory.CreateFromConnectionString(connectionString)

let receiver = factory.CreateMessageReceiver(QueueName)
let msg = receiver.Receive()
msg.Complete()
let r = msg.Properties.["propertyName"]

Send message to Azure Service Bus Queue

let QueueName = "GAQueue"
let dp = QueueDescription(QueueName)
let connectionString =
    CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString")
let factory = MessagingFactory.CreateFromConnectionString(connectionString)
let sender = factory.CreateMessageSender(QueueName)
let m = new BrokeredMessage(1)
m.Properties.["propertyName"] <- 1
sender.Send(m)

MapReduce

According to the definition on Wikipedia (http://en.wikipedia.org/wiki/MapReduce), MapReduce (also written as Map/Reduce) is a programming model used to process large data sets. Google proposed it and applied it to processing large amounts of data, such as log files. Large here means that the data set is so big it has to be processed across thousands of machines in order to complete in a reasonable amount of time. MapReduce can process both unstructured data, such as files, and structured data, such as a database. In most cases, the data is stored at a number of locations, and the processing happens at the location closest to the data, which saves the time associated with transferring data over the wire. In other words, because the data is so vast, the computation is moved close to the data to improve response time. The idea behind MapReduce is derived from the functional programming paradigm, which positions F# well for cloud programming.
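The connection to functional combinators can be seen directly in F#. The following local sketch counts words across a set of "documents": the map step processes each document independently, and the reduce step merges the partial results by key. The input strings are made up for illustration.

```fsharp
// Word count expressed with the map and reduce combinators that
// inspired the MapReduce model. Runs entirely in memory.
let documents =
    [ "the quick brown fox"
      "the lazy dog"
      "the fox" ]

// Map step: each document independently produces (word, 1) pairs.
let mapped =
    documents
    |> List.map (fun doc ->
        doc.Split(' ') |> Array.toList |> List.map (fun w -> w, 1))

// Reduce step: merge all partial results, grouping by key (the word).
let counts =
    mapped
    |> List.concat
    |> List.groupBy fst
    |> List.map (fun (word, pairs) -> word, pairs |> List.sumBy snd)

for (word, n) in counts do
    printfn "%s: %d" word n
```

In a real MapReduce system, the map calls would run on different machines and the grouping would happen during a shuffle phase, but the shape of the computation is the same.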

The MapReduce model is derived from the functional programming combinators known as map and reduce. As the name suggests, MapReduce has two steps: map and reduce. The map step starts at the master node, which takes the input and divides it into smaller problems; it then distributes these subsets to the worker nodes. The worker nodes can further divide the problem and pass the data down to their own worker nodes, creating a multilevel tree. The process is shown in Figure 8-30. The master node at the top starts with a large amount of data (represented by the wide arrow). The data then flows toward the bottom of the tree, where it is distributed into smaller categories. Because all of the worker nodes process the data simultaneously, the result can be returned much more quickly.

The map step in MapReduce
Figure 8-30. The map step in MapReduce

In the reduce step, the data flows the opposite way. The parent nodes start to collect the answers from their worker nodes. The master node collects all the results and combines them for the final output. The process is demonstrated in Figure 8-31. When the data is aggregated to the root node, a conclusion (represented by the light bulb) can be made.

The reduce step in MapReduce
Figure 8-31. The reduce step in MapReduce

As an example, imagine that a 100-element array is passed to the master node and you want to know the sum of all the elements. The map step starts by splitting the data into small chunks, let’s say 10 elements, and each chunk is passed to a child node. After adding up these 10 elements, the worker nodes return the result to the master node, which calculates the sum of all its child nodes and returns the final answer. Example 8-10 shows how to simulate MapReduce on a local computer.

Example 8-10. Simulating MapReduce on a local computer
let data = [| 1..100 |]

let reduceFunction list =
    let r =
        list
        |> Seq.sum
    printfn "result = %A" r
    r

let map () =
    [0..9]
    |> Seq.map (fun i -> i * 10,(i + 1) * 10 - 1)
    |> Seq.map (fun (a,b) ->
                printf "from %A to %A: " a b
                async {
                    let! sum =
                        async { return reduceFunction (data.[a..b]) } |> Async.StartChild
                    return! sum
                })

let reduce seq =
    seq
    |> Seq.sumBy Async.RunSynchronously

let mapReduce =
    map()
    |> reduce

printfn "final result = %A" mapReduce

Execution result

from 0 to 9: result = 55
from 10 to 19: result = 155
from 20 to 29: result = 255
from 30 to 39: result = 355
from 40 to 49: result = 455
from 50 to 59: result = 555
from 60 to 69: result = 655
from 70 to 79: result = 755
from 80 to 89: result = 855
from 90 to 99: result = 955
final result = 5050

Another way to transfer the data is to pass in selection criteria, such as an index range or a database selection. Example 8-11 shows how to pass an index range to the MapReduce process.

Example 8-11. MapReduce with an index range
let data = [| 1..100 |]

let reduceFunction a b =
    let r =
        data.[a..b]
        |> Seq.sum
    printfn "result = %A" r
    r

let map () =
    [0..9]
    |> Seq.map (fun i -> i * 10,(i + 1) * 10 - 1)
    |> Seq.map (fun (a, b) ->
                printf "from %A to %A: " a b
                async {
                    let! sum = async { return reduceFunction a b } |> Async.StartChild
                    return! sum
                })

let reduce seq =
    seq
    |> Seq.sumBy Async.RunSynchronously

let mapReduce =
    map()
    |> reduce

printfn "final result = %A" mapReduce

Execution result from MapReduce with an index range

from 0 to 9: result = 55
from 10 to 19: result = 155
from 20 to 29: result = 255
from 30 to 39: result = 355
from 40 to 49: result = 455
from 50 to 59: result = 555
from 60 to 69: result = 655
from 70 to 79: result = 755
from 80 to 89: result = 855
from 90 to 99: result = 955
final result = 5050

The simulation program shows how Map/Reduce works. Now you can implement it using Windows Azure. The master and worker nodes are both implemented as Azure F# worker role projects. The solution structure is shown in Figure 8-32.

MapReduce Azure sample solution structure
Figure 8-32. MapReduce Azure sample solution structure

The master node is responsible for splitting the big data set into small chunks and passing the chunks to the worker nodes. Azure cloud queues are used for communication between the master and worker nodes. There are two queues: the parameter queue and the result queue. The parameter queue, named myqueue in Example 8-12, holds the small chunks of data. The result queue, named resultqueue in Example 8-12, is where the worker nodes insert their results. The master node gets all 10 elements from the result queue and then performs the reduce (sum) operation to get the final result, which is 5050.
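Before moving to Azure, the two-queue protocol can be simulated locally, with in-memory ConcurrentQueue instances standing in for the parameter queue and the result queue (a sketch of the design, not the Azure API):

```fsharp
open System.Collections.Concurrent

// In-memory stand-ins for the parameter queue and the result queue.
let paramQueue = ConcurrentQueue<int[]>()
let resultQueue = ConcurrentQueue<int>()

let data = [| 1..100 |]

// Master, map step: split the data into 10 chunks and enqueue them.
[0..9]
|> List.iter (fun i -> paramQueue.Enqueue(data.[i * 10 .. i * 10 + 9]))

// Workers: each dequeues one chunk, sums it, and posts the partial result.
let workers =
    [ for _ in 0..9 ->
        async {
            match paramQueue.TryDequeue() with
            | true, chunk -> resultQueue.Enqueue(Array.sum chunk)
            | false, _ -> () } ]
Async.Parallel workers |> Async.RunSynchronously |> ignore

// Master, reduce step: drain the result queue and sum the partial results.
let finalResult = resultQueue |> Seq.sum
printfn "final result = %d" finalResult  // 5050
```

The Azure version that follows has the same structure; only the transport differs, with durable cloud queues replacing the in-process queues.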

Example 8-12. Master node code
namespace MasterNode

open System
open System.Diagnostics
open System.Net
open System.Threading
open Microsoft.WindowsAzure
open Microsoft.WindowsAzure.ServiceRuntime
open Microsoft.WindowsAzure.StorageClient

type WorkerRole() =
    inherit RoleEntryPoint()

    // This is a sample worker implementation. Replace with your logic.

    let log message kind = Trace.WriteLine(message, kind)
    let random = Random()

    override wr.Run() =

        let connectionStringName = "QueueConnectionString"
        let storageAccount =
            CloudStorageAccount.Parse(
                CloudConfigurationManager.GetSetting(connectionStringName))
        let queueClient = storageAccount.CreateCloudQueueClient()
        let queue = queueClient.GetQueueReference("myqueue")
        let queue2 = queueClient.GetQueueReference("resultqueue")
        ignore <| queue.CreateIfNotExist()
        ignore <| queue2.CreateIfNotExist()

        let data = [| 1..100 |]

        // map step, split the data into small chunks
        let chunk = 9
        // insert 10 elements
        [0..chunk]
        |> Seq.map (fun i -> i * 10, (i + 1) * 10 - 1)
        |> Seq.iter (fun (a,b) ->
                        let l =
                            data.[a..b]
                            |> Array.map (fun i -> i.ToString())

                        queue.AddMessage(CloudQueueMessage(String.Join(",", l))))

        // reduce step, retrieve message and sum
        let result =
            [0..chunk]
            |> Seq.map (fun _ ->
                            while queue2.PeekMessage() = null do
                                Thread.Sleep(random.Next(5, 100))
                            queue2.GetMessage().AsString |> Convert.ToInt32)
            |> Seq.sum

        log (sprintf "final result = %A" result) "Information"

        log "MasterNode entry point called" "Information"
        while(true) do
            Thread.Sleep(10000)
            log "Working" "Information"

    override wr.OnStart() =

        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit <- 12

        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

        base.OnStart()

Note

The queue name cannot have uppercase characters.

The worker node code, which is relatively simple, is shown in Example 8-13. It takes the data from the parameter queue (myqueue), performs a Seq.sum operation, and inserts the summed result into the result queue (resultqueue).

Example 8-13. Worker node code
namespace WorkerNode

open System
open System.Diagnostics
open System.Net
open System.Threading
open Microsoft.WindowsAzure
open Microsoft.WindowsAzure.ServiceRuntime
open Microsoft.WindowsAzure.StorageClient

type WorkerRole() =
    inherit RoleEntryPoint()

    // This is a sample worker implementation. Replace with your logic.

    let log message kind = Trace.WriteLine(message, kind)
    let random = Random()

    override wr.Run() =

        let connectionStringName = "QueueConnectionString"
        let qs = CloudConfigurationManager.GetSetting(connectionStringName)
        let storageAccount = CloudStorageAccount.Parse(qs)
        let queueClient = storageAccount.CreateCloudQueueClient()
        let queue = queueClient.GetQueueReference("myqueue")
        let queue2 = queueClient.GetQueueReference("resultqueue")

        ignore <| queue.CreateIfNotExist()
        ignore <| queue2.CreateIfNotExist()

        let mutable msg = queue.GetMessage()
        while msg.DequeueCount <> 1 do
            Thread.Sleep(random.Next(5, 100))
            msg <- queue.GetMessage()

        queue.DeleteMessage(msg)

        let data =
            msg.AsString.Split([| "," |], StringSplitOptions.RemoveEmptyEntries)
            |> Array.map Convert.ToInt32

        let sum = data |> Seq.sum

        queue2.AddMessage(CloudQueueMessage(sum.ToString()))

        log "WorkerNode entry point called" "Information"
        while(true) do
            Thread.Sleep(10000)
            log "Working" "Information"

    override wr.OnStart() =
        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit <- 12

        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

        base.OnStart()

MapReduce Design Patterns

MapReduce is a powerful tool for processing data. This section introduces several MapReduce design patterns that can be used to solve common programming problems. Several communication mechanisms besides the Azure cloud queue can be used to pass results around; in this section, I generalize them as an Emit method. The Emit method sends a message containing the actual data as well as a key value that distinguishes this message from others. Example 8-12 and Example 8-13 can be expressed as pseudo code, as shown in Example 8-14.

Example 8-14. Summing using MapReduce
let map () =
    let chunks = split data
    for chunk in chunks do
        let sum = chunk |> Seq.sum
        emit(id, sum)

let reduce() =
    let sum =
        resultSet
        |> Seq.sumBy (fun (id, sum) -> sum)

    emit("final result", sum)

Another application is to count the occurrence of a term in a document. The key value is the term whose occurrence needs to be computed. The map step sends the term occurrence, and the reduce step aggregates the term occurrences. The pseudo code is shown in Example 8-15.

Example 8-15. Counting using MapReduce
let map term =
    let sum =
        doc
        |> Seq.filter (fun word -> word = term)
        |> Seq.length

    emit(term, sum)

let reduce() =
    let hashTable = Dictionary<term, int>()
    resultSet
    |> Seq.iter (fun (term, count) -> hashTable.[term] <- hashTable.[term] + count)

    emit("final result", hashTable)

MapReduce can also be used on a more complex structure. In the following new sample, a graph of the entity and its relationships is stored during each iteration. The map step sends one entity’s state, which can be as simple as all reachable nodes. The reduce step goes through all messages, which contain the relationship information, and updates the state for each node. The state in one node quickly “infects” other nodes. The whole process is shown in Figure 8-33, Figure 8-34, and Figure 8-35.

Graph processing initial state
Figure 8-33. Graph processing initial state
Graph processing after one iteration
Figure 8-34. Graph processing after one iteration
Graph processing after two iterations
Figure 8-35. Graph processing after two iterations
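One iteration of this graph pattern can be sketched locally. In the sketch below, a node's state is the set of nodes known to be able to reach it; the map step sends each node's state along its outgoing edges, and the reduce step merges the incoming messages into each node's new state. The four-node graph is a hypothetical example.

```fsharp
// Adjacency list for a small directed graph: node -> neighbors.
let edges = dict [ 1, [2; 3]; 2, [4]; 3, [4]; 4, [] ]

// Each node's initial state: the set of nodes known to reach it (itself).
let initialState = dict [ for n in edges.Keys -> n, Set.singleton n ]

// One MapReduce iteration over the graph.
let iterate (state: System.Collections.Generic.IDictionary<int, Set<int>>) =
    // Map step: every node sends its current state along each outgoing edge.
    let messages =
        [ for KeyValue(node, neighbors) in edges do
            for target in neighbors -> target, state.[node] ]
    // Reduce step: each node unions the incoming messages into its state.
    dict [ for KeyValue(node, s) in state do
            let incoming =
                messages
                |> List.filter (fun (target, _) -> target = node)
                |> List.map snd
            yield node, List.fold Set.union s incoming ]

let afterOne = iterate initialState
printfn "%A" (Set.toList afterOne.[4])  // [2; 3; 4]
```

After one iteration, node 4 has learned about nodes 2 and 3; after a second iteration, node 1's state has "infected" node 4 as well, mirroring Figures 8-33 through 8-35.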

The MapReduce pattern is a hot research topic, and a number of online resources cover these patterns in more detail.

Genetic Algorithms on Cloud

A Genetic Algorithm (GA) deploys a large number of potential solutions, which are then filtered by a target function that decides whether they are close to the optimal solution. Inferior solutions, those considered far from the optimum, are abandoned. The solutions judged good are used to generate new sets of potential solutions, and the whole process continues. The selection process usually involves some randomization.

Unlike other computations, a GA does not stop by itself; the developer needs to decide when the computation should stop. Because only the solutions close to the optimum are selected, the process makes the majority of solutions converge toward the optimum, and eventually an optimum solution is found. The whole process simulates Darwin's theory of natural selection: only the fit survive, and the survivor is the solution to the target problem. A GA is a good candidate for NP-complete problems, such as the traveling salesman problem (TSP), and it is often used in investment firms to find the best trading and investing strategies. One reason GAs work so well is that they have implicit parallelism and error tolerance built in. This makes the GA a perfect candidate for a cloud-computation sample, and this section demonstrates how to implement one on the Azure cloud.

Understanding Genetic Algorithms

The group of potential solutions is called a population, and the individuals it contains are usually called chromosomes. The target function used to decide which chromosomes are fit and which are inferior is called the fitness function. An individual in the population changes when it crosses over with one or more other individuals, or when it mutates by itself. The crossover and mutation process is also called recombination.

If the whole search space is envisioned as a two-dimensional (2D) plane, the population is initially scattered everywhere on it. When the evaluation process starts, individuals far from the optimum die out, and the whole population converges on the area around the optimum. The population can also converge on a suboptimal area; therefore, the algorithm uses several parameters to maintain the diversity of the population in the hope that it does not converge prematurely. Note that the population does not necessarily converge exactly on the optimum, but all the individuals end up very close to it. To someone unfamiliar with GAs, the whole evolutionary process can look like nothing more than chaos, but the population is disciplined by the target (fitness) function. The process wastes some search effort, but the problems a GA attacks are typically NP-complete problems, in which you are interested only in finding a good solution, regardless of whether it is the optimum.

The GA pseudo code in Example 8-16 shows how a simple GA (SGA) works. It first initializes a population of chromosomes. Each chromosome in the population is a potential solution to the target function, which is also the fitness function. The fitness function returns a value representing the quality of the solution (chromosome). The selection is then used to filter out inferior chromosomes. The selected chromosomes then perform a crossover and mutation operation. After these transformation operations, the chromosomes are put into a new population and the process is repeated. The process repeats for a certain number of iterations or until the best chromosome in the population is close enough to the optimum. The fitness function is used to determine how close a chromosome is to the optimum.

Example 8-16. GA pseudo code
initialize population
compute the chromosome fitness value from the fitness function

evolve the population for X times according to the following rule:

   step1: select a chromosome from the population and call it c0
   step2: select a chromosome from the population and call it c1
   step3: let c0 and c1 do the recombination and generate new chromosomes c0' and c1'
   step4: put c0 and c1 into new population
   step5: compute the fitness value from the fitness function

   repeat step1 to step5 until the new population size = the current population size

   current population = new population

The chromosome is usually represented as an array or list of integers or floating-point numbers. The implementation in Example 8-17 introduces an extra layer, giving a chromosome a geno-type layer and a pheno-type layer. The geno-type layer is where the actual data is located; if you take a single geno-type data item as input to a function, the result is a data item in the pheno-type layer. Using a function to transform geno-type data into pheno-type data enables more complex encodings for the chromosome. A user function must be passed in to initialize each building block, which is called a locus. The mutation operation is introduced later, in the Crossover and Mutation section. The whole population initialization is shown in Example 8-18.

Example 8-17. Chromosome definition
/// Chromosome type to represent the individuals involved in the GA
type ChromosomeType(f, size, ?converters) =
    let initialF = f
    let mutable genos = [for i in 1..size do yield f()]
    let mutable genoPhenoConverters = converters
    /// make a duplicate copy of this chromosome
    member this.Clone() =
        let newValue =
            match converters with
                | Some(converters) -> new ChromosomeType(initialF, size, converters)
                | None -> new ChromosomeType(initialF, size)
        newValue.Genos <- this.Genos
        newValue

    /// get fitness value with given fitness function
    member this.Fitness(fitnessFunction) = this.Pheno |> fitnessFunction

    /// gets and sets the Geno values
    member this.Genos
        with get() = genos
        and set(value) = genos <- value

    /// gets and sets the Pheno values
    member this.Pheno
        with get() =
            match genoPhenoConverters with
            | Some(genoPhenoConverters) ->
                List.zip genoPhenoConverters genos
                |> List.map (fun (f, value) -> f value)
            | None -> this.Genos

    /// mutate the chromosome with given mutation function
    member this.Mutate(?mutationF) =
        let location = random.Next(Seq.length this.Genos)
        let F =
            match mutationF with
                | Some(mutationF) -> mutationF
                | None -> f
        let transform i v =
            match i with
                | _ when i = location -> F()
                | _ -> v
        this.Genos <- List.mapi transform this.Genos
Example 8-18. Population initialization
    /// generate a population for GAs
    let Population randomF populationSize chromosomeSize =
        [for i in 1..populationSize do
            yield new ChromosomeType(f = randomF, size = chromosomeSize)]

Selection

First, keep in mind that the selection can choose suboptimum individuals in order to keep the diversity in the population, and that is totally fine. There are two commonly used selection techniques:

  • Roulette wheel selection. Imagine a roulette wheel that hosts all the individuals in the population. Each individual has its space according to its fitness value. The better the fitness value, the bigger space the individual has. The bigger space gives the individual a greater chance of being selected.

  • Rank selection. Roulette wheel selection uses the fitness value to allocate space, while rank selection uses the rank in the sorted list to do so. The worst individual is assigned the first space, the second worst is assigned the second space, and so on.

Rank selection gives more weight to the inferior individuals and can speed up the convergence process. It will be used as the default selection in the sample code, as shown in Example 8-19.

Example 8-19. Rank selection
/// rank selection method
let RankSelection (population: ChromosomeType list) fitnessFunction =
    let populationSize = Seq.length population
    let r() = randomF() % populationSize
    let randomSelection() =
        let c0 = population.[r()]
        let c1 = population.[r()]
        let result =
            if c0.Fitness(fitnessFunction) > c1.Fitness(fitnessFunction) then c0
            else c1
        result.Clone()
    Seq.init populationSize (fun _ -> randomSelection())

Crossover and Mutation

There are two typical recombination operations: crossover and mutation. These operations are performed on the selected individuals, and they transform the geno-type layer data. The crossover rate and mutation rate control how often each operation executes. Crossover needs two individuals, which exchange building blocks to generate offspring. Table 8-2 shows a simple crossover in which building blocks from Chromosome 1 are exchanged with those of Chromosome 2. Because the individuals going into the crossover are considered good ones, the hope is that the offspring are better than, or at least as good as, their parents, although in reality that might not always be the case. The crossover operation is defined in Example 8-20.

Table 8-2. Crossover operation

Chromosome 1

0000 0000 0000 0000

Chromosome 2

1111 1111 1111 1111

Offspring 1

1000 1111 0001 0101

Offspring 2

0111 0000 1110 1010

Example 8-20. Shuffle crossover
/// shuffle crossover
let ShuffleCrossover (c0:ChromosomeType) (c1:ChromosomeType) =
    let crossover c0 c1 =
        let isEven n = n%2 = 0
        let randomSwitch (x, y) = if isEven (randomF()) then (x, y) else (y, x)
        List.zip c0 c1 |> List.map randomSwitch |> List.unzip
    let (first, second) = crossover (c0.Genos) (c1.Genos)
    c0.Genos <- first
    c1.Genos <- second

The mutation involves only one individual. The simplest mutation is to randomly change a building block on the chromosome to another valid value. Table 8-3 shows a one-location mutation.

Table 8-3. Mutation operation

Chromosome

0000 0000 0000 0000

Mutated chromosome

1000 0000 0000 0000
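
In list terms, the one-location mutation in Table 8-3 simply replaces one element. The following is a standalone sketch; `mutateAt` is a hypothetical helper, and the sample library performs the equivalent transform inside ChromosomeType.Mutate:

```fsharp
/// replace the building block at the given location with a new value,
/// leaving every other position untouched
let mutateAt location newValue genos =
    genos |> List.mapi (fun i v -> if i = location then newValue else v)

// Table 8-3: mutating location 0 of an all-zero chromosome
// mutateAt 0 1 [0; 0; 0; 0] yields [1; 0; 0; 0]
```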

Crossover is considered not to introduce diversity: no new value appears, because building blocks merely move from one individual to another. Mutation, on the other hand, is a way to introduce diversity into the population. Depending on the other evolutionary-strategy settings, the crossover rate is usually set to a high number, such as 90 percent, while the mutation rate is as low as 2 percent. This agrees with real-world numbers: to maintain their existence, species usually do not have a high mutation rate, but the mating rate is high.

Elitism

Mother Nature does not necessarily give any favor to a good individual. The recombination operators, in most cases, generate new individuals by destroying their parents. Even though the best individual is selected every time, it can still be destroyed, and the next generation will be downgraded. Most GA implementations therefore add the concept of elitism to preserve the best individuals and allow them to go into the next generation without interruption from the recombination operations. The sample in this section uses elitism by default. The complete GA code is shown in Example 8-21, and Example 8-22 shows how to use the GA to find the maximum value of the sum of the chromosome's values, where xi ∈ [0,9].

In Example 8-16, you can see that the chromosome selection and transformation process can be executed in parallel. The GA computation can be parallelized at the chromosome level as well as at the population level. The whole evolutionary process fits a parallel programming model perfectly, and multiple populations can be executed simultaneously as long as these populations can exchange individuals. You might be surprised that the populations do not need to be synchronized. When a population is initialized, the chromosomes have not converged and are instead spread out in the search space. During the evolutionary process, the chromosomes converge at the optimum area.

When the population converges, we say that the population's diversity is low. The diversity of the population decreases during the evolutionary process. If the search space has several suboptimum areas, these areas can trap the population and cause it to converge at a suboptimum area. The only way to avoid premature convergence is to increase the diversity by introducing new chromosomes. The new chromosomes, with some probability, can help the population escape the suboptimum areas and eventually find the global optimum. If a population evolves more slowly than the others, the slow population can introduce diversity into the fast population when individuals are exchanged. If the VM hosting a population crashes, the newly initialized replacement population can provide more diversity to the whole system. Because of the large number of chromosomes in each population, missing a chromosome during communication does not affect the evolutionary process. Thanks to these features, the GA is resilient to crashes and communication errors as long as it is spread among different virtual hosts in the Azure cloud.

Example 8-21. Genetic Algorithm code
/// Evolutionary computation module
/// this module is for Genetic Algorithm (GA) only
module EvolutionaryComputation

    /// random number generator
    let random = new System.Random((int)System.DateTime.Now.Ticks)

    /// random int generator
    let randomF() = random.Next()

    /// random float (double) generator
    let randomFloatF() = random.NextDouble()

    /// Chromosome type to represent the individuals involved in the GA
    type ChromosomeType(f, size, ?converters) =
        let initialF = f
        let mutable genos = [for i in 1..size do yield f()]
        let mutable genoPhenoConverters = converters

        /// make a duplicate copy of this chromosome
        member this.Clone() =
            let newValue =
                match converters with
                    | Some(converters) -> new ChromosomeType(initialF, size, converters)
                    | None -> new ChromosomeType(initialF, size)
            newValue.Genos <- this.Genos
            newValue

        /// get fitness value with given fitness function
        member this.Fitness(fitnessFunction) = this.Pheno |> fitnessFunction

        /// gets and sets the Geno values
        member this.Genos
            with get() = genos
            and set(value) = genos <- value

        /// gets and sets the Pheno values
        member this.Pheno
            with get() =
                match genoPhenoConverters with
                | Some(genoPhenoConverters) ->
                    List.zip genoPhenoConverters genos
                    |> List.map (fun (f, value) -> f value)
                | None -> this.Genos

        /// mutate the chromosome with given mutation function
        member this.Mutate(?mutationF) =
            let location = random.Next(Seq.length this.Genos)
            let F =
                match mutationF with
                    | Some(mutationF) -> mutationF
                    | None -> f
            let transform i v =
                match i with
                    | _ when i=location -> F()
                    | _ -> v
            this.Genos <- List.mapi transform this.Genos

    /// generate a population for GAs
    let Population randomF populationSize chromosomeSize =
        [for i in 1..populationSize do
            yield new ChromosomeType(f = randomF, size = chromosomeSize)]

    /// find the maximum fitness value from a population
    let maxFitness population fitnessF =
        let best = Seq.maxBy (fun (c:ChromosomeType) -> c.Fitness(fitnessF)) population
        best.Fitness(fitnessF)

    /// find the most fit individual
    let bestChromosome population fitnessF =
        let best = Seq.maxBy (fun (c:ChromosomeType) -> c.Fitness(fitnessF)) population
        best

    /// rank selection method
    let RankSelection (population:ChromosomeType list) fitnessFunction =
        let populationSize = Seq.length population
        let r() = randomF() % populationSize
        let randomSelection() =
            let c0 = population.[r()]
            let c1 = population.[r()]
            let result =
                if c0.Fitness(fitnessFunction) > c1.Fitness(fitnessFunction) then c0
                else c1
            result.Clone()
        Seq.init populationSize (fun _ -> randomSelection())

    /// shuffle crossover
    let ShuffleCrossover (c0:ChromosomeType) (c1:ChromosomeType) =
        let crossover c0 c1 =
            let isEven n = n%2 = 0
            let randomSwitch (x,y) = if isEven (randomF()) then (x,y) else (y,x)
            List.zip c0 c1 |> List.map randomSwitch |> List.unzip
        let (first,second) = crossover (c0.Genos) (c1.Genos)
        c0.Genos <- first
        c1.Genos <- second

    /// evolve the whole population
    let Evolve (population:ChromosomeType list) selectionF crossoverF fitnessF
               crossoverRate mutationRate elitism =
        let populationSize = Seq.length population
        let r() = randomF() % populationSize
        let elites = selectionF population fitnessF |> Seq.toList
        let seq0 =
            elites
            |> Seq.mapi (fun i element -> (i, element))
            |> Seq.filter (fun (i, _) -> i % 2 = 0)
            |> Seq.map (fun (_, b) -> b)
        let seq1 =
            elites
            |> Seq.mapi (fun i element -> (i, element))
            |> Seq.filter (fun (i, _) -> i % 2 <> 0)
            |> Seq.map (fun (_, b) -> b)
        let xoverAndMutate (a:ChromosomeType) (b:ChromosomeType) =
            if (randomFloatF() < crossoverRate) then
                crossoverF a b
            if (randomFloatF() < mutationRate) then
                a.Mutate()
            if (randomFloatF() < mutationRate) then
                b.Mutate()
            [a] @ [b]

        if elitism then
            let r = Seq.map2 xoverAndMutate seq0 seq1 |> List.concat
            r.Tail @ [ bestChromosome population fitnessF ]
        else
            Seq.map2 xoverAndMutate seq0 seq1 |> List.concat

    /// composite function X times
    let rec composite f x =
         match x with
            | 1 -> f
            | n -> f >> (composite f (x-1))

    /// convert a function seq to function composition
    let compositeFunctions functions =
         Seq.fold ( >> ) id functions

Example 8-22. Invoking the GA code
open EvolutionaryComputation

// define the loci function to set each building block on the chromosome
let lociFunction() = box(random.Next(10))

// define the fitness (target) function
let fitnessF (items:obj list) =
    items |> Seq.map (System.Convert.ToInt32) |> Seq.sum

// initialize the population
let myPopulation = Population lociFunction 50 10

// evolve the population with rank selection, shuffle crossover (90%)
// mutation rate = 10% and elitism = true
let myEvolve population =
    Evolve population RankSelection ShuffleCrossover fitnessF 0.9 0.1 true

// evolve the population 75 times
let result = composite myEvolve 75 myPopulation

// print out the best fitness value which is the solution to the target function
printfn "%A" (maxFitness result fitnessF)

Azure Communication

Azure provides different ways to perform communications between hosts, such as AppFabric and Windows Azure Service Bus. In this section, the service bus is chosen to perform communication among the populations. The Service Bus mechanism provides both relayed and brokered messaging capabilities. The Service Bus Relay service supports direct one-way messaging, request/response messaging, and peer-to-peer messaging. One way that the Service Bus relay facilitates this is by enabling you to securely expose Windows Communication Foundation (WCF) services that reside within a corporate enterprise network to the public cloud, without having to open up a firewall connection or require intrusive changes to a corporate network infrastructure. Brokered messaging provides durable, asynchronous messaging components such as Queues, Topics, and Subscriptions, with features that support publish-subscribe and temporal decoupling: senders and receivers do not have to be online at the same time; the messaging infrastructure reliably stores messages until the receiving party is ready to receive them.

Setting Up the Service from Azure Management Portal

The service bus can be found in the management portal at the Azure website, as shown in Figure 8-36.

The Service Bus option in the Azure management portal
Figure 8-36. The Service Bus option in the Azure management portal

After you select the Service Bus option, you can create a service bus by clicking the Create button, which is shown in Figure 8-37.

Button for namespace and service bus queue
Figure 8-37. Button for namespace and service bus queue

For the sample code in this chapter, a namespace named testservicebus0 is required, as shown in Figure 8-38.

Creating sample namespace and service queue
Figure 8-38. Creating sample namespace and service queue

Before the code is presented, you need to retrieve the default key by clicking the Access Key button at the bottom of the page, as shown in Figure 8-39.

Service bus configuration UI
Figure 8-39. Service bus configuration UI

Figure 8-40 shows the default key.

Default key for the service
Figure 8-40. Default key for the service

These setup steps provide the service bus namespace, service bus queue, and user secret. Creating a cloud queue structure is similar to the procedure for creating a service bus. It can be done using the Cloud Services menu item, which you can see in Figure 8-36.

Service-Side Code

The next sample code creates an echo service. The service listens for incoming messages, receives a message, and then sends back that same message. The service code consists of a configuration file, App.config, and a simple .fs file. The App.config file defines how to communicate with the Azure cloud and references the Service Bus assembly, which is version 1.7.0.0 in Example 8-23.

Example 8-23. Configuration file for a service bus
<configuration>
  <system.serviceModel>
    <services>
      <service name="Microsoft.ServiceBus.Samples.EchoService">
        <endpoint contract="Microsoft.ServiceBus.Samples.IEchoContract"
binding="netTcpRelayBinding" />
      </service>
    </services>
    <extensions>
      <bindingExtensions>
        <add name="netTcpRelayBinding"
             type="Microsoft.ServiceBus.Configuration.NetTcpRelayBindingCollectionElement,
                   Microsoft.ServiceBus, Version=1.7.0.0, Culture=neutral,
                   PublicKeyToken=31bf3856ad364e35" />
      </bindingExtensions>
    </extensions>
  </system.serviceModel>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
  </startup>
</configuration>

The service code, shown in Example 8-24, defines a contract interface named IEchoContract and a channel interface named IEchoChannel. The service namespace, service queue name, and issuerSecret are created during the setup step. You can use F5 to run the program.

Example 8-24. Echo service code using an Azure service bus
// Learn more about F# at http://fsharp.net
namespace Microsoft.ServiceBus.Samples

open System

open Microsoft.ServiceBus
open System.ServiceModel
[<ServiceContract(
    Name = "IEchoContract",
    Namespace = "http://samples.microsoft.com/ServiceModel/Relay/")>]
type IEchoContract = interface
    [<OperationContract>]
    abstract member Echo : msg:string -> string
end

type IEchoChannel = interface
    inherit IEchoContract
    inherit IClientChannel
end

[<ServiceBehavior(
    Name = "EchoService",
    Namespace = "http://samples.microsoft.com/ServiceModel/Relay/")>]
type EchoService() = class
    interface IEchoContract with
        member this.Echo(msg) =
            printfn "%s" msg
            msg
end

module Program =

    [<EntryPoint>]
    let Main(args) =
        let serviceNamespace = "testservicebus0"
        let issuerName = "owner"
        let issuerSecret = "<your account key>";

        // create the service credential
        let sharedSecretServiceBusCredential = TransportClientEndpointBehavior()
        sharedSecretServiceBusCredential.TokenProvider <-
            TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerSecret);

        // set up the service host
        let address = ServiceBusEnvironment.CreateServiceUri("sb",
                                                             serviceNamespace,
                                                             "EchoService");
        let host = new ServiceHost(typeof<EchoService>, address);
        let serviceRegistrySettings = new ServiceRegistrySettings(DiscoveryType.Public);

        // setup endpoints
        for endpoint in host.Description.Endpoints do
            endpoint.Behaviors.Add(serviceRegistrySettings);
            endpoint.Behaviors.Add(sharedSecretServiceBusCredential)

        // open the service host
        host.Open();
        Console.WriteLine("Service address: " + address.ToString());
        Console.WriteLine("Press [Enter] to exit");
        Console.ReadLine() |> ignore

        // close the host service
        host.Close()

        0

Note

The interface, class, and struct keywords in the type definition are optional. The following two interface definition snippets are the same:

type IA =
    abstract F : int -> int

type IB = interface
    abstract F : int -> int
end

Client-Side Code

Like the server-side code, the client-side code keeps its configuration in App.config and consists of some simple .fs code. The configuration is shown in Example 8-25; the Service Bus DLL it references is version 1.7.0.0. The client-side code is even simpler: it sends a string message to the service. In Example 8-26, a service interface named IEchoContract and a channel interface named IEchoChannel are defined, and the string message is sent via the created channel.

Example 8-25. Client-side configuration file
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.serviceModel>
    <client>
      <endpoint name="RelayEndpoint" contract="Microsoft.ServiceBus.Samples.IEchoContract"
binding="netTcpRelayBinding" />
    </client>
    <extensions>
      <bindingExtensions>
        <add name="netTcpRelayBinding"
             type="Microsoft.ServiceBus.Configuration.NetTcpRelayBindingCollectionElement,
                   Microsoft.ServiceBus, Version=1.7.0.0, Culture=neutral,
                   PublicKeyToken=31bf3856ad364e35" />
      </bindingExtensions>
    </extensions>
  </system.serviceModel>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
  </startup>
</configuration>

Example 8-26. Client-side code for the service bus service
// Learn more about F# at http://fsharp.net

namespace Microsoft.ServiceBus.Samples

open System
open Microsoft.ServiceBus;
open System.ServiceModel;

[<ServiceContract(
    Name = "IEchoContract",
    Namespace = "http://samples.microsoft.com/ServiceModel/Relay/")>]
type IEchoContract = interface
    [<OperationContract>]
    abstract member Echo : msg:string -> string
end

type IEchoChannel = interface
    inherit IEchoContract
    inherit IClientChannel
end

module Program =

    [<EntryPoint>]
    let Main(args) =

        ServiceBusEnvironment.SystemConnectivity.Mode <- ConnectivityMode.AutoDetect;

        let serviceNamespace = "testServiceBus0"
        let issuerName = "owner"
        let issuerSecret = "<your account key>";

        let serviceUri = ServiceBusEnvironment.CreateServiceUri("sb",
                                                                serviceNamespace,
                                                                "EchoService");
        let sharedSecretServiceBusCredential = new TransportClientEndpointBehavior();
        sharedSecretServiceBusCredential.TokenProvider <-
            TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerSecret);

        let channelFactory = new ChannelFactory<IEchoChannel>(
                                           "RelayEndpoint",
                                           new EndpointAddress(serviceUri));
        channelFactory.Endpoint.Behaviors.Add(sharedSecretServiceBusCredential);

        let channel = channelFactory.CreateChannel();
        channel.Open();

        Console.WriteLine("Enter text to echo (or [Enter] to exit):");
        let mutable input = Console.ReadLine();
        while (input <> String.Empty) do
            try
                Console.WriteLine("Server echoed: {0}", channel.Echo(input));
            with
            | _ as e->
                Console.WriteLine("Error: " + e.Message);
            input <- Console.ReadLine();

        channel.Close();
        channelFactory.Close();

        0

This simple communication mechanism is used to implement the cloud GA sample, which exchanges the best individuals among populations. The cloud GA sample is shown in the next section.

Genetic Algorithms in the Cloud

When thinking about how to parallelize the GA in the cloud, there are two choices. One is to make the parallelization work on the chromosome level, and the other is to make it work on the population level. Because the communication cost in the cloud environment cannot be ignored, this implementation takes the population-level option, which means each virtual host on the cloud runs a population. The best chromosome in one population is selected and injected into another population or populations. After several iterations, all of the populations are supposed to converge at the same area and, we hope, that is where the optimum solution is located.

For the real implementation, cloud queue storage is used to store the best chromosomes from different populations. Each population randomly enqueues its best chromosome and, if the queue is not empty, dequeues an element to replace its worst chromosome. If the queue is empty, the population simply continues its evolution, hoping to get some external information next time. Another monitoring component makes sure that the queue does not grow too fast: it deletes three items at once, trying to keep the queue length under control. The monitor component also hosts a service bus service, which enables the client application to get the best result by querying it. Keep in mind that all the queue operations are performed randomly. Everything seems chaotic, but the fitness function, which also defines the problem to solve, guides the populations to the right spot. The whole architecture is shown in Figure 8-41.
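
The enqueue/dequeue protocol described above can be sketched with an in-memory queue standing in for cloud queue storage. This is a simplified model: `exchange` and `replaceWorst` are hypothetical names, and the real code in Example 8-27 works with CloudQueueMessage values instead of plain strings:

```fsharp
open System.Collections.Generic

// a local stand-in for the cloud queue, holding serialized chromosomes
let sharedQueue = Queue<string>()

/// one iteration's worth of exchange for a single population:
/// take an incoming chromosome if one is available, then publish our best
let exchange (best : string) (replaceWorst : string -> unit) =
    if sharedQueue.Count > 0 then
        replaceWorst (sharedQueue.Dequeue())   // incoming best replaces our worst
    sharedQueue.Enqueue best                   // share our best with the others
```

Two populations calling `exchange` in turn will see each other's best chromosome after the first round, which is all the coordination the architecture needs.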

Cloud GA architecture
Figure 8-41. Cloud GA architecture

You can load the full solution from the F# 3.0 pack (http://fsharp3sample.codeplex.com). This sample is under the AzureSamples folder in the source tree. There are four projects in the solution, as shown in Figure 8-42:

  • CloudGA is the configuration project, which every Azure project creates.

  • FSharpComputationLibrary is the project that hosts the GA library and cloud-configuration parameters. It is a common F# library project.

  • GARole is an F# worker role project created by the Azure cloud wizard. It runs the GA library and sends/receives the chromosomes from the cloud queue storage, which helps the current population evolve.

  • MonitorRole is also an F# worker role project. It regularly cleans the queue and reports the solution to the problem by hosting a service bus service.

CloudGA solution architecture
Figure 8-42. CloudGA solution architecture

The GARole project contains only one .fs file. The main execution function is the Run function. It invokes the GA library and interacts with the queue to exchange individuals with other populations. The code snippet is shown in Example 8-27. There is a random waiting statement before this code snippet; it is put there on purpose to decrease the chance of a GA population always getting back the chromosome that it just sent to the queue. The queue is set up in the OnStart function (see Example 8-28) and is cleared when the worker role is started. Because multiple GA worker roles can be started at different times, some items in the queue can be lost, but this chromosome loss is negligible compared to the whole GA evolutionary time and the complexity of the target function.

Example 8-27. GA communicating with the cloud queue
let msg = q.GetMessage()
if msg <> null then
    worst.FromString(msg.AsString, System.Convert.ToInt32 >> box)
    q.DeleteMessage(msg)

let bestStr = best.ToString()
q.AddMessage(CloudQueueMessage(bestStr))

Example 8-28. Setting up the cloud queue
override wr.OnStart() =

    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit <- 12

    // set up the queue
    let credential = StorageCredentialsAccountAndKey(
                            CloudConstant.storageAccount,
                            CloudConstant.accountKey)

    q <- CloudQueue(CloudConstant.queueUrl, credential)
    if q.Exists() then q.Clear()
    q.CreateIfNotExist() |> ignore

    base.OnStart()

The Monitor role is used to control the size of the cloud queue by deleting three items in a batch whenever the size of the queue is greater than a predefined number; in the sample, this predefined number is also set to 3. The code is shown in Example 8-29. The queue-setup code in the OnStart method for the Monitor role is no different from the GA role's. The Monitor role interacts with the service bus service so that it can answer queries from client applications about the best solution found for the target problem. (See Example 8-30.) The service bus code is similar to that in Example 8-24. The only difference is the extra property named InstanceContextMode, which is set to InstanceContextMode.Single in the ServiceBehavior attribute on the service class. This setting enables you to pass an instance of a service class to the service host instead of passing in the type.
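
The effect of InstanceContextMode.Single is that the ServiceHost constructor taking an object instance can be used, so the worker role can hold on to that instance and update it as new answers arrive. The following is a minimal sketch; IAnswer, AnswerService, and the address are illustrative assumptions, while the real types live in the MonitorRole project:

```fsharp
open System
open System.ServiceModel

[<ServiceContract>]
type IAnswer =
    [<OperationContract>]
    abstract GetAnswer : unit -> string

// Single means WCF reuses this one instance for every call
[<ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)>]
type AnswerService() =
    member val Value = "" with get, set
    interface IAnswer with
        member this.GetAnswer() = this.Value

let answerHost = AnswerService()
let address = Uri("sb://testservicebus0.servicebus.windows.net/Answer")  // placeholder

// pass the instance, not typeof<AnswerService>; the Run loop can later
// publish a new result with: answerHost.Value <- msg.AsString
let host = new ServiceHost(answerHost, address)
```

Because the host and the worker role share the same object, every client call to GetAnswer sees the latest chromosome the Run loop has stored.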

Example 8-29. Monitor role Run function
override wr.Run() =

    log "MonitorRole entry point called" "Information"
    while(true) do
        Thread.Sleep 1000
        log "Monitor Working to clear" "Information"
        log (sprintf "queue length = %A" (q.RetrieveApproximateMessageCount())) "Information"
        if (q.RetrieveApproximateMessageCount() > 3) then
            q.GetMessages(3) |> Seq.iter q.DeleteMessage

        let msg = q.GetMessage()
        if msg <> null then
            answerHost.Value <- msg.AsString

Example 8-30. Monitor role setting up the service bus service and cloud queue
override wr.OnStart() =
    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit <- 12

    // set up the queue
    let credential = StorageCredentialsAccountAndKey(CloudConstant.storageAccount,
                                                     CloudConstant.accountKey)

    q <- CloudQueue(CloudConstant.queueUrl, credential)
    q.CreateIfNotExist() |> ignore

    // start the service bus service
    let serviceNamespace = CloudConstant.serviceNamespace
    let issuerName = CloudConstant.issuerName
    let issuerSecret = CloudConstant.issuerSecret

    let sharedSecretServiceBusCredential = TransportClientEndpointBehavior()
    sharedSecretServiceBusCredential.TokenProvider <-
        TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerSecret)
    let address = ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace,
                                                         CloudConstant.serviceName);
    host <- new ServiceHost(answerHost, address)
    let serviceRegistrySettings = ServiceRegistrySettings(DiscoveryType.Public)

    for endpoint in host.Description.Endpoints do
        endpoint.Behaviors.Add(serviceRegistrySettings)
        endpoint.Behaviors.Add(sharedSecretServiceBusCredential)

    host.Open()

    base.OnStart()

When the solution is executed, its output can be viewed in the Output window in Visual Studio. Another approach is to use a service client to query the monitor role. Figure 8-43 shows the query result from the service bus client. As you monitor the results, you will see that the best solution improves as time passes. Finally, the output will show the optimum: an all-9 list. Because elitism is used, once the population converges at the optimum, it is very unlikely to drift away from it.

Query result from the GA execution
Figure 8-43. Query result from the GA execution

More sophisticated features, such as restart and acceptance of user input during the evolution, are beyond the scope of this book.
