4

Implementing Azure Functions

So far, we’ve covered some of the in-depth topics around Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Another popular service model that we haven’t discussed yet is Function as a Service (FaaS). Azure Functions is Microsoft’s FaaS solution, which takes the benefits of PaaS further by completely abstracting the underlying infrastructure, with pay-per-execution billing and automatic scaling available.

In this chapter, we will explore the Azure Functions service and its use cases. We will introduce some of the fundamental concepts of Azure Functions and run through a development workflow for a function app, including development, testing, and deployment. After creating several functions, we will expand on this further by introducing stateful durable functions.

By the end of this chapter, you will understand the benefits and use cases of Azure Functions and have some familiarity with its development workflow.

In this chapter, we will cover the following main topics:

  • Exploring Azure Functions
  • Developing, testing, and deploying Azure Functions
  • Discovering stateful durable functions

Technical requirements

To follow along with the examples in this chapter, the following are required in addition to VS Code:

Code in Action videos for this chapter: https://bit.ly/3xC0ao5

Note on Programming Language Examples

Remember, this book is an exam preparation guide, so we may not be covering examples using your preferred programming language. If you can understand the concepts with the language (C#) we use in this chapter, you should be able to answer exam questions on other languages as well. For documentation on supported languages, check out the Further reading section of this chapter.

Exploring Azure Functions

The Azure Functions service allows you to create code that can be triggered by events coming from Azure, third-party services, and on-premises systems, with the ability to access relevant data from these services and systems. Essentially, Azure Functions provides you with a serverless platform on which to run blocks of code (or functions) that respond to events. The unit of deployment in Azure Functions is a function app.

Within Azure, you create a function app, within which you can create one or more functions that share some common configuration such as app settings. The functions within a function app will all scale together, which is a similar concept to what we discussed in the last chapter with App Service plans. With this in mind, it often makes sense to group functions that are logically related together within a function app.

At the time of writing, the latest Azure Functions versions (4.x) support the following languages: C#, F#, Java, JavaScript, PowerShell, Python, and TypeScript. C# and JavaScript have been supported for longer than any of the other languages, so code samples in the exam are more likely to be in either of these two languages. The examples in this chapter use C#.

A topic that used to form part of the exam but no longer does is custom handlers. Custom handlers let you implement function apps in languages and runtimes that aren’t currently offered out of the box, such as Go and Rust. As this topic is no longer in the exam, we won’t explore it further here, but a link to the custom handler documentation can be found in the Further reading section of this chapter.

Azure Functions is often the service of choice for tasks such as data, image, and order processing, maintenance of files, simple APIs and microservices, and other tasks you might want to run on a schedule. While there are similarities between Azure Functions and services such as Logic Apps and App Service WebJobs, there are some key differences to be aware of:

  • Logic Apps development is more declarative, with a designer-first focus, whereas Azure Functions is more imperative, with a code-first focus. You can monitor Azure Functions using Application Insights (a topic for Chapter 10, Troubleshooting Solutions by Using Metrics and Log Data), while Logic Apps can be monitored using the Azure portal and Azure Monitor logs.
  • Azure Functions is built on the WebJobs SDK, and both Azure Functions and App Service WebJobs run on App Service, with support for extensibility, as well as features such as source control integration, authentication, and Application Insights monitoring. Azure Functions has several features that can offer developers more productivity than WebJobs:
    • A serverless application model with automatic scaling without additional configuration
    • The ability to develop and test within the browser
    • Trigger on HTTP/webhook and Azure Event Grid events
    • More options for languages, development environments, pricing, and integrations with Azure services
    • Pay-per-use pricing

For details on some common scenarios and the suggested implementations of Azure Functions for each, check out the Further reading section of this chapter. The last point in the list of differences between WebJobs and Azure Functions can help dramatically reduce your compute cost, depending on the hosting plan selected. This leads us to the topic of hosting options, as there are different options with different use cases.

Hosting options

There are three main hosting plans available for Azure Functions, all of which are available on both Windows and Linux VMs. Here’s a brief summary of these plans:

  • Consumption (also referred to as Serverless): This is the default hosting plan for function apps, providing automatic scaling of function instances based on the number of incoming events, as well as providing potential compute cost savings by billing only for the number of executions, execution time, and memory used by your functions. This is measured in what’s known as GB-seconds.

For example, if a function uses 0.5 GB of memory when it runs and runs for a total of 5 seconds, the execution cost is 2.5 GB-seconds (0.5 GB * 5 seconds). If the function isn’t executed at all during that period, the execution cost is nothing. You also get a free grant of 1,000,000 executions and 400,000 GB-seconds each month.

After a period of being idle, the instances will be scaled to zero. For the first requests after this, there may be some latency (a cold start) while instances are scaled back up from zero.

  • Premium: Unlike the Consumption plan, this plan automatically scales using pre-warmed workers, meaning there’s no latency after being idle. As you might imagine, this plan also runs on more powerful instances and has some additional features, such as being able to connect to virtual networks. This plan is intended for function apps that need to run continuously (or nearly continuously), run for longer than the execution time limit of the Consumption plan, or run on a custom Linux image. This plan uses Elastic Premium (EP) App Service plans, so you’ll need to create or select an existing App Service plan using one of the EP SKUs. Unlike the autoscale settings we saw with App Service plans from the last chapter, you can configure Elastic Scale out with the number of always-ready instances.

The billing for this plan is based on the number of core seconds and memory allocation across all instances. There’s no execution charge – unlike with the Consumption plan – but there is a minimum charge each month, regardless of whether or not your functions have been running, as a result of the requirement to have at least one instance allocated at all times.

  • App Service plan (also referred to as Dedicated): This plan uses the same App Service plans we became familiar with in the last chapter, including all the same scaling options. The billing of this plan is exactly the same as any other App Service plan, which differs from the Consumption and Premium plans. This plan can be useful when you have underutilized App Service plan resources running other apps or when you want to provide a custom image for your functions to run on.

If you’re going to use the App Service plan, you should go into the Configuration blade of your function app and, under the General settings tab, ensure that Always on is toggled to On, so that the function app works correctly (it should be on by default). This can also be configured using the CLI, as you might imagine.

You also have the option of hosting your function apps on App Service Environments (ASEs) for a fully isolated environment, which was mentioned in the last chapter, as well as on Kubernetes, neither of which is in the scope of this book. Regardless of which plan you choose, every function app requires a general-purpose Azure storage account that supports queues and tables for storing the function code files, as well as for operations such as managing triggers and logging executions (we will see this in the Discovering stateful durable functions section of this chapter). HTTP and webhook triggers are the only trigger types that don’t require storage.

Storage accounts are billed separately from function executions, so bear that in mind when estimating costs. A link to the Azure Functions pricing page can be found in the Further reading section of this chapter if you’d like further information on pricing details.

While we’re already familiar with the scaling options available with App Service plans, we should briefly discuss scaling when using the Consumption or Premium plans.

Scaling Azure Functions

The number of instances that Azure Functions scales to is determined by the number of events that trigger a function.

Remember

Function apps are the unit of deployment for Azure Functions, but they are also the unit of scale for Azure Functions – if a function app scales, all functions within the app scale at the same time.

The scale controller – which monitors the rate of events to decide whether to scale in or out – will use different logic for the scale decision based on the type of trigger being used. For example, it will take the queue length and age of the oldest queue message into consideration when you’re using an Azure Queue Storage trigger. For functions using HTTP triggers, new instances can be allocated at a maximum rate of once per second. For functions using other trigger types, that rate is a maximum of once every 30 seconds (although it is faster on the Premium plan).

A single instance of a function app might be able to process multiple requests at once, so there isn’t a limit on concurrent executions; however, a function app can only scale out to a maximum of 200 instances on the Consumption plan and 100 on the Premium plan. You can reduce this limit if you wish.

We’ve mentioned triggers a few times and with Azure Functions being event-driven, it’s worth going into some detail about triggers, as well as getting and sending data from connected services and systems with input and output bindings.

Triggers and bindings

In a nutshell, triggers cause your functions to run, and bindings are how you connect your function to other services or systems to obtain data from and send data to these services or systems. If you don’t need to send or receive data as part of your function, don’t use additional bindings – they’re optional. The exam mentions triggers using data operations, timers, and webhooks, so we’ll touch on each of those in the next section of this chapter.

Input bindings (the data your function receives) are passed to the function as parameters, while output bindings (the data your function sends) use the return value of the function. The trigger acts as an input binding by default, so no additional binding needs to be created in order to provide its data to the function. Triggers and bindings are defined differently based on the language being used:

  • C# class library: You can configure triggers and bindings by decorating methods and parameters with C# attributes.
  • Java: You can configure triggers and bindings by decorating methods and parameters with Java annotations.
  • C# script/JavaScript/PowerShell/Python/TypeScript: You can configure triggers and bindings by updating the function.json file.

To declare whether a binding is an input or output, you specify the direction as either in or out for the direction property of the binding. Some bindings also support a special inout direction. Each binding needs to have a type, direction, and name value defined.

Consider this basic scenario: each time a new message arrives in Azure Queue Storage, you want to create a new row in Azure Table Storage to store some data from the queue message. You would use an Azure Queue Storage trigger (queueTrigger), which creates an input binding, and you would create an Azure Table Storage output binding (table).
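
For comparison, in a C# class library, the same scenario could be declared with attributes rather than function.json. The following is a minimal sketch, assuming the in-process model; the class name QueueToTable and the row type are illustrative:

using System;
using Microsoft.Azure.WebJobs;

public static class QueueToTable
{
    // The queue trigger supplies the message as the myQueueItem parameter (input binding),
    // and the return value is written to Azure Table Storage (output binding).
    [FunctionName("QueueToTable")]
    [return: Table("outTable", Connection = "AzureWebJobsStorage")]
    public static TableRow Run(
        [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem)
    {
        return new TableRow
        {
            PartitionKey = "Messages",
            RowKey = Guid.NewGuid().ToString(),
            Message = myQueueItem
        };
    }

    public class TableRow
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public string Message { get; set; }
    }
}

The attributes carry the same type, direction, and name information that would otherwise live in function.json.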

With an awareness of the basic concepts of Azure Functions, including triggers and bindings, we are now ready to start creating functions.

Developing, testing, and deploying Azure Functions

Each function is made up of two main parts: your code and some configuration. The configuration file is created automatically for compiled languages based on annotations in the code; for scripting languages, the configuration file needs to be created – this is the function.json file we previously mentioned.

The files required and created depend on the language being used. The folder structure may change depending on the language as well. Our first examples will be using quite minimal C# script projects with no additional extensions for the most part, which will have a folder structure as follows:

Figure 4.1 – An example folder structure of a function app

Within the wwwroot folder, you will see a host.json file, which contains configuration options for all the functions within the function app. A link to further information about the host.json file can be found in the Further reading section of this chapter. There would be more files and folders if we were to use extensions, and the structure would also differ if we used a different language. As you might imagine, if we had multiple functions within this example function app, we would have an additional folder with a different name containing those files underneath the wwwroot folder.

Depending on the language being used and your preference, you can develop and test your functions within the portal, or you can work on them locally, using VS Code, for example. You should decide how you wish to develop early on because you shouldn’t mix developing functions locally with developing them within the portal in the same function app. If you create and publish a function from a local project, don’t try to maintain or modify the code from within the portal. In fact, once you deploy a project that was developed locally, you no longer have the option to create new functions within the same function app using the portal for development.

With all that being said, let’s head to the Azure portal and create a function app that will contain some basic functions using all three required trigger types, to see all the concepts we’ve explored so far in action:

  1. Within the Azure portal, create a new function app. The direct URL is https://portal.azure.com/#create/Microsoft.FunctionApp.
  2. Select the correct subscription from the Subscription dropdown, and either select an existing resource group or create a new one in the Resource Group field.
  3. Provide a globally unique name in the Function App name field.
  4. Make sure that Code is selected under the Publish setting.
  5. Select .NET for the Runtime stack setting and set the Version setting to the latest available one (which is 6 at the time of writing).
  6. Select your desired Region setting and progress to the next screen of the wizard.
  7. Leave the Hosting options at their defaults, which should create a new storage account for you, set Operating System to Windows, and select the Consumption (Serverless) plan.
  8. Next, move past Networking on to Monitoring, where you should set Enable Application Insights to No.

You’re welcome to leave the setting enabled, but we won’t be covering Application Insights until Chapter 10, Troubleshooting Solutions by Using Metrics and Log Data.

  9. Complete the wizard and click Create to create the resource.
  10. Once created, go to the newly created function app, and open the Configuration blade.

There are several application settings already present, including the following:

  • WEBSITE_CONTENTSHARE: This is used by a Premium plan or Windows Consumption plan and contains the path to the function app code and configuration files, with a default name starting with the function app name.
  • AzureWebJobsStorage: This contains the storage account connection string for the Azure Functions runtime to use in normal operations, as mentioned previously.

A reference for the Azure Functions application settings can be found in the Further reading section of this chapter.

With the function app resource deployed, we can start creating our functions within it. In this case, we’re going to do everything within the Azure portal for simplicity. When developing C# functions in the portal, C# script is used rather than compiled C#.

The data operation trigger

We’ll start by creating a function that uses a data operation trigger in the following scenario – each time a new message arrives in Azure Queue Storage, we want to create a new row in Azure Table Storage with some data from the queue message:

  1. Open the Functions blade and select Create to start creating our first function.
  2. For the Development environment setting, make sure it’s set to Develop in portal.
  3. Select Azure Queue Storage trigger, enter the name QueueTrigger1 in the New Function field, set Queue name to myqueue-items, leave the Storage account connection setting as AzureWebJobsStorage (for simplicity, we’ll just use the same storage account for everything in this function app), then click Create.
  4. Once created, open the newly created function, if it doesn’t automatically open.
  5. Open the Integration blade, where you will see a visual representation of the trigger, as well as input and output bindings (as mentioned previously, the trigger will have an input binding already, so we won’t need to create another input binding in our scenario):
Figure 4.2 – A visual representation of function integrations

Notice the trigger has myQueueItem in parentheses. This is the parameter name to identify our trigger within our code so that we can obtain data from it.

  6. Open the Code + Test blade and from the file dropdown, select the function.json file, which you will see contains the schema for the bindings we previously discussed, with the expected details from the trigger.
  7. Head back to the Integration blade and click on Add output under Outputs. We could have created the output binding directly in the function.json code, but we will use this for now.
  8. Change Binding Type to Azure Table Storage (because we want to create a new row for each message received), and change Table parameter name to $return, which is how we tell it to use the return value of the function. Set Table name to outTable and click OK. The diagram will now show the output binding.
  9. Head back to the Code + Test blade and open the function.json file again. This time, you will see our output binding configuration added to the file.

At this point, your function.json file should resemble the following:

{
  "bindings": [
    {
      "name": "myQueueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "myqueue-items",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "$return",
      "direction": "out",
      "type": "table",
      "connection": "AzureWebJobsStorage",
      "tableName": "outTable"
    }
  ]
}

Notice that the connection value is AzureWebJobsStorage, which relates to the application setting pointed out earlier. When connecting to other Azure services, the bindings refer to the environment variables created by the application settings, rather than using hardcoded connection string values directly. Some connections will use an identity rather than a secret, in which case you can configure a managed identity and provide relevant permissions to it. We will discuss managed identities further in Chapter 8, Implementing Secure Cloud Solutions, so we won’t go into detail at this stage.

  10. At the bottom of the screen, expand the Logs section to view the filesystem log streaming console. Here, you can see any logs from the function, as well as compilation logs.
  11. Switch the file to the run.csx file and delete the existing code.
  12. Enter the following code into the run.csx file:

    using Microsoft.Extensions.Logging;

    public static DemoMessage Run(string myQueueItem, ILogger log)
    {
        return new DemoMessage() {
            PartitionKey = "Messages",
            RowKey = Guid.NewGuid().ToString(),
            Message = myQueueItem.ToString() };
    }

    public class DemoMessage
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public string Message { get; set; }
    }

Notice that we’re accessing myQueueItem, which is the trigger parameter name (also acting as an input binding). The return value of the method is passed to the output binding, which will create a new row with a PartitionKey value of Messages, a new GUID for the RowKey value, and the Message column will contain the value from myQueueItem.

  13. Save the file and confirm that the log shows that the compilation was successful.
  14. Click Test/Run and type a message into the Body field of the Input tab, then click Run.
  15. Upon completion, the log should show that the function was called and succeeded, with the Output tab showing a 202 Accepted response code.
  16. Open the storage account, and within the Storage browser blade, navigate to Tables and open the newly created outTable table. You should see a new row containing the message:
Figure 4.3 – A new row created in Azure Table Storage using the function test run

Congratulations! You’ve successfully created and tested a new function. The code and bindings work.

Let’s set up the queue and confirm that everything works as intended outside of the test functionality:

  1. While still in the Storage browser blade, navigate to Queues, select Add queue, and name it myqueue-items, which was the value of queueName in our input binding.
  2. Click on the newly created queue and click Add message. Type a message and click OK to add the message to the queue.

For simplicity, we’re just using text, but we could have passed in JSON and had our function interpret the JSON elements to create an entry with multiple pieces of data (see the sketch after these steps).

  3. Periodically refresh the queue until the message is removed from the queue.
  4. Navigate to Tables and open outTable again. All being well, you should now see that a new row has been created with the message text you input as the queue message.
  5. While we’re here, open File shares and open the file share (notice that the name is the value from the WEBSITE_CONTENTSHARE application setting of the function app).
  6. Navigate to site/wwwroot and you’ll see the file and folder structure previously discussed.
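
As an illustration of the JSON case mentioned in the note above, a hypothetical run.csx could deserialize a message such as {"orderId": "123", "customer": "Contoso"} before writing the row. The OrderMessage and DemoOrder types here are illustrative and not part of the exercise:

#r "Newtonsoft.Json"

using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public class OrderMessage
{
    public string OrderId { get; set; }
    public string Customer { get; set; }
}

public static DemoOrder Run(string myQueueItem, ILogger log)
{
    // Deserialize the JSON queue message into a typed object
    var order = JsonConvert.DeserializeObject<OrderMessage>(myQueueItem);
    log.LogInformation($"Received order {order.OrderId} from {order.Customer}");

    // The return value still goes to the Azure Table Storage output binding
    return new DemoOrder
    {
        PartitionKey = "Orders",
        RowKey = order.OrderId,
        Customer = order.Customer
    };
}

public class DemoOrder
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Customer { get; set; }
}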

That’s the data operation trigger taken care of – next, we’ll tackle timers. We won’t list every single step for the remaining trigger types, only the relevant differences.

Timers

Here are the alternative steps required for creating a function that implements a timer trigger:

  1. Create a new function within the same function app we just used, this time selecting the Timer trigger option and modifying the schedule however you want.

You’ll see this uses the NCrontab syntax. A link with more information on NCrontab can be found in the Further reading section of this chapter. If the schedule were set to 0 */5 * * * *, the function would trigger every 5 minutes. Feel free to modify the timing for this exercise.

  2. Open the function and review the files and code as before to see how this is implemented (a sketch of the generated code follows these steps). Notice that the type here is timerTrigger. Open the logs as before to confirm the timer successfully triggers and the function runs on the configured schedule.
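
For reference, the run.csx generated for a timer trigger is very small and resembles the following (the exact generated code may vary slightly):

using System;
using Microsoft.Extensions.Logging;

// myTimer describes the schedule status; the function simply logs each execution
public static void Run(TimerInfo myTimer, ILogger log)
{
    log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
}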

The final trigger type we’re going to look at in this section is webhooks, or HTTP triggers.

Webhooks

Here are the alternative steps required for creating a function that implements an HTTP trigger:

  1. Create another function within the same function app we’ve been using, selecting the HTTP trigger option and accepting the rest of the settings as their defaults.
  2. Open the newly created function and once again view the function.json file to see the httpTrigger type being used, as well as an array of accepted methods that can trigger the function.

Notice the output binding uses the return value from the code. Notice also that the authLevel value is set to function by default. This means that the function won’t be triggered by just any GET or POST request, but only by a request that contains a valid function key.

  3. Open the run.csx file and review the code (a sketch of this default code follows these steps). You can see that it has a default response if there’s no query string or request body, and a personalized response if a name is passed in the query string or request body.
  4. Click Get function URL, leaving the dropdown as the default function key, and copy the URL.
  5. In a new browser tab or window, navigate to the URL just copied and your function should display a generic message.
  6. Remove ? and everything after it from the URL, and try again. The function doesn’t run because it’s configured to require a function key, so you get a 401 HTTP error instead.
  7. Re-add the full URL, including the code string, adding &name=Azure to the end of the URL, and try again. Notice the personalized greeting, which reflects the code we saw.
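
For reference, the default HTTP trigger code reviewed in step 3 resembles the following (the generated run.csx may differ slightly between runtime versions):

#r "Newtonsoft.Json"

using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    // Look for a name in the query string first, then in the request body
    string name = req.Query["name"];

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    string responseMessage = string.IsNullOrEmpty(name)
        ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
        : $"Hello, {name}. This HTTP triggered function executed successfully.";

    return new OkObjectResult(responseMessage);
}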

Feel free to explore further, but for now, we’ve looked at the three trigger types mentioned in the exam.

Serverless functions are typically intended to be single purpose, short-lived, and stateless, which is great for scaling. However, certain kinds of applications are difficult to implement without persistent state, and being unable to incorporate state can be a constraint. Durable functions allow you to create stateful functions using Azure Functions, which is the final topic for this chapter.

Discovering stateful durable functions

Durable Functions is an extension of Azure Functions that allows you to write stateful, serverless workflows (or orchestrations). You define the stateful workflows with orchestrator functions and you define stateful entities with entity functions. Durable functions manage the state, checkpoints, and restarts for you, using data stored in the storage account to keep track of the orchestration progress.

Previously, when you had more complex workflows and business logic that needed to be broken into multiple functions with stateful coordination, you’d have had to come up with a creative solution yourself using multiple services. That’s the primary use case for durable functions. Let’s briefly look at the function types and the typical patterns that durable functions help with.

Types of durable function

To achieve this kind of orchestration, durable functions have four types of functions:

  • Client functions (or starter functions): These can use all the function triggers and are used to initiate a new orchestration workflow. If you want to test an orchestration workflow, you can’t manually trigger an orchestrator function; you trigger the client function, which sends a message using an orchestrator client binding to initiate the orchestration.
  • Orchestrator functions: These define the steps within a workflow and can handle any errors that occur at any point in the workflow. Orchestrator functions don’t actually perform any activities – they only orchestrate.
  • Activity functions: These implement the steps within a workflow and can make use of all the input and output bindings available to functions.
  • Entity functions (or durable entities): These define the operations for reading and updating small pieces of state and can be invoked from client functions or orchestrator functions, accessed via a unique entity ID (see the sketch after this list).
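
To give a flavor of the last type, here is a minimal function-based entity sketch representing a counter; the entity name and its operations are illustrative, and we won’t deploy it in this chapter:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class Counter
{
    // Each invocation dispatches a single operation against the entity's stored state
    [FunctionName("Counter")]
    public static void Run([EntityTrigger] IDurableEntityContext ctx)
    {
        switch (ctx.OperationName.ToLowerInvariant())
        {
            case "add":
                ctx.SetState(ctx.GetState<int>() + ctx.GetInput<int>());
                break;
            case "reset":
                ctx.SetState(0);
                break;
            case "get":
                ctx.Return(ctx.GetState<int>());
                break;
        }
    }
}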

Durable function patterns

There are typically six application patterns that can benefit most from durable functions:

  • Function chaining: Where multiple functions need to be executed in a specific order with the output from one function being passed to the input of the next. An orchestrator function keeps track of the sequence progress.
  • Fan-out/fan-in: When multiple functions need to be executed in parallel and progress needs to wait for all those functions to complete. Usually, there would be some aggregation done on the results returned from the functions (see the sketch after this list).
  • Asynchronous HTTP APIs: Useful when coordination is required between long-running operations and external clients. Once the long-running operation starts, the orchestrator function manages polling the status until the operation completes or times out.
  • Monitor: When there’s a recurring process within a workflow that needs to be polled until certain conditions are met, such as monitoring something for a change in state, for example. The orchestrator function will call an activity function that checks whether these conditions are being met.
  • Human interaction: A lot of business workflows need to pause for some kind of approval. The orchestrator function uses what’s called a durable timer to request approval, and then waits for an external event (receiving approval) before moving to the next function. Optionally, you might decide to have a timeout and run another function to take some remediation action or escalation if the external event didn’t happen. We will talk about both scenarios later in the chapter.
  • Aggregator: When data needs to be aggregated over a period of time into a single, addressable entity. This data might come from multiple sources, at varying volumes. The aggregator may have to carry out an action on data as it arrives, and external clients may need to query the said data. This makes use of durable entities.
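
As an example of the second pattern, a fan-out/fan-in orchestrator might look roughly like the following sketch, in which GetWorkItems and ProcessItem are hypothetical activity functions:

using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class FanOutFanIn
{
    [FunctionName("FanOutFanInOrchestrator")]
    public static async Task<int> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Fan out: start one activity per work item, all in parallel
        string[] workItems = await context.CallActivityAsync<string[]>("GetWorkItems", null);
        var tasks = workItems
            .Select(item => context.CallActivityAsync<int>("ProcessItem", item))
            .ToList();

        // Fan in: wait for every activity to finish, then aggregate the results
        int[] results = await Task.WhenAll(tasks);
        return results.Sum();
    }
}

Because the activities run in parallel and the orchestrator only resumes once Task.WhenAll observes every result, the aggregation step can be written as ordinary sequential code.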

To demonstrate a durable function, we’re going to create one, and this time we’re going to do our development in VS Code so that you can see the local development and testing experience, which is often the preferred way to develop functions.

Developing within VS Code

When developing locally, the first thing we will do is create a new local Azure Functions project, where we select the same kind of settings that we did in the portal. After this, we can develop and test our durable functions locally before deploying to Azure.

Creating a project

These are the steps we can carry out to create a local Azure Functions project within VS Code:

  1. With all the technical requirements installed and VS Code open, create a new folder for the project.
  2. Open the Command Palette with F1, Ctrl + Shift + P, or View | Command Palette….
  3. Start the storage emulator by entering azurite: start into the input box. This allows us to use local storage when developing locally without having to provision a storage account in Azure for local testing. You’ll notice some new files and folders created.
  4. Open the command palette again and this time, enter azure functions: create new project.
  5. Browse to and select the folder you created for this, select C# for the language, .NET 6 for the runtime, and Durable Functions Orchestration for the template, enter a name or accept the default, and accept the default namespace.
  6. When prompted, select the Use local emulator option.
  7. Once completed, open the .cs file and examine the code.

Within the generated code, we can see examples of a few of the function types previously discussed. The first is an orchestrator function (notice we’re decorating the function with an OrchestrationTrigger attribute, which will be used to populate the function.json file upon compilation):

        [FunctionName("DurableFunctionsOrchestrationCSharp1")]
        public static async Task<List<string>> RunOrchestrator(
            [OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            var outputs = new List<string>();
            // Replace "hello" with the name of your Durable Activity Function.
            outputs.Add(await context.CallActivityAsync<string>("DurableFunctionsOrchestration CSharp1_Hello", "Tokyo"));
            outputs.Add(await context.CallActivityAsync<string>("DurableFunctionsOrchestration CSharp1_Hello", "Seattle"));
            outputs.Add(await context.CallActivityAsync<string>("DurableFunctionsOrchestration CSharp1_Hello", "London"));
// returns ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
            return outputs;
        }

This function uses the function chaining pattern previously discussed, where it’s calling an activity function in sequence, passing in the name of a city, and returning a list of outputs from that activity function.

Next, we have the activity function (decorated with ActivityTrigger):

        [FunctionName("DurableFunctionsOrchestrationCSharp1_Hello")]
        public static string SayHello([ActivityTrigger] string name, ILogger log)
        {
            log.LogInformation($"Saying hello to {name}.");
            return $"Hello {name}!";
        }

This activity function takes the name passed to it, logs a message, and returns a string value greeting the provided name.

Finally, we have the client – or starter – function, which triggers the orchestration function (decorated with HttpTrigger, although it could be any other trigger type):

        [FunctionName("DurableFunctionsOrchestrationCSharp1_HttpStart")]
        public static async Task<HttpResponseMessage> HttpStart(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
            [DurableClient] IDurableOrchestrationClient starter,
            ILogger log)
        {
            // Function input comes from the request content.
            string instanceId = await starter.StartNewAsync("DurableFunctionsOrchestrationCSharp1", null);
            log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
            return starter.CreateCheckStatusResponse(req, instanceId);
        }

This function has an HTTP trigger that accepts GET and POST requests, triggers the orchestration function, and uses the generated instanceId value in a log message, as well as for generating a message that can be used to check the status. We won’t make any code changes this time.

In the file explorer, you’ll see a file called local.settings.json. This file stores application settings that are only used for local development. This is useful because the AzureWebJobsStorage value is used by our functions, and for local development we want to use the local storage emulator, while in Azure, we want to use the storage account. Locally, the value of AzureWebJobsStorage is UseDevelopmentStorage=true, indicating that the storage emulator will be used.

Let’s get ready to give this a test.

Developing and testing locally

Here are the steps we can follow to build and test our functions locally in VS Code:

  1. From the activity bar of VS Code, click the Azure icon and expand the FUNCTIONS list.
  2. Expand your subscription and you’ll be able to see the function we created earlier within the portal. Feel free to look through and see what can be found there. You can also see the local project in the same area. Expand Local Project and Functions, then click Run build task to update this list… (you could also build however you would normally build your .NET projects):
Figure 4.4 – The local and Azure-based functions listed

You can now see the three functions listed: we have our orchestrator function, activity function, and client (or starter) function.

  3. Click on each of the functions and the function.json files for each will be displayed, where you can see the specific bindings, which should be somewhat familiar by now.

Although we won’t do it here, you could also right-click on the functions to add a binding, which would open a wizard to have you populate the relevant properties for the binding.

  4. Run the program as you normally would, which is often using F5.
  5. You will see our functions and their triggers in the terminal output. The client function has a webhook URL we can use to trigger it. Visit that URL and you will see several pieces of information, including the statusQueryGetUri URI. The terminal output will also display output messages.
  6. Copy the full URL value of statusQueryGetUri, which was generated by the client function using the CreateCheckStatusResponse call we saw in the code, then visit that URL.

Here you can see the status as Completed, along with the output list from the orchestrator function and some other data.

  7. Back in VS Code, right-click on the client function, select Execute Function Now…, and either accept the default JSON input or delete it (it makes no difference in our case). You’ll see the terminal output showing the durable functions running.
  8. Stop the app running with Ctrl + C (or any other way you wish).

We’ve just created and tested a durable function locally without the functions being deployed in Azure – great job! Let’s finish this up by deploying our function to Azure along with a new function app.

Deploying to Azure

There are several ways to deploy our app to a function app – we’re just going to use one of them here:

  1. Open the command palette as before and enter Create new Function App in Azure… (Advanced).

We’re going to use the advanced method so we can select the resource group rather than allow it to create a new one.

  2. Enter a globally unique name for this function app, choose .NET 6 when prompted with Select a runtime stack, choose Windows when prompted with Select an OS, choose the appropriate resource group and location settings when prompted, choose the Consumption plan, select Create a new storage account, and enter a unique name for the storage account if the default isn’t acceptable. Select Skip for now at the Application Insights resource creation step. Monitor the terminal output to see when it completes.
  3. Open the command palette as before and enter azure functions: deploy to function app.
  4. Select the newly created function app when prompted with Select a resource.
  5. When prompted, select Deploy.

We created the function app, but we could have also deployed the code to an existing function app, which is what we would do if we needed to update the code or deploy another function to the same function app.

Upon completion, you should see the new function app with all our new functions created under your subscription in VS Code.

  6. Execute the starter function within Azure by right-clicking on it and selecting Execute Function Now…, as we did previously.
  7. Within the Azure portal, open the newly created storage account and go into the Storage browser blade.
  8. Explore the tables you see, where you’ll be able to see all the snapshots in history that the orchestrator function creates during the execution of the workflow, as well as the status for each instance.

You can also see the various queues used for the orchestration. If you haven’t worked it out by now, the AzureWebJobsStorage application setting has indeed been pulled from the function app’s application settings and not our local setting. Detailed information on the performance and scale of Azure Functions, which details how the storage accounts are used, can be found in the Further reading section of this chapter.

You may have noticed some references to something called task hubs. Let’s take a quick look at what that means in durable functions.

Task hubs

Task hubs are a logical container for storage resources used in durable functions orchestration such as the queues, tables, and containers we can see in our storage account. If one or more function apps share the same storage account, they should all be configured with their own task hub names. If not, they may compete against each other for messages, which could lead to them getting stuck in a specific state.

Task hubs can be defined in the host.json file, which could look as follows:

{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "MyTaskHub"
    }
  }
}

Azure Functions automatically ensures that the default production slot’s task hub name matches the name of the site, which is why the task hub name we saw reflected the name of our function app. If you’re using multiple deployment slots, you should configure task hub names for the non-production slots to avoid conflicts.

Durable functions provide a means to control timing, called durable timers. These should be used instead of the standard ways in which you would normally create timers in your chosen language.

Controlling timing

To create a durable timer, you would call the CreateTimer() method in .NET or the createTimer() method in JavaScript.

A very basic, crude, one-line example of creating a 1-minute timer in C#, which will pause the orchestration for 1 minute before continuing, is the following:

await context.CreateTimer(context.CurrentUtcDateTime.Add(TimeSpan.FromMinutes(1)), CancellationToken.None);

If we wanted to create a timeout that canceled the task, we could create a new CancellationTokenSource, using a line similar to this:

var cts = new CancellationTokenSource();

Then, instead of CancellationToken.None, we could reference the CancellationTokenSource object with cts.Token, and after our timeout logic, we could cancel the durable timer with cts.Cancel(). If the timer is neither allowed to complete nor canceled, the orchestration status isn’t set to completed.

One of the patterns we mentioned earlier was the human interaction pattern. Having the ability to wait and listen for external events is required for this to work, so let’s take a look at this final topic.

Waiting for and sending events

To wait for an external event in C#, you would call the WaitForExternalEvent<type>("<name>") method in the orchestrator function, specifying the name of the event and the type of data it expects to receive. For example, to wait for an external event called Approval, which you would expect to return a Boolean value indicating whether or not something is approved, we could use the following:

bool approved = await context.WaitForExternalEvent<bool>("Approval");

We could then check on the approval outcome using the approved variable.

Using the approval scenario again, suppose a client function receives the approval information and needs to pass it to the orchestrator function listening for it. In C#, we could use the RaiseEventAsync() method as follows:

await client.RaiseEventAsync(instanceId, "Approval", true);

It’s passing in the instanceId value, the name of the event (Approval, in this case), and the Boolean value.

If the orchestrator function wasn’t listening for that event, the message would get added to an in-memory queue, so that it would be available should the orchestrator function start listening for that event later.
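
Putting durable timers and external events together, the human interaction pattern described earlier could be sketched inside an orchestrator function as follows; the 72-hour deadline and the activity names (RequestApproval, ProcessApproval, and Escalate) are illustrative:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class ApprovalWorkflow
{
    [FunctionName("ApprovalOrchestrator")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        await context.CallActivityAsync("RequestApproval", null);

        using (var cts = new CancellationTokenSource())
        {
            // Durable timer acting as a timeout for the approval
            DateTime deadline = context.CurrentUtcDateTime.AddHours(72);
            Task timeoutTask = context.CreateTimer(deadline, cts.Token);

            // Wait for either the external Approval event or the timeout, whichever happens first
            Task<bool> approvalTask = context.WaitForExternalEvent<bool>("Approval");
            Task winner = await Task.WhenAny(approvalTask, timeoutTask);

            if (winner == approvalTask)
            {
                // Approval arrived in time: cancel the timer so the orchestration can complete
                cts.Cancel();
                await context.CallActivityAsync("ProcessApproval", approvalTask.Result);
            }
            else
            {
                // Timeout: take a remediation or escalation action instead
                await context.CallActivityAsync("Escalate", null);
            }
        }
    }
}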

With that, we have come to the end of our journey into Azure Functions. Feel free to delete the resource group and all resources created during this chapter if you wish.

Summary

In this chapter, we explored what Azure Functions is, what hosting options are available, and some fundamentals around scaling, as well as the core concepts of triggers and bindings. From there, we developed and tested functions in the Azure portal using a data operation trigger, a timer trigger, and a webhook trigger. We then looked at how you can create stateful workflows using durable functions, where we looked at the different function types available and primary use cases, before developing our own durable functions locally within VS Code, making use of local development application settings and the storage emulator. Finally, we took a brief look at task hubs, controlling timing, and finished off with how to wait for and send events with durable functions.

In the next chapter, we will step away from focusing on compute solutions, and look at developing solutions that use Cosmos DB storage. We will be looking at the service, the available APIs for Cosmos DB, managing databases and containers, followed by inserting and querying documents. We will then move into the topics of change feed, partitioning and consistency levels, as well as optimizing database performance and costs.

Questions

  1. What is the name of the file that holds the information on a function’s triggers and bindings?
  2. What information is contained in the AzureWebJobsStorage application setting?
  3. Which durable function type does an orchestrator function call to implement the steps of a workflow?
  4. Which file is used to define application settings that only apply to local development?

Further reading
