My personal blog is written in .NET Core (http://mode19.net). Originally I wrote each post in its own page. Those pages were all part of the source code of the blog and had corresponding metadata in a database. But as the number of posts increased, the site became hard to manage, especially since the older pages were written using older libraries and techniques. The contents of the blog posts didn’t change—only the formatting changed.
That’s when I decided to convert my blog posts to Markdown. Markdown allows me to write just the content of the blog post without having to worry about the formatting. That way, I could store my blog posts in a database or BLOB storage and not have to rebuild the web application every time I posted a new entry. I could also convert every page on the blog to use the latest libraries I wanted to try out, without touching the posts’ content.
To handle the storing of posts and conversion from Markdown to HTML, I created a microservice. To describe what a microservice is, I’ll borrow some of the characteristics listed in Christian Horsdal Gammelgaard’s book Microservices in .NET Core (Manning, 2017). A microservice is
In this chapter, you’ll create a blog post microservice. The data store will be Azure Blob Storage. I picked Azure Blob Storage because it presents a challenge in that HTTP requests made to it need special headers and security information. There’s support for Azure Blob Storage in the Azure SDK, which is available for .NET Standard. But as an exercise, you’ll make the HTTP requests directly.
In chapter 2 you used the dotnet new web template. That template is tuned more for websites than web services. You’ll start with that template and make the necessary adjustments to turn it into a web service-only project.
But before you begin, let’s find something interesting for your service to do.
There are many implementations of Markdown, and several are available in .NET Core or .NET Standard. The library you’ll be using is called Markdown Lite.
You can see how it works by creating an empty web application. Create a new folder called MarkdownLiteTest and run the dotnet new console command in it. Add a reference to Microsoft.DocAsCode.MarkdownLite in the project file, as follows.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.DocAsCode.MarkdownLite" Version="2.13.1" />
  </ItemGroup>
</Project>
Now try out some sample code. The following listing shows a test to convert simple Markdown text into HTML and write the HTML to the console.
using System;
using Microsoft.DocAsCode.MarkdownLite;

namespace MarkdownLiteTest
{
    public class Program
    {
        public static void Main()
        {
            string source = @"
Building Your First .NET Core Applications
=======
In this chapter, we will learn how to setup our
development environment, create an application, and
";
            var builder = new GfmEngineBuilder(new Options());
            var engine = builder.CreateEngine(new HtmlRenderer());
            var result = engine.Markup(source);
            Console.WriteLine(result);
        }
    }
}
The output should look like this:
<h1 id="building-your-first-net-core-applications">
Building Your First .NET Core Applications</h1>
<p>In this chapter, we will learn how to setup our
development environment, create an application, and</p>
Markdown Lite doesn’t add <html> or <body> tags, which is nice for inserting the generated HTML into a template.
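Because the fragment has no surrounding tags, dropping it into a page shell is a one-line string operation. Here's a minimal sketch; the `WrapInPage` helper and template string are invented for illustration, not part of Markdown Lite:

```csharp
using System;

public static class TemplateDemo
{
    // Wrap a converted Markdown fragment in a minimal page template.
    public static string WrapInPage(string fragment, string title) =>
        $"<html><head><title>{title}</title></head><body>{fragment}</body></html>";

    public static void Main()
    {
        var page = WrapInPage("<h1>Post</h1>", "My Blog");
        Console.WriteLine(page);
    }
}
```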
Now that you know how to use Markdown Lite, you can put it into a web service.
In chapter 2 you created an ASP.NET Core service using Kestrel and some simple request-handling code that returned a “Hello World” response for all incoming requests. In this chapter’s example, you’ll need to process the input that comes in. ASP.NET has some built-in mechanisms to route requests based on URI and HTTP verb that you’ll take advantage of.
Start by creating a new folder called MarkdownService and running dotnet new web. Modify the project file as shown in the following listing.
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
    <PackageReference Include="Microsoft.DocAsCode.MarkdownLite" Version="2.13.1" />
  </ItemGroup>
</Project>
The Program.cs file is responsible for starting the web server. Its code can be simplified to what’s shown in the next listing.
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

namespace MarkdownService
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        public static IWebHost BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>()
                .Build();
    }
}
The Startup class is where you’ll configure ASP.NET MVC. MVC handles the incoming requests and routes them depending on configuration and convention. Modify the Startup.cs file to look like the code in the next listing.
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.DocAsCode.MarkdownLite;

namespace MarkdownService
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();

            var builder = new GfmEngineBuilder(new Options());
            var engine = builder.CreateEngine(new HtmlRenderer());
            services.AddSingleton<IMarkdownEngine>(engine);
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseMvc();
        }
    }
}
MVC stands for “model, view, controller,” which is a pattern for building web applications. ASP.NET MVC was introduced as an alternative to the old WebForms approach for building web applications. Neither was intended for REST services, so another product called Web API was introduced for that purpose. In ASP.NET Core, Web API and MVC have been merged into one, and WebForms no longer exists.
The IMarkdownEngine object is created at startup and registered as a singleton in the dependency injection container. ASP.NET Core uses the same Microsoft.Extensions.DependencyInjection library you used in chapter 6.
The next thing you need to do is create a controller. MVC uses reflection to find your controllers, and it routes incoming requests to them. You just need to follow the conventions. Create a new file called MdlController.cs and add the following code.
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Mvc;
using Microsoft.DocAsCode.MarkdownLite;

namespace MarkdownService
{
    [Route("/")]
    public class MdlController : Controller
    {
        private readonly IMarkdownEngine engine;

        public MdlController(IMarkdownEngine engine)
        {
            this.engine = engine;
        }

        [HttpPost]
        public IActionResult Convert()
        {
            var reader = new StreamReader(Request.Body);
            var markdown = reader.ReadToEnd();
            var result = engine.Markup(markdown);
            return Content(result);
        }
    }
}
After executing dotnet run, you should have a web server running on http://localhost:5000. But if you navigate to this URL with a browser, you’ll get a 404. That’s because in listing 7.6 you only created an HttpPost method. There’s no HttpGet method. In order to test the service, you need to be able to send a POST with some Markdown text in it.
The quickest way to do this is with Curl. Curl is a command-line tool that you’ll find very useful when developing web services and applications. It handles many more protocols than HTTP and HTTPS.
Curl is available on all platforms. Visit https://curl.haxx.se/download.html to download the version for your OS.
For our purposes, you’ll create an HTTP POST with the body contents taken from a file. First, create a file, such as test.md, with some Markdown text in it. Then execute a curl command like this one:
curl -X POST --data-binary @test.md http://localhost:5000

TIP: Use --data-binary instead of -d to preserve newlines.
If all goes correctly, the generated HTML should be printed on the command line. Curl made it possible to test your web service before writing the client code.
Now that you have a working service, let’s look at how a client can make requests to web services in .NET Core.
You’ll use the Markdown Lite service created in the previous section to test with, so leave it running and open another terminal. Go to the MarkdownLiteTest folder created earlier. Add a test.md file to this folder with some sample Markdown (or copy the file you used in the previous section). To make this file available while running the MarkdownLiteTest application, you’ll need to copy it to the output folder, as follows.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <None Include="test.md">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>
</Project>
Next, write the code that will POST data to an HTTP endpoint. The best option for this in .NET Core is HttpClient. Modify the Program.cs file to add the code from listing 7.8.
.NET Framework veterans may remember WebClient, which was originally not included in .NET Core because HttpClient is a better option. Developers asked for WebClient to be included because not all old WebClient code can be ported to HttpClient easily. But when writing new code, stick with HttpClient.
using System;
using System.IO;
using System.Net.Http;

namespace MarkdownLiteTest
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var client = new HttpClient();
            var response = client.PostAsync(
                "http://localhost:5000",
                new StreamContent(
                    new FileStream("test.md", FileMode.Open))
                ).Result;
            string markdown = response.Content.
                ReadAsStringAsync().Result;
            Console.WriteLine(markdown);
        }
    }
}
The StreamContent object inherits from HttpContent. You can provide any stream to StreamContent, which means you don’t have to keep the full content of the POST in memory. The PostAsync method is also nice if you don’t want to block the thread while waiting for the POST to complete.
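Any readable stream works, not just a FileStream. A quick self-contained sketch with an in-memory stream (the helper name here is invented for illustration):

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text;

public static class StreamContentDemo
{
    // Wrap arbitrary bytes in a StreamContent and read them back as text.
    public static string RoundTrip(string text)
    {
        var bytes = Encoding.UTF8.GetBytes(text);
        using (var content = new StreamContent(new MemoryStream(bytes)))
        {
            // Reading the content drains the underlying stream.
            return content.ReadAsStringAsync().Result;
        }
    }

    public static void Main()
    {
        Console.WriteLine(RoundTrip("# Hello"));
    }
}
```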
In this example, you didn’t take advantage of the async features of .NET, but to build high-performance microservice applications, you need to understand how to use those features.
In listing 7.8 you explicitly call .Result on the returned values of two async methods: PostAsync and ReadAsStringAsync. These methods return Task objects. Your client doesn’t need to be asynchronous because it’s only doing one thing. It doesn’t matter if you block the main thread, because there’s nothing else that needs to happen.
Services, in contrast, can’t afford to tie up threads waiting for something. Let’s take a closer look at the service code that converts the posted Markdown to HTML in the next listing.
[HttpPost]
public IActionResult Convert()
{
    var reader = new StreamReader(Request.Body);
    var markdown = reader.ReadToEnd();
    var result = engine.Markup(markdown);
    return Content(result);
}
The problem with blocking the thread to read the incoming HTTP request is that the client may not be executing as quickly as you think. If the client has a slow upload speed or is malicious, it could take minutes to upload all the data. Meanwhile, the service has a whole thread stuck on this client. Add enough of these clients, and soon you’ll run out of available threads or memory.
The solution to this problem is to rely on two powerful C# constructs called async and await. The following listing shows how you could rewrite the Convert method to be asynchronous.
[HttpPost]
public async Task<IActionResult> Convert()
{
    using (var reader = new StreamReader(Request.Body))
    {
        var markdown = await reader.ReadToEndAsync();
        var result = engine.Markup(markdown);
        return Content(result);
    }
}
The async/await constructs are a bit of compiler magic that make asynchronous code much easier to write. The await signals a point in the method where the code will need to wait for something. The C# compiler will split the Convert method into two methods, with the second being invoked when the awaited item is finished. This all happens behind the scenes, but if you’re curious about how it works, try viewing the IL (the .NET Intermediate Language—the stuff inside a .NET DLL) generated for async methods in the ILDASM tool that comes with Visual Studio.
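Conceptually, the compiler turns the code after the await into a continuation that runs when the awaited task completes. A rough hand-written equivalent is sketched below; this is a simplification (the real state machine also handles exceptions and synchronization context), and the `Markup` stand-in replaces the Markdown Lite engine:

```csharp
using System;
using System.Threading.Tasks;

public static class ContinuationSketch
{
    // Roughly what the compiler generates for:
    //   var markdown = await readBody; return Markup(markdown);
    public static Task<string> ConvertSketch(Task<string> readBody) =>
        readBody.ContinueWith(t =>
        {
            var markdown = t.Result;   // task already completed here, no blocking
            return Markup(markdown);   // the "second half" of the method
        });

    // Trivial stand-in for engine.Markup.
    private static string Markup(string md) => "<p>" + md + "</p>";

    public static void Main() =>
        Console.WriteLine(ConvertSketch(Task.FromResult("hello")).Result);
}
```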
Now if the client uploads its request content slowly, the only impact is that it will hold a socket open. The layers beneath your service code will gather the network I/O and buffer it until the request content length is reached. This means your service can handle more requests with fewer threads.
Writing asynchronous code becomes more important when your service depends on other services, which limit operations to the speed of the network. You’ll see an example of this in the next section.
Now that you’ve figured out how to convert Markdown to HTML, you can incorporate Azure Blob Storage for storing posts. Instead of posting data to the Markdown service, you’ll send it a BLOB name and have it return the converted HTML. You can do this by adding a GET method to your service.
Before going into that, though, you need to pull some values from configuration.
Your code uses the Microsoft.Extensions.Configuration library, which you learned about in chapter 6. You learned how to add a config.json file to your project, copy it to the build output, and add the dependency on the Configuration library. Do that now for this project, and consult chapter 6 if you need any tips.
In order to read the config, you’ll need to create an IConfigurationRoot object, as follows.
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.DocAsCode.MarkdownLite;

namespace MarkdownService
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();

            var builder = new GfmEngineBuilder(new Options());
            var engine = builder.CreateEngine(new HtmlRenderer());
            services.AddSingleton<IMarkdownEngine>(engine);

            var configBuilder = new ConfigurationBuilder();
            configBuilder.AddJsonFile("config.json", false);
            var configRoot = configBuilder.Build();
            services.AddSingleton<IConfigurationRoot>(configRoot);
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseMvc();
        }
    }
}
In listing 7.11 you didn’t introduce a fallback for the configuration. That’s why the config.json file isn’t optional.
You’ll need to read the config values in the MdlController class. The code for doing this is shown next.
using Microsoft.Extensions.Configuration;

public class MdlController : Controller
{
    private static readonly HttpClient client = new HttpClient();
    private readonly IMarkdownEngine engine;
    private readonly string AccountName;
    private readonly string AccountKey;
    private readonly string BlobEndpoint;
    private readonly string ServiceVersion;

    public MdlController(IMarkdownEngine engine,
        IConfigurationRoot configRoot)
    {
        this.engine = engine;
        AccountName = configRoot["AccountName"];
        AccountKey = configRoot["AccountKey"];
        BlobEndpoint = configRoot["BlobEndpoint"];
        ServiceVersion = configRoot["ServiceVersion"];
    }
The config.json file will have the four properties read in listing 7.12. The next listing shows an example config file.
{ "AccountName": "myaccount", "AccountKey": "<accountkey>", "BlobEndpoint": "https://myaccount.blob.core.windows.net/", "ServiceVersion": "2009-09-19" }
Don’t forget to modify the project file to copy config.json to the output folder as you did earlier with test.md.
If you’re using the Azure emulator, often referred to as development storage, use the configuration settings in the following listing.
{ "AccountName": "devstoreaccount1", "AccountKey": 1 "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/ K1SZFPTOtr/KBHBeksoGMGw==", "BlobEndpoint": "http://127.0.0.1:10000/devstoreaccount1/", "ServiceVersion": "2009-09-19" }
In the following listing, you expect the caller to pass in the container and BLOB names in the query string. The method makes a request to Azure Blob Storage to retrieve the Markdown content. Your code uses Markdown Lite to convert the result to HTML and sends the response to the caller. Add this code to the MdlController class.
using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;

[HttpGet]
public async Task<IActionResult> GetBlob(string container, string blob)
{
    var path = $"{container}/{blob}";
    var rfcDate = DateTime.UtcNow.ToString("R");
    var devStorage =
        BlobEndpoint.StartsWith("http://127.0.0.1:10000") ?
        $"/{AccountName}" : "";
    var signme = "GET\n\n\n\n\n\n\n\n\n\n\n\n" +
        "x-ms-blob-type:BlockBlob\n" +
        $"x-ms-date:{rfcDate}\n" +
        $"x-ms-version:{ServiceVersion}\n" +
        $"/{AccountName}{devStorage}/{path}";

    var uri = new Uri(BlobEndpoint + path);
    var request = new HttpRequestMessage(HttpMethod.Get, uri);
    request.Headers.Add("x-ms-blob-type", "BlockBlob");
    request.Headers.Add("x-ms-date", rfcDate);
    request.Headers.Add("x-ms-version", ServiceVersion);

    string signature = "";
    using (var sha = new HMACSHA256(
        System.Convert.FromBase64String(AccountKey)))
    {
        var data = Encoding.UTF8.GetBytes(signme);
        signature = System.Convert.ToBase64String(sha.ComputeHash(data));
    }
    var authHeader = $"SharedKey {AccountName}:{signature}";
    request.Headers.Add("Authorization", authHeader);

    var response = await client.SendAsync(request);
    var markdown = await response.Content.ReadAsStringAsync();
    var result = engine.Markup(markdown);
    return Content(result);
}
The code in listing 7.15 can seem overwhelming, so let’s break it down into manageable pieces. The first part is the method signature, shown in the next listing.
[HttpGet]
public async Task<IActionResult> GetBlob(string container, string blob)
The HttpGet attribute tells ASP.NET MVC that GetBlob receives client HTTP requests using the GET verb. The parameters of the method, container and blob, are expected to be passed from the client in the query string. For example, the client could make a GET request to http://localhost:5000?container=somecontainer&blob =test.md. MVC will extract the name/value pairs from the query string and match them to the method parameters.
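Under the covers, this matching is ordinary name/value parsing of the query string. A rough sketch of what MVC does is shown below; real model binding also handles type conversion, route values, and many edge cases, and the `Parse` helper is invented for illustration:

```csharp
using System;
using System.Collections.Generic;

public static class QueryDemo
{
    // Split "a=1&b=2" into name/value pairs, the way model binding
    // conceptually does before matching names to method parameters.
    public static Dictionary<string, string> Parse(string query)
    {
        var result = new Dictionary<string, string>();
        foreach (var pair in query.Split('&'))
        {
            var kv = pair.Split('=');
            result[Uri.UnescapeDataString(kv[0])] = Uri.UnescapeDataString(kv[1]);
        }
        return result;
    }

    public static void Main()
    {
        var q = Parse("container=somecontainer&blob=test.md");
        Console.WriteLine(q["container"] + " / " + q["blob"]);
    }
}
```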
Most of the code in GetBlob creates an HTTP request to send to Azure Blob Storage. You’ll need an Azure storage account to test this (Azure has a 30-day free trial if you don’t already have a subscription). There’s also an Azure storage emulator available as part of the Azure SDK, but it only works on Windows. Finally, there’s an open source, cross-platform Azure storage emulator called Azurite, which you can find at https://github.com/arafato/azurite.
The GET blob request is encapsulated in an HttpRequestMessage object. Put the code that creates that object into its own method, as shown in the next listing.
private HttpRequestMessage CreateRequest(
    HttpMethod verb, string container, string blob)
{
    var path = $"{container}/{blob}";
    var rfcDate = DateTime.UtcNow.ToString("R");
    var uri = new Uri(BlobEndpoint + path);
    var request = new HttpRequestMessage(verb, uri);
    request.Headers.Add("x-ms-blob-type", "BlockBlob");
    request.Headers.Add("x-ms-date", rfcDate);
    request.Headers.Add("x-ms-version", ServiceVersion);
    var authHeader = GetAuthHeader(
        verb.ToString().ToUpper(), path, rfcDate);
    request.Headers.Add("Authorization", authHeader);
    return request;
}
Although this chapter focuses on making requests to Azure Blob Storage, the same techniques apply to other HTTP services. You’ll be writing several operations against Azure Blob Storage in this chapter, so you’ll be able to reuse CreateRequest in other operations.
Azure BLOB containers have different levels of exposure. It’s possible to expose the contents publicly so that a request doesn’t need authentication. In this case, the container is private. The only way to access it is to use a shared key to create an authentication header in the request. In listing 7.17, the code for creating the authentication header is split into a separate method called GetAuthHeader. The code for GetAuthHeader is shown in the following listing.
private string GetAuthHeader(string verb, string path, string rfcDate)
{
    var devStorage =
        BlobEndpoint.StartsWith("http://127.0.0.1:10000") ?
        $"/{AccountName}" : "";
    var signme = $"{verb}\n\n\n\n\n\n\n\n\n\n\n\n" +
        "x-ms-blob-type:BlockBlob\n" +
        $"x-ms-date:{rfcDate}\n" +
        $"x-ms-version:{ServiceVersion}\n" +
        $"/{AccountName}{devStorage}/{path}";

    string signature;
    using (var sha = new HMACSHA256(
        System.Convert.FromBase64String(AccountKey)))
    {
        var data = Encoding.UTF8.GetBytes(signme);
        signature = System.Convert.ToBase64String(
            sha.ComputeHash(data));
    }
    return $"SharedKey {AccountName}:{signature}";
}
The aim of this method is to produce a hashed version of the request header. The server will perform the same hash and compare it against the value you sent. If they don’t match, it will report an error and tell you what content it hashed. This helps in case you’ve mistyped something.
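The hashing itself is plain HMAC-SHA256 over the UTF-8 bytes of the string-to-sign, keyed with the Base64-decoded account key. Isolated from the service, with a made-up key and an abbreviated string-to-sign, the mechanics look like this:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class SignatureDemo
{
    // Compute the shared-key signature for a string-to-sign.
    public static string Sign(string stringToSign, string base64Key)
    {
        using (var sha = new HMACSHA256(Convert.FromBase64String(base64Key)))
        {
            var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
            return Convert.ToBase64String(hash);
        }
    }

    public static void Main()
    {
        // Not a real account key; any Base64-encoded bytes work for the demo.
        var key = Convert.ToBase64String(Encoding.UTF8.GetBytes("not-a-real-key"));
        var sig = Sign("GET\n\nx-ms-date:...", key);
        // HMAC-SHA256 yields 32 bytes, which is always 44 Base64 characters.
        Console.WriteLine(sig.Length);
    }
}
```

Because the server recomputes the same HMAC over the same string, any difference in the string-to-sign (a stray space, a wrong date) produces a completely different signature.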
Authentication for Azure storage is covered in depth in “Authentication for the Azure Storage Services” at http://mng.bz/7j0B.
The previous helper methods have made the GetBlob method much shorter. The updated version is shown in the next listing.
[HttpGet]
public async Task<IActionResult> GetBlob(string container, string blob)
{
    var request = CreateRequest(HttpMethod.Get, container, blob);
    var response = await client.SendAsync(request);
    var markdown = await response.Content.ReadAsStringAsync();
    var result = engine.Markup(markdown);
    return Content(result);
}
The Markdown service now has a GET operation. The first step in testing it is to put a Markdown file in an Azure BLOB container. There are many tools for doing this, including the Azure portal. You’ll also need to get the account name and key from the Azure portal to populate the values in the config.json file.
Once the Markdown files are in place, you can make a request to the Markdown service with a console application. The following listing shows the contents of the Program.cs file in a console application that tests the new Azure storage operation.
using System;
using System.IO;
using System.Net.Http;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var client = new HttpClient();
            var response = client.GetAsync(
                "http://localhost:5000?container=somecontainer&blob=test.md")
                .Result;
            string markdown = response.Content.
                ReadAsStringAsync().Result;
            Console.WriteLine(markdown);
        }
    }
}
Alternatively, you can use the following curl command:
curl "http://localhost:5000?container=somecontainer&blob=test.md"
The quotations around the URL in listing 7.20 are necessary for Windows. The & symbol has a special meaning in Windows command-line scripting.
Your Markdown service isn’t technically a microservice. One of the key principles of a microservice is that it has its own isolated data source. In the previous section, you added BLOBs to the Azure storage account either through the Azure portal or an external tool.
In order to isolate the data source for the Markdown service, you’ll need to add methods to upload new BLOBs and change existing BLOBs. To achieve this, you’ll add a PUT operation, as in the following listing.
[HttpPut("{container}/{blob}")]
public async Task<IActionResult> PutBlob(string container, string blob)
{
    var contentLen = this.Request.ContentLength;
    var request = CreateRequest(HttpMethod.Put,
        container, blob, contentLen);
    request.Content = new StreamContent(this.Request.Body);
    request.Content.Headers.Add("Content-Length",
        contentLen.ToString());
    var response = await client.SendAsync(request);
    if (response.StatusCode == HttpStatusCode.Created)
        return Created(
            $"{AccountName}/{container}/{blob}", null);
    else
        return Content(await response.Content.ReadAsStringAsync());
}
In the PutBlob method, you’re essentially taking a PUT request and creating your own request with the right authorization header for Azure Blob Storage. In a production service, you wouldn’t expose a secure resource through an insecure one—securing services with ASP.NET Core is a deep subject that you can read about in ASP.NET Core in Action by Andrew Lock (Manning, 2018). The purpose of this example is to explore how PUT operations work.
An HTTP PUT operation is considered idempotent, which means that no matter how many times you call it, it will result in the same outcome. If you PUT the same BLOB multiple times, each call will return a 201—a duplicate call won’t result in adverse effects. Contrast this with POST, which isn’t idempotent. If you perform a POST and it times out, the state of the resource is unknown, and you’d need to make a GET call to verify the state of the resource before retrying the POST. In the Markdown service, you use POST only for an operation that doesn’t save data.
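The distinction is easy to see with an in-memory stand-in: a PUT-like "set" can be repeated safely, while a POST-like "append" cannot. The store and method names below are invented purely for illustration:

```csharp
using System;
using System.Collections.Generic;

public static class IdempotencyDemo
{
    // PUT-like: assign a value to a key; repeating it changes nothing further.
    public static void Put(IDictionary<string, string> store, string key, string val)
        => store[key] = val;

    // POST-like: append an entry; repeating it creates duplicates.
    public static void Post(IList<string> log, string entry) => log.Add(entry);

    public static void Main()
    {
        var store = new Dictionary<string, string>();
        Put(store, "foo.md", "# Hi");
        Put(store, "foo.md", "# Hi");   // retrying the PUT is harmless

        var log = new List<string>();
        Post(log, "entry");
        Post(log, "entry");             // retrying the POST duplicated the entry

        Console.WriteLine($"{store.Count} {log.Count}");
    }
}
```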
The content of the Markdown file that the client is requesting to store in your service is in the body of the request. You can get a Stream with the content data directly from this.Request.Body. Rather than measure the length of the content yourself, you get it from the incoming request using this.Request.ContentLength. The content length is a required header for PUT operations to Azure Blob Storage, but you’ll notice that it’s added to Request.Content.Headers instead of Request.Headers. Content headers include things like length, type, and encoding. This is probably because these headers are special and are indicated by position rather than name. To see what I mean by that, look at how the authentication header is created in the next listing.
private string GetAuthHeader(string verb, string path,
    string rfcDate, long? contentLen)
{
    var devStorage =
        BlobEndpoint.StartsWith("http://127.0.0.1:10000") ?
        $"/{AccountName}" : "";
    var signme = $"{verb}\n\n\n{contentLen}\n\n\n\n\n\n\n\n\n" +
        "x-ms-blob-type:BlockBlob\n" +
        $"x-ms-date:{rfcDate}\n" +
        $"x-ms-version:{ServiceVersion}\n" +
        $"/{AccountName}{devStorage}/{path}";

    string signature;
    using (var sha = new HMACSHA256(System.Convert.FromBase64String(AccountKey)))
    {
        var data = Encoding.UTF8.GetBytes(signme);
        signature = System.Convert.ToBase64String(sha.ComputeHash(data));
    }
    return $"SharedKey {AccountName}:{signature}";
}
For a PUT operation against Azure Blob Storage, only the content length is required. It goes three lines after the verb.
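You can check the layout by splitting a sample string-to-sign on newlines. The values below are invented; only the line positions matter:

```csharp
using System;

public static class StringToSignDemo
{
    // Build a sample string-to-sign and split it into its lines.
    public static string[] SampleLines()
    {
        var signme = "PUT\n\n\n42\n\n\n\n\n\n\n\n\n" +
            "x-ms-blob-type:BlockBlob\n" +
            "x-ms-date:Tue, 01 Jan 2019 00:00:00 GMT\n" +
            "x-ms-version:2009-09-19\n" +
            "/myaccount/somecontainer/foo.md";
        return signme.Split('\n');
    }

    public static void Main()
    {
        var lines = SampleLines();
        // Line 0 is the verb; the content length sits three lines later.
        Console.WriteLine(lines[0]);
        Console.WriteLine(lines[3]);
    }
}
```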
Because contentLen is a nullable long, nothing will be written if it’s null. If you used a regular long value type, contentLen would have some default value (like 0), and that value would get written into the signme string. Using the nullable long means you don’t have to do anything special for GET versus PUT requests. The CreateRequest helper method needs to provide a default null value, as shown in the following listing.
private HttpRequestMessage CreateRequest(HttpMethod verb,
    string container, string blob,
    long? contentLen = default(long?))
{
    ...
    var authHeader = GetAuthHeader(verb.ToString().ToUpper(),
        path, rfcDate, contentLen);
    request.Headers.Add("Authorization", authHeader);
    return request;
}
Default parameters are a handy C# feature. They must go at the end of the parameter list, and they’re specified by assigning a default value with =. The default() operator produces the default value of a type; for nullable types, like long?, that default is null.
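Both behaviors are easy to verify in isolation: interpolating a null long? writes nothing, and a defaulted long? parameter arrives as null when omitted. The `Describe` helper below is invented for the demonstration:

```csharp
using System;

public static class NullableDemo
{
    // A null long? interpolates as an empty string; a value prints normally.
    public static string Describe(long? contentLen = default(long?)) =>
        $"[{contentLen}]";

    public static void Main()
    {
        Console.WriteLine(Describe());     // parameter omitted, so null
        Console.WriteLine(Describe(42));
    }
}
```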
To test this new method in the Markdown service, you can use the same code and curl commands as in the code snippet in section 7.1.3, earlier in the chapter. Simply change POST to PUT and modify the URL to include the container and BLOB name. Listings 7.24 and 7.25 show how to do this.
curl -X PUT --data-binary @test.md http://localhost:5000/somecontainer/foo.md
var response = client.PutAsync(
    "http://localhost:5000/somecontainer/foo.md",
    new StreamContent(
        new FileStream("test.md", FileMode.Open))
    ).Result;
Now that you have the ability to upload BLOBs to containers, you should expose a way for clients to get the list of containers and of BLOBs in the containers. The most straightforward way is to modify the HttpGet operation to allow null values for BLOB or container. A null BLOB parameter would indicate that the client wants a list of all BLOBs in the container. A null container parameter would indicate that they want a list of all containers.
Azure Blob Storage supports list requests, returning the lists in XML documents. Up until now, you haven’t specified a content type for the response. The default content type from ASP.NET is “text/html”, which is perfect for a response that’s Markdown converted to HTML. In this example, you’ll return the result of the Azure storage call. The following listing shows the modifications to support returning XML.
[HttpGet]
public async Task<IActionResult> GetBlob(string container, string blob)
{
    var request = CreateRequest(HttpMethod.Get, container, blob);
    var contentType = blob == null ?
        "text/xml" :
        "text/html";
    var response = await client.SendAsync(request);
    var responseContent = await response.Content.ReadAsStringAsync();
    if (blob != null)
        responseContent = engine.Markup(responseContent);
    return Content(responseContent, contentType);
}
Making a GET request to the service with the BLOB or container parameter not specified will result in null values being passed into the GetBlob method. To request a list of BLOBs in the “somecontainer” container, you’d use the URL http://localhost:5000?container=somecontainer. To get a list of all the containers, you’d use http://localhost:5000.
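When the service passes the raw XML through, a client can pick out the names with LINQ to XML. The sample response below is abbreviated and hand-written; the real EnumerationResults document carries more elements and attributes than shown here:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public static class ListParseDemo
{
    // Extract the <Name> values from an abbreviated list-containers response.
    public static List<string> Names(string xml) =>
        XDocument.Parse(xml)
            .Descendants("Name")
            .Select(n => n.Value)
            .ToList();

    public static void Main()
    {
        var xml = "<EnumerationResults><Containers>" +
            "<Container><Name>somecontainer</Name></Container>" +
            "<Container><Name>images</Name></Container>" +
            "</Containers></EnumerationResults>";
        Console.WriteLine(string.Join(",", Names(xml)));
    }
}
```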
A list request to Azure Blob Storage is slightly different than the GET requests you’ve made so far. The following listing shows the updates to the helper methods for listing BLOBs and containers.
private HttpRequestMessage CreateRequest(HttpMethod verb,
    string container, string blob,
    long? contentLen = default(long?))
{
    string path;
    Uri uri;
    if (blob != null)
    {
        path = $"{container}/{blob}";
        uri = new Uri(BlobEndpoint + path);
    }
    else if (container != null)
    {
        path = container;
        uri = new Uri($"{BlobEndpoint}{path}?restype=container&comp=list");
    }
    else
    {
        path = "";
        uri = new Uri($"{BlobEndpoint}?comp=list");
    }

    var rfcDate = DateTime.UtcNow.ToString("R");
    var request = new HttpRequestMessage(verb, uri);
    if (blob != null)
        request.Headers.Add("x-ms-blob-type", "BlockBlob");
    request.Headers.Add("x-ms-date", rfcDate);
    request.Headers.Add("x-ms-version", ServiceVersion);
    var authHeader = GetAuthHeader(verb.ToString().ToUpper(),
        path, rfcDate, contentLen, blob == null, container == null);
    request.Headers.Add("Authorization", authHeader);
    return request;
}

private string GetAuthHeader(string verb, string path,
    string rfcDate, long? contentLen,
    bool listBlob, bool listContainer)
{
    var devStorage =
        BlobEndpoint.StartsWith("http://127.0.0.1:10000") ?
        $"/{AccountName}" : "";
    var signme = $"{verb}\n\n\n{contentLen}\n\n\n\n\n\n\n\n\n" +
        (listBlob ? "" : "x-ms-blob-type:BlockBlob\n") +
        $"x-ms-date:{rfcDate}\n" +
        $"x-ms-version:{ServiceVersion}\n" +
        $"/{AccountName}{devStorage}/{path}";
    if (listContainer)
        signme += "\ncomp:list";
    else if (listBlob)
        signme += "\ncomp:list\nrestype:container";

    string signature;
    using (var sha = new HMACSHA256(System.Convert.FromBase64String(AccountKey)))
    {
        var data = Encoding.UTF8.GetBytes(signme);
        signature = System.Convert.ToBase64String(sha.ComputeHash(data));
    }
    return $"SharedKey {AccountName}:{signature}";
}
To round out the functionality of the Markdown service, you’ll add the ability to delete a BLOB from a container. A request with a DELETE verb has a structure similar to a GET request. The only real consideration is what status code to return.
When you issue a delete BLOB command, Azure Blob Storage returns a 202 (Accepted) status code, because the BLOB immediately becomes unavailable but isn’t deleted until garbage collection runs. This is in line with RFC 2616, the HTTP/1.1 specification:
A successful response SHOULD be 200 (OK) if the response includes an entity describing the status, 202 (Accepted) if the action has not yet been enacted, or 204 (No Content) if the action has been enacted but the response does not include an entity.
For the Markdown service, the BLOB is essentially deleted. You won’t return the value of the BLOB in the response, so a 204 (No Content) seems more appropriate. The following listing shows how to write the delete operation.
[HttpDelete]
public async Task<IActionResult> DeleteBlob(string container, string blob)
{
    var request = CreateRequest(HttpMethod.Delete,
        container, blob);
    var response = await client.SendAsync(request);
    if (response.StatusCode == HttpStatusCode.Accepted)
        return NoContent();
    else
        return Content(await response.Content.ReadAsStringAsync());
}
With the HttpDelete operation added, your service now handles the GET, PUT, POST, and DELETE HTTP verbs. The only verb we won’t cover is PATCH ([HttpPatch]), which is used for partial modification of a record. Azure Blob Storage doesn’t support PATCH, so it doesn’t apply to this example.
In this chapter you learned how to write a microservice and how to communicate with other HTTP services as a client.
Much of modern programming involves writing and communicating with HTTP services. ASP.NET Core makes writing HTTP REST services quick and intuitive by using a convention-based approach. Methods like Content, Created, Accepted, and the like match the HTTP specifications. Routing requests to the right methods is handled via the Http* attributes, and accessing parameters from the URI or query string doesn’t require manual parsing.
Making HTTP requests from .NET Core code is also straightforward. The HttpClient class offers useful helper methods. In this chapter, you used HttpClient to communicate with Azure storage. For .NET Framework developers used to having the Azure SDK, contacting the HTTP services directly can seem daunting. But once you understand how to authenticate, it’s easy.