In this chapter, you’ll first learn why it’s sometimes desirable to define lazy computations, functions that may or may not be evaluated. You’ll then see how these functions can be composed with other functions independently of their evaluation.
Once you’ve got your feet wet with lazy computations, which are just plain functions, you’ll see how the same techniques can be extended to computations that have some useful effect other than laziness. Namely, you’ll learn how to use the Try delegate to safely run code that may throw an exception and how to compose several Trys. You’ll then learn how to compose functions that take a callback without ending up in callback hell.
What holds all these techniques together is that, in all cases, you’re treating functions as things that have certain specific characteristics, and you can compose them independently of their execution. This requires a leap in abstraction, but the result is quite powerful.
NOTE The contents of this chapter are challenging, so don’t be discouraged if you don’t get it all on your first reading.
Laziness in computing means deferring a computation until its result is needed. This is beneficial when the computation is expensive, and its result may not be needed.
To introduce the idea of laziness, consider the following example of a method that randomly picks one of two given elements. You can try it out in the REPL:
var rand = new Random();
T Pick&lt;T&gt;(T l, T r) => rand.NextDouble() &lt; 0.5 ? l : r;

Pick(1 + 2, 3 + 4) // => 3, or 7
The interesting thing to point out here is that when you invoke Pick, both the expressions 1 + 2 and 3 + 4 are evaluated, even though only one of them is needed in the end.1 So, the program is performing some unnecessary computation. This is suboptimal and should be avoided if the computation is expensive enough. To prevent this, we could rewrite Pick to take not two values but two lazy computations instead; that is, functions that can produce the required values:
T Pick&lt;T&gt;(Func&lt;T&gt; l, Func&lt;T&gt; r)
   => (rand.NextDouble() &lt; 0.5 ? l : r)();

Pick(() => 1 + 2, () => 3 + 4) // => 3, or 7
Pick now first chooses between the two functions and then evaluates one of them. As a result, only one computation is performed.
In summary, if you’re not sure whether a value will be required and it may be expensive to compute it, pass the value lazily by wrapping it in a function that computes the value.
NOTE Integer addition is an extremely fast operation, so in this particular example, the cost of allocating the two lambdas outweighs the benefit of making the computations lazy. This technique is only justified for operations that are computationally intensive or that perform I/O.
Next, you’ll see how such a lazy API can be beneficial when working with Option.
The Option API provides a couple of examples that nicely illustrate how laziness can be useful. Let’s look at those.
Imagine you have an operation that returns an Option, and you want to provide a fallback—another Option-producing operation to use if the first operation returns None. Combining two such Option-returning functions in this way is a common scenario and is achieved through the OrElse function, which is defined as follows:
public static Option&lt;T&gt; OrElse&lt;T&gt;
   (this Option&lt;T&gt; left, Option&lt;T&gt; right)
   => left.Match
   (
      () => right,
      (_) => left
   );
OrElse simply yields the left Option if it’s Some; otherwise, it falls back to the right Option. For example, say you define a repository that looks items up in a cache and, failing that, goes to the DB:
interface IRepository&lt;T&gt; { Option&lt;T&gt; Lookup(Guid id); }

class CachingRepository&lt;T&gt; : IRepository&lt;T&gt;
{
   IDictionary&lt;Guid, T&gt; cache;
   IRepository&lt;T&gt; db;

   public Option&lt;T&gt; Lookup(Guid id)
      => cache.Lookup(id).OrElse(db.Lookup(id));
}
Can you see the problem in the preceding code? Because OrElse is always called, its argument is always evaluated, meaning that you’re hitting the DB even if the item is found in the cache. This defeats the purpose of the cache altogether!
This can be solved by using laziness. For such scenarios, I’ve defined an overload of OrElse taking not a fallback Option but a function that will be evaluated, if necessary, to produce the fallback Option:
public static Option&lt;T&gt; OrElse&lt;T&gt;
   (this Option&lt;T&gt; opt, Func&lt;Option&lt;T&gt;&gt; fallback)
   => opt.Match
   (
      None: fallback, ❶
      Some: _ => opt
   );
❶ Only evaluates the fallback function in the None case
In this implementation, the fallback function will only be evaluated if opt is None. (Compare this to the previously shown overload, where the fallback option right is always evaluated.) You can accordingly fix the implementation of the caching repository as follows:
Now, if the cache lookup returns Some, OrElse is still called, but not db.Lookup, achieving the desired behavior.
As you can see, to make the evaluation of an expression lazy, instead of providing the expression itself, you provide a function that, when called, will evaluate that expression. Instead of a T, provide a Func&lt;T&gt;.
A similar scenario is when you want to extract the inner value from an Option, providing a fallback value in case it’s None. This operation is called GetOrElse. For instance, you may need to look up a value from configuration and use a default value if no value is specified:
string DefaultApiRoot => "localhost:8000";

string GetApiRoot(IConfigurationRoot config)
   => config.Lookup("ApiRoot").GetOrElse(DefaultApiRoot);
Assume that Lookup returns a duly populated Option whose state depends on whether the value was specified in configuration. Notice that the DefaultApiRoot property is evaluated regardless of the state of the Option.
In this case, that’s OK because it simply returns a constant value. But if DefaultApiRoot involved an expensive computation, you’d prefer to only perform it if needed by passing the default value lazily. This is why I’ve provided two overloads of GetOrElse:
public static T GetOrElse&lt;T&gt;(this Option&lt;T&gt; opt, T defaultValue)
   => opt.Match
   (
      () => defaultValue,
      (t) => t
   );

public static T GetOrElse&lt;T&gt;(this Option&lt;T&gt; opt, Func&lt;T&gt; fallback)
   => opt.Match
   (
      () => fallback(),
      (t) => t
   );
The first overload takes a regular fallback value, T, which is evaluated when GetOrElse is called. The second overload takes a Func&lt;T&gt;, a function that is evaluated only when necessary.
In the rest of this chapter, you’ll see how lazy computations can be composed and why doing so is a powerful technique. We’ll start with the plain-vanilla lazy computation, Func&lt;T&gt;, and then move on to lazy computations that include some useful effect, such as handling errors or state.
You saw that Func&lt;T&gt; is a lazy computation that can be invoked to obtain a T. It turns out that Func&lt;T&gt; can be treated as a functor over T. Remember, a functor is something that has an inner value over which you can Map a function. How is that possible? The functors you’ve seen so far are all containers of some sort. How can a function possibly be a container, and what’s its inner value?
Well, you can think of a function as containing its potential result. If, say, Option&lt;T&gt; “maybe-contains” some value of type T, you can say that Func&lt;T&gt; “potentially-contains” some value of type T or, perhaps more accurately, contains the potential to produce a value of type T. A function’s inner value is the value it yields when it’s evaluated.
You may know the tale of Aladdin’s magic lamp. When rubbed, it would produce a powerful genie. Clearly, such a lamp could have the power to contain anything: put a genie into it, and you can rub it to get the genie back out; put your grandma in it, and you can rub it to get grandma back. And you can think of it as a functor: map a “turn blue” function onto the lamp and, when you rub the lamp, you’ll get the contents of the lamp turned blue. Func&lt;T&gt; is such a container, where rubbing is function invocation.
In reality, you know that a functor must expose a Map method with a suitable signature. If you follow the functor pattern (see section 6.1.4), the signature of Map for Func&lt;T&gt; will involve:

- An input functor of type () → T, a function that can be called to generate a T. Let’s call it f.
- A function to be mapped, of type T → R. Let’s call it g.
- An expected result of type () → R, a function that can be called to generate an R.
The implementation is quite simple: invoke f to obtain a T, and then pass it to g to obtain an R, as figure 14.1 illustrates. Listing 14.1 shows the corresponding code.
Notice that Map doesn’t invoke f. It takes a lazily evaluated T and returns a lazily evaluated R. Also notice that the implementation is just function composition.
To see this in action, open the REPL, import LaYumba.Functional as usual, and type the following:
var lazyGrandma = () => "grandma";
var turnBlue = (string s) => $"blue {s}";

var lazyGrandmaBlue = lazyGrandma.Map(turnBlue);

lazyGrandmaBlue() // => "blue grandma"
To better understand the laziness of the whole computation, you can bake in some debug statements:
var lazyGrandma = () =>
{
   WriteLine("getting grandma...");
   return "grandma";
};

var turnBlue = (string s) =>
{
   WriteLine("turning blue...");
   return $"blue {s}";
};

var lazyGrandmaBlue = lazyGrandma.Map(turnBlue); ❶

lazyGrandmaBlue() ❷
// prints: getting grandma...
//         turning blue...
// => "blue grandma"
❶ None of the functions are evaluated yet.
❷ All previously composed functions are evaluated now.
As you can see, the functions lazyGrandma and turnBlue aren’t invoked until the last line. This shows that you can build up complex logic without executing anything until you decide to fire things off.
Once you’ve thoroughly understood the preceding examples, experimented in the REPL, and understood the definition of Map, it will be easy to understand the definition of Bind shown in the following listing.
Bind returns a function that, when evaluated, will evaluate f to get a T, apply g to it to get a Func&lt;R&gt;, and evaluate that function to get the resulting R.
This is all very interesting, but how useful is it exactly? Because functions are already built into the language, being able to treat Func as a monad might not give you a lot. On the other hand, knowing that functions can be composed like any other monad, we can bake some interesting effects into how those functions behave. This is what the rest of this chapter is about.
In chapter 8, I showed how you could go from an Exception-based API to a functional one by catching exceptions and returning them in an Exceptional—a structure that can hold either an exception or a successful result. For instance, if you want to safely create a Uri from a string, you could write a method as follows:
Exceptional&lt;Uri&gt; CreateUri(string uri)
{
   try { return new Uri(uri); }
   catch (Exception ex) { return ex; }
}
This works, but should you do this for every method that can throw an exception? Surely, after a couple of times, you’ll start to feel that all this trying and catching is boilerplate. Can we abstract it away?
Indeed we can, with Try—a delegate representing an operation that may throw an exception. It’s defined as follows:
Try&lt;T&gt; is simply a delegate you can use to represent a computation that normally returns a T but may throw an exception instead; hence, its return value is wrapped in an Exceptional.
Defining Try as a separate type allows you to define extension methods specific to Try—most importantly, Run, which safely invokes the Try and returns a suitably populated Exceptional:
public static Exceptional&lt;T&gt; Run&lt;T&gt;(this Try&lt;T&gt; f)
{
   try { return f(); }
   catch (Exception ex) { return ex; }
}
Run does the try-catch ceremony once and for all, so you never have to write a try-catch statement again. Refactoring the previous CreateUri method to use Try, you can write:
Notice how Try enables you to define CreateUri without any boilerplate code for handling exceptions, and yet, you can still execute CreateUri safely by using Run to invoke it. Test it for yourself by typing the following into the REPL:
Try&lt;Uri&gt; CreateUri(string uri) => () => new Uri(uri);

CreateUri("http://github.com").Run()
// => Success(http://github.com/)

CreateUri("rubbish").Run()
// => Exception(Invalid URI: The format of the URI could not be...)
Also notice that the body of CreateUri returns a Uri, but Try&lt;Uri&gt; is defined to return an Exceptional&lt;Uri&gt;. This is fine because I’ve defined an implicit conversion from T to Exceptional&lt;T&gt;. Here again, the details of error handling have been abstracted away, so you can concentrate on the code that matters.
As a shorthand notation, if you didn’t want to define CreateUri as a dedicated function, you could use the Try function (defined in F), which simply transforms a Func&lt;T&gt; into a Try&lt;T&gt;:
Now comes the interesting part—the reason why it’s important that you can compose lazy computations. If you have two (or more) computations that may fail, you can “monadically” compose them into a single computation that may fail by using Bind. For example, imagine you have a string representing an object in JSON format with the following structure:
You want to define a method that creates a Uri from the value in the “Uri” field of the JSON object. The following listing shows an unsafe way to do this.
using System.Text.Json;

record Website(string Name, string Uri);

Uri ExtractUri(string json)
{
   var website = JsonSerializer.Deserialize&lt;Website&gt;(json); ❶
   return new Uri(website.Uri); ❷
}
❶ Deserializes the string into a Website
❷ Creates a Uri from the website’s Uri value
Both JsonSerializer.Deserialize and the Uri constructor throw an exception if their input isn’t well formed.
Let’s use Try to make the implementation safe. We can start by wrapping the method calls that can throw an exception in a Try as follows:
Try&lt;Uri&gt; CreateUri(string uri) => () => new Uri(uri);

Try&lt;T&gt; Parse&lt;T&gt;(string s) => () => JsonSerializer.Deserialize&lt;T&gt;(s);
As usual, the way to compose several operations that return a Try is with Bind. We’ll look at its definition in a moment. For now, trust that it works, and let’s use it to define a method that combines the two preceding operations into another Try-returning function:
This works, but it’s not particularly readable. The LaYumba.Functional library includes the implementation of the LINQ query pattern (see the sidebar “A reminder on the LINQ query pattern”) for Try and all other included monads, so we can improve readability by using a LINQ expression instead, as the following listing demonstrates.
Try&lt;Uri&gt; ExtractUri(string json) =>
   from website in Parse&lt;Website&gt;(json) ❶
   from uri in CreateUri(website.Uri) ❷
   select uri;
❶ Deserializes the string into a Website
❷ Creates a Uri from the website’s Uri value
Listing 14.4 is the safe counterpart to the unsafe code in listing 14.3. You can see that we could make this refactoring without compromising on readability. Let’s feed a few sample values to ExtractUri to see that it works as intended:
ExtractUri( @"{ ""Name"":""Github"", ""Uri"":""http://github.com"" }").Run()
// => Success(http://github.com/)

ExtractUri("blah!").Run()
// => Exception('b' is an invalid start of a value...)

ExtractUri("{}").Run()
// => Exception(Value cannot be null...)

ExtractUri( @"{ ""Name"":""Github"", ""Uri"":""rubbish"" }").Run()
// => Exception(Invalid URI: The format of the URI...)
Remember, everything happens lazily. When you call ExtractUri, you just get a Try that can eventually perform some computation. Nothing really happens until you call Run.
Now that you’ve seen how to use Bind to compose several computations that may fail, let’s look under the hood and see how Bind is defined for Try.
Remember that a Try&lt;T&gt; is just like a Func&lt;T&gt; for which we now know that invocation may throw an exception. Let’s start by quickly looking at Bind for Func again:
A cavalier way of describing this code is that it first invokes f and then g. Now we need to adapt this to work with Try. First, replacing Func with Try gives us the correct signature. (This is often half of the work because for the core functions, if the implementation type checks, it usually works.) Second, because invoking a Try directly may throw an exception, we need to use Run instead. Finally, we don’t want to run the second function if the first function fails. The following listing shows the implementation.
public static Try&lt;R&gt; Bind&lt;T, R&gt;
   (this Try&lt;T&gt; f, Func&lt;T, Try&lt;R&gt;&gt; g)
   => () => f.Run() ❶
      .Match
      (
         Exception: ex => ex, ❷
         Success: t => g(t).Run() ❶
      );
❶ Uses Run to safely execute each Try
❷ If the first Try fails, doesn’t execute the second one
Bind takes a Try and a Try-returning function g. It then returns a function that, when invoked, runs the Try and, if it succeeds, runs g on the result to obtain another Try, which is also run.
If we can define Bind, we can always define Map, which is usually simpler. I suggest you define Map as an exercise.
In this chapter and the next, you’ll often read about monadically composing computations. That sounds complicated, but it really isn’t, so let’s take the mystery out of it.
First, let’s recap “normal” function composition, which I covered in chapter 7. Suppose you have two functions:
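The signatures referred to aren’t shown in this text; presumably, in arrow notation, they are:

```
f : A → B
g : B → C
```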
You can compose them by simply piping the output of f into g, obtaining a function A → C. Now imagine you have the following functions:
These functions obviously don’t compose because f' returns a Try&lt;B&gt;, whereas g' expects a B, but it’s fairly clear that you may want to combine them by extracting the B from the Try&lt;B&gt; and feeding it to g'. This is monadic composition, and it’s exactly what Bind for Try does, as you’ve seen.
In other words, monadic composition is a way to combine functions that’s more general than function composition and involves some logic dictating how the functions are composed. This logic is captured in the Bind function.
There are several variations on this pattern. Imagine the following functions:
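These signatures are likewise missing; from the discussion that follows, they must be of the form:

```
f" : A → (B, K)
g" : B → (C, K)
```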
Could we compose these into a new function of type A → (C, K)? Given an A, it’s easy to compute a C: run f" on the A, extract the B from the resulting tuple, and feed it to g". In the process, we’ve computed two Ks, so what should we do with them? If there’s a way to combine two Ks into a single K, then we could return the combined K. For example, if K is a list, we could return all elements from both lists. Functions in the preceding form can be monadically composed if K is of a suitable type.2
The functions for which I’ll demonstrate monadic composition in this book are listed in table 14.1, but there are many more possible variations.
In this section, I’ll start by showing how using HOFs in some cases leads to deeply nested callbacks, affectionately called “callback hell” or “the pyramid of doom.” I’ll use DB access as the specific scenario to illustrate this problem and show how you can leverage the LINQ query pattern to create flat, monadic workflows instead.
This section contains advanced material that isn’t required to understand coming chapters, so if this is your first reading, feel free to skip to chapter 15.
In section 2.3, you learned about functions that perform some setup and teardown and are parameterized with a function to be invoked in between. An example of this was a function that managed a DB connection, parameterized with a function that used the connection to interact with the DB:
public static class ConnectionHelper
{
   public static R Connect&lt;R&gt;
      (ConnectionString connString, Func&lt;SqlConnection, R&gt; f)
   {
      using var conn = new SqlConnection(connString);
      conn.Open();
      return f(conn);
   }
}
This function can be consumed in client code like so:
public void Log(LogMessage message) =>
   Connect(connString, c => c.Execute("sp_create_log"
      , message, commandType: CommandType.StoredProcedure));
Let’s define a similar function that can be used to log a message before and after an operation:
public static class Instrumentation
{
   public static T Trace&lt;T&gt;(ILogger log, string op, Func&lt;T&gt; f)
   {
      log.LogTrace($"Entering {op}");
      T t = f();
      log.LogTrace($"Leaving {op}");
      return t;
   }
}
If you want to use both functions (opening/closing a connection as well as tracing entering/leaving a block), you’d write something like the code in the following listing.
public void Log(LogMessage message) =>
   Instrumentation.Trace(log, "CreateLog" // log: an ILogger available in scope
      , () => ConnectionHelper.Connect(connString
         , c => c.Execute("sp_create_log"
            , message, commandType: CommandType.StoredProcedure)));
This is starting to become hard to read. What if you wanted some other setup work to be done as well? For every HOF you add, your callbacks are nested one level deeper, making the code harder to understand. That’s why it’s called “the pyramid of doom.”
Instead, what we’d ideally like to have is a clean way to compose a middleware pipeline, as figure 14.2 illustrates. We want to add some behavior (like connection management, diagnostics, and so on) to each trip to the DB. Conceptually, this is similar to the middleware pipeline for handling HTTP requests in ASP.NET Core.
In a normal, linear function pipeline, the output of each function is piped into the next function. Each function has no control over what happens downstream. A middleware pipeline, on the other hand, is U-shaped: each function passes some data along, but it also receives some data on the way out, so to speak. As a result, each function is able to perform some operations before and after the functions downstream.
I’m going to call each of these functions or blocks a middleware. We want to be able to nicely compose such middleware pipelines to add logging, timing, and so on. But, because each middleware must take a callback function as an input argument (otherwise, it can’t intervene after the callback has returned), how can we escape the pyramid of doom?
It turns out that one way we can look at Bind is as a recipe against the pyramid of doom. For instance, you may remember how in chapter 8, we used Bind to combine several Either-returning functions:
WakeUpEarly()
   .Bind(ShopForIngredients)
   .Bind(CookRecipe)
   .Match
   (
      Left: PlanB,
      Right: EnjoyTogether
   );
If you expand the calls to Bind, the preceding code looks like this:
WakeUpEarly().Match
(
   Left: PlanB,
   Right: u => ShopForIngredients(u).Match
   (
      Left: PlanB,
      Right: ingr => CookRecipe(ingr).Match
      (
         Left: PlanB,
         Right: EnjoyTogether
      )
   )
);
You can see that Bind effectively enables us to escape the pyramid of doom in this case; the same would apply to Option and so on. But can we define Bind for our middleware functions?
To answer this question, let’s look at the signatures of our middleware functions and see if there’s a pattern that we can identify and capture in an abstract way. These are the functions we’ve seen so far:
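The signatures in question aren’t reproduced here; taken from the HOFs shown earlier in this section, they are:

```
R Connect&lt;R&gt;(ConnectionString connString, Func&lt;SqlConnection, R&gt; f)
T Trace&lt;T&gt;(ILogger log, string op, Func&lt;T&gt; f)
```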
Let’s imagine a couple more examples where we might like to use middleware. We could use a timing middleware that logs how long an operation has taken and another middleware that begins and commits a DB transaction. The signatures would look like this:
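These signatures are also missing here; consistent with the implementations given later in the chapter, they would be:

```
T Time&lt;T&gt;(ILogger log, string op, Func&lt;T&gt; f)
R Transact&lt;R&gt;(SqlConnection conn, Func&lt;SqlTransaction, R&gt; f)
```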
Time has the same signature as Trace: it takes a logger, a string (the name of the operation that’s being timed), and the function being timed. Transact is similar to Connect, but it takes a connection that’s used to create a transaction and a function that consumes the transaction.
Now that we have four reasonable use cases, let’s see if there’s a pattern in the signatures:
Connect:     ConnectionString → (SqlConnection → R) → R
Trace/Time:  ILogger → string → (() → R) → R
Transact:    SqlConnection → (SqlTransaction → R) → R
Each function has some parameters that are specific to the functionality it exposes, but there’s definitely a pattern. If we abstract away these specific parameters (which we can provide with partial application) and concentrate only on the continuation argument and the return type, all functions have a signature in this form:
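The shape being referred to, in the arrow notation used above, is:

```
(T → R) → R
```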
They all take a callback function (although, in this context, it’s usually called a continuation) that produces an R, and they return an R (presumably, the very R returned by the continuation or a modified version of it). The essence of a middleware function is that it takes a continuation of type T → R, supplies a T to it to obtain an R, and returns an R, as figure 14.3 shows.
Let’s capture this essence with a delegate:
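The delegate definition itself is missing from this text; from the surrounding discussion (a continuation of type T → dynamic, returning dynamic), it must be:

```csharp
// A middleware block: supplies a T to the given continuation
// and returns the continuation's (dynamically typed) result.
public delegate dynamic Middleware<T>(Func<T, dynamic> cont);
```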
But wait. Why is it returning dynamic rather than R?
The problem is that T (the input to the continuation) and R (its output) are not known at the same time. For example, suppose you want to create a Middleware instance from a function such as Connect, which has this signature:
The continuation accepted by Connect takes a SqlConnection as input, so we can use Connect to define a Middleware&lt;SqlConnection&gt;. That means the T type variable in Middleware&lt;T&gt; resolves to SqlConnection, but we don’t yet know what the given continuation will yield, so we can’t yet resolve the R type variable in Connect&lt;R&gt;.
Unfortunately, C# doesn’t allow us to partially apply type variables; hence, dynamic. So although, conceptually, we’re thinking of combining HOFs of one type, we’re in fact modeling them as another:
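The two type shapes being contrasted can be sketched as:

```
// conceptually:
Func&lt;Func&lt;T, R&gt;, R&gt;

// actually modeled as:
Func&lt;Func&lt;T, dynamic&gt;, dynamic&gt;
```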
Later, you’ll see that you can still work with Middleware without compromising on type safety.
The interesting—and mind-bending—thing is that Middleware&lt;T&gt; is a monad over T, where (remember) T is the type of the input argument taken by the continuation that is given to the middleware function. This seems counterintuitive. A monad over T is usually something that contains a T or some Ts. But this still applies here: if a function has the signature (T → R) → R, then it can provide a T to the given function T → R, so it must contain or somehow be able to produce a T.
It’s time to learn how to combine two middleware blocks with Bind. Essentially, Bind attaches a downstream middleware block to a pipeline, as figure 14.4 shows.
The implementation of Bind is simple to write but not easy to fully grasp:
public static Middleware&lt;R&gt; Bind&lt;T, R&gt;
   (this Middleware&lt;T&gt; mw, Func&lt;T, Middleware&lt;R&gt;&gt; f)
   => cont => mw(t => f(t)(cont));
We have a Middleware&lt;T&gt; expecting a continuation of type T → dynamic. We then have a function, f, that takes a T and produces a Middleware&lt;R&gt;, expecting a continuation of type R → dynamic. What we get as a result is a Middleware&lt;R&gt; that, when supplied a continuation, cont, runs the initial middleware, giving it as continuation a function that runs the binder function f to obtain the second middleware, to which it will pass cont. Don’t worry if this doesn’t fully make sense at this point. Map is defined along the same lines:
public static Middleware&lt;R&gt; Map&lt;T, R&gt;
   (this Middleware&lt;T&gt; mw, Func&lt;T, R&gt; f)
   => cont => mw(t => cont(f(t)));
Map takes a Middleware&lt;T&gt; and a function f from T to R. The middleware knows how to create a T and supply it to a continuation that takes a T. By applying f, it now knows how to create an R and supply it to a continuation that takes an R. You can visualize Map as adding a transformation T → R before the continuation or, alternatively, as adding a new setup/teardown block to the pipeline, performing a transformation as the setup and passing the result along as the teardown, as figure 14.5 shows.
Finally, once we’ve composed the desired pipeline, we can run the whole pipeline by passing a continuation:
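The snippet being described is missing from this text; it would be along these lines (the variable names mw, cont, and exp1 match the discussion that follows):

```
Middleware&lt;A&gt; mw = // ...some composed pipeline
Func&lt;A, B&gt; cont = // ...the workflow to run with the A the pipeline supplies

dynamic exp1 = mw(a => cont(a));
```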
The preceding code shows that if you have a Middleware&lt;A&gt; and a continuation function cont of type A → B, you can directly supply the continuation to the middleware.
There’s still a small crease to iron out. Notice that when we provide the continuation, we get a dynamic back, where we really expect a B. To maintain type safety, we can define a Run function that runs the pipeline with the identity function as continuation:
Because mw is a Middleware&lt;T&gt; (meaning that mw can provide a value of type T to its continuation) and because the continuation in this case is the identity function, we know that the continuation produces a T, so we can confidently cast the result of running the middleware to T.
When we want to run a pipeline, instead of directly providing the continuation, we can use Map to map the continuation and then call Run:
Here we map our continuation A → B onto our Middleware&lt;A&gt;, obtaining a Middleware&lt;B&gt;, and then run it (with the identity function) to obtain a B. Notice that exp2 from this snippet is identical to exp1 from the previous one, but we’ve regained type safety.3
Let’s put this all to work by refactoring the DbLogger from section 2.3 to use Middleware rather than HOFs:
public class DbLogger
{
   Middleware&lt;SqlConnection&gt; Connect;

   public DbLogger(ConnectionString connString)
   {
      Connect = f => ConnectionHelper.Connect(connString, f);
   }

   public void Log(LogMessage message) =>
      (
         from conn in Connect
         select conn.Execute("sp_create_log", message
            , commandType: CommandType.StoredProcedure)
      ).Run();
}
In the constructor, we essentially use partial application to bake the connection string into the Connect function, which now has the right signature to be used as a Middleware&lt;SqlConnection&gt;.
In the Log method, we create a pipeline with a single middleware block, which creates the DB connection. We can then use LINQ syntax to refer to conn (the connection that will be available when the pipeline is run) when calling Execute, the main operation that will interact with the DB.
Of course, we could have written Log more concisely by just passing a callback to Connect. But the whole point here is to avoid callbacks. As we add more blocks to the pipeline, we’ll be able to do so by just adding from clauses to our LINQ comprehension. You’ll see this next.
Suppose we have a DB operation that sometimes takes longer than expected, so we’d like to add another middleware that logs how long the DB access operation took. For this, we could define the following HOF:
public static class Instrumentation
{
   public static T Time&lt;T&gt;(ILogger log, string op, Func&lt;T&gt; f)
   {
      var sw = new Stopwatch();
      sw.Start();

      T t = f();

      sw.Stop();
      log.LogDebug($"{op} took {sw.ElapsedMilliseconds}ms");
      return t;
   }
}
Time takes three arguments: a logger, to which it will write the diagnostic message; op, the name of the operation being performed, which will be included in the logged message; and a function representing the operation whose duration is being timed.
There’s a slight problem because Time takes a Func&lt;T&gt; (a function with no input arguments), whereas we defined the continuations accepted by our middleware to be in the form T → dynamic (there should always be an input parameter). We can bridge the gap with Unit, as usual, but this time on the input side. For this, I’ve defined an adapter function that converts a function taking Unit to a function that takes no arguments:
With this in place, we can enrich our pipeline with a block for logging the time taken for DB access as the following listing shows.
public class DbLogger
{
   Middleware&lt;SqlConnection&gt; Connect;
   Func&lt;string, Middleware&lt;Unit&gt;&gt; Time;

   public DbLogger(ConnectionString connString, ILogger log)
   {
      Connect = f => ConnectionHelper.Connect(connString, f);
      Time = op => f => Instrumentation.Time(log, op, f.ToNullary());
   }

   public void DeleteOldLogs() =>
      (
         from _ in Time("DeleteOldLogs")
         from conn in Connect
         select conn.Execute
         (
            "DELETE [Logs] WHERE [Timestamp] &lt; @upTo"
            , new { upTo = 7.Days().Ago() }
         )
      ).Run();
}
Once we’ve wrapped the call to Instrumentation.Time in a Middleware, we can use it in the pipeline by adding an additional from clause. Notice that the _ variable will be assigned the Unit value returned by Time. You can disregard it, but LINQ syntax doesn’t allow you to omit it.
As a final example, let’s add one more type of middleware that manages a DB transaction. We can abstract simple transaction management into a HOF like this:
public static R Transact&lt;R&gt;
   (SqlConnection conn, Func&lt;SqlTransaction, R&gt; f)
{
   using var tran = conn.BeginTransaction();

   R r = f(tran);
   tran.Commit();

   return r;
}
Transact takes a connection and a function f that consumes the transaction. Presumably, f involves multiple DB operations that need to be performed atomically. As an effect of how the using declaration is interpreted, the transaction will be rolled back if an exception is thrown by f. The following listing provides an example of integrating Transact into a pipeline.
Middleware&lt;SqlConnection&gt; Connect(ConnectionString connString) ❶
   => f => ConnectionHelper.Connect(connString, f);

Middleware&lt;SqlTransaction&gt; Transact(SqlConnection conn) ❶
   => f => ConnectionHelper.Transact(conn, f);

Func&lt;Guid, int&gt; DeleteOrder(ConnectionString connString) ❷
   => (Guid id) =>
   {
      SqlTemplate deleteLinesSql = "DELETE OrderLines WHERE OrderId = @Id";
      SqlTemplate deleteOrderSql = "DELETE Orders WHERE Id = @Id";
      object param = new { Id = id };

      Middleware&lt;int&gt; deleteOrder =
         from conn in Connect(connString)
         from tran in Transact(conn)
         select conn.Execute(deleteLinesSql, param, tran)
              + conn.Execute(deleteOrderSql, param, tran);

      return deleteOrder.Run();
   };
❶ Adapters to turn existing HOFs into Middleware
❷ The connection string is injected.
Connect and Transact simply wrap existing HOFs into a Middleware. DeleteOrder is written in curried form so that we can provide the connection string at startup and the ID of the order to delete at run time, as explained in section 9.4. Now look at the interesting bit—the middleware pipeline declared as deleteOrder:
Connect defines a block that creates (and then disposes) the connection.
Transact defines another block that consumes the connection and creates (and then disposes) a transaction.
Within the select clause, we have two DB actions that use the connection and the transaction and that will, therefore, be executed atomically. Because Execute returns an int (the number of affected rows), we can use + to combine the two operations.
As you already saw in previous chapters, the Guid of the order being deleted is used to populate the Id field of a param object; as a result, it replaces the @Id token of the SQL template strings.
Once the middleware functions are set up, adding or removing a step from the pipeline is a one-line change. If you’re logging timing information, do you want to time only the DB actions, or also the time taken to acquire the connection? Whatever the case, you can change that simply by changing the order of middleware in the pipeline, as the following snippets show:
❶ Acquiring the connection counts toward the time that will be logged.
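To make the effect of reordering concrete, here is a small, self-contained simulation. The Middleware delegate, SelectMany, and Run mirror the shape used in this chapter, but Connect and Time are fake stand-ins that only record what happens (they are assumptions for illustration, not the book’s actual library):

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-ins for the chapter's types; not the book's actual library.
public delegate dynamic Middleware<T>(Func<T, dynamic> cont);

public static class MW
{
    public static Middleware<R> SelectMany<T, T1, R>(this Middleware<T> mw,
        Func<T, Middleware<T1>> f, Func<T, T1, R> project)
        => cont => mw(t => f(t)(t1 => cont(project(t, t1))));

    public static Middleware<R> Select<T, R>(this Middleware<T> mw, Func<T, R> f)
        => cont => mw(t => cont(f(t)));

    public static T Run<T>(this Middleware<T> mw) => (T)mw(t => t);
}

public static class Demo
{
    static readonly List<string> Log = new();

    // Fake "connection" middleware: opens before, closes after the continuation
    static Middleware<string> Connect => cont =>
    {
        Log.Add("open connection");
        var r = cont("conn");
        Log.Add("close connection");
        return r;
    };

    // Fake "timing" middleware: starts/stops a timer around the continuation
    static Middleware<int> Time => cont =>
    {
        Log.Add("start timer");
        var r = cont(0);
        Log.Add("stop timer");
        return r;
    };

    // Runs the pipeline in the requested order and returns the recorded trace
    public static List<string> Trace(bool timeFirst)
    {
        Log.Clear();
        var pipeline = timeFirst
            ? (from _ in Time from conn in Connect select conn.Length)
            : (from conn in Connect from _ in Time select conn.Length);
        pipeline.Run();
        return new List<string>(Log);
    }

    public static void Main()
    {
        // Time first: acquiring the connection counts toward the logged time
        Console.WriteLine(string.Join(" | ", Trace(true)));
        // start timer | open connection | close connection | stop timer

        // Connect first: only the work inside the connection is timed
        Console.WriteLine(string.Join(" | ", Trace(false)));
        // open connection | start timer | stop timer | close connection
    }
}
```

Swapping the two from clauses is indeed a one-line change, and the trace shows exactly which steps fall inside the timed block.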
The flat layout of LINQ queries makes it easy to see and change the order of the middleware functions. And, of course, this solution avoids the pyramid of doom. Although I’ve used the idea of middleware and the somewhat specific scenario of DB access to illustrate it, the concept of continuations is wider and applies to any function in this form:4

(T → R) → R

This also means that we could have avoided defining a custom delegate Middleware. The definitions of Map, Bind, and Run have nothing specific to this scenario, and we could have used Func&lt;Func&lt;T, dynamic&gt;, dynamic&gt; instead of Middleware&lt;T&gt;. This might even save a few lines of code because it removes the need for creating delegates of the correct type. I opted for Middleware as a more explicit, domain-specific abstraction, but that’s a personal preference.
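For comparison, here is a sketch of that alternative: the same operations defined directly as extension methods on the raw Func type, with no custom delegate. (The names RawMiddleware and RawDemo are mine, for illustration only.)

```csharp
using System;

// The same Map/Bind/Run, defined on Func<Func<T, dynamic>, dynamic>
// instead of a named Middleware<T> delegate.
public static class RawMiddleware
{
    public static Func<Func<R, dynamic>, dynamic> Map<T, R>(
        this Func<Func<T, dynamic>, dynamic> mw, Func<T, R> f)
        => cont => mw(t => cont(f(t)));

    public static Func<Func<R, dynamic>, dynamic> Bind<T, R>(
        this Func<Func<T, dynamic>, dynamic> mw,
        Func<T, Func<Func<R, dynamic>, dynamic>> f)
        => cont => mw(t => f(t)(cont));

    public static T Run<T>(this Func<Func<T, dynamic>, dynamic> mw)
        => (T)mw(t => t);
}

public static class RawDemo
{
    public static int Doubled()
    {
        // A trivial "middleware" that just supplies the value 3 to its continuation
        Func<Func<int, dynamic>, dynamic> three = cont => cont(3);
        return three.Map(x => x * 2).Run();   // 6
    }

    public static void Main() => Console.WriteLine(Doubled());
}
```

It works, but the unwieldy signatures show why a named, domain-specific delegate can be the more readable choice.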
In this chapter, you’ve seen how delegate-based monads like Try and Middleware provide powerful and expressive constructs. They allow us to elegantly address general problems like exception handling and more specific scenarios like middleware pipelines. We’ll explore some more scenarios in chapter 15.
Laziness means deferring a computation until its result is needed. It’s especially useful when the result may not be required in the end.
Lazy computations can be composed to create more complex computations that can then be triggered as needed.
When dealing with an exception-based API, you can use the Try delegate type. The Run function safely executes the code in the Try and returns the result wrapped in an Exceptional.
HOFs in the form (T → R) → R (functions that take a callback or continuation) can also be composed monadically, enabling you to use flat LINQ expressions rather than deeply nested callbacks.
1 This is because C# is a language with strict or eager evaluation (expressions are evaluated as soon as they’re bound to a variable). Although strict evaluation is more common, there are languages, notably Haskell, that use lazy evaluation so that expressions are evaluated only as needed.
2 This is referred to as the writer monad in the literature, and types for which two instances can always be combined into one are called monoids.
3 This is because when computing exp2, we first compute mw.Map(cont), which composes cont with the continuation that will eventually be given. Then, by calling Run, we provide the identity function as continuation. The resulting continuation is the composition of cont and identity, which is exactly the same as providing cont as the continuation.
4 In the literature, this is known as the continuation monad, which again is a misnomer because the monad here isn’t the continuation but the computation that takes a continuation as input.