If you’ve been doing any reading lately about cloud-native services and applications, then you’re probably getting sick of hearing that these services all need to be stateless.
Stateless in this case doesn’t mean that state can’t exist anywhere; it just means that it cannot exist in your application’s memory. A truly cloud-native service does not maintain state between requests.
To build stateless services, we really just need to kick the state responsibility a little further down the road. In this chapter, we’ll talk about how to build a microservice that depends on an external data source. Our code in this chapter will work with Entity Framework (EF) Core, and we’ll upgrade the team and location services we’ve been working with to true data persistence.
There are many risks associated with embracing a 1.0-level technology. The ecosystem is generally immature, so support for your favorite things may be lacking or missing entirely. Tooling and integration and overall developer experience are often high-friction. Despite the long and storied history of .NET, .NET Core (and especially the associated tooling) should still be treated like a brand new 1.0 product.
One of the things we might run into when trying to pick a data store that is compatible with EF Core is a lack of available providers. While this list will likely have grown by the time you read this, at the time this chapter was written, the following providers were available for EF Core:
For databases that aren’t inherently compatible with the Entity Framework relational model, like MongoDB, Neo4j, Cassandra, etc., you should be able to find client libraries available that will work with .NET Core. Since most of these databases expose simple RESTful APIs, you should still be able to use them even if you have to write your own client.
For the most up-to-date list of databases available, check the docs.
Because of my desire to keep everything as cross-platform as possible throughout this book, I decided to use Postgres instead of SQL Server to accommodate readers working on Linux or Mac workstations. Postgres is also easily installed on Windows.
In Chapter 3, we created our first microservice. In order to get something running and focus solely on the discipline and code required to stand up a simple service, we used an in-memory repository that didn’t amount to much more than a fake that aided us in writing tests.
In this section we're going to upgrade our location service to work with Postgres. To do this, we're going to create a new repository implementation that encapsulates the PostgreSQL client communication. Before we get to the implementation code, let's revisit the interface for our location repository (Example 5-1).
using System;
using System.Collections.Generic;

namespace StatlerWaldorfCorp.LocationService.Models
{
    public interface ILocationRecordRepository
    {
        LocationRecord Add(LocationRecord locationRecord);
        LocationRecord Update(LocationRecord locationRecord);
        LocationRecord Get(Guid memberId, Guid recordId);
        LocationRecord Delete(Guid memberId, Guid recordId);

        LocationRecord GetLatestForMember(Guid memberId);
        ICollection<LocationRecord> AllForMember(Guid memberId);
    }
}
The location repository exposes standard CRUD functions like Add, Update, Get, and Delete. In addition, this repository exposes methods to obtain the latest location entry for a member as well as the entire location history for a member.
The purpose of the location service is solely to track location data, so you’ll notice that there is no reference to team membership at all in this interface.
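The LocationRecord model class itself isn't shown in this listing, but its shape can be inferred from the repository interface and from the columns created by the initial migration later in the chapter. Here is a minimal sketch; the property types are assumptions based on those migration column types (uuid, float4, int8), and the real class in the GitHub repository may differ:

```csharp
using System;

namespace StatlerWaldorfCorp.LocationService.Models
{
    // Sketch of the LocationRecord POCO. The properties mirror the columns
    // the initial migration creates (ID, Altitude, Latitude, Longitude,
    // MemberID, Timestamp); treat this as illustrative, not authoritative.
    public class LocationRecord
    {
        public Guid ID { get; set; }
        public float Latitude { get; set; }
        public float Longitude { get; set; }
        public float Altitude { get; set; }
        public long Timestamp { get; set; }
        public Guid MemberID { get; set; }
    }
}
```

Because this is a plain POCO with no Entity Framework attributes, the same class can be used by both the in-memory repository and the Postgres-backed one.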
The next thing we're going to do is create a database context. This class will serve as a wrapper around the base DbContext class we get from Entity Framework Core. Since we're dealing with locations, we'll call our context class LocationDbContext.
If you’re not familiar with Entity Framework or EF Core, the database context acts as the gateway between your database-agnostic model class (POCOs, or Plain-Old C# Objects) and the real database. For more information on EF Core, check out Microsoft’s documentation. We could probably spend another several chapters doing nothing but exploring its details, but since we’re trying to stay focused on cloud-native applications and services, we’ll use just enough EF Core to build our services.
The pattern for using a database context is to create a class that inherits from it that is specific to your model. In our case, since we're dealing with locations, we'll create a LocationDbContext class (Example 5-2).
using Microsoft.EntityFrameworkCore;
using StatlerWaldorfCorp.LocationService.Models;
using Npgsql.EntityFrameworkCore.PostgreSQL;

namespace StatlerWaldorfCorp.LocationService.Persistence
{
    public class LocationDbContext : DbContext
    {
        public LocationDbContext(
            DbContextOptions<LocationDbContext> options) : base(options)
        {
        }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
            modelBuilder.HasPostgresExtension("uuid-ossp");
        }

        public DbSet<LocationRecord> LocationRecords { get; set; }
    }
}
Here we can use the ModelBuilder and DbContextOptions classes to perform any additional setup we need on the context. In our case, we're ensuring that our model has the uuid-ossp Postgres extension to support the member ID field.
Now that we have a context through which other classes can communicate with the database, we can create a real implementation of the ILocationRecordRepository interface. This real implementation will take an instance of LocationDbContext as a constructor parameter. This sets us up nicely to configure this context with environment-supplied connection strings when deploying for real, and with mocks or in-memory providers (discussed later) when testing. Example 5-3 contains the code for the LocationRecordRepository class.
using System;
using System.Linq;
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;
using StatlerWaldorfCorp.LocationService.Models;

namespace StatlerWaldorfCorp.LocationService.Persistence
{
    public class LocationRecordRepository : ILocationRecordRepository
    {
        private LocationDbContext context;

        public LocationRecordRepository(LocationDbContext context)
        {
            this.context = context;
        }

        public LocationRecord Add(LocationRecord locationRecord)
        {
            this.context.Add(locationRecord);
            this.context.SaveChanges();
            return locationRecord;
        }

        public LocationRecord Update(LocationRecord locationRecord)
        {
            this.context.Entry(locationRecord).State = EntityState.Modified;
            this.context.SaveChanges();
            return locationRecord;
        }

        public LocationRecord Get(Guid memberId, Guid recordId)
        {
            return this.context.LocationRecords.Single(
                lr => lr.MemberID == memberId && lr.ID == recordId);
        }

        public LocationRecord Delete(Guid memberId, Guid recordId)
        {
            LocationRecord locationRecord = this.Get(memberId, recordId);
            this.context.Remove(locationRecord);
            this.context.SaveChanges();
            return locationRecord;
        }

        public LocationRecord GetLatestForMember(Guid memberId)
        {
            LocationRecord locationRecord = this.context.LocationRecords
                .Where(lr => lr.MemberID == memberId)
                .OrderBy(lr => lr.Timestamp)
                .Last();
            return locationRecord;
        }

        public ICollection<LocationRecord> AllForMember(Guid memberId)
        {
            return this.context.LocationRecords
                .Where(lr => lr.MemberID == memberId)
                .OrderBy(lr => lr.Timestamp)
                .ToList();
        }
    }
}
The code here is pretty straightforward. Any time we make a change to the database, we call SaveChanges on the context. If we need to query, we use the LINQ expression syntax, where we can combine Where and OrderBy to filter and sort the results.
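The same Where/OrderBy pipeline can be exercised against a plain in-memory collection, which is a handy way to sanity-check query logic without a database. The sketch below uses a simplified stand-in type rather than the service's real model class:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class LatestLocationSample
{
    // Simplified stand-in for LocationRecord (illustration only).
    public class Loc
    {
        public Guid MemberID { get; set; }
        public long Timestamp { get; set; }
    }

    // Mirrors the repository's GetLatestForMember logic: filter by member,
    // sort ascending by timestamp, and take the last (newest) record.
    public static Loc LatestForMember(IEnumerable<Loc> records, Guid memberId)
    {
        return records
            .Where(lr => lr.MemberID == memberId)
            .OrderBy(lr => lr.Timestamp)
            .Last();
    }
}
```

Because LINQ works uniformly over in-memory collections and EF Core DbSets, the in-memory repository and the Postgres repository can share essentially the same query expressions.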
When we do an update, we need to flag the entity we're updating as a modified entry so that Entity Framework Core knows how to generate an appropriate SQL UPDATE statement for that record. If we don't modify this entry state, EF Core won't know anything has changed, and a call to SaveChanges will do nothing.
The next big trick in this repository is injecting the Postgres-specific database context. To make this happen, we need to add this repository to the dependency injection system in the ConfigureServices method of our Startup class (Example 5-4).
public void ConfigureServices(IServiceCollection services)
{
    services.AddEntityFrameworkNpgsql()
        .AddDbContext<LocationDbContext>(options =>
            options.UseNpgsql(Configuration));

    services.AddScoped<ILocationRecordRepository, LocationRecordRepository>();
    services.AddMvc();
}
First we want to use the AddEntityFrameworkNpgsql extension method exposed by the Postgres EF Core provider. Next, we add our location repository as a scoped service. When we use the AddScoped method, we're indicating that every new request made to our service gets a newly created instance of this repository.
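Because the repository is registered against the ILocationRecordRepository interface, consuming classes never name the concrete implementation. A controller might receive the repository like this; this is a hedged sketch (the real controller is in the book's GitHub repository and may differ in routes and details):

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using StatlerWaldorfCorp.LocationService.Models;

namespace StatlerWaldorfCorp.LocationService.Controllers
{
    [Route("locations/{memberId}")]
    public class LocationRecordController : Controller
    {
        // ASP.NET Core resolves this from the DI container. With AddScoped,
        // each HTTP request gets its own repository (and therefore its own
        // LocationDbContext) instance.
        private readonly ILocationRecordRepository repository;

        public LocationRecordController(ILocationRecordRepository repository)
        {
            this.repository = repository;
        }

        [HttpPost]
        public IActionResult AddLocation(Guid memberId,
            [FromBody] LocationRecord locationRecord)
        {
            repository.Add(locationRecord);
            return this.Created(
                $"/locations/{memberId}/{locationRecord.ID}", locationRecord);
        }
    }
}
```

A per-request (scoped) lifetime is the conventional choice for EF Core contexts, since a DbContext is not thread-safe and is cheap to construct.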
So far we've created an interface that represents the contract to which our repositories must conform. We've got an in-memory implementation of a repository, and we've got a repository that wraps a DbContext configured to talk to PostgreSQL.
You might be wondering how (or if) we can test the database context wrapper in isolation, since we can already test the repository in isolation. Microsoft does have an in-memory provider for Entity Framework Core. There are a couple of drawbacks to this provider, however. First and foremost, the InMemory provider is not a relational database. This means that you can save data using this provider that might normally violate a real database's referential integrity and foreign key constraints.
If you dig a little deeper into this provider, you'll realize that it is essentially an EF Core facade around simple in-memory collection storage. We have already built a repository that works against collection objects, so the only added value this provider gives us is a little bit of additional code coverage to ensure that our database context is actually invoked. You should not assume that the InMemory provider is going to give you confidence that your database operations will behave as planned.
It is for these reasons, and the fact that this is not a book focused on TDD, that I decided to skip writing tests using this provider. We’ve got unit tests for our repositories and, as you’ll see later in the chapter, we’re going to be building automated integration tests that talk to a real PostgreSQL database.
I'll leave it up to you to decide whether you think the use of the InMemory provider will add testing value and confidence to your projects.
When we talk about making our services cloud native, one of the things that always comes up is the notion of backing services. Put simply, this means that we need to treat everything that our application needs to function as a bound resource: files, databases, services, messaging, etc.
Every backing service our application needs should be configurable externally. As such, we need to be able to get our database connection string from someplace other than our code. If we check a connection string into source control, then we’ve already violated some of the cardinal rules of cloud-native application development.
The means by which an application gets its external configuration vary from platform to platform. For this sample we’re going to use environment variables that can override defaults supplied by a configuration file.
This appsettings.json file looks like the one here:

{
    "transient": false,
    "postgres": {
        "cstr": "Host=localhost;Port=5432;Database=locationservice;Username=integrator;Password=inteword"
    }
}
This scenario makes it very easy to override default configuration in deployment environments but have a relatively low-friction developer experience on our development workstations.
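In code, this layering is typically set up with the standard ASP.NET Core configuration API: the JSON file is added first and environment variables last, so the environment wins. Note that in environment variable names a double underscore stands in for the colon section separator, which is why POSTGRES__CSTR overrides the postgres:cstr setting. The following is an assumed sketch of that wiring, not the book's exact Startup code:

```csharp
using System.IO;
using Microsoft.Extensions.Configuration;

namespace StatlerWaldorfCorp.LocationService
{
    public static class ConfigSketch
    {
        // Later sources override earlier ones, so an environment variable
        // such as POSTGRES__CSTR trumps the default "postgres:cstr" value
        // read from appsettings.json.
        public static IConfigurationRoot Build()
        {
            return new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.json", optional: true)
                .AddEnvironmentVariables()
                .Build();
        }
    }
}
```

With this in place, reading config.GetSection("postgres:cstr").Value returns the file's default locally and the environment-supplied connection string in a deployed container.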
The repository we built earlier requires some kind of database context in order to function. The database context is the core primitive of Entity Framework Core. (This book is not an EF Core reference manual, so if you want additional information you should consult the official documentation.)
To create a database context for the location model, we just need to create a class that inherits from DbContext. I've also included a DbContextFactory because that can sometimes make running the Entity Framework Core command-line tools simpler:
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Infrastructure;
using StatlerWaldorfCorp.LocationService.Models;
using Npgsql.EntityFrameworkCore.PostgreSQL;

namespace StatlerWaldorfCorp.LocationService.Persistence
{
    public class LocationDbContext : DbContext
    {
        public LocationDbContext(
            DbContextOptions<LocationDbContext> options) : base(options)
        {
        }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
            modelBuilder.HasPostgresExtension("uuid-ossp");
        }

        public DbSet<LocationRecord> LocationRecords { get; set; }
    }

    public class LocationDbContextFactory :
        IDbContextFactory<LocationDbContext>
    {
        public LocationDbContext Create(DbContextFactoryOptions options)
        {
            var optionsBuilder = new DbContextOptionsBuilder<LocationDbContext>();
            var connectionString =
                Startup.Configuration.GetSection("postgres:cstr").Value;
            optionsBuilder.UseNpgsql(connectionString);

            return new LocationDbContext(optionsBuilder.Options);
        }
    }
}
With a new database context, we need to make it available for dependency injection so that the location repository can utilize it:
public void ConfigureServices(IServiceCollection services)
{
    var transient = true;
    if (Configuration.GetSection("transient") != null) {
        transient = Boolean.Parse(Configuration.GetSection("transient").Value);
    }

    if (transient) {
        logger.LogInformation("Using transient location record repository.");
        services.AddScoped<ILocationRecordRepository,
            InMemoryLocationRecordRepository>();
    } else {
        var connectionString = Configuration.GetSection("postgres:cstr").Value;
        services.AddEntityFrameworkNpgsql()
            .AddDbContext<LocationDbContext>(options =>
                options.UseNpgsql(connectionString));
        logger.LogInformation("Using '{0}' for DB connection string.",
            connectionString);
        services.AddScoped<ILocationRecordRepository,
            LocationRecordRepository>();
    }

    services.AddMvc();
}
The calls to AddEntityFrameworkNpgsql and AddDbContext are the magic that makes everything happen here.
With a context configured for DI, our service should be ready to run, test, and accept EF Core command-line parameters like the ones we need to execute migrations. You can see the code for the migrations in the location service’s GitHub repository. When building your own database-backed services, you can also use the EF Core command-line tools to reverse-engineer migrations from existing database schemas.
We've unit tested all of our code, and we've made the decision not to use the InMemory EF Core data provider, but we still don't have full confidence in our service. The only way we're going to have full confidence is when we exercise our repository class against a real Postgres database.
Back in the old days, when developers rode dinosaurs to and from the office, we would have installed Postgres on our local workstation, configured it manually, and even manually triggered a test that would exercise the repository class against this local instance.
This pattern is the antithesis of the kind of agility and automation we want when building applications for the cloud. No, what we want instead is for our automated build pipeline to spin up a fresh, empty instance of Postgres every time we run the build. Then we want integration tests to run against this fresh instance, including running our migrations to set up the schema in the database. We want this to work locally, on our teammates’ workstations, and in the cloud, all automatically after every commit.
This is why I enjoy the combination of Wercker and Docker (though most Docker-native CI tools support similar functionality). If we just add the following lines to the top of our wercker.yml file, the Wercker CLI (and the hosted version in the cloud) will spin up a connected Postgres Docker image and create a bunch of environment variables that provide the host IP, port, and credentials for the database (Example 5-5).
services:
    - id: postgres
      env:
        POSTGRES_PASSWORD: inteword
        POSTGRES_USER: integrator
        POSTGRES_DB: locationservice
We can specify the credentials we’re going to use or we can let Wercker pick them. Either way, the credentials and other relevant information are made available to our build pipeline in environment variables.
Normally we would throw a fit about checking in credentials, but since these credentials are only used to configure a short-lived database that only exists long enough to run integration tests inside a private container, this isn’t dangerous. If these credentials pointed to a database that existed anywhere in a semi-permanent environment, that would be a red flag.
This chapter has a lot of connection strings that wrap across lines in the printed and electronic book. These line wraps don’t exist in the actual YAML, JSON, or C# files. Please double-check with the files in GitHub if you’re not sure when there should or should not be a line feed.
Now we can set up some build steps that prepare for and execute integration tests, as in Example 5-6.
# integration tests
- script:
    name: integration-migrate
    cwd: src/StatlerWaldorfCorp.LocationService
    code: |
      export TRANSIENT=false
      export POSTGRES__CSTR="Host=$POSTGRES_PORT_5432_TCP_ADDR"
      export POSTGRES__CSTR="$POSTGRES__CSTR;Username=integrator;Password=inteword;"
      export POSTGRES__CSTR="$POSTGRES__CSTR;Port=$POSTGRES_PORT_5432_TCP_PORT;Database=locationservice"
      dotnet ef database update
- script:
    name: integration-restore
    cwd: test/StatlerWaldorfCorp.LocationService.Integration
    code: |
      dotnet restore
- script:
    name: integration-build
    cwd: test/StatlerWaldorfCorp.LocationService.Integration
    code: |
      dotnet build
- script:
    name: integration-test
    cwd: test/StatlerWaldorfCorp.LocationService.Integration
    code: |
      dotnet test
The awkward-looking concatenation of the shell variable makes it a little clearer how the value is built up, and it also sidesteps parsing issues where a semicolon can cut off the rest of the environment variable.
The following is the list of commands being executed by the integration suite:
dotnet ef database update
    Ensures that the schema in the database matches what our EF Core model expects. This will actually instantiate the Startup class, call ConfigureServices, pluck out the LocationDbContext class, and then execute the migrations stored in the project.

dotnet restore
    Verifies and collects dependencies for our integration test project.

dotnet build
    Compiles our integration test project.

dotnet test
    Runs the detected tests in our integration test project.
You can see the full wercker.yml file in the GitHub repository for the location service. I cannot stress enough how important it is that you and your team be able to automatically run all of your unit and integration tests in a reliable, reproducible environment. This is key to rapid iteration when building microservices for the cloud.
Running the data service should be relatively easy. The first thing we're going to need to do is spin up a running instance of Postgres. If you were paying attention to the wercker.yml file for the location service that sets up the integration tests, then you might be able to guess at the docker run command to start Postgres with our preferred parameters:
$ docker run -p 5432:5432 --name some-postgres \
    -e POSTGRES_PASSWORD=inteword \
    -e POSTGRES_USER=integrator \
    -e POSTGRES_DB=locationservice \
    -d postgres
This starts the Postgres Docker image with the name some-postgres (this will be important shortly). To verify that we can connect to Postgres, we can run the following Docker command to launch psql:
$ docker run -it --rm --link some-postgres:postgres postgres \
    psql -h postgres -U integrator -d locationservice
Password for user integrator:
psql (9.6.2)
Type "help" for help.

locationservice=# select 1;
 ?column?
----------
        1
(1 row)
With the database up and running, we need a schema. The tables in which we expect to store the migration metadata and our location records don’t yet exist. To put them in the database, we just need to run an EF Core command from the location service’s project directory. Note that we’re also setting environment variables that we’ll need soon:
$ export TRANSIENT=false
$ export POSTGRES__CSTR="Host=localhost;Username=integrator;Password=inteword;Database=locationservice;Port=5432"
$ dotnet ef database update

Build succeeded.
    0 Warning(s)
    0 Error(s)
Time Elapsed 00:00:03.25

info: Startup[0]
      Using 'Host=localhost;Username=integrator;Password=inteword;Port=5432;Database=locationservice' for DB connection string.
Executed DbCommand (13ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT EXISTS (SELECT 1 FROM pg_catalog.pg_class c
    JOIN pg_catalog.pg_namespace n ON n.oid=c.relnamespace
    WHERE c.relname='__EFMigrationsHistory');
Executed DbCommand (56ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
CREATE TABLE "__EFMigrationsHistory" (
    "MigrationId" varchar(150) NOT NULL,
    "ProductVersion" varchar(32) NOT NULL,
    CONSTRAINT "PK___EFMigrationsHistory" PRIMARY KEY ("MigrationId")
);
Executed DbCommand (0ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT EXISTS (SELECT 1 FROM pg_catalog.pg_class c
    JOIN pg_catalog.pg_namespace n ON n.oid=c.relnamespace
    WHERE c.relname='__EFMigrationsHistory');
Executed DbCommand (2ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT "MigrationId", "ProductVersion"
FROM "__EFMigrationsHistory"
ORDER BY "MigrationId";
Applying migration '20160917140258_Initial'.
Executed DbCommand (19ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
Executed DbCommand (18ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
CREATE TABLE "LocationRecords" (
    "ID" uuid NOT NULL,
    "Altitude" float4 NOT NULL,
    "Latitude" float4 NOT NULL,
    "Longitude" float4 NOT NULL,
    "MemberID" uuid NOT NULL,
    "Timestamp" int8 NOT NULL,
    CONSTRAINT "PK_LocationRecords" PRIMARY KEY ("ID")
);
Executed DbCommand (0ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
INSERT INTO "__EFMigrationsHistory" ("MigrationId", "ProductVersion")
VALUES ('20160917140258_Initial', '1.1.1');
Done.
At this point Postgres is running with a valid schema and it's ready to start accepting commands from the location service. Here's where it gets a little tricky. If we're going to run the location service from inside a Docker image, then referring to the Postgres server's host as localhost won't work, because that refers to the host inside the service's own container.
What we need is for the location service to reach out of its container and then into the Postgres container. We can do this with a container link that creates a virtual hostname (we'll call it postgres), but we'll need to change our environment variable before launching the Docker image:
$ export POSTGRES__CSTR="Host=postgres;Username=integrator;Password=inteword;Database=locationservice;Port=5432"
$ docker run -p 5000:5000 --link some-postgres:postgres \
    -e TRANSIENT=false -e PORT=5000 -e POSTGRES__CSTR \
    dotnetcoreservices/locationservice:latest
Now that we've linked the service's container to the Postgres container via the postgres hostname, the location service should have no trouble connecting to the database.

To see this all in action, let's submit a location record:
$ curl -H "Content-Type:application/json" -X POST -d \
    '{"id":"64c3e69f-1580-4b2f-a9ff-2c5f3b8f0e1f","latitude":12.0,"longitude":10.0,"altitude":5.0,"timestamp":0,"memberId":"63e7acf8-8fae-42ce-9349-3c8593ac8292"}' \
    http://localhost:5000/locations/63e7acf8-8fae-42ce-9349-3c8593ac8292

{"id":"64c3e69f-1580-4b2f-a9ff-2c5f3b8f0e1f","latitude":12.0,"longitude":10.0,"altitude":5.0,"timestamp":0,"memberID":"63e7acf8-8fae-42ce-9349-3c8593ac8292"}
Take a look at the trace output from your running Docker image for the location service. You should see some very useful Entity Framework trace data explaining what happened. The service performed a SQL INSERT, so things are looking promising:
info: Microsoft.EntityFrameworkCore.Storage.IRelationalCommandBuilderFactory[1]
      Executed DbCommand (23ms) [Parameters=[@p0='?', @p1='?', @p2='?',
      @p3='?', @p4='?', @p5='?'], CommandType='Text', CommandTimeout='30']
      INSERT INTO "LocationRecords" ("ID", "Altitude", "Latitude",
      "Longitude", "MemberID", "Timestamp")
      VALUES (@p0, @p1, @p2, @p3, @p4, @p5);
info: Microsoft.AspNetCore.Mvc.Internal.ObjectResultExecutor[1]
      Executing ObjectResult, writing value
      Microsoft.AspNetCore.Mvc.ControllerContext.
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action StatlerWaldorfCorp.LocationService.Controllers
      .LocationRecordController.AddLocation
      (StatlerWaldorfCorp.LocationService) in 2253.7616ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 2602.7855ms 201 application/json; charset=utf-8
Let’s ask the service for this fictitious member’s location history:
$ curl http://localhost:5000/locations/63e7acf8-8fae-42ce-9349-3c8593ac8292

[{"id":"64c3e69f-1580-4b2f-a9ff-2c5f3b8f0e1f","latitude":12.0,"longitude":10.0,"altitude":5.0,"timestamp":0,"memberID":"63e7acf8-8fae-42ce-9349-3c8593ac8292"}]
The corresponding Entity Framework trace looks like this:
info: Microsoft.EntityFrameworkCore.Storage.IRelationalCommandBuilderFactory[1]
      Executed DbCommand (23ms) [Parameters=[@__memberId_0='?'],
      CommandType='Text', CommandTimeout='30']
      SELECT "lr"."ID", "lr"."Altitude", "lr"."Latitude", "lr"."Longitude",
      "lr"."MemberID", "lr"."Timestamp"
      FROM "LocationRecords" AS "lr"
      WHERE "lr"."MemberID" = @__memberId_0
      ORDER BY "lr"."Timestamp"
Just to be double sure, let’s query the latest endpoint to make sure we still get what we expect to see:
$ curl http://localhost:5000/locations/63e7acf8-8fae-42ce-9349-3c8593ac8292/latest

{"id":"64c3e69f-1580-4b2f-a9ff-2c5f3b8f0e1f","latitude":12.0,"longitude":10.0,"altitude":5.0,"timestamp":0,"memberID":"63e7acf8-8fae-42ce-9349-3c8593ac8292"}
Finally, to prove that we really are using real database persistence and that this isn't just a random fluke, use docker ps and docker kill to locate the Docker process for the location service and kill it. Restart it using the exact same command you used before.
You should now be able to query the location service and get the exact same data you had before. Of course, once you stop the Postgres container you’ll permanently lose that data.
There are no hard and fast rules about microservices that say we must communicate with a database, but exposure to the real world tells us that a lot of our microservices are going to sit on top of databases.
In this chapter we talked about some of the architectural and technical concerns with building a .NET Core microservice that exposes a RESTful API that interacts with a database. We illustrated how to use dependency injection to configure our repository service, as well as how to use build automation tools to run integration tests against clean, private database instances.
In the coming chapters, we’ll start exploring more advanced topics as we widen the scope of our coverage of microservices from individual services to entire service ecosystems.