Using a local volume

The first issue is a serious problem: all of our data is currently tied to our container, so if the database app stops, you have to restart that same container to get your data back. Worse, if the container is run with the --rm flag and stops or is otherwise terminated, all the data associated with it disappears, which is definitely not something we want. While large-scale solutions to this problem involve sharding, clustering, and/or persistent volumes, at our level we should be fine by simply mounting a data volume directly into the container at the path where we want to keep our data. This keeps the data on the host filesystem if anything happens to the container, and it can then be backed up or moved somewhere else if needed.
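To make the risk concrete, here is a sketch of the failure mode without a volume (the `database` image name follows this section's example; the `throwaway`/`fresh` container names are illustrative):

```shell
# Start the database container WITHOUT a volume; --rm deletes it on stop
docker run --rm -d -p 27000:27017 --name throwaway database

# ... write some data through port 27000 ...

# Stopping the container also removes it, and /data/db goes with it
docker stop throwaway

# A replacement container starts with an empty /data/db -- the data is gone
docker run --rm -d -p 27000:27017 --name fresh database
```

Note that these commands require a running Docker daemon and the `database` image built earlier.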

This process of mounting (sometimes called mapping) a directory into the container is relatively easy to do when we start it, provided our volume is a named volume stored within Docker's internal storage:

$ docker run --rm -d -v local_storage:/data/db -p 27000:27017 database

What this will do is create a named volume in Docker's local storage called local_storage, which will be seamlessly mounted on /data/db in the container (the place where the MongoDB image stores its data in the images from Docker Hub). If the container dies or anything happens to it, you can mount this volume onto a different container and retain the data.
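If you are curious where Docker keeps this named volume on the host, `docker volume inspect` reports its mount point. A sketch of the output is shown below (abbreviated; the path is the typical default on Linux and varies by platform and Docker version):

```shell
docker volume inspect local_storage
# [
#     {
#         "Driver": "local",
#         "Mountpoint": "/var/lib/docker/volumes/local_storage/_data",
#         "Name": "local_storage",
#         ...
#     }
# ]
```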

-v, --volume, and named volumes are not the only ways to create volumes for Docker containers. We will cover the reasons why we use this syntax as opposed to the alternatives (that is, --mount) in more detail in Chapter 5, Keeping the Data Persistent, which deals specifically with volumes.
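For reference, the same container can be started with the more verbose `--mount` syntax, which is equivalent to the `-v` form used above:

```shell
docker run --rm -d \
    --mount type=volume,source=local_storage,target=/data/db \
    -p 27000:27017 \
    database
```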

Let us see this in action (this may require a MongoDB client CLI on your host machine):

$ # Start our container
$ docker run --rm \
         -d \
         -v local_storage:/data/db \
         -p 27000:27017 \
         database

16c72859da1b6f5fbe75aa735b539303c5c14442d8b64b733eca257dc31a2722

$ # Insert a test record in test_db/coll1 as { "item": "value" }
$ mongo localhost:27000
MongoDB shell version: 2.6.10
connecting to: localhost:27000/test

> use test_db
switched to db test_db

> db.createCollection("coll1")

{ "ok" : 1 }

> db.coll1.insert({"item": "value"})

WriteResult({ "nInserted" : 1 })

> exit

bye

$ # Stop the container. The --rm flag will remove it.
$ docker stop 16c72859
16c72859

$ # See what volumes we have
$ docker volume ls
DRIVER              VOLUME NAME
local               local_storage

$ # Run a new container with the volume we saved data onto
$ docker run --rm \
         -d \
         -v local_storage:/data/db \
         -p 27000:27017 \
         database

a5ef005ab9426614d044cc224258fe3f8d63228dd71dee65c188f1a10594b356

$ # Check if we have our records saved
$ mongo localhost:27000
MongoDB shell version: 2.6.10
connecting to: localhost:27000/test

> use test_db
switched to db test_db

> db.coll1.find()

{ "_id" : ObjectId("599cc7010a367b3ad1668078"), "item" : "value" }

> exit

bye

$ # Cleanup
$ docker stop a5ef005a
a5ef005a

As you can see, our record persisted through the original container's destruction, which is exactly what we want! We will cover other ways of handling volumes in later chapters, but this should be enough to resolve this critical issue in our little service.
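Since the volume now outlives any single container, you may eventually want to archive or delete it. A common sketch for backing up a named volume uses a throwaway container (the `alpine` image and the archive name here are illustrative choices, not requirements):

```shell
# Copy the volume's contents into a tarball in the current directory,
# mounting the volume read-only so the backup cannot modify it
docker run --rm \
    -v local_storage:/data:ro \
    -v "$(pwd)":/backup \
    alpine tar czf /backup/local_storage.tar.gz -C /data .

# Once you no longer need the data, remove the volume itself
docker volume rm local_storage
```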
