At the core of most large-scale web applications and services is a high-performance data storage solution. The backend data store is responsible for storing everything from user account information to shopping cart items to blog and comment data. Good web applications must store and retrieve data with accuracy, speed, and reliability. Therefore, the data storage mechanism you choose must perform at a level that satisfies user demand.
Several different data storage solutions are available to store and retrieve data needed by your web applications. The three most common are direct file system storage in files, relational databases, and NoSQL databases. The data store chosen for this book is MongoDB, which is a NoSQL database.
The following sections describe MongoDB and discuss the design considerations you need to review before deciding how to implement the structure of data and configuration of the database. The sections cover the questions to ask yourself and then describe the mechanisms built into MongoDB that satisfy the demands the answers to those questions create.
The concept of NoSQL (Not Only SQL) consists of technologies that provide storage and retrieval without the tightly constrained models of traditional SQL relational databases. The motivation behind NoSQL is mainly simplified designs, horizontal scaling, and finer control of the availability of data.
NoSQL breaks away from the traditional structure of relational databases and allows developers to implement models in ways that more closely fit the data flow needs of their systems. This allows NoSQL databases to be implemented in ways that traditional relational databases could never be structured.
There are several different NoSQL technologies, such as HBase's column structure, Redis's key/value structure, and Neo4j's graph structure. However, this book uses MongoDB and its document model because of their great flexibility and scalability when it comes to implementing backend storage for web applications and services. MongoDB is also one of the most popular and best-supported NoSQL databases currently available.
MongoDB is a NoSQL database based on a document model where data objects are stored as separate documents inside a collection. The design goal of MongoDB is to provide a data store with high performance, high availability, and automatic scaling. MongoDB is simple to install and implement, as you see in the upcoming chapters.
MongoDB groups data together through collections. A collection is simply a grouping of documents that have the same or a similar purpose. A collection acts similarly to a table in a traditional SQL database, with one major difference. In MongoDB, a collection is not enforced by a strict schema; instead, documents in a collection can have a slightly different structure from one another as needed. This reduces the need to break items in a document into several different tables, which is often done in SQL implementations.
A document is a representation of a single entity of data in the MongoDB database. A collection is made up of one or more related documents. A major difference between MongoDB and SQL databases is that documents are not the same as rows. Row data is flat, meaning there is exactly one column for each value in the row. In MongoDB, however, documents can contain embedded subdocuments, thus providing a data model that is inherently much closer to that of your applications.
In fact, the records in MongoDB that represent documents are stored as BSON, which is a lightweight binary form of JSON, with field:value pairs corresponding to JavaScript property:value pairs. These field:value pairs define the values stored in the document. That means little translation is necessary to convert MongoDB records back into the JavaScript objects that you use in your Node.js applications.
For example, a document in MongoDB may be structured similarly to the following, with name, version, languages, admin, and paths fields:
{
  name: "New Project",
  version: 1,
  languages: ["JavaScript", "HTML", "CSS"],
  admin: {name: "Brad", password: "****"},
  paths: {temp: "/tmp", project: "/opt/project", html: "/opt/project/html"}
}
Notice that the document structure contains fields/properties that are strings, integers, arrays, and objects, just like a JavaScript object. Table 11.1 lists the different data types that field values can be set to in the BSON document.
The field names cannot contain null characters, . (dots), or $ (dollar signs). Also, the _id field name is reserved for the Object ID. The _id field is a unique ID for the system that is made up of the following parts:
A 4-byte value representing the seconds since the epoch
A 3-byte machine identifier
A 2-byte process ID
A 3-byte counter, starting with a random value
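The four parts above can be made concrete with a short plain-Node.js sketch. The helper name decodeObjectId and the sample ID are hypothetical; the slicing simply follows the 4-3-2-3 byte layout described above, rendered as 24 hex characters:

```javascript
// Sketch: decoding the parts of a MongoDB ObjectId hex string.
// The 12-byte ObjectId is conventionally rendered as 24 hex characters.
function decodeObjectId(hex) {
  return {
    timestamp: parseInt(hex.slice(0, 8), 16), // 4-byte seconds since the epoch
    machine: hex.slice(8, 14),                // 3-byte machine identifier
    pid: parseInt(hex.slice(14, 18), 16),     // 2-byte process ID
    counter: parseInt(hex.slice(18, 24), 16)  // 3-byte counter
  };
}

const parts = decodeObjectId("507f1f77bcf86cd799439011");
// The timestamp part converts directly to a JavaScript Date.
console.log(new Date(parts.timestamp * 1000).toISOString());
```

Because the timestamp is the leading component, sorting documents by _id roughly sorts them by creation time.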
The maximum size of a document in MongoDB is 16MB, which prevents queries that result in an excessive amount of RAM being used or intensive hits to the file system. Although you may never come close to that limit, you still need to keep the maximum document size in mind when designing complex data types that contain file data.
The BSON data format provides several different types that are used when storing JavaScript objects in binary form. These types match the JavaScript types as closely as possible. It is important to understand these types because you can actually query MongoDB to find objects that have a specific property with a value of a certain type. For example, you can look for documents in a database whose timestamp value is a String object or query for ones whose timestamp is a Date object.
MongoDB assigns each of the data types an integer ID number from 1 to 255 that is used when querying by type. Table 11.1 shows a list of the data types that MongoDB supports along with the number MongoDB uses to identify them.
Table 11.1 MongoDB data types and corresponding ID number
Type | Number
Double | 1
String | 2
Object | 3
Array | 4
Binary data | 5
Object id | 7
Boolean | 8
Date | 9
Null | 10
Regular expression | 11
JavaScript | 13
Symbol | 14
JavaScript (with scope) | 15
32-bit integer | 16
Timestamp | 17
64-bit integer | 18
Min key | 255
Max key | 127
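As a sketch of what querying by type looks like, the following builds the filter documents you would pass to find() in the MongoDB shell or a driver. The field name timestamp is just an example; the $type numbers come from Table 11.1:

```javascript
// Sketch: the shape of query-by-type filters. In MongoDB these objects
// would be passed to find(); here we just build and inspect them.
// $type: 2 selects String values; $type: 9 selects Date values.
const stringTimestampQuery = { timestamp: { $type: 2 } };
const dateTimestampQuery = { timestamp: { $type: 9 } };

console.log(stringTimestampQuery.timestamp.$type); // 2
console.log(dateTimestampQuery.timestamp.$type);   // 9
```

In the shell, an equivalent lookup would read something like db.myCollection.find({ timestamp: { $type: 9 } }), with myCollection standing in for a real collection name.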
Another thing to be aware of when working with different data types in MongoDB is the order in which they are compared. When comparing values of different BSON types, MongoDB uses the following comparison order from lowest to highest:
1. Min Key (internal type)
2. Null
3. Numbers (32-bit integer, 64-bit integer, Double)
4. String
5. Object
6. Array
7. Binary Data
8. Object ID
9. Boolean
10. Date, Timestamp
11. Regular Expression
12. Max Key (internal type)
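The following plain-JavaScript sketch approximates this ordering for value types that have direct JavaScript counterparts (Min Key, Max Key, Binary Data, and Object ID are omitted because they have no plain-JS equivalent). It illustrates the rule, not MongoDB's actual comparator:

```javascript
// Sketch: ranking JavaScript values by MongoDB's cross-type comparison
// order (a subset of the full BSON ordering above).
function bsonTypeRank(v) {
  if (v === null || v === undefined) return 1; // Null
  if (typeof v === "number") return 2;         // Numbers
  if (typeof v === "string") return 3;         // String
  if (Array.isArray(v)) return 5;              // Array
  if (v instanceof Date) return 8;             // Date
  if (v instanceof RegExp) return 9;           // Regular Expression
  if (typeof v === "boolean") return 7;        // Boolean
  if (typeof v === "object") return 4;         // Object
  return NaN;
}

// Mixed-type values sort by type rank first, mimicking MongoDB's order:
// null sorts before numbers, numbers before strings, and so on.
const mixed = ["abc", 7, null, true, [1, 2]];
mixed.sort((a, b) => bsonTypeRank(a) - bsonTypeRank(b));
console.log(mixed);
```

This cross-type ordering matters when a field holds different types in different documents and you sort on that field.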
Before you begin implementing a MongoDB database, you need to understand the nature of the data being stored, how that data is going to get stored, and how it is going to be accessed. Understanding these concepts allows you to make determinations ahead of time and to structure the data and your application for optimal performance.
Specifically, you should ask yourself the following questions:
What are the basic objects that my application will be using?
What is the relationship between the different object types: one-to-one, one-to-many, or many-to-many?
How often will new objects be added to the database?
How often will objects be deleted from the database?
How often will objects be changed?
How often will objects be accessed?
How will objects be accessed: by ID, property values, comparisons, and so on?
How will groups of object types be accessed: by common ID, common property value, and so on?
Once you have the answers to these questions, you are ready to consider the structure of collections and documents inside the MongoDB database. The following sections discuss different methods of document, collection, and database modeling you can use in MongoDB to optimize data storage and access.
Data normalization is the process of organizing documents and collections to minimize redundancy and dependency. This is done by identifying object properties that are subobjects and should be stored as a separate document in another collection from the object’s document. Typically, this is used for objects that have a one-to-many or many-to-many relationship with subobjects.
The advantage of normalizing data is that the database size will be smaller because only a single copy of an object will exist in its own collection instead of being duplicated on multiple objects in a single collection. Also, if you modify the information in the subobject frequently, you only need to modify a single instance rather than every record in the object’s collection that has that subobject.
A major disadvantage of normalizing data is that when looking up user objects that require the normalized subobject, a separate lookup must occur to link the subobject. This can result in a significant performance hit if you are accessing the user data frequently.
An example of when it makes sense to normalize data is a system that contains users that have a favorite store. Each User is an object with name, phone, and favoriteStore properties. The favoriteStore property is a subobject that contains name, street, city, and zip properties.
However, thousands of users may have the same favorite store, so there is a high one-to-many relationship. Therefore, it doesn't make sense to store the FavoriteStore object data in each User object, because that would result in thousands of duplications. Instead, each FavoriteStore object should be stored as its own document with an _id property, and each User document should store that ID in its favoriteStore property. The application can then use the favoriteStore reference ID to link data from the Users collection to FavoriteStore documents in the FavoriteStores collection.
Figure 11.1 illustrates the structure of the Users and FavoriteStores collections just described.
Figure 11.1 Defining normalized MongoDB documents by adding a reference to documents in another collection
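The layout in Figure 11.1 can be sketched in memory with hypothetical sample data. In a real application, resolveStore would issue a second MongoDB query against the FavoriteStores collection rather than scanning an array:

```javascript
// Sketch: normalized Users and FavoriteStores collections, joined in
// application code via the favoriteStore reference ID.
const favoriteStores = [
  { _id: "store1", name: "Fresh Foods", street: "1 Main St",
    city: "Springfield", zip: "11111" }
];
const users = [
  { name: "Brad", phone: "555-0100", favoriteStore: "store1" },
  { name: "Caleb", phone: "555-0101", favoriteStore: "store1" }
];

// The second lookup that normalization requires, as described above.
function resolveStore(user) {
  const store = favoriteStores.find(s => s._id === user.favoriteStore);
  return { ...user, favoriteStore: store };
}

console.log(resolveStore(users[0]).favoriteStore.name);
```

Note that both users reference the single store1 document, so an update to the store touches only one record.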
Denormalizing data is the process of identifying subobjects of a main object that should be embedded directly into the document of the main object. Typically this is done on objects that have a mostly one-to-one relationship or are relatively small and do not get updated frequently.
The major advantage of denormalized documents is that you can get the full object back in a single lookup without the need to do additional lookups to combine subobjects from other collections. This is a major performance enhancement. The downside is that for subobjects with a one-to-many relationship you store a separate copy in each document, which slows down insertion and also takes up additional disk space.
An example of when it makes sense to denormalize data is a system that contains users with home and work contact information. The user is an object represented by a User document with name, home, and work properties. The home and work properties are subobjects that contain phone, street, city, and zip properties.
The home and work properties do not change often on the user. You may have multiple users from the same home; however, there likely will not be many of them, and the actual values inside the subobjects are not that big and will not change often. Therefore, it makes sense to store the home contact information directly in the User object.
The work property takes a bit more thinking. How many people are going to have the same work contact information? If the answer is not many, then the work object should be embedded with the User object. How often are you querying the User and need the work contact information? If the answer is rarely, then you may want to normalize work into its own collection. However, if the answer is frequently or always, then you will likely want to embed work with the User object.
Figure 11.2 illustrates the structure of User documents with home and work contact information embedded, as just described.
Figure 11.2 Defining denormalized MongoDB documents by implementing embedded objects inside a document
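A minimal sketch of the denormalized document in Figure 11.2, with hypothetical sample data; a single lookup returns the user and both embedded contact subobjects:

```javascript
// Sketch: a denormalized User document with home and work contact
// information embedded directly, so no second lookup is needed.
const user = {
  name: "Brad",
  home: { phone: "555-0100", street: "2 Oak St",
          city: "Springfield", zip: "11111" },
  work: { phone: "555-0200", street: "3 Elm St",
          city: "Springfield", zip: "11111" }
};

// Everything is reachable from the one document.
console.log(user.home.city);
console.log(user.work.phone);
```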
A great feature of MongoDB is the ability to create a capped collection, which is a collection that has a fixed size. When a new document that exceeds the size of the collection needs to be written to a collection, the oldest document in the collection is deleted and the new document is inserted. Capped collections work great for objects that have a high rate of insertion, retrieval, and deletion.
The following list contains the benefits of using capped collections:
Capped collections guarantee that the insertion order is preserved. Queries do not need to use an index to return documents in the order they were stored, thus eliminating indexing overhead.
Capped collections also guarantee that the insertion order is identical to the order on disk by prohibiting updates that increase the document size. This eliminates the overhead of relocating and managing the new location of documents.
Capped collections automatically remove the oldest documents in the collection. Therefore, you do not need to implement deletion in your application code.
Be careful using capped collections, though, as they have the following restrictions:
Documents cannot be updated to a larger size once they have been inserted into the capped collection. You can update them, but the data must be the same size or smaller.
Documents cannot be deleted from a capped collection. That means that the data takes up space on disk even if it is not being used. You can explicitly drop the capped collection to effectively delete all entries, but you need to re-create it to use it again.
A great use of capped collections is as a rolling log of transactions in your system. You can always access the last X number of log entries without needing to explicitly clean up the oldest.
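The eviction behavior can be sketched in plain JavaScript as follows. In MongoDB itself you would create the collection with db.createCollection("log", { capped: true, size: 5242880, max: 5000 }) (the name and limits here are examples) and the eviction happens automatically:

```javascript
// Sketch: simulating capped-collection behavior in memory — inserts past
// the cap evict the oldest document, preserving insertion order.
function cappedInsert(collection, doc, maxDocs) {
  collection.push(doc);
  if (collection.length > maxDocs) collection.shift(); // drop the oldest
  return collection;
}

const log = [];
for (let i = 1; i <= 7; i++) {
  cappedInsert(log, { entry: i }, 5); // cap of 5 documents
}
// Only the 5 most recent entries remain, still in insertion order.
console.log(log.map(d => d.entry));
```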
Write operations are atomic at the document level in MongoDB, which means that a write to a single document either completes entirely or not at all, and no two processes update the same document at the same time. This means that writing to denormalized documents is atomic. However, writing to normalized documents requires separate write operations against subobjects in other collections, and therefore the write of the normalized object as a whole may not be atomic.
Keep atomic writes in mind when designing your documents and collections to ensure that the design fits the needs of the application. In other words, if you absolutely must write all parts of an object as a whole in an atomic manner, then you need to design the object in a denormalized fashion.
When you update a document, consider what effect the new data will have on document growth. MongoDB provides some padding in documents to allow for typical growth during an update operation. However, if the update causes the document to grow beyond the space allocated on disk, MongoDB has to relocate that document to a new location on the disk, incurring a performance hit on the system. Also, frequent document relocation can lead to disk fragmentation issues, for example, if a document contains an array and you keep adding elements to the array.
One way to mitigate document growth is to use normalized objects for the properties that may grow frequently. For example, instead of using an array to store items in a Cart object, you could create a CartItems collection and store each item placed in the cart as a new document that references the user's Cart by ID.
MongoDB provides several mechanisms to optimize performance, scaling, and reliability. As you contemplate your database design, consider each of the following options:
Indexing: Indexes improve performance for frequent queries by building a lookup index that can be easily sorted. The _id property of a collection is automatically indexed, since looking items up by ID is common practice. However, you also need to consider the other ways users access data and implement indexes that enhance those lookup methods as well.
Sharding: Sharding is the process of slicing up large collections of data so they can be split between multiple MongoDB servers in a cluster. Each MongoDB server is considered a shard. This provides the benefit of using multiple servers to support a high number of requests to a large system, thus providing horizontal scaling for your database. Look at the size of your data and the number of requests that will access it to determine whether, and how much, to shard your collections.
Replication: Replication is the process of duplicating data on multiple MongoDB instances in a cluster. When considering the reliability aspect of your database, you should implement replication to ensure that a backup copy of critical data is always readily available.
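To illustrate the idea behind the Indexing point above, the following sketch builds an in-memory lookup keyed on a field, which is conceptually what an index provides; in MongoDB you would instead declare the index with db.users.createIndex({ name: 1 }) (the collection and sample data here are hypothetical):

```javascript
// Sketch: the idea behind an index — a precomputed lookup keyed on a
// field, so queries avoid scanning every document in the collection.
const users = [
  { _id: 1, name: "Brad" },
  { _id: 2, name: "Caleb" }
];

// Build a Map from the indexed field to the document.
const nameIndex = new Map(users.map(u => [u.name, u]));

// Direct keyed lookup instead of a full collection scan.
console.log(nameIndex.get("Caleb")._id);
```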
Another important thing to consider when designing your MongoDB documents and collections is the number of collections that the design will result in. There isn’t a significant performance hit for having a large number of collections; however, there is a performance hit for having large numbers of items in the same collection. Consider ways to break up your larger collections into more consumable chunks.
For example, say that you store a history of user transactions in the database for past purchases. You recognize that for these completed purchases, you will never need to look them up together for multiple users. You only need them available for the user to look at his or her own history. If you have thousands of users who have a lot of transactions, then it makes sense to store those histories in a separate collection for each user.
One of the most commonly overlooked aspects of database design is the data life cycle. Specifically, how long should documents exist in a specific collection? Some collections have documents that should persist indefinitely, for example, active user accounts. However, keep in mind that each document in the system incurs a performance hit when querying a collection. You should define a TTL, or time-to-live, value for documents in each of your collections.
There are several ways to implement a time-to-live mechanism in MongoDB. One way is to implement code in your application to monitor and clean up old data. Another way is to use the MongoDB TTL setting on a collection, which allows you to define a profile where documents are automatically deleted after a certain number of seconds or at a specific clock time. For collections where you only need the most recent documents, you can implement a capped collection that automatically keeps the size of the collection small.
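The first approach, cleaning up old data in application code, can be sketched as follows (the helper name and data are hypothetical; MongoDB's built-in TTL is instead declared as an index option, along the lines of db.sessions.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })):

```javascript
// Sketch: application-level time-to-live cleanup — keep only documents
// younger than ttlSeconds, based on a createdAt timestamp field.
function purgeExpired(docs, ttlSeconds, now = Date.now()) {
  return docs.filter(d => now - d.createdAt.getTime() < ttlSeconds * 1000);
}

const now = Date.now();
const docs = [
  { msg: "old", createdAt: new Date(now - 7200 * 1000) },  // 2 hours old
  { msg: "fresh", createdAt: new Date(now - 60 * 1000) }   // 1 minute old
];

// With a 1-hour TTL, only the fresh document survives.
const kept = purgeExpired(docs, 3600, now);
console.log(kept.map(d => d.msg));
```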
Two more important things to consider when designing a MongoDB database are data use and how it will affect performance. The previous sections described different methods for solving some complexities of data size and optimization. The final things you should consider and even reconsider are data usability and performance. Ultimately, these are the two most important aspects of any web solution and, consequently, the storage behind it.
Data usability describes the ability of the database to satisfy the functionality of the website. First, you must make sure that the data can be accessed so that the website functions correctly. Users will not tolerate a website that simply does not do what they want it to. This also includes the accuracy of the data.
Then you can consider performance. Your database must be able to deliver the data at a reasonable rate. You can consult the previous sections when evaluating and designing the performance factors for your database.
In some more complex circumstances, you may find it necessary to evaluate data usability and then performance and then go back and evaluate usability again for a few cycles until you get the balance correct. Also, keep in mind that in today’s world, usability requirements can change at any time. Remembering that can influence how you design your documents and collections so that they can become more scalable in the future if necessary.
In this chapter you learned about MongoDB and design considerations for the structure of data and configuration of a database. You learned about collections, documents, and the types of data that can be stored in them. You also learned how to plan your data model, what questions you need to answer, and the mechanisms built in to MongoDB to satisfy the demands your database needs.
In the next chapter, you install MongoDB. You also learn how to use the MongoDB shell to set up user accounts and access collections and documents.