Node was built from the ground up to be an ideal runtime for high-capacity servers. As a result, the HTTP functions it comes with out of the box are thin abstractions over the TCP layer that provide streams of data that can be handled as rapidly as the packets arrive.
Of course, for most applications, stream-based functionality is overkill. We’re perfectly happy reading the whole request before we decide to handle it and then sending a complete response. Also, we want to be able to think in terms of individual routes such as /moon-unit-alpha/ and /moon-unit-zappa/, rather than parsing every request’s headers ourselves. These are basic abstractions that all Node web frameworks provide. Of these frameworks, the most popular is called Express.[49]
We need to install Express with npm:
| $ npm install --save express |
Once that’s done, starting an Express server is easy:
| express = require 'express' |
| app = express() |
| |
| # Start our Express server |
| port = process.env.PORT or 8520 |
| app.listen port, -> |
|   console.log "Now listening on port #{port}" |
With that, we have a fully operational web server, faithfully responding to every request with a 404 Not Found. We can make this server a whole lot more useful by telling it that it can serve all of the files in the public subdirectory of the directory server.js is in:
| app.use(express.static("#{__dirname}/public")) |
Our server is now half-finished! If you run it and connect to http://localhost:8520 in your browser, the server will try to serve public/index.html. We’ll set up our public files in the next section. But first, let’s finish our server by implementing a database layer and a RESTful API for our project’s data.
Which database to choose? There are countless high-quality open-source database projects these days. Debating SQL vs. NoSQL is all the rage (and drives many developers to rage), and choosing the right database to ensure a project’s future scalability is a nail-biting decision. Luckily, I’m writing a book about CoffeeScript, so I don’t have to make any such decision. Instead, I’m going to pick the simplest option: NeDB.[50] NeDB (Node Embedded Database) is a document store (similar to MongoDB) with no external dependencies. Guess how you install it? That’s right:
| $ npm install --save nedb |
We’ll tell NeDB to store our objects in three files, one for each of our collections (mirroring the schema that we used in localStorage in the previous chapter). We’ll also tell NeDB to index objects in each collection by their id property. This speeds up queries and, more importantly, ensures that objects remain uniquely identifiable by id:
| Datastore = require('nedb') |
| db = {} |
| ['boards', 'columns', 'cards'].forEach (collectionKey) => |
|   db[collectionKey] = new Datastore |
|     filename: "#{__dirname}/#{collectionKey}.db" |
|     autoload: true |
| |
|   db[collectionKey].ensureIndex {fieldName: 'id', unique: true} |
|   return |
Note the autoload option, which tells NeDB to buffer any commands it receives until it has opened the file on disk (or created the file if it didn’t already exist).
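To make the effect of autoload concrete, here's a minimal sketch (the example.db filename is hypothetical). Without autoload: true, we'd have to call loadDatabase() ourselves before issuing any queries; with it, a query issued immediately after construction simply waits until the file has been read:

| # With autoload: true, this find can be issued right away; |
| # NeDB queues it until the datafile has finished loading |
| Datastore = require('nedb') |
| store = new Datastore |
|   filename: "#{__dirname}/example.db" |
|   autoload: true |
| |
| store.find {}, (err, docs) -> |
|   throw err if err |
|   console.log "Loaded #{docs.length} documents" |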
This seems like a good place to create our initial data:
| # Set the initial board state if none already exists |
| db.boards.insert({ |
|   id: 1 |
|   name: 'New Board' |
| }) |
If a board with id: 1 already exists, the unique index on id causes the insert to fail; since we haven’t supplied a callback, the error is silently discarded.
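If we did pass a callback, we could observe the constraint violation directly. A sketch (assuming a board with id: 1 is already stored; the name value is illustrative):

| db.boards.insert {id: 1, name: 'Duplicate Board'}, (err) -> |
|   # err.errorType should be 'uniqueViolated' when id 1 is taken |
|   console.log err.errorType if err |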
All we need to do now is wire up some API endpoints. First, let’s install and use a convenient Express middleware that automatically parses JSON from POST and PUT request bodies:
| $ npm install --save body-parser |
| bodyParser = require('body-parser') |
| app.use(bodyParser.json()) |
Now for the endpoint definitions themselves. For our minimal RESTful API, we need a way to fetch whole collections (GET), a way to insert a new object into a collection (POST), and a way to update an existing object (PUT):
| ['boards', 'columns', 'cards'].forEach (collectionKey) => |
| |
|   # Endpoint to fetch the entire collection |
|   app.get "/#{collectionKey}", (req, res) => |
|     db[collectionKey].find {}, (err, collection) => |
|       throw err if err |
|       res.send(collection) |
|     return |
| |
|   # Endpoint to add a new object to the collection (assigns id) |
|   app.post "/#{collectionKey}", (req, res) => |
|     object = req.body |
|     db[collectionKey].count {}, (err, count) => |
|       throw err if err |
|       object.id = count + 1 |
|       db[collectionKey].insert object, (err) => |
|         throw err if err |
|         res.send(object) |
|     return |
| |
|   # Endpoint to update an existing object in the collection |
|   app.put "/#{collectionKey}/:id", (req, res) => |
|     query = {id: +req.params.id} |
|     object = req.body |
|     options = {} |
|     db[collectionKey].update query, object, options, (err) => |
|       throw err if err |
|       res.send(object) |
|     return |
Simple, isn’t it? Document stores are a pleasure to work with! Of course, in a production application, we’d want to add a number of additional steps to these endpoints. We’d want to verify that the requester has access. We’d want to perform schema validation to prevent irregular objects from populating the database. We’d want to enforce storage limits and throttle requests to prevent the user from overwhelming our service. But for this project, I’m content to leave these elegant little endpoints alone.
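Once the server is running, you can exercise these endpoints from the command line. Here's a sketch using curl (the port matches our default; the payload fields are illustrative):

| $ curl http://localhost:8520/boards |
| $ curl -X POST -H 'Content-Type: application/json' \ |
|     -d '{"name": "To Do"}' http://localhost:8520/columns |
| $ curl -X PUT -H 'Content-Type: application/json' \ |
|     -d '{"id": 1, "name": "Renamed Board"}' http://localhost:8520/boards/1 |

Note that because the PUT endpoint passes req.body to update with no modifiers like $set, NeDB replaces the stored document wholesale, so the client must send the complete object, id included.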
One more thing: we’ll have an easier time debugging if we take advantage of the source maps we’ve compiled. Node doesn’t apply source maps to stack traces out of the box. Instead, we need to install the source-map-support package and enable it while we’re in development mode:
| $ npm install --save source-map-support |
| # Read environment configuration |
| env = process.env.NODE_ENV or 'development' |
| |
| # In development mode, enable source map support |
| if env is 'development' |
|   require('source-map-support').install() |
In the next section, we’ll update our front-end code from the previous chapter to take advantage of this newly created API.