Creating a Nuxt App with a CMS and GraphQL

In the previous chapters, you have been creating APIs from scratch to work with Nuxt apps. Building a bespoke API can be rewarding and fulfilling, but it may not suit every situation, because building an API from the ground up is time-consuming. In this chapter, we are going to explore third-party systems that can provide the API services we need without us having to build them from scratch. Ideally, we want to use a system that can help us manage our content – a content management system (CMS).

WordPress and Drupal are popular CMSes. They are packed with APIs that are worth looking into. In this book, we will be using WordPress. Besides CMSes such as WordPress, we will also look into headless CMSes. A headless CMS is just like WordPress but is a pure API service without the frontend presentation, which can be done in Nuxt, just as we have been doing throughout this book. Keystone will be the headless CMS that we will explore in this book. However, the WordPress API and the Keystone API are two different kinds of API. Specifically, the former is a REST API, while the latter is a GraphQL API. But what are they? In short, a REST API is an API that uses HTTP requests to GET, PUT, POST, and DELETE data. The APIs you created in the previous chapters are REST APIs. A GraphQL API is an API that implements the GraphQL specification (a technical standard).

GraphQL APIs are an alternative to REST APIs. To demonstrate how we can deliver the same result using these two different kinds of API, we will use the sample Nuxt app website we provided in Chapter 4, Adding Views, Routes, and Transitions. This can be found in /chapter-4/nuxt-universal/sample-website/ in this book's GitHub repository. We will refactor the existing pages (home, about, projects, contact, and the project subpages), which consist of text and images (featured images, fullscreen images, and individual project images). We will also refactor the navigation by getting its data from the APIs instead of hardcoding it, just like we did for the other Nuxt apps in the previous chapters. With a CMS, we can get navigation data dynamically through the API, regardless of whether it is a REST or a GraphQL API.

Furthermore, we are going to generate static Nuxt pages (you learned about these in Chapter 14, Using Linters, Formatters, and Deployment Commands, and Chapter 15, Creating an SPA with Nuxt) with these CMSes. So, by the end of this chapter, you will have a full and final view of what you have learned throughout this book.

In this chapter, we will cover the following topics:

  • Creating headless REST APIs in WordPress
  • Introducing Keystone
  • Introducing GraphQL
  • Integrating Keystone, GraphQL, and Nuxt

Let's get started by looking into the WordPress REST API.

Creating headless REST APIs in WordPress

WordPress (WordPress.org) is an open source PHP CMS for general-purpose website development. It is not "headless" by default; it is stacked with a template system, which means the view and the data are intertwined. However, since 2015 (WordPress 4.4), the REST API infrastructure has been integrated into WordPress core for developers, and all the default endpoints can now be accessed by appending /wp-json/ to your site's base URL. You can also extend the WordPress REST API and add your own custom endpoints. So, we can easily use WordPress as a "headless" REST API by ignoring the view. You will find out how to achieve this in the upcoming sections. To speed up the development process, we will install the following WordPress plugins:

  • Advanced Custom Fields (ACF) for creating custom content fields with custom meta boxes (https://wordpress.org/plugins/advanced-custom-fields/)
  • Rewrite Rules Inspector for inspecting and flushing the WordPress rewrite rules (https://wordpress.org/plugins/rewrite-rules-inspector/)

You can create your own plugins and meta boxes if you prefer not to use any of these. Please check out how to create custom meta boxes at https://developer.wordpress.org/plugins/metadata/custom-meta-boxes/. Also, check out how to develop custom plugins at https://developer.wordpress.org/plugins/intro/.

For more information about the WordPress REST API, please visit https://developer.wordpress.org/rest-api/.

To develop and extend the WordPress REST API with these plugins or with yours, first, you will need to download WordPress and install the program on your machine. We'll learn how to do this in the next section.

Installing WordPress and creating our first pages

There are a few ways we can install and serve WordPress; for example, on a full web server stack such as Apache or Nginx, or simply with PHP's built-in web server.

We will use the built-in PHP server in this book as it is the simplest way to get WordPress started and will make it easier to move it around in the future if we need to, as long as it is served on the same port; for example, localhost:4000. So, let's find out how to do this:

  1. Create a directory (make it writable as well) and download and unzip WordPress in there. You can download WordPress from https://wordpress.org/. You should see some .php files with /wp-admin/, /wp-content/, and /wp-includes/ directories in your unzipped WordPress directory.
  2. Create a MySQL database (for example, nuxt-wordpress) through Adminer, a database management tool written in PHP (https://www.adminer.org/).
  3. Navigate to the directory and serve WordPress with built-in PHP, as follows:
$ php -S localhost:4000
  4. Point your browser to localhost:4000 and install WordPress with the required MySQL credentials (database name, username, and password) and your WordPress user account information (username, password, and email address).
  5. Log into the WordPress admin UI with your user credentials at localhost:4000/wp-admin/ and create some main pages (home, about, projects, contact) under the Pages label.
  6. Navigate to Menus from under Appearance and create the site navigation by adding menu-main to the Menu Name input field.
  7. Select all the pages (contact, about, projects, home) that appear under Add menu items and click Add to Menu to add them to menu-main as navigation items. You can drag and sort the items so that they are read in this order: home, about, projects, contact. Then, click the Save Menu button.
  8. (Optional) Change the WordPress permalinks from the Plain option to the Custom Structure option (with a value of /%postname%/, for example) in Permalinks under Settings.
  9. Download the plugins we mentioned previously and unpack them into the /plugins/ directory. This can be found inside the /wp-content/ directory. Then, activate them through the admin UI.

If you inspect the wp_options table in the nuxt-wordpress database, you should see that port 4000 is recorded successfully in the siteurl and home fields. So, from now on, you can move your WordPress project directory wherever you like, as long as you run it with the built-in PHP server at this port.

Although we have the data for our main pages and navigation in WordPress, we still need the data for the subpages of the Projects page. We could add them under the Pages label and then just attach them to the Projects page. But these pages would share an identical content type (which is called a post type in WordPress) – the page post type. It is better to organize them in a separate post type so that they can be managed more easily. We'll find out how to create custom post types in WordPress in the next section.

For more details about the WordPress installation process, please visit https://wordpress.org/support/article/how-to-install-wordpress/.

Creating custom post types in WordPress

We can create custom post types in WordPress from the functions.php file in any WordPress theme. However, since we are not going to use the WordPress template system to deliver the view for our content, we can just extend a child theme from the default theme that is provided by WordPress. Then, we can just activate the child theme in Themes, under Appearance. We'll use the "Twenty Nineteen" theme to extend our child theme and then create the custom post types from there. Let's get started:

  1. Create a directory called twentynineteen-child in the /themes/ directory and create a style.css file that contains the following content:
/* wp-content/themes/twentynineteen-child/style.css */
/*
Theme Name: Twenty Nineteen Child
Template: twentynineteen
Text Domain: twentynineteenchild
*/

@import url("../twentynineteen/style.css");

Theme Name, Template, and Text Domain are the minimum required header comments for extending a theme, followed by importing its parent's style.css file. These header comments must be put at the top of the file.

If you want to include more header comments in this child theme, please visit https://developer.wordpress.org/themes/advanced-topics/child-themes/.
  2. Create a functions.php file in the /twentynineteen-child/ directory and create the custom post type using this format and WordPress' register_post_type function, as follows:
// wp-content/themes/twentynineteen-child/functions.php
function create_something () {
  register_post_type('<name>', <args>);
}
add_action('init', 'create_something');

So, to add our custom post type, just use project as the type name and provide some arguments:

// wp-content/themes/twentynineteen-child/functions.php
function create_project_post_type () {
  register_post_type('project', $args);
}
add_action('init', 'create_project_post_type');

We can add labels and what content fields we want to support to the custom post type UI, as follows:

$args = [
  'labels' => [
    'name' => __('Project (Pages)'),
    'singular_name' => __('Project'),
    'all_items' => 'All Projects'
  ],
  //...
  'supports' => ['title', 'editor', 'thumbnail', 'page-attributes'],
];
For more information about the register_post_type function, please visit https://developer.wordpress.org/reference/functions/register_post_type/.

For more information about the custom post type UI, please visit https://wordpress.org/plugins/custom-post-type-ui/.
  3. (Optional) We can also add support for category and tag for this custom post type, as follows:
'taxonomies' => [
  'category',
  'post_tag'
],

However, these are global category and tag instances, which means they are shared with other post types such as the Page and Post post types. So, if you want to specify specific categories for the Project post type only, use the following code:

// wp-content/themes/twentynineteen-child/functions.php
add_action('init', 'create_project_categories');
function create_project_categories() {
  $args = [
    'label' => __('Categories'),
    'has_archive' => true,
    'hierarchical' => true,
    'rewrite' => [
      'slug' => 'project',
      'with_front' => false
    ],
  ];
  $postTypes = ['project'];
  $taxonomy = 'project-category';
  register_taxonomy($taxonomy, $postTypes, $args);
}
For more information about registering taxonomies, please visit https://developer.wordpress.org/reference/functions/register_taxonomy/.
  4. (Optional) It may be a good idea to disable the Gutenberg block editor completely for all post types if you find it difficult to use:
// wp-content/themes/twentynineteen-child/functions.php
add_filter('use_block_editor_for_post', '__return_false', 10);
add_filter('use_block_editor_for_post_type', '__return_false', 10);
  5. Activate the child theme in the WordPress admin UI and start adding project type pages to the Projects label.

You will notice that the content fields (title, editor, thumbnail, page-attributes) that you can use to add content to the project pages are very limited. We need more specific content fields, such as fields for adding multiple project images and a fullscreen image. We have the same issue with the home page, where we need another content field for adding multiple slide images. To add more of these content fields, we will need custom meta boxes. You can use the Advanced Custom Fields (ACF) plugin, create your own custom meta boxes and include them in the functions.php file, or create them as a plugin. Alternatively, you can use a different meta box plugin, such as Meta Box (https://metabox.io/). It is entirely up to you.

Once you have created the custom content fields and added the required content to each project page, you can extend the WordPress REST API for project pages, main pages, and navigation. We'll learn how to do this in the next section.

Extending the WordPress REST API

The WordPress REST API can be accessed through /wp-json/, which is the entry route appended to your site's base URL. For example, you can see all the available routes by pointing your browser to localhost:4000/wp-json/. You will also see what endpoints are available in every route, as these can be either GET or POST endpoints. For example, the /wp-json/wp/v2/pages route has a GET endpoint for listing pages and a POST endpoint for creating a page. You can find out more about these default routes and endpoints at https://developer.wordpress.org/rest-api/reference/.
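For example, here is a minimal sketch of how an HTTP client could consume the default pages route with axios (assuming WordPress is served at localhost:4000, as we set it up earlier); id, slug, and title are standard fields of the page objects in the response:

// e.g., list-pages.js (a hypothetical test script)
const axios = require('axios')

axios.get('http://localhost:4000/wp-json/wp/v2/pages')
  .then(({ data }) => {
    // each item is a page object with fields such as id, slug, and title
    data.forEach(page => console.log(page.id, page.slug))
  })
  .catch(err => console.error(err.message))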

However, if you have custom post types and custom content fields, then you will need custom routes and endpoints. We can create custom versions of these by registering them with the register_rest_route function in the functions.php file, as follows:

add_action('rest_api_init', function () {
  $args = [
    'methods' => 'GET',
    'callback' => '<do_something>',
  ];
  register_rest_route(<namespace>, <route>, $args);
});

Let's learn how to extend the WordPress REST API:

  1. Create a global namespace and endpoints for fetching the navigation and a single page:
// wp-content/themes/twentynineteen-child/functions.php
$namespace = 'api/v1/';

add_action('rest_api_init', function () use ($namespace) {
  $route = 'menu';
  $args = [
    'methods' => 'GET',
    'callback' => 'fetch_menu',
  ];
  register_rest_route($namespace, $route, $args);
});

add_action('rest_api_init', function () use ($namespace) {
  $route = 'page/(?P<slug>[a-zA-Z0-9-]+)';
  $args = [
    'methods' => 'GET',
    'callback' => 'fetch_page',
  ];
  register_rest_route($namespace, $route, $args);
});

Notice that we pass the global namespace to each block of add_action by using the PHP use keyword in the anonymous functions. For more information about the PHP use keyword and anonymous functions, please visit https://www.php.net/manual/en/functions.anonymous.php.

For more information about the register_rest_route function from WordPress, please visit https://developer.wordpress.org/reference/functions/register_rest_route/.
  2. Create endpoints for fetching a single project page and listing project pages:
// wp-content/themes/twentynineteen-child/functions.php
add_action('rest_api_init', function () use ($namespace) {
  $route = 'project/(?P<slug>[a-zA-Z0-9-]+)';
  $args = [
    'methods' => 'GET',
    'callback' => 'fetch_project',
  ];
  register_rest_route($namespace, $route, $args);
});

add_action('rest_api_init', function () use ($namespace) {
  $route = 'projects/(?P<page_number>\d+)';
  $args = [
    'methods' => 'GET',
    'callback' => 'fetch_projects',
  ];
  register_rest_route($namespace, $route, $args);
});
  3. Create a fetch_menu function for fetching the menu-main navigation items:
// wp-content/themes/twentynineteen-child/functions.php
function fetch_menu ($data) {
  $menu_items = wp_get_nav_menu_items('menu-main');

  if (empty($menu_items)) {
    return [];
  }

  return $menu_items;
}

We use the wp_get_nav_menu_items function from WordPress to help us fetch the navigation.

For more information about the wp_get_nav_menu_items function, please visit https://developer.wordpress.org/reference/functions/wp_get_nav_menu_items/.
  4. Create a fetch_page function for fetching a page by slug (or path):
// wp-content/themes/twentynineteen-child/functions.php
function fetch_page ($data) {
  $post = get_page_by_path($data['slug'], OBJECT, 'page');

  if (!count((array)$post)) {
    return [];
  }
  $post->slides = get_field('slide_items', $post->ID);

  return $post;
}

Here, we use the get_page_by_path function from WordPress to fetch the page. For more information about this function, please visit https://developer.wordpress.org/reference/functions/get_page_by_path/.

We also use the get_field function from the ACF plugin to fetch the list of slide images that are attached to the page and then push them to the $post object as slides. For more information about this function, please visit https://www.advancedcustomfields.com/resources/get_field/.

  5. Create a fetch_project function in order to fetch a single project page:
// wp-content/themes/twentynineteen-child/functions.php
function fetch_project ($data) {
  $post = get_page_by_path($data['slug'], OBJECT, 'project');

  if (!count((array)$post)) {
    return [];
  }
  $post->fullscreen = get_field('full_screen_image', $post->ID);
  $post->images = get_field('image_items', $post->ID);

  return $post;
}

Again, we use the WordPress get_page_by_path function for fetching a page for us and the ACF get_field function for fetching images (the fullscreen image and project images) attached to the project page and then push them to the $post object as fullscreen and images.

  6. Create a fetch_projects function for fetching a list of project pages, six items per page:
// wp-content/themes/twentynineteen-child/functions.php
function fetch_projects ($data) {
  $paged = $data['page_number'] ? $data['page_number'] : 1;
  $posts_per_page = 6;
  $post_type = 'project';
  $args = [
    'post_type' => $post_type,
    'post_status' => ['publish'],
    'posts_per_page' => $posts_per_page,
    'paged' => $paged,
    'orderby' => 'date'
  ];
  $posts = get_posts($args);

  if (empty($posts)) {
    return [];
  }

  foreach ($posts as &$post) {
    $post->featured_image = get_the_post_thumbnail_url($post->ID);
  }
  return $posts;
}

Here, we used the get_posts function from WordPress with the required arguments to fetch the list. For more information about this function, please visit https://developer.wordpress.org/reference/functions/get_posts/.

Then, we loop over each project page and attach its featured image, fetched with the get_the_post_thumbnail_url function from WordPress, to the $post object. For more information about this function, please visit https://developer.wordpress.org/reference/functions/get_the_post_thumbnail_url/.

  7. We also need to compute the data for the pagination of the project pages (the previous and next page numbers). So, instead of just returning $posts, return them as items in the following array, along with the pagination data:
$total = wp_count_posts($post_type);
$total_max_pages = ceil($total->publish / $posts_per_page);

return [
  'items' => $posts,
  'total_pages' => $total_max_pages,
  'current_page' => (int)$paged,
  'next_page' => (int)$paged === (int)$total_max_pages ? null : $paged + 1,
  'prev_page' => (int)$paged === 1 ? null : $paged - 1,
];

Here, we used the wp_count_posts function to count the total published project pages. For more information about this function, please visit https://developer.wordpress.org/reference/functions/wp_count_posts/.

  8. Log into the WordPress admin UI, go to Rewrite Rules under Tools, and click the Flush Rules button to refresh the WordPress rewrite rules.
  9. Go to your browser and test the custom API routes that you have just created:
/wp-json/api/v1/menu
/wp-json/api/v1/page/<slug>
/wp-json/api/v1/projects/<number>
/wp-json/api/v1/project/<slug>

You should see a bunch of JSON raw data printed on your browser screen. The JSON raw data can be difficult to read, but you can use JSONLint, a JSON validator, for pretty-printing your data at https://jsonlint.com/. Alternatively, you can just use Firefox, which has the option to pretty-print your data.
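If you prefer the terminal, the following is a quick sketch that fetches the custom menu route and pretty-prints the JSON with JSON.stringify (assuming WordPress is still served at localhost:4000):

// e.g., test-menu.js (a hypothetical test script)
const axios = require('axios')

axios.get('http://localhost:4000/wp-json/api/v1/menu')
  .then(({ data }) => {
    // the third argument indents the output by two spaces
    console.log(JSON.stringify(data, null, 2))
  })
  .catch(err => console.error(err.message))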

You can find the entire code for this in /chapter-18/cross-domain/backend/wordpress/, in this book's GitHub repository. You can find a sample database (nuxt-wordpress.sql) in it too. In this sample database, the default username and password for logging into the WordPress admin UI are both admin.

Well done! You have successfully extended the WordPress REST API so that it supports custom post types. We don't need to develop any new theme in WordPress to view our content because this will be handled by Nuxt. We can keep WordPress' existing themes for previewing the content. This means we are only using WordPress to host our site content remotely, including all the media files (images, videos, and so on). Furthermore, we can generate static pages using Nuxt (just like we did in the previous chapters) and stream all the media files from WordPress to our Nuxt project so that we can host them locally. We'll learn how to do this in the next section.

Integrating with Nuxt and streaming images from WordPress

Integrating Nuxt with the WordPress REST API is similar to when you integrated with the cross-domain APIs you learned about and created in the previous chapters. However, in this section, we will improve the plugin that we use to load images by requiring them from the /assets/ directory. But since our images are uploaded to the WordPress CMS and are kept in the /uploads/ directory in our WordPress project, we need to refactor our asset loader plugin so that it requires the images from the /assets/ directory when they are found in there; otherwise, we just load them remotely from WordPress. Let's get started:

  1. Set the remote URL for the Axios instance in the Nuxt config file, as follows:
// nuxt.config.js
const protocol = 'http'
const host = process.env.NODE_ENV === 'production' ? 'your-domain.com' : 'localhost'
const ports = {
  local: '3000',
  remote: '4000'
}
const remoteUrl = protocol + '://' + host + ':' + ports.remote

module.exports = {
  env: {
    remoteUrl: remoteUrl,
  }
}
  2. Create an Axios instance and inject it into the Nuxt context directly as $axios. Also, add this Axios instance to the app in the context by using the inject function:
// plugins/axios.js
import axios from 'axios'

let baseURL = process.env.remoteUrl
const api = axios.create({ baseURL })

export default (ctx, inject) => {
  ctx.$axios = api
  inject('axios', api)
}
  3. Refactor the asset loader plugin, as follows:
// plugins/utils.js
import Vue from 'vue'

Vue.prototype.$loadAssetImage = src => {
  var array = src.split('/')
  var last = [...array].pop()
  if (process.server && process.env.streamRemoteResource === true) {
    var { streamResource } = require('~/assets/js/stream-resource')
    streamResource(src, last)
    return
  }

  try {
    return require('~/assets/images/' + last)
  } catch (e) {
    return src
  }
}

Here, we split the image URL string into an array, get the image's filename (for example, my-image.jpg) from the last item in the array, and store it in the last variable. We then require the image locally using the filename (last). If an error is thrown, that means the image does not exist in the /assets/ directory, so we just return the image's URL (src) as it is.

However, we will stream the image from the remote URL to the /assets/ directory using a streamResource function when our app is running on the server side and the streamRemoteResource option is true. You will find out how to create this option (just like the remoteUrl option) in the upcoming steps.

  4. Create a stream-resource.js file with the streamResource function in the /assets/ directory, as follows:
// assets/js/stream-resource.js
import axios from 'axios'
import fs from 'fs'

export const streamResource = async (src, last) => {
  const file = fs.createWriteStream('./assets/images/' + last)
  const { data } = await axios({
    url: src,
    method: 'GET',
    responseType: 'stream'
  })
  data.pipe(file)
}

In this function, we use plain Axios to request the data of the remote resource by specifying stream as the response type. We then use the createWriteStream function from the Node.js built-in File System (fs) package with the necessary filepath to create the image in the /assets/ directory.

For more information about the fs package and its createWriteStream function, please visit https://nodejs.org/api/fs.html and https://nodejs.org/api/fs.html#fs_fs_createwritestream_path_options.

For more information about the Node.js stream's pipe event in the response data and the Node.js stream itself, please visit https://nodejs.org/api/stream.html#stream_event_pipe and https://nodejs.org/api/stream.html#stream_stream.
  5. Register both plugins in the Nuxt config file:
// nuxt.config.js
plugins: [
  '~/plugins/axios.js',
  '~/plugins/utils.js',
],
  6. Refactor the home page's index.vue in the /pages/ directory in order to use these two plugins, as follows:
// pages/index.vue
async asyncData ({ error, $axios }) {
  let { data } = await $axios.get('/wp-json/api/v1/page/home')
  return {
    post: data
  }
}

<template v-for="slide in post.slides">
  <img :src="$loadAssetImage(slide.image.sizes.medium_large)">
</template>

Here, we used $axios from our plugin to request the WordPress API. After receiving the data, we populated it in the <template> block. The $loadAssetImage function is used to run logic on how to load and process the image for us.

The rest of the pages in the /pages/ directory should be refactored following the same pattern we used for the home page. They are /about.vue, /contact.vue, /projects/index.vue, /projects/_slug.vue, and /projects/pages/_number.vue. You also need to do this for the component in the /components/ directory; that is, /projects/project-items.vue. You can find the repository path to these completed files in the GitHub repositories provided at the end of this section.

  7. Create another script command with a custom environment variable, NUXT_ENV_GEN, and put stream as its value in the package.json file in our Nuxt project:
// package.json
"scripts": {
  "generate": "nuxt generate",
  "stream": "NUXT_ENV_GEN=stream nuxt generate"
}

In Nuxt, if you create an environment variable prefixed with NUXT_ENV_ in the package.json file, it will be injected into the Node.js process environment automatically. You can then access it throughout the app via the process.env object, along with any other custom properties you set in the env property in the Nuxt config file.

For more information about the env property in Nuxt, please visit https://nuxtjs.org/api/configuration-env/.
  8. Define the streamRemoteResource option for the asset loader plugin (which we refactored in step 3) in the env property in the Nuxt config file, as follows:
// nuxt.config.js
env: {
  streamRemoteResource: process.env.NUXT_ENV_GEN === 'stream' ? true : false
},

This streamRemoteResource option will be set to true when we get the stream value from the NUXT_ENV_GEN environment variable; otherwise, it is always set to false. So, when this option is set to true, the asset loader plugin will start streaming the remote resources to the /assets/ directory for us.

  9. (Optional) If the Nuxt crawler fails to detect the dynamic routes for some reason, then generate these routes manually in the generate option in the Nuxt config file, as follows:
// nuxt.config.js
import axios from 'axios'
export default {
  generate: {
    routes: async function () {
      const projects = await axios.get(remoteUrl + '/wp-json/api/v1/projects')
      const routesProjects = projects.data.map((project) => {
        return {
          route: '/projects/' + project.post_name,
          payload: project
        }
      })

      let totalMaxPages = Math.ceil(routesProjects.length / 6)
      let pagesProjects = []
      Array(totalMaxPages).fill().map((item, index) => {
        pagesProjects.push({
          route: '/projects/pages/' + (index + 1),
          payload: null
        })
      })

      const routes = [ ...routesProjects, ...pagesProjects ]
      return routes
    }
  }
}

In this optional step, we used Axios to fetch all the child pages that belong to the project post type and used the JavaScript map method to loop over these pages to generate their routes. Then, we took the number of child pages and divided it by six (six items per page) to work out the maximum number of pages (totalMaxPages). After that, we converted the totalMaxPages number into an array by using the JavaScript Array object and used the JavaScript fill, map, and push methods to loop over the array to generate the dynamic routes for the pagination. Lastly, we concatenated the routes from the child pages and the pagination with the JavaScript spread operator and returned them as a single array so that Nuxt can generate the dynamic routes for us.

  10. Run the stream command first, followed by the generate command, on your terminal, as follows:
$ npm run stream && npm run generate

We use the stream command to stream the remote resources to the /assets/ directory while generating the first batch of static pages, and then the generate command to regenerate the static pages. At this point, webpack will process the images in the /assets/ directory and export them to the /dist/ folder with the static pages. So, after running these two commands, you should see that the remote resources have been streamed and processed in /assets/ and /dist/. You can navigate to these two directories and inspect the downloaded resources.

You can find the Nuxt app of this section in /chapter-18/cross-domain/frontend/nuxt-universal/nuxt-wordpress/axios-vanilla/ in this book's GitHub repository.

Well done! You have successfully integrated Nuxt with the WordPress REST API and streamed remote resources for static pages. WordPress may not be everyone's choice since it does not comply with PHP Standards Recommendations (PSRs) (https://www.php-fig.org/) and has its own way of getting things done. But it was released in 2003 before PSR and many modern PHP frameworks. It has been able to support countless businesses and individuals ever since. Of course, it has evolved and offers one of the most user-friendly admin UIs for editors and developers alike.

If this hasn't convinced you to use WordPress as an API, there are other options. In the next section, we are going to look at an alternative to REST APIs (GraphQL APIs) and an alternative to WordPress in Node.js (Keystone). Keystone uses GraphQL to deliver its API. Before diving into GraphQL, we'll take a look at Keystone and learn how to develop a customized CMS with it.

Introducing Keystone

Keystone is a scalable headless CMS for building GraphQL APIs in Node.js. It is open source and equipped with a very decent admin UI where you can manage your content. Just like WordPress, you can create custom content types in Keystone, called lists, and then query your content through the GraphQL API. You create lists in source code, just like you create REST APIs: you add only what you need for your API, which makes it highly scalable and extensible.

To use Keystone, first, you need to prepare a database for storing your content. Keystone supports MongoDB and PostgreSQL. You need to install and configure one of them and then find out the connection string for Keystone. You learned about MongoDB in Chapter 9, Adding a Server-Side Database, so using it again as the database for Keystone should not be an issue for you. But what about PostgreSQL? Let's find out.

For more information about Keystone, please visit https://www.keystonejs.com/.

Installing and securing PostgreSQL (Ubuntu)

PostgreSQL, also known as Postgres, is an object-relational database system, often compared with MySQL, which is a (purely) relational database management system (RDBMS). Both are open source and use tables but have their differences.

For example, Postgres is largely SQL compliant, while MySQL is only partially compliant; MySQL tends to be faster for reads, while PostgreSQL performs better with complex queries. For more information about Postgres, please visit https://www.postgresql.org/.

You can install Postgres on many different operating systems, including Linux, macOS, and Windows. Depending on your operating system, you can follow the official guide at https://www.postgresql.org/download/ to install it on your machine. We will show you how to install and secure it on Linux, specifically Ubuntu, in the following steps:

  1. Update your local package index and install Postgres from Ubuntu's default repositories using Ubuntu's apt packaging system:
$ sudo apt update
$ sudo apt install postgresql postgresql-contrib
  2. Verify Postgres by checking its version:
$ psql -V

If you get an output similar to the following, this means you have installed it successfully:

psql (PostgreSQL) 12.2 (Ubuntu 12.2-2.pgdg19.10+1)

The number 12 indicates you have Postgres version 12 on your machine.

  3. Enter the Postgres shell from your terminal:
$ sudo -u postgres psql

You should get an output similar to the following on your terminal:

psql (12.2 (Ubuntu 12.2-2.pgdg19.10+1))
Type "help" for help.

postgres=#
  4. List the default users using the psql \du command:
postgres=# \du

You should get two default users, as follows:

Role name 
-----------
postgres
root

We will add a new administrative user (or role) to the list using an interactive prompt on our terminal. However, we need to exit the Postgres shell first:

postgres=# \q
  5. Type in the following command with the --interactive flag:
$ sudo -u postgres createuser --interactive

You should see the following two questions regarding the name of the new role and whether the role should have superuser permissions:

Enter name of role to add: user1
Shall the new role be a superuser? (y/n) y

Here, we called the new user user1. It has superuser permissions, just like the default users do.

  6. Log into the Postgres shell with sudo -u postgres psql to verify the new user with the \du command. You should see that it has been added to the list.
  7. Add a password to the new user with the following SQL query:
ALTER USER user1 PASSWORD 'password';

If you get the following output, then you have successfully added a password for this user:

ALTER ROLE
  8. Exit the Postgres shell. Now, you can use PHP's Adminer (https://www.adminer.org/) to log into Postgres with this user and, from there, add a new database that will be required when you install Keystone later. Then, you can use the following format for the Postgres connection string for the database you have just created:
postgres://<username>:<password>@localhost/<dbname>

Note that a password is always required for any user to log into the database from Adminer for security reasons. So, it is a good practice to add security to your database, especially for production, regardless of whether it is a MySQL, Postgres, or MongoDB database. What about MongoDB? You learned to install and use it in previous chapters, but it hasn't been secured yet. We'll find out how to do this in the next section.
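As a quick sanity check, here is a minimal sketch that verifies such a connection string from Node.js with the pg package (npm i pg); user1 and password are the example credentials from this section, while nuxt-keystone is just a hypothetical database name:

// e.g., test-postgres.js (a hypothetical test script)
const { Client } = require('pg')

const client = new Client({
  connectionString: 'postgres://user1:password@localhost/nuxt-keystone'
})

client.connect()
  .then(() => client.query('SELECT version()'))
  .then((res) => {
    // prints the PostgreSQL server version if the credentials are valid
    console.log(res.rows[0].version)
    return client.end()
  })
  .catch(err => console.error(err.message))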

Installing and securing MongoDB (Ubuntu)

By now, you should know how to install MongoDB. So, in this section, we will focus on securing databases in MongoDB. To secure MongoDB, we will start by adding an administrative user to MongoDB, as follows:

  1. Connect to the Mongo shell from your terminal:
$ mongo
  2. Select the admin database and add a new user with a username and password (for example, root and password) to this database, as follows:
> use admin
> db.createUser(
    {
      user: "root",
      pwd: "password",
      roles: [
        { role: "userAdminAnyDatabase", db: "admin" },
        "readWriteAnyDatabase"
      ]
    }
  )
  3. Exit the shell and open the MongoDB configuration file from your terminal:
$ sudo nano /etc/mongod.conf
  4. Look for the security section, remove the hash (uncomment it), and add the authorization setting, as shown here:
# /etc/mongod.conf
security:
  authorization: "enabled"
  5. Save and exit the file and restart MongoDB:
$ sudo systemctl restart mongod
  6. Verify the configuration by checking the status of MongoDB:
$ sudo systemctl status mongod

If you see an "active" status, that means you have configured it correctly.

  7. Log in as "root" with the password and the --authenticationDatabase option. Also, supply the name of the database where the user is stored, which is "admin" in this case:
$ mongo --port 27017 -u "root" -p "password" --authenticationDatabase "admin"
  8. Create a new database (for example, test) and attach a new user to it:
> use test
> db.createUser(
    {
      user: "user1",
      pwd: "password",
      roles: [ { role: "readWrite", db: "test" } ]
    }
  )
  9. Exit and test the database by logging in as user1:
$ mongo --port 27017 -u "user1" -p "password" --authenticationDatabase "test"
  10. Test whether you can access this test database but not other databases:
> show dbs

If you receive no output, that means you are only authorized to access this database after authentication. You can use the following format to supply the MongoDB connection string for Keystone or any other apps (for example, Express, Koa, and so on):

mongodb://<username>:<password>@localhost:27017/<dbname>

Again, it is good practice to add security to the database, especially for production, but it is easier and faster to develop apps with MongoDB without authentication enabled. You can always disable it for local development and just enable it in the production server.
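To confirm that authentication also works from Node.js, here is a minimal sketch using Mongoose (npm i mongoose), with the user1 credentials and the test database from the preceding steps:

// e.g., test-mongo.js (a hypothetical test script)
const mongoose = require('mongoose')

const uri = 'mongodb://user1:password@localhost:27017/test'

mongoose.connect(uri, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => {
    // the connection only succeeds if the credentials are valid
    console.log('Connected as user1')
    return mongoose.disconnect()
  })
  .catch(err => console.error(err.message))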

Now, both database systems (Postgres and MongoDB) are ready and you can choose either of them to build your Keystone app. So, let's get to it!

Installing and creating Keystone apps

There are two ways to start a Keystone project: from scratch, or by using the Keystone scaffolding tool known as keystone-app. If you are going to do it from scratch, you need to install all Keystone-related packages manually. These include the minimum required Keystone packages and the additional Keystone packages that you need to build your app. Let's take a look at this manual installation:

  1. Create a project directory and install the minimum required packages – the Keystone package itself, the Keystone GraphQL package (which is considered as an app in Keystone), and a database adapter:
$ npm i @keystonejs/keystone
$ npm i @keystonejs/app-graphql
$ npm i @keystonejs/adapter-mongoose
  2. Install the additional Keystone packages that you need, such as the Keystone Admin UI package (which is considered an app in Keystone) and the Keystone field package for registering lists:
$ npm i @keystonejs/app-admin-ui
$ npm i @keystonejs/fields
  3. Create an empty index.js file in your root directory and import the packages you have just installed:
// index.js
const { Keystone } = require('@keystonejs/keystone')
const { GraphQLApp } = require('@keystonejs/app-graphql')
const { AdminUIApp } = require('@keystonejs/app-admin-ui')
const { MongooseAdapter } = require('@keystonejs/adapter-mongoose')
const { Text } = require('@keystonejs/fields')
  4. Create a new instance of Keystone and pass a new instance of the database adapter to it, as follows:
const keystone = new Keystone({
  name: 'My Keystone Project',
  adapter: new MongooseAdapter({ mongoUri: 'mongodb://localhost/your-db-name' }),
})
Check out the following guide to learn how to configure the Mongoose adapter: https://www.keystonejs.com/keystonejs/adapter-mongoose/. We will cover this again when we install Keystone with the scaffolding tool. 
  5. Create a simple list (a Page list, for example) and define the fields that you will need in order to store the data for every single item in this list:
keystone.createList('Page', {
  fields: {
    name: { type: Text },
  },
})

It is a convention to capitalize the name of the list for GraphQL. We will cover this soon.

  6. Export the keystone instance and the apps so that they can be executed:
module.exports = {
  keystone,
  apps: [new GraphQLApp(), new AdminUIApp()]
}
  7. Create a package.json file (if you haven't done so already) and add the following keystone command to the scripts, as follows:
"scripts": {
  "dev": "keystone"
}
  8. Start the app by running the dev script on your terminal:
$ npm run dev

You should see the following output on your terminal. This means you have started the app successfully:

 Command: keystone dev
✓ Validated project entry file ./index.js
✓ Keystone server listening on port 3000
✓ Initialised Keystone instance
✓ Connected to database
✓ Keystone instance is ready at http://localhost:3000
∞ Keystone Admin UI: http://localhost:3000/admin
∞ GraphQL Playground: http://localhost:3000/admin/graphiql
∞ GraphQL API: http://localhost:3000/admin/api

Well done! You have your first and simplest Keystone app up and running. In this app, you have a GraphQL API at localhost:3000/admin/api, a GraphQL Playground at localhost:3000/admin/graphiql, and a Keystone Admin UI at localhost:3000/admin. But how do we use the GraphQL API and GraphQL Playground? Rest assured, we will get to that in the upcoming section.

It is not difficult at all to start a new Keystone app, is it? You just need to install what Keystone requires and what you need. However, the easiest way to kick off a Keystone app is by using the scaffolding tool. The benefit of using the scaffolding tool is that it comes with some optional samples of Keystone apps during the installation process and they can be very useful as guides and templates. These optional samples are as follows:

  • Starter: This example demonstrates basic user authentication using Keystone.
  • Todo: This example demonstrates a simple app for adding items to a Todo list, along with some frontend integration (HTML, CSS, and JavaScript).
  • Blank: This example provides a basic starting point, along with the Keystone Admin UI, GraphQL API, and GraphQL Playground. These are just like the ones in the manual installation but without the Keystone field package.
  • Nuxt: This example demonstrates a simple integration with Nuxt.js.

We will go for the blank option because it provides us with the basic packages we need so that we can build our lists on top of them. Let's take a look:

  1. Create a fresh Keystone app with any name on your terminal:
$ npm init keystone-app <app-name>
  2. Answer the questions that Keystone asks, as follows:
✓ What is your project name?
✓ Select a starter project: Starter / Blank / Todo / Nuxt
✓ Select a database type: MongoDB / PostgreSQL
  3. After the installation is complete, move into your project directory:
$ cd <app-name>
  4. If you are using secured Postgres, then just provide the connection string, along with the username, password, and database for Keystone:
// index.js
const adapterConfig = { knexOptions: { connection: 'postgres://<username>:<password>@localhost/<dbname>' } }

Note that you just have to remove <username>:<password>@ from the string if you don't have authentication enabled. Then, run the following command to install the database tables:

$ npm run create-tables
For more information about the Knex database adapter, please visit https://www.keystonejs.com/quick-start/adapters or visit knex.js at http://knexjs.org/. It is a query builder for PostgreSQL, MySQL, and SQLite3.
  5. If you are using secured MongoDB, then just provide the connection string, along with the username, password, and database for Keystone:
// index.js
const adapterConfig = { mongoUri: 'mongodb://<username>:<password>@localhost:27017/<dbname>' }

Note that you just have to remove <username>:<password>@ from the string if you don't have authentication enabled.

For more information about the Mongoose database adapter, please visit https://www.keystonejs.com/keystonejs/adapter-mongoose/ or visit Mongoose at https://mongoosejs.com/. MongoDB is a schemaless database system by nature, so this adapter is used as a schema solution to model the data in our app.
  6. Change the default server port from 3000 to 4000 to serve the Keystone app. You can do this by simply adding PORT=4000 to the dev script, as follows:
// package.json
"scripts": {
  "dev": "cross-env NODE_ENV=development PORT=4000 ...",
}

The reason we changed the port for Keystone to 4000 is because we are reserving port 3000 for Nuxt apps.

  7. Install nodemon in our project. This will allow us to monitor changes in our Keystone app so that it can reload the server for us:
$ npm i nodemon --save-dev
  8. After installing this package, add the nodemon --exec command to the dev script, as follows:
// package.json
"scripts": {
  "dev": "... nodemon --exec keystone dev",
}
For more information about nodemon, please visit https://nodemon.io/.
  9. Start the development server for our Keystone app with the following command:
$ npm run dev

You should see the following output on your terminal. This means you have installed the Keystone app successfully:

✓ Keystone instance is ready at http://localhost:4000
∞ Keystone Admin UI: http://localhost:4000/admin
∞ GraphQL Playground: http://localhost:4000/admin/graphiql
∞ GraphQL API: http://localhost:4000/admin/api

This is the same as performing the manual installation but on a different port. In this app, you have a GraphQL API at localhost:4000/admin/api, a GraphQL Playground at localhost:4000/admin/graphiql, and a Keystone Admin UI at localhost:4000/admin. Before we can do anything with the GraphQL API and GraphQL Playground, we must add lists to our Keystone app and start injecting data from the Keystone Admin UI. We'll start adding lists and fields to the app in the next section.

You can find the apps we created from both of these installation techniques in /chapter-18/keystone/ in this book's GitHub repository.

Creating lists and fields

In Keystone, lists are schemas. A schema is a data model that has types that describe our data. It is the same in Keystone: a list schema is composed of fields that have types to describe the data they accept, just like we had in the manual installation, in which we have a Page list composed of a single name field with a Text type.

There are many different field types in Keystone, such as File, Float, Checkbox, Content, DateTime, Slug, and Relationships. You can find out about the rest of the Keystone field types that you need in their documentation at https://www.keystonejs.com/.

To add fields and their types to the list, you just have to install the Keystone packages that hold those field types in your project directory. For example, the @keystonejs/fields package holds the Checkbox, Text, Float, and DateTime field types, among others. You can find out about the rest of the field types at https://www.keystonejs.com/keystonejs/fields/fields. After you have the required field type packages installed, you can just import them and unpack the field types you need by using the JavaScript destructuring assignment for list creation.
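For instance, here is a minimal sketch of unpacking a couple of field types for a hypothetical Todo list (the Todo list is not part of our app; it just illustrates the pattern):

// a hypothetical example, assuming a keystone instance as in the previous section
const { Text, Checkbox } = require('@keystonejs/fields')

keystone.createList('Todo', {
  fields: {
    // each field gets a type unpacked from the fields package
    name: { type: Text, isRequired: true },
    done: { type: Checkbox },
  },
})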

However, lists can grow over time, which means they can become messy and difficult to keep up with. So, it is a good idea to create lists in separate files in a /list/ directory for better maintainability, as follows:

// lists/Page.js
const { Text } = require('@keystonejs/fields')

module.exports = {
  fields: {...},
}

Then, you just have to import it into the index.js file. So, let's find out what schema/lists and other Keystone packages we need to build our Keystone app. The lists that we are going to create are as follows:

  • A Page schema/list for storing main pages such as home, about, contact, and projects
  • A Project schema/list for storing project pages
  • An Image schema/list for storing images for main and project pages
  • A Slide Image schema/list for storing images for main pages only
  • A Nav Link schema/list for storing the site links

The Keystone packages that we are going to use to create these lists are as follows:

  • @keystonejs/app-static for serving static files, such as the uploaded images
  • @keystonejs/file-adapters for storing uploaded files locally
  • @keystonejs/fields-wysiwyg-tinymce for the Wysiwyg field type

Now, let's install and use them to create our lists:

  1. Install the Keystone packages that we mentioned previously via npm:
$ npm i @keystonejs/app-static
$ npm i @keystonejs/file-adapters
$ npm i @keystonejs/fields-wysiwyg-tinymce
  2. Import @keystonejs/app-static into index.js and define the path and the folder name where you want to keep the static files:
// index.js
const { StaticApp } = require('@keystonejs/app-static');

module.exports = {
  apps: [
    new StaticApp({
      path: '/public',
      src: 'public'
    }),
  ],
}
  3. Create a File.js file in the /lists/ directory. Then, define the fields for the Image list using the File, Text, and Slug field types from @keystonejs/fields and LocalFileAdapter from @keystonejs/file-adapters. This will allow you to upload files to the local location; that is, /public/files/:
// lists/File.js
const { File, Text, Slug } = require('@keystonejs/fields')
const { LocalFileAdapter } = require('@keystonejs/file-adapters')

const fileAdapter = new LocalFileAdapter({
  src: './public/files',
  path: '/public/files',
})

module.exports = {
  fields: {
    title: { type: Text, isRequired: true },
    alt: { type: Text },
    caption: { type: Text, isMultiline: true },
    name: { type: Slug },
    file: { type: File, adapter: fileAdapter, isRequired: true },
  }
}

In the preceding code, we defined a list of fields (title, alt, caption, name, and file) so that we can store the meta-information about every uploaded file. It is good practice to have the name field in every list schema so that we can store a unique name in this field and use it as the label in the Keystone Admin UI. We can use it to identify each injected list item easily. To generate a unique name for this field, we can use the Slug type, which, by default, generates the unique name from the title field.

For more information about the field types that we used in the preceding code, please visit the following links:

  • File: https://www.keystonejs.com/keystonejs/fields/src/types/file/
  • Text: https://www.keystonejs.com/keystonejs/fields/src/types/text/
  • Slug: https://www.keystonejs.com/keystonejs/fields/src/types/slug/

For more information about LocalFileAdapter, please visit https://www.keystonejs.com/keystonejs/file-adapters/localfileadapter.

Alternatively, our app files can be uploaded to Cloudinary using CloudinaryFileAdapter. For more information about how to set up an account so that you can host files on Cloudinary, please visit https://cloudinary.com/.
  4. Create a SlideImage.js file in the /lists/ directory and define the fields that are identical to the ones in the File.js file, with an additional field type, Relationship, so that you can link the slide image to the project page:
// lists/SlideImage.js
const { Relationship } = require('@keystonejs/fields')

module.exports = {
  fields: {
    // ...
    link: { type: Relationship, ref: 'Project' },
  },
}
For more information about the Relationship field, please visit https://www.keystonejs.com/keystonejs/fields/src/types/relationship/.
  5. Create a Page.js file in the /lists/ directory and define the fields for the Page list using the Text, Relationship, Slug, and Wysiwyg field types from @keystonejs/fields and @keystonejs/fields-wysiwyg-tinymce, as follows:
// lists/Page.js
const { Text, Relationship, Slug } = require('@keystonejs/fields')
const { Wysiwyg } = require('@keystonejs/fields-wysiwyg-tinymce')

module.exports = {
  fields: {
    title: { type: Text, isRequired: true },
    excerpt: { type: Text, isMultiline: true },
    content: { type: Wysiwyg },
    name: { type: Slug },
    featuredImage: { type: Relationship, ref: 'Image' },
    slideImages: { type: Relationship, ref: 'SlideImage', many: true },
  },
}

In the preceding code, we defined a list of fields (title, excerpt, content, name, featuredImage, and slideImages) so that we can store the data of every main page that we will inject into this content type. Note that we link featuredImage to the Image list and link slideImages to the SlideImage list. We want to allow multiple images to be placed in the slideImages field, so we set the many option to true.

For more information about these one-to-many and many-to-many relationships, please visit https://www.keystonejs.com/guides/new-schema-cheatsheet.
  6. Create a Project.js file in the /lists/ directory and define the fields for the Project list, which are identical to the ones in the Page.js file, with two additional fields (fullscreenImage and projectImages):
// lists/Project.js
const { Text, Relationship, Slug } = require('@keystonejs/fields')
const { Wysiwyg } = require('@keystonejs/fields-wysiwyg-tinymce')

module.exports = {
  fields: {
    //...
    fullscreenImage: { type: Relationship, ref: 'Image' },
    projectImages: { type: Relationship, ref: 'Image', many: true },
  },
}
  7. Create a NavLink.js file in the /lists/ directory and define the fields (title, order, name, link, subLinks) for the NavLink list using the Text, Relationship, Slug, and Integer field types from @keystonejs/fields, as follows:
// lists/NavLink.js
const { Text, Relationship, Slug, Integer } = require('@keystonejs/fields')

module.exports = {
  fields: {
    title: { type: Text, isRequired: true },
    order: { type: Integer, isRequired: true },
    name: { type: Slug },
    link: { type: Relationship, ref: 'Page' },
    subLinks: { type: Relationship, ref: 'Project', many: true },
  },
}

Here, we use the order field to sort the link items by their numeric positions in the GraphQL query (see the sketch at the end of these steps). The subLinks field is an example that demonstrates how you can make simple sublinks in Keystone. So, we can add multiple sublinks to the main links by attaching the project pages to this field, which is linked to the Project list using the Relationship field type.

For more information about the Integer field type, please visit https://www.keystonejs.com/keystonejs/fields/src/types/integer/.
  8. Import the files from the /lists/ directory and start creating the list schema from them, as follows:
// index.js
const PageSchema = require('./lists/Page.js')
const ProjectSchema = require('./lists/Project.js')
const FileSchema = require('./lists/File.js')
const SlideImageSchema = require('./lists/SlideImage.js')
const NavLinkSchema = require('./lists/NavLink.js')

const keystone = new Keystone({ ... })

keystone.createList('Page', PageSchema)
keystone.createList('Project', ProjectSchema)
keystone.createList('Image', FileSchema)
keystone.createList('SlideImage', SlideImageSchema)
keystone.createList('NavLink', NavLinkSchema)
  9. Start the app by running the dev script on your terminal:
$ npm run dev

You should see a list of URLs on your terminal identical to the ones shown in the previous section. This means you have started the app successfully on localhost:4000. So, now, you can point your browser to localhost:4000/admin and start injecting content and uploading files from the Keystone Admin UI. Once you have the content and data ready, you can query them using the GraphQL API and GraphQL Playground. But before you can do that, you should learn what GraphQL is and how to create and use it independently from Keystone. So, let's find out!
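Before we move on, here is the sketch promised earlier: a minimal example that queries the NavLink list over the GraphQL API with axios. Keystone auto-generates queries such as allNavLinks for each list, and sortBy: order_ASC is assumed here to sort the links by the order field we defined:

// e.g., test-keystone.js (a hypothetical test script)
const axios = require('axios')

const query = `
  query {
    allNavLinks(sortBy: order_ASC) {
      title
      name
    }
  }
`

// the GraphQL API endpoint from the terminal output above
axios.post('http://localhost:4000/admin/api', { query })
  .then(({ data }) => console.log(data.data.allNavLinks))
  .catch(err => console.error(err.message))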

You can find the source code for this app in /chapter-18/cross-domain/backend/keystone/ in this book's GitHub repository.

Introducing GraphQL

GraphQL is an open source query language, a server-side runtime (execution engine), and a specification (technical standard). But what does that mean? GraphQL is a query language, which is what the "QL" part of GraphQL stands for. To be specific, it is a client query language. But again, what does that mean? The following example will address any doubts you have about GraphQL queries:

{
  planet(name: "earth") {
    id
    age
    population
  }
}

GraphQL queries like the preceding one are used in HTTP clients such as Nuxt or Vue apps to send the query to the server in exchange for a JSON response, as follows:

{
  "data": {
    "planet": {
      "id": 3,
      "age": "4543000000",
      "population": "7594000000"
    }
  }
}

As you can see, you get the specific data for the fields (id, age, and population) that you requested and nothing more. This is what makes GraphQL distinctive and gives the client the power to request exactly what they want. It's cool and exciting, isn't it? But what is it on the server that returns the GraphQL response? A GraphQL API server (the server-side runtime).

GraphQL queries are sent by the client to a GraphQL API server as a string, over a single HTTP endpoint, usually via the POST method. The server extracts and processes the query string. Then, just like any typical API server, the GraphQL API will fetch the data from a database or other services/APIs and return it to the client in a JSON response.
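A minimal sketch of this round trip with axios might look as follows; /graphql is assumed as the endpoint here (a common convention, but servers are free to use another path), and the query travels as a plain string in the JSON body:

const axios = require('axios')

const query = `
  {
    planet(name: "earth") {
      id
      age
      population
    }
  }
`

// POST the query string to the GraphQL endpoint and read the JSON response
axios.post('http://localhost:4000/graphql', { query })
  .then(({ data }) => console.log(data.data.planet))
  .catch(err => console.error(err.message))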

So, can we use a server such as Express as a GraphQL API server? Yes and no. All qualified GraphQL servers must implement two core components, as specified in the GraphQL specification, that validate, process, and return the data: a schema and resolvers.

A GraphQL schema is a collection of type definitions that consists of the objects that the client can request and the fields that the objects have. GraphQL resolvers, on the other hand, are functions attached to the fields, which return values when the client makes a query or mutation. For example, the following is the type definition for finding a planet:

type Planet {
  id: Int
  name: String
  age: String
  population: String
}

type Query {
  planet(name: String): Planet
}

Here, you can see that GraphQL uses a strongly typed schema: each field must be defined with a type, which can be a scalar type (a single value, such as an integer, Boolean, or string) or an object type. The Planet and Query types are object types, while String and Int are scalar types. Each of the fields in the object types must be resolved with a function, as follows:

Planet: {
  id: (root, args, context, info) => root.id,
  name: (root, args, context, info) => root.name,
  age: (root, args, context, info) => root.age,
  population: (root, args, context, info) => root.population,
}

Query: {
  planet: (root, args, context, info) => {
    return planets.find(planet => planet.name === args.name)
  },
}

The preceding example was written in JavaScript, but a GraphQL server can be written in any programming language as long as you follow and implement what is outlined in the GraphQL specification at https://spec.graphql.org/. GraphQL implementations exist in many different languages.

You are free to create a new implementation as long as you comply with the GraphQL specification, but we're only going to use GraphQL.js in this book. Now, you probably have some deeper questions: what exactly is the Query type? We know that it is an object type, but why do we need it? Do we need to have it in the schema? The short answer is yes.

We'll look at this in more detail in the next section and find out why it is required. We will also find out how to use Express as a GraphQL API server. So, keep reading.

Understanding the GraphQL schema and resolvers

The example schema and resolvers for finding a planet that we discussed in the previous section were written in the GraphQL schema language, which helps us create the GraphQL schema required by the GraphQL server. We can easily create a GraphQL.js GraphQLSchema instance from the GraphQL schema language using the makeExecutableSchema function from a Node.js package called GraphQL Tools.

You can find out more information about this package at https://www.graphql-tools.com/ or https://github.com/ardatan/graphql-tools.
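As a quick illustration, a minimal sketch of building an executable schema for our planet example with makeExecutableSchema could look like this (assuming the graphql-tools package has been installed via npm):

// A sketch only; assumes graphql-tools is installed.
const { makeExecutableSchema } = require('graphql-tools')

const typeDefs = `
  type Planet {
    id: Int
    name: String
  }

  type Query {
    planet(name: String): Planet
  }
`

const resolvers = {
  Query: {
    // Resolves a dummy planet object for illustration.
    planet: (root, args) => ({ id: 3, name: args.name }),
  },
}

// Produces a GraphQL.js GraphQLSchema instance from the shorthand notation.
const schema = makeExecutableSchema({ typeDefs, resolvers })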

The GraphQL schema language is a "shortcut" – a shorthand notation for constructing your GraphQL schema and its type system. Before making use of this shorthand notation, we should take a look at how a GraphQL schema is built from the low-level objects and functions such as GraphQLObjectType, GraphQLString, GraphQLList, and so on from GraphQL.js, which implements the GraphQL specification. Let's install these packages and create a simple GraphQL API server with Express:

  1. Install Express, GraphQL.js, and GraphQL HTTP Server Middleware via npm:
$ npm i express
$ npm i express-graphql
$ npm i graphql

GraphQL HTTP Server Middleware is middleware that allows us to create a GraphQL HTTP server with any HTTP web framework that supports Connect-style middleware, such as Express, Restify, and Connect itself.

For more information about these packages, please see their documentation on npm or GitHub.
  2. Create an index.js file in the project's root and import express, express-graphql, and graphql using the require method:
// index.js
const express = require('express')
const graphqlHTTP = require('express-graphql')
const graphql = require('graphql')

const app = express()
const port = process.env.PORT || 4000
  3. Create dummy data with a list of planets:
// index.js
const planets = [
  { id: 3, name: "earth", age: 4543000000, population: 7594000000 },
  { id: 4, name: "mars", age: 4603000000, population: 0 },
]
  4. Define the Planet object type and the fields that the client can query:
// index.js
const planetType = new graphql.GraphQLObjectType({
  name: 'Planet',
  fields: {
    id: { ... },
    name: { ... },
    age: { ... },
    population: { ... },
  },
})

Note that it is a convention to capitalize the object type in the name field for the GraphQL schema's creation.

  5. Define the types and how you want to resolve the value for each field:
// index.js
id: {
  type: graphql.GraphQLInt,
  resolve: (root, args, context, info) => root.id,
},
name: {
  type: graphql.GraphQLString,
  resolve: (root, args, context, info) => root.name,
},
age: {
  type: graphql.GraphQLString,
  resolve: (root, args, context, info) => root.age,
},
population: {
  type: graphql.GraphQLString,
  resolve: (root, args, context, info) => root.population,
},

Notice that every resolver function accepts the following four arguments:

  • root: The object or value that's resolved from the parent object type (the Query in step 6).
  • args: Arguments that the field can receive if they are set. See step 8.
  • context: A mutable JavaScript object that holds the top-level data that is shared across all the resolvers. It is the Node.js HTTP request object (IncomingMessage) by default in our case when using Express. We can modify this context object and add general data that we want to be shared, such as authentication and database connections. See step 10.
  • info: A JavaScript object that holds information about the current field such as its field name, return type, parent type (Planet, in this case), and the general schema details.

We can omit them if they aren't needed for resolving the value for the current field.

  6. Define the Query object type and the fields that the client can query:
// index.js
const queryType = new graphql.GraphQLObjectType({
  name: 'Query',
  fields: {
    hello: { ... },
    planet: { ... },
  },
})
  7. Define the type and resolve how you want to return the value for the hello field:
// index.js
hello: {
  type: graphql.GraphQLString,
  resolve: (root, args, context, info) => 'world',
}
  8. Define the type and resolve how you want to return the value for the planet field:
// index.js
planet: {
  type: planetType,
  args: {
    name: { type: graphql.GraphQLString }
  },
  resolve: (root, args, context, info) => {
    return planets.find(planet => planet.name === args.name)
  },
}

Notice that we passed the Planet object type that we created and stored in the planetType variable to the planet field in the Query object type so that a relationship between them can be established.

  9. Construct a GraphQL schema instance with the required query key, using the Query object type that you have just defined (with its fields, types, arguments, and resolvers), as follows:
// index.js
const schema = new graphql.GraphQLSchema({ query: queryType })

Note that the query key must be provided as the GraphQL query root type so that our query can be chained down to the fields in the Planet object type. We can say that the Planet object type is a subtype or a child of the Query object type (the root type) and that their relationship must be established in the parent object (Query) using the type field in the planet field.

  10. Use the GraphQL HTTP Server Middleware with the GraphQL schema instance to establish the GraphQL server on an Express endpoint called /graphiql, as follows:
// index.js
app.use(
  '/graphiql',
  graphqlHTTP({ schema, graphiql: true }),
)

It is recommended to set the graphiql option to true so that we can use the GraphiQL IDE when the GraphQL endpoint is loaded in the browser.

At this top level, you can also modify the context of your GraphQL API by using the context option inside the graphqlHTTP middleware, as follows:

context: {
  something: 'something to be shared',
}

By doing this, you can access this top-level data from any resolver. This can be very useful. Cool, isn't it?
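For example, the hello field from step 7 could read that shared data as follows (a sketch based on the context option shown above):

hello: {
  type: graphql.GraphQLString,
  // context here is the object set via the context option in the middleware.
  resolve: (root, args, context, info) => context.something,
}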

  11. Finally, add the following line to the index.js file and start the server with the node index.js command on your terminal:
// index.js
app.listen(port)
  12. Point your browser to localhost:4000/graphiql. You should see the GraphiQL IDE, a UI where you can test your GraphQL API. So, type the following query into the input area on the left-hand side:
// localhost:4000/graphiql
{
  hello
  planet (name: "earth") {
    id
    age
    population
  }
}

You should see that the preceding GraphQL query has been exchanged with a JSON object on the right-hand side when you hit the play button:

// localhost:4000/graphiql
{
  "data": {
    "hello": "world",
    "planet": {
      "id": 3,
      "age": "4543000000",
      "population": "7594000000"
    }
  }
}

Well done! You have managed to create a basic GraphQL API server with Express using the low-level approach. We hope this has given you a full picture of how a GraphQL API server can be created with the GraphQL schema and resolvers. We also hope that you can see the relationship between these two core components in GraphQL and that we have answered your questions: what exactly is the Query type? Why do we need it? Do we need to have it in the schema? The answer is yes; the Query (object) type is a root object type (usually called the root Query type) that must be provided when creating the GraphQL schema.

But you may still have some questions and complaints, particularly regarding the resolvers: surely you find it tedious to define the resolvers in step 5 for the fields in the Planet object type, because they do nothing except return the values that are resolved from the query object. Is there any way to avoid this painful repetition? The answer is yes: you don't have to specify them for every field in your schema, thanks to the default resolver. But how does this work? We'll find out in the next section.

You can find this and other examples in /chapter-18/graphql-api/graphql-express/ in this book's GitHub repository.

Understanding GraphQL default resolvers

When no resolver has been specified for a field, by default, this field will take on the value of the property in the object that's been resolved by the parent – that is, if that object has a property name that matches the field name. So, the fields in the Planet object type can be refactored as follows:

fields: {
  id: { type: graphql.GraphQLInt },
  name: { type: graphql.GraphQLString },
  age: { type: graphql.GraphQLString },
  population: { type: graphql.GraphQLString },
}

The values of these fields will fall back to the properties in the object that's been resolved by the parent (the query type) under the hood, as follows:

root.id
root.name
root.age
root.population
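Conceptually, the default resolver behaves roughly like the following sketch, looking up the property that matches the field name carried in the info argument:

// Roughly what GraphQL.js does when no resolver is specified (a sketch):
resolve: (root, args, context, info) => root[info.fieldName]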

So, put the other way around, when a resolver is specified explicitly for a field, that resolver will always be used, even if the parent's resolver returns a value for that field. For example, let's return a value explicitly for the id field in the Planet object type, as follows:

fields: {
  id: {
    type: graphql.GraphQLInt,
    resolve: (root, args, context, info) => 2,
  },
}

We already know that the default ID values for Earth and Mars are 3 and 4 and that they are resolved by the Query object type (the parent), as shown in step 8 in the previous section. But these resolved values will never be used because they are overridden by the value in the id field's resolver. So, let's query Earth or Mars, as follows:

{
  planet (name: "mars") {
    id
  }
}

In this case, you will always get 2 in the JSON response:

{
  "data": {
    "planet": {
      "id": 2
    }
  }
}

This is very clever, isn't it? It saves us from painful repetition – that is, if you have tons of fields in an object type. However, so far, we have been following the most painful way to construct our schema by working with GraphQL.js. This is because we wanted to see and understand how a GraphQL schema is created from the low-level types. We probably wouldn't want to take this long and winding road in real life, especially in a large project. Instead, we should prefer using the GraphQL schema language to build the schema and resolvers for us. In the next section, we will show you how to create a GraphQL API server easily with the GraphQL schema language and Apollo Server as an alternative to GraphQL HTTP Server Middleware. So, read on!

Creating a GraphQL API with Apollo Server

Apollo Server is an open source, GraphQL spec-compliant server developed by the Apollo platform for building GraphQL APIs. We can use it standalone or with other Node.js web frameworks, such as Express, Koa, and Hapi. We will use Apollo Server standalone in this book, but if you want to use it with other frameworks, please visit https://github.com/apollographql/apollo-server#installation-integrations.

In this GraphQL API, we will create a server that queries a collection of posts by title and author. Let's get started:

  1. Install Apollo Server and GraphQL.js via npm as the project dependencies:
$ npm i apollo-server
$ npm i graphql
  2. Create an index.js file in the project root directory and import the ApolloServer and gql functions from the apollo-server package:
// index.js
const { ApolloServer, gql } = require('apollo-server')

The gql function is used to parse GraphQL operations and the schema language by wrapping them with template literal tags (or tagged template literals). For more information about template literals and tagged templates, please visit https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals.
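As a generic illustration (unrelated to gql itself), a tag is simply a function that receives the literal's string parts and interpolated values:

// A generic tagged template sketch, not specific to GraphQL:
const tag = (strings, ...values) => strings.join('|') + ' / ' + values.join(',')
tag`a ${1} b ${2} c` // returns 'a | b | c / 1,2'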

  3. Create the following static data, which holds the lists of authors and posts:
// index.js
const authors = [
  { id: 1, name: 'author A' },
  { id: 2, name: 'author B' },
]

const posts = [
  { id: 1, title: 'Post 1', authorId: 1 },
  { id: 2, title: 'Post 2', authorId: 1 },
  { id: 3, title: 'Post 3', authorId: 2 },
]
  4. Define the Author, Post, and Query object types, along with the fields that the client can query:
// index.js
const typeDefs = gql`
  type Author {
    id: Int
    name: String
  }

  type Post {
    id: Int
    title: String
    author: Author
  }

  type Query {
    posts: [Post]
  }
`

Note that we can refer to the Author, Post, and Query object types simply as the Author type, the Post type, and the Query type. This is just clearer than using "object type" to describe them, because that is what they are. Remember that, apart from being an object type by nature, the Query type is also the root type in GraphQL schema creation.

Notice how we establish the relationship of Author with Post and Post with Query: the type for the author field is the Author type. The Author type has simple scalar types for its fields (id, name), while the Post type has simple scalar types (id, title) plus the Author type (author) for its fields. The Query type has the Post type for its only field, posts, but since it is a list of posts, we must use a type modifier, wrapping the Post type in square brackets to indicate that the posts field will resolve to an array of Post objects.

For more information about the type modifier, please visit https://graphql.org/learn/schema/lists-and-non-null.
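As a quick sketch, type modifiers can also be combined with the non-null marker (!); the titles field below is hypothetical and only added for illustration:

type Query {
  posts: [Post]       # a list; both the list and its items may be null
  titles: [String!]!  # a non-null list of non-null strings
}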
  5. Define resolvers to specify how you want to resolve the value for the posts field in the Query type and the author field in the Post type:
// index.js
const resolvers = {
  Query: {
    posts: (root, args, context, info) => posts
  },

  Post: {
    author: root => authors.find(author => author.id === root.authorId)
  },
}

Notice how the GraphQL schema language has helped us decouple the resolvers from the object types: they are simply defined in a single JavaScript object. The resolvers in the JavaScript object are "magically" connected with the object types, as long as the property names of our resolvers map to the field names in the type definitions. Hence, this JavaScript object is called a resolver map. In the resolver map, the top-level property names (Query, Post) must match the object types in the type definitions (Author, Post, Query). But we don't need to define any specific resolvers for the Author type in this resolver map because the values for the fields (id, name) in Author are resolved by the default resolvers automatically.

Another point to note is that the values for the fields (id, title) in the Post type are also resolved by the default resolvers. If you don't like using property names with arrow functions to define resolvers, you can use function definitions instead, as long as the function names correspond to the field names in the type definitions. For example, the resolver for the author field can be rewritten as follows:

Post: {
  author (root) {
    return authors.find(author => author.id === root.authorId)
  },
}
  6. Construct a GraphQL schema instance from ApolloServer with the type definitions and resolvers. Then, start the server, as follows:
// index.js
const server = new ApolloServer({ typeDefs, resolvers })

server.listen().then(({ url }) => {
  console.log(`Server ready at ${url}`)
})
  7. Launch your GraphQL API with the node command on your terminal:
$ node index.js
  8. Point your browser to localhost:4000. You should see the GraphQL Playground loaded on your screen. From there, you can test your GraphQL API. So, type the following query into the input area on the left-hand side:
{
  posts {
    title
    author {
      name
    }
  }
}

You should see that the preceding GraphQL query has been exchanged with a JSON object on the right-hand side when you hit the play button:

{
  "data": {
    "posts": [
      {
        "title": "Post 1",
        "author": {
          "name": "author A"
        }
      },
      ...
    ]
  }
}

This is beautiful and wonderful, isn't it? That's how easily we can build a GraphQL API with the GraphQL schema language and Apollo Server. Still, it is worth knowing the long and painful way in which a GraphQL schema and resolvers are created before adopting the shorthand method. Once you have this basic concrete knowledge, you should be able to query the data you have stored with Keystone with ease. We have only covered a few of GraphQL's types in this book: the scalar type, the object type, the Query type, and the type modifier. There are a few other types you should check out, such as the mutation type, the enumeration type, the union and input types, and the interface type. Please check them out at https://graphql.org/learn/schema/.
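To give you a taste of one of them, a mutation type is declared much like the Query type; the following is a minimal sketch in which the addPost field is hypothetical:

type Mutation {
  addPost (title: String!, authorId: Int!): Post
}

Its resolver would then live under a top-level Mutation property in the resolver map, just like the Query resolvers you saw earlier.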

If you want to learn more about GraphQL, please visit https://graphql.org/learn/. For more information about Apollo Server, visit https://www.apollographql.com/docs/apollo-server/.

You can find the code that was used in this section, along with other example GraphQL type definitions, in /chapter-18/graphql-api/graphql-apollo/ in this book's GitHub repository.

Now, let's learn how to use the Keystone GraphQL API.

Using the Keystone GraphQL API

The GraphQL Playground for the Keystone GraphQL API is located at localhost:4000/admin/graphiql. Here, we can test the lists we created through the Keystone Admin UI at localhost:4000/admin. Keystone automatically generates four top-level GraphQL queries for every list that's created. For example, we get the following queries for the Page list we created in the previous section:

  • allPages

This query can be used to fetch all the items from the Page list. We can also search, limit, and filter the result, as follows:

{
  allPages (orderBy: "name_DESC", skip: 0, first: 6) {
    title
    content
  }
}
  • _allPagesMeta

This query can be used to fetch all meta-information about items in the Page list, such as the total count of all matched items, which can be useful for pagination. We can also search, limit, and filter the result, as follows:

{
  _allPagesMeta (search: "a") {
    count
  }
}
  • Page

This query can be used to fetch a single item from the Page list. We can only use a where parameter with an id key to fetch the page, as follows (see the note on the $id variable after this list):

{
  Page (where: { id: $id }) {
    title
    content
  }
}
  • _PagesMeta

This query can be used to fetch the meta-information about the Page list itself, such as its name, access, schema, and fields, as follows:

{
  _PagesMeta {
    name
    access {
      read
    }
    schema {
      queries
      fields {
        name
      }
    }
  }
}
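A quick note on the $id value in the Page query above: it is a GraphQL variable, so in practice you would declare it in a named query and supply its value separately, for example, in the Query Variables pane of the GraphQL Playground. The following is a sketch that assumes the id field uses the ID scalar type:

query ($id: ID!) {
  Page (where: { id: $id }) {
    title
    content
  }
}

The variable itself is then passed as JSON, such as { "id": "..." }.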

As you can see, these four queries, along with the filter, limit, and sorting parameters, provide us with enough power to fetch the specific data that we need and nothing more. What's more, in GraphQL, we can fetch multiple resources with a single request, as follows:

{
  _allPagesMeta {
    count
  },
  allPages (orderBy: "name_DESC", skip: 0, first: 6) {
    title
    content
  }
}

This is amazing and fun, isn't it? In a REST API, you may have to send multiple requests to multiple API endpoints for multiple resources. GraphQL offers us an alternative to solve this infamous issue of REST APIs that has bothered both frontend and backend developers. Note that these four top-level queries also apply to other lists we have created, including Project, Image, and NavLink.

For more information about these four top-level queries and the filter, limit, and sorting parameters, as well as the GraphQL mutations and execution steps, which are not covered in this book, please visit https://www.keystonejs.com/guides/intro-to-graphql/.

If you want to learn about how to query a GraphQL server in general, please visit https://graphql.org/learn/queries/.

Now that you have a basic knowledge of GraphQL and are aware of Keystone's top-level GraphQL queries, it's time to learn how to use them in the Nuxt app.

Integrating Keystone, GraphQL, and Nuxt

Keystone's GraphQL API endpoint is located at localhost:4000/admin/api. As opposed to a REST API, which usually has multiple endpoints, a GraphQL API usually has a single endpoint for all queries. So, we will use this endpoint to send our GraphQL queries from the Nuxt app. It is good practice to always test our queries in the GraphQL Playground first to confirm that we get the result we need, and then use those tested queries in our frontend apps. Also, we should always use the query keyword in our queries in the frontend app to fetch data from the GraphQL API.

In this exercise, we will refactor the Nuxt app that we built for the WordPress API. We will be looking at the /pages/index.vue, /pages/projects/index.vue, /pages/projects/_slug.vue, and /store/index.js files. We will still be using Axios to help us send the GraphQL query. Let's take a look at how to get the GraphQL query and Axios working together:

  1. Create a variable that will store the GraphQL query in order to fetch the title of the home page and the slide images that we attached to it:
// pages/index.vue
const GET_PAGE = `
  query {
    allPages (search: "home") {
      title
      slideImages {
        alt
        link {
          name
        }
        file {
          publicUrl
        }
      }
    }
  }
`

We only need the slug from the project page that the image will link to, so the name field is the only field we will query. And we only need the relative public URL of the image, so the publicUrl field is the only field we want from the image file object. Also, we use the allPages query instead of Page because it is easier to get the page by its slug, which is home in this case.

  2. Send the query to the GraphQL API endpoint using the post method from Axios:
// pages/index.vue
export default {
  async asyncData ({ $axios }) {
    let { data } = await $axios.post('/admin/api', {
      query: GET_PAGE
    })
    return {
      post: data.data.allPages[0]
    }
  },
}

Notice that we only need the first item from the array in the data that's returned from the GraphQL API, so we use index 0 to locate this first item.

Note that we should also refactor /pages/about.vue, /pages/contact.vue, /pages/projects/index.vue, and /pages/projects/pages/_number.vue following the same pattern of how we refactored this home page. You can find the path to this book's GitHub repository, which contains the complete code, at the end of this section.

  3. Create a variable that will store the query and allow you to fetch multiple resources from the endpoint, as follows:
// components/projects/project-items.vue
const GET_PROJECTS = `
  query {
    _allProjectsMeta {
      count
    }
    allProjects (orderBy: "name_DESC", skip: ${ skip }, first: ${ postsPerPage }) {
      name
      title
      excerpt
      featuredImage {
        alt
        file {
          publicUrl
        }
      }
    }
  }
`

As you can see, we are fetching the total count of project pages through _allProjectsMeta and the list of project pages through allProjects with the orderBy, skip, and first filters. The data for the skip and first filters will be passed in as variables; that is, skip and postsPerPage, respectively.

  4. Compute the data for the skip variable from the route parameters, set the postsPerPage variable to 6, and then send the query to the GraphQL API endpoint using the post method from Axios:
// components/projects/project-items.vue
data () {
  return {
    posts: [],
    totalPages: null,
    currentPage: null,
    nextPage: null,
    prevPage: null,
  }
},

async fetch () {
  const postsPerPage = 6
  const number = this.$route.params.number
  const pageNumber = number === undefined ? 1 : Math.abs(parseInt(number))
  const skip = number === undefined ? 0 : (pageNumber - 1) * postsPerPage

  const GET_PROJECTS = `... `

  // In the fetch hook, the Axios plugin is accessed via this.$axios.
  let { data } = await this.$axios.post('/admin/api', {
    query: GET_PROJECTS
  })

  // ... continued in step 5.
}

As you can see, we compute the pageNumber data from the route parameters, which we can only access via this.$route.params in the fetch method. The skip data is computed from pageNumber and postsPerPage before we pass it to the GraphQL query and fetch our data. Here, we will get 1 for pageNumber and 0 for skip on the /projects or /projects/pages/1 route, 2 for pageNumber and 6 for skip on the /projects/pages/2 route, and so on. Also, we must make sure that any intentional negative data in the route (for example, /projects/pages/-100) will be made positive by using the JavaScript Math.abs function.

For more information about the JavaScript Math.abs function, please visit https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/abs.
  5. Create the pagination (the next page and the previous page) from the count field that's returned from the server, and then return the data as usual for the <template> block, as follows:
// components/projects/project-items.vue
let totalPosts = data.data._allProjectsMeta.count
let totalMaxPages = Math.ceil(totalPosts / postsPerPage)

this.posts = data.data.allProjects
this.totalPages = totalMaxPages
this.currentPage = pageNumber
this.nextPage = pageNumber === totalMaxPages ? null : pageNumber + 1
this.prevPage = pageNumber === 1 ? null : pageNumber - 1
  6. Create a variable that will store the query for fetching a single project page by its slug from the endpoint, as follows:
// pages/projects/_slug.vue
const GET_PAGE = `
  query {
    allProjects (search: "${ params.slug }") {
      title
      content
      excerpt
      fullscreenImage { ... }
      projectImages { ... }
    }
  }
`

Here, we are fetching the project page through allProjects with the search filter. The data for the search filter will be passed in from the params.slug parameter. The fields we will query in fullscreenImage and projectImages are the same as the ones in featuredImage; you can find them in step 3.

  7. Send the query to the GraphQL API endpoint using the post method from Axios:
// pages/projects/_slug.vue
async asyncData ({ params, $axios }) {
  const GET_PAGE = `...`

  let { data: { data: result } } = await $axios.post('/admin/api', {
    query: GET_PAGE
  })

  return {
    post: result.allProjects[0],
  }
}

Notice that you can also destructure nested objects or arrays and assign a variable to the value. In the preceding code, we assigned result as the variable that stores the value of the data property returned by GraphQL.
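As a standalone illustration of this destructuring pattern:

// Destructuring a nested property and renaming it (a sketch):
const response = { data: { data: { allProjects: [{ title: 'A' }] } } }
const { data: { data: result } } = response
console.log(result.allProjects[0].title) // 'A'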

  8. Create a variable that will store the query for fetching the list of NavLinks from the endpoint with the orderBy filter, as follows:
// store/index.js
const GET_LINKS = `
  query {
    allNavLinks (orderBy: "order_ASC") {
      title
      link {
        name
      }
    }
  }
`
  9. Send the query to the GraphQL API endpoint using the post method from Axios, and then commit the data to the store state:
// store/index.js
async nuxtServerInit({ commit }, { $axios }) {
  const GET_LINKS = `...`
  let { data } = await $axios.post('/admin/api', {
    query: GET_LINKS
  })
  commit('setMenu', data.data.allNavLinks)
}
  10. (Optional) Just like step 9 in the Integrating with Nuxt and streaming images from WordPress section, if the Nuxt crawler fails to detect the dynamic routes for some unknown reason, then generate these routes manually in the generate option in the Nuxt config file, as follows:
// nuxt.config.js
import axios from 'axios'

export default {
  generate: {
    routes: async function () {
      const GET_PROJECTS = `
        query {
          allProjects { name }
        }
      `
      // remoteUrl holds the remote API's base URL (defined elsewhere in this config).
      const { data } = await axios.post(remoteUrl + '/admin/api', {
        query: GET_PROJECTS
      })
      // Build a route (with a payload) for every project page.
      const routesProjects = data.data.allProjects.map(project => {
        return {
          route: '/projects/' + project.name,
          payload: project
        }
      })

      // Work out the pagination routes, with 6 posts per page.
      let totalMaxPages = Math.ceil(routesProjects.length / 6)
      let pagesProjects = []
      Array(totalMaxPages).fill().map((item, index) => {
        pagesProjects.push({
          route: '/projects/pages/' + (index + 1),
          payload: null
        })
      })

      const routes = [ ...routesProjects, ...pagesProjects ]
      return routes
    }
  },
}

In this optional step, you can see that we use the same JavaScript built-in object and methods (Array, map, fill, and push), just as in the Integrating with Nuxt and streaming images from WordPress section, to work out the dynamic routes for the child pages and the pagination, and then return them as a single array for Nuxt to generate the dynamic routes.

  11. Run the following script commands for either development or production:
$ npm run dev
$ npm run build && npm run start
$ npm run stream && npm run generate

Remember that if you want to generate static pages and host the images in the same location, you can stream the remote images to the /assets/ directory so that webpack can process these images for us. So, if you want to do that, then, just as we've done previously, run npm run stream first to stream the remote images to your local disk, and then run npm run generate to regenerate the static pages with the images before hosting them somewhere.

You can find the code for this exercise in /chapter-18/cross-domain/frontend/nuxt-universal/nuxt-keystone in this book's GitHub repository.

Apart from using Axios, you can also use the Nuxt Apollo module to send GraphQL queries to the server. For more information about this module and its usage, please visit https://github.com/nuxt-community/apollo-module.

Well done! You have successfully integrated Nuxt with the Keystone GraphQL API and streamed remote resources for static pages, just like you did with the WordPress REST API. We hope that Keystone and GraphQL, in particular, have shown you another exciting API option. You can even take the GraphQL knowledge you have gained in this chapter further and develop your own GraphQL API for Nuxt apps. You can also take Nuxt to the next level with many other technologies, just like those we have walked you through in this book. This book has been quite a journey. We hope it has benefited you in your web development and that you can take what you have learned from this book as far as you can. Now, let's summarize what you have learned in this chapter.

Summary

In this chapter, you managed to create custom post types and routes to extend the WordPress REST API, integrate it with Nuxt, and stream remote resources from WordPress to generate static pages. You also managed to customize a CMS in Keystone by creating lists and fields. You then learned how to create a GraphQL API at a low level with GraphQL.js and at a high level with the GraphQL schema language and Apollo Server. Now that you've grasped the foundations of GraphQL, you can query the Keystone GraphQL API from the Nuxt app using GraphQL queries and Axios. And last, but not least, you can stream remote resources from the Keystone project to the Nuxt project to generate static pages. Well done!

This has been a very long journey. You've gone from learning about the directory structure of Nuxt to adding pages, routes, transitions, components, Vuex stores, plugins, and modules, and then to creating user logins and API authentication, writing end-to-end tests, and creating Nuxt SPAs (static pages). You've also integrated Nuxt with other technologies, tools, and frameworks, including MongoDB, RethinkDB, MySQL, PostgreSQL, and GraphQL; Koa, Express, Keystone, and Socket.IO; PHP and PSRs; Zurb Foundation and Less CSS; and Prettier, ESLint, and StandardJS.

We hope that this has been an inspiring journey and that you will adopt Nuxt in your projects wherever it fits and take it further to benefit yourself as well as the community. Keep coding, be inspiring, and stay inspired. We wish you all the best.

Note that a final app example of this book can be found on the author's website. It's a solely static-generated web app made entirely with Nuxt's static target and GraphQL! Please have a look and explore it at https://lauthiamkok.net/.
