Deploying to Cloud Storage with Hugo

Hugo has built-in support for deploying your site to cloud storage provided by Amazon S3, Google Cloud Storage, and Microsoft Azure. To use these services, you need to configure Hugo with the information it needs about where to place the files.

Let’s configure Hugo to deploy to Amazon S3.

Before you start, you should be aware that this method results in a site that doesn’t have a custom domain and isn’t secured with a TLS certificate. However, you can combine this approach with Amazon’s CloudFront[42] and Route 53[43] services to configure TLS certificates and a domain name for your site. As such, this approach is targeted toward people who have some experience hosting static content on S3.

To use Hugo to deploy to S3, you need an AWS account. If you don’t have an account, visit the AWS site [44] to set one up.

You also need an AWS access key. To set this up, follow Amazon’s documentation.[45] Make sure to record your access key and the accompanying secret. You won’t be able to retrieve the secret once you create it, so if you lose it, you’ll have to generate a new access key and secret.

Keep Your AWS Credentials Safe!

If you’re not careful and someone else gets hold of your AWS access key, they can use it to access your account, spin up resources, and spend a lot of your money. Keep your access keys safe, and do not store them in version control systems.

Next, you need to download and install the AWS command-line utility.[46] You’ll use this utility to create a local configuration file that stores your AWS credentials.

Once you’ve installed the CLI, configure your credentials with the AWS CLI using the following command:

 $ aws configure

The tool prompts you for your AWS access key and secret for your account. Enter both, then accept the default values for the rest of the prompts. This process creates a file in your home directory that contains your AWS credentials. The aws CLI app uses this file to find credentials when you run commands, and Hugo uses this file to make its connection to your S3 bucket when you tell it to deploy your site.
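The file the tool creates lives at ~/.aws/credentials. Its contents look something like this, with placeholder values shown here instead of a real key and secret:

 [default]
 aws_access_key_id = YOUR_ACCESS_KEY_ID
 aws_secret_access_key = YOUR_SECRET_ACCESS_KEY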

To host a website on S3, you need to create a bucket for your site, configure the bucket to serve web content, and apply a policy that lets people view all of the bucket’s contents. Although you can do this through the AWS console, it’s much quicker to use the AWS CLI tool.

Create the S3 bucket using the following command, but use your own name for your bucket, as bucket names must be unique:

 $ aws s3api create-bucket --bucket bph-pp-hugoprod \
   --acl public-read --region us-east-1

This command creates the bucket bph-pp-hugoprod in the US East 1 AWS region.

To use an S3 bucket as a website, you have to enable it to serve web pages and define the index document. You can also define the name of the error document you want to use, but this is optional:

 $ aws s3 website s3://bph-pp-hugoprod/ \
   --index-document index.html \
   --error-document error.html

You’re ready to create a bucket policy that allows access to your bucket. Without this policy, people won’t be able to view your content.

First, create the policy as a JSON file named bucketpolicy.json with the following contents:

 {
   "Version": "2012-10-17",
   "Statement": [
     {
       "Sid": "PublicReadGetObject",
       "Effect": "Allow",
       "Principal": "*",
       "Action": ["s3:GetObject"],
       "Resource": ["arn:aws:s3:::bph-pp-hugoprod/*"]
     }
   ]
 }

Then, use the aws s3api command to apply the policy to the bucket:

 $ aws s3api put-bucket-policy --bucket bph-pp-hugoprod \
   --policy file://bucketpolicy.json

Your bucket is now configured to serve a static website. Not only can you use the aws command to sync the files to the bucket, you can also use the hugo deploy command to transfer the files, giving you some additional options.

Open Hugo’s config.toml file and add a new [deployment] section that specifies the bucket and region like this:

 [deployment]

 [[deployment.targets]]
   name = "prod"
   URL = "s3://bph-pp-hugoprod?region=us-east-1"

You can create multiple deployment targets for your site. For example, you can have a staging target that publishes to one bucket and a production target that publishes to another.
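For instance, a configuration with a hypothetical staging bucket alongside the production one might look like this (bph-pp-hugostaging is a made-up bucket name; you’d create and configure that bucket the same way you did the production one):

 [deployment]

 [[deployment.targets]]
   name = "staging"
   URL = "s3://bph-pp-hugostaging?region=us-east-1"

 [[deployment.targets]]
   name = "prod"
   URL = "s3://bph-pp-hugoprod?region=us-east-1"

You can then pick a target by name with hugo deploy --target staging. If you don’t specify a target, Hugo deploys to the first one listed.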

To increase your site’s performance, you can compress your scripts, pages, and styles so they’ll download faster. Add this section to the file:

 [[deployment.matchers]]
 pattern = "^.+\\.(html|xml|js|css)$"
 gzip = true
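Matchers can set other per-file options as well. For example, assuming your images live under an images/ path, a matcher like this one sets a long-lived Cache-Control header on them so browsers and CDNs can cache them aggressively:

 [[deployment.matchers]]
 pattern = "^images/.*\\.(png|jpg|svg)$"
 cacheControl = "max-age=31536000"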

Save the file, and build your Hugo site again using the hugo --cleanDestinationDir command to ensure that the public folder is completely cleaned out and no artifacts from previous builds remain:

 $ hugo --cleanDestinationDir

Now, use the hugo deploy command to upload the files to your bucket:

 $ hugo deploy
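If you’d like to preview what Hugo will upload or delete before it touches the bucket, you can do a dry run first:

 $ hugo deploy --dryRun

This prints the planned changes without applying them.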

Your site is now live. To access it, visit the URL associated with your bucket. In this example, the URL is http://bph-pp-hugoprod.s3-website.us-east-1.amazonaws.com.

In Using Webpack and npm with Hugo, you created a package.json file that had some build tasks defined. Modify that package.json file and add a hugo-deploy command for deploying the site, as well as a deploy command which does the entire build and deploy process:

 "scripts": {
 "build": "npm-run-all webpack hugo-build",
 "hugo-build": "hugo --cleanDestinationDir",
 "webpack": "webpack",
 "webpack-watch": "webpack --watch",
»"hugo-deploy": "hugo deploy",
»"deploy": "npm-run-all build hugo-deploy",
 "dev": "npm-run-all webpack --parallel webpack-watch hugo-server"
  },

The next time you’re ready to deploy your site, run the command npm run deploy. Your site builds and deploys to S3 with a single command.

At this point, you’re ready to explore using Cloudfront and Route 53 to point your own domain at the site. Once you’ve done that, you need to modify the base URL for your Hugo site to reflect the domain. The easiest way to do that is to change the hugo-build task in your package.json file to specify the base URL:

 "scripts": {
 "build": "npm-run-all webpack hugo-build",
»"hugo-build": "hugo --cleanDestinationDir -b https://yourdomain.com/",
 "webpack": "webpack",

Save the file and run npm run deploy to rebuild and redeploy the site.

The steps in this section are similar for other cloud providers. The only major difference is the value you use for the URL field in config.toml. If you’re using Google, specify your bucket name like this:

 URL = "gs://your_bucket_name"

If you’re using Azure, specify the blob storage container:

 URL = "azblob://$your_blob_name"

Consult the instructions for your cloud provider on how to configure the resources and access permissions, and how to connect your domain name to those resources.

If you’re not interested in using cloud storage, you can deploy Hugo to a standard web server.
