Deploying a blog with Hugo and AWS

April 21, 2020

Any website born after 2017 can’t blog. All they know is Hugo, charge they phone, S3, CloudFront, Lambda@Edge, eat hot chip, and lie.

I’m using Hugo to generate this site based on Markdown text files and HTML/Go templates. The output is just a bunch of static files. But how do you actually serve a bunch of static files on the internet these days?

Attempt 1: Point my subdomain at an S3 bucket

You’d think the simplest option for serving a bunch of static files would be S3, right? Isn’t that what it’s for?

I made a new S3 bucket in the AWS console, ignored all the new screaming warnings about not making your S3 buckets public, and made the bucket public. I also had to go to the bucket’s Properties page and enable “Static website hosting”. This lets you specify the HTML pages to use for directory indexes and error pages, so visitors to your site will see index.html instead of “Access Denied”. After that, I set up hugo deploy to push the site to S3, reusing the credentials I’d already configured for the AWS CLI.
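For reference, hugo deploy is driven by a [deployment] section in the site config. A minimal sketch (the bucket name and region here are placeholders, not this site’s real values):

```toml
[deployment]

[[deployment.targets]]
# Hypothetical target -- substitute your own bucket and region.
name = "production"
URL  = "s3://my-blog-bucket?region=us-east-1"
```

With that in place, running `hugo && hugo deploy` builds the site and syncs the changed files to the bucket.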

For reasons I’ve forgotten, the DNS settings for my domain are split between my domain registrar and AWS. In order to seamlessly point a subdomain at an S3 bucket, you have to use AWS’s DNS service (Route53), so I needed to delegate responsibility for the subdomain to Route53 via some NS records:

$ dig +noall +answer ns
	21599	IN	NS
	21599	IN	NS
	21599	IN	NS
	21599	IN	NS

Then, in the Route53 console, I set up an A record as an alias so that Route53 would send traffic to the appropriate bucket. This mostly worked! But HTTPS didn’t work. I was annoyed for a second, but then realized that HTTPS probably shouldn’t work without configuration. In order to serve the website over TLS, AWS would need to have a certificate for my domain. While they could just go ahead and issue such a certificate, since they’re a certificate authority, I probably wouldn’t want them to do that without me explicitly asking.
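The same alias record can be created outside the console with aws route53 change-resource-record-sets. A sketch of the change batch, assuming a bucket in us-east-1 and a placeholder domain (each region’s S3 website endpoint has its own fixed alias hosted zone ID; the one shown is us-east-1’s, so check AWS’s endpoint table for your region):

```json
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "blog.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z3AQBSTGFYJSTF",
        "DNSName": "s3-website-us-east-1.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
```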

Attempt 2: Put CloudFront in front of S3

The supported way to get TLS set up for an S3-hosted website is via CloudFront, AWS’s CDN product. But before you can set up CloudFront, you need to get a certificate for your domain. I have Let’s Encrypt set up for my domain, so theoretically I could have used that certificate somehow, but it wouldn’t get automatically renewed and would probably be its own headache.

Instead, I used the wizard built into AWS Certificate Manager to get a certificate. Before issuing a certificate, Certificate Manager requires you to prove that you control the domain that you’re requesting a certificate for. DNS verification is done by adding a new CNAME record (of the wizard’s choosing) to the domain you own, which AWS then looks up via the public DNS system to make sure you control the domain. The wizard even has a one-button option that lets you automatically add the DNS challenge if you have a Route53 zone that matches the domain. I thought that looked handy, but sadly I only noticed it after setting up the CNAME manually.
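The validation record ACM asks for looks something like this (the domain is a placeholder, and the underscore-prefixed labels stand in for long random tokens that ACM generates per request):

```
_3c5exampletoken.blog.example.com.  300  IN  CNAME  _7f2exampletoken.acm-validations.aws.
```

Once AWS can resolve that CNAME via public DNS, the certificate is issued.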

Once I had a certificate, the CloudFront setup wizard was straightforward enough: I just picked which S3 bucket I wanted to serve content out of, and tweaked a couple of other options for index and error pages.

When I went to the home page, the blog seemed to be served correctly, but every individual post page gave me an error. The reason for this is that Hugo uses directories and “index.html” files extensively: the content of this post is in /posts/hugo-aws/index.html, but all the links to it point to /posts/hugo-aws/. CloudFront doesn’t have an option to automatically serve directory indexes (except at the root of your deployment), so the officially supported way to make this work is with a “Lambda@Edge” function.

Attempt 3: Add a Lambda function to serve index.html files

Here’s what you have to do to serve index.html as your directory index: deploy a JavaScript function that runs on every request, checks whether the request path ends in /, and, if so, rewrites it to end in /index.html. I don’t have much else to say about this, but if you can read this post on the web, it works.