How I deployed the website

Published on

In the previous blog post I talked about how I created the static pages for hosting on AWS S3. Since you’re reading this, you might be interested in how I registered the domain and got everything deployed. If you just want the solution I ended up with, read the article backwards, since the final section describes it; read from the front if you want to know how I got there! At this point the article is a bit of a mess, so I plan to rewrite it in the next few days.

Registering a domain with Amazon Route 53

First things first, I needed to register a domain. I was thinking of registering a .ch domain, but realised that Swiss law requires registrants to publish their home address so that it’s accessible via WHOIS. I wasn’t overly keen on that, so decided that a .me domain was a good alternative. I knew that I wanted to use Amazon Route 53 to register the domain, but when I went to the page it wasn’t entirely obvious where I needed to look. This guide showed me exactly where to click. The whole process took about an hour, but AWS notes that it can take up to 4 days.

Now I’m the proud owner of jonfuller.me, so the next step is deploying the site to S3.

Impressum

The whole thing around needing to provide a contact address reminded me of the ‘Impressum’ that I knew most German websites require. It turns out that most Swiss websites require one too. I found the following article fairly informative. Even though I don’t need a fully fledged Impressum, I quickly added a page for one.

Deploy using Hugo Deploy

The Hugo site has a whole page to help you deploy to a bunch of different providers. I’m going to try that out.

Since I used a Dockerfile to build Hugo, I added a line to install Python and the AWS CLI in the Hugo container. To be able to log in, the AWS CLI in the container needs to know the AWS access key ID and secret access key. I guessed that the easiest way to do that would be to pass them to the Docker container as a pair of environment variables. That got me thinking, and a quick Google search came up with this handy Stack Overflow thread. TL;DR: mount your local credentials store as a read-only volume in the Docker container.

sudo docker run -it -p 1313:1313 \
    -v /srv/website/src:/srv/website/src \
    -v $HOME/.aws/credentials:/root/.aws/credentials:ro \
    hugo:latest
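For reference, the Dockerfile change is only a couple of lines. A sketch assuming an Alpine-based image like mine:

```dockerfile
# Install Python and the AWS CLI alongside Hugo (Alpine packages assumed)
RUN apk add --no-cache python3 py3-pip \
 && pip3 install awscli
```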

Create the S3 bucket that will serve the website

Of course it’s possible to do this via the CLI, but it’s probably easier in this case to do it via the AWS Management Console.

I chose to create two buckets with similar names: one for dev deployments, which I can use to check content before making it live, and a second for the prod deployment that you’re reading now. For the time being I just left all other settings at their defaults.
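If you’d rather use the CLI after all, the equivalent is a one-liner per bucket. The bucket names and region below are just illustrative placeholders:

```shell
# Create the dev and prod buckets (names and region are placeholders)
aws s3 mb s3://jonfuller-me-dev --region eu-central-1
aws s3 mb s3://jonfuller-me-prod --region eu-central-1
```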

I ran into an issue where the deploy command wasn’t available. It turns out that it was added in Hugo 0.56, and the version packaged in alpine:3.10 is only 0.55. So I switched my Dockerfile to the edge version of Alpine, which ships 0.59 (the latest version at the time of writing).
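Hugo Deploy also needs a deployment target defined in the site configuration. A minimal sketch for config.toml, where the bucket name and region are my assumptions:

```toml
# Deployment targets for Hugo Deploy (bucket name and region are placeholders)
[deployment]

[[deployment.targets]]
name = "dev"
URL  = "s3://jonfuller-me-dev?region=eu-central-1"

[[deployment.targets]]
name = "prod"
URL  = "s3://jonfuller-me-prod?region=eu-central-1"
```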

Running hugo to compile the site and hugo deploy to push it now works really nicely. I just needed to allow public access to the bucket, and then apply public permissions to all files. You could use a bucket policy to restrict access, e.g. to a certain IP, if you’re still testing the website.
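With a target configured, the whole build-and-deploy cycle is two commands (the target name assumes a matching entry in the site config):

```shell
hugo                       # compile the site into public/
hugo deploy --target dev   # sync public/ to the configured S3 bucket
```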

One other gotcha I came across was that I needed to set the site’s baseURL to the S3 bucket’s DNS name while testing, otherwise resources like scripts aren’t correctly referenced/loaded.
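Rather than editing the config back and forth, the baseURL can be overridden on the command line with Hugo’s -b flag. The endpoint below is illustrative, not my real one:

```shell
# Override baseURL just for the test build (endpoint is a placeholder)
hugo -b http://jonfuller-me-dev.s3-website.eu-central-1.amazonaws.com/
hugo deploy --target dev
```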

Set up proper DNS

So now I can visit my website at https://bucketname-region.amazonaws.com, but I want to find it at https://jonfuller.me! So there is still a little legwork to do. As is usually the case, AWS has already written some documentation for that.

At this point I realised that I needed to create a new S3 bucket for my website, since the bucket name needs to match the domain name. This time I set the option to tag the bucket, which will make keeping track of costs easier, and also allowed public access settings on the bucket. Next I added the public bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Principal": "*",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::jonfuller.me"
            ]
        },
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::jonfuller.me/*"
            ]
        }
    ]
}

Finally, I just needed to set up the Route 53 ALIAS record as described. But this has the problem that SSL/TLS isn’t supported on the S3 website endpoint, so I do need to figure out how to set up a CDN…
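The ALIAS record can also be created from the CLI. A hedged sketch; ZONE_ID is your Route 53 hosted zone, while S3_WEBSITE_ZONE_ID is the fixed hosted zone ID that AWS publishes for each S3 website region (both are placeholders here):

```shell
# Point the apex domain at the S3 website endpoint (IDs are placeholders)
aws route53 change-resource-record-sets --hosted-zone-id ZONE_ID --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "jonfuller.me",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "S3_WEBSITE_ZONE_ID",
        "DNSName": "s3-website.eu-central-1.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'
```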

Configure a CDN

When reading the guide I got a bit confused about whether or not the S3 bucket should have ‘static website’ mode enabled. I found a really helpful step-by-step guide that doesn’t answer the question, but does show a working solution.

The main issue when I first deployed my site was that the about, blog and tags pages weren’t displaying, since CloudFront didn’t know to serve the about/index.html page when browsing to /about. Luckily the guide shows how to use a simple Lambda to rewrite the URLs to working ones.

Don’t forget that the Lambda needs to be deployed in us-east-1 (a Lambda@Edge requirement)!
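The guide’s function is tied to its own setup, but the rewrite logic itself is tiny. Here’s a minimal sketch of it as a Python Lambda@Edge origin-request handler — my own reconstruction of the idea, not the guide’s code:

```python
def handler(event, context):
    """Rewrite extensionless CloudFront request URIs to their index.html."""
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]
    if uri.endswith("/"):
        # /blog/ -> /blog/index.html
        request["uri"] = uri + "index.html"
    elif "." not in uri.rsplit("/", 1)[-1]:
        # /about -> /about/index.html
        request["uri"] = uri + "/index.html"
    # URIs with a file extension (e.g. /css/main.css) pass through untouched
    return request
```

Attached to the distribution’s origin-request event, this makes /about resolve to the object about/index.html in the bucket.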

I need to do a bit more research on cache-invalidation vs. S3 object versioning.

Do it again, but better

After running through the whole exercise I realised there was a better way to do it. I still use the Hugo Deploy script, but I found a really handy CloudFormation template that I customised for my solution, plus a second template to define the Lambda that does the URL rewriting. I haven’t tied them together into a single template just yet.

The article in question is this one from AWS.

I only changed the template to use a cheaper price class (fewer edge locations), enable HTTP/2, pull in my DNS settings for jonfuller.me, and use my SSL certificate.
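Those tweaks all sit in the CloudFront DistributionConfig. Roughly, in the template’s YAML — the values are illustrative, and the certificate ARN is a placeholder:

```yaml
# Excerpt of the CloudFront DistributionConfig changes (values are illustrative)
DistributionConfig:
  Aliases:
    - jonfuller.me
  PriceClass: PriceClass_100   # cheaper: US/Canada/Europe edge locations only
  HttpVersion: http2
  ViewerCertificate:
    AcmCertificateArn: arn:aws:acm:us-east-1:ACCOUNT_ID:certificate/CERT_ID
    SslSupportMethod: sni-only
```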

Summary

At this point it looks like everything is up and running, so I’ll leave it all up for a few days and see if I spot any issues. I’ll follow up with a couple of posts: one summarising the previous blog posts and giving a few opinions on how it went, and a second covering cost management for the website, to check that it doesn’t break my credit card…