Optimizing Multi-Region Hosting with Route53 and Latency Routing: A Terraform Guide
Hello everyone! In this article, I'll walk through how to harness the capabilities of Route53 to achieve efficient multi-region hosting with latency routing. Everything will be provisioned with Terraform, so let's get started.
We're going to set up a simple nginx web application on two EC2 instances, one in eu-central-1 and the other in us-east-1. Each web application displays an HTML page that prints its region, nothing more, nothing less.
Then we will create a new domain and use Route53's latency-based routing policy to direct users to the server with the lowest latency to them.
Our AWS infrastructure will contain the following:
AWS VPC with 1 public subnet & Internet Gateway
AWS Elastic IP (static IP address)
AWS EC2 Instance with its network interface & security group
Since this infrastructure will be the same in both regions, instead of duplicating the code let's dig a little deeper into Terraform and use modules this time.
Modules in Terraform let you reuse a piece of configuration multiple times without duplicating code. You write the config once and pass in different variables according to what exactly you need.
To get started, here's our file hierarchy.
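Reconstructed from the paths referenced throughout this guide, the layout looks roughly like this:
.
├── main.tf
├── providers.tf
└── modules
    └── networks
        └── vpc
            ├── main.tf
            ├── variables.tf
            └── outputs.tf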
As we can see, under modules we have a networks/vpc module, which will contain most of our infrastructure.
VPC Module
In modules/networks/vpc/main.tf we have the following (split into multiple code blocks below).
resource "aws_vpc" "geo-vpc" {
cidr_block = var.cidr_block
tags = {
Name = var.name
}
}
resource "aws_subnet" "geo-subnet" {
vpc_id = aws_vpc.geo-vpc.id
cidr_block = var.subnet_cidr_block
availability_zone = var.availability_zone
tags = {
Name = "${var.name}-subnet"
}
}
resource "aws_internet_gateway" "geo-igw" {
vpc_id = aws_vpc.geo-vpc.id
tags = {
Name = "${var.name}-igw"
}
}
resource "aws_route_table" "geo-route-table" {
vpc_id = aws_vpc.geo-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.geo-igw.id
}
tags = {
Name = "${var.name}-route-table"
}
}
resource "aws_route_table_association" "geo-route-table-association" {
subnet_id = aws_subnet.geo-subnet.id
route_table_id = aws_route_table.geo-route-table.id
}
So far we've created a VPC with a subnet and an internet gateway, along with a route table that makes the subnet public. All the values are provided via variables located in variables.tf (we'll see later how we use the module and pass values to it).
resource "aws_security_group" "geo-sg" {
name = "${var.name}-sg"
description = "Allow all inbound traffic"
vpc_id = aws_vpc.geo-vpc.id
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
}
resource "aws_network_interface" "geo-ni" {
subnet_id = aws_subnet.geo-subnet.id
private_ips = [var.private_ip]
security_groups = [aws_security_group.geo-sg.id]
tags = {
Name = "${var.name}-ni"
}
}
resource "aws_eip" "geo-eip" {
vpc = true
network_interface = aws_network_interface.geo-ni.id
associate_with_private_ip = var.private_ip
depends_on = [aws_network_interface.geo-ni, aws_internet_gateway.geo-igw, aws_instance.geo-instance]
}
Next we create a security group for the soon-to-be-created EC2 instance, allowing access on port 80. Then we create a network interface, attaching it to the subnet and giving it a private IP address from a variable (this will be our EC2 instance's private IP). Lastly, we create the AWS Elastic IP, associating it with our private IP and network interface.
Now all that's left is creating the EC2 instance which we'll cover next.
resource "aws_instance" "geo-instance" {
ami = var.ami
instance_type = "t2.micro"
network_interface {
device_index = 0
network_interface_id = aws_network_interface.geo-ni.id
}
user_data = <<-EOF
#!/bin/bash
sudo apt update
sudo apt install -y nginx
sudo systemctl start nginx
echo "<h1>Hello World from ${var.name}</h1>" | sudo tee /var/www/html/index.html
EOF
tags = {
Name = "${var.name}-instance"
}
}
Our EC2 instance's AMI is passed as a variable (different regions have different AMI IDs for the same operating system; note the script uses apt, so the AMI is presumably an Ubuntu/Debian image). Then we attach the network interface created earlier and pass in a user_data block that executes a bash script to install nginx and edit the rendered HTML to contain the region.
Now moving on to modules/networks/vpc/variables.tf, we have the following:
variable "cidr_block" {
description = "The CIDR block of the VPC"
}
variable "name" {
description = "The name of the VPC"
}
variable "subnet_cidr_block" {
description = "The CIDR block of the subnet"
}
variable "availability_zone" {
description = "The availability zone of the subnet"
}
variable "private_ip" {
description = "The private IP address of the instance"
}
variable "ami" {
description = "The AMI to use for the instance"
}
All the variables we used in the main.tf file are defined here.
Lastly, we have an outputs.tf file that outputs the public IP address of the EC2 instance (one per module instance, so two in total).
modules/networks/vpc/outputs.tf
output "address" {
value = aws_eip.geo-eip.public_ip
}
That concludes our VPC module. Next up is using the module and passing different values to the variables we defined; after that, we'll provision Route53.
Module usage
Before we get into module usage: since we're creating a VPC in two different regions, we need to specify a separate provider for each region, then pass the provider we wish to use when calling the module. So in providers.tf in your project root directory, specify them like this:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  access_key = <key>
  secret_key = <key>
  region     = "eu-central-1"
  alias      = "europe"
}

provider "aws" {
  access_key = <key>
  secret_key = <key>
  region     = "us-east-1"
  alias      = "us"
}
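As a side note, you don't have to hardcode credentials in the provider blocks; the AWS provider can also pick them up from the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESSKEY environment variables (or a shared credentials file), which keeps keys out of your configuration. A minimal sketch of one of the providers written that way:

# Assumes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported in your shell,
# so the provider block can drop the keys entirely.
provider "aws" {
  region = "eu-central-1"
  alias  = "europe"
}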
Then we'll use the alias in each module to choose which provider we want.
In our project root directory's main.tf file, we'll have the following code to provision our module config in Frankfurt (eu-central-1):
module "vpc-frankfurt" {
source = "./modules/networks/vpc"
providers = {
aws = aws.europe
}
cidr_block = "10.0.0.0/16"
name = "frankfurt"
subnet_cidr_block = "10.0.1.0/24"
availability_zone = "eu-central-1a"
private_ip = "10.0.1.50"
ami = "ami-03f1cc6c8b9c0b899"
}
In Frankfurt's VPC we specify the provider as mentioned above along with the source of our module, then pass in all the variables the module needs.
Then for us-east-1
module "vpc-us-east-1" {
source = "./modules/networks/vpc"
providers = {
aws = aws.us
}
cidr_block = "10.0.0.0/16"
name="us-east-1"
subnet_cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
private_ip = "10.0.1.50"
ami = "ami-003d3d03cfe1b0468"
}
Same concept, but different variables. Not only did modules make our code much cleaner, they also saved us a lot of time writing unnecessary boilerplate.
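An optional addition, not strictly needed for the setup: root-level outputs that surface each module's public IP after an apply, so you don't have to dig through the state to find them. These simply re-export the address output we defined in the module:

output "frankfurt_ip" {
  value = module.vpc-frankfurt.address
}

output "us_east_1_ip" {
  value = module.vpc-us-east-1.address
}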
Provisioning Route53
Before starting, you'll need to make sure you own a domain on some platform; we'll be using this domain for our provisioning. I created a new domain on Route53; if you'd like to do the same, check this link.
Now in our root project's main.tf file we'll create what is called a Route53 hosted zone, which is essentially a container for the DNS records of a domain we own. We can attach different routing policies to these DNS records and have complete control over them.
First, we create the hosted zone (its name should be your domain, in my case hewitech.click):
resource "aws_route53_zone" "nginx-zone" {
name = "name-of-hosted-zone"
}
Then we add records to it.
resource "aws_route53_record" "usa" {
zone_id = aws_route53_zone.nginx-zone.zone_id
name = "hewitech.click"
type = "A"
ttl = "300"
records = [module.vpc-us-east-1.address]
latency_routing_policy {
region = "us-east-1"
}
set_identifier = "us-east-1"
}
We specify the zone_id, along with the domain name, type, and TTL. Then we give it a records list of IP addresses, since our type is an A record. We have only one IP address in us-east-1, and to get the output address from the module we write module.vpc-us-east-1.address; since we named the output address, this is how we reference it. Then we give it a latency_routing_policy block (we'll get to this in a bit) and specify us-east-1 as its region.
Now for the Frankfurt record
resource "aws_route53_record" "frankfurt" {
zone_id = aws_route53_zone.hei-zone.zone_id
name = "hewitech.click"
type = "A"
ttl = "300"
records = [module.vpc-frankfurt.address]
latency_routing_policy {
region = "eu-central-1"
}
set_identifier = "eu-central-1"
}
Same thing, but we specify the Frankfurt IP address and a latency routing policy with region eu-central-1.
When a request comes in for this domain, Route53 evaluates the latency routing policies: it compares the latency between the user and each of the regions specified, then routes the request to the record whose region has the lowest latency.
For more on latency routing, check here.
Now we are done with all the infrastructure code. Let's run terraform init followed by terraform apply --auto-approve.
When everything is set up, we need to add the AWS name server records to the domain at our registrar (wherever our domain resides: GoDaddy, Route53, etc.).
If we head to Route53 in our AWS Management Console, find our newly created hosted zone, and check its records, we should see a set of name servers that AWS automatically added for us. We need to copy these name servers and add them as records in our domain registrar service.
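If you'd rather not click through the console, the hosted zone resource exposes its name servers directly via the name_servers attribute, so a small optional output works too:

output "name_servers" {
  value = aws_route53_zone.nginx-zone.name_servers
}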
Then after about 5 minutes or so (wait until the DNS propagates; you can check on websites such as https://dnschecker.org), we should be able to hit our domain name and get routed to the lowest-latency server.
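You can also verify resolution from your own terminal; assuming dig is installed, something like this should eventually return one of the two Elastic IPs:

dig +short hewitech.click A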
For example, my nearest server is Frankfurt, so if I visit my domain I get the Frankfurt page ("Hello World from frankfurt").
Then if I use a VPN (Hola VPN on Chrome) and select the United States, I get the us-east-1 page instead.
Finally, after everything is done, don't forget to run terraform destroy --auto-approve.
That's it! Hope you guys enjoyed the small walkthrough and learned something new today. See you in the next one!