- Ben Makes Stuff
How I automated setup and deployment for my apps with Terraform
Plus a note on Tailscale as a VPN and why you should care as a solo founder
Hey there, allow me to re-introduce myself!
I'm Ben, a solo founder living in Thailand 🇹🇭 and originally hailing from the USA 🇺🇸. I quit my big tech job back in May 2023, and by July 2023 I had decided to embark on a journey of building profitable businesses on my own, with a goal of getting to $10K MRR.
And here we are in January 2025. I'm nowhere close to that $10K MRR goal yet, but I'm going to keep pushing until I get there!
With that said, I recently decided to fix a problem that had been nagging at me for a while: the pain of manually entering Cloudflare and AWS security settings for every single app I wanted to launch (or really just maintain) after moving to self-hosting.
This post will go into how you can set this up for your own business(es) and, as the title says, when and why you should care.
If you liked this article, here's a link to ☕️ buy me a coffee and show your support!
Terraform? What's that?
If you haven't heard of Terraform already, you can basically think of it as a tool that helps you automate configuration for services built by AWS, Cloudflare, Hetzner, or any number of other companies you might care about.
You might have also heard of "infrastructure as code" (IaC) tools - Terraform is indeed a tool that fits in this bucket.
The best way I can think of to describe what it is in more detail is to explain the "before" and "after" of using Terraform:
Before:
😩 Manually log into the AWS console
😫 Type in a bunch of details and click the "create" button on a new EC2 instance (this is a virtual machine, for those that don't use AWS)
😴 Wait for creation to complete, which might take 5-10 mins
🤪 Inevitably get distracted and watch YouTube or something
🖥️ Copy and paste the IP address of the EC2 instance into an "A" record for your domain in Cloudflare (or whatever is managing your DNS) so your website resolves.
Now repeat for every single app, service, or website you ever want to launch! Painful, right?
After:
🤖 Add (or update) an EC2 configuration file
📝 Add (or update) a Cloudflare configuration file
✅ Run `terraform apply`
Done!
Why you should care about migrating to Terraform
Let me first just say: are you at the beginning of your journey and just launching your first app? Ignore this post entirely and close the tab now.
Terraform is only a useful tool if you find yourself doing the same things over and over (what I've described above in "before") and you have more than a couple things to deploy.
To that end: if you only have 1 app, you won't need to repeat the above steps more than once. In this case, it's not a good use of time to focus on automating this. Your focus instead should be on building a cool product that turns a profit.
On the other hand:
🐳 Do you maintain 3+ different apps?
🛠️ Or do you have one app with more than a couple services as components or dependencies that you find yourself needing to update once in a while?
🫂 Multiple teammates working on your product?
📈 Starting to scale and worried about disaster recovery?
→ Then you should pay attention to the rest of this post.
Disaster recovery, you say?
Yes. There are a few disaster scenarios where Terraform really comes in handy:
- You or someone on your team accidentally deletes some resources in AWS. You can run `terraform apply` to recreate everything in one command.
- You've accidentally bricked your VM by running some commands you shouldn't have (`sudo rm -rf /`, anyone? Note: don't actually run this, of course!) and frantically need to re-create it before customers start yelling.
- Your hosting provider goes out of business and you need to quickly set things up just like you had them on your old provider, without making any mistakes.
I've personally run into the second situation (not with a `rm` command, but something to that effect) already and was very glad I had just set up Terraform when it happened.
As a part of self-hosting, there's a good chance you will also need to recreate your VPS at least once! Everyone makes mistakes.
Show us the code!
Ok, I'm done yapping. Here's exactly how I have my apps set up in Terraform.
Let's start with the directory structure that I recommend and drill into each key file to explain how it all works:

My actual "infra" repository with all of my Terraform config inside.
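In case the screenshot above doesn't come through, here's a rough sketch of that layout. File names are partly my guess; only variables.tf, locals.tf, data.tf, aiven_pg.tf, the cloudflare_records_*.tf files, and scripts/init.tpl are named elsewhere in this post:

```
infra/
├── data.tf          # data blocks, e.g. fetching Cloudflare's IP ranges over HTTP
├── locals.tf        # derived values: parsed IP ranges, allowed SSH/Postgres ranges
├── variables.tf     # input variables; secrets marked sensitive = true
├── ec2.tf           # the aws_instance, security groups, and outputs (name assumed)
├── aiven_pg.tf      # managed Postgres and its firewall rules
├── cloudflare_records_yoursite.tf  # DNS records, one file per site
└── scripts/
    └── init.tpl     # the user_data template that bootstraps Dokku and the apps
```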
There are a lot of files here, so let's just cover the critical ones, starting with EC2. Here's how I have my VM set up, which hosts all of my apps:
resource "aws_instance" "servername_ec2_instance" {
  tags = {
    Name = "servername-ec2-instance"
  }

  instance_type = "t4g.medium" # arm64 instance with 2C, 4GB RAM
  ami           = "ami-..."    # Redacted ID - this points to Debian 12
  subnet_id     = data.aws_subnet.us_east_1a.id
  key_name      = "ben-laptop-ec2-key-pair" # I defined this in AWS manually. There's probably Terraform config you can set up for this but I haven't bothered yet.

  root_block_device {
    volume_type = "gp3"
    volume_size = 30 # GB
  }

  associate_public_ip_address = true

  vpc_security_group_ids = [
    aws_security_group.allow_ssh_ingress.id,
    aws_security_group.allow_http_ingress.id,
    aws_security_group.allow_all_traffic_egress.id
  ]

  user_data = templatefile("scripts/init.tpl", {
    docker_registry_host                    = var.docker_registry_host
    docker_registry_username                = var.docker_registry_username
    docker_registry_access_token            = var.docker_registry_access_token
    vector_sink_config_url                  = var.vector_sink_config_url
    atgatt_backend_api_config               = var.atgatt_backend_api_config
    atgatt_backend_migrator_config          = var.atgatt_backend_migrator_config
    unblock_domains_backend_api_config      = var.unblock_domains_backend_api_config
    unblock_domains_backend_monitor_config  = var.unblock_domains_backend_monitor_config
    unblock_domains_backend_migrator_config = var.unblock_domains_backend_migrator_config
    watchdog_backend_api_config             = var.watchdog_backend_api_config
    watchdog_backend_bot_config             = var.watchdog_backend_bot_config
    watchdog_backend_migrator_config        = var.watchdog_backend_migrator_config
  })
  user_data_replace_on_change = false

  # Enforces IMDSv2 (session-token-based access to instance metadata). Without it, the AWS console shows a security warning.
  metadata_options {
    http_tokens = "required"
  }

  lifecycle {
    ignore_changes = [user_data]
  }
}

output "servername_ec2_instance_public_ip" {
  value = aws_instance.servername_ec2_instance.public_ip
}
Most of this should be self-explanatory. First, I set the name, type, and operating system of the EC2 instance and make sure the disk is big enough for what I'm trying to do.
The sections I want to cover in more detail are `user_data`, as this is key to automating instance setup; `vpc_security_group_ids`, as this is a big part of how the instance is kept secure; and the `output`.
First, let's cover `user_data`. The fast explanation for this confusingly named field is that it's AWS's way of letting you define a script that runs when the instance is created and boots up for the first time.
Notice that I'm passing several variables to this script, as it's merely a template that gets substituted with real values when Terraform runs. Variables are defined in Terraform (usually in a variables.tf file) like so:

Note the use of sensitive = true. You'll want to set this for anything like a password or access token, to prevent the value from getting logged and leaked.
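In case the screenshot doesn't render, here's a minimal sketch of one such declaration (the description text is mine; the variable name comes from the templatefile call above):

```
variable "docker_registry_access_token" {
  type        = string
  description = "Token used to log into the Docker registry"
  sensitive   = true # keeps the value out of plan output and logs
}
```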
The cool part about this is that you can simply set an environment variable prefixed with `TF_VAR_` (in this case, `TF_VAR_docker_registry_access_token`) and Terraform will magically substitute the variable's value with the environment variable's value. This makes it possible to run Terraform in an automated fashion in CI environments, for example.
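For example, in a CI job you might export the variable before invoking Terraform (the token value below is a placeholder, not a real token):

```shell
# Terraform maps any TF_VAR_<name> environment variable onto the input variable <name>.
export TF_VAR_docker_registry_access_token="example-token" # placeholder value
# `terraform plan` / `terraform apply` will now pick this value up without prompting.
echo "$TF_VAR_docker_registry_access_token"
```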
Now let's see how I have my `user_data` script defined. I've included comments so you can understand what's going on, and notice that I reference template variables using ${var_name}:
#!/bin/bash
# Ensure latest updates are installed
apt update -y
apt upgrade -y
# Install essential packages. Fail2ban bans unauthorized logins after a certain number of attempts, useful to prevent bots from attempting to exploit your SSH process.
# Telnet is nice if you need to verify a host is reachable as sometimes hosts block ping requests despite being online.
apt install -y telnet fail2ban
# Install Dokku via official install script
echo 'Installing Dokku...'
wget -NP . https://dokku.com/install/v0.35.15/bootstrap.sh
sudo DOKKU_TAG=v0.35.15 bash bootstrap.sh
# Add SSH keys to Dokku so our dev machine + github actions can run Dokku commands
echo 'Done installing Dokku. Adding SSH keys...'
echo 'ssh-ed25519 AAAAC...publickey1goeshere ben-laptop' | dokku ssh-keys:add ben-laptop
echo 'ssh-ed25519 AAAAC...publickey2goeshere github-actions' | dokku ssh-keys:add github-actions
echo 'Done adding SSH keys. Configuring vector for logging globally...'
dokku logs:vector-start
wget -O /var/lib/dokku/data/logs/vector.json ${vector_sink_config_url}
# This is how I configure logging globally for all apps on my VPS. For more details, see https://betterstack.com/docs/logs/dokku/
echo 'Done configuring vector. Authenticating with Docker registry...'
dokku registry:login ${docker_registry_host} ${docker_registry_username} ${docker_registry_access_token}
echo 'Done authenticating with the registry. Creating and deploying apps...'
# Configure ATGATT. Some explanation here: Dokku has a nice feature where you can just execute `dokku config:set KEY1=VAL1 KEY2=VAL2...`,
# so I just have a giant string with all of my sensitive app config as an environment variable.
dokku apps:create atgatt-backend-api
dokku domains:set atgatt-backend-api api.atgatt.co
dokku config:set atgatt-backend-api --no-restart ${atgatt_backend_api_config}
# This is where I tell Dokku to deploy the app with the `latest` version of the image. This is powerful as my apps will automatically spin up using the latest version if this node ever gets recreated.
dokku git:from-image atgatt-backend-api ghcr.io/mockernut-ventures/atgatt-backend-api:latest
dokku apps:create atgatt-backend-migrator
dokku config:set atgatt-backend-migrator --no-restart ${atgatt_backend_migrator_config}
dokku git:from-image atgatt-backend-migrator ghcr.io/mockernut-ventures/atgatt-backend-migrator:latest
# Configure Unblock Domains
dokku apps:create unblock-domains-backend-api
dokku domains:set unblock-domains-backend-api api.unblock.domains
dokku config:set unblock-domains-backend-api --no-restart ${unblock_domains_backend_api_config}
dokku git:from-image unblock-domains-backend-api ghcr.io/mockernut-ventures/unblock-domains-backend-api:latest
dokku apps:create unblock-domains-backend-monitor
dokku config:set unblock-domains-backend-monitor --no-restart ${unblock_domains_backend_monitor_config}
dokku git:from-image unblock-domains-backend-monitor ghcr.io/mockernut-ventures/unblock-domains-backend-monitor:latest
dokku apps:create unblock-domains-backend-migrator
dokku config:set unblock-domains-backend-migrator --no-restart ${unblock_domains_backend_migrator_config}
dokku git:from-image unblock-domains-backend-migrator ghcr.io/mockernut-ventures/unblock-domains-backend-migrator:latest
# Configure Watchdog
dokku network:create watchdog-backend-network
dokku apps:create REDACTED-DEPENDENCY
dokku proxy:disable REDACTED-DEPENDENCY
dokku checks:disable REDACTED-DEPENDENCY
dokku network:set REDACTED-DEPENDENCY attach-post-create watchdog-backend-network
dokku docker-options:add REDACTED-DEPENDENCY deploy "-p 8726:8726"
dokku git:from-image REDACTED-DEPENDENCY REDACTED-DEPENDENCY:v0.1.2
dokku apps:create watchdog-backend-api
dokku domains:set watchdog-backend-api api.watchdog.chat
dokku config:set watchdog-backend-api --no-restart ${watchdog_backend_api_config}
dokku git:from-image watchdog-backend-api ghcr.io/mockernut-ventures/watchdog-backend-api:latest
dokku apps:create watchdog-backend-bot
dokku domains:set watchdog-backend-bot api-bot.watchdog.chat
dokku network:set watchdog-backend-bot attach-post-create watchdog-backend-network
dokku checks:set watchdog-backend-bot wait-to-retire 2
dokku config:set watchdog-backend-bot --no-restart ${watchdog_backend_bot_config}
dokku git:from-image watchdog-backend-bot ghcr.io/mockernut-ventures/watchdog-backend-bot:latest
dokku apps:create watchdog-backend-migrator
dokku config:set watchdog-backend-migrator --no-restart ${watchdog_backend_migrator_config}
dokku git:from-image watchdog-backend-migrator ghcr.io/mockernut-ventures/watchdog-backend-migrator:latest
# Finalize installation and reboot if needed (because of Kernel updates or whatever else got installed)
if [ -f /var/run/reboot-required ]; then
  echo 'Done, but reboot is required. Rebooting in 30 seconds to finish installing updates...'
  sleep 30
  systemctl reboot
else
  echo 'Done and no reboot is required. Install finished.'
fi
Now that we've covered automating setup of a virtual machine and configuring apps to run with Dokku, let's cover the security settings, as there are some interesting things to note here:

Here's an example of how I have my HTTP traffic security group configured. Note that I only allow known-safe Cloudflare IPs to hit port 80 on my VPS.
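Since the screenshot may not render, here's a sketch of what such a security group could look like. The exact rule set and the `vpc_id` reference are my assumptions; `local.cloudflare_ipv4_ranges` / `local.cloudflare_ipv6_ranges` are defined further down in this post:

```
resource "aws_security_group" "allow_http_ingress" {
  name        = "allow-http-ingress"
  description = "Allow HTTP (port 80) only from Cloudflare's published IP ranges"
  vpc_id      = data.aws_vpc.main.id # assumed reference to the main VPC

  ingress {
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = local.cloudflare_ipv4_ranges
    ipv6_cidr_blocks = local.cloudflare_ipv6_ranges
  }
}
```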
The purpose of this configuration is to prevent unauthorized traffic from bypassing Cloudflare. If you're wondering how I determine which IPs to whitelist, I use Terraform's `data` block and `http` provider to automatically fetch these IPs from Cloudflare directly. Here's how this works:
I define a `data` block in data.tf like this:
data "http" "cloudflare_ips_api" {
  url = "https://api.cloudflare.com/client/v4/ips"

  request_headers = {
    "Accept" = "application/json"
  }
}
Then when I run `terraform apply`, Terraform will automatically make an HTTP request to this URL and save the response for further processing.
In my locals.tf, which is where you should process and persist things like this, I then parse the response into an appropriate format using `jsondecode`:
locals {
  cloudflare_ip_ranges_resp = jsondecode(data.http.cloudflare_ips_api.response_body)
  cloudflare_ipv4_ranges    = local.cloudflare_ip_ranges_resp.result.ipv4_cidrs
  cloudflare_ipv6_ranges    = local.cloudflare_ip_ranges_resp.result.ipv6_cidrs

  allowed_ssh_ipv4_ranges = [] # To temporarily allow all traffic, change to: ["0.0.0.0/0"]
  allowed_ssh_ipv6_ranges = [] # To temporarily allow all traffic, change to: ["::/0"]

  allowed_postgres_ipv4_ranges = concat(var.work_ipv4_ranges, ["${aws_instance.servername_ec2_instance.public_ip}/32"])
  allowed_postgres_ipv6_ranges = concat(var.work_ipv6_ranges, aws_instance.servername_ec2_instance.ipv6_addresses)
}
There are a few things that are quite powerful about this file. The first is that I never have to manually whitelist a Cloudflare IP, as Cloudflare's own API is used.
Whenever I want to refresh the current list of IPs, I just run `terraform apply` and it'll pull in any changes. I could even run `terraform apply` on a schedule, or in response to an event, to catch changes in a completely automated fashion!
The second is the `allowed_postgres_ipv4_ranges` line: this makes sure that every time my EC2 instance (which we configured above) changes or gets recreated, the latest public_ip is always pulled.
Here's what's great about this: if the IP changes, Terraform will automatically update the Postgres resource's firewall (not shown in this post, but defined in aiven_pg.tf). It will also update all of my Cloudflare A records to point to the new server IP.
No more manually updating IP whitelists to include my own server!
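As an illustration, one such A record might look like the following sketch (zone and record names are placeholders; depending on your Cloudflare provider version, the field holding the IP is `value` or `content`):

```
resource "cloudflare_record" "mysite_api_a_record" {
  zone_id = cloudflare_zone.mysite_zone.id
  name    = "api"
  type    = "A"
  value   = aws_instance.servername_ec2_instance.public_ip # re-resolved whenever the instance changes
  proxied = true # traffic flows through Cloudflare, matching the port-80 whitelist
}
```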
But wait, zero allowed SSH ipv4/6 ranges?!
If you have a sharp eye, you might have noticed that my SSH whitelist is completely empty.
In fact, leaving the SSH port open to the world is a common mistake many people make when setting up a server for the first time; if you ever do this, you'll have bots hitting your server within minutes of it coming online.
These bots tend to try "admin", "postgres", and other such usernames and brute-force passwords (which is why you should disable password auth) to see if they can gain access.
Blocking SSH entirely is of course quite secure, but it's a double-edged sword: normally, this would mean I'd never be able to SSH into my host when I needed to.
So what's the secret?
It's called Tailscale. Tailscale acts as a VPN, but unlike a traditional VPN, it does not require a separate bastion host. Understanding how this works in depth is out of scope for this post, but click here to read the official how-it-works blog.
I haven't included Tailscale in my Terraform config yet, but I have an item on my TODO list to add it. When I get around to it, I'll simply add this command, which I've manually run on my VPS for now, to my `user_data` script:
tailscale up --ssh --hostname=<my-server-name> --auth-key=<my-tailscale-key>
This command starts Tailscale's SSH daemon, which allows you to SSH in even on a closed port just by running `ssh youruser@host`. No extra credentials required!
You do need to install Tailscale on your dev machine for this to work, but it's well worth one extra app install to avoid leaving an open SSH port.
Note that even with Tailscale installed, I like to leave the `allowed_ssh_ipv*_ranges` locals around. This way, if the Tailscale daemon crashes or there's some other emergency scenario, I'll just add my IP to that local variable, which will then allow me to access the host using the traditional SSH server (bypassing Tailscale).
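In that emergency scenario, the change is a quick edit to locals.tf followed by `terraform apply` (the IP below is an example address):

```
locals {
  # Temporarily allow SSH from my current IP only; revert to [] after the incident.
  allowed_ssh_ipv4_ranges = ["203.0.113.7/32"] # example; substitute your own IP
  allowed_ssh_ipv6_ranges = []
}
```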
What happens if my resources already exist? How do I migrate to Terraform?
One other issue you might be thinking about is how to migrate existing resources that you've manually created over to Terraform.
The short answer is: it depends on where they come from.
Cloudflare? https://github.com/cloudflare/cf-terraforming helps automate this.
Hereās the command youāll need to run to import DNS records:
cf-terraforming -e [email protected] -t "<cloudflare_api_token>" -z <zone_id> --resource-type "cloudflare_record" --modern-import-block import >> cloudflare_records_yoursite.tf
cf-terraforming -e [email protected] -t "<cloudflare_api_token>" -z <zone_id> --resource-type "cloudflare_record" --modern-import-block generate >> cloudflare_records_yoursite.tf
Then, once the file is generated, you might notice that resources are named weirdly, like "terraform_managed_resource_abc1234...". You should rename all of these for clarity. Here's a prompt to feed to an LLM to make this process faster:
Rename all autogenerated terraform_managed_resource_* resources to be more reasonable i.e. mysite_api_A_record for the A record for api. Also replace the zone_id for all records with cloudflare_zone.mysite_zone.id.
What about non-Cloudflare resources?
AWS and pretty much everything else? https://github.com/GoogleCloudPlatform/terraformer is the way to go.
You can also accomplish this manually by defining an `import` block:
import {
  to = aws_instance.example
  id = "i-abcd1234" # Find the ID of the resource from AWS first
}

resource "aws_instance" "example" {
  tags = {
    Name = "your-instance-name"
  }
  # (other resource arguments...)
}
Then, just run `terraform apply` and wait for the resources to be imported. Once they're imported, you can remove the `import` blocks (or leave them), as they're no longer needed: Terraform adds these resources to the .tfstate file found in your working directory once an import succeeds.
One piece of advice: don't try to import everything at once. It'll be a lot of work and you'll get demotivated fast. Instead, migrate the most critical pieces first:
VPS
Database
DNS
Then migrate everything else over time on an as-needed basis.
Conclusion
I hope you got some value out of this post and learned how easy it is to automate infrastructure deployment with Terraform! It's what I use for my apps, and it has already saved me countless times.
I put a lot of research into this, so if you found some value in what I've written here, here's a link to ☕️ buy me a coffee and show your support!
Thanks so much for reading either way.