Building Secure AWS Infrastructure with Terraform - Complete Lab Guide
If you want to learn Terraform by building something real, this post is for you. I walk through every file in a complete Terraform project that deploys a segmented AWS VPC with public and private subnets, EC2 instances, NAT Gateway, security groups, and VPC Flow Logs -- all from scratch.
Objectives
- Build a segmented AWS VPC with public and private subnets
- Launch EC2 instances in both subnets (bastion host + private backend)
- Secure access using Security Groups
- Enable VPC Flow Logs for traffic monitoring
- Manage everything with Terraform, using variables and data sources for flexibility
Prerequisites
- An AWS Free Tier account
- AWS CLI installed and configured (aws configure) with IAM user access keys
- Terraform installed on the local workstation
- An EC2 SSH Key Pair created in AWS (with the .pem file downloaded and stored securely)
Terraform Project Structure
/terraform-vpc-lab
├── provider.tf
├── variables.tf
├── data.tf
├── locals.tf
├── main.tf
├── outputs.tf
├── terraform.tfvars
├── deploy-terraform.ps1
└── user-data
├── private-userdata.sh
└── public-userdata.sh
Each file has a specific purpose, described in detail below.
provider.tf
provider "aws" {
region = "us-west-1"
}
This defines the AWS provider and sets the default region. I hardcoded us-west-1 here so it matches the default availability zone used later (us-west-1a), but you could easily make the region a variable.
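If you want to go the variable route, a minimal sketch (the variable name aws_region is my own, not part of the original project):

```hcl
variable "aws_region" {
  description = "AWS region to deploy into (must contain var.availability_zone)"
  type        = string
  default     = "us-west-1"
}

provider "aws" {
  region = var.aws_region
}
```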
variables.tf
The variables.tf file defines all the input variables, making the configuration flexible, reusable, and easier to manage across environments (dev, staging, prod). By declaring variables here, I avoid hardcoding values in main.tf or other resource files.
# variables.tf - defines and validates variables
variable "vpc_cidr" {
description = "CIDR block for the new VPC"
type = string
default = "10.0.0.0/16"
validation {
condition = can(cidrhost(var.vpc_cidr, 0))
error_message = "VPC CIDR must be a valid IPv4 CIDR block."
}
}
# defines which AWS Availability zone to deploy into
variable "availability_zone" {
description = "The AWS Availability Zone"
type = string
default = "us-west-1a"
# regex validation to ensure the format matches AWS AZ patterns
validation {
condition = can(regex("^[a-z]{2}-[a-z]+-[0-9][a-z]$", var.availability_zone))
error_message = "Availability zone must be in the format like 'us-west-1a'."
}
}
# Defines the IP range for the public subnet
variable "public_subnet_cidr" {
description = "CIDR block for the public subnet"
type = string
default = "10.0.1.0/24"
# CIDR validation check
validation {
condition = can(cidrhost(var.public_subnet_cidr, 0))
error_message = "Public subnet CIDR must be a valid IPv4 CIDR block."
}
}
# Defines the IP range for the private subnet
variable "private_subnet_cidr" {
description = "CIDR block for the private subnet"
type = string
default = "10.0.2.0/24"
# CIDR validation check
validation {
condition = can(cidrhost(var.private_subnet_cidr, 0))
error_message = "Private subnet CIDR must be a valid IPv4 CIDR block."
}
}
# Specifies the name of the EC2 SSH key pair used for connecting to instances.
variable "key_name" {
description = "The name of the EC2 SSH key pair"
type = string
}
# Used in security group rules to restrict SSH access
# requires a /32 CIDR notation
variable "home_ip" {
description = "Your home public IP address with /32 mask (e.g., 203.0.113.1/32)"
type = string
# Regex validation to ensure proper format
validation {
condition = can(regex("^([0-9]{1,3}\\.){3}[0-9]{1,3}/32$", var.home_ip))
error_message = "Home IP must be a valid IPv4 address with /32 mask (e.g., 203.0.113.1/32)."
}
}
# Used for tagging AWS resources for identification
# Helps track which resources belong to which project
variable "project_name" {
description = "Name of the project for resource tagging"
type = string
default = "terraform-vpc-demo"
}
# Tags resources or configures settings per environment
variable "environment" {
description = "Environment name (dev, staging, prod)"
type = string
default = "dev"
# Ensures only valid environment names are used
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "Environment must be one of: dev, staging, prod."
}
}
# Controls the size of the EC2 instances
# Below is free-tier eligible
variable "instance_type" {
description = "EC2 instance type"
type = string
default = "t2.micro"
}
# Controls whether DNS hostnames are enabled for the VPC
# Below is set to true - best practice
variable "enable_dns_hostnames" {
description = "Enable DNS hostnames in the VPC"
type = bool
default = true
}
# Controls whether DNS support is enabled for the VPC
# Below set to true - best practice
variable "enable_dns_support" {
description = "Enable DNS support in the VPC"
type = bool
default = true
}
Each variable includes validation blocks to catch configuration errors before deployment. This is safer than discovering bad inputs at apply time, and makes the whole project portable across environments.
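One validation worth tightening: the home_ip regex accepts octets above 255 (e.g. 999.999.999.999/32), since it only checks digit counts. In Terraform you could strengthen the condition with something like can(cidrnetmask(var.home_ip)) alongside the regex. The gap is easy to demonstrate in Python using the standard ipaddress module (illustration only, not project code):

```python
import ipaddress
import re

# Same pattern as the Terraform validation block
PATTERN = re.compile(r"^([0-9]{1,3}\.){3}[0-9]{1,3}/32$")

def valid_home_ip(cidr: str) -> bool:
    """Stricter check: parse the CIDR and require a /32 prefix."""
    try:
        net = ipaddress.ip_network(cidr)
    except ValueError:
        return False  # rejects out-of-range octets, bad masks, etc.
    return net.prefixlen == 32

# The regex happily matches an impossible address...
print(PATTERN.match("999.999.999.999/32") is not None)  # True
# ...while real parsing rejects it.
print(valid_home_ip("999.999.999.999/32"))  # False
print(valid_home_ip("203.0.113.1/32"))      # True
print(valid_home_ip("203.0.113.0/24"))      # False (not a /32)
```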
terraform.tfvars
# terraform.tfvars - define sensitive values
key_name = "AWS_key_name"
home_ip = "publicIP/32"
project_name = "project name"
environment = "dev"
vpc_cidr = "10.0.0.0/16"
public_subnet_cidr = "10.0.1.0/24"
private_subnet_cidr = "10.0.2.0/24"
availability_zone = "us-west-1a"
instance_type = "t2.micro"
This file provides actual values for the variables defined in variables.tf. Terraform loads it automatically. Keep this out of version control since it contains sensitive values like your public IP.
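A minimal .gitignore that covers the sensitive files in this layout might look like:

```
# Sensitive inputs and local state -- never commit these
terraform.tfvars
*.tfstate
*.tfstate.backup
.terraform/
```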
data.tf
Data sources let Terraform query AWS for dynamic information -- things like the current region, account identity, available AZs, and the latest AMI IDs. I use these throughout the configuration instead of hardcoding values.
# data.tf - Centralize data sources
# Fetches the AWS region that Terraform is currently operating in
# Useful to reference the region dynamically in resources or outputs without hardcoding var.region everywhere
data "aws_region" "current" {}
# Returns details about the active AWS identity: AWS account ID, Amazon Resource Name (ARN), User ID
# Useful for tagging, auditing, or conditional logic tied to the current account
data "aws_caller_identity" "current" {}
# Returns a list of available AZs in the current region
# Can be used to distribute resources across zones for HA
# below only includes AZs in the "available" state
data "aws_availability_zones" "available" {
state = "available"
}
# Looks up the most recent official Amazon Linux 2 AMI
# Ensures that EC2 instances are always built from the latest Amazon-maintained AMI, reducing the need to hardcode AMI IDs
data "aws_ami" "amazon_linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm-*-x86_64-gp2"] # HVM-based, general-purpose SSD, 64-bit
}
filter {
name = "virtualization-type" # sets virtualization type to HVM
values = ["hvm"]
}
}
Data sources keep the configuration dynamic and portable. The AMI lookup is especially useful -- it always grabs the latest Amazon Linux 2 image, so I never have to manually update AMI IDs when deploying to a new region.
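As one example of putting these data sources to work, the AZ list could replace the hardcoded availability_zone variable entirely -- a sketch, not part of the original project:

```hcl
# Pick the first available AZ in the current region instead of var.availability_zone
locals {
  az = data.aws_availability_zones.available.names[0]
}

# then in main.tf:
#   availability_zone = local.az
```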
locals.tf
The locals block defines computed values and constants that I reuse throughout the configuration. Think of them as internal variables that can include derived values -- not just user-provided inputs.
# locals.tf - Define reusable values and computed tags
# `locals {` starts the block
locals {
# Common tags to apply to all resources
# Defines a reusable map of tags
common_tags = {
Project = var.project_name
Environment = var.environment
ManagedBy = "Terraform"
CreatedAt = timestamp() # Evaluated at apply time; note this changes on every run
}
# Resource naming convention
# Useful for consistent resource names or tags across environments
name_prefix = "${var.project_name}-${var.environment}"
# Computed values
# These locals make it easier to refer to the values without repeating longer expressions
vpc_cidr = var.vpc_cidr # references the variable block
region = data.aws_region.current.name # references the data block
}
The common_tags map and name_prefix string are the workhorses here. I apply them to every resource, which keeps naming consistent and makes cost allocation straightforward. Change the project name or environment once, and it propagates everywhere. One caveat: timestamp() is re-evaluated on every run, so the CreatedAt tag appears as a pending change in every plan; drop the tag or add it to each resource's ignore_changes list if that noise bothers you.
main.tf
This is the core infrastructure file -- it contains all the resource blocks that build the actual AWS environment. I'll walk through each section.
VPC Creation
resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = var.enable_dns_hostnames
enable_dns_support = var.enable_dns_support
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-vpc"
})
}
Creates the VPC with DNS settings enabled and applies consistent tags from the locals block.
Internet Gateway
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-igw"
})
}
Attaches an Internet Gateway to the VPC, enabling internet access for public resources.
Subnets
# Create public subnet
resource "aws_subnet" "public" {
vpc_id = aws_vpc.main.id
cidr_block = var.public_subnet_cidr
availability_zone = var.availability_zone
map_public_ip_on_launch = true
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-public-subnet"
Type = "Public"
})
}
# Create private subnet
resource "aws_subnet" "private" {
vpc_id = aws_vpc.main.id
cidr_block = var.private_subnet_cidr
availability_zone = var.availability_zone
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-private-subnet"
Type = "Private"
})
}
The public subnet auto-assigns public IPs on launch; the private subnet does not. This is the foundation of the network segmentation.
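The two subnet CIDRs could also be derived from the VPC block with Terraform's cidrsubnet() function -- cidrsubnet(var.vpc_cidr, 8, 1) yields 10.0.1.0/24. The arithmetic, mirrored in Python for illustration (the function name mirrors Terraform's; this is not project code):

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Mirror Terraform's cidrsubnet(): extend the mask of `prefix` by
    `newbits` bits, then return subnet number `netnum`."""
    net = ipaddress.ip_network(prefix)
    subnets = list(net.subnets(prefixlen_diff=newbits))
    return str(subnets[netnum])

print(cidrsubnet("10.0.0.0/16", 8, 1))  # 10.0.1.0/24 (public subnet)
print(cidrsubnet("10.0.0.0/16", 8, 2))  # 10.0.2.0/24 (private subnet)
```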
Elastic IP and NAT Gateway
# Allocate Elastic IP for NAT Gateway
resource "aws_eip" "nat" {
domain = "vpc"
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-nat-eip"
})
depends_on = [aws_internet_gateway.main]
}
# Create NAT Gateway in public subnet
resource "aws_nat_gateway" "nat" {
allocation_id = aws_eip.nat.id
subnet_id = aws_subnet.public.id
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-nat-gateway"
})
depends_on = [aws_internet_gateway.main]
}
The Elastic IP gives the NAT Gateway a static outbound IP. The NAT Gateway sits in the public subnet and allows private subnet instances to reach the internet (for updates, API calls, etc.) without being directly accessible from the outside.
Route Tables
# Create public route table
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-public-rt"
Type = "Public"
})
}
# Associate public route table with public subnet
resource "aws_route_table_association" "public_assoc" {
subnet_id = aws_subnet.public.id
route_table_id = aws_route_table.public.id
}
# Create private route table (via NAT Gateway)
resource "aws_route_table" "private" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.nat.id
}
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-private-rt"
Type = "Private"
})
}
# Associate private route table with private subnet
resource "aws_route_table_association" "private_assoc" {
subnet_id = aws_subnet.private.id
route_table_id = aws_route_table.private.id
}
The public route table sends 0.0.0.0/0 traffic to the Internet Gateway; the private route table sends it to the NAT Gateway instead. Each subnet gets associated with its respective route table.
Security Groups
# Create Security Group for public EC2
resource "aws_security_group" "public_sg" {
name_prefix = "${local.name_prefix}-public-"
description = "Security group for public EC2 instances - allows SSH from specified IP"
vpc_id = aws_vpc.main.id
ingress {
description = "SSH from home IP"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [var.home_ip]
}
egress {
description = "All outbound traffic"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-public-sg"
Type = "Public"
})
lifecycle {
create_before_destroy = true
}
}
# Create Security Group for private EC2
resource "aws_security_group" "private_sg" {
name_prefix = "${local.name_prefix}-private-"
description = "Security group for private EC2 instances - allows SSH from public subnet"
vpc_id = aws_vpc.main.id
ingress {
description = "SSH from public subnet"
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = [aws_security_group.public_sg.id]
}
egress {
description = "All outbound traffic"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-private-sg"
Type = "Private"
})
lifecycle {
create_before_destroy = true
}
}
The public security group only allows SSH from my home IP. The private security group only allows SSH from instances in the public security group -- this is the bastion host pattern. Note the create_before_destroy lifecycle rule, which prevents downtime during security group updates.
EC2 Instances
# Launch EC2 in public subnet
resource "aws_instance" "public_ec2" {
ami = data.aws_ami.amazon_linux.id
instance_type = var.instance_type
subnet_id = aws_subnet.public.id
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.public_sg.id]
user_data = base64encode(templatefile("${path.module}/user-data/public-userdata.sh", {
hostname = "${local.name_prefix}-public"
}))
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-public-ec2"
Type = "Public"
Role = "Bastion"
})
}
# Launch EC2 in private subnet
resource "aws_instance" "private_ec2" {
ami = data.aws_ami.amazon_linux.id
instance_type = var.instance_type
subnet_id = aws_subnet.private.id
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.private_sg.id]
user_data = base64encode(templatefile("${path.module}/user-data/private-userdata.sh", {
hostname = "${local.name_prefix}-private"
}))
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-private-ec2"
Type = "Private"
Role = "Backend"
})
}
The public instance serves as a bastion/jump host; the private instance is the backend. Both use templatefile() to inject the hostname into their respective user data scripts at deploy time.
CloudWatch Log Group for VPC Flow Logs
# Create CloudWatch Log Group for VPC Flow Logs
resource "aws_cloudwatch_log_group" "vpc_logs" {
name = "/aws/vpc/flowlogs/${local.name_prefix}"
retention_in_days = 7
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-vpc-flow-logs"
})
}
This CloudWatch log group stores the VPC Flow Logs with a 7-day retention period -- long enough for troubleshooting without accumulating unnecessary costs.
IAM Role and Policy for VPC Flow Logs
# Create IAM role for flow logs
resource "aws_iam_role" "flow_logs_role" {
name_prefix = "${local.name_prefix}-flow-logs-"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Action = "sts:AssumeRole",
Effect = "Allow",
Principal = {
Service = "vpc-flow-logs.amazonaws.com"
}
}]
})
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-flow-logs-role"
})
}
# Create custom IAM policy for flow logs
resource "aws_iam_role_policy" "flow_logs_policy" {
name_prefix = "${local.name_prefix}-flow-logs-"
role = aws_iam_role.flow_logs_role.id
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Action = [
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams"
],
Resource = "${aws_cloudwatch_log_group.vpc_logs.arn}:*"
}
]
})
}
The IAM role uses a trust policy that only allows the vpc-flow-logs.amazonaws.com service to assume it. The attached policy grants just enough permissions to write to the specific log group.
Enable VPC Flow Logs
# Enable VPC Flow Logs
resource "aws_flow_log" "vpc_flow_logs" {
log_destination_type = "cloud-watch-logs"
log_destination = aws_cloudwatch_log_group.vpc_logs.arn
traffic_type = "ALL"
vpc_id = aws_vpc.main.id
iam_role_arn = aws_iam_role.flow_logs_role.arn
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-vpc-flow-logs"
})
}
This captures all traffic (accepted and rejected) flowing through the VPC. Invaluable for debugging security group rules and diagnosing connectivity issues.
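To actually read the logs, a CloudWatch Logs Insights query along these lines (run against the log group above; field names follow the default flow log record format) lists recent rejected connections:

```
fields @timestamp, srcAddr, dstAddr, dstPort, action
| filter action = "REJECT"
| sort @timestamp desc
| limit 20
```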
outputs.tf
The outputs.tf file defines what Terraform prints after terraform apply finishes -- key resource IDs, IPs, and ready-to-use SSH commands. These outputs also feed into other Terraform modules if you decide to compose this with additional infrastructure later.
# outputs.tf
# Outputs VPC Information
# Useful for confirming network foundation and account context
output "vpc_info" {
description = "Information about the created VPC"
value = {
vpc_id = aws_vpc.main.id
vpc_cidr = aws_vpc.main.cidr_block
igw_id = aws_internet_gateway.main.id
region = data.aws_region.current.name
account_id = data.aws_caller_identity.current.account_id
}
}
# Outputs Subnet Information
# Helps verify where subnets landed
output "subnet_info" {
description = "Information about created subnets"
value = {
public_subnet = {
id = aws_subnet.public.id
cidr_block = aws_subnet.public.cidr_block
az = aws_subnet.public.availability_zone
}
private_subnet = {
id = aws_subnet.private.id
cidr_block = aws_subnet.private.cidr_block
az = aws_subnet.private.availability_zone
}
}
}
# Outputs Security Group Information
# Useful to reference or modify SGs later
output "security_group_info" {
description = "Information about created security groups"
value = {
public_sg_id = aws_security_group.public_sg.id
private_sg_id = aws_security_group.private_sg.id
}
}
# Outputs EC2 Instance Information
# Critical for connecting, troubleshooting, or automating follow-up tasks
output "instance_info" {
description = "Information about EC2 instances"
value = {
public_instance = {
id = aws_instance.public_ec2.id
public_ip = aws_instance.public_ec2.public_ip
private_ip = aws_instance.public_ec2.private_ip
ami_id = aws_instance.public_ec2.ami
}
private_instance = {
id = aws_instance.private_ec2.id
private_ip = aws_instance.private_ec2.private_ip
ami_id = aws_instance.private_ec2.ami
}
}
}
# Outputs NAT Gateway Information
# Helps confirm NAT setup and outbound address
output "nat_gateway_info" {
description = "Information about NAT Gateway"
value = {
nat_gateway_id = aws_nat_gateway.nat.id
elastic_ip = aws_eip.nat.public_ip
}
}
# Outputs VPC Flow Logs Information
# Useful to pull logs, configure alerts, or analyze traffic
output "flow_logs_info" {
description = "Information about VPC Flow Logs"
value = {
log_group_name = aws_cloudwatch_log_group.vpc_logs.name
log_group_arn = aws_cloudwatch_log_group.vpc_logs.arn
}
}
# Outputs SSH Command Shortcuts
# Improves usability, especially for fast testing or handover
output "ssh_commands" {
description = "SSH commands to connect to instances"
value = {
public_instance = "ssh -i ~/.ssh/${var.key_name}.pem ec2-user@${aws_instance.public_ec2.public_ip}"
private_instance = "ssh -i ~/.ssh/${var.key_name}.pem -o ProxyJump=ec2-user@${aws_instance.public_ec2.public_ip} ec2-user@${aws_instance.private_ec2.private_ip}"
}
}
The SSH command outputs are especially handy -- I can copy-paste them directly after deployment. (One nuance: with ProxyJump, the -i key applies to the final hop, so load the key into ssh-agent first so the bastion hop can also authenticate.) One improvement I would make next time: add sensitive = true to hide internal IPs from CLI output, and format outputs as maps for easier scaling to multiple instances.
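That sensitive = true change would look like this -- a sketch of the modified output block:

```hcl
output "ssh_commands" {
  description = "SSH commands to connect to instances"
  sensitive   = true # hidden from apply output; retrieve with `terraform output ssh_commands`
  value = {
    public_instance  = "ssh -i ~/.ssh/${var.key_name}.pem ec2-user@${aws_instance.public_ec2.public_ip}"
    private_instance = "ssh -i ~/.ssh/${var.key_name}.pem -o ProxyJump=ec2-user@${aws_instance.public_ec2.public_ip} ec2-user@${aws_instance.private_ec2.private_ip}"
  }
}
```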
User Data Scripts
User data scripts run on first boot to customize each instance. Both scripts follow the same pattern: update packages, install tools, set the hostname, install the CloudWatch agent, and create an identifying MOTD.
public-userdata.sh
#!/bin/bash
# user-data/public-userdata.sh
yum update -y
yum install -y htop tree wget curl
# Set hostname
hostnamectl set-hostname ${hostname}
echo "127.0.0.1 ${hostname}" >> /etc/hosts
# Install CloudWatch agent
yum install -y amazon-cloudwatch-agent
# Create a welcome message
cat > /etc/motd << 'EOF'
*****************************************************
* Welcome to the Public EC2 Instance (Bastion) *
* This instance has internet access and can *
* be used to access private instances *
*****************************************************
EOF
# Log instance startup
echo "$(date): Public instance ${hostname} started successfully" >> /var/log/terraform-deployment.log
The ${hostname} variable gets substituted by Terraform's templatefile function at deploy time, so each instance gets a unique, identifiable hostname.
private-userdata.sh
#!/bin/bash
# user-data/private-userdata.sh
yum update -y
yum install -y htop tree wget curl
# Set hostname
hostnamectl set-hostname ${hostname}
echo "127.0.0.1 ${hostname}" >> /etc/hosts
# Install CloudWatch agent
yum install -y amazon-cloudwatch-agent
# Create a welcome message
cat > /etc/motd << 'EOF'
*****************************************************
* Welcome to the Private EC2 Instance *
* This instance is in a private subnet and *
* accessible only through the bastion host *
*****************************************************
EOF
# Log instance startup
echo "$(date): Private instance ${hostname} started successfully" >> /var/log/terraform-deployment.log
Same approach as the public script, but with a distinct MOTD so I immediately know which instance I've landed on when SSH-ing in. The base64encode(templatefile(...)) pattern in main.tf handles variable substitution and the encoding that AWS expects for user data.
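One templatefile() gotcha worth calling out: every literal ${...} in the script is treated as a template placeholder, so native shell expansions must be escaped with a double dollar sign. A minimal template fragment illustrating the rule (not directly runnable as plain bash):

```bash
#!/bin/bash
hostnamectl set-hostname ${hostname}  # substituted by Terraform at render time
echo "User home is $${HOME}"          # rendered as ${HOME}, expanded by the shell at boot
```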
Deployment Workflow: Terraform Commands
The standard workflow:
terraform init # Download providers, initialize backend
terraform fmt # Auto-format .tf files
terraform validate # Check syntax and logic
terraform plan # Preview what will be created
terraform apply # Deploy the infrastructure
terraform destroy # Tear it all down when finished
Deployment Workflow: PowerShell Script
I also wrote a PowerShell script to automate the full workflow with safety checks. It verifies prerequisites, validates configuration, prompts for confirmation, and runs each Terraform command in sequence.
# Terraform Deployment Script
# ==========================================
# 1. PREREQUISITES CHECK
# ==========================================
Write-Host "Checking Terraform installation..." -ForegroundColor Yellow
terraform --version
if ($LASTEXITCODE -ne 0) {
Write-Host "ERROR: Terraform not found! Install from: https://www.terraform.io/downloads" -ForegroundColor Red
exit 1
}
Write-Host "Checking AWS CLI..." -ForegroundColor Yellow
aws --version
if ($LASTEXITCODE -ne 0) {
Write-Host "ERROR: AWS CLI not found! Install from: https://aws.amazon.com/cli/" -ForegroundColor Red
exit 1
}
Write-Host "Checking AWS credentials..." -ForegroundColor Yellow
aws sts get-caller-identity
if ($LASTEXITCODE -ne 0) {
Write-Host "ERROR: AWS credentials not configured! Run: aws configure" -ForegroundColor Red
exit 1
}
# ==========================================
# 2. PROJECT SETUP
# ==========================================
Write-Host "Checking project directory..." -ForegroundColor Yellow
$CurrentDir = Split-Path -Leaf (Get-Location)
Write-Host "Current directory: $CurrentDir" -ForegroundColor Green
# Go up one level if we're in user-data directory
if ($CurrentDir -eq "user-data") {
Set-Location ".."
$CurrentDir = Split-Path -Leaf (Get-Location)
Write-Host "Moved up to: $CurrentDir" -ForegroundColor Green
}
# Verify Terraform files exist
$TerraformFiles = @("main.tf", "variables.tf", "outputs.tf", "provider.tf", "data.tf", "locals.tf")
$MissingFiles = @()
foreach ($file in $TerraformFiles) {
if (-not (Test-Path $file)) {
$MissingFiles += $file
}
}
if ($MissingFiles.Count -gt 0) {
Write-Host "ERROR: Missing Terraform files: $($MissingFiles -join ', ')" -ForegroundColor Red
Write-Host "Make sure you're in the correct directory with your .tf files!" -ForegroundColor Red
exit 1
}
Write-Host "SUCCESS: All Terraform files found" -ForegroundColor Green
# Check if terraform.tfvars exists
if (-not (Test-Path "terraform.tfvars")) {
Write-Host "Creating terraform.tfvars template..." -ForegroundColor Yellow
$TerraformVars = @"
# terraform.tfvars - Customize these values for your environment
# Required variables (you MUST set these)
key_name = "your-key-pair-name" # Replace with your EC2 key pair name
home_ip = "203.0.113.1/32" # Replace with your public IP + /32
# Optional customizations
project_name = "tillynet-vpc-lab"
environment = "dev"
vpc_cidr = "10.0.0.0/16"
public_subnet_cidr = "10.0.1.0/24"
private_subnet_cidr = "10.0.2.0/24"
availability_zone = "us-west-1a"
instance_type = "t2.micro"
# DNS settings
enable_dns_hostnames = true
enable_dns_support = true
"@
$TerraformVars | Out-File -FilePath "terraform.tfvars" -Encoding UTF8
Write-Host "SUCCESS: Created terraform.tfvars template" -ForegroundColor Green
Write-Host "IMPORTANT: Edit terraform.tfvars with your actual values before proceeding!" -ForegroundColor Red
}
# Check user-data directory
if (-not (Test-Path "user-data")) {
Write-Host "ERROR: user-data directory not found!" -ForegroundColor Red
exit 1
}
$UserDataFiles = @("user-data/public-userdata.sh", "user-data/private-userdata.sh")
foreach ($file in $UserDataFiles) {
if (-not (Test-Path $file)) {
Write-Host "ERROR: Missing user-data script: $file" -ForegroundColor Red
exit 1
}
}
Write-Host "SUCCESS: User-data scripts found" -ForegroundColor Green
# ==========================================
# 3. GET PUBLIC IP
# ==========================================
Write-Host "Getting your public IP address..." -ForegroundColor Yellow
try {
$PublicIP = (Invoke-RestMethod -Uri "https://ifconfig.me/ip" -TimeoutSec 10).Trim()
Write-Host "SUCCESS: Your public IP: $PublicIP" -ForegroundColor Green
Write-Host "Make sure terraform.tfvars has: home_ip = `"$PublicIP/32`"" -ForegroundColor Cyan
# Check if terraform.tfvars needs updating
$TfVarsContent = Get-Content "terraform.tfvars" -Raw
if ($TfVarsContent -match "203\.0\.113\.1/32") {
Write-Host "WARNING: terraform.tfvars still has placeholder IP - update it!" -ForegroundColor Yellow
}
}
catch {
Write-Host "ERROR: Could not get public IP. Get it from: https://whatismyipaddress.com/" -ForegroundColor Red
}
# ==========================================
# 4. TERRAFORM DEPLOYMENT
# ==========================================
Write-Host "`nSTARTING TERRAFORM DEPLOYMENT" -ForegroundColor Magenta
Write-Host "======================================" -ForegroundColor Magenta
# Function to run Terraform commands
function Invoke-TerraformCommand {
param(
[string]$Command,
[string]$Description
)
Write-Host "`n$Description" -ForegroundColor Yellow
Write-Host "Running: terraform $Command" -ForegroundColor Gray
$StartTime = Get-Date
Invoke-Expression "terraform $Command"
$EndTime = Get-Date
$Duration = $EndTime - $StartTime
if ($LASTEXITCODE -eq 0) {
Write-Host "SUCCESS: $Description completed (took $($Duration.TotalSeconds) seconds)" -ForegroundColor Green
return $true
} else {
Write-Host "ERROR: $Description failed!" -ForegroundColor Red
Write-Host "Check the output above for error details." -ForegroundColor Red
return $false
}
}
# Pre-deployment validation
Write-Host "`nValidating configuration..." -ForegroundColor Yellow
$TfVarsContent = Get-Content "terraform.tfvars" -Raw
if ($TfVarsContent -match "your-key-pair-name") {
Write-Host "ERROR: terraform.tfvars still has placeholder key_name!" -ForegroundColor Red
Write-Host "Create a key pair in AWS Console: EC2 -> Key Pairs -> Create" -ForegroundColor Yellow
exit 1
}
# Step 1: Initialize
if (-not (Invoke-TerraformCommand "init" "Initializing Terraform")) {
exit 1
}
# Step 2: Validate
if (-not (Invoke-TerraformCommand "validate" "Validating Terraform configuration")) {
exit 1
}
# Step 3: Format
Invoke-TerraformCommand "fmt" "Formatting Terraform code"
# Step 4: Plan
Write-Host "`nCreating Terraform plan..." -ForegroundColor Yellow
Write-Host "This shows what resources will be created (NEW VPC + subnets + EC2s)." -ForegroundColor Gray
if (-not (Invoke-TerraformCommand "plan -out=tfplan" "Planning deployment")) {
exit 1
}
# Step 5: Confirm
Write-Host "`nReady to deploy infrastructure?" -ForegroundColor Yellow
Write-Host "This will create a NEW VPC with all resources (NAT Gateway costs about `$32/month)." -ForegroundColor Red
Write-Host "Resources to be created:" -ForegroundColor Cyan
Write-Host "- New VPC (10.0.0.0/16)" -ForegroundColor White
Write-Host "- Internet Gateway" -ForegroundColor White
Write-Host "- Public and Private Subnets" -ForegroundColor White
Write-Host "- NAT Gateway (about `$32/month)" -ForegroundColor White
Write-Host "- 2 EC2 instances (t2.micro - free tier)" -ForegroundColor White
Write-Host "- Security Groups, Route Tables, VPC Flow Logs" -ForegroundColor White
$Confirmation = Read-Host "Type 'yes' to proceed with deployment"
if ($Confirmation -eq "yes") {
# Step 6: Apply
if (Invoke-TerraformCommand "apply tfplan" "Applying Terraform plan") {
Write-Host "`nDEPLOYMENT SUCCESSFUL!" -ForegroundColor Green
Write-Host "======================================" -ForegroundColor Green
# Show outputs
Write-Host "`nInfrastructure Details:" -ForegroundColor Cyan
terraform output
# Cleanup
Remove-Item "tfplan" -ErrorAction SilentlyContinue
Write-Host "`nNext Steps:" -ForegroundColor Yellow
Write-Host "1. Check AWS Console to see created resources" -ForegroundColor White
Write-Host "2. Use SSH commands from output to connect to instances" -ForegroundColor White
Write-Host "3. Test connectivity: Public -> Private instance" -ForegroundColor White
Write-Host "4. When done testing, run: terraform destroy" -ForegroundColor White
Write-Host "`nUseful AWS Console Links:" -ForegroundColor Yellow
Write-Host "- VPC Dashboard: https://console.aws.amazon.com/vpc/" -ForegroundColor White
Write-Host "- EC2 Dashboard: https://console.aws.amazon.com/ec2/" -ForegroundColor White
Write-Host "- CloudWatch Logs: https://console.aws.amazon.com/cloudwatch/" -ForegroundColor White
}
} else {
Write-Host "Deployment cancelled by user." -ForegroundColor Red
Remove-Item "tfplan" -ErrorAction SilentlyContinue
}
# ==========================================
# 5. HELPFUL COMMANDS
# ==========================================
Write-Host "`nUSEFUL TERRAFORM COMMANDS:" -ForegroundColor Magenta
Write-Host "======================================" -ForegroundColor Magenta
Write-Host "terraform show # Show current state" -ForegroundColor White
Write-Host "terraform output # Show outputs again" -ForegroundColor White
Write-Host "terraform state list # List all resources" -ForegroundColor White
Write-Host "terraform plan # Plan changes" -ForegroundColor White
Write-Host "terraform apply # Apply changes" -ForegroundColor White
Write-Host "terraform destroy # Delete all resources" -ForegroundColor White
Write-Host "`nTROUBLESHOOTING:" -ForegroundColor Magenta
Write-Host "======================================" -ForegroundColor Magenta
Write-Host "- Error about credentials: Run 'aws configure'" -ForegroundColor White
Write-Host "- Error about regions: Check availability_zone variable" -ForegroundColor White
Write-Host "- Error about key pairs: Create key pair in EC2 console first" -ForegroundColor White
Write-Host "- Error about permissions: Ensure your AWS user has admin rights" -ForegroundColor White
Write-Host "`nDeployment script completed!" -ForegroundColor Green
What the Script Does
- Prerequisite checks -- verifies Terraform, AWS CLI, and valid credentials are present before running anything
- Project directory validation -- confirms all .tf files and user data scripts exist; generates a terraform.tfvars template if missing
- Public IP detection -- fetches your current public IP via ifconfig.me and warns if the placeholder is still in terraform.tfvars
- Terraform deployment -- runs init, validate, fmt, plan, prompts for confirmation, then apply
- Post-deployment output -- displays infrastructure details, SSH commands, and AWS Console links
The Invoke-TerraformCommand helper function tracks execution time and provides color-coded pass/fail feedback for each step. If anything fails, the script exits early instead of blindly continuing.
Best Practices and Lessons Learned
- Never hardcode secrets -- use terraform.tfvars or environment variables for sensitive values like your home IP
- Always plan before apply -- terraform plan has saved me from unintended changes more than once
- Use dynamic AMI lookups -- hardcoded AMI IDs break when you switch regions
- Secure your state file -- .tfstate can contain sensitive outputs; treat it like a credential
- Git-ignore everything sensitive -- add terraform.tfvars, *.tfstate, and *.tfstate.backup to .gitignore