
Terraform lifecycle: the everyday flow

Khalid Rizvi · Where Legacy Meets GenAI

1. What is Terraform? (Simple analogy)

Imagine you decide to build your own coffee shop at home:

  • You pick the space, buy tables, chairs, coffee machine, set up WiFi.
  • You write down the layout: “two tables, one bar, one machine, plug into the socket near the window.”
  • Later you want to replicate it in another city: you reuse the list, tweak the number of chairs, change the machine brand, but the core plan stays the same.

That’s what Terraform does for infrastructure (cloud servers, networks, databases).

  • You write what you want.
  • Terraform reads what you already have.
  • Then it makes the changes so that actual infrastructure matches your plan.

Officially:

“Terraform is an infrastructure as code tool that lets you build, change, and version infrastructure safely and efficiently.” ([HashiCorp Developer][1])

Why it matters:

  • Your “coffee-shop plan” can be versioned, reviewed, reused.
  • You avoid clicking lots of buttons by hand.
  • You get predictability, safety, and clarity.

2. The Terraform Workflow (Lifecycle you’ll use)

Here’s how you move from idea → infrastructure → updates → teardown.

| Step | Everyday analogy | Command & description |
|---|---|---|
| Init | You buy and open your tool-box, unpack things, set up your workspace. | terraform init — downloads providers (AWS, Azure, etc.), sets up the working directory & backend where state lives. |
| Fmt | You tidy your desk so the plan is easy to read. | terraform fmt -recursive — auto-formats your .tf files so they’re consistent. |
| Validate | You check you wrote the checklist correctly: “Did I spell coffee machine right? Did I list two tables?” | terraform validate — checks your HCL (HashiCorp Configuration Language) syntax and references. |
| Plan | You preview what changes will happen: “You’ll bring 2 tables, move the machine, etc.” No work starts yet. | terraform plan -out=tfplan.bin — shows what Terraform would change, and saves the plan. |
| Apply | You actually bring in the tables, plug in the machine, connect WiFi. | terraform apply tfplan.bin — executes the saved plan and makes the changes. |
| Destroy | You decide you’re dismantling the coffee shop — clear everything out. | terraform destroy — deletes the resources in that working directory/project. |

Why this flow is good:

  • You preview before you apply → fewer surprises.
  • You version your setup (just like code).
  • You get repeatable, reviewable infrastructure work.
  • For more detail on the workflow: see “Core Terraform Workflow Overview”. ([HashiCorp Developer][2])
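Put together, one edit cycle through the steps above looks like this (the plan-file name is just a convention):

```shell
terraform init                  # once per working directory, or after adding providers
terraform fmt -recursive        # normalize formatting across all .tf files
terraform validate              # catch syntax and reference errors early
terraform plan -out=tfplan.bin  # preview changes and save the exact plan
terraform apply tfplan.bin      # apply exactly what was planned, nothing else
```

Applying the saved plan file (rather than a bare terraform apply) guarantees that what runs is exactly what you reviewed.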

3. State: Terraform’s Memory Ledger

Terraform keeps a state file (usually terraform.tfstate) which is like the “what I already have” ledger.

  • Analogy: You keep a list on your wall: “2 tables, 1 machine, WiFi router”.
  • Terraform uses it to know when things drift (someone adds an extra chair manually) and what changes to apply.
  • Best practice: Use a remote backend (S3 + DynamoDB lock, Terraform Cloud) so the file is safe, shared, and team-friendly.
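A remote backend is declared in the terraform block. A minimal sketch for the S3 + DynamoDB setup mentioned above — the bucket, key, and table names are placeholders you would replace:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state-bucket"          # pre-existing S3 bucket (placeholder)
    key            = "coffee-shop/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-state-lock"               # DynamoDB table used for state locking
    encrypt        = true
  }
}
```

After adding or changing a backend, run terraform init again so Terraform can migrate the state file.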

Useful state commands:

terraform state list        # list all tracked resources  
terraform state show <addr> # show one resource's details  
terraform state mv <from> <to> # rename/move resource in state  
terraform state rm <addr>      # stop tracking something (it won’t delete in cloud)  

Don’t manually edit state unless you’re very sure—you risk breaking the ledger.


4. Core Building Blocks

Here are the key pieces you’ll use when writing Terraform config.

Providers — plugins that talk to cloud platforms.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
provider "aws" {
  region = var.region
}

You’re saying: “Hey Terraform, use the AWS provider at any 5.x version (~> 5.0), in region var.region.” (registry docs here: ([Terraform Registry][3]))

Resources — actual things you want to exist: servers, buckets.

resource "aws_s3_bucket" "logs" {
  bucket = var.bucket_name
}

Data sources — read-only lookups of existing infra.

data "aws_vpc" "default" {
  default = true
}

Variables — inputs you can swap per environment.

variable "region" {
  type = string
}
# you pass e.g., -var-file=dev.tfvars

Locals — computed values you reuse.

locals {
  full_name = "${var.env}-${var.app}"
}

Outputs — what you want to share after apply.

output "bucket_name" {
  value = aws_s3_bucket.logs.bucket
}

Modules — reusable “kits” (like buying a furniture kit instead of each piece separately).

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  name   = local.full_name
  cidr   = "10.0.0.0/16"
}

5. Meta-Arguments & Advanced Patterns

Just like customization when building your coffee shop: you might want special requirements (no downtime, no accidental removal, etc.). Terraform has meta-arguments for that.

count / for_each — make many of something

resource "aws_iam_user" "u" {
  for_each = toset(["alice","bob"])
  name     = each.key
}
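count is the simpler sibling: it makes N copies indexed by number, where for_each keys them by name. A sketch (the AMI variable and naming are illustrative):

```hcl
resource "aws_instance" "web" {
  count         = 3              # creates aws_instance.web[0], [1], [2]
  ami           = var.ami_id
  instance_type = "t3.micro"

  tags = {
    Name = "web-${count.index}"  # web-0, web-1, web-2
  }
}
```

Prefer for_each when items have stable identities (names), since removing one item from a count list shifts the indices of everything after it.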

depends_on / lifecycle — control how changes behave

The lifecycle block (docs: [HashiCorp Developer][4]) offers useful flags like:

  • create_before_destroy = true → build the new object before tearing down the old one (reduces downtime).
  • prevent_destroy = true → stops Terraform from accidentally destroying a critical resource.
  • ignore_changes = [tags] → ignores changes to tags so Terraform doesn’t fight with external systems that manage tags.
  • replace_triggered_by = […] → when resource X changes, resource Y should be replaced too.

Example:

resource "aws_s3_bucket" "logs" {
  bucket = var.bucket_name
  lifecycle {
    prevent_destroy       = true
    create_before_destroy = true
    ignore_changes        = [tags]
  }
}
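depends_on covers ordering that Terraform can’t infer from attribute references. A sketch — the role, policy, and instance here are illustrative:

```hcl
resource "aws_iam_role_policy" "app" {
  name   = "app-policy"
  role   = aws_iam_role.app.id
  policy = data.aws_iam_policy_document.app.json
}

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  # No attribute of the policy is referenced here, so Terraform would not
  # know the instance needs the policy in place first — state it explicitly:
  depends_on = [aws_iam_role_policy.app]
}
```

Use depends_on sparingly: when one resource references another’s attributes, Terraform already orders them correctly on its own.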

Preconditions / Postconditions

You can also enforce checks at runtime — e.g., “ensure the instance type is not t2.micro” before creation, or “verify this attribute” after creation. See docs: ([Spacelift][5])
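The “no t2.micro” rule above can be written as a precondition inside the resource’s lifecycle block. A sketch (the resource and variables are assumptions):

```hcl
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = var.instance_type

  lifecycle {
    precondition {
      condition     = var.instance_type != "t2.micro"
      error_message = "t2.micro is not allowed for this workload."
    }
  }
}
```

If the condition is false, terraform plan fails with your error message before anything is created.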


6. Importing Existing Infrastructure

If you already built something manually (clicking around in the console) and you want Terraform to adopt it instead of rebuilding it:

CLI quick import:

terraform import aws_s3_bucket.logs my-logs-bucket

Config-based import (repeatable):

import {
  to = aws_s3_bucket.logs
  id = "my-logs-bucket"
}

Then run terraform plan and terraform apply to sync state with reality. Good for adopting large existing systems.
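With import blocks, Terraform 1.5+ can even draft the resource configuration for you; the generated file name below is just an example:

```shell
terraform plan -generate-config-out=generated.tf  # writes HCL for resources named in import blocks
terraform plan                                    # review the diff with the generated config in place
terraform apply                                   # state now tracks the existing resources; nothing is recreated
```

Treat the generated HCL as a starting point — review and tidy it before committing.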


7. Workspaces & Environments

Workspaces provide simple environment separation (dev, prod) inside one Terraform directory.

terraform workspace new dev
terraform workspace select dev
terraform workspace list

But: for serious multi-account or multi-cloud setups, you’ll likely use separate backends, separate state files, and pipelines.
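Inside configuration, the current workspace name is available as terraform.workspace, which is handy for per-environment naming. A sketch, assuming a var.app variable exists:

```hcl
locals {
  env         = terraform.workspace                  # "dev", "prod", ...
  name_prefix = "${terraform.workspace}-${var.app}"  # e.g., "dev-coffee"
}
```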


8. Everyday Flags and Layout

Useful flags:

  • terraform init -upgrade → upgrade providers to the newest versions your constraints allow.
  • terraform plan -var-file=dev.tfvars → use env-specific variables.
  • terraform apply -auto-approve → skip the confirmation prompt (CI use).
  • terraform plan -target=resource.address → focus on one resource (use with caution).
  • terraform plan/apply -replace=resource.address → force recreation (the modern replacement for terraform taint).

File structure you’ll find manageable:

.
├─ main.tf          # your core resources & modules
├─ variables.tf     # input variables
├─ outputs.tf       # output definitions
├─ providers.tf     # provider setup
├─ locals.tf        # computed values, naming conventions
├─ versions.tf      # required Terraform version, provider version locks
├─ envs/
│   ├─ dev.tfvars
│   └─ prod.tfvars
└─ modules/
    └─ my_module/
        ├─ main.tf
        ├─ variables.tf
        └─ outputs.tf
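The envs/ files are plain variable assignments. A sketch of what dev.tfvars might contain — all values here are illustrative:

```hcl
# envs/dev.tfvars — selected with: terraform plan -var-file=envs/dev.tfvars
region      = "us-east-1"
env         = "dev"
app         = "coffee"
bucket_name = "dev-coffee-logs"
```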

9. Mental Model You Can Trust

  • Write your desired world (HCL).
  • Init & Validate to prep your workspace.
  • Plan to preview what will change.
  • Apply to make reality match.
  • State is Terraform’s memory ledger.
  • Modules & variables make repeatable, clean code.
  • Meta-arguments & lifecycle blocks control behavior (downtime, drift, protection).
  • Destroy when you need cleanup.
  • Import when you adopt existing resources.

10. Why This Matters to You (As Architect / Engineer)

  • Infrastructure becomes code — you treat it like what you already build (software).
  • Repeatable environments (dev, test, prod) without copy-pasting.
  • Reviewable changes (you see the diff).
  • Fewer surprises.
  • Teams can collaborate, audit, share.
  • Infrastructure evolves but remains controlled.

References for Deeper Reading