AWS Private EC2 Operations Guide Part 1: Why Private Subnet? — The ALB + NAT Gateway Standard Architecture and Sizing-Based Decision Criteria


Introduction

“Put your EC2 in a Private Subnet and wrap it with an ALB and a NAT Gateway” — you’ll see this advice after a few minutes of Googling AWS. But most guides jump straight to Terraform code without explaining why. This series starts with that missing piece.

Over five parts, we cover a practical playbook for running EC2 in a Private Subnet on AWS: connecting without a Bastion via SSM, deploying with GitHub Actions, and optimizing cost. Part 1 is about the “why” — the groundwork you need before moving on to Part 2.

  • Part 1 — Why Private Subnet? (this post)
  • Part 2 — Building the VPC infrastructure with Terraform
  • Part 3 — Connecting without Bastion using SSM Session Manager
  • Part 4 — CI/CD pipeline with GitHub Actions + SSM/CodeDeploy
  • Part 5 — Cost analysis and optimization strategies

The target reader is a junior engineer who has “followed a tutorial to launch an EC2 but doesn’t really understand why Private Subnet or NAT Gateway are needed.” After this post, you should walk away thinking “ah, so that’s why we do it this way.”


TL;DR

  • Standard architecture: Internet → ALB (Public) → EC2 (Private) → NAT (Public) → Internet. All inside one VPC. Public/Private is not physical isolation — just a route table difference (whether the subnet has a route to the Internet Gateway).
  • Multi-AZ: a single ALB spans multiple AZs. Subnets must be created per AZ, but never create an ALB per AZ.
  • This setup is not always required: for side projects, a $40/month Public Subnet + SG is plenty. A full Private Subnet architecture runs $100–320/month.
  • When it becomes mandatory: 2+ EC2s with HA / PII / payment / compliance (ISMS, PCI DSS, etc.) / 99.9% SLA — any one of these and you need to move.
  • Putting PII on Public Subnet = three real risks: direct attack exposure, compliance violations, broader liability after incidents. Past that line, Private Subnet is “insurance, not cost.”

1. The Standard Architecture

1.1 Topology

flowchart TB
    I1([Internet])
    I2([Internet])
    subgraph VPC["VPC (10.0.0.0/16)"]
        subgraph Public["Public Subnet"]
            ALB[ALB]
            NAT["NAT Gateway<br/>(outbound only)"]
        end
        subgraph Private["Private Subnet"]
            EC2[EC2]
        end
    end
    I1 -->|inbound| ALB
    ALB --> EC2
    EC2 -->|outbound| NAT
    NAT --> I2

1.2 Role of Each Component

First, a common point of confusion. The VPC is the outer box enclosing ALB, NAT Gateway, and EC2 — all three. Saying “ALB and NAT live in the Public Subnet” doesn’t mean they sit outside the VPC; they’re placed in a Public Subnet, a zone inside the same VPC.

Public vs Private is not physical isolation — it’s a route table difference. Public Subnets have a route to the Internet Gateway; Private Subnets don’t. (The actual route table code comes in Part 2.)

  • EC2 lives in the Private Subnet. It has no public IP and cannot be reached directly from the internet. Inbound traffic arrives only through the ALB.
  • ALB lives in the Public Subnet. It accepts HTTP/HTTPS traffic from the internet and routes it to the Private EC2s behind it. It is the “front door” for your service.
  • NAT Gateway also lives in the Public Subnet. It is an outbound-only channel so EC2 can call external APIs, pull OS patches, or ship logs outward. Reverse access (internet → EC2) is not possible through it.
  • Multi-AZ is the production baseline. ALB, NAT Gateway, and EC2 are all spread across at least two AZs so that a single AZ failure doesn’t take the service down.

One principle sums it up: “Inbound only via ALB, outbound only via NAT, everything else blocked.”
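The "route table difference" from above can be made concrete with a small sketch. This is an illustrative model of a route table, not an AWS API call; the `destination`/`target` dict shape is an assumption chosen for demonstration.

```python
# A subnet is "public" only because its route table sends 0.0.0.0/0
# to an Internet Gateway (igw-*). Illustrative model, not the AWS API.

def is_public_subnet(route_table) -> bool:
    """Return True if the route table has a default route to an IGW."""
    return any(
        r["destination"] == "0.0.0.0/0" and r["target"].startswith("igw-")
        for r in route_table
    )

public_rt = [
    {"destination": "10.0.0.0/16", "target": "local"},
    {"destination": "0.0.0.0/0", "target": "igw-0abc123"},
]
private_rt = [
    {"destination": "10.0.0.0/16", "target": "local"},
    {"destination": "0.0.0.0/0", "target": "nat-0def456"},  # NAT, not IGW
]

print(is_public_subnet(public_rt))   # True
print(is_public_subnet(private_rt))  # False
```

Same VPC, same instances; the only thing separating "public" from "private" is that one default route. The real Terraform version of this comes in Part 2.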

1.3 Aside: Subnet ↔ AZ Relationship, and What Multi-AZ Actually Means

We said “Multi-AZ is the production baseline.” Here’s what that actually looks like — the short version.

Three key facts:

  1. A Subnet belongs to exactly one AZ. You pick the AZ at creation time. You cannot mix EC2s from different AZs into the same Subnet.
  2. A VPC spans the entire region. Create a Subnet per AZ inside one VPC and you naturally have a Multi-AZ setup.
  3. A single ALB spans multiple AZs. Attach it to Public Subnets across multiple AZs and AWS automatically handles cross-AZ routing. Do not create an ALB per AZ.

What a 2-AZ setup looks like:

flowchart TB
    Internet([Internet])
    subgraph VPC["VPC (10.0.0.0/16) — region-wide"]
        ALB["ALB<br/>(attached to both Public Subnets, single ALB)"]
        subgraph AZa["AZ-a"]
            PubA["Public Subnet<br/>NAT GW A"]
            PriA["Private Subnet<br/>EC2-1"]
        end
        subgraph AZc["AZ-c"]
            PubC["Public Subnet<br/>NAT GW B"]
            PriC["Private Subnet<br/>EC2-2"]
        end
    end
    Internet --> ALB
    ALB --> PriA
    ALB --> PriC
    PriA -.outbound.-> PubA
    PriC -.outbound.-> PubC

  • 1 VPC (region-wide)
  • 1 ALB (attached to both Public Subnets)
  • 1 Public + 1 Private Subnet per AZ
  • 1 NAT Gateway per AZ (a single shared NAT is cheaper — covered in Part 5)
  • EC2s spread across the Private Subnets

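The per-AZ subnet layout above can be derived mechanically from the VPC CIDR. A sketch with Python's standard `ipaddress` module; the /24 sizes and the index choices (0, 1, 10, 11) are illustrative conventions, not AWS requirements.

```python
import ipaddress

# Carve per-AZ subnets out of the VPC CIDR from the diagram.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))  # 256 possible /24 blocks

layout = {
    "public-a":  subnets[0],   # 10.0.0.0/24   (AZ-a: ALB + NAT GW A)
    "public-c":  subnets[1],   # 10.0.1.0/24   (AZ-c: NAT GW B)
    "private-a": subnets[10],  # 10.0.10.0/24  (AZ-a: EC2-1)
    "private-c": subnets[11],  # 10.0.11.0/24  (AZ-c: EC2-2)
}
for name, cidr in layout.items():
    print(f"{name}: {cidr}")
```

Leaving a gap between public (0, 1) and private (10, 11) indices is a common convention that keeps room to add more subnets of each kind later.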
More detail: what happens when an AZ dies, and why not one ALB per AZ

When one AZ dies: if AZ-a goes down entirely, the Subnets and EC2s in AZ-c keep running. The ALB routes only to the surviving AZ and users barely notice. That’s the practical meaning of “HA via Multi-AZ.”

Why you should not create an ALB per AZ:

  • One ALB = one DNS endpoint. Two ALBs means you have to manage routing between them in Route 53 (weighted or failover policies).
  • You lose the ALB’s built-in cross-AZ HA — it’s already Multi-AZ internally.
  • Double the cost, double the operational overhead.


2. Aside: Public IPv4 vs Elastic IP

In this architecture, EC2 has no public IP at all. But for readers who’ve only used Public Subnets, it’s worth clarifying the difference.

When an EC2 sits in a Public Subnet, it gets a public IP — and that comes in two flavors.

|  | Public IPv4 | Elastic IP (EIP) |
| --- | --- | --- |
| Allocation | Auto-assigned when EC2 starts | Manually allocated by the user |
| Lifetime | Changes on stop/start | Fixed until explicitly released |
| Cost | $0.005/hour (since Feb 2024) | Same when attached to a running EC2; also billed while unattached |
| Use case | Temporary testing; no need for a stable IP | DNS records, IP allowlists, external integrations |
| Attaches to | Automatically to an EC2 | Manually to EC2, NAT Gateway, NLB, etc. |

Note: Stop → start an EC2 and the Public IPv4 changes. If you pointed DNS at that IP, the connection breaks. Use an EIP when you need a stable IP. But watch out: EIPs allocated without being attached still incur charges. AWS added this penalty because IPv4 is scarce — “don’t hoard addresses you don’t use.”
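The $0.005/hour figure is easier to reason about as a monthly number. Simple arithmetic, using the common ~730 hours/month approximation:

```python
# Rough monthly cost of one public IPv4 address at the $0.005/hour
# rate mentioned above (effective Feb 2024). 730 hours ≈ one month.
HOURLY_RATE = 0.005
HOURS_PER_MONTH = 730

monthly = HOURLY_RATE * HOURS_PER_MONTH
print(f"${monthly:.2f}/month")  # $3.65/month
```

Small per address, but it applies to every public IPv4 you hold, attached or not, which is exactly the "don't hoard addresses" incentive.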

Relation to this architecture: EC2 in a Private Subnet has neither a Public IPv4 nor an EIP, because there’s no external exposure in the first place. Inbound is handled by the ALB, outbound by the NAT Gateway. This is one of the reasons Private Subnets are more secure by design.


3. Do You Actually Need This Architecture? — Sizing-Based Judgment

Note: Honestly, for small-scale systems, ALB + Private Subnet + NAT Gateway can be over-engineering. NAT Gateway alone costs $43+/month, and ALB adds $20+ — infrastructure can end up costing more than the service itself.

“Standard architecture” doesn’t mean every service must adopt it. Shoving production topology into a side project wastes money, but cutting corners on a service that handles PII creates real risk. Here’s where the boundary actually sits.

| Setup | Approx. monthly cost | Best fit |
| --- | --- | --- |
| EC2 Public Subnet + Security Group | ~$40 | Side projects, solo operators. SG-based port restrictions are enough |
| EC2 + Nginx (reverse proxy) | ~$40 | No ALB; handle routing directly with Nginx on EC2 |
| Lightsail | $10–40 | Cheapest. Flat rate, no VPC design required |
| ALB + Private EC2 + NAT Instance | ~$60 | Keep the security posture, cut NAT Gateway cost with a NAT Instance |
| ALB + Private EC2 + NAT Gateway (this series) | $100–320 | Mid-scale and up, compliance requirements, multi-person teams |
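To make the $100–320 range for the full architecture less abstract, here is one way it can decompose. Every figure below is a rough assumption (region-, instance-, and traffic-dependent), not an AWS quote; the detailed analysis is in Part 5.

```python
# One plausible decomposition of the $100-320/month range.
# All line items are rough assumptions, not AWS pricing quotes.
low = {   # minimal single-AZ-ish footprint
    "ALB (base)":       20,
    "NAT Gateway x1":   43,
    "EC2 (1 small)":    20,
    "EBS + data":       17,
}
high = {  # Multi-AZ with real traffic
    "ALB (base + LCU)": 35,
    "NAT Gateway x2":   86,
    "NAT data ($/GB)":  50,
    "EC2 (2 medium)":   80,
    "EBS + data":       69,
}
print(f"low ≈ ${sum(low.values())}, high ≈ ${sum(high.values())}")
```

Note that the NAT Gateway lines dominate the jump from low to high, which is why Part 5 spends so much time on NAT cost strategies.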

3.2 Aside: Nginx (Reverse Proxy) vs ALB — What’s the Difference?

The table lists “EC2 + Nginx (reverse proxy)” as an option, so let’s clear that up.

  • Nginx = open-source web server / reverse proxy. Takes client requests and forwards them to the real app (Node.js, Spring, etc.). Handles HTTPS termination, static file serving, caching, and L7 routing — all inside a single EC2.
  • ALB is also a reverse proxy, really — AWS’s managed L7 reverse proxy + load balancer.

Feature comparison:

| Feature | Nginx (on EC2) | ALB |
| --- | --- | --- |
| L7 routing, HTTPS termination | Yes | Yes |
| Static file serving | Yes | No (use S3/CloudFront) |
| Multi-AZ availability | Dies with the EC2 | AWS handles it |
| Health checks / Auto Scaling | Manual | Automatic |
| WAF, Shield integration | Build it yourself | One click |
| Monthly cost | Included in EC2 | $20+ separate |
| Operational burden | You manage it | None (managed) |

The two overlap on L7 routing, HTTPS termination, and reverse proxying. With just one EC2, an ALB is overkill — there’s nothing to load-balance across.

Which one to pick:

| Situation | Pick |
| --- | --- |
| 2+ EC2s with HA | ALB (Nginx alone can’t provide Multi-AZ HA) |
| Auto Scaling / WAF, Shield, Cognito integration | ALB |
| Small-scale, one EC2 is enough | Nginx |
| Direct static-file serving / Lua, ngx_module customization | Nginx |
| Slow-client / large-static / zero-downtime-deploy pain actually hits | ALB → EC2 (Nginx) → app (situational addition) |

They operate at different layers and are complementary rather than competing. In production, ALB handles HA, health checks, and WAF. Nginx inside the EC2 is an optional layer you add when slow-client protection, buffering large static assets, a zero-downtime deploy buffer, or complex rewrites actually hurt — plenty of stacks ship with just Spring Boot + CloudFront + ALB.

3.3 When Does a Private Subnet Become Necessary?

Concrete criteria for drawing the line between small-scale and mid-scale:

| Metric | Small-scale (Public Subnet OK) | Mid-scale and up (Private Subnet recommended) |
| --- | --- | --- |
| Daily traffic | ~100K requests or fewer | 100K+ requests |
| EC2 count | 1 instance | 2+ instances (HA needed) |
| Operators | 1–2 people | 3+ (access control required) |
| Budget ratio | Infra is 10%+ of revenue | Infra is 5% or less of revenue |
| Compliance | None | Financial, healthcare, PII regulations |
| Availability requirement | Downtime tolerable | 99.9%+ SLA |
| Data sensitivity | Mostly public data | PII, payment data |

If even one row lands on the right side, it’s time to consider a Private Subnet — especially compliance and data sensitivity, which push you to the mid-scale column regardless of traffic volume.

3.4 Aside: What Is Compliance?

Compliance = adhering to laws, regulations, and industry standards. For a backend engineer, the regulations that most directly shape infrastructure decisions are:

| Regulation | Applies to | Core infrastructure requirement |
| --- | --- | --- |
| PIPA (Korea) | Any business handling personal information | Access control, encryption, log retention, network separation |
| ISMS / ISMS-P (Korea) | IT companies above a size threshold | Network segmentation, access control, audit logs |
| e-Financial Supervision (Korea) | Financial services | Internal network isolation, DR, encryption key management |
| HIPAA (US) | Healthcare data | Encryption, access logs, BAA-covered services only |
| PCI DSS (global) | Credit card processing | Card number encryption, network isolation, vulnerability scans |
| GDPR (EU) | EU citizen data | Data residency, right to deletion, consent management |
| SOC 2 (global) | B2B SaaS | Access control, audit logs, change management |

Common requirement: nearly every regulation demands network separation. “PII / payment servers must not be directly reachable from the internet” = no EC2 dropped straight into a Public Subnet. Private Subnet + ALB is the standard answer.

When juniors hit this:

  • Company preparing for ISMS certification
  • Startup chasing enterprise customers and needing SOC 2
  • Launching a financial or healthcare service

Once any of these kick in, the Private Subnet architecture reclassifies from “infra cost” to “compliance cost” — non-negotiable.

3.5 Aside: What Is HA (High Availability)?

The table above mentions “2+ EC2 instances” and “99.9%+ SLA.” Both tie directly to HA, so a quick primer.

HA means “the service stays alive and doesn’t die.” ALB is one of the tools that help achieve HA.

flowchart TD
    HA["HA (goal)<br/>the service must always be up"]
    HA --> L1["Lever 1:<br/>Run 2+ EC2s<br/>(if one dies, others handle requests)"]
    HA --> L2["Lever 2:<br/>ALB spreads traffic<br/>(only to healthy EC2s)"]
    HA --> L3["Lever 3:<br/>Multi-AZ placement<br/>(if one AZ fails, another runs)"]

If you run a single EC2, the moment it dies the service is gone. With two or more, one can die and the rest keep serving — that “can survive one death” state is HA. ALB distributes traffic across them and automatically drops unhealthy instances out of rotation.
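What "99.9% SLA" actually allows is easy to compute. A quick sketch of the downtime budget, plus the basic reason 2+ instances help (the independence assumption is an idealization; real failures can be correlated):

```python
# Downtime budget implied by an availability target.
def allowed_downtime_minutes(sla: float, hours: float) -> float:
    return (1 - sla) * hours * 60

per_month = allowed_downtime_minutes(0.999, 730)   # ~43.8 min/month
per_year = allowed_downtime_minutes(0.999, 8760)   # ~525.6 min/year
print(f"{per_month:.1f} min/month, {per_year:.1f} min/year")

# Why 2+ instances matter: if each is down 1% of the time and the
# failures are independent, both are down only 0.01 * 0.01 of the time.
p_both_down = 0.01 * 0.01
print(f"{(1 - p_both_down):.2%} effective availability")  # 99.99%
```

Roughly 44 minutes per month: a single EC2 that needs a reboot plus a patch window can blow that budget on its own, which is why 99.9%+ commitments effectively force the multi-instance setup.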

Core decision point: Can you justify the $60–140/month that Private Subnet architecture adds? Spending $140 on a side project that runs fine on $40 is wasteful. Spending $40 on a service that handles PII just to save money is reckless. Detailed cost analysis comes in Part 5.

Concretely, you need a Private Subnet when:

  • Traffic is high enough that ALB’s load balancing is actually doing work
  • You run 2+ EC2 instances and availability matters
  • You have compliance requirements (finance, healthcare, PII)
  • Team size grew and you need access control

4. PII + Public Subnet: Three Concrete Risks

Above we said “cutting costs by leaving PII-handling servers in a Public Subnet is a risk.” That’s not a vague warning — it breaks down into three specific risks.

4.1 Direct Attack Surface Exposure

  • Public IPs are automatic scanner targets — within minutes of spinning up an EC2, bots start SSH brute-force attempts.
  • One SG mistake = DB/SSH exposed to the world — this misconfig causes real incidents every year.
  • Direct exposure = direct data exfiltration — attackers skip the “web → internal network → DB” pivot.
  • Private Subnet + ALB adds a defense layer — ALB blocks anomalies at L7; attach WAF and you also get attack detection and blocking.

4.2 Compliance Violations

  • Regulations mandate network separation — Korea’s PIPA and ISMS-P: “systems processing PII must be physically or logically separated from external networks.”
  • Instant audit finding — “PII server in a Public Subnet with a public IP” is classified as insufficient technical safeguards.
  • Evidence of failing reasonable protection duty after an incident — widens the scope of legal liability.

4.3 Broader Liability After Incidents

  • Private Subnet + ALB → you can argue “we applied the standard security architecture” (Well-Architected as evidence).
  • Public Subnet neglect → opens the judgment “security was sacrificed for cost.”
  • This judgment directly affects fines and damages — the wider the fault, the bigger the payout.

Bottom line: For a side project that only handles public data, Public Subnet + Security Group is fine. But the moment PII (user data, payment information, sensitive records) is involved, network-level isolation (Private Subnet) is insurance, not cost. Weigh the extra $60–100/month against potential fines and reputational damage, and the direction is obvious.

In practice, the pragmatic path is: start with Public Subnet + SG for small scale, and migrate to Private Subnet architecture when scale or data sensitivity changes. You don’t need to go full-fledged from day one — but when the nature of your data shifts, don’t hesitate.


Recap

Key takeaways from this post:

  1. The standard architecture is “ALB + Private EC2 + NAT Gateway.” Inbound only via ALB, outbound only via NAT, everything else blocked.
  2. Understand Public IPv4 vs EIP to see why neither is needed in this architecture — there’s no external exposure at all.
  3. Not every service needs this setup. A $40/month Public Subnet + SG is reasonable for small side projects. Move to Private Subnet when scale or compliance demands it.
  4. HA means “2+ EC2s + ALB + Multi-AZ.” This is often the practical tipping point that forces you into a Private Subnet architecture.
  5. PII + Public Subnet has three concrete risks: direct attack exposure, compliance violations, and broader liability after incidents. The moment PII is involved, Private Subnet is insurance, not cost.

Part 1 had one goal — making the architecture make sense. If you now think “oh, that’s why we do it this way” when you see a Private Subnet diagram, we’re done here. Part 2 starts building this architecture in actual code.

In the next post — Building VPC Infrastructure with Terraform — we design the VPC CIDR, lay out 2-AZ Public/Private subnets, wire up route tables, use the “SG-references-SG” pattern for Security Groups, and stand up the ALB and EC2 in a single main.tf that comes up with one terraform apply.


Appendix: Glossary for This Series

Bookmark this table and come back when acronyms blur together.

| Term | Meaning |
| --- | --- |
| VPC | Virtual Private Cloud. Your own virtual network inside AWS |
| Subnet | An IP range inside a VPC. Split into Public (internet-connected) and Private (internal only) |
| ALB | Application Load Balancer. An L7 load balancer that distributes traffic across multiple EC2s |
| NAT | Network Address Translation. Lets Private Subnet EC2s reach the internet outbound |
| AZ | Availability Zone. A physically separated datacenter inside a region. Seoul has 2a, 2b, 2c, 2d |
| SG | Security Group. Instance-level firewall attached to EC2/ALB/etc. |
| NACL | Network Access Control List. Subnet-level firewall |
| IAM | Identity and Access Management. AWS’s permissions system |
| SSM | AWS Systems Manager. An umbrella service for EC2 management (Session Manager, Run Command, etc.) |
| CloudTrail | AWS API call audit log — automatically records who did what, when |