
Instance Calculator

Compare EC2 instances to find the most cost-effective nodes for your workloads.

| Instance Type | Family | vCPU | RAM (GB) | Hourly | Monthly (~730 hrs) |
|---------------|--------|------|----------|---------|---------|
| t3.medium | T3 | 2 | 4 | $0.0416 | $30.37 |
| t3.large | T3 | 2 | 8 | $0.0832 | $60.74 |
| t3.xlarge | T3 | 4 | 16 | $0.1664 | $121.47 |
| t3.2xlarge | T3 | 8 | 32 | $0.3328 | $242.94 |
| m5.large | M5 | 2 | 8 | $0.0960 | $70.08 |
| m5.xlarge | M5 | 4 | 16 | $0.1920 | $140.16 |
| m5.2xlarge | M5 | 8 | 32 | $0.3840 | $280.32 |
| m5.4xlarge | M5 | 16 | 64 | $0.7680 | $560.64 |
| m6i.large | M6i | 2 | 8 | $0.0960 | $70.08 |
| m6i.xlarge | M6i | 4 | 16 | $0.1920 | $140.16 |
| m6i.2xlarge | M6i | 8 | 32 | $0.3840 | $280.32 |
| m6i.4xlarge | M6i | 16 | 64 | $0.7680 | $560.64 |
| c5.large | C5 | 2 | 4 | $0.0850 | $62.05 |
| c5.xlarge | C5 | 4 | 8 | $0.1700 | $124.10 |
| c5.2xlarge | C5 | 8 | 16 | $0.3400 | $248.20 |
| c5.4xlarge | C5 | 16 | 32 | $0.6800 | $496.40 |
| c6i.large | C6i | 2 | 4 | $0.0850 | $62.05 |
| c6i.xlarge | C6i | 4 | 8 | $0.1700 | $124.10 |
| c6i.2xlarge | C6i | 8 | 16 | $0.3400 | $248.20 |
| c6i.4xlarge | C6i | 16 | 32 | $0.6800 | $496.40 |
| r5.large | R5 | 2 | 16 | $0.1260 | $91.98 |
| r5.xlarge | R5 | 4 | 32 | $0.2520 | $183.96 |
| r5.2xlarge | R5 | 8 | 64 | $0.5040 | $367.92 |
| r5.4xlarge | R5 | 16 | 128 | $1.0080 | $735.84 |
| r6i.large | R6i | 2 | 16 | $0.1260 | $91.98 |
| r6i.xlarge | R6i | 4 | 32 | $0.2520 | $183.96 |
| r6i.2xlarge | R6i | 8 | 64 | $0.5040 | $367.92 |
| r6i.4xlarge | R6i | 16 | 128 | $1.0080 | $735.84 |
| m6g.medium | M6g | 1 | 4 | $0.0385 | $28.11 |
| m6g.large | M6g | 2 | 8 | $0.0770 | $56.21 |
| m6g.xlarge | M6g | 4 | 16 | $0.1540 | $112.42 |
| m6g.2xlarge | M6g | 8 | 32 | $0.3080 | $224.84 |
| m7g.medium | M7g | 1 | 4 | $0.0408 | $29.78 |
| m7g.large | M7g | 2 | 8 | $0.0816 | $59.57 |
| m7g.xlarge | M7g | 4 | 16 | $0.1632 | $119.14 |
| m7g.2xlarge | M7g | 8 | 32 | $0.3264 | $238.27 |
| c6g.medium | C6g | 1 | 2 | $0.0340 | $24.82 |
| c6g.large | C6g | 2 | 4 | $0.0680 | $49.64 |
| c6g.xlarge | C6g | 4 | 8 | $0.1360 | $99.28 |
| c6g.2xlarge | C6g | 8 | 16 | $0.2720 | $198.56 |
| c7g.medium | C7g | 1 | 2 | $0.0360 | $26.28 |
| c7g.large | C7g | 2 | 4 | $0.0725 | $52.92 |
| c7g.xlarge | C7g | 4 | 8 | $0.1450 | $105.85 |
| c7g.2xlarge | C7g | 8 | 16 | $0.2900 | $211.70 |
| r6g.medium | R6g | 1 | 8 | $0.0504 | $36.79 |
| r6g.large | R6g | 2 | 16 | $0.1008 | $73.58 |
| r6g.xlarge | R6g | 4 | 32 | $0.2016 | $147.17 |
| r6g.2xlarge | R6g | 8 | 64 | $0.4032 | $294.34 |

How to Use This Tool

  1. Set Your Requirements: Use the filters to specify minimum vCPU and RAM based on your workload needs.
  2. Sort by Price/GB: The "Price / GB RAM" column (monthly price divided by RAM) surfaces the most cost-effective instances for memory-bound workloads, which describes a large share of Kubernetes apps.
  3. Consider Graviton: Families with a "g" suffix (like m6g, c6g) are ARM-based and typically run 15-20% cheaper. Ensure your images are multi-arch (linux/arm64).
  4. Mix Node Types: Don't use one size for everything. Use T3 for dev, M5 for general workloads, and R5 for databases.
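The price-per-GB ranking from step 2 is simple to reproduce yourself. A minimal sketch, using a few rows copied from the table above (RAM in GB, monthly on-demand price):

```python
# Rank instances by monthly cost per GB of RAM -- the key metric for
# memory-bound workloads. Figures are copied from the pricing table above.
instances = {
    "m5.2xlarge":  {"ram_gb": 32, "monthly": 280.32},
    "c5.4xlarge":  {"ram_gb": 32, "monthly": 496.40},
    "r6i.large":   {"ram_gb": 16, "monthly": 91.98},
    "m6g.2xlarge": {"ram_gb": 32, "monthly": 224.84},
}

by_price_per_gb = sorted(
    instances.items(),
    key=lambda kv: kv[1]["monthly"] / kv[1]["ram_gb"],
)

for name, spec in by_price_per_gb:
    print(f"{name}: ${spec['monthly'] / spec['ram_gb']:.2f}/GB-month")
```

Note how the memory-optimized r6i.large comes out cheapest per GB even though compute-optimized instances look similar per hour.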

EC2 Instance Families for Kubernetes

T3/T3a (Burstable)

Best for: Dev/test clusters, low-traffic apps, CI/CD workers

Warning: Uses CPU credits. If you exceed baseline CPU for sustained periods, performance throttles. Not recommended for production.
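A rough sketch of how credit depletion plays out, assuming t3.medium's published figures (2 vCPU, 20% baseline per vCPU, 24 credits earned per hour, balance capped at ~24 hours of accrual; verify these against AWS docs for your instance size):

```python
# Estimate how long a burstable instance can sustain CPU above baseline
# before its credit balance empties and performance throttles.
# ASSUMED t3.medium figures -- check AWS documentation for your size.
VCPUS = 2
BASELINE = 0.20                                   # fraction of each vCPU covered by accrual
CREDITS_EARNED_PER_HOUR = BASELINE * VCPUS * 60   # 1 credit = 1 vCPU-minute -> 24/hr
MAX_BALANCE = CREDITS_EARNED_PER_HOUR * 24        # balance caps at ~24h of accrual -> 576

def hours_until_throttle(cpu_utilization: float, balance: float = MAX_BALANCE) -> float:
    """Hours a full credit balance lasts at a sustained utilization (0..1)."""
    spend = cpu_utilization * VCPUS * 60          # credits consumed per hour
    net_drain = spend - CREDITS_EARNED_PER_HOUR
    if net_drain <= 0:
        return float("inf")                       # at or below baseline: never throttles
    return balance / net_drain

# A sustained 60% load spends 72 credits/hr against 24 earned:
print(hours_until_throttle(0.6))  # 12.0 hours until throttling
```

This is why T3 works for spiky dev traffic but not for services that sit above baseline all day.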


M5/M6i (General Purpose)

Best for: 70% of production workloads, web apps, APIs, batch jobs

Balanced 1:4 CPU-to-RAM ratio. The safe default choice. M6i is newer with ~15% better price/performance than M5.


C5/C6i (Compute Optimized)

Best for: CPU-intensive apps, machine learning inference, video encoding

1:2 CPU-to-RAM ratio (e.g. 4 vCPU to 8 GB). Higher clock speeds than the M family. Use for workloads that burn CPU but don't need much RAM.


R5/R6i (Memory Optimized)

Best for: Java/Spring apps, Redis, Postgres, Elasticsearch, in-memory caches

1:8 CPU-to-RAM ratio. Expensive per hour, but cheapest per GB of RAM. Critical for memory-hungry workloads.


Graviton (m6g, c6g, r6g)

Best for: Any workload with multi-arch support (most modern apps)

Savings: 15-20% cheaper than Intel/AMD equivalents. ARM-based. Check that your Docker images support linux/arm64.
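One way to verify arm64 support is to inspect the image's manifest list (e.g. the JSON that `docker manifest inspect <image>` prints). A minimal sketch of that check; the sample manifest below is illustrative, not from a real image:

```python
# Check a manifest list for linux/arm64 support before scheduling the
# workload on Graviton nodes.
def supports_arm64(manifest: dict) -> bool:
    """True if any entry in the manifest list targets linux/arm64."""
    return any(
        entry.get("platform", {}).get("os") == "linux"
        and entry.get("platform", {}).get("architecture") == "arm64"
        for entry in manifest.get("manifests", [])
    )

# Illustrative multi-arch manifest list (shape matches `docker manifest inspect`):
sample = {
    "manifests": [
        {"platform": {"os": "linux", "architecture": "amd64"}},
        {"platform": {"os": "linux", "architecture": "arm64"}},
    ]
}
print(supports_arm64(sample))  # True
```

Single-arch images (amd64 only) will fail to start on Graviton nodes with exec format errors, so gate your rollout on this check.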

Common Instance Selection Mistakes

❌ Using One Size for Everything

Running m5.2xlarge for both your API and your Redis cache wastes money. Use node groups with different instance types.

❌ Ignoring Price/GB

A c5.4xlarge ($496.40/mo for 32 GB, ~$15.51/GB) costs roughly 77% more per GB of RAM than an m5.2xlarge ($280.32/mo for 32 GB, ~$8.76/GB). If your app is memory-bound, you're overpaying.

✅ Right-Size Node Groups

Create separate node groups: t3 for dev, m6i for APIs, r6i for databases. Use taints/tolerations to schedule pods correctly.
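As a sketch, the taint-and-toleration pairing for a database node group might look like this (the `workload=database` taint, pod name, and image are illustrative conventions, not required names; `node.kubernetes.io/instance-type` is the standard well-known node label):

```yaml
# Assumes the r6i node group is tainted, e.g.:
#   kubectl taint nodes <node> workload=database:NoSchedule
# Pods without the matching toleration will not schedule onto those nodes.
apiVersion: v1
kind: Pod
metadata:
  name: redis-cache                 # illustrative name
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: r6i.large
  tolerations:
    - key: workload                 # must match the taint key (illustrative)
      operator: Equal
      value: database
      effect: NoSchedule
  containers:
    - name: redis
      image: redis:7
```

The taint keeps general workloads off the expensive memory-optimized nodes; the toleration plus nodeSelector pins the memory-hungry pod onto them.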

✅ Test Graviton First

Start by migrating 10% of traffic to Graviton nodes. If it performs well, roll out the remainder and capture savings of up to 20%.

⚡ Quick Decision Guide

Dev cluster? Use t3.medium or t3.large
General production? Use m6i.xlarge with Spot
Java/Spring Boot? Use r6i.large (more RAM)
ML inference? Use c6i.2xlarge (more CPU)
Want 20% savings? Use m6g.xlarge (Graviton)
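The guide above can be encoded as a simple lookup for cluster tooling. A sketch; the category keys are my own labels and the picks simply mirror the guide, not an official recommendation:

```python
# Quick-decision guide from above as a lookup table.
GUIDE = {
    "dev": "t3.medium",              # dev clusters; t3.large if a bit more room is needed
    "general": "m6i.xlarge",         # general production (pair with Spot)
    "java": "r6i.large",             # JVM/Spring Boot: memory-heavy
    "ml-inference": "c6i.2xlarge",   # CPU-bound inference
    "graviton": "m6g.xlarge",        # ~20% savings if images are multi-arch
}

def pick_instance(workload: str) -> str:
    """Return the guide's suggestion, defaulting to the safe general-purpose pick."""
    return GUIDE.get(workload, "m6i.xlarge")

print(pick_instance("java"))  # r6i.large
```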