Docker and AWS are two of the most powerful tools in modern DevOps workflows. If you’re learning Docker or want hands-on cloud deployment experience, deploying a Docker container on an AWS EC2 instance is a perfect real-world exercise.
In this guide, you’ll learn step-by-step how to deploy a Docker container on an AWS EC2 instance, using a beginner-friendly and fully practical approach.
What You Will Learn
- How to launch an EC2 instance
- How to connect to EC2 using SSH
- Installing Docker on EC2
- Running a Docker container
- Exposing the container to the internet
- Configuring AWS security groups
- Troubleshooting common issues
Prerequisites
Before you start, make sure you have:
- An active AWS account (Free Tier is fine)
- Basic command line knowledge
- SSH client (Terminal on Linux/Mac or PuTTY for Windows)
- A basic understanding of Docker and containers
How to deploy a Docker container on AWS EC2
Follow the steps below:
Step 1: Launch an AWS EC2 Instance
- Log in to the AWS Console
- Go to EC2 → Launch Instance
- Choose Amazon Linux 2 AMI (or Ubuntu 22.04 LTS)
- Instance Type: Select t2.micro (Free Tier eligible)
- Create a new key pair (download the .pem file securely)
- Configure the Security Group:
- Allow port 22 (SSH)
- Allow port 80 or 8080 (for web traffic)
- Click Launch
⚠️ Don’t share your private key. It gives full access to your server.
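If you prefer the command line, the console steps above can also be sketched with the AWS CLI. Everything here is a placeholder sketch: the AMI ID, key pair name, and security group ID are illustrative values you would replace with your own.

```shell
# Sketch: Step 1 via the AWS CLI instead of the console.
# $1 = AMI ID (your region's Amazon Linux 2 image), $2 = key pair name,
# $3 = security group ID allowing ports 22 and 80. All are placeholders.
launch_instance() {
  aws ec2 run-instances \
    --image-id "$1" \
    --instance-type t2.micro \
    --key-name "$2" \
    --security-group-ids "$3" \
    --count 1
}

# Example (placeholder IDs):
# launch_instance ami-0abcdef1234567890 my-ec2-key sg-0123456789abcdef0
```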
Step 2: Connect to EC2 via SSH
Open your terminal and run:
chmod 400 your-key.pem
ssh -i your-key.pem ec2-user@<EC2_PUBLIC_IP>
For Ubuntu:
ssh -i your-key.pem ubuntu@<EC2_PUBLIC_IP>
If you’re on Windows, open PuTTY and use the .ppk version of your key.
Step 3: Install Docker on EC2
For Amazon Linux 2:
sudo yum update -y
sudo amazon-linux-extras install docker -y
sudo service docker start
sudo usermod -aG docker ec2-user
Log out and reconnect to apply Docker group access.
For Ubuntu:
sudo apt update
sudo apt install docker.io -y
sudo systemctl start docker
sudo usermod -aG docker ubuntu
Confirm Docker is installed:
docker --version
Step 4: Run a Docker Container
Let’s run a simple Nginx container:
docker run -d -p 80:80 nginx
To verify the container is running:
docker ps
You’ll see output like:
CONTAINER ID   IMAGE   COMMAND   ...   PORTS
123abc456xyz   nginx   ...             0.0.0.0:80->80/tcp
Step 5: Configure AWS Security Group
If you didn’t already add HTTP access while creating the EC2 instance:
- Go to EC2 → Security Groups
- Select the security group attached to your EC2 instance
- Click Edit Inbound Rules
- Add a new rule:
- Type: HTTP | Port: 80 | Source: 0.0.0.0/0
- OR use your custom port (e.g., 8080)
🔒 Tip: Only open the ports you need, and don’t expose sensitive services.
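The same inbound rule can also be added from the command line. A sketch with the AWS CLI (the security group ID is a placeholder):

```shell
# Sketch: open an inbound TCP port on a security group with the AWS CLI.
# $1 = security group ID (placeholder), $2 = port (defaults to 80).
open_inbound_port() {
  aws ec2 authorize-security-group-ingress \
    --group-id "$1" \
    --protocol tcp \
    --port "${2:-80}" \
    --cidr 0.0.0.0/0
}

# Example: open_inbound_port sg-0123456789abcdef0 80
```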
Step 6: Access the Container from the Browser
http://<EC2_PUBLIC_IP>
You should see the Nginx Welcome Page or your container’s application UI.
🎉 Congrats! Your Docker container is now running on the cloud!
Optional: Deploy Your Custom App
If you have your Docker image published on Docker Hub, run it like this:
docker run -d -p 8080:8080 your-dockerhub-user/your-app
Ensure that the port inside the image matches the exposed port.
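One way to check the container-side port is to inspect what the image declares with EXPOSE. A small helper sketch (the image name is a placeholder):

```shell
# Sketch: print the ports an image declares via EXPOSE, so you can match
# the -p host:container mapping to them. $1 = image name (placeholder).
show_exposed_ports() {
  docker image inspect --format '{{json .Config.ExposedPorts}}' "$1"
}

# Example: show_exposed_ports your-dockerhub-user/your-app
```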
Step 7: Make Docker Container Persistent Across Reboots
By default, if your EC2 instance reboots (or restarts), your running Docker containers will stop unless configured to restart.
To ensure your container starts automatically on reboot, first make sure the Docker daemon itself starts at boot (sudo systemctl enable docker), then use Docker’s restart policy:
docker run -d --restart unless-stopped -p 80:80 nginx
Common restart policies:
Policy | Description |
---|---|
no | Container won’t restart (default) |
always | Always restart the container when it stops, regardless of exit code |
unless-stopped | Restart unless manually stopped |
on-failure | Restart only if container exits with a non-zero code |
This is useful when you’re deploying production-grade apps and want uptime assurance.
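If the container is already running, the policy can also be changed in place with docker update rather than recreating the container. As a sketch:

```shell
# Sketch: apply a restart policy to an existing container without recreating it.
# $1 = policy (e.g. unless-stopped), $2 = container ID from `docker ps`.
set_restart_policy() {
  docker update --restart "$1" "$2"
}

# Example: set_restart_policy unless-stopped 123abc456xyz
```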
Step 8: Secure Your EC2 and Docker Environment
Security is often ignored in basic deployments, but is critical for production setups. Here are a few things you must consider:
Don’t expose ports unnecessarily: Only open ports in the AWS Security Group that are required. For example, if your container listens on port 5000 and it’s not a public-facing API, keep it closed.
Keep Docker and system packages updated: Regularly update Docker and your EC2 OS:
# For Amazon Linux
sudo yum update -y
sudo yum upgrade docker -y
Avoid using the root user: Always create a non-root user and give minimal permissions.
Remove unused containers/images: Unused containers/images consume disk space and may have security vulnerabilities:
docker system prune -a
🛡 Tip: Set up automated security updates using unattended-upgrades on Ubuntu.
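To keep the image/container cleanup running regularly, a weekly cron job is one option. A sketch, assuming cron is available on the instance (the schedule, Sundays at 03:00, is an arbitrary choice):

```shell
# Sketch: schedule `docker system prune` weekly via cron.
# Appends to the current crontab; runs Sundays at 03:00.
schedule_docker_cleanup() {
  ( crontab -l 2>/dev/null; echo "0 3 * * 0 docker system prune -af" ) | crontab -
}

# Example: schedule_docker_cleanup
```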
Step 9: Deploy a Sample Full-Stack App (Optional)
Let’s say you want to go beyond Nginx and run a real app, like a simple Node.js or Flask app containerized in Docker.
Example:
docker run -d -p 3000:3000 your-dockerhub-username/sample-node-app
Ensure your Dockerfile exposes port 3000 and your security group allows traffic to port 3000.
Then visit:
http://<EC2_PUBLIC_IP>:3000
🔄 This is great for testing your own applications or portfolio projects!
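Before testing from your browser, it can help to confirm the app answers on the instance itself first; that separates container problems from security group problems. A small sketch (port 3000 matches the example above):

```shell
# Sketch: check whether the app responds on localhost before trying the
# public IP. $1 = port (defaults to 3000, matching the example above).
check_app() {
  curl -fsS "http://localhost:${1:-3000}" > /dev/null && echo "app is responding"
}

# Example: check_app 3000
```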
Step 10: Monitor and Manage Docker on EC2
Check Docker system usage:
docker system df
Logs of your container:
docker logs <container_id>
Stop and remove containers:
docker stop <container_id>
docker rm <container_id>
View CPU/memory usage:
Install htop or use:
docker stats
Use Case: When Should You Use EC2 to Deploy Docker?
While EC2 is great for learning and quick deployments, here’s when it makes the most sense:
Use Case | Use EC2? |
---|---|
Learning Docker | ✅ Yes |
Running small apps | ✅ Yes |
Microservices in production | ❌ No — use ECS or Kubernetes (EKS) |
Auto-scaling apps | ❌ Better with ECS/Fargate |
Full CI/CD workflows | ✅ Paired with Jenkins |
So if you’re running a personal project, testing environment, or staging setup, EC2 + Docker is cost-effective and flexible.
Bonus: Clean Up Resources After Testing
If you’re done testing, don’t forget to terminate your EC2 instance to avoid charges:
- Go to EC2 → Instances
- Select your instance → Click Actions → Instance State → Terminate
Also, delete the security group, key pair, and any EBS volumes if unused.
Common Issues and Solutions
Issue | Solution |
---|---|
Permission denied (public key) | Check file permissions: chmod 400 your-key.pem |
Docker not installed | Reinstall using the proper package manager |
App not accessible in browser | Check if the correct port is open in the security group |
SSH not working | Ensure port 22 is open in the security group and you’re using the correct key and username |
Container crashed | Check the logs with docker logs <container_id> |
Summary
You’ve just completed a full cloud deployment workflow:
- Launch and configure an EC2 instance
- SSH into your instance and install Docker
- Deploy, expose, and secure a container
- Automatically restart Docker apps on reboot
- Troubleshoot and monitor your Docker setup
- Decide when to use EC2 vs managed services
Whether you’re testing, learning, or preparing for interviews, this hands-on guide is a practical starting point.
Frequently Asked Questions (FAQs)
Q1. Is EC2 the best place to run Docker in production?
It’s good for small apps and testing. For production, prefer Amazon ECS, Fargate, or EKS.
Q2. Is this setup free?
Yes! EC2 t2.micro and basic AWS services are Free Tier eligible for 12 months (as of 2025).
Q3. How can I auto-start containers after reboot?
Use Docker’s restart policy:
docker run -d --restart unless-stopped -p 80:80 nginx
Q4. Can I run multiple containers?
Yes, you can map each container to a different port or use Docker Compose.
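As a sketch, a docker-compose.yml running two containers on different ports might look like this (the second image name is a placeholder):

```yaml
# docker-compose.yml (sketch; the second image is a placeholder)
services:
  web:
    image: nginx
    ports:
      - "80:80"
    restart: unless-stopped
  app:
    image: your-dockerhub-user/your-app
    ports:
      - "8080:8080"
    restart: unless-stopped
```

Bring both up with docker compose up -d.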
Conclusion
Deploying Docker containers on AWS EC2 gives you the flexibility to test, build, and scale applications in the cloud. Whether you’re learning DevOps or preparing for production-ready deployment, this hands-on guide gives you a solid foundation.
🔔 Want more DevOps guides? Subscribe to our newsletter or explore DevOpshowTo for more!