I created this suite of Ansible playbooks to provision a basic AWS (Amazon Web Services) infrastructure on EC2 with a Staging instance, and to deploy a webapp to that instance, running in a Docker container pulled from Docker Hub.
First, a Docker image is built locally and pushed to a private Docker Hub repository; then the EC2 SSH key and Security Groups are created, and a Staging instance is provisioned. Next, the Docker image is pulled on the Staging instance and a container is started from it, with nginx set up on the instance to proxy web requests to the container. Finally, a DNS entry for the Staging instance is added in Route 53.
This is a simple Ansible framework to serve as a basis for building Docker images for your webapp and deploying them as containers on Amazon EC2. It can be expanded in multiple ways, the most obvious being to add an auto-scaled Production environment with Docker containers and a load balancer. (For Ansible playbooks suitable for provisioning an auto-scaled Production environment, check out my previous article and associated files “How to use Ansible for automated AWS provisioning”.) More complex apps could be split across multiple Docker containers for handling front-end and back-end components, so this could also be added as needed.
CentOS 7 is used for the Docker container, but this can be changed to a different Linux distro if desired. Amazon Linux 2 is used for the Staging instance on EC2.
I created a very basic Python webapp to use as an example for the deployment here, but you can replace that with your own webapp should you so wish.
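To give an idea of what "very basic" means here, a webapp of this kind can be as little as a stdlib HTTP server answering on port 8080, the port the container publishes later in these playbooks. The following is purely an illustrative sketch and is not the actual contents of the simple-webapp repository, whose details may differ:
#!/usr/bin/env python3
# Purely illustrative sketch of a minimal webapp; the real simple-webapp
# repo may be structured differently. Serves a plain-text response on
# port 8080, matching the port published by the Docker container.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from the webapp!\n")

if __name__ == "__main__":
    # Bind to all interfaces so the app is reachable from outside the container
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()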
N.B. Until you’ve tested this and honed it to your needs, run it in a completely separate environment for safety reasons, otherwise there is potential here for accidental destruction of parts of existing environments. Create a separate VPC specifically for this, or even use an entirely separate AWS account.
All the Ansible playbooks and supporting files can be found in this repository on my GitHub.
Installation and setup
- You’ll need an AWS account with a VPC set up, and with a DNS domain set up in Route 53.
- Install and configure the latest version of the AWS CLI. The settings in the AWS CLI configuration files are needed by the Ansible modules in these playbooks. If you’re using a Mac, I’d recommend using Homebrew as the simplest way of installing and managing the AWS CLI.
- If you don’t already have it, you’ll need Python 3. You’ll also need the boto and boto3 Python modules (for Ansible modules and dynamic inventory) which can be installed via pip.
- Ansible needs to be installed and configured. Again, if you’re on a Mac, using Homebrew for this is probably best.
- Docker needs to be installed and running. For this it’s probably best to refer to the instructions on the Docker website.
- A Docker account is required, and a private repository needs to be set up on Docker Hub.
- Copy etc/variables_template.yml to etc/variables.yml and update the static variables at the top for your own environment setup.
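For reference, the variables file looks something like the following. This is an illustrative sketch inferred from the variables the playbooks below reference, with placeholder values you'd replace for your own environment; the actual template in the repository may differ slightly:
# Static variables (set these for your own environment):
app_name: simple-webapp
docker_user: my-docker-user              # your Docker Hub username
docker_pass: my-docker-password          # your Docker Hub password
docker_repo: my-repo                     # your private Docker Hub repository
vpc_id: vpc-0123456789abcdef0            # the VPC to provision into
vpc_subnet_id_1: subnet-0123456789abcdef0
ec2_al2_image_id: ami-0123456789abcdef0  # Amazon Linux 2 AMI ID for your region
my_ip: 203.0.113.1                       # your public IP, for SSH and direct app access
route53_zone: yourdomain.com             # your DNS domain in Route 53

# Dynamic variables (populated automatically by the playbooks):
ec2_sg_app_id:
ec2_staging_instance_public_dns: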
Usage
These playbooks are run in the standard way, i.e.:
ansible-playbook PLAYBOOK_NAME.yml
Note that Step 4 requires the addition of:
-i etc/inventory.aws_ec2.yml
to use the dynamic inventory, and also the addition of:
-e 'ansible_python_interpreter=/usr/bin/python3'
to ensure it uses Python 3 on the Staging instance.
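The dynamic inventory file itself isn't reproduced in this article. As a rough sketch (the actual etc/inventory.aws_ec2.yml in the repository may differ), an aws_ec2 plugin configuration producing the tag_Environment_Staging group targeted by deploy_staging.yml could look like this:
# Illustrative sketch of etc/inventory.aws_ec2.yml; the real file may differ.
plugin: aws_ec2
regions:
  - eu-west-1                    # assumption: replace with your own region
filters:
  instance-state-name: running   # only consider running instances
keyed_groups:
  # Builds groups from instance tags, e.g. tag_Environment_Staging,
  # which deploy_staging.yml targets via its "hosts:" line.
  - prefix: tag
    key: tags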
To deploy your own webapp instead of my basic Python app, you’ll need to modify build_local.yml so it pulls your own app from your git repository, then you can edit the variables as needed in etc/variables.yml.
Playbooks for build/provisioning/deployment
1. build_local.yml
Pulls the webapp from GitHub, builds a Docker image using docker/Dockerfile which runs the webapp, and pushes the image to a private Docker Hub repository:
---
- name: Build Docker image locally
  hosts: localhost
  connection: local
  tasks:
    - name: Import variables
      include_vars: etc/variables.yml
    - name: Get app from GitHub
      git:
        repo: "https://github.com/mattbrock/simple-webapp.git"
        dest: "docker/{{ app_name }}"
        force: yes
    - name: Log in to Docker Hub repo
      docker_login:
        username: "{{ docker_user }}"
        password: "{{ docker_pass }}"
    - name: Build Docker image and push to Docker Hub repo
      docker_image:
        build:
          path: ./docker
        name: "{{ docker_repo }}/{{ app_name }}"
        push: yes
        source: build
        force_source: yes
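The docker/Dockerfile used for this build isn't shown in the article. As a purely illustrative sketch (the real file in the repository may differ, and the app.py entry point is an assumption), a CentOS 7 based image running a Python webapp on port 8080 might look like:
# Illustrative sketch only; the repository's actual docker/Dockerfile may differ.
FROM centos:7

# Install Python 3 to run the webapp
RUN yum -y install python3 && yum clean all

# build_local.yml clones the app into docker/<app_name>, so it is
# available in the build context under that directory name
COPY simple-webapp /opt/simple-webapp

EXPOSE 8080

# app.py is an assumed entry point for illustration
CMD ["python3", "/opt/simple-webapp/app.py"]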
2. provision_key_sg.yml
Provisions an EC2 SSH key and Security Groups:
---
- name: Provision SSH key and Security Groups
  hosts: localhost
  connection: local
  tasks:
    - name: Import variables
      include_vars: etc/variables.yml
    - name: Create EC2 SSH key
      ec2_key:
        name: "{{ app_name }}"
      register: ec2_key
    - name: Save EC2 SSH key to file
      copy:
        content: "{{ ec2_key.key.private_key }}"
        dest: etc/ec2_key.pem
        mode: 0600
      when: ec2_key.changed
    - name: Create Security Group for app instance
      ec2_group:
        name: EC2 App Servers
        description: EC2 VPC Security Group for App Servers
        vpc_id: "{{ vpc_id }}"
        rules:
          - proto: tcp
            ports: 80
            cidr_ip: 0.0.0.0/0
            rule_desc: Allow app access to nginx from anywhere
          - proto: tcp
            ports: 8080
            cidr_ip: "{{ my_ip }}/32"
            rule_desc: Allow direct app access from my IP
          - proto: tcp
            ports: 22
            cidr_ip: "{{ my_ip }}/32"
            rule_desc: Allow SSH from my IP
      register: ec2_sg_app
    - name: Update variables file with Security Group ID
      lineinfile:
        path: etc/variables.yml
        regex: '^ec2_sg_app_id:'
        line: "ec2_sg_app_id: {{ ec2_sg_app.group_id }}"
      when: ec2_sg_app.changed
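If you want to confirm the Security Group exists and check the ID that was written into etc/variables.yml, you can look it up via the AWS CLI, for example:
aws ec2 describe-security-groups --filters Name=group-name,Values="EC2 App Servers" --query "SecurityGroups[*].GroupId"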
3. provision_staging.yml
Provisions a Staging instance based on the official Amazon Linux 2 AMI:
---
- name: Provision Staging instance
  hosts: localhost
  connection: local
  tasks:
    - name: Import variables
      include_vars: etc/variables.yml
    - name: Launch Staging instance
      ec2_instance:
        name: Staging
        key_name: "{{ app_name }}"
        vpc_subnet_id: "{{ vpc_subnet_id_1 }}"
        instance_type: t2.micro
        security_group: "{{ ec2_sg_app_id }}"
        network:
          assign_public_ip: true
        image_id: "{{ ec2_al2_image_id }}"
        tags:
          Environment: Staging
        wait: yes
      register: staging_instance
    - name: Update variables file with instance public DNS
      lineinfile:
        path: etc/variables.yml
        regex: '^ec2_staging_instance_public_dns:'
        line: "ec2_staging_instance_public_dns: {{ staging_instance.instances[0].public_dns_name }}"
4. deploy_staging.yml
Sets up the Staging instance, pulls the Docker image on the EC2 instance and runs a container, and sets up nginx to proxy incoming requests (on port 80) to the container (with the app running on port 8080):
---
- name: Deploy app on Staging instance
  vars:
    ansible_ssh_private_key_file: etc/ec2_key.pem
  hosts: tag_Environment_Staging
  remote_user: ec2-user
  tasks:
    - name: Import variables
      include_vars: etc/variables.yml
    - name: Update OS
      become: yes
      yum:
        name: "*"
        state: latest
    # This is necessary to be able to use yum to install Docker and nginx.
    # There doesn't seem to be an Ansible module for amazon-linux-extras,
    # so am using the command module for now instead.
    - name: Enable necessary amazon-linux-extras repositories
      become: yes
      command: amazon-linux-extras enable docker nginx1
    - name: Install nginx
      become: yes
      yum:
        name: nginx
        state: latest
    - name: Deploy nginx config for proxying through to Docker container
      become: yes
      copy:
        src: "etc/{{ app_name }}.conf"
        dest: "/etc/nginx/default.d/{{ app_name }}.conf"
    - name: Enable and start nginx
      become: yes
      service:
        name: nginx
        enabled: yes
        state: started
    - name: Install Docker
      become: yes
      yum:
        name: docker
        state: latest
    - name: Install Docker module for Python
      pip:
        executable: pip3
        name: docker
    - name: Enable and start Docker service
      become: yes
      service:
        name: docker
        enabled: yes
        state: started
    - name: Add user to docker group
      become: yes
      user:
        name: ec2-user
        groups: docker
    - name: Reset SSH connection so group change takes effect
      meta: reset_connection
    - name: Log in to Docker Hub
      docker_login:
        username: "{{ docker_user }}"
        password: "{{ docker_pass }}"
    - name: Pull Docker image
      docker_image:
        name: "{{ docker_repo }}/{{ app_name }}"
        source: pull
        force_source: yes
    - name: Run Docker container
      docker_container:
        image: "{{ docker_repo }}/{{ app_name }}"
        name: "{{ app_name }}"
        detach: yes
        published_ports: "8080:8080"
        state: started
The deploy_staging.yml playbook requires dynamic inventory specification and use of Python 3, so run as follows:
ansible-playbook -i etc/inventory.aws_ec2.yml -e 'ansible_python_interpreter=/usr/bin/python3' deploy_staging.yml
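The nginx config copied into /etc/nginx/default.d/ isn't reproduced in this article. As an illustrative sketch (the repository's etc/simple-webapp.conf may differ), something like the following would proxy requests arriving on port 80 through to the container on port 8080; on Amazon Linux 2 the stock nginx.conf includes /etc/nginx/default.d/*.conf inside its default server block, so a bare location block suffices:
# Illustrative sketch of etc/simple-webapp.conf; the real file may differ.
location / {
    # Forward requests to the Docker container listening on port 8080
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}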
5. provision_dns.yml
Provisions the DNS in Route 53 for the Staging instance; note that it may take a few minutes for the DNS to propagate before it becomes usable:
---
- name: Provision DNS
  hosts: localhost
  connection: local
  tasks:
    - name: Import variables
      include_vars: etc/variables.yml
    - name: Add a CNAME record for staging.domain
      route53:
        state: present
        zone: "{{ route53_zone }}"
        record: "staging.{{ route53_zone }}"
        type: CNAME
        value: "{{ ec2_staging_instance_public_dns }}"
        ttl: 300
        overwrite: yes
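To check whether the new record has propagated yet, you can query it directly, e.g. with dig (replacing "yourdomain.com" with your actual domain):
dig +short staging.yourdomain.com CNAME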
Running order
Running the later playbooks without first running the earlier ones will fail due to missing components, variables and so on.
Running all five playbooks in succession will set up the entire infrastructure from start to finish.
Redeployment
Once the Staging environment is up and running, any changes to the app can be rebuilt and redeployed to Staging by running Steps 1 and 4 again.
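In other words, a rebuild and redeploy amounts to:
ansible-playbook build_local.yml
ansible-playbook -i etc/inventory.aws_ec2.yml -e 'ansible_python_interpreter=/usr/bin/python3' deploy_staging.yml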
Playbooks for deprovisioning
1. destroy_all.yml
Destroys the entire AWS infrastructure:
---
- name: Destroy entire infrastructure
  hosts: localhost
  connection: local
  tasks:
    - name: Import variables
      include_vars: etc/variables.yml
    - name: Delete CNAME record for staging.domain
      route53:
        state: absent
        zone: "{{ route53_zone }}"
        record: "staging.{{ route53_zone }}"
        type: CNAME
        value: "{{ ec2_staging_instance_public_dns }}"
        ttl: 300
    - name: Terminate all EC2 instances
      ec2_instance:
        state: absent
        filters:
          instance-state-name: running
          tag:Name: Staging
        wait: yes
    - name: Delete Security Group for app instances
      ec2_group:
        group_id: "{{ ec2_sg_app_id }}"
        state: absent
    - name: Delete EC2 SSH key
      ec2_key:
        name: "{{ app_name }}"
        state: absent
2. delete_all.yml
Clears all dynamic variables in the etc/variables.yml file, deletes the EC2 SSH key, removes the local Docker image, and deletes the local webapp repo in the docker directory:
---
- name: Delete dynamic variables, SSH key file, local Docker image and local app repo
  hosts: localhost
  connection: local
  tasks:
    - name: Import variables
      include_vars: etc/variables.yml
    - name: Remove Staging instance public DNS from variables file
      lineinfile:
        path: etc/variables.yml
        regex: '^ec2_staging_instance_public_dns:'
        line: "ec2_staging_instance_public_dns:"
    - name: Remove app instances Security Group from variables file
      lineinfile:
        path: etc/variables.yml
        regex: '^ec2_sg_app_id:'
        line: "ec2_sg_app_id:"
    - name: Delete SSH key file
      file:
        path: etc/ec2_key.pem
        state: absent
    - name: Remove local Docker image
      docker_image:
        name: "{{ docker_repo }}/{{ app_name }}"
        state: absent
    - name: Delete local app repo folder
      file:
        path: "docker/{{ app_name }}"
        state: absent
Destruction/deletion notes
USE destroy_all.yml WITH EXTREME CAUTION! If you’re not operating in a completely separate environment, or if your shell is configured for the wrong AWS account, you could potentially cause serious damage with this. Always check before running that you are working in the correct isolated environment and that you are absolutely 100 percent sure you want to do this. Don’t say I didn’t warn you!
Because it can take some time to deprovision certain elements, some tasks in destroy_all.yml may initially fail. This is nothing to worry about: if it happens, wait a little while, then run the playbook again until all tasks have succeeded.
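To confirm that the Staging instance has finished terminating before re-running the playbook, you can check its state via the AWS CLI:
aws ec2 describe-instances --filters "Name=tag:Name,Values=Staging" --query "Reservations[*].Instances[*].State.Name"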
Once everything has been fully destroyed, it’s safe to run the delete_all.yml playbook to clear out the variables file. Do not run this until you are sure everything has been fully destroyed, because the SSH key file cannot be recovered once it has been deleted.
Checking the Docker image in a local container
After building the Docker image in Step 1, if you want to run a local container from the image for initial testing purposes, you can use standard Docker commands for this:
docker run -d --name simple-webapp -p 8080:8080 my-repo/simple-webapp
(replacing “my-repo” with the name of your Docker Hub repo, and “simple-webapp” as needed if you’re running your own webapp).
You should then be able to make a request to the local container at:
http://localhost:8080/
To check the logs:
docker logs simple-webapp
To stop the container:
docker stop simple-webapp
To remove it:
docker rm simple-webapp
Checking the Staging site
To check the app on Staging once deployed in Step 4, you can get the Staging instance’s public DNS via the AWS CLI with this command:
aws ec2 describe-instances --filters "Name=tag:Environment,Values=Staging" --query "Reservations[*].Instances[*].PublicDnsName"
Then check it in your browser at:
http://ec2-xxx-xxx-xxx-xxx.xx-xxxx-x.compute.amazonaws.com/
(replacing “ec2-xxx-xxx-xxx-xxx.xx-xxxx-x.compute.amazonaws.com” with the actual public address of the instance).
To bypass nginx and make the request directly to the container, go to:
http://ec2-xxx-xxx-xxx-xxx.xx-xxxx-x.compute.amazonaws.com:8080/
(This is only accessible from your IP and not publicly accessible, as per the Security Group rules.)
Once Step 5 has been run to create the DNS entry (and you’ve waited a little while for the DNS to propagate) you can visit your Staging site at http://staging.yourdomain.com/ (obviously replacing “yourdomain.com” with your actual domain as specified in the etc/variables.yml file).
Checking the logs on the Staging instance, and running other ad hoc Ansible commands
To run ad hoc commands (e.g. uptime in this example) remotely with Ansible (without playbooks), you can use the ansible command as follows:
ansible -i etc/inventory.aws_ec2.yml -u ec2-user --private-key etc/ec2_key.pem tag_Environment_Staging -m shell -a uptime
You can use this method to check the Docker webapp logs as follows:
ansible -i etc/inventory.aws_ec2.yml -u ec2-user --private-key etc/ec2_key.pem tag_Environment_Staging -m shell -a "docker logs simple-webapp"
(replacing simple-webapp with the correct app name if you’re using your own webapp).
Connecting to instances via SSH
If you need to SSH into the Staging instance once it’s running after Step 3, get the public DNS name using the command above, then SSH in with:
ssh -i etc/ec2_key.pem ec2-user@ec2-xxx-xxx-xxx-xxx.xx-xxxx-x.compute.amazonaws.com
Final thoughts
I hope this is a helpful guide for first steps to take when running containerised Docker apps within an EC2 environment. If you need help with any of the issues raised in this article, or with any other infrastructure, automation, DevOps or SysAdmin projects or tasks, don’t hesitate to get in touch regarding the freelance services I offer.