Creating an automated deployment of a web server from GitHub on AWS using Terraform

Sriramadasu Prasanth Kumar
7 min read · Sep 29, 2020

Hello guys, I am back with a new article. In this article I build the AWS infrastructure for an automated deployment of a web server from GitHub using Terraform code.

Project Outline:

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance use an existing or provided key and the security group created in step 1.

4. Create one volume using the EFS service, attach it to your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Theoretical Concepts:

What is AWS?

Amazon Web Services (AWS) is a secure cloud services platform, offering compute power, database storage, content delivery and other functionality to help businesses scale and grow, such as running web and application servers in the cloud to host dynamic websites.

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter.

In simple terms, Terraform is a tool for describing cloud infrastructure as code. With it we can create or destroy the entire cloud infrastructure with a single command, and it supports all the major cloud providers.

By using Terraform we can manage multiple clouds: by knowing one language we can create infrastructure in any of them. Terraform provides Infrastructure as Code (IaC), which lets us create and destroy the infrastructure with a single command.
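As a minimal illustration of that declarative style (a sketch only; the AMI ID here is a placeholder, not part of this project):

```hcl
# Declare the provider and a resource; Terraform works out the
# dependency graph and creates or destroys everything with one command.
provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "example" {
  ami           = "ami-00000000000000000" # placeholder AMI ID
  instance_type = "t2.micro"
}
```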

What is GitHub?

GitHub is a code hosting platform for version control and collaboration. It lets you and others work together on projects from anywhere. This tutorial teaches you GitHub essentials like repositories, branches, commits, and Pull Requests.

What is CloudFront?

Amazon CloudFront is a content delivery network offered by Amazon Web Services. Content delivery networks provide a globally-distributed network of proxy servers which cache content, such as web videos or other bulky media, more locally to consumers, thus improving access speed for downloading the content.

What is EFS?

Amazon Elastic File System is a cloud storage service provided by Amazon Web Services designed to provide scalable, elastic, concurrent with some restrictions, and encrypted file storage for use with both AWS cloud services and on-premises resources.

That’s all guys, these are the main things you should know before building this project.

Approach:

Step 1:

In this project we are using the AWS cloud, so the provider is “aws”.

provider "aws" {
  profile = "prasanth"
  region  = "ap-south-1"
}

Here the profile keeps the access key and secret key out of the code; the AWS CLI stores them locally.

Creating the profile:

aws configure --profile prasanth

Then it will prompt for access key and secret key.
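The command writes the keys to the AWS CLI credentials file (typically `~/.aws/credentials`) under the profile name you passed; for the `prasanth` profile it looks roughly like this (the key values shown are placeholders):

```ini
[prasanth]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```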

Creating the security group

resource "aws_security_group" "MySG" {
  name        = "MySG"
  description = "Allow port 80"
  vpc_id      = "vpc-0fb5a33c18e258d16"

  ingress {
    description = "PORT 80"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow port 80"
  }
}

Step 2:

Launching EC2 instance

For launching an instance we need a key pair, so first we create one.

Creating the key pair

resource "tls_private_key" "mykey" {
  algorithm = "RSA"
}

resource "aws_key_pair" "generated_key" {
  key_name   = "mytask2key"
  public_key = tls_private_key.mykey.public_key_openssh

  depends_on = [
    tls_private_key.mykey
  ]
}

Storing the private key on the local machine

resource "local_file" "store_key_value" {
  content  = tls_private_key.mykey.private_key_pem
  filename = "mytask2key.pem"

  depends_on = [
    tls_private_key.mykey
  ]
}

Creating the EC2 instance

resource "aws_instance" "task2os" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name      = aws_key_pair.generated_key.key_name

  # IDs go in vpc_security_group_ids; security_groups expects names
  vpc_security_group_ids = [aws_security_group.MySG.id]

  tags = {
    Name = "task2os"
  }
}

This creates the EC2 instance. After the instance is created we print its public IP.

output "myos_ip" {
  value = aws_instance.task2os.public_ip
}

After creating the instance we need to connect to it and install the web server and git.

resource "null_resource" "remote-exec" {
  depends_on = [aws_instance.task2os]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mykey.private_key_pem
    host        = aws_instance.task2os.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }
}

Step 3:

Creating the EFS storage and attaching it to the EC2 instance

resource "aws_efs_file_system" "allow-nfs" {
  creation_token = "allow-nfs"

  tags = {
    Name = "allow-nfs"
  }
}

This creates the EFS storage

resource "aws_efs_mount_target" "efs_mount" {
  file_system_id  = aws_efs_file_system.allow-nfs.id
  subnet_id       = "subnet-007d3126a1d3909a1"
  security_groups = [aws_security_group.MySG.id]
}

This code attaches the EFS storage to the subnet and security groups.

Step 4:

Saving the instance’s public IP into a local file for reference.

resource "null_resource" "nulllocal2" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.task2os.public_ip} > publicip.txt"
  }
}

After the mount target is created we connect to the instance, mount the EFS file system on /var/www/html, and clone the GitHub repo into it.

resource "null_resource" "nullremote3" {
  depends_on = [
    aws_efs_mount_target.efs_mount,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mykey.private_key_pem
    host        = aws_instance.task2os.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # EFS is a network file system: mount it over NFS via the file
      # system's DNS name rather than formatting a block device.
      "sudo mount -t nfs4 -o nfsvers=4.1 ${aws_efs_file_system.allow-nfs.id}.efs.ap-south-1.amazonaws.com:/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Sriramadasu/Web_server.git /var/www/html/"
    ]
  }
}

Step 5:

Creating the S3 bucket for storing the images from the GitHub repo.

resource "aws_s3_bucket" "task2_bucket" {
  bucket        = "my-task2-aws-bucket"
  acl           = "public-read"
  force_destroy = true

  tags = {
    Name = "My bucket"
  }
}

locals {
  s3_origin_id = "S3Origin"
}

NOTE: the bucket name must be globally unique.

resource "aws_s3_bucket_public_access_block" "aws_public_access" {
  bucket              = aws_s3_bucket.task2_bucket.id
  block_public_acls   = false
  block_public_policy = false
}

This allows public access to the bucket so the images can be served.
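Step 7 of the outline also copies the repo’s images into the bucket, which isn’t shown above. A hedged sketch, assuming the repo has been cloned locally and contains an `images/image1.png` file (both the folder and file names are hypothetical):

```hcl
# Upload one image from the cloned repo and make it publicly readable.
resource "aws_s3_bucket_object" "image_upload" {
  bucket = aws_s3_bucket.task2_bucket.bucket
  key    = "image1.png"                   # object name in the bucket (hypothetical)
  source = "Web_server/images/image1.png" # local path after cloning (hypothetical)
  acl    = "public-read"
}
```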

Step 6:

Creating the CloudFront distribution

resource "aws_cloudfront_distribution" "cloudfront" {
  origin {
    domain_name = aws_s3_bucket.task2_bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
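Step 8 of the outline uses the CloudFront URL in the code under /var/www/html. A small sketch that prints the distribution’s domain name after apply so it can be copied into the pages (the output name is my own choice):

```hcl
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.cloudfront.domain_name
}
```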

That’s it, the code is ready. First initialize the working directory so Terraform downloads the AWS provider plugin, then preview the changes:

terraform init

terraform plan

The plan output lists the resources that will be added, changed, or destroyed.

If the plan looks good and you want to apply it to your cloud, then use

terraform apply --auto-approve

Once the apply finishes successfully, your infrastructure is ready.

Here I am attaching the screenshots of services running.

For destroying the entire infrastructure, we need just one command:

terraform destroy --auto-approve

Yeah! It successfully destroyed the complete infrastructure.

That’s it guys, we did it!

Thank you guys for giving your valuable time to read my article.
