Sahil Raj
5 min read · Jun 16, 2020


Deployment of Infrastructure Using Terraform

In this competitive world, cloud computing is a necessity, and with it comes the need for multi-cloud: using services from different cloud providers based on lower cost, lower latency, and better security. To deploy projects across multiple clouds we would have to learn the commands of every cloud, and training a team on all of them costs money and time we can't afford these days. This is where Terraform comes into play: it lets us configure any cloud by writing code in a single language, and it ships with built-in support for all the major clouds.
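
To make that concrete, here is a minimal sketch (not part of this project) of how the same HCL syntax can drive more than one cloud at once; it assumes the AWS and Azure provider plugins are installed:

# Illustration only: two providers declared side by side in one
# configuration. Each cloud gets its own provider block, and the
# same terraform commands manage resources on both.
provider "aws" {
  region = "ap-south-1"
}

provider "azurerm" {
  features {}
}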

So here is an example of creating infrastructure using Terraform.

Problem statement:

  1. Create a key pair and a security group that allows port 80.
  2. Launch an EC2 instance with the help of the security group and key created.
  3. Launch one EBS volume and mount it to /var/www/html.
  4. Copy the GitHub code into /var/www/html.
  5. Create an S3 bucket, copy the images into it, and make them publicly readable.
  6. Create a CloudFront distribution using the S3 bucket and update the code in /var/www/html with the CloudFront URL.

1. Logging in to AWS and creating a security group:

We logged in to our AWS account and created a security group that allows HTTP traffic, since we are hosting the website on an httpd server. We also allowed port 22 so that we can SSH into our AWS instance from outside.

provider "aws" {
  region  = "ap-south-1"
  profile = "sahil123"
}

resource "aws_security_group" "my-security-group" {
  name        = "my-security-group"
  description = "Allow HTTP and SSH inbound traffic"

  ingress {
    description = "allow http traffic"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "allow ssh traffic"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "my-security-group"
  }
}
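
The problem statement also asks us to create a key; in this run the key pair mykey11 used below is assumed to already exist in the AWS account. As a sketch, Terraform could also generate it with the tls and aws_key_pair resources:

# Sketch: generate the key pair in Terraform instead of creating it
# manually in the AWS console.
resource "tls_private_key" "mykey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "mykey11" {
  key_name   = "mykey11"
  public_key = tls_private_key.mykey.public_key_openssh
}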

2. Launching an OS and creating an external volume:

We launched an instance for our web server and attached a key to it. We also created an external EBS volume and attached it to the instance. We set the force_detach option so that the volume can be forcibly detached if we want to.

resource "aws_instance" "my-webserver" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "mykey11"
  security_groups = [aws_security_group.my-security-group.name]

  tags = {
    Name = "myterraos"
  }
}

resource "aws_ebs_volume" "myvol" {
  availability_zone = aws_instance.my-webserver.availability_zone
  size              = 1

  tags = {
    Name = "volume-1"
  }
}

resource "aws_volume_attachment" "myvol-attach" {
  device_name  = "/dev/xvdh"
  volume_id    = aws_ebs_volume.myvol.id
  instance_id  = aws_instance.my-webserver.id
  force_detach = true
}
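
As a small convenience (not in the original run), an output block can print the instance's public IP after apply, which is handy for the SSH steps below:

# Prints the web server's public IP at the end of terraform apply.
output "webserver_public_ip" {
  value = aws_instance.my-webserver.public_ip
}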

3. Copying the code from GitHub to /var/www/html:

We SSH into the instance and mount the external volume to /var/www/html. We then copy the website code into /var/www/html, so it ultimately lives on the external EBS (Elastic Block Store) volume. We did this for persistent storage: if the instance is deleted, so is the website code on its root disk, but with the code on the external volume we can relaunch an instance, mount the volume again, and get our data back. We use a null_resource because we want to run some Linux commands on the remote instance.

resource "null_resource" "nullremote1" {
  depends_on = [
    aws_volume_attachment.myvol-attach,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sahil/Downloads/mykey11.pem")
    host        = aws_instance.my-webserver.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git php -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/sahil2019/myrepo.git /var/www/html/"
    ]
  }
}
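
One caveat with the commands above: mkfs.ext4 reformats the volume on every re-provision, wiping whatever was stored there. A hedged refinement (assuming the device really appears as /dev/xvdh) is to format only when the volume has no filesystem yet:

    inline = [
      # Sketch: format only if blkid finds no existing filesystem, so a
      # re-run does not destroy data already on the EBS volume.
      "sudo blkid /dev/xvdh || sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
    ]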

4. Creating an S3 bucket and uploading the image:

We created an S3 bucket, stored the website's static content (images) in it, and made the objects publicly readable.

resource "aws_s3_bucket" "mybucket" {
  bucket = "sahil3514-terraform-bucket"
  acl    = "public-read"

  versioning {
    enabled = true
  }
}

locals {
  s3_origin_id = "myS3Origin"
}

resource "aws_s3_bucket_object" "object" {
  bucket       = aws_s3_bucket.mybucket.bucket
  key          = "my-image.png"
  acl          = "public-read"
  source       = "C:/Users/sahil/Pictures/thumbnail.png"
  content_type = "image/png"
}
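
Step 5 below creates a CloudFront origin access identity, so a tighter variant (a sketch, not what this run used) would skip the public-read ACLs, keep the bucket private, and grant read access to CloudFront alone via a bucket policy. (Terraform resolves references regardless of their order in the file.)

# Sketch: let only the CloudFront origin access identity read the
# bucket, instead of marking the bucket and object public-read.
resource "aws_s3_bucket_policy" "cf_read_only" {
  bucket = aws_s3_bucket.mybucket.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.mybucket.arn}/*"
    }]
  })
}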

5. Creating CloudFront with S3 as the origin:

We created a CloudFront distribution in front of the bucket so that clients get minimum latency when loading the images.

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "my-website-s3"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.mybucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  price_class = "PriceClass_All"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
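
Again as a convenience (not in the original code), an output block can expose the distribution's domain name, making it easy to verify the CloudFront URL used in the next step:

# Prints the CloudFront domain name at the end of terraform apply.
output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}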

6. Connecting to the web server to add the CloudFront link:

Now we just add the CloudFront link to the web server code, and for that we SSH into our instance.

resource "null_resource" "nullremote2" {
  depends_on = [
    null_resource.nullremote1,
    #aws_cloudfront_distribution.s3_distribution,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sahil/Downloads/mykey11.pem")
    host        = aws_instance.my-webserver.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su <<EOF",
      "sudo echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.object.key}' width='1000' height='1000'>\" >> /var/www/html/my.html",
      "EOF"
    ]
  }
}
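
The su heredoc is needed because a plain sudo echo ... >> file would perform the redirect as ec2-user and fail on the root-owned file. An equivalent sketch without the heredoc pipes the line through sudo tee -a instead:

# Sketch: append the image tag via sudo tee -a, avoiding the
# "sudo su <<EOF ... EOF" wrapper entirely.
"echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.object.key}' width='1000' height='1000'>\" | sudo tee -a /var/www/html/my.html"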

7. Finally, getting the output:

We use a local-exec provisioner to run the chrome command on our own system and open the website.

resource "null_resource" "nullremote3" {
  depends_on = [
    null_resource.nullremote1,
    null_resource.nullremote2,
  ]

  provisioner "local-exec" {
    command = "chrome ${aws_instance.my-webserver.public_ip}/my.html"
  }
}

To run the above code we use:

terraform apply -auto-approve
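
For completeness, apply is only one step of the usual workflow; on a fresh working directory the provider plugins must be initialized first, and the whole infrastructure can be torn down with a single command afterwards:

# One-time setup: download the provider plugins (aws, null, etc.).
terraform init

# Optional: preview the changes before applying them.
terraform plan

# Build everything without the interactive yes/no prompt.
terraform apply -auto-approve

# Destroy the whole infrastructure when done.
terraform destroy -auto-approve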

Screenshots in the original post show the final website, the launched web server, the created security group, the S3 bucket, and the CloudFront distribution.

GitHub link: here
