1. RIMA : ROBOTIC INFRASTRUCTURE WITH MODERN AUTOMATION
Bis Tripathy.
[Title-slide keyword graphic: Cloud, DevOps, CI/CD, Configuration Management, Provisioning, OS / VM / Hypervisor, SBOM / DevSecOps, Cloud Infrastructure, RIMA]
2.
It takes a village to build production infrastructure: you will need at least 5-7 members working for 30 days to build this project.

The DevOps strategy focuses on the successful implementation of DevOps for infrastructure automation development, to reduce overall IT costs, failures, and product delays. Team RIMA aims to address this business challenge through the planned DevOps adoption strategy.

The problem with the existing CI pipeline is that it is basic: code smells and vulnerabilities are introduced with every iteration of code deployment and go unchecked, so a stronger code-quality check using DevSecOps is needed. The other identified challenge is that the infrastructure is maintained manually for upgrades and network updates, which is tedious and needs to be automated.
3.
• Further, the infrastructure is not designed to be scalable, which limits the capabilities of the application in high-traffic windows. This needs a modern solution using cloud capabilities and an agile DevOps adoption strategy. The current infrastructure also has no disaster recovery strategy in place in case of calamities; it needs one to be fault tolerant and highly available.
5.
• The CI/CD pipeline for this project is as follows. For the initial set-up:
• Set up Jenkins
• Install dependencies for local development
• Create AWS infrastructure using Terraform
• For application development:
• Make a development change
• Commit to git
• Update the AWS stack using a shell script
• Push to the repository after integrating GitHub with Jenkins, and also with JIRA
• Jenkins builds run automatically, triggered by Git commits
6.
SL#  Tool Name           For
1    Terraform           IaC
2    AWS CloudFormation  IaC
3    Auto Scaling
4    Ansible             CM
5    SonarQube           Code Analysis
6    Jenkins             CI/CD
7    GitHub              Repository
8    Jira                Planning Tool
9    Confluence          Documentation
10   Docker              Containerization
7.
In AWS, the access key and secret key need to be created with the right access to regions, policies, and resources, and also for Git commits.

The infrastructure host (RIMA Harbor) is a single EC2 instance, which is sufficient to host all the necessary infrastructure components to provision the project-related hosts in multiple regions using CI/CD with Terraform, since the infra server is mostly used by the internal team only. The infrastructure host needs to save the execution plan to disk temporarily before applying it. Faster recovery in case the EC2 instance becomes inaccessible is more important, and more cost effective, than running it on multiple EC2 instances for high availability. Running Terraform on multiple EC2 instances would mean all instances need access to a shared directory, which makes the setup more complicated and harder to maintain.
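As a minimal sketch of how the infrastructure host could be wired to AWS (the region and version constraint are placeholders, not taken from the deck), the provider block keeps credentials out of the repository by relying on the environment:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # placeholder constraint
    }
  }
}

# Credentials are intentionally omitted here; supply them via the
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables
# (or an EC2 instance profile) so no secrets land in git.
provider "aws" {
  region = "us-east-1" # placeholder region
}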
8.
In this example, we will be using GitHub as the place to store the Terraform project. Our CI/CD is going to run each time a new PR is created: Jenkins can therefore detect whether a PR contains a Terraform project and execute it. It also runs when a new commit is pushed to an existing PR.

Integrating Jenkins with GitHub means we need to expose Jenkins to the internet. This is necessary so that Jenkins is able to receive webhooks from GitHub.

The other components of the Terraform platform are an S3 bucket and a DynamoDB table. The S3 bucket is used to store remote state for other Terraform projects; we will use a single bucket for multiple Terraform projects, and each project must have its own key to avoid key-name overlap. DynamoDB provides the primary locking mechanism when using S3 as a Terraform backend, and a single DynamoDB table is able to support multiple Terraform projects.
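A minimal sketch of how this platform plumbing might itself be provisioned with Terraform (bucket and table names are placeholders, not from the project):

resource "aws_s3_bucket" "tf_state" {
  bucket = "rima-terraform-state" # placeholder name
}

# Versioning lets us recover an earlier state revision if an apply goes wrong.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Terraform's S3 backend expects a DynamoDB table with a "LockID" hash key
# for state locking; one table can serve many projects.
resource "aws_dynamodb_table" "tf_lock" {
  name         = "rima-terraform-lock" # placeholder name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}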
9.
Most of the time we don't need to commit the Terraform state file into a git repository. We'll make an exception for this Terraform CI/CD, since the code won't change much during this project. This project uses a local state file; git serves as the mechanism to share the Terraform project, along with its state file, with other team members. It is recommended to publish this local git repository to a central repository where other team members can access it.

Terraform stores the state of all the resources it manages. This state becomes a proxy through which Terraform determines the real condition of the managed resources. This state-storage concept is known as the backend in Terraform. Terraform uses local files by default for the backend; besides local files, Terraform supports remote state stores such as AWS S3, PostgreSQL, etc.
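For illustration, the default local backend can also be declared explicitly; this block is equivalent to what Terraform does when no backend is configured at all:

terraform {
  # Default behaviour: state is kept in a local file next to the code.
  backend "local" {
    path = "terraform.tfstate"
  }
}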
10.
• Backends store state in a remote service, which allows
multiple people to access it. Accessing remote state
generally requires access credentials, since state data
contains extremely sensitive information.
• When applying a plan that you previously saved to a
file, Terraform uses the backend configuration stored
in that file instead of the current backend settings. If
that configuration contains time-limited credentials,
they may expire before you finish applying the plan.
Use environment variables to pass credentials when
you need to use different values between the plan and
apply steps.
11.
• After you initialize, Terraform creates
a .terraform/ directory locally. This directory contains the
most recent backend configuration, including any
authentication parameters you provided to the Terraform
CLI. Do not check this directory into Git, as it may contain
sensitive credentials for your remote backend.
• The local backend configuration is different and entirely
separate from the terraform.tfstate file that contains state
data about your real-world infrastructure. Terraform stores
the terraform.tfstate file in your remote backend.
12.
• To solve the problems described above, we can use AWS S3 as the Terraform state storage medium. Terraform has built-in support for using S3 as a remote state store. When using S3 as the Terraform state storage medium, we need to add other functionality such as locking mechanisms, version management, and encryption; we can use the AWS DynamoDB and AWS KMS services to implement Terraform state locking and encryption on AWS (see the sketch after this list).
• We will set up Terraform to provision the required infrastructure (like a set of AWS EC2 instances with all their dependencies) and then connect that to Ansible, which then configures these EC2 instances using our playbook.
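A sketch of the resulting backend block, with placeholder names for the bucket, key, table, and KMS key alias (none of these identifiers come from the deck):

terraform {
  backend "s3" {
    bucket         = "rima-terraform-state"            # placeholder bucket
    key            = "rima/network/terraform.tfstate"  # unique key per project
    region         = "us-east-1"                       # placeholder region
    dynamodb_table = "rima-terraform-lock"             # enables state locking
    encrypt        = true                              # server-side encryption
    kms_key_id     = "alias/terraform-state"           # placeholder KMS alias
  }
}

Because each project supplies its own key, the single bucket and single DynamoDB table can safely back many Terraform projects at once.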
13.
• We will be using the aws_ec2 inventory plugin to find the hosts for Ansible to configure. To keep it consistent, we will use an aws_ec2.yml inventory file [standard from the Ansible docs] adjusted to fit our needs. For most of the settings, there is usually more than one way to configure them (usually either through environment variables or through the ansible.cfg file). More on Ansible configuration can be found in the official Ansible docs.
• In Terraform, Team RIMA will use Blue/Green deployment, modelled using the create_before_destroy lifecycle setting. As we can't create a new resource with the same name as the old one, we don't hard-code the name and only specify a prefix; Terraform adds a random suffix to it, so the new configuration doesn't clash with the old one before it is destroyed (a sketch follows this list).
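A minimal sketch of that lifecycle pattern on a launch configuration (the AMI ID, prefix, and instance type are placeholders):

resource "aws_launch_configuration" "web" {
  # Only a prefix is given; Terraform appends a random suffix, so a
  # replacement never clashes with the configuration it replaces.
  name_prefix   = "rima-web-"
  image_id      = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"

  lifecycle {
    # Create the new launch configuration before destroying the old one.
    create_before_destroy = true
  }
}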
14.
Replacing the launch configuration of an Auto Scaling group by itself would not trigger any changes: new instances would be launched using the new configuration, but the existing instances are not affected. We can force the ASG resource to be inextricably tied to the launch configuration by referencing the launch configuration name in the name of the Auto Scaling group. Updating the name of an ASG requires its replacement, and the new Auto Scaling group spins up its instances using the new launch configuration.
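A sketch of that coupling (sizes and the subnet ID are placeholders); the ASG name interpolates the launch configuration's generated name, so replacing one forces replacement of the other:

resource "aws_autoscaling_group" "web" {
  # Embedding the launch configuration's (randomly suffixed) name in the
  # ASG name forces the ASG to be replaced whenever the configuration is.
  name                 = "rima-web-${aws_launch_configuration.web.name}"
  launch_configuration = aws_launch_configuration.web.name
  min_size             = 2
  max_size             = 4
  vpc_zone_identifier  = ["subnet-0123456789abcdef0"] # placeholder subnet

  lifecycle {
    # Bring the new ASG up before tearing the old one down.
    create_before_destroy = true
  }
}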
15.
Terraform creates a new Auto Scaling group and then, when it's ready, swaps out the old one. Although this approach is frequently called a "rolling" deployment, what we actually see is a complete replacement with an instant swap, which is a classic form of Blue/Green.