A CI/CD model for Terraform

Continuous Integration (CI) makes the cycle from design to code and the creation of artifacts transparent and consistent. Continuous Delivery (CD) ensures that artifact is delivered to an identical environment every time.

But what about the actual environment in which the artifact is running? Is it the same every time?

This is a difficult thing to guarantee – unless you take advantage of an Infrastructure-as-Code (IaC) approach. This article explains how to use Infrastructure-as-Code to improve CI/CD. We will be using Terraform as an IaC tool, although the lessons below can be applied using any Infrastructure-as-Code solution.

Infrastructure as code

What is Infrastructure-as-Code? The basic idea of Infrastructure-as-Code is to keep the configuration files required to build the environment in which your code runs alongside the application code itself, in your organization's source code management tool. Then, as part of the deployment process, your infrastructure automation tool of choice (e.g. Terraform) builds what is required as a step in the process, and your code is deployed on top of it. Every change to the infrastructure is tracked along with the source code those files support, allowing for a truly repeatable deployment.

Terraform Basics

Using Terraform requires only a few basic steps. The first is to keep your configuration files in their own directory structure. (Each organization has its own way of handling multiple environments – but for the purposes of this article, we'll use a flat directory structure.)
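
For this article, a flat structure simply means all of the files sit side by side in a single directory, along the lines of:

terraform-cicd/
  bootstrap.sh
  compute.tf
  firewall.tf
  gcp.tf
  network.tf
  terraform.tfvars
  vars.tf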

All configuration files are written in HashiCorp's HCL format (which looks a lot like JSON), and it is recommended that they use the *.tf extension. The default file to load variable values from is terraform.tfvars.

What do Terraform scripts look like?

From a practical standpoint, all of these scripts could live in one file. But I find that separating everything makes maintenance easier as the project inevitably grows.

Variables

terraform.tfvars
project_name = "temporary-test-account"
region = "us-central1"
zone = "us-central1-a"
cred_file = "~/serviceaccount.json"
network_name = "terraform-example"

vars.tf
variable "project_name" {
type = "string"
}

variable "region" {
type = "string"
}

variable "zone" {
type = "string"
}

variable "cred_file" {
type = "string"
}

variable "network_name" {
type = "string"
}

Provider creation

gcp.tf creates a connection to a specific project and region within the Google Cloud Platform:

provider "google" {
credentials = "${file(var.cred_file)}"
project = "${var.project_name}"
region = "${var.region}"
}

Resource creation

network.tf creates a custom network and subnet:

resource "google_compute_network" "vpc_network" {
name = "${var.network_name}"
auto_create_subnetworks = "false"
}

resource "google_compute_subnetwork" "vpc_subnet" {
name = "${var.network_name}"
ip_cidr_range = "10.222.0.0/20"
network = "${google_compute_network.vpc_network.self_link}"
region = "${var.region}"
private_ip_google_access = true
}

compute.tf creates a single virtual machine in the subnet (specified above):

data "template_file" "metadata_startup_script" {
template = "${file("bootstrap.sh")}"
}

resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "n1-standard-1"
zone = "${var.zone}"

boot_disk {
initialize_params {
image = "centos-cloud/centos-7"
}
}

metadata_startup_script = "${data.template_file.metadata_startup_script.rendered}"

network_interface {
network = "${google_compute_network.vpc_network.self_link}"
subnetwork = "${google_compute_subnetwork.vpc_subnet.self_link}"
access_config {
}
}
}

firewall.tf opens ports 22 and 80 so we can connect and test:

resource "google_compute_firewall" "fw_access" {
name = "terraform-firewall"
network = "${google_compute_network.vpc_network.name}"

allow {
protocol = "icmp"
}

allow {
protocol = "tcp"
ports = ["22", "80"]
}

source_ranges = ["0.0.0.0/0"]
}

Using the template module

bootstrap.sh in this case is loaded as the startup script in compute.tf and passed as a variable. By taking advantage of the template module, you can dynamically update the startup script (or any other data) to include things like environment-specific database connection strings, or even passwords pulled from a secrets vault (which HashiCorp also offers); a brief sketch of that substitution follows the script below.

#!/bin/bash
# This is used as the startup script by the Google compute instance
# And will start an nginx container as an example

# Update everything
sudo yum -y update

# Install Docker pre-reqs
sudo yum -y install yum-utils device-mapper-persistent-data lvm2

# Remove any Docker installed by CentOS as a default
sudo yum -y remove docker-client docker-common docker

# Add the official Docker repo
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install the official latest Docker Community Edition
sudo yum -y install docker-ce

# Enable and start the daemon
sudo systemctl start docker
sudo systemctl enable docker

# Starting nginx as a container as it is easy and always works
sudo docker run --name docker-nginx -p 80:80 -d nginx
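
To illustrate that substitution, here is a minimal sketch of what passing a value into the template could look like. The db_connection variable and the ${db_connection} placeholder are hypothetical and not part of the example project; the template provider simply replaces any placeholders it finds with the values supplied in vars.

# Hypothetical variant of the template_file data source from compute.tf.
# Assumes bootstrap.sh contains a ${db_connection} placeholder and that a
# db_connection variable is declared in vars.tf and set per environment.
data "template_file" "metadata_startup_script" {
  template = "${file("bootstrap.sh")}"

  vars {
    db_connection = "${var.db_connection}"
  }
}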

Run Terraform

The first step is to check out the Infrastructure-as-Code project from your source code repository and set the default variables before initializing Terraform. (This working demo is available on GitHub.)

terraform-cicd:$ cp terraform.tfvars.example terraform.tfvars
terraform-cicd:$ vi terraform.tfvars

Next, initialize Terraform, which downloads all the plugins necessary for its use:

terraform-cicd:$ terraform init

Initializing provider plugins...

- Checking for available provider plugins on https://releases.hashicorp.com...

- Downloading plugin for provider "template" (2.1.2)...

- Downloading plugin for provider "google" (2.6.0)...

The following providers do not have any version constraints in configuration, so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking changes, it is recommended to add version = "..." constraints to the corresponding provider blocks in configuration, with the constraint strings suggested below.

* provider.google: version = "~> 2.6"
* provider.template: version = "~> 2.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
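
Following that recommendation, the provider blocks can optionally be pinned to the versions the init output suggests. A possible tweak to gcp.tf (plus a block for the template provider) might look like the following; the exact constraints should match whatever your own terraform init run reports:

provider "google" {
  version     = "~> 2.6"
  credentials = "${file(var.cred_file)}"
  project     = "${var.project_name}"
  region      = "${var.region}"
}

provider "template" {
  version = "~> 2.1"
}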

The next step is terraform apply, which in recent versions of Terraform both creates the plan and, after confirmation, executes it.

terraform-cicd:$ terraform apply

data.template_file.metadata_startup_script: Refreshing state...


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + google_compute_firewall.fw_access
      id:                        <computed>
      allow.#:                   "2"
      allow.1367131964.ports.#:  "0"
      allow.1367131964.protocol: "icmp"
      allow.186047796.ports.#:   "2"
      allow.186047796.ports.0:   "22"
      allow.186047796.ports.1:   "80"
      allow.186047796.protocol:  "tcp"
      creation_timestamp:        <computed>
      destination_ranges.#:      <computed>
      direction:                 <computed>
      name:                      "terraform-firewall"
      network:                   "terraform-example"
      priority:                  "1000"
      project:                   <computed>
      self_link:                 <computed>
      source_ranges.#:           "1"
      source_ranges.1080289494:  "0.0.0.0/0"

  + google_compute_instance.vm_instance
      id:                                                   <computed>
      boot_disk.#:                                          "1"
      boot_disk.0.auto_delete:                              "true"
      boot_disk.0.device_name:                              <computed>
      boot_disk.0.disk_encryption_key_sha256:               <computed>
      boot_disk.0.initialize_params.#:                      "1"
      boot_disk.0.initialize_params.0.image:                "centos-cloud/centos-7"
      boot_disk.0.initialize_params.0.size:                 <computed>
      boot_disk.0.initialize_params.0.type:                 <computed>
      can_ip_forward:                                       "false"
      cpu_platform:                                         <computed>
      deletion_protection:                                  "false"
      guest_accelerator.#:                                  <computed>
      instance_id:                                          <computed>
      label_fingerprint:                                    <computed>
      machine_type:                                         "n1-standard-1"
      metadata_fingerprint:                                 <computed>
      metadata_startup_script:                              "#!/bin/bash\n# This is used as the startup script by the Google compute instance\n# And will start an nginx container as an example\n\n# Update everything\nsudo yum -y update\n\n# Install Docker pre-reqs\nsudo yum -y install yum-utils device-mapper-persistent-data lvm2\n\n# Remove any Docker installed by CentOS as a default\nsudo yum -y remove docker-client docker-common docker\n\n# Add the official Docker repo\nsudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo\n\n# Install the official latest Docker Community Edition\nsudo yum -y install docker-ce\n\n# Enable and start the daemon\nsudo systemctl start docker\nsudo systemctl enable docker\n\n# Starting nginx as a container as it is easy and always works\nsudo docker run --name docker-nginx -p 80:80 -d nginx\n\n"
      name:                                                 "terraform-instance"
      network_interface.#:                                  "1"
      network_interface.0.access_config.#:                  "1"
      network_interface.0.access_config.0.assigned_nat_ip:  <computed>
      network_interface.0.access_config.0.nat_ip:           <computed>
      network_interface.0.access_config.0.network_tier:     <computed>
      network_interface.0.address:                          <computed>
      network_interface.0.name:                             <computed>
      network_interface.0.network:                          "${google_compute_network.vpc_network.self_link}"
      network_interface.0.network_ip:                       <computed>
      network_interface.0.subnetwork_project:               <computed>
      project:                                              <computed>
      scheduling.#:                                         <computed>
      self_link:                                            <computed>
      tags_fingerprint:                                     <computed>
      zone:                                                 <computed>

  + google_compute_network.vpc_network
      id:                              <computed>
      auto_create_subnetworks:         "false"
      delete_default_routes_on_create: "false"
      gateway_ipv4:                    <computed>
      name:                            "terraform-example"
      project:                         <computed>
      routing_mode:                    <computed>
      self_link:                       <computed>

  + google_compute_subnetwork.vpc_subnet
      id:                        <computed>
      creation_timestamp:        <computed>
      fingerprint:               <computed>
      gateway_address:           <computed>
      ip_cidr_range:             "10.222.0.0/20"
      name:                      "terraform-example"
      network:                   "${google_compute_network.vpc_network.self_link}"
      private_ip_google_access:  "true"
      project:                   <computed>
      region:                    "us-central1"
      secondary_ip_range.#:      <computed>
      self_link:                 <computed>


Plan: 4 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

google_compute_network.vpc_network: Creating...
…
…
…
google_compute_instance.vm_instance: Creation complete after 49s (ID: terraform-instance)
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Did it work?

terraform-cicd:$ gcloud compute instances list | egrep 'EXTERNAL_IP|terraform-instance'
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
terraform-instance us-central1-a n1-standard-1 10.222.0.2 35.192.156.240 RUNNING
terraform-cicd:$ curl http://35.192.156.240

…
…

Yes, it works! And now that we’ve tested it, it can go away.

terraform-cicd:$ terraform destroy
google_compute_network.vpc_network: Refreshing state... (ID: terraform-example)
data.template_file.metadata_startup_script: Refreshing state...
google_compute_firewall.fw_access: Refreshing state... (ID: terraform-firewall)
google_compute_subnetwork.vpc_subnet: Refreshing state... (ID: us-central1/terraform-example)
google_compute_instance.vm_instance: Refreshing state... (ID: terraform-instance)
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - google_compute_firewall.fw_access

  - google_compute_instance.vm_instance

  - google_compute_network.vpc_network

  - google_compute_subnetwork.vpc_subnet


Plan: 0 to add, 0 to change, 4 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
google_compute_firewall.fw_access: Destroying... (ID: terraform-firewall)
google_compute_instance.vm_instance: Destroying... (ID: terraform-instance)
google_compute_firewall.fw_access: Destruction complete after 8s
google_compute_instance.vm_instance: Still destroying... (ID: terraform-instance, 10s elapsed)
…
…
google_compute_instance.vm_instance: Destruction complete after 2m10s
google_compute_subnetwork.vpc_subnet: Destroying... (ID: us-central1/terraform-example)
google_compute_subnetwork.vpc_subnet: Still destroying... (ID: us-central1/terraform-example, 10s elapsed)
google_compute_subnetwork.vpc_subnet: Still destroying... (ID: us-central1/terraform-example, 20s elapsed)
google_compute_subnetwork.vpc_subnet: Destruction complete after 27s
google_compute_network.vpc_network: Destroying... (ID: terraform-example)
google_compute_network.vpc_network: Still destroying... (ID: terraform-example, 10s elapsed)
google_compute_network.vpc_network: Still destroying... (ID: terraform-example, 20s elapsed)
google_compute_network.vpc_network: Destruction complete after 27s

Destroy complete! Resources: 4 destroyed.

What about integrating Terraform into an existing CI/CD pipeline?

From popular services like CircleCI to the ubiquitous Jenkins, whatever tool you use for CI/CD either already has a plugin for Terraform or can simply run it from the command line as detailed above.
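
As a rough sketch (assuming the CI job already has cloud credentials and a terraform.tfvars in place), a pipeline stage could run the same commands non-interactively, for example:

#!/bin/sh
# Hypothetical CI stage: build the environment, deploy and test, then tear it down.
set -e

terraform init -input=false
terraform plan -out=tfplan -input=false   # save the plan as a build artifact
terraform apply -input=false tfplan       # apply exactly the plan that was generated

# ... deploy the application artifact and run tests against the fresh environment ...

# For ephemeral test environments, clean up afterwards
# (older Terraform 0.11 releases used -force instead of -auto-approve).
terraform destroy -auto-approve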

Conclusion

By taking advantage of creating and cleaning up environments as part of your CI/CD pipelines, you can reduce the number of incidents that occur as deployments move between environments, because everything is built from templates maintained by the developers. Any events generated as part of a build can easily be tagged and routed through your incident management platform, ending up with the development team that supports that application or service.

Maintain a fast, reliable CI/CD pipeline and respond to incidents during deployment with Splunk Observability Cloud. Sign up for a 14-day free trial to see how DevOps teams maintain CI/CD and reduce strain with a comprehensive real-time incident management and response solution.

About the Author

Vince Power is a Solutions Architect who focuses on cloud adoption and technology implementations using open source technologies. He has extensive experience with core compute and networking (IaaS), identity and access management (IAM), application platforms (PaaS), and continuous delivery.

