Monday, November 20, 2017

Terraform

Terraform by HashiCorp enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.

Terraform is a simple and reliable way to manage infrastructure in AWS, Google Cloud, Azure, DigitalOcean and other IaaS platforms (providers, in Terraform terms). The main idea of such tools is reproducible infrastructure: Terraform provides a DSL to describe the infrastructure and then apply it to different environments. Previously we used a set of Python and bash scripts to describe what to create in AWS, with various conditions that check whether a resource already exists and create it if it doesn't. Under the hood, Terraform does essentially the same. This post is an introduction covering a simple use case: creating the Alluxio cluster I used in the previous post.

Terraform supports a number of different providers, but the Terraform script has to be written separately for each provider; configurations are not portable between them.
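
As a purely hypothetical illustration (placeholder values), a provider block for Google Cloud could look like this, just to show that each provider has its own configuration and its own resource types:
provider "google" {
  project = "my-gcp-project"  # placeholder project id, not from this setup
  region  = "us-central1"
}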

To begin, let's define the AWS provider:
provider "aws" {
  region = "us-west-1"
  profile = "xxx-federated"  # this is source of credentials; without profile use access keys
}
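
If you don't have a named profile configured, the AWS provider can also take static access keys directly. A minimal sketch with placeholder values (the environment variables AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are another common option):
provider "aws" {
  region     = "us-west-1"
  access_key = "AKIAXXXXXXXXXXXXXXXX"  # placeholder, not a real key
  secret_key = "XXXXXXXXXXXXXXXXXXXX"  # placeholder, not a real key
}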

Then create a security group and open/close the required ports:
# As a simplification this opens a wide range of ports (20-65530); use more fine-grained control in real setups (see the sketch after this resource)
resource "aws_security_group" "my_test" {
  description = "Used in the terraform"
  vpc_id      = "vpc-XXXXX"   # VPC in which we want to have this subnet
   
  # input
  ingress {
    from_port   = 20
    to_port     = 65530
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8","172.0.0.0/8"]
  }
 
  # outbound
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
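
For reference, a more fine-grained set of ingress rules could look like the sketch below; it would replace the wide 20-65530 range inside the aws_security_group resource and assumes the Alluxio 1.x default ports:
  # SSH, needed by the remote-exec provisioners below
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  # Alluxio master RPC (19998) and web UI (19999)
  ingress {
    from_port   = 19998
    to_port     = 19999
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  # Alluxio worker RPC (29998), data (29999) and web (30000) ports
  ingress {
    from_port   = 29998
    to_port     = 30000
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }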


AWS relies heavily on IAM roles, so our Alluxio instances are supposed to get one via an instance profile:
# this is profile for our instance, used for IAM purposes
resource "aws_iam_instance_profile" "my_profile" {
  name  = "my_profile"
  role = "BEST_EVER_Role" # yeah, role we're gonna use
}
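
The role itself (BEST_EVER_Role) is assumed to already exist in the account. If you wanted Terraform to manage it as well, a minimal sketch could look like this (the resource name alluxio_role is arbitrary), and the profile above would then reference "${aws_iam_role.alluxio_role.name}" instead of the hardcoded string:
resource "aws_iam_role" "alluxio_role" {
  name = "BEST_EVER_Role"
  # trust policy allowing EC2 instances to assume the role
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}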

The Alluxio architecture consists of a Master and several slaves (workers). There is an option to run a Secondary Master, but it's similar to the Hadoop Secondary NameNode and won't serve requests from slaves while the primary master is unavailable.

# EC2 to host Alluxio master
resource "aws_instance" "emr2-master" {
  instance_type = "m4.2xlarge"
  count=1
  # ami based on Amazon linux with docker and git
  ami="ami-YYYY" # our instance is based on this role
  # The name of our SSH keypair we created above.
  key_name = "key" # EC2 key pair name which is known to AWS
  iam_instance_profile = "${aws_iam_instance_profile.my_profile.id}"  # sweet part: reference to instance profile ID
  vpc_security_group_ids = ["${aws_security_group.my_test.id}"]  # another sweet part: reference to security group declared above
  subnet_id = "subnet-123ce1da"  # subnet, just hardcode for now
  tags {
      Name = "Alluxio-master"  # here we can describe several tags we need
    }
  provisioner "remote-exec" {  # provisioner, what we want to run on master after it created
      inline = [
        "git clone https://github.com/Alluxio/alluxio.git",
        "cd alluxio/integration/docker",
        "docker build -t alluxio .",
        "cd ~",
        "mkdir underStorage",
        "sudo mkdir /mnt/ramdisk",
        "sudo mount -t ramfs -o size=12G ramfs /mnt/ramdisk",
        "sudo chmod a+w /mnt/ramdisk",
        "sudo service docker restart",
        "docker run -d --net=host -v $PWD/underStorage:/underStorage -e ALLUXIO_MASTER_HOSTNAME=127.0.0.1 -e ALLUXIO_UNDERFS_ADDRESS=/underStorage alluxio master"
      ]
    }
  connection {
    timeout = "15m" 
    user = "ec2-user"
    private_key = "${file("/Users/kostia/.ssh/key.pem")}"
  }
}

The next step is to describe the Alluxio slaves and connect them to the master:
resource "aws_instance" "emr2-slaves" {
  instance_type = "r4.xlarge"
  count = 3
  # AMI based on Amazon Linux with Docker and git installed
  ami = "ami-ZZZ"
  # name of an existing EC2 key pair known to AWS
  key_name = "key"
  vpc_security_group_ids = ["${aws_security_group.my_test.id}"]
  iam_instance_profile = "${aws_iam_instance_profile.my_profile.id}"
  subnet_id = "subnet-123c1a0b"
  tags {
      Name = "Alluxio-slaves"
    }
  provisioner "remote-exec" {
      inline = [
        "git clone https://github.com/Alluxio/alluxio.git",
        "cd alluxio/integration/docker",
        "docker build -t alluxio .",
        "cd ~",
        "mkdir underStorage",
        "sudo mkdir /mnt/ramdisk",
        "sudo mount -t ramfs -o size=28G ramfs /mnt/ramdisk",
        "sudo chmod a+w /mnt/ramdisk",
        "sudo service docker restart",
        "docker run -d --net=host -v /mnt/ramdisk:/opt/ramdisk -v $PWD/underStorage:/underStorage -e ALLUXIO_MASTER_HOSTNAME=${aws_instance.emr2-master.private_ip} -e ALLUXIO_RAM_FOLDER=/opt/ramdisk -e ALLUXIO_WORKER_MEMORY_SIZE=28GB -e ALLUXIO_UNDERFS_ADDRESS=/underStorage alluxio worker"
      ]
    }
  connection {
    timeout = "15m"  # wait 15 minutes for connection to be established
    user = "ec2-user"
    private_key = "${file("/Users/kostia/.ssh/key.pem")}"
  }
}
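
A handy optional addition is an output that prints the master's private IP after apply, so the Alluxio web UI (port 19999 by default) is easy to find:
output "alluxio_master_private_ip" {
  value = "${aws_instance.emr2-master.private_ip}"
}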

The last step is to apply the changes described in the Terraform script (HCL) to the AWS infrastructure. The main commands to know:
terraform apply applies the changes to the AWS account (i.e. creates, updates or deletes resources)
terraform plan shows the execution plan (the list of actions to be performed) before applying
terraform destroy tears down the previously created infrastructure

Einführung in Terraform (in English)
Introduksjon i Terraform (in English)

