Ansible is an essential DevOps/DBA tool for managing backups and rolling upgrades of a Cassandra cluster in AWS/EC2. An excellent aspect of Ansible is that it uses ssh, so you do not have to install an agent to use it.
This article series centers on how to perform DevOps/DBA tasks with the Cassandra Database. However, the use of Ansible for DevOps/DBA transcends its use with the Cassandra Database, so this article is good information for any DevOps/DBA or Developer who needs to manage groups of instances, boxes, or hosts, whether they be on-prem bare-metal, dev boxes, or in the Cloud. You don't need to be setting up Cassandra to get value from this article.
Cassandra Tutorial Series on DevOps/DBA Cassandra Database

The first article in this series was about setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content as DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as DZone Setting up a Cassandra Cluster with SSL). You don't need those articles to follow along, but they provide a lot of context. You can find the source for the first and second article at our Cloudurable Cassandra Image for Docker, AWS, and Vagrant. In later articles, we will use Ansible to create more complicated playbooks, like doing a rolling Cassandra upgrade, and we will cover using Ansible/ssh with AWS EC2.
Source code for Vagrant and Ansible

We continue to evolve the cassandra-image GitHub project. So that the code matches the listings in the article, we created a new branch capturing the code as it was when this article was written (more or less): Article 3 Ansible Cassandra Vagrant.
Let’s get to it. Let’s start by creating a key for our DevOps/DBA test Cassandra cluster.
Create a key for the test cluster to do Cassandra Database DevOps/DBA tasks with Ansible

To use Ansible for DevOps/DBA, we will need to set up ssh keys, as Ansible uses ssh instead of running an agent on each server like Chef and Puppet do.
The tool ssh-keygen manages authentication keys for ssh (secure shell). It generates RSA or DSA keys for SSH protocol versions 1 and 2. You can specify the key type with the -t option.
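As a quick illustration of the -t (key type) and related flags before we look at the article's script, here is a minimal, standalone sketch; the file name demo_rsa is arbitrary and only used for this demo:

```shell
# Generate a 2048-bit RSA key pair with an empty passphrase (-N "")
# into ./demo_rsa (private) and ./demo_rsa.pub (public).
ssh-keygen -q -t rsa -b 2048 -N "" -f ./demo_rsa

# Print the key's fingerprint to confirm it was created correctly.
ssh-keygen -l -f ./demo_rsa.pub
```

The -l flag prints the key length and fingerprint, which is a handy sanity check after generating keys in a script.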
setup key script bin/setupkeys-cassandra-security.sh

```shell
CLUSTER_NAME=test
...
ssh-keygen -t rsa -C "your_email@example.com" -N "" -C "setup for cloud" \
    -f "$PWD/resources/server/certs/${CLUSTER_NAME}_rsa"

chmod 400 "$PWD/resources/server/certs/"*
cp "$PWD/resources/server/certs/"* ~/.ssh
...
```

Let’s break that down.
We use ssh-keygen to create a private key that we will use to log into our boxes.
In this article, those boxes are Vagrant boxes (VirtualBox), but in the next article, we will use the same key to manage EC2 instances.
Use ssh-keygen to create a private key for ssh to log into Cassandra Database nodes

```shell
ssh-keygen -t rsa -C "your_email@example.com" -N "" -C "setup for cloud" \
    -f "$PWD/resources/server/certs/${CLUSTER_NAME}_rsa"
```

Then we restrict access to the key file; otherwise, Ansible, ssh, and scp (secure copy) will not let us use it.
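A useful trick when working with keys like this (not part of the article's script): if you ever misplace the .pub file, ssh-keygen -y can re-derive the public key from the private one. A sketch using a throwaway key at a hypothetical path ./test_rsa:

```shell
# Create a throwaway key pair at a hypothetical path for this demo.
ssh-keygen -q -t rsa -N "" -f ./test_rsa

# -y reads a private key file and prints the matching public key.
ssh-keygen -y -f ./test_rsa > derived.pub

# derived.pub now matches ./test_rsa.pub (modulo the trailing comment).
awk '{print $1, $2}' derived.pub
```

This is handy when you need to append the public half to a server's authorized_keys file.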
Change the access of the key

```shell
chmod 400 "$PWD/resources/server/certs/"*
```

The above chmod 400 changes the key files so that only the owner can read them. This mode change makes sense: a private key file should be readable only by its owner, and that is exactly what 400 grants.
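You can verify the mode took effect; a quick standalone check on a dummy file (stat -c is GNU/Linux syntax; on macOS you would use stat -f '%Lp' instead):

```shell
# Create a dummy file and lock it down the same way as the key files.
touch demo_key
chmod 400 demo_key

# Owner read-only shows as -r-------- in a long listing.
ls -l demo_key

# Print the octal mode; 400 confirms owner-read-only.
stat -c '%a' demo_key
```

ssh itself performs a similar check and refuses private keys with group- or world-readable permissions.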
Copy keys to the area where they will be picked up by Cassandra node provisioning

```shell
cp "$PWD/resources/server/certs/"* ~/.ssh
```

The above just puts the files where our provisioners (Packer and Vagrant) can pick them up and deploy them with the image.
Locally we are using Vagrant to launch a cluster to do some tests on our laptop.
We also use Packer and the aws command-line tools to create EC2 AMIs (and Docker images), but we don’t cover aws in this article (that is covered in the next article, which is in effect part 2 of this one).
Create a bastion server to do Ansible DevOps/DBA tasks for the Cassandra Cluster

Eventually, we would like to use a bastion server in a public subnet to send commands to our Cassandra Database nodes that sit in a private subnet in EC2. For local testing, we set up a bastion server, which is well explained in this guide to Vagrant and Ansible.
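One common client-side pattern with a bastion (a sketch, not something the article's scripts set up for you) is an ssh config that hops through it automatically. A hypothetical ~/.ssh/config fragment, assuming the bastion IP from the Vagrant config in this article, a made-up node address of 192.168.50.4, and the test_rsa key copied to ~/.ssh earlier (ProxyJump requires OpenSSH 7.3+):

```
Host bastion
    HostName 192.168.50.20
    User vagrant
    IdentityFile ~/.ssh/test_rsa

# Reach a Cassandra node (hypothetical address) via the bastion.
Host node0
    HostName 192.168.50.4
    User vagrant
    IdentityFile ~/.ssh/test_rsa
    ProxyJump bastion
```

With this in place, `ssh node0` transparently tunnels through the bastion, and Ansible can use the same config.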
We used Learning Ansible with Vagrant (Part 2/4) as a guide for some of the setup performed in this article. It is a reliable source of Ansible and Vagrant knowledge for DevOps/DBA. Their mgmt node corresponds to what we call a bastion server. A notable difference is that we are using CentOS 7, not Ubuntu, and we made some slight syntax updates to some of the Ansible commands we use (we use a later version of Ansible).
We added a bastion server to our Vagrant config as follows:
Vagrantfile to set up the bastion for our Cassandra Cluster

```ruby
# Define Bastion Node
config.vm.define "bastion" do |node|

  node.vm.network "private_network", ip: "192.168.50.20"

  node.vm.provider "virtualbox" do |vb|
    vb.memory = "256"
    vb.cpus = 1
  end

  node.vm.provision "shell", inline: <<-SHELL
    yum install -y epel-release
    yum update -y
    yum install -y ansible
    mkdir /home/vagrant/resources
    cp -r /vagrant/resources/* /home/vagrant/resources/
    mkdir -p ~/resources
    cp -r /vagrant/resources/* ~/resources/
    mkdir -p /home/vagrant/.ssh/
    cp /vagrant/resources/server/certs/* /home/vagrant/.ssh/
    sudo /vagrant/scripts/002-hosts.sh
```