Ansible: configuration management for everything


In the world of developers and administrators alike a new spark has ignited: DevOps. Fueled by containers, configuration management tools and Agile methodologies, the movement is shaking the whole IT industry and radically changing how we do things. Ansible is a configuration management tool that arrived “late” to the game yet quickly became one of the major players. But what exactly does it do?

Servers and Configuration Management tools

With the advent of cloud computing administrators started to look at servers in a different way. The epitome of this shift is the pets vs cattle argument. In the past administrators used to set up, configure and manage servers manually. These operations fall under the concept of “Lifecycle management”. Provisioning a server could potentially take hours of work from a skilled engineer. Thus servers were treated as pets. Cloud computing changed that.

With cloud computing you can spin up an instance in any cloud within seconds, do your work on it and shut it down automatically when it is not needed anymore. In a phrase: lifecycle management got a lot more frenetic (and useful). Servers became cattle. What was still lacking was the fundamental step: provisioning. A skilled engineer was still required; even more so, since the engineer now had to grasp the whole concept of cloud computing.

To resolve this problem a simple yet powerful (and already employed elsewhere) word was used: automate. With cloud computing administrators could already automate operating system installations and common operations, but couldn’t tweak every nook and cranny. To that purpose configuration management tools were “invented” (they already existed, but they evolved quickly): tools that could apply configurations and setups repeatably, at the press of a button, without needing a skilled engineer every time.

There are currently four main configuration management tools: Puppet, Chef, Salt and Ansible (official site). Even though Ansible was the last to join the game, over the years, also thanks to Red Hat, it has gained huge momentum and is now one of the preferred solutions for automating configuration management. The introduction of containers to the mix exacerbated Puppet’s and Chef’s downsides, allowing Ansible to become a preferred choice for working with containers.

Ansible architecture: the key to success

While Ansible is the youngest among the four major configuration management tools, it soon became an important piece in both the cloud and container games, and it was later acquired by Red Hat. The key to this incredible ascent can be found in its architecture.

Ansible uses an agent-less architecture: in Puppet or Chef, each managed server needs a running agent that constantly monitors the server and pulls updates. This is the second difference: both Puppet and Chef are pull-based by default. In both there is a central server holding the configuration and many agents requesting that configuration. Ansible, on the other hand, is push-based, hence it doesn’t need an agent. As a matter of fact, the only requirements for Ansible-managed machines are an SSH connection and Python.

Another point where Ansible excels is simplicity: you can run commands on thousands of hosts with a single command line, or create complex configurations leveraging the same mechanisms. The time needed to set up Ansible and get started is significantly less compared to Puppet or Chef.

Ansible basics: Ad-hoc commands, Playbooks and Inventories

Before we dive into Ansible it’s important to distinguish between two types of operations:

  • Ad-hoc commands: “inline” commands that you run from the command line and that return a result. These are associated with the ansible executable.
  • Playbooks: files containing plays and tasks that describe multiple operations. These are associated with the ansible-playbook executable.
# Example ansible ad-hoc
$ ansible myserver -m ping

# Example ansible playbook
$ ansible-playbook playbook.yml

Regardless of the way you use Ansible (ad-hoc or playbook), you will need a list of hosts that you want to manage. You can pass the list of hosts as the first parameter in ad-hoc mode, or you can specify it inside a playbook.

To help define these hosts Ansible uses inventories: an inventory file is an INI-like or YAML-formatted file containing hostnames. You will also need a user with passwordless sudo, passwordless SSH and Python on each target machine. Here’s a sample inventory (in the INI-like format):

[group1]
firstserver.example.com
thirdserver.example.com

[group2]
secondserver.example.com
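
Inventories can also carry per-host connection details. Here’s a minimal sketch, assuming a remote user named deploy (ansible_user, ansible_port and ansible_python_interpreter are standard inventory variables; the values are just examples):

[group1]
firstserver.example.com ansible_user=deploy
thirdserver.example.com ansible_user=deploy ansible_port=2222

[group2]
secondserver.example.com ansible_user=deploy ansible_python_interpreter=/usr/bin/python3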

You can specify an inventory using the -i flag both in ad-hoc and playbook mode.
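
For instance, assuming the inventory above is saved as inventory.ini (the filename is just an example):

# Ad-hoc: ping every host in group1 using inventory.ini
$ ansible group1 -i inventory.ini -m ping

# Playbook: run playbook.yml against the same inventory
$ ansible-playbook -i inventory.ini playbook.yml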

Ansible components: Playbooks, plays and modules

Ansible uses modules to complete operations on the target machines. There’s a module for virtually everything, from configuring yum packages to firewalld rules. Chances are that what you’re trying to accomplish already has a module for it. Here you can find a comprehensive list. If there isn’t one, you can write your own module to extend Ansible or work your way around the problem using text manipulation utilities (thanks, Unix philosophy!). In the previous ad-hoc example we used the ping module, which does exactly what you guessed.
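
To get a feel for how modules are invoked, here is a quick ad-hoc sketch, again assuming the inventory.ini file from before (the package name and firewall rule are only examples):

# Install a package through the yum module (needs root, hence --become)
$ ansible group1 -i inventory.ini --become -m yum -a "name=vim state=present"

# Open a port through the firewalld module
$ ansible group1 -i inventory.ini --become -m firewalld -a "port=80/tcp permanent=yes state=enabled immediate=yes"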

Great, but how can you execute complex tasks such as installing and starting Apache? You can use playbooks to achieve what you would normally need dozens of ad-hoc commands for. Playbooks are written in YAML; be wary, because YAML is essentially a hash defined with spaces and indentation. Here’s an example:

---
- hosts: all
  become: true
  tasks:
    - name: install httpd
      yum:
        name: httpd
        state: latest
    - name: start and enable httpd
      service:
        name: httpd
        state: started
        enabled: yes

That’s it, let’s analyze it. The playbook is composed of just one play (the one starting with - hosts). This play will run on all hosts: all is a special group that encompasses every host in the inventory, but you could have used “group1” following the earlier inventory example. become tells Ansible to execute the tasks as a superuser. Then we have two tasks: “install httpd” and “start and enable httpd”. The first task uses the yum module, the second one the service module.

Running this playbook you can install and start Apache on one, two or thousands of machines. Pretty handy, isn’t it? And once you’ve written a playbook, even a junior administrator can run it and use it without needing to know exactly what to do. Provisioning servers suddenly becomes easier.
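
If you want to target only part of the inventory, or preview what would change without touching anything, ansible-playbook accepts the --limit and --check flags (the filenames below are the same example ones used earlier):

# Run the playbook only against group1, in dry-run mode
$ ansible-playbook -i inventory.ini playbook.yml --limit group1 --check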

Another important feature of Ansible is idempotence, which is a fancy word to say “if you run it twice it won’t do the same thing twice, but will keep the desired state.” If you run the previous playbook twice, Apache will be installed (not twice) and will stay “started”, not restarted.
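
Modules such as yum and service are idempotent out of the box; raw shell commands are not, but you can often make them so. Here’s a small sketch using the command module’s creates argument (the paths are purely illustrative):

---
- hosts: all
  become: true
  tasks:
    - name: extract the site archive only if it hasn't been extracted yet
      command: tar -xzf /tmp/site.tar.gz -C /var/www/html
      args:
        creates: /var/www/html/index.html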

Ansible Tower/AWX: reversing the inverse?

While Ansible has no need for a central server because it doesn’t use agents, there is still a need to store playbooks and audit runs in a centralized way. For that purpose Ansible Tower was born. Ansible Tower is the Red Hat product based on AWX, the upstream open source project. What Ansible Tower does is provide administrators with a simple, centralized way to manage playbooks, playbook execution, users, permissions and auditing.

While one of Ansible’s strong points is its agent-less architecture, AWX/Tower really brings Ansible to a whole new level of management geared towards enterprises and IT departments.

What’s next?

There’s really a lot more to Ansible than what’s written here, and the only limit is your imagination. Things such as ansible-galaxy and ansible-vault are groundbreaking, integrated solutions to real-world problems and you’re encouraged to look into them. (I’ll probably cover them here soon! :) )
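
As a small taste (the role and file names below are only examples):

# Download a community role from Ansible Galaxy
$ ansible-galaxy install geerlingguy.apache

# Encrypt a variables file with Ansible Vault
$ ansible-vault encrypt secrets.yml

# Run a playbook that uses the encrypted file
$ ansible-playbook -i inventory.ini playbook.yml --ask-vault-pass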
