How to install Ceph with ceph-ansible


ceph-ansible is the most flexible way to install and manage a full-blown Ceph cluster. Whilst not the easiest way, it isn’t too difficult and can produce production-grade clusters. Let’s take a look.

ceph-ansible: The big picture

Before you delve into the actual installation, let’s take a moment to look at the big picture. With ceph-ansible there is one node, called the “admin node”, which uses Ansible to provision the other nodes: monitors, managers, and osds.

In order to do so, the admin node will need passwordless SSH access to a privileged user on each machine it will provision.

You can currently install Ceph on any Linux distribution, but ceph-ansible works best on RHEL/CentOS or Ubuntu.

Requirements

  • 1 admin node: specs for this node aren’t that important.
  • at least 1 monitor node: monitors are often neglected, but they are vital to the cluster; always make sure they have enough storage. An odd number of monitor nodes is suggested, so that a quorum can always be reached.
  • at least 1 manager node: this node is required since “luminous”; you can install it alongside the monitor.
  • a minimum of 3 osd nodes: each node will have one or more disks dedicated to storage attached. The more storage you manage, the more powerful your nodes will have to be. Here you can find the official hardware recommendations.
  • each node must be reachable by the admin node.
  • the admin node must be able to SSH into each node as a privileged user without entering a password. That user must also be able to execute privileged tasks (sudo) without a password prompt.

You can actually get around the 3 osd nodes requirement, but in production environments this is not acceptable. In this guide I will assume you already know how to configure passwordless ssh/sudo, but here is a minimal sketch in case you need a refresher.
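
The user name cephadmin below is just a placeholder, substitute your own:

$ ssh-keygen -t rsa -b 4096
$ ssh-copy-id cephadmin@mon1
$ ssh-copy-id cephadmin@osd1
$ ssh-copy-id cephadmin@osd2
$ ssh-copy-id cephadmin@osd3

Then, on each node, grant the user passwordless sudo with a line like this (added via visudo):

cephadmin ALL=(ALL) NOPASSWD: ALL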

Provisioning the admin node

Important
I take NO responsibility for what you do with your machines; use this tutorial as a guide and remember that you can cause data loss if you touch things carelessly.

Provisioning the admin node is quite easy, but getting the Ansible version associated with the ceph-ansible playbook right can be tricky, so I highly suggest you review the releases page for the details. In the following section the latest version of Ansible will be installed.

CentOS 7 (you may need the EPEL repository enabled):
$ sudo yum install ansible git

Ubuntu:
$ sudo add-apt-repository ppa:ansible/ansible
$ sudo apt update
$ sudo apt install ansible git
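
Once Ansible is installed, check that the version matches the one expected by the ceph-ansible branch you plan to use:

$ ansible --version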

Next we need to fetch the git repository containing the software:


You can replace $BRANCH with your desired Ceph version; for more info take a look at the releases page.

$ git clone https://github.com/ceph/ceph-ansible.git
$ cd ceph-ansible
$ git checkout $BRANCH
$ cd ..
# mv ceph-ansible /usr/share/ceph-ansible
# cp /usr/share/ceph-ansible/site.yml.sample /usr/share/ceph-ansible/site.yml
# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
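
Depending on the branch you checked out, the repository may ship a requirements.txt pinning the exact Ansible version it was tested with; if it’s present, installing from it is the safest bet:

$ sudo pip install -r /usr/share/ceph-ansible/requirements.txt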

Creating the inventory file

An inventory file lists all the machines that Ansible must provision, dividing them into groups so that different machines can be provisioned differently. The default inventory is located at /etc/ansible/hosts, but you can place it wherever you want and pass the path as an argument when executing playbooks. Here’s a sample inventory file for ceph-ansible:

[mons]
mon1

[mgrs]
mon1

[osds]
osd1
osd2
osd3

In this example each machine (mon1, osd1-osd3) is reachable using that name by the admin node. If you don’t have DNS set up you can use IP addresses. Put monitors under [mons], osd nodes under [osds] and manager nodes under [mgrs]. Don’t forget to add at least one monitor and at least one manager.
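
Before going any further you can check that Ansible actually reaches every node in the inventory, using the privileged user mentioned earlier:

$ ansible all -m ping -u $USER

Add -i /path/to/inventory if your inventory isn’t in the default location.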

Configuring all.yml

The all.yml file must be placed in /etc/ansible/group_vars. It doesn’t exist out of the box, since it describes your specific deployment; in the same directory you will find another file, all.yml.sample, that you can use as a base. Here’s a sample all.yml:

ceph_origin: repository
ceph_repository: community
ceph_repository_type: cdn
ceph_stable_release: luminous

monitor_interface: eth0
public_network: 172.16.0.0/16
cluster_network: 10.10.10.0/24

The first four lines select the version of Ceph and the method used to obtain it. These parameters aren’t uniform: which ones you need depends on ceph_repository; you can read more about it here.

monitor_interface specifies which interface the monitor node(s) will listen on.

public_network specifies what public network the cluster will be available on.

cluster_network is not a required parameter; the nodes use it for cluster traffic (internal operations such as replication and recovery). In production environments a sufficiently capable cluster_network is a requirement. If you don’t set cluster_network it will default to the same value as public_network.
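
If you’re unsure which name to use for monitor_interface, you can list the interfaces and their addresses on the monitor node and pick the one sitting on your public network:

$ ip -o -4 addr show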

Configuring osds.yml

As with all.yml, this file must be placed in /etc/ansible/group_vars and you can find a corresponding sample file. This file specifies the configuration of osd nodes, hence it must be configured carefully. Here’s a sample:

osd_scenario: non-collocated
osd_objectstore: bluestore
devices:
  - /dev/sda
  - /dev/sdb
dedicated_devices:
  - /dev/sdc
  - /dev/sdc

osd_objectstore is the most important parameter here: it defines which backend Ceph will use to store objects. There are two common backends: filestore (older, requires a filesystem) and bluestore (newer, takes over the whole device, doesn’t require a filesystem).

osd_scenario defines where osd journals are stored:

  • collocated: journals are stored alongside data on the same devices. The cluster will lose some performance, since data and journal writes compete for the same disks.
  • non-collocated: journals are stored on dedicated_devices (in this example both sda and sdb journals will be stored on sdc).
  • lvm: uses LVM to achieve a non-collocated scenario.

devices is a list of the devices (on each osd node) used to store data (and journals if you set osd_scenario to collocated).

dedicated_devices is a list of devices dedicated to journals; each entry maps positionally to the corresponding entry in devices, so listing the same device twice (as with /dev/sdc above) places both journals on it. If you set osd_scenario to collocated, dedicated_devices is not required.
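
To figure out the device names to put under devices and dedicated_devices, you can inspect the disks attached to each osd node, for example with lsblk; remember that the playbook will repartition and format whatever you list here:

$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT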

Configuring the firewall (each node)

I won’t go deep into how to configure your firewall of choice, but here’s a list of the ports needed on each node, with a firewalld example after the list:

  • Each monitor:
    • 6789/tcp
  • Each manager, osd node:
    • from 6800/tcp to 7300/tcp
  • Each metadata server:
    • 6800/tcp
  • Each object gateway:
    • defaults to port 7480/tcp, but you can easily change it to 80 and 443 (if you want SSL).
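
For instance, on CentOS 7 with firewalld, opening the monitor port and the manager/osd range might look like this (a sketch; adapt zones and ports to your environment):

# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
# firewall-cmd --reload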

Deploy

The last step is to run the playbook:

$ cd /usr/share/ceph-ansible
$ ansible-playbook -i /path/to/inventory -u $USER site.yml

If you’re using the default hosts file you can omit the -i flag. Replace $USER with the aforementioned user with administration privileges.
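
If a run fails on some nodes and you want to re-run the playbook against a single group, Ansible’s --limit flag comes in handy:

$ ansible-playbook -i /path/to/inventory -u $USER --limit osds site.yml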

If everything goes well you shouldn’t see any errors, just a recap of each node and the number of tasks executed on it:

PLAY RECAP ********************************************************************************************************
mon1                        : ok=180  changed=15   unreachable=0    failed=0
osd1                        : ok=69   changed=5    unreachable=0    failed=0
osd2                        : ok=66   changed=5    unreachable=0    failed=0
osd3                        : ok=66   changed=5    unreachable=0    failed=0
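
Once the playbook completes you can log into a monitor node and check the cluster status; HEALTH_OK means the deployment went fine:

$ sudo ceph -s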