Installation and setup of Ceph

Last updated: 14-May-2016

In this section I will explain how to set up a very basic Ceph cluster. This example consists of:
An admin node
Three Monitors
Three OSD nodes

You can download all the code directly from GitHub; the git clone command is shown in the Preparation section below.

Note: This is not an installation guide, but rather an example of how it can be done. The official installation guide can be found here. A very informative article about Ceph can be found here.

Preparation

For this example I used VMs with CentOS 7 to build the Ceph cluster. On each VM you need to install some prerequisites like ELRepo, rpm and ntp. To make life a bit easier I created a Vagrant box that has all of this preinstalled. The box can be found on HashiCorp Atlas.
When I wrote this article Vagrant was on version 1.8.1, VirtualBox on version 5.0.21, and Git Bash on version 2.8.2. For Ceph I used Giant, but you can easily change this to another release. Make sure you first install the vagrant-hostmanager plugin. For this, open a terminal such as Git Bash.
  
$ vagrant plugin install vagrant-hostmanager
      

Download the code from GitHub.
  
$ git clone https://github.com/SonnyBurnett/ceph.git
      

Now cd to the folder where the Vagrantfile lives, and start all the VMs.
  
$ cd ceph
$ vagrant up
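
Booting seven VMs can take a while. Once vagrant up returns, you can confirm that every machine is actually running with vagrant status; this is a standard Vagrant command, not something specific to this repository.

$ vagrant status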
      

If all the machines are up, SSH to the admin node, and cd to the my-cluster folder. Warning: Do not change the user to root! Note: You can of course use other terminals like PuTTY or Royal TS.
  
$ vagrant ssh cephmaster
$ cd my-cluster
      

Set up communication

The admin node needs passwordless SSH access to all the other nodes (ceph-deploy also needs passwordless sudo there, which the vagrant user on a standard Vagrant box already has). For this we generate an SSH key on the admin node and distribute it to the other machines: just press Enter for all questions from ssh-keygen, and type the default password vagrant when ssh-copy-id asks for it. Repeat ssh-copy-id for every node in the cluster (a loop sketch follows the commands below). Note: you can also run the ssh.keys.sh script.
$ ssh-keygen
$ ssh-copy-id vagrant@cephmon1
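
The commands above only cover cephmon1. A small loop saves some typing; treat this as a sketch, since only the hostnames that actually appear in this article are listed, and the remaining OSD node names have to be added from your Vagrantfile.

$ for host in cephmon1 cephmon2 cephmon3 cephnode4; do ssh-copy-id vagrant@$host; done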
      

Now we can create the Ceph Storage Cluster. This is done by running a script.

$ ./ssh.keys.sh   
      

How the install script works

After running the install script you should already have a running Ceph cluster, at least if the status is active+clean. If you want to know exactly what happened, read on. Otherwise skip this part and go to the next section.

Create the new cluster by defining the initial monitor nodes. This writes a ceph.conf file and a monitor keyring into the current folder; the admin node will use ceph-deploy for this and for installing the software in the following steps.
$ ceph-deploy new cephmon1 cephmon2 cephmon3 
      

Now you need to change some defaults in the ceph.conf file. If you want to know more about these settings, look at the Ceph documentation.
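
For illustration only, here is roughly what such a ceph.conf could look like for this cluster. The [global] block with fsid, mon_initial_members and mon_host is what ceph-deploy new generates (the values are taken from the status output at the end of this article); the settings below the comment are common overrides for a small test cluster and are assumptions, not values taken from the repository. A pg num of 64 per pool is at least consistent with the 192 PGs over 3 pools shown in that output.

[global]
fsid = 0c147646-0131-495e-9c21-14317dab8bdf
mon_initial_members = cephmon1, cephmon2, cephmon3
mon_host = 192.168.33.82,192.168.33.83,192.168.33.84
# common overrides for a small Vagrant cluster (assumed, check the repository)
public network = 192.168.33.0/24
osd pool default size = 2
osd pool default pg num = 64
osd pool default pgp num = 64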

OK, we have created a cluster. Now we must install Ceph on all the machines. You can do this machine by machine, but it works just as well in one command.
$ ceph-deploy install --no-adjust-repos cephmon1 cephmon2 etc etc etc
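
The --no-adjust-repos flag tells ceph-deploy not to touch the repository configuration, so the packages come from whatever repositories are already set up on the box. If you build the VMs yourself instead of using the preinstalled box, you can drop that flag and let ceph-deploy configure the repositories for a specific release; --release is a standard ceph-deploy option, and the hostnames below are the same placeholders as in the command above.

$ ceph-deploy install --release giant cephmon1 cephmon2 etc etc etc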
      

With Ceph deployed we need to add the monitors and gather the keyrings. This sounds rather cryptic, but just run the command and then check your my-cluster folder: it should contain four keyring files.
$ ceph-deploy mon create-initial
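
For reference, on a Giant-era ceph-deploy the four keyrings gathered into my-cluster are usually named as follows (minor differences between versions are possible):

$ ls *.keyring
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring
ceph.client.admin.keyring   ceph.mon.keyring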
      

OK, the monitors are done. Now we add the OSDs. For this you must first prepare and then activate them; you can do this in one command or node by node (a sketch for all three OSD nodes at once follows below). Note that the data directory, /var/local/osd in this example, must already exist on the OSD node.
$ ceph-deploy osd prepare cephnode4:/var/local/osd 
$ ceph-deploy osd activate cephnode4:/var/local/osd 
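
The one-command variant simply lists all OSD nodes in a single prepare and a single activate call. Only cephnode4 actually appears in this article; cephnode5 and cephnode6 are assumed names following the same scheme, so adjust them to your Vagrantfile.

$ ceph-deploy osd prepare cephnode4:/var/local/osd cephnode5:/var/local/osd cephnode6:/var/local/osd
$ ceph-deploy osd activate cephnode4:/var/local/osd cephnode5:/var/local/osd cephnode6:/var/local/osd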
      

Copy the configuration file and admin key to your admin node and your Ceph nodes, so that you can use the ceph CLI without specifying the monitor address and keyring on every command. And finally: ensure that you have read permission on the ceph.client.admin.keyring on every node where you want to use the CLI (a loop sketch follows the commands below).
$ ceph-deploy admin cephmon1 cephnode4 cephmaster etc etc
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
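
The chmod has to run on every node where you want to use the ceph CLI, not just on the admin node. With the SSH keys from earlier, and assuming requiretty is disabled for the vagrant user (which ceph-deploy needs anyway), a quick sketch looks like this; again, only the hostnames mentioned in this article are listed.

$ for host in cephmon1 cephnode4; do ssh $host sudo chmod +r /etc/ceph/ceph.client.admin.keyring; done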
      


If all went well you can run the ceph -s command. The output should look something like this:
    cluster 0c147646-0131-495e-9c21-14317dab8bdf
     health HEALTH_OK
     monmap e1: 3 mons at {cephmon1=192.168.33.82:6789/0,cephmon2=192.168.33.83:6789/0,cephmon3=192.168.33.84:6789/0},
            election epoch 4, quorum 0,1,2 cephmon1,cephmon2,cephmon3
     osdmap e13: 3 osds: 3 up, 3 in
      pgmap v18: 192 pgs, 3 pools, 0 bytes data, 0 objects
            19073 MB used, 89470 MB / 111 GB avail
                 192 active+clean
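
If the status is not active+clean yet, give the cluster a minute and run ceph -s again. Another standard check is ceph osd tree, which shows whether all three OSDs are up and placed on the hosts you expect:

$ ceph osd tree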