Ceph S3 storage

Last updated: 16-May-2016

This article is still a draft.

To use S3 storage on your Ceph cluster you need a recent version of Ceph, so check your version first. To set up the RADOS Gateway (RGW) you also need an RGW bootstrap keyring; check whether it exists in your cluster's home folder.

$ ceph -v
ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432)
$ ls -al *keyring
-rw------- 1 vagrant vagrant 71 May 16 15:07 ceph.bootstrap-mds.keyring
-rw------- 1 vagrant vagrant 71 May 16 15:07 ceph.bootstrap-osd.keyring
-rw------- 1 vagrant vagrant 71 May 16 15:07 ceph.bootstrap-rgw.keyring
-rw------- 1 vagrant vagrant 63 May 16 15:07 ceph.client.admin.keyring
-rw------- 1 vagrant vagrant 73 May 16 13:15 ceph.mon.keyring


We're going to set up an S3 gateway on one of the cluster VMs. Type in the commands below, replacing cephmaster with the hostname of your gateway node.

$ ceph-deploy install --rgw cephmaster
$ ceph-deploy admin cephmaster
$ ceph-deploy rgw create cephmaster

The RADOS Gateway uses a built-in Civetweb web server, which runs on port 7480 by default. To test whether your gateway works, type:

$ curl http://cephmaster:7480

Now create a radosgw user on the gateway host (probably your admin node), then create a Swift subuser for it, and finally generate a secret key for that subuser.

$ sudo radosgw-admin user create --uid="testuser" --display-name="First User"
$ sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
$ sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
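Each radosgw-admin command above prints the user's info as JSON, and the Swift secret you need ends up in the swift_keys array. A minimal sketch of pulling it out with Python (the JSON below is an illustrative sample, not real output, and the secret_key in it is made up):

```python
import json

# Illustrative sample of the JSON that radosgw-admin prints;
# the secret_key value here is invented for the example.
output = '''
{
    "user_id": "testuser",
    "display_name": "First User",
    "swift_keys": [
        {"user": "testuser:swift",
         "secret_key": "ESV3pzD1hBoh0MvKjO3UOHuB5DbP6MFZHBmFqRTl"}
    ]
}
'''

info = json.loads(output)
swift_secret = info['swift_keys'][0]['secret_key']
print(swift_secret)
```

In practice you would feed the command's real output into json.loads instead of the sample string.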

Now we can test the S3 storage. For this we have to install the python-boto package. Then we create a Python script called s3test.py with the content below.

$ sudo yum install python-boto
$ vim s3test.py

import boto
import boto.s3.connection

access_key = 'I0PJDPCIYZ665MW88W9R'
secret_key = 'dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA'

# Connect to the RGW endpoint over plain HTTP on the default port.
conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='cephmaster', port=7480,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# Create a bucket, then list all buckets this user owns.
bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} created on {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )
Finally, run the script; it should create the bucket and list it:

$ python s3test.py
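The access and secret key in the script are what boto uses to sign every request: under the hood it applies the AWS v2 signing scheme, an HMAC-SHA1 over a canonical string-to-sign, base64-encoded into the Authorization header. A rough sketch of what that looks like (the date and resource below are made-up examples):

```python
import base64
import hmac
from hashlib import sha1

# Same credentials as in s3test.py.
access_key = 'I0PJDPCIYZ665MW88W9R'
secret_key = 'dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA'

# Canonical string-to-sign for a GET on my-new-bucket; the date is a
# made-up example. Fields: verb, Content-MD5, Content-Type, Date, resource.
string_to_sign = 'GET\n\n\nMon, 16 May 2016 15:07:00 +0000\n/my-new-bucket/'

# signature = base64(HMAC-SHA1(secret_key, string_to_sign))
signature = base64.b64encode(
    hmac.new(secret_key.encode('utf-8'),
             string_to_sign.encode('utf-8'), sha1).digest()
).decode('utf-8')

header = 'Authorization: AWS %s:%s' % (access_key, signature)
print(header)
```

This is only to show what the library does for you; in normal use boto builds this header itself on every call.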