Ceph Filesystem


We will first set up a 5th node, which we are going to call “cephclient”, from which we will run all the client testing:
benoit@admin:~/ceph-deploy$ ceph-deploy install cephclient
benoit@admin:~/ceph-deploy$ ceph-deploy admin cephclient
benoit@admin:~/ceph-deploy$ ssh cephclient
benoit@cephclient:~$ sudo chown benoit /etc/ceph/ceph.client.admin.keyring
benoit@cephclient:~$ ceph health
HEALTH_OK
benoit@cephclient:~$
The purpose of the last command is simply to check that the Ceph software is properly installed and configured on the client and that we have correct access to our cluster.

Remark: Ceph FS is not quite as stable as the Ceph Block Device and Ceph Object Storage.
 
1) Ensure that we have one Metadata Server (MDS) running.
Remark: even if the commands allow you to run more than one MDS server, at this time configurations with more than one MDS process are not supported.
We will run it on our node #2. So from the admin node, we'll do:
benoit@admin:~/ceph-deploy$ ceph-deploy mds create ceph2
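To confirm that the MDS daemon is up before moving on, you can query the cluster (a quick sketch; any node with an admin keyring will do):
benoit@admin:~/ceph-deploy$ ceph mds stat
The output should report the MDS on ceph2 as up:active, which is the same information that appears on the “mdsmap” line of the ceph status output shown below.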
2) Go back to your Ceph client, where you will need to have the package ceph-fs-common installed to have the mount.ceph utility available.
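If the package is not already present, it can be installed the usual way (assuming an Ubuntu/Debian client, like the rest of this setup):
benoit@cephclient:~$ sudo apt-get install ceph-fs-common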
If you check the status of the cluster now:
root@cephclient:/mnt2# ceph status
    cluster dc05d0bd-3173-4a20-acad-04beddd749af
     health HEALTH_OK
     monmap e5: 2 mons at {ceph1=192.168.0.111:6789/0,ceph3=192.168.0.113:6789/0}
            election epoch 42, quorum 0,1 ceph1,ceph3
     mdsmap e5: 1/1/1 up {0=ceph2=up:active}
     osdmap e106: 6 osds: 6 up, 6 in
      pgmap v10148: 256 pgs, 3 pools, 621 MB data, 2177 objects
            38904 MB used, 110 GB / 155 GB avail
                 256 active+clean
You can notice a new line in the output, the one starting with “mdsmap”.

3) Once the MDS service is running, you are able to mount the filesystem using the kernel driver:
benoit@cephclient:/$ sudo mount.ceph ceph1,ceph3:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret
or
benoit@cephclient:/$ sudo mount -t ceph ceph1,ceph3:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret
 In the command syntax, you will note:
  • For the source, we give a comma-separated list of all our MON instances in the cluster (ceph1 & ceph3).
  • You can add the port if your MON process doesn’t use the standard TCP port 6789. E.g.: mount.ceph ceph1:6789,ceph3:6789:/ /mnt -o name=admin,secretfile=admin.secret
  • By default, the cluster has authenticated access enabled, which is why you have to specify a username and a secret. The installation done so far has created a user called “admin” whose secret key is stored, base64-encoded, in /etc/ceph/ceph.client.admin.keyring. The parameters shown here use a file to store the secret.
For instance, if the content of /etc/ceph/ceph.client.admin.keyring is:
[client.admin]
        key = AQA5hBNWvCJADBAAaGn3gTbwrQDtVIZyWkjGJg==
The content of the file called /etc/ceph/admin.secret will be:
AQA5hBNWvCJADBAAaGn3gTbwrQDtVIZyWkjGJg==
So just one line containing the secret key, nothing else.
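One way to create that file, for instance, is to extract the key directly from the cluster (the path /etc/ceph/admin.secret is simply the one used in this example):
benoit@cephclient:~$ ceph auth get-key client.admin | sudo tee /etc/ceph/admin.secret
Alternatively, ceph-authtool -p /etc/ceph/ceph.client.admin.keyring prints the same key from the local keyring.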
Alternatively, you have the possibility to pass the secret key directly as a parameter using “secret=AQA5hBNWvCJADBAAaGn3gTbwrQDtVIZyWkjGJg==” instead of pointing to a file. This is less secure because this information can be seen in the process list.
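For example, the earlier mount command would then look like this (a sketch only, using the sample key shown above):
benoit@cephclient:/$ sudo mount -t ceph ceph1,ceph3:/ /mnt -o name=admin,secret=AQA5hBNWvCJADBAAaGn3gTbwrQDtVIZyWkjGJg==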
 
4) You can mount this filesystem automatically at boot time from the /etc/fstab file by using the following syntax:
ceph1,ceph3:/  /mnt  ceph  name=admin,secretfile=/etc/ceph/admin.secret   0 2
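To verify the entry without rebooting, you can, for instance, unmount the filesystem and let mount pick the options back up from /etc/fstab:
benoit@cephclient:/$ sudo umount /mnt
benoit@cephclient:/$ sudo mount /mnt
benoit@cephclient:/$ df -h /mnt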