We are now running a ceph cluster, which I find awesome. Who doesn’t like distributed, easily scalable storage pools?
However, the ceph storage is pretty useless if the clients can’t mount it. Given that most clients talk NFS, SMB or iSCSI rather than ceph, an intermediate node is needed to export ceph to the clients of the world. Enter nfsceph.
nfsceph is something I’ve written off and on over the past few weeks. It is a set of scripts that lets you create rbds (rados block devices) on ceph, map them, format them and export them to the world. In more concise terms: rbd create, rbd map, mkfs.ext3, exportfs.
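Under the hood that boils down to roughly the following sequence; the device path, mount point and export options here are illustrative guesses, not the exact flags nfsceph uses:
[root@nfs1 ~]# rbd create backup --size 10000                # create a 10000 MB rbd image
[root@nfs1 ~]# rbd map backup                                # map it to a /dev/rbd<x> device
[root@nfs1 ~]# mkfs.ext3 /dev/rbd0                           # put an ext3 filesystem on it
[root@nfs1 ~]# mount /dev/rbd0 /export/backup                # mount it on the NFS node
[root@nfs1 ~]# exportfs -o rw 192.168.1.22:/export/backup    # export it to a client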
Let’s see how it makes our (my) life easier!
‘nfsceph create’ creates a filesystem on ceph
[root@nfs1 ~]# nfsceph create backup 10000
Creating rbd... Success.
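If you want to double-check against ceph itself, the stock rbd tooling can confirm what that created (assuming the image lands in the default pool):
[root@nfs1 ~]# rbd ls            # the new 'backup' image should show up in this list
[root@nfs1 ~]# rbd showmapped    # and, once mapped, its /dev/rbd<x> device shows up here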
‘nfsceph list’ lists our filesystems
[root@nfs1 ~]# nfsceph list
backup 10.48576 GB
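(The 10000 passed to ‘nfsceph create’ looks like a size in megabytes: 10000 × 1024² bytes works out to exactly the 10.48576 GB reported here.)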
‘nfsceph export <filesystem> <ip>’ NFS-exports a filesystem to the specified IP
‘nfsceph export’ on its own shows the exports you have
[root@nfs1 ~]# nfsceph export backup 192.168.1.22
[root@nfs1 ~]# nfsceph export
At this point, the filesystem is ready to be mounted on the client. You can specify multiple clients, and also a netblock (192.168.1.0/24).
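On the client side (192.168.1.22 in this example) it is just an ordinary NFS mount; the server name and mount point below are assumptions based on the prompt and the export path shown further down:
[root@client ~]# mkdir -p /mnt/backup
[root@client ~]# mount -t nfs nfs1:/export/backup /mnt/backup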
On the NFS node, the ceph rbd shows up as a /dev/rbd<x> device, mounted under /export:
[root@nfs1 ~]# mount | grep backup
/dev/rbd6 on /export/backup type ext3 (rw)
The filesystem is exported with the following options for best performance and compatibility.
[root@nfs1 ~]# exportfs -v | grep backup
There’s also a set of initscripts that saves the current state to a file and makes the exports persistent across reboots. If you’d like to play with it, the source can be found on github.
With this architecture, we can scale out quite easily by just adding more intermediate nodes to ease the load. Cheap, (practically) unlimited NFS storage. Awesome. 🙂