- Ceph Cookbook
- Karan Singh
Configuring Ceph client
Any regular Linux host (RHEL- or Debian-based) can act as a Ceph client. The client interacts with the Ceph storage cluster over the network to store or retrieve user data. Ceph RBD support has been added to the mainline Linux kernel, starting with version 2.6.34.
How to do it…
As we have done earlier, we will set up a Ceph client machine using Vagrant and VirtualBox. We will use the same Vagrantfile that we cloned in the last chapter. Vagrant will then launch an Ubuntu 14.04 virtual machine that we will configure as a Ceph client:
- From the directory where we have cloned the ceph-cookbook git repository, launch the client virtual machine using Vagrant:

      $ vagrant status client-node1
      $ vagrant up client-node1
- Log in to client-node1:

      $ vagrant ssh client-node1
Note

The username and password that Vagrant uses to configure virtual machines is vagrant, and the vagrant user has sudo rights. The default password for the root user is vagrant.

- Check the OS and kernel release (this is optional):

      $ lsb_release -a
      $ uname -r
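Since native RBD support landed in kernel 2.6.34, the version reported by uname -r can be checked against that floor. A minimal sketch using GNU sort -V; the current value below is illustrative (on a real client you would take it from uname -r):

```shell
required="2.6.34"   # first mainline kernel with the rbd module
current="3.13.0"    # illustrative value; in practice: current="$(uname -r | cut -d- -f1)"

# sort -V orders version strings numerically; if the required version
# sorts first (or equal), the running kernel is new enough.
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel supports native RBD"
else
    echo "kernel too old for native RBD"
fi
```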
- Check for RBD support in the kernel:

      $ sudo modprobe rbd
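modprobe exits silently on success; to confirm the module actually loaded, look for it in /proc/modules (or in lsmod output). A sketch against a sample listing — the module lines below are illustrative, not taken from a real host:

```shell
# Illustrative /proc/modules-style listing; on the real client use:
#   grep '^rbd ' /proc/modules    (or simply: lsmod | grep rbd)
modules='rbd 83733 1 - Live 0x0000000000000000
libceph 287066 1 rbd, Live 0x0000000000000000'

# match only a line that begins with the exact module name
if printf '%s\n' "$modules" | grep -qE '^rbd[[:space:]]'; then
    echo "rbd module is loaded"
fi
```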
- Allow the ceph-node1 monitor machine to access client-node1 over SSH. To do this, copy root SSH keys from ceph-node1 to the client-node1 vagrant user. Execute the following commands from the ceph-node1 machine until otherwise specified:

      ## Login to ceph-node1 machine
      $ vagrant ssh ceph-node1
      $ sudo su -
      # ssh-copy-id vagrant@client-node1
Provide the one-time vagrant user password, that is, vagrant, for client-node1. Once the SSH keys are copied from ceph-node1 to client-node1, you should be able to log in to client-node1 without a password.

- Use the ceph-deploy utility from ceph-node1 to install Ceph binaries on client-node1:

      # cd /etc/ceph
      # ceph-deploy --username vagrant install client-node1
- Copy the Ceph configuration file (ceph.conf) to client-node1:

      # ceph-deploy --username vagrant config push client-node1
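For orientation, the ceph.conf pushed to the client needs little more than the cluster's fsid and monitor address so the client can find the cluster. A minimal illustrative fragment — the fsid and IP below are made up and will differ on your cluster:

```ini
[global]
fsid = 9609b429-eee2-4e23-af31-28a24fcf5cbc
mon_initial_members = ceph-node1
mon_host = 192.168.1.101
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```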
- The client machine will require Ceph keys to access the Ceph cluster. Ceph creates a default user, client.admin, which has full access to the Ceph cluster. It's not recommended to share client.admin keys with client nodes. The better approach is to create a new Ceph user with separate keys and allow access to specific Ceph pools.

  In our case, we will create a Ceph user, client.rbd, with access to the rbd pool. By default, Ceph block devices are created on the rbd pool:

      # ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'
- Add the key to the client-node1 machine for the client.rbd user:

      # ceph auth get-or-create client.rbd | ssh vagrant@client-node1 sudo tee /etc/ceph/ceph.client.rbd.keyring
- By this step, client-node1 should be ready to act as a Ceph client. Check the cluster status from the client-node1 machine by providing the username and secret key:

      $ vagrant ssh client-node1
      $ sudo su -
      # cat /etc/ceph/ceph.client.rbd.keyring >> /etc/ceph/keyring
      ### Since we are not using the default user client.admin, we need to supply the username that will connect to the Ceph cluster.
      # ceph -s --name client.rbd
machine by providing the username and secret key:$ vagrant ssh client-node1 $ sudo su - # cat /etc/ceph/ceph.client.rbd.keyring >> /etc/ceph/keyring ### Since we are not using the default user client.admin we need to supply username that will connect to Ceph cluster. # ceph -s --name client.rbd