GlusterFS volume management using Heketi

Heketi is an open source project that provides a RESTful management interface for managing the life cycle of GlusterFS volumes.

Using Heketi, it is possible to provision GlusterFS volumes dynamically in cloud services such as OpenShift. For example, if a cloud provider has hundreds of GlusterFS clusters, each carrying a large number of volumes, managing them by hand becomes a very cumbersome job for administrators.

The purpose of Heketi is to provide a simple way to create, list, and delete GlusterFS volumes in multiple storage clusters. Heketi has the intelligence to manage the allocation, creation, and deletion of bricks across the disks in a cluster, and it makes sure that bricks and their replicas are placed in different failure domains. Heketi supports any number of GlusterFS clusters.

Heketi can be installed on any system that has access to all storage nodes. Here we are using CentOS 7 to install Heketi.

Prerequisites

  • SSH user and public key set up on the Heketi node:
#   ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
  • The SSH user must have password-less sudo and must be able to run sudo commands on all Gluster nodes over SSH. For that, requiretty needs to be disabled in the /etc/sudoers file of all GlusterFS nodes. To disable it for all users, comment out the line below in /etc/sudoers:
Defaults    requiretty  
  • Set up password-less SSH access between the Heketi node and the Gluster server nodes, using the key generated above:
# ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster-node
  • GlusterFS nodes must have glusterfs-server installed and the glusterd service enabled.
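On CentOS 7 this can be done as follows, assuming the glusterfs packages are provided by the CentOS Storage SIG repository (centos-release-gluster):
# yum install centos-release-gluster
# yum install glusterfs-server
# systemctl enable glusterd
# systemctl start glusterd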

  • Disks registered with Heketi must be in raw format.
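A quick way to check that a disk is raw, and to clear leftover signatures if necessary, is shown below; note that wipefs -a destroys any existing data on the disk:
# lsblk /dev/sdb       # should show no partitions
# wipefs -a /dev/sdb   # clears existing filesystem/LVM signatures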

Installation

Enable the EPEL repository on the Heketi server:

# yum install epel-release

To install Heketi:

# yum install heketi

Change the owner and group of the keys generated earlier to heketi:

# chown heketi:heketi /etc/heketi/heketi_key*

Configuring the Heketi service

Modify the /etc/heketi/heketi.json file and set the executor to ssh. Add the private key file and SSH user:

    "executor": "ssh",
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root"

By default the executor is set to "mock", which is used for testing and development; it will not send commands to any node.
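Since the service will fail to start if heketi.json is no longer valid JSON after editing, it is worth running a quick sanity check first (using Python's json.tool module, available by default on CentOS 7):

# python -m json.tool /etc/heketi/heketi.json > /dev/null && echo OK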

Start the service:

# systemctl start heketi

Monitor /var/log/messages or journalctl -u heketi for any errors during service startup.
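Heketi listens on port 8080 by default (configurable via the "port" setting in heketi.json). A quick way to verify the server is responding is its /hello endpoint, which returns a short greeting when the service is healthy:

# curl http://localhost:8080/hello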

Topology Configuration

Topology is a JSON file describing the clusters, nodes, and disks to add to Heketi. Here we are using a two-node GlusterFS cluster.

# cat topology.json
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "10.10.43.61"
                            ],
                            "storage": [
                                "10.10.43.61"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "10.10.43.153"
                            ],
                            "storage": [
                                "10.10.43.153"
                            ]
                        },
                        "zone": 2
                    },
                    "devices": [
                        "/dev/sdb"
                    ]
                }
            ]
        }
    ]
}

A zone represents a failure domain. Heketi uses this information to make sure that replicas are created across failure domains, protecting against both data unavailability and data loss.

Cloud services can use the Heketi REST API to load the topology and manage the life cycle of GlusterFS volumes.
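For example, the known cluster IDs can be retrieved with any HTTP client via the documented GET /clusters endpoint (a sketch assuming the default setup, where JWT authentication is disabled in heketi.json); the response is a JSON object containing an array of cluster IDs:

# curl http://localhost:8080/clusters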

Here we are using the heketi-cli client to achieve this:

# export HEKETI_CLI_SERVER=http://localhost:8080
# heketi-cli load -json=topology.json
Creating cluster ... ID: 9dea03cc1b2261aa7fdfb186ab4fcd85  
    Creating node 10.10.43.61 ... ID: 264e39988b9513eda371d19d57d3f2d0
        Adding device /dev/sdb ... OK
    Creating node 10.10.43.153 ... ID: 0b6f9c870e2fa94bb6a1299c78ec7d7e
        Adding device /dev/sdb ... OK

To check cluster, node, and device info:

# heketi-cli cluster list
Clusters:  
956b7ea1bfba4e8d40b9a5a50c4acff0  
9dea03cc1b2261aa7fdfb186ab4fcd85  
db998b229528c1213db696e730bd619e

# heketi-cli cluster info 9dea03cc1b2261aa7fdfb186ab4fcd85
Cluster id: 9dea03cc1b2261aa7fdfb186ab4fcd85  
Nodes:  
0b6f9c870e2fa94bb6a1299c78ec7d7e  
264e39988b9513eda371d19d57d3f2d0  
Volumes:

# heketi-cli node info 0b6f9c870e2fa94bb6a1299c78ec7d7e
Node Id: 0b6f9c870e2fa94bb6a1299c78ec7d7e  
Cluster Id: 9dea03cc1b2261aa7fdfb186ab4fcd85  
Zone: 2  
Management Hostname: 10.10.43.153  
Storage Hostname: 10.10.43.153  
Devices:  
Id:23f18812f53f6f30e3514104a0941b9d   Name:/dev/sdb            Size (GiB):99      Used (GiB):0       Free (GiB):99      

# heketi-cli device info 23f18812f53f6f30e3514104a0941b9d
Device Id: 23f18812f53f6f30e3514104a0941b9d  
Name: /dev/sdb  
Size (GiB): 99  
Used (GiB): 0  
Free (GiB): 99  
Bricks:  

Create GlusterFS volume

To create a 2 x 2 distributed-replicate volume:

# heketi-cli volume create -name=testvol -size=40 -durability="replicate" -replica=2
Name: testvol  
Size: 40  
Id: fcb74f0cdf4dd68a4d6731295521ada6  
Cluster Id: 9dea03cc1b2261aa7fdfb186ab4fcd85  
Mount: 10.10.43.153:testvol  
Mount Options: backupvolfile-servers=10.10.43.61  
Durability Type: replicate  
Replica: 2  
Snapshot: Disabled

Bricks:  
Id: 30d5d0bfc0246c0916e5771abea7e235  
Path: /var/lib/heketi/mounts/vg_ba9a636ef66bf70d64ff1a55c1a11cb8/brick_30d5d0bfc0246c0916e5771abea7e235/brick  
Size (GiB): 20  
Node: 264e39988b9513eda371d19d57d3f2d0  
Device: ba9a636ef66bf70d64ff1a55c1a11cb8

Id: 51fdeade015b46d312453328e05f6065  
Path: /var/lib/heketi/mounts/vg_ba9a636ef66bf70d64ff1a55c1a11cb8/brick_51fdeade015b46d312453328e05f6065/brick  
Size (GiB): 20  
Node: 264e39988b9513eda371d19d57d3f2d0  
Device: ba9a636ef66bf70d64ff1a55c1a11cb8

Id: d95b9714f80dd75171e630d1bf410f2e  
Path: /var/lib/heketi/mounts/vg_23f18812f53f6f30e3514104a0941b9d/brick_d95b9714f80dd75171e630d1bf410f2e/brick  
Size (GiB): 20  
Node: 0b6f9c870e2fa94bb6a1299c78ec7d7e  
Device: 23f18812f53f6f30e3514104a0941b9d

Id: fe4d88c9fbac9bc43b8df1279804fcc1  
Path: /var/lib/heketi/mounts/vg_23f18812f53f6f30e3514104a0941b9d/brick_fe4d88c9fbac9bc43b8df1279804fcc1/brick  
Size (GiB): 20  
Node: 0b6f9c870e2fa94bb6a1299c78ec7d7e  
Device: 23f18812f53f6f30e3514104a0941b9d

The volume is created. Now we can access it by mounting it on any GlusterFS client.
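For example, on a client with the glusterfs-fuse package installed, the Mount and Mount Options lines from the output above translate directly into a mount command (/mnt/testvol is just an example mount point):

# yum install glusterfs-fuse
# mkdir -p /mnt/testvol
# mount -t glusterfs -o backupvolfile-servers=10.10.43.61 10.10.43.153:testvol /mnt/testvol

When the volume is no longer needed, the rest of the life cycle is handled by the same client, using the volume Id from the output above:

# heketi-cli volume list
# heketi-cli volume delete fcb74f0cdf4dd68a4d6731295521ada6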