OpenStack Manila Multi-Node Setup

This is a step-by-step guide to installing and configuring the OpenStack Manila service on a multi-node OpenStack Kilo setup.

In a previous blog post (First Steps with Openstack Kilo) we described how to install OpenStack Kilo on three CentOS 7.1 x86_64 servers. We hope you enjoyed reading that post.

System details used in setup:

controller (this will also run manila-api and manila-scheduler) 10.104.10.22  
compute (this will also run manila-share)   10.104.1.210  
network (this will also run cinder-volume)  10.104.10.45  

The Manila backend will be OpenStack Cinder. Since we did not configure Cinder in the previous post, we will configure it first.

Cinder Installation and Configuration

We installed cinder-volume on the network node; refer to the OpenStack Kilo documentation for the Cinder installation steps.

We placed cinder-volume on the network node because the cinder-api and cinder-scheduler services are already running on the controller node.
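
For reference, here is a rough sketch of what the LVM backend section of /etc/cinder/cinder.conf on the cinder-volume node could look like. The backend name lvm and the volume group name cinder-volumes follow the Kilo install guide and are assumptions here, so adjust them to your environment:

[DEFAULT]
# name of the enabled backend section below
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# LVM volume group that backs the Cinder volumes
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm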

[root@network ~]# cinder service-list
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |     Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |  controller | nova | enabled |   up  | 2015-07-06T13:31:36.000000 |       None      |
|  cinder-volume   | compute@lvm | nova | enabled |   up  | 2015-07-06T13:31:33.000000 |       None      |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+

Create a 1 GB volume and attach it to a Nova instance to verify that Cinder is working correctly:

[root@network ~]# cinder create --name demo-volume1 1
[root@network ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |     Name     | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| d9181a96-18b3-41d8-86ef-1ca4942412fa | available | demo-volume1 |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

To launch an instance and attach the volume to it, follow the OpenStack guide.
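
As a rough sketch, assuming an instance named demo-instance1 (a hypothetical name) is already running, attaching the volume created above might look like this:

# attach the 1 GB volume to the instance, letting Nova pick the device name
[root@controller ~]# nova volume-attach demo-instance1 d9181a96-18b3-41d8-86ef-1ca4942412fa auto

Afterwards, cinder list should report the volume as in-use and show the instance in the "Attached to" column.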

If you can successfully attach the new volume to the launched instance (and it is visible inside the instance as a raw block device), everything is working fine on the Cinder side, and we can move on to configuring Manila.

PS: One thing to note is that when Manila creates a Cinder volume and tries to attach it to the service VM, the attach will fail if /var/lock/cinder does not exist on the cinder-volume node (the network node in our case). Create this directory beforehand and make sure it is owned by the cinder user and group. You may also want to restart the openstack-cinder-volume service on the storage server after making this change.
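
A minimal sketch of that preparation on the network node, assuming the stock cinder user and group shipped by the RDO packages:

# create the lock directory, hand it to the cinder user, then restart the volume service
[root@network ~]# mkdir -p /var/lock/cinder
[root@network ~]# chown cinder:cinder /var/lock/cinder
[root@network ~]# systemctl restart openstack-cinder-volume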

Manila Installation and Configuration

First, upload the official Manila service image to Glance:

[root@controller ~]# wget https://github.com/uglide/manila-image-elements/releases/download/0.1.0/manila-service-image.qcow2 

[root@controller ~]# glance image-create --name "manila-service-image-new" --file manila-service-image.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

[root@controller ~]# glance image-list
+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| 4735b462-1f25-44e8-ac98-e97e2d753af9 | Fedora22                 |
| 9bfcb97f-7852-4d4b-8582-42d46292f04a | manila-service-image-new |
+--------------------------------------+--------------------------+

Now follow the steps below to install Manila:

  • Install the required packages on the controller node. The openstack-manila-api and openstack-manila-scheduler services will run on the controller node.
# yum install openstack-manila python-manila python-manilaclient
  • Install the required packages on the compute node. The openstack-manila-share service will run on the compute node.
# yum install openstack-manila-share python-manila

For reference, our manila.conf and api-paste.ini configuration files are available at:

https://gist.github.com/NehaRawat/b3b2a76c48747d7068cc
https://gist.github.com/NehaRawat/967f1fe8df34c3deeb38

To arrive at the right configuration parameters we referred to a Packstack-based installation with Manila and adapted it to our multi-node setup.
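
As an illustration only (not a drop-in configuration), the backend-specific part of /etc/manila/manila.conf on the compute node for the generic driver might look roughly like the snippet below. The backend name backend1 matches the service listings later in this post; the service instance credentials and image name are assumptions, so please refer to the gists above for the files we actually used:

[DEFAULT]
# name of the enabled backend section below
enabled_share_backends = backend1

[backend1]
share_backend_name = backend1
share_driver = manila.share.drivers.generic.GenericShareDriver
# the generic driver manages its own share servers (service VMs)
driver_handles_share_servers = True
# Glance image uploaded earlier, used to boot the service VM
service_image_name = manila-service-image-new
service_instance_user = manila
service_instance_password = manila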

  • Start and enable the Manila services:
[root@controller ~]# systemctl start openstack-manila-api
[root@controller ~]# systemctl start openstack-manila-scheduler

[root@controller ~]# systemctl enable openstack-manila-api
[root@controller ~]# systemctl enable openstack-manila-scheduler

[root@compute ~]# systemctl start openstack-manila-share

[root@compute ~]# systemctl enable openstack-manila-share
  • To verify, use the manila service-list command:
[root@controller ~]# manila service-list
+----+------------------+------------------+------+---------+-------+----------------------------+
| Id | Binary           | Host             | Zone | Status  | State | Updated_at                 |
+----+------------------+------------------+------+---------+-------+----------------------------+
| 1  | manila-scheduler | controller       | nova | enabled | up    | 2015-07-07T01:26:14.000000 |
| 2  | manila-share     | compute@backend1 | nova | enabled | up    | 2015-07-07T01:26:16.000000 |
+----+------------------+------------------+------+---------+-------+----------------------------+

Here backend1 is the name of the share backend configured in manila.conf.

  • Create a default share type. The trailing True argument sets the required driver_handles_share_servers extra spec to True, matching the generic driver backend:
[root@controller ~]# manila type-create default_share_type True
+--------------------------------------+--------------------+------------+------------+-------------------------------------+
| ID                                   | Name               | Visibility | is_default | required_extra_specs                |
+--------------------------------------+--------------------+------------+------------+-------------------------------------+
| ce081b61-ae6f-4cdf-badd-a97d2300a949 | default_share_type | public     | -          | driver_handles_share_servers : True |
+--------------------------------------+--------------------+------------+------------+-------------------------------------+

[root@controller ~]# manila type-list
+--------------------------------------+--------------------+------------+------------+-------------------------------------+
| ID                                   | Name               | Visibility | is_default | required_extra_specs                |
+--------------------------------------+--------------------+------------+------------+-------------------------------------+
| ce081b61-ae6f-4cdf-badd-a97d2300a949 | default_share_type | public     | YES        | driver_handles_share_servers : True |
+--------------------------------------+--------------------+------------+------------+-------------------------------------+
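
For the new type to be reported as the default (the YES in the is_default column above), the default_share_type option in manila.conf has to name it. This is a one-line sketch of what we would expect in the [DEFAULT] section, already covered by the gists linked earlier:

[DEFAULT]
# share type used when a request does not specify one
default_share_type = default_share_type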
  • In this step we will create a share network, which will be used during share server creation. Neutron is the default network backend for the generic driver.

List the Neutron networks:

[root@controller ~]# neutron net-list
+--------------------------------------+------------------------+----------------------------------------------------+
| id                                   | name                   | subnets                                            |
+--------------------------------------+------------------------+----------------------------------------------------+
| cef6c917-fdbb-42c8-aa25-5e7f09b8215c | ext-net                | aa680997-210c-4b8a-bf05-4ceffd97f8f3 10.104.0.0/16 |
| c7fcd8a9-7da1-43c3-bef3-089c86773b53 | priv-net               | 9394ca4d-54be-4b4d-a911-7ce03e4c7abe 172.16.0.0/16 |
| 056dad52-0226-433d-9944-5225108edadc | manila_service_network |                                                    |
+--------------------------------------+------------------------+----------------------------------------------------+

Now create the Manila share network using the priv-net network ID and subnet ID:

[root@controller ~]# manila share-network-create --neutron-net-id c7fcd8a9-7da1-43c3-bef3-089c86773b53 --neutron-subnet-id 9394ca4d-54be-4b4d-a911-7ce03e4c7abe --name manila_share
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| name              | manila_share                         |
| segmentation_id   | None                                 |
| created_at        | 2015-07-07T01:41:12.505139           |
| neutron_subnet_id | 9394ca4d-54be-4b4d-a911-7ce03e4c7abe |
| updated_at        | None                                 |
| network_type      | None                                 |
| neutron_net_id    | c7fcd8a9-7da1-43c3-bef3-089c86773b53 |
| ip_version        | None                                 |
| nova_net_id       | None                                 |
| cidr              | None                                 |
| project_id        | d4df4b96412e4096b8337a1fcfbd4686     |
| id                | a61e7d07-c534-494e-8cc3-8afbaa9da4d2 |
| description       | None                                 |
+-------------------+--------------------------------------+

[root@controller ~]#  manila share-network-list
+--------------------------------------+--------------+
| id                                   | name         |
+--------------------------------------+--------------+
| a61e7d07-c534-494e-8cc3-8afbaa9da4d2 | manila_share |
+--------------------------------------+--------------+
  • Create an NFS share using the share network. By default Manila uses the GenericShareDriver, which relies on the Cinder service: it launches a service VM, creates a Cinder volume, attaches that volume to the service VM (a Nova instance), formats and mounts it, and finally exports it from the service VM as an NFSv4 export.
[root@controller ~]# manila create --name myshare  --share-network a61e7d07-c534-494e-8cc3-8afbaa9da4d2 NFS 1
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| status            | creating                             |
| description       | None                                 |
| availability_zone | nova                                 |
| share_network_id  | a61e7d07-c534-494e-8cc3-8afbaa9da4d2 |
| export_locations  | []                                   |
| share_server_id   | None                                 |
| host              | None                                 |
| snapshot_id       | None                                 |
| is_public         | False                                |
| id                | 01578d60-5741-4c8c-8d6e-6819d861cdff |
| size              | 1                                    |
| name              | myshare                              |
| share_type        | ce081b61-ae6f-4cdf-badd-a97d2300a949 |
| created_at        | 2015-07-07T01:45:56.013356           |
| export_location   | None                                 |
| share_proto       | NFS                                  |
| project_id        | d4df4b96412e4096b8337a1fcfbd4686     |
| metadata          | {}                                   |
+-------------------+--------------------------------------+
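
The share stays in the creating state for a little while because the driver has to boot the service VM and attach a Cinder volume behind the scenes. You can poll its status with, for example:

[root@controller ~]# manila show myshare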
  • Verify the status of the Manila share and the service VM:
[root@controller ~]# manila list
+--------------------------------------+---------+------+-------------+-----------+-----------+--------------------+----------------------------------------------------------------+---------------------------+
| ID                                   | Name    | Size | Share Proto | Status    | Is Public | Share Type         | Export location                                                | Host                      |
+--------------------------------------+---------+------+-------------+-----------+-----------+--------------------+----------------------------------------------------------------+---------------------------+
| 01578d60-5741-4c8c-8d6e-6819d861cdff | myshare | 1    | NFS         | available | False     | default_share_type | 10.254.0.5:/shares/share-01578d60-5741-4c8c-8d6e-6819d861cdff  | compute@backend1#backend1 |
+--------------------------------------+---------+------+-------------+-----------+-----------+--------------------+----------------------------------------------------------------+---------------------------+

[root@controller ~]# manila share-server-list
+--------------------------------------+------------------+--------+---------------+----------------------------------+----------------------------+
| Id                                   | Host             | Status | Share Network | Project Id                       | Updated_at                 |
+--------------------------------------+------------------+--------+---------------+----------------------------------+----------------------------+
| de2c1922-8e19-4e22-929d-755ed94b096d | compute@backend1 | ACTIVE | manila_share  | d4df4b96412e4096b8337a1fcfbd4686 | 2015-07-10T02:05:23.000000 |
+--------------------------------------+------------------+--------+---------------+----------------------------------+----------------------------+
  • On the network node, inside the router namespace, you should see a new interface come up with IP 10.254.0.1/28. This is a good sign; if it does not happen for some reason, the Nova instances in the tenant network will not be able to reach the share server.
[root@network ~]# ip netns list
qdhcp-8d8c6336-2f8c-4828-96ba-799b093d8280  
qdhcp-c7fcd8a9-7da1-43c3-bef3-089c86773b53  
qrouter-31ea30cf-5f14-4ad0-b3bd-fcf034378f71

[root@network ~]# ip netns exec qrouter-31ea30cf-5f14-4ad0-b3bd-fcf034378f71 /bin/bash

[root@network ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN  
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
9: qr-758ddf52-75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN  
    link/ether fa:16:3e:64:98:54 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.1/16 brd 172.16.255.255 scope global qr-758ddf52-75
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe64:9854/64 scope link 
       valid_lft forever preferred_lft forever
10: qg-92a6bfec-78: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN  
    link/ether fa:16:3e:cd:67:0f brd ff:ff:ff:ff:ff:ff
    inet 10.104.15.1/16 brd 10.104.255.255 scope global qg-92a6bfec-78
       valid_lft forever preferred_lft forever
    inet 10.104.15.2/32 brd 10.104.15.2 scope global qg-92a6bfec-78
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fecd:670f/64 scope link 
       valid_lft forever preferred_lft forever
12: qr-2ef9db00-95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN  
    link/ether fa:16:3e:0e:6c:26 brd ff:ff:ff:ff:ff:ff
    inet 10.254.0.1/28 brd 10.254.0.15 scope global qr-2ef9db00-95
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe0e:6c26/64 scope link 
       valid_lft forever preferred_lft forever
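
As an optional sanity check, you can ping the service VM from inside this namespace; 10.254.0.5 is the IP that appears in the share's export location above:

[root@network ~]# ping -c 3 10.254.0.5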
  • Now log in to the service VM to check the status of the Manila share. Note that you can only reach the service VM from the router's network namespace on the network node or from the compute node:
[root@compute ~]# ssh manila@10.254.0.5
manila@10.254.0.5's password:  
Welcome to Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-53-generic i686)

 * Documentation:  https://help.ubuntu.com/
Last login: Fri Jul 10 02:09:24 2015 from host-10-254-0-4.openstacklocal  
$ df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on  
/dev/vda1      ext4      1.1G  732M  288M  72% /
none           tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup  
udev           devtmpfs  999M  4.0K  999M   1% /dev  
tmpfs          tmpfs     202M  484K  202M   1% /run  
none           tmpfs     5.0M     0  5.0M   0% /run/lock  
none           tmpfs    1008M     0 1008M   0% /run/shm  
none           tmpfs     100M     0  100M   0% /run/user  
/dev/vdb       ext4      976M  1.3M  908M   1% /shares/share-01578d60-5741-4c8c-8d6e-6819d861cdff
  • Now launch a new instance using a Fedora 22 image and allow that Nova instance access to the share. To launch the VM, refer to the earlier blog post First Steps with Openstack Kilo.

Note: below we allow our Fedora 22 instance, with IP address 10.104.15.2, access to the Manila share. Please do not use 0.0.0.0/0, as we found it does not work as desired.

# manila access-allow 01578d60-5741-4c8c-8d6e-6819d861cdff ip 10.104.15.2
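
To confirm that the rule has been applied (it should eventually show up in the active state), you can list the access rules for the share, for example:

# manila access-list 01578d60-5741-4c8c-8d6e-6819d861cdff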

Log in to the Nova instance, mount the NFS share, and create files to verify write access:

[root@demo-instance1 ~]# mount -t nfs 10.254.0.5:/shares/share-01578d60-5741-4c8c-8d6e-6819d861cdff /mnt

[root@demo-instance1 ~]# df -Th
Filesystem                                                    Type      Size  Used Avail Use% Mounted on  
devtmpfs                                                      devtmpfs  991M     0  991M   0% /dev  
tmpfs                                                         tmpfs    1001M     0 1001M   0% /dev/shm  
tmpfs                                                         tmpfs    1001M  304K 1001M   1% /run  
tmpfs                                                         tmpfs    1001M     0 1001M   0% /sys/fs/cgroup  
/dev/vda1                                                     ext4       20G  566M   19G   3% /
tmpfs                                                         tmpfs     201M     0  201M   0% /run/user/1000  
10.254.0.5:/shares/share-01578d60-5741-4c8c-8d6e-6819d861cdff nfs4      976M  1.3M  908M   1% /mnt

Now create a few files to check that you have write access to the share:

# cd /mnt
# touch t{1..3}

You can now either go to the share service VM and manually check the share backend, or mount the same share on another Nova instance to verify that these files are visible there as well.
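
A rough sketch of the second approach, assuming a second instance (hypothetically named demo-instance2 here) that has also been granted access with manila access-allow:

[root@demo-instance2 ~]# mount -t nfs 10.254.0.5:/shares/share-01578d60-5741-4c8c-8d6e-6819d861cdff /mnt
[root@demo-instance2 ~]# ls /mnt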

This was a simple demonstration of how to use the OpenStack Manila service by integrating it into an existing OpenStack Kilo multi-node setup, with Cinder as the backend for the NFS shares.

In the next blog post we will take this setup a step further and demonstrate how easy it is to create a new backend with GlusterFS.