First Steps with Openstack Kilo

In this blog post we describe how we set up the Openstack Kilo release on three CentOS 7.1 x86_64 servers.

Details of systems used in this setup:

controller 10.104.10.22  
network 10.104.10.45  
compute 10.104.1.210  

Note that these systems had IPTables disabled and SELinux set to disabled as well. In an ideal production environment you would want to keep SELinux in enforcing mode and IPTables enabled; Openstack has excellent SELinux policies you can use to keep the environment safe and secure.
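
For reference, a minimal sketch of how we would put a lab system into this (non-production) state is below, assuming the stock firewalld service is in use on CentOS 7.1:

# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# systemctl stop firewalld
# systemctl disable firewalld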

1. Basic setup

In this section we perform the basic configuration on all three systems needed to install Openstack Kilo.

1.1 Configure Networking

Before installing Openstack we need to configure the network interface. Note that in our case we had only one network interface on each system. Follow the steps below on all three systems.

Edit the /etc/sysconfig/network-scripts/ifcfg-em1 file (em1 is our physical network interface) and add the line below:

NM_CONTROLLED=no  

Stop NetworkManager service and disable it on boot.

# systemctl stop NetworkManager
# systemctl disable NetworkManager
# systemctl status NetworkManager

Verify network connectivity to the Internet and among the nodes before proceeding further.
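
As a quick sanity check, pings along these lines should succeed from every node, and adding the nodes to /etc/hosts keeps name resolution independent of DNS (the hostnames are just illustrative labels for the addresses listed above):

# ping -c 4 openstack.org
# ping -c 4 10.104.10.22
# ping -c 4 10.104.10.45
# ping -c 4 10.104.1.210

cat /etc/hosts

10.104.10.22   controller
10.104.10.45   network
10.104.1.210   compute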

1.2 Configure yum repositories

Configure the EPEL and openstack-kilo repositories, which provide the packages required for the installation.

# yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
# yum install http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm

1.3 Configure NTP

You must install NTP to properly synchronize time among the nodes. To configure NTP, follow the steps below:

# yum -y install ntp
# ntpdate -u 0.centos.pool.ntp.org
# systemctl enable ntpd 
# systemctl start ntpd 
# systemctl status ntpd 
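
Once ntpd is running, you can confirm that it is actually talking to the upstream pool; after a few minutes at least one peer should show a non-zero reach value:

# ntpq -p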

2. Openvswitch Configuration

In this setup, before configuring the other Openstack components, we set up Openvswitch bridges on the network and compute systems.

We are using a single interface, em1, on all three systems. It is not required to configure an Openvswitch bridge on the controller node.

Install the openvswitch packages on the compute and network nodes, then enable and start the openvswitch service.

# yum -y install openstack-neutron-openvswitch
# systemctl enable openvswitch.service
# systemctl start openvswitch.service

Bridge mappings:

  • br-ex - External network bridge
  • br-int - Integration bridge

2.1 Network Node Configuration

Add the bridges below to set up networking:

# ovs-vsctl add-br br-int
# ovs-vsctl add-br br-ex

Verify Configuration:

# ovs-vsctl show

We will now add our physical network interface em1 as a port to the external OVS Bridge br-ex.
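
For reference, the non-persistent equivalent on a running system would be the single command below. Note that on a host with only one interface this immediately moves the uplink under the bridge, so connectivity is lost until an IP address is configured on br-ex, which is why we rely on the ifcfg files instead:

# ovs-vsctl add-port br-ex em1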

To make the changes persist across reboots, edit the /etc/sysconfig/network-scripts/ifcfg-em1 file and add the lines below:

TYPE=OVSPort  
DEVICETYPE=ovs  
OVS_BRIDGE=br-ex  

cat /etc/sysconfig/network-scripts/ifcfg-em1

DEVICE=em1  
ONBOOT=yes  
NM_CONTROLLED=no  
TYPE=OVSPort  
DEVICETYPE=ovs  
OVS_BRIDGE=br-ex  

Create the /etc/sysconfig/network-scripts/ifcfg-br-ex file; it should look like this:

DEVICE=br-ex  
OVSBOOTPROTO=static  
DNS1=10.1.1.254  
IPADDR=10.104.10.45  
NETMASK=255.255.0.0  
MACADDR=B8:AC:6F:8E:9D:CB  
OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR"  
NM_CONTROLLED=no  
ONBOOT=yes  
TYPE=OVSBridge  
DEVICETYPE=ovs  

Now finally we will bring up the new configuration without losing network connectivity to the host:

# ifconfig br-ex down
# ifconfig em1 down ; ifconfig em1 up
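
After bouncing the interfaces it is worth checking that em1 ended up as a port on br-ex and that the bridge carries the address from its ifcfg file. If the address did not move over, restarting the network service re-reads the ifcfg files at the cost of a brief outage:

# ovs-vsctl list-ports br-ex
# ip addr show br-ex
# systemctl restart network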

2.2 Compute Node Configuration

Create the same bridges on the compute node:

# ovs-vsctl add-br br-ex
# ovs-vsctl add-br br-int

We will now add our physical network interface as a port to the OVS bridge br-ex so that the change persists across reboots:

cat /etc/sysconfig/network-scripts/ifcfg-em1

DEVICE=em1  
ONBOOT=yes  
NM_CONTROLLED=no  
TYPE=OVSPort  
DEVICETYPE=ovs  
OVS_BRIDGE=br-ex  

cat /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex  
OVSBOOTPROTO=static  
DNS1=10.1.1.254  
MACADDR=e8:9a:8f:bd:c3:ce  
OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR"  
IPADDR=10.104.1.210  
NETMASK=255.255.0.0  
NM_CONTROLLED=no  
ONBOOT=yes  
TYPE=OVSBridge  
DEVICETYPE=ovs  

3. Openstack Installation and Configuration

After configuring networking, refer to the excellent Openstack Guide to install the core Openstack components.

We installed the other components manually. The primary reason for doing everything by hand was to get a better understanding of each component and its configuration. For the basic setup we configured the services below:

  • Keystone
  • Glance
  • Compute
  • Neutron
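
Once the core services from the guide are in place, a quick sanity check as the admin user confirms they respond; these are standard Kilo-era CLI calls:

# Source the admin credentials, then both commands should return without authentication errors
source admin-openrc.sh
nova service-list
glance image-list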

In this setup we are using the ML2 plugin for neutron with the openvswitch mechanism driver. Our external network (for floating IPs) uses the flat network type, and the internal VM network uses GRE tunnels.

The plugin configuration file looks like this on the neutron network node:

[ml2]
type_drivers = flat,gre  
tenant_network_types = gre  
mechanism_drivers = openvswitch  
[ml2_type_flat]
flat_networks = external  
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000  
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True  
enable_ipset = True  
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver  
[ovs]
local_ip = 10.104.10.45  
bridge_mappings = external:br-ex  
[agent]
tunnel_types = gre  
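
We are not reproducing the whole file here; on an RDO installation this plugin configuration normally lives in /etc/neutron/plugins/ml2/ml2_conf.ini, the init scripts expect a /etc/neutron/plugin.ini symlink pointing to it, and the OVS agent has to be restarted after editing it. Roughly:

# Create the plugin.ini symlink if it is not already there, then restart the OVS agent
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
systemctl enable neutron-openvswitch-agent.service
systemctl restart neutron-openvswitch-agent.service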

The plugin configuration file looks like this on the nova compute node:

[ml2]
type_drivers = gre  
tenant_network_types = gre  
mechanism_drivers = openvswitch  
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000  
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True  
enable_ipset = True  
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver  
[ovs]
local_ip = 10.104.1.210  
[agent]
tunnel_types = gre  
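
The same applies on the compute node: restart its OVS agent after editing the file, and then from the controller (with the admin credentials sourced) confirm that the agents report as alive. The exact set of agents listed depends on which services you configured:

# Restart the OVS agent on the compute node after editing the file
systemctl restart neutron-openvswitch-agent.service

# From the controller, the agents should now show up as alive
neutron agent-list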

3.1 Create networks

After all the components were installed and configured, we created the initial networks needed to launch an instance in the Openstack environment.

Create the external network; this is done as the administrator:

source admin-openrc.sh  

Now we create the external provider network as shown below:

neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat  

Create a subnet for the external network:

neutron subnet-create ext-net 10.104.0.0/16 --name ext-subnet --allocation-pool start=10.104.15.1,end=10.104.15.20 --disable-dhcp --gateway 10.104.0.1  

Now we will create the tenant network and a subnet within it. We will first source the credentials of the demo tenant:

source demo-openrc.sh  

Now we will create a demo network within this tenant and create a subnet within it as shown below:

neutron net-create demo-net  
neutron subnet-create demo-net 172.16.0.0/16 --name demo-subnet --gateway 172.16.0.1  

Finally we will create a router:

neutron router-create demo-router  

And attach the router to the external network as well as the demo-subnet within the tenant network:

neutron router-interface-add demo-router demo-subnet  
neutron router-gateway-set demo-router ext-net  
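
At this point the router's gateway port should have picked up an address from the ext-net allocation pool (10.104.15.1-10.104.15.20 in our case), and that address should answer pings from the external network. As a quick check:

# The gateway usually takes the first address in the pool; substitute whatever router-port-list shows
neutron router-port-list demo-router
ping -c 4 10.104.15.1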

3.2 Launch Instance

Before launching an instance, create a keypair to access it via ssh and add port 22 to the default security group; otherwise the instance will not be accessible.

# Generate key pair to access instance
nova keypair-add demo-key > demo-key.pem  
nova keypair-list

# Get network id to launch instance
neutron net-list

# Add rules to security group to access instance
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0  
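
If you also want the instance to answer ping (handy when testing the floating IP later), an ICMP rule can be added in the same way, and the rules can be listed to confirm:

# Allow ICMP and list the resulting rules
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-list-rules default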

Perform the steps below to launch an instance and then verify its status. Note that since we followed the official Openstack Kilo installation guide step by step we already have a CirrOS image in the cloud; if you want, you could also use the Fedora QCOW2 cloud images.

# Launch Instance
nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=ab3201d5-ba9f-4210-a61d-0777f5a0e48b  --security-group default --key-name demo-key demo-instance1

# Verify status of instance
nova list  
+--------------------------------------+----------------+--------+------------+-------------+---------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks            |
+--------------------------------------+----------------+--------+------------+-------------+---------------------+
| 37c34043-cf4d-4f71-bf5a-773b2a06ae15 | demo-instance1 | ACTIVE | -          | Running     | demo-net=172.16.0.3 |
+--------------------------------------+----------------+--------+------------+-------------+---------------------+

# Create floating IP on the external network
neutron floatingip-create ext-net

# Attach floating ip to instance
nova floating-ip-associate demo-instance1 10.104.15.2

# Verify floating-ip
nova list  
+--------------------------------------+----------------+--------+------------+-------------+----------------------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks                         |
+--------------------------------------+----------------+--------+------------+-------------+----------------------------------+
| 37c34043-cf4d-4f71-bf5a-773b2a06ae15 | demo-instance1 | ACTIVE | -          | Running     | demo-net=172.16.0.3, 10.104.15.2 |
+--------------------------------------+----------------+--------+------------+-------------+----------------------------------+

# Access instance using ssh key
chmod 600 demo-key.pem  
ssh -i demo-key.pem cirros@10.104.15.2  
$ 

So the instance is running and accessible over ssh. This concludes the basic setup of Openstack Kilo on CentOS 7.1 with a single network interface on each system, using the neutron ML2 plugin with openvswitch GRE tunnels.

4. Openvswitch state

After the instance was up and running on our brand new Openstack cloud, we ventured in to see how it looks from the OVS side:

On both the network and compute nodes we could see that a new OVS bridge named br-tun had been created automatically and connected to br-int (the integration bridge) via a patch port.

On the network node we could also see br-int connected to our external bridge br-ex via a patch port.

Besides these basic observations, we could also see several other OVS artifacts on both the network and compute nodes, as described in the official guide.
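
To reproduce these observations, the commands below (run as root on the network or compute node) show the bridges, the ports that stitch them together, and the GRE tunnel ports the OVS agent creates; the exact port names will vary between deployments:

# ovs-vsctl show
# ovs-vsctl list-ports br-int
# ovs-vsctl list-ports br-tun
# ovs-ofctl show br-tun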

For the record, the output of ovs-vsctl show from both the network and compute nodes is available at the following URLs:

https://gist.github.com/NehaRawat/94e999ed9b48dd8f16f0 (network node)
https://gist.github.com/NehaRawat/1055fdc17347fdf9e15e (compute node)

5. Next steps

Now that we have an Openstack Kilo cloud up and running successfully, we will next look at how to use Openstack Manila with NFS-Ganesha to integrate it with the GlusterFS setup we configured earlier.