Installing GlusterFS on RHEL 6.4 for OpenStack Havana (RDO)
The OpenCompute systems are the ideal hardware platform for distributed filesystems. Period. Why? Cheap servers with 10Gb NICs and a boatload of locally attached cheap storage!
In preparation for deploying RedHat RDO on RHEL, the distributed filesystem I chose was GlusterFS. It’s simple and easy to deploy, and only takes a couple of minutes to have it up and running.
The first thing I did was configure my local 10Gb interfaces for heartbeat traffic. To do that, I created a sub-interface on VLAN 401 for each node, using 10.124.1.0/24 addressing.
/etc/sysconfig/network-scripts/ifcfg-bond0.401 on node 1
BOOTPROTO=static
IPADDR=10.124.1.1
NETMASK=255.255.255.0
ONBOOT=yes
VLAN=yes
MTU=9000
/etc/sysconfig/network-scripts/ifcfg-bond0.401 on node 2
BOOTPROTO=static
IPADDR=10.124.1.2
NETMASK=255.255.255.0
ONBOOT=yes
VLAN=yes
MTU=9000
/etc/sysconfig/network-scripts/ifcfg-bond0.401 on node 3
BOOTPROTO=static
IPADDR=10.124.1.3
NETMASK=255.255.255.0
ONBOOT=yes
VLAN=yes
MTU=9000
/etc/sysconfig/network-scripts/ifcfg-bond0.401 on node 4
BOOTPROTO=static
IPADDR=10.124.1.4
NETMASK=255.255.255.0
ONBOOT=yes
VLAN=yes
MTU=9000
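With the interface files in place, restart networking and sanity-check that jumbo frames actually pass between nodes. A 8972-byte payload plus 28 bytes of IP/ICMP headers is exactly 9000; the target here is node 2 as an example, adjust per node:
service network restart
ping -M do -s 8972 10.124.1.2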
The next step was to populate the /etc/hosts file on all of the nodes; this keeps the cluster working even if DNS resolution fails.
10.124.1.1 g1.local.net
10.124.1.2 g2.local.net
10.124.1.3 g3.local.net
10.124.1.4 g4.local.net
Now it's time to actually install the Gluster filesystem. I used the RPMs available here:
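If you'd rather pull from a yum repo than install loose RPMs, on RHEL 6 it looks something like this (assuming you've configured the GlusterFS repo; these are the 3.4-era EL6 package names):
yum install -y glusterfs glusterfs-server glusterfs-fuse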
To use the RAID 5 volume I created on the OpenCompute OpenVault (Knox Unit), I first created an aligned partition on /dev/sdb using the command:
fdisk -H 8 -S 16 /dev/sdb
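fdisk is interactive from there; the -H and -S flags just override the reported geometry so the partition lands on a stripe-friendly boundary. A session carving a single primary partition out of the whole disk goes something like:
n    # new partition
p    # primary
1    # partition number 1
     # accept the default first and last cylinders (Enter twice)
w    # write the partition table and exit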
Then I formatted the drive as XFS; the 512-byte inode size leaves room for the extended attributes GlusterFS stores on every file:
mkfs.xfs -i size=512 /dev/sdb1
Create the mount point
mkdir -p /export/sdb1
Time to add it to /etc/fstab
echo "/dev/sdb1 /export/sdb1 xfs defaults 0 0" >> /etc/fstab
Mount it!
mount -a
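A quick df confirms the brick filesystem is there:
df -h /export/sdb1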
Set glusterd to autostart, then start it:
chkconfig glusterd on
service glusterd start
Within OpenStack, I want volumes to house my Nova, Glance and Cinder data. So I’ll need to get those directories created.
mkdir /mnt/nova
mkdir /mnt/cinder
mkdir /mnt/glance
Now, from one node, probe all of the cluster peers (probing the local host is a harmless no-op):
gluster peer probe g1.local.net
gluster peer probe g2.local.net
gluster peer probe g3.local.net
gluster peer probe g4.local.net
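Verify that every node shows up and is connected:
gluster peer status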
Now let's create the GlusterFS volume for Nova. With replica 2, the bricks pair up in the order they're listed, so g1/g2 and g3/g4 mirror each other:
gluster volume create nova replica 2 g1.local.net:/export/sdb1/nova g2.local.net:/export/sdb1/nova g3.local.net:/export/sdb1/nova g4.local.net:/export/sdb1/nova
Now let's create the GlusterFS volume for Cinder:
gluster volume create cinder replica 2 g1.local.net:/export/sdb1/cinder g2.local.net:/export/sdb1/cinder g3.local.net:/export/sdb1/cinder g4.local.net:/export/sdb1/cinder
Now let's create the GlusterFS volume for Glance:
gluster volume create glance replica 2 g1.local.net:/export/sdb1/glance g2.local.net:/export/sdb1/glance g3.local.net:/export/sdb1/glance g4.local.net:/export/sdb1/glance
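Each create should report success; before starting anything, it's worth double-checking the brick layout:
gluster volume info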
Now start each volume:
gluster volume start nova
gluster volume start glance
gluster volume start cinder
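Confirm the brick processes actually came up on every node:
gluster volume status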
Now add your client subnets to the auth.allow list on each volume:
gluster volume set cinder auth.allow 172.16.*,10.124.1.*,127.0.0.*
gluster volume set nova auth.allow 172.16.*,10.124.1.*,127.0.0.*
gluster volume set glance auth.allow 172.16.*,10.124.1.*,127.0.0.*
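You can check that the option stuck by looking at the volume info:
gluster volume info nova | grep auth.allow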
Time to add the mount points to /etc/fstab. Mounting from 127.0.0.1 works because every node is also a Gluster server; the client only uses it to fetch the volume layout, then connects to the bricks directly.
echo "127.0.0.1:/cinder /mnt/cinder glusterfs defaults,_netdev 0 0" >> /etc/fstab
echo "127.0.0.1:/nova /mnt/nova glusterfs defaults,_netdev 0 0" >> /etc/fstab
echo "127.0.0.1:/glance /mnt/glance glusterfs defaults,_netdev 0 0" >> /etc/fstab
Let's get them mounted!
mount -a
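A quick df should now show all three Gluster mounts:
df -h /mnt/nova /mnt/cinder /mnt/glance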