So, it's up and running now.

The next step would be to configure oVirt to use the new gluster volume.

And for the backups to be moved from Linode to this area instead.

Please note I only allocated 1 TB to glusterlv and 100 GB to backuplv. This leaves ~700 GB untouched, to be allocated where we need it.

"It's better to grow then to show" ... I just thought of that one, I know its bad. Sorry :-D

---

## The commands below are run on each machine, unless otherwise stated

# find wwid of sdb1
multipath -l

# add wwid of sdb1 to multipath.conf to blacklist it
cat >> /etc/multipath.conf <<EOF
blacklist {
      wwid 36848f690e6d9480019ac01600496584f
}
EOF

# reload multipath
multipath -F
multipath -v2
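
# Optional check (not part of the original steps): the blacklisted wwid
# should no longer show up as a multipath device
multipath -ll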

# Create a volume group so that we can manage storage more efficiently
vgcreate datavg /dev/sdb1

# One for gluster, one for backup
lvcreate -L 1T -n glusterlv datavg
lvcreate -L 100G -n backuplv datavg
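
# Optional check: both LVs exist and roughly 700 GB should still be unallocated in the VG
vgs datavg
lvs datavg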

# Create filesystems on both
mkfs.ext4 /dev/disk/by-id/dm-name-datavg-backuplv
mkfs.ext4 /dev/disk/by-id/dm-name-datavg-glusterlv

# Create directories to mount at
mkdir /srv/gluster /srv/backup

# Find UUID of new filesystems
blkid
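
# (blkid can also be pointed at just the two new LVs if the full list is noisy)
blkid /dev/mapper/datavg-glusterlv /dev/mapper/datavg-backuplv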

# Fix so that they mount on boot.
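# (the UUIDs below come from blkid and will differ on each machine)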
cat >> /etc/fstab <<EOF
UUID="69b0c4e5-ded7-4fc9-aa6f-03f6cc4f60c2" /srv/gluster ext4 defaults 1 2
UUID="4bae10e7-0d8e-477c-aa08-15f885bc52bd" /srv/backup ext4 defaults 1 2
EOF

# Mount it!
mount -a
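
# Optional check: both filesystems are mounted with the expected sizes
df -h /srv/gluster /srv/backup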

# Start gluster
service glusterd start

# Enable on boot
chkconfig glusterd on

# Add other node
gluster peer probe rackspace02.ovirt.org

# Verify
gluster peer status


## Only execute on one node
gluster volume create vmstorage replica 2 transport tcp rackspace01.ovirt.org:/srv/gluster rackspace02.ovirt.org:/srv/gluster
gluster volume start vmstorage
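
# Optional check (not in the original steps): confirm the volume is started and lists both bricks
gluster volume info vmstorage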


--
/Alexander Rydekull