Gluster and backup storage on rackspace01 / rackspace02

So, it's up and running now.
Next step would be to configure oVirt to use the new gluster volume,
and for backups to be moved from Linode to this area instead.

Please note I only allocated 1 TB to glusterlv and 100 GB to backuplv.
This leaves ~700 GB untouched for allocation where we need it.

"It's better to grow than to show" ... I just thought of that one, I know
it's bad. Sorry :-D

---

## The commands below are run on each machine, unless otherwise stated

# Find the wwid of sdb1
multipath -l

# Add the wwid of sdb1 to multipath.conf to blacklist it
cat >> /etc/multipath.conf <<EOF
blacklist {
    wwid 36848f690e6d9480019ac01600496584f
}
EOF

# Reload multipath
multipath -F
multipath -v2

# Create a volume group so that we can administer storage more efficiently
vgcreate datavg /dev/sdb1

# One LV for gluster, one for backup
lvcreate -L 1T -n glusterlv datavg
lvcreate -L 100G -n backuplv datavg

# Create filesystems on both
mkfs.ext4 /dev/disk/by-id/dm-name-datavg-backuplv
mkfs.ext4 /dev/disk/by-id/dm-name-datavg-glusterlv

# Create directories to mount at
mkdir /srv/gluster /srv/backup

# Find the UUIDs of the new filesystems
blkid

# Make sure they mount on boot
cat >> /etc/fstab <<EOF
UUID="69b0c4e5-ded7-4fc9-aa6f-03f6cc4f60c2" /srv/gluster ext4 defaults 1 2
UUID="4bae10e7-0d8e-477c-aa08-15f885bc52bd" /srv/backup ext4 defaults 1 2
EOF

# Mount it!
mount -a

# Start gluster
service glusterd start

# Enable on boot
chkconfig glusterd on

# Add the other node
gluster peer probe rackspace02.ovirt.org

# Verify
gluster peer status

## Only execute on one node
gluster volume create vmstorage replica 2 transport tcp rackspace01.ovirt.org:/srv/gluster rackspace02.ovirt.org:/srv/gluster
gluster volume start vmstorage

--
/Alexander Rydekull
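To sanity-check the result, the volume can be inspected on either node and
mounted from a client (for example, an oVirt host) over the native FUSE
protocol. A minimal sketch; /mnt/vmstorage is just an example mount point,
not part of the setup above:

# Verify replica count, transport and brick list
gluster volume info vmstorage

# On a client with the glusterfs-fuse package installed
# (/mnt/vmstorage is an example mount point)
mkdir -p /mnt/vmstorage
mount -t glusterfs rackspace01.ovirt.org:/vmstorage /mnt/vmstorage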

On Mon, Sep 16, 2013 at 03:08:17PM +0200, Alexander Rydekull wrote:
> So, it's up and running now.
> Next step would be to configure oVirt to use the new gluster volume,
> and for backups to be moved from Linode to this area instead.
> Please note I only allocated 1 TB to glusterlv and 100 GB to backuplv.
> This leaves ~700 GB untouched for allocation where we need it.
> "It's better to grow than to show" ... I just thought of that one,
> I know it's bad. Sorry :-D
Thanks for setting this up! +1 on leaving room to grow.

A couple of questions:

* To use this for VMs, do we need to migrate from 2 all-in-one clusters to one "normal" cluster?
* Can we still use the local storage, or do we need to glusterize that too?
* And how is it security-wise? Is there any firewall or other mechanism to prevent random machines from cloning? (See the sketch below.)
* For backups, do we connect straight to the hypervisors? I'd prefer a light (HA) backup VM that we can connect to.
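For the security question above, a possible starting point would be
gluster's own access control plus iptables rules on both storage nodes
that only accept gluster traffic from the peer and the hypervisors. A
rough sketch; the 192.0.2.x addresses are placeholders for the real
peer/hypervisor IPs, and the port ranges assume glusterd on 24007 with
bricks on 24009+ (GlusterFS before 3.4) or 49152+ (3.4 and later):

# Gluster's own access control: only allow listed addresses to mount
# (placeholder addresses; substitute the real hypervisor IPs)
gluster volume set vmstorage auth.allow 192.0.2.10,192.0.2.11

# Allow gluster management and brick traffic from trusted hosts only
iptables -A INPUT -p tcp -s 192.0.2.10 --dport 24007:24010 -j ACCEPT
iptables -A INPUT -p tcp -s 192.0.2.11 --dport 24007:24010 -j ACCEPT
iptables -A INPUT -p tcp -s 192.0.2.10 --dport 49152:49160 -j ACCEPT
iptables -A INPUT -p tcp -s 192.0.2.11 --dport 49152:49160 -j ACCEPT

# Drop the gluster ports for everyone else
iptables -A INPUT -p tcp --dport 24007:24010 -j DROP
iptables -A INPUT -p tcp --dport 49152:49160 -j DROP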

----- Original Message -----
From: "Ewoud Kohl van Wijngaarden" <ewoud+ovirt@kohlvanwijngaarden.nl> To: infra@ovirt.org Sent: Tuesday, September 17, 2013 12:42:23 AM Subject: Re: Gluster and backup storage on rackspace01 / rackspace02
On Mon, Sep 16, 2013 at 03:08:17PM +0200, Alexander Rydekull wrote:
> So, it's up and running now.
> Next step would be to configure oVirt to use the new gluster volume,
> and for backups to be moved from Linode to this area instead.
> Please note I only allocated 1 TB to glusterlv and 100 GB to backuplv.
> This leaves ~700 GB untouched for allocation where we need it.
> "It's better to grow than to show" ... I just thought of that one,
> I know it's bad. Sorry :-D
Thanks for setting this up! +1 on leaving room to grow.

A couple of questions:

* To use this for VMs, do we need to migrate from 2 all-in-one clusters to one "normal" cluster?
* Can we still use the local storage, or do we need to glusterize that too?
* And how is it security-wise? Is there any firewall or other mechanism to prevent random machines from cloning?
* For backups, do we connect straight to the hypervisors? I'd prefer a light (HA) backup VM that we can connect to.
Good news! We have a 3rd server on Rackspace, so we can install it with F19 and gluster storage. Let's plan this migration at the next oVirt meeting.
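If the third box also becomes a gluster node, the existing volume could in
principle be grown to three-way replication. A sketch under assumptions:
the hostname rackspace03.ovirt.org and its brick path are hypothetical,
and changing the replica count via add-brick needs GlusterFS 3.3 or newer:

# On one of the existing nodes, after preparing /srv/gluster on the new box
gluster peer probe rackspace03.ovirt.org
gluster volume add-brick vmstorage replica 3 rackspace03.ovirt.org:/srv/gluster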