[ovirt-users] Multiple Data Storage Domains

Sahina Bose sabose at redhat.com
Mon Nov 7 06:50:37 UTC 2016


On Mon, Nov 7, 2016 at 11:20 AM, Gary Pedretty <gary at ravnalaska.net> wrote:

> As a storage domain, this gluster volume will not work whether it is
> preallocated or thin provisioned.   It will work as a straight gluster
> volume mounted directly on any VM on the oVirt cluster, or on any physical
> machine, just not as a data storage domain in the Data Center.
>
> Are there restrictions to having more than one data storage domain whose
> gluster volumes are on the same hosts that are also part of the Data
> Center and Cluster?
>

There are no such restrictions.

However, your volume configuration seems suspect - "stripe 2 replica 2". Can
you provide the gluster volume info for the gluster volume backing your
second storage domain? The mount logs of the volume (under
/var/log/glusterfs/rhev-datacenter..<volname>.log) from the host where the
volume is being mounted will also help.
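For reference, a minimal way to collect both (assuming the second domain's
gluster volume is named "data2" - substitute your actual volume name):

    # on any of the gluster hosts, show the volume layout and options
    gluster volume info data2

    # on the host that mounts the storage domain, locate the matching mount log
    ls -l /var/log/glusterfs/ | grep data2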


>
>
> Gary
>
>
> ------------------------------------------------------------------------
> Gary Pedretty                                        gary at ravnalaska.net
> <gary at eraalaska.net>
> Systems Manager                                          www.flyravn.com
> Ravn Alaska                           /\                    907-450-7251
> 5245 Airport Industrial Road         /  \/\             907-450-7238 fax
> Fairbanks, Alaska  99709        /\  /    \ \ Second greatest commandment
> Serving All of Alaska          /  \/  /\  \ \/\   “Love your neighbor as
> Really loving the record green up date! Summmer!!   yourself” Matt 22:39
> ------------------------------------------------------------------------
>
> On Nov 6, 2016, at 6:28 AM, Maor Lipchuk <mlipchuk at redhat.com> wrote:
>
> Hi Gary,
>
> Do you have other disks on this storage domain?
> Have you tried to use other VMs with disks on this storage domain?
> Is this disk preallocated? If not, can you try to create a preallocated
> disk and retry?
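In the oVirt UI this corresponds to choosing the "Preallocated" allocation
policy when creating the disk. Purely as an illustration of the difference,
and assuming a raw image and a made-up 10G size on a plain file, the rough
qemu-img equivalents would be:

    # fully preallocated raw image (comparable to "Preallocated")
    qemu-img create -f raw -o preallocation=full test-disk.img 10G

    # sparse raw image (comparable to "Thin Provision")
    qemu-img create -f raw test-disk.img 10G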
>
> Regards,
> Maor
>
>
>
> On Sat, Nov 5, 2016 at 2:28 AM, Gary Pedretty <gary at ravnalaska.net> wrote:
>
>> I am having an issue in a Hosted Engine GlusterFS setup.   I have 4 hosts
>> in a cluster, with the Engine being hosted on the cluster.  This follows
>> the pattern shown in the docs for a glusterized setup, except that I have
>> 4 hosts.   I have engine, data, ISO and export storage domains, all as
>> glusterfs on a replica 3 volume on the first 3 hosts.  These gluster
>> volumes are running on an SSD hardware RAID 6, which is identical on all
>> the hosts.  All the hosts have a second RAID 6 array with physical hard
>> drives, and I have created a second data storage domain as a glusterfs
>> volume across all 4 hosts as a stripe 2 replica 2 and have added it to the
>> Data Center.  However, if I use this second storage domain as the boot
>> disk for a VM, or as a second disk for a VM that is already running, the
>> VM will become non-responsive as soon as it starts using this disk.   This
>> happens during the OS install if the VM is using this storage domain for
>> its boot disk, or if I try copying anything large to it when it is a
>> second disk for a VM that has its boot drive on the Master Data Storage
>> Domain.
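To make the layout concrete, a 4-host "stripe 2 replica 2" volume of this
kind would typically have been created with something along these lines
(host names and brick paths here are placeholders, not taken from the report
above):

    gluster volume create data2 stripe 2 replica 2 \
        host1:/bricks/hdd/data2 host2:/bricks/hdd/data2 \
        host3:/bricks/hdd/data2 host4:/bricks/hdd/data2

Striped (and striped-replicated) volumes behave quite differently from the
plain replica 3 volumes used for the other domains, which is presumably why
the configuration is being questioned above.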
>>
>> If I mount the gluster volume behind this second storage domain directly
>> on one of the hosts, or on any other machine on my local network, the
>> gluster volume works fine.  The problem only occurs when it is used as a
>> storage domain (second data domain) by VMs in the cluster.
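For comparison, the kind of direct mount described above would just be the
standard FUSE mount (volume name and server again assumed):

    mount -t glusterfs host1:/data2 /mnt/data2-test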
>>
>> Once the VM becomes non-responsive it cannot be stopped, removed or
>> destroyed without restarting the host machine that the VM is currently
>> running on.   The 4 hosts are connected via 10-gigabit Ethernet, so it
>> should not be a network issue.
>>
>>
>> Any ideas?
>>
>> Gary
>>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>