I hope I can find help with this problem.
We're using oVirt 3.3.0-4-el6 on CentOS 6. We have three physical
hosts: one acts as the ovirt-engine (virt01), another acts as a host
with small/fast storage (virt02), and the third acts as a host with
11+ TB of storage in a mixed fast/large setup (virt03). virt02 has one
storage domain, named 'virt02_data'; virt03 has two storage domains,
named 'virt03_data1' and 'virt03_data2'.
I can create disks, take snapshots, and generally operate as normal on
'virt02_data'. When I try to do the same on 'virt03_data1' and
'virt03_data2', the process fails. VMs are currently running on virt03
with their storage on 'virt03_data1', and these VMs can read/write data
to their virtual disks without issue. I downloaded and ran the
nfscheck.py script described on the oVirt wiki to make sure this isn't
an NFS problem.
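[Editorial note: the script's core check can be approximated by hand. A minimal sketch, using a temporary directory as a stand-in for the real mount point; on a live host you would run the write test as the vdsm user (e.g. with `sudo -u vdsm`) against the actual export path, which is an assumption here:]

```shell
# Stand-in for the real NFS mount point, e.g. something like
# /rhev/data-center/mnt/virt03:_export_data1 on an actual host.
EXPORT=$(mktemp -d)

# Core of the check: can we create, read back, and delete a file?
echo test > "$EXPORT/__write_test"
cat "$EXPORT/__write_test"          # should print: test
rm -f "$EXPORT/__write_test"
rmdir "$EXPORT"
```

If the write or delete fails on the real export while succeeding locally, the problem is export permissions (vdsm runs as uid/gid 36) rather than oVirt itself.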
Looking through the logs to the best of my ability (debug info abounds),
I'm at a loss to determine which file, or which section of a file, is
relevant to the problem. Am I correct that oVirt/vdsm/libvirt uses a
UUID scheme to coordinate disk names, rather than the friendly names I
see in the UI?
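[Editorial note: yes, VDSM addresses storage by UUID. On a file-based (NFS) domain the on-host layout looks roughly like the sketch below; names in angle brackets are placeholders, details vary by version, and the friendly name shown in the UI is kept on the engine side rather than in these paths:]

```
/rhev/data-center/mnt/<server:_export_path>/
└── <storage-domain-UUID>/
    ├── dom_md/                  # domain metadata and leases
    └── images/
        └── <image-UUID>/        # one directory per virtual disk
            ├── <volume-UUID>        # volume data (raw or qcow2)
            ├── <volume-UUID>.meta   # volume metadata
            └── <volume-UUID>.lease
```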
I would post my logs on a pastebin, but they are entirely too large to
host anywhere but a site such as Mediafire.
Any ideas where to start on this issue?
I have an oVirt 3.2 install and I use iSCSI targets for the disks. The
problem is that if I use oVirt to access the iSCSI targets, it fails
because the right configuration doesn't get deployed to the hosts. I
therefore had to do this manually and add the "private" header to the
multipath.conf file so oVirt doesn't overwrite its contents. I also use
aliases for the LUNs, and those get wiped out as well.
Is there a file I can set on the engine so that my configuration is
centralized again and I don't have to propagate every change manually
to every host?
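[Editorial note: for reference, a minimal sketch of a hand-managed multipath.conf carrying the private marker. The exact marker string VDSM looks for is version-dependent (assumed here to be `# RHEV PRIVATE` on the first line; check the vdsm multipath module on your hosts to confirm), and the WWID and alias values below are placeholders:]

```
# RHEV PRIVATE
defaults {
    polling_interval    5
    user_friendly_names no
}

multipaths {
    multipath {
        wwid   3600a0b800026d2220000a27e4f8a1c5b   # example WWID only
        alias  lab_data1                            # site-local alias
    }
}
```

With the marker present, VDSM should skip rewriting the file; without it, host deployment regenerates multipath.conf and the aliases are lost.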
A little over a year ago Fedora introduced an alternative bonding
driver for network devices, libteam: https://fedorahosted.org/libteam/
Can we expect libteam support in oVirt at the same level as for the
"bond" driver? Currently, libteam-bonded interfaces are not visible in
the "Setup Host Networks" dialog.
Tomasz Torcz
xmpp: zdzichubg(a)chrome.pl
I run a fully subscribed RHEV 3.2 cluster, meaning RHEV-M plus RHEV-H
nodes. However, as most or all of you might understand, it would be
desirable to use the free open-source version for an extra pair of host
systems, which would serve as a lab/testing environment before live
deployment of our stuff.
So, does anyone have experience with the compatibility of ovirt-node
images and RHEV-M (3.2 in my case)? Does this simply work?
Sent from the Delta quadrant using Borg technology!