2014-05-09 15:55 GMT+02:00 Nicolas Ecarnot <nicolas(a)ecarnot.net>:
Hi,
On our second oVirt setup, on 3.4.0-1.el6 (which was running fine), I did a
yum upgrade on the engine (...sigh...).
Then I rebooted the engine.
This machine is hosting the NFS export domain.
Though the VMs are still running, the storage domain is in an invalid status.
You'll find the engine.log below.
At first sight, I thought it was the same issue as:
http://lists.ovirt.org/pipermail/users/2014-March/022161.html
because it looked very similar.
But the NFS export domain connection seemed OK (tested).
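Something along these lines is what I mean by "tested" (host and export path
below are just examples):

    # from one of the hosts: check the export is visible and mountable
    showmount -e engine.example.com
    mount -t nfs engine.example.com:/exports/ovirt-export /mnt/test
    ls -l /mnt/test    # a populated export domain holds a UUID-named directory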
I tried every trick I could think of: restarting, checking everything...
Our cluster stayed in a broken state.
On closer inspection, I saw that when the engine rebooted, the NFS export
domain was not mounted correctly (I had written a static /dev/sd* device in
fstab, and the iSCSI manager changed the letter; next time I'll use LVM or
a filesystem label).
So what NFS was serving was an empty directory: a black hole.
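With hindsight, a simple check on the engine would have shown this at once
(mount point is an example):

    # is any block device actually mounted on the exported path?
    grep /exports/ovirt-export /proc/mounts
    df -h /exports/ovirt-export
    ls -A /exports/ovirt-export    # no output = the backing disk never mounted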
I only just realized all of the above, having spent my afternoon in a cold sweat.
Correcting the NFS mount and restarting the engine did the trick.
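For anyone else bitten by this, here is a minimal sketch of the fstab fix I
have in mind (label, device name and mount point are examples):

    # label the filesystem once; the /dev/sdX name may change between boots
    #   e2label /dev/sdb1 ovirt-export
    # /etc/fstab: mount by label so iSCSI device reordering cannot break it
    LABEL=ovirt-export  /exports/ovirt-export  ext4  defaults,_netdev  0 0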
What still disturbs me is that the unavailability of the NFS export domain
should NOT be a reason for the MASTER storage domain to break!
Following the URL above and the BZ opened by the user
(https://bugzilla.redhat.com/show_bug.cgi?id=1072900), I see this has been
corrected in 3.4.1. But what about an NFS export domain that is perfectly
connected, yet empty?
Hi,
sorry for jumping into an old thread late; I'm the one who reported that bug.
I have two things to say:
- Taking advantage of a rare opportunity to turn off my production cluster,
I put it back into that critical situation, and I can confirm that with
oVirt 3.4.1 the problem has been solved.

> PS : I see no 3.4.1 update on CentOS repo.

- Me too, until I installed ovirt-release34.rpm (see
http://www.ovirt.org/OVirt_3.4.1_release_notes ). All went smoothly
after that.
Best Regards,
Giorgio.