
On Thu, Feb 25, 2016 at 10:03 PM, Dariusz Kryszak <dariusz.kryszak@gmail.com> wrote:
On Tue, 2016-02-23 at 17:13 +0100, Simone Tiraboschi wrote:
On Tue, Feb 23, 2016 at 4:19 PM, Dariusz Kryszak <dariusz.kryszak@gmail.com> wrote:
Hi folks, I have a question about the master domain when using a hosted-engine deployment. At the beginning I made a deployment on a NUC (a small home installation) with the hosted engine on an NFS share from the NUC host. I configured a Gluster filesystem on the same machine and used it for the master domain and the ISO domain. Let's call it all-in-one.

After a reboot something strange happened. The log says the master domain is not available and that hosted_storage has to become the master. In my opinion this is not OK. I understand the behavior: because the master domain was not available, the master role was migrated to another shareable domain (in this case hosted_storage, which is NFS). Do you think this should be blocked in this particular case, i.e. when the only available domain is hosted_storage? Right now there is no way out of this situation, because the hosted engine resides on hosted_storage and I can't migrate it.
This can happen only after the hosted-engine storage domain has been imported by the engine, but to do that you need an additional storage domain, which then becomes the master storage domain. In the past we had a bug that let you remove the last regular storage domain; in that case the hosted-engine domain would become the master storage domain, and as you pointed out that was an issue: https://bugzilla.redhat.com/show_bug.cgi?id=1298697
Now it should be fixed. If it just happened again simply because your Gluster regular storage domain wasn't available, then it is not really fixed. Adding Roy here. Dariusz, which release are you using?
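In the meantime, one way to check which domain is currently acting as the master is to query VDSM directly on the host (a rough sketch; the UUIDs are placeholders to be filled in from the previous command's output):

# list the connected storage pools (data centers) known to this host
vdsClient -s 0 getConnectedStoragePoolsList
# the pool info includes master_uuid / master_ver, i.e. the current master domain
vdsClient -s 0 getStoragePoolInfo <pool-uuid>
# inspect the domain that master_uuid points at
vdsClient -s 0 getStorageDomainInfo <master-domain-uuid>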
Regarding the oVirt version:

1. oVirt manager

ovirt-engine-setup - oVirt Engine Version: 3.6.2.6-1.el7.centos
The patch that should address that issue is here: https://gerrit.ovirt.org/#/c/53208/ but you'll find it only in 3.6.3; it wasn't available when 3.6.2.6 was built. Recovering from the condition you reached is possible, but it requires a few manual actions. If your instance was almost empty, redeploying is also a (probably easier) option.
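For reference, once 3.6.3 is released, picking up the fix should look roughly like this (a sketch, assuming the oVirt 3.6 repository is already enabled on the engine VM):

# pull in the updated setup packages, then re-run setup to upgrade
yum update ovirt-engine-setup
engine-setup
# confirm the installed engine version afterwards
rpm -q ovirt-engine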
# uname -a
Linux ovirtm.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
2. hypervisor
# uname -a
Linux ovirth1.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
# rpm -qa | grep 'hosted\|vdsm'
vdsm-cli-4.17.18-1.el7.noarch
ovirt-hosted-engine-ha-1.3.3.7-1.el7.centos.noarch
vdsm-xmlrpc-4.17.18-1.el7.noarch
vdsm-jsonrpc-4.17.18-1.el7.noarch
vdsm-4.17.18-1.el7.noarch
vdsm-python-4.17.18-1.el7.noarch
vdsm-yajsonrpc-4.17.18-1.el7.noarch
vdsm-hook-vmfex-dev-4.17.18-1.el7.noarch
ovirt-hosted-engine-setup-1.3.2.3-1.el7.centos.noarch
vdsm-infra-4.17.18-1.el7.noarch
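(For completeness, the same inventory can be collected from both machines in one pass; a sketch assuming root SSH access to each host:)

for h in ovirtm.stylenet ovirth1.stylenet; do
    echo "== $h =="
    ssh root@"$h" 'uname -r; cat /etc/redhat-release; rpm -qa | grep -E "hosted|vdsm"'
done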