<div dir="ltr">Hi guys,<div><br></div><div>I&#39;m desperate to get to the bottom of this issue. Does anyone have any ideas please?</div><div><br></div><div>Thank you.</div><div><br></div><div>Regards.</div><div><br></div><div>Neil Wilson.</div><div><br><div class="gmail_quote">---------- Forwarded message ----------<br>From: <b class="gmail_sendername">Neil</b> <span dir="ltr">&lt;<a href="mailto:nwilson123@gmail.com">nwilson123@gmail.com</a>&gt;</span><br>Date: Mon, Sep 11, 2017 at 4:46 PM<br>Subject: AcquireHostIdFailure and code 661<br>To: &quot;<a href="mailto:users@ovirt.org">users@ovirt.org</a>&quot; &lt;<a href="mailto:users@ovirt.org">users@ovirt.org</a>&gt;<br><br><br><div dir="ltr"><div>Hi guys,</div><div><br></div><div>Please could someone shed some light on this issue I&#39;m facing.</div><div><br></div><div>I&#39;m trying to add a new NFS storage domain but when I try add it, I get a message saying &quot;Acquire hostID failed&quot; and it fails to add.</div><div><br></div><div>I can mount the NFS share manually and I can see that once the attaching has failed the NFS share is still mounted on the hosts, as per the following...</div><div><br></div><div>172.16.0.11:/raid1/data/_NAS_<wbr>NFS_Exports_/STOR2 on /rhev/data-center/mnt/172.16.<wbr>0.11:_raid1_data___NAS__NFS__<wbr>Exports___STOR2 type nfs (rw,soft,nosharecache,timeo=<wbr>600,retrans=6,nfsvers=3,addr=<wbr>172.16.0.11)<br></div><div><br></div><div>Also looking at the folders on the NFS share I can see that some data has been written, so it&#39;s not a permissions issue...</div><div><br></div><div><div>drwx---r-x+ 4 vdsm kvm 4096 Sep 11 16:08 16ab135b-0362-4d7e-bb11-<wbr>edf5b93535d5</div><div>-rwx---rwx. 1 vdsm kvm    0 Sep 11 16:08 __DIRECT_IO_TEST__</div></div><div><br></div><div>I have just upgraded from 3.3 to 3.5 as well as upgraded my 3 hosts in the hope it&#39;s a known bug, but I&#39;m still encountering the same problem.</div><div><br></div><div>It&#39;s not a hosted engine and you might see in the logs that I have a storage domain that is out of space which I&#39;m aware of, and I&#39;m hoping the system using this space will be decommissioned in 2 days....</div><div><br></div><div><div>Filesystem            Size  Used Avail Use% Mounted on</div><div>/dev/sda2             420G  2.2G  413G   1% /</div><div>tmpfs                  48G     0   48G   0% /dev/shm</div><div>172.16.0.10:/raid0/data/_NAS_<wbr>NFS_Exports_/RAID1_1TB</div><div>                      915G  915G  424M 100% /rhev/data-center/mnt/172.16.<wbr>0.10:_raid0_data___NAS__NFS__<wbr>Exports___RAID1__1TB</div><div>172.16.0.10:/raid0/data/_NAS_<wbr>NFS_Exports_/STORAGE1</div><div>                      5.5T  3.7T  1.8T  67% /rhev/data-center/mnt/172.16.<wbr>0.10:_raid0_data___NAS__NFS__<wbr>Exports___STORAGE1</div><div>172.16.0.20:/data/ov-export</div><div>                      3.6T  2.3T  1.3T  65% /rhev/data-center/mnt/172.16.<wbr>0.20:_data_ov-export</div><div>172.16.0.11:/raid1/data/_NAS_<wbr>NFS_Exports_/4TB</div><div>                      3.6T  2.0T  1.6T  56% /rhev/data-center/mnt/172.16.<wbr>0.11:_raid1_data___NAS__NFS__<wbr>Exports___4TB</div><div>172.16.0.253:/var/lib/exports/<wbr>iso</div><div>                      193G   42G  141G  23% /rhev/data-center/mnt/172.16.<wbr>0.253:_var_lib_exports_iso</div><div>172.16.0.11:/raid1/data/_NAS_<wbr>NFS_Exports_/STOR2</div><div>                      5.5T  3.7G  5.5T   1% /rhev/data-center/mnt/172.16.<wbr>0.11:_raid1_data___NAS__NFS__<wbr>Exports___STOR2</div></div><div><br></div><div>The 

I have just upgraded from 3.3 to 3.5, and upgraded my 3 hosts as well, in the hope that this was a known bug, but I'm still hitting the same problem.

It's not a hosted engine. You might also see in the logs that I have a storage domain that is out of space; I'm aware of this, and the system using that space should be decommissioned in two days.

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             420G  2.2G  413G   1% /
tmpfs                  48G     0   48G   0% /dev/shm
172.16.0.10:/raid0/data/_NAS_NFS_Exports_/RAID1_1TB
                      915G  915G  424M 100% /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___RAID1__1TB
172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1
                      5.5T  3.7T  1.8T  67% /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___STORAGE1
172.16.0.20:/data/ov-export
                      3.6T  2.3T  1.3T  65% /rhev/data-center/mnt/172.16.0.20:_data_ov-export
172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB
                      3.6T  2.0T  1.6T  56% /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___4TB
172.16.0.253:/var/lib/exports/iso
                      193G   42G  141G  23% /rhev/data-center/mnt/172.16.0.253:_var_lib_exports_iso
172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2
                      5.5T  3.7G  5.5T   1% /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2

"STOR2" above is the share left mounted after attempting to add the new NFS storage domain.
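
For completeness, the manual mount test mentioned above is roughly the following (/mnt/stor2test is just a scratch directory I use for the test, and the options mirror what the engine uses):

mkdir -p /mnt/stor2test
mount -t nfs -o vers=3,soft 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2 /mnt/stor2test
touch /mnt/stor2test/testfile && rm /mnt/stor2test/testfile    # quick write check
umount /mnt/stor2test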

Engine details:

Fedora release 19 (Schrödinger’s Cat)
ovirt-engine-dbscripts-3.5.0.1-1.fc19.noarch
ovirt-release34-1.0.3-1.noarch
ovirt-image-uploader-3.5.0-1.fc19.noarch
ovirt-engine-websocket-proxy-3.5.0.1-1.fc19.noarch
ovirt-log-collector-3.5.0-1.fc19.noarch
ovirt-release35-006-1.noarch
ovirt-engine-setup-3.5.0.1-1.fc19.noarch
ovirt-release33-1.0.0-0.1.master.noarch
ovirt-engine-tools-3.5.0.1-1.fc19.noarch
ovirt-engine-lib-3.5.0.1-1.fc19.noarch
ovirt-engine-sdk-python-3.5.0.8-1.fc19.noarch
ovirt-host-deploy-java-1.3.0-1.fc19.noarch
ovirt-engine-backend-3.5.0.1-1.fc19.noarch
sos-3.1-1.1.fc19.ovirt.noarch
ovirt-engine-setup-base-3.5.0.1-1.fc19.noarch
ovirt-engine-extensions-api-impl-3.5.0.1-1.fc19.noarch
ovirt-engine-webadmin-portal-3.5.0.1-1.fc19.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.5.0.1-1.fc19.noarch
ovirt-iso-uploader-3.5.0-1.fc19.noarch
ovirt-host-deploy-1.3.0-1.fc19.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.0.1-1.fc19.noarch
ovirt-engine-3.5.0.1-1.fc19.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.5.0.1-1.fc19.noarch
ovirt-engine-userportal-3.5.0.1-1.fc19.noarch
ovirt-engine-cli-3.5.0.5-1.fc19.noarch
ovirt-engine-restapi-3.5.0.1-1.fc19.noarch
libvirt-daemon-driver-nwfilter-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-qemu-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-libxl-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-secret-1.1.3.2-1.fc19.x86_64
libvirt-daemon-config-network-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-storage-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-network-1.1.3.2-1.fc19.x86_64
libvirt-1.1.3.2-1.fc19.x86_64
libvirt-daemon-kvm-1.1.3.2-1.fc19.x86_64
libvirt-client-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-nodedev-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-uml-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-xen-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-interface-1.1.3.2-1.fc19.x86_64
libvirt-daemon-config-nwfilter-1.1.3.2-1.fc19.x86_64
libvirt-daemon-1.1.3.2-1.fc19.x86_64
libvirt-daemon-qemu-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-vbox-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-lxc-1.1.3.2-1.fc19.x86_64
qemu-system-lm32-1.4.2-15.fc19.x86_64
qemu-system-s390x-1.4.2-15.fc19.x86_64
qemu-system-ppc-1.4.2-15.fc19.x86_64
qemu-user-1.4.2-15.fc19.x86_64
qemu-system-x86-1.4.2-15.fc19.x86_64
qemu-system-unicore32-1.4.2-15.fc19.x86_64
qemu-system-mips-1.4.2-15.fc19.x86_64
qemu-system-or32-1.4.2-15.fc19.x86_64
qemu-system-m68k-1.4.2-15.fc19.x86_64
qemu-img-1.4.2-15.fc19.x86_64
qemu-kvm-1.4.2-15.fc19.x86_64
qemu-system-xtensa-1.4.2-15.fc19.x86_64
qemu-1.4.2-15.fc19.x86_64
qemu-system-microblaze-1.4.2-15.fc19.x86_64
qemu-system-alpha-1.4.2-15.fc19.x86_64
qemu-system-arm-1.4.2-15.fc19.x86_64
qemu-common-1.4.2-15.fc19.x86_64
ipxe-roms-qemu-20130517-2.gitc4bce43.fc19.noarch
qemu-system-sh4-1.4.2-15.fc19.x86_64
qemu-system-cris-1.4.2-15.fc19.x86_64
qemu-system-sparc-1.4.2-15.fc19.x86_64

Host details:

CentOS release 6.9 (Final)
vdsm-yajsonrpc-4.16.30-0.el6.noarch
vdsm-python-4.16.30-0.el6.noarch
vdsm-4.16.30-0.el6.x86_64
vdsm-cli-4.16.30-0.el6.noarch
vdsm-jsonrpc-4.16.30-0.el6.noarch
vdsm-python-zombiereaper-4.16.30-0.el6.noarch
vdsm-xmlrpc-4.16.30-0.el6.noarch
srvadmin-itunnelprovider-7.4.0-4.14.1.el6.x86_64
ovirt-release34-1.0.3-1.noarch
ovirt-release33-1.0.0-0.1.master.noarch
ovirt-release35-006-1.noarch
qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
qemu-img-rhev-0.12.1.2-2.479.el6_7.2.x86_64
gpxe-roms-qemu-0.9.7-6.16.el6.noarch
libvirt-lock-sanlock-0.10.2-62.el6.x86_64
libvirt-client-0.10.2-62.el6.x86_64
libvirt-0.10.2-62.el6.x86_64
libvirt-python-0.10.2-62.el6.x86_64

I have tried renaming the NFS share, as well as unmounting it manually with the -l option (a plain umount says the mount is busy when unmounting it from the hosts after deleting the domain from my DC), and I've restarted all hosts after upgrading too.
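
For reference, the cleanup I do on each host after a failed attempt is roughly this (the mount point is the one from the mount output above; the -l is needed because a plain umount reports the mount as busy):

mount | grep STOR2
umount -l /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2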

Google turns up lots of similar problems, but none of the suggested fixes have worked for me. I have also recently re-enabled SELinux, since I previously had it disabled on both the hosts and the engine.

Any assistance is appreciated.

Thank you.

Regards.

Neil Wilson.