<div dir="ltr"><div>Hi guys,</div><div><br></div><div>Could someone please shed some light on an issue I&#39;m facing?</div><div><br></div><div>I&#39;m trying to add a new NFS storage domain, but when I try to add it I get a message saying &quot;Acquire hostID failed&quot; and the operation fails.</div><div><br></div><div>I can mount the NFS share manually, and I can see that once the attach has failed the NFS share is still mounted on the hosts, as per the following...</div><div><br></div><div>172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2 on /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2 type nfs (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=172.16.0.11)<br></div><div><br></div><div>Also, looking at the folders on the NFS share I can see that some data has been written, so it doesn&#39;t appear to be a permissions issue...</div><div><br></div><div><div>drwx---r-x+ 4 vdsm kvm 4096 Sep 11 16:08 16ab135b-0362-4d7e-bb11-edf5b93535d5</div><div>-rwx---rwx. 1 vdsm kvm    0 Sep 11 16:08 __DIRECT_IO_TEST__</div></div><div><br></div><div>I have just upgraded the engine from 3.3 to 3.5, and upgraded my 3 hosts as well, in the hope that this was a known bug, but I&#39;m still encountering the same problem.</div><div><br></div><div>This is not a hosted engine. You might see in the logs that I have a storage domain that is out of space; I&#39;m aware of this, and the system using that space should be decommissioned in 2 days.</div><div><br></div><div><div>Filesystem            Size  Used Avail Use% Mounted on</div><div>/dev/sda2             420G  2.2G  413G   1% /</div><div>tmpfs                  48G     0   48G   0% /dev/shm</div><div>172.16.0.10:/raid0/data/_NAS_NFS_Exports_/RAID1_1TB</div><div>                      915G  915G  424M 100% /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___RAID1__1TB</div><div>172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1</div><div>                      5.5T  3.7T  1.8T  67% 
/rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___STORAGE1</div><div>172.16.0.20:/data/ov-export</div><div>                      3.6T  2.3T  1.3T  65% /rhev/data-center/mnt/172.16.0.20:_data_ov-export</div><div>172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB</div><div>                      3.6T  2.0T  1.6T  56% /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___4TB</div><div>172.16.0.253:/var/lib/exports/iso</div><div>                      193G   42G  141G  23% /rhev/data-center/mnt/172.16.0.253:_var_lib_exports_iso</div><div>172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2</div><div>                      5.5T  3.7G  5.5T   1% /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2</div></div><div><br></div><div>The &quot;STOR2&quot; above is left mounted after attempting to add the new NFS storage domain.</div><div><br></div><div>Engine details:</div><div><div>Fedora release 19 (Schrödinger’s Cat)<br></div></div><div>ovirt-engine-dbscripts-3.5.0.1-1.fc19.noarch<br></div><div><div>ovirt-release34-1.0.3-1.noarch</div><div>ovirt-image-uploader-3.5.0-1.fc19.noarch</div><div>ovirt-engine-websocket-proxy-3.5.0.1-1.fc19.noarch</div><div>ovirt-log-collector-3.5.0-1.fc19.noarch</div><div>ovirt-release35-006-1.noarch</div><div>ovirt-engine-setup-3.5.0.1-1.fc19.noarch</div><div>ovirt-release33-1.0.0-0.1.master.noarch</div><div>ovirt-engine-tools-3.5.0.1-1.fc19.noarch</div><div>ovirt-engine-lib-3.5.0.1-1.fc19.noarch</div><div>ovirt-engine-sdk-python-3.5.0.8-1.fc19.noarch</div><div>ovirt-host-deploy-java-1.3.0-1.fc19.noarch</div><div>ovirt-engine-backend-3.5.0.1-1.fc19.noarch</div><div>sos-3.1-1.1.fc19.ovirt.noarch</div><div>ovirt-engine-setup-base-3.5.0.1-1.fc19.noarch</div><div>ovirt-engine-extensions-api-impl-3.5.0.1-1.fc19.noarch</div><div>ovirt-engine-webadmin-portal-3.5.0.1-1.fc19.noarch</div><div>ovirt-engine-setup-plugin-ovirt-engine-3.5.0.1-1.fc19.noarch</div><div>ovirt-iso-uploader-3.5.0-1.fc19.noarch</div><div>ovirt-h
ost-deploy-1.3.0-1.fc19.noarch</div><div>ovirt-engine-setup-plugin-ovirt-engine-common-3.5.0.1-1.fc19.noarch</div><div>ovirt-engine-3.5.0.1-1.fc19.noarch</div><div>ovirt-engine-setup-plugin-websocket-proxy-3.5.0.1-1.fc19.noarch</div><div>ovirt-engine-userportal-3.5.0.1-1.fc19.noarch</div><div>ovirt-engine-cli-3.5.0.5-1.fc19.noarch</div><div>ovirt-engine-restapi-3.5.0.1-1.fc19.noarch</div><div>libvirt-daemon-driver-nwfilter-1.1.3.2-1.fc19.x86_64<br></div><div>libvirt-daemon-driver-qemu-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-driver-libxl-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-driver-secret-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-config-network-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-driver-storage-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-driver-network-1.1.3.2-1.fc19.x86_64</div><div>libvirt-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-kvm-1.1.3.2-1.fc19.x86_64</div><div>libvirt-client-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-driver-nodedev-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-driver-uml-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-driver-xen-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-driver-interface-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-config-nwfilter-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-qemu-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-driver-vbox-1.1.3.2-1.fc19.x86_64</div><div>libvirt-daemon-driver-lxc-1.1.3.2-1.fc19.x86_64</div><div>qemu-system-lm32-1.4.2-15.fc19.x86_64<br></div><div>qemu-system-s390x-1.4.2-15.fc19.x86_64</div><div>libvirt-daemon-driver-qemu-1.1.3.2-1.fc19.x86_64</div><div>qemu-system-ppc-1.4.2-15.fc19.x86_64</div><div>qemu-user-1.4.2-15.fc19.x86_64</div><div>qemu-system-x86-1.4.2-15.fc19.x86_64</div><div>qemu-system-unicore32-1.4.2-15.fc19.x86_64</div><div>qemu-system-mips-1.4.2-15.fc19.x86_64</div><div>qemu-system-or32-1.4.2-15.fc19.x86_64</div><div>qemu-system-m68k-1.4.2-15.fc19.x86_64</div><div>qemu-img-1.4.2-15.fc
19.x86_64</div><div>qemu-kvm-1.4.2-15.fc19.x86_64</div><div>qemu-system-xtensa-1.4.2-15.fc19.x86_64</div><div>qemu-1.4.2-15.fc19.x86_64</div><div>qemu-system-microblaze-1.4.2-15.fc19.x86_64</div><div>qemu-system-alpha-1.4.2-15.fc19.x86_64</div><div>libvirt-daemon-qemu-1.1.3.2-1.fc19.x86_64</div><div>qemu-system-arm-1.4.2-15.fc19.x86_64</div><div>qemu-common-1.4.2-15.fc19.x86_64</div><div>ipxe-roms-qemu-20130517-2.gitc4bce43.fc19.noarch</div><div>qemu-system-sh4-1.4.2-15.fc19.x86_64</div><div>qemu-system-cris-1.4.2-15.fc19.x86_64</div><div>qemu-system-sparc-1.4.2-15.fc19.x86_64</div><div>libvirt-daemon-kvm-1.1.3.2-1.fc19.x86_64<br></div><div>qemu-kvm-1.4.2-15.fc19.x86_64</div></div><div><br></div>Host Details:<div><div>CentOS release 6.9 (Final)</div><div>vdsm-yajsonrpc-4.16.30-0.el6.noarch<br></div><div>vdsm-python-4.16.30-0.el6.noarch</div><div>vdsm-4.16.30-0.el6.x86_64</div><div>vdsm-cli-4.16.30-0.el6.noarch</div><div>vdsm-jsonrpc-4.16.30-0.el6.noarch</div><div>vdsm-python-zombiereaper-4.16.30-0.el6.noarch</div><div>vdsm-xmlrpc-4.16.30-0.el6.noarch</div><div>srvadmin-itunnelprovider-7.4.0-4.14.1.el6.x86_64<br></div><div>ovirt-release34-1.0.3-1.noarch</div><div>ovirt-release33-1.0.0-0.1.master.noarch</div><div>ovirt-release35-006-1.noarch</div><div>[root@ovhost3 ~]# rpm -qa | grep -i qemu</div><div>qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64</div><div>qemu-img-rhev-0.12.1.2-2.479.el6_7.2.x86_64</div><div>gpxe-roms-qemu-0.9.7-6.16.el6.noarch</div><div>qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64</div><div>libvirt-lock-sanlock-0.10.2-62.el6.x86_64</div><div>libvirt-client-0.10.2-62.el6.x86_64</div><div>libvirt-0.10.2-62.el6.x86_64</div><div>libvirt-python-0.10.2-62.el6.x86_64</div><div><br></div><div>I have tried renaming the NFS share, as well as unmounting it manually with the -l option (because it says it&#39;s busy when unmounting it from the hosts after deleting it from my DC), and I&#39;ve restarted all the hosts after upgrading too.</div><div><br></div><div>Google reveals lots of similar problems, but none of the suggested fixes seem to work for me. I have also recently tried enabling SELinux, as I previously had it disabled on both the hosts and the engine.</div><div><br></div><div>Any assistance is appreciated.</div><div><br></div><div>Thank you.</div><div><br></div><div>Regards.</div><div><br></div><div>Neil Wilson.</div><div><br></div><div><br></div></div></div>