<div dir="ltr">Hi Moritz,<div><br></div><div>Thanks for your assistance.</div><div><br></div><div>I've checked my /etc/sysconfig/nfs on all 3 hosts and my engine and none of them have any options specified, so I don't think it's this one.</div><div><br></div><div>In terms of adding a sanlock and vdsm user, was this done on your hosts or engine?</div><div><br></div><div>My hosts uid for sanlock and vdsm are all the same. </div><div><br></div><div>I don't have a sanlock user on my ovirt engine,but I do have a vdsm user and the uid matches across all my hosts too.</div><div><br></div><div>Thank you!</div><div><br></div><div>Regards.</div><div><br></div><div>Neil Wilson.</div><div><br></div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 19, 2017 at 3:47 PM, Moritz Baumann <span dir="ltr"><<a href="mailto:moritz.baumann@inf.ethz.ch" target="_blank">moritz.baumann@inf.ethz.ch</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Neil,<br>
<br>
I had similar errors ('Sanlock lockspace add failure', SPM problems, ...) in the log files, and my problem was that I had added the "-g" option to mountd under RPCMOUNTDOPTS in /etc/sysconfig/nfs (months ago, without restarting the service).<br>
<br>
I had to either remove the "-g" option or add sanlock and vdsm groups containing the same users as on the oVirt nodes.<br>
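<br>
Roughly what I mean, as a sketch (stock /etc/sysconfig/nfs layout assumed; adjust service names to your distro):<br>
<br>
# /etc/sysconfig/nfs -- "-g" (--manage-gids) makes mountd ignore the group list<br>
# sent by the client and look up supplementary groups locally on the server:<br>
RPCMOUNTDOPTS="-g"<br>
<br>
# so either drop "-g" and restart the NFS services, or make sure the server<br>
# knows the same vdsm/sanlock users and groups as the oVirt nodes:<br>
getent passwd vdsm sanlock<br>
getent group kvm sanlock<br>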
<br>
Maybe your issue is similar.<br>
<br>
Cheers,<br>
Moritz<span class=""><br>
<br>
On 19.09.2017 14:16, Neil wrote:<br>
</span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
Hi guys,<br>
<br>
I'm desperate to get to the bottom of this issue. Does anyone have any ideas please?<br>
<br>
Thank you.<br>
<br>
Regards.<br>
<br>
Neil Wilson.<br>
<br>
---------- Forwarded message ----------<br></span><span class="">
From: *Neil* <<a href="mailto:nwilson123@gmail.com" target="_blank">nwilson123@gmail.com</a> <mailto:<a href="mailto:nwilson123@gmail.com" target="_blank">nwilson123@gmail.com</a>>><br>
Date: Mon, Sep 11, 2017 at 4:46 PM<br>
Subject: AcquireHostIdFailure and code 661<br></span><div><div class="h5">
To: "<a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a> <mailto:<a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a>>" <<a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a> <mailto:<a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a>>><br>
<br>
<br>
Hi guys,<br>
<br>
Please could someone shed some light on this issue I'm facing.<br>
<br>
I'm trying to add a new NFS storage domain, but when I try to add it I get a message saying "Acquire hostID failed" and the operation fails.<br>
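<br>
For reference, a rough sketch of where the underlying sanlock error should show up on the host that attempted the attach (default log locations assumed):<br>
<br>
# vdsm's side of the failed attach:<br>
grep -i "acquirehostid\|add_lockspace" /var/log/vdsm/vdsm.log | tail -n 20<br>
# sanlock's own view of the lockspace add:<br>
tail -n 50 /var/log/sanlock.log<br>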
<br>
I can mount the NFS share manually, and I can see that once the attach has failed the share is still mounted on the hosts, as per the following...<br>
<br>
172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2 on /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2 type nfs (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=172.16.0.11)<br>
<br>
Also, looking at the folders on the NFS share I can see that some data has been written, so it doesn't appear to be a permissions issue...<br>
<br>
drwx---r-x+ 4 vdsm kvm 4096 Sep 11 16:08 16ab135b-0362-4d7e-bb11-edf5b93535d5<br>
-rwx---rwx. 1 vdsm kvm 0 Sep 11 16:08 __DIRECT_IO_TEST__<br>
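<br>
For completeness, a quick way to double-check this from a host (sketch only; the UUID is the one listed above, and the scratch file is just an illustrative name for a direct-I/O write test, not anything oVirt requires):<br>
<br>
cd /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2<br>
ls -l 16ab135b-0362-4d7e-bb11-edf5b93535d5/dom_md/<br>
# a quick write test as the vdsm user:<br>
sudo -u vdsm dd if=/dev/zero of=__write_test__ bs=1M count=1 oflag=direct<br>
sudo -u vdsm rm -f __write_test__<br>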
<br>
I have just upgraded the engine from 3.3 to 3.5, and upgraded my 3 hosts as well, in the hope that this was a known bug, but I'm still encountering the same problem.<br>
<br>
It's not a hosted engine. You might see in the logs that one of my storage domains is out of space; I'm aware of this, and the system using that space should be decommissioned in 2 days...<br>
<br>
Filesystem Size Used Avail Use% Mounted on<br>
/dev/sda2 420G 2.2G 413G 1% /<br>
tmpfs 48G 0 48G 0% /dev/shm<br>
172.16.0.10:/raid0/data/_NAS_NFS_Exports_/RAID1_1TB<br>
915G 915G 424M 100% /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___RAID1__1TB<br>
172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1<br>
5.5T 3.7T 1.8T 67% /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___STORAGE1<br>
172.16.0.20:/data/ov-export<br>
3.6T 2.3T 1.3T 65% /rhev/data-center/mnt/172.16.0.20:_data_ov-export<br>
172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB<br>
3.6T 2.0T 1.6T 56% /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___4TB<br>
172.16.0.253:/var/lib/exports/iso<br>
193G 42G 141G 23% /rhev/data-center/mnt/172.16.0.253:_var_lib_exports_iso<br>
172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2<br>
5.5T 3.7G 5.5T 1% /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2<br>
<br>
The "STOR2" above is left mounted after attempting to add the new NFS storage domain.<br>
<br>
Engine details:<br>
Fedora release 19 (Schrödinger’s Cat)<br>
ovirt-engine-dbscripts-3.5.0.1-1.fc19.noarch<br>
ovirt-release34-1.0.3-1.noarch<br>
ovirt-image-uploader-3.5.0-1.fc19.noarch<br>
ovirt-engine-websocket-proxy-3.5.0.1-1.fc19.noarch<br>
ovirt-log-collector-3.5.0-1.fc19.noarch<br>
ovirt-release35-006-1.noarch<br>
ovirt-engine-setup-3.5.0.1-1.fc19.noarch<br>
ovirt-release33-1.0.0-0.1.master.noarch<br>
ovirt-engine-tools-3.5.0.1-1.fc19.noarch<br>
ovirt-engine-lib-3.5.0.1-1.fc19.noarch<br>
ovirt-engine-sdk-python-3.5.0.8-1.fc19.noarch<br>
ovirt-host-deploy-java-1.3.0-1.fc19.noarch<br>
ovirt-engine-backend-3.5.0.1-1.fc19.noarch<br>
sos-3.1-1.1.fc19.ovirt.noarch<br>
ovirt-engine-setup-base-3.5.0.1-1.fc19.noarch<br>
ovirt-engine-extensions-api-impl-3.5.0.1-1.fc19.noarch<br>
ovirt-engine-webadmin-portal-3.5.0.1-1.fc19.noarch<br>
ovirt-engine-setup-plugin-ovirt-engine-3.5.0.1-1.fc19.noarch<br>
ovirt-iso-uploader-3.5.0-1.fc19.noarch<br>
ovirt-host-deploy-1.3.0-1.fc19.noarch<br>
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.0.1-1.fc19.noarch<br>
ovirt-engine-3.5.0.1-1.fc19.noarch<br>
ovirt-engine-setup-plugin-websocket-proxy-3.5.0.1-1.fc19.noarch<br>
ovirt-engine-userportal-3.5.0.1-1.fc19.noarch<br>
ovirt-engine-cli-3.5.0.5-1.fc19.noarch<br>
ovirt-engine-restapi-3.5.0.1-1.fc19.noarch<br>
libvirt-daemon-driver-nwfilter-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-qemu-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-libxl-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-secret-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-config-network-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-storage-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-network-1.1.3.2-1.fc19.x86_64<br>
libvirt-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-kvm-1.1.3.2-1.fc19.x86_64<br>
libvirt-client-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-nodedev-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-uml-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-xen-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-interface-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-config-nwfilter-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-qemu-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-vbox-1.1.3.2-1.fc19.x86_64<br>
libvirt-daemon-driver-lxc-1.1.3.2-1.fc19.x86_64<br>
qemu-system-lm32-1.4.2-15.fc19.x86_64<br>
qemu-system-s390x-1.4.2-15.fc19.x86_64<br>
libvirt-daemon-driver-qemu-1.1.3.2-1.fc19.x86_64<br>
qemu-system-ppc-1.4.2-15.fc19.x86_64<br>
qemu-user-1.4.2-15.fc19.x86_64<br>
qemu-system-x86-1.4.2-15.fc19.x86_64<br>
qemu-system-unicore32-1.4.2-15.fc19.x86_64<br>
qemu-system-mips-1.4.2-15.fc19.x86_64<br>
qemu-system-or32-1.4.2-15.fc19.x86_64<br>
qemu-system-m68k-1.4.2-15.fc19.x86_64<br>
qemu-img-1.4.2-15.fc19.x86_64<br>
qemu-kvm-1.4.2-15.fc19.x86_64<br>
qemu-system-xtensa-1.4.2-15.fc19.x86_64<br>
qemu-1.4.2-15.fc19.x86_64<br>
qemu-system-microblaze-1.4.2-15.fc19.x86_64<br>
qemu-system-alpha-1.4.2-15.fc19.x86_64<br>
libvirt-daemon-qemu-1.1.3.2-1.fc19.x86_64<br>
qemu-system-arm-1.4.2-15.fc19.x86_64<br>
qemu-common-1.4.2-15.fc19.x86_64<br>
ipxe-roms-qemu-20130517-2.gitc4bce43.fc19.noarch<br>
qemu-system-sh4-1.4.2-15.fc19.x86_64<br>
qemu-system-cris-1.4.2-15.fc19.x86_64<br>
qemu-system-sparc-1.4.2-15.fc19.x86_64<br>
libvirt-daemon-kvm-1.1.3.2-1.fc19.x86_64<br>
qemu-kvm-1.4.2-15.fc19.x86_64<br>
<br>
<br>
<br>
Host Details:<br>
CentOS release 6.9 (Final)<br>
vdsm-yajsonrpc-4.16.30-0.el6.noarch<br>
vdsm-python-4.16.30-0.el6.noarch<br>
vdsm-4.16.30-0.el6.x86_64<br>
vdsm-cli-4.16.30-0.el6.noarch<br>
vdsm-jsonrpc-4.16.30-0.el6.noarch<br>
vdsm-python-zombiereaper-4.16.30-0.el6.noarch<br>
vdsm-xmlrpc-4.16.30-0.el6.noarch<br>
srvadmin-itunnelprovider-7.4.0-4.14.1.el6.x86_64<br>
ovirt-release34-1.0.3-1.noarch<br>
ovirt-release33-1.0.0-0.1.master.noarch<br>
ovirt-release35-006-1.noarch<br>
qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64<br>
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64<br>
[root@ovhost3 ~]# rpm -qa | grep -i qemu<br>
qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64<br>
qemu-img-rhev-0.12.1.2-2.479.el6_7.2.x86_64<br>
gpxe-roms-qemu-0.9.7-6.16.el6.noarch<br>
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64<br>
libvirt-lock-sanlock-0.10.2-62.el6.x86_64<br>
libvirt-client-0.10.2-62.el6.x86_64<br>
libvirt-0.10.2-62.el6.x86_64<br>
libvirt-python-0.10.2-62.el6.x86_64<br>
<br>
I have tried renaming the NFS share, as well as unmounting it manually with the -l option (because it reports as busy when unmounting it from the hosts after deleting it from my DC), and I've restarted all hosts after upgrading too.<br>
<br>
Google reveals lots of similar problems, but none of the suggested fixes seem to work. I have recently tried enabling SELinux as well, because I previously had it disabled on both the hosts and the engine.<br>
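<br>
On the SELinux side, here is a sketch of what can be checked on the hosts (virt_use_nfs is the boolean usually suggested for NFS-backed VM storage; I'm assuming it is the relevant one for these versions):<br>
<br>
getenforce<br>
getsebool virt_use_nfs<br>
# if it is off, enable it persistently:<br>
setsebool -P virt_use_nfs on<br>
# and look for recent denials involving sanlock or vdsm:<br>
ausearch -m avc -ts recent | grep -Ei "sanlock|vdsm"<br>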
<br>
Any assistance is appreciated.<br>
<br>
Thank you.<br>
<br>
Regards.<br>
<br>
Neil Wilson.<br>
<br>
<br>
<br>
<br>
<br></div></div>
</blockquote>
______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
</blockquote></div><br></div>