Hi Moritz,
Thanks for your assistance.
I've checked /etc/sysconfig/nfs on all 3 hosts and on my engine, and none of
them have any options specified, so I don't think that's the cause.
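(In case it helps anyone else checking, something like this should list any
options that are actually set, skipping comments and blank lines:

  grep -Ev '^[[:space:]]*(#|$)' /etc/sysconfig/nfs

and it came back empty everywhere.)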
In terms of adding the sanlock and vdsm users, was this done on your hosts or
on the engine?
The sanlock and vdsm UIDs on my hosts are all the same. I don't have a
sanlock user on my oVirt engine, but I do have a vdsm user, and its UID
matches across all my hosts too.
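(For what it's worth, I compared them with something like:

  id -u vdsm; id -u sanlock   # the printed UIDs should match across hosts

on each machine.)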
Thank you!
Regards.
Neil Wilson.
On Tue, Sep 19, 2017 at 3:47 PM, Moritz Baumann <moritz.baumann(a)inf.ethz.ch>
wrote:
Hi Neil,
I had similar errors ('Sanlock lockspace add failure' and SPM problems, ...)
in the log files, and my problem was that I had added the "-g" option to
mountd in /etc/sysconfig/nfs under RPCMOUNTDOPTS (months ago, without
restarting the service).
I had to either remove the "-g" option or add sanlock and vdsm groups with
the same users as on the ovirt-nodes.
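For illustration, the offending line looked roughly like this (exact flags
from memory, so treat it as a sketch):

  RPCMOUNTDOPTS="-g"   # -g / --manage-gids: mountd resolves a user's groups
                       # on the NFS server instead of trusting the client

With that flag set, the server-side lookup has to find the same sanlock and
vdsm memberships as on the nodes, and mountd only picks up a change to this
file after a restart (something like "service nfs restart" on EL6).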
Maybe your issue is similar.
Cheers,
Moritz
On 19.09.2017 14:16, Neil wrote:
> Hi guys,
>
> I'm desperate to get to the bottom of this issue. Does anyone have any
> ideas please?
>
> Thank you.
>
> Regards.
>
> Neil Wilson.
>
> ---------- Forwarded message ----------
> From: Neil <nwilson123(a)gmail.com>
> Date: Mon, Sep 11, 2017 at 4:46 PM
> Subject: AcquireHostIdFailure and code 661
> To: "users(a)ovirt.org <mailto:users@ovirt.org>" <users(a)ovirt.org
<mailto:
> users(a)ovirt.org>>
>
>
> Hi guys,
>
> Please could someone shed some light on this issue I'm facing.
>
> I'm trying to add a new NFS storage domain, but when I try to add it, I get
> a message saying "Acquire hostID failed" and it fails to add.
>
> I can mount the NFS share manually, and I can see that once the attach has
> failed the NFS share is left mounted on the hosts, as per the following...
>
> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2 on
> /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2
> type nfs (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=172.16.0.11)
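>
> (For the manual test I mounted it by hand with roughly the options vdsm
> uses; "/mnt/test" below is just a scratch mount point for the example:
>
>   mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 \
>       172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2 /mnt/test
>
> and that mounts without any errors.)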
>
> Also, looking at the folders on the NFS share, I can see that some data has
> been written, so it's not a permissions issue...
>
> drwx---r-x+ 4 vdsm kvm 4096 Sep 11 16:08 16ab135b-0362-4d7e-bb11-edf5b93535d5
> -rwx---rwx. 1 vdsm kvm 0 Sep 11 16:08 __DIRECT_IO_TEST__
>
> I have just upgraded from 3.3 to 3.5, and upgraded my 3 hosts as well, in
> the hope that it was a known bug, but I'm still encountering the same problem.
>
> It's not a hosted engine. You might see in the logs that I have a storage
> domain that is out of space; I'm aware of this, and I'm hoping the system
> using that space will be decommissioned in 2 days....
>
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda2 420G 2.2G 413G 1% /
> tmpfs 48G 0 48G 0% /dev/shm
> 172.16.0.10:/raid0/data/_NAS_NFS_Exports_/RAID1_1TB
> 915G 915G 424M 100%
> /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___RAID1__1TB
> 172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1
> 5.5T 3.7T 1.8T 67%
> /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___STORAGE1
> 172.16.0.20:/data/ov-export
> 3.6T 2.3T 1.3T 65%
> /rhev/data-center/mnt/172.16.0.20:_data_ov-export
> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB
> 3.6T 2.0T 1.6T 56%
> /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___4TB
> 172.16.0.253:/var/lib/exports/iso
> 193G 42G 141G 23%
> /rhev/data-center/mnt/172.16.0.253:_var_lib_exports_iso
> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2
> 5.5T 3.7G 5.5T 1%
> /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2
>
> The "STOR2" above is left mounted after attempting to add the new NFS
> storage domain.
>
> Engine details:
> Fedora release 19 (Schrödinger’s Cat)
> ovirt-engine-dbscripts-3.5.0.1-1.fc19.noarch
> ovirt-release34-1.0.3-1.noarch
> ovirt-image-uploader-3.5.0-1.fc19.noarch
> ovirt-engine-websocket-proxy-3.5.0.1-1.fc19.noarch
> ovirt-log-collector-3.5.0-1.fc19.noarch
> ovirt-release35-006-1.noarch
> ovirt-engine-setup-3.5.0.1-1.fc19.noarch
> ovirt-release33-1.0.0-0.1.master.noarch
> ovirt-engine-tools-3.5.0.1-1.fc19.noarch
> ovirt-engine-lib-3.5.0.1-1.fc19.noarch
> ovirt-engine-sdk-python-3.5.0.8-1.fc19.noarch
> ovirt-host-deploy-java-1.3.0-1.fc19.noarch
> ovirt-engine-backend-3.5.0.1-1.fc19.noarch
> sos-3.1-1.1.fc19.ovirt.noarch
> ovirt-engine-setup-base-3.5.0.1-1.fc19.noarch
> ovirt-engine-extensions-api-impl-3.5.0.1-1.fc19.noarch
> ovirt-engine-webadmin-portal-3.5.0.1-1.fc19.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.5.0.1-1.fc19.noarch
> ovirt-iso-uploader-3.5.0-1.fc19.noarch
> ovirt-host-deploy-1.3.0-1.fc19.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.5.0.1-1.fc19.noarch
> ovirt-engine-3.5.0.1-1.fc19.noarch
> ovirt-engine-setup-plugin-websocket-proxy-3.5.0.1-1.fc19.noarch
> ovirt-engine-userportal-3.5.0.1-1.fc19.noarch
> ovirt-engine-cli-3.5.0.5-1.fc19.noarch
> ovirt-engine-restapi-3.5.0.1-1.fc19.noarch
> libvirt-daemon-driver-nwfilter-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-qemu-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-libxl-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-secret-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-config-network-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-storage-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-network-1.1.3.2-1.fc19.x86_64
> libvirt-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-kvm-1.1.3.2-1.fc19.x86_64
> libvirt-client-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-nodedev-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-uml-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-xen-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-interface-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-config-nwfilter-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-qemu-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-vbox-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-lxc-1.1.3.2-1.fc19.x86_64
> qemu-system-lm32-1.4.2-15.fc19.x86_64
> qemu-system-s390x-1.4.2-15.fc19.x86_64
> qemu-system-ppc-1.4.2-15.fc19.x86_64
> qemu-user-1.4.2-15.fc19.x86_64
> qemu-system-x86-1.4.2-15.fc19.x86_64
> qemu-system-unicore32-1.4.2-15.fc19.x86_64
> qemu-system-mips-1.4.2-15.fc19.x86_64
> qemu-system-or32-1.4.2-15.fc19.x86_64
> qemu-system-m68k-1.4.2-15.fc19.x86_64
> qemu-img-1.4.2-15.fc19.x86_64
> qemu-kvm-1.4.2-15.fc19.x86_64
> qemu-system-xtensa-1.4.2-15.fc19.x86_64
> qemu-1.4.2-15.fc19.x86_64
> qemu-system-microblaze-1.4.2-15.fc19.x86_64
> qemu-system-alpha-1.4.2-15.fc19.x86_64
> qemu-system-arm-1.4.2-15.fc19.x86_64
> qemu-common-1.4.2-15.fc19.x86_64
> ipxe-roms-qemu-20130517-2.gitc4bce43.fc19.noarch
> qemu-system-sh4-1.4.2-15.fc19.x86_64
> qemu-system-cris-1.4.2-15.fc19.x86_64
> qemu-system-sparc-1.4.2-15.fc19.x86_64
>
>
>
> Host Details:
> CentOS release 6.9 (Final)
> vdsm-yajsonrpc-4.16.30-0.el6.noarch
> vdsm-python-4.16.30-0.el6.noarch
> vdsm-4.16.30-0.el6.x86_64
> vdsm-cli-4.16.30-0.el6.noarch
> vdsm-jsonrpc-4.16.30-0.el6.noarch
> vdsm-python-zombiereaper-4.16.30-0.el6.noarch
> vdsm-xmlrpc-4.16.30-0.el6.noarch
> srvadmin-itunnelprovider-7.4.0-4.14.1.el6.x86_64
> ovirt-release34-1.0.3-1.noarch
> ovirt-release33-1.0.0-0.1.master.noarch
> ovirt-release35-006-1.noarch
> qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64
> qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
> qemu-img-rhev-0.12.1.2-2.479.el6_7.2.x86_64
> gpxe-roms-qemu-0.9.7-6.16.el6.noarch
> libvirt-lock-sanlock-0.10.2-62.el6.x86_64
> libvirt-client-0.10.2-62.el6.x86_64
> libvirt-0.10.2-62.el6.x86_64
> libvirt-python-0.10.2-62.el6.x86_64
>
> I have tried renaming the NFS share, as well as unmounting it manually with
> the -l option (because it says it's busy when unmounting it from the hosts
> after deleting it from my DC), and I've restarted all hosts after
> upgrading too.
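>
> (The lazy unmount was just, for example:
>
>   umount -l /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2
>
> since a plain umount reports the mount as busy.)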
>
> Google reveals lots of similar problems, but none of the suggested fixes
> seem to work. I have recently tried enabling SELinux as well, because I had
> it disabled on the hosts and the engine.
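>
> (By that I mean something along these lines, on both hosts and engine:
>
>   getenforce    # shows Enforcing / Permissive / Disabled
>   setenforce 1  # runtime switch to enforcing; coming from Disabled needs
>                 # /etc/selinux/config plus a reboot instead
>
> in case it matters.)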
>
> Any assistance is appreciated.
>
> Thank you.
>
> Regards.
>
> Neil Wilson.
>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users