[ovirt-users] AcquireHostIdFailure and code 661
Neil
nwilson123 at gmail.com
Thu Sep 14 11:41:05 UTC 2017
Sorry to re-post, but does anyone have any ideas?
Thank you.
Regards.
Neil Wilson.
On Mon, Sep 11, 2017 at 4:46 PM, Neil <nwilson123 at gmail.com> wrote:
> Hi guys,
>
> Please could someone shed some light on this issue I'm facing.
>
> I'm trying to add a new NFS storage domain, but when I try to add it, I get a
> message saying "Acquire hostID failed" and the domain fails to add.
>
> I can mount the NFS share manually, and I can see that once the attach has
> failed the NFS share is still left mounted on the hosts, as per the
> following...
>
> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2 on /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2 type nfs (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=172.16.0.11)
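As an aside, the long directory name under /rhev/data-center/mnt appears to be vdsm's escaping of the remote path (each "_" doubled, then each "/" mapped to "_"), which matches the df output further down. A minimal sketch of that mapping, with a helper name of my own choosing:

```shell
# Sketch of the mount-point escaping that vdsm appears to apply:
# double every "_" first, then turn every "/" into "_".
# escape_remote_path is a hypothetical name, not a real vdsm function.
escape_remote_path() {
    printf '%s' "$1" | sed -e 's/_/__/g' -e 's,/,_,g'
}

escape_remote_path '172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2'
# prints: 172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2
```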
>
> Also, looking at the folders on the NFS share, I can see that some data has
> been written, so it doesn't appear to be a permissions issue...
>
> drwx---r-x+ 4 vdsm kvm 4096 Sep 11 16:08 16ab135b-0362-4d7e-bb11-edf5b93535d5
> -rwx---rwx. 1 vdsm kvm 0 Sep 11 16:08 __DIRECT_IO_TEST__
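From what I understand, "Acquire hostID failed" usually means sanlock could not take the host-id lease in the domain's dom_md/ids file on the new share, so checking that the file exists and is writable is a reasonable next step. A rough sketch of that check (the helper name is mine, and it assumes the standard <mount>/<domain-uuid>/dom_md/ids layout):

```shell
# Hypothetical helper: confirm the sanlock lease file is present and
# accessible at <mount>/<domain-uuid>/dom_md/ids.
check_ids_file() {
    mnt=$1
    uuid=$2
    ids="$mnt/$uuid/dom_md/ids"
    if [ -r "$ids" ] && [ -w "$ids" ]; then
        echo "ids file looks accessible: $ids"
        return 0
    fi
    echo "ids file missing or not accessible: $ids" >&2
    return 1
}

# Example against the paths shown above (will report failure if the
# domain never got far enough to create dom_md):
check_ids_file '/rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2' \
    '16ab135b-0362-4d7e-bb11-edf5b93535d5'
```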
>
> I have just upgraded from 3.3 to 3.5, and upgraded my 3 hosts as well, in
> the hope that this was a known bug, but I'm still encountering the same
> problem.
>
> It's not a hosted engine. You might also see in the logs that I have a
> storage domain that is out of space; I'm aware of this, and I'm hoping the
> system using that space will be decommissioned in 2 days....
>
> Filesystem                                           Size  Used Avail Use% Mounted on
> /dev/sda2                                            420G  2.2G  413G   1% /
> tmpfs                                                 48G     0   48G   0% /dev/shm
> 172.16.0.10:/raid0/data/_NAS_NFS_Exports_/RAID1_1TB  915G  915G  424M 100% /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___RAID1__1TB
> 172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1   5.5T  3.7T  1.8T  67% /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___STORAGE1
> 172.16.0.20:/data/ov-export                          3.6T  2.3T  1.3T  65% /rhev/data-center/mnt/172.16.0.20:_data_ov-export
> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB        3.6T  2.0T  1.6T  56% /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___4TB
> 172.16.0.253:/var/lib/exports/iso                    193G   42G  141G  23% /rhev/data-center/mnt/172.16.0.253:_var_lib_exports_iso
> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2      5.5T  3.7G  5.5T   1% /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2
>
> The "STOR2" above is left mounted after attempting to add the new NFS
> storage domain.
>
> Engine details:
> Fedora release 19 (Schrödinger’s Cat)
> ovirt-engine-dbscripts-3.5.0.1-1.fc19.noarch
> ovirt-release34-1.0.3-1.noarch
> ovirt-image-uploader-3.5.0-1.fc19.noarch
> ovirt-engine-websocket-proxy-3.5.0.1-1.fc19.noarch
> ovirt-log-collector-3.5.0-1.fc19.noarch
> ovirt-release35-006-1.noarch
> ovirt-engine-setup-3.5.0.1-1.fc19.noarch
> ovirt-release33-1.0.0-0.1.master.noarch
> ovirt-engine-tools-3.5.0.1-1.fc19.noarch
> ovirt-engine-lib-3.5.0.1-1.fc19.noarch
> ovirt-engine-sdk-python-3.5.0.8-1.fc19.noarch
> ovirt-host-deploy-java-1.3.0-1.fc19.noarch
> ovirt-engine-backend-3.5.0.1-1.fc19.noarch
> sos-3.1-1.1.fc19.ovirt.noarch
> ovirt-engine-setup-base-3.5.0.1-1.fc19.noarch
> ovirt-engine-extensions-api-impl-3.5.0.1-1.fc19.noarch
> ovirt-engine-webadmin-portal-3.5.0.1-1.fc19.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.5.0.1-1.fc19.noarch
> ovirt-iso-uploader-3.5.0-1.fc19.noarch
> ovirt-host-deploy-1.3.0-1.fc19.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.5.0.1-1.fc19.noarch
> ovirt-engine-3.5.0.1-1.fc19.noarch
> ovirt-engine-setup-plugin-websocket-proxy-3.5.0.1-1.fc19.noarch
> ovirt-engine-userportal-3.5.0.1-1.fc19.noarch
> ovirt-engine-cli-3.5.0.5-1.fc19.noarch
> ovirt-engine-restapi-3.5.0.1-1.fc19.noarch
> libvirt-daemon-driver-nwfilter-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-qemu-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-libxl-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-secret-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-config-network-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-storage-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-network-1.1.3.2-1.fc19.x86_64
> libvirt-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-kvm-1.1.3.2-1.fc19.x86_64
> libvirt-client-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-nodedev-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-uml-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-xen-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-interface-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-config-nwfilter-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-qemu-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-vbox-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-lxc-1.1.3.2-1.fc19.x86_64
> qemu-system-lm32-1.4.2-15.fc19.x86_64
> qemu-system-s390x-1.4.2-15.fc19.x86_64
> libvirt-daemon-driver-qemu-1.1.3.2-1.fc19.x86_64
> qemu-system-ppc-1.4.2-15.fc19.x86_64
> qemu-user-1.4.2-15.fc19.x86_64
> qemu-system-x86-1.4.2-15.fc19.x86_64
> qemu-system-unicore32-1.4.2-15.fc19.x86_64
> qemu-system-mips-1.4.2-15.fc19.x86_64
> qemu-system-or32-1.4.2-15.fc19.x86_64
> qemu-system-m68k-1.4.2-15.fc19.x86_64
> qemu-img-1.4.2-15.fc19.x86_64
> qemu-kvm-1.4.2-15.fc19.x86_64
> qemu-system-xtensa-1.4.2-15.fc19.x86_64
> qemu-1.4.2-15.fc19.x86_64
> qemu-system-microblaze-1.4.2-15.fc19.x86_64
> qemu-system-alpha-1.4.2-15.fc19.x86_64
> libvirt-daemon-qemu-1.1.3.2-1.fc19.x86_64
> qemu-system-arm-1.4.2-15.fc19.x86_64
> qemu-common-1.4.2-15.fc19.x86_64
> ipxe-roms-qemu-20130517-2.gitc4bce43.fc19.noarch
> qemu-system-sh4-1.4.2-15.fc19.x86_64
> qemu-system-cris-1.4.2-15.fc19.x86_64
> qemu-system-sparc-1.4.2-15.fc19.x86_64
> libvirt-daemon-kvm-1.1.3.2-1.fc19.x86_64
> qemu-kvm-1.4.2-15.fc19.x86_64
>
>
>
> Host Details:
> CentOS release 6.9 (Final)
> vdsm-yajsonrpc-4.16.30-0.el6.noarch
> vdsm-python-4.16.30-0.el6.noarch
> vdsm-4.16.30-0.el6.x86_64
> vdsm-cli-4.16.30-0.el6.noarch
> vdsm-jsonrpc-4.16.30-0.el6.noarch
> vdsm-python-zombiereaper-4.16.30-0.el6.noarch
> vdsm-xmlrpc-4.16.30-0.el6.noarch
> srvadmin-itunnelprovider-7.4.0-4.14.1.el6.x86_64
> ovirt-release34-1.0.3-1.noarch
> ovirt-release33-1.0.0-0.1.master.noarch
> ovirt-release35-006-1.noarch
> qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64
> qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
> [root@ovhost3 ~]# rpm -qa | grep -i qemu
> qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64
> qemu-img-rhev-0.12.1.2-2.479.el6_7.2.x86_64
> gpxe-roms-qemu-0.9.7-6.16.el6.noarch
> qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
> libvirt-lock-sanlock-0.10.2-62.el6.x86_64
> libvirt-client-0.10.2-62.el6.x86_64
> libvirt-0.10.2-62.el6.x86_64
> libvirt-python-0.10.2-62.el6.x86_64
>
> I have tried renaming the NFS share, as well as unmounting it manually with
> the -l (lazy) option (because it says it's busy when unmounting it from the
> hosts after deleting it from my DC), and I've restarted all hosts after
> upgrading too.
>
> Google reveals lots of similar problems, but none of the suggested fixes
> seem to work. I have recently tried enabling SELinux as well, because I had
> it disabled on both the hosts and the engine.
>
> Any assistance is appreciated.
>
> Thank you.
>
> Regards.
>
> Neil Wilson.
>
>
>