[Users] Host cannot access storage domains

Albl, Oliver Oliver.Albl at fabasoft.com
Fri Jan 3 15:08:48 UTC 2014


Dafna,

  /usr/lib/systemd/systemd-vdsmd reconfigure force worked!

VMs start and can be migrated. Thanks a lot for your help - and I'll stay with the node iso image :)

All the best,
Oliver
-----Original Message-----
From: Alon Bar-Lev [mailto:alonbl at redhat.com]
Sent: Friday, January 3, 2014 16:00
To: Albl, Oliver
Cc: dron at redhat.com; users at ovirt.org
Subject: Re: [Users] Host cannot access storage domains



----- Original Message -----
> From: "Oliver Albl" <Oliver.Albl at fabasoft.com>
> To: dron at redhat.com
> Cc: users at ovirt.org
> Sent: Friday, January 3, 2014 4:56:33 PM
> Subject: Re: [Users] Host cannot access storage domains
> 
> Redirecting to /bin/systemctl reconfigure  vdsmd.service
> Unknown operation 'reconfigure'.

/usr/lib/systemd/systemd-vdsmd reconfigure force
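
For the record: on a systemd host, "service vdsmd reconfigure" is simply redirected to systemctl, which has no 'reconfigure' verb (hence the error above), so the wrapper script has to be called directly. A minimal sketch - the restart at the end is an assumption on my side, adjust as needed:

  /usr/lib/systemd/systemd-vdsmd reconfigure force   # re-runs vdsm's libvirt configuration
  systemctl restart vdsmd                            # assumption: bring vdsm back up afterwards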

> 
> ... seems to me I should get rid of the ovirt-node ISO installation 
> and move to an RPM-based install?
> 
> Thanks,
> Oliver
> -----Original Message-----
> From: Dafna Ron [mailto:dron at redhat.com]
> Sent: Friday, January 3, 2014 15:51
> To: Albl, Oliver
> Cc: users at ovirt.org
> Subject: Re: AW: AW: AW: AW: AW: [Users] Host cannot access storage domains
> 
> can you run:
> service vdsmd reconfigure on the second host?
> 
> On 01/03/2014 02:43 PM, Albl, Oliver wrote:
> > Dafna,
> >
> >    yes, the VM starts on the first node, the issues are on the second node
> >    only.
> >
> > /etc/libvirt/qemu-sanlock.conf is identical on both nodes:
> >
> > auto_disk_leases=0
> > require_lease_for_disks=0
> >
> > yum update reports "Using yum is not supported"...
> >
> > Thanks,
> > Oliver
> >
> > -----Original Message-----
> > From: Dafna Ron [mailto:dron at redhat.com]
> > Sent: Friday, January 3, 2014 15:39
> > To: Albl, Oliver
> > Cc: users at ovirt.org
> > Subject: Re: AW: AW: AW: AW: [Users] Host cannot access storage domains
> >
> > ok, let's try to zoom in on the issue...
> > can you run vm's on the first host or do you have issues only on the 
> > second host you added?
> > can you run on both hosts?
> > # egrep -v ^# /etc/libvirt/qemu-sanlock.conf
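> >
> > e.g. a rough sketch - dump the non-comment lines on each host and diff the two files (the /tmp file names are only examples):
> >
> > egrep -v '^#' /etc/libvirt/qemu-sanlock.conf | sort > /tmp/$(hostname).qemu-sanlock
> > diff /tmp/host01.qemu-sanlock /tmp/host02.qemu-sanlock   # after copying one file across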
> >
> > can you run yum update on one of the hosts and see if there are 
> > newer packages?
> >
> > Thanks,
> >
> > Dafna
> >
> > On 01/03/2014 02:30 PM, Albl, Oliver wrote:
> >> I installed both hosts using the oVirt Node ISO image:
> >>
> >> OS Version: oVirt Node - 3.0.3 - 1.1.fc19
> >> Kernel Version: 3.11.9 - 200.fc19.x86_64
> >> KVM Version: 1.6.1 - 2.fc19
> >> LIBVIRT Version: libvirt-1.1.3.1-2.fc19
> >> VDSM Version: vdsm-4.13.0-11.fc19
> >>
> >> Thanks,
> >> Oliver
> >> -----Original Message-----
> >> From: Dafna Ron [mailto:dron at redhat.com]
> >> Sent: Friday, January 3, 2014 15:24
> >> To: Albl, Oliver
> >> Cc: users at ovirt.org
> >> Subject: Re: AW: AW: AW: [Users] Host cannot access storage domains
> >>
> >> ignore the link :)
> >>
> >> so searching for this error I hit an old bug and it seemed to be an 
> >> issue between libvirt/sanlock.
> >>
> >> https://bugzilla.redhat.com/show_bug.cgi?id=828633
> >>
> >> are you using latest packages?
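> >>
> >> if yum cannot be used on the node image, comparing the relevant package versions between the two hosts should be enough - just a sketch:
> >>
> >> rpm -q vdsm libvirt libvirt-lock-sanlock sanlock qemu-kvm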
> >>
> >>
> >>
> >>
> >> On 01/03/2014 02:15 PM, Albl, Oliver wrote:
> >>> Dafna,
> >>>
> >>>      Libvirtd.log shows no errors, but VM log shows the following:
> >>>
> >>> 2014-01-03 13:52:11.296+0000: starting up
> >>> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice
> >>> /usr/bin/qemu-kvm -name OATEST2 -S -machine pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime mlock=off
> >>> -smp 1,sockets=1,cores=1,threads=1 -uuid d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
> >>> -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
> >>> -no-user-config -nodefaults
> >>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control
> >>> -rtc base=2014-01-03T13:52:11,driftfix=slew -no-shutdown
> >>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> >>> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
> >>> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3
> >>> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> >>> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> >>> -drive file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23e8-40f3-8387-ed299ee62607,if=none,id=drive-virtio-disk0,format=raw,serial=de7ca992-b1c1-4cb8-9470-2494304c9b69,cache=none,werror=stop,rerror=stop,aio=native
> >>> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> >>> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.com.redhat.rhevm.vdsm,server,nowait
> >>> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >>> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.org.qemu.guest_agent.0,server,nowait
> >>> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >>> -chardev spicevmc,id=charchannel2,name=vdagent
> >>> -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> >>> -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us
> >>> -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2
> >>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
> >>> libvirt: Lock Driver error : unsupported configuration: Read/write, exclusive access, disks were present, but no leases specified
> >>> 2014-01-03 13:52:11.306+0000: shutting down
> >>>
> >>> Not sure what you mean with this
> >>> http://forums.opensuse.org/english/get-technical-help-here/virtualization/492483-cannot-start-libvert-kvm-guests-after-update-tumbleweed.html.
> >>> Do you want me to update libvirt with these repos on the 
> >>> oVirt-Node based installation?
> >>>
> >>> Thanks,
> >>> Oliver
> >>> -----Original Message-----
> >>> From: Dafna Ron [mailto:dron at redhat.com]
> >>> Sent: Friday, January 3, 2014 15:10
> >>> To: Albl, Oliver
> >>> Cc: users at ovirt.org
> >>> Subject: Re: AW: AW: [Users] Host cannot access storage domains
> >>>
> >>> actually, looking at this again, it's a libvirt error and it can 
> >>> be related to selinux or sasl.
> >>> can you also look at the libvirt log and the vm log under /var/log/libvirt?
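> >>> roughly like this (a sketch, assuming the default locations and that vdsm configured the sanlock lock driver):
> >>>
> >>> less /var/log/libvirt/qemu/<vm-name>.log      # per-VM log, one file per VM
> >>> grep lock_manager /etc/libvirt/qemu.conf      # should show lock_manager = "sanlock" on an oVirt host
> >>> egrep -v '^#' /etc/libvirt/qemu-sanlock.conf  # auto_disk_leases / require_lease_for_disks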
> >>>
> >>> On 01/03/2014 02:00 PM, Albl, Oliver wrote:
> >>>> Dafna,
> >>>>
> >>>>       please find the logs below:
> >>>>
> >>>> ERRORs in vdsm.log on host02:
> >>>>
> >>>> Thread-61::ERROR::2014-01-03 13:51:48,956::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain f404398a-97f9-474c-af2c-e8887f53f688
> >>>> Thread-61::ERROR::2014-01-03 13:51:48,959::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain f404398a-97f9-474c-af2c-e8887f53f688
> >>>> Thread-323::ERROR::2014-01-03 13:52:11,527::vm::2132::vm.Vm::(_startUnderlyingVm) vmId=`d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6`::The vm start process failed
> >>>> Traceback (most recent call last):
> >>>>   File "/usr/share/vdsm/vm.py", line 2092, in _startUnderlyingVm
> >>>>     self._run()
> >>>>   File "/usr/share/vdsm/vm.py", line 2959, in _run
> >>>>     self._connection.createXML(domxml, flags),
> >>>>   File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
> >>>>     ret = f(*args, **kwargs)
> >>>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2920, in createXML
> >>>> libvirtError: Child quit during startup handshake: Input/output error
> >>>> Thread-60::ERROR::2014-01-03 13:52:23,111::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 52cf84ce-6eda-4337-8c94-491d94f5a18d
> >>>> Thread-60::ERROR::2014-01-03 13:52:23,111::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 52cf84ce-6eda-4337-8c94-491d94f5a18d
> >>>> Thread-62::ERROR::2014-01-03 13:52:26,353::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 7841a1c0-181a-4d43-9a25-b707accb5c4b
> >>>> Thread-62::ERROR::2014-01-03 13:52:26,355::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 7841a1c0-181a-4d43-9a25-b707accb5c4b
> >>>>
> >>>> engine.log:
> >>>>
> >>>> 2014-01-03 14:52:06,976 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] START, 
> >>>> IsVmDuringInitiatingVDSCommand( vmId = 
> >>>> d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6), log id: 5940cf72
> >>>> 2014-01-03 14:52:06,976 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] FINISH, 
> >>>> IsVmDuringInitiatingVDSCommand, return: false, log id: 5940cf72
> >>>> 2014-01-03 14:52:07,057 INFO
> >>>> [org.ovirt.engine.core.bll.RunVmOnceCommand]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] Running command: 
> >>>> RunVmOnceCommand
> >>>> internal: false.
> >>>> Entities affected :  ID: d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 
> >>>> Type: VM,
> >>>> ID:
> >>>> d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 Type: VM
> >>>> 2014-01-03 14:52:07,151 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.IsoPrefixVDSCommand]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] START, 
> >>>> IsoPrefixVDSCommand(HostName = host02, HostId = 
> >>>> 6dc7fac6-149e-4445-ace1-3c334a24d52a,
> >>>> storagePoolId=b33d1793-252b-44ac-9685-3fe56b83c4c9), log id:
> >>>> 1705b611
> >>>> 2014-01-03 14:52:07,152 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.IsoPrefixVDSCommand]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] FINISH, IsoPrefixVDSCommand, return:
> >>>> /rhev/data-center/mnt/vmmgmt:_var_lib_exports_iso/f74f052e-0dc6-456d-af95-248c2227c2e5/images/11111111-1111-1111-1111-111111111111, log id: 1705b611
> >>>> 2014-01-03 14:52:07,170 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] START, 
> >>>> CreateVmVDSCommand(HostName = host02, HostId = 
> >>>> 6dc7fac6-149e-4445-ace1-3c334a24d52a,
> >>>> vmId=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6, vm=VM [TEST2]), log id:
> >>>> 27b504de
> >>>> 2014-01-03 14:52:07,190 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] START, 
> >>>> CreateVDSCommand(HostName = host02, HostId = 
> >>>> 6dc7fac6-149e-4445-ace1-3c334a24d52a,
> >>>> vmId=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6, vm=VM [TEST2]), log id:
> >>>> 6ad0220
> >>>> 2014-01-03 14:52:08,472 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] 
> >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand
> >>>> spiceSslCipherSuite=DEFAULT,memSize=1024,kvmEnable=true,smp=1,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,memGuaranteedSize=1024,pitReinjection=false,nice=0,display=qxl,smartcardEnable=false,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=0,transparentHugePages=true,vmId=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6,devices=[Ljava.util.HashMap;@3692311a,acpiEnable=true,vmName=TEST2,cpuType=SandyBridge,custom={}
> >>>> 2014-01-03 14:52:08,476 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] FINISH, CreateVDSCommand, log id:
> >>>> 6ad0220
> >>>> 2014-01-03 14:52:08,484 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] FINISH, CreateVmVDSCommand, return:
> >>>> WaitForLaunch, log id: 27b504de
> >>>> 2014-01-03 14:52:08,497 INFO
> >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>>> (ajp--127.0.0.1-8702-3) [2ab5cd2] Correlation ID: 2ab5cd2, Job ID:
> >>>> 2913133b-1301-484e-9887-b110841c8078, Call Stack: null, Custom 
> >>>> Event
> >>>> ID: -1, Message: VM TEST2 was started by oliver.albl (Host: host02).
> >>>> 2014-01-03 14:52:14,728 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
> >>>> (DefaultQuartzScheduler_Worker-7) [24696b3e] START, 
> >>>> DestroyVDSCommand(HostName = host02, HostId = 
> >>>> 6dc7fac6-149e-4445-ace1-3c334a24d52a,
> >>>> vmId=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6, force=false, 
> >>>> secondsToWait=0, gracefully=false), log id: 6a95ffd5
> >>>> 2014-01-03 14:52:15,783 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
> >>>> (DefaultQuartzScheduler_Worker-7) [24696b3e] FINISH, 
> >>>> DestroyVDSCommand, log id: 6a95ffd5
> >>>> 2014-01-03 14:52:15,804 INFO
> >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>>> (DefaultQuartzScheduler_Worker-7) [24696b3e] Correlation ID: 
> >>>> null, Call
> >>>> Stack: null, Custom Event ID: -1, Message: VM TEST2 is down. Exit
> >>>> message: Child quit during startup handshake: Input/output error.
> >>>> 2014-01-03 14:52:15,805 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >>>> (DefaultQuartzScheduler_Worker-7) [24696b3e] Running on vds 
> >>>> during rerun failed vm: null
> >>>> 2014-01-03 14:52:15,805 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >>>> (DefaultQuartzScheduler_Worker-7) [24696b3e] vm TEST2 running in 
> >>>> db and not running in vds - add to rerun treatment. vds host02
> >>>> 2014-01-03 14:52:15,808 ERROR
> >>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >>>> (DefaultQuartzScheduler_Worker-7) [24696b3e] Rerun vm 
> >>>> d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6. Called from vds host02
> >>>> 2014-01-03 14:52:15,810 INFO
> >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>>> (pool-6-thread-40) [24696b3e] Correlation ID: 2ab5cd2, Job ID:
> >>>> 2913133b-1301-484e-9887-b110841c8078, Call Stack: null, Custom 
> >>>> Event
> >>>> ID: -1, Message: Failed to run VM TEST2 on Host host02.
> >>>> 2014-01-03 14:52:15,823 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
> >>>> (pool-6-thread-40) [24696b3e] START, 
> >>>> IsVmDuringInitiatingVDSCommand( vmId = 
> >>>> d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6), log id: 35e1eec
> >>>> 2014-01-03 14:52:15,824 INFO
> >>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
> >>>> (pool-6-thread-40) [24696b3e] FINISH, 
> >>>> IsVmDuringInitiatingVDSCommand,
> >>>> return: false, log id: 35e1eec
> >>>> 2014-01-03 14:52:15,858 WARN
> >>>> [org.ovirt.engine.core.bll.RunVmOnceCommand] (pool-6-thread-40) 
> >>>> [24696b3e] CanDoAction of action RunVmOnce failed.
> >>>> Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT
> >>>> 2014-01-03 14:52:15,862 INFO
> >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>>> (pool-6-thread-40) [24696b3e] Correlation ID: 2ab5cd2, Job ID:
> >>>> 2913133b-1301-484e-9887-b110841c8078, Call Stack: null, Custom 
> >>>> Event
> >>>> ID: -1, Message: Failed to run VM TEST2 (User: oliver.albl).
> >>>>
> >>>> Thanks,
> >>>> Oliver
> >>>> -----Original Message-----
> >>>> From: Dafna Ron [mailto:dron at redhat.com]
> >>>> Sent: Friday, January 3, 2014 14:51
> >>>> To: Albl, Oliver
> >>>> Cc: users at ovirt.org
> >>>> Subject: Re: AW: [Users] Host cannot access storage domains
> >>>>
> >>>> Thanks for reporting the issue :)
> >>>>
> >>>> As for the vm, can you please find the error in vdsm.log and in 
> >>>> engine and paste it?
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Dafna
> >>>>
> >>>>
> >>>> On 01/03/2014 01:49 PM, Albl, Oliver wrote:
> >>>>> Dafna,
> >>>>>
> >>>>>        you were right, it seems to be a caching issue. Rebooting the
> >>>>>        host did the job:
> >>>>>
> >>>>> Before Reboot:
> >>>>>
> >>>>> [root at host01 log]# vdsClient -s 0 getStorageDomainsList 
> >>>>> 52cf84ce-6eda-4337-8c94-491d94f5a18d
> >>>>> f404398a-97f9-474c-af2c-e8887f53f688
> >>>>> 7841a1c0-181a-4d43-9a25-b707accb5c4b
> >>>>>
> >>>>> [root at host02 log]# vdsClient -s 0 getStorageDomainsList 
> >>>>> 52cf84ce-6eda-4337-8c94-491d94f5a18d
> >>>>> f404398a-97f9-474c-af2c-e8887f53f688
> >>>>> 7841a1c0-181a-4d43-9a25-b707accb5c4b
> >>>>> 925ee53a-69b5-440f-b145-138ada5b452e
> >>>>>
> >>>>> After Reboot:
> >>>>>
> >>>>> [root at host02 admin]# vdsClient -s 0 getStorageDomainsList 
> >>>>> 52cf84ce-6eda-4337-8c94-491d94f5a18d
> >>>>> f404398a-97f9-474c-af2c-e8887f53f688
> >>>>> 7841a1c0-181a-4d43-9a25-b707accb5c4b
> >>>>>
> >>>>> So now I have both hosts up and running but when I try to start 
> >>>>> a VM on the second host, I receive the following messages in the events pane:
> >>>>>
> >>>>> VM TEST2 was started by oliver.albl (Host: host02)
> >>>>> VM TEST2 is down. Exit message: Child quit during startup handshake: Input/output error.
> >>>>>
> >>>>> Thanks again for your help!
> >>>>> Oliver
> >>>>>
> >>>>> -----Original Message-----
> >>>>> From: Dafna Ron [mailto:dron at redhat.com]
> >>>>> Sent: Friday, January 3, 2014 14:22
> >>>>> To: Albl, Oliver
> >>>>> Cc: users at ovirt.org
> >>>>> Subject: Re: [Users] Host cannot access storage domains
> >>>>>
> >>>>> yes, please attach the vdsm log
> >>>>> also, can you run vdsClient 0 getStorageDomainsList and 
> >>>>> vdsClient 0 getDeviceList on both hosts?
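> >>>>>
> >>>>> e.g. along these lines on each host (only a sketch; -s is needed when vdsm is set up with SSL, and the /tmp file names are just examples):
> >>>>>
> >>>>> vdsClient -s 0 getStorageDomainsList > /tmp/$(hostname).sdlist
> >>>>> vdsClient -s 0 getDeviceList > /tmp/$(hostname).devlist
> >>>>> diff /tmp/host01.sdlist /tmp/host02.sdlist   # after copying one set of files across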
> >>>>>
> >>>>> It might be a cache issue, so can you please restart the host 
> >>>>> and if it helps attach output before and after the reboot?
> >>>>>
> >>>>> Thanks,
> >>>>>
> >>>>> Dafna
> >>>>>
> >>>>>
> >>>>> On 01/03/2014 01:12 PM, Albl, Oliver wrote:
> >>>>>> Hi,
> >>>>>>
> >>>>>> I am starting with oVirt 3.3.2 and I have an issue adding a 
> >>>>>> host to a cluster.
> >>>>>>
> >>>>>> I am using oVirt Engine Version 3.3.2-1.el6
> >>>>>>
> >>>>>> There is a cluster with one host (installed with the oVirt Node - 3.0.3 - 1.1.fc19 ISO image) up and running.
> >>>>>>
> >>>>>> I installed a second host using the same ISO image.
> >>>>>>
> >>>>>> I approved the host in the cluster.
> >>>>>>
> >>>>>> When I try to activate the second host, I receive the following 
> >>>>>> messages in the events pane:
> >>>>>>
> >>>>>> State was set to Up for host host02.
> >>>>>>
> >>>>>> Host host02 reports about one of the Active Storage Domains as 
> >>>>>> Problematic.
> >>>>>>
> >>>>>> Host host02 cannot access one of the Storage Domains attached 
> >>>>>> to the Data Center Test303. Setting Host state to Non-Operational.
> >>>>>>
> >>>>>> Failed to connect Host host02 to Storage Pool Test303
> >>>>>>
> >>>>>> There are 3 FC Storage Domains configured and visible to both hosts.
> >>>>>>
> >>>>>> multipath -ll shows all LUNs on both hosts.
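> >>>>>>
> >>>>>> A quick way to double-check visibility on each host (the vgs line is only a suggestion; block storage domains appear as LVM VGs named by the domain UUID):
> >>>>>>
> >>>>>> multipath -ll   # same LUNs / WWIDs on both hosts
> >>>>>> vgs             # the storage domain VGs should show up by UUID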
> >>>>>>
> >>>>>> The engine.log reports the following about every five minutes:
> >>>>>>
> >>>>>> 2014-01-03 13:50:15,408 ERROR
> >>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> >>>>>> (pool-6-thread-44) Domain 7841a1c0-181a-4d43-9a25-b707accb5c4b:
> >>>>>> LUN_105 check timeot 69.7 is too big
> >>>>>>
> >>>>>> 2014-01-03 13:50:15,409 ERROR
> >>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> >>>>>> (pool-6-thread-44) Domain 52cf84ce-6eda-4337-8c94-491d94f5a18d:
> >>>>>> LUN_103 check timeot 59.6 is too big
> >>>>>>
> >>>>>> 2014-01-03 13:50:15,410 ERROR
> >>>>>> [org.ovirt.engine.core.bll.InitVdsOnUpCommand] 
> >>>>>> (pool-6-thread-44) Storage Domain LUN_105 of pool Test303 is in 
> >>>>>> problem in host
> >>>>>> host02
> >>>>>>
> >>>>>> 2014-01-03 13:50:15,411 ERROR
> >>>>>> [org.ovirt.engine.core.bll.InitVdsOnUpCommand] 
> >>>>>> (pool-6-thread-44) Storage Domain LUN_103 of pool Test030 is in 
> >>>>>> problem in host
> >>>>>> host02
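> >>>>>>
> >>>>>> These "check timeot ... is too big" lines appear to come from the storage domain monitoring statistics vdsm reports to the engine; the raw per-domain values can be looked at on the host with something like the following (a sketch, assuming this vdsClient verb is available in this version):
> >>>>>>
> >>>>>> vdsClient -s 0 repoStats   # per-domain lastCheck / delay as seen by vdsm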
> >>>>>>
> >>>>>> Please let me know if there are any log files I should attach.
> >>>>>>
> >>>>>> Thank you for your help!
> >>>>>>
> >>>>>> All the best,
> >>>>>>
> >>>>>> Oliver Albl
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>> --
> >>>>> Dafna Ron
> >>>> --
> >>>> Dafna Ron
> >>> --
> >>> Dafna Ron
> >> --
> >> Dafna Ron
> >>
> >
> > --
> > Dafna Ron
> 
> 
> --
> Dafna Ron
> 

