I was able to restart the engine and the two hosts; all came back up again.
Now, when I start the VM, it remains in a paused state:
- start VM (about 21:54 today)
  it starts and goes into paused mode (arrow icon near the VM)
From the image
https://docs.google.com/file/d/0BwoPbcrMv8mvRXlaa19sdFpmQ0E/edit?usp=sharing
you can see that the execute action apparently finishes at 21:54, but the
VM stays in the paused state.
- if I try other actions on the same VM, there is no message preventing
  me from doing so, but it stays in paused mode; see the several actions
  I attempted to resolve the situation
- at 21:58 the host becomes unresponsive: no response from the GUI, no
  network ping from the engine, and if I go to its console I see the
  login prompt but I am not able to log in...
- power off ovnode01
  the icon near the VM now becomes a question mark (?)
- power on ovnode01
  the VM goes into stopped state (red square)
  ovnode01 rejoins the cluster
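For completeness, among the things one could try from the host itself is inspecting and resuming the paused VM at the VDSM/libvirt level. This is only a sketch of what I would attempt; the UUID is the one of my C6 VM, and I have not verified that it changes anything in this situation:

```shell
# Read-only view of the domains as libvirt sees them
# (-r avoids needing SASL credentials on an oVirt node)
virsh -r list --all

# Ask the local VDSM daemon for its view of the VMs
vdsClient -s 0 list table

# Try to resume the paused VM through VDSM
# (409c5dbe-... is the UUID of my C6 VM)
vdsClient -s 0 continue 409c5dbe-5e70-40de-bf73-46ef484ea2d7
```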
vdsm log in gzip format, starting today before the start of the VM:
https://docs.google.com/file/d/0BwoPbcrMv8mvXzY2eEcwR0VXazQ/edit?usp=sharing
engine.log in gzip format:
https://docs.google.com/file/d/0BwoPbcrMv8mvU1RuLVRVYVZ0SXM/edit?usp=sharing
PS: at the moment no fencing action is set up. Could I configure any
fence agent for hosts virtualized inside VMware?
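(Partially answering myself, and only as a guess: the fence-agents package ships fence_vmware_soap, which fences a VM by talking to vCenter/ESXi over the SOAP API, so it should be usable for hosts that are VMware guests. A manual test from another host might look like the sketch below; the hostname, credentials and port name are placeholders, not values from my setup:)

```shell
# Untested sketch using fence_vmware_soap from the fence-agents package.
# -z: use SSL; -a: vCenter/ESXi address; -l/-p: API credentials;
# -n: name of the VM ("plug") to act on; -o: action to perform.
fence_vmware_soap -z -a vcenter.example.com \
    -l fence_user -p secret \
    -n ovnode01 -o status
```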
From a Gluster point of view, on ovnode01 under /var/log/glusterfs:
[root@ovnode01 glusterfs]# ls -lrt
total 2008
drwxr-xr-x. 2 root root 4096 Sep 25 00:05 bricks
-rw-------. 1 root root 59038 Sep 26 22:09 nfs.log
-rw-------. 1 root root 51992 Sep 26 22:09 glustershd.log
-rw-------. 1 root root   40230 Sep 26 22:09 rhev-data-center-mnt-glusterSD-ovnode01:gv01.log
-rw-------. 1 root root 422757 Sep 26 22:47 etc-glusterfs-glusterd.vol.log
-rw-------. 1 root root 1449411 Sep 26 22:47 cli.log
In etc-glusterfs-glusterd.vol.log I see several lines like this:
[2013-09-26 20:19:53.450793] I [glusterd-handler.c:1007:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
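In case it helps, the state of the volume itself can be queried with the standard gluster CLI (gv01 being my volume); I have not yet run these after the incident:

```shell
# Basic volume configuration and brick/process status
gluster volume info gv01
gluster volume status gv01

# Any files pending self-heal on the replica
gluster volume heal gv01 info
```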
The qemu process (re-wrapped here for readability):
qemu      4565     1  0 22:21 ?        00:00:09 \
/usr/bin/qemu-system-x86_64 -machine accel=kvm -name C6 -S \
  -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 \
  -smp 1,sockets=1,cores=1,threads=1 \
  -uuid 409c5dbe-5e70-40de-bf73-46ef484ea2d7 \
  -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-3,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7 \
  -no-user-config -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=2013-09-26T20:21:00,driftfix=slew -no-shutdown \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 \
  -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 \
  -drive file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= \
  -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 \
  -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 \
  -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3,bootindex=3 \
  -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait \
  -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm \
  -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait \
  -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 \
  -chardev spicevmc,id=charchannel2,name=vdagent \
  -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 \
  -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on \
  -k en-us -vga qxl -global qxl-vga.ram_size=67108864 \
  -global qxl-vga.vram_size=67108864 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
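One detail I notice in the command line above: qemu was started with -S, which means it comes up with the CPUs paused and waits for an explicit "cont" from libvirt/VDSM. That is normal at VM creation, so the question is rather why the resume never arrives. If I understand correctly, the state and the reason libvirt records for it can be checked read-only with:

```shell
# Why does libvirt think the C6 domain is paused?
# --reason prints the pause reason (e.g. user, migration, ioerror)
virsh -r domstate --reason C6
```

An "ioerror" reason here would match the werror=stop,rerror=stop options on the gluster-backed disk, i.e. the VM being paused on a storage I/O error.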
The VM shows as started in the GUI events, but its icon remains paused.
See the image:
https://docs.google.com/file/d/0BwoPbcrMv8mvZ1RnUkg4aVhlckk/edit?usp=sharing
Gianluca