[Users] vm runs on host 1 and clone from snapshot runs on host 2?

Hello,
while I'm testing the fix for 3.2.1 and clone from snapshot in my two-node cluster, I noticed that the VM zensrv runs on host f18ovn03. I created a snapshot and then cloned from the snapshot, and the qemu-img convert process is running on the other node. Is this desired, expected, managed? If so, it's great...

[g.cecchi@f18ovn03 ~]$ ps -ef|grep [z]ensrv
qemu 15811 1 7 Mar19 ? 04:29:28 /usr/bin/qemu-kvm -name zensrv -S -M pc-0.14 -cpu Opteron_G2 -enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid c0a43bef-7c9d-4170-bd9c-63497e61d3fc -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=18-1,serial=34353439-3036-435A-4A38-303330393338,uuid=c0a43bef-7c9d-4170-bd9c-63497e61d3fc -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zensrv.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-03-19T16:30:10,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/f8eb4d4c-9aae-44b8-9123-73f3182dc4dc,if=none,id=drive-virtio-disk0,format=qcow2,serial=01488698-6420-4a32-9095-cfed1ff8f4bf,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:43:d9:df,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/zensrv.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/zensrv.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5900,tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir -k en-us -vga qxl -global qxl-vga.vram_size=67108864 -incoming tcp:0.0.0.0:49152 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6

[g.cecchi@f18ovn01 ~]$ ps -ef|grep [c]onvert
vdsm 25609 3141 14 08:30 ? 00:00:16 /usr/bin/qemu-img convert -t none -f qcow2 /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/f8eb4d4c-9aae-44b8-9123-73f3182dc4dc -O raw /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/3fb66ba1-cfcb-4341-8960-46f0e8cf6e83/images/8fe906ef-db2e-497a-8b6b-6b00de91f8fe/b61cd69e-a556-4530-b00b-1eaf8afd15bb

Gianluca

Hi Gianluca,
You can set the VM to run on a specific host by editing the VM, choosing the Host tab and selecting "Run On a specific host".
Generally, you can configure your cluster to use three different types of selection: None, Even Distributed and Power Saving. This can be set in the Cluster Policy tab when you edit the cluster. By default the cluster uses the None policy, which actually runs the even distributed algorithm when you run a VM. (A sketch of doing the pinning via the REST API follows the quoted message below.)
Regards,
Maor

On 03/22/2013 09:38 AM, Gianluca Cecchi wrote:
Hello, while I'm testing the fix for 3.2.1 and clone from snapshot in my two-node cluster, I noticed that the
VM zensrv runs on host f18ovn03. I created a snapshot and then cloned from the snapshot, and the qemu-img convert process is running on the other node. Is this desired, expected, managed? If so, it's great...
[g.cecchi@f18ovn03 ~]$ ps -ef|grep [z]ensrv qemu 15811 1 7 Mar19 ? 04:29:28 /usr/bin/qemu-kvm -name zensrv -S -M pc-0.14 -cpu Opteron_G2 -enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid c0a43bef-7c9d-4170-bd9c-63497e61d3fc -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=18-1,serial=34353439-3036-435A-4A38-303330393338,uuid=c0a43bef-7c9d-4170-bd9c-63497e61d3fc -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zensrv.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-03-19T16:30:10,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/f8eb4d4c-9aae-44b8-9123-73f3182dc4dc,if=none,id=drive-virtio-disk0,format=qcow2,serial=01488698-6420-4a32-9095-cfed1ff8f4bf,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:43:d9:df,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/zensrv.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/zensrv.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5900,tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir -k en-us -vga qxl -global qxl-vga.vram_size=67108864 -incoming tcp:0.0.0.0:49152 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
[g.cecchi@f18ovn01 ~]$ ps -ef|grep [c]onvert vdsm 25609 3141 14 08:30 ? 00:00:16 /usr/bin/qemu-img convert -t none -f qcow2 /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/f8eb4d4c-9aae-44b8-9123-73f3182dc4dc -O raw /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/3fb66ba1-cfcb-4341-8960-46f0e8cf6e83/images/8fe906ef-db2e-497a-8b6b-6b00de91f8fe/b61cd69e-a556-4530-b00b-1eaf8afd15bb
Gianluca
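To make the pinning option concrete, here is a minimal sketch of setting "Run On a specific host" through the oVirt 3.x REST API instead of the webadmin UI. The engine URL, credentials and UUIDs are placeholders, and the exact placement_policy/affinity element names are an assumption based on the 3.x API; adjust to your setup.

# Hypothetical engine endpoint and credentials; replace with real values.
ENGINE=https://engine.example.com/api
AUTH='admin@internal:secret'

# Find the VM and note its id (assumes the standard ?search= query syntax).
curl -k -u "$AUTH" "$ENGINE/vms?search=name%3Dzensrv"

# Pin the VM to one host; affinity "pinned" should correspond to
# "Run On a specific host" in Edit VM -> Host (assumption).
curl -k -u "$AUTH" -X PUT -H 'Content-Type: application/xml' \
  -d '<vm><placement_policy><host id="HOST_UUID"/><affinity>pinned</affinity></placement_policy></vm>' \
  "$ENGINE/vms/VM_UUID"

Note that, as the rest of the thread makes clear, this only controls where the VM itself runs; it does not influence which host performs storage operations such as the clone copy.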

On Sun, Mar 24, 2013 at 9:05 AM, Maor Lipchuk wrote:
Hi Gianluca, You can set the VM to run on a specific host by editing the VM, choosing the Host tab and selecting "Run On a specific host".
Generally, you can configure your cluster to use three different types of selection: None, Even Distributed and Power Saving. This can be set in the Cluster Policy tab when you edit the cluster. By default the cluster uses the None policy, which actually runs the even distributed algorithm when you run a VM.
Yes, I know this very well. My observation was about something else:
the disk clone operation itself, which is one step of the "clone from snapshot" activity, i.e. the process
/usr/bin/qemu-img convert -t none -f qcow2 ....
that, starting from the disk of VM1, creates the disk of VM2 (its clone).
I presumed that if host1 was in charge of VM1, host1 would also be in charge of making the disk clone. Instead, I observed that the disk clone operation was handled by host2...
I hope it is clearer now....
It has nothing to do with which host will then run the clone.
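For readers skimming the thread, the copy step being described has the generic shape below; the paths are shortened to placeholders, but the flags are the same ones visible in the ps output above.

# -t none  : write the destination with host page cache disabled (direct I/O)
# -f qcow2 : format of the source volume being read
# -O raw   : format of the destination volume created for the clone
qemu-img convert -t none -f qcow2 \
  /rhev/data-center/<pool-uuid>/<src-domain>/images/<img-uuid>/<vol-uuid> \
  -O raw \
  /rhev/data-center/<pool-uuid>/<dst-domain>/images/<img-uuid>/<vol-uuid>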

Hi Gianluca,
The host which creates the disks, or performs any other storage-related allocation operation, is always the SPM (Storage Pool Manager) host in the DC. (A sketch of how to check which host currently holds the SPM role follows the quoted message below.)
Regards,
Maor

On 03/24/2013 10:30 AM, Gianluca Cecchi wrote:
On Sun, Mar 24, 2013 at 9:05 AM, Maor Lipchuk wrote:
Hi Gianluca, You can set the VM to run on a specific host by editing the VM, choosing the Host tab and selecting "Run On a specific host".
Generally, you can configure your cluster to use three different types of selection: None, Even Distributed and Power Saving. This can be set in the Cluster Policy tab when you edit the cluster. By default the cluster uses the None policy, which actually runs the even distributed algorithm when you run a VM.
Yes, I know this very well. My observation was about something else:
the disk clone operation itself, which is one step of the "clone from snapshot" activity, i.e. the process
/usr/bin/qemu-img convert -t none -f qcow2 ....
that, starting from the disk of VM1, creates the disk of VM2 (its clone).
I presumed that if host1 was in charge of VM1, host1 would also be in charge of making the disk clone. Instead, I observed that the disk clone operation was handled by host2...
I hope it is clearer now....
It has nothing to do with which host will then run the clone.
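This explains the observation above: the qemu-img convert ran on f18ovn01 presumably because that host held the SPM role at the time, independently of where zensrv was running. A minimal way to check which host currently is the SPM, assuming the vdsClient tool shipped with vdsm of that era and using the storage pool UUID that appears as the first path component under /rhev/data-center in the commands above, is something like:

# Run on a host in the data center; the UUID is the storage pool UUID
# (5849b030-... in the paths quoted above). The verb name and output
# format are an assumption for vdsm of the oVirt 3.2 era.
vdsClient -s 0 getSpmStatus 5849b030-626e-47cb-ad90-3ce782d831b3
# The SPM host should report something like spmStatus = SPM,
# while the other hosts report spmStatus = Free.

The webadmin Hosts tab also marks which host currently holds the SPM role.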
Participants (2): Gianluca Cecchi, Maor Lipchuk