On Wed, Sep 10, 2014 at 7:01 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Wed, Sep 10, 2014 at 05:17:35PM +0200, Sven Kieske wrote:
>
>
> On 10/09/14 17:05, Nathanaël Blanchet wrote:
> > Hello everyone,
> >
> > I am experiencing the same error on all of my hosts with vdsm 4.14.11.2 since I upgraded to engine 3.4.x.
> > I can't boot any new VMs, and the issue is the duplicate drive ID.
> > If I use "Run Once" instead of "Run", the VM boots fine.
> > I've tested the suggested workaround and it works, but it is not convenient...
> > What about this nasty bug? Will it be resolved in 3.4.4?
> > Thanks.
>
> I'm not aware of this bug being reported at bugzilla.redhat.com.
>
> I may have missed it because there are plenty of bugs, but if you
> don't report the bug there it's not very likely to get fixed.

I'm not 100% sure that it's the same issue, but I have cloned

    Bug 1140323 - fails to run VM - duplicate ID

to 3.4.4. Let's see what the virt developers say.


BTW: the same problem is still present in 3.5 RC2.
I just tried with a CentOS 6.5 ISO image.
If you "erroneously" (?) attach a CD ISO during the initial VM creation phase and then start the VM, you get the same problem. You can then use Run Once and install the OS.
But afterwards you continue to have the same problem, even if you edit the VM and specify that no CD is attached at all.
The VM can only be started again with the Run Once method.

If instead you create the VM without specifying any CD and then use the Run Once method to attach the ISO and install the VM, you get no problem at all with CD ISOs afterwards.

It is quite annoying not to be able to recover from this situation.
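
For anyone who wants to double check what the engine thinks is attached before retrying, something along these lines should work. This is a rough, untested sketch: the engine URL, credentials and file name are placeholders, and I'm assuming from memory that the 3.x REST API exposes the attached ISO under the /api/vms/<id>/cdroms sub-collection, so adapt it to your setup:

# Rough sketch (untested): ask the engine which ISO it believes is attached
# to the VM, to compare with what qemu-kvm is actually started with.
# ENGINE, AUTH and VM_ID are placeholders for your own environment.
import requests

ENGINE = "https://engine.example.com/api"        # placeholder engine URL
AUTH = ("admin@internal", "password")            # placeholder credentials
VM_ID = "a858f04a-e4f4-49e8-a097-8ba8390b4d25"   # the VM from the logs below

resp = requests.get("%s/vms/%s/cdroms" % (ENGINE, VM_ID),
                    auth=AUTH, verify=False,
                    headers={"Accept": "application/xml"})
resp.raise_for_status()
# an empty <file id=""/> element would mean the engine thinks no ISO is attached
print(resp.text)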

In engine.log:
2014-09-16 00:22:49,683 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-17) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM c65 is down with error. Exit message: internal error process exited while connecting to monitor: qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-0' for drive
.
2014-09-16 00:22:49,684 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-17) Running on vds during rerun failed vm: null
2014-09-16 00:22:49,691 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-17) VM c65 (a858f04a-e4f4-49e8-a097-8ba8390b4d25) is running in db and not running in VDS ovnode04
2014-09-16 00:22:49,710 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-17) Rerun vm a858f04a-e4f4-49e8-a097-8ba8390b4d25. Called from vds ovnode04
2014-09-16 00:22:49,717 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-12) Correlation ID: 7599780, Job ID: 8d63e66b-b26b-4344-85e0-ab72bf53fa1d, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM c65 on Host ovnode04.
2014-09-16 00:22:49,722 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-12) Lock Acquired to object EngineLock [exclusiveLocks= key: a858f04a-e4f4-49e8-a097-8ba8390b4d25 value: VM
, sharedLocks= ]

In the VM log on the hypervisor, under /var/log/libvirt/qemu/vm.log:

2014-09-15 22:22:49.358+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name c65 -S -M rhel6.5.0 -cpu Nehalem -enable-kvm -m 2048 -realtime mlock=off -smp 2,maxcpus=32,sockets=16,cores=2,threads=1 -uuid a858f04a-e4f4-49e8-a097-8ba8390b4d25 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=6-5.el6.centos.11.2,serial=9B1B2F84-3CB9-11DE-B5A1-001517B493E8,uuid=a858f04a-e4f4-49e8-a097-8ba8390b4d25 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/c65.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-09-16T00:22:49,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/4512e567-f94e-476a-a050-6cd0a15e260a/346732c8-e1e9-49de-9ca4-63f4477ef6dd/images/c0a1e1a3-af66-482b-96b4-e8ec928053f7/c968201d-d3fe-4c29-904e-554004a43e3e,if=none,id=drive-scsi0-0-0-0,format=raw,serial=c0a1e1a3-af66-482b-96b4-e8ec928053f7,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:01:a4:a3:42:00,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/a858f04a-e4f4-49e8-a097-8ba8390b4d25.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/a858f04a-e4f4-49e8-a097-8ba8390b4d25.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5902,tls-port=5903,addr=192.168.1.74,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=33554432 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-0' for drive
2014-09-15 22:22:49.633+0000: shutting down
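
The duplicate is easy to spot mechanically: the very same "-drive if=none,media=cdrom,id=drive-ide0-1-0,..." argument appears twice in the command line above. A throwaway Python check along these lines (assuming the command line has been copied from the log into a file, here hypothetically named qemu-cmdline.txt) flags any drive id seen more than once:

import re

# placeholder: file containing the qemu-kvm command line copied from the log above
cmdline = open("qemu-cmdline.txt").read()

counts = {}
# each -drive argument carries an id=<name> field; \S*? keeps the match
# inside a single argument, so -device ids are not counted
for drive_id in re.findall(r'-drive\s+\S*?id=([^,\s]+)', cmdline):
    counts[drive_id] = counts.get(drive_id, 0) + 1

for drive_id, n in counts.items():
    if n > 1:
        print("duplicate drive id %r appears %d times" % (drive_id, n))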

In vdsm.log
Thread-159167::ERROR::2014-09-16 00:22:49,849::vm::2326::vm.Vm::(_startUnderlyingVm) vmId=`a858f04a-e4f4-49e8-a097-8ba8390b4d25`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2266, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3368, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2665, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-0' for drive
Thread-159167::DEBUG::2014-09-16 00:22:49,851::vm::2838::vm.Vm::(setDownStatus) vmId=`a858f04a-e4f4-49e8-a097-8ba8390b4d25`::Changed state to Down: internal error process exited while connecting to monitor: qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-0' for drive
 (code=1)
JsonRpc (StompReactor)::DEBUG::2014-09-16 00:22:50,554::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-09-16 00:22:50,556::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-159169::DEBUG::2014-09-16 00:22:50,559::task::595::Storage.TaskManager.Task::(_updateState) Task=`ae3d407b-6c40-455e-a2d7-5ef55d95d1c5`::moving from state init -> state preparing
Thread-159169::INFO::2014-09-16 00:22:50,559::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-159169::INFO::2014-09-16 00:22:50,559::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'3b23161a-18a3-4cfa-bc78-f4fe2052b19e': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000441651', 'lastCheck': '8.2', 'valid': True}, '346732c8-e1e9-49de-9ca4-63f4477ef6dd': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000640336', 'lastCheck': '3.5', 'valid': True}}
Thread-159169::DEBUG::2014-09-16 00:22:50,559::task::1191::Storage.TaskManager.Task::(prepare) Task=`ae3d407b-6c40-455e-a2d7-5ef55d95d1c5`::finished: {u'3b23161a-18a3-4cfa-bc78-f4fe2052b19e': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000441651', 'lastCheck': '8.2', 'valid': True}, '346732c8-e1e9-49de-9ca4-63f4477ef6dd': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000640336', 'lastCheck': '3.5', 'valid': True}}
Thread-159169::DEBUG::2014-09-16 00:22:50,559::task::595::Storage.TaskManager.Task::(_updateState) Task=`ae3d407b-6c40-455e-a2d7-5ef55d95d1c5`::moving from state preparing -> state finished
Thread-159169::DEBUG::2014-09-16 00:22:50,559::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-159169::DEBUG::2014-09-16 00:22:50,559::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-159169::DEBUG::2014-09-16 00:22:50,560::task::993::Storage.TaskManager.Task::(_decref) Task=`ae3d407b-6c40-455e-a2d7-5ef55d95d1c5`::ref 0 aborting False
Thread-159169::DEBUG::2014-09-16 00:22:50,563::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
...
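
From the vdsm side the failure is just libvirt's virDomainCreateXML() refusing the domain, so the duplicate cdrom device is presumably already present in the domain XML that vdsm generates (if I remember correctly, vdsm logs that XML a few lines earlier in vdsm.log, right before createXML is called). A quick way to confirm, once the <domain> block has been saved to a file (hypothetically named domxml.xml here):

# Rough check: count the cdrom <disk> elements in the domain XML that vdsm
# hands to libvirt; two entries with the same target would explain the
# duplicate drive-ide0-1-0 id on the qemu-kvm command line.
import xml.etree.ElementTree as ET

tree = ET.parse("domxml.xml")
devices = tree.getroot().find("devices")
cdroms = [d for d in devices.findall("disk") if d.get("device") == "cdrom"]

print("found %d cdrom disk element(s)" % len(cdroms))
for d in cdroms:
    target = d.find("target")
    print("  dev=%s bus=%s" % (target.get("dev"), target.get("bus")))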

Gianluca