<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Sep 10, 2014 at 7:01 PM, Dan Kenigsberg <span dir="ltr">&lt;<a href="mailto:danken@redhat.com" target="_blank">danken@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="">On Wed, Sep 10, 2014 at 05:17:35PM +0200, Sven Kieske wrote:<br>
&gt;<br>
&gt;<br>
&gt; On 10/09/14 17:05, Nathanaël Blanchet wrote:<br>
&gt; &gt; Hello everyone,<br>
&gt; &gt;<br>
&gt; &gt; I experiment the same error with all of my host with vdsm 4.14.11.2 since I upgraded to engine 3.4.x.<br>
&gt; &gt; I can&#39;t boot any new VMs, and the issue is about the duplicate ID drive.<br>
&gt; &gt; If I run once instead of &quot;run&quot;, the vm boots fine.<br>
&gt; &gt; I&#39;ve tested the suggested workaround and it works but it is not comfortable...<br>
&gt; &gt; What about this nasty bug? will it be resolved with 3.4.4?<br>
&gt; &gt; Thanks.<br>
&gt;<br>
&gt; I&#39;m not aware of this bug reported at <a href="http://bugzilla.redhat.com" target="_blank">bugzilla.redhat.com</a><br>
&gt;<br>
&gt; I may have missed it because there are plenty of bugs, but if you<br>
&gt; don&#39;t report the bug there it&#39;s not very likely to get fixed.<br>
<br>
</span>I&#39;m not 100% that it&#39;s the same issue, but I have cloned<br>
<br>
    Bug 1140323 - fails to run VM - duplicate ID<br>
<br>
> to 3.4.4. Let's see what the virt developers say.

BTW: the same problem is still present in 3.5 rc2.
I just tried with a CentOS 6.5 ISO image.
If you "erroneously" (?) attach a CD ISO during the initial VM creation phase and then start the VM, you get the same problem. You can then use "Run Once" and you are able to install the OS.
But after that you continue to have the same problem, even if you edit the VM and specify that no CD is present at all.
You are only able to start it again with the "Run Once" method.
If instead you create the VM without specifying any CD and then use the "Run Once" method to attach the ISO and install the OS, you get no problem at all with CD ISOs afterwards.
It is quite annoying not being able to recover from this situation.
In engine.log:

2014-09-16 00:22:49,683 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-17) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM c65 is down with error. Exit message: internal error process exited while connecting to monitor: qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-0' for drive
.
2014-09-16 00:22:49,684 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-17) Running on vds during rerun failed vm: null
2014-09-16 00:22:49,691 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-17) VM c65 (a858f04a-e4f4-49e8-a097-8ba8390b4d25) is running in db and not running in VDS ovnode04
2014-09-16 00:22:49,710 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-17) Rerun vm a858f04a-e4f4-49e8-a097-8ba8390b4d25. Called from vds ovnode04
2014-09-16 00:22:49,717 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-12) Correlation ID: 7599780, Job ID: 8d63e66b-b26b-4344-85e0-ab72bf53fa1d, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM c65 on Host ovnode04.
2014-09-16 00:22:49,722 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-12) Lock Acquired to object EngineLock [exclusiveLocks= key: a858f04a-e4f4-49e8-a097-8ba8390b4d25 value: VM
, sharedLocks= ]
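Just in case it is useful to someone else: to see which VMs are hitting this without checking them one by one in the webadmin, a throwaway script can scan engine.log for the audit message format shown above. This is only a rough sketch in plain Python (nothing from the engine/vdsm code), and it assumes the default log path /var/log/ovirt-engine/engine.log, so adjust if yours differs:

#!/usr/bin/env python
# List VMs that failed to start with the "Duplicate ID ... for drive" error,
# based on the AuditLogDirector message format visible in the excerpt above.
import re

ENGINE_LOG = '/var/log/ovirt-engine/engine.log'  # default path, adjust if needed
PATTERN = re.compile(
    r"Message: VM (\S+) is down with error\. "
    r".*Duplicate ID '([^']+)' for drive")

def affected_vms(path=ENGINE_LOG):
    hits = {}
    with open(path) as logfile:
        for line in logfile:
            match = PATTERN.search(line)
            if match:
                # VM name -> the drive id qemu complained about
                hits[match.group(1)] = match.group(2)
    return hits

if __name__ == '__main__':
    for vm, drive_id in sorted(affected_vms().items()):
        print('%s: duplicate drive id %s' % (vm, drive_id))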
In VM.log on the hypervisor, under /var/log/libvirt/qemu/vm.log:

2014-09-15 22:22:49.358+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name c65 -S -M rhel6.5.0 -cpu Nehalem -enable-kvm -m 2048 -realtime mlock=off -smp 2,maxcpus=32,sockets=16,cores=2,threads=1 -uuid a858f04a-e4f4-49e8-a097-8ba8390b4d25 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=6-5.el6.centos.11.2,serial=9B1B2F84-3CB9-11DE-B5A1-001517B493E8,uuid=a858f04a-e4f4-49e8-a097-8ba8390b4d25 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/c65.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-09-16T00:22:49,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/4512e567-f94e-476a-a050-6cd0a15e260a/346732c8-e1e9-49de-9ca4-63f4477ef6dd/images/c0a1e1a3-af66-482b-96b4-e8ec928053f7/c968201d-d3fe-4c29-904e-554004a43e3e,if=none,id=drive-scsi0-0-0-0,format=raw,serial=c0a1e1a3-af66-482b-96b4-e8ec928053f7,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:01:a4:a3:42:00,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/a858f04a-e4f4-49e8-a097-8ba8390b4d25.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/a858f04a-e4f4-49e8-a097-8ba8390b4d25.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5902,tls-port=5903,addr=192.168.1.74,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=33554432 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-0' for drive
2014-09-15 22:22:49.633+0000: shutting down
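To make the duplication easier to spot in that wall of text: the cdrom -drive/-device pair is passed twice with the same ids (drive-ide0-1-0 / ide0-1-0), which is exactly what qemu refuses. A little helper (again just plain string parsing of the log, no vdsm involved) that reports any id= value used more than once in a log like the one above:

#!/usr/bin/env python
# Report qemu object ids (id=...) that appear more than once in a
# /var/log/libvirt/qemu/<vm>.log file, e.g. the c65 log quoted above.
import re
import sys

def duplicate_ids(text):
    """Return the id= values that occur more than once in the given text."""
    counts = {}
    # lookbehind avoids matching things like scsi-id=0 or uuid=...
    for qemu_id in re.findall(r'(?<![\w-])id=([A-Za-z0-9._-]+)', text):
        counts[qemu_id] = counts.get(qemu_id, 0) + 1
    return sorted(i for i, n in counts.items() if n > 1)

if __name__ == '__main__':
    with open(sys.argv[1]) as logfile:
        dupes = duplicate_ids(logfile.read())
    print('duplicate ids: %s' % (', '.join(dupes) or 'none'))

On the excerpt above it flags drive-ide0-1-0 and ide0-1-0 (the repeated error line at the bottom adds one more hit, and a real log contains several boots, so treat it as a rough indicator rather than an exact count).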
In vdsm.log:

Thread-159167::ERROR::2014-09-16 00:22:49,849::vm::2326::vm.Vm::(_startUnderlyingVm) vmId=`a858f04a-e4f4-49e8-a097-8ba8390b4d25`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2266, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3368, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2665, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-0' for drive
Thread-159167::DEBUG::2014-09-16 00:22:49,851::vm::2838::vm.Vm::(setDownStatus) vmId=`a858f04a-e4f4-49e8-a097-8ba8390b4d25`::Changed state to Down: internal error process exited while connecting to monitor: qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-0' for drive
 (code=1)
JsonRpc (StompReactor)::DEBUG::2014-09-16 00:22:50,554::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-09-16 00:22:50,556::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-159169::DEBUG::2014-09-16 00:22:50,559::task::595::Storage.TaskManager.Task::(_updateState) Task=`ae3d407b-6c40-455e-a2d7-5ef55d95d1c5`::moving from state init -> state preparing
Thread-159169::INFO::2014-09-16 00:22:50,559::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-159169::INFO::2014-09-16 00:22:50,559::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'3b23161a-18a3-4cfa-bc78-f4fe2052b19e': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000441651', 'lastCheck': '8.2', 'valid': True}, '346732c8-e1e9-49de-9ca4-63f4477ef6dd': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000640336', 'lastCheck': '3.5', 'valid': True}}
Thread-159169::DEBUG::2014-09-16 00:22:50,559::task::1191::Storage.TaskManager.Task::(prepare) Task=`ae3d407b-6c40-455e-a2d7-5ef55d95d1c5`::finished: {u'3b23161a-18a3-4cfa-bc78-f4fe2052b19e': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000441651', 'lastCheck': '8.2', 'valid': True}, '346732c8-e1e9-49de-9ca4-63f4477ef6dd': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000640336', 'lastCheck': '3.5', 'valid': True}}
Thread-159169::DEBUG::2014-09-16 00:22:50,559::task::595::Storage.TaskManager.Task::(_updateState) Task=`ae3d407b-6c40-455e-a2d7-5ef55d95d1c5`::moving from state preparing -> state finished
Thread-159169::DEBUG::2014-09-16 00:22:50,559::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-159169::DEBUG::2014-09-16 00:22:50,559::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-159169::DEBUG::2014-09-16 00:22:50,560::task::993::Storage.TaskManager.Task::(_decref) Task=`ae3d407b-6c40-455e-a2d7-5ef55d95d1c5`::ref 0 aborting False
Thread-159169::DEBUG::2014-09-16 00:22:50,563::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
...

Gianluca