[Users] Panic on FreeBSD guest with >1 CPU

Dor Laor dlaor at redhat.com
Thu Nov 29 08:16:46 UTC 2012


On 11/29/2012 09:55 AM, Itamar Heim wrote:
> On 11/29/2012 01:52 AM, Karli Sjöberg wrote:
>>
>> On 28 Nov 2012, at 17:07, Itamar Heim wrote:
>>
>>> On 11/28/2012 08:39 AM, Karli Sjöberg wrote:
>>>>
>>>> On 28 Nov 2012, at 09:19, Itamar Heim wrote:
>>>>
>>>>> On 11/28/2012 01:52 AM, Karli Sjöberg wrote:
>>>>>>
>>>>>> On 27 Nov 2012, at 16:01, wrote:
>>>>>>
>>>>>>>
>>>>>>> On 27 Nov 2012, at 15:59, Itamar Heim wrote:
>>>>>>>
>>>>>>>> On 11/27/2012 09:56 AM, Karli Sjöberg wrote:
>>>>>>>>>
>>>>>>>>> On 27 Nov 2012, at 15:42, Itamar Heim wrote:
>>>>>>>>>
>>>>>>>>>> On 11/27/2012 08:28 AM, Karli Sjöberg wrote:
>>>>>>>>>>> Hey all!
>>>>>>>>>>>
>>>>>>>>>>> Since recently patching our hosts, I've been having trouble
>>>>>>>>>>> running FreeBSD guests with more than one virtual core or
>>>>>>>>>>> socket. I have managed to take a screenshot of what it looks
>>>>>>>>>>> like when it panics while booting the kernel, right after ACPI:
>>>>>>>>>>> http://i47.tinypic.com/2u90qrr.png
>>>>>>>>>>>
>>>>>>>>>>> I've tried this with similar results using 8.2-RELEASE,
>>>>>>>>>>> 8.3-RELEASE, 9.0-RELEASE and 9-STABLE.
>>>>>>>>>>>
>>>>>>>>>>> If I edit the guest to have only one virtual core or socket,
>>>>>>>>>>> it boots up without issue.
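>>>>>>>>>>>
>>>>>>>>>>> For reference, the same thing should be reproducible outside
>>>>>>>>>>> oVirt with a bare qemu-kvm invocation along these lines (memory
>>>>>>>>>>> size and the image path here are just examples):
>>>>>>>>>>>
>>>>>>>>>>> # boots fine with a single vCPU
>>>>>>>>>>> /usr/bin/qemu-kvm -m 1024 -smp 1 -hda /var/tmp/freebsd9.img
>>>>>>>>>>>
>>>>>>>>>>> # panics right after ACPI once there is more than one vCPU
>>>>>>>>>>> /usr/bin/qemu-kvm -m 1024 -smp 2 -hda /var/tmp/freebsd9.img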
>>>>>>>>>>>
>>>>>>>>>>> Since noticing this, I've tried updating the packages one
>>>>>>>>>>> more time, thinking maybe it had already been fixed, but no,
>>>>>>>>>>> it remains. These are the package versions I'm using:
>>>>>>>>>>> # rpm -qa | egrep '(kernel|libvirt|qemu|vdsm|seabios)' | sort -d
>>>>>>>>>>> ipxe-roms-qemu-20120328-1.gitaac9718.fc17.noarch
>>>>>>>>>>> kernel-3.6.2-4.fc17.x86_64
>>>>>>>>>>> kernel-3.6.3-1.fc17.x86_64
>>>>>>>>>>> kernel-3.6.7-4.fc17.x86_64 << This is the one that's running
>>>>>>>>>>> libvirt-0.9.11.7-1.fc17.x86_64
>>>>>>>>>>> libvirt-client-0.9.11.7-1.fc17.x86_64
>>>>>>>>>>> libvirt-daemon-0.9.11.7-1.fc17.x86_64
>>>>>>>>>>> libvirt-daemon-config-network-0.9.11.7-1.fc17.x86_64
>>>>>>>>>>> libvirt-daemon-config-nwfilter-0.9.11.7-1.fc17.x86_64
>>>>>>>>>>> libvirt-lock-sanlock-0.9.11.7-1.fc17.x86_64
>>>>>>>>>>> libvirt-python-0.9.11.7-1.fc17.x86_64
>>>>>>>>>>> qemu-common-1.0.1-2.fc17.x86_64
>>>>>>>>>>> qemu-img-1.0.1-2.fc17.x86_64
>>>>>>>>>>> qemu-kvm-1.0.1-2.fc17.x86_64
>>>>>>>>>>> qemu-kvm-tools-1.0.1-2.fc17.x86_64
>>>>>>>>>>> qemu-system-x86-1.0.1-2.fc17.x86_64
>>>>>>>>>>> seabios-1.7.1-1.fc17.x86_64
>>>>>>>>>>> seabios-bin-1.7.1-1.fc17.noarch
>>>>>>>>>>> vdsm-4.10.0-10.fc17.x86_64
>>>>>>>>>>> vdsm-cli-4.10.0-10.fc17.noarch
>>>>>>>>>>> vdsm-python-4.10.0-10.fc17.x86_64
>>>>>>>>>>> vdsm-xmlrpc-4.10.0-10.fc17.noarch
>>>>>>>>>>>
>>>>>>>>>>> Do you have any insights as to what the problem might be?
>>>>>>>>>>>
>>>>>>>>>>> Best Regards
>>>>>>>>>>> Karli Sjöberg
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The first suspect would be qemu-kvm, and maybe the bios; can you
>>>>>>>>>> please downgrade them to the previously installed versions?
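>>>>>>>>>> E.g. something along these lines, assuming the older builds are
>>>>>>>>>> still available in a local repo or the yum cache:
>>>>>>>>>>
>>>>>>>>>> yum downgrade qemu-kvm qemu-common qemu-img qemu-kvm-tools \
>>>>>>>>>>     qemu-system-x86 seabios seabios-bin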
>>>>>>>>>
>>>>>>>>> Oh, the fun of downgrading... pass :) But it just so happens that
>>>>>>>>> we have another oVirt system running, apart from the production
>>>>>>>>> system, that may be less patched. I'll check and see if there's
>>>>>>>>> any difference there. Is there any data you wish me to share,
>>>>>>>>> like logs or something, while testing? Or just yay/nay?
>>>>>>>>
>>>>>>>> If we identify the offending package and version, it will be easier
>>>>>>>> to report the regression and ask for a fix.
>>>>>>>
>>>>>>> So yay/nay it is! Thanks.
>>>>>>>
>>>>>>
>>>>>> I've now tried creating a new FreeBSD server with dual cores on our
>>>>>> experiment/test system, and it worked, no problemo.
>>>>>>
>>>>>> oVirt test system - good:
>>>>>> ipxe-roms-qemu-20120328-1.gitaac9718.fc17.noarch
>>>>>> kernel-3.3.4-5.fc17.x86_64
>>>>>> libvirt-0.9.11.6-1.fc17.x86_64
>>>>>> libvirt-client-0.9.11.6-1.fc17.x86_64
>>>>>> libvirt-daemon-0.9.11.6-1.fc17.x86_64
>>>>>> libvirt-daemon-config-network-0.9.11.6-1.fc17.x86_64
>>>>>> libvirt-daemon-config-nwfilter-0.9.11.6-1.fc17.x86_64
>>>>>> libvirt-lock-sanlock-0.9.11.6-1.fc17.x86_64
>>>>>> libvirt-python-0.9.11.6-1.fc17.x86_64
>>>>>> qemu-common-1.0.1-2.fc17.x86_64
>>>>>> qemu-img-1.0.1-2.fc17.x86_64
>>>>>> qemu-kvm-1.0.1-2.fc17.x86_64
>>>>>> qemu-kvm-tools-1.0.1-2.fc17.x86_64
>>>>>> qemu-system-x86-1.0.1-2.fc17.x86_64
>>>>>> seabios-1.7.0-1.fc17.x86_64
>>>>>> seabios-bin-1.7.0-1.fc17.noarch
>>>>>> vdsm-4.10.0-10.fc17.x86_64
>>>>>> vdsm-cli-4.10.0-10.fc17.noarch
>>>>>> vdsm-python-4.10.0-10.fc17.x86_64
>>>>>> vdsm-xmlrpc-4.10.0-10.fc17.noarch
>>>>>>
>>>>>> oVirt prod system - bad:
>>>>>> ipxe-roms-qemu-20120328-1.gitaac9718.fc17.noarch
>>>>>> kernel-3.6.7-4.fc17.x86_64
>>>>>> libvirt-0.9.11.7-1.fc17.x86_64
>>>>>> libvirt-client-0.9.11.7-1.fc17.x86_64
>>>>>> libvirt-daemon-0.9.11.7-1.fc17.x86_64
>>>>>> libvirt-daemon-config-network-0.9.11.7-1.fc17.x86_64
>>>>>> libvirt-daemon-config-nwfilter-0.9.11.7-1.fc17.x86_64
>>>>>> libvirt-lock-sanlock-0.9.11.7-1.fc17.x86_64
>>>>>> libvirt-python-0.9.11.7-1.fc17.x86_64
>>>>>> qemu-common-1.0.1-2.fc17.x86_64
>>>>>> qemu-img-1.0.1-2.fc17.x86_64
>>>>>> qemu-kvm-1.0.1-2.fc17.x86_64
>>>>>> qemu-kvm-tools-1.0.1-2.fc17.x86_64
>>>>>> qemu-system-x86-1.0.1-2.fc17.x86_64
>>>>>> seabios-1.7.1-1.fc17.x86_64
>>>>>> seabios-bin-1.7.1-1.fc17.noarch
>>>>>> vdsm-4.10.0-10.fc17.x86_64
>>>>>> vdsm-cli-4.10.0-10.fc17.noarch
>>>>>> vdsm-python-4.10.0-10.fc17.x86_64
>>>>>> vdsm-xmlrpc-4.10.0-10.fc17.noarch
>>>>>>
>>>>>
>>>>> So it seems you have different versions of:
>>>>> kernel
>>>>> seabios
>>>>> libvirt
>>>>>
>>>>> Can you please upgrade them one by one to find the culprit? (I'd do it
>>>>> in this order, for simplicity of rollback: seabios, libvirt, kernel.)
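>>>>>
>>>>> Roughly, on one of the test hosts (package names as in the lists
>>>>> above; yum should pull in the matching subpackages):
>>>>>
>>>>> yum update seabios seabios-bin    # then retest the guest
>>>>> yum update 'libvirt*'             # then retest
>>>>> yum update kernel                 # reboot into the new kernel, retest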
>>>>
>>>> Done. Kernel is the culprit!
>>>>
>>>> We have two hosts in the test system, so on host1 I first updated
>>>> seabios and tested, then libvirt and tested, then the kernel and
>>>> tested; all was well until after booting the new kernel. To be sure
>>>> that only the kernel was to blame, on host2 only the kernel was
>>>> updated, and afterwards the issue started appearing there too. So
>>>> definitely the kernel.
>>>>
>>>> PS. As I expected, downgrading the packages again failed horribly. I
>>>> friggin' hate trying to downgrade; even for just a few packages I'd
>>>> rather opt out and just reinstall the whole thing to save myself the
>>>> headache :)
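>>>>
>>>> (Though for the kernel specifically a real downgrade shouldn't even be
>>>> needed: the older build stays installed alongside the new one, so
>>>> something like this ought to flip the default boot entry back until
>>>> the bug is fixed:
>>>>
>>>> grubby --set-default=/boot/vmlinuz-3.3.4-5.fc17.x86_64 )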
>>>>
>>>
>>> Please open a bug on the Fedora kernel and paste the link here. Thanks.
>>> Well, I guess the next question is whether the Fedora 18 kernel solves
>>> the issue...
>>>
>>>
>>
>> Done! https://bugzilla.redhat.com/show_bug.cgi?id=881579
>>
>
> Thanks.
> Dor/Karen - is there anyone who can take a look at this Fedora
> regression causing SMP VMs to panic after a kernel upgrade?

Thanks for the report; I assigned it to Marcelo (CC'ed).

> thanks,
>     Itamar
>



