It may be virt, but I'm looking...
I'm very suspicious of this happening immediately after hotplugging a NIC,
especially since the bug attached to
https://gerrit.ovirt.org/#/c/98765/
talks about dropping packets. Dominik, did anything else change here?
No, nothing I am aware of.
Has a pattern been detected in the failed runs, or does it fail
randomly?
On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov <amarchuk(a)redhat.com> wrote:
> Which team is it? Is it Virt? Just checking who should open a bug in
> libvirt as suggested.
>
> > On 22 Mar 2019, at 20:52, Nir Soffer <nsoffer(a)redhat.com> wrote:
> >
> > On Fri, Mar 22, 2019 at 7:12 PM Dafna Ron <dron(a)redhat.com> wrote:
> > Hi,
> >
> > We are failing ovirt-engine master on test 004_basic_sanity.hotplug_cpu.
> > Looking at the logs, we can see that for some reason libvirt reports a
> > VM as non-responsive, which fails the test.
> >
> > CQ first failure was for patch:
> >
> > https://gerrit.ovirt.org/#/c/98553/ - core: Add display="on" for mdevs,
> > use nodisplay to override
> > But I do not think this is the cause of failure.
> >
> > Adding Marcin, Milan, and Dan as well, as I think it may be network
> > related.
> >
> > You can see the libvirt log here:
> >
> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/arti...
> >
> > You can see the full logs here:
> >
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artif...
> >
> > Evgheni and I confirmed this is not an infra issue; the problem is the
> > ssh connection to the internal VM.
> >
> > Thanks,
> > Dafna
> >
> >
> > error:
> > 2019-03-22 15:08:22.658+0000: 22068: warning : qemuDomainObjTaint:7521 :
> Domain id=3 name='vm0' uuid=a9443d02-e054-40bb-8ea3-ae346e2d02a7 is
> tainted: hook-script
> >
> > Why is our VM tainted?
> >
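[The 'hook-script' taint usually just means libvirt found an executable hook script in its hooks directory and ran it for the domain, which is expected on hosts where management tooling installs such hooks, rather than a sign of trouble. A minimal sketch of how one could check for such hooks on the host; `check_hooks` is a hypothetical helper name, and the path used below is libvirt's default hooks location:]

```shell
# check_hooks DIR: list executable hook scripts in DIR (libvirt taints a
# domain with 'hook-script' when such a script runs for it), or note that
# the directory is absent.
check_hooks() {
    dir="$1"
    if [ -d "$dir" ]; then
        find "$dir" -maxdepth 1 -type f -perm -u+x
    else
        echo "no hooks directory"
    fi
}

# On a host you would inspect libvirt's default hooks location:
check_hooks /etc/libvirt/hooks
```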
> > 2019-03-22 15:08:22.693+0000: 22068: error :
> virProcessRunInMountNamespace:1159 : internal error: child reported: unable
> to set security context 'system_u:object_r:virt_content_t:s0' on
>
'/rhev/data-center/mnt/blockSD/91d97292-9ac3-4d77-a152-c7ea3250b065/images/e60dae48-ecc7-4171-8bfe-42bfc2190ffd/40243c76-a384-4497-8a2d-792a5e10d510':
> No such file or directory
> >
> > This should not happen: libvirt is not adding labels to files in
> > /rhev/data-center. It is using its own mount namespace and adding the
> > devices used by the VM there. Since libvirt creates the devices in its
> > own namespace, it should not complain about missing paths in
> > /rhev/data-center.
> >
> > I think we should file a libvirt bug for this.
> >
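[The namespace behaviour described above can be verified directly on the host: a qemu process started by libvirt with namespaces enabled should live in a mount namespace different from PID 1's. A sketch of that check; `same_mount_ns` is a hypothetical helper, and the real qemu-kvm PID would be substituted on the affected host:]

```shell
# same_mount_ns PID: succeed if PID shares the host's (PID 1's) mount
# namespace. For a VM started by libvirt with namespaces enabled, the
# qemu process should be in its OWN namespace, so this should fail.
same_mount_ns() {
    [ "$(readlink /proc/"$1"/ns/mnt)" = "$(readlink /proc/1/ns/mnt)" ]
}

# Example with the current shell's PID as a stand-in for qemu's:
if same_mount_ns $$; then
    echo "shares host mount namespace"
else
    echo "separate mount namespace"
fi
```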
> > 2019-03-22 15:08:28.168+0000: 22070: error :
> qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest
> agent is not connected
> > 2019-03-22 15:08:58.193+0000: 22070: error :
> qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest
> agent is not connected
> > 2019-03-22 15:13:58.179+0000: 22071: error :
> qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest
> agent is not connected
> >
> > Do we have guest agent in the test VMs?
> >
> > Nir
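[Regarding the guest-agent question: the agent responding depends on two things, the virtio channel being defined in the domain XML and qemu-guest-agent actually running inside the guest. The channel part can be inspected from the host with `virsh dumpxml vm0`; a sketch of just the XML check, where the snippet and `has_agent_channel` are illustrative:]

```shell
# has_agent_channel XML: succeed if the domain XML defines the
# org.qemu.guest_agent.0 virtio channel. Even with the channel present,
# the agent must also be running inside the guest to respond.
has_agent_channel() {
    printf '%s' "$1" | grep -q 'org.qemu.guest_agent.0'
}

# Illustrative snippet of what virsh dumpxml shows for an agent channel:
xml='<channel type="unix">
  <target type="virtio" name="org.qemu.guest_agent.0"/>
</channel>'

if has_agent_channel "$xml"; then
    echo "agent channel defined"
else
    echo "no agent channel in domain XML"
fi
```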
>
> --
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat
>
> _______________________________________________
> Infra mailing list -- infra(a)ovirt.org
> To unsubscribe send an email to infra-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/infra@ovirt.org/message/B44Q3AZA7JU...
>