
On Sun, Apr 8, 2018 at 9:15 AM, Eyal Edri <eedri@redhat.com> wrote:
Was already done by Yaniv - https://gerrit.ovirt.org/#/c/89851. Is it still failing?
On Sun, Apr 8, 2018 at 8:59 AM, Barak Korren <bkorren@redhat.com> wrote:
On 7 April 2018 at 00:30, Dan Kenigsberg <danken@redhat.com> wrote:
No, I am afraid that we have not managed to understand why setting an ipv6 address took the host off the grid. We shall continue researching this next week.
Edy, https://gerrit.ovirt.org/#/c/88637/ is already 4 weeks old, but could it possibly be related (I really doubt that)?
Sorry, but I do not see how this problem is related to VDSM. There is nothing that indicates that there is a VDSM problem. Has the RPC connection between Engine and VDSM failed?
On Fri, Apr 6, 2018 at 2:20 PM, Dafna Ron <dron@redhat.com> wrote:
At this point I think we should seriously consider disabling the relevant test, as it's impacting a large number of changes.
Dan, was there a fix for the issues? Can I have a link to the fix if there was?
Thanks, Dafna
On Wed, Apr 4, 2018 at 5:01 PM, Gal Ben Haim <gbenhaim@redhat.com> wrote:
From lago's log, I see that lago collected the logs from the VMs using ssh (after the test failed), which means that the VM didn't crash.
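Gal's point, that lago could still reach the VM over ssh after the failure, can be checked independently with a quick TCP probe of port 22. A minimal sketch; the function name and timeout are ours for illustration, not lago's API:

```python
import socket

def ssh_port_open(host, port=22, timeout=5):
    """Return True if a TCP connection to host:port succeeds.

    A rough stand-in for the reachability check lago performs before
    collecting logs over ssh; a host whose kernel has died will not
    accept the connection.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns True for lago-basic-suite-4-2-host-0, the VM itself is up and the problem is above the transport layer.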
On Wed, Apr 4, 2018 at 5:27 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Wed, Apr 4, 2018 at 4:59 PM, Barak Korren <bkorren@redhat.com>
wrote:
> Test failed: [ 006_migrations.prepare_migration_attachments_ipv6 ]
>
> Link to suspected patches:
> (Probably unrelated)
> https://gerrit.ovirt.org/#/c/89812/1 (ovirt-engine-sdk) - examples: export template to an export domain
>
> This seems to happen multiple times sporadically, I thought this would be solved by https://gerrit.ovirt.org/#/c/89781/ but it isn't.
Right, it is a completely unrelated issue there (with external networks). Here, however, the host dies while setupNetworks is applying an ipv6 address. setupNetworks waits for Engine's confirmation at 08:33:00,711
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1537/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-006_migrations.py/lago-basic-suite-4-2-host-0/_var_log/vdsm/supervdsm.log
but kernel messages stop at 08:33:23
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1537/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-006_migrations.py/lago-basic-suite-4-2-host-0/_var_log/messages/*view*/
Does the lago VM of this host crash? pause?
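A quick way to confirm where the host went silent is to pull the timestamp of the last kernel-tagged line out of the collected /var/log/messages. A minimal sketch, assuming the default el7 syslog timestamp format seen in the linked log; the helper name and the year parameter are ours:

```python
from datetime import datetime

def last_kernel_timestamp(lines, year=2018):
    """Return the datetime of the last 'kernel:' line in a syslog snippet.

    Assumes the classic rsyslog prefix, e.g. 'Apr  4 08:33:23 host kernel: ...'
    (the year is not in the log, so it must be supplied).
    """
    last = None
    for line in lines:
        if " kernel: " not in line:
            continue
        stamp = " ".join(line.split()[:3])  # e.g. 'Apr 4 08:33:23'
        last = datetime.strptime("%d %s" % (year, stamp), "%Y %b %d %H:%M:%S")
    return last
```

Comparing that timestamp with the last supervdsm.log entry tells you whether the kernel outlived vdsm or the whole guest froze at once.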
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1537/
>
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1537/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-006_migrations.py/
>
> Error snippet from log:
>
> <error>
>
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
>     testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
>     self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in wrapped_test
>     test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in wrapper
>     return func(get_test_prefix(), *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 78, in wrapper
>     prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
>   File "/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test-scenarios/006_migrations.py", line 139, in prepare_migration_attachments_ipv6
>     engine, host_service, MIGRATION_NETWORK, ip_configuration)
>   File "/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test_utils/network_utils_v4.py", line 71, in modify_ip_config
>     check_connectivity=True)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 36729, in setup_networks
>     return self._internal_action(action, 'setupnetworks', None, headers, query, wait)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 299, in _internal_action
>     return future.wait() if wait else future
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 55, in wait
>     return self._code(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 296, in callback
>     self._check_fault(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
>     self._raise_error(response, body)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
>     raise error
> Error: Fault reason is "Operation Failed". Fault detail is "[Network error during communication with the Host.]". HTTP response code is 400.
>
> </error>
>
> --
> Barak Korren
> RHV DevOps team, RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> _______________________________________________
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
-- Gal Ben Haim RHV DevOps
--
Eyal Edri
MANAGER
RHV DevOps
EMEA VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)