
On Wed, Dec 27, 2017 at 12:53 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On 25 December 2017 at 14:14, Dan Kenigsberg <danken@redhat.com> wrote:
On Mon, Dec 25, 2017 at 2:09 PM, Dominik Holler <dholler@redhat.com> wrote:
A helpful hint is in
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4492/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log :
Caused by: org.jboss.resteasy.spi.ReaderException: org.codehaus.jackson.map.JsonMappingException: Can not construct instance of java.util.Calendar from String value '2017-12-27 13:19:51Z': not a valid representation (error: Can not parse date "2017-12-27 13:19:51Z": not compatible with any of standard forms ("yyyy-MM-dd'T'HH:mm:ss.SSSZ", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", "EEE, dd MMM yyyy HH:mm:ss zzz", "yyyy-MM-dd")) at [Source: org.jboss.resteasy.client.core.BaseClientResponse$InputStreamWrapper@72c184c5; line: 1, column: 23] (through reference chain: com.woorea.openstack.keystone.model.Access["token"]->com.woorea.openstack.keystone.model.Token["expires"])
This problem was introduced by https://gerrit.ovirt.org/#/c/85702/
I created a fix: https://gerrit.ovirt.org/85734
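For reference, the failure above is just Jackson refusing the space-separated timestamp; emitting the expiry in any of the accepted patterns would do. A minimal sketch (hypothetical Python, not the content of the gerrit patch) of producing the yyyy-MM-dd'T'HH:mm:ss.SSS'Z' form:

    # Hypothetical sketch, not the actual ovirt-provider-ovn change:
    # format the token expiry so Jackson's Calendar deserializer accepts it,
    # e.g. 2017-12-27T13:19:51.000Z instead of '2017-12-27 13:19:51Z'.
    from datetime import datetime, timedelta

    def token_expiry(ttl_seconds=3600):
        expires = datetime.utcnow() + timedelta(seconds=ttl_seconds)
        millis = expires.microsecond // 1000
        return expires.strftime('%Y-%m-%dT%H:%M:%S.') + '%03dZ' % millis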
Thanks for the quick fix.
Is the new format acceptable to other users of the keystone-like API (such as the neutron cli)?
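One cheap way to sanity-check that (a hypothetical snippet, assuming the iso8601 package that many OpenStack clients rely on; not part of any patch in this thread) is to make sure the new 'expires' string still parses as ISO 8601:

    # Hypothetical compatibility check, not from the patches discussed here.
    import iso8601

    def expires_is_parsable(expires_str):
        try:
            iso8601.parse_date(expires_str)
            return True
        except iso8601.ParseError:
            return False

    # expires_is_parsable('2017-12-27T13:19:51.000Z') -> True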
On Wed, Dec 27, 2017 at 12:34 PM, Barak Korren <bkorren@redhat.com> wrote:
It seems the fix patch itself failed as well: http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4539/
The failed test is: 006_migrations.prepare_migration_attachments_ipv6
It seems the engine has lost the ability to talk to the host.
Logs are here: http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4539/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-006_migrations.py/
The error seems unrelated to the patch, since - as you say - it is about host networking, long before OVN got involved.
I see that http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4539/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-006_migrations.py/lago-basic-suite-master-host-0/_var_log/vdsm/supervdsm.log/*view*/
has a worrying failure to ifup a bridge, which might be more related
ifup/oncae8adf7ba944::ERROR::2017-12-27 05:03:06,001::concurrent::201::root::(run) FINISH thread <Thread(ifup/oncae8adf7ba944, started daemon 140108891563776)> failed
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 194, in run
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py", line 925, in _exec_ifup
    _exec_ifup_by_name(iface.name, cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py", line 911, in _exec_ifup_by_name
    raise ConfigNetworkError(ERR_FAILED_IFUP, out[-1] if out else '')
ConfigNetworkError: (29, '\n')
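For readers not familiar with that code path, here is a simplified sketch of what such an ifup-by-name helper does (an illustration of the traceback above, not the real vdsm code): run ifup for the device and raise ConfigNetworkError(ERR_FAILED_IFUP, ...) with the last line of output when the command fails.

    # Simplified illustration, not vdsm/network/configurators/ifcfg.py itself.
    import subprocess

    ERR_FAILED_IFUP = 29  # the error code seen in the traceback above

    class ConfigNetworkError(Exception):
        pass

    def exec_ifup_by_name(name):
        proc = subprocess.Popen(['/usr/sbin/ifup', name],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        out, _ = proc.communicate()
        if proc.returncode != 0:
            lines = out.splitlines()
            raise ConfigNetworkError(ERR_FAILED_IFUP,
                                     lines[-1] if lines else '')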
This does not seem to be a problem; it is an outcome of an ancient command on a device that does not exist anymore. We are not cleaning up all the spawned threads at the end of a transaction, so we see such things from time to time.
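The ERROR line itself is just the thread wrapper logging whatever exception ends a spawned thread, so a stale ifup thread from an earlier transaction produces exactly this kind of noise. A rough sketch of that pattern (hypothetical, not the actual vdsm/common/concurrent.py):

    # Hypothetical illustration of a thread wrapper that logs failures.
    import logging
    import threading

    log = logging.getLogger('root')

    def start_logged_thread(func, *args, **kwargs):
        def run():
            try:
                func(*args, **kwargs)
            except Exception:
                # Produces an ERROR like "FINISH thread <...> failed" plus the
                # traceback, even when the failure is harmless leftover work.
                log.exception('FINISH thread %s failed',
                              threading.current_thread())
        t = threading.Thread(target=run)
        t.daemon = True
        t.start()
        return t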