Any updates? The tests have been failing since Sunday because vdsmd won't start, and the master repos haven't been refreshed for a few days due to this.

From the host-deploy log [1] (job link at [2]):

[1] basic-suite-master-engine/_var_log_ovirt-engine/host-deploy/ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log
2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:921 execute-output: ('/bin/systemctl', 'start', 'vdsmd.service') stdout:

2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:926 execute-output: ('/bin/systemctl', 'start', 'vdsmd.service') stderr:
A dependency job for vdsmd.service failed. See 'journalctl -xe' for details.

2016-12-27 01:29:29 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-QZ1ucxWFfm/pythonlib/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/ovirt-host-deploy/vdsm/packages.py", line 209, in _start
    self.services.state('vdsmd', True)
  File "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/otopi/services/systemd.py", line 141, in state
    service=name,
RuntimeError: Failed to start service 'vdsmd'
2016-12-27 01:29:29 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Closing up': Failed to start service 'vdsmd'
2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT DUMP - BEGIN
2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV BASE/error=bool:'True'
2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV BASE/excep

[2] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastCompletedBuild/testReport/
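Note the stderr: the dependency job failed, not vdsmd itself. To pin down which unit that is on the affected host, standard systemd tooling should be enough (a sketch only; ovirt-imageio-daemon.service is the unit suspected later in this thread):

    # see the surrounding journal context, as the error message itself suggests
    journalctl -xe -u vdsmd.service
    # list the units vdsmd pulls in, to spot the one that failed
    systemctl list-dependencies vdsmd.service
    # check the suspected culprit directly
    systemctl status ovirt-imageio-daemon.service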
On Sun, Dec 25, 2016 at 11:31 AM, Eyal Edri <eedri@redhat.com> wrote:

> We should see it fixed here hopefully [1]

On Sun, Dec 25, 2016 at 11:19 AM, Dan Kenigsberg <danken@redhat.com> wrote:

On Sun, Dec 25, 2016 at 10:28 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
> On Sun, Dec 25, 2016 at 9:47 AM, Dan Kenigsberg <danken@redhat.com> wrote:
>>
>> Correct. https://gerrit.ovirt.org/#/c/69052/
>>
>> Can you try adding
>> lago shell "$vm_name" -c "mkdir -p /var/log/ovirt-imageio-daemon/ &&
>> chown vdsm:kvm /var/log/ovirt-imageio-daemon/"
>
> How will it know what is the vdsm user before installing vdsm?

You're right. A hack would have to `chmod a+rwx /var/log/ovirt-imageio-daemon/` instead.
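Spelled out, that hack would look something like this (a sketch only, reusing the mkdir from the quoted command but with a world-writable chmod in place of chown, so it works before the vdsm user exists):

    # same lago invocation as above; chmod avoids referencing the not-yet-created vdsm user
    lago shell "$vm_name" -c "mkdir -p /var/log/ovirt-imageio-daemon/ && chmod a+rwx /var/log/ovirt-imageio-daemon/"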
> Why not either:
> 1. Fix it
Yes, that's why we've opened
https://bugzilla.redhat.com/show_bug.cgi?id=1400003 ; now a fix is
getting merged. I don't know when it is going to be ready in lago's
repos.
> -or-
> 2. Revert the offending patch?
I'm not aware of such a patch. It's a race that has been there
forever, and I don't know why it suddenly pops up so often.
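Until the fix from the bug above reaches lago's repos, one generic stopgap for this kind of missing-log-directory race could be a systemd drop-in that pre-creates the directory before the daemon starts. This is purely illustrative, not the actual fix merged for BZ 1400003; the unit and path names are taken from this thread:

    # hypothetical drop-in on the affected host; NOT the merged fix for BZ 1400003
    mkdir -p /etc/systemd/system/ovirt-imageio-daemon.service.d
    cat > /etc/systemd/system/ovirt-imageio-daemon.service.d/10-logdir.conf <<'EOF'
    [Service]
    # run the ExecStartPre lines as root even if the unit drops privileges
    PermissionsStartOnly=true
    ExecStartPre=/usr/bin/mkdir -p /var/log/ovirt-imageio-daemon
    ExecStartPre=/usr/bin/chown vdsm:kvm /var/log/ovirt-imageio-daemon
    EOF
    systemctl daemon-reload && systemctl restart ovirt-imageio-daemon.service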
--
Eyal Edri
Associate Manager, RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)