----- Original Message -----
From: "David Caro" <dcaroest(a)redhat.com>
To: "Dan Kenigsberg" <danken(a)redhat.com>
Cc: dcaro(a)redhat.com, "Vered Volansky" <vered(a)redhat.com>,
"infra" <infra(a)ovirt.org>
Sent: Tuesday, June 17, 2014 11:33:24 AM
Subject: Re: Bad setup code in vdsm_master_storage_functional_tests_localfs_gerrit
On Tue 17 Jun 2014 10:24:49 AM CEST, Dan Kenigsberg wrote:
> On Tue, Jun 17, 2014 at 03:40:51AM -0400, Vered Volansky wrote:
>>
>>
>> ----- Original Message -----
>>> From: "Dan Kenigsberg" <danken(a)redhat.com>
>>> To: "Vered Volansky" <vered(a)redhat.com>
>>> Cc: "infra" <infra(a)ovirt.org>
>>> Sent: Monday, June 16, 2014 11:29:42 AM
>>> Subject: Re: Bad setup code in
>>> vdsm_master_storage_functional_tests_localfs_gerrit
>>>
>>> On Sun, Jun 15, 2014 at 04:11:53AM -0400, Vered Volansky wrote:
>>>> The job with this issue is gone, let me know if it's risen again.
>>>
>>> The fragile code is still in
>>>
http://jenkins.ovirt.org/view/All/job/vdsm_master_storage_functional_test...
>>> why not make it more robust before /var/log/vdsm disappears and make it
>>> break again?
>>
>> because I don't understand the issue. The file is only created if
>> missing. The directory should be there.
It was fixed by me some time ago (I added the mkdir -p before the
touch, just in case):
sudo mkdir -p /var/log/vdsm
sudo chown vdsm:kvm /var/log/vdsm
sudo sh -c 'echo "" > /var/log/vdsm/vdsm.log'
sudo sh -c 'echo "" > /var/log/vdsm/supervdsm.log'
I saw that, but this thread was opened after this change.
>
> However, apparently it was not there, which made the echo fail, which
> led to the job failing. We should understand why it disappeared.
>
> dcaro, eedri - do you have any idea?
Totally agree, and if it was meant to be there, I'll remove the mkdir
to make the test fail when it's missing.
But from what I see in the job, nothing ensures that the directory
will be there: vdsm might never have been installed on that machine,
or it might have been properly cleaned up at some point (removing logs
and leftovers).
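A minimal sketch of that fail-fast variant (the function name is just
for illustration, not something in the current job):

```shell
#!/bin/sh
# Fail fast when the vdsm log directory is missing instead of
# papering over it with mkdir -p.
ensure_vdsm_logs() {
    log_dir="$1"
    if [ ! -d "$log_dir" ]; then
        echo "ERROR: $log_dir is missing - was vdsm ever installed?" >&2
        return 1
    fi
    # Truncate (or create) the log files inside the existing directory.
    : > "$log_dir/vdsm.log"
    : > "$log_dir/supervdsm.log"
}

# The job would call it as:
#   ensure_vdsm_logs /var/log/vdsm || exit 1
```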
So, in my opinion, the issue is that we are not cleaning up properly
after the vdsm jobs and are leaving the logs behind.
We sure are not. In the past I had
vdsm logs when vdsm was not installed, which led to this echo.
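A post-job cleanup step along those lines could be sketched like this
(the function name is illustrative, not part of the current job):

```shell
#!/bin/sh
# Wipe the vdsm log directory after a job so the next run starts
# from a known state and cannot pick up stale leftover logs.
cleanup_vdsm_leftovers() {
    log_dir="$1"
    # Removing the whole directory also makes the next run notice
    # whether vdsm's own installation recreates it.
    rm -rf "$log_dir"
}

# The job teardown would call:
#   cleanup_vdsm_leftovers /var/log/vdsm
```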
logrotate was suggested to me, but I saw the job is already configured
this way, and then I was told this was actually related to something
else and that the above is how it should be done.
A different suggestion is welcome.
The test is not supposed to run in parallel with another; that is how
it's configured.
Also, to what extent can these tests run on docker? Has anyone tried?
No. When asked how to set it up, this is what was suggested.
Because it would make it suitable to run in parallel and on any slave,
with its own specific deps.
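Running the tests in a container would look roughly like the fragment
below; the base image, package names, and entrypoint script are pure
assumptions for illustration, not the actual job setup:

```dockerfile
# Hypothetical sketch - image, packages and script name are assumptions.
FROM centos:6
# Assuming vdsm's packaging creates /var/log/vdsm on install, the
# mkdir/touch dance in the job setup goes away entirely; every
# container also starts from a clean image, so no leftover logs.
RUN yum install -y vdsm python-nose && yum clean all
COPY run_functional_tests.sh /usr/local/bin/run_functional_tests.sh
CMD ["/usr/local/bin/run_functional_tests.sh"]
```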
--
David Caro
Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D
Email: dcaro(a)redhat.com
Web:
www.redhat.com
RHT Global #: 82-62605
_______________________________________________
Infra mailing list
Infra(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra