Triggered a sanity tier1 execution [1] using [2]; it covers all the
requested areas on iSCSI, NFS and Gluster.
I'll update with the results.
[1]
vdsm-4.30.0-291.git77aef9a.el7.x86_64
On Thu, Apr 19, 2018 at 3:07 PM, Martin Polednik <mpolednik(a)redhat.com> wrote:
On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:
> Hi Martin,
>
> I see [1] requires a rebase, can you please take care?
>
Should be rebased.
> At the moment, our automation is stable only on iSCSI, NFS, Gluster and FC.
> Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's not
> stable enough at the moment.
>
That is still pretty good.
[1]
https://gerrit.ovirt.org/#/c/89830/
>
>
> Thanks
>
> On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik <mpolednik(a)redhat.com> wrote:
>
> On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:
>>
>>> Hi, sorry if I misunderstood, I waited for more input regarding what areas
>>> have to be tested here.
>>>
>>>
>> I'd say that you have quite a bit of freedom in this regard. GlusterFS
>> should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
>> that covers basic operations (start & stop VM, migrate it), snapshots
>> and merging them, and whatever else would be important for storage
>> sanity.
>>
>> mpolednik
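For what it's worth, a rough sketch of the kind of basic-operations flow
described above (start/stop, migrate, snapshot and remove it), written against
the Python ovirt-engine-sdk (ovirtsdk4). The engine URL, credentials and VM
name are placeholders, not values taken from this thread:

    import time
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder engine/VM details -- adjust for the environment under test.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,
    )
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=sanity-vm')[0]
    vm_service = vms_service.vm_service(vm.id)

    def wait_for(status, timeout=300):
        # Poll until the VM reaches the requested status.
        deadline = time.time() + timeout
        while vm_service.get().status != status:
            if time.time() > deadline:
                raise RuntimeError('timed out waiting for %s' % status)
            time.sleep(5)

    vm_service.start()
    wait_for(types.VmStatus.UP)
    vm_service.migrate()                       # basic live migration
    snaps_service = vm_service.snapshots_service()
    snap = snaps_service.add(types.Snapshot(description='sanity-snap'))
    # (a real test would wait for the snapshot to leave the 'locked' state here)
    snaps_service.snapshot_service(snap.id).remove()   # triggers a live merge
    vm_service.stop()
    wait_for(types.VmStatus.DOWN)
    connection.close()

Actual tier1 jobs of course do much more (multiple storage domain types,
negative flows), but this is the shape of "basic operations" meant here.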
>>
>>
>>> On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik <mpolednik(a)redhat.com> wrote:
>>>
>>> On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:
>>>
>>>>
>>>>> We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder, will
>>>>> have to check, since usually, we don't execute our automation on them.
>>>>
>>>> Any update on this? I believe the gluster tests were successful, OST
>>>> passes fine and unit tests pass fine, that makes the storage backends
>>>> test the last required piece.
>>>>
>>>>
>>>> On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir <ratamir(a)redhat.com> wrote:
>>>>
>>>>
>>>>> +Elad
>>>>>
>>>>>
>>>>>> On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg <danken(a)redhat.com> wrote:
>>>>>>
>>>>>> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>>>>>>
>>>>>>
>>>>>>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri <eedri(a)redhat.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>>> Please make sure to run as many OST suites on this patch as possible
>>>>>>>> before merging (using 'ci please build').
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> But note that OST is not a way to verify the patch.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Such changes require testing with all storage types we support.
>>>>>>>>
>>>>>>>> Nir
>>>>>>>>
>>>>>>>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik <mpolednik(a)redhat.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hey,
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> I've created a patch[0] that is finally able to activate libvirt's
>>>>>>>>>> dynamic_ownership for VDSM while not negatively affecting
>>>>>>>>>> functionality of our storage code.
>>>>>>>>>>
>>>>>>>>>> That of course comes with quite a bit of code removal, mostly in the
>>>>>>>>>> area of host devices, hwrng and anything that touches devices; a bunch
>>>>>>>>>> of test changes and one XML generation caveat (storage is handled by
>>>>>>>>>> VDSM, therefore disk relabelling needs to be disabled on the VDSM
>>>>>>>>>> level).
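To make that caveat concrete: a minimal, hypothetical illustration (not code
from the patch under review) of the per-device <seclabel> override libvirt
provides for exactly this case, i.e. telling libvirt not to relabel a
VDSM-managed disk path even though dynamic_ownership is enabled host-wide in
/etc/libvirt/qemu.conf. The disk path is a placeholder:

    # Illustration only -- not code from the patch under review.
    import xml.etree.ElementTree as ET

    def vdsm_managed_disk(path):
        """Minimal <disk> element whose source libvirt must not chown/relabel."""
        disk = ET.Element('disk', type='file', device='disk')
        source = ET.SubElement(disk, 'source', file=path)
        # VDSM owns permissions on its storage domains, so opt this path out
        # of libvirt's dynamic ownership / relabelling.
        ET.SubElement(source, 'seclabel', model='dac', relabel='no')
        ET.SubElement(disk, 'target', dev='vda', bus='virtio')
        return disk

    xml_str = ET.tostring(vdsm_managed_disk('/rhev/data-center/mnt/example/disk'),
                          encoding='unicode')
    print(xml_str)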
>>>>>>>>>>
>>>>>>>>>> Because of the scope of the patch, I welcome storage/virt/network
>>>>>>>>>> people to review the code and consider the implications this change
>>>>>>>>>> has on current/future features.
>>>>>>>>>>
>>>>>>>>>> [0] https://gerrit.ovirt.org/#/c/89830/
>>>>>>>>>>
>>>>>>>>>>
>>>>>>> In particular: dynamic_ownership was set to 0 prehistorically (as part
>>>>>>> of https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because libvirt,
>>>>>>> running as root, was not able to play properly with root-squash nfs
>>>>>>> mounts.
>>>>>>>
>>>>>>> Have you attempted this use case?
>>>>>>>
>>>>>>> I join to Nir's request to run this with storage QE.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>> --
>>>>>>
>>>>>>
>>>>>> Raz Tamir
>>>>>> Manager, RHV QE
>>>>>>