Also, snapshot preview failed (2nd snapshot):

2018-04-22 18:01:06,253+0300 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Volume.create succeeded in 0.84 seconds (__init__:311)
2018-04-22 18:01:06,261+0300 INFO (tasks/6) [storage.ThreadPool.WorkerThread] START task 6823d724-cb1b-4706-a58a-83428363cce5 (cmd=<bound method Task.commit of <vdsm.storage.task.Task instance at 0x7f1aac54fc68>>, args=None) (threadPool:208)
2018-04-22 18:01:06,906+0300 WARN (check/loop) [storage.asyncutils] Call <bound method DirectioChecker._check of <DirectioChecker /rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Storage__NFS_storage__local__ge2__nfs__0/46d2fd2b-bdd0-40f5-be4c-0aaf2a629f1b/dom_md/metadata running next_check=4920812.91 at 0x7f1aac3ed790>> delayed by 0.51 seconds (asyncutils:138)
2018-04-22 18:01:07,082+0300 WARN (tasks/6) [storage.ResourceManager] Resource factory failed to create resource '01_img_7df9d2b2-52b5-4ac2-a9f0-a1d1e93eb6d2.095ad9d6-3154-449c-868c-f975dcdcb729'. Canceling request. (resourceManager:543)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 539, in registerResource
    obj = namespaceObj.factory.createResource(name, lockType)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", line 193, in createResource
    lockType)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", line 122, in __getResourceCandidatesList
    imgUUID=resourceName)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 198, in getChain
    uuidlist = volclass.getImageVolumes(sdUUID, imgUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1537, in getImageVolumes
    return cls.manifestClass.getImageVolumes(sdUUID, imgUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 337, in getImageVolumes
    if (sd.produceVolume(imgUUID, volid).getImage() == imgUUID):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 438, in produceVolume
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 69, in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 86, in __init__
    self.validate()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 112, in validate
    self.validateVolumePath()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 129, in validateVolumePath
    raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist: (u'a404bfc9-57ef-4dcc-9f1b-458dfb08ad74',)
2018-04-22 18:01:07,083+0300 WARN (tasks/6) [storage.ResourceManager.Request] (ResName='01_img_7df9d2b2-52b5-4ac2-a9f0-a1d1e93eb6d2.095ad9d6-3154-449c-868c-f975dcdcb729', ReqID='79c96e70-7334-4402-a390-dc87f939b7d2') Tried to cancel a processed request (resourceManager:187)
2018-04-22 18:01:07,084+0300 ERROR (tasks/6) [storage.TaskManager.Task] (Task='6823d724-cb1b-4706-a58a-83428363cce5') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1939, in createVolume
    with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 1025, in acquireResource
    return _manager.acquireResource(namespace, name, lockType, timeout=timeout)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 475, in acquireResource
    raise se.ResourceAcqusitionFailed()
ResourceAcqusitionFailed: Could not acquire resource. Probably resource factory threw an exception.: ()
2018-04-22 18:01:07,735+0300 INFO (tasks/6) [storage.ThreadPool.WorkerThread] FINISH task 6823d724-cb1b-4706-a58a-83428363cce5 (threadPool:210)
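Reading the two tracebacks together: the volume chain walk hits a volume whose path no longer exists, and the resource factory turns that into the generic ResourceAcqusitionFailed that createVolume reports. A minimal, simplified sketch of that path (illustrative names only, not the actual VDSM code):

# Simplified illustration only -- not VDSM's real implementation.
import os


class VolumeDoesNotExist(Exception):
    pass


class ResourceAcqusitionFailed(Exception):  # spelling matches the vdsm exception
    pass


def get_image_volumes(image_dir, vol_uuids):
    """Validate every volume the image's metadata references (cf. getImageVolumes)."""
    for vol_uuid in vol_uuids:
        if not os.path.exists(os.path.join(image_dir, vol_uuid)):
            # Corresponds to fileVolume.validateVolumePath() raising in the log above.
            raise VolumeDoesNotExist(vol_uuid)
    return list(vol_uuids)


def acquire_image_resource(image_dir, vol_uuids):
    """The factory turns any error while building the chain into a generic
    acquisition failure, which is all that sp.createVolume() gets to see."""
    try:
        return get_image_volumes(image_dir, vol_uuids)
    except VolumeDoesNotExist:
        raise ResourceAcqusitionFailed()

In other words, the ResourceAcqusitionFailed in the second traceback is just the factory re-reporting the VolumeDoesNotExist for a404bfc9-57ef-4dcc-9f1b-458dfb08ad74 raised while building the image's volume chain.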

Steps from [1]:

17:54:41 2018-04-22 17:54:41,574 INFO Test Setup 2: Creating VM vm_TestCase11660_2217544157
17:54:55 2018-04-22 17:54:55,593 INFO 049: storage/rhevmtests.storage.storage_snapshots.test_live_snapshot.TestCase11660.test_live_snapshot[glusterfs]
17:54:55 2018-04-22 17:54:55,593 INFO Create a snapshot while VM is running
17:54:55 2018-04-22 17:54:55,593 INFO STORAGE: GLUSTERFS
17:58:04 2018-04-22 17:58:04,761 INFO Test Step 3: Start writing continuously on VM vm_TestCase11660_2217544157 via dd
17:58:35 2018-04-22 17:58:35,334 INFO Test Step 4: Creating live snapshot on a VM vm_TestCase11660_2217544157
17:58:35 2018-04-22 17:58:35,334 INFO Test Step 5: Adding new snapshot to VM vm_TestCase11660_2217544157 with all disks
17:58:35 2018-04-22 17:58:35,337 INFO Test Step 6: Add snapshot to VM vm_TestCase11660_2217544157 with {'description': 'snap_TestCase11660_2217545559', 'wait': True}
17:59:26 2018-04-22 17:59:26,179 INFO Test Step 7: Writing files to VM's vm_TestCase11660_2217544157 disk
18:00:33 2018-04-22 18:00:33,117 INFO Test Step 8: Shutdown vm vm_TestCase11660_2217544157 with {'async': 'false'}
18:01:04 2018-04-22 18:01:04,038 INFO Test Step 9: Previewing snapshot snap_TestCase11660_2217545559 on VM vm_TestCase11660_2217544157

[1]
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv-4.2-ge-runner-storage/1048/consoleFull


On Mon, Apr 23, 2018 at 1:29 AM, Elad Ben Aharon <ebenahar@redhat.com> wrote:

Sorry, this is the new execution link:
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv-4.2-ge-runner-storage/1048/testReport/

On Mon, Apr 23, 2018 at 1:23 AM, Elad Ben Aharon <ebenahar@redhat.com> wrote:

Hi, I've triggered another execution [1] due to some issues I saw in the first execution which are not related to the patch.

The success rate is 78%, which is low compared to tier1 executions with code from downstream builds (95-100% success rates) [2].

From what I could see so far, there is an issue with move and copy operations to and from Gluster domains. For example [3].

The logs are attached.

[1]
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv-4.2-ge-runner-tier1-after-upgrade/7/testReport/

[2]
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv-4.2-ge-runner-tier1-after-upgrade/7/

[3]
2018-04-22 13:06:28,316+0300 INFO (jsonrpc/7) [vdsm.api] FINISH deleteImage error=Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835-5f603e682f33, domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4' from=::ffff:10.35.161.182,40936, flow_id=disks_syncAction_ba6b2630-5976-4935, task_id=3d5f2a8a-881c-409e-93e9-aaa643c10e42 (api:51)
2018-04-22 13:06:28,317+0300 ERROR (jsonrpc/7) [storage.TaskManager.Task] (Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in deleteImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 49, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1503, in deleteImage
    raise se.ImageDoesNotExistInSD(imgUUID, sdUUID)
ImageDoesNotExistInSD: Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835-5f603e682f33, domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'
2018-04-22 13:06:28,317+0300 INFO (jsonrpc/7) [storage.TaskManager.Task] (Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') aborting: Task is aborted: "Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835-5f603e682f33, domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'" - code 268 (task:1181)
2018-04-22 13:06:28,318+0300 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH deleteImage error=Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835-5f603e682f33, domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4' (dispatcher:82)
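One quick way to tell whether [3] is a real regression or just a missing image directory is to check the expected path on the host before blaming the API call. A minimal sketch, assuming the usual file-domain layout under /rhev/data-center/mnt (the glusterSD subdirectory handling is an assumption about the host layout):

# Sketch: check whether an image directory actually exists in a file-based
# storage domain before treating ImageDoesNotExistInSD as a product bug.
import glob
import os

SD_UUID = "e5fd29c8-52ba-467e-be09-ca40ff054dd4"
IMG_UUID = "cabb8846-7a4b-4244-9835-5f603e682f33"


def find_image_dir(sd_uuid, img_uuid):
    """Look for <mount>/<sd_uuid>/images/<img_uuid> under the usual mount roots."""
    roots = ["/rhev/data-center/mnt", "/rhev/data-center/mnt/glusterSD"]
    matches = []
    for root in roots:
        matches += glob.glob(os.path.join(root, "*", sd_uuid, "images", img_uuid))
    return matches


found = find_image_dir(SD_UUID, IMG_UUID)
if found:
    print("image directory exists: %s" % found[0])
else:
    print("image directory not found for %s" % IMG_UUID)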

On Thu, Apr 19, 2018 at 5:34 PM, Elad Ben Aharon <ebenahar@redhat.com> wrote:

Triggered a sanity tier1 execution [1] using [2], which covers all the requested areas, on iSCSI, NFS and Gluster.
I'll update with the results.

[1]
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/4.2_dev/job/rhv-4.2-ge-flow-storage/1161/

[2]
https://gerrit.ovirt.org/#/c/89830/
vdsm-4.30.0-291.git77aef9a.el7.x86_64

On Thu, Apr 19, 2018 at 3:07 PM, Martin Polednik <mpolednik@redhat.com> wrote:

On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:

> Hi Martin,
>
> I see [1] requires a rebase, can you please take care?

Should be rebased.

> At the moment, our automation is stable only on iSCSI, NFS, Gluster and FC.
> Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's not
> stable enough at the moment.

That is still pretty good.

> [1] https://gerrit.ovirt.org/#/c/89830/
>
> Thanks

On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik <mpolednik@redhat.com> wrote:

On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:

> Hi, sorry if I misunderstood, I waited for more input regarding what areas
> have to be tested here.

I'd say that you have quite a bit of freedom in this regard. GlusterFS
should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
that covers basic operations (start & stop VM, migrate it), snapshots
and merging them, and whatever else would be important for storage
sanity.

mpolednik
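For the operations Martin lists above (start/stop, migrate, snapshot and merge), a rough idea of what such a sanity pass could look like with the Python SDK (ovirtsdk4); the engine URL, credentials and VM name are placeholders, and this is only a sketch, not the rhevmtests code:

# Sketch only: a minimal storage sanity pass with ovirtsdk4.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(url="https://engine.example.com/ovirt-engine/api",
                      username="admin@internal", password="secret",
                      insecure=True)
vms_service = conn.system_service().vms_service()
vm = vms_service.list(search="name=sanity_vm")[0]   # assumes the VM exists
vm_service = vms_service.vm_service(vm.id)


def wait_for(status, timeout=300):
    deadline = time.time() + timeout
    while vm_service.get().status != status:
        if time.time() > deadline:
            raise RuntimeError("timed out waiting for %s" % status)
        time.sleep(5)


# Basic operations: start, migrate, stop.
vm_service.start()
wait_for(types.VmStatus.UP)
vm_service.migrate()          # let the engine pick the destination host
wait_for(types.VmStatus.UP)

# Snapshot while the VM is running, then remove it (live merge).
# A real test would also wait for the snapshot to reach OK before removing it.
snaps_service = vm_service.snapshots_service()
snap = snaps_service.add(types.Snapshot(description="sanity snapshot"))
snaps_service.snapshot_service(snap.id).remove()

vm_service.stop()
wait_for(types.VmStatus.DOWN)
conn.close()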

On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik <mpolednik@redhat.com> wrote:

On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:
We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder,<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
will<br>
have to check, since usually, we don't execute our automation on them.<br>
<br>
<br>
</blockquote>
Any update on this? I believe the gluster tests were successful, OST<br>
passes fine and unit tests pass fine, that makes the storage backends<br>
test the last required piece.<br>

On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir <ratamir@redhat.com> wrote:

+Elad

On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg <danken@redhat.com> wrote:

On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer <nsoffer@redhat.com> wrote:

On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri <eedri@redhat.com> wrote:
Please make sure to run as much OST suites on this patch as possible<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
before merging ( using 'ci please build' )<br>
<br>
<br>
But note that OST is not a way to verify the patch.<br>
</blockquote>
<br>
Such changes require testing with all storage types we support.<br>
<br>
Nir<br>
<br>

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik <mpolednik@redhat.com> wrote:

> Hey,
>
> I've created a patch[0] that is finally able to activate libvirt's
> dynamic_ownership for VDSM while not negatively affecting
> functionality of our storage code.
>
> That of course comes with quite a bit of code removal, mostly in the
> area of host devices, hwrng and anything that touches devices; bunch
> of test changes and one XML generation caveat (storage is handled by
> VDSM, therefore disk relabelling needs to be disabled on the VDSM
> level).
>
> Because of the scope of the patch, I welcome storage/virt/network
> people to review the code and consider the implication this change
> has on current/future features.
>
> [0] https://gerrit.ovirt.org/#/c/89830/
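To illustrate the XML generation caveat Martin mentions, purely as a sketch of the libvirt mechanism and not code taken from the patch: with dynamic_ownership enabled, a disk whose ownership VDSM keeps managing itself can be excluded from libvirt's relabelling with a per-device DAC seclabel, e.g. generated like this (paths and names are illustrative):

# Sketch: build a <disk> element that asks libvirt not to relabel the image,
# since VDSM continues to manage ownership of storage on its own.
import xml.etree.ElementTree as ET


def disk_xml(image_path):
    disk = ET.Element("disk", type="file", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="qcow2")
    source = ET.SubElement(disk, "source", file=image_path)
    # Per-device override: the DAC driver must not touch this file's owner.
    ET.SubElement(source, "seclabel", model="dac", relabel="no")
    ET.SubElement(disk, "target", dev="vda", bus="virtio")
    return ET.tostring(disk).decode("utf-8")


print(disk_xml("/rhev/data-center/mnt/example:_export/sd/images/img/vol"))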

In particular: dynamic_ownership was set to 0 prehistorically (as part of
https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because libvirt,
running as root, was not able to play properly with root-squash nfs
mounts.

Have you attempted this use case?

I join to Nir's request to run this with storage QE.
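A quick way to exercise the root-squash case Dan raises, as a sketch only (uid/gid 36:36 is assumed for vdsm:kvm, and the mount point is a placeholder):

# Sketch: on a root_squash NFS export, root on the client is remapped to the
# anonymous uid, so a chown() attempted by a root process fails with EPERM.
# That is the behaviour the dynamic_ownership change has to cope with.
import errno
import os

MOUNT = "/rhev/data-center/mnt/server:_root__squash__export"  # placeholder


def chown_works_as_root(mount, uid=36, gid=36):  # 36:36 = vdsm:kvm
    probe = os.path.join(mount, "chown_probe")
    with open(probe, "w") as f:
        f.write("probe")
    try:
        os.chown(probe, uid, gid)
        return True
    except OSError as err:
        if err.errno == errno.EPERM:
            return False
        raise
    finally:
        os.unlink(probe)


print(chown_works_as_root(MOUNT))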

--
Raz Tamir
Manager, RHV QE