<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">Hi Elad,<div class="">why did you install vdsm-hook-allocate_net?</div><div class=""><br class=""></div><div class="">adding Dan as I think the hook is not supposed to fail this badly in any case</div><div class=""><br class=""></div><div class="">Thanks,</div><div class="">michal<br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On 5 May 2018, at 19:22, Elad Ben Aharon <<a href="mailto:ebenahar@redhat.com" class="">ebenahar@redhat.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class="">Start VM fails on:<div class=""><br class=""></div><div class=""><span style="font-family:monospace" class=""><span style="background-color: rgb(255, 255, 255);" class="">2018-05-05 17:53:27,399+0300 INFO (vm/e6ce66ce) [virt.vm] (vmId='e6ce66ce-852f-48c5-9997-5d2959432a27') drive 'vda' path: 'dev=/rhev/data-center/mnt/blockSD/db5a6696-d907-4938-9a78-bdd13a843c62/images/6cdabfe5- </span><br class="">d1ca-40af-ae63-9834f235d1c8/7ef97445-30e6-4435-8425-f35a01928211' -> u'*dev=/rhev/data-center/mnt/blockSD/db5a6696-d907-4938-9a78-bdd13a843c62/images/6cdabfe5-d1ca-40af-ae63-9834f235d1c8/7ef97445-30e6-4435-8425- <br class="">f35a01928211' (storagexml:334) <br class="">2018-05-05 17:53:27,888+0300 INFO (jsonrpc/1) [vdsm.api] START getSpmStatus(spUUID='940fe6f3-b0c6-4d0c-a921-198e7819c1cc', options=None) from=::ffff:10.35.161.127,53512, task_id=c70ace39-dbfe-4f5c-ae49-a1e3a82c <br class="">2758 (api:46) <br class="">2018-05-05 17:53:27,909+0300 INFO (vm/e6ce66ce) [root] /usr/libexec/vdsm/hooks/before_device_create/10_allocate_net: rc=2 err=vm net allocation hook: [unexpected error]: Traceback (most recent call last): <br class=""> File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 105, in <module> <br class=""> 
main() <br class=""> File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 93, in main <br class=""> allocate_random_network(device_xml) <br class=""> File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 62, in allocate_random_network <br class=""> net = _get_random_network() <br class=""> File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 50, in _get_random_network <br class=""> available_nets = _parse_nets() <br class=""> File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 46, in _parse_nets <br class=""> return [net for net in os.environ[AVAIL_NETS_KEY].split()] <br class=""> File "/usr/lib64/python2.7/UserDict.py", line 23, in __getitem__ <br class=""> raise KeyError(key) <br class="">KeyError: 'equivnets' <br class="">
<br class="">
<br class="">(hooks:110) <br class="">2018-05-05 17:53:27,915+0300 <span style="color:rgb(255,255,255);background-color:rgb(0,0,0)" class="">ERROR</span><span style="background-color: rgb(255, 255, 255);" class=""> (vm/e6ce66ce) [virt.vm] (vmId='e6ce66ce-852f-48c5-9997-5d2959432a27') The vm start process failed (vm:943) </span><br class="">Traceback (most recent call last): <br class=""> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm <br class=""> self._run() <br class=""> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2861, in _run <br class=""> domxml = hooks.before_vm_start(self._buildDomainXML(), <br class=""> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2254, in _buildDomainXML <br class=""> dom, <a href="http://self.id/" class="">self.id</a>, self._custom['custom']) <br class=""> File "/usr/lib/python2.7/site-packages/vdsm/virt/domxml_preprocess.py", line 240, in replace_device_xml_with_hooks_xml <br class=""> dev_custom) <br class=""> File "/usr/lib/python2.7/site-packages/vdsm/common/hooks.py", line 134, in before_device_create <br class=""> params=customProperties) <br class=""> File "/usr/lib/python2.7/site-packages/vdsm/common/hooks.py", line 120, in _runHooksDir <br class=""> raise exception.HookError(err) <br class="">HookError: Hook Error: ('vm net allocation hook: [unexpected error]: Traceback (most recent call last):\n File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 105, in <module>\n main()\n<br class=""> File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 93, in main\n allocate_random_network(device_xml)\n File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 62, i<br class="">n allocate_random_network\n net = _get_random_network()\n File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 50, in _get_random_network\n available_nets = _parse_nets()\n File "/us<br 
class="">r/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 46, in _parse_nets\n return [net for net in os.environ[AVAIL_NETS_KEY].split()]\n File "/usr/lib64/python2.7/UserDict.py", line 23, in __getit<br class="">em__\n raise KeyError(key)\nKeyError: \'equivnets\'\n\n\n',)<br class=""></span><br class=""></div><div class=""><br class=""></div><div class=""><br class=""></div><div class="">Hence, the success rate was 28% against 100% running with d/s (d/s). If needed, I'll compare against the latest master, but I think you get the picture with d/s.</div><div class=""><br class=""></div><div class=""><span style="font-family:monospace" class=""><span style="background-color: rgb(255, 255, 255);" class="">vdsm-4.20.27-3.gitfee7810.el7.centos.x86_64 </span><br class="">libvirt-3.9.0-14.el7_5.3.x86_64 <br class="">qemu-kvm-rhev-2.10.0-21.el7_5.2.x86_64 <br class="">kernel 3.10.0-862.el7.x86_64</span></div><div class=""><span style="font-family:monospace" class="">rhel7.5<br class=""></span><br class=""></div><div class=""><br class=""></div><div class="">Logs attached</div></div><div class="gmail_extra"><br class=""><div class="gmail_quote">On Sat, May 5, 2018 at 1:26 PM, Elad Ben Aharon <span dir="ltr" class=""><<a href="mailto:ebenahar@redhat.com" target="_blank" class="">ebenahar@redhat.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr" class="">nvm, found gluster 3.12 repo, managed to install vdsm</div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">On Sat, May 5, 2018 at 1:12 PM, Elad Ben Aharon <span dir="ltr" class=""><<a href="mailto:ebenahar@redhat.com" target="_blank" class="">ebenahar@redhat.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr" class="">No, vdsm requires it:<div class=""><br 
class=""></div><div class=""><span style="font-family:monospace" class=""><span style="background-color: rgb(255, 255, 255);" class="">Error: Package: vdsm-4.20.27-3.gitfee7810.el7.<wbr class="">centos.x86_64 (/vdsm-4.20.27-3.gitfee7810.el<wbr class="">7.centos.x86_64) </span><br class=""> Requires: glusterfs-fuse >= 3.12 <br class=""> Installed: glusterfs-fuse-3.8.4-54.8.el7.<wbr class="">x86_64 (@rhv-4.2.3)<br class=""></span><br class=""></div><div class="">Therefore, vdsm package installation is skipped upon force install.</div></div><div class="m_8270803836802176999HOEnZb"><div class="m_8270803836802176999h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">On Sat, May 5, 2018 at 11:42 AM, Michal Skrivanek <span dir="ltr" class=""><<a href="mailto:michal.skrivanek@redhat.com" target="_blank" class="">michal.skrivanek@redhat.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word;line-break:after-white-space" class=""><br class=""><div class=""><span class=""><br class=""><blockquote type="cite" class=""><div class="">On 5 May 2018, at 00:38, Elad Ben Aharon <<a href="mailto:ebenahar@redhat.com" target="_blank" class="">ebenahar@redhat.com</a>> wrote:</div><br class="m_8270803836802176999m_4224343900157515506m_-5974818518343566788Apple-interchange-newline"><div class=""><div dir="ltr" class="">Hi guys, <div class=""><br class=""></div><div class="">The vdsm build from the patch requires glusterfs-fuse > 3.12. This is while the latest 4.2.3-5 d/s build requires 3.8.4 (<span style="font-family:monospace" class=""><span style="background-color:rgb(255,255,255)" class="">3.4.0.59rhs-1.el7)</span><br class=""></span></div></div></div></blockquote><div class=""><br class=""></div></span>because it is still oVirt, not a downstream build. 
We can’t really do downstream builds with unmerged changes:/</div><div class=""><span class=""><br class=""><blockquote type="cite" class=""><div class=""><div dir="ltr" class=""><div class=""><font face="monospace" class="">Trying to get this gluster-fuse build, so far no luck.</font></div><div class=""><font face="monospace" class="">Is this requirement intentional? </font></div></div></div></blockquote><div class=""><br class=""></div></span>it should work regardless, I guess you can force install it without the dependency</div><div class=""><div class="m_8270803836802176999m_4224343900157515506h5"><div class=""><br class=""><blockquote type="cite" class=""><div class=""><div class="gmail_extra"><br class=""><div class="gmail_quote">On Fri, May 4, 2018 at 2:38 PM, Michal Skrivanek <span dir="ltr" class=""><<a href="mailto:michal.skrivanek@redhat.com" target="_blank" class="">michal.skrivanek@redhat.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word;line-break:after-white-space" class="">Hi Elad,<div class="">to make it easier to compare, Martin backported the change to 4.2 so it is actually comparable with a run without that patch. Would you please try that out? 
</div><div class="">It would be best to have 4.2 upstream and this[1] run to really minimize the noise.</div><div class=""><br class=""></div><div class="">Thanks,</div><div class="">michal</div><div class=""><br class=""></div><div class="">[1] <a href="http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/28/" target="_blank" class="">http://jenkins.ovirt.org/j<wbr class="">ob/vdsm_4.2_build-artifacts-on<wbr class="">-demand-el7-x86_64/28/</a></div><div class=""><br class=""><div class=""><blockquote type="cite" class=""><div class=""><div class="m_8270803836802176999m_4224343900157515506m_-5974818518343566788h5"><div class="">On 27 Apr 2018, at 09:23, Martin Polednik <<a href="mailto:mpolednik@redhat.com" target="_blank" class="">mpolednik@redhat.com</a>> wrote:</div><br class="m_8270803836802176999m_4224343900157515506m_-5974818518343566788m_-2464431127513935993Apple-interchange-newline"></div></div><div class=""><div class=""><div class=""><div class="m_8270803836802176999m_4224343900157515506m_-5974818518343566788h5">On 24/04/18 00:37 +0300, Elad Ben Aharon wrote:<br class=""><blockquote type="cite" class="">I will update with the results of the next tier1 execution on latest 4.2.3<br class=""></blockquote><br class="">That isn't master but old branch though. 
Could you run it against<br class="">*current* VDSM master?<br class=""><br class=""><blockquote type="cite" class="">On Mon, Apr 23, 2018 at 3:56 PM, Martin Polednik <<a href="mailto:mpolednik@redhat.com" target="_blank" class="">mpolednik@redhat.com</a>><br class="">wrote:<br class=""><br class=""><blockquote type="cite" class="">On 23/04/18 01:23 +0300, Elad Ben Aharon wrote:<br class=""><br class=""><blockquote type="cite" class="">Hi, I've triggered another execution [1] due to some issues I saw in the<br class="">first which are not related to the patch.<br class=""><br class="">The success rate is 78%, which is low compared to tier1 executions with<br class="">code from downstream builds (95-100% success rates) [2].<br class=""><br class=""></blockquote><br class="">Could you run the current master (without the dynamic_ownership patch)<br class="">so that we have a viable comparison?<br class=""><br class="">From what I could see so far, there is an issue with move and copy<br class=""><blockquote type="cite" class="">operations to and from Gluster domains. 
For example [3].<br class=""><br class="">The logs are attached.<br class=""><br class=""><br class="">[1]<br class="">*<a href="https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv" target="_blank" class="">https://rhv-jenkins.rhev-ci-v<wbr class="">ms.eng.rdu2.redhat.com/job/rhv</a><br class="">-4.2-ge-runner-tier1-after-upg<wbr class="">rade/7/testReport/<br class=""><<a href="https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv" target="_blank" class="">https://rhv-jenkins.rhev-ci-v<wbr class="">ms.eng.rdu2.redhat.com/job/rhv</a><br class="">-4.2-ge-runner-tier1-after-upg<wbr class="">rade/7/testReport/>*<br class=""><br class=""><br class=""><br class="">[2]<br class=""><a href="https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/" target="_blank" class="">https://rhv-jenkins.rhev-ci-vm<wbr class="">s.eng.rdu2.redhat.com/job/</a><br class=""><br class="">rhv-4.2-ge-runner-tier1-after-<wbr class="">upgrade/7/<br class=""><br class=""><br class=""><br class="">[3]<br class="">2018-04-22 13:06:28,316+0300 INFO (jsonrpc/7) [vdsm.api] FINISH<br class="">deleteImage error=Image does not exist in domain:<br class="">'image=cabb8846-7a4b-4244-9835<wbr class="">-5f603e682f33,<br class="">domain=e5fd29c8-52ba-467e-be09<wbr class="">-ca40ff054dd4'<br class="">from=:<br class="">:ffff:10.35.161.182,40936, flow_id=disks_syncAction_ba6b2<wbr class="">630-5976-4935,<br class="">task_id=3d5f2a8a-881c-409e-93e<wbr class="">9-aaa643c10e42 (api:51)<br class="">2018-04-22 13:06:28,317+0300 ERROR (jsonrpc/7) [storage.TaskManager.Task]<br class="">(Task='3d5f2a8a-881c-409e-93e9<wbr class="">-aaa643c10e42') Unexpected error (task:875)<br class="">Traceback (most recent call last):<br class="">File "/usr/lib/python2.7/site-packa<wbr class="">ges/vdsm/storage/task.py", line 882,<br class="">in<br class="">_run<br class=""> return fn(*args, **kargs)<br class="">File "<string>", line 2, in deleteImage<br class="">File "/usr/lib/python2.7/site-packa<wbr 
class="">ges/vdsm/common/api.py", line 49, in<br class="">method<br class=""> ret = func(*args, **kwargs)<br class="">File "/usr/lib/python2.7/site-packa<wbr class="">ges/vdsm/storage/hsm.py", line 1503,<br class="">in<br class="">deleteImage<br class=""> raise se.ImageDoesNotExistInSD(imgUU<wbr class="">ID, sdUUID)<br class="">ImageDoesNotExistInSD: Image does not exist in domain:<br class="">'image=cabb8846-7a4b-4244-9835<wbr class="">-5f603e682f33,<br class="">domain=e5fd29c8-52ba-467e-be09<wbr class="">-ca40ff054dd4'<br class=""><br class="">2018-04-22 13:06:28,317+0300 INFO (jsonrpc/7) [storage.TaskManager.Task]<br class="">(Task='3d5f2a8a-881c-409e-93e9<wbr class="">-aaa643c10e42') aborting: Task is aborted:<br class="">"Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835<wbr class="">-<br class="">5f603e682f33, domain=e5fd29c8-52ba-467e-be09<wbr class="">-ca40ff054dd4'" - code 268<br class="">(task:1181)<br class="">2018-04-22 13:06:28,318+0300 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH<br class="">deleteImage error=Image does not exist in domain:<br class="">'image=cabb8846-7a4b-4244-9835<wbr class="">-5f603e682f33,<br class="">domain=e5fd29c8-52ba-467e-be09<br class="">-ca40ff054d<br class="">d4' (dispatcher:82)<br class=""><br class=""><br class=""><br class="">On Thu, Apr 19, 2018 at 5:34 PM, Elad Ben Aharon <<a href="mailto:ebenahar@redhat.com" target="_blank" class="">ebenahar@redhat.com</a>><br class="">wrote:<br class=""><br class="">Triggered a sanity tier1 execution [1] using [2], which covers all the<br class=""><blockquote type="cite" class="">requested areas, on iSCSI, NFS and Gluster.<br class="">I'll update with the results.<br class=""><br class="">[1]<br class=""><a href="https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/4.2" target="_blank" class="">https://rhv-jenkins.rhev-ci-vm<wbr class="">s.eng.rdu2.redhat.com/view/4.2</a><br class="">_dev/job/rhv-4.2-ge-flow-stora<wbr class="">ge/1161/<br class=""><br 
class="">[2]<br class=""><a href="https://gerrit.ovirt.org/#/c/89830/" target="_blank" class="">https://gerrit.ovirt.org/#/c/8<wbr class="">9830/</a><br class="">vdsm-4.30.0-291.git77aef9a.el7<wbr class="">.x86_64<br class=""><br class=""><br class=""><br class="">On Thu, Apr 19, 2018 at 3:07 PM, Martin Polednik <<a href="mailto:mpolednik@redhat.com" target="_blank" class="">mpolednik@redhat.com</a>><br class="">wrote:<br class=""><br class="">On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:<br class=""><blockquote type="cite" class=""><br class="">Hi Martin,<br class=""><blockquote type="cite" class=""><br class="">I see [1] requires a rebase, can you please take care?<br class=""><br class=""><br class=""></blockquote>Should be rebased.<br class=""><br class="">At the moment, our automation is stable only on iSCSI, NFS, Gluster and<br class=""><br class=""><blockquote type="cite" class="">FC.<br class="">Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's<br class="">not<br class="">stable enough at the moment.<br class=""><br class=""><br class=""></blockquote>That is still pretty good.<br class=""><br class=""><br class="">[1] <a href="https://gerrit.ovirt.org/#/c/89830/" target="_blank" class="">https://gerrit.ovirt.org/#/c/8<wbr class="">9830/</a><br class=""><br class=""><blockquote type="cite" class=""><br class=""><br class="">Thanks<br class=""><br class="">On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik <<a href="mailto:mpolednik@redhat.com" target="_blank" class="">mpolednik@redhat.com</a><br class="">><br class="">wrote:<br class=""><br class="">On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:<br class=""><br class=""><blockquote type="cite" class=""><br class="">Hi, sorry if I misunderstood, I waited for more input regarding what<br class=""><br class=""><blockquote type="cite" class="">areas<br class="">have to be tested here.<br class=""><br class=""><br class="">I'd say that you have quite a bit of freedom in this regard.<br 
class=""></blockquote>GlusterFS<br class="">should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite<br class="">that covers basic operations (start & stop VM, migrate it), snapshots<br class="">and merging them, and whatever else would be important for storage<br class="">sanity.<br class=""><br class="">mpolednik<br class=""><br class=""><br class="">On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik <<br class=""><a href="mailto:mpolednik@redhat.com" target="_blank" class="">mpolednik@redhat.com</a><br class="">><br class=""><br class="">wrote:<br class=""><blockquote type="cite" class=""><br class="">On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:<br class=""><br class=""><br class=""><blockquote type="cite" class="">We can test this on iSCSI, NFS and GlusterFS. As for ceph and<br class="">cinder,<br class=""><br class="">will<br class=""><blockquote type="cite" class="">have to check, since usually, we don't execute our automation on<br class="">them.<br class=""><br class=""><br class="">Any update on this? 
I believe the gluster tests were successful,<br class="">OST<br class=""><br class=""></blockquote>passes fine and unit tests pass fine, that makes the storage<br class="">backends<br class="">test the last required piece.<br class=""><br class=""><br class="">On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir <<a href="mailto:ratamir@redhat.com" target="_blank" class="">ratamir@redhat.com</a>><br class="">wrote:<br class=""><br class=""><br class="">+Elad<br class=""><blockquote type="cite" class=""><br class=""><br class="">On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg <<a href="mailto:danken@redhat.com" target="_blank" class="">danken@redhat.com</a><br class=""><blockquote type="cite" class="">><br class="">wrote:<br class=""><br class="">On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer <<a href="mailto:nsoffer@redhat.com" target="_blank" class="">nsoffer@redhat.com</a>><br class="">wrote:<br class=""><br class=""><br class="">On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri <<a href="mailto:eedri@redhat.com" target="_blank" class="">eedri@redhat.com</a>><br class=""><blockquote type="cite" class="">wrote:<br class=""><br class=""><br class="">Please make sure to run as much OST suites on this patch as<br class=""><blockquote type="cite" class="">possible<br class=""><br class="">before merging ( using 'ci please build' )<br class=""><br class=""><blockquote type="cite" class=""><br class=""><br class="">But note that OST is not a way to verify the patch.<br class=""><br class=""><br class=""></blockquote>Such changes require testing with all storage types we support.<br class=""><br class="">Nir<br class=""><br class="">On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik <<br class=""><a href="mailto:mpolednik@redhat.com" target="_blank" class="">mpolednik@redhat.com</a><br class="">><br class=""><br class="">wrote:<br class=""><br class=""><br class=""><blockquote type="cite" class="">Hey,<br class=""><br class=""><br class="">I've created a patch[0] that is finally able to 
activate<br class=""><blockquote type="cite" class="">libvirt's<br class="">dynamic_ownership for VDSM while not negatively affecting<br class="">functionality of our storage code.<br class=""><br class="">That of course comes with quite a bit of code removal, mostly<br class="">in<br class="">the<br class="">area of host devices, hwrng and anything that touches devices;<br class="">bunch<br class="">of test changes and one XML generation caveat (storage is<br class="">handled<br class="">by<br class="">VDSM, therefore disk relabelling needs to be disabled on the<br class="">VDSM<br class="">level).<br class=""><br class="">Because of the scope of the patch, I welcome<br class="">storage/virt/network<br class="">people to review the code and consider the implication this<br class="">change<br class="">has<br class="">on current/future features.<br class=""><br class="">[0] <a href="https://gerrit.ovirt.org/#/c/89830/" target="_blank" class="">https://gerrit.ovirt.org/#/c/8<wbr class="">9830/</a><br class=""><br class=""><br class="">In particular: dynamic_ownership was set to 0 prehistorically<br class="">(as<br class=""><br class=""><br class=""></blockquote>part<br class=""><br class=""></blockquote><br class="">of <a href="https://bugzilla.redhat.com/show_bug.cgi?id=554961" target="_blank" class="">https://bugzilla.redhat.com/sh<wbr class="">ow_bug.cgi?id=554961</a> ) because<br class=""></blockquote>libvirt,<br class="">running as root, was not able to play properly with root-squash<br class="">nfs<br class="">mounts.<br class=""><br class="">Have you attempted this use case?<br class=""><br class="">I join to Nir's request to run this with storage QE.<br class=""><br class=""><br class=""><br class=""><br class="">--<br class=""></blockquote><br class=""><br class="">Raz Tamir<br class="">Manager, RHV QE<br class=""><br class=""><br class=""><br class=""><br class=""><br 
class=""></blockquote></blockquote></blockquote></blockquote></blockquote></blockquote></blockquote><br class=""></blockquote></blockquote><br class=""><br class=""></blockquote></blockquote></div></div><span class="">______________________________<wbr class="">_________________<br class="">Devel mailing list<br class=""><a href="mailto:Devel@ovirt.org" target="_blank" class="">Devel@ovirt.org</a><br class=""><a href="http://lists.ovirt.org/mailman/listinfo/devel" target="_blank" class="">http://lists.ovirt.org/mailman<wbr class="">/listinfo/devel</a><br class=""><br class=""><br class=""></span></div></div></blockquote></div><br class=""></div></div></blockquote></div><br class=""></div>
</div></blockquote></div><br class=""></div></div></div></blockquote></div><br class=""></div>
</div></div></blockquote></div><br class=""></div>
</div></div></blockquote></div><br class=""></div>
<span id="cid:77382D15-7BFB-4164-A6D0-F8FA5BE5E692@mrkev"><logs.tar.gz></span></div></blockquote></div><br class=""></div></body></html>