<div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Wed, Nov 15, 2017 at 8:58 AM Misak Khachatryan <<a href="mailto:kmisak@gmail.com">kmisak@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
will there be a cleaner approach? I can't tolerate a full stop of all<br>
VMs just to enable it; that seems too disruptive for a real production<br>
environment. Will there be a migration mechanism in the future?<br></blockquote><div><br></div><div>You can enable it per VM; you don't need to stop all of them. But I think</div><div>we do not support upgrading a host with running VMs, so upgrading </div><div>requires:</div><div><br></div><div>1. migrating the VMs from the host you want to upgrade</div><div>2. upgrading the host</div><div>3. stopping the VM you want to upgrade to libgfapi</div><div>4. starting this VM on the upgraded host</div><div><br></div><div>In theory QEMU could switch from one disk to another, but I'm not</div><div>sure this is supported when switching to the same disk over a different</div><div>transport. I know that mirroring a network drive to another network</div><div>drive is currently not supported.</div><div><br></div><div>The old disk is using:</div><div><br></div><div><div> <disk device="disk" snapshot="no" type="file"></div><div> <source file="/rhev/data-center/mnt/server:_volname/sd_id/images/img_id/vol_id"/></div><div> <target bus="virtio" dev="vda"/></div><div> <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/></div><div> </disk></div></div><div><br></div><div>The new disk should use:</div><div><br></div><div><div> <disk device="disk" snapshot="no" type="network"></div><div> <source name="volname/sd_id/images/img_id/vol_id" protocol="gluster"></div><div> <host name="1.2.3.4" port="0" transport="tcp"/></div><div> </source></div><div> <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/></div><div> </disk></div></div><div><br></div><div>Adding qemu-block mailing list.</div><div><br></div><div>Nir</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
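For illustration, the mapping between the two disk definitions above can be sketched in Python. This is only a sketch of the path-to-volume-name translation, not what Vdsm actually does, and it assumes the /rhev/data-center/mnt/&lt;server&gt;:_&lt;volname&gt;/... mount layout shown in the old definition:

```python
import xml.etree.ElementTree as ET

# The file-based definition quoted above (placeholder ids kept as-is).
OLD = """<disk device="disk" snapshot="no" type="file">
 <source file="/rhev/data-center/mnt/server:_volname/sd_id/images/img_id/vol_id"/>
 <target bus="virtio" dev="vda"/>
 <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
</disk>"""

def to_gfapi(disk_xml, host, port="0", transport="tcp"):
    disk = ET.fromstring(disk_xml)
    disk.set("type", "network")
    source = disk.find("source")
    # /rhev/data-center/mnt/<server>:_<volname>/<rest> -> <volname>/<rest>
    path = source.attrib.pop("file")
    mnt, rest = path[len("/rhev/data-center/mnt/"):].split("/", 1)
    source.set("name", mnt.split(":_", 1)[1] + "/" + rest)
    source.set("protocol", "gluster")
    ET.SubElement(source, "host",
                  {"name": host, "port": port, "transport": transport})
    return ET.tostring(disk, encoding="unicode")

print(to_gfapi(OLD, "1.2.3.4"))
```

Note the port="0" in the resulting host element: libvirt treats port 0 as "use the default gluster port".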
<br>
Best regards,<br>
Misak Khachatryan<br>
<br>
<br>
On Fri, Nov 10, 2017 at 12:35 AM, Darrell Budic <<a href="mailto:budic@onholyground.com" target="_blank">budic@onholyground.com</a>> wrote:<br>
> You do need to stop the VMs and restart them, not just issue a reboot. I<br>
> haven't tried under 4.2 yet, but it works that way for me in 4.1.6.<br>
><br>
> ________________________________<br>
> From: Alessandro De Salvo <<a href="mailto:Alessandro.DeSalvo@roma1.infn.it" target="_blank">Alessandro.DeSalvo@roma1.infn.it</a>><br>
> Subject: Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2<br>
> Date: November 9, 2017 at 2:35:01 AM CST<br>
> To: <a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a><br>
><br>
><br>
> Hi again,<br>
><br>
> OK, I tried stopping all the VMs except the engine, setting engine-config -s<br>
> LibgfApiSupported=true (for 4.2 only) and restarting the engine.<br>
><br>
> When I restarted the VMs they were still not using gfapi, so it does<br>
> not seem to help.<br>
><br>
> Cheers,<br>
><br>
><br>
> Alessandro<br>
><br>
><br>
><br>
> Il 09/11/17 09:12, Alessandro De Salvo ha scritto:<br>
><br>
> Hi,<br>
> Where should I enable gfapi in the UI?<br>
> The only command I tried was engine-config -s LibgfApiSupported=true, but the<br>
> result is what is shown in my output below, so it is set to true for v4.2. Is<br>
> that enough?<br>
> I'll try restarting the engine. Is it really necessary to stop and restart<br>
> all the VMs? Of course this is a test setup and I can do it, but for<br>
> production clusters in the future it may be a problem.<br>
> Thanks,<br>
><br>
> Alessandro<br>
><br>
> Il giorno 09 nov 2017, alle ore 07:23, Kasturi Narra <<a href="mailto:knarra@redhat.com" target="_blank">knarra@redhat.com</a>> ha<br>
> scritto:<br>
><br>
> Hi ,<br>
><br>
> The procedure to enable gfapi is below.<br>
><br>
> 1) Stop all the running VMs<br>
> 2) Enable gfapi via the UI or the engine-config command<br>
> 3) Restart the ovirt-engine service<br>
> 4) Start the VMs<br>
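As a sketch, steps 2 and 3 above boil down to two commands run on the engine host; the helper below only assembles them (the --cver flag scopes the setting to one cluster compatibility version):

```python
# Sketch of steps 2-3 above as commands for the engine host.
# VM stop/start (steps 1 and 4) is done via the UI/API and not shown here.
def libgfapi_enable_commands(version="4.2"):
    return [
        # Step 2: enable the flag for the given compatibility version
        ["engine-config", "-s", "LibgfApiSupported=true", "--cver=" + version],
        # Step 3: restart the engine so the change takes effect
        ["systemctl", "restart", "ovirt-engine"],
    ]

for cmd in libgfapi_enable_commands():
    print(" ".join(cmd))
```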
><br>
> Hope you have not missed any of these!<br>
><br>
> Thanks<br>
> kasturi<br>
><br>
> On Wed, Nov 8, 2017 at 11:58 PM, Alessandro De Salvo<br>
> <<a href="mailto:Alessandro.DeSalvo@roma1.infn.it" target="_blank">Alessandro.DeSalvo@roma1.infn.it</a>> wrote:<br>
>><br>
>> Hi,<br>
>><br>
>> I'm using the latest 4.2 beta release and want to try gfapi disk access,<br>
>> but I'm currently failing to use it.<br>
>><br>
>> My test setup has an external glusterfs cluster v3.12, not managed by<br>
>> oVirt.<br>
>><br>
>> The compatibility flag is correctly showing gfapi should be enabled with<br>
>> 4.2:<br>
>><br>
>> # engine-config -g LibgfApiSupported<br>
>> LibgfApiSupported: false version: 3.6<br>
>> LibgfApiSupported: false version: 4.0<br>
>> LibgfApiSupported: false version: 4.1<br>
>> LibgfApiSupported: true version: 4.2<br>
>><br>
>> The data center and cluster have the 4.2 compatibility flags as well.<br>
>><br>
>> However, when starting a VM with a disk on gluster I can still see that<br>
>> the disk is mounted via fuse.<br>
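One way to check this is to inspect the disk elements in the running VM's libvirt XML (e.g. from virsh -r dumpxml). A small sketch, under the assumption that type="network" with protocol="gluster" means gfapi and anything else falls back to the fuse mount:

```python
import xml.etree.ElementTree as ET

def disk_transports(domain_xml):
    # Map each disk's target device to the transport it appears to use.
    results = {}
    for disk in ET.fromstring(domain_xml).iter("disk"):
        source = disk.find("source")
        dev = disk.find("target").get("dev")
        uses_gfapi = (disk.get("type") == "network"
                      and source is not None
                      and source.get("protocol") == "gluster")
        results[dev] = "gfapi" if uses_gfapi else "fuse/file"
    return results

# Trimmed example of what `virsh -r dumpxml <vm>` might contain.
SAMPLE = """<domain><devices>
<disk device="disk" type="file">
 <source file="/rhev/data-center/mnt/server:_volname/sd_id/images/img_id/vol_id"/>
 <target bus="virtio" dev="vda"/>
</disk>
</devices></domain>"""

print(disk_transports(SAMPLE))  # -> {'vda': 'fuse/file'}
```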
>><br>
>> Any clue of what I'm still missing?<br>
>><br>
>> Thanks,<br>
>><br>
>><br>
>> Alessandro<br>
>><br>
>> _______________________________________________<br>
>> Users mailing list<br>
>> <a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
>> <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
</blockquote></div></div>