[ovirt-users] Enabling libgfapi disk access with oVirt 4.2

Misak Khachatryan kmisak at gmail.com
Thu Nov 16 06:41:28 UTC 2017


Hi,

I don't mean an on-the-fly upgrade; I was just confused that I would have
to stop all VMs at once, as I understood the procedure. If it can be done
per VM, that's perfectly OK for me.

Thank you, Nir, for the clarification.

Best regards,
Misak Khachatryan


On Thu, Nov 16, 2017 at 2:05 AM, Nir Soffer <nsoffer at redhat.com> wrote:
> On Wed, Nov 15, 2017 at 8:58 AM Misak Khachatryan <kmisak at gmail.com> wrote:
>>
>> Hi,
>>
>> will there be a cleaner approach? I can't tolerate a full stop of all
>> VMs just to enable it; that seems too disruptive for a real production
>> environment. Will there be some migration mechanism in the future?
>
>
> You can enable it per VM; you don't need to stop all of them. But I think
> we do not support upgrading a machine with running VMs, so upgrading
> requires:
>
> 1. migrating VMs from the host you want to upgrade
> 2. upgrading the host
> 3. stopping the VM you want to switch to libgfapi
> 4. starting this VM on the upgraded host (a quick check is sketched below)
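>
> After step 4 you can check from the host whether the restarted VM really
> got the libgfapi disk. This is only a rough sketch: the VM name is a
> placeholder, and it assumes a read-only virsh connection is allowed on
> the host:
>
>     # fuse: the disk is type="file" with a /rhev/data-center/... path
>     # libgfapi: the disk is type="network" with protocol="gluster"
>     virsh -r dumpxml my-vm | grep -E '<disk |<source '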
>
> Theoretically qemu could switch from one disk to another, but I'm not
> sure this is supported when switching to the same disk using different
> transports. I know that mirroring a network drive to another network
> drive is not supported at the moment.
>
> The old disk is using:
>
>             <disk device="disk" snapshot="no" type="file">
>                 <source file="/rhev/data-center/mnt/server:_volname/sd_id/images/img_id/vol_id"/>
>                 <target bus="virtio" dev="vda"/>
>                 <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
>             </disk>
>
> The new disk should use:
>
>             <disk device="disk" snapshot="no" type="network">
>                 <source name="volname/sd_id/images/img_id/vol_id" protocol="gluster">
>                     <host name="1.2.3.4" port="0" transport="tcp"/>
>                 </source>
>                 <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
>             </disk>
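>
> For a running VM the same difference shows up on the qemu command line
> (an illustrative check only; newer qemu/libvirt versions may render the
> gluster source as a JSON blob instead of a URL):
>
>     # fuse: the drive is a path under /rhev/data-center/mnt/...
>     # libgfapi: the drive is a gluster:// URL
>     ps -ww -o args -C qemu-kvm | grep -oE 'gluster://[^, ]+|/rhev/data-center/mnt/[^, ]+'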
>
> Adding qemu-block mailing list.
>
> Nir
>
>>
>>
>> Best regards,
>> Misak Khachatryan
>>
>>
>> On Fri, Nov 10, 2017 at 12:35 AM, Darrell Budic <budic at onholyground.com>
>> wrote:
>> > You do need to stop the VMs and restart them, not just issue a reboot. I
>> > haven't tried under 4.2 yet, but it works that way for me in 4.1.6.
>> >
>> > ________________________________
>> > From: Alessandro De Salvo <Alessandro.DeSalvo at roma1.infn.it>
>> > Subject: Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2
>> > Date: November 9, 2017 at 2:35:01 AM CST
>> > To: users at ovirt.org
>> >
>> >
>> > Hi again,
>> >
>> > OK, I tried stopping all the VMs except the engine, set engine-config -s
>> > LibgfApiSupported=true (for 4.2 only), and restarted the engine.
>> >
>> > When I restarted the VMs they were still not using gfapi, so this does
>> > not seem to help.
>> >
>> > Cheers,
>> >
>> >
>> >     Alessandro
>> >
>> >
>> >
>> > On 09/11/17 09:12, Alessandro De Salvo wrote:
>> >
>> > Hi,
>> > where should I enable gfapi via the UI?
>> > The only command I tried was engine-config -s LibgfApiSupported=true, but
>> > the result is what is shown in my output below, so it's set to true for
>> > v4.2. Is that enough?
>> > I'll try restarting the engine. Is it really necessary to stop all the VMs
>> > and restart them? Of course this is a test setup and I can do it, but for
>> > production clusters in the future it may be a problem.
>> > Thanks,
>> >
>> >    Alessandro
>> >
>> > On 9 Nov 2017, at 07:23, Kasturi Narra <knarra at redhat.com> wrote:
>> >
>> > Hi,
>> >
>> >     The procedure to enable gfapi is below.
>> >
>> > 1) Stop all the running VMs.
>> > 2) Enable gfapi via the UI or using the engine-config command (see the
>> >    example below).
>> > 3) Restart the ovirt-engine service.
>> > 4) Start the VMs.
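>> >
>> > For steps 2 and 3 on the engine host, the command-line form would look
>> > roughly like this (a sketch; --cver selects the cluster compatibility
>> > level, and 4.2 is just an example here):
>> >
>> >     # enable libgfapi for the 4.2 cluster level, then restart the engine
>> >     engine-config -s LibgfApiSupported=true --cver=4.2
>> >     systemctl restart ovirt-engine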
>> >
>> > Hope you have not missed any of these!
>> >
>> > Thanks
>> > kasturi
>> >
>> > On Wed, Nov 8, 2017 at 11:58 PM, Alessandro De Salvo
>> > <Alessandro.DeSalvo at roma1.infn.it> wrote:
>> >>
>> >> Hi,
>> >>
>> >> I'm using the latest 4.2 beta release and want to try gfapi access,
>> >> but so far I'm failing to get it working.
>> >>
>> >> My test setup has an external glusterfs cluster v3.12, not managed by
>> >> oVirt.
>> >>
>> >> The compatibility flag correctly shows that gfapi should be enabled
>> >> with 4.2:
>> >>
>> >> # engine-config -g LibgfApiSupported
>> >> LibgfApiSupported: false version: 3.6
>> >> LibgfApiSupported: false version: 4.0
>> >> LibgfApiSupported: false version: 4.1
>> >> LibgfApiSupported: true version: 4.2
>> >>
>> >> The data center and cluster have the 4.2 compatibility flags as well.
>> >>
>> >> However, when starting a VM with a disk on gluster I can still see that
>> >> the disk is accessed via the FUSE mount.
>> >>
>> >> Any clue as to what I'm still missing?
>> >>
>> >> Thanks,
>> >>
>> >>
>> >>    Alessandro
>> >>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

