4.3 live migration creates wrong image permissions

After upgrading from 4.2 to 4.3, when a VM live migrates its disk images become owned by root:root. The live migration succeeds and the VM stays up, but after shutting the VM down from that point, starting it again fails. I then have to go in and change the permissions on the images back to vdsm:kvm, and the VM boots again.

On Thu, Jun 13, 2019, 12:19 Alex McWhirter <alex@triadic.us> wrote:
> after upgrading from 4.2 to 4.3, after a vm live migrates its disk images become owned by root:root. [...]
This is a known issue with the early 4.3 releases; please upgrade to the latest 4.3.

Nir
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGS...

On Thu, Jun 13, 2019 at 11:18 AM Alex McWhirter <alex@triadic.us> wrote:
> after upgrading from 4.2 to 4.3, after a vm live migrates its disk images become owned by root:root. [...]
We had an old bug about that: https://bugzilla.redhat.com/show_bug.cgi?id=1666795 but it's reported as fixed. Can you please detail the exact version of ovirt-engine and vdsm you are using on all of your hosts?
--
Simone Tiraboschi
Principal Software Engineer
Red Hat <https://www.redhat.com/>
stirabos@redhat.com

engine: 4.3.4.2-1.el7

Node versions:
OS Version: RHEL - 7 - 6.1810.2.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 957.12.2.el7.x86_64
KVM Version: 2.12.0 - 18.el7_6.5.1
LIBVIRT Version: libvirt-4.5.0-10.el7_6.10
VDSM Version: vdsm-4.30.17-1.el7
SPICE Version: 0.14.0 - 6.el7_6.1
GlusterFS Version: [N/A]

On 2019-06-13 06:51, Simone Tiraboschi wrote:
> We had an old bug about that: https://bugzilla.redhat.com/show_bug.cgi?id=1666795 but it's reported as fixed. Can you please detail the exact version of ovirt-engine and vdsm you are using on all of your hosts?

Hi,
It seems that you hit this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1666795
Adding +Milan Zamazal <mzamazal@redhat.com>; can you please confirm?

Regards,
Shani Leviim

On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter <alex@triadic.us> wrote:
> after upgrading from 4.2 to 4.3, after a vm live migrates its disk images become owned by root:root. [...]

Shani Leviim <sleviim@redhat.com> writes:
> Hi, it seems that you hit this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1666795
> Adding +Milan Zamazal <mzamazal@redhat.com>, can you please confirm?
There may still be problems when using GlusterFS with libgfapi: https://bugzilla.redhat.com/1719789. What's your Vdsm version and which kind of storage do you use?

Yes, we are using GlusterFS distributed-replicate with libgfapi, VDSM 4.30.17.

On 2019-06-13 10:37, Milan Zamazal wrote:
> There may still be problems when using GlusterFS with libgfapi: https://bugzilla.redhat.com/1719789. What's your Vdsm version and which kind of storage do you use?

In this case, I should be able to edit /etc/libvirt/qemu.conf on all the nodes to disable dynamic ownership as a temporary measure until this is patched for libgfapi?

On 2019-06-13 10:37, Milan Zamazal wrote:
> There may still be problems when using GlusterFS with libgfapi: https://bugzilla.redhat.com/1719789.
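For reference, the change being floated here would presumably be libvirt's `dynamic_ownership` setting; a minimal sketch of the edit (assuming that is the setting meant, and noting that the reply below advises against it, since libvirt would then stop managing ownership for all devices, not just gluster disks):

```
# /etc/libvirt/qemu.conf on each node -- illustrative sketch only
# 0 = libvirt does not change ownership of disks/devices itself
dynamic_ownership = 0
```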

Alex McWhirter <alex@triadic.us> writes:
> In this case, I should be able to edit /etc/libvirt/qemu.conf on all the nodes to disable dynamic ownership as a temporary measure until this is patched for libgfapi?
No, other devices might have permission problems in such a case.

On Fri, Jun 14, 2019 at 7:05 PM Milan Zamazal <mzamazal@redhat.com> wrote:
> Alex McWhirter <alex@triadic.us> writes:
>> In this case, I should be able to edit /etc/libvirt/qemu.conf on all the nodes to disable dynamic ownership as a temporary measure until this is patched for libgfapi?
> No, other devices might have permission problems in such a case.
I wonder how libvirt could change the permissions for devices it does not know about? When using libgfapi, we pass libvirt:

    <disk name='vda' snapshot='external' type='network'>
      <source protocol='gluster' name='volume/11111111-1111-1111-1111-111111111111' type='network'>
        <host name="brick1.example.com" port="49152" transport="tcp"/>
        <host name="brick2.example.com" port="49153" transport="tcp"/>
      </source>
    </disk>

So libvirt does not have the path to the file, and it cannot change the permissions.

Alex, can you reproduce this flow and attach vdsm and engine logs from all hosts to the bug?

Nir

I have gone in and changed the libvirt configuration files on the cluster nodes, which has resolved the issue for the time being. I can revert one of them and post the logs to help with the issue, hopefully tomorrow.

On 2019-06-14 17:56, Nir Soffer wrote:
> Alex, can you reproduce this flow and attach vdsm and engine logs from all hosts to the bug?

Can you attach vdsm and engine logs? Does this happen for new VMs as well?

On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter <alex@triadic.us> wrote:
> after upgrading from 4.2 to 4.3, after a vm live migrates its disk images become owned by root:root. [...]

Also, what is the storage domain type? Block or File?

On Thu, Jun 13, 2019 at 2:46 PM Benny Zlotnik <bzlotnik@redhat.com> wrote:
> Can you attach vdsm and engine logs? Does this happen for new VMs as well?

Gluster storage type. I'll do some migrations and attach logs from the same period shortly.

On 2019-06-13 07:47, Benny Zlotnik wrote:
> Also, what is the storage domain type? Block or File?
participants (6)
- Alex McWhirter
- Benny Zlotnik
- Milan Zamazal
- Nir Soffer
- Shani Leviim
- Simone Tiraboschi