
Hi,

I was trying to move a disk between gluster storage domains, without success.

# gluster --version
glusterfs 6.10

Now I have a VM with this message: "The VM has snapshot(s) with disk(s) in illegal status. Please don't shutdown the VM before successfully retrying the snapshot delete". The oVirt version is 4.3.10.4-1.el7. I cannot delete the snapshot. What should I do?

Thanks
--
Jose Ferradeira
http://www.logicworks.pt

Can you try a live migration? I had a similar case, and the live migration somehow triggered a fix.

Best Regards,
Strahil Nikolov

On Friday, 20 November 2020, 21:04:13 GMT+2, <suporte@logicworks.pt> wrote:
[quoted message trimmed]

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/EAYBO6KEKQSSZR...

I tried to move the VM disk with the VM up. I also tried to move or copy the disk with the VM down, and I get the same error. The strange thing is that I have a gluster storage domain with an older gluster version, and that one worked: I was able to move the disk to this older gluster domain, and also to copy the disk from it. What should I do with this VM? Should I shut it down and try to delete the snapshot?

Regards
José

From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: users@ovirt.org, suporte@logicworks.pt
Sent: Friday, 20 November 2020 21:05:19
Subject: Re: [ovirt-users] Unable to move or copy disks
[quoted messages trimmed]

I can recommend you to:
- enable the debug log level on gluster's bricks
- try to reproduce the issue

I had a similar issue with gluster v6.6 and above.

Best Regards,
Strahil Nikolov

On Friday, 20 November 2020, 23:35:14 GMT+2, <suporte@logicworks.pt> wrote:
[quoted messages trimmed]

With the older gluster version this does not happen. I always get this error:

VDSM NODE3 command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: ()

and in the vdsm.log:

ERROR (tasks/7) [storage.Image] Copy image error: image=6939cc5d-dbca-488d-ab7a-c8b8d39c3656, src domain=70e33d55-f1be-4826-b85a-9650c76c8db8, dst domain=0b80eac1-8bbb-4634-9098-4155602c7b38 (image:485)
ERROR (tasks/7) [storage.TaskManager.Task] (Task='fbf02f6b-c107-4c1e-a9c7-b13261bf99e0') Unexpected error (task:875)

For smaller installations, without the need for storage HA, maybe it's better to use NFS? Is it more stable?

Regards
José

From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Friday, 20 November 2020 21:55:28
Subject: Re: [ovirt-users] Unable to move or copy disks
[quoted messages trimmed]
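As a side note, the UUIDs buried in that "Copy image error" line identify the image and the source/destination storage domains, which is what you need when matching these entries against the storage-domain directories on the hosts. A minimal sketch of pulling them out, using only the log line quoted above (the pattern just matches 36-character runs of hex digits and dashes):

```shell
# Extract the image and storage-domain UUIDs from the vdsm.log error line
# quoted above; a UUID is a 36-character run of hex digits and dashes.
line='ERROR (tasks/7) [storage.Image] Copy image error: image=6939cc5d-dbca-488d-ab7a-c8b8d39c3656, src domain=70e33d55-f1be-4826-b85a-9650c76c8db8, dst domain=0b80eac1-8bbb-4634-9098-4155602c7b38 (image:485)'
printf '%s\n' "$line" | grep -oE '[0-9a-f-]{36}'
```

On a real host you would typically grep /var/log/vdsm/vdsm.log for the task ID instead, and the domain/image UUIDs then map to directories under the storage domain mount points.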

You still haven't provided debug logs from the Gluster bricks. There will always be a chance that a bug hits you, no matter the OS and the technology. What matters is how you debug and overcome that bug. Check the gluster brick debug logs, and you can test whether the issue happens with an older version. Also, consider providing the oVirt version, the Gluster version and some details about your setup - otherwise helping you is almost impossible.

Best Regards,
Strahil Nikolov

On Saturday, 21 November 2020, 18:16:13 GMT+2, suporte@logicworks.pt <suporte@logicworks.pt> wrote:
[quoted messages trimmed]

Do I need to restart gluster after enabling the debug level?

gluster volume set data2 diagnostics.brick-log-level DEBUG

From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Saturday, 21 November 2020 19:42:44
Subject: Re: [ovirt-users] Re: Unable to move or copy disks
[quoted messages trimmed]

No, but keep an eye on your /var/log, as the debug level produces a lot of output. Usually, when you get a failure to move the disk, you can disable it and check the logs.

Best Regards,
Strahil Nikolov

On Sunday, 22 November 2020, 21:12:26 GMT+2, <suporte@logicworks.pt> wrote:
[quoted messages trimmed]
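The enable/reproduce/disable cycle discussed above can be sketched as follows. This is a minimal sketch, not a definitive procedure: the volume name data2 comes from this thread, and the brick log path assumes the usual /var/log/glusterfs/bricks/ location, which may differ on your nodes.

```shell
# Sketch of the debug cycle discussed above; run on a node serving the "data2" volume.
# No restart is needed: the log-level change takes effect immediately.

# 1. Raise the brick log level to DEBUG.
gluster volume set data2 diagnostics.brick-log-level DEBUG

# 2. Reproduce the failing disk move/copy from the oVirt UI, then capture the logs
#    (assumed default brick log location).
tail -n 500 /var/log/glusterfs/bricks/*.log > /tmp/brick-debug-capture.log

# 3. Drop back to a quieter level so /var/log does not fill up.
gluster volume set data2 diagnostics.brick-log-level INFO
```

Alternatively, `gluster volume reset data2 diagnostics.brick-log-level` restores the option to its default instead of setting INFO explicitly.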

Hello,

This is weird. Today I was able to do a live migration. It has happened to me before: sometimes I can do a disk migration, some other times I can't, without changing anything. But with the VM down it fails.

Version 4.3.10.4-1.el7
# gluster --version
glusterfs 6.10

LIVE MIGRATION gluster brick log:

[2020-11-25 09:40:43.097977] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x61cd3) [0x7f2324efdcd3] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2
[2020-11-25 09:40:43.097988] D [MSGID: 0] [posix-metadata.c:131:posix_fetch_mdata_xattr] 0-data-posix: No such attribute:trusted.glusterfs.mdata for file null gfid: fafaa24f-7174-4fb6-b9ac-6d10974598ed
[2020-11-25 09:40:43.097995] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 3
[2020-11-25 09:40:43.098011] D [logging.c:2006:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2020-11-25 09:40:43.098014] D [socket.c:720:__socket_rwv] 0-tcp.data-server: would have passed zero length to read/write
[2020-11-25 09:40:43.098003] D [MSGID: 0] [posix-metadata.c:131:posix_fetch_mdata_xattr] 0-data-posix: No such attribute:trusted.glusterfs.mdata for file null gfid: fafaa24f-7174-4fb6-b9ac-6d10974598ed
[2020-11-25 09:40:43.098010] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: READ scheduled as slow priority fop
[2020-11-25 09:40:43.098045] D [MSGID: 0] [posix-metadata.c:131:posix_fetch_mdata_xattr] 0-data-posix: No such attribute:trusted.glusterfs.mdata for file null gfid: fafaa24f-7174-4fb6-b9ac-6d10974598ed
[2020-11-25 09:40:43.098047] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x61cd3) [0x7f2324efdcd3] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2
[2020-11-25 09:40:43.098046] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 3

[root@gfs3 bricks]# tail home-brick1.log
[2020-11-25 09:40:53.843509] D [socket.c:720:__socket_rwv] 0-tcp.data-server: would have passed zero length to read/write
[2020-11-25 09:40:53.843517] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 4
[2020-11-25 09:40:53.843520] D [socket.c:720:__socket_rwv] 0-tcp.data-server: would have passed zero length to read/write
[2020-11-25 09:40:53.843530] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x61cd3) [0x7f2324efdcd3] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 3
[2020-11-25 09:40:53.843530] D [logging.c:2006:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2020-11-25 09:40:53.843545] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x61cd3) [0x7f2324efdcd3] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2
The message "D [MSGID: 0] [posix-metadata.c:131:posix_fetch_mdata_xattr] 0-data-posix: No such attribute:trusted.glusterfs.mdata for file null gfid: d6a457b3-d600-477c-a123-2f939b96f6fc" repeated 3 times between [2020-11-25 09:40:53.843456] and [2020-11-25 09:40:53.843496]
[2020-11-25 09:40:53.843529] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: READ scheduled as slow priority fop
[2020-11-25 09:40:53.843578] D [MSGID: 0] [posix-metadata.c:131:posix_fetch_mdata_xattr] 0-data-posix: No such attribute:trusted.glusterfs.mdata for file null gfid: d6a457b3-d600-477c-a123-2f939b96f6fc
[2020-11-25 09:40:53.843586] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 3

[root@gfs3 bricks]# tail home-brick1.log
[2020-11-25 09:41:05.902502] D [socket.c:720:__socket_rwv] 0-tcp.data-server: would have passed zero length to read/write
[2020-11-25 09:41:05.902521] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x61cd3) [0x7f2324efdcd3] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2
[2020-11-25 09:41:05.902527] D [socket.c:720:__socket_rwv] 0-tcp.data-server: would have passed zero length to read/write
[2020-11-25 09:41:05.902545] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x61cd3) [0x7f2324efdcd3] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1
[2020-11-25 09:41:05.902590] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2
[2020-11-25 09:41:05.902600] D [logging.c:2006:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
The message "D [MSGID: 0] [posix-metadata.c:131:posix_fetch_mdata_xattr] 0-data-posix: No such attribute:trusted.glusterfs.mdata for file null gfid: 9fca07e6-5d59-4c99-8129-bca123f0d876" repeated 2 times between [2020-11-25 09:41:05.902476] and [2020-11-25 09:41:05.902518]
[2020-11-25 09:41:05.902600] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: READ scheduled as slow priority fop
[2020-11-25 09:41:05.902638] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:4dc35afd-d6a1-4aaa-9d25-6254f6a3df6d-GRAPH_ID:0-PID:4191-HOST:node3.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 3
[2020-11-25 09:41:05.902639] D [MSGID: 0] [posix-metadata.c:131:posix_fetch_mdata_xattr] 0-data-posix: No such attribute:trusted.glusterfs.mdata for file null gfid: 9fca07e6-5d59-4c99-8129-bca123f0d876

[root@gfs3 bricks]# tail home-brick1.log
[2020-11-25 09:41:21.536567] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x550eb) [0x7f2324ef10eb] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 7
[2020-11-25 09:41:21.536598] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x550eb) [0x7f2324ef10eb] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 3
[2020-11-25 09:41:21.536646] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x550eb) [0x7f2324ef10eb] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 6
[2020-11-25 09:41:21.536708] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x550eb) [0x7f2324ef10eb] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 5
[2020-11-25 09:41:21.536763] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x550eb) [0x7f2324ef10eb] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1
[2020-11-25 09:41:21.537487] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2
[2020-11-25 09:41:21.537516] D [logging.c:2006:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
The message "D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: FSYNC scheduled as slow priority fop" repeated 8 times between [2020-11-25 09:41:21.533386] and [2020-11-25 09:41:21.533940]
[2020-11-25 09:41:21.537516] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: WRITE scheduled as slow priority fop
[2020-11-25 09:41:21.537637] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x555ab) [0x7f2324ef15ab] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1

[root@gfs3 bricks]# tail home-brick1.log
[2020-11-25 09:41:33.722393] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x555ab) [0x7f2324ef15ab] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1
[2020-11-25 09:41:33.722927] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2
[2020-11-25 09:41:33.722956] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 3
[2020-11-25 09:41:33.722961] D [logging.c:2006:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
The message "D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: WRITE scheduled as slow priority fop" repeated 13 times between [2020-11-25 09:41:33.216986] and [2020-11-25 09:41:33.722236]
[2020-11-25 09:41:33.722959] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: FSYNC scheduled as slow priority fop
[2020-11-25 09:41:33.723067] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 4
[2020-11-25 09:41:33.723166] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x550eb) [0x7f2324ef10eb] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 3
[2020-11-25 09:41:33.723445]
D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x550eb) [0x7f2324ef10eb] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:41:33.723984] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x550eb) [0x7f2324ef10eb] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [root@gfs3 bricks]# tail home-brick1.log [2020-11-25 09:41:34.968525] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:41:34.968544] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: FLUSH scheduled as normal priority fop [2020-11-25 09:41:34.968642] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x4fbad) [0x7f2324eebbad] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [2020-11-25 09:41:35.742535] D [client_t.c:324:gf_client_ref] 
(-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:41:35.742574] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: FSTAT scheduled as fast priority fop [2020-11-25 09:41:35.742668] D [MSGID: 101016] [glusterfs3.h:781:dict_to_xdr] 0-dict: key 'trusted.glusterfs.shard.file-size' would not be sent on wire in the future [Invalid argument] [2020-11-25 09:41:35.742729] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x6311e) [0x7f2324eff11e] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [2020-11-25 09:41:36.608888] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:41:36.608925] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: WRITE scheduled as slow priority fop [2020-11-25 09:41:36.609122] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x555ab) [0x7f2324ef15ab] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 
0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [root@gfs3 bricks]# tail home-brick1.log [2020-11-25 09:41:40.208226] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x58c3b) [0x7f2324ef4c3b] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [2020-11-25 09:41:40.208932] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:41:40.209125] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x58c3b) [0x7f2324ef4c3b] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [2020-11-25 09:41:40.209653] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:41:40.209687] D 
[logging.c:2006:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk The message "D [MSGID: 101016] [glusterfs3.h:781:dict_to_xdr] 0-dict: key 'trusted.glusterfs.mdata' would not be sent on wire in the future [Invalid argument]" repeated 2 times between [2020-11-25 09:41:39.550205] and [2020-11-25 09:41:40.209075] [2020-11-25 09:41:40.209686] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: STATFS scheduled as fast priority fop [2020-11-25 09:41:40.209820] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x5c0f3) [0x7f2324ef80f3] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [2020-11-25 09:41:40.211481] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:41:40.211628] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x5c0f3) [0x7f2324ef80f3] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:7606cd9d-c765-49cc-95a4-cabc3bcffdce-GRAPH_ID:0-PID:12047-HOST:node5.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [root@gfs3 bricks]# tail home-brick1.log [2020-11-25 09:41:44.702715] D [client_t.c:324:gf_client_ref] 
(-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:41:44.702738] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: OPEN scheduled as fast priority fop [2020-11-25 09:41:44.702869] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x53319) [0x7f2324eef319] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [2020-11-25 09:41:44.703204] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:41:44.703331] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x61cd3) [0x7f2324efdcd3] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [2020-11-25 09:41:44.704059] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] 
-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:41:44.704079] D [logging.c:2006:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk [2020-11-25 09:41:44.703220] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: READ scheduled as slow priority fop [2020-11-25 09:41:44.704078] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: FLUSH scheduled as normal priority fop [2020-11-25 09:41:44.704171] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x4fbad) [0x7f2324eebbad] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [root@gfs3 bricks]# tail home-brick1.log [2020-11-25 09:42:14.711157] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:42:14.711316] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x53319) [0x7f2324eef319] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: 
CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [2020-11-25 09:42:14.711859] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:42:14.711875] D [logging.c:2006:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk The message "D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: OPEN scheduled as fast priority fop" repeated 2 times between [2020-11-25 09:42:14.214506] and [2020-11-25 09:42:14.711180] [2020-11-25 09:42:14.711874] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: READ scheduled as slow priority fop [2020-11-25 09:42:14.712000] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x61cd3) [0x7f2324efdcd3] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [2020-11-25 09:42:14.712651] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:42:14.712670] D 
[MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: FLUSH scheduled as normal priority fop [2020-11-25 09:42:14.712769] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x4fbad) [0x7f2324eebbad] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:3f9791be-32b5-42fb-8292-c180eab1ddd6-GRAPH_ID:0-PID:1265-HOST:node2.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [root@gfs3 bricks]# tail home-brick1.log [2020-11-25 09:42:56.482283] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:42:56.482323] D [logging.c:2006:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. 
About to flush least recently used log message to disk The message "D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: FSYNC scheduled as slow priority fop" repeated 2 times between [2020-11-25 09:42:56.463753] and [2020-11-25 09:42:56.464000] [2020-11-25 09:42:56.482320] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: WRITE scheduled as slow priority fop [2020-11-25 09:42:56.482497] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x555ab) [0x7f2324ef15ab] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1 [2020-11-25 09:42:56.483120] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2 [2020-11-25 09:42:56.483138] D [client_t.c:324:gf_client_ref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x34a85) [0x7f2324ed0a85] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x1171d) [0x7f2324ead71d] -->/lib64/libglusterfs.so.0(gf_client_ref+0x6e) [0x7f233a64438e] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 3 [2020-11-25 09:42:56.483140] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data-io-threads: FSYNC scheduled as slow priority fop [2020-11-25 09:42:56.483274] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x550eb) [0x7f2324ef10eb] 
-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 2
[2020-11-25 09:42:56.483683] D [client_t.c:433:gf_client_unref] (-->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0x550eb) [0x7f2324ef10eb] -->/usr/lib64/glusterfs/6.10/xlator/protocol/server.so(+0xaecb) [0x7f2324ea6ecb] -->/lib64/libglusterfs.so.0(gf_client_unref+0x7b) [0x7f233a6444db] ) 0-client_t: CTX_ID:831d9da7-0563-40d2-bfc1-964284a8b556-GRAPH_ID:0-PID:4201-HOST:node4.acloud.pt-PC_NAME:data-client-0-RECON_NO:-0: ref-count 1

VM down brick log:

You can find the logs here: https://drive.acloud.pt/s/F5xpXU3sSl3T09Z

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Monday, 23 November 2020 5:45:37
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

No, but keep an eye on your "/var/log", as debug provides a lot of info. Usually when you get a failure to move the disk, you can disable it and check the logs.

Best Regards,
Strahil Nikolov

On Sunday, 22 November 2020, 21:12:26 GMT+2, <suporte@logicworks.pt> wrote:

Do I need to restart gluster after enabling the debug level?

gluster volume set data2 diagnostics.brick-log-level DEBUG

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Saturday, 21 November 2020 19:42:44
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

You still haven't provided debug logs from the Gluster bricks. There will always be a chance that a bug hits you, no matter the OS and tech. What matters is how you debug and overcome that bug. Check the gluster brick debug logs, and you can test whether the issue happens with an older version. Also, consider providing the oVirt version, Gluster version and some details about your setup - otherwise helping you is almost impossible.

Best Regards,
Strahil Nikolov

On Saturday, 21 November 2020, 18:16:13 GMT+2, suporte@logicworks.pt <suporte@logicworks.pt> wrote:

With an older gluster version this does not happen.

I always get this error:

VDSM NODE3 command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: ()

and in the vdsm.log:

ERROR (tasks/7) [storage.Image] Copy image error: image=6939cc5d-dbca-488d-ab7a-c8b8d39c3656, src domain=70e33d55-f1be-4826-b85a-9650c76c8db8, dst domain=0b80eac1-8bbb-4634-9098-4155602c7b38 (image:485)
ERROR (tasks/7) [storage.TaskManager.Task] (Task='fbf02f6b-c107-4c1e-a9c7-b13261bf99e0') Unexpected error (task:875)

For smaller installations without the need of storage HA, maybe it's better to use NFS? Is it more stable?

Regards
José

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Friday, 20 November 2020 21:55:28
Subject: Re: [ovirt-users] Unable to move or copy disks

I can recommend you to:
- enable debug level on the gluster bricks
- try to reproduce the issue

I had a similar issue with gluster v6.6 and above.

Best Regards,
Strahil Nikolov

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VIVNXE4C3MZTCL...
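The brick logs above were captured with diagnostics.brick-log-level set to DEBUG, which buries the occasional warning or error under ref-count chatter. One quick way to sift a brick log for the noteworthy entries is to filter on the one-letter severity field. This is a minimal Python sketch, assuming the standard gluster log line format seen in the dumps above (timestamp, then a severity letter); the log path in the comment is just an example:

```python
import re

# Gluster brick log lines look like:
#   [2020-11-25 09:41:21.536646] D [client_t.c:433:gf_client_unref] (...)
# where the letter after the timestamp is the severity:
# T=trace, D=debug, I=info, W=warning, E=error, C=critical.
SEVERITY = re.compile(r"^\[\d{4}-\d{2}-\d{2} [\d:.]+\] ([TDIWEC]) ")

def is_noteworthy(line: str) -> bool:
    """True for warning/error/critical entries; False for trace/debug/info."""
    m = SEVERITY.match(line)
    return bool(m) and m.group(1) in "WEC"

# Example usage (path is illustrative):
# with open("/var/log/glusterfs/bricks/home-brick1.log") as f:
#     for line in f:
#         if is_noteworthy(line):
#             print(line, end="")
```

Grepping for `" E "` or `" W "` after the timestamp achieves the same thing; the point is that at DEBUG level the real failures do not stand out by volume alone.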

I really don't understand this.

I have 2 glusters, same version, 6.10. I can move a disk from gluster2 to gluster1, but cannot move the same disk from gluster1 to gluster2.

oVirt version: 4.3.10.4-1.el7

Regards
José

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VIVNXE4C3MZTCL...

I don't find any error in the gluster logs, I just find this error in the vdsm log: 2020-11-29 12:57:45,528+0000 INFO (tasks/1) [storage.SANLock] Successfully released Lease(name='61d85180-65a4-452d-8773-db778f56e242', path=u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242.lease', offset=0) (clusterlock:524) 2020-11-29 12:57:45,528+0000 ERROR (tasks/1) [root] Job u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' failed (jobs:221) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 157, in run self._run() File "/usr/lib/python2.7/site-packages/vdsm/storage/sdm/api/copy_data.py", line 86, in _run self._operation.run() File "/usr/lib/python2.7/site-packages/vdsm/storage/qemuimg.py", line 343, in run for data in self._operation.watch(): File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 106, in watch self._finalize(b"", err) File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 179, in _finalize raise cmdutils.Error(self._cmd, rc, out, err) Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242', '-O', 'raw', u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242'] failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading sector 134086625: No such file or directory\n') 2020-11-29 12:57:45,528+0000 INFO (tasks/1) [root] Job u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' will be deleted in 3600 seconds (jobs:249) 2020-11-29 12:57:45,529+0000 INFO (tasks/1) [storage.ThreadPool.WorkerThread] FINISH task 309c4289-fbba-489b-94c7-8aed36948c29 
(threadPool:210)

Any idea?

Regards
José

________________________________
From: suporte@logicworks.pt
To: "Strahil Nikolov" <hunter86_bg@yahoo.com>
Cc: users@ovirt.org
Sent: Saturday, 28 November 2020 18:39:47
Subject: [ovirt-users] Re: Unable to move or copy disks

I really don't understand this. I have 2 glusters, same version, 6.10. I can move a disk from gluster2 to gluster1, but cannot move the same disk from gluster1 to gluster2.

ovirt version: 4.3.10.4-1.el7

Regards
José

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Monday, 23 November 2020 5:45:37
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

No, but keep an eye on your "/var/log", as debug level provides a lot of info. Usually when you get a failure to move the disk, you can disable it and check the logs.

Best Regards,
Strahil Nikolov

On Sunday, 22 November 2020, 21:12:26 GMT+2, <suporte@logicworks.pt> wrote:

Do I need to restart gluster after enabling debug level?

gluster volume set data2 diagnostics.brick-log-level DEBUG

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Saturday, 21 November 2020 19:42:44
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

You still haven't provided debug logs from the Gluster bricks. There is always a chance that a bug hits you, no matter the OS and tech. What matters is how you debug and overcome that bug. Check the gluster brick debug logs, and you can test whether the issue happens with an older version. Also, consider providing the oVirt version, Gluster version and some details about your setup - otherwise helping you is almost impossible.

Best Regards,
Strahil Nikolov

On Saturday, 21 November 2020, 18:16:13 GMT+2, suporte@logicworks.pt wrote:

With an older gluster version this does not happen. I always get this error:

VDSM NODE3 command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: ()

and in the vdsm.log:

ERROR (tasks/7) [storage.Image] Copy image error: image=6939cc5d-dbca-488d-ab7a-c8b8d39c3656, src domain=70e33d55-f1be-4826-b85a-9650c76c8db8, dst domain=0b80eac1-8bbb-4634-9098-4155602c7b38 (image:485)
ERROR (tasks/7) [storage.TaskManager.Task] (Task='fbf02f6b-c107-4c1e-a9c7-b13261bf99e0') Unexpected error (task:875)

For smaller installations, without the need of storage HA, maybe it's better to use NFS? Is it more stable?

Regards
José

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Friday, 20 November 2020 21:55:28
Subject: Re: [ovirt-users] Unable to move or copy disks

I can recommend you to:
- enable debug level of gluster's bricks
- try to reproduce the issue

I had a similar issue with gluster v6.6 and above.

Best Regards,
Strahil Nikolov
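The qemu-img error above reports the failed read in 512-byte sectors. As a rough sketch (assuming the gluster default shard-block-size of 64 MiB if sharding is enabled on the volume; the helper names are mine, not vdsm's), the sector number can be converted to a byte offset and a candidate shard index:

```shell
# Illustrative helpers: map a qemu-img sector number to a byte offset,
# and map that offset to the shard that would hold it (assumes the
# gluster default shard-block-size of 64 MiB).
sector_to_offset() {
  echo $(( $1 * 512 ))                  # qemu-img reports 512-byte sectors
}
offset_to_shard() {
  echo $(( $1 / (64 * 1024 * 1024) ))   # assumes shard-block-size = 64MB
}

sector_to_offset 134086625                              # -> 68652352000
offset_to_shard "$(sector_to_offset 134086625)"         # -> 1022
```

If the numbers line up with a missing backing file on a brick, that points at a lost shard rather than corruption of the base image.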

Sorry, I found this error in the gluster logs:

[MSGID: 113040] [posix-helpers.c:1929:__posix_fd_ctx_get] 0-gfs1data-posix: Failed to get anonymous fd for real_path: /home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f. [No such file or directory]
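The path in that message is gluster's internal gfid hardlink: `<brick>/.glusterfs/<first two hex chars of the gfid>/<next two>/<gfid>`. A sketch of how that path is derived from a gfid, and how one might look for a named file sharing the same inode on the brick (`gfid_to_backend_path` is an illustrative helper, not a gluster command):

```shell
# Illustrative helper: gluster keeps a hardlink for every backend file at
# <brick>/.glusterfs/<gfid[0:2]>/<gfid[2:4]>/<gfid>.
gfid_to_backend_path() {
  b=$1; g=$2
  printf '%s/.glusterfs/%s/%s/%s\n' "$b" \
    "$(printf %s "$g" | cut -c1-2)" \
    "$(printf %s "$g" | cut -c3-4)" \
    "$g"
}

P=$(gfid_to_backend_path /home/brick1 bc57653e-b08c-417b-83f3-bf234a97e30f)
echo "$P"   # the exact path from the brick log
# On the brick host: if the hardlink still exists, locate the named file
# that shares its inode (skipping the .glusterfs tree itself).
{ [ -e "$P" ] && find /home/brick1 -samefile "$P" -not -path '*/.glusterfs/*'; } || true
```

If the `.glusterfs` hardlink is gone but the error keeps naming it, the backend file the volume expects has disappeared, which matches the "No such file or directory" seen by qemu-img.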

Are you sure you don't have any heals pending? I should admit I have never seen this type of error. Is it happening for all VMs or only specific ones?

Best Regards,
Strahil Nikolov
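A quick way to answer the heals question is to sum the per-brick entry counts reported by `gluster volume heal <volname> info`. A sketch (`count_heal_entries` is an illustrative helper; the volume name is taken from the thread):

```shell
# Illustrative helper: sum "Number of entries: N" lines across all bricks
# in `gluster volume heal <volname> info` output; 0 means nothing pending.
count_heal_entries() {
  awk '/^Number of entries:/ { n += $4 } END { print n + 0 }'
}

# Run against the real volume only where the gluster CLI exists:
if command -v gluster >/dev/null 2>&1; then
  gluster volume heal gfs1data info | count_heal_entries
fi
```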

No heals pending.

There are some VMs whose disk I can move, but some other VMs whose disk I cannot move.

It's a simple gluster:

# gluster volume info

Volume Name: gfs1data
Type: Distribute
Volume ID: 7e6826b9-1220-49d4-a4bf-e7f50f38c42c
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfs1.server.pt:/home/brick1
Options Reconfigured:
diagnostics.brick-log-level: INFO
performance.client-io-threads: off
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: yes
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on
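Since `features.shard` is on, only the first shard of each image lives at the visible path; the rest is stored as `<base-gfid>.<index>` files under `<brick>/.shard` on the brick. A missing shard file would produce exactly this kind of "No such file or directory" during a sequential read. A sketch for inspecting the shard files (`base_gfid` is an illustrative helper; the brick path comes from the volume info above):

```shell
# Illustrative helper: strip the shard index from a shard file name,
# e.g. <gfid>.1022 -> <gfid>.
base_gfid() {
  printf '%s\n' "${1%%.*}"
}

base_gfid bc57653e-b08c-417b-83f3-bf234a97e30f.1022   # -> the base gfid

BRICK=/home/brick1   # from the volume info above
if [ -d "$BRICK/.shard" ]; then
  # List shard files in index order; gaps in an image's sequence are suspects.
  ls "$BRICK/.shard" | sort -t. -k2 -n | head
fi
```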

Usually distributed volumes are supported on a single-node setup, but that shouldn't be the problem. As you know the affected VMs, you can easily find a VM's disks. Then try to read the VM's disk:

sudo -u vdsm dd if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/<Storage_domain_UUID>/images/<DISK-UUID>/<VM_DISK> of=/dev/null bs=4M status=progress

Does it give errors?

Best Regards,
Strahil Nikolov

On Sunday, 29 November 2020, 20:06:42 GMT+2, suporte@logicworks.pt wrote:

No heals pending. There are some VMs whose disk I can move, but other VMs whose disk I cannot. It's a simple gluster volume:

# gluster volume info
Volume Name: gfs1data
Type: Distribute
Volume ID: 7e6826b9-1220-49d4-a4bf-e7f50f38c42c
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfs1.server.pt:/home/brick1
Options Reconfigured:
diagnostics.brick-log-level: INFO
performance.client-io-threads: off
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: yes
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on
________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Sunday, 29 November 2020 17:27:04
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

Are you sure you don't have any heals pending? I must admit I have never seen this type of error. Is it happening for all VMs or only specific ones?

Best Regards,
Strahil Nikolov

On Sunday, 29 November 2020, 15:37:04 GMT+2, suporte@logicworks.pt wrote:

Sorry, I found this error in the gluster logs:

[MSGID: 113040] [posix-helpers.c:1929:__posix_fd_ctx_get] 0-gfs1data-posix: Failed to get anonymous fd for real_path: /home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f. [No such file or directory]
________________________________
From: suporte@logicworks.pt
To: "Strahil Nikolov" <hunter86_bg@yahoo.com>
Cc: users@ovirt.org
Sent: Sunday, 29 November 2020 13:13:00
Subject: [ovirt-users] Re: Unable to move or copy disks

I don't find any errors in the gluster logs; I only find this error in the vdsm log:

2020-11-29 12:57:45,528+0000 INFO (tasks/1) [storage.SANLock] Successfully released Lease(name='61d85180-65a4-452d-8773-db778f56e242', path=u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242.lease', offset=0) (clusterlock:524)
2020-11-29 12:57:45,528+0000 ERROR (tasks/1) [root] Job u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' failed (jobs:221)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 157, in run
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdm/api/copy_data.py", line 86, in _run
    self._operation.run()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/qemuimg.py", line 343, in run
    for data in self._operation.watch():
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 106, in watch
    self._finalize(b"", err)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 179, in _finalize
    raise cmdutils.Error(self._cmd, rc, out, err)
Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242', '-O', 'raw', u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242'] failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading sector 134086625: No such file or directory\n')
2020-11-29 12:57:45,528+0000 INFO (tasks/1) [root] Job u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' will be deleted in 3600 seconds (jobs:249)
2020-11-29 12:57:45,529+0000 INFO (tasks/1) [storage.ThreadPool.WorkerThread] FINISH task 309c4289-fbba-489b-94c7-8aed36948c29 (threadPool:210)

Any idea?

Regards
José
________________________________
From: suporte@logicworks.pt
To: "Strahil Nikolov" <hunter86_bg@yahoo.com>
Cc: users@ovirt.org
Sent: Saturday, 28 November 2020 18:39:47
Subject: [ovirt-users] Re: Unable to move or copy disks

I really don't understand this. I have two glusters, same version 6.10. I can move a disk from gluster2 to gluster1, but cannot move the same disk from gluster1 to gluster2.

ovirt version: 4.3.10.4-1.el7

Regards
José
________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Monday, 23 November 2020 5:45:37
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

No, but keep an eye on your "/var/log", as debug level produces a lot of output. When you get a failure to move the disk, you can disable debug and check the logs.

Best Regards,
Strahil Nikolov

On Sunday, 22 November 2020, 21:12:26 GMT+2, suporte@logicworks.pt wrote:

Do I need to restart gluster after enabling debug level?

gluster volume set data2 diagnostics.brick-log-level DEBUG
________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Saturday, 21 November 2020 19:42:44
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

You still haven't provided debug logs from the Gluster bricks. There is always a chance that a bug hits you, no matter the OS and technology; what matters is how you debug and overcome that bug. Check the gluster brick debug logs, and you can test whether the issue happens with an older version. Also, consider providing the oVirt version, Gluster version and some details about your setup - otherwise helping you is almost impossible.

Best Regards,
Strahil Nikolov

On Saturday, 21 November 2020, 18:16:13 GMT+2, suporte@logicworks.pt wrote:

With the older gluster version this does not happen. I always get this error:

VDSM NODE3 command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: ()

and in the vdsm.log:

ERROR (tasks/7) [storage.Image] Copy image error: image=6939cc5d-dbca-488d-ab7a-c8b8d39c3656, src domain=70e33d55-f1be-4826-b85a-9650c76c8db8, dst domain=0b80eac1-8bbb-4634-9098-4155602c7b38 (image:485)
ERROR (tasks/7) [storage.TaskManager.Task] (Task='fbf02f6b-c107-4c1e-a9c7-b13261bf99e0') Unexpected error (task:875)

For smaller installations, without the need for storage HA, maybe it's better to use NFS? Is it more stable?

Regards
José
________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Friday, 20 November 2020 21:55:28
Subject: Re: [ovirt-users] Unable to move or copy disks

I can recommend you to:
- enable debug level on gluster's bricks
- try to reproduce the issue

I had a similar issue with gluster v6.6 and above.

Best Regards,
Strahil Nikolov
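The debug-logging suggestion from the thread can be wrapped in a small dry-run helper. This is a hedged sketch, not part of any oVirt tooling: it only prints the `gluster volume set` commands so they can be reviewed before running them on a real node (pipe the output to `sh` to apply). The volume name `gfs1data` is taken from the thread; per Strahil's reply, the log-level option takes effect without restarting gluster.

```shell
# Dry-run sketch: emit the gluster commands for toggling brick debug
# logging around a reproduction attempt. Prints commands only; nothing
# is changed until the output is actually executed on a gluster node.
debug_toggle_cmds() {
    vol=$1
    printf 'gluster volume set %s diagnostics.brick-log-level DEBUG\n' "$vol"
    printf '# ... reproduce the failing disk move, inspect /var/log/glusterfs/bricks/ ...\n'
    printf 'gluster volume set %s diagnostics.brick-log-level INFO\n' "$vol"
}

debug_toggle_cmds gfs1data
```

Resetting the level back to INFO at the end matters because, as noted in the thread, DEBUG produces a lot of output under /var/log.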

No errors:

# sudo -u vdsm dd if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242 of=/dev/null bs=4M status=progress
107336433664 bytes (107 GB) copied, 245.349334 s, 437 MB/s
25600+0 records in
25600+0 records out
107374182400 bytes (107 GB) copied, 245.682 s, 437 MB/s

After this I tried again to move the disk, and surprise: it succeeded. I couldn't believe it. I tried to move another disk, and the same error came back. I did a dd on this other disk and tried again to move it - again successful!
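The observation above (a plain sequential read makes the next move attempt succeed) suggests a pragmatic workaround: "warm-read" every disk image in the storage domain before retrying the move. The following is a sketch, not a vdsm feature: it only prints the `dd` commands (pipe to `sh` to run them), the base path is a placeholder for the real domain mount, and lease/metadata files are skipped.

```shell
# Sketch of a "warm read" pass: print a dd read command for every disk
# volume under a storage-domain mount, mirroring the manual test from
# the thread. Assumes the usual <mount>/<SD_UUID>/images/<IMG>/<VOL>
# layout; commands are printed, not executed.
warm_read_cmds() {
    base=$1   # e.g. /rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/<SD_UUID>
    find "$base/images" -mindepth 2 -maxdepth 2 -type f \
        ! -name '*.lease' ! -name '*.meta' |
    while read -r vol; do
        printf 'sudo -u vdsm dd if=%s of=/dev/null bs=4M status=progress\n' "$vol"
    done
}
```

On the setup from the thread this would emit one dd line per image volume; running them before retrying the disk moves would reproduce the accidental fix systematically.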

This looks like the bug I reported a long time ago. The only fix I found was to create a new gluster volume and "cp -a" all data from the old volume to the new one.

Do you have spare space for a new Gluster volume? If yes, create the new volume and add it to oVirt, then dd the file and move the disk to that new storage. Once you have moved all the VMs' disks, you can get rid of the old Gluster volume and reuse the space.

P.S.: Sadly I didn't have the time to look at your logs.

Best Regards,
Strahil Nikolov
_______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/TTOPYUWNFDHXFQ... _______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NWLJYU2RDLQOH... _______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/KYQSHSJQ67YYCF... _______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4VXQ7QT27RT4N... _______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/EUA72SNMYMOT5X...
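The enable-debug, reproduce, revert loop recommended above can be wrapped in a small helper so the log level is always restored afterwards. A minimal sketch, assuming a POSIX shell on a gluster node; `with_brick_debug` is a hypothetical name, and `diagnostics.brick-log-level` is the same volume option José toggles later in the thread:

```shell
# Raise brick log verbosity only while reproducing the failing disk move,
# then drop it back to INFO so /var/log does not fill up with debug output.
# Usage: with_brick_debug <volume> <command...>
with_brick_debug() {
    vol=$1; shift
    gluster volume set "$vol" diagnostics.brick-log-level DEBUG || return 1
    # Run the reproduction step (e.g. retry the disk move from the engine
    # and wait for it to fail).
    "$@"
    status=$?
    # Restore the default level even if the reproduction step failed.
    gluster volume set "$vol" diagnostics.brick-log-level INFO
    return $status
}
```

For example, `with_brick_debug gfs1data sleep 600` while retrying the move from the engine UI, then inspect the brick logs under /var/log/glusterfs/bricks/. No gluster restart is needed for the option to take effect, as Strahil confirms later in the thread.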

Thanks. Did you use the command cp to copy data between gluster volumes?

Regards
José

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Tuesday, 1 December 2020 8:05:17
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

This looks like the bug I reported a long time ago. The only fix I found was to create a new gluster volume and "cp -a" all data from the old to the new volume.

Do you have spare space for a new Gluster volume? If yes, create the new volume and add it to oVirt, then dd the file and move the disk to that new storage. Once you move all the VM's disks you can get rid of the old Gluster volume and reuse the space.

P.S.: Sadly I didn't have the time to look at your logs.

Best Regards,
Strahil Nikolov

On Monday, 30 November 2020 at 01:22:46 GMT+2, <suporte@logicworks.pt> wrote:

No errors:

# sudo -u vdsm dd if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242 of=/dev/null bs=4M status=progress
107336433664 bytes (107 GB) copied, 245.349334 s, 437 MB/s
25600+0 records in
25600+0 records out
107374182400 bytes (107 GB) copied, 245.682 s, 437 MB/s

After this I tried again to move the disk and, surprise, it succeeded. I didn't believe it. I tried to move another disk and the same error came back. I did a dd on this other disk and tried to move it again - again it succeeded!

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Sunday, 29 November 2020 20:22:36
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

Usually distributed volumes are supported only on a single-node setup, but it shouldn't be the problem. As you know the affected VMs, you can easily find the disks of a VM. Then try to read the VM's disk:

sudo -u vdsm dd if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/<Storage_domain_UUID>/images/<DISK-UUID>/<VM_DISK> of=/dev/null bs=4M status=progress

Does it give errors?

Best Regards,
Strahil Nikolov

On Sunday, 29 November 2020 at 20:06:42 GMT+2, suporte@logicworks.pt <suporte@logicworks.pt> wrote:

No heals pending. There are some VMs whose disk I can move, but for some other VMs I cannot move the disk. It's a simple gluster volume:

# gluster volume info

Volume Name: gfs1data
Type: Distribute
Volume ID: 7e6826b9-1220-49d4-a4bf-e7f50f38c42c
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfs1.server.pt:/home/brick1
Options Reconfigured:
diagnostics.brick-log-level: INFO
performance.client-io-threads: off
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: yes
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Sunday, 29 November 2020 17:27:04
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

Are you sure you don't have any heals pending? I should admit I have never seen this type of error. Is it happening for all VMs or only specific ones?
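The dd read test above is the workaround that turns out to matter: José reports that after a full dd read of a disk, the move succeeds. On a sharded volume such a read touches every shard of the image. A sketch of looping it over all disks of a storage domain - `preread_domain` is a hypothetical helper, and the `<mount>/images/<DISK-UUID>/<volume>` layout is taken from the paths quoted in this thread:

```shell
# Read every data file of every disk image in a storage domain as the
# vdsm user, discarding the output. Point it at the mounted domain, e.g.
# /rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/<Storage_domain_UUID>
preread_domain() {
    for img in "$1"/images/*/; do
        for vol in "$img"*; do
            case "$vol" in
                *.meta|*.lease) continue ;;   # skip metadata, read only data volumes
            esac
            echo "reading $vol"
            sudo -u vdsm dd if="$vol" of=/dev/null bs=4M status=progress || return 1
        done
    done
}
```

Running this before retrying the moves costs only sequential read time (José measured ~437 MB/s) and avoids discovering the failure one disk at a time.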
Best Regards,
Strahil Nikolov

On Sunday, 29 November 2020 at 15:37:04 GMT+2, suporte@logicworks.pt <suporte@logicworks.pt> wrote:

Sorry, I found this error in the gluster logs:

[MSGID: 113040] [posix-helpers.c:1929:__posix_fd_ctx_get] 0-gfs1data-posix: Failed to get anonymous fd for real_path: /home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f. [No such file or directory]

________________________________
From: suporte@logicworks.pt
To: "Strahil Nikolov" <hunter86_bg@yahoo.com>
Cc: users@ovirt.org
Sent: Sunday, 29 November 2020 13:13:00
Subject: [ovirt-users] Re: Unable to move or copy disks

I don't find any error in the gluster logs; I just find this error in the vdsm log:

2020-11-29 12:57:45,528+0000 INFO (tasks/1) [storage.SANLock] Successfully released Lease(name='61d85180-65a4-452d-8773-db778f56e242', path=u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242.lease', offset=0) (clusterlock:524)
2020-11-29 12:57:45,528+0000 ERROR (tasks/1) [root] Job u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' failed (jobs:221)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 157, in run
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdm/api/copy_data.py", line 86, in _run
    self._operation.run()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/qemuimg.py", line 343, in run
    for data in self._operation.watch():
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 106, in watch
    self._finalize(b"", err)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 179, in _finalize
    raise cmdutils.Error(self._cmd, rc, out, err)
Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242', '-O', 'raw', u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242'] failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading sector 134086625: No such file or directory\n')
2020-11-29 12:57:45,528+0000 INFO (tasks/1) [root] Job u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' will be deleted in 3600 seconds (jobs:249)
2020-11-29 12:57:45,529+0000 INFO (tasks/1) [storage.ThreadPool.WorkerThread] FINISH task 309c4289-fbba-489b-94c7-8aed36948c29 (threadPool:210)

Any idea?

Regards
José

________________________________
From: suporte@logicworks.pt
To: "Strahil Nikolov" <hunter86_bg@yahoo.com>
Cc: users@ovirt.org
Sent: Saturday, 28 November 2020 18:39:47
Subject: [ovirt-users] Re: Unable to move or copy disks

I really don't understand this. I have 2 glusters with the same version, 6.10. I can move a disk from gluster2 to gluster1, but cannot move the same disk from gluster1 to gluster2.

ovirt version: 4.3.10.4-1.el7

Regards
José

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Monday, 23 November 2020 5:45:37
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

No, but keep an eye on your "/var/log", as debug produces a lot of info. Usually when you get a failure to move the disk, you can disable it and check the logs.

Best Regards,
Strahil Nikolov

On Sunday, 22 November 2020 at 21:12:26 GMT+2, <suporte@logicworks.pt> wrote:

Do I need to restart gluster after enabling the debug level?

gluster volume set data2 diagnostics.brick-log-level DEBUG

________________________________
From: "Strahil Nikolov" <hunter86_bg@yahoo.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Saturday, 21 November 2020 19:42:44
Subject: Re: [ovirt-users] Re: Unable to move or copy disks

You still haven't provided debug logs from the Gluster bricks. There will always be a chance that a bug hits you - no matter the OS and tech. What matters is how you debug and overcome that bug. Check the gluster brick debug logs and you can test whether the issue happens with an older version.

Best Regards,
Strahil Nikolov
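The two errors quoted in this message point at the same thing: qemu-img fails while reading one specific sector, and the brick reports a file missing under .glusterfs. Since the volume has features.shard: on, one hedged way to correlate them is to map the failing sector to a shard-block index. A sketch, assuming the gluster default features.shard-block-size of 64 MiB (the volume info above does not override it); `sector_to_shard` is a hypothetical helper:

```shell
# Map qemu-img's "error while reading sector N" (512-byte sectors) to the
# shard-block index covering that offset. For index >= 1 the block lives
# in .shard/<base-gfid>.<index> on the brick; index 0 is the base file.
sector_to_shard() {
    sector=$1
    shard_bytes=${2:-67108864}   # assumed default: 64 MiB shard blocks
    echo $(( sector * 512 / shard_bytes ))
}

sector_to_shard 134086625   # sector from the vdsm traceback -> 1022
```

This only helps locate the affected file under the brick's .shard directory when comparing against brick-log errors; it does not by itself explain why the dd read workaround helps.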

Actually I'd have to dig through the mailing list, because I can't remember the exact steps, and if you miss something everything can go wild. I have the vague feeling that I just copied the data inside the volume and then renamed the master directories. There is a catch - oVirt is not very smart and it doesn't expect any foreign data to reside there. Of course, I could survive the downtime.

Best Regards,
Strahil Nikolov

On Tuesday, 1 December 2020 at 19:40:28 GMT+2, suporte@logicworks.pt <suporte@logicworks.pt> wrote:

Thanks. Did you use the command cp to copy data between gluster volumes?

Regards
José
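Strahil's last-resort fix - create a new gluster volume, copy everything across with `cp -a`, and repoint oVirt - can be outlined as below. This is a sketch under loud assumptions, not a tested procedure: `copy_domain` and both mount points are hypothetical, the storage domain should be detached or in maintenance first, and `cp -a` is chosen because it preserves the 36:36 (vdsm:kvm) ownership that the volume options in this thread enforce:

```shell
# Offline copy of a storage domain's contents from an old gluster volume
# mount to a new one. -a preserves ownership, permissions and timestamps.
# Usage: copy_domain <old-mount> <new-mount>
copy_domain() {
    old=$1 new=$2
    cp -a "$old"/. "$new"/ || return 1
    # Cheap sanity check before repointing oVirt: equal file counts.
    oc=$(find "$old" -type f | wc -l | tr -d ' ')
    nc=$(find "$new" -type f | wc -l | tr -d ' ')
    [ "$oc" -eq "$nc" ] && echo "copy looks complete: $nc files"
}
```

As Strahil warns above, oVirt does not expect foreign data in a storage domain, so any leftovers should be cleaned up before the new volume is imported.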
participants (2):
- Strahil Nikolov
- suporte@logicworks.pt