Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via libgfapi

Hello,

thank you for helping me with my problem. I replayed the migration (10:38:02 local time) and recorded the vdsm.log of source and destination, as attached. I can't find anything in the gluster logs that shows an error. One note: my FQDN glusterfs.rxmgmt.databay.de points to all the gluster hosts:

glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.121
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.125
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.127
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.122
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.124
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.123
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.126
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.128

I double checked all gluster hosts. They are all configured the same regarding "option rpc-auth-allow-insecure on". No iptables rules on the hosts.

Bye

On 24.08.2017 at 16:38, Yaniv Kaul wrote:
Can you also post the destination VDSM log to the users mailing list?
On Thu, Aug 24, 2017 at 4:55 PM, Ralf Schenk <rs@databay.de> wrote:
Hello,
nice to hear it worked for you.
Attached you will find the vdsm.log (from the migration source) including the error, and the engine.log, which looks OK.
Hostnames/IP addresses are correct and use the ovirtmgmt network.
I checked (on both hosts):
[root@microcloud22 glusterfs]# gluster volume get gv0 storage.owner-uid
Option                     Value
------                     -----
storage.owner-uid          36

[root@microcloud22 glusterfs]# gluster volume get gv0 storage.owner-gid
Option                     Value
------                     -----
storage.owner-gid          36

[root@microcloud22 glusterfs]# gluster volume get gv0 server.allow-insecure
Option                     Value
------                     -----
server.allow-insecure      on
and /etc/glusterfs/glusterd.vol:
[root@microcloud22 glusterfs]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
    option rpc-auth-allow-insecure on
#   option transport.address-family inet6
#   option base-port 49152
end-volume
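For reference, a minimal sketch of how these volume-level options are normally set with the standard gluster CLI (using the volume name gv0 from the output above):

    # run on any gluster host; uid/gid 36 is the vdsm/kvm user oVirt expects,
    # and server.allow-insecure lets clients connect from unprivileged ports (needed for libgfapi)
    gluster volume set gv0 storage.owner-uid 36
    gluster volume set gv0 storage.owner-gid 36
    gluster volume set gv0 server.allow-insecure on

The rpc-auth-allow-insecure line in glusterd.vol applies the same relaxation to the management daemon and only takes effect after glusterd has been restarted.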
Bye
On 24.08.2017 at 15:25, Denis Chaplygin wrote:
Hello!
On Thu, Aug 24, 2017 at 3:07 PM, Ralf Schenk <rs@databay.de> wrote:
Responsiveness of the VM is much better (already seen when updating OS packages).
But I'm not able to live-migrate the machine to another host in the cluster. The Manager only states "Migration failed".
Live migration worked for me.
Could you please provide some details? Engine/vdsm logs from +/- 10 minutes around the migration failure.
--
Ralf Schenk, fon +49 (0) 24 05 / 40 83 70, fax +49 (0) 24 05 / 40 83 759, mail rs@databay.de, Databay AG, Jens-Otto-Krag-Straße 11, D-52146 Würselen, www.databay.de
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202 Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
-- Ralf Schenk, fon +49 (0) 24 05 / 40 83 70, fax +49 (0) 24 05 / 40 83 759, mail rs@databay.de, Databay AG, Jens-Otto-Krag-Straße 11, D-52146 Würselen, www.databay.de. Sitz/Amtsgericht Aachen • HRB: 8437 • USt-IdNr.: DE 210844202. Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns. Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------

Hello! On Fri, Aug 25, 2017 at 11:05 AM, Ralf Schenk <rs@databay.de> wrote:
I replayed the migration (10:38:02 local time) and recorded the vdsm.log of source and destination, as attached. I can't find anything in the gluster logs that shows an error. One note: my FQDN glusterfs.rxmgmt.databay.de points to all the gluster hosts:

glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.121
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.125
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.127
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.122
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.124
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.123
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.126
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.128

I double checked all gluster hosts. They are all configured the same regarding "option rpc-auth-allow-insecure on". No iptables rules on the hosts.
Do you use "glusterfs.rxmgmt.databay.de" as the storage domain host name? I'm not a gluster guru, but I'm afraid that some internal gluster client code may go crazy when it receives a different address, or several IP addresses, every time. Is it possible to try with separate names? You can create a storage domain using 172.16.252.121, for example, and it should work, bypassing your DNS. If that is possible, could you please do it and retry live migration?
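A quick way to check whether libgfapi itself can reach the volume through a single address (again bypassing the round-robin name) would be something like the following on one of the hosts; this is only a sketch, and the image path below is a placeholder rather than a real path from this setup:

    # query an image header directly over libgfapi, pinned to one gluster server
    qemu-img info gluster://172.16.252.121/gv0/<path-to-an-image-on-the-domain>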

Hello,

I have been using the DNS-balanced gluster hostname for years now, not only with oVirt. No software so far has had a problem with it. And pointing the hostname to only one host of course breaks one advantage of a distributed/replicated cluster file system, namely load-balancing the connections to the storage and/or failover if one host is missing. In earlier oVirt it wasn't possible to specify something like "backupvolfile-server" for a highly available hosted-engine rollout (which I use).

I already used live migration in such a setup. That was done with a pure libvirt/virsh setup and later using OpenNebula.

Bye

On 25.08.2017 at 13:11, Denis Chaplygin wrote:
Hello!
On Fri, Aug 25, 2017 at 11:05 AM, Ralf Schenk <rs@databay.de> wrote:
I replayed the migration (10:38:02 local time) and recorded the vdsm.log of source and destination, as attached. I can't find anything in the gluster logs that shows an error. One note: my FQDN glusterfs.rxmgmt.databay.de points to all the gluster hosts:

glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.121
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.125
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.127
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.122
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.124
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.123
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.126
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.128

I double checked all gluster hosts. They are all configured the same regarding "option rpc-auth-allow-insecure on". No iptables rules on the hosts.

Do you use "glusterfs.rxmgmt.databay.de" as the storage domain host name? I'm not a gluster guru, but I'm afraid that some internal gluster client code may go crazy when it receives a different address, or several IP addresses, every time. Is it possible to try with separate names? You can create a storage domain using 172.16.252.121, for example, and it should work, bypassing your DNS. If that is possible, could you please do it and retry live migration?
-- Ralf Schenk, fon +49 (0) 24 05 / 40 83 70, fax +49 (0) 24 05 / 40 83 759, mail rs@databay.de, Databay AG, Jens-Otto-Krag-Straße 11, D-52146 Würselen, www.databay.de. Sitz/Amtsgericht Aachen • HRB: 8437 • USt-IdNr.: DE 210844202. Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns. Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------

Hello! On Fri, Aug 25, 2017 at 1:40 PM, Ralf Schenk <rs@databay.de> wrote:
Hello,
I have been using the DNS-balanced gluster hostname for years now, not only with oVirt. No software so far has had a problem with it. And pointing the hostname to only one host of course breaks one advantage of a distributed/replicated cluster file system, namely load-balancing the connections to the storage and/or failover if one host is missing. In earlier oVirt it wasn't possible to specify something like "backupvolfile-server" for a highly available hosted-engine rollout (which I use).
As far as I know, backup-volfile-servers is the recommended way to keep your filesystem mountable in case of a server failure. While the fs is mounted, gluster will automatically provide failover. And you definitely can specify backup-volfile-servers in the storage domain configuration.
I already used live migration in such a setup. That was done with a pure libvirt/virsh setup and later using OpenNebula.
Yes, but that was based on accessing the gluster volume as a mounted filesystem, not directly... And I would like to exclude that from the list of possible causes.
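For illustration, a fuse mount with explicit backup servers looks roughly like this; a minimal sketch, with addresses taken from the A records earlier in the thread and an arbitrary mount point:

    # mount gv0 from one server, with two fallbacks for fetching the volfile
    mount -t glusterfs -o backup-volfile-servers=172.16.252.122:172.16.252.123 \
        172.16.252.121:/gv0 /mnt/gv0-test

In oVirt the same option goes into the storage domain's mount options, as mentioned above.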

Hello,

setting DNS glusterfs.rxmgmt.databay.de to only one IP didn't change anything.

[root@microcloud22 ~]# dig glusterfs.rxmgmt.databay.de

; <<>> DiG 9.9.4-RedHat-9.9.4-50.el7_3.1 <<>> glusterfs.rxmgmt.databay.de
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35135
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 6

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;glusterfs.rxmgmt.databay.de.   IN      A

;; ANSWER SECTION:
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.121

;; AUTHORITY SECTION:
rxmgmt.databay.de.      84600   IN      NS      ns3.databay.de.
rxmgmt.databay.de.      84600   IN      NS      ns.databay.de.

vdsm.log still shows:

2017-08-25 14:02:38,476+0200 INFO  (periodic/0) [vdsm.api] FINISH repoStats return={u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000295126', 'lastCheck': '0.8', 'valid': True}, u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000611748', 'lastCheck': '3.6', 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000324379', 'lastCheck': '3.6', 'valid': True}, u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000718626', 'lastCheck': '4.1', 'valid': True}} from=internal, task_id=ec205bf0-ff00-4fac-97f0-e6a7f3f99492 (api:52)
2017-08-25 14:02:38,584+0200 ERROR (migsrc/ffb71f79) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') failed to initialize gluster connection (src=0x7fd82001fc30 priv=0x7fd820003ac0): Success (migration:287)
2017-08-25 14:02:38,619+0200 ERROR (migsrc/ffb71f79) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 487, in _startUnderlyingMigration
    self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 563, in _perform_with_conv_schedule
    self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 529, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 944, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: failed to initialize gluster connection (src=0x7fd82001fc30 priv=0x7fd820003ac0): Success

One thing I noticed in the destination vdsm.log:

2017-08-25 10:38:03,413+0200 ERROR (jsonrpc/7) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Alias not found for device type disk during migration at destination host (vm:4587)
2017-08-25 10:38:03,478+0200 INFO  (jsonrpc/7) [root]  (hooks:108)
2017-08-25 10:38:03,492+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.migrationCreate succeeded in 0.51 seconds (__init__:539)
2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [vdsm.api] START destroy(gracefulAttempts=1) from=::ffff:172.16.252.122,45736 (api:46)
2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Release VM resources (vm:4254)
2017-08-25 10:38:03,670+0200 INFO  (jsonrpc/2) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Stopping connection (guestagent:430)
2017-08-25 10:38:03,671+0200 INFO  (jsonrpc/2) [vdsm.api] START teardownImage(sdUUID=u'5d99af76-33b5-47d8-99da-1f32413c7bb0', spUUID=u'00000001-0001-0001-0001-0000000000b9', imgUUID=u'9c007b27-0ab7-4474-9317-a294fd04c65f', volUUID=None) from=::ffff:172.16.252.122,45736, task_id=4878dd0c-54e9-4bef-9ec7-446b110c9d8b (api:46)
2017-08-25 10:38:03,671+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH teardownImage return=None from=::ffff:172.16.252.122,45736, task_id=4878dd0c-54e9-4bef-9ec7-446b110c9d8b (api:52)
2017-08-25 10:38:03,672+0200 INFO  (jsonrpc/2) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Stopping connection (guestagent:430)

On 25.08.2017 at 14:03, Denis Chaplygin wrote:
Hello!
On Fri, Aug 25, 2017 at 1:40 PM, Ralf Schenk <rs@databay.de> wrote:
Hello,
I have been using the DNS-balanced gluster hostname for years now, not only with oVirt. No software so far has had a problem with it. And pointing the hostname to only one host of course breaks one advantage of a distributed/replicated cluster file system, namely load-balancing the connections to the storage and/or failover if one host is missing. In earlier oVirt it wasn't possible to specify something like "backupvolfile-server" for a highly available hosted-engine rollout (which I use).
As far as I know, backup-volfile-servers is the recommended way to keep your filesystem mountable in case of a server failure. While the fs is mounted, gluster will automatically provide failover. And you definitely can specify backup-volfile-servers in the storage domain configuration.
I already used live migration in such a setup. That was done with a pure libvirt/virsh setup and later using OpenNebula.
Yes, but that was based on accessing the gluster volume as a mounted filesystem, not directly... And I would like to exclude that from the list of possible causes.
-- Ralf Schenk, fon +49 (0) 24 05 / 40 83 70, fax +49 (0) 24 05 / 40 83 759, mail rs@databay.de, Databay AG, Jens-Otto-Krag-Straße 11, D-52146 Würselen, www.databay.de. Sitz/Amtsgericht Aachen • HRB: 8437 • USt-IdNr.: DE 210844202. Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns. Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------

Hello,

Progress: I finally tried to migrate the machine to other hosts in the cluster. For one of them this worked! See the attached vdsm.log. The migration to host microcloud25 worked as expected, and migrating back to the initial host microcloud22 did too. Other hosts (microcloud21, microcloud23, microcloud24) were not working at all as a migration target.

Perhaps the working ones were the two that I rebooted after upgrading all hosts to oVirt 4.1.5. I'll reboot another host and try again. Perhaps some other daemon (libvirtd/supervdsmd or whatever) has to be restarted as well; a restart sketch follows after the quoted message below.

Bye.

On 25.08.2017 at 14:14, Ralf Schenk wrote:
Hello,
setting DNS glusterfs.rxmgmt.databay.de to only one IP didn't change anything.
[root@microcloud22 ~]# dig glusterfs.rxmgmt.databay.de

; <<>> DiG 9.9.4-RedHat-9.9.4-50.el7_3.1 <<>> glusterfs.rxmgmt.databay.de
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35135
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 6

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;glusterfs.rxmgmt.databay.de.   IN      A

;; ANSWER SECTION:
glusterfs.rxmgmt.databay.de. 84600 IN A 172.16.252.121

;; AUTHORITY SECTION:
rxmgmt.databay.de.      84600   IN      NS      ns3.databay.de.
rxmgmt.databay.de.      84600   IN      NS      ns.databay.de.

vdsm.log still shows:

2017-08-25 14:02:38,476+0200 INFO  (periodic/0) [vdsm.api] FINISH repoStats return={u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000295126', 'lastCheck': '0.8', 'valid': True}, u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000611748', 'lastCheck': '3.6', 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000324379', 'lastCheck': '3.6', 'valid': True}, u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000718626', 'lastCheck': '4.1', 'valid': True}} from=internal, task_id=ec205bf0-ff00-4fac-97f0-e6a7f3f99492 (api:52)
2017-08-25 14:02:38,584+0200 ERROR (migsrc/ffb71f79) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') failed to initialize gluster connection (src=0x7fd82001fc30 priv=0x7fd820003ac0): Success (migration:287)
2017-08-25 14:02:38,619+0200 ERROR (migsrc/ffb71f79) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 487, in _startUnderlyingMigration
    self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 563, in _perform_with_conv_schedule
    self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 529, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 944, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: failed to initialize gluster connection (src=0x7fd82001fc30 priv=0x7fd820003ac0): Success

One thing I noticed in the destination vdsm.log:

2017-08-25 10:38:03,413+0200 ERROR (jsonrpc/7) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Alias not found for device type disk during migration at destination host (vm:4587)
2017-08-25 10:38:03,478+0200 INFO  (jsonrpc/7) [root]  (hooks:108)
2017-08-25 10:38:03,492+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.migrationCreate succeeded in 0.51 seconds (__init__:539)
2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [vdsm.api] START destroy(gracefulAttempts=1) from=::ffff:172.16.252.122,45736 (api:46)
2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Release VM resources (vm:4254)
2017-08-25 10:38:03,670+0200 INFO  (jsonrpc/2) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Stopping connection (guestagent:430)
2017-08-25 10:38:03,671+0200 INFO  (jsonrpc/2) [vdsm.api] START teardownImage(sdUUID=u'5d99af76-33b5-47d8-99da-1f32413c7bb0', spUUID=u'00000001-0001-0001-0001-0000000000b9', imgUUID=u'9c007b27-0ab7-4474-9317-a294fd04c65f', volUUID=None) from=::ffff:172.16.252.122,45736, task_id=4878dd0c-54e9-4bef-9ec7-446b110c9d8b (api:46)
2017-08-25 10:38:03,671+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH teardownImage return=None from=::ffff:172.16.252.122,45736, task_id=4878dd0c-54e9-4bef-9ec7-446b110c9d8b (api:52)
2017-08-25 10:38:03,672+0200 INFO  (jsonrpc/2) [virt.vm] (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Stopping connection (guestagent:430)
On 25.08.2017 at 14:03, Denis Chaplygin wrote:
Hello!
On Fri, Aug 25, 2017 at 1:40 PM, Ralf Schenk <rs@databay.de> wrote:
Hello,
I have been using the DNS-balanced gluster hostname for years now, not only with oVirt. No software so far has had a problem with it. And pointing the hostname to only one host of course breaks one advantage of a distributed/replicated cluster file system, namely load-balancing the connections to the storage and/or failover if one host is missing. In earlier oVirt it wasn't possible to specify something like "backupvolfile-server" for a highly available hosted-engine rollout (which I use).
As far as I know, backup-volfile-servers is the recommended way to keep your filesystem mountable in case of a server failure. While the fs is mounted, gluster will automatically provide failover. And you definitely can specify backup-volfile-servers in the storage domain configuration.
I already used live migration in such a setup. That was done with a pure libvirt/virsh setup and later using OpenNebula.
Yes, but that was based on accessing the gluster volume as a mounted filesystem, not directly... And I would like to exclude that from the list of possible causes.
--
Ralf Schenk, fon +49 (0) 24 05 / 40 83 70, fax +49 (0) 24 05 / 40 83 759, mail rs@databay.de, Databay AG, Jens-Otto-Krag-Straße 11, D-52146 Würselen, www.databay.de
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202 Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
-- Ralf Schenk, fon +49 (0) 24 05 / 40 83 70, fax +49 (0) 24 05 / 40 83 759, mail rs@databay.de, Databay AG, Jens-Otto-Krag-Straße 11, D-52146 Würselen, www.databay.de. Sitz/Amtsgericht Aachen • HRB: 8437 • USt-IdNr.: DE 210844202. Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns. Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
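Regarding the daemon restart mentioned above: a minimal sketch of what could be tried on one of the non-working hosts, assuming the standard service names on an oVirt 4.1 / EL7 host (whether a restart alone is enough, rather than a full reboot, is exactly what still needs to be verified):

    # on the problematic migration target, restart the virtualization stack
    systemctl restart libvirtd
    systemctl restart supervdsmd vdsmd
    # then retry the live migration from the engine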
participants (2):
- Denis Chaplygin
- Ralf Schenk