oVirt 4.2.7.1 fails to deploy hosted engine on GlusterFS

Hello Community,

I'm trying to deploy a hosted engine on GlusterFS, which fails with the following error:

[ INFO ] TASK [Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "deprecations": [{"msg": "The 'ovirt_storage_domains' module is being renamed 'ovirt_storage_domain'", "version": 2.8}], "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400."}

I have deployed GlusterFS via the HyperConverged option in Cockpit and the volumes are up and running:

[root@ovirt1 ~]# gluster volume status engine
Status of volume: engine
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1:/gluster_bricks/engine/engine     49152     0          Y       26268
Brick ovirt2:/gluster_bricks/engine/engine     49152     0          Y       24116
Brick glarbiter:/gluster_bricks/engine/engine  49152     0          Y       23526
Self-heal Daemon on localhost                  N/A       N/A        Y       31229
Self-heal Daemon on ovirt2                     N/A       N/A        Y       27097
Self-heal Daemon on glarbiter                  N/A       N/A        Y       25888

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

I'm using the following guide: https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-sto...

On step 4 (Storage) I have defined it as follows:

Storage Type: Gluster
Storage Connection: ovirt1.localdomain:/gluster_bricks/engine/
Mount Options: backup-volfile-servers=ovirt2.localdomain:glarbiter.localdomain

Can someone give me a hint where the problem is?
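For reference, the name a Gluster client mounts is the volume name rather than the brick directory under /gluster_bricks; it can be cross-checked on any of the nodes. A quick check might look like the sketch below (output abridged, other volumes omitted; the brick paths are taken from the status output above):

[root@ovirt1 ~]# gluster volume list
engine
[root@ovirt1 ~]# gluster volume info engine | grep -E 'Volume Name|Brick[0-9]'
Volume Name: engine
Brick1: ovirt1:/gluster_bricks/engine/engine
Brick2: ovirt2:/gluster_bricks/engine/engine
Brick3: glarbiter:/gluster_bricks/engine/engine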

It seems that I have picked the wrong deploy method. Switching to "HyperConverged" -> "Use existing" fixes the error.

It seems that "Use existing" is not working. I have tried multiple times to redeploy the engine and it always fails. Here is the last log from vdsm:

2018-12-09 15:56:40,269+0200 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-12-09 15:56:40,310+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-12-09 15:56:40,317+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-12-09 15:56:40,321+0200 INFO (jsonrpc/4) [vdsm.api] START getStorageDomainInfo(sdUUID=u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', options=None) from=::1,47806, task_id=b3b5e7aa-998c-419e-9958-fe762dbf6d18 (api:46)
2018-12-09 15:56:40,321+0200 INFO (jsonrpc/4) [storage.StorageDomain] sdUUID=143d800a-06e1-48b5-aa7c-21cb9f3a89a7 (fileSD:534)
2018-12-09 15:56:40,324+0200 INFO (jsonrpc/4) [vdsm.api] FINISH getStorageDomainInfo return={'info': {'uuid': u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', 'version': '4', 'role': 'Master', 'remotePath': 'ovirt1:/engine', 'type': 'GLUSTERFS', 'class': 'Data', 'pool': ['7845b386-fbb3-11e8-bfa8-00163e54fd43'], 'name': 'hosted_storage'}} from=::1,47806, task_id=b3b5e7aa-998c-419e-9958-fe762dbf6d18 (api:52)
2018-12-09 15:56:40,325+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getInfo succeeded in 0.01 seconds (__init__:573)
2018-12-09 15:56:40,328+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-12-09 15:56:40,332+0200 INFO (jsonrpc/0) [vdsm.api] START getStorageDomainInfo(sdUUID=u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', options=None) from=::1,47806, task_id=19f713e3-7387-4265-a821-f636b2415f42 (api:46)
2018-12-09 15:56:40,332+0200 INFO (jsonrpc/0) [storage.StorageDomain] sdUUID=143d800a-06e1-48b5-aa7c-21cb9f3a89a7 (fileSD:534)
2018-12-09 15:56:40,336+0200 INFO (jsonrpc/0) [vdsm.api] FINISH getStorageDomainInfo return={'info': {'uuid': u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', 'version': '4', 'role': 'Master', 'remotePath': 'ovirt1:/engine', 'type': 'GLUSTERFS', 'class': 'Data', 'pool': ['7845b386-fbb3-11e8-bfa8-00163e54fd43'], 'name': 'hosted_storage'}} from=::1,47806, task_id=19f713e3-7387-4265-a821-f636b2415f42 (api:52)
2018-12-09 15:56:40,337+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getInfo succeeded in 0.00 seconds (__init__:573)
2018-12-09 15:56:40,341+0200 INFO (jsonrpc/5) [vdsm.api] START connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'e29cf818-5ee5-46e1-85c1-8aeefa33e95d', u'vfs_type': u'glusterfs', u'mnt_options': u'backup-volfile-servers=ovirt2:glarbiter', u'connection': u'ovirt1:/engine', u'user': u'kvm'}], options=None) from=::1,47806, task_id=2d919607-796e-4528-9f54-4dc437eddada (api:46)
2018-12-09 15:56:40,345+0200 INFO (jsonrpc/5) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'e29cf818-5ee5-46e1-85c1-8aeefa33e95d'}]} from=::1,47806, task_id=2d919607-796e-4528-9f54-4dc437eddada (api:52)
2018-12-09 15:56:40,345+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.01 seconds (__init__:573)
2018-12-09 15:56:40,348+0200 INFO (jsonrpc/2) [vdsm.api] START getStorageDomainStats(sdUUID=u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', options=None) from=::1,47806, task_id=47edd838-f7f5-448b-9b6a-a650460614ac (api:46)
2018-12-09 15:56:40,556+0200 INFO (jsonrpc/2) [storage.StorageDomain] Removing remnants of deleted images [] (fileSD:734)
2018-12-09 15:56:40,557+0200 INFO (jsonrpc/2) [vdsm.api] FINISH getStorageDomainStats return={'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '103464173568', 'disktotal': '107313364992', 'mdafree': 0}} from=::1,47806, task_id=47edd838-f7f5-448b-9b6a-a650460614ac (api:52)
2018-12-09 15:56:40,558+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getStats succeeded in 0.21 seconds (__init__:573)
2018-12-09 15:56:40,564+0200 INFO (jsonrpc/3) [vdsm.api] START prepareImage(sdUUID=u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', spUUID=u'00000000-0000-0000-0000-000000000000', imgUUID=u'dabcff49-56c0-4557-9c82-3df9e6c11991', leafUUID=u'6a4441f0-641e-49c0-a117-7913110874c6', allowIllegal=False) from=::1,47806, task_id=edb0b0fa-d528-426e-8250-04c0f7864224 (api:46)
2018-12-09 15:56:40,584+0200 INFO (jsonrpc/3) [vdsm.api] FINISH prepareImage error=Volume does not exist: (u'6a4441f0-641e-49c0-a117-7913110874c6',) from=::1,47806, task_id=edb0b0fa-d528-426e-8250-04c0f7864224 (api:50)
2018-12-09 15:56:40,584+0200 ERROR (jsonrpc/3) [storage.TaskManager.Task] (Task='edb0b0fa-d528-426e-8250-04c0f7864224') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3182, in prepareImage

Can someone guide me how to debug this?
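If it helps to narrow this down: an oVirt file-based storage domain keeps each disk as <sd_uuid>/images/<img_uuid>/<vol_uuid> on the mounted volume, so one way to see what the prepareImage error above is complaining about is to mount the engine volume by hand and look for that leaf volume. A rough sketch, using the UUIDs from the log and a hypothetical /mnt/engine mount point:

mkdir -p /mnt/engine
mount -t glusterfs ovirt1:/engine /mnt/engine
# sdUUID and imgUUID taken from the prepareImage call in the log above
ls -l /mnt/engine/143d800a-06e1-48b5-aa7c-21cb9f3a89a7/images/dabcff49-56c0-4557-9c82-3df9e6c11991/
# the leaf volume vdsm reports as missing is 6a4441f0-641e-49c0-a117-7913110874c6
umount /mnt/engine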

I had this issue this week as well. When asked about the GlusterFS storage that you self-provisioned, you entered "ovirt1.localdomain:/gluster_bricks/engine", which is the brick path. I am new to Gluster myself, but as a client you can only refer to a volume by its volume name, i.e. host:/<volume name>. So maybe try ovirt1.localdomain:/engine instead. That did the trick for me. Hope this helps; this cost me a week.
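If you want to sanity-check that form before re-running the deployment, a manual FUSE mount with the same options should work with host:/<volume name>. A small sketch (the /mnt/test mount point is just an example; the backup-volfile-servers value mirrors the settings quoted above):

mkdir -p /mnt/test
mount -t glusterfs \
      -o backup-volfile-servers=ovirt2.localdomain:glarbiter.localdomain \
      ovirt1.localdomain:/engine /mnt/test
df -h /mnt/test   # the filesystem source should show ovirt1.localdomain:/engine
umount /mnt/test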

I am having the same issue from the CLI when trying to use existing Gluster storage (server1:/gluster_bricks/engine). Error:

[ INFO ] TASK [Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "deprecations": [{"msg": "The 'ovirt_storage_domains' module is being renamed 'ovirt_storage_domain'", "version": 2.8}], "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400."}
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:

If you found a solution, can you please share it? Thanks.
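As in the Cockpit case above, the CLI answer should be the volume name rather than the brick directory. Assuming the volume on server1 is also called engine (gluster volume list on server1 will print the exact name), the connection string would become server1:/engine instead of server1:/gluster_bricks/engine:

[root@server1 ~]# gluster volume list
engine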

Apologies, I did not see the previous post from Julie. This works for me:
---------------------------------------------------------------------------------------------------
I had this issue this week as well. When asked about the GlusterFS storage that you self-provisioned, you entered "ovirt1.localdomain:/gluster_bricks/engine", which is the brick path. As a Gluster client you can only refer to a volume by its volume name, i.e. host:/<volume name>, so try ovirt1.localdomain:/engine.
---------------------------------------------------------------------------------------------------
Thank you.

Apologies, I just saw the answer in a previous post in this same thread.
participants (3):
- adrianquintero@gmail.com
- hunter86_bg@yahoo.com
- jurie@velos.co.za