Re: HostedEngine Deployment fails on AMD EPYC 7402P 4.3.7
by Strahil
Hi Ralf,
When the deployment fails, you can dump the XML from virsh, edit it, undefine the current HostedEngine and define your modified HostedEngine XML.
Once you do that, you can try to start the VM.
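A rough sketch of that sequence (only a sketch - the file path is an example, and on an oVirt host virsh may ask for SASL credentials):

# virsh dumpxml HostedEngine > /root/HostedEngine.xml
# ... edit /root/HostedEngine.xml, e.g. drop the offending <feature> element ...
# virsh undefine HostedEngine
# virsh define /root/HostedEngine.xml
# virsh start HostedEngine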
Good luck.
Best Regards,
Strahil Nikolov

On Nov 27, 2019 18:28, Ralf Schenk <rs(a)databay.de> wrote:
>
> Hello,
>
> This week I tried to deploy Hosted Engine on Ovirt-Node-NG 4.3.7 based Host.
>
> At the time the locally deployed Engine is copied to hosted-storage (in my case NFS) and deployment tries to start the Engine (via ovirt-ha-agent), this fails.
>
> QEMU log (/var/log/libvirt/qemu/HostedEngine.log) only shows "2019-11-27 16:17:16.833+0000: shutting down, reason=failed".
>
> Researching the cause: the built libvirt VM XML includes the feature "virt-ssbd" as a requirement, which is simply not there.
>
> From VM XML:
>
> <cpu mode='custom' match='exact' check='partial'>
> <model fallback='allow'>EPYC</model>
> <topology sockets='16' cores='4' threads='1'/>
> <feature policy='require' name='ibpb'/>
> <feature policy='require' name='virt-ssbd'/>
>
> from cat /proc/cpuinfo:
>
> processor : 47
> vendor_id : AuthenticAMD
> cpu family : 23
> model : 49
> model name : AMD EPYC 7402P 24-Core Processor
> stepping : 0
> microcode : 0x830101c
> cpu MHz : 2800.000
> cache size : 512 KB
> physical id : 0
> siblings : 48
> core id : 30
> cpu cores : 24
> apicid : 61
> initial apicid : 61
> fpu : yes
> fpu_exception : yes
> cpuid level : 16
> wp : yes
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca
> bogomips : 5600.12
> TLB size : 3072 4K pages
> clflush size : 64
> cache_alignment : 64
> address sizes : 43 bits physical, 48 bits virtual
> power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]
>
> Any solution/workaround available?
Re: Moving HostedEngine
by Strahil
It is highly 'super duper' not recommended to do that.
Also, consider using thinLVM (requirement for gluster snapshots) for a brick of the separate gluster volume.
I'm using the 3rd partition of my OS SSD to host my gluster volume and I have had no issues so far.
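For reference, a minimal thin LVM layout for such a brick could look like this (only a sketch - the device, VG/LV names, mount point and sizes are placeholders):

# pvcreate /dev/sda3
# vgcreate gluster_vg /dev/sda3
# lvcreate -L 100G -T gluster_vg/gluster_thinpool
# lvcreate -V 90G -T gluster_vg/gluster_thinpool -n gluster_lv_engine
# mkfs.xfs /dev/gluster_vg/gluster_lv_engine
# mkdir -p /gluster_bricks/engine
# mount /dev/gluster_vg/gluster_lv_engine /gluster_bricks/engine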
Best Regards,
Strahil Nikolov

On Nov 28, 2019 01:05, Joseph Goldman <joseph(a)goldman.id.au> wrote:
>
> So I can't host OTHER VMs on this gluster volume? If it's already a running Gluster volume for other VMs, I can't now re-deploy HE in that gluster volume?
>
> On 2019-11-28 3:27 AM, Alan G wrote:
>
> I've had to do this a couple of times and always ended up with a working system in the end.
>
> As a fallback option (although I've never had to use it) I have a backup engine VM running completely outside of oVirt (an ESXi host in my case). Then if the hosted_engine deploy fails for any reason, you can restore onto the backup VM as a temporary solution while you work through the hosted engine deploy issues.
>
> A few things that come to mind:
>
> * You will need a dedicated gluster volume for hosted_storage and it needs to be replica+arbiter.
>>
>> * Make sure you put the cluster in global maint mode before performing the engine backup, I recall having issues with the restore when I didn't do that.
>> * Migrate all other VMs off the host running Engine before doing the backup. This will be the host you will restore onto.
>>
>>
>> ---- On Wed, 27 Nov 2019 09:46:23 +0000 Joseph Goldman <joseph(a)goldman.id.au> wrote ----
>>
>>> Hi List,
>>>
>>> In one of my installs, I set up the first storage domain (and where
>>> the HostedEngine is) on a bigger NFS NAS - since then I have created a
>>> Gluster volume that spans the 3 hosts and I'm putting a few VM's in
>>> there for higher reliability (as SAN is single point of failure) namely
>>> I'd like to put HostedEngine in there so it stays up no matter what and
>>> can help report if issues occur (network issue to NAS, NAS dies etc etc)
>>>
>>> Looking through other posts and documentation, there's no real way to
>>> move the HostedEngine storage, is this correct? The solution I've seen
>>> is to backup the hosted engine DB, blow it away, and re-deploy it from
>>> the .backup file configuring it to the new storage domain in the deploy
>>> script - is this the only process? How likely is this to fail? Is it
>>> likely that all VM's and settings will be picked straight back up and
>>> continue to operate like normal? I dont have a test setup to play around
>>> with atm so just trying to gauge confidence in such a solution.
>>>
>>> Thanks,
>>> Joe
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
Re: Moving HostedEngine
by Strahil
Hi Joseph,
It is mandatory to create a dedicated gluster volume for the HostedEngine.
There is a thread which explains why, and the risks of not following these guidelines.
You can put the volume on thin LVM, so you can snapshot your gluster volume before patching the engine (of course, you have to shut down the engine in advance).
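Roughly, and only as a sketch (the snapshot name is arbitrary and the volume is assumed to be called "engine"):

# hosted-engine --set-maintenance --mode=global
# hosted-engine --vm-shutdown
# gluster snapshot create engine-pre-patch engine
# hosted-engine --vm-start
# ... patch/upgrade the engine, then leave maintenance ...
# hosted-engine --set-maintenance --mode=none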
As for the transfer, the safest way is backup & restore, since you are not migrating between storage domains of the same type.
Yet, I can't help you with that procedure - as I have never done it.
Best Regards,
Strahil Nikolov

On Nov 27, 2019 11:46, Joseph Goldman <joseph(a)goldman.id.au> wrote:
>
> Hi List,
>
> In one of my installs, I set up the first storage domain (and where
> the HostedEngine is) on a bigger NFS NAS - since then I have created a
> Gluster volume that spans the 3 hosts and I'm putting a few VM's in
> there for higher reliability (as SAN is single point of failure) namely
> I'd like to put HostedEngine in there so it stays up no matter what and
> can help report if issues occur (network issue to NAS, NAS dies etc etc)
>
> Looking through other posts and documentation, there's no real way to
> move the HostedEngine storage, is this correct? The solution I've seen
> is to backup the hosted engine DB, blow it away, and re-deploy it from
> the .backup file configuring it to the new storage domain in the deploy
> script - is this the only process? How likely is this to fail? Is it
> likely that all VM's and settings will be picked straight back up and
> continue to operate like normal? I dont have a test setup to play around
> with atm so just trying to gauge confidence in such a solution.
>
> Thanks,
> Joe
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZTO6Q3UTEDJ...
Re: spice connection error
by Strahil
As far as I know, the engine acts as a proxy while the connection is being established.
Check that you can reach both the engine and the host from your system.
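A quick reachability check could look like this (hostnames are placeholders, and the ports are only the usual defaults - the engine's HTTPS port and the first console port on the host):

# nc -zv engine.example.com 443
# nc -zv node3.example.com 5900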
For the same reason, I use noVNC - as you just need a single port to the engine in addition to the rest of the settings.
Best Regards,
Strahil Nikolov

On Nov 27, 2019 11:27, kim.kargaard(a)noroff.no wrote:
>
> Hi,
>
> When trying to connect from a remote network on the spice console to a VM, I get the following error:
>
> (remote-viewer:80195): virt-viewer-WARNING **: 11:05:22.322: Channel error: Could not connect to proxy server xx.xx.xx.xx: Socket I/O timed out
>
> I found that the display is set to the management network and not the VM network in the cluster logical network. However, when I try to set the other vlan to be the display network, I get the following error:
>
> Error while executing action: Cannot edit Network. IP address has to be set for the NIC that bears a role network. Network: student-vlan100, Nic: p2p1.100 on host node3 violates that rule.
>
> I am not sure what this means. Any ideas?
>
> Kind regards
>
> Kim
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/6YEI3I4NCAH...
Re: Engine deployment last step.... Can anyone help ?
by Strahil
Hi Rob,
Can you check the gluster's volume ?
gluster volume status engine
gluster volume heal engine info summary
Still, it is strange that the firewall configuration differs so much between the nodes.
The fastest way to sync that node to the settings from the others is:
1. Backup /etc/firewalld
2. Remove /etc/firewalld
3. Copy /etc/firewalld from another node (either scp or rsync will do)
4. Restart the firewalld daemon
5. Make a test firewalld reload
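Putting those steps together, a rough sketch (the source node name is a placeholder - make sure you copy from a known-good node):

# cp -a /etc/firewalld /root/firewalld.backup
# rsync -av --delete ovirt2:/etc/firewalld/ /etc/firewalld/
# systemctl restart firewalld
# firewall-cmd --reload
# firewall-cmd --list-all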
Best Regards,
Strahil Nikolov

On Nov 27, 2019 00:09, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>
> Is this the issue ?
>
> on the deployment host the firewall is active but only on eno2, eno1 appears to have decided to be unmanaged …
>
> also the deployment host has 17 rules active
>
> the other two have 7 each…
>
>
> Zone               Interfaces   IP Range
> Public (default)   eno2         *
>
> Unmanaged Interfaces
>
> Name          IP Address          Sending   Receiving
> ;vdsmdummy;   Inactive
> eno1
> ovirtmgmt     192.168.100.38/24
> virbr0-nic
>>
>> On 25 Nov 2019, at 17:16, Parth Dhanjal <dparth(a)redhat.com> wrote:
>>
>> Rather than disabling firewalld,
>> You can add the ports and restart the firewalld service
>>
>> # firewall-cmd --add-service=cockpit
>>
>> # firewall-cmd --add-service=cockpit --permanent
>>
>>
>>
>> On Mon, Nov 25, 2019 at 10:43 PM Amit Bawer <abawer(a)redhat.com> wrote:
>>>
>>> firewalld is running?
>>>
>>> systemctl disable --now firewalld
>>>
>>> On Monday, November 25, 2019, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>
>>>> It can’t be DNS, since the engine runs on a separate network anyway ie front end, so why can’t it reach the Volume I wonder
>>>>
>>>>
>>>>> On 25 Nov 2019, at 12:55, Gobinda Das <godas(a)redhat.com> wrote:
>>>>>
>>>>> There could be two reason
>>>>> 1- Your gluster service may not be running.
>>>>> 2- In Storage Connection there mentioned <volumename> may not exists
>>>>>
>>>>> can you please paste the output of "gluster volume status" ?
>>>>>
>>>>> On Mon, Nov 25, 2019 at 5:03 PM Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>
>>>>>> [ INFO ] skipping: [localhost]
>>>>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
>>>>>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
>>>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400."}
>>>>>>
>>>>>>
>>>>>>> On 25 Nov 2019, at 09:16, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>>
>>>>>>> Yes,
>>>>>>>
>>>>>>> I’ll restart all Nodes after wiping the failed setup of Hosted engine using.
>>>>>>>
>>>>>>> ovirt-hosted-engine-cleanup
>>>>>>> vdsm-tool configure --force
>>>>>>> systemctl restart libvirtd
>>>>>>> systemctl restart vdsm
>>>>>>>
>>>>>>> although last time I did
>>>>>>>
>>>>>>> systemctl restart vdsm
>>>>>>>
>>>>>>> VDSM did not restart - maybe that is OK, as Hosted Engine was then de-deployed - or is that the issue?
>>>>>>>
>>>>>>>
>>>>>>>> On 25 Nov 2019, at 09:13, Parth Dhanjal <dparth(a)redhat.com> wrote:
>>>>>>>>
>>>>>>>> Can you please share the error in case it fails again?
>>>>>>>>
>>>>>>>> On Mon, Nov 25, 2019 at 2:42 PM Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>>>>
>>>>>>>>> hmm, I'll try again, that failed last time.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On 25 Nov 2019, at 09:08, Parth Dhanjal <dparth(a)redhat.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Hey!
>>>>>>>>>>
>>>>>>>>>> For
>>>>>>>>>> Storage Connection you can add - <hostname1>:/engine
>>>>>>>>>> And for
>>>>>>>>>> Mount Options - backup-volfile-servers=<hostname2>:<hostname3>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 25, 2019 at 2:31 PM <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>>>>>>
>>>>>>>>>>> So...
>>>>>>>>>>>
>>>>>>>>>>> I have got to the last step
>>>>>>>>>>>
>>>>>>>>>>> 3 Machines with Gluster Storage configured however at the last screen
>>>>>>>>>>>
>>>>>>>>>>> ....Deploying the Engine to Gluster and the wizard does not auto fill the two fields
>>>>>>>>>>>
>>>>>>>>>>> Hosted Engine Deployment
>>>>>>>>>>>
>>>>>>>>>>> Storage Connection
>>>>>>>>>>> and
>>>>>>>>>>> Mount Options
>>>>>>>>>>>
>>>>>>>>>>> I also had to expand /tmp as it was not big enough to fit the engine before moving...
>>>>>>>>>>>
>>>>>>>>>>> What can I do to get the auto complete sorted out ?
>>>>>>>>>>>
>>>>>>>>>>> I have tried entering ovirt1.kvm.private:/gluster_lv_engine - The Volume name
>>>>>>>>>>> and
>>>>>>>>>>> ovirt1.kvm.private:/gluster_bricks/engine
>>>>>>>>>>>
>>>>>>>>>>> Ovirt1 being the actual machine I'm running this on.
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NL4HS6MIKWQ...
>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/LQ5YXX7IVV6...
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VHQ3LTES6AB...
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Gobinda
>>>>
>>>>
>
HostedEngine Deployment fails on AMD EPYC 7402P 4.3.7
by Ralf Schenk
Hello,
This week I tried to deploy Hosted Engine on Ovirt-Node-NG 4.3.7 based Host.
At the time the locally deployed Engine is copied to hosted-storage (in
my case NFS) and deployment tries to start the Engine (via
ovirt-ha-agent), this fails.
QEMU log (/var/log/libvirt/qemu/HostedEngine.log) only shows
"2019-11-27 16:17:16.833+0000: shutting down, reason=failed".
Researching the cause: the built libvirt VM XML includes the feature
"virt-ssbd" as a requirement, which is simply not there.
From VM XML:
<cpu mode='custom' match='exact' check='partial'>
<model fallback='allow'>EPYC</model>
<topology sockets='16' cores='4' threads='1'/>
<feature policy='require' name='ibpb'/>
<feature policy='require' name='virt-ssbd'/>
from cat /proc/cpuinfo:
processor : 47
vendor_id : AuthenticAMD
cpu family : 23
model : 49
model name : AMD EPYC 7402P 24-Core Processor
stepping : 0
microcode : 0x830101c
cpu MHz : 2800.000
cache size : 512 KB
physical id : 0
siblings : 48
core id : 30
cpu cores : 24
apicid : 61
initial apicid : 61
fpu : yes
fpu_exception : yes
cpuid level : 16
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology
nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3
fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm
cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch
osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2
cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp
vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap
clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc
cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv
svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists
pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov
succor smca
bogomips : 5600.12
TLB size : 3072 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 43 bits physical, 48 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]
Any solution/workaround available?
--
*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *rs(a)databay.de* <mailto:rs@databay.de>
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
Certificate of host is invalid
by Jon bae
Hello everybody,
since last update to 4.3.7 I get this error message:
Certificate of host *host.name* is invalid. The certificate doesn't contain
valid subject alternative name, please enroll new certificate for the host.
Have you an idea of how I can fix that?
Regards
Jonathan
spice connection error
by kim.kargaard@noroff.no
Hi,
When trying to connect from a remote network on the spice console to a VM, I get the following error:
(remote-viewer:80195): virt-viewer-WARNING **: 11:05:22.322: Channel error: Could not connect to proxy server xx.xx.xx.xx: Socket I/O timed out
I found that the display is set to the management network and not the VM network in the cluster logical network. However, when I try to set the other vlan to be the display network, I get the following error:
Error while executing action: Cannot edit Network. IP address has to be set for the NIC that bears a role network. Network: student-vlan100, Nic: p2p1.100 on host node3 violates that rule.
I am not sure what this means. Any ideas?
Kind regards
Kim
Engine deployment last step.... Can anyone help ?
by rob.downer@orbitalsystems.co.uk
So...
I have got to the last step
3 Machines with Gluster Storage configured however at the last screen
....Deploying the Engine to Gluster and the wizard does not auto fill the two fields
Hosted Engine Deployment
Storage Connection
and
Mount Options
I also had to expand /tmp as it was not big enough to fit the engine before moving...
What can I do to get the auto complete sorted out ?
I have tried entering ovirt1.kvm.private:/gluster_lv_engine - The Volume name
and
ovirt1.kvm.private:/gluster_bricks/engine
Ovirt1 being the actual machine I'm running this on.
Thanks
Re: Engine deployment last step.... Can anyone help ?
by Strahil
You also need to make the rule permanent via '--permanent'; otherwise, after a reload it's gone.
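For example, using the cockpit service from this thread (substitute whatever service or port you actually opened):

# firewall-cmd --add-service=cockpit --permanent
# firewall-cmd --reload
# firewall-cmd --list-services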
Best Regards,
Strahil Nikolov

On Nov 25, 2019 19:51, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>
> [root@ovirt1 ~]# firewall-cmd --add-service=cockpit
> Warning: ALREADY_ENABLED: 'cockpit' already in 'public'
> success
> [root@ovirt1 ~]#
>
>
>
>> On 25 Nov 2019, at 17:16, Parth Dhanjal <dparth(a)redhat.com> wrote:
>>
>> Rather than disabling firewalld,
>> You can add the ports and restart the firewalld service
>>
>> # firewall-cmd --add-service=cockpit
>>
>> # firewall-cmd --add-service=cockpit --permanent
>>
>>
>>
>> On Mon, Nov 25, 2019 at 10:43 PM Amit Bawer <abawer(a)redhat.com> wrote:
>>>
>>> firewalld is running?
>>>
>>> systemctl disable --now firewalld
>>>
>>> On Monday, November 25, 2019, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>
>>>> It can’t be DNS, since the engine runs on a separate network anyway ie front end, so why can’t it reach the Volume I wonder
>>>>
>>>>
>>>>> On 25 Nov 2019, at 12:55, Gobinda Das <godas(a)redhat.com> wrote:
>>>>>
>>>>> There could be two reason
>>>>> 1- Your gluster service may not be running.
>>>>> 2- In Storage Connection there mentioned <volumename> may not exists
>>>>>
>>>>> can you please paste the output of "gluster volume status" ?
>>>>>
>>>>> On Mon, Nov 25, 2019 at 5:03 PM Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>
>>>>>> [ INFO ] skipping: [localhost]
>>>>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
>>>>>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
>>>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400."}
>>>>>>
>>>>>> <