Engine deployment, last step... can anyone help?

So... I have got to the last step: three machines with Gluster storage configured. However, at the last screen, "Deploying the Engine to Gluster", the Hosted Engine Deployment wizard does not auto-fill the two fields Storage Connection and Mount Options. I also had to expand /tmp, as it was not big enough to fit the engine before moving. What can I do to get the auto-fill sorted out? I have tried entering ovirt1.kvm.private:/gluster_lv_engine (the volume name) and ovirt1.kvm.private:/gluster_bricks/engine, ovirt1 being the actual machine I'm running this on. Thanks

Hey! For Storage Connection you can add <hostname1>:/engine, and for Mount Options, backup-volfile-servers=<hostname2>:<hostname3>. (In reply to rob.downer@orbitalsystems.co.uk, Mon, 25 Nov 2019, 2:31 PM.)
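For illustration, plugging those placeholders in with the names that appear later in this thread (assuming the three Gluster peers are gfs1/gfs2/gfs3.gluster.private and the volume is named engine, as the gluster volume status output further down suggests), the two fields would look like:

  Storage Connection: gfs1.gluster.private:/engine
  Mount Options:      backup-volfile-servers=gfs2.gluster.private:gfs3.gluster.private

Note the value is <host>:/<gluster volume name>, not the brick path, so ovirt1.kvm.private:/gluster_bricks/engine would not work even where the host name is right.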

Hmm, I'll try again; that failed last time. (In reply to Parth Dhanjal <dparth@redhat.com>, 25 Nov 2019, 09:08.)

Can you please share the error in case it fails again? (In reply to Rob, Mon, 25 Nov 2019, 2:42 PM.)

Yes, I'll restart all nodes after wiping the failed hosted engine setup using:

  ovirt-hosted-engine-cleanup
  vdsm-tool configure --force
  systemctl restart libvirtd
  systemctl restart vdsm

although last time I did systemctl restart vdsm, VDSM did not restart. Maybe that is OK, since the hosted engine had been de-deployed by then, or is that the issue? (In reply to Parth Dhanjal, 25 Nov 2019, 09:13.)
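One hedged observation on the restart that "did not restart": on oVirt hosts the VDSM systemd unit is normally named vdsmd rather than vdsm, so the spelling above may simply not match any unit. A quick check, assuming a standard oVirt node install:

  systemctl status vdsmd    # the VDSM daemon ships as vdsmd.service
  systemctl restart vdsmd   # note the trailing 'd'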

Can you attach the /var/log/vdsm/vdsm.log here? (In reply to Rob, Mon, 25 Nov 2019, 2:47 PM.)

As such there are no errors in the vdsm log. Maybe you can try these steps again:

  ovirt-hosted-engine-cleanup
  vdsm-tool configure --force
  systemctl restart libvirtd
  systemctl restart vdsm

and continue with the hosted engine deployment. In case the deployment fails, you can look for errors under /var/log/ovirt-hosted-engine-setup/engine.log. (In reply to Rob, Mon, 25 Nov 2019, 3:13 PM.)
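If the deployment fails again, a quick way to pull just the failures out of the setup logs (a sketch; it assumes the default log directory, where the real file names carry timestamps):

  grep -iE 'error|failed' /var/log/ovirt-hosted-engine-setup/*.log | tail -n 50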

It's plausible that systemctl restart supervdsm is required as well. (In reply to Parth Dhanjal, Mon, 25 Nov 2019, 12:17 PM.)
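As with vdsm earlier in the thread, the systemd unit for supervdsm on oVirt hosts is normally named supervdsmd, so the restart would be (an assumption based on standard packaging):

  systemctl restart supervdsmd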

[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400."}

(In reply to Rob, 25 Nov 2019, 09:16.)

There could be two reasons:
1. Your gluster service may not be running.
2. The <volumename> given in Storage Connection may not exist.
Can you please paste the output of "gluster volume status"? (In reply to Rob, Mon, 25 Nov 2019, 5:03 PM.)
--
Thanks, Gobinda
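Two quick checks that map onto those two reasons, run on the deployment host (glusterd is the Gluster management daemon; both commands are standard Gluster/systemd CLI):

  systemctl is-active glusterd   # reason 1: is the gluster service running?
  gluster volume list            # reason 2: does the volume name entered in Storage Connection exist?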

Admin Console: https://10.10.45.11:9090/ or https://192.168.100.38:9090/

[root@ovirt1 ~]# gluster volume status
Status of volume: data
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1.gluster.private:/gluster_bricks/data/data        49152     0          Y       3205
Brick gfs2.gluster.private:/gluster_bricks/data/data        49152     0          Y       3193
Brick gfs3.gluster.private:/gluster_bricks/data/data        49152     0          Y       3240
Self-heal Daemon on localhost                               N/A       N/A        Y       3637
Self-heal Daemon on gfs2.gluster.private                    N/A       N/A        Y       17771
Self-heal Daemon on gfs3.gluster.private                    N/A       N/A        Y       17586

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1.gluster.private:/gluster_bricks/engine/engine    49153     0          Y       3216
Brick gfs2.gluster.private:/gluster_bricks/engine/engine    49153     0          Y       3206
Brick gfs3.gluster.private:/gluster_bricks/engine/engine    49153     0          Y       3251
Self-heal Daemon on localhost                               N/A       N/A        Y       3637
Self-heal Daemon on gfs2.gluster.private                    N/A       N/A        Y       17771
Self-heal Daemon on gfs3.gluster.private                    N/A       N/A        Y       17586

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vmstore
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1.gluster.private:/gluster_bricks/vmstore/vmstore  49154     0          Y       3225
Brick gfs2.gluster.private:/gluster_bricks/vmstore/vmstore  49154     0          Y       3235
Brick gfs3.gluster.private:/gluster_bricks/vmstore/vmstore  49154     0          Y       3264
Self-heal Daemon on localhost                               N/A       N/A        Y       3637
Self-heal Daemon on gfs3.gluster.private                    N/A       N/A        Y       17586
Self-heal Daemon on gfs2.gluster.private                    N/A       N/A        Y       17771

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
[root@ovirt1 ~]#

Is this a DNS issue? The back end runs on the same physical network, which would be OK, but is it OK for the Engine? I tried setting up the last step with the backend FQDN: it fails. I tried setting up via the front end: it fails. nslookup from the LAN:

Robs-Air:~ rob$ nslookup gfs1.gluster.private 192.168.100.1
Server:    192.168.100.1
Address:   192.168.100.1#53

Name:    gfs1.gluster.private
Address: 10.10.45.11
Robs-Air:~ rob$

(In reply to Gobinda Das, 25 Nov 2019, 12:55.)

It can't be DNS, since the engine runs on a separate network anyway (i.e. the front end), so why can't it reach the volume, I wonder. (In reply to Gobinda Das, 25 Nov 2019, 12:55.)
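A small sanity check that separates the two theories (name resolution versus volume reachability), run from the deployment host; it assumes the gfs* backend names shown in the volume status output above:

  for h in gfs1 gfs2 gfs3; do getent hosts $h.gluster.private; done   # does each backend name resolve here?
  gluster volume info engine                                          # is the engine volume visible from this host?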

Is firewalld running?

  systemctl disable --now firewalld

(In reply to Rob, Monday, 25 Nov 2019.)
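Before disabling it outright, it may be worth capturing the current state first (both are standard firewalld commands):

  systemctl is-active firewalld
  firewall-cmd --list-all    # default zone with its interfaces, services and ports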

Rather than disabling firewalld, you can add the ports and restart the firewalld service:

  # firewall-cmd --add-service=cockpit
  # firewall-cmd --add-service=cockpit --permanent

(In reply to Amit Bawer, Mon, 25 Nov 2019, 10:43 PM.)
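A side note: the commands above open cockpit, but the step that is failing is the Gluster volume-list fetch. Recent firewalld builds ship a glusterfs service definition, which may be the more relevant one here (an assumption; confirm it is available with firewall-cmd --get-services):

  firewall-cmd --add-service=glusterfs
  firewall-cmd --add-service=glusterfs --permanent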

[root@ovirt1 ~]# firewall-cmd --add-service=cockpit
Warning: ALREADY_ENABLED: 'cockpit' already in 'public'
success
[root@ovirt1 ~]#

(In reply to Parth Dhanjal, 25 Nov 2019, 17:16.)

Is this the issue? On the deployment host the firewall is active, but only on eno2; eno1 appears to have decided to be unmanaged. Also, the deployment host has 17 rules active; the other two have 7 each.

Zone              Interfaces  IP Range
Public (default)  eno2        *

Unmanaged Interfaces
Name          IP Address
;vdsmdummy;   Inactive
eno1
ovirtmgmt     192.168.100.38/24
virbr0-nic

(In reply to Parth Dhanjal, 25 Nov 2019, 17:16.)
Thanks, Gobinda
Participants (5):
- Amit Bawer
- Gobinda Das
- Parth Dhanjal
- Rob
- rob.downer@orbitalsystems.co.uk