ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

New install of ovirt-node 4.2 (from ISO). Set up each node with networking and SSH keys, then ran the hyperconverged gluster deployment wizard. None of the user-specified settings are ever reflected in gdeployConfig.conf. Anyone running into this?

--
_____
Fact:
1. Ninjas are mammals.
2. Ninjas fight ALL the time.
3. The purpose of the ninja is to flip out and kill people.

Yes, I had that issue with a 4.2.8 installation. I had to manually edit the "web-UI-generated" config to get anywhere close to what I wanted. I'll attach an edited config as an example.

On Mon, Feb 4, 2019 at 2:51 PM feral <blistovmhz@gmail.com> wrote:
> New install of ovirt-node 4.2 (from ISO). None of the user-specified settings are ever reflected in gdeployConfig.conf. Anyone running into this?

On Mon, Feb 4, 2019 at 11:59 AM Edward Berger <edwberger@gmail.com> wrote:
> Yes, I had that issue with a 4.2.8 installation. I had to manually edit the "web-UI-generated" config to get anywhere close to what I wanted. I'll attach an edited config as an example.

Yeah, I've been able to build a config manually myself, but it sure would be nice if gdeploy worked at all, since every test takes an hour to deploy. And when I create the conf by hand, I have to be very conservative about sizes, because I'm still not entirely sure what the deploy script actually does. For example: I've got 3 nodes with 1.2 TB each for gluster, but if I build a deployment that uses more than 900 GB, it fails, because the script creates the thinpool with whatever size it wants.

I just wanted to make sure I wasn't the only one having this issue. Given that at least two people have noticed, who's the best person to contact? I haven't been able to get any response from the devs on any of the (myriad) issues with the 4.2.8 image. I'm also having a ton of strange issues with the hosted-engine VM deployment.
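For illustration, the kind of hand-edited section I mean is sketched below, pinning the thinpool and thin-LV sizes explicitly instead of letting the script choose. The layout and key names follow the common gdeploy gluster examples; the device name, volume names, and sizes are placeholders, not a verified recipe:

  [vg1]
  action=create
  vgname=gluster_vg_sdb
  pvname=sdb

  [lv1]
  # create the thinpool with an explicit size instead of whatever gdeploy wants
  action=create
  vgname=gluster_vg_sdb
  poolname=gluster_thinpool_sdb
  lvtype=thinpool
  size=900GB
  poolmetadatasize=16GB

  [lv2]
  # thin LV for the data brick, carved out of the pool above
  action=create
  vgname=gluster_vg_sdb
  lvname=gluster_lv_data
  poolname=gluster_thinpool_sdb
  lvtype=thinlv
  virtualsize=900GB
  mount=/gluster_bricks/data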

On Mon, Feb 4, 2019 at 12:21 PM feral <blistovmhz@gmail.com> wrote:
> I'm also having a ton of strange issues with the hosted-engine VM deployment.

On that note, have you also had issues with gluster not restarting on reboot, and with all of the HA machinery failing after a power loss? So far, the only way I've gotten the cluster to come back to life is to manually restart glusterd on all nodes, take the cluster back out of maintenance mode, and then manually start the hosted-engine VM. Even that fails after 2 or 3 power losses, even though the entire cluster is happy through the first 2.
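Concretely, the revival sequence I keep having to run looks roughly like this (the maintenance and start commands are the standard hosted-engine ones; run the gluster part on every node):

  # on every node, bring gluster back up
  systemctl restart glusterd
  gluster peer status        # wait until all peers show Connected

  # then, from one node, leave global maintenance and start the engine VM
  hosted-engine --set-maintenance --mode=none
  hosted-engine --vm-start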

On Mon, Feb 4, 2019 at 3:23 PM feral <blistovmhz@gmail.com> wrote:
> On that note, have you also had issues with gluster not restarting on reboot, and with all of the HA machinery failing after a power loss?

On each host you should check whether "systemctl status glusterd" shows "enabled", and the same for whatever the gluster events daemon is called (I'm not logged in to look right now). I'm not sure which part of the gluster wizard or the hosted-engine installation is supposed to do the enabling, but I've seen incomplete installs leave it disabled.

If the gluster servers haven't come up properly, then there's no working image for the engine. I had a situation where the engine VM was in a "paused" state, and I had to run "hosted-engine --vm-status" on the possible nodes to find which one had the VM paused, then log into that node and run this command:

  virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine
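Something like this on each host would show it, and fix it if needed (I believe the events daemon unit is named glustereventsd, but treat that name as a guess, per the above):

  # check whether the daemons are enabled and running
  systemctl status glusterd glustereventsd

  # if an incomplete install left them disabled, enable and start them
  systemctl enable glusterd glustereventsd
  systemctl start glusterd glustereventsd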

On Mon, Feb 4, 2019 at 2:16 PM Edward Berger <edwberger@gmail.com> wrote:
> On each host you should check whether "systemctl status glusterd" shows "enabled", and the same for whatever the gluster events daemon is called.

Glusterd was enabled; it just crashes on boot. It's a known issue that was resolved in gluster 3.13, but ovirt-node only ships 3.12. The VM is indeed paused at that point, so I manually start glusterd again, make sure all nodes are online, and then resume the hosted engine. Sometimes it works, sometimes not.

I think the issue here is that there are multiple problems with the current ovirt-node release ISO. I was able to get everything working with a CentOS base and a manual oVirt install. I still had the same problem with the gluster wizard not using any of my settings, but after that, and after making sure I restart all services after a reboot, things came to life. I've been trying to discuss it with the devs, but so far no luck. I keep hearing that the previous ovirt-node (ISO) release was just much smoother, but I haven't seen anyone addressing the issues in the current release.
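When the resume does work, the dance looks more or less like this ("engine" here is just what my hosted-engine storage volume is called; the virsh line is the one Edward gave above):

  # after restarting glusterd everywhere, confirm the storage is healthy
  gluster peer status
  gluster volume status engine

  # find which node has the paused engine VM, then resume it there
  hosted-engine --vm-status
  virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine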

Sahina, Gobinda, can you check this thread?

On Mon, Feb 4, 2019 at 12:21 PM feral <blistovmhz@gmail.com> wrote:
> Just wanted to make sure I wasn't the only one having this issue. Given we know at least two people have noticed, who's the best to contact? I haven't been able to get any response from devs on any of the (myriad) issues with the 4.2.8 image.

Have you reported bugs? https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine is a good generic place to start.

> Also having a ton of strange issues with the hosted-engine vm deployment.

Can you elaborate and/or report bugs? https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt

On Mon, Feb 4, 2019 at 11:59 AM Edward Berger <edwberger@gmail.com> wrote:
> Yes, I had that issue with a 4.2.8 installation. I had to manually edit the "web-UI-generated" config to get anywhere close to what I wanted.

Please report a bug on this, with steps to reproduce. https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt

--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA <https://www.redhat.com/>
gshereme@redhat.com IRC: gshereme <https://red.ht/sig>

Sure Greg, I will look into this and get back to you guys.

On Tue, Feb 5, 2019 at 7:22 AM Greg Sheremeta <gshereme@redhat.com> wrote:
> Sahina, Gobinda, can you check this thread?

--
Thanks,
Gobinda

FYI, this is just a vanilla install from the ovirt-node 4.2 ISO: install 3 nodes, sync up the hosts file and exchange SSH keys, then hit the web UI for the hyperconverged deployment. The only settings I enter that make it into the config are the hostnames.

On Mon, Feb 4, 2019, 8:44 PM Gobinda Das <godas@redhat.com> wrote:
> Sure Greg, I will look into this and get back to you guys.

Dear Feral,
> On that note, have you also had issues with gluster not restarting on reboot, as well as all of the HA stuff failing on reboot after power loss? Thus far, the only way I've got the cluster to come back to life, is to manually restart glusterd on all nodes, then put the cluster back into "not maintenance" mode, and then manually starting the hosted-engine vm. This also fails after 2 or 3 power losses, even though the entire cluster is happy through the first 2.
About the gluster not starting: use systemd.mount unit files. Here is my setup, and for now it works:

  [root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.mount
  # /etc/systemd/system/gluster_bricks-engine.mount
  [Unit]
  Description=Mount glusterfs brick - ENGINE
  Requires = vdo.service
  After = vdo.service
  Before = glusterd.service
  Conflicts = umount.target

  [Mount]
  What=/dev/mapper/gluster_vg_md0-gluster_lv_engine
  Where=/gluster_bricks/engine
  Type=xfs
  Options=inode64,noatime,nodiratime

  [Install]
  WantedBy=glusterd.service

  [root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.automount
  # /etc/systemd/system/gluster_bricks-engine.automount
  [Unit]
  Description=automount for gluster brick ENGINE

  [Automount]
  Where=/gluster_bricks/engine

  [Install]
  WantedBy=multi-user.target

  [root@ovirt2 yum.repos.d]# systemctl cat glusterd
  # /etc/systemd/system/glusterd.service
  [Unit]
  Description=GlusterFS, a clustered file-system server
  Requires=rpcbind.service gluster_bricks-engine.mount gluster_bricks-data.mount gluster_bricks-isos.mount
  After=network.target rpcbind.service gluster_bricks-engine.mount gluster_bricks-data.mount gluster_bricks-isos.mount
  Before=network-online.target

  [Service]
  Type=forking
  PIDFile=/var/run/glusterd.pid
  LimitNOFILE=65536
  Environment="LOG_LEVEL=INFO"
  EnvironmentFile=-/etc/sysconfig/glusterd
  ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS
  KillMode=process
  SuccessExitStatus=15

  [Install]
  WantedBy=multi-user.target

  # /etc/systemd/system/glusterd.service.d/99-cpu.conf
  [Service]
  CPUAccounting=yes
  Slice=glusterfs.slice

Best Regards,
Strahil Nikolov
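P.S. After dropping files like these into /etc/systemd/system, they still have to be picked up and enabled; roughly:

  # reload unit definitions, then hook the units into their WantedBy targets
  systemctl daemon-reload
  systemctl enable gluster_bricks-engine.mount gluster_bricks-engine.automount
  systemctl enable glusterd.service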

Using systemd makes way more sense to me. I was just trying to use ovirt-node as it was ... intended? Mainly because I have no idea how it all works yet, so I've been trying to do the most stock deployment possible, following the deployment instructions and not assuming I'm smarter than the software :p.

I've given up on 4.2 for now, as 4.3 was just released, so I'm giving that a try now. Will report back. Hopefully 4.3 enlists systemd for this stuff?

On Tue, Feb 5, 2019 at 4:33 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
> About the gluster not starting: use systemd.mount unit files. Here is my setup, and for now it works: [...]

+Sachidananda URS to review the user request about systemd mount files.

On Tue, Feb 5, 2019 at 10:22 PM feral <blistovmhz@gmail.com> wrote:
> Using systemd makes way more sense to me. [...] I've given up on 4.2 for now, as 4.3 was just released, so I'm giving that a try now. Will report back. Hopefully 4.3 enlists systemd for this stuff?

Hi,

On Thu, Feb 7, 2019 at 9:27 AM Sahina Bose <sabose@redhat.com> wrote:
> +Sachidananda URS to review the user request about systemd mount files.

Unless we have a really complicated mount setup, it is better to use fstab. We had certain difficulties while using VDO; maybe unit files make sense for such cases? However, the systemd.mount(5) manpage suggests that the preferred way to configure mounts is /etc/fstab.

src: https://manpages.debian.org/jessie/systemd/systemd.mount.5.en.html#/ETC/FSTA...

<snip>
/ETC/FSTAB

Mount units may either be configured via unit files, or via /etc/fstab (see fstab(5) for details). Mounts listed in /etc/fstab will be converted into native units dynamically at boot and when the configuration of the system manager is reloaded. In general, configuring mount points through /etc/fstab is the preferred approach. See systemd-fstab-generator(8) for details about the conversion.
</snip>
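For example, the engine brick from Strahil's setup could be expressed as a single fstab line. The device and mount point are taken from his unit file; the x-systemd.* options are my translation of his unit-file dependencies and need a systemd new enough to honor them:

  /dev/mapper/gluster_vg_md0-gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service,x-systemd.before=glusterd.service 0 0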
participants (7)
- Edward Berger
- feral
- Gobinda Das
- Greg Sheremeta
- Sachidananda URS
- Sahina Bose
- Strahil Nikolov