HE - engine gluster volume - not mounted

Hello Everyone, Using a 4.3.2 installation, after running through the HyperConverged Setup it fails at the last stage. It seems that the previously created "engine" volume is not mounted under the "/rhev" path, so the setup cannot finish the deployment. Any idea which services are responsible for mounting the volumes on the oVirt Node distribution? I'm thinking that maybe this particular one failed to start for some reason... Thank you very much! -- Best regards, Leo David

Hi, to understand what's failing I'd suggest starting by attaching the setup logs. On Sun, Mar 31, 2019 at 5:06 PM Leo David <leoalex@gmail.com> wrote:
-- Best regards, Leo David
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUXDAQHVNZWF4T...

Thank you Simone. I've decided to go for a fresh install from the iso, and I'll keep you posted if any troubles arise. But I am still trying to understand which services mount the LVMs and volumes after configuration. There is nothing related in fstab, so I assume there are a couple of .mount files somewhere in the filesystem. I'm just trying to understand the node's underlying workflow. On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi <stirabos@redhat.com> wrote:

On Mon, Apr 1, 2019 at 6:14 PM Leo David <leoalex@gmail.com> wrote:
The hosted-engine configuration is stored in /etc/ovirt-hosted-engine/hosted-engine.conf; ovirt-ha-broker mounts the hosted-engine storage domain according to that file, so that ovirt-ha-agent is able to start the engine VM. Everything else is just in the engine DB.
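For reference, the above can be checked on a node with a couple of commands. This is only a sketch: the paths are the standard oVirt ones mentioned in this thread, and each step falls back to a notice on machines where oVirt is not installed.

```shell
# Show the storage settings hosted-engine was deployed with
# (assumption: standard oVirt Node config path).
CONF=/etc/ovirt-hosted-engine/hosted-engine.conf
if [ -f "$CONF" ]; then
    grep -E '^(storage|domainType|mnt_options)=' "$CONF"
else
    echo "hosted-engine.conf not found"
fi

# ovirt-ha-broker mounts the domain at runtime under /rhev/data-center/mnt/,
# which is why nothing shows up in /etc/fstab.
grep '/rhev/data-center/mnt' /proc/mounts || echo "no storage domain mounted"
```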

Thank you very much! I have just installed a fresh node and triggered the single-instance hyperconverged setup. It fails at the final steps of the hosted-engine deployment:

[ INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free space]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Cannot attach Storage. There is no active Host in the Data Center.]". HTTP response code is 409.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage. There is no active Host in the Data Center.]\". HTTP response code is 409."}

Also, ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log throws the following:

2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost" var "otopi_storage_domain_details" type "<type 'dict'>" value: "{ "changed": false, "exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664, in main\n storage_domains_module.post_create_check(sd_id)\n File \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526, in post_create_check\n id=storage_domain.id,\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n self._raise_error(response, body)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\nError: Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage. There is no active Host in the Data Center.]\". HTTP response code is 409.\n", "failed": true, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage. There is no active Host in the Data Center.]\". HTTP response code is 409." }"

I have used the ovirt-node-ng-installer-4.3.2-2019031908.el7.iso. So far, I am unable to deploy a single-node hyperconverged oVirt... Any thoughts? On Mon, Apr 1, 2019 at 9:46 PM Simone Tiraboschi <stirabos@redhat.com> wrote:
-- Best regards, Leo David

Hi Leo, Can you please paste the "df -Th" and "gluster v status" output? I want to make sure the engine domain is mounted and that the volumes and bricks are up. What does the vdsm log say? On Tue, Apr 2, 2019 at 2:06 PM Leo David <leoalex@gmail.com> wrote:
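For anyone following along, the requested checks can be run on the node roughly like this. A sketch only: the volume name "engine" and the vdsm log path are the ones seen in this thread, and each command falls back to a notice where the tooling or files are absent.

```shell
# Mounted filesystems, filtered down to gluster bricks and fuse mounts
df -Th | grep -E 'glusterfs|gluster_bricks' || echo "no gluster mounts found"

# Volume/brick health (requires the gluster CLI)
if command -v gluster >/dev/null 2>&1; then
    gluster volume status engine
else
    echo "gluster CLI not available"
fi

# Recent vdsm activity (standard vdsm log location)
tail -n 50 /var/log/vdsm/vdsm.log 2>/dev/null || echo "vdsm.log not found"
```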
-- Thanks, Gobinda

Hi, I have just hit "Redeploy" and now the volume seems to be mounted:

Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/onn-ovirt--node--ng--4.3.2--0.20190319.0+1 ext4 57G 3.0G 51G 6% /
devtmpfs devtmpfs 48G 0 48G 0% /dev
tmpfs tmpfs 48G 4.0K 48G 1% /dev/shm
tmpfs tmpfs 48G 34M 48G 1% /run
tmpfs tmpfs 48G 0 48G 0% /sys/fs/cgroup
/dev/sda1 ext4 976M 183M 726M 21% /boot
/dev/mapper/onn-var ext4 15G 4.4G 9.5G 32% /var
/dev/mapper/onn-tmp ext4 976M 3.2M 906M 1% /tmp
/dev/mapper/onn-var_log ext4 17G 56M 16G 1% /var/log
/dev/mapper/onn-var_log_audit ext4 2.0G 8.7M 1.8G 1% /var/log/audit
/dev/mapper/onn-home ext4 976M 2.6M 907M 1% /home
/dev/mapper/onn-var_crash ext4 9.8G 37M 9.2G 1% /var/crash
tmpfs tmpfs 9.5G 0 9.5G 0% /run/user/0
/dev/mapper/gluster_vg_sdb-gluster_lv_engine xfs 100G 35M 100G 1% /gluster_bricks/engine
c6100-ch3-node1-gluster.internal.lab:/engine fuse.glusterfs 100G 1.1G 99G 2% /rhev/data-center/mnt/glusterSD/c6100-ch3-node1-gluster.internal.lab:_engine

[root@c6100-ch3-node1 ovirt-hosted-engine-setup]# gluster v status
Status of volume: engine
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick c6100-ch3-node1-gluster.internal.lab:/gluster_bricks/engine/engine 49152 0 Y 25397

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

The problem is that the deployment still does not finish; now the error is:

[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}

I just do not understand anymore... On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das <godas@redhat.com> wrote:
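Since the 400 carries an empty fault detail, it can help to ask the engine directly whether the host ever became active, which is what "Activate storage domain" depends on. A hypothetical sketch: the engine FQDN and the admin password below are placeholders, not values from this thread.

```shell
# Query the engine REST API (v4, XML) for host names and statuses.
# engine.example.com and the password are placeholders; adjust to your setup.
curl -sk -u 'admin@internal:password' \
     https://engine.example.com/ovirt-engine/api/hosts \
  | grep -E '<name>|<status>' || echo "engine API not reachable"
```

A host stuck in a non-up state here would explain why attaching the storage domain fails.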
-- Best regards, Leo David

And here are the last lines of the ansible_create_storage_domain log:

2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var "otopi_storage_domain_details" type "<type 'dict'>" value: "{ "changed": false, "exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664, in main\n storage_domains_module.post_create_check(sd_id)\n File \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526, in post_create_check\n id=storage_domain.id,\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n self._raise_error(response, body)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\nError: Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400.\n", "failed": true, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400." }"
2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<type 'list'>" value: "[]"
2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var "play_hosts" type "<type 'list'>" value: "[]"
2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<type 'list'>" value: "[]"
2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u'Activate storage domain', 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n File "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 664, in main\\n storage_domains_module.post_create_check(sd_id)\\n File "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 526', 'task_duration': 9, 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
2019-04-02 10:53:49,143+0100 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f03fd025e50> kwargs ignore_errors:None
2019-04-02 10:53:49,148+0100 INFO ansible stats { "ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml", "ansible_playbook_duration": "01:15 Minutes", "ansible_result": "type: <type 'dict'>\nstr: {u'localhost': {'unreachable': 0, 'skipped': 6, 'ok': 23, 'changed': 0, 'failures': 1}}", "ansible_type": "finish", "status": "FAILED" }
2019-04-02 10:53:49,149+0100 INFO SUMMARY:
Duration Task Name
-------- --------
[ < 1 sec ] Execute just a specific set of steps
[ 00:02 ] Force facts gathering
[ 00:02 ] Check local VM dir stat
[ 00:02 ] Obtain SSO token using username/password credentials
[ 00:02 ] Fetch host facts
[ 00:01 ] Fetch cluster ID
[ 00:02 ] Fetch cluster facts
[ 00:02 ] Fetch Datacenter facts
[ 00:01 ] Fetch Datacenter ID
[ 00:01 ] Fetch Datacenter name
[ 00:02 ] Add glusterfs storage domain
[ 00:02 ] Get storage domain details
[ 00:02 ] Find the appliance OVF
[ 00:02 ] Parse OVF
[ 00:02 ] Get required size
[ FAILED ] Activate storage domain

Any idea how to escalate this issue? It just does not make sense not to be able to install a fresh node from scratch... Have a nice day! Leo On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das <godas@redhat.com> wrote:
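Before escalating, it may help to pull the matching server-side errors out of the logs. A sketch: the log paths are the standard locations on the node and on the engine VM, and each command falls back to a notice when the file is absent.

```shell
# On the node: vdsm's view around the failed domain activation
[ -f /var/log/vdsm/vdsm.log ] \
  && grep -A5 'ERROR' /var/log/vdsm/vdsm.log | tail -n 30 \
  || echo "vdsm.log not found"

# On the engine VM: the engine-side cause behind the bare HTTP 400
[ -f /var/log/ovirt-engine/engine.log ] \
  && grep -B2 -A10 'Operation Failed' /var/log/ovirt-engine/engine.log | tail -n 40 \
  || echo "engine.log not found"
```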
-- Best regards, Leo David

Is it possible you have not cleared the gluster volume between installs? What is the corresponding error in vdsm.log? On Tue, Apr 2, 2019 at 4:07 PM Leo David <leoalex@gmail.com> wrote:
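For the record, clearing a previously used volume before a redeploy looks roughly like this. A destructive sketch: the volume name "engine" and the brick path are the ones from this thread, and --mode=script just skips gluster's interactive confirmation; adjust everything to your setup.

```shell
# DESTRUCTIVE: removes the old hosted-engine data from the volume.
if ! command -v gluster >/dev/null 2>&1; then
    echo "gluster CLI not available on this host"
    exit 0
fi
gluster volume stop engine --mode=script
gluster volume delete engine --mode=script
# Remove the leftover brick contents (path from the df output in this thread)
rm -rf /gluster_bricks/engine/engine
```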
Any idea on how to escalate this issue? It just does not make sense to not be able to install a fresh node from scratch...
Have a nice day !
Leo
On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das <godas@redhat.com> wrote:
Hi Leo, Can you please paste the "df -Th" and "gluster v status" output? I want to make sure the engine domain is mounted and the volumes and bricks are up. What does the vdsm log say?
On Tue, Apr 2, 2019 at 2:06 PM Leo David <leoalex@gmail.com> wrote:
Thank you very much ! I have just installed a fresh node and triggered the single instance hyperconverged setup. It seems it fails at the final steps of the hosted engine deployment:
[ INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free space]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Cannot attach Storage. There is no active Host in the Data Center.]". HTTP response code is 409.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage. There is no active Host in the Data Center.]\". HTTP response code is 409."}
Also, the ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log throws the following:
2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost" var "otopi_storage_domain_details" type "<type 'dict'>" value: "{ "changed": false, "exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664, in main\n storage_domains_module.post_create_check(sd_id)\n File \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526, in post_create_check\n id=storage_domain.id,\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n self._raise_error(response, body)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\nError: Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage. There is no active Host in the Data Center.]\". HTTP response code is 409.\n", "failed": true, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage. There is no active Host in the Data Center.]\". HTTP response code is 409." }"
I have used the ovirt-node-ng-installer-4.3.2-2019031908.el7.iso. So far, I am unable to deploy oVirt single node Hyperconverged... Any thoughts ?
On Mon, Apr 1, 2019 at 9:46 PM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Mon, Apr 1, 2019 at 6:14 PM Leo David <leoalex@gmail.com> wrote:
Thank you Simone. I've decided to go for a fresh install from iso, and I'll keep you posted if any troubles arise. But I am still trying to understand which services mount the LVMs and volumes after configuration. There is nothing related in fstab, so I assume there are a couple of .mount files somewhere in the filesystem. I'm just trying to understand the node's underlying workflow.
hosted-engine configuration is stored in /etc/ovirt-hosted-engine/hosted-engine.conf; ovirt-ha-broker will mount the hosted-engine storage domain according to that and so ovirt-ha-agent will be able to start the engine VM. Everything else is just in the engine DB.
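For anyone tracing this on a node, here is a minimal sketch of how the expected /rhev mount point can be derived from a hosted-engine.conf-style file. The sample config values and the `storage` key name are assumptions based on typical deployments, not taken from this host; verify against your own /etc/ovirt-hosted-engine/hosted-engine.conf:

```shell
# Sketch: derive the expected gluster mount point under /rhev from a
# hosted-engine.conf-style file. The sample values below are hypothetical.
conf=$(mktemp)
cat > "$conf" <<'EOF'
storage=node1.example.com:/engine
domainType=glusterfs
EOF

# Extract the storage spec (server:/volume) from the config
storage=$(awk -F= '$1 == "storage" {print $2}' "$conf")

# vdsm mounts glusterfs domains under /rhev/data-center/mnt/glusterSD/,
# with the "/" in "server:/volume" replaced by "_"
mnt="/rhev/data-center/mnt/glusterSD/$(printf '%s' "$storage" | sed 's|:/|:_|')"
echo "$mnt"
# On a live node you could then check it with: mountpoint -q "$mnt"
rm -f "$conf"
```

If that path is absent while ovirt-ha-broker is running, the broker's own logs (journalctl -u ovirt-ha-broker) would be the next place to look.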
On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi <stirabos@redhat.com> wrote:
Hi, to understand what's failing I'd suggest to start attaching setup logs.
On Sun, Mar 31, 2019 at 5:06 PM Leo David <leoalex@gmail.com> wrote:
>
> Hello Everyone,
> Using 4.3.2 installation, and after running through HyperConverged Setup, at the last stage it fails. It seems that the previously created "engine" volume is not mounted under "/rhev" path, therefore the setup cannot finish the deployment.
> Any ideea which are the services responsible of mounting the volumes on oVirt Node distribution ? I'm thinking that maybe this particularly one failed to start for some reason...
> Thank you very much !
>
> --
> Best regards, Leo David
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUXDAQHVNZWF4T...
-- Best regards, Leo David _______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDJNBS6EOXCMJK...

Just to loop in, I forgot to hit "Reply all".
I have deleted everything in the engine gluster mount path, unmounted the engine gluster volume (not deleted the volume), and started the wizard with "Use already configured storage". I have pointed it to this gluster volume; the volume gets mounted under the correct path, but the installation still fails:
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
On the node's vdsm.log I can continuously see:
2019-04-02 13:02:18,832+0100 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.03 seconds (__init__:312)
2019-04-02 13:02:19,906+0100 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48)
2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54)
2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:709)
2019-04-02 13:02:21,737+0100 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48)
2019-04-02 13:02:21,738+0100 INFO (periodic/2) [vdsm.api] FINISH repoStats return={} from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:54)
Should I perform an "engine-cleanup", delete the LVMs from Cockpit and start it all over?
Has anyone successfully used this particular iso image "ovirt-node-ng-installer-4.3.2-2019031908.el7.iso" for a single node installation?
Thank you !
Leo

On Tue, Apr 2, 2019 at 1:45 PM Sahina Bose <sabose@redhat.com> wrote:
Is it possible you have not cleared the gluster volume between installs?
What's the corresponding error in vdsm.log?

On Tue, Apr 2, 2019 at 8:14 PM Leo David <leoalex@gmail.com> wrote:
Just to loop in, I forgot to hit "Reply all"
I have deleted everything in the engine gluster mount path, unmounted the engine gluster volume (not deleted the volume), and started the wizard with "Use already configured storage". I have pointed it to this gluster volume; the volume gets mounted under the correct path, but the installation still fails:
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
And I guess we don't have the engine logs to look at this? Is there any way you can access the engine console to check?
On the node's vdsm.log I can continuously see:
2019-04-02 13:02:18,832+0100 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.03 seconds (__init__:312)
2019-04-02 13:02:19,906+0100 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48)
2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54)
2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:709)
2019-04-02 13:02:21,737+0100 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48)
2019-04-02 13:02:21,738+0100 INFO (periodic/2) [vdsm.api] FINISH repoStats return={} from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:54)
Any calls to "START connectStorageServer" in vdsm.log?
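One way to check that is a quick grep. Sketched below against a fabricated log excerpt so it is self-contained; on a real node the file is typically /var/log/vdsm/vdsm.log, and the sample lines here are invented to mirror the ones quoted above, not taken from this host:

```shell
# Count connectStorageServer calls in a vdsm-style log.
# The sample log content below is fabricated for illustration.
log=$(mktemp)
cat > "$log" <<'EOF'
2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:709)
2019-04-02 13:02:25,101+0100 INFO (jsonrpc/2) [vdsm.api] START connectStorageServer(domType=7) from=::ffff:127.0.0.1 (api:48)
EOF

n=$(grep -c "START connectStorageServer" "$log")
echo "connectStorageServer calls: $n"
rm -f "$log"
# On a live node: grep "START connectStorageServer" /var/log/vdsm/vdsm.log
```

If the count is zero while the engine is trying to attach the storage domain, the request likely never reached vdsm, which points at the engine side rather than the host.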
Should I perform an "engine-cleanup", delete the LVMs from Cockpit and start it all over?
I doubt that would resolve the issue, since you did clean up the files from the mount.
Has anyone successfully used this particular iso image "ovirt-node-ng-installer-4.3.2-2019031908.el7.iso" for a single node installation?

Sorry, don't know.
Thank you ! Leo

Hi, I started from scratch... and things became even stranger. First of all, after adding FQDN entries for both the management and gluster interfaces in /etc/hosts (specifying IP addresses for gluster nodes is not possible because of a known bug), and although I had proper DNS resolution for the gluster FQDNs, the installation went almost to the finish: [ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the local bootstrap VM to be down at engine eyes] [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_vms": [{"affinity_labels": [], "applications": [], "bios": {"boot_menu": {"enabled": false}, "type": "i440fx_sea_bios"}, "cdroms": [], "cluster": {"href": "/ovirt-engine/api/clusters/b4eb4bba-5564-11e9-82f1-00163e41da1e", "id": "b4eb4bba-5564-11e9-82f1-00163e41da1e"}, "comment": "", "cpu": {"architecture": "x86_64", "topology": {"cores": 1, "sockets": 8, "threads": 1}}, "cpu_profile": {"href": "/ovirt-engine/api/cpuprofiles/58ca604e-01a7-003f-01de-000000000250", "id": "58ca604e-01a7-003f-01de-000000000250"}, "cpu_shares": 0, "creation_time": "2019-04-02 17:42:48.463000+01:00", "delete_protected": false, "description": "", "disk_attachments": [], "display": {"address": "127.0.0.1", "allow_override": false, "certificate": {"content": "-----BEGIN
CERTIFICATE-----\nMIID9DCCAtygAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwWDELMAkGA1UEBhMCVVMxFTATBgNVBAoM\nDHN5bmNyYXN5LmxhYjEyMDAGA1UEAwwpdmlydHVhbGlzYXRpb24tc2FuZGJveC5zeW5jcmFzeS5s\nYWIuNDk5NjcwHhcNMTkwNDAxMTYzMDA5WhcNMjkwMzMwMTYzMDA5WjBYMQswCQYDVQQGEwJVUzEV\nMBMGA1UECgwMc3luY3Jhc3kubGFiMTIwMAYDVQQDDCl2aXJ0dWFsaXNhdGlvbi1zYW5kYm94LnN5\nbmNyYXN5LmxhYi40OTk2NzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANdcj83LBAsU\nLUS18TIKmFG4pFj0a3VR1r3gfA9+FBVzm60dmIs7zmFR843xQjNTe4n6+uJCbQ09XdOSUyRpWi+9\nq4T5nL4kHbEnPbMUnQ9TDf3bX3S6SQXN678JELobeBDRaV89kGMCsjb7boQUofs3ScMduK77Fmvf\nyhCBVomo2nS8R9FQsv7KnR+3UXPQ1LQ30gv0hRs22vRWUB8ljCh1BCEDBMh1xdDLRI+jhf3mqMZc\n3Sb6qeLyslB9p1kmb/s2wxvdrjrsvpNSpQeZbi7r0FhbkH1GMgsi8V9NGaX3zKwPDgdYt18H2k5K\niRGpF2dWBxxeBPY9R7P+5tKIflcCAwEAAaOBxzCBxDAdBgNVHQ4EFgQUyKAePwI5dLdXIpWuqDDY\njS5S0dMwgYEGA1UdIwR6MHiAFMigHj8COXS3VyKVrqgw2I0uUtHToVykWjBYMQswCQYDVQQGEwJV\nUzEVMBMGA1UECgwMc3luY3Jhc3kubGFiMTIwMAYDVQQDDCl2aXJ0dWFsaXNhdGlvbi1zYW5kYm94\nLnN5bmNyYXN5LmxhYi40OTk2N4ICEAAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYw\nDQYJKoZIhvcNAQELBQADggEBAElAlZvQZHep9ujnvJ3cOGe1bHeRpvFyThAb3YEpG9LRx91jYl+N\ndd6YmIa/wbUt9/SIwlsB5lOzbwI47yFK9zRjjIfR1nDuv5aDL+ZQhoU0zTypa3dx6OZekx11VGyF\ndFBMFSYVM2uiSaKzLB9clQjCMiLpiT00zfpCBDrORrpIJjWNWyC5AJiq0CXPQzTUq5Lylafe6fhH\nJab3bxrCDkREgb3eZN9uuT12BxrVtJkF4QaonTn2o/62hEOyVy6v8vyC66r4lz7AGwVIkuxa2bXU\nQvIhfhm1mC4ZFzKPMcJzpW0ze+OCoFPYaQFDmiO210j7prZaPobvq7JCBh1GleM=\n-----END CERTIFICATE-----\n", "organization": "internal.lab", "subject": "O=internal.lab,CN=c6100-ch3-node1.internal.lab"}, "copy_paste_enabled": true, "disconnect_action": "LOCK_SCREEN", "file_transfer_enabled": true, "monitors": 1, "port": 5900, "single_qxl_pci": false, "smartcard_enabled": false, "type": "vnc"}, "fqdn": "virtualisation-sandbox.internal.lab", "graphics_consoles": [], "guest_operating_system": {"architecture": "x86_64", "codename": "", "distribution": "CentOS Linux", "family": "Linux", "kernel": {"version": {"build": 0, "full_version": "3.10.0-957.10.1.el7.x86_64", 
"major": 3, "minor": 10, "revision": 957}}, "version": {"full_version": "7", "major": 7}}, "guest_time_zone": {"name": "BST", "utc_offset": "+01:00"}, "high_availability": {"enabled": false, "priority": 0}, "host": {"href": "/ovirt-engine/api/hosts/740c07ae-504a-49b5-967c-676fd6ca16c3", "id": "740c07ae-504a-49b5-967c-676fd6ca16c3"}, "host_devices": [], "href": "/ovirt-engine/api/vms/780c584b-28fa-4bde-af02-99b296522d17", "id": "780c584b-28fa-4bde-af02-99b296522d17", "io": {"threads": 1}, "katello_errata": [], "large_icon": {"href": "/ovirt-engine/api/icons/defaf775-731c-4e75-8c51-9119ac6dc689", "id": "defaf775-731c-4e75-8c51-9119ac6dc689"}, "memory": 34359738368, "memory_policy": {"guaranteed": 34359738368, "max": 34359738368}, "migration": {"auto_converge": "inherit", "compressed": "inherit"}, "migration_downtime": -1, "multi_queues_enabled": true, "name": "external-HostedEngineLocal", "next_run_configuration_exists": false, "nics": [], "numa_nodes": [], "numa_tune_mode": "interleave", "origin": "external", "original_template": {"href": "/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000", "id": "00000000-0000-0000-0000-000000000000"}, "os": {"boot": {"devices": ["hd"]}, "type": "other"}, "permissions": [], "placement_policy": {"affinity": "migratable"}, "quota": {"id": "d27a97ee-5564-11e9-bba0-00163e41da1e"}, "reported_devices": [], "run_once": false, "sessions": [], "small_icon": {"href": "/ovirt-engine/api/icons/a29967f4-53e5-4acc-92d8-4a971b54e655", "id": "a29967f4-53e5-4acc-92d8-4a971b54e655"}, "snapshots": [], "sso": {"methods": [{"id": "guest_agent"}]}, "start_paused": false, "stateless": false, "statistics": [], "status": "up", "storage_error_resume_behaviour": "auto_resume", "tags": [], "template": {"href": "/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000", "id": "00000000-0000-0000-0000-000000000000"}, "time_zone": {"name": "Etc/GMT"}, "type": "desktop", "usb": {"enabled": false}, "watchdogs": []}]}, "attempts": 24, 
"changed": false} The engine eventually went up, and i could login into the UI. Here, i've I have found an additional stopped vm called "external-HostedEngineLocal" - i assume the playbook didnt managed to delete it. I just don't know what to say, if this installation is reliable considering it is a fresh installation from official iso image... Do you think it would be better to wait for the next release when hopefully gluster 5.5 will be integrated too ? Thank very much for your answers ! On Tue, Apr 2, 2019 at 6:31 PM Sahina Bose <sabose@redhat.com> wrote:
On Tue, Apr 2, 2019 at 8:14 PM Leo David <leoalex@gmail.com> wrote:
Just to loop in, i've forgot to hit "Reply all"
I have deleted everything in the engine gluster mount path, unmounted the engine gluster volume (not deleted the volume), and started the wizard with "Use already configured storage". I pointed it to this gluster volume; the volume gets mounted under the correct path, but the installation still fails:
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain] [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
"[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
And I guess we don't have the engine logs to look at this? Is there any way you can access the engine console to check?
On the node's vdsm.log I can continuously see: 2019-04-02 13:02:18,832+0100 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer]
RPC call Host.getStats succeeded in 0.03 seconds (__init__:312)
2019-04-02 13:02:19,906+0100 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48) 2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54) 2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:709) 2019-04-02 13:02:21,737+0100 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48) 2019-04-02 13:02:21,738+0100 INFO (periodic/2) [vdsm.api] FINISH repoStats return={} from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:54)
Any calls to "START connectStorageServer" in vdsm.log?
Should I perform an "engine-cleanup", delete the LVMs from Cockpit, and start all over?
I doubt that would resolve the issue, since you did clean up the files from the mount.
Did anyone successfully use this particular iso image "ovirt-node-ng-installer-4.3.2-2019031908.el7.iso" for a single-node installation?
Sorry, don't know.
Thank you ! Leo
On Tue, Apr 2, 2019 at 1:45 PM Sahina Bose <sabose@redhat.com> wrote:
Is it possible you have not cleared the gluster volume between installs?
What's the corresponding error in vdsm.log?
On Tue, Apr 2, 2019 at 4:07 PM Leo David <leoalex@gmail.com> wrote:
And here are the last lines of the ansible_create_storage_domain log:
2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var
"otopi_storage_domain_details" type "<type 'dict'>" value: "{
"changed": false, "exception": "Traceback (most recent call last):\n File
\"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664, in main\n storage_domains_module.post_create_check(sd_id)\n File \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526, in post_create_check\n id=storage_domain.id,\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n self._raise_error(response, body)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\nError: Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400.\n",
"failed": true, "msg": "Fault reason is \"Operation Failed\". Fault detail is
\"[]\". HTTP response code is 400."
}" 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<type 'list'>" value: "[]" 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var "play_hosts" type "<type 'list'>" value: "[]" 2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<type 'list'>" value: "[]" 2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u'Activate storage domain', 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n File "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 664, in main\\n storage_domains_module.post_create_check(sd_id)\\n File "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 526', 'task_duration': 9, 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'} 2019-04-02 10:53:49,143+0100 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f03fd025e50> kwargs ignore_errors:None 2019-04-02 10:53:49,148+0100 INFO ansible stats { "ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml", "ansible_playbook_duration": "01:15 Minutes", "ansible_result": "type: <type 'dict'>\nstr: {u'localhost': {'unreachable': 0, 'skipped': 6, 'ok': 23, 'changed': 0, 'failures': 1}}", "ansible_type": "finish", "status": "FAILED" } 2019-04-02 10:53:49,149+0100 INFO SUMMARY: Duration Task Name -------- -------- [ < 1 sec ] Execute just a specific set of steps [ 00:02 ] Force facts gathering [ 00:02 ] Check local VM dir stat [ 00:02 ] Obtain SSO token using username/password credentials [ 00:02 ] Fetch host facts [ 00:01 ] Fetch cluster ID [ 00:02 ] Fetch cluster facts [ 00:02 ] Fetch Datacenter facts [ 00:01 ] Fetch Datacenter ID [ 00:01 ] Fetch Datacenter name [ 00:02 ] Add glusterfs storage domain [ 00:02 ] Get 
storage domain details [ 00:02 ] Find the appliance OVF [ 00:02 ] Parse OVF [ 00:02 ] Get required size [ FAILED ] Activate storage domain
Any idea on how to escalate this issue? It just does not make sense not to be able to install a fresh node from scratch...
Have a nice day !
Leo
On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das <godas@redhat.com> wrote:
Hi Leo, Can you please paste the "df -Th" and "gluster v status" output? I want to make sure the engine domain is mounted and that the volumes and bricks are up. What does the vdsm log say?
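The kind of check being asked for can be sketched as follows. The df output here is a fabricated sample (made-up hostname and sizes) so the snippet runs anywhere; on a real node run `df -Th` and `gluster volume status engine` directly and read their output instead.

```shell
# Illustrative: confirm the engine volume appears as a fuse.glusterfs mount.
# sample_df is fabricated `df -Th`-style output; on a real node use `df -Th`.
sample_df='Filesystem                 Type            Size  Used Avail Use% Mounted on
node1.example.com:/engine  fuse.glusterfs  100G   12G   88G  12% /rhev/data-center/mnt/glusterSD/node1.example.com:_engine'

# A healthy deployment shows the gluster volume fuse-mounted under glusterSD.
if echo "$sample_df" | grep -q 'fuse\.glusterfs.*glusterSD.*engine'; then
  engine_mounted=yes
else
  engine_mounted=no
fi
echo "engine volume mounted: $engine_mounted"
```

`gluster volume status engine` additionally shows whether each brick process is online, which `df` alone cannot tell you.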
On Tue, Apr 2, 2019 at 2:06 PM Leo David <leoalex@gmail.com> wrote:
Thank you very much! I have just installed a fresh new node and triggered the single-instance hyperconverged setup. It seems to fail at the final steps of the hosted-engine deployment:
INFO ] TASK [ovirt.hosted_engine_setup : Get required size] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage domain] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free space] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain] [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Cannot attach Storage. There is no active Host in the Data Center.]". HTTP response code is 409. [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage. There is no active Host in the Data Center.]\". HTTP response code is 409."} Also, the ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log
2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost"
var "otopi_storage_domain_details" type "<type 'dict'>" value: "{
"changed": false, "exception": "Traceback (most recent call last):\n File
\"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664, in main\n storage_domains_module.post_create_check(sd_id)\n File \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526, in post_create_check\n id=storage_domain.id,\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n self._raise_error(response, body)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\nError: Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage. There is no active Host in the Data Center.]\". HTTP response code is 409.\n",
"failed": true, "msg": "Fault reason is \"Operation Failed\". Fault detail is
\"[Cannot attach Storage. There is no active Host in the Data Center.]\". HTTP response code is 409."
}"
I have used the ovirt-node-ng-installer-4.3.2-2019031908.el7.iso. So far, I am unable to deploy a single-node hyperconverged oVirt... Any thoughts?
On Mon, Apr 1, 2019 at 9:46 PM Simone Tiraboschi <stirabos@redhat.com> wrote:
> hosted-engine configuration is stored in /etc/ovirt-hosted-engine/hosted-engine.conf; ovirt-ha-broker will mount the hosted-engine storage domain according to that and so ovirt-ha-agent will be able to start the engine VM. Everything else is just in the engine DB.
-- Best regards, Leo David
--
Thanks, Gobinda
-- Best regards, Leo David
-- Best regards, Leo David
participants (4)
-
Gobinda Das
-
Leo David
-
Sahina Bose
-
Simone Tiraboschi