Hi,
I have just hit "Redeploy" and now the volume seems to be mounted:
Filesystem                                              Type            Size  Used  Avail Use% Mounted on
/dev/mapper/onn-ovirt--node--ng--4.3.2--0.20190319.0+1  ext4             57G  3.0G    51G   6% /
devtmpfs                                                devtmpfs         48G     0    48G   0% /dev
tmpfs                                                   tmpfs            48G  4.0K    48G   1% /dev/shm
tmpfs                                                   tmpfs            48G   34M    48G   1% /run
tmpfs                                                   tmpfs            48G     0    48G   0% /sys/fs/cgroup
/dev/sda1                                               ext4            976M  183M   726M  21% /boot
/dev/mapper/onn-var                                     ext4             15G  4.4G   9.5G  32% /var
/dev/mapper/onn-tmp                                     ext4            976M  3.2M   906M   1% /tmp
/dev/mapper/onn-var_log                                 ext4             17G   56M    16G   1% /var/log
/dev/mapper/onn-var_log_audit                           ext4            2.0G  8.7M   1.8G   1% /var/log/audit
/dev/mapper/onn-home                                    ext4            976M  2.6M   907M   1% /home
/dev/mapper/onn-var_crash                               ext4            9.8G   37M   9.2G   1% /var/crash
tmpfs                                                   tmpfs           9.5G     0   9.5G   0% /run/user/0
/dev/mapper/gluster_vg_sdb-gluster_lv_engine            xfs             100G   35M   100G   1% /gluster_bricks/engine
c6100-ch3-node1-gluster.internal.lab:/engine            fuse.glusterfs  100G  1.1G    99G   2% /rhev/data-center/mnt/glusterSD/c6100-ch3-node1-gluster.internal.lab:_engine
[root@c6100-ch3-node1 ovirt-hosted-engine-setup]# gluster v status
Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick c6100-ch3-node1-gluster.internal.lab:
/gluster_bricks/engine/engine               49152     0          Y       25397
Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
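
For reference, a quick way to double-check that the vdsm user can actually write into that mount (the path is copied from the df output above; the touch/rm pair is just a probe, nothing oVirt-specific):

    ls -ld /rhev/data-center/mnt/glusterSD/c6100-ch3-node1-gluster.internal.lab:_engine
    # vdsm must be able to write here for the storage domain to work
    sudo -u vdsm touch /rhev/data-center/mnt/glusterSD/c6100-ch3-node1-gluster.internal.lab:_engine/write_test \
        && sudo -u vdsm rm /rhev/data-center/mnt/glusterSD/c6100-ch3-node1-gluster.internal.lab:_engine/write_test
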
The problem is that the deployment is still not finishing; now the error is:
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}
I just do not understand anymore...
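
For completeness, these are the places that usually show more detail than the ansible summary (assuming the default log locations on the node; the engine-side log lives inside the bootstrap engine VM):

    # setup logs written by the deployment itself
    ls -t /var/log/ovirt-hosted-engine-setup/ | head
    # host/storage side activity around the failure
    tail -n 200 /var/log/vdsm/vdsm.log
    # overall hosted-engine / HA state as seen from the host
    hosted-engine --vm-status
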
On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das <godas(a)redhat.com> wrote:
Hi Leo,
Can you please paste the "df -Th" and "gluster v status" output?
I want to make sure the engine domain is mounted and that the volumes and bricks are up.
What does vdsm log say?
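
If it helps, this is roughly what I would collect (the vdsm log path is the usual default on the node):

    df -Th
    gluster volume status engine
    gluster volume info engine
    # recent vdsm activity around the failure
    tail -n 200 /var/log/vdsm/vdsm.log
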
On Tue, Apr 2, 2019 at 2:06 PM Leo David <leoalex(a)gmail.com> wrote:
> Thank you very much !
> I have just installed a fresh new node and triggered the single-instance
> hyperconverged setup. It seems to fail at the final steps of the
> hosted-engine deployment:
> [ INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage
> domain]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free
> space]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[Cannot attach Storage. There is no active Host in the Data Center.]".
> HTTP response code is 409.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage.
> There is no active Host in the Data Center.]\". HTTP response code is 409."}
> Also, the
> ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log
> throws the following:
>
> 2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "<type 'dict'>" value: "{
>     "changed": false,
>     "exception": "Traceback (most recent call last):\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664,
> in main\n    storage_domains_module.post_create_check(sd_id)\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526,
> in post_create_check\n    id=storage_domain.id,\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053,
> in add\n    return self._internal_add(storage_domain, headers, query, wait)\n
>   File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\n    return future.wait() if wait else future\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\n    return self._code(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\n    self._check_fault(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\n    self._raise_error(response, body)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\n    raise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[Cannot attach Storage. There is no active Host in the
> Data Center.]\". HTTP response code is 409.\n",
>     "failed": true,
>     "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot
> attach Storage. There is no active Host in the Data Center.]\". HTTP
> response code is 409."
> }"
>
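> If it is any use, a rough way to ask the engine what it thinks of the host
> at that point (the FQDN and admin password below are placeholders, and this
> assumes the bootstrap engine API is already answering):
>
>     # hosts as the engine sees them; the deploy host should end up "up"
>     # before the storage domain can be attached
>     curl -k -u admin@internal:<password> \
>         'https://<engine-fqdn>/ovirt-engine/api/hosts' | grep -E '<name>|<status>'
>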
> I have used the ovirt-node-ng-installer-4.3.2-2019031908.el7.iso. So far,
> I am unable to deploy an oVirt single-node hyperconverged setup...
> Any thoughts?
>
>
>
> On Mon, Apr 1, 2019 at 9:46 PM Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
>
>>
>>
>> On Mon, Apr 1, 2019 at 6:14 PM Leo David <leoalex(a)gmail.com> wrote:
>>
>>> Thank you Simone.
>>> I've decides to go for a new fresh install from iso, and i'll keep
>>> posted if any troubles arise. But I am still trying to understand what are
>>> the services that mount the lvms and volumes after configuration. There is
>>> nothing related in fstab, so I assume there are a couple of .mount files
>>> somewhere in the filesystem.
>>> Im just trying to understand node's underneath workflow.
>>>
>>
>> The hosted-engine configuration is stored in
>> /etc/ovirt-hosted-engine/hosted-engine.conf; ovirt-ha-broker mounts the
>> hosted-engine storage domain according to it, so that ovirt-ha-agent can
>> start the engine VM.
>> Everything else is just in the engine DB.
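>>
>> A quick way to see that in practice (the paths and keys below are the
>> usual defaults; adjust if your install differs):
>>
>>     # storage domain the broker will mount, straight from the config
>>     grep -E '^(storage|mnt_options|domainType)=' /etc/ovirt-hosted-engine/hosted-engine.conf
>>
>>     # the HA services that actually do the mounting and start the engine VM
>>     systemctl status ovirt-ha-broker ovirt-ha-agent
>>
>>     # the resulting mount; note there is no fstab entry involved
>>     mount | grep rhev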
>>
>>
>>>
>>> On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi <stirabos(a)redhat.com>
>>> wrote:
>>>
>>>> Hi,
>>>> to understand what's failing, I'd suggest starting by attaching the
>>>> setup logs.
>>>>
>>>> On Sun, Mar 31, 2019 at 5:06 PM Leo David <leoalex(a)gmail.com> wrote:
>>>>
>>>>> Hello Everyone,
>>>>> Using a 4.3.2 installation, after running through the HyperConverged
>>>>> Setup it fails at the last stage. It seems that the previously created
>>>>> "engine" volume is not mounted under the "/rhev" path, therefore the
>>>>> setup cannot finish the deployment.
>>>>> Any idea which services are responsible for mounting the volumes on the
>>>>> oVirt Node distribution? I'm thinking that maybe this particular one
>>>>> failed to start for some reason...
>>>>> Thank you very much!
>>>>>
>>>>> --
>>>>> Best regards, Leo David
>>>>
>
> --
> Best regards, Leo David
--
Thanks,
Gobinda