Install test lab single host HCI with plain CentOS as OS

Hello, if I want to install a test lab environment of the single host HCI type and want to use CentOS 8.2 as the OS for the host, can I still use the graphical wizard from cockpit? I see it accessing the server... Or is it intended only to be run from an ovirt-node-ng system? In case it is not possible, is there a command line workflow I can use to get the same result as the wizard playbook run?

BTW: I want to use CentOS for this test lab because I need to install custom packages (for performance counters and other things), so I'd prefer the flexibility of a plain CentOS rather than the NG node.

Thanks in advance,
Gianluca

On Fri, Oct 30, 2020 at 6:32 AM Gobinda Das <godas@redhat.com> wrote:

Hi Gianluca, yes, you can use the cockpit UI for deployment. You need to install the cockpit-ovirt-dashboard package for that.
-- Thanks, Gobinda
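
For reference, a minimal sketch of that prerequisite on a plain CentOS 8 host, assuming the oVirt release repository is already configured (cockpit serves its UI on port 9090 by default):

dnf install cockpit-ovirt-dashboard    # pulls in the oVirt/HCI deployment wizard for cockpit
systemctl enable --now cockpit.socket  # the wizard is then reachable at https://<host>:9090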

On Fri, Oct 30, 2020 at 4:00 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:

OK, thanks for the confirmation. On first try the graphical GUI complained about:

" gluster-ansible-roles is not installed on Host. To continue deployment, please install gluster-ansible-roles on Host and try again. "

So I installed the gluster-ansible-roles package and its dependencies. Then I retried, but during the first phases of the deploy I got:

" ...
TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] ***
failed: [ovirtst.mydomain.storage] (item=glusterfs) => {"ansible_loop_var": "item", "changed": false, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Permanent and Non-Permanent(immediate) operation, Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
"

My current config was:

[root@ovirt ~]# firewall-cmd --list-services
cockpit dhcpv6-client ssh
[root@ovirt ~]#

but the problem seemed to be that it didn't recognize glusterfs as a known service to enable in firewalld. In fact:

[root@ovirt ~]# firewall-cmd --get-services | grep gluster
[root@ovirt ~]#

After running

dnf install glusterfs-server

it added the glusterfs service to the firewalld config.

Now it fails at stage 3, Prepare VM:

" . . .
[ INFO ] TASK [ovirt.hosted_engine_setup : Enable GlusterFS at cluster level]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set VLAN ID at datacenter level]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Get active list of active firewalld zones]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Configure libvirt firewalld zone]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add host]
[ INFO ] changed: [localhost]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Always revoke the SSO token]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : set_fact]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Collect error events from the Engine]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Generate the error message from the engine events]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fail with error description]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The host has been set in non_operational status, deployment errors: code 4035: Gluster command [<UNKNOWN>] failed on server <UNKNOWN>., code 9000: Failed to verify Power Management configuration for Host ovirt.mydomain.local., code 10802: VDSM ovirt.mydomain.local command GlusterServersListVDS failed: The method does not exist or is not available: {'method': 'GlusterHost.list'}, fix accordingly and re-deploy."}
[ INFO ] TASK [ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost]
"

In the first phases it put the temporary IP in /etc/hosts:

192.168.222.233 ovengine.mydomain.local

But now it seems the /etc/hosts of the host has already been cleaned of the temporary IP, while the engine is still set to it:

[root@ovirt ovirt-hosted-engine-setup]# ping -c 3 192.168.222.233
PING 192.168.222.233 (192.168.222.233) 56(84) bytes of data.
64 bytes from 192.168.222.233: icmp_seq=1 ttl=64 time=0.262 ms
64 bytes from 192.168.222.233: icmp_seq=2 ttl=64 time=0.276 ms
64 bytes from 192.168.222.233: icmp_seq=3 ttl=64 time=0.267 ms

--- 192.168.222.233 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 30ms
rtt min/avg/max/mdev = 0.262/0.268/0.276/0.014 ms
[root@ovirt ovirt-hosted-engine-setup]#

[root@ovirt ovirt-hosted-engine-setup]# grep 192.168.222.233 /etc/hosts
[root@ovirt ovirt-hosted-engine-setup]#

In /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-bootstrap_local_vm-20201030105013-yf9fv8.log:

2020-10-30 11:02:39,758+0100 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Always revoke the SSO token kwargs
2020-10-30 11:02:41,035+0100 ERROR ansible failed {
    "ansible_host": "localhost",
    "ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
    "ansible_result": {
        "_ansible_no_log": false,
        "changed": false,
        "invocation": {
            "module_args": {
                "ca_file": null,
                "compress": true,
                "headers": null,
                "hostname": null,
                "insecure": null,
                "kerberos": false,
                "ovirt_auth": {
                    "ansible_facts": {
                        "ovirt_auth": {
                            "ca_file": null,
                            "compress": true,
                            "headers": null,
                            "insecure": true,
                            "kerberos": false,
                            "timeout": 0,
                            "token": "Q-eWV_-KjpmnCsVUoVftaQUlTO4_n-iGrKmJn4SXq-c-YOSF-ojRjdxb5ilRLHxSfZ1keR1pIIc3TTKxylyBtw",
                            "url": "https://ovengine.mydomain.local/ovirt-engine/api"
                        }
                    },
                    "attempts": 1,
                    "changed": false,
                    "failed": false
                },
                "password": null,
                "state": "absent",
                "timeout": 0,
                "token": null,
                "url": null,
                "username": null
            }
        },
        "msg": "You must specify either 'url' or 'hostname'."

In /var/log/ovirt-hosted-engine-setup/engine-logs-2020-10-30T09:52:10Z/ovirt-engine/engine.log:

2020-10-30 11:04:29,786+01 INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [] Updating cluster CPU flags and verb according to the configuration of the Secure Intel Cascadelake Server Family cpu
2020-10-30 11:04:29,788+01 INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] Lock Acquired to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2020-10-30 11:04:29,815+01 INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] Running command: UpdateClusterCommand internal: true. Entities affected : ID: 8a0dae8a-1a96-11eb-874f-00163e1de730 Type: ClusterAction group EDIT_CLUSTER_CONFIGURATION with role type ADMIN
2020-10-30 11:04:29,830+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] EVENT_ID: SYSTEM_UPDATE_CLUSTER(835), Host cluster Default was updated by system
2020-10-30 11:04:29,830+01 INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2020-10-30 11:04:29,846+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] START, GlusterServersListVDSCommand(HostName = ovirt.mydomain.local, VdsIdVDSCommandParametersBase:{hostId='bc9fb648-3928-4693-b740-65e38cc8d304'}), log id: 58c3bd1d
2020-10-30 11:04:29,864+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] Unexpected return value: Status [code=-32601, message=The method does not exist or is not available: {'method': 'GlusterHost.list'}]
2020-10-30 11:04:29,865+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] Unexpected return value: Status [code=-32601, message=The method does not exist or is not available: {'method': 'GlusterHost.list'}]
2020-10-30 11:04:29,865+01 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] Failed in 'GlusterServersListVDS' method
2020-10-30 11:04:29,865+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] Unexpected return value: Status [code=-32601, message=The method does not exist or is not available: {'method': 'GlusterHost.list'}]
2020-10-30 11:04:29,871+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirt.mydomain.local command GlusterServersListVDS failed: The method does not exist or is not available: {'method': 'GlusterHost.list'}
2020-10-30 11:04:29,872+01 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-61) [3e8a4535] Command 'GlusterServersListVDSCommand(HostName = ovirt.mydomain.local, VdsIdVDSCommandParametersBase:{hostId='bc9fb648-3928-4693-b740-65e38cc8d304'})' execution failed: VDSGenericException: VDSErrorException: Failed to GlusterServersListVDS, error = The method does not exist or is not available: {'method': 'GlusterHost.list'}, code = -32601

I can still ssh into the engine on its temporary IP, in case other log files that were not copied over to the host are needed. Let me know if you need any other log file.

Thanks in advance,
Gianluca
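
A side note on the firewalld error above: the glusterfs firewalld service definition ships with the glusterfs-server package, so before that package is installed firewalld has nothing named glusterfs to enable. A minimal sketch of checking and enabling it by hand, assuming standard firewall-cmd behavior (the fallback port ranges are the usual gluster defaults; verify them for your gluster version):

firewall-cmd --get-services | grep gluster        # is a glusterfs service definition present?
firewall-cmd --permanent --add-service=glusterfs  # enable it once the definition exists
firewall-cmd --reload
# fallback if no service definition is available: open the usual gluster ports directly
firewall-cmd --permanent --add-port=24007-24008/tcp --add-port=49152-49664/tcp
firewall-cmd --reload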
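
And on the temporary-IP issue: while a failed bootstrap is being debugged, it can help to point the engine FQDN back at the local VM by hand. A sketch, assuming the same temporary address and FQDN as above:

echo "192.168.222.233 ovengine.mydomain.local" >> /etc/hosts  # reach the bootstrap engine by name again
ssh root@ovengine.mydomain.local                              # the temporary engine VM stays reachable over ssh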

On Fri, Oct 30, 2020 at 11:43 AM Parth Dhanjal <dparth@redhat.com> wrote:

Hey! It seems some vdsm packages are missing. Can you try installing the vdsm-gluster and ovirt-engine-appliance packages? In case you face repo issues, just install the release package first:

yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm

Then try again. Thanks!
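
Once the package is in, the failing verb can also be probed directly on the host. A sketch, assuming vdsm-client is installed and the vdsmd service is running (GlusterHost.list is exactly the verb the engine reported as missing, and it is only expected to exist with vdsm-gluster present):

rpm -q vdsm-gluster ovirt-engine-appliance  # both should now report as installed
systemctl restart vdsmd                     # let vdsm pick up the new gluster verbs
vdsm-client GlusterHost list                # should no longer fail with "method does not exist"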

On Fri, Oct 30, 2020 at 4:20 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:

Ok, thanks. The appliance was already installed, but vdsm-gluster was indeed missing.

So currently, before deployment, I executed this on the CentOS host (initially installed choosing the "server" option):

yum update
yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
yum update
yum install cockpit-ovirt-dashboard gluster-ansible-roles glusterfs-server vdsm-gluster
systemctl enable --now cockpit.socket
reboot

Is there a list of everything that is needed for this test lab single host HCI with gluster? Anything else?

And what is the safe way to clean up before retrying the deploy?

Thanks, Gianluca

On Fri, Oct 30, 2020 at 12:02 PM Gobinda Das <godas@redhat.com> wrote:

Looks like your gluster deployment is happy; all gluster volumes will be up and running. You only need to clean up the hosted-engine deployment and then start the Hosted Engine deployment again: run the command "ovirt-hosted-engine-cleanup -q" and from the wizard use the "Hosted Engine" deployment. Please make sure the appliance is installed and the vdsm service is up and running.
-- Thanks, Gobinda
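
A compact sketch of that retry sequence, assuming the vdsm service unit is named vdsmd as on a standard oVirt host (per Gobinda's note, the gluster volumes themselves are left untouched):

ovirt-hosted-engine-cleanup -q                       # wipe the failed hosted-engine attempt
rpm -q ovirt-engine-appliance                        # confirm the appliance image is installed
systemctl is-active vdsmd || systemctl start vdsmd   # vdsm must be up before redeploying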

On Fri, Oct 30, 2020 at 7:43 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:

Thanks Gobinda and Parth. I followed what Gobinda indicated:

[root@ovirt ~]# ovirt-hosted-engine-cleanup -q
-=== Destroy hosted-engine VM ===-
You must run deploy first
-=== Killing left-behind HostedEngine processes ===-
/sbin/ovirt-hosted-engine-cleanup: line 57: kill: (18988) - No such process
error: failed to get domain 'HostedEngine'
-=== Stop HA services ===-
-=== Shutdown sanlock ===-
shutdown force 1 wait 0
shutdown done 0
-=== Disconnecting the hosted-engine storage domain ===-
You must run deploy first
-=== De-configure VDSM networks ===-
ovirtmgmt
ovirtmgmt
-=== Stop other services ===-
Warning: Stopping libvirtd.service, but it can still be activated by: libvirtd-admin.socket libvirtd.socket libvirtd-ro.socket
-=== De-configure external daemons ===-
Removing database file /var/lib/vdsm/storage/managedvolume.db
-=== Removing configuration files ===-
? /etc/init/libvirtd.conf already missing
- removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
? /etc/ovirt-hosted-engine/answers.conf already missing
? /etc/ovirt-hosted-engine/hosted-engine.conf already missing
- removing /etc/vdsm/vdsm.conf
- removing /etc/pki/vdsm/certs/cacert.pem
- removing /etc/pki/vdsm/certs/vdsmcert.pem
- removing /etc/pki/vdsm/keys/vdsmkey.pem
- removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-key.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-key.pem
- removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-key.pem
- removing /etc/pki/CA/cacert.pem
- removing /etc/pki/libvirt/clientcert.pem
- removing /etc/pki/libvirt/private/clientkey.pem
? /etc/pki/ovirt-vmconsole/*.pem already missing
- removing /var/cache/libvirt/qemu
? /var/run/ovirt-hosted-engine-ha/* already missing
? /var/tmp/localvm* already missing
-=== Removing IP Rules ===-
[root@ovirt ~]#

Then I executed the hosted engine setup from the wizard, chose Gluster as the shared storage, and it all worked OK.

Now I have the final engine up and the 3 storage domains active. The only strange thing I see is that on the host the volumes are active (obviously ;-) but in the oVirt web admin GUI the Storage --> Volumes page is empty.

[root@ovirt ~]# gluster volume list
data
engine
vmstore

[root@ovirt ~]# gluster volume status engine
Status of volume: engine
Gluster process                                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirtst.mydomain.storage:/gluster_bricks/engine/engine   49152     0          Y       12947

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

[root@ovirt ~]# gluster volume status data
Status of volume: data
Gluster process                                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirtst.mydomain.storage:/gluster_bricks/data/data       49153     0          Y       13195

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

[root@ovirt ~]# gluster volume status vmstore
Status of volume: vmstore
Gluster process                                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirtst.mydomain.storage:/gluster_bricks/vmstore/vmstore 49154     0          Y       13447

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks

[root@ovirt ~]# df -h | egrep "Size|rhev"
Filesystem                         Size  Used Avail Use% Mounted on
ovirtst.mydomain.storage:/engine   100G  8.6G   92G   9% /rhev/data-center/mnt/glusterSD/ovirtst.mydomain.storage:_engine
ovirtst.mydomain.storage:/data     300G  5.2G  295G   2% /rhev/data-center/mnt/glusterSD/ovirtst.mydomain.storage:_data
ovirtst.mydomain.storage:/vmstore  200G  3.5G  197G   2% /rhev/data-center/mnt/glusterSD/ovirtst.mydomain.storage:_vmstore
[root@ovirt ~]#

domains:
https://drive.google.com/file/d/1euiUtsnfoaL9XgSs8T7akz47JvIFr_tn/view?usp=s...

volumes:
https://drive.google.com/file/d/1n46KQEEGbKKRRhgYa6UTYKXP25A5FB59/view?usp=s...

Is there anything to enable to have the volumes visible?

Gianluca

On Fri, Oct 30, 2020 at 3:33 PM Gobinda Das <godas@redhat.com> wrote:

Hi Gianluca, glad to hear that your problem is solved. Since you redeployed the Hosted Engine, the gluster service is not enabled in the cluster by default, and because of that you are not able to see the gluster volumes. Please edit the cluster in the admin portal, select the checkbox "Enable Gluster service" and save; your problem will be resolved :)
-- Thanks, Gobinda
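
For reference, the same flag can be flipped without the UI through the engine REST API; a hedged sketch, assuming oVirt API v4 semantics and placeholder credentials (gluster_service is the cluster attribute behind the "Enable Gluster service" checkbox):

# look up the cluster id first
curl -s -k -u admin@internal:PASSWORD https://ovengine.mydomain.local/ovirt-engine/api/clusters
# then enable the gluster service on that cluster
curl -s -k -u admin@internal:PASSWORD -X PUT \
  -H "Content-Type: application/xml" \
  -d '<cluster><gluster_service>true</gluster_service></cluster>' \
  https://ovengine.mydomain.local/ovirt-engine/api/clusters/CLUSTER_ID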

Thanks Gobinda, it worked like a charm. Actually I was indeed searching for it, remembering it from the past, but I was misled by Firefox showing a corrupt page: for this problem I'm going to open a distinct thread.

Do I need to restart the engine service? Because I can go into Storage --> Volumes, but as soon as I click inside one of them (e.g. engine) I get a GUI exception (Chrome 86.0.4240.111 (Official Build) (64-bit) on Fedora 31, google-chrome-stable-86.0.4240.111-1.x86_64). See here:

https://drive.google.com/file/d/1svzQjrGctRMQFvZEpW9zpOlUbv8XXl3Q/view?usp=s...

Conversely, Firefox 82 (firefox-82.0-4.fc31.x86_64) doesn't show this GUI exception.

Gianluca

In reply to Gianluca Cecchi's message of Fri, Oct 30, 2020 at 4:20 PM:

You can reboot the system, just in case the kernel was updated. You can clean the existing system from the UI itself: upon selecting the option to deploy a Hyperconverged Infrastructure, it'll give you an option to clean the existing setup. Also, you need to ensure that the ssh key is available in the hosts file, so that ansible can execute roles on the server. You can refer to this doc as well: https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperc...
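
On the ssh-key point: the deployment's ansible runs expect passwordless root ssh to the host FQDNs used in the wizard. A minimal sketch, assuming root and the storage FQDN used earlier in this thread:

ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa   # only if no key exists yet
ssh-copy-id root@ovirtst.mydomain.storage      # authorize the key for the storage FQDN
ssh root@ovirtst.mydomain.storage true         # first connection also records the host key in known_hosts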
participants (3):
- Gianluca Cecchi
- Gobinda Das
- Parth Dhanjal