4.5.2 Create Additional Gluster Logical Volumes fails
by simon@justconnect.ie
Hi,
In 4.4, adding additional Gluster volumes was a simple Ansible task (or could be done via Cockpit).
With 4.5.2 I tried to add new volumes, but the logic appears to have changed or broken. Here's the error I am getting:
TASK [gluster.infra/roles/backend_setup : Create volume groups] ********************************************************************************************************************************
failed: [bdtovirthcidmz02-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.010442", "end": "2022-11-10 13:11:16.717772", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:11:16.707330", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
failed: [bdtovirthcidmz03-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.010231", "end": "2022-11-10 13:12:35.607565", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:12:35.597334", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
failed: [bdtovirthcidmz01-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.011282", "end": "2022-11-10 13:13:24.336233", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:13:24.324951", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
The VG was created as part of the initial Ansible build, with logical volumes added when required.
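For reference, the stderr suggests vgcreate is refusing to run because the VG already exists, and it also complains about the "filter" setting in lvm.conf; both can be checked directly on a host before re-running the play (names taken from the error above):
# vgs gluster_vg_sda
# pvs /dev/sda
# lvmconfig --validate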
Any assistance would be greatly appreciated.
Kind regards
Simon
Host Reboot Timeout of 10 Minutes
by Peter H
I'm working in a group that maintains a large oVirt setup based on 4.4.1,
which works very well. We are wary of upgrading and prefer to set up a
new installation and gradually enlist the hosts one by one into it.
We have tried 4.4.10 and 4.5.1 through 4.5.4 based on CentOS Stream 8, Rocky
Linux 8, and AlmaLinux 9.1, with various problems. The worst was that the RPM
database ended up in a catch-22 state.
Using AlmaLinux 9.1 and the current oVirt 4.5.4 seems promising, as no RPM
problems are present after installation. We have only one nuisance left,
which we have seen in every installation attempt we have made since 4.4.10:
when rebooting a host, it takes 10 minutes before it is activated again. In
4.4.1, hosts are activated a few seconds after they have booted up.
I have found the following in the engine log:
2023-01-24 23:01:57,564+01 INFO
[org.ovirt.engine.core.bll.SshHostRebootCommand]
(EE-ManagedThreadFactory-engine-Thread-1513) [2bb08d20] Waiting 600
seconds, for server to finish reboot process.
Our Ansible deployment playbooks time out. We could increase their timeout,
but how come this 10-minute delay has been introduced?
Does a config file exist where this timeout can be set to a lower value?
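If this is an engine-config value rather than a file, something like the following might work; I have not verified the key name, so I would query it first:
# engine-config -g ServerRebootTimeout
# engine-config -s ServerRebootTimeout=120
# systemctl restart ovirt-engine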
BR
Peter H.
Out-of-sync networks can only be detached
by Sakhi Hadebe
Hi,
I have a 3-node oVirt cluster. I have configured 2 logical networks:
ovirtmgmt and public. The public logical network is attached on only 2 nodes
and fails to attach on the 3rd node with the error below:
Invalid operation, out-of-sync network 'public' can only be detached.
I have been stuck on this for almost the whole day now. How do I fix
this error?
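From what I have read, the usual fix is the "Sync All Networks" button on the host's Network Interfaces tab in the Administration Portal. I assume the same action is reachable through the REST API as well (a sketch; the engine address, HOST_ID and credentials are placeholders):
# curl -k -u 'admin@internal:PASSWORD' -X POST -H 'Content-Type: application/xml' \
      -d '<action/>' https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/syncallnetworks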
--
Regards,
Sakhi Hadebe
{REQUEST} oVirt (including RHHI) as HCI in the future
by 樽井周久 (Kanehisa Tarui, KTCSP)
To whom it may concern
I have been using oVirt in an HCI environment for over 5 years.
The reason I chose oVirt (its most attractive point for me) was
that it is easy to build an HCI environment with GUI deployment.
Now I have found out on your forum site that GUI deployment has been
deprecated (as of oVirt 4.5),
and that all the content related to HCI was dropped from your
official installation guide:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CBDUBBKLTCW4MMWCXTRXNWDYPLP5CBUP/
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/index.html
It seems that it will no longer be possible to build an HCI environment with oVirt in the future,
so I am very worried about whether I can continue to use oVirt as before.
I would appreciate it if you could tell me about your prospects
regarding oVirt as HCI.
For your reference:
On a 4.5.3 oVirt host, Gluster deployment was done with the GUI,
but the hosted engine had to be deployed with the CLI.
On a 4.5.4 oVirt host, Gluster deployment failed with the GUI
('str object' has no attribute 'vgname'),
so the subsequent hosted-engine deployment could not proceed.
Best regards,
Kanehisa Tarui
Tokyo, Japan
Re: Self-hosted engine 4.5.0 deployment fails
by John
I would run the deploy again, wait until the engine is up, and then, from
the server you are deploying the engine on, run:
# virsh list
Obtain the virtual machine number from the above command; in the example
below we assume the number is 1:
# virsh console 1
Log in as root and wait for all packages to finish updating:
# tail -f /var/log/dnf.rpm.log
then:
# dnf downgrade postgresql-jdbc
Once that's done, the deployment of the engine should get past the
[ INFO ] TASK [ovirt.ovirt.engine_setup : Check if Engine health page is up]
step.
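If the appliance pulls the newer postgresql-jdbc back in before setup finishes, it may also help to pin the downgraded package (assuming the dnf versionlock plugin is available in the appliance repos):
# dnf install python3-dnf-plugin-versionlock
# dnf versionlock add postgresql-jdbc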
On 03/05/2022 17:03, Mohamed Roushdy wrote:
>
> Hello,
>
> I’m deploying on oVirt nodes v4.5.0.1, but it fails to deploy the
> self-hosted engine with the following errors in the web installer:
>
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.ovirt.engine_setup : Check if Engine health page
> is up]
> [ ERROR ] fatal: [localhost -> 192.168.222.197]: FAILED! =>
> {"attempts": 30, "changed": false, "connection": "close",
> "content_encoding": "identity", "content_length": "86",
> "content_type": "text/html; charset=UTF-8", "date": "Tue, 03 May 2022
> 15:57:20 GMT", "elapsed": 0, "msg": "Status code was 500 and not
> [200]: HTTP Error 500: Internal Server Error", "redirected": false,
> "server": "Apache/2.4.37 (centos) OpenSSL/1.1.1k
> mod_auth_gssapi/1.6.1", "status": 500, "url":
> http://localhost/ovirt-engine/services/health}
> [ INFO ] TASK [ovirt.ovirt.engine_setup : Clean temporary files]
> [ INFO ] changed: [localhost -> 192.168.222.197]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
> [ INFO ] changed: [localhost -> 192.168.222.197]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination
> directory path]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination
> directory]
> [ INFO ] changed: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local
> appliance image]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to
> flush dirty buffers]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Change ownership of
> copied engine logs]
> [ INFO ] changed: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about
> a failure]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "There was a failure deploying the engine on the local engine VM. The
> system may not be provisioned according to the playbook results:
> please check the logs for the issue, fix accordingly or re-deploy from
> scratch.\n"}
>
> Any idea if this is a bug or something? I have tried to deploy eight
> times with different configurations.
>
> Thank you,
>
>
Multiple snapshot_memory disks stay in locked state
by Nathanaël Blanchet
Hello,
After many (failed) snapshot attempts, I find many snapshot_memory disks
in a locked state in the Disks menu.
The unlock utility doesn't return anything... How can I erase those disks,
knowing that there is no listed snapshot for this VM?
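For what it's worth, the unlock tool I know of is unlock_entity.sh on the engine host; querying first shows what the engine still considers locked (a sketch; DISK_ID is a placeholder):
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk DISK_ID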
I need the deploy script to wait for fixing network configuration manually in oVirt 4.3.10
by lars.stolpe@bvg.de
Hi,
I'm planning to upgrade our production environment from oVirt 4.3 to 4.4.
So I need a fresh oVirt 4.3 installation to test the procedure before doing it in production.
The command-line deploy script can't handle the network interfaces correctly. Whether I use a single NIC or a bond (active/passive), I get the error message "The selected network interface is not valid".
If I predefine the management bridge in a running state, the deploy process goes on, but it fails to activate the added host and removes the already running engine VM.
The deploy process fails to synchronize the existing working network configuration with the engine configuration.
I can already log in to the engine GUI and see that the bridge "ovirtmgmt" needs to be assigned to the bonding interface, but I'm not fast enough to do so, because the deployment process is already shutting down and erasing the VM.
I see the following ways to succeed:
1. make the deployment process accept the given interfaces (maybe ignore errors)
2. make the deploy process wait for me to take the necessary actions before checking the engine (see the hint below)
Does anyone know how to achieve this?
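Regarding option 2: if I read the hosted_engine_setup role documentation correctly, a he_pause_host variable exists that pauses the deployment right after the host has been added, until a lock file named on the console is removed. I have only seen it documented for the 4.4-era Ansible flow, so treat this as a hint to verify:
# hosted-engine --deploy --ansible-extra-vars=he_pause_host=true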
All I need is a running engine on hosted_storage... any other issues I can fix later.
Another idea is to use one of the destined hosts as a bare-metal engine, add the hosts, back up the engine, and use that backup for a hosted-engine restore deploy, since the deploy script asks whether to wait after the local VM is ready, but only if I do a recovery deploy.
Any suggestions?
How to view VMs from multiple ovirt-managers?
by rak kim
Hello, I'm Kim.
I have 7 ovirt-managers running, one for each service.
I want to manage the 7 ovirt-managers on one platform.
Is there any program to view the VMs of each ovirt-manager on one platform (like a multi-cloud platform)?
I guess I just need to be able to control/view the VMs.
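As far as I know there is no built-in multi-engine console (external managers such as ManageIQ can register several oVirt engines as providers), but every engine exposes the same REST API, so a simple aggregate VM view can be scripted (a sketch; the engine hostnames and credentials are placeholders):
for e in engine1.example.com engine2.example.com; do
    # list the VM names known to each engine
    curl -ks -u 'admin@internal:PASSWORD' -H 'Accept: application/json' \
        "https://$e/ovirt-engine/api/vms" | jq -r '.vm[].name'
done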