Re: gluster service on the cluster is unchecked on hci cluster
by Strahil Nikolov
Can you check for AVC denials and an error message like the one described in https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183 ?
Best Regards,
Strahil Nikolov
On Mon, Jul 11, 2022 at 16:44, Jiří Sléžka <jiri.slezka(a)slu.cz> wrote:
Hello,
On 7/11/22 14:34, Strahil Nikolov wrote:
> Can you check something on the host:
> cat /etc/glusterfs/eventsconfig.json
cat /etc/glusterfs/eventsconfig.json
{
"log-level": "INFO",
"port": 24009,
"disable-events-log": false
}
> semanage port -l | grep $(awk -F ':' '/port/ {gsub(",","",$2); print
> $2}' /etc/glusterfs/eventsconfig.json)
semanage port -l | grep 24009
returns an empty set; it looks like this port is not labeled.
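(A note from outside the original mail: if the port is indeed unlabeled, it can be added to the SELinux policy with semanage. The sketch below only prints the command rather than running it; the glusterd_port_t type is an assumption based on the glusterfs-selinux issue linked above, and actually applying it requires root and the policycoreutils-python-utils package.)

```shell
# Recreate the config from the thread, extract the events port from it with
# the same awk one-liner, then print the semanage command that would label it.
cat > /tmp/eventsconfig.json <<'EOF'
{
    "log-level": "INFO",
    "port": 24009,
    "disable-events-log": false
}
EOF
port=$(awk -F ':' '/port/ {gsub(/[ ,]/,"",$2); print $2}' /tmp/eventsconfig.json)
echo "semanage port -a -t glusterd_port_t -p tcp $port"
```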
Cheers,
Jiri
>
> Best Regards,
> Strahil Nikolov
> On Monday, July 11, 2022 at 02:18:57 AM GMT+3, Jiří Sléžka
> <jiri.slezka(a)slu.cz> wrote:
>
>
> Hi,
>
> I would like to change the CPU Type in my oVirt 4.4.10 HCI cluster (based on
> 3 glusterfs/virt hosts). When I try to, I get this error:
>
> Error while executing action: Cannot disable gluster service on the
> cluster as it contains volumes.
>
> As I remember, I had Gluster Service enabled on this cluster, but now both
> checkboxes (Enable Virt Services and Enable Gluster Service) are grayed
> out and Gluster Service is unchecked.
>
> Also, Storage / Volumes displays my volumes... well, it displays one brick
> on a particular host in an unknown state (? mark), which is a new situation.
> As far as I can see from the command line, all bricks are online, no healing
> is in progress, and all looks good...
>
> I am not sure if the second issue is related to the first one, so the main
> question is: how can I (re)enable the Gluster service in my cluster?
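(A suggestion from outside the original mail: if the checkbox stays grayed out in the UI, the cluster's gluster_service flag can in principle be flipped through the REST API. This sketch only prints the request; the engine FQDN, cluster id, and credentials are placeholders, so double-check against the API reference before sending anything like it.)

```shell
# Build (but do not send) a PUT request that re-enables the Gluster service
# on a cluster. CLUSTER_ID and engine.example.com are placeholders.
body='<cluster><gluster_service>true</gluster_service></cluster>'
req="curl -k -u 'admin@internal:PASSWORD' -X PUT -H 'Content-Type: application/xml' -d '$body' https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID"
echo "$req"
```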
>
> Thanks in advance,
>
> Jiri
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4NVCQ33ZSJ...
Cannot Enable incremental backup
by Jonas
Hello all
I'm trying to create incremental backups for my VMs on a testing cluster
and am using the functions from
https://gitlab.com/nirs/ovirt-stress/-/blob/master/backup/backup.py. So
far it works well, but on some disks it is not possible to enable
incremental backups even when the VM is powered off (see screenshot
below). Does anyone know why this might be the case and how to activate
it? I think I already checked the docs and didn't find anything, but feel
free to nudge me in the right direction.
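(An answer sketch from outside the original mail: as far as I know, incremental backup can only be enabled on qcow2 ("cow") format disks; for raw disks the checkbox stays disabled, which would explain the screenshot. A small illustration of that rule, using a hypothetical disk list:)

```shell
# Hypothetical disks with their storage formats; only qcow2 ("cow") disks
# are eligible for incremental backup, raw disks must be converted first.
eligible=""
for d in "disk1:cow" "disk2:raw" "disk3:cow"; do
    name=${d%%:*}
    fmt=${d##*:}
    if [ "$fmt" = "cow" ]; then
        eligible="$eligible $name"
    fi
done
echo "eligible for incremental backup:$eligible"
```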
By the way, is there a backup solution that is somewhat endorsed by the
community here?
Thank you and kind regards,
Jonas
[Screenshot of oVirt Disk]
Problem with engine deployment
by varekoarfa@gmail.com
Hi everyone, hope all is good.
OS: Centos Stream
ovirt 4.5
I'm having problems deploying the hosted engine, both through cockpit and the CLI.
I have 3 servers; through cockpit I have managed to configure and deploy glusterfs without problems, but when I want to deploy the hosted engine it tells me "No valid network interface has been found".
The 3 servers have 2 NICs each; I have created a bond on each one with cockpit, named bond0, in XOR mode.
If someone can help me, please.
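(A diagnostic hint from outside the original mail: the deploy script skips interfaces it considers unusable, so it is worth confirming on each host that bond0 is up, has an address, and is not already attached to a bridge. The sketch below just prints the commands to run:)

```shell
# Print the checks to run on each host before retrying the deploy
# (bond0 is the bond name used in the mail).
checks="ip -br addr show bond0
cat /proc/net/bonding/bond0"
echo "$checks"
```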
ansible packages installed:
[root@vs05 pre_checks]# rpq -qa | ansi
-bash: ansi: command not found
-bash: rpq: command not found
[root@vs05 pre_checks]# rpq -qa |grep ansi
-bash: rpq: command not found
[root@vs05 pre_checks]# rpm -qa |grep ansi
ansible-collection-ansible-posix-1.3.0-1.2.el8.noarch
ansible-collection-ansible-netcommon-2.2.0-3.2.el8.noarch
ansible-collection-ansible-utils-2.3.0-2.2.el8.noarch
gluster-ansible-maintenance-1.0.1-12.el8.noarch
gluster-ansible-features-1.0.5-15.el8.noarch
ovirt-ansible-collection-2.1.0-1.el8.noarch
gluster-ansible-cluster-1.0-5.el8.noarch
gluster-ansible-repositories-1.0.1-5.el8.noarch
ansible-core-2.12.7-1.el8.x86_64
gluster-ansible-roles-1.0.5-28.el8.noarch
gluster-ansible-infra-1.0.4-22.el8.noarch
Host certificate expired
by Rob B
Hi,
We have an oVirt host in an 'Unassigned' state because its certificate has expired.
The ovirt events show...
VDSM host1 command Get Host Capabilities failed: PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed
This is the only host in the cluster, and it has local storage, so I don't have any option to start the single VM elsewhere.
Is there a way to renew the certificate on this host? I have no option to put the host in maintenance mode and 'Enroll Certificate' as it's in the unassigned state.
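(Not part of the original mail: before anything else it is worth confirming which certificate actually expired; the VDSM certificate on the host usually lives under /etc/pki/vdsm/. The block below only prints the openssl command, as a hedged sketch:)

```shell
# Print the openssl command that shows the expiry date of the host-side
# VDSM certificate (path is the usual oVirt/VDSM default).
cert=/etc/pki/vdsm/certs/vdsmcert.pem
cmd="openssl x509 -noout -enddate -in $cert"
echo "$cmd"
```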
The oVirt manager is running version: 4.4.10.7-1.el8
The oVirt host in the bad state is running: ovirt-host-4.4.9-2, vdsm-4.40.100.2-1.
Please let me know if you need any more info, and thanks in advance.
Rob
Moving raw/sparse disk from NFS to iSCSI fails on oVirt 4.5.1
by Guillaume Pavese
On a 4.5.1 DC, I have imported a VM and its disk from an old 4.3 DC
(through an export domain, if that's relevant).
The DC/Cluster compatibility level is 4.7 and the VM was upgraded to it:
"Original custom compatibility version 4.3 of imported VM xxx is not
supported. Changing it to the lowest supported version: 4.7."
The disk is raw and sparse:
<format>raw</format>
<sparse>true</sparse>
I initially put the VM's disks on an NFS storage domain, but I want to move
the disks to an iSCSI one.
However, after copying data for a while the task fails with "User has failed to
move disk VM-TEMPLATE-COS7_Disk1 to domain iSCSI-STO-FR-301".
In engine.log:
qemu-img: error while writing at byte xxx: No space left on device
2022-07-21 08:58:23,240+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHostJobsVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48)
[65fed1dc-e33b-471e-bc49-8b9662400e5f] FINISH, GetHostJobsVDSCommand,
return:
{0aa2d519-8130-4e2f-bc4f-892e5f7b5206=HostJobInfo:{id='0aa2d519-8130-4e2f-bc4f-892e5f7b5206',
type='storage', description='copy_data', status='failed', progress='79',
error='VDSError:{code='GeneralException', message='General Exception:
("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T',
'none', '-f', 'raw', '-O', 'qcow2', '-o', 'compat=1.1',
'/rhev/data-center/mnt/svc-int-prd-sto-fr-301.hostics.fr:_volume1_ovirt-int-2_data/1ce95c4a-2ec5-47b7-bd24-e540165c6718/images/d3c33cc7-f2c3-4613-84d0-d3c9fa3d5ebd/2c4a0041-b18b-408f-9c0d-971c19a552ea',
'/rhev/data-center/mnt/blockSD/b5dc9c01-3749-4326-99c5-f84f683190bd/images/d3c33cc7-f2c3-4613-84d0-d3c9fa3d5ebd/2c4a0041-b18b-408f-9c0d-971c19a552ea']
failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while writing at
byte 13639873536: No space left on device\\n')",)'}'}}, log id: 73f77495
2022-07-21 08:58:23,241+02 INFO
[org.ovirt.engine.core.bll.StorageJobCallback]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48)
[65fed1dc-e33b-471e-bc49-8b9662400e5f] Command CopyData id:
'521bdf57-8379-40ce-a682-af859fb0cad7': job
'0aa2d519-8130-4e2f-bc4f-892e5f7b5206' execution was completed with VDSM
job status 'failed'
I do want the conversion from raw/sparse to qcow2/sparse to happen, as I
want to activate incremental backups.
I think that it may fail because the virtual size is bigger than the
initial size, as I think someone has explained on this list earlier? Can
anybody confirm?
It seems to be a pretty common use case to support though?
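(An observation from outside the thread: qcow2 needs space for its own metadata on top of the guest data, and on block storage the target LV is pre-sized from an estimate, so an underestimate ends in exactly this "No space left on device" error. A rough back-of-the-envelope for the L2-table overhead, assuming the default 64 KiB cluster size; the authoritative number comes from `qemu-img measure -O qcow2 <image>`:)

```shell
# With 64 KiB clusters, qcow2 keeps one 8-byte L2 entry per cluster,
# i.e. roughly virtual_size / 8192 bytes of L2 tables alone.
virtual_bytes=$((500 * 1024 * 1024 * 1024))   # a 500 GiB disk, as an example
l2_overhead=$((virtual_bytes / 8192))
echo "estimated L2 overhead: $l2_overhead bytes"
```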
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
--
ovirt-engine manager, certificate issue
by david
hello
I have a problem to log in to ovirt-engine manager in my browser
the warning message in the browser display me this text:
PKIX path validation failed: java.security.cert.CertPathValidatorException:
validity check failed
to solve this problem I am offered to run engine-setup
and here is a question: the engine-setup will have no impact to the
hosts(hypervisors) working?
ovirt version 4.4.4.7-1.el8
thanks
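(Not part of the original mail: the certificate expiry can be confirmed from any machine before running engine-setup. A sketch that only prints the command; the engine FQDN is a placeholder:)

```shell
# Print a command that fetches the engine's HTTPS certificate and shows
# its expiry date. engine.example.com is a placeholder FQDN.
host=engine.example.com
cmd="echo | openssl s_client -connect $host:443 2>/dev/null | openssl x509 -noout -enddate"
echo "$cmd"
```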
Issue with oVirt 4.5 and Data Warehouse installed on a Separate Machine
by Igor Davidoff
Hello,
I have an issue with the 'engine-setup' step on the DWH (separate server) after upgrading from 4.4.10 to 4.5.
It looks like ovirt-engine-setup is looking for the rpm package 'ovirt-engine' instead of 'ovirt-engine-dwh'.
The reported error is:
"
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
[ ERROR ] Failed to execute stage 'Setup validation': Command '/usr/bin/rpm' failed to execute
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20220502100751-fqwb07.log
[WARNING] Remote engine was not configured to be able to access DWH, please check the logs.
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20220502101130-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
"
In the setup log I found:
"
2022-05-02 10:11:30,000+0000 DEBUG otopi.context context._executeMethod:127 Stage validation METHOD otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages.Plugin._validation
2022-05-02 10:11:30,001+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:813 execute: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine'), executable='None', cwd='None', env=None
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:863 execute-result: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine'), rc=1
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:921 execute-output: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine') stdout:
package ovirt-engine is not installed
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:926 execute-output: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine') stderr:
2022-05-02 10:11:30,013+0000 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-common/distro-rpm/packages.py", line 463, in _validation
oenginecons.Const.ENGINE_PACKAGE_NAME,
File "/usr/lib/python3.6/site-packages/otopi/plugin.py", line 931, in execute
command=args[0],
RuntimeError: Command '/usr/bin/rpm' failed to execute
2022-05-02 10:11:30,015+0000 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Setup validation': Command '/usr/bin/rpm' failed to execute
"
Usually the upgrade of minor versions in 4.4 was just:
# yum update ovirt\*setup\*
# engine-setup
# yum update
As that did not work, I tried a fresh installation of CentOS 8 Stream and recovery of the DWH database and configuration:
# engine-backup --mode=restore --file=backup.bck --provision-all-databases
-> no luck
The last idea was a fresh installation of CentOS 8 Stream + a fresh installation of ovirt-engine-dwh 4.5 (without recovery)
-> the same error.
The engine side works fine.
I compared the current setup logs with the installation and all the minor upgrades of ovirt-engine-dwh before 4.5,
and only found the rpm validation for the package 'ovirt-engine-dwh':
"
2022-02-08 16:11:29,846+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:813 execute: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh'), executable='None', cwd='None', env=None
2022-02-08 16:11:29,877+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:863 execute-result: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh'), rc=0
2022-02-08 16:11:29,878+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:921 execute-output: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh') stdout:
ovirt-engine-dwh-4.4.10-1.el8.noarch
2022-02-08 16:11:29,878+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:926 execute-output: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh') stderr:
2022-02-08 16:11:29,878+0000 DEBUG otopi.transaction transaction.commit:152 committing 'DWH Engine database Transaction'
"
It looks like engine-setup knows it is the DWH server, but is trying to validate the wrong rpm package.
Any ideas how to work around this?
Thank you!
Gluster volume "deleted" by accident --- Is it possible to recover?
by itforums51@gmail.com
Hi everyone,
I have a 3-node oVirt 4.4.6 cluster in an HC setup.
Today I was intending to extend the data and vmstore volumes by adding another brick to each; then by accident I pressed the "cleanup" button. Basically it looks like the volumes were deleted.
I am wondering whether there is a process for trying to recover these volumes and therefore all VMs (including the Hosted Engine).
```
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
gluster_lv_data gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
gluster_lv_data-brick1 gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4 0.45
gluster_lv_engine gluster_vg_sda4 -wi-a----- 100.00g
gluster_lv_vmstore gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
gluster_lv_vmstore-brick1 gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4 0.33
gluster_thinpool_gluster_vg_sda4 gluster_vg_sda4 twi-aot--- <7.07t 11.46 0.89
```
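(A recovery hint from outside the original mail: if only the LVM layer was torn down, LVM keeps metadata backups under /etc/lvm/archive, and vgcfgrestore can sometimes roll the VG back. This sketch only prints the commands; the VG name is taken from the lvs output above, the archive file name is a placeholder to fill in, and nothing like this should be run before the bricks have been imaged or backed up.)

```shell
# Print the vgcfgrestore steps one would review before attempting recovery.
# gluster_vg_sda4 comes from the lvs output; <chosen-archive-file> is a
# placeholder for an entry from /etc/lvm/archive.
vg=gluster_vg_sda4
steps="vgcfgrestore --list $vg
vgcfgrestore -f /etc/lvm/archive/<chosen-archive-file> $vg
lvchange -ay $vg"
echo "$steps"
```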
I would appreciate any advice.
TIA
Engine update failing 4.5.1.3-1 -> 4.5.2-1
by xavierl@rogers.com
Hi there,
I'm trying to update my engine version so I can mitigate a bug which was in the previous update, and the update is failing. The log file is proving very difficult to pinpoint the cause of the error from; however, I can provide it. Wondering if anyone else has had this issue and if there's a way to fix it while preserving the VMs I have on this single-host setup.
[ INFO ] Stage: Setup validation
During execution engine service will be stopped (OK, Cancel) [OK]:
[ INFO ] Hosted Engine HA is in Global Maintenance mode.
Setup version: 4.5.2-1.el8
Engine version: 4.5.1.3-1.el8
[ ERROR ] Failed to execute stage 'Setup validation': Setup and (updated) Engine versions must match
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20220728194403-9ev4xe.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20220728194429-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
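(A note from outside the original mail: in my understanding, this particular error means the setup packages were updated but the ovirt-engine package itself is still at the old version, often because the setup-managed versionlock held it back. A hedged sketch of what I would check, printed rather than executed:)

```shell
# Print the commands to inspect the versionlock and update the engine
# package before re-running engine-setup.
checks="dnf versionlock list
dnf update ovirt-engine
engine-setup"
echo "$checks"
```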
ovirt vms NotResponding related to firewalls
by Bill James
We recently added a bunch of firewalls around the VLANs where our VMs live, and
it may be causing a lot of confusion for the oVirt management node.
Though I don't know why a firewall would cause intermittent problems.
We see a lot of errors like this (in engine.log):
2022-07-28 19:39:47,923-07 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-1) [] VM '6e177451-d43f-4e91-85ad-6b8f3bcfc4ad'(epayfraud2.j2noc.com) moved from 'NotResponding' --> 'Up'
...
2022-07-28 19:39:53,676-07 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-58) [] EVENT_ID: VM_NOT_RESPONDING(126), VM epayfraud2.j2noc.com is not responding.
...
2022-07-28 19:40:36,189-07 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-10) [] VM '6e177451-d43f-4e91-85ad-6b8f3bcfc4ad'(epayfraud2.j2noc.com) moved from 'NotResponding' --> 'Up'
I guess my first question is: how does oVirt determine whether a VM is up or not?
Is it the management node or the host the VM is running on, and what ports does
it use to determine health?
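(An answer sketch from outside the original mail: as far as I know, the engine polls VDSM on each host, by default over TLS on TCP 54321, and VDSM in turn reports each VM's state from libvirt, so drops or latency between the engine and the hosts can show up as NotResponding flapping. A quick reachability check, with a placeholder hostname:)

```shell
# Print a connectivity check from the engine to a host's VDSM port.
# host1.example.com is a placeholder for one of the virtualization hosts.
host=host1.example.com
cmd="nc -zv $host 54321"
echo "$cmd"
```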