Uncertain what to do with "Master storage domain" residing on obsolete storage domain.
by goestin@intert00bz.nl
Hello All,
I want to phase out a storage domain, but it is marked as "master". I have the following questions:
1. What does it mean when a storage domain is "master"?
2. What is the correct way to remove a storage domain that has been assigned the "master" status?
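My current, possibly wrong, understanding is that the master domain stores the data-center/pool metadata and that oVirt re-elects another active data domain as master once the current one is put into maintenance. If that is right, removal would look roughly like the sketch below against the REST API v4 (untested; the hostname, password and UUIDs are placeholders, and the final remove may need an extra host= parameter):
# Deactivate the obsolete domain; the master role should move to another active data domain
curl -k -u admin@internal:PASSWORD -X POST -H "Content-Type: application/xml" -d "<action/>" \
  https://engine.example.com/ovirt-engine/api/datacenters/DC_UUID/storagedomains/SD_UUID/deactivate
# Once it is in Maintenance, detach it from the data center
curl -k -u admin@internal:PASSWORD -X DELETE \
  https://engine.example.com/ovirt-engine/api/datacenters/DC_UUID/storagedomains/SD_UUID
# Finally remove the now-unattached storage domain (may also need a ?host=<some-host> query)
curl -k -u admin@internal:PASSWORD -X DELETE \
  https://engine.example.com/ovirt-engine/api/storagedomains/SD_UUID
Is that roughly the right sequence, or am I missing something about the master role?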
Any insight on the matter would be highly appreciated.
Kind regards,
Justin
Emergency Mode following Disk addition
by simon@justconnect.ie
Hi All,
I added a new local disk and created a new VG, LVM thin pool and LVs for Gluster in a 4.5.3 HCI environment.
Following a reboot the server enters emergency mode because the VG/LVs are inactive. Running 'vgchange -ay <vg>' activates the LVs.
LVM is configured to use the devices file, but disabling this and using a filter instead allows the system to start normally. Unfortunately 'lsblk' shows that the disk is under multipath control.
I’ve tried to exclude this disk from multipath but all efforts have failed.
‘pvscan’ shows the device as /dev/mapper/‘wwid’ instead of /dev/sd*
I've added the WWID to the blacklist manually and restarted multipathd, but it makes no change.
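For completeness, this is roughly what I have been trying (the WWID below is just a placeholder); my understanding is that on a vdsm-managed host local overrides should live in a drop-in under /etc/multipath/conf.d/ rather than in /etc/multipath.conf itself, and that the initramfs probably needs rebuilding so the blacklist also applies at boot, but I may be missing a step:
# Local multipath blacklist in a drop-in file, so vdsm does not overwrite it
cat > /etc/multipath/conf.d/99-local-blacklist.conf <<'EOF'
blacklist {
    wwid "PLACEHOLDER_WWID_OF_THE_NEW_DISK"
}
EOF
# Reload multipathd and flush the existing map for that device
multipathd reconfigure
multipath -f PLACEHOLDER_WWID_OF_THE_NEW_DISK
# Rebuild the initramfs so early boot sees the same blacklist
dracut -f
# Verify the disk is no longer claimed by device-mapper multipath
multipath -ll
lsblk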
Regards
Simon
VDI management on top of Ovirt
by samuel.xhu@horebdata.cn
Hello, Ovirt folks,
What is the most popular VDI management framework for oVirt? After googling, I found adfinis/virtesk, but it is no longer actively maintained.
I would much appreciate it if anyone could point out the best open-source VDI solutions for oVirt.
best regards,
Samuel
Do Right Thing (做正确的事) / Pursue Excellence (追求卓越) / Help Others Succeed (成就他人)
Unable to change engine-config
by James Wadsworth
I wanted to change the default keyboard layout. I ran
# engine-config -s VncKeyboardLayout=it
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Cannot set value it to key VncKeyboardLayout. Valid values are [ar,cz,da,de,de-ch,en-gb,en-us,es,et,fi,fo,fr,fr-be,fr-ca,fr-ch,hr,hu,is,it,ja,lt,lv,mk,nl,no,pl,pt,pt-br,ru,sl,sv,th,tr]
I have tried with other keys, but get the same result:
# engine-config -s ClientModeConsoleDefault=vnc
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Cannot set value vnc to key ClientModeConsoleDefault. Valid values are [vnc,spice]
Am I making a mistake with the syntax or should I open a bug?
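In case it matters, these are the extra checks I was planning to try next (untested; the --cver value below is just a guess, the real version string should come from the -g output):
# Read the current value and the config version it is stored under
engine-config -g VncKeyboardLayout
engine-config -g ClientModeConsoleDefault
# Then retry the set with an explicit config version
engine-config -s VncKeyboardLayout=it --cver=general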
Thanks, James
Unable to reinstall 2nd and 3rd Host after restore HE backup onto 1st host
by James Wadsworth
Hi there,
We are running oVirt 4.5.4 on 3 RHEL 8.7 hosts with a self-hosted engine, also on RHEL 8.7. We are making some changes to our company network, so we followed the instructions here to move the oVirt engine onto new storage with a different IP address: https://access.redhat.com/solutions/6529691
We used one host to redeploy the hosted engine from a backup and managed to get the hosted engine running on the new storage with the new IP address. We then tried to reinstall the two other hosts, but the reinstallation failed with the message:
Host ovirt2.ad.tintolav.com installation failed. Task Restart services failed to execute. Please check logs for more details: /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20230323083407-ovirt2.ad.tintolav.com-d950815b-f1b9-4dcf-b609-3ff1866a6b70.log
In the ovirt-engine host-deploy log we see [ "Traceback (most recent call last):", " File \"/usr/bin/vdsm-tool\", line 18, in <module>", " import vdsm.tool", "ModuleNotFoundError: No module named 'vdsm'" ]
In the vdsm log on the host there are no errors, except for a couple of warnings:
2023-03-23 07:58:22,335+0100 WARN (periodic/1) [throttled] MOM not available. Error: [Errno 2] No such file or directory (throttledlog:87)
2023-03-23 07:58:22,336+0100 WARN (periodic/1) [throttled] MOM not available, KSM stats will be missing. Error: (throttledlog:87)
We have tried a completely new install of ovirt2, but we ended up back at the same point with the same error.
[root@ovirt2 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Thu 2023-03-23 08:35:32 CET; 56min ago
Process: 22942 ExecStart=/usr/libexec/vdsm/daemonAdapter -0 /dev/null -1 /dev/null -2 /dev/null /usr/libexec/vdsm/vdsmd (code=exited, status=0/SUCCESS)
Process: 22876 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 22942 (code=exited, status=0/SUCCESS)
Mar 23 07:58:20 ovirt2.ad.tintolav.com vdsmd_init_common.sh[22876]: vdsm: Running test_space
Mar 23 07:58:20 ovirt2.ad.tintolav.com vdsmd_init_common.sh[22876]: vdsm: Running test_lo
Mar 23 07:58:20 ovirt2.ad.tintolav.com systemd[1]: Started Virtual Desktop Server Manager.
Mar 23 07:58:22 ovirt2.ad.tintolav.com vdsm[22942]: WARN MOM not available. Error: [Errno 2] No such file or directory
Mar 23 07:58:22 ovirt2.ad.tintolav.com vdsm[22942]: WARN MOM not available, KSM stats will be missing. Error:
Mar 23 08:35:32 ovirt2.ad.tintolav.com systemd[1]: Stopping Virtual Desktop Server Manager...
Mar 23 08:35:32 ovirt2.ad.tintolav.com systemd[1]: vdsmd.service: Succeeded.
Mar 23 08:35:32 ovirt2.ad.tintolav.com systemd[1]: Stopped Virtual Desktop Server Manager.
Mar 23 08:35:33 ovirt2.ad.tintolav.com systemd[1]: Dependency failed for Virtual Desktop Server Manager.
Mar 23 08:35:33 ovirt2.ad.tintolav.com systemd[1]: vdsmd.service: Job vdsmd.service/start failed with result 'dependency'.
Mar 23 08:35:33 ovirt2.ad.tintolav.com systemd[1]: vdsmd.service: Job vdsmd.service/start failed with result 'dependency'.
[root@ovirt2 ~]# systemctl start vdsmd
A dependency job for vdsmd.service failed. See 'journalctl -xe' for details.
[root@ovirt2 ~]# journalctl -xe
Mar 23 09:34:35 ovirt2.ad.tintolav.com systemd[1]: Stopped Auxiliary vdsm service for running helper functions as root.
-- Subject: Unit supervdsmd.service has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit supervdsmd.service has finished shutting down.
Mar 23 09:34:35 ovirt2.ad.tintolav.com systemd[1]: Started Auxiliary vdsm service for running helper functions as root.
-- Subject: Unit supervdsmd.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit supervdsmd.service has finished starting up.
--
-- The start-up result is done.
Mar 23 09:34:35 ovirt2.ad.tintolav.com daemonAdapter[28677]: Traceback (most recent call last):
Mar 23 09:34:35 ovirt2.ad.tintolav.com daemonAdapter[28677]: File "/usr/libexec/vdsm/daemonAdapter", line 16, in <module>
Mar 23 09:34:35 ovirt2.ad.tintolav.com daemonAdapter[28677]: from vdsm.config import config
Mar 23 09:34:35 ovirt2.ad.tintolav.com daemonAdapter[28677]: ModuleNotFoundError: No module named 'vdsm'
Mar 23 09:34:35 ovirt2.ad.tintolav.com systemd[1]: supervdsmd.service: Main process exited, code=exited, status=1/FAILURE
Mar 23 09:34:35 ovirt2.ad.tintolav.com systemd[1]: supervdsmd.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit supervdsmd.service has entered the 'failed' state with result 'exit-code'.
Mar 23 09:34:35 ovirt2.ad.tintolav.com systemd[1]: supervdsmd.service: Service RestartSec=100ms expired, scheduling restart.
Mar 23 09:34:35 ovirt2.ad.tintolav.com systemd[1]: supervdsmd.service: Scheduled restart job, restart counter is at 5.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Automatic restarting of the unit supervdsmd.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Mar 23 09:34:35 ovirt2.ad.tintolav.com systemd[1]: Stopped Auxiliary vdsm service for running helper functions as root.
-- Subject: Unit supervdsmd.service has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit supervdsmd.service has finished shutting down.
Mar 23 09:34:35 ovirt2.ad.tintolav.com systemd[1]: supervdsmd.service: Start request repeated too quickly.
Mar 23 09:34:35 ovirt2.ad.tintolav.com systemd[1]: supervdsmd.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit supervdsmd.service has entered the 'failed' state with result 'exit-code'.
Mar 23 09:34:35 ovirt2.ad.tintolav.com systemd[1]: Failed to start Auxiliary vdsm service for running helper functions as root.
-- Subject: Unit supervdsmd.service has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit supervdsmd.service has failed.
--
-- The result is failed.
It gives the same error on the 3rd host.
Is this a Python dependency issue? We are currently using Python 3.9. Does anyone have any ideas on how to get hosts 2 and 3 up and running again in the cluster?
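In case it helps anyone spot the problem, these are the checks we plan to run next on ovirt2; the idea is simply to see whether the vdsm Python packages are installed at all and which interpreter the vdsm entry points resolve to (diagnostics only, nothing conclusive yet):
# Are the vdsm packages (and their Python modules) installed?
rpm -q vdsm vdsm-python vdsm-client
# Where do the vdsm modules live, if anywhere?
ls -d /usr/lib/python3.*/site-packages/vdsm 2>/dev/null
# Which interpreter do the vdsm entry points use?
head -1 /usr/bin/vdsm-tool /usr/libexec/vdsm/daemonAdapter
# What does plain python3 resolve to on this host?
python3 --version
alternatives --display python3 2>/dev/null || true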
Thanks James
Re: Expand the disk space of the hosted engine
by BJ一哥
Hello
I got an error when adding a hard disk to HostedEngine in the web UI:
An error occurred while performing the operation: HostedEngine:
Unable to add virtual disk. The engine is not managing this virtual machine.
Should I use the command line to add disks?
Should it be vdsm-client VM diskSizeExtend or vdsm-client Volume extendSize? I have not tested either of these commands successfully.
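If the second-disk route Matthew describes below is the way to go, I assume the steps inside the HostedEngine VM would look roughly like this (untested; the device name, VG name and LV path are guesses for a stock appliance and should be checked with lsblk/vgs/lvs first):
# Inside the HostedEngine VM, after a new disk (assumed /dev/vdb) has been attached
pvcreate /dev/vdb
vgextend ovirt /dev/vdb                # VG name is a guess; confirm with 'vgs'
lvextend -r -L +20G /dev/ovirt/root    # -r also grows the filesystem; LV path is a guess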
Matthew.Stier(a)fujitsu.com <Matthew.Stier(a)fujitsu.com> wrote on Tue, 3 Jan 2023 at 13:27:
> You allocated a 100G storage domain.
>
> Within that storage domain, you allocated a 50G disk to hold the disk
> image.
>
> Just like any other storage domain, you can create additional disks. I
> have seen warnings for going over 80% allocated within this storage domain.
>
> When building the Self-Hosted-Engine, I specify 75GB when asked about the
> size of the disk to create, and then use LVM to add the unused storage of
> the disk to the root and swap logical volumes.
>
> You should be able to add a second disk of about 25G to the SHE, and use
> LVM to add it to the existing volume groups and expand the existing logical
> volumes.
>
> Of course, I have not tested this.
>
>
>
> -----Original Message-----
> From: ziyi Liu <lzy19930211(a)gmail.com>
> Sent: Thursday, December 29, 2022 9:21 PM
> To: users(a)ovirt.org
> Subject: [ovirt-users] Re: Expand the disk space of the hosting engine
>
> Thank you very much, I know the operation steps, but there is still one
> point that I don't quite understand: fdisk -l only shows 50G while the actual
> disk allocation is 100G, so how can I make the remaining 50G visible?
> The second question is: if the allocated 100G is also full, how should
> I expand it? I can't expand it using the web UI.
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org Privacy Statement:
> https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRKI2S4WLJ2...
>
Update oVirt Node to 4.5.4 fails due to disk full (with urandom)
by Sven Jansen
Hi,
I am trying to update my oVirt Nodes from 4.5.3 to 4.5.4, but it always fails on my hosts. Watching the RPM install process I see that the RPM is building the new boot image/bank and /var/tmp is filling up quickly. Looking deeper into these folders I see a dracut directory created by the update process that contains a urandom file the size of all available space in /var/tmp.
[root@ovnode01 dev]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 504G 4.0K 504G 1% /dev/shm
tmpfs 202G 11M 202G 1% /run
/dev/mapper/onn_ovnode01-ovirt--node--ng--4.5.3--0.20221018.0+1 136G 5.4G 130G 4% /
/dev/mapper/onn_ovnode01-home 1014M 40M 975M 4% /home
/dev/mapper/onn_ovnode01-tmp 1014M 40M 975M 4% /tmp
/dev/mapper/onn_ovnode01-var 5.0G 110M 4.9G 3% /var
/dev/sda2 1014M 577M 438M 57% /boot
/dev/sda1 599M 7.5M 592M 2% /boot/efi
/dev/mapper/onn_ovnode01-var_crash 10G 105M 9.9G 2% /var/crash
/dev/mapper/onn_ovnode01-var_log 8.0G 1.8G 6.3G 22% /var/log
/dev/mapper/onn_ovnode01-var_tmp 10G 10G 44K 100% /var/tmp
/dev/mapper/onn_ovnode01-var_log_audit 2.0G 50M 2.0G 3% /var/log/audit
/dev/loop1 4.2G 4.0G 171M 96% /tmp/tmp.crVxhNRBFq
tmpfs 101G 0 101G 0% /run/user/0
[root@ovnode01 dev]# pwd
/var/tmp/dracut.H2mKd4/initramfs/dev
[root@ovnode01 dev]# ls -lh
total 9.9G
-rw-r--r--. 1 root root 9.9G Mar 20 16:18 urandom
To me it looks like imgbase (or whatever builds these images) copies /dev/urandom into the new image as a regular file, which of course cannot succeed. Does anyone know a workaround for this issue?
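Not a fix, but for anyone trying to reproduce it, this is the cleanup I do between attempts and how I watch the space disappear during the next try (simple diagnostics only):
# Remove the stale dracut working directories left behind by the failed upgrade
rm -rf /var/tmp/dracut.*
# Confirm /var/tmp is back to normal before retrying
df -h /var/tmp
# During the next attempt, watch what is growing inside /var/tmp
watch -n 5 'du -sh /var/tmp/dracut.* 2>/dev/null'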
HostedEngine: Unable to add virtual disk
by ziyi Liu
I want to add a disk to HostedEngine, and the following error occurs when adding the disk in the web UI:
HostedEngine:
Unable to add virtual disk. The engine is not managing this virtual machine.
Does HostedEngine need to use vdsm to add disks?
Is it the following procedure?
1. Set hosted engine maintenance mode to global
2. Shut down HostedEngine
3. Extend the disk with vdsm-client Volume extendSize
4. Start the VM
5. Use fdisk to repartition and expand the filesystem
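Roughly, I imagine the sequence on the host would be something like the sketch below. The hosted-engine commands should exist as written, but the vdsm-client argument names are from memory and the UUIDs are placeholders, so please correct me:
# 1. Global maintenance so the HA agents leave the engine VM alone
hosted-engine --set-maintenance --mode=global
# 2. Shut down the engine VM
hosted-engine --vm-shutdown
hosted-engine --vm-status
# 3. Extend the volume (argument names unverified; UUIDs and size are placeholders, size in bytes)
vdsm-client Volume extendSize storagepoolID=SP_UUID storagedomainID=SD_UUID \
    imageID=IMG_UUID volumeID=VOL_UUID newSize=107374182400
# 4. Start the engine VM again and leave global maintenance
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none
# 5. Inside the VM: grow the partition and filesystem (e.g. growpart + xfs_growfs)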
Deploy hosted-engine by command line failed
by pengCH
Hi,
ovirt-node-ng-installer-4.5.4-2022120615.el8.iso
ovirt-engine-appliance-4.5-20221206133948.1.el8.x86_64.rpm
After deploying the Gluster cluster, installing the hosted engine from the command line prompts the following message: