Can't upgrade ovirt 4.4.3 to 4.4.9
by jihwahn1018@naver.com
Hello,
I tried to upgrade oVirt from 4.4.3 to 4.4.9.
When I run 'engine-setup' I get this error:
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[ ERROR ] Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine-dwh/dbscripts/schema.sh' failed to execute
[ INFO ] DNF Performing DNF transaction rollback
[ ERROR ] DNF module 'dnf.history' has no attribute 'open_history'
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
From the log I found that the column "count_threads_as_cores" does not exist:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2022-01-18 16:36:26,309+0900 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_dwh.db.schema plugin.execute:926 execute-output: ['/usr/share/ovirt-engine-dwh/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'ovirt_engine_history', '-d', 'ovirt_engine_history', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20220118162426-z0o4xg.log', '-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine-dwh/dbscripts/create_views_4_4.sql:148: ERROR: column "count_threads_as_cores" does not exist
LINE 10: count_threads_as_cores as count_threads_as_cores,
         ^
FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine-dwh/dbscripts/create_views_4_4.sql

2022-01-18 16:36:26,309+0900 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-dwh/db/schema.py", line 367, in _misc
    odwhcons.DBEnv.PGPASS_FILE
  File "/usr/lib/python3.6/site-packages/otopi/plugin.py", line 931, in execute
    command=args[0],
RuntimeError: Command '/usr/share/ovirt-engine-dwh/dbscripts/schema.sh' failed to execute
2022-01-18 16:36:26,330+0900 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine-dwh/dbscripts/schema.sh' failed to execute
2022-01-18 16:36:26,330+0900 DEBUG otopi.transaction transaction.abort:119 aborting 'DNF Transaction'
2022-01-18 16:36:26,331+0900 DEBUG otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:75 DNF Closing transaction with rollback
2022-01-18 16:36:26,731+0900 INFO otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:79 DNF Performing DNF transaction rollback
2022-01-18 16:36:27,570+0900 ERROR otopi.plugins.otopi.packagers.dnfpackager dnfpackager.error:84 DNF module 'dnf.history' has no attribute 'open_history'
2022-01-18 16:36:27,571+0900 DEBUG otopi.transaction transaction.abort:125 Unexpected exception from abort() of 'DNF Transaction'
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
When I searched for this problem, I found that the column "count_threads_as_cores" was added in ovirt-engine 4.4.6.7 and ovirt-dwh 4.4.7.
Do I need to upgrade from 4.4.3 to 4.4.6 (or 4.4.7) first, and then from 4.4.6 (or 4.4.7) to 4.4.9?
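For reference, a quick way to check whether the column already exists in the DWH database (assuming the default database name ovirt_engine_history and local access as the postgres user):
su - postgres -c "psql ovirt_engine_history -c \"SELECT table_name FROM information_schema.columns WHERE column_name = 'count_threads_as_cores';\""
If that query returns no rows, the DWH schema is still at the pre-4.4.7 level.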
Thanks,
Jihwan
3 years, 1 month
HostedEngine deployment fails
by Christoph Köhler
Hello,
on CentOS Stream the deployment of HE fails with the following message:
[ INFO ] TASK [ovirt.ovirt.engine_setup : Gather facts on installed packages]
[ INFO ] ok: [localhost -> 192.168.1.239]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Fail when firewall manager is not installed]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Install required packages for oVirt Engine deployment]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> 192.168.1.239]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'ovirt-4.4-centos-ceph-pacific': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "rc": 1, "results": []}
This is because we have to use a proxy. It is configured on the host, but the setup does not pass it into the appliance (the temporary HE VM). So what can be done? How do I tell the setup to use a proxy inside the HE appliance?
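One possible direction (an assumption, not a verified procedure): export the proxy in the shell that runs the deployment and, if the temporary engine VM still cannot reach the mirrors, add the proxy to dnf's configuration inside it, for example:
export http_proxy=http://proxy.example.com:3128     # placeholder proxy address
export https_proxy=http://proxy.example.com:3128
hosted-engine --deploy
# inside the temporary engine VM (once reachable over SSH or console), a dnf-level proxy would be:
echo 'proxy=http://proxy.example.com:3128' >> /etc/dnf/dnf.conf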
Thank you for hints!
Chris
3 years, 1 month
About go-ovirt Library
by Yusuf Papurcu
After v4.3.4 this package cannot be fetched via Go modules. Please check this out.
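If the cause is Go's semantic import versioning rule (an assumption on my part, not confirmed against the repository), tags with major version >= 2 are rejected unless the module path ends in /vN, so the symptom would look roughly like this:
go get github.com/ovirt/go-ovirt@v4.3.4    # last version reported to work
go get github.com/ovirt/go-ovirt@v4.4.0    # hypothetical later tag; fails if go-ovirt's go.mod path has no /v4 suffix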
3 years, 1 month
Re: Failed HostedEngine Deployment
by Strahil Nikolov
yum downgrade qemu-kvm-block-gluster-6.0.0-33.el8s libvirt-daemon-driver-qemu-6.0.0-33.el8s qemu-kvm-common-6.0.0-33.el8s qemu-kvm-hw-usbredir-6.0.0-33.el8s qemu-kvm-ui-opengl-6.0.0-33.el8s qemu-kvm-block-rbd-6.0.0-33.el8s qemu-img-6.0.0-33.el8s qemu-kvm-6.0.0-33.el8s qemu-kvm-block-curl-6.0.0-33.el8s qemu-kvm-block-ssh-6.0.0-33.el8s qemu-kvm-ui-spice-6.0.0-33.el8s ipxe-roms-qemu-6.0.0-33.el8s qemu-kvm-core-6.0.0-33.el8s qemu-kvm-docs-6.0.0-33.el8s qemu-kvm-block-6.0.0-33.el8s
Best Regards,
Strahil Nikolov
On Sun, Jan 23, 2022 at 22:47, Robert Tongue <phunyguy(a)neverserio.us> wrote:
Ahh, I did some repoquery commands and can see that a good bit of the qemu* packages are coming from appstream rather than ovirt-4.4-centos-stream-advanced-virtualization.
What's the recommended fix?
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Sunday, January 23, 2022 3:41 PM
To: users <users(a)ovirt.org>; Robert Tongue <phunyguy(a)neverserio.us>
Subject: Re: [ovirt-users] Failed HostedEngine Deployment
I've seen this.
Ensure that all qemu-related packages are coming from centos-advanced-virtualization repo (6.0.0-33.el8s.x86_64).
There is a known issue with the latest packages in the CentOS Stream.
Also, you can set the following alias on the hypervisors:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Best Regards,
Strahil Nikolov
3 years, 1 month
Major problems after upgrading 2 (of 4) Red Hat hosts to 4.4.10
by David White
I have a Hyperconverged cluster with 4 hosts.
Gluster is replicated across 2 hosts, and a 3rd host is an arbiter node.
The 4th host is compute only.
I updated the compute-only node, as well as the arbiter node, early this morning. I didn't touch either of the actual storage nodes. That said, I forgot to upgrade the engine.
oVirt Manager thinks that all but 1 of the hosts in the cluster are unhealthy. However, all 4 hosts are online. oVirt Manager (Engine) also keeps deactivating at least 1, if not 2 of the 3 (total) bricks behind each volume.
Even though the Engine thinks that only 1 host is healthy, VMs are clearly running on some of the other hosts. However, in troubleshooting, some of the customer VMs were turned off, and oVirt is refusing to start those VMs, because it only recognizes that 1 of the hosts is healthy -- and that host's resources are maxed out.
This afternoon, I went ahead and upgraded (and rebooted) the Engine VM, so it is now up-to-date. Unfortunately, that didn't resolve the issue. So I took one of the "unhealthy" hosts which didn't have any VMs on it (which was the host that is our compute-only server hosting no gluster data), and I used oVirt to "reinstall" the oVirt software. That didn't resolve the issue for that host.
How can I troubleshoot this? I need:
- To figure out why oVirt keeps trying to deactivate volumes
- From the command line, `gluster peer status` shows all nodes connected, and all volumes appear to be healthy (see the sketch after this list)
- More importantly, I need to get these VMs that are currently down back online. Is there a way to somehow force oVirt to launch the VMs on the "unhealthy" nodes?
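As a generic reference, the kind of checks that apply here would look like the sketch below (the volume name is a placeholder; the paths are the standard oVirt/Gluster log locations):
gluster volume status data                      # placeholder volume name; shows which bricks the cluster sees as online
gluster volume heal data info summary           # pending and split-brain heal counts per brick
less /var/log/ovirt-engine/engine.log           # on the engine VM: why hosts/bricks get marked down
less /var/log/vdsm/vdsm.log                     # on each host: monitoring and activation errors
less /var/log/glusterfs/glusterd.log            # on each host: peer/brick level errors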
What logs should I be looking at? Any help would be greatly appreciated.
Sent with ProtonMail Secure Email.
3 years, 1 month
Re: Failed HostedEngine Deployment
by Strahil Nikolov
I've seen this.
Ensure that all qemu-related packages are coming from centos-advanced-virtualization repo (6.0.0-33.el8s.x86_64).
There is a known issue with the latest packages in the CentOS Stream.
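A generic way to check where the installed packages come from (standard dnf usage, not a command taken from this thread):
dnf list installed 'qemu-kvm*' 'qemu-img*'      # the last column shows the repo each package was installed from, e.g. @appstream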
Also, you can set the following alias on the hypervisors:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Best Regards,
Strahil Nikolov
3 years, 1 month
Failed HostedEngine Deployment
by Robert Tongue
Greetings oVirt people,
I am having a problem with the hosted-engine deployment, and unfortunately after a weekend spent trying to get this far, I am finally stuck, and cannot figure out how to fix this.
I am starting with 1 host, and will have 4 when this is finished. Storage is GlusterFS, hyperconverged, but I am managing that myself outside of oVirt. It's a single-node GlusterFS volume, which I will expand out across the other 4 nodes as well. I get all the way through the initial hosted-engine deployment (via the cockpit interface) pre-storage, then get most of the way through the storage portion of it. It fails at starting the HostedEngine VM in its final state after copying the VM disk to shared storage.
This is where it gets weird.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM IP address is while the engine's he_fqdn ovirt.deleted.domain resolves to 192.168.x.x. If you are using DHCP, check your DHCP reservation configuration"}
I've masked out the domain and IP for obvious reasons. However I think this deployment error isn't really the reason for the failure, it's just where it is at when it fails. The HostedEngine VM is starting, but not actually booting. I was able to change the VNC password with `hosted-engine --add-console-password`, and see the local console display with that, however it just displays "The guest has not initialized the display (yet)".
I also did:
# hosted-engine --console
The engine VM is running on this host
Escape character is ^]
Yet that doesn't move any further, nor allow any input. The VM does not respond on the network. I am thinking it's just not making it to the initial BIOS screen and booting at all. What would cause that?
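For anyone debugging a similar hang, a few generic places to look when the HE VM is reported as running but never initializes its display (standard paths, nothing specific to this setup):
hosted-engine --vm-status                              # agent/broker view of the HE VM state
virsh -r list --all                                    # read-only libvirt view: is the domain really running?
less /var/log/libvirt/qemu/HostedEngine.log            # qemu/libvirt errors for the HE domain
less /var/log/ovirt-hosted-engine-setup/*.log          # full deployment logs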
Here is the glusterfs volume for clarity.
# gluster volume info storage
Volume Name: storage
Type: Distribute
Volume ID: e9544310-8890-43e3-b49c-6e8c7472dbbb
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node1:/var/glusterfs/storage/1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.ping-timeout: 5
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1024
cluster.locking-scheme: full
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: disable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1280 V2 @ 3.60GHz
stepping : 9
microcode : 0x21
cpu MHz : 4000.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds
bogomips : 7199.86
clflush size : 64
cache_alignment: 64
address sizes : 36 bits physical, 48 bits virtual
power management:
[ plus 7 more ]
Thanks for any insight that can be provided.
3 years, 1 month
How does the Stream based management engine transition to RHV?
by Thomas Hoberg
In the recent days, I've been trying to validate the transition from CentOS 8 to Alma, Rocky, Oracle and perhaps soon Liberty Linux for existing HCI clusters.
I am using nested virtualization on a VMware Workstation host, because I understand snapshotting and linked clones much better on VMware, even if I've tested nested virtualization to some degree with oVirt as well. It makes moving back and forth between distros and restarting failed oVirt deployments much easier and more reliable than ovirt-hosted-engine-cleanup.
Installing oVirt 4.4.10 on TrueCentOS systems, which had been freshly switched to Alma, Rocky and Oracle, went relatively well, apart from Oracle pushing UEK kernels, which break VDO (and some Python 2 mishaps).
I'm still testing transitioning pre-existing TrueCentOS HCI glusters to Alma, Rocky and Oracle.
While that solves the issue of having the hosts run a mature OS which is downstream of RHEL, there is still an issue with the management engine being based on the upstream Stream release: it doesn't have the vulnerability management baked in, which is required even for lab use in an enterprise.
So I'd like to ask our Red Hat friends here: How does this work when releases of oVirt transition to RHV? Do you backport oVirt changes from Stream to RHEL? When bugs are found in that process, are they then fed back into oVirt or into the oVirt-to-RHV process?
3 years, 1 month
CentOS 8.4 Linux hosts from 4.4.8 to Rocky Linux 4.4.10
by Gianluca Cecchi
Hello,
after updating the external engine from CentOS 8.4 and 4.4.8 to Rocky Linux 8.5 and 4.4.9, as outlined here:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YUDJRC22SQP...
I went further and updated also the hosts.
The environment has an external engine and 3 CentOS Linux 8.4 hosts on 4.4.8, with an iSCSI storage domain.
As a preliminary step I upgraded the engine to 4.4.10 (not yet the just-released async) without problems.
Then, one host at a time:
. put the host into maintenance from the web admin UI (Management --> Maintenance)
. in a terminal on the host, set the proxy for my environment's needs:
export https_proxy=http://my_proxy:my_proxy_port
export http_proxy=http://my_proxy:my_proxy_port (not sure if this is necessary...)
. in the same terminal, execute the migration script:
./migrate2rocky.sh -r
. executed Management --> SSH Management --> SSH Restart from the web admin UI; the host comes up in maintenance mode
. selected Installation --> Check for Upgrade, but the host is detected as already updated
. to be safe and to make sure that all upgrade steps are applied, I executed Installation --> Reinstall, deselecting:
- activate host after install
- reboot host after install
It went OK, so:
. executed Management --> SSH Management --> SSH Restart from the web admin UI; the host comes up in maintenance mode
. Management --> Activate
. emptied another host, moving its VMs to the just-updated host, and continued in the same way, also electing the updated host as the new SPM
All went smoothly and without VM disruption.
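In case it helps, the per-host part of the procedure above condenses to roughly this (proxy values are placeholders for my environment):
export https_proxy=http://my_proxy:my_proxy_port
export http_proxy=http://my_proxy:my_proxy_port
./migrate2rocky.sh -r       # migrate the host from CentOS Linux 8 to Rocky Linux 8
# then, from the web admin UI: SSH Restart --> Check for Upgrade --> Reinstall --> SSH Restart --> Activate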
Let's see how it goes in the next few days with the light workload I have on this testing environment.
Currently the async 1 release of 4.4.10 is not yet picked up by the engine-upgrade-check command. I'm going to retry during the next few days.
Gianluca
3 years, 1 month
mdadm vs. JBOD
by jonas@rabe.ch
Hi,
We are currently building a three-node hyper-converged cluster based on oVirt Node and Gluster. While discussing the different storage layouts, we couldn't reach a final decision.
Currently our servers are equipped as follows:
- servers 1 & 2:
- Two 800GB disks for OS
- 100GB RAID1 used as LVM PV for OS
- Nine 7.68TB disks for Gluster
- 60TB RAID 5 used as LVM PV for Gluster
- server 3
- Two 800GB disks for OS & Gluster
- 100GB RAID 1 used as LVM PV for OS
- 700GB RAID 1 used as LVM PV for Gluster
Unfortunately I couldn't find much information about mdadm on this topic. The hyper-convergence guides ([1], [2]) seem to assume that there is either a hardware RAID in place or that JBOD is used. Is there some documentation available on what to consider when using mdadm? Or would it be more sensible to just use JBOD and then add redundancy at the LVM or Gluster level?
If choosing to go with mdadm, what option should I choose in the bricks wizard screen (RAID 5 or JBOD)?
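For context, the mdadm layout in question would be created with something like the following sketch (device names are hypothetical, and this only illustrates the layout, it is not a recommendation):
mdadm --create /dev/md0 --level=5 --raid-devices=9 /dev/sd[b-j]    # nine 7.68TB disks -> one RAID 5 md device
pvcreate /dev/md0                                                  # the md device becomes the LVM PV for the Gluster bricks
vgcreate gluster_vg /dev/md0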
[1]: https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
[2]: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infr...
3 years, 1 month