Moving interfaces for switch maintenance
by David White
So I have two switches.
All 3 of my HCI oVirt servers are connected to both switches.
1 switch serves the ovirtmgmt network (internal, gluster communication and everything else on that subnet)
The other switch serves the "main" front-end network (Private).
It turns out that my datacenter plugged the switches into the wrong power supplies, and now needs to move them.
My switches have single PSUs, so when we move them, the network will go down.
Obviously, I'm just going to move 1 switch at a time.
What I'm wondering, though, is whether there's an "easy" way to force all the traffic for both networks out a single interface, so that I don't experience any more downtime during this switch maintenance.
Should I assign a temporary additional IP address on each server, in the same subnet as the network whose switch we're going to take down, to the interface that will stay connected?
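If it comes to that, what I had in mind was just temporarily adding a second address from that subnet to the NIC that stays connected on each host, roughly like this (the connection name "eno2" and the 10.0.0.0/24 subnet are placeholders, not my real layout):

# nmcli con mod eno2 +ipv4.addresses 10.0.0.11/24
# nmcli con up eno2

and then removing it again once the switch is back:

# nmcli con mod eno2 -ipv4.addresses 10.0.0.11/24
# nmcli con up eno2

But I don't know whether VDSM would simply revert a change made outside of oVirt, which is why I'm asking.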
Issue upgrading from 4.3 (CentOS 7) to 4.4 (CentOS 8)
by ling@aliko.com
Hello,
I have been trying to upgrade my self-hosted engine from 4.3 to 4.4, but I am running into an issue while performing the hosted-engine deploy.
The old hypervisor hosts are all running CentOS 7, and the old ovirt-engine is also running CentOS 7.
I created a brand new bare-metal node running CentOS 8, kernel 4.18.0-240.15.1.el8_3.x86_64, with the following package versions:
ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
ovirt-hosted-engine-ha-2.4.6-1.el8.noarch
ovirt-engine-appliance-4.4-20210323171213.1.el8.x86_64
python3-ovirt-engine-sdk4-4.4.10-1.el8.x86_64
I have many VLANs in my environment, but on this host I only have these network devices set up (eth0 is the main network, eth1 is for storage):
# nmcli con
NAME UUID TYPE DEVICE
ovirtmgmt 02f64861-d992-4e56-8cec-da1906bac09f bridge ovirtmgmt
System eth1 bd9e565f-bdc3-4e43-bbd3-5875b9d7fed7 ethernet eth1
virbr0 78e6875d-70f6-4c89-89dd-180dbb9250b1 bridge virbr0
eth0 743b0e26-aae7-44b8-9215-3754a537e90b ethernet eth0
vnet0 bcfead6d-c5b6-4428-9f89-41589735be02 tun vnet0
When I run hosted-engine --deploy --restore-from-file=backup_050321.bck, it hangs after showing:
[ INFO ] TASK [ovirt.ovirt.engine_setup : Copy yum configuration file]
[ INFO ] changed: [localhost -> ovirt.safari.apple.com]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Set 'best' to false]
[ INFO ] changed: [localhost -> ovirt.safari.apple.com]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Update all packages]
virsh shows the VM is in a paused state:
# virsh list
Id Name State
----------------------------------
1 HostedEngineLocal paused
I was able to ssh onto the VM until that point.
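I can check things like the following on the host while it sits paused (assuming the default libvirt log locations; the setup log file name will differ on other systems), if that would help diagnose it:

# virsh domstate HostedEngineLocal --reason
# virsh domblkerror HostedEngineLocal
# tail -f /var/log/libvirt/qemu/HostedEngineLocal.log
# tail -f /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log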
Do I need to set up the network connections for all the VLANs before running the deploy script?
And what about the engine storage domain? I have a new NFS mount ready, but it did not ask me which storage domain to use. Will it ask at a later stage?
Thanks.
Something broke & took down multiple VMs for ~20 minutes
by David White
As the subject suggests, something in oVirt HCI broke. I have no idea what, and it recovered on its own after about 20 minutes or so.
I believe that the issue was limited to a single host (although I don't know that for sure), as we had two VMs go completely unresponsive while a 3rd VM remained operational. For a while during the outage, I was able to log into the oVirt admin web portal, and I noticed that at least 1-2 of my hosts (I have 3 hosts) showed those VMs as problematic inside of oVirt.
Reviewing the oVirt Events, I see that this basically started right when the ETL Service started. There were no events before that point since yesterday, but as soon as the ETL Service started, it seems like all hell broke loose.
oVirt detected "No faulty multipaths" on any of the hosts, but then very quickly started indicating that hosts, VMs, and storage targets were unavailable. See my screenshot below.
Around 30 - 35 minutes later, it appears that the Hosted Engine terminated due to a storage issue, and auto recovered on a different host. There's a 2nd screenshot beneath the first.
Everything came back up shortly before 9am, and has been stable since.
In fact, the Volume replication issues that I saw in my environment after I performed maintenance on 1 of my hosts on Friday are no longer present. It appears that the Hosted Engine sees the storage as being perfectly healthy.
How do I even begin to figure out what happened, and try to prevent it from happening again?
[Screenshot from 2021-04-26 16-36-47.png]
[Screenshot from 2021-04-26 16-44-08.png]
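So far the only places I know to look are roughly the following, but I'm not sure what I should actually be looking for (the Gluster volume name below is just an example from my setup):

On the hosted engine VM:
# less /var/log/ovirt-engine/engine.log

On each host:
# less /var/log/vdsm/vdsm.log
# grep -i error /var/log/glusterfs/*.log
# gluster volume heal data info
# journalctl --since "08:00" --until "09:30"

Is there anything else I should be collecting?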
pool list vm assign user
by Dominique D
Is there a way to see which user each VM of a pool is assigned to?
In the portal I am able to see the ones with a logged-in user, but for the other VMs I don't know to whom they are assigned.
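The only idea I have so far is to go through the REST API and look at the permissions on each VM of the pool, something like this (the engine address, password and pool name are placeholders, and I am not sure about the search syntax or whether this is the intended way):

# curl -s -k -u 'admin@internal:PASSWORD' 'https://engine.example.com/ovirt-engine/api/vms?search=pool%3Dmypool'
# curl -s -k -u 'admin@internal:PASSWORD' 'https://engine.example.com/ovirt-engine/api/vms/VM_ID/permissions'

The first call lists the VMs of the pool, and the second should show which user a given VM is assigned to, if I understand the permission model correctly.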
[ANN] Async oVirt Node release for oVirt 4.4.6
by Sandro Bonazzola
On May 10th 2021 the oVirt project released an async update of oVirt Node
(4.4.6.1)
Changes:
- Updated Advanced Virtualization packages
- Updated ovn2.11 and openvswitch2.11
- Updated ansible to 2.9.21 (https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v...)
- Updated libssh (fixes CVE-2020-16135)
Full diff list:
--- ovirt-node-ng-image-4.4.6.manifest-rpm 2021-05-04 17:02:01.839123874 +0200
+++ ovirt-node-ng-image-4.4.6.1.manifest-rpm 2021-05-11 08:39:44.714649170 +0200
@@ -24 +24 @@
-ansible-2.9.20-2.el8.noarch
+ansible-2.9.21-2.el8.noarch
@@ -89 +89 @@
-corosynclib-3.1.0-4.el8.0.1.x86_64
+corosynclib-3.1.0-5.el8.x86_64
@@ -100 +100 @@
-cups-libs-2.2.6-38.el8.x86_64
+cups-libs-2.2.6-39.el8.x86_64
@@ -130,5 +130,5 @@
-dracut-049-135.git20210121.el8.x86_64
-dracut-config-generic-049-135.git20210121.el8.x86_64
-dracut-live-049-135.git20210121.el8.x86_64
-dracut-network-049-135.git20210121.el8.x86_64
-dracut-squash-049-135.git20210121.el8.x86_64
+dracut-049-136.git20210426.el8.x86_64
+dracut-config-generic-049-136.git20210426.el8.x86_64
+dracut-live-049-136.git20210426.el8.x86_64
+dracut-network-049-136.git20210426.el8.x86_64
+dracut-squash-049-136.git20210426.el8.x86_64
@@ -148,36 +148,36 @@
-fence-agents-all-4.2.1-67.el8.x86_64
-fence-agents-amt-ws-4.2.1-67.el8.noarch
-fence-agents-apc-4.2.1-67.el8.noarch
-fence-agents-apc-snmp-4.2.1-67.el8.noarch
-fence-agents-bladecenter-4.2.1-67.el8.noarch
-fence-agents-brocade-4.2.1-67.el8.noarch
-fence-agents-cisco-mds-4.2.1-67.el8.noarch
-fence-agents-cisco-ucs-4.2.1-67.el8.noarch
-fence-agents-common-4.2.1-67.el8.noarch
-fence-agents-compute-4.2.1-67.el8.noarch
-fence-agents-drac5-4.2.1-67.el8.noarch
-fence-agents-eaton-snmp-4.2.1-67.el8.noarch
-fence-agents-emerson-4.2.1-67.el8.noarch
-fence-agents-eps-4.2.1-67.el8.noarch
-fence-agents-heuristics-ping-4.2.1-67.el8.noarch
-fence-agents-hpblade-4.2.1-67.el8.noarch
-fence-agents-ibmblade-4.2.1-67.el8.noarch
-fence-agents-ifmib-4.2.1-67.el8.noarch
-fence-agents-ilo-moonshot-4.2.1-67.el8.noarch
-fence-agents-ilo-mp-4.2.1-67.el8.noarch
-fence-agents-ilo-ssh-4.2.1-67.el8.noarch
-fence-agents-ilo2-4.2.1-67.el8.noarch
-fence-agents-intelmodular-4.2.1-67.el8.noarch
-fence-agents-ipdu-4.2.1-67.el8.noarch
-fence-agents-ipmilan-4.2.1-67.el8.noarch
-fence-agents-kdump-4.2.1-67.el8.x86_64
-fence-agents-mpath-4.2.1-67.el8.noarch
-fence-agents-redfish-4.2.1-67.el8.x86_64
-fence-agents-rhevm-4.2.1-67.el8.noarch
-fence-agents-rsa-4.2.1-67.el8.noarch
-fence-agents-rsb-4.2.1-67.el8.noarch
-fence-agents-sbd-4.2.1-67.el8.noarch
-fence-agents-scsi-4.2.1-67.el8.noarch
-fence-agents-vmware-rest-4.2.1-67.el8.noarch
-fence-agents-vmware-soap-4.2.1-67.el8.noarch
-fence-agents-wti-4.2.1-67.el8.noarch
+fence-agents-all-4.2.1-68.el8.x86_64
+fence-agents-amt-ws-4.2.1-68.el8.noarch
+fence-agents-apc-4.2.1-68.el8.noarch
+fence-agents-apc-snmp-4.2.1-68.el8.noarch
+fence-agents-bladecenter-4.2.1-68.el8.noarch
+fence-agents-brocade-4.2.1-68.el8.noarch
+fence-agents-cisco-mds-4.2.1-68.el8.noarch
+fence-agents-cisco-ucs-4.2.1-68.el8.noarch
+fence-agents-common-4.2.1-68.el8.noarch
+fence-agents-compute-4.2.1-68.el8.noarch
+fence-agents-drac5-4.2.1-68.el8.noarch
+fence-agents-eaton-snmp-4.2.1-68.el8.noarch
+fence-agents-emerson-4.2.1-68.el8.noarch
+fence-agents-eps-4.2.1-68.el8.noarch
+fence-agents-heuristics-ping-4.2.1-68.el8.noarch
+fence-agents-hpblade-4.2.1-68.el8.noarch
+fence-agents-ibmblade-4.2.1-68.el8.noarch
+fence-agents-ifmib-4.2.1-68.el8.noarch
+fence-agents-ilo-moonshot-4.2.1-68.el8.noarch
+fence-agents-ilo-mp-4.2.1-68.el8.noarch
+fence-agents-ilo-ssh-4.2.1-68.el8.noarch
+fence-agents-ilo2-4.2.1-68.el8.noarch
+fence-agents-intelmodular-4.2.1-68.el8.noarch
+fence-agents-ipdu-4.2.1-68.el8.noarch
+fence-agents-ipmilan-4.2.1-68.el8.noarch
+fence-agents-kdump-4.2.1-68.el8.x86_64
+fence-agents-mpath-4.2.1-68.el8.noarch
+fence-agents-redfish-4.2.1-68.el8.x86_64
+fence-agents-rhevm-4.2.1-68.el8.noarch
+fence-agents-rsa-4.2.1-68.el8.noarch
+fence-agents-rsb-4.2.1-68.el8.noarch
+fence-agents-sbd-4.2.1-68.el8.noarch
+fence-agents-scsi-4.2.1-68.el8.noarch
+fence-agents-vmware-rest-4.2.1-68.el8.noarch
+fence-agents-vmware-soap-4.2.1-68.el8.noarch
+fence-agents-wti-4.2.1-68.el8.noarch
@@ -187 +187 @@
-filesystem-3.8-4.el8.x86_64
+filesystem-3.8-3.el8.x86_64
@@ -193 +193 @@
-freetype-2.9.1-5.el8.x86_64
+freetype-2.9.1-4.el8_3.1.x86_64
@@ -199 +199 @@
-fwupd-1.5.5-3.el8.x86_64
+fwupd-1.5.9-1.el8.x86_64
@@ -210 +210 @@
-glib2-2.56.4-10.el8.x86_64
+glib2-2.56.4-11.el8.x86_64
@@ -269,2 +269,2 @@
-iproute-5.9.0-4.el8.x86_64
-iproute-tc-5.9.0-4.el8.x86_64
+iproute-5.12.0-0.el8.x86_64
+iproute-tc-5.12.0-0.el8.x86_64
@@ -317,2 +317,2 @@
-krb5-libs-1.18.2-9.el8.x86_64
-krb5-workstation-1.18.2-9.el8.x86_64
+krb5-libs-1.18.2-10.el8.x86_64
+krb5-workstation-1.18.2-10.el8.x86_64
@@ -381 +381 @@
-libgcc-8.4.1-1.el8.x86_64
+libgcc-8.4.1-2.1.el8.x86_64
@@ -393 +393 @@
-libgomp-8.4.1-1.el8.x86_64
+libgomp-8.4.1-2.1.el8.x86_64
@@ -409 +409 @@
-libkadm5-1.18.2-9.el8.x86_64
+libkadm5-1.18.2-10.el8.x86_64
@@ -469,2 +469,2 @@
-libssh-0.9.4-2.el8.x86_64
-libssh-config-0.9.4-2.el8.noarch
+libssh-0.9.4-3.el8.x86_64
+libssh-config-0.9.4-3.el8.noarch
@@ -476 +476 @@
-libstdc++-8.4.1-1.el8.x86_64
+libstdc++-8.4.1-2.1.el8.x86_64
@@ -498,26 +498,26 @@
-libvirt-7.0.0-9.el8s.x86_64
-libvirt-admin-7.0.0-9.el8s.x86_64
-libvirt-bash-completion-7.0.0-9.el8s.x86_64
-libvirt-client-7.0.0-9.el8s.x86_64
-libvirt-daemon-7.0.0-9.el8s.x86_64
-libvirt-daemon-config-network-7.0.0-9.el8s.x86_64
-libvirt-daemon-config-nwfilter-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-interface-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-network-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-nodedev-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-nwfilter-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-qemu-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-secret-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-core-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-disk-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-gluster-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-iscsi-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-iscsi-direct-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-logical-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-mpath-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-rbd-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-scsi-7.0.0-9.el8s.x86_64
-libvirt-daemon-kvm-7.0.0-9.el8s.x86_64
-libvirt-libs-7.0.0-9.el8s.x86_64
-libvirt-lock-sanlock-7.0.0-9.el8s.x86_64
+libvirt-7.0.0-14.el8s.x86_64
+libvirt-admin-7.0.0-14.el8s.x86_64
+libvirt-bash-completion-7.0.0-14.el8s.x86_64
+libvirt-client-7.0.0-14.el8s.x86_64
+libvirt-daemon-7.0.0-14.el8s.x86_64
+libvirt-daemon-config-network-7.0.0-14.el8s.x86_64
+libvirt-daemon-config-nwfilter-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-interface-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-network-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-nodedev-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-nwfilter-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-qemu-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-secret-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-core-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-disk-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-gluster-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-iscsi-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-iscsi-direct-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-logical-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-mpath-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-rbd-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-scsi-7.0.0-14.el8s.x86_64
+libvirt-daemon-kvm-7.0.0-14.el8s.x86_64
+libvirt-libs-7.0.0-14.el8s.x86_64
+libvirt-lock-sanlock-7.0.0-14.el8s.x86_64
@@ -533 +533,2 @@
-libxcrypt-4.1.1-5.el8.x86_64
+libxcrypt-4.1.1-6.el8.x86_64
+libxkbcommon-0.9.1-1.el8.x86_64
@@ -618,2 +619,2 @@
-openscap-1.3.4-5.el8.x86_64
-openscap-scanner-1.3.4-5.el8.x86_64
+openscap-1.3.5-2.el8.x86_64
+openscap-scanner-1.3.5-2.el8.x86_64
@@ -626 +627 @@
-openvswitch2.11-2.11.0-50.el8.x86_64
+openvswitch2.11-2.11.3-87.el8s.x86_64
@@ -642 +643 @@
-ovirt-node-ng-image-update-placeholder-4.4.6-1.el8.noarch
+ovirt-node-ng-image-update-placeholder-4.4.6.1-1.el8.noarch
@@ -650,2 +651,2 @@
-ovirt-release-host-node-4.4.6-1.el8.noarch
-ovirt-release44-4.4.6-1.el8.noarch
+ovirt-release-host-node-4.4.6.1-1.el8.noarch
+ovirt-release44-4.4.6.1-1.el8.noarch
@@ -654,2 +655,2 @@
-ovn2.11-2.11.1-39.el8.x86_64
-ovn2.11-host-2.11.1-39.el8.x86_64
+ovn2.11-2.11.1-57.el8s.x86_64
+ovn2.11-host-2.11.1-57.el8s.x86_64
@@ -788 +789 @@
-python3-openvswitch2.11-2.11.0-50.el8.x86_64
+python3-openvswitch2.11-2.11.3-87.el8s.x86_64
@@ -828 +829 @@
-python3-subscription-manager-rhsm-1.28.13-2.el8.x86_64
+python3-subscription-manager-rhsm-1.28.16-1.el8.x86_64
@@ -830 +831 @@
-python3-syspurpose-1.28.13-2.el8.x86_64
+python3-syspurpose-1.28.16-1.el8.x86_64
@@ -836,12 +837,14 @@
-qemu-guest-agent-5.2.0-11.el8s.x86_64
-qemu-img-5.2.0-11.el8s.x86_64
-qemu-kvm-5.2.0-11.el8s.x86_64
-qemu-kvm-block-curl-5.2.0-11.el8s.x86_64
-qemu-kvm-block-gluster-5.2.0-11.el8s.x86_64
-qemu-kvm-block-iscsi-5.2.0-11.el8s.x86_64
-qemu-kvm-block-rbd-5.2.0-11.el8s.x86_64
-qemu-kvm-block-ssh-5.2.0-11.el8s.x86_64
-qemu-kvm-common-5.2.0-11.el8s.x86_64
-qemu-kvm-core-5.2.0-11.el8s.x86_64
-quota-4.04-13.el8.x86_64
-quota-nls-4.04-13.el8.noarch
+qemu-guest-agent-5.2.0-16.el8s.x86_64
+qemu-img-5.2.0-16.el8s.x86_64
+qemu-kvm-5.2.0-16.el8s.x86_64
+qemu-kvm-block-curl-5.2.0-16.el8s.x86_64
+qemu-kvm-block-gluster-5.2.0-16.el8s.x86_64
+qemu-kvm-block-iscsi-5.2.0-16.el8s.x86_64
+qemu-kvm-block-rbd-5.2.0-16.el8s.x86_64
+qemu-kvm-block-ssh-5.2.0-16.el8s.x86_64
+qemu-kvm-common-5.2.0-16.el8s.x86_64
+qemu-kvm-core-5.2.0-16.el8s.x86_64
+qemu-kvm-ui-opengl-5.2.0-16.el8s.x86_64
+qemu-kvm-ui-spice-5.2.0-16.el8s.x86_64
+quota-4.04-14.el8.x86_64
+quota-nls-4.04-14.el8.noarch
@@ -889 +892 @@
-sos-4.0-11.el8.noarch
+sos-4.1-1.el8.noarch
@@ -903 +906 @@
-subscription-manager-rhsm-certificates-1.28.13-2.el8.x86_64
+subscription-manager-rhsm-certificates-1.28.16-1.el8.x86_64
@@ -960,0 +964 @@
+xkeyboard-config-2.28-1.el8.noarch
@@ -963,0 +968,2 @@
+xmlsec1-1.2.25-4.el8.x86_64
+xmlsec1-openssl-1.2.25-4.el8.x86_64
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
Re: Changing the ovirtmgmt IP address
by Matthew.Stier@fujitsu.com
Found the answer:
Update /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt and reboot.
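For reference, that file is a small JSON blob; what I changed, roughly (the key names are from my 4.3.10 host and may differ on other versions; I backed the file up first):

# cp /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt /root/ovirtmgmt.netconf.bak
# vi /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt    (edit the "ipaddr", "netmask" and "gateway" values)
# reboot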
Changing the ovirtmgmt IP address
by Matthew.Stier@fujitsu.com
Version: 4.3.10
I'm attempting to change the IP address, netmask and gateway of the ovirtmgmt NIC of a host, but every time I reboot the host, the old address/netmask/gateway re-assert themselves.
Where do I need to make the changes, so they will be permanent?
I've modified /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt and route-ovirtmgmt, but they don't stick through a reboot.
Changing 1 node Gluster Distributed to replica
by Ernest Clyde Chua
Good day,
Currently we have a single-node host that also runs Gluster as a one-node distributed volume, and we recently decided to upgrade to a three-node setup, with Gluster running on all three hosts and a replica count of 3.
Can someone help me safely change the volume type to replicated?
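From the documentation it looks like the conversion itself would be something along these lines once the new hosts are peered and their bricks are prepared (host names and brick paths below are just examples), but I would like confirmation that this is safe to do with oVirt running on top:

# gluster peer probe host2
# gluster peer probe host3
# gluster volume add-brick myvol replica 3 host2:/gluster_bricks/myvol/brick host3:/gluster_bricks/myvol/brick
# gluster volume heal myvol full
# gluster volume heal myvol info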
Sharding decision for oVirt
by levin@mydream.com.hk
Description of problem:
Intermittent VM pause and Qcow image corruption after add new bricks.
I'm suffered an issue on image corruption on oVirt 4.3 caused by default gluster ovirt profile, and intermittent VM pause. the problem is similar to #2246 #2254 in glusterfs issue and VM pause issue report in ovirt user group. The gluster vol did not have pending heal object, vol appear in good shape, xfs is healthy, no hardware issue. Sadly few VM have mystery corruption after new bricks added.
Afterwards, I try to simulate the problem with or without "cluster.lookup-optimize off" few time, but the problem is not 100% reproducible with lookup-optimize on, I got 1 of 3 attempt that able to reproduce it. It really depend on the workloads and cache status at that moment and the number of object after rebalance as well.
Also I tried to disable all sharding features, it ran very solid, write performance increase by far, no corruption, no VM pause when the gluster under stress.
So, here is a decision question on shard or not shard.
IMO, even recommendation document saying it break large file into smaller chunk that allow healing to complete faster, a larger file can spread over multiple bricks. But there are uncovered issue compared to full large file in this case, I'd like to further deep dive into the reason why recommend shard as default for oVirt? Especially from the reliability and performance perspective, sharding seems losing this end for ovirt/kvm workloads. Is it more appropriate to just tell ovirt user to ensure underlying single bricks shall be large enough to hold the largest chunk instead? Besides, anything i'm overlooked for the shard setting? I'm really doubt to enable sharding on the volume after disaster.
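For reference, the options I was toggling during these tests were roughly the following (the volume name is just an example; as far as I understand, features.shard must never be turned off on a volume that already contains sharded data, so I only compared it on freshly created volumes):

# gluster volume get myvol cluster.lookup-optimize
# gluster volume get myvol features.shard
# gluster volume set myvol cluster.lookup-optimize off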