Storage performance comparison (gluster vs FC)
by Rik Theys
Hi,
We currently use oVirt on two hosts that connect to a shared storage
using SAS. In oVirt this is an "FC" storage domain. Since the warranty
on the storage box is ending, we are looking at alternatives.
One of the options would be to use gluster and use a "hyperconverged"
setup where compute and gluster are on the same hosts. We would probably
end up with 3 hosts and a "replica 3 arbiter 1" gluster volume. (Or is
another volume type recommended for this kind of setup?)
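To make the layout concrete, this is roughly the volume I have in mind; the
host names and brick paths below are just placeholders, not our real setup:

  # three bricks, the third one acting as arbiter (metadata only)
  gluster volume create vmstore replica 3 arbiter 1 \
    host1:/gluster_bricks/vmstore/vmstore \
    host2:/gluster_bricks/vmstore/vmstore \
    host3:/gluster_bricks/vmstore/vmstore
  gluster volume start vmstore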
I was wondering what the expected performance would be of this type of
setup compared to shared storage over FC. I expect the I/O latency of
gluster to be much higher than that of the SAS-connected storage
box. Has anybody compared these storage setups?
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
oVirt 4.3.8 spice console not working after vm migration
by Gianluca Cecchi
Hello,
I'm experiencing problems with the spice console session, which remains stuck
and not working after VM migration.
Tried with 7.6 and 8.1 based virtual machines.
The client is an updated Fedora 31 system.
Using the lsof -Pp <pid> command I don't see any TCP session opened.
Doing the same when the session is connected before migration:
remote-vi 8013 g.cecchi 21u IPv4 243582 0t0 TCP localhost.localdomain:35218->10.4.192.32:5902 (ESTABLISHED)
remote-vi 8013 g.cecchi 22u IPv4 248960 0t0 TCP localhost.localdomain:35226->10.4.192.32:5902 (ESTABLISHED)
remote-vi 8013 g.cecchi 23u IPv4 248961 0t0 TCP localhost.localdomain:35228->10.4.192.32:5902 (ESTABLISHED)
remote-vi 8013 g.cecchi 24u IPv4 248962 0t0 TCP localhost.localdomain:35230->10.4.192.32:5902 (ESTABLISHED)
Right after migration I see only the CLOSE_WAIT of the previous connection,
but not a new one with the IP of the destination host:
remote-vi 8013 g.cecchi 21u IPv4 243582 0t0 TCP localhost.localdomain:35218->10.4.192.32:5902 (CLOSE_WAIT)
And after a few seconds none.
Also, closing the blocked spice window doesn't work: it turns gray with
"Waiting for display 1..." in the middle and I have to kill -9 the
remote-viewer process.
Strangely, the reverse works: after force-closing the spice console,
reconnecting to the console and migrating the VM back, the spice console
session keeps working...
The hosts have the same versions of components:
OS Version: RHEL - 7 - 7.1908.0.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 1062.12.1.el7.x86_64
KVM Version: 2.12.0 - 33.1.el7_7.4
LIBVIRT Version: libvirt-4.5.0-23.el7_7.5
VDSM Version: vdsm-4.30.40-1.el7
SPICE Version: 0.14.0 - 7.el7
GlusterFS Version: [N/A]
CEPH Version: librbd1-10.2.5-4.el7
Open vSwitch Version: openvswitch-2.11.0-4.el7
Kernel Features: PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
VNC Encryption: Disabled
What else can I check?
On Fedora I have virt-viewer-8.0-3.fc31.x86_64
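As a next step I was thinking of checking something like this on the
destination host right after the migration ("myvm" is just a placeholder
for the real VM name):

  virsh -r domdisplay myvm                       # current graphics URI (spice://host:port)
  ss -tlnp | grep qemu-kvm                       # is the spice port actually listening?
  grep -i spice /var/log/libvirt/qemu/myvm.log   # any spice errors logged by qemu?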
Gianluca
Re: Damaged hard disk in volume replica gluster
by Strahil
As you are replacing an old brick, you have to recreate the old LV and mount it on the same location.
Then you can use gluster's "reset-brick" (I think the oVirt UI has that option too) and all data will be replicated there.
You also have the "replace-brick" option if you decide to change the mount location.
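For example, the rough sequence is something like this (volume name, host and brick path are placeholders, adjust to your setup):

  # take the dead brick offline
  gluster volume reset-brick myvol host3:/gluster_bricks/data/brick start
  # (recreate the LV and mount it at the same brick path here)
  gluster volume reset-brick myvol host3:/gluster_bricks/data/brick \
      host3:/gluster_bricks/data/brick commit force
  # then check that self-heal starts filling the new brick
  gluster volume heal myvol info summary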
P.S.: With replica volumes your volume should still be working; if it has stopped, you have to investigate before proceeding.
Best Regards,
Strahil Nikolov

On Oct 12, 2019 13:12, matteo fedeli <matmilan97(a)gmail.com> wrote:
>
> Hi, I have in my oVirt HCI a volume that does not work properly because one of its three HDDs failed. I bought a new HDD and recreated the old LVM partitioning and mount point. Now, how can I attach this empty new brick?
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/U5XQ4M2K5RB...
adding a second subnet external provider oVirt 4.3.8
by eevans@digitaldatatechs.com
I'm a bit confused about adding a second network provider. If I understand it correctly, I have to add it on a host and then import it? Is that correct, or am I misreading it?
Please enlighten me.
Thanks.
HostedEngine Upgrade and Rollback
by jrbdeguzman05@gmail.com
Hi Everyone,
Our HostedEngine was previously running version 4.3.7 and we then updated it to 4.3.8. Before the update, we did a full backup of the HE.
After the successful update, we're testing rolling it back to the previous backup. The cluster setting has been rolled back but the HostedEngine version is still 4.3.8.
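For reference, the backup and restore were done roughly like this (file names are just examples):

  # on the HostedEngine VM, before the update
  engine-backup --mode=backup --scope=all --file=engine-437.bck --log=engine-437-backup.log
  # and later, when trying to roll back
  engine-backup --mode=restore --file=engine-437.bck --log=engine-437-restore.log --restore-permissions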
Is there a recommended way to roll back a HostedEngine after a successful minor update?
I've read somewhere that if an upgrade / update is unsuccessful, HostedEngine will be rolled back to the previous version automatically.
Thanks in advance.
Ovirt ansible snapshots, specify disk to snap
by dwakefi2@gmu.edu
Is there a way to specify only certain disks to snapshot using the Ansible playbooks? We have servers with hostname_disk0 and hostname_disk1, and I just want to create snapshots of the OS disk, which is always hostname_disk0.
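For context: if I read the REST API docs correctly, a snapshot request can list specific disks via disk_attachments, roughly like the sketch below (engine URL, credentials and UUIDs are placeholders). I'd just like to do the equivalent from the playbooks:

  curl -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
       -d '<snapshot>
             <description>OS disk only</description>
             <disk_attachments>
               <disk_attachment><disk id="DISK0_UUID"/></disk_attachment>
             </disk_attachments>
           </snapshot>' \
       "https://engine.example.com/ovirt-engine/api/vms/VM_UUID/snapshots"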
Thanks in advance.
Tom
oVirt ansible backup improvements
by Jayme
If anyone has been following along, I had previously shared a blog post and
GitHub repo regarding my unofficial solution for backing up oVirt VMs using
Ansible.
Martin Necas reached out to me and we collaborated on some great
improvements. Namely, it is now possible to run the playbook from any host
without requiring direct access to storage (which I was previously using
for export status verification). There were several other improvements and
cleanups made as well.
The changes have been merged in and the README updated; you can find the
project here: https://github.com/silverorange/ovirt_ansible_backup
Big thanks to Martin for helping out. Very much appreciated!
- Jayme