recovering gluster volumes when moving hosts to a new datacenter
by Nathanaël Blanchet
Hi,
I moved 3 hosts (for a replica 3 volume) from a previous datacenter to a
new datacenter.
Those hosts had initially been configured in a gluster cluster and
managed multiple gluster volumes created with the oVirt UI.
These hosts are now part of a new gluster cluster, and they see each
other via the CLI ("gluster peer status") as well as the associated volumes
("gluster volume list").
But I can't make the gluster volumes visible in the usual "Volumes" tab, so
oVirt doesn't seem to see them.
Is there a way to import pre-existing gluster volumes so that oVirt can
manage them?
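For what it's worth, a sketch of the CLI-side checks worth doing first. The gluster commands are standard; whether oVirt then picks the volumes up depends on the new cluster having the Gluster service enabled, which is my assumption here:

```shell
# Verify the storage layer is healthy before expecting the engine to sync.
# Guarded so the script also runs on a machine without the gluster CLI.
if command -v gluster >/dev/null 2>&1; then
    gluster peer status          # all peers should show "Connected"
    gluster volume list          # volumes the new cluster knows about
    gluster volume info          # brick paths and options per volume
    GLUSTER_CHECKED=yes
else
    echo "gluster CLI not found; run this on one of the hosts"
    GLUSTER_CHECKED=no
fi
```

If the CLI side looks healthy but the Volumes tab stays empty, my next step would be to check that the new cluster in the oVirt UI has "Enable Gluster Service" ticked and that vdsm-gluster is installed on the hosts — both names are from memory, so treat them as assumptions.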
Thanks.
--
Nathanaël Blanchet
Network supervision
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tel. +33 (0)4 67 54 84 55
Fax +33 (0)4 67 54 84 14
blanchet(a)abes.fr
3 years, 9 months
Windows 10 on older CPUs
by Chris Adams
I have an oVirt cluster running 4.3.10 on older hardware with a cluster
CPU type of Intel Nehalem. I needed to install a Windows 10 (latest
release) VM, which oVirt rejected because of the CPU model. I remember
there being issues with certain versions of Windows and this CPU model
under oVirt (I don't do Windows very often and had forgotten).
I instead told oVirt I was installing Windows 8 (which it accepted), and
then installed - it completed and appears to be working just fine. I
don't know if Microsoft changed whatever used to break or what... I
remember the problem before prevented even getting through the install, so
it seems to be okay now.
Any issues I should be worried about?
--
Chris Adams <cma(a)cmadams.net>
3 years, 9 months
ovirt node ng 4.4 crash in anaconda when setting ntp
by Gianluca Cecchi
Hello,
when installing oVirt Node NG 4.4 on a Dell M620, I hit a crash in
anaconda when I tried to set up NTP.
I was using ovirt-node-ng-installer-4.4.4-2020122111.el8.iso.
As soon as I select it and try to type the hostname to use, everything
stops and then anaconda aborts.
Just today I had the same with the latest RHVH 4.4 iso:
RHVH-4.4-20201210.0-RHVH-x86_64-dvd1.iso on an R630.
Quite disappointing, because I also have to fight with iDRAC8 to install the
OS: it is painfully slow.
In practice I waste about an hour.
Is anyone aware of it or able to reproduce on another platform so that
eventually I'm going to open a bug/case for it?
My config is the default one in anaconda, accepting defaults and creating a
bonded connection (LACP).
Then, as the last step, I go into the time/date settings, click on the NTP
button, and I get the problem as soon as I try to type inside the text box.
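As a hypothetical workaround until this is fixed: skip the NTP screen in anaconda entirely and point chrony at your server after the first boot. The server name below is an example, not something from the installer:

```shell
# Post-install NTP setup sketch (chrony is the default time daemon on EL8).
NTP_SERVER="pool.ntp.org"                 # assumption: replace with your server
CHRONY_LINE="server ${NTP_SERVER} iburst"
echo "$CHRONY_LINE"
# On a real host you would append that line to /etc/chrony.conf and then:
#   systemctl restart chronyd
#   chronyc sources
```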
Thanks,
Gianluca
3 years, 9 months
LVM Filesystem not mount VMStore
by marcel@deheureu.se
Hi,
We got the following error message from our 3-server setup this morning.
Two of the servers were switched off by a "user": simply powered off.
One of the servers started up fine, but the other one shows the following
message.
What should we do first? We can't mount the vmstore manually, and the system
takes many minutes to boot. Does anyone have an idea?
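Without the actual message it's hard to say, but here is a cautious first-response sketch; the mount path is an assumption (on hyperconverged setups the vmstore brick usually sits on an LVM LV), so adjust names to your layout:

```shell
# 1. Check whether LVM sees the devices at all -- after an unclean power-off,
#    a too-strict lvm filter is a common reason an LV never appears:
#      lvs -a -o +devices
# 2. If the VG is visible but inactive, activate it:
#      vgchange -ay <vg_name>
# 3. Retry the mount and read the journal for the matching mount unit.
#    The lines below only compute that unit's name for the journalctl call:
MOUNTPOINT=/gluster_bricks/vmstore          # assumption: adjust to your path
UNIT="$(echo "$MOUNTPOINT" | sed 's|^/||; s|/|-|g').mount"
echo "journalctl -b -u $UNIT"
```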
Br
Marcel
3 years, 9 months
[ANN] oVirt 4.4.5 Third Release Candidate is now available for testing
by Sandro Bonazzola
oVirt 4.4.5 Third Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.5
Third Release Candidate for testing, as of January 29th, 2021.
This update is the fifth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These
are only required to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enters emergency mode after upgrading to latest build:
If your root file system is on a multipath device on your hosts, be aware
that after upgrading from 4.4.1 to 4.4.5 your host may enter emergency mode.
To prevent this, be sure to upgrade the oVirt Engine first, then on
your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
(if rebooted).
2. Reboot.
3. Upgrade to 4.4.5 (redeploy in case of already being on 4.4.5).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
rebuild the initramfs with the correct filter configuration.
6. Reboot.
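The steps above, condensed into one host-side sketch. vdsm-tool and dracut are the real tools named in the announcement; everything else here just restates the sequence, so don't run it blindly:

```shell
# Per-host sequence after the engine is upgraded (run as root):
#   1-2. while still on 4.4.1 (or from emergency mode), remove the old lvm
#        filter (e.g. in /etc/lvm/lvm.conf), then: reboot
#   3.   upgrade the host to 4.4.5
#   4.   confirm the regenerated filter:
#          vdsm-tool config-lvm-filter
#   5.   non-Node hosts only, rebuild the initramfs with multipath:
#          dracut --force --add multipath
#   6.   reboot
# Spelling check of the dracut invocation quoted above:
DRACUT_CMD="dracut --force --add multipath"
echo "$DRACUT_CMD"
```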
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
<https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
<https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
* oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
- We found a few issues while testing on CentOS Stream so we are still
basing oVirt 4.4.5 Node and Appliance on CentOS Linux.
Additional Resources:
* Read more about the oVirt 4.4.5 release highlights:
http://www.ovirt.org/release/4.4.5/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.5/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
3 years, 9 months
How to reset network on all cluster members (controller and host) to defaults for cleans start?
by David Johnson
I have a new oVirt 4.4.5 cluster consisting of a controller and a single
host, both running on CentOS 8. I believe I have messed up my network
configurations by using a mix of oVirt and nmcli functions to try to
resolve problems.
Is there an easy way to reset everything on the controller and host
machines to get a clean start? Or do I need to start from scratch and
re-install CentOS?
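One hedged starting point before a full reinstall: vdsm keeps its own persisted network configuration on the host and `vdsm-tool` can reapply it. Whether that fully undoes manual nmcli changes depends on what got persisted, so treat this as a sketch:

```shell
# On the host (not the engine), vdsm can restore its last persisted networks:
RESTORE_CMD="vdsm-tool restore-nets"
echo "$RESTORE_CMD"
# If that isn't enough, removing the host from the engine, cleaning it up
# and re-adding it usually rebuilds the host networking from scratch --
# still less drastic than reinstalling CentOS.
```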
Thank you in advance.
David Johnson
3 years, 9 months
Gluster volume slower then raid1 zpool speed
by Harry O
Hi,
Can anyone help me with the performance of my 3-node gluster on ZFS (it is set up with one arbiter)?
The write performance of the single VM I have on it (with the engine) is 50% worse than a single bare-metal disk.
I have enabled "Optimize for virt store".
I run a 1 Gbps, 1500 MTU network; could this be the write-performance killer?
Is this to be expected from a 2x HDD ZFS RAID 1 on each node, with a 3-node arbiter setup?
Maybe I should move to RAID 5 or 6?
Maybe I should add an SSD cache to the RAID 1 ZFS zpools?
What are your thoughts? What would you do to optimize this setup?
I would like to run ZFS with gluster, and I can deal with a little performance loss, but not that much.
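One thing worth checking first: with a replica 2 + arbiter volume the client sends the full data to two bricks, so on a 1 Gbps link (~117 MB/s) the realistic write ceiling is already around 55-60 MB/s before ZFS enters the picture. A quick apples-to-apples check is to run the same fsync'd sequential write on the brick path and inside the VM and compare (sizes and paths below are examples):

```shell
# Same test, run once on a brick mount and once inside the VM; compare MB/s.
# conv=fsync makes dd flush before reporting, so the numbers are comparable.
TESTFILE="$(mktemp)"
dd if=/dev/zero of="$TESTFILE" bs=1M count=32 conv=fsync 2>&1 | tail -n 1
rm -f "$TESTFILE"
BENCH_DONE=yes
```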
3 years, 9 months
vGPU on ovirt 4.3
by kim.kargaard@noroff.no
Hi,
We are looking at getting some GPUs for our servers and using vGPU passthrough so that our students can do video renders on the VMs. Does anyone have good experience with the Nvidia Quadro RTX6000 or RTX8000 and oVirt 4.3?
Thanks.
Kim
3 years, 9 months
Re: VM templates
by Strahil Nikolov
If your system will have only Gluster, you can blacklist everything in multipath. But imagine you have gluster and also SAN or iSCSI storage: then you need to blacklist only the local disks that Gluster uses, but not the SAN.
Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android
On Wed, Jan 27, 2021 at 16:28, Robert Tongue <phunyguy(a)neverserio.us> wrote:
Ahh, OK, I didn't realize that. I appreciate it, and was hoping to get good feedback like this when I posted what I did. Will make the changes.
What is meant by "gluster only machine" here?
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Wednesday, January 27, 2021 7:11 AM
To: Robert Tongue <phunyguy(a)neverserio.us>; users <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: VM templates

You should create a file like mine, because vdsm manages /etc/multipath.conf:

# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
    devnode "*"
    wwid nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-00000001
    wwid TOSHIBA-TR200_Z7KB600SK46S
    wwid ST500NM0011_Z1M00LM7
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189
    wwid WDC_WD15EADS-00P8B0_WD-WMAVU0885453
    wwid WDC_WD5003ABYZ-011FA0_WD-WMAYP0F35PJ4
}

Keep in mind 'devnode "*"' is OK only for a gluster-only machine.
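For readers building a blacklist like the one above, a sketch for collecting the wwid values: `scsi_id` ships with udev on EL8 at the path used here (an assumption elsewhere), and the loop is guarded so it degrades gracefully where the tool or the disks are absent:

```shell
SCSI_ID=/usr/lib/udev/scsi_id                # EL8 location; adjust elsewhere
if [ -x "$SCSI_ID" ]; then
    for dev in /dev/sd?; do
        [ -b "$dev" ] || continue
        printf '%s ' "$dev"
        "$SCSI_ID" --whitelisted --device="$dev"   # prints the wwid
    done
    HAVE_SCSI_ID=yes
else
    HAVE_SCSI_ID=no
fi
```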
Best Regards,
Strahil Nikolov
On Wed, Jan 27, 2021 at 6:02, Robert Tongue <phunyguy(a)neverserio.us> wrote:
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/YD7ROMATPWF...
3 years, 9 months