Re: oVirt 4.4 / CentOS 8 issue with NFS?
by Lee Hanel
sigh.
Finally got back around to this. Solved it by creating a subdirectory under the actual share and mounting that.
Thanks for the help,
Lee
4 years, 2 months
Q: Hybrid GlusterFS / local storage setup?
by Gilboa Davara
Hello all,
I'm thinking about converting a couple of old dual Xeon V2
workstations into (yet another) oVirt setup.
However, the use case for this cluster is somewhat different:
While I do want most of the VMs to be highly available (via a 2+1 GFS
storage domain), I'd also want to pin at least one "desktop" VM to each
host (possibly with vGPU) and let this VM access the local storage
directly in order to get near bare-metal performance.
Now, I am aware that I can simply share an LVM LV over NFS on localhost
and pin a specific VM to each specific host, and the performance will
be acceptable, but I seem to remember that there's a POSIX-FS storage
domain type that, at least in theory, should be able to give me per-host
private storage.
A. Am I barking up the wrong tree here? Is this setup even possible?
B. If it is even possible, any documentation / pointers on setting up
per-host private storage?
I should mention that these workstations are quite beefy (64-128GB
RAM, large MDRAID, SSD, etc.), so I can spare memory / storage space (I
can even split the local storage and GFS onto different arrays).
- Gilboa
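For reference, a POSIX-FS data domain can be registered through the Python SDK (ovirtsdk4) roughly as sketched below, here pointing at a local block device that only one host can mount (the device path, VFS type, host name and domain name are all placeholders). The domain still has to be attached to the data center afterwards (via the data center's attached storage domains service), at which point the other hosts will also try to activate it, so whether this really behaves as per-host private storage is exactly the open question:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Engine URL and credentials are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

sds_service = connection.system_service().storage_domains_service()

# Register a POSIX-compliant FS data domain backed by a local LV on one host.
sds_service.add(
    types.StorageDomain(
        name='local-host1',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='host1'),
        storage=types.HostStorage(
            type=types.StorageType.POSIXFS,
            path='/dev/vg_local/lv_vmstore',  # local LV, placeholder
            vfs_type='xfs',
        ),
    ),
)

connection.close()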
4 years, 2 months
multiple new networks
by kim.kargaard@noroff.no
Hi all,
We have oVirt 4.3, with 11 hosts, and need a bunch of VLANs so our students can be isolated and do specific things. We have created the VLANs on the switch, but we need to create them in the admin portal, with VLAN tagging, and then add them to the interface on the hosts. We are talking about 400 VLANs. I have done this manually for 4 VLANs and it all works fine, but I was wondering if there is a way of doing this in one go for all of them, so I don't have to do it 400 times (at least for creating the VLANs in the admin portal).
Thanks.
Kim
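For bulk creation on this scale, the REST API or the Python SDK (ovirtsdk4) can loop over the whole VLAN range instead of the admin portal. A minimal sketch, assuming an engine at engine.example.com, a data center and cluster both named "Default", and VLAN IDs 100-499 (all placeholders to adapt):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (URL and credentials are placeholders).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

networks_service = connection.system_service().networks_service()
clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]
cluster_networks = clusters_service.cluster_service(cluster.id).networks_service()

for vlan_id in range(100, 500):  # 400 VLANs; the ID range is an example
    # Create the tagged logical network in the data center.
    network = networks_service.add(
        types.Network(
            name='vlan{}'.format(vlan_id),
            data_center=types.DataCenter(name='Default'),
            vlan=types.Vlan(id=vlan_id),
            usages=[types.NetworkUsage.VM],
        ),
    )
    # Assign it to the cluster as a non-required network.
    cluster_networks.add(types.Network(id=network.id, required=False))

connection.close()

Attaching the new networks to a host NIC can then be scripted per host with HostService.setup_networks() or the oVirt Ansible modules, rather than editing all 11 hosts by hand in the portal.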
4 years, 2 months
Hyperconverged Gluster Deployment in cockpit with zfs
by harryo.dk@gmail.com
Hi,
When I want to use ZFS for software RAID on my oVirt nodes instead of a hardware RAID controller, I don't know what to type in "Device Name". I don't know if this step should be skipped for ZFS RAID, I don't know the location of my ZFS vdev, or whether there is anything else I should input. If I were to set this up via the CLI, no "Device Name" is needed in the process, so why is it needed in the Hyperconverged Gluster Deployment in cockpit? There are plenty of guides for Gluster on top of ZFS online, but the process differs because of the "Device Name".
4 years, 2 months
Remove VM that failed being built from template
by Matthew.Stier@fujitsu.com
oVirt 4.3.6
Attempted to build 10 VMs from a template, concurrently, and apparently overloaded the 1Gb link to the iSCSI storage. I now have four VMs hung in the Locked state in the display, and no means to cancel or remove them.
They all have 'Failed to complete VM **** creation.' log entries under Events.
Any suggestions on how to address?
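If the failed creations never clear on their own, locked entities usually have to be released on the engine side; /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh is the usual tool for that. A small sketch with the Python SDK (ovirtsdk4), just to identify which VMs are stuck in the image-locked state (engine URL and credentials are placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()

# List every VM whose images are still locked by the failed creation.
for vm in vms_service.list():
    if vm.status == types.VmStatus.IMAGE_LOCKED:
        print(vm.name, vm.id)

connection.close()

Once unlocked, the leftover VMs can normally be removed from the portal or with VmService.remove().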
4 years, 2 months
Need assistance in migrating between iSCSI arrays.
by Matthew.Stier@fujitsu.com
To get a project going, I used an iSCSI SAN array of limited storage while waiting to obtain, and make operational, another iSCSI SAN with considerably more storage.
I've attached the new iSCSI array to my cluster of hosts, attached a number of LUNs as additional storage domains, and moved my collection of ISO and disk images to the new storage. I also detached and removed the old iSCSI storage domains I no longer need. I'm now down to the last two storage domains: the hosted_engine domain, and the last DATA domain, which holds the templates.
Now the hard part reared its head.
I need to completely detach the old iSCSI SAN array, which means I need to migrate the hosted-engine, and my small collection of templates, which I created on the old iSCSI array, to storage domains on the new iSCSI array.
I'm looking for recommendations on how to do both.
As to the details, I am running Oracle Linux Virtualization Manager 4.3.6, in hosted-engine mode, which means I am running oVirt 4.3.6, in hosted-engine mode.
Note: I have configured the first three hosts in the cluster to support hosted-engine, so the hosted-engine can be moved around, for system maintenance.
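For the hosted-engine domain, the usual route on 4.3 is an engine-backup of the engine followed by redeploying the hosted engine onto the new storage with hosted-engine --deploy --restore-from-file. The templates' disks can be copied to the new domain while the templates themselves stay registered. A rough sketch of the disk copy with the Python SDK (ovirtsdk4), assuming a template named "mytemplate" and a target domain named "new-data" (both placeholders); treat it as a starting point rather than a verified procedure:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

system = connection.system_service()
templates_service = system.templates_service()
disks_service = system.disks_service()

template = templates_service.list(search='name=mytemplate')[0]
attachments = templates_service.template_service(template.id).disk_attachments_service().list()

# Copy each template disk to the new storage domain; the copy on the old
# array can be deleted once the old SAN is retired.
for att in attachments:
    disks_service.disk_service(att.disk.id).copy(
        storage_domain=types.StorageDomain(name='new-data'),
    )

connection.close()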
4 years, 2 months
[ANN] oVirt 4.4.3 Fifth Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.3 Fifth Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.3
Fifth Release Candidate for testing, as of October 16th, 2020.
This update is the third in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA should not require re-doing these steps, if
already performed while upgrading from 4.4.1 to 4.4.2 GA. These are only
required to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If you have your root file system on a multipath device on your hosts,
you should be aware that after upgrading from 4.4.1 to 4.4.3 your host
may enter emergency mode.
In order to prevent this be sure to upgrade oVirt Engine first, then on
your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.3 (redeploy in case of already being on 4.4.3).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node:
   - run "dracut --force --add multipath" to rebuild initramfs with the correct filter configuration
6. Reboot.
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.3 release highlights:
http://www.ovirt.org/release/4.4.3/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.3/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
4 years, 2 months
How to clean up Storage Domain manually
by miguel.garcia@toshibagcs.com
We have a problem in our cluster where the storage domain was almost completely consumed, so we proceeded to free space by deleting VMs and templates that are no longer needed. Despite the removal, we got the following error message when removing some assets: "The Storage Domain may be manually cleaned-up from possible leftover". This problem is also triggered when new VMs are created.
How can we perform a manual clean-up in the storage domain?
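Before cleaning anything by hand, it can help to list what is still registered on the domain and look for images left in a locked or illegal state. A small sketch with the Python SDK (ovirtsdk4); the domain name and credentials are placeholders:

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
sd = sds_service.list(search='name=mydomain')[0]
sd_disks = sds_service.storage_domain_service(sd.id).disks_service()

# Print every image still registered on the domain, with its status and
# size, to spot leftovers from the deleted VMs and templates.
for disk in sd_disks.list():
    print(disk.alias, disk.status, disk.actual_size)

connection.close()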
4 years, 2 months
Add direct lun (Fibre Channel) to VM
by jpedrolima@gmail.com
Hi
How can I add 4 Fibre Channel LUNs directly to a VM without losing the data they hold? These LUNs need to be seen directly by the VM as "raw devices".
Thks
Plima
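Direct LUNs keep their existing data because oVirt does not initialize them; they can be attached from the webadmin (a new disk of type Direct LUN, storage type Fibre Channel) or through the API. A sketch with the Python SDK (ovirtsdk4); the VM name and LUN WWID below are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
attachments = vms_service.vm_service(vm.id).disk_attachments_service()

# Attach one FC LUN as a direct LUN disk; repeat for each of the four LUNs.
attachments.add(
    types.DiskAttachment(
        disk=types.Disk(
            name='fc_lun_1',
            lun_storage=types.HostStorage(
                type=types.StorageType.FCP,
                logical_units=[types.LogicalUnit(id='36001405aaaaaaaaaaaaaaaaaaaaaaaaa')],  # LUN WWID, placeholder
            ),
        ),
        interface=types.DiskInterface.VIRTIO_SCSI,
        bootable=False,
        active=True,
    ),
)

connection.close()

If the guest needs full SCSI pass-through rather than a plain virtio-scsi block device, the disk's SCSI pass-through (sgio) option can be enabled as well.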
4 years, 2 months
Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will lose the VMs"
by gantonjo-ovirt@yahoo.com
So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we are attempting to upgrade it to the latest version, 4.4.2, but it fails as shown below. The problem is that the storage domains listed are all located on an external iSCSI SAN. The storage domains were created in another cluster we had (oVirt Node 4.3 based), detached from the old cluster, and imported successfully into the new cluster through the oVirt Management interface. As I understand it, oVirt itself has created the mount points under /rhev/data-center/mnt/blockSD/ for each of the iSCSI domains, and as such they are not really storage domains on the / filesystem.
I do believe the solution to the mentioned Bugzilla bug has caused a new bug, but I may be wrong. I cannot see what we have done wrong when importing these storage domains into the cluster (well, actually, some were freshly created in this cluster, and thus fully managed by the oVirt 4.4 manager interface).
What can we do to proceed in upgrading the hosts to latest oVirt Node?
Dependencies resolved.
=============================================================================================================================================================================================================================================================================================================================
Package Architecture Version Repository Size
=============================================================================================================================================================================================================================================================================================================================
Upgrading:
ovirt-node-ng-image-update noarch 4.4.2-1.el8 ovirt-4.4 782 M
replacing ovirt-node-ng-image-update-placeholder.noarch 4.4.1.5-1.el8
Transaction Summary
=============================================================================================================================================================================================================================================================================================================================
Upgrade 1 Package
Total download size: 782 M
Is this ok [y/N]: y
Downloading Packages:
ovirt-node-ng-image-update-4.4.2-1.el8.noarch.rpm 8.6 MB/s | 782 MB 01:31
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 8.6 MB/s | 782 MB 01:31
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/3
Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will lose the VMs
See: https://bugzilla.redhat.com/show_bug.cgi?id=1550205#c3
Storage domains were found in:
/rhev/data-center/mnt/blockSD/c3df4c98-ca97-4486-a5d4-d0321a0fb801/dom_md
/rhev/data-center/mnt/blockSD/90a52746-e0cb-4884-825d-32a9d94710ff/dom_md
/rhev/data-center/mnt/blockSD/74673f68-e1fa-46cf-b0ac-a35f05d42a7a/dom_md
/rhev/data-center/mnt/blockSD/f5fe00ba-c899-428f-96a2-e8d5e5707905/dom_md
/rhev/data-center/mnt/blockSD/5c3d9aff-66a3-4555-a17d-172fbf043505/dom_md
/rhev/data-center/mnt/blockSD/4cc6074b-a5f5-4337-a32f-0ace577e5e47/dom_md
/rhev/data-center/mnt/blockSD/a7658abd-e605-455e-9253-69d7e59ff50a/dom_md
/rhev/data-center/mnt/blockSD/f18e6e5c-124b-4a66-ae98-2088c87de42b/dom_md
/rhev/data-center/mnt/blockSD/f431e29b-77cd-4e51-8f7f-dd73543dfce6/dom_md
/rhev/data-center/mnt/blockSD/0f53281c-c756-4171-bcd2-8946956ebbd0/dom_md
/rhev/data-center/mnt/blockSD/9fad9f9b-c549-4226-9278-51208411b2ac/dom_md
/rhev/data-center/mnt/blockSD/c64006e7-e22c-486f-82a5-20d2b9431299/dom_md
/rhev/data-center/mnt/blockSD/509de8b4-bc41-40fa-9354-16c24ae16442/dom_md
/rhev/data-center/mnt/blockSD/0d57fcd3-4622-41cc-ab23-744b93d175a0/dom_md
error: %prein(ovirt-node-ng-image-update-4.4.2-1.el8.noarch) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package ovirt-node-ng-image-update
Verifying : ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/3
Verifying : ovirt-node-ng-image-update-4.4.1.5-1.el8.noarch 2/3
Verifying : ovirt-node-ng-image-update-placeholder-4.4.1.5-1.el8.noarch 3/3
Unpersisting: ovirt-node-ng-image-update-4.4.1.5-1.el8.noarch.rpm
Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.1.5-1.el8.noarch.rpm
Thanks in advance for your good help.
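For block storage domains, the dom_md directory under /rhev/data-center/mnt/blockSD/<uuid>/ normally contains only symlinks into /dev/<uuid>/, so the directory tree sits on the root filesystem while the data itself lives on the SAN LVs, which appears to be what trips the pre-install check. An illustrative sketch (not the scriptlet's actual code) that prints where those entries really point, assuming the standard block-domain layout:

import os

BASE = '/rhev/data-center/mnt/blockSD'
root_dev = os.stat('/').st_dev

for sd in sorted(os.listdir(BASE)):
    dom_md = os.path.join(BASE, sd, 'dom_md')
    if not os.path.isdir(dom_md):
        continue
    # The directory itself lives under /rhev, i.e. on the root filesystem...
    print(dom_md, 'on root fs:', os.stat(dom_md).st_dev == root_dev)
    # ...but its entries are symlinks to LVs on the iSCSI storage domain.
    for name in sorted(os.listdir(dom_md)):
        path = os.path.join(dom_md, name)
        if os.path.islink(path):
            print('   ', name, '->', os.path.realpath(path))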
4 years, 2 months