Storage domain in maintenance
by Strahil Nikolov
Hello All,
I have several storage domains in maintenance mode and I can neither detach nor activate them.
Can someone explain to me what the engine is actually doing when I try to activate one, so I can debug it?
The error from vdsm-client is that the storage domain does not exist, yet it is in the DB.
I assume that there is some storage issue, but in order to debug it I need to know what is going on under the hood.
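For reference, these are the kinds of vdsm-client queries I have been poking at it with (a sketch; the UUID is a placeholder taken from the engine DB, and the verb names are as I read them in the vdsm API schema):
vdsm-client Host getStorageDomains                           # domains vdsm currently sees
vdsm-client StorageDomain getInfo storagedomainID=<sd-uuid>  # details for one domain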
Thanks in advance
Best Regards,
Strahil Nikolov
4 years, 7 months
Windows 10 Pro 64 (1909) crashes when migrating
by Maton, Brett
I recently added a Windows 10 Pro 64-bit (release 1909) VM, and I'm seeing a lot of failures when oVirt tries to move the VM to another host (triggered by load balancing).
These errors are showing up in the UI event log:
Migration failed (VM: <vm label>, Source: <host 1>, Destination: <host 2>).
Followed by:
VM <host> is down with error. Exit message: Lost connection with qemu
process.
Google returned some references to 'options kvm ignore_msrs=1', which I've added to /etc/modprobe.d/kvm.conf, and I restarted the hosts, but that doesn't appear to have made a difference.
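For what it's worth, this is how I have been checking that the option actually took effect after the reboot (a sketch):
cat /etc/modprobe.d/kvm.conf                 # should contain: options kvm ignore_msrs=1
cat /sys/module/kvm/parameters/ignore_msrs   # should print Y once the module picked it up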
Is this a known issue with Windows 10 guests?
4 years, 7 months
Firewall GARP not reachable to VM
by k.betsis@gmail.com
Hi all
Does anyone know how I can make my firewall VM cluster act as the default gateway for VMs within the same network?
I've configured the GARP functionality on the OPNSENSE firewalls (a PFSENSE fork).
VMs within the same network can ping the firewall IP addresses successfully, but not the GARP IP.
The oVirt network has been configured with MAC address anti-spoofing set to false.
One firewall has been configured with virtio network drivers and the other with e1000, both exhibiting the same behavior.
Currently all VMs have been configured with the primary firewall as their default gateway.
Network workarounds using BGP and attributes could work, but they are way too complicated to streamline for all VMs when a simple VRRP can do the job.
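In case it helps with debugging, this is how I have been checking which network filter libvirt actually applies to the firewall's vNIC (a read-only query on the host; <vm-name> is a placeholder):
virsh -r dumpxml <vm-name> | grep -A2 filterref   # no output should mean no filter is applied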
Any ideas what I am missing?
4 years, 7 months
Data domain down due to space crunch
by Crazy Ayansh
Hi Guys,
I am using oVirt 4.3.7 in my environment. I am using the servers' local storage for data domains. Today one of the servers' data domains filled up, and because of this my VMs were paused.
Now I am unable to export my VMs as the storage domain in oVirt is down. What should I do now? Any suggestions?
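For reference, this is where I am looking for what is eating the space on the affected host (a sketch; file-based storage domains are mounted under /rhev/data-center/mnt):
df -h /rhev/data-center/mnt/*                  # free space per mounted storage domain
du -sh /rhev/data-center/mnt/*/* 2>/dev/null   # rough view of the biggest consumers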
Thanks
4 years, 7 months
Ovirt and Dell Compellent in ISCSI
by dalmasso@cines.fr
hi all,
we use oVirt 4.3 on Dell R640 servers running CentOS 7.7, with a Dell Compellent SCv3020 storage array over iSCSI.
We use two 10Gb interfaces for the iSCSI connection on each Dell server.
If we configure the iSCSI connection directly from the web UI, we can't specify the two physical ethernet interfaces, and there are missing paths (only 4 paths out of 8).
So, in the hypervisor shell, we use these commands to configure the connections:
iscsiadm -m iface -I em1 --op=new # 1st ethernet interface
iscsiadm -m iface -I p3p1 --op=new # 2nd ethernet interface
iscsiadm -m discovery -t sendtargets -p xx.xx.xx.xx
iscsiadm -m node -o show
iscsiadm -m node --login
After this, in the web UI we can connect our LUN with all paths.
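(For completeness, on our setup the iface records are also bound to the physical NICs before the discovery step; a sketch, in case anyone reproduces this from the listing above:)
iscsiadm -m iface -I em1 --op=update -n iface.net_ifacename -v em1
iscsiadm -m iface -I p3p1 --op=update -n iface.net_ifacename -v p3p1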
Also, I don't understand how to configure multipath in the web UI. By default the configuration is failover:
multipath -ll:
36000d3100457e4000000000000000005 dm-3 COMPELNT,Compellent Vol
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 23:0:0:1 sdb 8:16 active ready running
|- 24:0:0:1 sdd 8:48 active ready running
|- 25:0:0:1 sdc 8:32 active ready running
|- 26:0:0:1 sde 8:64 active ready running
|- 31:0:0:1 sdf 8:80 active ready running
|- 32:0:0:1 sdg 8:96 active ready running
|- 33:0:0:1 sdh 8:112 active ready running
|- 34:0:0:1 sdi 8:128 active ready running
I think round-robin or another configuration would be more performant; a sketch of what we have in mind is below.
So, can we make this configuration (selecting the physical interfaces and configuring multipath) in the web UI, for easier maintenance and for adding other servers?
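This is the kind of drop-in we are considering testing on one host first (the values are illustrative, not validated for Compellent; vdsm owns /etc/multipath.conf but is documented to leave /etc/multipath/conf.d alone):
# /etc/multipath/conf.d/compellent.conf
devices {
    device {
        vendor "COMPELNT"
        product "Compellent Vol"
        path_grouping_policy multibus
        path_selector "round-robin 0"
    }
}
# then: systemctl reload multipathd && multipath -ll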
Thank you.
Sylvain.
4 years, 7 months
oVirt 4.4.0 Beta release refresh is now available for testing
by Sandro Bonazzola
The oVirt Project is excited to announce the availability of the beta release refresh of oVirt 4.4.0 for testing, as of April 9th, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.3.
Important notes before you try it
Please note this is a Beta release.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
In particular, please note that upgrades from 4.3, and future upgrades from this beta to the final 4.4 release, are not supported.
Some of the features included in oVirt 4.4.0 Beta require content that will be available in CentOS Linux 8.2; they can't be tested on RHEL 8.2 beta yet due to an incompatibility in the openvswitch package shipped by the CentOS Virt SIG, which needs to be rebuilt on top of CentOS 8.2.
Known Issues
- ovirt-imageio development is still in progress. In this beta you can't upload images to data domains using the engine web application. You can still copy ISO images into the deprecated ISO domain for installing VMs; upload and download to/from data domains are fully functional via the REST API and SDK.
For uploading and downloading via the SDK, please see:
- https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload...
- https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/downlo...
Both scripts are standalone command line tools; try --help for more info.
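For example (the links above are truncated, so the script names here are placeholders for the files in the examples directory):
python3 upload_disk.py --help
python3 download_disk.py --help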
Installation instructions
For the engine: either use the appliance or:
- Install CentOS Linux 8 minimal from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps 389-ds
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use oVirt Node ISO or:
- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
; select minimal installation
- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- Attach the host to engine and let it be deployed.
What’s new in oVirt 4.4.0 Beta?
- Hypervisors based on CentOS Linux 8 (rebuilt from award-winning RHEL8), for both oVirt Node and standalone CentOS Linux hosts
- Easier network management and configuration flexibility with NetworkManager
- VMs based on a more modern Q35 chipset, with legacy SeaBIOS and UEFI firmware
- Support for direct passthrough of local host disks to VMs
- Live migration improvements for High Performance guests
- New Windows guest tools installer based on the WiX framework, now moved to the VirtioWin project
- Dropped support for cluster levels prior to 4.2
- Dropped SDK3 support
- 4K disk support, for file-based storage only; iSCSI/FC storage does not support 4K disks yet
- Exporting a VM to a data domain
- Editing of floating disks
- Integration of ansible-runner into the engine, which allows more detailed monitoring of playbooks executed from the engine
- Adding/reinstalling hosts is now completely based on Ansible
- The OpenStack Neutron Agent can no longer be configured by oVirt; it should be configured by TripleO instead
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD, or RDO, we also recommend trying ManageIQ <http://manageiq.org/>.
In that case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work-life balance. Therefore there is no need to answer this email out of your office hours.
4 years, 7 months
access to QEMU monitor
by Matthias Leopold
Hi,
for educational purposes I'm trying to access the QEMU monitor of oVirt
VMs. Can someone tell me how this can be done? Connecting to the unix
socket with socat doesn't work, probably because it's not started with
"server,nowait". Can the QEMU monitor be reached with the SPICE console?
thx
Matthias
4 years, 7 months
UI lockup
by Shareef Jalloq
Hi,
when browsing to the admin portal of the Engine, I get a popup:
Uncaught exception occurred. Please try reloading the page. Details:
(NS_ERROR_STORAGE_BUSY).
The error from ui.log is at the end of this mail.
What gives?
Shareef.
2020-04-07 15:54:12,255Z ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-66) [] Permutation name: 1AD871380C0F85A90CBF764A6CF6663F
2020-04-07 15:54:12,255Z ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-66) [] Uncaught exception:
com.google.gwt.core.client.JavaScriptException: (NS_ERROR_STORAGE_BUSY) :
at Unknown.loadFromLocalStorage(
https://ovirt-engine.phoelex.com/ovirt-engine/webadmin/theme/00-ovirt.bra...
)
at Unknown.init(
https://ovirt-engine.phoelex.com/ovirt-engine/webadmin/theme/00-ovirt.bra...
)
at Unknown.fn.setupVerticalNavigation(
https://ovirt-engine.phoelex.com/ovirt-engine/webadmin/theme/00-ovirt.bra...
)
at
org.ovirt.engine.ui.webadmin.section.main.view.MainSectionView$lambda$0$Type.execute(MainSectionView.java:48)
at
com.google.gwt.core.client.impl.SchedulerImpl.runScheduledTasks(SchedulerImpl.java:167)
[gwt-servlet.jar:]
at
com.google.gwt.core.client.impl.SchedulerImpl.$flushPostEventPumpCommands(SchedulerImpl.java:338)
[gwt-servlet.jar:]
at
com.google.gwt.core.client.impl.SchedulerImpl$Flusher.execute(SchedulerImpl.java:76)
[gwt-servlet.jar:]
at
com.google.gwt.core.client.impl.SchedulerImpl.execute(SchedulerImpl.java:140)
[gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236)
[gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275)
[gwt-servlet.jar:]
at Unknown.Tu/<(
https://ovirt-engine.phoelex.com/ovirt-engine/webadmin/?locale=en_US line 9
> scriptElement)
at Unknown.d(
https://ovirt-engine.phoelex.com/ovirt-engine/webadmin/?locale=en_US line 9
> scriptElement)
at Unknown.anonymous(Unknown)
4 years, 7 months
Cluster Console Encryption
by Tommaso - Shellrent
Hi to all.
We are trying to manage the value of "Enable VNC Encryption" on the cluster via Ansible (or any other automated mode).
Does anyone have a hint?
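The route we are currently experimenting with is the plain REST API; a sketch (we are assuming the cluster resource accepts a vnc_encryption element in recent API versions, and the credentials and IDs below are placeholders):
curl -k -u 'admin@internal:PASSWORD' -X PUT \
  -H 'Content-Type: application/xml' \
  -d '<cluster><vnc_encryption>true</vnc_encryption></cluster>' \
  'https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID'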
Regards
--
Shellrent - Il primo hosting italiano Security First
*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 | Fax 04441492177
4 years, 7 months