VM Portal --> Windows SSO
by Andrey Rusakov
Hi,
Recently I configured oVirt, FreeIPA, and a Windows guest VM.
I would like to SSO into the Windows VM through the VM Portal, or the REST API (downloading the SPICE config), or the Admin Portal ...
I don't see any login attempts in the log file, no errors, nothing.
I found a lot of docs for SSO setup - it is so simple (agent, correct domain name in the login profile ...), but in the end I found that the SSO module was removed from the 4.3 agent. Is that true?
The major question - is it working?
node down due to gluster storage
by suporte@logicworks.pt
Hello,
I'm running oVirt with 2 nodes and 2 Gluster storage domains.
The second Gluster storage is down due to a hardware problem.
Node 1 keeps trying to connect to the second storage and does not come up.
I'm running version 4.3.6.
Any idea?
thanks
--
Jose Ferradeira
http://www.logicworks.pt
Gluster deployment fails with missing UUID
by Shareef Jalloq
Hi,
I'm running the Gluster deployment flow and am trying to use a second drive
as the Gluster volume. It's /dev/sdb on each node and I'm using JBOD mode.
I'm seeing the following gluster ansible task fail, and a Google search
doesn't bring up much.
TASK [gluster.infra/roles/backend_setup : Create volume groups]
****************
failed: [ovirt-gluster-01.jalloq.co.uk] (item={u'vgname':
u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item",
"changed": false, "err": " Couldn't find device with uuid
Y8FVs8-LP6w-R6CR-Yosh-c40j-17XP-ttP3Np.\n Couldn't find device with uuid
tA4lpO-hM9f-S8ci-BdPh-lTve-0Rh1-3Bcsfy.\n Couldn't find device with uuid
RG3w6j-yrxn-2iMw-ngd0-HgMS-i5dP-CGjaRk.\n Couldn't find device with uuid
lQV02e-TUZE-PXCd-GWEd-eGqe-c2xC-pauHG7.\n Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
stty: standard tty: inappropriate ioctl for device
by Charles Kozler
I'd like to share this with the list because it's something I changed
for convenience in .bashrc, but it had a not-so-obvious rippling impact on
the oVirt self-hosted installer. I can imagine a few others doing this
too, and I'd rather save their future selves hours of Google time.
Every install failed no matter what, and it was always at the SSO step to
revoke the token (here:
https://github.com/machacekondra/ansible/blob/71547905fab67a876450f95c9ca...)
and then reissue a new token, but the engine log yielded different
information. The SSO failure was a red herring.
engine.log:2020-05-04 22:09:17,150-04 ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [61038b68] Host installation
failed for host '672be551-9259-4d2d-823d-07f586b4e0f1', 'node1': Unexpected
error during execution: stty: standard input: Inappropriate ioctl for device
engine.log:2020-05-04 22:09:17,145-04 ERROR
[org.ovirt.engine.core.uutils.ssh.SSHDialog]
(EE-ManagedThreadFactory-engine-Thread-1) [61038b68] SSH error running
command root@node1:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d
-t ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1;
rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C
"${MYTMP}" -x && "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
DIALOG/customization=bool:True': RuntimeException: Unexpected error during
execution: stty: standard input: Inappropriate ioctl for device
I was on the hunt for this for the better part of 2 days (because who
else has anything to do during quarantine?), wracking my brain and trying
everything to figure out what was going on.
Well, it was my own fault:
# cat .bashrc | grep -i stty
stty erase ^?
With this set in the .bashrc of the node I was running the installer from
(via cockpit), the oVirt installer will fail to install.
This was set for convenience, to make backspace work in vim, since at some
point it had stopped working for me.
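For what it's worth, the usual way to keep that convenience without breaking non-interactive sessions is to guard the stty call so it only runs when the shell is interactive; this is a standard shell idiom, not anything oVirt-specific:

```shell
# In .bashrc: only touch the terminal in interactive shells.
# Non-interactive sessions (scp, ansible, the ovirt-host-deploy SSH runs)
# have no tty, and an unguarded stty there prints
# "stty: standard input: Inappropriate ioctl for device".
case $- in
  *i*) stty erase '^?' ;;  # interactive shell: safe to configure the tty
  *)   ;;                  # non-interactive: skip it
esac
```

With the guard in place, the installer's non-interactive SSH sessions should no longer hit the stty error.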
Should I file this as a bug? The message generated is more of a warning
than a failure, but I don't know the internals of oVirt well enough to say.
Commands still actually execute fine:
ovirt-engine(a)192.168.222.84) # ssh 10.0.16.221 "uptime"
root(a)10.0.16.221's password:
stty: standard input: Inappropriate ioctl for device
22:30:01 up 22:45, 2 users, load average: 0.80, 0.62, 0.80
One thing I think should be called out in the docs, and called out very
loudly, is that the entire oVirt installer expects a 110% clean machine:
one that is fresh from OS install and has only been given an IP and
hostname. It's not that obvious, but it is now.
Sometimes VM fails to power on: device busy in adding vnet port
by Gianluca Cecchi
Hi,
my environment info:
centos-release.x86_64 7-7.1908.0.el7.centos
glusterfs.x86_64 6.8-1.el7
ovirt-engine.noarch 4.3.9.4-1.el7
ovirt-release43.noarch 4.3.9-1.el7
I have an external agent (OpenStack staging_ovirt) that powers VMs on/off
during bulk installs. See also some more details here:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/B4CDWTAR7YLR...
I repeat the installation many times for test purposes, and during these
phases the application powers on (through the ostackpm user) about 8 VMs at
the same moment.
Sometimes one or more of them fail to start, and in the web admin GUI I
see this sequence of events:
VM ostack-ceph2 was started by ostackpm@internal-authz (Host:
ovirt.mydomain).
User <UNKNOWN> got disconnected from VM ostack-ceph2.
VM ostack-ceph2 is down with error. Exit message: Unable to add bridge
vlan23 port vnet21: Device or resource busy.
Failed to run VM ostack-ceph2 on Host ovirt.mydomain
Failed to run VM ostack-ceph2 (User: ostackpm@internal-authz).
Right after, I press the start button in the web admin GUI and the VM
powers on without problems.
I think this kind of problem doesn't depend on the REST API agent.
Could it be a bug, or some concurrency between the power-on commands?
If I grep engine.log for the UUID and name of the VM above, I get this:
https://drive.google.com/file/d/1QsedDqvlbvVPCifDNted4ipSsJ2GeBJV/view?us...
On ovirt host I see:
2020-05-04 10:16:05,491+0200 ERROR (vm/38f806e7) [virt.vm] (vmId='38f806e7-5c48-4ffe-90e8-671b1a585256') The vm start process failed (vm:934)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 868, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2895, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1110, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: Unable to add bridge vlan23 port vnet21: Device or resource busy
2020-05-04 10:16:05,492+0200 INFO (vm/38f806e7) [virt.vm] (vmId='38f806e7-5c48-4ffe-90e8-671b1a585256') Changed state to Down: Unable to add bridge vlan23 port vnet21: Device or resource busy (code=1) (vm:1702)
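Since a second start attempt always succeeds, one workaround on the agent side (until the root cause is found) is to retry the power-on with a short backoff. A minimal sketch; start_vm below is a hypothetical placeholder for whatever call the agent actually makes:

```shell
# Retry a command up to 5 times, backing off a little between attempts.
retry() {
  attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    [ "$attempt" -ge 5 ] && return 1
    sleep "$attempt"  # 1s, 2s, 3s, 4s between attempts
  done
}

# Hypothetical usage: retry start_vm ostack-ceph2
```

This doesn't explain the "Device or resource busy" on the vnet port, but it would paper over the race when 8 VMs are started at the same moment.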
Let me know if you need more information
Gianluca
oVirt 4.4.0 Beta release refresh is now available for testing
by Sandro Bonazzola
oVirt 4.4.0 Beta release refresh is now available for testing
The oVirt Project is excited to announce the availability of the beta
release of oVirt 4.4.0 refresh (beta 4) for testing, as of April 17th, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.3.
Important notes before you try it
Please note this is a Beta release.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
In particular, please note that upgrades from 4.3, and future upgrades from
this beta to the final 4.4 release, are not supported.
Some of the features included in oVirt 4.4.0 Beta require content that will
be available in CentOS Linux 8.2, and they can't be tested on RHEL 8.2 beta
yet due to an incompatibility in the openvswitch package shipped by the
CentOS Virt SIG, which requires rebuilding openvswitch on top of CentOS 8.2.
Known Issues
-
ovirt-imageio development is still in progress. In this beta you can't
upload images to data domains using the engine web application. You can
still copy ISO images into the deprecated ISO domain for installing VMs,
and upload and download to/from data domains are fully functional via the
REST API and SDK.
For uploading and downloading via the SDK, please see:
-
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload...
-
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/downlo...
Both scripts are standalone command line tools; try --help for more info.
Installation instructions
For the engine: either use appliance or:
- Install CentOS Linux 8 minimal from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps postgresql:12
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use oVirt Node ISO or:
- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
; select minimal installation
- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- Attach the host to engine and let it be deployed.
What’s new in oVirt 4.4.0 Beta?
-
Hypervisors based on CentOS Linux 8 (rebuilt from the award-winning RHEL 8),
for both oVirt Node and standalone CentOS Linux hosts
-
Easier network management and configuration flexibility with
NetworkManager
-
VMs based on a more modern Q35 chipset, with legacy SeaBIOS and UEFI
firmware
-
Support for direct passthrough of local host disks to VMs
-
Live migration improvements for High Performance guests.
-
New Windows guest tools installer based on the WiX framework, now moved to
the VirtioWin project
-
Dropped support for cluster level prior to 4.2
-
Dropped SDK3 support
-
4K disk support, for file-based storage only; iSCSI/FC storage does not
support 4K disks yet.
-
Exporting a VM to a data domain
-
Editing of floating disks
-
Integration of ansible-runner into the engine, which allows more detailed
monitoring of playbooks executed from the engine
-
Adding/reinstalling hosts is now completely based on Ansible
-
The OpenStack Neutron agent can no longer be configured by oVirt; it
should be configured by TripleO instead
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD, or RDO, we also recommend
trying ManageIQ <http://manageiq.org/>.
In that case, please be sure to take the qc2 image and not the OVA image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
Engine UI exception while viewing unattached hosts in Network tab
by Shareef Jalloq
Hi,
I'm following the Gluster setup blog post (
https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster...)
and am at the 'Storage network' section.
I've created a new 'gluster' network, selected it, and navigated to the
'Hosts' tab. The guide tells you to click the 'Unattached' button and
then 'Setup Host Networks' for each host. However, clicking the
'Unattached' button fires off a bunch of exceptions rather than displaying
my 3 hosts. The message is:
"Uncaught exception occurred. Please try reloading the page. Details:
(TypeError) : Cannot read property 'N' of null
Please have your administrator check the UI logs"
Has anyone seen this before? Where is the UI log located?
Shareef.
Unable to remove system roles from Everyone group
by miguel.garcia@toshibagcs.com
I opened the Everyone group and added the system role UserRole, and realized now that everybody can see all VMs, which is not good. I tried to remove the role from the group but got the error: "Error while executing action: It's not allowed to remove system permissions assigned to built-in Everyone group"
Looking into the oVirt forums, I see that this change should be made through the engine database, but I was not able to get that far.
Can someone help me figure out how to remove the system role from the Everyone group, or at least reset it?