"gluster-ansible-roles is not installed on Host" error on Cockpit
by Hesham Ahmed
On a new 4.3.1 oVirt Node installation, when trying to deploy HCI
(and also when adding a new Gluster volume to an existing cluster)
using Cockpit, an error is displayed: "gluster-ansible-roles is not
installed on Host. To continue deployment, please install
gluster-ansible-roles on Host and try again". There is no package
named gluster-ansible-roles in the repositories:
[root@localhost ~]# yum install gluster-ansible-roles
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
package_upload, product-id, search-disabled-repos,
subscription-manager, vdsmupgrade
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Loading mirror speeds from cached hostfile
* ovirt-4.3-epel: mirror.horizon.vn
No package gluster-ansible-roles available.
Error: Nothing to do
Uploading Enabled Repositories Report
Cannot upload enabled repos report, is this client registered?
This is due to a check introduced here:
https://gerrit.ovirt.org/#/c/98023/1/dashboard/src/helpers/AnsibleUtil.js
Changing the line from:
[ "rpm", "-qa", "gluster-ansible-roles" ], { "superuser":"require" }
to
[ "rpm", "-qa", "gluster-ansible" ], { "superuser":"require" }
resolves the issue. The compiled code above ships in
/usr/share/cockpit/ovirt-dashboard/app.js on oVirt Node and can be
patched in place by running "sed -i 's/gluster-ansible-roles/gluster-ansible/g'
/usr/share/cockpit/ovirt-dashboard/app.js && systemctl restart
cockpit"
4 years
ovirt-imageio-proxy not working after updating SSL certificates with a wildcard cert issued by AlphaSSL (intermediate)
by Lynn Dixon
All,
I recently bought a wildcard certificate for my lab domain (shadowman.dev)
and I replaced all the certs on my RHV 4.3 machine per our documentation.
The WebUI presents the certs successfully and without any issues, and
everything seemed to be fine, until I tried to upload a disk image (or an
ISO) to my storage domain. I get this error in the events tab:
https://share.getcloudapp.com/p9uPvegx
I also see that the disk is showing up in my storage domain, but it's
showing "Paused by System" and I can't do anything with it. I can't even
delete it!
I have tried following this document to fix the issue, but it didn't work:
https://access.redhat.com/solutions/4148361
I am seeing this error pop into my engine.log:
https://pastebin.com/kDLSEq1A
And I see this error in my image-proxy.log:
WARNING 2020-07-24 15:26:34,802 web:137:web:(log_error) ERROR [172.17.0.30]
PUT /tickets/ [403] Error verifying signed ticket: Invalid ovirt ticket
(data='------my_ticket_data-----', reason=Untrusted certificate)
[request=0.002946/1]
Now, when I bought my wildcard, the provider gave me a root certificate
for the CA as well as a separate intermediate CA certificate, plus the
certificate itself and a private key, of course. The root and
intermediate CA certificates have been added
to /etc/pki/ca-trust/source/anchors/ and I ran update-ca-trust.
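One sanity check worth running here, assuming the default RHV 4.3 PKI
paths: the imageio proxy verifies ticket signatures against the engine's
internal CA, so that chain has to remain intact even after the apache
cert swap:

# confirm the internal engine CA still validates the engine certificate
# (paths are the 4.3 defaults; adjust if your setup differs)
openssl verify -CAfile /etc/pki/ovirt-engine/ca.pem /etc/pki/ovirt-engine/certs/engine.cer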
I also started experiencing issues with the OVN network provider at the
same time I replaced the SSL certs; I disregarded it at the time, but
now I am thinking it's related. Any advice on what to look for to fix
ovirt-imageio-proxy?
Thanks!
*Lynn Dixon* | Red Hat Certified Architect #100-006-188
*Solutions Architect* | NA Commercial
Google Voice: 423-618-1414
Cell/Text: 423-774-3188
Click here to view my Certification Portfolio <http://red.ht/1XMX2Mi>
4 years
Single Node HCI upgrade procedure from CentOS7/oVirt 4.3 to CentOS8/oVirt 4.4?
by thomas@hoberg.net
I can hear you saying: "You did understand that single node HCI is just a toy, right?"
For me the primary use of a single-node HCI is adding some disaster resilience in small-server edge scenarios, where a three-node HCI provides the fault tolerance: 3+1 with a bit of distance, warm or even cold stand-by, a potentially manual switchover, and a reduced workload in case disaster strikes.
Of course, another 3nHCI would be better, but who gets that type of budget, right?
What I am trying to say is: if you want oVirt to gain market share, try to give HCI more love. And while you're at it, try to make expanding from 1nHCI to 3nHCI (and higher node counts) a standard operational procedure, so that a disaster stand-by can be expanded into a production setup while the original 3nHCI is being rebuilt.
For me low-budget HCI is where oVirt has its biggest competitive advantage against vSan and Nutanix, so please don't treat the HCI/gluster variant like an unwanted child any more.
In the meantime, OVA imports (from 4.3.10 exports) on my 4.4.2 1nHCI fail again, which I'll report separately.
4 years
Problem with Cluster-wise BIOS Settings in oVirt 4.4
by Rodrigo G. López
Hi all,
We are running an oVirt 4.4 Hosted Engine as a VM, and after changing
the cluster's BIOS type from Q35 with Legacy BIOS (the default after
installation) to Preexistent, the VM fails with the following error:
XML error: The device at PCI address 0000:00:02.0 cannot be plugged into
the PCI controller with index='0'. It requires a controller that accepts
a pcie-root-port.
We need this so that we can run VMs imported from a previous version of
oVirt, namely 4.0.
Applying the BIOS setting to individual VMs works, but in an attempt to
generalize the settings we decided to apply it to the whole cluster.
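The error is consistent with the machine type and the VM's devices going
out of sync: on Q35 machines, devices sit behind pcie-root-port
controllers, so switching the cluster to a legacy/i440fx-style BIOS type
leaves existing VMs (the hosted engine included) with PCIe addresses
that the pc machine's plain pci-root cannot accept. A read-only way to
see what libvirt actually built, assuming the default hosted engine VM
name:

# inspect the machine type and PCI controllers defined for the VM
# ("HostedEngine" is the default name; -r avoids needing SASL credentials)
virsh -r dumpxml HostedEngine | grep -e "machine=" -e "controller type='pci'"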
Tell me if you need more data.
Cheers,
-rodri
4 years
POWER9 (ppc64le) Support on oVirt 4.4.1
by Vinícius Ferrão
Hello, I was using oVirt 4.3.10 with IBM AC922 (POWER9 / ppc64le) without any issues.
Since I've moved to 4.4.1 I can't add the AC922 machine to the engine anymore; it complains with the following error:
The host CPU does not match the Cluster CPU type and is running in degraded mode. It is missing the following CPU flags: model_POWER9, powernv.
Any idea what may be happening? The engine runs on x86_64, and I was using it this way on 4.3.10.
Machine info:
timebase : 512000000
platform : PowerNV
model : 8335-GTH
machine : PowerNV 8335-GTH
firmware : OPAL
MMU : Radix
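In case it helps triage: the flags the engine compares against are the
ones vdsm reports, and they can be dumped on the host (vdsm-client ships
with vdsm; output trimmed to the CPU fields):

# show what vdsm advertises to the engine
vdsm-client Host getCapabilities | grep -i -e cpuFlags -e cpuModel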
Thanks,
4 years, 1 month
Gluster Domain Storage full
by suporte@logicworks.pt
Hello,
I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS storage domain with a brick on a dedicated filesystem, serving only one VM.
The VM filled the entire storage domain.
The underlying Linux filesystem is 4.1G in size and 100% used; the mounted brick shows 0 GB available and 100% used.
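For reference, this is how the usage can be confirmed (volume name and brick path below are examples):

# usage as the OS sees it vs. what gluster reports for the brick
df -h /gluster_bricks/data
gluster volume status data detail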
I cannot do anything with this disk; for example, if I try to move it to another Gluster storage domain I get the message:
Error while executing action: Cannot move Virtual Disk. Low disk space on Storage Domain
Any idea?
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
4 years, 1 month
administration portal won't complete loading, looping
by Philip Brown
I have an odd situation:
When I go to
https://ovengine/ovirt-engine/webadmin/?locale=en_US
after authentication passes, it shows the top banner of
oVirt OPEN VIRTUALIZATION MANAGER
and the
Loading ...
text in the center, but it never gets past that. Any suggestions on how I could investigate and fix this?
background:
I recently updated the certs to signed wildcard certs, but this somehow broke consoles.
So I restored the original certs and restarted things... but got stuck with this.
Interestingly, the VM portal loads fine, but the admin portal does not.
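A first step for anyone hitting the same loop: reload the page while
tailing the engine-side logs, since webadmin load failures usually leave
a trace in ui.log (paths are the engine defaults):

# watch both logs while reloading the webadmin page
tail -f /var/log/ovirt-engine/ui.log /var/log/ovirt-engine/engine.log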
--
Philip Brown | Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310 | Fax 714.918.1325
pbrown@medata.com | www.medata.com
4 years, 1 month
Latest ManagedBlockDevice documentation
by Michael Thomas
I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.
I found this:
https://ovirt.org/develop/release-management/features/storage/cinderlib-i...
...but it has a big note at the top that it is "...not user
documentation and should not be treated as such."
The oVirt administration guide[1] does not talk about managed block devices.
I've found a few mailing list threads that discuss people setting up a
Managed Block Device with ceph, but didn't see any links to
documentation steps that folks were following.
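From the feature page and those threads, the rough shape seems to be:
enable cinderlib support at engine-setup time, then create a "Managed
Block Storage" domain in the UI with cinder's RBD driver options. A
sketch, with all values below as examples only:

# cinderlib integration is an optional component (tech preview);
# re-running setup offers the toggle without a full reconfigure
engine-setup --reconfigure-optional-components
# then, in the new storage domain dialog, driver options along these
# lines (option names are from cinder's RBD driver; values are examples):
#   volume_driver=cinder.volume.drivers.rbd.RBDDriver
#   rbd_pool=ovirt-volumes
#   rbd_user=ovirt
#   rbd_ceph_conf=/etc/ceph/ceph.conf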
Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
and if so, where is the documentation for using it?
--Mike
[1] ovirt.org/documentation/administration_guide/
4 years, 1 month
Moving VM disks from one storage domain to another. Automate?
by Green, Jacob Allen /C
I am looking for an automated way, via Ansible, to move a VM disk from one storage domain to another. I found https://docs.ansible.com/ansible/latest/modules/ovirt_disk_module.html, and while it mentions copying a VM disk image from one domain to another, it does not mention live storage migration, which is what I am looking to do.
I want to take roughly 100 VMs and move their disk images from one domain to another domain available to the datacenter, in some automated/scripted fashion. I am curious whether anyone out there has had to do this and how they tackled it, or whether I am missing some easy, obvious way other than selecting all the disks and clicking move. From the looks of it, though, if I did select all the disks and click move, RHV would try to do them all at once, which is probably not ideal; I would like to move the disks serially, one after another, to conserve throughput and IO.
I also did not see anything on Ansible Galaxy or the oVirt GitHub that would do this.
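One hedged way to get strictly serial moves without a dedicated module
is the REST API's "move" action on each disk, polling the disk status
between moves. Everything below (engine URL, credentials, target storage
domain id, and the disk_ids.txt list) is a placeholder:

#!/bin/bash
# serial disk migration sketch using the oVirt/RHV v4 REST API
ENGINE="https://engine.example.com/ovirt-engine/api"
AUTH="admin@internal:password"                       # placeholder credentials
TARGET_SD_ID="00000000-0000-0000-0000-000000000000"  # target storage domain id
while read -r disk_id; do
  # ask the engine to move this disk to the target storage domain
  curl -ks -u "$AUTH" -H 'Content-Type: application/xml' \
    -d "<action><storage_domain id=\"$TARGET_SD_ID\"/></action>" \
    "$ENGINE/disks/$disk_id/move"
  # wait for the disk to unlock before starting the next one
  while curl -ks -u "$AUTH" "$ENGINE/disks/$disk_id" | grep -q '<status>locked</status>'; do
    sleep 30
  done
done < disk_ids.txt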
Thank you.
4 years, 1 month