Adding new host to a cluster and stuck at "Detect if host is a prebuilt image".
by fauzuwan.nazri93@gmail.com
Hello everyone, I'm trying to add a host to a new cluster, and during
provisioning the events showed "Detect if host is a prebuilt image."
It has been stuck there ever since.
Version: ovirt-node-ng-image-update-placeholder-4.4.7.1-1.el8.noarch
Actual results: Host provisioning stuck at "Detect if host is a prebuilt image."
Expected results: Provisioning successful.
Checking the oVirt host deployment playbook logs, I couldn't find any useful
output. Below is the last output from the logs.
2021-08-06 19:25:12 MYT - TASK [ovirt-host-deploy-facts : Detect if host is a prebuilt image] ************
2021-08-06 19:25:12 MYT - ok: [ovirth02.zyzyx.virtnet]
2021-08-06 19:25:12 MYT - {
"status" : "OK",
"msg" : "",
"data" : {
"uuid" : "a7b74be9-30ab-4ca4-b2b6-e7af6b86bb6c",
"counter" : 24,
"stdout" : "ok: [ovirth02.zyzyx.virtnet]",
"start_line" : 22,
"end_line" : 23,
"runner_ident" : "f155ccc8-f6a8-11eb-94c2-00e04cf8ff45",
"event" : "runner_on_ok",
"pid" : 1549383,
"created" : "2021-08-06T11:25:10.333884",
"parent_uuid" : "00e04cf8-ff45-b99d-863c-000000000182",
"event_data" : {
"playbook" : "ovirt-host-deploy.yml",
"playbook_uuid" : "c1e368bf-e183-47c0-b6f5-377114c20eab",
"play" : "all",
"play_uuid" : "00e04cf8-ff45-b99d-863c-000000000007",
"play_pattern" : "all",
"task" : "Detect if host is a prebuilt image",
"task_uuid" : "00e04cf8-ff45-b99d-863c-000000000182",
"task_action" : "set_fact",
"task_args" : "",
"task_path" : "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-facts/tasks/host-os.yml:26",
"role" : "ovirt-host-deploy-facts",
"host" : "ovirth02.zyzyx.virtnet",
"remote_addr" : "ovirth02.zyzyx.virtnet",
"res" : {
"changed" : false,
"ansible_facts" : {
"node_host" : true
},
"_ansible_no_log" : false
},
"start" : "2021-08-06T11:25:10.254355",
"end" : "2021-08-06T11:25:10.333550",
"duration" : 0.079195,
"event_loop" : null,
"uuid" : "a7b74be9-30ab-4ca4-b2b6-e7af6b86bb6c"
}
}
}
2021-08-06 19:25:12 MYT - TASK [ovirt-host-deploy-facts : Reset configuration of advanced virtualization module] ***
2021-08-06 19:25:12 MYT - TASK [ovirt-host-deploy-facts : Find relevant advanced virtualization module version] ***
2021-08-06 19:25:12 MYT - TASK [ovirt-host-deploy-facts : Enable advanced virtualization module] *********
2021-08-06 19:25:12 MYT - TASK [ovirt-host-deploy-facts : Ensure Python3 is installed for CentOS/RHEL8 hosts] ***
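One thing that may be relevant: the prebuilt-image check itself completed
(runner_on_ok above), and the tasks that follow it reset and enable the
advanced virtualization dnf module, so the hang may simply be a package/module
transaction that is stuck or very slow on the host. A few generic things worth
checking on the host itself (plain dnf debugging, nothing oVirt-specific):
# pgrep -af dnf
(shows whether a dnf/module transaction started by the deploy is still running)
# tail -f /var/log/dnf.log
(watch for repository timeouts or module errors while the deploy is stuck)
# dnf module list virt
(confirms the host can see an advanced virtualization module stream at all)
On the engine side, the per-host deploy log (by default under
/var/log/ovirt-engine/host-deploy/) should print the next runner event as soon
as one of those tasks completes or fails.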
Thank You.
fencing and (virtual) power button pression reaction
by Gianluca Cecchi
Hello,
in RHCS we have the fencing concept, with goals very similar to oVirt's:
avoid data corruption and also react quickly to problematic host situations.
The implementation is quite similar to the oVirt one, with several fencing
agents in common, such as fence_ipmilan.
In the RHCS documentation there is a chapter describing how to configure hosts
so that they don't react to the power button being pressed. This guarantees
that failover is as fast as possible, and avoids the risk that the host to be
fenced creates more damage by reacting and starting its shutdown procedure
instead of simply powering off.
See:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/...
It seems to me that oVirt / RHV lack this feature.
Inside oVirt NGN and also RHV-H, the /etc/systemd/logind.conf file is not
configured with an entry like
HandlePowerKey=ignore
So in some of the tests I'm doing, I can see in the virtual console that the
to-be-fenced host begins its OS shutdown flow when it detects the power button
press. Typically the system powers off and then on again after 2-3 seconds,
but in one case I saw a delay of about 10 seconds.
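In the meantime, for my tests, a plain systemd-logind drop-in along these lines
should make a host ignore the power key (this is generic systemd configuration,
not an oVirt-blessed recipe, so treat it only as a sketch):
# mkdir -p /etc/systemd/logind.conf.d
# cat > /etc/systemd/logind.conf.d/99-ignore-power-key.conf <<'EOF'
[Login]
HandlePowerKey=ignore
EOF
# systemctl restart systemd-logind
(the drop-in sets HandlePowerKey=ignore without touching the shipped
/etc/systemd/logind.conf, and restarting systemd-logind applies it; whether it
survives an NGN image update is something I would double-check)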
I have opened a case (number 03002278) for my RHV products, but I would also
like your comments here in case my reasoning is wrong.
Gianluca
Data recovery from (now unused, but still mounted) Gluster Volume for a single VM
by David White
My hyperconverged cluster was running out of space.
The reason for that is a good problem to have - I've grown more in the last 4 months than in the past 4-5 years combined.
But the downside was that I had to go ahead and upgrade my storage, and it became urgent to do so.
I began that process last week.
I have 3 volumes:
[root@cha2-storage dwhite]# gluster volume list
data
engine
vmstore
I did the following on all 3 of my volumes:
1) Converted cluster from Replica 3 to Replica 2, arbiter 1
- I did run into an issue where some VMs were paused, but I was able to power off and power on those VMs again with no issue
2) Ran the following on Host 2:
# gluster volume remove-brick data replica 1 cha1-storage.mgt.my-domain.com:/gluster_bricks/data/data force
# gluster volume remove-brick vmstore replica 1 cha1-storage.mgt.my-domain.com:/gluster_bricks/vmstore/vmstore force
# gluster volume remove-brick engine replica 1 cha1-storage.mgt.my-domain.com:/gluster_bricks/engine/engine force
(rebuilt the array & rebooted the server -- when I rebooted, I commented out the original UUIDs from /etc/fstab for the gluster storage)
# lvcreate -L 2157G --zero n -T gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb
# lvcreate -L 75G -n gluster_lv_engine gluster_vg_sdb
# lvcreate -V 600G --thin -n gluster_lv_data gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb
# lvcreate -V 1536G --thin -n gluster_lv_vmstore gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb
# mkfs.xfs /dev/gluster_vg_sdb/gluster_lv_engine
# mkfs.xfs /dev/gluster_vg_sdb/gluster_lv_data
# mkfs.xfs /dev/gluster_vg_sdb/gluster_lv_vmstore
At this point, I ran lsblk --fs to get the new UUIDs, and put them into /etc/fstab
# mount -a
# gluster volume add-brick engine replica 2 cha2-storage.mgt.my-domain.com:/gluster_bricks/engine/engine force
# gluster volume add-brick vmstore replica 2 cha2-storage.mgt.my-domain.com:/gluster_bricks/vmstore/vmstore force
# gluster volume add-brick data replica 2 cha2-storage.mgt.my-domain.com:/gluster_bricks/data/data force
So far so good.
3) I was running critically low on disk space on the replica, so I:
- Let the gluster volume heal
- Then removed the Host 1 bricks from the volumes
When I removed the Host 1 bricks, I made sure that there were no additional healing tasks, then proceeded to do so:
# gluster volume remove-brick data replica 1 cha1-storage.mgt.my-domain.com:/gluster_bricks/data/data force
# gluster volume remove-brick vmstore replica 1 cha1-storage.mgt.my-domain.com:/gluster_bricks/vmstore/vmstore force
At this point, I ran into problems.
All of my VMs went into a Paused state, and I had to reboot all of them.
All VMs came back online, but two VMs were corrupted. I wound up re-building them from scratch.
Unfortunately, we lost a lot of data on 1 of the VMs, as I didn't realize backups were broken on that particular VM.
Is there a way for me to go into those disks (that are still mounted to Host 1), examine the Gluster content, and somehow mount / recover the data from the VM that we lost?
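My rough idea, if the old Host 1 bricks are still intact, is that the VM disk
images should still exist as ordinary files on the brick filesystem (under the
storage-domain directory, in images/<disk-uuid>/), so it may be possible to
pull data out of them with qemu/libguestfs tools, roughly like this (the paths
and the qcow2 format are assumptions from a default oVirt/Gluster layout, and I
would only work read-only on a copy, with no guarantee the image is internally
consistent):
# find /gluster_bricks/vmstore/vmstore -type f -size +1G
(locate the large image files; the directory names are the storage-domain and disk UUIDs)
# qemu-img info /gluster_bricks/vmstore/vmstore/<domain-uuid>/images/<disk-uuid>/<volume>
(check whether the volume is raw or qcow2 and whether qemu can read it at all)
# cp <volume> /some/other/storage/recovery.img
# guestmount -a /some/other/storage/recovery.img -i --ro /mnt/recovery
(or, without libguestfs: modprobe nbd max_part=8; qemu-nbd --read-only -c /dev/nbd0 -f qcow2 recovery.img; mount -o ro /dev/nbd0p1 /mnt/recovery)
Does that sound like a sane approach, or is there a better supported way?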
Sent with ProtonMail Secure Email.
non-critical request - Disk volume label - Web-ui
by Jorge Visentini
Hi everyone!
Firstly, congratulations on the evolution of oVirt 4.4.7.
A *non-critical* request for a future version... if possible, add a label to
disk volumes in the web UI.
Thank you all!
[image: image.png]
--
Att,
Jorge Visentini
+55 55 98432-9868
Wondering a way to create a universal USB Live OS to connect to VM Portal
by eta.entropy@gmail.com
Hi All,
once the oVirt infrastructure has been set up, VMs created and assigned to users, and the users manage them through the VM Portal,
I'm wondering if there is an easy way to provide end users with a USB key that will just
> plug into any computer or boot from it
> connect to given VM Portal to manage assigned VMs
> connect to given SPICE console to enter assigned VMs
Just like a Linux live USB with a preconfigured service that starts up and connects to the VM Portal or a SPICE console.
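My rough idea so far (the URL, paths and tooling below are assumptions, not a
tested recipe) would be to build a custom live image, for example with a
kickstart and livemedia-creator, that includes a browser plus virt-viewer for
the SPICE consoles, auto-logs into a kiosk user, and autostarts the browser
pointed at the VM Portal with a desktop autostart entry along these lines:
[Desktop Entry]
Type=Application
Name=VM Portal kiosk
Exec=firefox --kiosk https://engine.example.com/ovirt-engine/web-ui/
Opening a console from the VM Portal then hands the downloaded console.vv file
to remote-viewer, which provides the SPICE session.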
Is there something already available to start from?
Is this doable, or am I dreaming?
Thanks for any input
Question about PCI storage passthrough for a single guest VM
by Tony Pearce
I have configured a host with PCI passthrough for GPU passthrough. Using
this knowledge I went ahead and configured NVMe SSD PCI passthrough. On
the guest, I partitioned and mounted the SSD without any issues.
Searching Google for this exact setup, I only see results about "local
storage", where local storage = using a disk image on the host's storage. So
I have come here to try and find out whether there are any concerns, gripes
or issues with using NVMe PCI passthrough compared to local storage.
Some more detail about the setup:
I have 2 identical hosts (NVIDIA GPU and also NVMe PCI SSD). A few weeks
ago, when I started researching converting one of these systems over (from
native Ubuntu) to oVirt using GPU PCI passthrough, I found the information
about local storage. I have 1 host (host #1) set up with local storage mode,
and the guest VM is using a disk image on this local storage.
Host 2 has an identical hardware setup, but I did not configure local
storage for this host. Instead, I have the oVirt host OS installed on a
SATA HDD, and the NVMe SSD is passed through via PCI to a different guest
instance.
What I notice is that Host 2's disk performance is approximately 30% higher
than host #1's when running simple dd tests to write data to the disk. So at
first glance the NVMe PCI passthrough appears to give better performance,
which is desired, but I have not seen any oVirt documentation stating that
this is supported, or any guidelines on configuring such a setup.
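As a side note, a plain dd write can be heavily skewed by the page cache, so a
fairer comparison between the two hosts is something along these lines (paths,
sizes and counts below are just examples, not the exact commands used):
# dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=4096 oflag=direct conv=fsync
(sequential write with direct I/O, so the page cache stays out of the numbers)
# fio --name=seqwrite --filename=/mnt/test/fiotest --rw=write --bs=1M --size=4G --direct=1 --ioengine=libaio
(fio also reports latency percentiles, which usually matter more than raw MB/s)
A gap in favour of the passed-through NVMe is plausible in principle, since the
guest talks to the device directly instead of going through a disk image layer
on the host.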
Aside from the usual caveats with PCI passthrough, are there any other
gotchas when running this type of setup (PCI NVMe SSD passthrough)? I am
trying to discover any unknowns about this before I use it for real data. I
have no previous experience with this, and that is my main reason for
emailing the group.
Any insight appreciated.
Kind regards,
Tony Pearce
glance.ovirt.org planned outage: 10.08.2021 at 01:00 UTC
by Evgheni Dereveanchin
Hi everyone,
There's an outage scheduled in order to move glance.ovirt.org to new
hardware. This will happen after midnight the upcoming Tuesday between 1AM
and 3AM UTC. It will not be possible to pull images from our Glance image
registry during this period. Other services will not be affected.
If you see any CI jobs failing on Glance tests, please re-run them in the
morning, after the planned outage window is over. If issues persist, please
report them via JIRA or reach out to me personally.
--
Regards,
Evgheni Dereveanchin
[ANN] oVirt 4.4.8 Fourth Release Candidate is now available for testing
by Sandro Bonazzola
oVirt 4.4.8 Fourth Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.8
Fourth Release Candidate for testing, as of August 6th, 2021.
This update is the eighth in a series of stabilization updates to the 4.4
series.
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
<https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
<https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.4 or similar
* CentOS Stream 8
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.4 or similar
* CentOS Stream 8
* oVirt Node 4.4 based on CentOS Stream 8 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available based on CentOS Stream 8
- oVirt Node NG is already available based on CentOS Stream 8
Additional Resources:
* Read more about the oVirt 4.4.8 release highlights:
http://www.ovirt.org/release/4.4.8/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.8/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*