Training Video or document or training session
by info@worldhostess.com
Can someone recommend a training video or some kind of step-by-step document
for the installation and administration of oVirt? Or is there someone who
would be willing to do one-on-one online training? I would be super excited
to do online training.
I have tried the installation many times, and I am sure I am making some
mistakes, because I can't get oVirt to work properly.
I have done numerous installations during the last few weeks.
Yours Sincerely,
Henni
2 years, 10 months
oVirt 4.3.10 RHEL 7.9 support
by KSNull Zero
Hello!
Does oVirt 4.3.10 support RHEL 7.9 hosts?
Is it safe to update oVirt 4.3.10 hosts to the latest RHEL 7.9?
Thank you.
2 years, 11 months
Install test lab single host HCI with plain CentOS as OS
by Gianluca Cecchi
Hello,
if I want to install a test lab environment of the single-host HCI type and
want to use CentOS 8.2 as the OS for the host, can I still use the
graphical wizard from cockpit? I see it accessing the server...
Or is it intended to be run only from an ovirt-node-ng system?
In case it is not possible, is there a command-line workflow I can use to
get the same result as the wizard's playbook run?
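For what it's worth, my rough understanding is that the wizard essentially
runs the gluster-ansible roles and then the hosted-engine deployment, so I
imagine a command-line flow would look something like this (the inventory
file name and the playbook path are only my guesses, not something I have
verified):

  # run the HCI gluster deployment playbook against a hand-written inventory
  ansible-playbook -i my_hci_inventory.yml \
      /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/hc_deployment.yml
  # then deploy the self-hosted engine on top of the gluster volume
  hosted-engine --deploy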
BTW: I want to use CentOS for this test lab because I need to install
custom packages (for performance counters and other things), so I'd prefer
the flexibility of a plain CentOS rather than the NG node.
Thanks in advance,
Gianluca
2 years, 11 months
Add Node to a single node installation with self hosted engine.
by Marcel d'Heureuse
Hi,
I have a problem with my oVirt installation. Normally we deliver oVirt as a
single-node installation, and we told our service guys that if the internal
client wants more redundancy, we need two more servers added to the
single-node installation. I thought that no one would order two new servers.
Now I have the problem of getting the system running.
The first issue is that this environment has no internet access, so I can't
install software via yum updates.
The oVirt installation runs from an oVirt Node 4.3.9 boot USB stick. All
three servers have the same software installed.
On the single node I have installed the 1.1 GB hosted-engine appliance
package to deploy the self-hosted engine without internet access. That works.
Gluster, oVirt and the self-hosted engine are running on server 01.
What should I do first?
Should I deploy GlusterFS on the new hosts first and then add the two new
hosts to the single-node installation?
Or should I deploy a new oVirt system on the two new hosts and later add the
cleaned original host to that new oVirt system?
I have not found anything in this mailing list that gives me an idea of what
I should do now.
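My current guess is that, once the two new hosts are prepared, expanding the
existing Gluster volumes from a single brick to replica 3 would involve
something like this (host names, volume name and brick paths are
placeholders, not the real ones from our setup):

  gluster peer probe server02
  gluster peer probe server03
  # grow the single-brick volume to a 3-way replica
  gluster volume add-brick engine replica 3 \
      server02:/gluster_bricks/engine/engine \
      server03:/gluster_bricks/engine/engine
  gluster volume info engine   # verify the volume is now replica 3

After that I assume the hosts can be added to the cluster from the engine UI.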
Br
Marcel
2 years, 11 months
Re: vdsm with NFS storage reboot or shutdown more than 15 minutes. with error failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
by Strahil Nikolov
That's not expected. You definitely need to check the Engine's logs (on the Hosted or Dedicated Engine system) and the vdsm logs on the host.
Usually, the first step is to "evacuate" (live migrate) all VMs from the host, and if that fails to complete in a reasonable timeframe, the maintenance is cancelled. Next it will set the host into maintenance and, most probably (not sure about this one), the engine will assign a new host as SPM.
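If you prefer the command line, you can also move the host into maintenance
through the REST API before rebooting. A rough sketch (the engine FQDN,
credentials and host id below are placeholders, not your real values):

  # trigger the "deactivate" (maintenance) action for the host
  curl -k -u admin@internal:PASSWORD \
       -H "Content-Type: application/xml" -H "Accept: application/xml" \
       -X POST -d "<action/>" \
       "https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate"
  # reboot only after the host reports status "maintenance" in the UI/API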
Best Regards,
Strahil Nikolov
On Wednesday, 28 October 2020 at 05:04:44 GMT+2, lifuqiong(a)sunyainfo.com <lifuqiong(a)sunyainfo.com> wrote:
Hi, Strahil,
Thank you for your reply.
I've tried setting the host to maintenance, and the host then rebooted immediately. What does vdsm do when setting a host to maintenance? Thank you.
Best Regards
Mark Lee
> From: Strahil Nikolov via Users
> Date: 2020-10-27 23:44
> To: users; lifuqiong(a)sunyainfo.com
> Subject: [ovirt-users] Re: vdsm with NFS storage reboot or shutdown more than 15 minutes. with error failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
> When you set a host to maintenance from the oVirt API/UI, one of the tasks is to unmount any shared storage (including the NFS share you have). Then rebooting should work like a charm.
>
> Why did you reboot without putting the node in maintenance ?
>
> P.S.: Do not confuse rebooting with fencing - the latter kills the node ungracefully in order to safely start HA VMs on another node.
>
> Best Regards,
>
> Strahil Nikolov
>
> On Tuesday, 27 October 2020 at 10:27:01 GMT+2, lifuqiong(a)sunyainfo.com <lifuqiong(a)sunyainfo.com> wrote:
>
> Hi everyone:
>
> Description of problem:
>
> When I execute the "reboot" or "shutdown -h 0" command on a vdsm server, the server takes more than 30 minutes to reboot or shut down. The screen shows '[FAILED] Failed unmounting /rhev/data-center/mnt/172.18.81.41:_home_nfs_data'.
>
> Other messages that may be useful: [] watchdog: watchdog0: watchdog did not stop! [] systemd-shutdown[5594]: Failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
>
> []systemd-shutdown[1]: Failed to wait for process: Protocol error
>
> []systemd-shutdown[5595]: Failed to remount '/' read-only: Device or resource busy
>
> []systemd-shutdown[1]: Failed to wait for process: Protocol error
>
> dracut Warning: Killing all remaining processes
>
> dracut Warning: Killing all remaining processes
>
> Version-Release number of selected component (if applicable):
>
> Software Version:4.2.8.2-1.el7
>
> OS: CentOS Linux release 7.5.1804 (Core)
>
> How reproducible:
>
> 100%
>
> Steps to Reproduce:
>
> 1. My test environment is one oVirt engine (172.17.81.17) with 4 vdsm servers. When I execute the "reboot" command on one of the vdsm servers (172.17.99.105), the server takes more than 30 minutes to reboot.
> ovirt-engine: 172.17.81.17/16
>
> vdsm: 172.17.99.105/16
>
> nfs server: 172.17.81.14/16
>
> Actual results:
>
> As above: the server takes more than 30 minutes to reboot.
>
> Expected results:
>
> The server should reboot in a short time.
>
> What I have done:
>
> I captured packets on the NFS server while the vdsm host was rebooting, and found that vdsm keeps sending NFS packets to the NFS server in a loop. Below are some log entries from when I rebooted the vdsm host 172.17.99.105 at 2020-10-26 22:12:34. Some conclusions:
>
> 1. vdsm.log says: 2020-10-26 22:12:34,461+0800 ERROR (check/loop) [storage.Monitor] Error checking path /rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/metadata
>
> 2. sanlock.log says: 2020-10-26 22:13:05 1454 [3301]: s1 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/ids
>
> 3. There are no other messages relevant to this issue. The logs are in the attachment. I would really appreciate it if anyone could help me. Thank you.
>
> _______________________________________________
>
> Users mailing list -- users(a)ovirt.org
>
> To unsubscribe send an email to users-leave(a)ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/C2GATAD35SU...
>
2 years, 11 months
[ANN] oVirt 4.4.3 Seventh Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.3 Seventh Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.3
Seventh Release Candidate for testing, as of October 29th, 2020.
This update is the third in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA should not require re-doing these steps if
they were already performed while upgrading from 4.4.1 to 4.4.2 GA. They
only need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If you have your root file system on a multipath device on your hosts, you
should be aware that after upgrading from 4.4.1 to 4.4.3 the host may enter
emergency mode.
In order to prevent this, be sure to upgrade oVirt Engine first, then on
your hosts (see the command sketch after this list):
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
   (if rebooted).
2. Reboot.
3. Upgrade to 4.4.3 (redeploy in case of already being on 4.4.3).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
   place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
   rebuild the initramfs with the correct filter configuration.
6. Reboot.
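As a rough consolidated sketch of the host-side commands (the lvm.conf edit
is only illustrative -- the exact filter line to remove differs per host, and
the upgrade itself can also be driven from the engine UI):

  # 1. While still on 4.4.1 (or in emergency mode), remove the existing
  #    "filter = ..." line from /etc/lvm/lvm.conf, then reboot.
  vi /etc/lvm/lvm.conf
  reboot
  # 3. Upgrade the host to 4.4.3.
  # 4. Confirm a new lvm filter is in place:
  vdsm-tool config-lvm-filter
  # 5. Only if not using oVirt Node, rebuild the initramfs with multipath:
  dracut --force --add multipath
  # 6. Reboot again.
  reboot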
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
  on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
  the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
You may hit the following dependency issue:
Problem: cannot install the best update candidate for package
ovirt-engine-metrics-1.4.1.1-1.el8.noarch
- nothing provides rhel-system-roles >= 1.0-19 needed by
ovirt-engine-metrics-1.4.2-1.el8.noarch
In order to get rhel-system-roles >= 1.0-19 you need the
https://buildlogs.centos.org/centos/8/virt/x86_64/ovirt-44/ repo, since that
package can be promoted to release only at 4.4.3 GA.
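One possible way to enable that repository (a sketch; it assumes the dnf
config-manager plugin from dnf-plugins-core is installed -- alternatively,
drop a .repo file with the same baseurl under /etc/yum.repos.d/):

  dnf install -y dnf-plugins-core
  dnf config-manager --add-repo \
      https://buildlogs.centos.org/centos/8/virt/x86_64/ovirt-44/
  dnf upgrade ovirt-engine-metrics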
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.3 release highlights:
http://www.ovirt.org/release/4.4.3/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.3/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
2 years, 11 months
Hosted Engine install via cockpit - proxy issue
by simon@justconnect.ie
I am installing oVirt in a closed environment where internet access is controlled by proxies.
This works until the hosted engine install via cockpit, which fails to complete as it appears to require direct internet access to the repositories.
The only workaround I have found is to ssh onto the engine ‘mid install’ and add the proxy address to /etc/dnf/dnf.conf. After doing this the install is successful.
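For reference, the line I add to /etc/dnf/dnf.conf on the engine VM looks like this (the proxy host and port are examples, not my real values):

  proxy=http://proxy.example.com:3128
  # dnf also honours proxy_username= and proxy_password= if authentication is required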
Am I missing something or does this type of install require unfettered internet access?
2 years, 11 months
when configuring multi path logical network selection area is empty, hence not able to configure multipathing
by dhanaraj.ramesh@yahoo.com
Hi team,
I have a 4-node cluster. On each node I configured 2 dedicated 10 GbE NICs,
each with its own subnet (NIC 1 = 10.10.10.0/24, NIC 2 = 10.10.20.0/24), and
on the array side I configured 2 targets on the 10.10.10.0/24 subnet and
another 2 targets on the 10.10.20.0/24 subnet. Without any errors I can see
all four paths and I am able to mount the iSCSI LUNs on all 4 nodes.
However, when I try to configure multipathing at the Data Center level I can
see all the paths but not the logical networks; the selection area stays
empty, although I configured logical networks for both NICs with dedicated
names, ISCSI1 and ISCSI2. These logical networks are visible and green at
the host network level, with no errors; they just carry L2/IP configuration.
Am I missing something here? What else should I do to enable multipathing?
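For completeness, this is roughly how I verified the individual paths from
one of the hosts (the portal IPs below are examples, not necessarily the
exact array addresses):

  iscsiadm -m discovery -t sendtargets -p 10.10.10.1   # discover targets on the first subnet
  iscsiadm -m discovery -t sendtargets -p 10.10.20.1   # discover targets on the second subnet
  iscsiadm -m session -P 3    # confirm all four sessions are logged in
  multipath -ll               # confirm each LUN exposes four paths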
2 years, 11 months