No virtio-win vfd oVirt 4.3.7
by Vrgotic, Marko
Dear oVirt,
I am trying to install Windows 10 on oVirt, following the procedure at:
https://www.ovirt.org/documentation/vmm-guide/chap-Installing_Windows_Vir...
Unfortunately, the option described in "Installing Windows on VirtIO-Optimized Hardware" is not available.
There is simply no "virtio-win.vfd" option when attaching a floppy. Is this expected?
I have an ISO domain, which is not hosted by the Hosted Engine but mounted additionally. Is this required in order to have the virtio-win.vfd option?
If not, please let me know what I am missing.
In the meantime, I am working on building a custom Windows 10 ISO that contains the VirtIO/VirtIO-SCSI drivers.
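For reference, below is a sketch of what I would try on the engine machine to make the .vfd appear - assuming the floppy images are shipped by the virtio-win package and the engine-iso-uploader tool is available (both are my assumptions):

# Install virtio-win on the engine machine; the package should place the
# floppy images under /usr/share/virtio-win/ (assumption on my side)
yum install -y virtio-win
ls /usr/share/virtio-win/*.vfd
# Upload the image to the ISO storage domain so it is offered when
# attaching a floppy; ISO_DOMAIN is a placeholder for your domain name
engine-iso-uploader --iso-domain=ISO_DOMAIN upload /usr/share/virtio-win/virtio-win_amd64.vfd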
Kindly awaiting your reply.
-----
kind regards/met vriendelijke groet
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
e: m.vrgotic(a)activevideo.com
w: www.activevideo.com
ovirt-host 4.4.0 Task Update system failed to execute - Running QEMU processes found, cannot upgrade Vdsm
by alexandermurashkin@msn.com
On a regular basis, there is a "Task Update system failed to execute" error in engine.log. At the same time, the ovirt-host-mgmt-ansible-check log contains a "Running QEMU processes found, cannot upgrade Vdsm" error.
Is there some configuration setting that controls this? And why is oVirt trying to update the ovirt-host package on the host at all if there are virtual machines running?
Note these engine.log entries:
2019-12-27 11:57:21,880-06 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 3 threads waiting for tasks.
2019-12-27 12:02:30,878-06 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedExecutorService-hostUpdatesChecker-Thread-4) [] EVENT_ID: ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Check for update of host harrier. Gathering Facts.
2019-12-27 12:02:45,990-06 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedExecutorService-hostUpdatesChecker-Thread-4) [] EVENT_ID: ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Check for update of host harrier. Install ovirt-host package if it isn't installed.
2019-12-27 12:02:55,024-06 ERROR [org.ovirt.engine.core.bll.host.HostUpgradeManager] (EE-ManagedExecutorService-hostUpdatesChecker-Thread-4) [] Failed to read host packages: Task Update system failed to execute:
2019-12-27 12:02:55,025-06 ERROR [org.ovirt.engine.core.bll.hostdeploy.HostUpdatesChecker] (EE-ManagedExecutorService-hostUpdatesChecker-Thread-4) [] Failed to check if updates are available for host 'harrier' with error message 'Task Update system failed to execute: '
2019-12-27 12:02:55,106-06 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedExecutorService-hostUpdatesChecker-Thread-4) [] EVENT_ID: HOST_AVAILABLE_UPDATES_FAILED(839), Failed to check for available updates on host harrier with message 'Task Update system failed to execute: '.
# rpm -q ovirt-host
ovirt-host-4.4.0-0.2.alpha.el8.x86_64
# dnf update
Updating Subscription Management repositories.
Last metadata expiration check: 0:11:31 ago on Fri Dec 27 13:16:55 2019.
Error: Running QEMU processes found, cannot upgrade Vdsm.
# dnf update ovirt-host
Updating Subscription Management repositories.
Extra Packages for Enterprise Linux 8 - x86_64 7.3 kB/s | 5.3 kB 00:00
GlusterFS 6 packages for x86_64 2.8 kB/s | 3.0 kB 00:01
GlusterFS 6 noarch packages 18 kB/s | 3.0 kB 00:00
virtio-win builds roughly matching what will be shipped in upcoming RHEL 8.2 kB/s | 3.0 kB 00:00
Copr repo for EL8_collection owned by sbonazzo 10 kB/s | 3.6 kB 00:00
Copr repo for gluster-ansible owned by sac 13 kB/s | 3.3 kB 00:00
Copr repo for ovsdbapp owned by mdbarroso 14 kB/s | 3.3 kB 00:00
Copr repo for nmstate-stable owned by nmstate 14 kB/s | 3.3 kB 00:00
Copr repo for NetworkManager-1.20 owned by networkmanager 14 kB/s | 3.3 kB 00:00
Latest oVirt master nightly snapshot 4.0 kB/s | 3.0 kB 00:00
Latest oVirt master additional nightly snapshot 5.0 kB/s | 3.0 kB 00:00
Red Hat CodeReady Linux Builder for RHEL 8 x86_64 (RPMs) 14 kB/s | 4.5 kB 00:00
Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) 27 kB/s | 4.1 kB 00:00
Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) 36 kB/s | 4.5 kB 00:00
Dependencies resolved.
Nothing to do.
Complete!
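I suspect the guard comes from a dnf plugin shipped with vdsm rather than from an engine-side setting. A minimal sketch of how I would check on the host (the plugin name 'vdsmupgrade' is my guess):

# List any dnf plugins shipped by vdsm (assumption: the upgrade guard
# is implemented as one of them)
rpm -ql vdsm | grep -i dnf-plugins
# The guard fires while QEMU processes exist; confirm with:
pgrep -af qemu
# Updating with the plugin disabled should then work, but is not
# recommended with VMs running - the usual path is to move the host to
# Maintenance so the VMs migrate away first:
dnf --disableplugin=vdsmupgrade update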
------------------------- ovirt-host-mgmt-ansible-check log --------------------------------------------
2019-12-27 12:02:45 CST - TASK [ovirt-host-upgrade : Update system] **************************************
2019-12-27 12:02:55 CST - fatal: [harrier....]: FAILED! => {"changed": false, "failures": [], "msg": "Unknown Error occured: Running QEMU processes found, cannot upgrade Vdsm.", "rc": 1, "results": []}
2019-12-27 12:02:55 CST - {
"status" : "OK",
"msg" : "",
"data" : {
"event" : "runner_on_failed",
"uuid" : "74b495fa-2534-4850-b905-7255fed1fc56",
"stdout" : "fatal: [harrier...]: FAILED! => {\"changed\": false, \"failures\": [], \"msg\": \"Unknown Error occured: Running QEMU processes found, cannot upgrade Vdsm.\", \"rc\": 1, \"results\": []}",
"counter" : 26,
"pid" : 27445,
"created" : "2019-12-27T18:02:53.245099",
"end_line" : 25,
"runner_ident" : "094f2f72-28d3-11ea-bc88-525400060930",
"start_line" : 24,
"event_data" : {
"play_pattern" : "all",
"play" : "all",
"event_loop" : null,
"task_args" : "",
"remote_addr" : "harrier....",
"res" : {
"_ansible_no_log" : false,
"changed" : false,
"results" : [ ],
"msg" : "Unknown Error occured: Running QEMU processes found, cannot upgrade Vdsm.",
"rc" : 1,
"invocation" : {
"module_args" : {
"lock_timeout" : 300,
"update_cache" : true,
"conf_file" : null,
"exclude" : [ ],
"allow_downgrade" : false,
"disable_gpg_check" : false,
"disable_excludes" : null,
"validate_certs" : true,
"state" : "latest",
"disablerepo" : [ ],
"releasever" : null,
"skip_broken" : false,
"autoremove" : false,
"download_dir" : null,
"installroot" : "/",
"install_weak_deps" : true,
"name" : [ "*" ],
"download_only" : false,
"bugfix" : false,
"list" : null,
"install_repoquery" : true,
"update_only" : false,
"disable_plugin" : [ ],
"enablerepo" : [ ],
"security" : false,
"enable_plugin" : [ ]
}
},
"failures" : [ ]
},
"pid" : 27445,
"play_uuid" : "52540006-0930-56fb-1424-000000000006",
"task_uuid" : "52540006-0930-56fb-1424-000000000012",
"task" : "Update system",
"playbook_uuid" : "758d0326-a52a-4fe9-87b3-6be0d3492784",
"playbook" : "ovirt-host-upgrade.yml",
"task_action" : "yum",
"host" : "harrier....",
"ignore_errors" : null,
"role" : "ovirt-host-upgrade",
"task_path" : "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-upgrade/tasks/main.yml:18"
},
"parent_uuid" : "52540006-0930-56fb-1424-000000000012"
}
}
I had a crash yesterday in my oVirt cluster, I just tried to add a new network. Which migration policy do you recommend?
by zhouhao@vip.friendtimes.net
I had a crash yesterday in my oVirt cluster, which is made up of 3 nodes.
I had just tried to add a new network when the whole cluster crashed.
I added a new network to my cluster, but while I was debugging the new switch, the switch was powered off; the nodes detected the network card status as down and then moved to the Non-Operational state.
At this point all 3 nodes had moved to the Non-Operational state.
All virtual machines started automatic migration. By the time I received the alert email, all virtual machines were suspended.
Within 15 minutes my new switch was powered up again. The 3 oVirt nodes became active again, but many virtual machines had become unresponsive or suspended due to the forced migration, and only a few virtual machines were brought up again because their migration had been cancelled.
After I tried to terminate the migration tasks and restarted the ovirt-engine service, I was still unable to restore most of the virtual machines, so I had to reboot the 3 oVirt nodes to recover them.
I didn't recover all the virtual machines until an hour later.
I have since changed my migration policy to "Do Not Migrate Virtual Machines".
Which migration policy do you recommend?
I'm afraid to use the cluster...
zhouhao(a)vip.friendtimes.net
Re: oVirt Node | Non-Responsive
by Strahil
Hi Vijay,
Can you share the error messages in the logs, as the screenshot shows only 'INFO' messages?
Best Regards,
Strahil Nikolov

On Dec 26, 2019 10:17, Vijay Sachdeva <vijay.sachdeva(a)indiqus.com> wrote:
>
> Dear Team,
>
>
>
> I have a host on which I am using local storage for my VMs. Now I rebooted my host because it was shifted to another location; right now in the vdsm logs I see errors like:
>
>
>
>
>
> Whereas on the host I can see that my mounted LVM volume is perfectly fine. I started the VDSM and libvirtd services, but the host is not leaving the Non-Responsive state.
>
>
>
> Any help ..!!
>
>
>
>
>
> Vijay Sachdeva
>
>
>
>
Re: oVirt and Containers
by Jan Zmeskal
Okay, so this topic is quite vast, but I believe I can at the very least
give you a few pointers and maybe others might chime in as well.
Firstly, there's the KubeVirt project. It enables you to manage both
application containers and virtual machine workloads (those that cannot be
easily containerized) in a shared environment. Another benefit is taking
advantage of the powerful Kubernetes scheduler. I myself am not too
familiar with KubeVirt, so I can only offer this high-level overview. More
info here: https://kubevirt.io/
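Just to give a flavour of it, here is a minimal sketch of the kind of
VirtualMachine manifest KubeVirt accepts, based on the demo from the
KubeVirt docs (the apiVersion and the demo cirros image are assumptions
matching the releases current at the time of writing):

# Apply a minimal KubeVirt VM backed by the demo cirros container disk
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 64M
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/cirros-container-disk-demo
EOF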
Then there is another approach, which I am more familiar with. You might
want to use oVirt as an infrastructure layer on top of which you run
containerized workloads. This is achieved by deploying either OpenShift
<https://www.openshift.com/> or the upstream project OKD
<https://www.okd.io/> in oVirt virtual machines. In that scenario,
oVirt VMs are treated by OpenShift as compute resources and are used for
scheduling containers. There are some advantages to this setup, and two
come to mind. Firstly, you can scale such an OpenShift cluster up or down
by adding/removing oVirt VMs according to your needs. Secondly, you don't
need to set up all of this yourself.
For OpenShift 3, Red Hat provides a detailed guide on how to go about
this. Part of that guide are Ansible playbooks that automate the
deployment for you, as long as you provide the required variables. More
info here:
https://docs.openshift.com/container-platform/3.11/install_config/configu...
When it comes to OpenShift 4, there are two types of deployment. There's
UPI - user provisioned infrastructure. In that scenario, you prepare all
the resources for OpenShift 4 beforehand and deploy it into that existing
environment. And there's also IPI - installer provisioned infrastructure.
This means that you just give the installer access to your environment
(e.g. the AWS public cloud) and the installer provisions resources for
itself based on recommendations and best practices. At this point, neither
UPI nor IPI is supported for oVirt. However, there is a GitHub repository
<https://github.com/sa-ne/openshift4-rhv-upi> that can guide you through a
UPI installation on oVirt and also provides automation playbooks for it.
I have personally followed the steps from the repository and deployed
OpenShift 4.2 on top of oVirt without any major issues. As far as I
remember, I might have needed to adjust the occasional variable here and
there, but the process worked.
Hope this helps!
Jan
On Tue, Dec 24, 2019 at 8:21 PM Robert Webb <rwebb(a)ropeguru.com> wrote:
> Hi Jan,
>
> Honestly, I didn't have anything specific in mind, just what is being used
> out there today and what may be more prevalent.
>
> Just getting my oVirt set up and want to know what might be recommended.
> Would probably be mostly deploying images like Home Assistant, Pi-hole,
> etc. for now.
>
> I guess if there is good oVirt direct integration, it would be nice to
> keep it all in a single interface.
>
> Thanks..
>
> ________________________________________
> From: Jan Zmeskal <jzmeskal(a)redhat.com>
> Sent: Tuesday, December 24, 2019 1:54 PM
> To: Robert Webb
> Cc: users
> Subject: Re: [ovirt-users] oVirt and Containers
>
> Hi Robert,
>
> there are different answers based on what you mean by integrating oVirt
> and containers. Do you mean:
>
> - Installing container management (Kubernetes or OpenShift) on top of
> oVirt and using oVirt as infrastructure?
> - Managing containers from oVirt interface?
> - Running VM workloads inside containers?
> - Something different?
>
> I can elaborate more based on your specific needs
>
> Best regards
> Jan
>
> On Tue, Dec 24, 2019 at 3:52 PM Robert Webb <rwebb(a)ropeguru.com> wrote:
> I was searching around to try and figure out the best way to integrate
> oVirt and containers.
>
> I have found some sites that discuss it but all of them are like 2017 and
> older.
>
> Any recommendations?
>
> Just build VM’s to host containers or is there some direct integration?
>
> Here are a couple of the old sites
>
> https://fromanirh.github.io/containers-in-ovirt.html
>
>
> https://kubernetes.io/docs/setup/production-environment/on-premises-vm/ov...
>
>
> https://www.ovirt.org/develop/release-management/features/integration/con...
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7BIW2Z3ILBS...
>
>
> --
>
> Jan Zmeskal
>
> Quality Engineer, RHV Core System
>
> Red Hat <https://www.redhat.com>
>
>
>
--
Jan Zmeskal
Quality Engineer, RHV Core System
Red Hat <https://www.redhat.com>
oVirt-Engine Down | Host Unresponsive
by Vijay Sachdeva
Dear Team,
I have one question: if the oVirt engine goes down for some reason, will it cause my host to go unresponsive and all VMs to go down?
I just faced a situation where my host was in an unresponsive state and all VMs were down. I restarted the ovirt-engine service and the host got activated.
Can anyone explain this situation?
Thanks
Vijay Sachdeva
oVirt on a dedicated server with a single NIC and IP
by alexcustos@gmail.com
Hi, my goal is to set up the latest oVirt on a dedicated server (Hetzner) with a single NIC and IP, and then use a reverse proxy and/or VPN (on the same host) to manage connections to the VMs. I expected I could add a subnet as an alias on the main interface, or set it up on a VLAN, or use a vSwitch as a last resort, but all my attempts failed. Some configurations work until the first reboot, then VDSM screws up the main interface and I lose connection to the server, or the Hosted Engine doesn't start from global maintenance. I've run out of good ideas already. I'm extremely sorry if this question has already been discussed somewhere, but I found nothing helpful. I would very much appreciate any hint on how to achieve this, or at least a confirmation that the desired configuration is viable.
oVirt Node | Non-Responsive
by Vijay Sachdeva
Dear Team,
I have a host on which I am using local storage for my VMs. Now I rebooted my host because it was shifted to another location; right now in the vdsm logs I see errors like:
Whereas on the host I can see that my mounted LVM volume is perfectly fine. I started the VDSM and libvirtd services, but the host is not leaving the Non-Responsive state.
Any help ..!!
Vijay Sachdeva
Re: [ovirt 4.2] vdsm host was shut down unexpectedly, configuration of some VMs was changed or lost when the host was powered on again and the VMs were started
by Strahil
Hi,
If the VMs are starting properly and they don't use cloud-init - then the issue is not oVirt-specific, but guest-specific (Linux/Windows, depending on the guest OS).
So you should check:
1. Does your host have any networks out of sync (host's Network tab)?
If yes - put the server into maintenance and fix the issue (host's Network tab)
2. Check each VM's configuration to see if it is defined to use cloud-init -> if yes, verify that the cloud-init service is running on the guest
3. Verify each problematic guest's network settings. If needed, set a static IP and try to ping another IP from the same subnet/VLAN (a minimal sketch follows below).
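A minimal sketch for points 2 and 3 on the guest (assuming a NetworkManager-managed CentOS guest and a connection named eth0 - adjust to your setup; the IPs are examples from this thread):

# Point 2: check whether the cloud-init service ran on the guest
systemctl status cloud-init
# Point 3: set a static IP and test reachability
nmcli con mod eth0 ipv4.addresses 20.1.1.219/24 ipv4.method manual
nmcli con up eth0
ping -c 3 20.1.1.161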
Best Regards,
Strahil Nikolov

On Dec 25, 2019 11:41, lifuqiong(a)sunyainfo.com wrote:
>>
>>
>>> Dear All:
>>> My oVirt engine manages two VDSM hosts with NFS storage on another NFS server; it worked fine for about three months.
>>> One of the hosts (host_1.3; ip: 172.18.1.3) had about 16 VMs created on it, but host_1.3 was shut down unexpectedly around 2019-12-23 16:11. When the host and VMs were restarted, half of the VMs had lost or changed some of their configuration, such as losing their IPs etc. (the VM name is 'zzh_Chain49_ACG_M' in vdsm.log)
>>>
>>> The VM zzh_Chain49_ACG_M was created via the REST API from a VM template. The template's IP is 20.1.1.161; the VM's IP was then changed to 20.1.1.219,
>>> also via the oVirt REST API. But the IP had reverted to the template's IP when the accident happened.
>>>
>>> The VM's OS is CentOS.
>>>
>>> Hope to get help from you soon. Thank you.
>>>
>>> Mark
>>> Sincerely.
[ovirt 4.2] vdsm host was shut down unexpectedly, configuration of some VMs was changed or lost when the host was powered on again and the VMs were started
by lifuqiong@sunyainfo.com
Dear All:
My oVirt engine manages two VDSM hosts with NFS storage on another NFS server; it worked fine for about three months.
One of the hosts (host_1.3; ip: 172.18.1.3) had about 16 VMs created on it, but host_1.3 was shut down unexpectedly around 2019-12-23 16:11. When the host and VMs were restarted, half of the VMs had lost or changed some of their configuration, such as losing their IPs etc. (the VM name is 'zzh_Chain49_ACG_M' in vdsm.log)
The VM zzh_Chain49_ACG_M was created via the REST API from a VM template. The template's IP is 20.1.1.161; the VM's IP was then changed to 20.1.1.219, also via the oVirt REST API. But the IP had reverted to the template's IP when the accident happened.
The VM's OS is CentOS.
Hope to get help from you soon. Thank you.
Mark
Sincerely.