Re: How to debug "Non Operational" host
by Gervais de Montbrun
Hi Paul.
I've updated the /etc/hosts file on the engine and restarted the hosted-engine. It seems that the "Could not associate brick" errors have stopped. Thank you!
No change in my issue though ☹️.
It looks like glusterd is starting up OK. I did force-start the bricks on ovirt1.
[root(a)ovirt1.dgi ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/glusterd.service.d
└─99-cpu.conf
Active: active (running) since Wed 2021-11-24 16:19:25 UTC; 1h 42min ago
Docs: man:glusterd(8)
Process: 2321 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 2354 (glusterd)
Tasks: 92 (limit: 1648201)
Memory: 63.7G
CPU: 1h 17min 42.666s
CGroup: /glusterfs.slice/glusterd.service
├─2354 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
├─3247 /usr/sbin/glusterfsd -s ovirt1-storage.dgi --volfile-id vmstore.ovirt1-storage.dgi.gluster_bricks-vmstore-vmstore -p /var/run/gluster/vols/vmstore/ovirt1-storage.dgi-gluster_bricks-vmstore-vmstore.pid -S /var/run/gluster/fb93ffff591764c8.socket --brick-name /gluster_bricks/vmstore/vmstore -l /var/log/glusterfs/bricks/gluster_b>
├─3275 /usr/sbin/glusterfsd -s ovirt1-storage.dgi --volfile-id engine.ovirt1-storage.dgi.gluster_bricks-engine-engine -p /var/run/gluster/vols/engine/ovirt1-storage.dgi-gluster_bricks-engine-engine.pid -S /var/run/gluster/66ebd47080b528d1.socket --brick-name /gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-en>
└─3287 /usr/sbin/glusterfs -s localhost --volfile-id shd/engine -p /var/run/gluster/shd/engine/engine-shd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/c9b8692f3e532562.socket --xlator-option *replicate*.node-uuid=fdf2cf13-c2c5-4afa-8d73-76c50c69122a --process-name glustershd --client-pid=-6
Nov 24 16:19:22 ovirt1.dgi systemd[1]: Starting GlusterFS, a clustered file-system server...
Nov 24 16:19:25 ovirt1.dgi systemd[1]: Started GlusterFS, a clustered file-system server.
Nov 24 16:19:28 ovirt1.dgi glusterd[2354]: [2021-11-24 16:19:28.909836] C [MSGID: 106002] [glusterd-server-quorum.c:355:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume engine. Stopping local bricks.
Nov 24 16:19:28 ovirt1.dgi glusterd[2354]: [2021-11-24 16:19:28.910745] C [MSGID: 106002] [glusterd-server-quorum.c:355:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume vmstore. Stopping local bricks.
Nov 24 16:19:31 ovirt1.dgi glusterd[2354]: [2021-11-24 16:19:31.925206] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume engine. Starting local bricks.
Nov 24 16:19:31 ovirt1.dgi glusterd[2354]: [2021-11-24 16:19:31.938507] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmstore. Starting local bricks.
The bricks are showing all green in the UI at least, but they never seem to catch up to the point of showing no unsynced entries.
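For reference, heal state can also be checked from the CLI on any of the nodes (volume names as in the output above; these obviously need the live cluster):

```shell
# Peer connectivity first -- the "quorum lost/regained" messages above
# suggest the peers were briefly unreachable during startup:
gluster peer status

# Entries still pending heal, per brick:
gluster volume heal engine info summary
gluster volume heal vmstore info summary
```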
As for mounting the bricks, they are mounted according to what is in /etc/fstab.
[root(a)ovirt1.dgi ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Feb 17 20:17:28 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/onn/ovirt-node-ng-4.4.9-0.20211026.0+1 / xfs defaults,discard 0 0
UUID=07e9dfea-9710-483d-bc63-aea7dbb5801d /boot xfs defaults 0 0
/dev/mapper/onn-home /home xfs defaults,discard 0 0
/dev/mapper/onn-tmp /tmp xfs defaults,discard 0 0
/dev/mapper/onn-var /var xfs defaults,discard 0 0
/dev/mapper/onn-var_log /var/log xfs defaults,discard 0 0
/dev/mapper/onn-var_log_audit /var/log/audit xfs defaults,discard 0 0
/dev/mapper/onn-swap none swap defaults 0 0
UUID=4e2c88e4-2bae-4b41-bb62-631820435845 /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
UUID=ad938b6e-44d2-492a-a313-c4d0c0608e09 /gluster_bricks/vmstore xfs inode64,noatime,nodiratime 0 0
[root(a)ovirt1.dgi ~]# mount |grep gluster_bricks
/dev/mapper/gluster_vg_sdb-gluster_lv_engine on /gluster_bricks/engine type xfs (rw,noatime,nodiratime,seclabel,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=3072,noquota)
/dev/mapper/gluster_vg_sdb-gluster_lv_vmstore on /gluster_bricks/vmstore type xfs (rw,noatime,nodiratime,seclabel,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=3072,noquota)
What creates this "extra" mount?
[root(a)ovirt1.dgi ~]# mount | grep storage
ovirt1-storage.dgi:/engine on /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_engine type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
I've noticed that on my working servers, I see this output:
[root(a)ovirt2.dgi ~]# mount | grep storage
ovirt1-storage.dgi:/engine on /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_engine type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
ovirt1-storage.dgi:/vmstore on /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_vmstore type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
There is obviously something not mounting properly on ovirt1. I don't see how this can be a network issue, as the storage for the hosted engine is working OK.
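To take the engine out of the picture, one thing worth trying is the missing vmstore mount by hand, the way vdsm would do it. As far as I can tell (an assumption from the working host's output above, not the exact vdsm code path), the mount-point directory under /rhev/data-center/mnt/glusterSD/ is just the "server:/volume" spec with "/" replaced by "_":

```shell
# Derive the mount point the same way the working host's output suggests:
spec="ovirt1-storage.dgi:/vmstore"
mnt="/rhev/data-center/mnt/glusterSD/${spec//\//_}"
echo "$mnt"    # /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_vmstore

# Then, on ovirt1 (needs the live cluster, so commented out here):
#   mkdir -p "$mnt"
#   mount -t glusterfs "$spec" "$mnt"
# If it fails, the reason usually lands in the matching FUSE client log under
# /var/log/glusterfs/ (file named after the mount point).
```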
I truly appreciate the help. Any other ideas or logs/places to check?
Cheers,
Gervais
> On Nov 24, 2021, at 4:03 AM, Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk> wrote:
>
>
> Hi Gervais,
>
> The engine doesn't need to be able to ping the IP address, just needs to know what it is so adding them to the /etc/hosts file should work.
>
> Also, I would check ovirt1, is it mounting the brick, what does "systemctl status glusterd" show, what are the logs in /var/log/gluster ?
>
>
> Regards,
>
> Paul S.
> From: Gervais de Montbrun <gervais(a)demontbrun.com <mailto:gervais@demontbrun.com>>
> Sent: 24 November 2021 01:16
> To: Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk <mailto:P.Staniforth@leedsbeckett.ac.uk>>
> Cc: Vojtech Juranek <vjuranek(a)redhat.com <mailto:vjuranek@redhat.com>>; users(a)ovirt.org <mailto:users@ovirt.org> <users(a)ovirt.org <mailto:users@ovirt.org>>
> Subject: Re: [ovirt-users] How to debug "Non Operational" host
>
> Caution External Mail: Do not click any links or open any attachments unless you trust the sender and know that the content is safe.
> Hi Paul,
>
> I don't quite get what you mean by this:
>
>> assuming you have a storage network for the gluster nodes, the engine needs to be able to resolve the host addresses
>
>
> The storage network is on 10GB network cards and plugged into a stand-alone switch. The hosted-engine is not on the same network at all and can not ping the IPs associated with those cards. Are you saying that it needs access to that network, or that it needs to be able to resolve the IPs? I can add them to the /etc/hosts file on the ovirt-engine, or do I need to reconfigure my setup? It was working as currently configured before applying the update.
>
> I have no idea why the ovirt1 server is not showing up with the fqdn. I set up all the servers the same way. It's been like that since I set things up. I have looked for where this might be corrected, but can't find it. Ideas?
>
> The yellow bricks... I can force start them (and I have in the past), but now they turn green for a few minutes and then return to red.
>
> Cheers,
> Gervais
>
>
>
>> On Nov 23, 2021, at 12:57 PM, Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk <mailto:P.Staniforth@leedsbeckett.ac.uk>> wrote:
>>
>> Hello Gervais,
>>
>> is the brick mounted on ovirt1 ? can you mount it using the settings in /etc/fstab ?
>>
>> The hostname is not using a FQDN for ovirt1
>>
>> assuming you have a storage network for the gluster nodes, the engine needs to be able to resolve the host addresses
>> ovirt1-storage.dgi
>> ovirt2-storage.dgi
>> ovirt3-storage.dgi
>>
>> So that it can assign them to the correct network.
>>
>> When the volume is showing yellow you can force restart them again from the GUI.
>>
>> Regards,
>>
>> Paul S.
>> From: Gervais de Montbrun <gervais(a)demontbrun.com <mailto:gervais@demontbrun.com>>
>> Sent: 23 November 2021 13:42
>> To: Vojtech Juranek <vjuranek(a)redhat.com <mailto:vjuranek@redhat.com>>
>> Cc: users(a)ovirt.org <mailto:users@ovirt.org> <users(a)ovirt.org <mailto:users@ovirt.org>>
>> Subject: [ovirt-users] Re: How to debug "Non Operational" host
>>
>>
>> Hi Vojta,
>>
>> Thanks for the help.
>>
>> I tried to activate my server this morning and captured the logs from vdsm.log and engine.log. They are attached.
>>
>> Something went awry with my gluster (I think) as it is showing that the bricks on the affected server (ovirt1) are not mounted:
>> <PastedGraphic-2.png>
>>
>> <PastedGraphic-3.png>
>>
>> <PastedGraphic-4.png>
>>
>>
>> The networking looks fine.
>>
>> Cheers,
>> Gervais
>>
>>
>>
>> > On Nov 23, 2021, at 3:37 AM, Vojtech Juranek <vjuranek(a)redhat.com <mailto:vjuranek@redhat.com>> wrote:
>> >
>> > On Tuesday, 23 November 2021 03:36:07 CET Gervais de Montbrun wrote:
>> >> Hi Folks,
>> >>
>> >> I did a minor upgrade on the first host in my cluster and now it is
>> >> reporting "Non Operational"
>> >>
>> >> This is what yum showed as updatable. However, I did the update through the
>> >> ovirt-engine web interface.
>> >>
>> >> ovirt-node-ng-image-update.noarch
>> >> 4.4.9-1.el8
>> >> ovirt-4.4 Obsoleting Packages
>> >> ovirt-node-ng-image-update.noarch
>> >> 4.4.9-1.el8
>> >> ovirt-4.4 ovirt-node-ng-image-update.noarch
>> >> 4.4.8.3-1.el8
>> >> @System ovirt-node-ng-image-update.noarch
>> >> 4.4.9-1.el8
>> >> ovirt-4.4
>> >> ovirt-node-ng-image-update-placeholder.noarch
>> >> 4.4.8.3-1.el8
>> >> @System
>> >>
>> >> How do I start to debug this issue?
>> >
>> > Check engine log in /var/log/ovirt-engine/engine.log on the machine where
>> > engine runs
>> >
>> >>
>> >>
>> >> Also, it looks like the vmstore brick is not mounting on that host. I only
>> >> see the engine mounted.
>> >
>> >
>> > Could you also attach relevant part of vdsm log (/var/log/vdsm/vdsm.log) from
>> > the machine where mount failed? You should see some mount related error there.
>> > This could be also a reason why hosts become non-operational.
>> >
>> > Thanks
>> > Vojta
>> >
>> >> Broken server:
>> >> root(a)ovirt1.dgi <mailto:root@ovirt1.dgi> log]# mount | grep storage
>> >> ovirt1-storage.dgi:/engine on
>> >> /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_engine type
>> >> fuse.glusterfs
>> >> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=
>> >> 131072) Working server:
>> >> [root(a)ovirt2.dgi <mailto:root@ovirt2.dgi> ~]# mount | grep storage
>> >> ovirt1-storage.dgi:/engine on
>> >> /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_engine type
>> >> fuse.glusterfs
>> >> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=
>> >> 131072) ovirt1-storage.dgi:/vmstore on
>> >> /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_vmstore type
>> >> fuse.glusterfs
>> >> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=
>> >> 131072)
>> >>
>> >>
>> >> I tried putting the server into maintenance mode and running a reinstall on
>> >> it. No change. I'd really appreciate some help sorting this out.
>> >>
>> >> Cheers,
>> >> Gervais
>> >
>> > _______________________________________________
>> > Users mailing list -- users(a)ovirt.org <mailto:users@ovirt.org>
>> > To unsubscribe send an email to users-leave(a)ovirt.org <mailto:users-leave@ovirt.org>
>> > Privacy Statement: https://www.ovirt.org/privacy-policy.html <https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ovi...>
>> > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ <https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ovi...>
>> > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/S6C7R6LUTJX... <https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.o...>
>>
Re: Creating VMs from templates with their own disks
by Sina Owolabi
Hello
Sorry for the late reply, work has been crazy.
This doesn't seem to work as advertised, or I am still not getting it.
Either way I would really appreciate some help and guidance.
Steps I have attempted:
1. Create and configure VM as I want it to be (disk, partitioning, etc).
2. Shutdown the vm, create a template from it.
Cloning manually:
Cloning fails with this message:
Error while executing action:
clone00.domain.tld:
- Cannot add VM. One or more provided storage domains are either not in
active status or of an illegal type for the requested operation.
I can't modify the storage allocation, and the disk it's attempting to use is
the disk of the source VM.
Manual template install:
Choosing to install manually with a template requires me to add a new disk,
and to boot off the CD (defined in the template) and manually set things
up. This I do not wish to do, because I would rather automate.
Cloning with ansible, defining the cloud-init script and using the template:
VM is successfully created, but logging in with remote-viewer drops me into
the installation process (setting up from the attached ISO). Which is also
not desired.
Please help me with what I am doing wrong.
Again, the goal is to have the VM set up with its own credentials.
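For reference, the ansible task is shaped roughly like this (names and auth are placeholders); my understanding from the ovirt_vm docs is that clone: true is what should force a full copy of the template disk instead of a thin, dependent VM:

```yaml
# Sketch only -- cluster/template names and auth are placeholders.
- name: Create an independent VM from the template
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: clone00
    cluster: mycluster
    template: mytemplate
    clone: true          # copy the template disk instead of linking to it
    state: running
    cloud_init:
      host_name: clone00.domain.tld
```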
On Thu, Nov 18, 2021 at 9:24 AM Staniforth, Paul <
P.Staniforth(a)leedsbeckett.ac.uk> wrote:
> Hello,
> VMs are created from a template; if none is specified, the blank
> template is used. If a particular template is used, the VM can be a thin,
> dependent VM: the VM's disk is linked to the template's disk and just carries
> the changes made in the VM's disk (this is quicker and uses less space if you
> have a lot of disks). The other option is to create a cloned VM, which copies
> the template's disk to the VM so it's no longer dependent.
>
> In the ansible documentation look for the clone option.
>
> https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_vm_...
>
>
> https://www.ovirt.org/documentation/virtual_machine_management_guide/inde...
> Virtual Machine Management Guide
> <https://www.ovirt.org/documentation/virtual_machine_management_guide/inde...>
> oVirt is a free open-source virtualization solution for your entire
> enterprise
> www.ovirt.org
> For cloud-init, does the cloud-init package need to be installed on the
> template image?
>
>
> Regards,
>
> Paul S.
> ------------------------------
> *From:* notify.sina(a)gmail.com <notify.sina(a)gmail.com>
> *Sent:* 18 November 2021 07:34
> *To:* users(a)ovirt.org <users(a)ovirt.org>
> *Subject:* [ovirt-users] Re: Creating VMs from templates with their own
> disks
>
>
> I'm sorry, I am trying to wrap my head around this, but it is difficult.
>
> I just want to be able to stand up new vms, with their own storage,
> similar to how I can with plain vanilla KVM, with a template or without,
> maybe even with a kickstart, and hopefully with ansible.
>
> Right now, any time I try to create a VM using the template (with
> ansible), it gets the template disk attached, and from the console I see
> the new VM is named after the VM I created the template from. The cloud-init
> script that is meant to rename the VM and join it to IPA is ignored.
>
> If I create storage for the VM before creating it, both the template
> storage and the new storage are attached to the VM, which is also
> confusing. Cloud-init is also ignored.
>
> I didn't think something this straightforward would end up needing a shift
> in thinking about how vms are created, especially with a product that's
> more than likely using kvm under the hood.
>
> I would appreciate some straightforward guiding steps, if I can get them.
> Really. It's been a frustrating week.
>
>
> > On 2021-11-17 13:50, Sina Owolabi wrote:
> >
> >
> > You can create a template with no disk, then VM's created from that
> > template will also have no disk. Then add a new disk to the VM after you
> > create it. This is how the default blank template works. You can also
> > create a template with an empty disk, then every VM created will also
> > get an empty disk by default. You can always rename disks as well.
>
>
--
cordially yours,
Sina Owolabi
+2348176469061
Re: How to debug "Non Operational" host
by Gervais de Montbrun
Hi Paul,
I don't quite get what you mean by this:
> assuming you have a storage network for the gluster nodes, the engine needs to be able to resolve the host addresses
The storage network is on 10GB network cards and plugged into a stand-alone switch. The hosted-engine is not on the same network at all and can not ping the IPs associated with those cards. Are you saying that it needs access to that network, or that it needs to be able to resolve the IPs? I can add them to the /etc/hosts file on the ovirt-engine, or do I need to reconfigure my setup? It was working as currently configured before applying the update.
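i.e. something like this on the engine (addresses invented here for illustration):

```
# /etc/hosts on the hosted-engine VM -- storage-network names
10.0.0.11  ovirt1-storage.dgi
10.0.0.12  ovirt2-storage.dgi
10.0.0.13  ovirt3-storage.dgi
```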
I have no idea why the ovirt1 server is not showing up with the fqdn. I set up all the servers the same way. It's been like that since I set things up. I have looked for where this might be corrected, but can't find it. Ideas?
The yellow bricks... I can force start them (and I have in the past), but now they turn green for a few minutes and then return to red.
Cheers,
Gervais
> On Nov 23, 2021, at 12:57 PM, Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk> wrote:
>
> Hello Gervais,
>
> is the brick mounted on ovirt1 ? can you mount it using the settings in /etc/fstab ?
>
> The hostname is not using a FQDN for ovirt1
>
> assuming you have a storage network for the gluster nodes, the engine needs to be able to resolve the host addresses
> ovirt1-storage.dgi
> ovirt2-storage.dgi
> ovirt3-storage.dgi
>
> So that it can assign them to the correct network.
>
> When the volume is showing yellow you can force restart them again from the GUI.
>
> Regards,
>
> Paul S.
> From: Gervais de Montbrun <gervais(a)demontbrun.com <mailto:gervais@demontbrun.com>>
> Sent: 23 November 2021 13:42
> To: Vojtech Juranek <vjuranek(a)redhat.com <mailto:vjuranek@redhat.com>>
> Cc: users(a)ovirt.org <mailto:users@ovirt.org> <users(a)ovirt.org <mailto:users@ovirt.org>>
> Subject: [ovirt-users] Re: How to debug "Non Operational" host
>
>
> Hi Vojta,
>
> Thanks for the help.
>
> I tried to activate my server this morning and captured the logs from vdsm.log and engine.log. They are attached.
>
> Something went awry with my gluster (I think) as it is showing that the bricks on the affected server (ovirt1) are not mounted:
> <PastedGraphic-2.png>
>
> <PastedGraphic-3.png>
>
> <PastedGraphic-4.png>
>
>
> The networking looks fine.
>
> Cheers,
> Gervais
>
>
>
> > On Nov 23, 2021, at 3:37 AM, Vojtech Juranek <vjuranek(a)redhat.com <mailto:vjuranek@redhat.com>> wrote:
> >
> > On Tuesday, 23 November 2021 03:36:07 CET Gervais de Montbrun wrote:
> >> Hi Folks,
> >>
> >> I did a minor upgrade on the first host in my cluster and now it is
> >> reporting "Non Operational"
> >>
> >> This is what yum showed as updatable. However, I did the update through the
> >> ovirt-engine web interface.
> >>
> >> ovirt-node-ng-image-update.noarch
> >> 4.4.9-1.el8
> >> ovirt-4.4 Obsoleting Packages
> >> ovirt-node-ng-image-update.noarch
> >> 4.4.9-1.el8
> >> ovirt-4.4 ovirt-node-ng-image-update.noarch
> >> 4.4.8.3-1.el8
> >> @System ovirt-node-ng-image-update.noarch
> >> 4.4.9-1.el8
> >> ovirt-4.4
> >> ovirt-node-ng-image-update-placeholder.noarch
> >> 4.4.8.3-1.el8
> >> @System
> >>
> >> How do I start to debug this issue?
> >
> > Check engine log in /var/log/ovirt-engine/engine.log on the machine where
> > engine runs
> >
> >>
> >>
> >> Also, it looks like the vmstore brick is not mounting on that host. I only
> >> see the engine mounted.
> >
> >
> > Could you also attach relevant part of vdsm log (/var/log/vdsm/vdsm.log) from
> > the machine where mount failed? You should see some mount related error there.
> > This could be also a reason why hosts become non-operational.
> >
> > Thanks
> > Vojta
> >
> >> Broken server:
> >> root(a)ovirt1.dgi <mailto:root@ovirt1.dgi> log]# mount | grep storage
> >> ovirt1-storage.dgi:/engine on
> >> /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_engine type
> >> fuse.glusterfs
> >> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=
> >> 131072) Working server:
> >> [root(a)ovirt2.dgi <mailto:root@ovirt2.dgi> ~]# mount | grep storage
> >> ovirt1-storage.dgi:/engine on
> >> /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_engine type
> >> fuse.glusterfs
> >> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=
> >> 131072) ovirt1-storage.dgi:/vmstore on
> >> /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_vmstore type
> >> fuse.glusterfs
> >> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=
> >> 131072)
> >>
> >>
> >> I tried putting the server into maintenance mode and running a reinstall on
> >> it. No change. I'd really appreciate some help sorting this out.
> >>
> >> Cheers,
> >> Gervais
> >
P2V Import Not Bootable
by mark.b.burgess@gmail.com
Hi,
we are doing a P2V import of an Oracle Linux 6 physical host to oVirt 4.2.1. The process we are using is as follows:
1. Use VMWare Converter to perform the P2V migration into VSphere (we tried the virt-p2v but it was unworkable in this environment).
2. Export the virtual machine from VMWare to OVF.
3. Convert the OVF to an OVA using ovftool on Linux.
4. Import the OVA into KVM.
The OVA import process completes successfully with no errors in the log. When trying to boot the guest, the message "BdsDxe: No bootable option or device was found" is displayed on the console. The source physical host boots via UEFI and uses a GPT partition table. I have tried all sorts of combinations of BIOS settings and virtual disk types (VirtIO, SATA), but nothing seems to work. The disks can be found when booting into a rescue DVD, and the root file system can be mounted in rescue mode. Everything looks OK in the OVA import log.
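Since the rescue DVD can see the disks, one thing we may try is recreating the EFI boot entry from there. This is only a sketch; the device names and the grub EFI path are guesses that would need adapting to the actual layout:

```shell
# From the rescue environment, with the guest's root mounted at /mnt/sysimage:
mount /dev/sda1 /mnt/sysimage/boot/efi            # EFI system partition (adjust)
for d in dev proc sys; do mount --bind /$d /mnt/sysimage/$d; done
chroot /mnt/sysimage

# List existing boot entries, then add one pointing at the distro's grub EFI
# binary (path below is typical for EL6-family installs -- verify on the ESP):
efibootmgr -v
efibootmgr -c -d /dev/sda -p 1 -L "Oracle Linux" -l '\EFI\redhat\grub.efi'
```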
Software versions below:
KVM Version: 4.2.1 - 11.el7
LIBVIRT Version: libvirt-5.7.0-31.el7
VDSM Version: vdsm-4.30.46-1.0.6.el7
I'm at a bit of a loss as to how to get this going, as there seems to be something fundamental not working for this type of guest.
Any suggestions or help would be much appreciated.
Regards,
Mark
How to debug "Non Operational" host
by Gervais de Montbrun
Hi Folks,
I did a minor upgrade on the first host in my cluster and now it is reporting "Non Operational"
This is what yum showed as updatable. However, I did the update through the ovirt-engine web interface.
ovirt-node-ng-image-update.noarch 4.4.9-1.el8 ovirt-4.4
Obsoleting Packages
ovirt-node-ng-image-update.noarch 4.4.9-1.el8 ovirt-4.4
ovirt-node-ng-image-update.noarch 4.4.8.3-1.el8 @System
ovirt-node-ng-image-update.noarch 4.4.9-1.el8 ovirt-4.4
ovirt-node-ng-image-update-placeholder.noarch 4.4.8.3-1.el8 @System
How do I start to debug this issue?
Also, it looks like the vmstore brick is not mounting on that host. I only see the engine mounted.
Broken server:
root(a)ovirt1.dgi log]# mount | grep storage
ovirt1-storage.dgi:/engine on /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_engine type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
Working server:
[root(a)ovirt2.dgi ~]# mount | grep storage
ovirt1-storage.dgi:/engine on /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_engine type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
ovirt1-storage.dgi:/vmstore on /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_vmstore type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
I tried putting the server into maintenance mode and running a reinstall on it. No change. I'd really appreciate some help sorting this out.
Cheers,
Gervais
oVirt installation via PXE
by Jean-Louis Dupond
Hi,
We would like to install oVirt Node via some automated way.
I think the best option to do this is via a PXE boot, and to run the
installation from there.
But I would like to know how to customize the installation of oVirt
Node so that no manual intervention is needed.
Is it just a matter of creating a kickstart file based on the
'interactive-defaults.ks' kickstart in the ISO and making the
'ovirt-node-ng-image.squashfs.img' file accessible somewhere?
Are there things that should not be used in the kickstart file? For
example disk settings, as these are already built into oVirt Node?
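What I have in mind is roughly the following, patterned on
interactive-defaults.ks (the URL and root password are placeholders,
and this is untested):

```
# node.ks -- minimal sketch
liveimg --url=http://pxeserver.example.com/ovirt-node-ng-image.squashfs.img
clearpart --all --initlabel
autopart --type=thinp
rootpw --plaintext changeme
reboot

%post --erroronfail
nodectl init
%end
```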
Other ideas are also welcome :)
Thanks
Jean-Louis
3 years, 4 months
host non responsive
by Nathanaël Blanchet
Hello,
Some of my hosts are, for various reasons, in a "non responsive"
state. Fortunately, critical VMs continue to run on them.
I didn't manage to recover the "up" state, and the only solution will be
to stop and fence the hosts at a predefined date.
While waiting for that date, I stopped a targeted VM for maintenance over
SSH (with init 0) and now I want to restart it on a healthy host. This VM
is oVirt highly available, with a lease on a storage domain.
How can I tell the engine that the VM lease is no longer held by the
non-responsive host, so that the VM can be started elsewhere?
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
upgrade dependency issues
by John Florian
I recently upgraded my engine to 4.4.9 following the usual engine-setup
procedure. Once that was done, I tried to do a dnf upgrade to get
everything else and found:
Error:
Problem: cannot install the best update candidate for package
ovirt-engine-metrics-1.4.3-1.el8.noarch
- nothing provides rhel-system-roles >= 1.7.2-1 needed by
ovirt-engine-metrics-1.4.4-1.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest'
to use not only best candidate packages)
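As an aside, when dnf reports "nothing provides X", it can help to check whether any enabled repository carries the missing package at all (the version string here is just copied from the error above; the output depends entirely on your repo configuration):

```
# Which enabled repo, if any, provides the missing dependency?
dnf repoquery --whatprovides 'rhel-system-roles >= 1.7.2-1'

# Double-check which repositories are actually enabled
dnf repolist --enabled
```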
I was also trying to update my hosts through the GUI, and that was
failing too. I couldn't find the right log with good details as to why,
so I just ran dnf upgrade from the shell to see what was reported, and found:
Error:
Problem 1: cannot install the best update candidate for package
ovirt-host-dependencies-4.4.8-1.el8.x86_64
- nothing provides rsyslog-openssl needed by
ovirt-host-dependencies-4.4.9-2.el8.x86_64
Problem 2: cannot install the best update candidate for package
ovirt-hosted-engine-setup-2.5.3-1.el8.noarch
- nothing provides ovirt-host >= 4.5.0 needed by
ovirt-hosted-engine-setup-2.5.4-1.el8.noarch
- nothing provides vdsm-python >= 4.50 needed by
ovirt-hosted-engine-setup-2.5.4-1.el8.noarch
Problem 3: cannot install the best update candidate for package
vdsm-4.40.80.5-1.el8.x86_64
- nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by
vdsm-4.40.90.3-1.el8.x86_64
Problem 4: package ovirt-host-4.4.9-2.el8.x86_64 requires
ovirt-host-dependencies = 4.4.9-2.el8, but none of the providers can be
installed
- cannot install the best update candidate for package
ovirt-host-4.4.8-1.el8.x86_64
- nothing provides rsyslog-openssl needed by
ovirt-host-dependencies-4.4.9-2.el8.x86_64
Problem 5: package vdsm-hook-fcoe-4.40.90.3-1.el8.noarch requires
vdsm, but none of the providers can be installed
- package vdsm-4.40.80.5-1.el8.x86_64 requires vdsm-http =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-4.40.17-1.el8.x86_64 requires vdsm-http =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-4.40.18-1.el8.x86_64 requires vdsm-http =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-4.40.19-1.el8.x86_64 requires vdsm-http =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-4.40.20-1.el8.x86_64 requires vdsm-http =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-4.40.21-1.el8.x86_64 requires vdsm-http =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-4.40.22-1.el8.x86_64 requires vdsm-http =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-4.40.26.3-1.el8.x86_64 requires vdsm-http =
4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-4.40.30-1.el8.x86_64 requires vdsm-http =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-4.40.31-1.el8.x86_64 requires vdsm-http =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-4.40.32-1.el8.x86_64 requires vdsm-http =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-4.40.33-1.el8.x86_64 requires vdsm-http =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-4.40.34-1.el8.x86_64 requires vdsm-http =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-4.40.35-1.el8.x86_64 requires vdsm-http =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-4.40.35.1-1.el8.x86_64 requires vdsm-http =
4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-4.40.36-1.el8.x86_64 requires vdsm-http =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-4.40.37-1.el8.x86_64 requires vdsm-http =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-4.40.38-1.el8.x86_64 requires vdsm-http =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-4.40.39-1.el8.x86_64 requires vdsm-http =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-4.40.40-1.el8.x86_64 requires vdsm-http =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.8-1.el8.x86_64 requires vdsm-http =
4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.9-1.el8.x86_64 requires vdsm-http =
4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.6-1.el8.x86_64 requires vdsm-http =
4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.7-1.el8.x86_64 requires vdsm-http =
4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-4.40.70.6-1.el8.x86_64 requires vdsm-http =
4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.80.6-1.el8.x86_64 requires vdsm-http =
4.40.80.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.16-1.el8.x86_64 requires ovirt-imageio-common =
2.0.6, but none of the providers can be installed
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.80.5-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.17-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.18-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.19-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.20-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.21-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.22-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.26.3-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.30-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.31-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.32-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.33-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.34-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.35-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.35.1-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.36-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.37-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.38-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.39-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.40-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.50.8-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.50.9-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.60.6-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.60.7-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.70.6-1.el8.noarch
- cannot install both vdsm-http-4.40.90.3-1.el8.noarch and
vdsm-http-4.40.80.6-1.el8.noarch
- cannot install both ovirt-imageio-common-2.3.0-1.el8.x86_64 and
ovirt-imageio-common-2.0.6-0.el8.x86_64
- cannot install the best update candidate for package
vdsm-http-4.40.80.5-1.el8.noarch
- cannot install the best update candidate for package
vdsm-hook-fcoe-4.40.80.5-1.el8.noarch
- cannot install the best update candidate for package
ovirt-imageio-common-2.2.0-1.el8.x86_64
- nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by
vdsm-4.40.90.3-1.el8.x86_64
Problem 6: package vdsm-hook-ethtool-options-4.40.90.3-1.el8.noarch
requires vdsm, but none of the providers can be installed
- package vdsm-4.40.80.5-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-4.40.17-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-4.40.18-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-4.40.19-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-4.40.20-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-4.40.21-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-4.40.22-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-4.40.26.3-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-4.40.30-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-4.40.31-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-4.40.32-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-4.40.33-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-4.40.34-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-4.40.35-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-4.40.35.1-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-4.40.36-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-4.40.37-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-4.40.38-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-4.40.39-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-4.40.40-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.8-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.9-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.6-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.7-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-4.40.70.6-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.80.6-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.80.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.16-1.el8.x86_64 requires ovirt-imageio-daemon =
2.0.6, but none of the providers can be installed
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.80.5-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.17-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.18-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.19-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.20-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.21-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.22-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.26.3-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.30-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.31-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.32-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.33-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.34-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.35-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.35.1-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.36-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.37-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.38-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.39-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.40-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.50.8-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.50.9-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.60.6-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.60.7-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.70.6-1.el8.noarch
- cannot install both vdsm-jsonrpc-4.40.90.3-1.el8.noarch and
vdsm-jsonrpc-4.40.80.6-1.el8.noarch
- cannot install both ovirt-imageio-daemon-2.3.0-1.el8.x86_64 and
ovirt-imageio-daemon-2.0.6-0.el8.x86_64
- cannot install the best update candidate for package
vdsm-jsonrpc-4.40.80.5-1.el8.noarch
- cannot install the best update candidate for package
vdsm-hook-ethtool-options-4.40.80.5-1.el8.noarch
- cannot install the best update candidate for package
ovirt-imageio-daemon-2.2.0-1.el8.x86_64
- nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by
vdsm-4.40.90.3-1.el8.x86_64
Problem 7: package ovirt-provider-ovn-driver-1.2.34-1.el8.noarch
requires vdsm, but none of the providers can be installed
- package vdsm-4.40.80.5-1.el8.x86_64 requires vdsm-python =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-4.40.16-1.el8.x86_64 requires vdsm-python =
4.40.16-1.el8, but none of the providers can be installed
- package vdsm-4.40.17-1.el8.x86_64 requires vdsm-python =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-4.40.18-1.el8.x86_64 requires vdsm-python =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-4.40.19-1.el8.x86_64 requires vdsm-python =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-4.40.20-1.el8.x86_64 requires vdsm-python =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-4.40.21-1.el8.x86_64 requires vdsm-python =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-4.40.22-1.el8.x86_64 requires vdsm-python =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-4.40.26.3-1.el8.x86_64 requires vdsm-python =
4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-4.40.30-1.el8.x86_64 requires vdsm-python =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-4.40.31-1.el8.x86_64 requires vdsm-python =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-4.40.32-1.el8.x86_64 requires vdsm-python =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-4.40.33-1.el8.x86_64 requires vdsm-python =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-4.40.34-1.el8.x86_64 requires vdsm-python =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-4.40.35-1.el8.x86_64 requires vdsm-python =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-4.40.35.1-1.el8.x86_64 requires vdsm-python =
4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-4.40.36-1.el8.x86_64 requires vdsm-python =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-4.40.37-1.el8.x86_64 requires vdsm-python =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-4.40.38-1.el8.x86_64 requires vdsm-python =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-4.40.39-1.el8.x86_64 requires vdsm-python =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-4.40.40-1.el8.x86_64 requires vdsm-python =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.8-1.el8.x86_64 requires vdsm-python =
4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.9-1.el8.x86_64 requires vdsm-python =
4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.6-1.el8.x86_64 requires vdsm-python =
4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.7-1.el8.x86_64 requires vdsm-python =
4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-4.40.70.6-1.el8.x86_64 requires vdsm-python =
4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.80.6-1.el8.x86_64 requires vdsm-python =
4.40.80.6-1.el8, but none of the providers can be installed
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.80.5-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.16-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.17-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.18-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.19-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.20-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.21-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.22-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.26.3-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.30-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.31-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.32-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.33-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.34-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.35-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.35.1-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.36-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.37-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.38-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.39-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.40-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.50.8-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.50.9-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.60.6-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.60.7-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.70.6-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.80.6-1.el8.noarch
- cannot install the best update candidate for package
vdsm-python-4.40.80.5-1.el8.noarch
- cannot install the best update candidate for package
ovirt-provider-ovn-driver-1.2.34-1.el8.noarch
- nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by
vdsm-4.40.90.3-1.el8.x86_64
Problem 8: package ovirt-hosted-engine-ha-2.4.9-1.el8.noarch requires
vdsm >= 4.40.0, but none of the providers can be installed
- package vdsm-4.40.80.5-1.el8.x86_64 requires vdsm-python =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-4.40.17-1.el8.x86_64 requires vdsm-python =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-4.40.18-1.el8.x86_64 requires vdsm-python =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-4.40.19-1.el8.x86_64 requires vdsm-python =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-4.40.20-1.el8.x86_64 requires vdsm-python =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-4.40.21-1.el8.x86_64 requires vdsm-python =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-4.40.22-1.el8.x86_64 requires vdsm-python =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-4.40.26.3-1.el8.x86_64 requires vdsm-python =
4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-4.40.30-1.el8.x86_64 requires vdsm-python =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-4.40.31-1.el8.x86_64 requires vdsm-python =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-4.40.32-1.el8.x86_64 requires vdsm-python =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-4.40.33-1.el8.x86_64 requires vdsm-python =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-4.40.34-1.el8.x86_64 requires vdsm-python =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-4.40.35-1.el8.x86_64 requires vdsm-python =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-4.40.35.1-1.el8.x86_64 requires vdsm-python =
4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-4.40.36-1.el8.x86_64 requires vdsm-python =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-4.40.37-1.el8.x86_64 requires vdsm-python =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-4.40.38-1.el8.x86_64 requires vdsm-python =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-4.40.39-1.el8.x86_64 requires vdsm-python =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-4.40.40-1.el8.x86_64 requires vdsm-python =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.8-1.el8.x86_64 requires vdsm-python =
4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.9-1.el8.x86_64 requires vdsm-python =
4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.6-1.el8.x86_64 requires vdsm-python =
4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.7-1.el8.x86_64 requires vdsm-python =
4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-4.40.70.6-1.el8.x86_64 requires vdsm-python =
4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.80.6-1.el8.x86_64 requires vdsm-python =
4.40.80.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.16-1.el8.x86_64 requires ovirt-imageio-common =
2.0.6, but none of the providers can be installed
- package vdsm-python-4.40.80.5-1.el8.noarch requires vdsm-api =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.17-1.el8.noarch requires vdsm-api =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.18-1.el8.noarch requires vdsm-api =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.19-1.el8.noarch requires vdsm-api =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.20-1.el8.noarch requires vdsm-api =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.21-1.el8.noarch requires vdsm-api =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.22-1.el8.noarch requires vdsm-api =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.26.3-1.el8.noarch requires vdsm-api =
4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.30-1.el8.noarch requires vdsm-api =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.31-1.el8.noarch requires vdsm-api =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.32-1.el8.noarch requires vdsm-api =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.33-1.el8.noarch requires vdsm-api =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.34-1.el8.noarch requires vdsm-api =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.35-1.el8.noarch requires vdsm-api =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.35.1-1.el8.noarch requires vdsm-api =
4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.36-1.el8.noarch requires vdsm-api =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.37-1.el8.noarch requires vdsm-api =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.38-1.el8.noarch requires vdsm-api =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.39-1.el8.noarch requires vdsm-api =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.40-1.el8.noarch requires vdsm-api =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.50.8-1.el8.noarch requires vdsm-api =
4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.50.9-1.el8.noarch requires vdsm-api =
4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.60.6-1.el8.noarch requires vdsm-api =
4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.60.7-1.el8.noarch requires vdsm-api =
4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.70.6-1.el8.noarch requires vdsm-api =
4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.80.6-1.el8.noarch requires vdsm-api =
4.40.80.6-1.el8, but none of the providers can be installed
- cannot install both ovirt-imageio-common-2.3.0-1.el8.x86_64 and
ovirt-imageio-common-2.0.6-0.el8.x86_64
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.80.5-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.17-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.18-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.19-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.20-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.21-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.22-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.26.3-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.30-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.31-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.32-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.33-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.34-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.35-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.35.1-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.36-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.37-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.38-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.39-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.40-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.50.8-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.50.9-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.60.6-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.60.7-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.70.6-1.el8.noarch
- cannot install both vdsm-api-4.40.90.3-1.el8.noarch and
vdsm-api-4.40.80.6-1.el8.noarch
- package ovirt-imageio-client-2.3.0-1.el8.x86_64 requires
ovirt-imageio-common = 2.3.0-1.el8, but none of the providers can be
installed
- cannot install the best update candidate for package
vdsm-api-4.40.80.5-1.el8.noarch
- cannot install the best update candidate for package
ovirt-imageio-client-2.2.0-1.el8.x86_64
- cannot install the best update candidate for package
ovirt-hosted-engine-ha-2.4.8-1.el8.noarch
- nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by
vdsm-4.40.90.3-1.el8.x86_64
Problem 9: problem with installed package vdsm-4.40.80.5-1.el8.x86_64
- package vdsm-4.40.80.5-1.el8.x86_64 requires vdsm-python =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-4.40.80.6-1.el8.x86_64 requires vdsm-python =
4.40.80.6-1.el8, but none of the providers can be installed
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.80.5-1.el8.noarch
- cannot install both vdsm-python-4.40.90.3-1.el8.noarch and
vdsm-python-4.40.80.6-1.el8.noarch
- package vdsm-client-4.40.90.3-1.el8.noarch requires vdsm-python =
4.40.90.3-1.el8, but none of the providers can be installed
- cannot install the best update candidate for package
vdsm-client-4.40.80.5-1.el8.noarch
- nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by
vdsm-4.40.90.3-1.el8.x86_64
Problem 10: problem with installed package
vdsm-hook-fcoe-4.40.80.5-1.el8.noarch
- package vdsm-hook-fcoe-4.40.80.5-1.el8.noarch requires vdsm =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-hook-fcoe-4.40.90.3-1.el8.noarch requires vdsm, but
none of the providers can be installed
- package vdsm-hook-fcoe-4.40.80.6-1.el8.noarch requires vdsm =
4.40.80.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.80.5-1.el8.x86_64 requires vdsm-python =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-4.40.16-1.el8.x86_64 requires vdsm-python =
4.40.16-1.el8, but none of the providers can be installed
- package vdsm-4.40.17-1.el8.x86_64 requires vdsm-python =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-4.40.18-1.el8.x86_64 requires vdsm-python =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-4.40.19-1.el8.x86_64 requires vdsm-python =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-4.40.20-1.el8.x86_64 requires vdsm-python =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-4.40.21-1.el8.x86_64 requires vdsm-python =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-4.40.22-1.el8.x86_64 requires vdsm-python =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-4.40.26.3-1.el8.x86_64 requires vdsm-python =
4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-4.40.30-1.el8.x86_64 requires vdsm-python =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-4.40.31-1.el8.x86_64 requires vdsm-python =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-4.40.32-1.el8.x86_64 requires vdsm-python =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-4.40.33-1.el8.x86_64 requires vdsm-python =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-4.40.34-1.el8.x86_64 requires vdsm-python =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-4.40.35-1.el8.x86_64 requires vdsm-python =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-4.40.35.1-1.el8.x86_64 requires vdsm-python =
4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-4.40.36-1.el8.x86_64 requires vdsm-python =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-4.40.37-1.el8.x86_64 requires vdsm-python =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-4.40.38-1.el8.x86_64 requires vdsm-python =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-4.40.39-1.el8.x86_64 requires vdsm-python =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-4.40.40-1.el8.x86_64 requires vdsm-python =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.8-1.el8.x86_64 requires vdsm-python =
4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.9-1.el8.x86_64 requires vdsm-python =
4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.6-1.el8.x86_64 requires vdsm-python =
4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.7-1.el8.x86_64 requires vdsm-python =
4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-4.40.70.6-1.el8.x86_64 requires vdsm-python =
4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.80.6-1.el8.x86_64 requires vdsm-python =
4.40.80.6-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.80.5-1.el8.noarch requires vdsm-common =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.16-1.el8.noarch requires vdsm-common =
4.40.16-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.17-1.el8.noarch requires vdsm-common =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.18-1.el8.noarch requires vdsm-common =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.19-1.el8.noarch requires vdsm-common =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.20-1.el8.noarch requires vdsm-common =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.21-1.el8.noarch requires vdsm-common =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.22-1.el8.noarch requires vdsm-common =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.26.3-1.el8.noarch requires vdsm-common =
4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.30-1.el8.noarch requires vdsm-common =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.31-1.el8.noarch requires vdsm-common =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.32-1.el8.noarch requires vdsm-common =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.33-1.el8.noarch requires vdsm-common =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.34-1.el8.noarch requires vdsm-common =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.35-1.el8.noarch requires vdsm-common =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.35.1-1.el8.noarch requires vdsm-common =
4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.36-1.el8.noarch requires vdsm-common =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.37-1.el8.noarch requires vdsm-common =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.38-1.el8.noarch requires vdsm-common =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.39-1.el8.noarch requires vdsm-common =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.40-1.el8.noarch requires vdsm-common =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.50.8-1.el8.noarch requires vdsm-common =
4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.50.9-1.el8.noarch requires vdsm-common =
4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.60.6-1.el8.noarch requires vdsm-common =
4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.60.7-1.el8.noarch requires vdsm-common =
4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.70.6-1.el8.noarch requires vdsm-common =
4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.80.6-1.el8.noarch requires vdsm-common =
4.40.80.6-1.el8, but none of the providers can be installed
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.80.5-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.16-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.17-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.18-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.19-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.20-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.21-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.22-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.26.3-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.30-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.31-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.32-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.33-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.34-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.35-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.35.1-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.36-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.37-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.38-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.39-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.40-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.50.8-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.50.9-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.60.6-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.60.7-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.70.6-1.el8.noarch
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
vdsm-common-4.40.80.6-1.el8.noarch
- cannot install the best update candidate for package
vdsm-common-4.40.80.5-1.el8.noarch
- nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by
vdsm-4.40.90.3-1.el8.x86_64
Problem 11: problem with installed package
vdsm-hook-ethtool-options-4.40.80.5-1.el8.noarch
- package vdsm-hook-ethtool-options-4.40.80.5-1.el8.noarch requires
vdsm = 4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-hook-ethtool-options-4.40.90.3-1.el8.noarch requires
vdsm, but none of the providers can be installed
- package vdsm-hook-ethtool-options-4.40.80.6-1.el8.noarch requires
vdsm = 4.40.80.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.80.5-1.el8.x86_64 requires vdsm-python =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-4.40.16-1.el8.x86_64 requires vdsm-python =
4.40.16-1.el8, but none of the providers can be installed
- package vdsm-4.40.17-1.el8.x86_64 requires vdsm-python =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-4.40.18-1.el8.x86_64 requires vdsm-python =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-4.40.19-1.el8.x86_64 requires vdsm-python =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-4.40.20-1.el8.x86_64 requires vdsm-python =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-4.40.21-1.el8.x86_64 requires vdsm-python =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-4.40.22-1.el8.x86_64 requires vdsm-python =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-4.40.26.3-1.el8.x86_64 requires vdsm-python =
4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-4.40.30-1.el8.x86_64 requires vdsm-python =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-4.40.31-1.el8.x86_64 requires vdsm-python =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-4.40.32-1.el8.x86_64 requires vdsm-python =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-4.40.33-1.el8.x86_64 requires vdsm-python =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-4.40.34-1.el8.x86_64 requires vdsm-python =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-4.40.35-1.el8.x86_64 requires vdsm-python =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-4.40.35.1-1.el8.x86_64 requires vdsm-python =
4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-4.40.36-1.el8.x86_64 requires vdsm-python =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-4.40.37-1.el8.x86_64 requires vdsm-python =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-4.40.38-1.el8.x86_64 requires vdsm-python =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-4.40.39-1.el8.x86_64 requires vdsm-python =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-4.40.40-1.el8.x86_64 requires vdsm-python =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.8-1.el8.x86_64 requires vdsm-python =
4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.9-1.el8.x86_64 requires vdsm-python =
4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.6-1.el8.x86_64 requires vdsm-python =
4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.7-1.el8.x86_64 requires vdsm-python =
4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-4.40.70.6-1.el8.x86_64 requires vdsm-python =
4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.80.6-1.el8.x86_64 requires vdsm-python =
4.40.80.6-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.80.5-1.el8.noarch requires vdsm-network =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.16-1.el8.noarch requires vdsm-network =
4.40.16-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.17-1.el8.noarch requires vdsm-network =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.18-1.el8.noarch requires vdsm-network =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.19-1.el8.noarch requires vdsm-network =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.20-1.el8.noarch requires vdsm-network =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.21-1.el8.noarch requires vdsm-network =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.22-1.el8.noarch requires vdsm-network =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.26.3-1.el8.noarch requires vdsm-network =
4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.30-1.el8.noarch requires vdsm-network =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.31-1.el8.noarch requires vdsm-network =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.32-1.el8.noarch requires vdsm-network =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.33-1.el8.noarch requires vdsm-network =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.34-1.el8.noarch requires vdsm-network =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.35-1.el8.noarch requires vdsm-network =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.35.1-1.el8.noarch requires vdsm-network =
4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.36-1.el8.noarch requires vdsm-network =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.37-1.el8.noarch requires vdsm-network =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.38-1.el8.noarch requires vdsm-network =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.39-1.el8.noarch requires vdsm-network =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.40-1.el8.noarch requires vdsm-network =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.50.8-1.el8.noarch requires vdsm-network =
4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.50.9-1.el8.noarch requires vdsm-network =
4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.60.6-1.el8.noarch requires vdsm-network =
4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.60.7-1.el8.noarch requires vdsm-network =
4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.70.6-1.el8.noarch requires vdsm-network =
4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-python-4.40.80.6-1.el8.noarch requires vdsm-network =
4.40.80.6-1.el8, but none of the providers can be installed
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.80.5-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.16-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.17-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.18-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.19-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.20-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.21-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.22-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.26.3-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.30-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.31-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.32-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.33-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.34-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.35-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.35.1-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.36-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.37-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.38-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.39-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.40-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.50.8-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.50.9-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.60.6-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.60.7-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.70.6-1.el8.x86_64
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
vdsm-network-4.40.80.6-1.el8.x86_64
- cannot install the best update candidate for package
vdsm-network-4.40.80.5-1.el8.x86_64
- nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by
vdsm-4.40.90.3-1.el8.x86_64
Problem 12: problem with installed package
ovirt-provider-ovn-driver-1.2.34-1.el8.noarch
- package ovirt-provider-ovn-driver-1.2.34-1.el8.noarch requires
vdsm, but none of the providers can be installed
- package vdsm-4.40.80.5-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-4.40.16-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.16-1.el8, but none of the providers can be installed
- package vdsm-4.40.17-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-4.40.18-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-4.40.19-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-4.40.20-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-4.40.21-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-4.40.22-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-4.40.26.3-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-4.40.30-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-4.40.31-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-4.40.32-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-4.40.33-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-4.40.34-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-4.40.35-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-4.40.35.1-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-4.40.36-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-4.40.37-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-4.40.38-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-4.40.39-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-4.40.40-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.8-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-4.40.50.9-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.6-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.60.7-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-4.40.70.6-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-4.40.80.6-1.el8.x86_64 requires vdsm-jsonrpc =
4.40.80.6-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.80.5-1.el8.noarch requires vdsm-yajsonrpc
= 4.40.80.5-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.16-1.el8.noarch requires vdsm-yajsonrpc =
4.40.16-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.17-1.el8.noarch requires vdsm-yajsonrpc =
4.40.17-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.18-1.el8.noarch requires vdsm-yajsonrpc =
4.40.18-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.19-1.el8.noarch requires vdsm-yajsonrpc =
4.40.19-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.20-1.el8.noarch requires vdsm-yajsonrpc =
4.40.20-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.21-1.el8.noarch requires vdsm-yajsonrpc =
4.40.21-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.22-1.el8.noarch requires vdsm-yajsonrpc =
4.40.22-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.26.3-1.el8.noarch requires vdsm-yajsonrpc
= 4.40.26.3-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.30-1.el8.noarch requires vdsm-yajsonrpc =
4.40.30-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.31-1.el8.noarch requires vdsm-yajsonrpc =
4.40.31-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.32-1.el8.noarch requires vdsm-yajsonrpc =
4.40.32-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.33-1.el8.noarch requires vdsm-yajsonrpc =
4.40.33-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.34-1.el8.noarch requires vdsm-yajsonrpc =
4.40.34-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.35-1.el8.noarch requires vdsm-yajsonrpc =
4.40.35-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.35.1-1.el8.noarch requires vdsm-yajsonrpc
= 4.40.35.1-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.36-1.el8.noarch requires vdsm-yajsonrpc =
4.40.36-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.37-1.el8.noarch requires vdsm-yajsonrpc =
4.40.37-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.38-1.el8.noarch requires vdsm-yajsonrpc =
4.40.38-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.39-1.el8.noarch requires vdsm-yajsonrpc =
4.40.39-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.40-1.el8.noarch requires vdsm-yajsonrpc =
4.40.40-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.50.8-1.el8.noarch requires vdsm-yajsonrpc
= 4.40.50.8-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.50.9-1.el8.noarch requires vdsm-yajsonrpc
= 4.40.50.9-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.60.6-1.el8.noarch requires vdsm-yajsonrpc
= 4.40.60.6-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.60.7-1.el8.noarch requires vdsm-yajsonrpc
= 4.40.60.7-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.70.6-1.el8.noarch requires vdsm-yajsonrpc
= 4.40.70.6-1.el8, but none of the providers can be installed
- package vdsm-jsonrpc-4.40.80.6-1.el8.noarch requires vdsm-yajsonrpc
= 4.40.80.6-1.el8, but none of the providers can be installed
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.80.5-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.16-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.17-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.18-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.19-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.20-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.21-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.22-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.26.3-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.30-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.31-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.32-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.33-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.34-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.35-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.35.1-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.36-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.37-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.38-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.39-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.40-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.50.8-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.50.9-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.60.6-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.60.7-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.70.6-1.el8.noarch
- cannot install both vdsm-yajsonrpc-4.40.90.3-1.el8.noarch and
vdsm-yajsonrpc-4.40.80.6-1.el8.noarch
- cannot install the best update candidate for package
vdsm-yajsonrpc-4.40.80.5-1.el8.noarch
- nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by
vdsm-4.40.90.3-1.el8.x86_64
(try to add '--allowerasing' to command line to replace conflicting
packages or '--skip-broken' to skip uninstallable packages or '--nobest'
to use not only best candidate packages)
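The hint at the end corresponds to concrete retries such as `dnf upgrade --nobest`, `dnf upgrade --skip-broken`, or `dnf upgrade --allowerasing`. Before picking one, it can help to condense the wall of "cannot install both" lines into a per-package summary. A quick sketch (hypothetical helper, not part of dnf; it parses lines shaped like the output above):

```python
import re

# Match "cannot install both <name>-<version>.<arch> and" lines and pull out
# the package name and the candidate version dnf is trying to install.
PAT = re.compile(r"cannot install both (\S+?)-(\d[\w.]*-\d+\.el8)\.\w+ and")

def summarize(log_text):
    """Map package name -> set of candidate versions mentioned in conflicts."""
    found = {}
    for m in PAT.finditer(log_text):
        found.setdefault(m.group(1), set()).add(m.group(2))
    return found

# Two sample lines taken from the solver output above.
sample = """
- cannot install both vdsm-common-4.40.90.3-1.el8.noarch and
  vdsm-common-4.40.80.5-1.el8.noarch
- cannot install both vdsm-network-4.40.90.3-1.el8.x86_64 and
  vdsm-network-4.40.16-1.el8.x86_64
"""
print(summarize(sample))
```

Run against the full output, this collapses hundreds of lines to the real story: every conflict involves the 4.40.90.3 candidate, which is blocked by the missing `libvirt-daemon-kvm >= 7.6.0-2`.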
--
John Florian
3 years, 4 months
Update Issue
by Gary Pedretty
After applying the latest updates on one of my CentOS 8 Stream hosts, it will no longer accept migrated VMs. VMs can be started on the host, but then have no network access. The hosted engine can be started on the host, but again with no network access. I have tried removing the host, rebuilding the OS from scratch, and re-adding it, with no change. The host shows up as active, and when a migration fails I see no error messages that I can find; it just reports that the migration failed.
The updated host has the following versions; only KVM and libvirt appear to be different. I tried downgrading KVM and libvirt, but the migration still fails. I have avoided updating the other hosts, since I cannot migrate any of my VMs to the one host I have already updated.
RHEL - 8.6 - 1.el8
OS Description:
CentOS Stream 8
Kernel Version:
4.18.0 - 348.el8.x86_64
KVM Version:
6.1.0 - 4.module_el8.6.0+983+a7505f3f
LIBVIRT Version:
libvirt-7.9.0-1.module_el8.6.0+983+a7505f3f
VDSM Version:
vdsm-4.40.90.4-1.el8
The other two hosts, which have not been updated and still work normally, have:
RHEL - 8.6 - 1.el8
OS Description:
CentOS Stream 8
Kernel Version:
4.18.0 - 348.el8.x86_64
KVM Version:
6.0.0 - 33.el8s
LIBVIRT Version:
libvirt-7.6.0-4.el8s
VDSM Version:
vdsm-4.40.90.4-1.el8
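Comparing the two listings, the kernel and VDSM builds are identical; only qemu-kvm and libvirt differ (6.0.0 / 7.6.0 on the working hosts vs 6.1.0 / 7.9.0 on the broken one). A minimal sketch of that comparison, using naive dotted-version tuples rather than full RPM epoch/version/release logic:

```python
# Naive dotted-version comparison (assumption: epoch and release are ignored,
# which a full RPM EVR comparison would have to handle).
def vparts(v: str) -> tuple:
    return tuple(int(x) for x in v.split("."))

working_libvirt = vparts("7.6.0")  # hosts that still migrate normally
updated_libvirt = vparts("7.9.0")  # host where migration fails
vdsm_floor = vparts("7.6.0")       # vdsm-4.40.90.x requires libvirt >= 7.6.0(-2)

# Both hosts satisfy vdsm's libvirt minimum; the suspect change is the
# 7.6 -> 7.9 libvirt jump that came with the Stream update.
assert updated_libvirt > working_libvirt >= vdsm_floor
```

So the dependency floor is not the problem on either host; the difference between "migrates" and "fails silently" tracks the newer libvirt/qemu module stream.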
The engine log from an attempted migration is as follows.
2021-11-17 21:12:46,099-09 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: fa1a2d6b-99cb-42bd-a343-91f314d5f47b Type: VMAction group MIGRATE_VM with role type USER
2021-11-17 21:12:46,134-09 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='6ee16602-c686-471a-9f65-e5952b813672', vmId='fa1a2d6b-99cb-42bd-a343-91f314d5f47b', srcHost='ravn-kvm-8.ravnalaska.net', dstVdsId='10491335-e2e1-49f2-96c3-79331535542b', dstHost='ravn-kvm-9.ravnalaska.net:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', maxBandwidth='1250', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.9.24.79'}), log id: 3522dfb9
2021-11-17 21:12:46,134-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] START, MigrateBrokerVDSCommand(HostName = ravn-kvm-8.ravnalaska.net, MigrateVDSCommandParameters:{hostId='6ee16602-c686-471a-9f65-e5952b813672', vmId='fa1a2d6b-99cb-42bd-a343-91f314d5f47b', srcHost='ravn-kvm-8.ravnalaska.net', dstVdsId='10491335-e2e1-49f2-96c3-79331535542b', dstHost='ravn-kvm-9.ravnalaska.net:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', maxBandwidth='1250', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.9.24.79'}), log id: 23f98865
2021-11-17 21:12:46,141-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] FINISH, MigrateBrokerVDSCommand, return: , log id: 23f98865
2021-11-17 21:12:46,143-09 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3522dfb9
2021-11-17 21:12:46,149-09 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: ns8, Source: ravn-kvm-8.ravnalaska.net, Destination: ravn-kvm-9.ravnalaska.net, User: admin@internal-authz).
2021-11-17 21:12:49,052-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [] Fetched 2 VMs from VDS '10491335-e2e1-49f2-96c3-79331535542b'
2021-11-17 21:12:49,053-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [] VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b' is migrating to VDS '10491335-e2e1-49f2-96c3-79331535542b'(ravn-kvm-9.ravnalaska.net) ignoring it in the refresh until migration is done
2021-11-17 21:12:54,314-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [] VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b' was reported as Down on VDS '10491335-e2e1-49f2-96c3-79331535542b'(ravn-kvm-9.ravnalaska.net)
2021-11-17 21:12:54,314-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [] VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b'(ns8) was unexpectedly detected as 'Down' on VDS '10491335-e2e1-49f2-96c3-79331535542b'(ravn-kvm-9.ravnalaska.net) (expected on '6ee16602-c686-471a-9f65-e5952b813672')
2021-11-17 21:12:54,315-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-21) [] START, DestroyVDSCommand(HostName = ravn-kvm-9.ravnalaska.net, DestroyVmVDSCommandParameters:{hostId='10491335-e2e1-49f2-96c3-79331535542b', vmId='fa1a2d6b-99cb-42bd-a343-91f314d5f47b', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 389ad865
2021-11-17 21:12:54,596-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-21) [] FINISH, DestroyVDSCommand, return: , log id: 389ad865
2021-11-17 21:12:54,596-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [] VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b'(ns8) was unexpectedly detected as 'Down' on VDS '10491335-e2e1-49f2-96c3-79331535542b'(ravn-kvm-9.ravnalaska.net) (expected on '6ee16602-c686-471a-9f65-e5952b813672')
2021-11-17 21:12:54,596-09 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [] Migration of VM 'ns8' to host 'ravn-kvm-9.ravnalaska.net' failed: VM destroyed during the startup.
2021-11-17 21:12:54,642-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-5) [] VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b'(ns8) moved from 'MigratingFrom' --> 'Up'
2021-11-17 21:12:54,642-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-5) [] Adding VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b'(ns8) to re-run list
2021-11-17 21:12:54,644-09 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-5) [] Rerun VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b'. Called from VDS 'ravn-kvm-8.ravnalaska.net'
2021-11-17 21:12:54,679-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-7232) [] START, MigrateStatusVDSCommand(HostName = ravn-kvm-8.ravnalaska.net, MigrateStatusVDSCommandParameters:{hostId='6ee16602-c686-471a-9f65-e5952b813672', vmId='fa1a2d6b-99cb-42bd-a343-91f314d5f47b'}), log id: 7ce5593e
2021-11-17 21:12:54,681-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-7232) [] FINISH, MigrateStatusVDSCommand, return: , log id: 7ce5593e
2021-11-17 21:12:54,695-09 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-7232) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed (VM: ns8, Source: ravn-kvm-8.ravnalaska.net, Destination: ravn-kvm-9.ravnalaska.net).
2021-11-17 21:12:54,697-09 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (EE-ManagedThreadFactory-engine-Thread-7232) [] Lock freed to object 'EngineLock:{exclusiveLocks='[fa1a2d6b-99cb-42bd-a343-91f314d5f47b=VM]', sharedLocks=''}'
2021-11-17 21:13:04,061-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-94) [] Fetched 1 VMs from VDS '10491335-e2e1-49f2-96c3-79331535542b'
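For readers tracing a failure like the one above: every engine-side step of a single operation carries the same bracketed correlation id (here `c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac`), so grepping for that id isolates the migration's full lifecycle. A minimal sketch, using a few sample lines in place of the real `/var/log/ovirt-engine/engine.log` (the default engine log path):

```shell
# Write a small stand-in excerpt; on a live engine you would grep the
# real /var/log/ovirt-engine/engine.log instead.
cat > /tmp/engine-excerpt.log <<'EOF'
2021-11-17 21:12:46,134-09 INFO [MigrateVDSCommand] [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] START, MigrateVDSCommand
2021-11-17 21:12:49,053-09 INFO [VmAnalyzer] [] VM is migrating, ignoring it in the refresh
2021-11-17 21:12:54,695-09 ERROR [AuditLogDirector] [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120)
EOF

# -F treats the id as a fixed string (it contains no regex metacharacters,
# but -F makes the intent explicit); only the lines for this one
# operation are printed.
grep -F 'c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac' /tmp/engine-excerpt.log
```

This prints only the first sample line; in a real log it would pull out the whole START/FINISH chain for that one migration, which is often easier to read than the interleaved output of many threads.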
_______________________________
Gary Pedretty
Director of IT
Ravn Alaska
Office: 907-266-8451
Mobile: 907-388-2247
Email: gary.pedretty(a)ravnalaska.com