Re: Storage domain 'Inactive' but still functional
by Martijn Grendelman
On 24-07-2019 at 10:07, Benny Zlotnik wrote:
We have seen something similar in the past and patches were posted to deal with this issue, but it's still in progress[1]
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1553133
That's some interesting reading, and it sure looks like the problem I had. Thanks!
Best regards,
Martijn.
On Mon, Jul 22, 2019 at 8:07 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
I have a theory... but without any proof it will remain a theory.
The storage volumes are just VGs over shared storage. The SPM host is supposed to be the only one working with the LVM metadata, but I have observed that when someone executes even a simple LVM command (for example lvs, vgs or pvs) on one host while another operation is running on another host, the metadata can become corrupted, due to the lack of clvmd.
As a protection, I can suggest trying the following solution:
1. Create new iSCSI lun
2. Share it to all nodes and create the storage domain. Set it to maintenance.
3. Start dlm & clvmd services on all hosts
4. Convert the VG of your shared storage domain to have the 'clustered' flag:
vgchange -c y mynewVG
5. Check the LVs of that VG.
6. Activate the storage domain.
Of course, test it on a test cluster before implementing it in production.
This is one of the approaches used in Linux HA clusters in order to avoid LVM metadata corruption.
Best Regards,
Strahil Nikolov
On Jul 22, 2019 15:46, Martijn Grendelman <Martijn.Grendelman(a)isaac.nl> wrote:
Hi,
On 22-07-2019 at 14:30, Strahil wrote:
If you can give directions (some kind of history), the devs might try to reproduce this type of issue.
If it is reproducible, a fix can be provided.
Based on my experience, if something as widely used as Linux LVM gets broken, the case is very hard to reproduce.
Yes, I'd think so too, especially since this activity (online moving of disk images) is done all the time, mostly without problems. In this case, there was a lot of activity on all storage domains, because I'm moving all my storage (> 10TB in 185 disk images) to a new storage platform. During the online move of one of the images, the metadata checksum became corrupted and the storage domain went offline.
Of course, I could dig up the engine logs and vdsm logs of when it happened, but that would be some work and I'm not very confident that the actual cause would be in there.
If any oVirt devs are interested in the logs, I'll provide them, but otherwise I think I'll just see it as an incident and move on.
Best regards,
Martijn.
On Jul 22, 2019 10:17, Martijn Grendelman <Martijn.Grendelman(a)isaac.nl> wrote:
Hi,
Thanks for the tips! I didn't know about 'pvmove', thanks.
In the meantime, I managed to get it fixed by restoring the VG metadata on the iSCSI server, so on the underlying zvol directly, rather than via the iSCSI session on the oVirt host. That allowed me to perform the restore without bringing all VMs down, which was important to me, because if I had to shut down VMs, I was sure I wouldn't be able to restart them before the storage domain was back online.
Of course this is more a Linux problem than an oVirt problem, but oVirt did cause it ;-)
Thanks,
Martijn.
On 19-07-2019 at 19:06, Strahil Nikolov wrote:
Hi Martijn,
First, check what went wrong with the VG, as it could be something simple.
vgcfgbackup -f <file> VGname will create a file which you can use to compare the current metadata with a previous version.
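For illustration, a minimal shell sketch of that comparison, assuming the VG is called myVG and that LVM's automatic archives under /etc/lvm/archive/ are available (the archive file name below is a placeholder):

# Dump the current (possibly damaged) metadata to a file for inspection.
vgcfgbackup -f /tmp/myVG-current.vg myVG
# LVM keeps automatic archives of previous metadata versions here.
ls -lt /etc/lvm/archive/myVG_*
# Compare a known-good archived version with the current metadata.
diff -u /etc/lvm/archive/myVG_00042-1234567890.vg /tmp/myVG-current.vg
# Only if a restore is really needed, and ideally with I/O to the VG stopped:
# vgcfgrestore -f /etc/lvm/archive/myVG_00042-1234567890.vg myVG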
If you have Linux boxes, you can add disks from another storage domain and then pvmove the data inside the VM. Of course, you will need to reinstall grub on the new OS disk, or you won't be able to boot afterwards.
If possible, try with a test VM before proceeding with important ones.
Backing up the VMs is very important, because working on LVM metadata...
--
Met vriendelijke groet,
Kind regards,
Martijn Grendelman, Infrastructure Architect
T: +31 (0)40 264 94 44
ISAAC, Marconilaan 16, 5621 AA Eindhoven, The Netherlands
T: +31 (0)40 290 89 79, www.isaac.nl
3 years, 10 months
Re: Storage domain 'Inactive' but still functional
by Strahil
I have a theory... but without any proof it will remain a theory.
The storage volumes are just VGs over shared storage. The SPM host is supposed to be the only one working with the LVM metadata, but I have observed that when someone executes even a simple LVM command (for example lvs, vgs or pvs) on one host while another operation is running on another host, the metadata can become corrupted, due to the lack of clvmd.
As a protection, I can suggest trying the following solution (a consolidated command sketch follows below):
1. Create new iSCSI lun
2. Share it to all nodes and create the storage domain. Set it to maintenance.
3. Start dlm & clvmd services on all hosts
4. Convert the VG of your shared storage domain to have the 'clustered' flag:
vgchange -c y mynewVG
5. Check the LVs of that VG.
6. Activate the storage domain.
Of course, test it on a test cluster before implementing it in production.
This is one of the approaches used in Linux HA clusters in order to avoid LVM metadata corruption.
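A consolidated sketch of steps 3-5, assuming the dlm and clvmd packages are already installed on every host and the shared storage domain's VG is called mynewVG:

# On every host that accesses the shared storage:
systemctl start dlm
systemctl start clvmd
# On one host: mark the VG as clustered, so clvmd coordinates metadata locking.
vgchange -c y mynewVG
# Verify the VG now carries the clustered attribute and the LVs are still visible.
vgs -o vg_name,vg_attr mynewVG
lvs mynewVG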
Best Regards,
Strahil Nikolov
On Jul 22, 2019 15:46, Martijn Grendelman <Martijn.Grendelman(a)isaac.nl> wrote:
>
> Hi,
>
> On 22-07-2019 at 14:30, Strahil wrote:
>>
>> If you can give directions (some kind of history), the devs might try to reproduce this type of issue.
>>
>> If it is reproducible, a fix can be provided.
>>
>> Based on my experience, if something as widely used as Linux LVM gets broken, the case is very hard to reproduce.
>
>
> Yes, I'd think so too, especially since this activity (online moving of disk images) is done all the time, mostly without problems. In this case, there was a lot of activity on all storage domains, because I'm moving all my storage (> 10TB in 185 disk images) to a new storage platform. During the online move of one of the images, the metadata checksum became corrupted and the storage domain went offline.
>
> Of course, I could dig up the engine logs and vdsm logs of when it happened, but that would be some work and I'm not very confident that the actual cause would be in there.
>
> If any oVirt devs are interested in the logs, I'll provide them, but otherwise I think I'll just see it as an incident and move on.
>
> Best regards,
> Martijn.
>
>
>
>
> On Jul 22, 2019 10:17, Martijn Grendelman <Martijn.Grendelman(a)isaac.nl> wrote:
>>>
>>> Hi,
>>>
>>> Thanks for the tips! I didn't know about 'pvmove', thanks.
>>>
>>> In the meantime, I managed to get it fixed by restoring the VG metadata on the iSCSI server, so on the underlying zvol directly, rather than via the iSCSI session on the oVirt host. That allowed me to perform the restore without bringing all VMs down, which was important to me, because if I had to shut down VMs, I was sure I wouldn't be able to restart them before the storage domain was back online.
>>>
>>> Of course this is more a Linux problem than an oVirt problem, but oVirt did cause it ;-)
>>>
>>> Thanks,
>>> Martijn.
>>>
>>>
>>>
>>> On 19-07-2019 at 19:06, Strahil Nikolov wrote:
>>>>
>>>> Hi Martijn,
>>>>
>>>> First, check what went wrong with the VG, as it could be something simple.
>>>> vgcfgbackup -f <file> VGname will create a file which you can use to compare the current metadata with a previous version.
>>>>
>>>> If you have Linux boxes, you can add disks from another storage domain and then pvmove the data inside the VM. Of course, you will need to reinstall grub on the new OS disk, or you won't be able to boot afterwards.
>>>> If possible, try with a test VM before proceeding with important ones.
>>>>
>>>> Backing up the VMs is very important, because working on LVM metadata...
3 years, 10 months
USB turns off and won't come alive
by Darin Schmidt
I have 2 USB controllers installed via a riser card plugged into an M.2 slot (like those used for GPU mining). Everything works great for a long time, then my Windows 10 VM freezes and I lose all USB activity. lspci still sees the devices:
0a:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
0b:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
43:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
44:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
I assume 0a and 0b are the same physical card, as it's supposed to have 2 channels each; same with 43 and 44.
I tried this for both 44 and 43:
echo "1" > /sys/bus/pci/devices/0000\:43\:00.0/remove
echo "1" > /sys/bus/pci/rescan
echo "1" > /sys/bus/pci/devices/0000\:43\:00.0/reset
This did not work. I cannot determine if it's just the VM that's no longer seeing/using the hardware, or if it's the hardware itself. I wonder if it's a power state thing as well? Nothing I plug into the card seems to be recognized. Any suggestions? Rebooting the VM doesn't help either. It appears the hardware is functioning, but anything you plug into it isn't being detected.
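For what it's worth, a few hedged diagnostic commands that may help tell a driver/power issue apart from dead hardware (0000:43:00.0 is one of the addresses from the lspci output above; adjust as needed):

# Which kernel driver currently owns the controller (likely vfio-pci while passed through to the VM)?
lspci -k -s 43:00.0
# Runtime power management state of the device ("on" disables runtime suspend, "auto" allows it).
cat /sys/bus/pci/devices/0000:43:00.0/power/control
# Any controller resets or errors in the kernel log?
dmesg | grep -i -e xhci -e '43:00.0' | tail -n 20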
3 years, 10 months
Import VM from ovirt-exported ova
by david.paning@gmx.ch
Hello,
I installed a new oVirt 4.3.4 from scratch and am attempting to restore the VMs which ran perfectly fine on my last oVirt 4.2.8, where I exported the VM to the export domain and also as an OVA file.
Importing the export domain fails, which I plan to debug later.
But importing the OVA file via the Administration Portal also fails with:
Error ID: 1153
Message: Failed to import the VM to Data Center, Cluster.
The VM name is still recognised by the import routine, which tells me the file is readable...
Now my questions:
Is there any option to try the same from the CLI, hoping to get better clues about what is going on?
Are there specific log options which could give some clues?
Would it be better in general to import the VM from an export storage domain instead of using an OVA export?
Any hint would be highly appreciated.
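Regarding the log question, a minimal sketch of where to look while retrying the import (these are the default oVirt log locations; the grep pattern is just an example):

# On the engine VM, while the import runs:
tail -f /var/log/ovirt-engine/engine.log
# Ansible logs for the OVA query, if present:
ls -lt /var/log/ovirt-engine/ova/
# On the host selected for the import:
tail -f /var/log/vdsm/vdsm.log
# Afterwards, pull out import/OVA related lines:
grep -iE 'ova|import' /var/log/ovirt-engine/engine.log | tail -n 50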
Best
3 years, 10 months
Re: major network changes
by Strahil
Hi Carl,
I think there is another thread here related to the migration to another network.
As far as I know, the liveliness check tries to access the oVirt engine's health page.
Does the new engine's IP have A/PTR records set up?
Also, check the engine logs, once the HostedEngine VM is up and running.
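A minimal sketch of checking that by hand from the host that fails the check (engine.example.com and <new-engine-ip> are placeholders for the engine's new FQDN and address):

# The health page the liveliness check relies on; a healthy engine returns HTTP 200 with a short status text.
curl -v http://engine.example.com/ovirt-engine/services/health
# Name resolution in both directions, since DNS is suspected in this thread:
host engine.example.com
host <new-engine-ip>
# Overall HA state as seen by the agents:
hosted-engine --vm-status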
Best Regards,
Strahil Nikolov
On Jul 23, 2019 16:13, carl langlois <crl.langlois(a)gmail.com> wrote:
>
> Hi,
>
> We have managed to stabilize the DNS update in our network. Now the current situation is:
> I have 3 hosts that can run the engine (hosted-engine).
> They were all in the 10.8.236.x network. Now I have moved one of them to the 10.16.248.x network.
>
> If I boot the engine on one of the hosts that is in the 10.8.236.x network, the engine comes up with status "good". I can access the engine UI. I can see all my hosts, even the one in the 10.16.248.x network.
>
> But if I boot the engine on the hosted-engine host that was switched to the 10.16.248.x network, the engine boots. I can SSH to it, but the status is always "fail for liveliness check".
> The main difference is that when I boot on the host that is in the 10.16.248.x network, the engine gets an address in the 248.x network.
>
> On the engine I have this in /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log:
> 2019-07-23 09:05:30|MFzehi|YYTDiS|jTq2w8|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
> The engine.log seems okay.
>
> So I need to understand what this "liveliness check" does (or tries to do), so I can investigate why the engine status is not becoming "good".
>
> The initial deployment was done in the 10.8.236.x network. Maybe it has something to do with that.
>
> Thanks & Regards
>
> Carl
>
> On Thu, Jul 18, 2019 at 8:53 AM Miguel Duarte de Mora Barroso <mdbarroso(a)redhat.com> wrote:
>>
>> On Thu, Jul 18, 2019 at 2:50 PM Miguel Duarte de Mora Barroso
>> <mdbarroso(a)redhat.com> wrote:
>> >
>> > On Thu, Jul 18, 2019 at 1:57 PM carl langlois <crl.langlois(a)gmail.com> wrote:
>> > >
>> > > Hi Miguel,
>> > >
> > > > I have managed to change the config for the ovn-controller
> > > > with these commands:
> > > > ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=ssl:10.16.248.74:6642
> > > > ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=10.16.248.65
> > > > and restarting the services.
>> >
>> > Yes, that's what the script is supposed to do, check [0].
>> >
>> > Not sure why running vdsm-tool didn't work for you.
>> >
>> > >
> > > > But even with this I still have the "fail for liveliness check" when starting the oVirt engine. But one thing I noticed with our new network is that the reverse DNS does not work (IP -> hostname). The forward lookup is working fine. I am trying to see with our IT why it is not working.
>> >
>> > Do you guys use OVN? If not, you could disable the provider, install
>> > the hosted-engine VM, then, if needed, re-add / re-activate it .
>>
>> I'm assuming it fails for the same reason you've stated initially -
>> i.e. ovn-controller is involved; if it is not, disregard this msg :)
>> >
>> > [0] - https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/se...
>> >
>> > >
>> > > Regards.
>> > > Carl
>> > >
>> > > On Thu, Jul 18, 2019 at 4:03 AM Miguel Duarte de Mora Barroso <mdbarroso(a)redhat.com> wrote:
>> > >>
>> > >> On Wed, Jul 17, 2019 at 7:07 PM carl langlois <crl.langlois(a)gmail.com> wrote:
>> > >> >
>> > >> > Hi
>> > >> > Here is the output of the command
>> > >> >
>> > >> > [root@ovhost1 ~]# vdsm-tool --vvverbose ovn-config 10.16.248.74 ovirtmgmt
>> > >> > MainThread::DEBUG::2019-07-17 13:02:52,
3 years, 10 months
Stuck in "Finalizing" disk upload phase
by Vrgotic, Marko
Dear oVirt,
I initiated an upload of a qcow2 disk image for CentOS 6.5.
It reached the finalizing phase and then started throwing the following errors:
2019-07-17 14:40:51,480Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-86) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Finalizing successful transfer for Upload disk 'av-07-centos-65-base' (disk id: '3452459d-aec6-430e-9509-1d9ca815b2d8', image id: 'b44659a9-607a-4eeb-a255-99532fd4fce4')
2019-07-17 14:40:51,480Z WARN [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-86) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Failed to stop image transfer session. Ticket does not exist for image '3452459d-aec6-430e-9509-1d9ca815b2d8'
2019-07-17 14:41:01,572Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-19) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Finalizing successful transfer for Upload disk 'av-07-centos-65-base' (disk id: '3452459d-aec6-430e-9509-1d9ca815b2d8', image id: 'b44659a9-607a-4eeb-a255-99532fd4fce4')
2019-07-17 14:41:01,574Z WARN [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-19) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Failed to stop image transfer session. Ticket does not exist for image '3452459d-aec6-430e-9509-1d9ca815b2d8'
2019-07-17 14:41:11,690Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-7) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Finalizing successful transfer for Upload disk 'av-07-centos-65-base' (disk id: '3452459d-aec6-430e-9509-1d9ca815b2d8', image id: 'b44659a9-607a-4eeb-a255-99532fd4fce4')
2019-07-17 14:41:11,690Z WARN [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-7) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Failed to stop image transfer session. Ticket does not exist for image '3452459d-aec6-430e-9509-1d9ca815b2d8'
2019-07-17 14:41:21,781Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Finalizing successful transfer for Upload disk 'av-07-centos-65-base' (disk id: '3452459d-aec6-430e-9509-1d9ca815b2d8', image id: 'b44659a9-607a-4eeb-a255-99532fd4fce4')
2019-07-17 14:41:21,782Z WARN [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Failed to stop image transfer session. Ticket does not exist for image '3452459d-aec6-430e-9509-1d9ca815b2d8'
I cannot cancel or stop it, neither via the UI nor via the force option of the ovirt_disk module.
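For reference, a hedged sketch of inspecting and cancelling the transfer directly through the REST API (the engine FQDN, credentials and transfer id are placeholders; the imagetransfers service exists in API v4, but verify the exact behaviour on your 4.3 engine before relying on it):

# List current image transfers and find the one stuck in finalizing:
curl -k -u 'admin@internal:password' -H 'Accept: application/xml' \
     'https://engine.example.com/ovirt-engine/api/imagetransfers'
# Ask the engine to cancel that transfer (id taken from the listing above):
curl -k -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
     -d '<action/>' \
     'https://engine.example.com/ovirt-engine/api/imagetransfers/<transfer-id>/cancel'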
Help!
oVirt 4.3.4.3-1, running with CentOS 7.6 hosts.
Kindly awaiting your reply.
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
ActiveVideo
3 years, 10 months
major network changes
by carl langlois
Hi
We are in the process of changing our network connection. Our current network
is using 10.8.236.x and we will change to 10.16.248.x. We have an HA oVirt
cluster (around 10 nodes) currently configured on the 10.8.236.x network. So my
question is: is it possible to relocate the oVirt cluster to the
10.16.248.x network? We have tried to move everything to the new network without
success. All the nodes seem to boot up properly, and our Gluster storage also
works properly.
When we try to start the hosted-engine, it goes up but fails the liveliness
check. We have noticed in /var/log/openvswitch/ovn-controller.log that
it is trying to connect to the old IP address of the hosted-engine VM.
2019-07-16T18:41:29.483Z|01992|reconnect|INFO|ssl:10.8.236.244:6642: waiting 8 seconds before reconnect
2019-07-16T18:41:37.489Z|01993|reconnect|INFO|ssl:10.8.236.244:6642: connecting...
2019-07-16T18:41:45.497Z|01994|reconnect|INFO|ssl:10.8.236.244:6642: connection attempt timed out
So my question is: where does 10.8.236.244 come from?
The routing table for one of our hosts looks like this:
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         gateway         0.0.0.0         UG    0      0        0 ovirtmgmt
10.16.248.0     0.0.0.0         255.255.255.0   U     0      0        0 ovirtmgmt
link-local      0.0.0.0         255.255.0.0     U     1002   0        0 eno1
link-local      0.0.0.0         255.255.0.0     U     1003   0        0 eno2
link-local      0.0.0.0         255.255.0.0     U     1025   0        0 ovirtmgmt
Any help would be really appreciated.
Regards
Carl
3 years, 10 months
Attach the snapshot to the backup virtual machine and activate the disk
by smidhunraj@gmail.com
This is a doubt regarding the REST API:
<https://ovirt.org/documentation/admin-guide/chap-Backups_and_Migration.ht...>
Can you please tell me what the response to the API request in step 4 would be (will it be a success or an error)?
=====================================================================
Attach the snapshot to the backup virtual machine and activate the disk:
POST /api/vms/22222222-2222-2222-2222-222222222222/disks/ HTTP/1.1
Accept: application/xml
Content-type: application/xml
<disk id="11111111-1111-1111-1111-111111111111">
<snapshot id="11111111-1111-1111-1111-111111111111"/>
<active>true</active>
</disk>
==============================================================================================
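For context, the quoted step is just an authenticated POST; expressed with curl it would look roughly like this (the engine FQDN and credentials are placeholders, the UUIDs are the documentation's example ids, and the /api/... path follows the older layout used in the quoted guide):

curl -k -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
     -d '<disk id="11111111-1111-1111-1111-111111111111">
           <snapshot id="11111111-1111-1111-1111-111111111111"/>
           <active>true</active>
         </disk>' \
     'https://engine.example.com/api/vms/22222222-2222-2222-2222-222222222222/disks/'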
What happens if we are setting up a bare VM without any backup mechanism in it and try to attach the snapshot disk to it?
3 years, 10 months
Re: ovirt-engine-appliance ova
by Strahil
It looks like a kickstart error.
Maybe a parameter in the kickstart is missing?
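For illustration, the two anaconda spokes named in the quoted error ("Installation source" and "Software selection") are normally satisfied by an installation-source directive and a %packages section in the kickstart; a hedged fragment (the mirror URL is a placeholder, and the real ovirt-engine-appliance.ks may organise this differently):

# Installation source for a network install:
url --url=http://mirror.example.com/centos/7/os/x86_64/
# Software selection:
%packages
@core
%end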
Best Regards,
Strahil Nikolov
On Jul 23, 2019 09:08, Yedidyah Bar David <didi(a)redhat.com> wrote:
>
> On Mon, Jul 22, 2019 at 11:53 PM Jingjie Jiang <jingjie.jiang(a)oracle.com> wrote:
>>
>> Hi David,
>
>
> (Actually it's "Yedidyah" or "Didi")
>
>>
>> Thanks for your info.
>>
>> Please check my reply inline.
>>
>>
>> -Jingjie
>>
>> On 7/16/19 3:55 AM, Yedidyah Bar David wrote:
>>>
>>> On Thu, Jul 11, 2019 at 10:46 PM <jingjie.jiang(a)oracle.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Can someone tell me how to generate ovirt-engine-appliance ova file in ovirt-engine-appliance-4.3-20190610.1.el7.x86_64.rpm?
>>>>
>>> You might want to check the project's source code:
>>>
>>>
>>>
>>> https://github.com/ovirt/ovirt-appliance/
>>>
>>>
>>>
>>> Or study the logs of a CI build of it:
>>>
>>>
>>>
>>> https://jenkins.ovirt.org/job/ovirt-appliance_master_build-artifacts-el7-...
>>>
>>>
>>>
>>> I never tried building it myself locally, though.
>>
>> I tried to build after checking out the source code from https://github.com/ovirt/ovirt-appliance/,
>>
>> but the build failed.
>>
>> # make
>> livemedia-creator --make-disk --ram=2048 --vcpus=4 --iso=boot.iso --ks=ovirt-engine-appliance.ks --qcow2 --image-name=ovirt-engine-appliance.qcow2
>> 2019-07-22 12:34:00,095: livemedia-creator 19.7.19-1
>> 2019-07-22 12:34:00,154: disk_size = 51GiB
>> 2019-07-22 12:34:00,154: disk_img = /var/tmp/ovirt-engine-appliance.qcow2
>> 2019-07-22 12:34:00,154: install_log = /root/ovirt/ovirt-appliance/engine-appliance/virt-install.log
>> mount: /dev/loop0 is write-protected, mounting read-only
>> Formatting '/var/tmp/ovirt-engine-appliance.qcow2', fmt=qcow2 size=54760833024 encryption=off cluster_size=65536 lazy_refcounts=off
>> 2019-07-22 12:34:10,195: Running virt-install.
>>
>> Starting install...
>> Retrieving file vmlinuz... | 6.3 MB 00:00
>> Retrieving file initrd.img... | 50 MB 00:00
>> Domain installation still in progress. You can reconnect to
>> the console to complete the installation process.
>> ......
>> 2019-07-22 12:35:15,281: Installation error detected. See logfile.
>> 2019-07-22 12:35:15,283: Shutting down LiveOS-27f2dc2b-4b30-4eb1-adcd-b5ab50fdbf55
>> Domain LiveOS-27f2dc2b-4b30-4eb1-adcd-b5ab50fdbf55 destroyed
>>
>> Domain LiveOS-27f2dc2b-4b30-4eb1-adcd-b5ab50fdbf55 has been undefined
>>
>> 2019-07-22 12:35:15,599: unmounting the iso
>> 2019-07-22 12:35:20,612: Install failed: virt_install failed
>> 2019-07-22 12:35:20,613: Removing bad disk image
>> 2019-07-22 12:35:20,613: virt_install failed
>> make: *** [ovirt-engine-appliance.qcow2] Error 1
>>
>> In the log I found the following error in virt-install.log:
>>
>> 16:35:07,472 ERR anaconda:CmdlineError: The following mandatory spokes are not completed:#012Installation source#012Software selection
>> 16:35:07,472 DEBUG anaconda:running handleException
>> 16:35:07,473 CRIT anaconda:Traceback (most recent call last):#012#012 File "/usr/lib64/python2.7/site-packages/pyanaconda/ui/tui/simpleline/base.py", line 352, in _mainloop#012 prompt = last_screen.prompt(self._screens[-1][1])#012#012 File "/usr/lib64/python2.7/site-packages/pyanaconda/ui/tui/hubs/summary.py", line 107, in prompt#012 raise CmdlineError(errtxt)#012#012CmdlineError: The following mandatory spokes are not completed:#012Installation source#012Software selection
>> 16:35:08,020 DEBUG anaconda:Gtk cannot be initialized
>> 16:35:08,020 DEBUG anaconda:In the main thread, running exception handler
>> 16:35:08,386 NOTICE multipathd:zram0: add path (uevent)
>> 16:35:08,386 NOTICE multipathd:zram0: spurious uevent, path already in pathvec
>> 16:35:08,386 NOTICE multipathd:zram0: HDIO_GETGEO failed with 25
>> 16:35:08,386 ERR multipathd:zram0: failed to get path uid
>> 16:35:08,388 ERR multipathd:uevent trigger error
>>
>> Can you help me to fix the issue?
>
>
> Sorry, I never tried to build it myself, nor have experience with livemedia-creator. As I wrote above, I suggest to compare your output/result with that of oVirt CI. Otherwise, I'd probably start debugging by searching the net for the error messages you received.
>
> Good luck and best regards,
>
>>
>>
>>>> I tried to import the ovirt-engine-appliance OVA (ovirt-engine-appliance-4.3-20190610.1.el7.ova) from ovirt-engine, but I got the following error:
>>>>
>>>> Failed to load VM configuration from OVA file: /var/tmp/ovirt-engine-appliance-4.2-20190121.1.el7.ova
>>>>
>>> No idea why this failed.
>>>
>>>
>>>
>>>> I guess ovirt-engine-appliance-4.2-20190121.1.el7.ova has more than CentOS7.6.
>>>>
>>> It has CentOS + oVirt engine.
>>>
>>>
>>>
>>> The only major use for it is by hosted-engine --deploy. In theory you
>>>
>>> can try importing it elsewhere, but I do not recall reports about
>>>
>>> people that tried this and whether it works.
>>>
>>>
>>>
>>> Best regards,
>>>
>
>
> --
> Didi
3 years, 10 months
guest agent w10
by suporte@logicworks.pt
Hello,
I installed QEMU guest agent 7.4.5 on Windows 10. The service is running, but I cannot see the IP address or the FQDN in the engine.
I'm running oVirt 4.3.4.3-1.el7
Am I forgetting something?
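One hedged way to check whether the agent is answering at all, from the oVirt host that runs the VM (the VM name is a placeholder, and virsh on an oVirt host may ask for libvirt SASL credentials):

# Find the VM's libvirt name:
virsh -r list
# Query the guest agent for its network interfaces; if this returns JSON with the
# addresses, qemu-guest-agent inside Windows is reachable from the host side.
virsh qemu-agent-command <vm-name> '{"execute":"guest-network-get-interfaces"}' --pretty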
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
3 years, 10 months