if hosted engine corrupted
by Crazy Ayansh
Hi All,
What is the procedure if the self-hosted engine gets corrupted in an environment and we
need to set it up from scratch, while on the new engine we need the same
hosts and virtual machines without losing data?
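If a recent engine backup exists, the usual path is an engine-backup restore during redeployment; a minimal sketch, assuming oVirt 4.2 or later and treating the file names as placeholders:
# Taken regularly on the old engine (placeholder file names):
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=backup.log
# On a clean host, redeploy the hosted engine and restore that backup into it:
hosted-engine --deploy --restore-from-file=engine-backup.tar.gz
Without such a backup, the hosts and VMs would have to be re-added and re-imported manually.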
Thanks
Shashakn
5 years, 4 months
reinstallation information
by nikkognt@gmail.com
Hi,
after a blackout, one host of my oVirt setup is not working properly. I tried to reinstall it, but it ends with the following error: "Failed to install Host ov1. Failed to execute stage 'Closing up': Failed to start service 'vdsmd'." I tried to start the service manually, but it does not start.
Now, I would like to reinstall the host from the oVirt Node ISO.
After I put the host in maintenance, must I remove it from the cluster (Hosts -> host1 -> Remove), or can I reinstall it without removing it?
If I remove it from the cluster, will I lose the network configurations or not?
My ovirt version is oVirt Engine Version: 4.1.9.1-1.el7.centos.
5 years, 4 months
Re: Storage domain 'Inactive' but still functional
by Strahil
I forgot to mention that the LVM config has to be modified in order to 'inform' the local LVM stack to rely on clvmd/dlm for locking purposes.
Yet, this brings another layer of complexity, which I prefer to avoid; thus I use HA-LVM on my pacemaker clusters.
@Martin,
Check the link from Benny and if possible check if the 2 cases are related.
Best Regards,
Strahil Nikolov
On Jul 24, 2019 11:07, Benny Zlotnik <bzlotnik(a)redhat.com> wrote:
>
> We have seen something similar in the past and patches were posted to deal with this issue, but it's still in progress[1]
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1553133
>
> On Mon, Jul 22, 2019 at 8:07 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> I have a theory... but without any proof it will remain a theory.
>>
>> The storage volumes are just VGs over shared storage. The SPM host is supposed to be the only one working with the LVM metadata, but I have observed that when someone executes a simple LVM command (for example lvs, vgs or pvs) on one host while another command is running on another host, your metadata can get corrupted, due to the lack of clvmd.
>>
>> As a protection, I can suggest that you try the following solution:
>> 1. Create new iSCSI lun
>> 2. Share it to all nodes and create the storage domain. Set it to maintenance.
>> 3. Start dlm & clvmd services on all hosts
>> 4. Convert the VG of your shared storage domain to have a 'cluster'-ed flag:
>> vgchange -c y mynewVG
>> 5. Check the lvs of that VG.
>> 6. Activate the storage domain.
>>
>> Of course, test it on a test cluster before implementing it on Prod.
>> This is one of the approaches used in Linux HA clusters in order to avoid LVM metadata corruption.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Jul 22, 2019 15:46, Martijn Grendelman <Martijn.Grendelman(a)isaac.nl> wrote:
>>>
>>> Hi,
>>>
>>> On 22-7-2019 at 14:30, Strahil wrote:
>>>>
>>>> If you can give directions (some kind of history), the dev might try to reproduce this type of issue.
>>>>
>>>> If it is reproducible - a fix can be provided.
>>>>
>>>> Based on my experience, if something as widely used as Linux LVM gets broken, the case is very hard to reproduce.
>>>
>>>
>>> Yes, I'd think so too, especially since this activity (online moving of disk images) is done all the time, mostly without problems. In this case, there was a lot of activity on all storage domains, because I'm moving all my storage (> 10TB in 185 disk images) to a new storage platform. During the online move of one of the images, the metadata checksum became corrupted and the storage domain went offline.
>>>
>>> Of course, I could dig up the engine logs and vdsm logs of when it happened, but that would be some work and I'm not very confident that the actual cause would be in there.
>>>
>>> If any oVirt devs are interested in the logs, I'll provide them, but otherwise I think I'll just see it as an incident and move on.
>>>
>>> Best regards,
>>> Martijn.
>>>
>>>
>>>
>>>
>>> On Jul 22, 2019 10:17, Martijn Grendelman <Martijn.Grendelman(a)isaac.nl> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for the tips! I didn't know about 'pvmove', thanks.
>>>>>
>>>>> In the meantime, I managed to get it fixed by restoring the VG metadata on the iSCSI server, that is, on the underlying Zvol directly rather than via the iSCSI session on the oVirt host. That allowed me to perform the restore without bringing all VMs down, which was important to me, because if I had to shut down VMs, I was sure I wouldn't be able to restart them before the storage domain was back online.
>>>>>
>>>>> Of course this is more of a Linux problem than an oVirt problem, but oVirt did cause it ;-)
>>>>>
>>>>> Thanks,
>>>>> Martijn.
>>>>>
>>>>>
>>>>>
>>>>> On 19-7-2019 at 19:06, Strahil Nikolov wrote:
>>>>>>
>>>>>> Hi Martin,
>>>>>>
>>>>>> First check what went wrong with the VG, as it could be something simple.
>>>>>> vgcfgbackup -f VGname will create a file which you can use to compare current metadata with a previous version.
>>>>>>
>>>>>> If you have Linux boxes - you can add disks from another storage and then pvmove the data inside the VM. Of course, you will need to reinstall grub on the new OS disk, or you won't be able to boot afterwards.
5 years, 4 months
Re: Storage domain 'Inactive' but still functional
by Martijn Grendelman
On 24-7-2019 at 10:07, Benny Zlotnik wrote:
We have seen something similar in the past and patches were posted to deal with this issue, but it's still in progress[1]
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1553133
That's some interesting reading, and it sure looks like the problem I had. Thanks!
Best regards,
Martijn.
On Mon, Jul 22, 2019 at 8:07 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
I have a theory... but without any proof it will remain a theory.
The storage volumes are just VGs over shared storage. The SPM host is supposed to be the only one working with the LVM metadata, but I have observed that when someone executes a simple LVM command (for example lvs, vgs or pvs) on one host while another command is running on another host, your metadata can get corrupted, due to the lack of clvmd.
As a protection, I can suggest that you try the following solution:
1. Create new iSCSI lun
2. Share it to all nodes and create the storage domain. Set it to maintenance.
3. Start dlm & clvmd services on all hosts
4. Convert the VG of your shared storage domain to have a 'cluster'-ed flag:
vgchange -c y mynewVG
5. Check the lvs of that VG.
6. Activate the storage domain.
Of course, test it on a test cluster before implementing it on Prod.
This is one of the approaches used in Linux HA clusters in order to avoid LVM metadata corruption.
Best Regards,
Strahil Nikolov
On Jul 22, 2019 15:46, Martijn Grendelman <Martijn.Grendelman(a)isaac.nl> wrote:
Hi,
On 22-7-2019 at 14:30, Strahil wrote:
If you can give directions (some kind of history), the dev might try to reproduce this type of issue.
If it is reproducible - a fix can be provided.
Based on my experience, if something as widely used as Linux LVM gets broken, the case is very hard to reproduce.
Yes, I'd think so too, especially since this activity (online moving of disk images) is done all the time, mostly without problems. In this case, there was a lot of activity on all storage domains, because I'm moving all my storage (> 10TB in 185 disk images) to a new storage platform. During the online move of one of the images, the metadata checksum became corrupted and the storage domain went offline.
Of course, I could dig up the engine logs and vdsm logs of when it happened, but that would be some work and I'm not very confident that the actual cause would be in there.
If any oVirt devs are interested in the logs, I'll provide them, but otherwise I think I'll just see it as an incident and move on.
Best regards,
Martijn.
On Jul 22, 2019 10:17, Martijn Grendelman <Martijn.Grendelman(a)isaac.nl> wrote:
Hi,
Thanks for the tips! I didn't know about 'pvmove', thanks.
In the meantime, I managed to get it fixed by restoring the VG metadata on the iSCSI server, that is, on the underlying Zvol directly rather than via the iSCSI session on the oVirt host. That allowed me to perform the restore without bringing all VMs down, which was important to me, because if I had to shut down VMs, I was sure I wouldn't be able to restart them before the storage domain was back online.
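The exact restore commands are not shown in this thread; a generic sketch of that kind of LVM metadata restore, with the VG name and archive file as placeholders, and a dry run before touching anything:
vgcfgrestore --list myVG                                          # list archived metadata copies
vgcfgrestore --test -f /etc/lvm/archive/myVG_00042-12345.vg myVG  # dry run against the chosen copy
vgcfgrestore -f /etc/lvm/archive/myVG_00042-12345.vg myVG         # actual restore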
Of course this is more of a Linux problem than an oVirt problem, but oVirt did cause it ;-)
Thanks,
Martijn.
On 19-7-2019 at 19:06, Strahil Nikolov wrote:
Hi Martin,
First check what went wrong with the VG, as it could be something simple.
vgcfgbackup -f VGname will create a file which you can use to compare current metadata with a previous version.
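A small sketch of that comparison, assuming the VG is called myVG and the default LVM archive location /etc/lvm/archive is in use:
vgcfgbackup -f /tmp/myVG-current.cfg myVG                # dump the current metadata to a file
latest=$(ls -t /etc/lvm/archive/myVG_*.vg | head -1)     # newest archived copy kept by LVM
diff "$latest" /tmp/myVG-current.cfg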
If you have Linux boxes - you can add disks from another storage and then pvmove the data inside the VM. Of course, you will need to reinstall grub on the new OS disk, or you won't be able to boot afterwards.
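A sketch of that in-VM migration; the device names and the VG name rootvg are placeholders, and the grub step assumes a BIOS-booted EL7 guest:
pvcreate /dev/vdb             # new disk presented from the new storage
vgextend rootvg /dev/vdb
pvmove /dev/vda2 /dev/vdb     # move all extents off the old PV
vgreduce rootvg /dev/vda2
grub2-install /dev/vdb        # reinstall the bootloader on the new disk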
If possible, try with a test VM before proceeding with important ones.
Backing up the VMs is very important , because working on LVM metada
5 years, 4 months
Re: Storage domain 'Inactive' but still functional
by Strahil
I have a theory... but without any proof it will remain a theory.
The storage volumes are just VGs over shared storage. The SPM host is supposed to be the only one working with the LVM metadata, but I have observed that when someone executes a simple LVM command (for example lvs, vgs or pvs) on one host while another command is running on another host, your metadata can get corrupted, due to the lack of clvmd.
As a protection, I can suggest that you try the following solution:
1. Create new iSCSI lun
2. Share it to all nodes and create the storage domain. Set it to maintenance.
3. Start dlm & clvmd services on all hosts
4. Convert the VG of your shared storage domain to have a 'cluster'-ed flag:
vgchange -c y mynewVG
5. Check the lvs of that VG.
6. Activate the storage domain.
Of course, test it on a test cluster before implementing it on Prod.
This is one of the approaches used in Linux HA clusters in order to avoid LVM metadata corruption.
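A compact command-level sketch of steps 3-6, assuming the dlm and lvm2-cluster (clvmd) packages are installed on every host; mynewVG is the example name used above:
systemctl start dlm clvmd      # on every host in the cluster
vgchange -c y mynewVG          # mark the VG as clustered
lvs mynewVG                    # verify the LVs are still listed correctly
# then activate the storage domain again from the engine UI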
Best Regards,
Strahil Nikolov
On Jul 22, 2019 15:46, Martijn Grendelman <Martijn.Grendelman(a)isaac.nl> wrote:
>
> Hi,
>
> On 22-7-2019 at 14:30, Strahil wrote:
>>
>> If you can give directions (some kind of history), the dev might try to reproduce this type of issue.
>>
>> If it is reproducible - a fix can be provided.
>>
>> Based on my experience, if something as widely used as Linux LVM gets broken, the case is very hard to reproduce.
>
>
> Yes, I'd think so too, especially since this activity (online moving of disk images) is done all the time, mostly without problems. In this case, there was a lot of activity on all storage domains, because I'm moving all my storage (> 10TB in 185 disk images) to a new storage platform. During the online move of one of the images, the metadata checksum became corrupted and the storage domain went offline.
>
> Of course, I could dig up the engine logs and vdsm logs of when it happened, but that would be some work and I'm not very confident that the actual cause would be in there.
>
> If any oVirt devs are interested in the logs, I'll provide them, but otherwise I think I'll just see it as an incident and move on.
>
> Best regards,
> Martijn.
>
>
>
>
> On Jul 22, 2019 10:17, Martijn Grendelman <Martijn.Grendelman(a)isaac.nl> wrote:
>>>
>>> Hi,
>>>
>>> Thanks for the tips! I didn't know about 'pvmove', thanks.
>>>
>>> In the meantime, I managed to get it fixed by restoring the VG metadata on the iSCSI server, that is, on the underlying Zvol directly rather than via the iSCSI session on the oVirt host. That allowed me to perform the restore without bringing all VMs down, which was important to me, because if I had to shut down VMs, I was sure I wouldn't be able to restart them before the storage domain was back online.
>>>
>>> Of course this is more of a Linux problem than an oVirt problem, but oVirt did cause it ;-)
>>>
>>> Thanks,
>>> Martijn.
>>>
>>>
>>>
>>> On 19-7-2019 at 19:06, Strahil Nikolov wrote:
>>>>
>>>> Hi Martin,
>>>>
>>>> First check what went wrong with the VG, as it could be something simple.
>>>> vgcfgbackup -f VGname will create a file which you can use to compare current metadata with a previous version.
>>>>
>>>> If you have Linux boxes - you can add disks from another storage and then pvmove the data inside the VM. Of course, you will need to reinstall grub on the new OS disk, or you won't be able to boot afterwards.
>>>> If possible, try with a test VM before proceeding with important ones.
>>>>
>>>> Backing up the VMs is very important , because working on LVM metada
5 years, 4 months
USB turns off and won't come alive
by Darin Schmidt
I have 2 USB controllers installed via a riser card plugged into an M.2 slot (like those used for mining with GPUs). Everything works great for a long time, then my Windows 10 VM freezes and I lose all USB activity. lspci still sees the devices:
0a:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
0b:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
43:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
44:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
I assume 0a and 0b are the same physical card, as it's supposed to have 2 channels each; same with 44 and 43.
I tried the following for both 44 and 43:
echo "1" > /sys/bus/pci/devices/0000\:43\:00.0/remove
echo "1" > /sys/bus/pci/rescan
echo "1" > /sys/bus/pci/devices/0000\:43\:00.0/reset
This did not work. I cannot determine if it's just the VM that's no longer seeing/using the hardware, or if it's the hardware itself. I wonder if it's a power state thing as well? Nothing I plug into the card seems to be recognized. Any suggestions? Rebooting the VM doesn't help either. It appears the hardware is functioning, but anything you plug into it isn't being detected.
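One more thing that may be worth trying, sketched under the assumption that the controllers are bound to vfio-pci for passthrough (substitute the actual driver and address reported by lspci -k): unbind and rebind the device's driver instead of removing it from the bus.
lspci -k -s 43:00.0                                        # confirm which driver is in use
echo 0000:43:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind   # detach the device from the driver
echo 0000:43:00.0 > /sys/bus/pci/drivers/vfio-pci/bind     # re-attach so it re-initializes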
5 years, 4 months
Import VM from ovirt-exported ova
by david.paning@gmx.ch
Hello,
I installed a new oVirt 4.3.4 from scratch and am attempting to restore the VMs which ran perfectly fine on my last oVirt 4.2.8, where I exported the VMs to the export domain and also as OVA files.
Importing the export domain fails, which I plan to debug later.
But importing the OVA file via the Administration Portal also fails with
Error ID: 1153
Message: Failed to import the VM to Data Center, Cluster.
The VM name is still recognised by the import routine, which tells me the file is readable...
Now my questions:
Is there any option to try the same from CLI, hoping to get better clues for what is going on?
Are there specific log options which could give some clues?
Would it be better in general to import the VM from an export-storage domain instead of using an OVA export?
Any hint would be highly appreciated.
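One possible CLI avenue, sketched on the assumption that virt-v2v 1.38 or later is available on a host that can reach the engine API; the engine URL, storage domain, cluster, CA file and password file are placeholders:
virt-v2v -i ova /path/to/vm.ova \
    -o rhv-upload \
    -oc https://engine.example.com/ovirt-engine/api \
    -op /tmp/ovirt-admin-password \
    -os my_data_domain \
    -of qcow2 \
    -oo rhv-cafile=/tmp/ca.pem \
    -oo rhv-cluster=Default
Adding -v -x to the command prints verbose debugging output, which may give better clues than the Administration Portal error.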
Best
5 years, 4 months
Re: major network changes
by Strahil
Hi Carl,
I think there is another thread here related to the migration to another network.
As far as I know, the liveliness check tries to access the engine's health page.
Does the new engine's IP have an A/PTR record set up?
Also, check the engine logs, once the HostedEngine VM is up and running.
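As a quick check from one of the hosts (the FQDN is a placeholder; this health page is what the liveliness check is generally understood to poll):
curl -k https://engine.example.com/ovirt-engine/services/health
# a healthy engine answers with something like "DB Up!Welcome to Health Status!"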
Best Regards,
Strahil Nikolov
On Jul 23, 2019 16:13, carl langlois <crl.langlois(a)gmail.com> wrote:
>
> Hi,
>
> We have managed to stabilize the DNS updates in our network. Now the current situation is:
> I have 3 hosts that can run the engine (hosted-engine).
> They were all in the 10.8.236.x network. Now I have moved one of them to the 10.16.248.x network.
>
> If I boot the engine on one of the hosts that is in the 10.8.236.x network, the engine comes up with status "good". I can access the engine UI. I can see all my hosts, even the one in the 10.16.248.x network.
>
> But if I boot the engine on the hosted-engine host that was switched to 10.16.248.x, the engine boots. I can ssh to it, but the status is always "fail for liveliness check".
> The main difference is that when I boot on the host that is in the 10.16.248.x network, the engine gets an address in the 248.x network.
>
> On the engine I have this in /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log:
> 2019-07-23 09:05:30|MFzehi|YYTDiS|jTq2w8|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
> The engine.log seems okay.
>
> So I need to understand what this "liveliness check" does (or tries to do), so I can investigate why the engine status is not becoming good.
>
> The initial deployment was done in the 10.8.236.x network. Maybe it has something to do with that.
>
> Thanks & Regards
>
> Carl
>
> On Thu, Jul 18, 2019 at 8:53 AM Miguel Duarte de Mora Barroso <mdbarroso(a)redhat.com> wrote:
>>
>> On Thu, Jul 18, 2019 at 2:50 PM Miguel Duarte de Mora Barroso
>> <mdbarroso(a)redhat.com> wrote:
>> >
>> > On Thu, Jul 18, 2019 at 1:57 PM carl langlois <crl.langlois(a)gmail.com> wrote:
>> > >
>> > > Hi Miguel,
>> > >
>> > > I have managed to change the config for the ovn-controller
>> > > with these commands:
>> > > ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=ssl:10.16.248.74:6642
>> > > ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=10.16.248.65
>> > > and restarting the services.
>> >
>> > Yes, that's what the script is supposed to do, check [0].
>> >
>> > Not sure why running vdsm-tool didn't work for you.
>> >
>> > >
>> > > But even with this I still have the "fail for liveliness check" when starting the oVirt engine. One thing I noticed with our new network is that the reverse DNS does not work (IP -> hostname). The forward lookup is working fine. I am checking with our IT team why it is not working.
>> >
>> > Do you guys use OVN? If not, you could disable the provider, install
>> > the hosted-engine VM, then, if needed, re-add / re-activate it .
>>
>> I'm assuming it fails for the same reason you've stated initially -
>> i.e. ovn-controller is involved; if it is not, disregard this msg :)
>> >
>> > [0] - https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/se...
>> >
>> > >
>> > > Regards.
>> > > Carl
>> > >
>> > > On Thu, Jul 18, 2019 at 4:03 AM Miguel Duarte de Mora Barroso <mdbarroso(a)redhat.com> wrote:
>> > >>
>> > >> On Wed, Jul 17, 2019 at 7:07 PM carl langlois <crl.langlois(a)gmail.com> wrote:
>> > >> >
>> > >> > Hi
>> > >> > Here is the output of the command
>> > >> >
>> > >> > [root@ovhost1 ~]# vdsm-tool --vvverbose ovn-config 10.16.248.74 ovirtmgmt
>> > >> > MainThread::DEBUG::2019-07-17 13:02:52,
5 years, 4 months
Stuck in "Finalizing" disk upload phase
by Vrgotic, Marko
Dear oVirt,
I initiated an upload of a qcow2 disk image for CentOS 6.5.
It reached the finalizing phase and then started throwing the following errors:
2019-07-17 14:40:51,480Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-86) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Finalizing successful transfer for Upload disk 'av-07-centos-65-base' (disk id: '3452459d-aec6-430e-9509-1d9ca815b2d8', image id: 'b44659a9-607a-4eeb-a255-99532fd4fce4')
2019-07-17 14:40:51,480Z WARN [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-86) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Failed to stop image transfer session. Ticket does not exist for image '3452459d-aec6-430e-9509-1d9ca815b2d8'
2019-07-17 14:41:01,572Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-19) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Finalizing successful transfer for Upload disk 'av-07-centos-65-base' (disk id: '3452459d-aec6-430e-9509-1d9ca815b2d8', image id: 'b44659a9-607a-4eeb-a255-99532fd4fce4')
2019-07-17 14:41:01,574Z WARN [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-19) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Failed to stop image transfer session. Ticket does not exist for image '3452459d-aec6-430e-9509-1d9ca815b2d8'
2019-07-17 14:41:11,690Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-7) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Finalizing successful transfer for Upload disk 'av-07-centos-65-base' (disk id: '3452459d-aec6-430e-9509-1d9ca815b2d8', image id: 'b44659a9-607a-4eeb-a255-99532fd4fce4')
2019-07-17 14:41:11,690Z WARN [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-7) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Failed to stop image transfer session. Ticket does not exist for image '3452459d-aec6-430e-9509-1d9ca815b2d8'
2019-07-17 14:41:21,781Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Finalizing successful transfer for Upload disk 'av-07-centos-65-base' (disk id: '3452459d-aec6-430e-9509-1d9ca815b2d8', image id: 'b44659a9-607a-4eeb-a255-99532fd4fce4')
2019-07-17 14:41:21,782Z WARN [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [a43180ec-afc7-429e-9f30-9e851eaf7ce7] Failed to stop image transfer session. Ticket does not exist for image '3452459d-aec6-430e-9509-1d9ca815b2d8'
I cannot cancel it or stop it, neither via the UI nor via the force option of the ovirt_disk module.
Help!
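For reference, a hedged sketch of cancelling the transfer through the engine REST API; the endpoint form is assumed to follow the usual action pattern, and the URL, credentials and TRANSFER_ID are placeholders:
# List current image transfers to find the transfer ID
curl -k -u admin@internal:PASSWORD \
    https://engine.example.com/ovirt-engine/api/imagetransfers
# Ask the engine to cancel a specific transfer
curl -k -u admin@internal:PASSWORD -X POST \
    -H "Content-Type: application/xml" -d "<action/>" \
    https://engine.example.com/ovirt-engine/api/imagetransfers/TRANSFER_ID/cancel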
oVirt 4.3.4.3-1, running with CentOS 7.6 hosts.
Kindly awaiting your reply.
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
ActiveVideo
5 years, 4 months
major network changes
by carl langlois
Hi
We are in the process of changing our network connection. Our current network
is using 10.8.256.x and we will change to 10.16.248.x. We have an HA oVirt
cluster (around 10 nodes) currently configured on the 10.8.256.x network. So my
question is: is it possible to relocate the oVirt cluster to the
10.16.248.x network? We have tried to move everything to the new network without
success. All the nodes seem to boot up properly, and our gluster storage also
works properly.
When we try to start the hosted-engine, it comes up but fails the liveliness
check. We have noticed in /var/log/openvswitch/ovn-controller.log that
it is trying to connect to the old IP address of the hosted-engine VM.
2019-07-16T18:41:29.483Z|01992|reconnect|INFO|ssl:10.8.236.244:6642: waiting 8 seconds before reconnect
2019-07-16T18:41:37.489Z|01993|reconnect|INFO|ssl:10.8.236.244:6642: connecting...
2019-07-16T18:41:45.497Z|01994|reconnect|INFO|ssl:10.8.236.244:6642: connection attempt timed out
So my question is: where does the 10.8.236.244 come from?
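A quick way to check where ovn-controller gets that address; the OVN southbound address is stored in the Open_vSwitch table's external-ids, the same keys the ovs-vsctl/vdsm-tool commands in the related thread set:
ovs-vsctl get Open_vSwitch . external-ids:ovn-remote     # configured southbound DB (ssl:<ip>:6642)
ovs-vsctl get Open_vSwitch . external-ids:ovn-encap-ip   # local tunnel endpoint address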
The routing table for one of our hosts looks like this:
Destination   Gateway   Genmask         Flags  Metric  Ref  Use  Iface
default       gateway   0.0.0.0         UG     0       0    0    ovirtmgmt
10.16.248.0   0.0.0.0   255.255.255.0   U      0       0    0    ovirtmgmt
link-local    0.0.0.0   255.255.0.0     U      1002    0    0    eno1
link-local    0.0.0.0   255.255.0.0     U      1003    0    0    eno2
link-local    0.0.0.0   255.255.0.0     U      1025    0    0    ovirtmgmt
Any help would be really appreciated.
Regards
Carl
5 years, 4 months