all hosts lost scenario - restore engine from backup file - LOCKED HOST 4.3 version
by goosesk blabla
Hi,
I have a problem with a LOCKED host after restoring the engine.
I am testing many disaster scenarios, and one of them is losing all hosts, including the self-hosted engine, with only a backup file of the engine available.
When all hosts and the engine VM were destroyed at the same time, I installed a new self-hosted engine on the hardware of one of the lost hosts and then tried to restore the engine:
engine-backup --mode=restore --scope=all --file=backup/file/ovirt-engine-backup --log=backup/log/ovirt-engine-backup.log --no-restore-permissions --provision-db --provision-dwh-db --provision-reports-db
After this, the lost engine was restored successfully. I had an offline data center, dead hosts and VMs. I added new hosts, which were able to connect to the storage domain automatically, and I was able to start VMs.
A new host cannot have the same IP as an already dead host, or the same hardware UUID. This can be solved by putting the old dead host into maintenance mode and then deleting it; the same hardware and IP can then be reused.
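(Side note: the maintenance + delete step can also be done through the REST API. A rough sketch, with the engine FQDN, password and host ID as placeholders:)
# put the dead host into maintenance (deactivate), then remove it
curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
     -d "<action/>" https://ENGINE_FQDN/ovirt-engine/api/hosts/HOST_ID/deactivate
curl -k -u admin@internal:PASSWORD -X DELETE \
     https://ENGINE_FQDN/ovirt-engine/api/hosts/HOST_ID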
But the host where the old engine was running is LOCKED. You cannot migrate the engine VM, cannot start the hosted engine from the new engine GUI, and cannot put the host into maintenance mode and delete it. This is a problem when your hardware for hosts is limited.
I would like to ask how to solve this situation. Is there any way to "reinstall" the old hosts? I see that the SSL certificates were changed with the installation of the new hosts, but I don't know whether there is a way to bring the old dead hosts back.
Is there any way to destroy the old engine VM so the host can be added back to work?
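(And just an idea I have not verified: if the old HostedEngine VM is only stuck in a locked status in the engine database, the unlock_entity.sh utility shipped with the engine might at least show what is locked:)
# on the engine machine: list entities the engine considers locked
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q
# if the old VM shows up there, it might be unlockable with: unlock_entity.sh -t vm <vm_id>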
Thank you
2 years, 9 months
Hosted engine deployment fails with storage domain error
by Eugène Ngontang
Hi,
Hope you are well on your end.
I'm still trying to set up a brand new hosted engine, but I'm hitting the "hosted engine deployment fails with storage domain error" issue shown in the screenshot.
I can't figure out what's going on. I have checked the NFS server and everything seems to be working fine, as well as the NFS mount point on the client side (the RHV host).
Can anyone here help me move quickly with this troubleshooting? Any ideas?
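For reference, this is the kind of manual check I mean on the host (server name and export path below are placeholders; the export is expected to be owned by vdsm:kvm, i.e. 36:36):
showmount -e NFS_SERVER                      # is the export visible from the host?
mount -t nfs NFS_SERVER:/EXPORT/PATH /mnt/test
ls -ln /mnt/test                             # expect uid/gid 36:36 (vdsm:kvm)
umount /mnt/test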
Regards,
Eugène NG
--
LesCDN <http://lescdn.com>
engontang(a)lescdn.com
------------------------------------------------------------
*Men need a leader, and the leader needs men! The habit does not make the monk, but when people see you, they judge you!*
2 years, 9 months
Re: mdadm vs. JBOD
by jonas@rabe.ch
Thanks to Nikolov and Strahil for the valuable input! I was off for a few weeks, so I apologize if I'm reviving a zombie thread.
I am a bit confused about where to go with this environment after the discontinuation of the hyperconverged setup. What alternative options are there for us? Or do you think going the Gluster way would still be advisable, even though it seems to be getting discontinued over time?
Thanks for any input on this!
Best regards,
Jonas
On 1/22/22 14:31, Strahil Nikolov via Users wrote:
> Using the wizard is utilizing the Gluster Ansible roles.
> I would highly recommend using it, unless you know what you are doing (for example, storage alignment when using hardware RAID).
>
> Keep in mind that the DHT xlator (the logic in distributed volumes) is shard aware, so your shards are spread between subvolumes and additional performance can be gained. So using replicated-distributed volumes has its benefits.
>
> If you decide to avoid the software RAID, use only replica 3 volumes, as with SSDs/NVMes the failures are usually not physical but logical (maximum writes reached -> predictive failure -> total failure).
>
> Also, consider mounting via noatime/relatime and context="system_u:object_r:glusterd_brick_t:s0" for your gluster bricks.
>
> Best Regards,
> Strahil Nikolov
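Regarding the brick mount options mentioned above, a sketch of what that could look like for an XFS brick (device path and mount point are made up for illustration):
mount -o noatime,inode64,context="system_u:object_r:glusterd_brick_t:s0" \
      /dev/gluster_vg/gluster_lv /gluster_bricks/data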
2 years, 9 months
Querying which LUNs are associated with a specific VM's disks
by Sandro Bonazzola
I got a question on the oVirt Italia Telegram group about how to find out which LUNs
are used by the disks attached to a specific VM.
This information doesn't seem to be exposed in the API or in the engine DB.
Has anybody ever tried something like this?
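The only partial angle that comes to mind (untested, and it would only cover directly attached LUN disks, not images on a block domain) is walking the disk attachments via the REST API; FQDN, password and IDs below are placeholders:
curl -k -u admin@internal:PASSWORD \
     https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID/diskattachments
# then for each attached disk; direct LUN disks report the LUN under logical_units
curl -k -u admin@internal:PASSWORD \
     https://ENGINE_FQDN/ovirt-engine/api/disks/DISK_ID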
Thanks,
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
2 years, 9 months
Reported Devices - Shows all interfaces
by Jean-Louis Dupond
Hi,
To find out which guest NIC is mapped to which oVirt NIC, you can make the following API call:
/ovirt-engine/api/vms/4c1d50b2-4eee-46d6-a1b1-e4d9e21edaa6/nics/76ff8008-ae2e-46da-aaf4-8cc589dd0c12/reporteddevices
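For completeness, I query it with plain curl (credentials and engine FQDN replaced, of course):
curl -k -u admin@internal:PASSWORD -H "Accept: application/xml" \
     "https://ENGINE_FQDN/ovirt-engine/api/vms/4c1d50b2-4eee-46d6-a1b1-e4d9e21edaa6/nics/76ff8008-ae2e-46da-aaf4-8cc589dd0c12/reporteddevices"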
I would expect reporteddevices to show only the device for the specified NIC, but that's not the case: the devices for all of the VM's NICs are displayed.
Is this a bug? Or was this done by design?
Thanks
Jean-Louis
2 years, 9 months
OVIRT INSTALLATION IN SAS RAID
by muhammad.riyaz@ajex.ae
I have a Dell PowerEdge 710 server with SAS RAID 1 on a PERC 6/i controller. I am trying to install oVirt 4.4, but I cannot find the destination disk: the destination disk page only shows my USB boot device. This is probably a SAS RAID driver issue; if so, how can I load the driver during the installation?
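If I remember correctly, support for the PERC 6/i was dropped from the EL8 megaraid_sas driver, so a driver update disk (DUD, e.g. from ELRepo) would probably be needed; Anaconda can load one via the inst.dd boot option (the URL below is just a placeholder):
# append to the installer kernel command line:
inst.dd                                           # prompt interactively for a driver disk
# or point at a driver update image over the network, e.g.:
inst.dd=https://example.com/dd-megaraid_sas.iso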
2 years, 9 months
Cannot add Storage Domain using offload iSCSI
by lkopyt@gmail.com
Hi,
some facts:
oVirt Node 4.3.10 (based on CentOS 7)
The hosts are HP BL460c blades with a network card that supports iSCSI HBA offload (no iSCSI NICs are visible at the OS level, but I had to configure the IP addresses manually since they were not inherited from the BIOS).
# iscsiadm -m iface
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
be2iscsi.c4:34:6b:b3:85:75.ipv4.0 be2iscsi,c4:34:6b:b3:85:75,172.40.2.21,<empty>,iqn.1990-07.com.emulex:ovirt1worker1
be2iscsi.c4:34:6b:b3:85:75.ipv6.0 be2iscsi,c4:34:6b:b3:85:75,<empty>,<empty>,iqn.1990-07.com.emulex:ovirt1worker1
be2iscsi.c4:34:6b:b3:85:71.ipv6.0 be2iscsi,c4:34:6b:b3:85:71,<empty>,<empty>,iqn.1990-07.com.emulex:ovirt1worker1
be2iscsi.c4:34:6b:b3:85:71.ipv4.0 be2iscsi,c4:34:6b:b3:85:71,172.40.1.21,<empty>,iqn.1990-07.com.emulex:ovirt1worker1
# iscsiadm -m session
be2iscsi: [1] 172.40.2.1:3260,12 iqn.1992-04.com.emc:cx.ckm00143501947.a0 (non-flash)
be2iscsi: [2] 172.40.2.2:3260,6 iqn.1992-04.com.emc:cx.ckm00143501947.b1 (non-flash)
be2iscsi: [5] 172.40.1.1:3260,5 iqn.1992-04.com.emc:cx.ckm00143501947.a1 (non-flash)
be2iscsi: [6] 172.40.1.2:3260,4 iqn.1992-04.com.emc:cx.ckm00143501947.b0 (non-flash)
[root@worker1 ~]# multipath -l
3600601604a003a00ee4b8ec05aa5ec11 dm-47 DGC ,VRAID
size=100G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 3:0:2:1 sds 65:32 active undef running
| `- 4:0:0:1 sdo 8:224 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 3:0:3:1 sdu 65:64 active undef running
`- 4:0:1:1 sdq 65:0 active undef running
...
The target SAN storage is detected and the exposed LUNs are visible. I can even partition them, create a filesystem and mount them in the OS, when doing everything manually, step by step.
When trying to add an iSCSI storage domain, the LUNs/targets are nicely visible in the GUI, but after choosing a LUN the domain becomes locked and finally ends up detached. It cannot be attached to the data center.
in engine.log similar entry for each host:
2022-03-16 22:27:40,736+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1111) [e8778752-1bc1-4ba9-b9c0-ce651d35d824] START, ConnectStorageServerVDSCommand(HostName = worker1, StorageServerConnectionManagementVDSParameters:{hostId='d009f919-b817-4220-874e-edb0e072faa1', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='ISCSI', connectionList='[StorageServerConnections:{id='4dd97e5d-c162-4997-8eda-3d8881c44e31', connection='172.40.1.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='6e52a5bb-0157-4cbe-baa3-cfc8001d35b2', connection='172.40.2.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='87194270-bb0e-49d8-9700-17436f2a3e28', connection='172.40.1.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a1', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='ef8e2fbd-cbf6-45e9-8e83-f85a50001c2d', connection='172.40.2.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b1', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 317c3ffd
...
2022-03-16 22:30:40,836+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1111) [e8778752-1bc1-4ba9-b9c0-ce651d35d824] Command 'ConnectStorageServerVDSCommand(HostName = worker1, StorageServerConnectionManagementVDSParameters:{hostId='d009f919-b817-4220-874e-edb0e072faa1', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='ISCSI', connectionList='[StorageServerConnections:{id='4dd97e5d-c162-4997-8eda-3d8881c44e31', connection='172.40.1.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='6e52a5bb-0157-4cbe-baa3-cfc8001d35b2', connection='172.40.2.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='87194270-bb0e-49d8-9700-17436f2a3e28', connection='172.40.1.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a1', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='ef8e2fbd-cbf6-45e9-8e83-f85a50001c2d', connection='172.40.2.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b1', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'})' execution failed: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues
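For what it's worth, two more checks I can run (the iface name and portal are taken from the iscsiadm output above; the vdsm log path is the standard one):
# manual login through a specific offload iface
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00143501947.a0 -p 172.40.2.1:3260 \
         -I be2iscsi.c4:34:6b:b3:85:75.ipv4.0 --login
# watch vdsm on the host while the engine runs ConnectStorageServerVDSCommand
tail -f /var/log/vdsm/vdsm.log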
How can I make this work?
I read that offload iSCSI is supported, but all the documentation I found covers software iSCSI.
Any hints are welcome.
thanks
2 years, 9 months
Moving from oVirt 4.4.4 to 4.4.10
by ayoub souihel
Dear all,
I hope you are doing well.
I have a cluster of 2 nodes and one standalone Manager, all of them running on top of CentOS 8.3, which is already EOL. I would like to upgrade to the latest version, which is based on CentOS Stream, but I am really confused about the best way to do it. I have drafted this action plan, please advise:
- Redeploy a new engine with CentOS Stream.
- Restore the oVirt backup on this new engine (rough commands sketched below).
- Redeploy one host with 4.4.10.
- Add the node to the cluster.
- Move the VMs to the new host.
- Upgrade the second node.
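For the backup/restore steps, the commands I have in mind are roughly these (file names are placeholders; I will double-check the exact restore options against the 4.4 documentation):
# on the old engine
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log
# on the freshly deployed CentOS Stream engine, after installing ovirt-engine but before engine-setup
engine-backup --mode=restore --scope=all --file=engine-backup.tar.gz --log=engine-restore.log \
              --provision-db --provision-dwh-db --restore-permissions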
thank you in advance .
Regards,
2 years, 9 months