Querying which LUNs are associated with a specific VM's disks
by Sandro Bonazzola
I got a question on the oVirt Italia Telegram group about how to get which LUNs
are used by the disks attached to a specific VM.
This information doesn't seem to be exposed in the API or in the engine DB.
Has anybody ever tried something like this?
Thanks,
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
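At least for disks attached as *direct LUNs*, the disk resource in the REST API does carry the LUN under `lun_storage`, so one approach is `GET /ovirt-engine/api/vms/{vm_id}/diskattachments` followed by `GET /ovirt-engine/api/disks/{disk_id}` per attachment (for image disks on an iSCSI domain there is no per-disk LUN, only the domain's LUN set). A sketch of pulling the LUN GUIDs out of such a response — the sample XML below is handmade to match the API schema, and the disk id is invented:

```python
import xml.etree.ElementTree as ET

# Handmade sample shaped like GET /ovirt-engine/api/disks/{disk_id}
# for a direct LUN disk; the id values are invented.
SAMPLE_DISK_XML = """
<disk id="d1">
  <name>db_disk</name>
  <storage_type>lun</storage_type>
  <lun_storage>
    <logical_units>
      <logical_unit id="3600601604a003a00ee4b8ec05aa5ec11"/>
    </logical_units>
  </lun_storage>
</disk>
"""

def lun_ids(disk_xml):
    """Return the LUN GUIDs referenced by a direct LUN disk element."""
    root = ET.fromstring(disk_xml)
    return [lu.get("id")
            for lu in root.findall("./lun_storage/logical_units/logical_unit")]

print(lun_ids(SAMPLE_DISK_XML))  # -> ['3600601604a003a00ee4b8ec05aa5ec11']
```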
Reported Devices - Shows all interfaces
by Jean-Louis Dupond
Hi,
To find out which guest NIC is mapped to which oVirt NIC, you can make the
following API call:
/ovirt-engine/api/vms/4c1d50b2-4eee-46d6-a1b1-e4d9e21edaa6/nics/76ff8008-ae2e-46da-aaf4-8cc589dd0c12/reporteddevices
I would expect reporteddevices to show only the devices for the
specified NIC, but that is not the case: all of them are displayed.
Is this a bug? Or was this done by design?
Thanks
Jean-Louis
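Until someone confirms whether this is by design, one workaround is to filter client-side by the MAC address of the vNIC in question, since each reported_device carries the guest MAC. A sketch against a handmade sample shaped like the reporteddevices response (the device names and MAC values are invented):

```python
import xml.etree.ElementTree as ET

# Handmade sample shaped like GET .../nics/{nic_id}/reporteddevices;
# the engine returns every guest device here, so we filter by MAC.
SAMPLE = """
<reported_devices>
  <reported_device><name>eth0</name><mac><address>56:6f:1a:00:00:01</address></mac></reported_device>
  <reported_device><name>eth1</name><mac><address>56:6f:1a:00:00:02</address></mac></reported_device>
</reported_devices>
"""

def devices_for_mac(xml_text, nic_mac):
    """Return names of reported devices whose guest MAC matches the vNIC MAC."""
    root = ET.fromstring(xml_text)
    return [d.findtext("name") for d in root.findall("reported_device")
            if d.findtext("mac/address") == nic_mac]

print(devices_for_mac(SAMPLE, "56:6f:1a:00:00:02"))  # -> ['eth1']
```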
oVirt installation on SAS RAID
by muhammad.riyaz@ajex.ae
I have a Dell PowerEdge 710 server with a SAS RAID 1 array on a PERC 6/i controller. I am trying to install oVirt 4.4, but I cannot find the destination disk: the destination disk page shows only my USB boot device. This is probably a SAS RAID driver issue; if so, how can I load a driver during the installation?
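If a missing PERC driver is indeed the cause (EL8 dropped several older megaraid_sas device IDs, so the PERC 6/i may need a third-party driver disk, e.g. from ELRepo), Anaconda-based installers such as the oVirt node installer can load a driver update disk at boot with the `inst.dd` option appended to the kernel command line from the boot menu. The target below is only an example (a driver disk labelled DRIVERDISK); the driver image itself has to be obtained separately:

```text
inst.dd=hd:LABEL=DRIVERDISK
```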
Cannot add Storage Domain using offload iSCSI
by lkopyt@gmail.com
Hi,
some facts:
ovirt-node 4.3.10 (based on CentOS 7)
hosts are HP BL460c blades with a network card supporting iSCSI HBA offload (no iSCSI NICs are visible at the OS level, and I had to configure the IP addresses manually since they were not inherited from the BIOS)
# iscsiadm -m iface
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
be2iscsi.c4:34:6b:b3:85:75.ipv4.0 be2iscsi,c4:34:6b:b3:85:75,172.40.2.21,<empty>,iqn.1990-07.com.emulex:ovirt1worker1
be2iscsi.c4:34:6b:b3:85:75.ipv6.0 be2iscsi,c4:34:6b:b3:85:75,<empty>,<empty>,iqn.1990-07.com.emulex:ovirt1worker1
be2iscsi.c4:34:6b:b3:85:71.ipv6.0 be2iscsi,c4:34:6b:b3:85:71,<empty>,<empty>,iqn.1990-07.com.emulex:ovirt1worker1
be2iscsi.c4:34:6b:b3:85:71.ipv4.0 be2iscsi,c4:34:6b:b3:85:71,172.40.1.21,<empty>,iqn.1990-07.com.emulex:ovirt1worker1
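For reference, each line of `iscsiadm -m iface` output has the form `<iface_name> <transport>,<hwaddress>,<ipaddress>,<net_ifacename>,<initiatorname>`. A small sketch parsing one of the exact lines above makes the offload ifaces easier to compare:

```python
def parse_iface(line):
    """Split one `iscsiadm -m iface` line into its named fields."""
    name, rest = line.split(maxsplit=1)
    transport, mac, ip, netif, iqn = rest.split(",")
    none_if_empty = lambda v: None if v == "<empty>" else v
    return {"iface": name, "transport": transport,
            "mac": none_if_empty(mac), "ip": none_if_empty(ip),
            "netif": none_if_empty(netif), "iqn": none_if_empty(iqn)}

# One of the actual lines from this host:
line = ("be2iscsi.c4:34:6b:b3:85:75.ipv4.0 "
        "be2iscsi,c4:34:6b:b3:85:75,172.40.2.21,<empty>,"
        "iqn.1990-07.com.emulex:ovirt1worker1")
print(parse_iface(line)["ip"])  # -> 172.40.2.21
```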
# iscsiadm -m session
be2iscsi: [1] 172.40.2.1:3260,12 iqn.1992-04.com.emc:cx.ckm00143501947.a0 (non-flash)
be2iscsi: [2] 172.40.2.2:3260,6 iqn.1992-04.com.emc:cx.ckm00143501947.b1 (non-flash)
be2iscsi: [5] 172.40.1.1:3260,5 iqn.1992-04.com.emc:cx.ckm00143501947.a1 (non-flash)
be2iscsi: [6] 172.40.1.2:3260,4 iqn.1992-04.com.emc:cx.ckm00143501947.b0 (non-flash)
[root@worker1 ~]# multipath -l
3600601604a003a00ee4b8ec05aa5ec11 dm-47 DGC ,VRAID
size=100G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 3:0:2:1 sds 65:32 active undef running
| `- 4:0:0:1 sdo 8:224 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 3:0:3:1 sdu 65:64 active undef running
`- 4:0:1:1 sdq 65:0 active undef running
...
the target SAN storage is detected and the exposed LUNs are visible. I can even partition them, create filesystems and mount them in the OS, all when doing it manually step by step.
when trying to add a Storage Domain over iSCSI, the LUNs/targets are nicely visible in the GUI, but after choosing a LUN the domain becomes locked and finally enters detached mode. It cannot be attached to the Data Center.
in engine.log similar entry for each host:
2022-03-16 22:27:40,736+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1111) [e8778752-1bc1-4ba9-b9c0-ce651d35d824] START, ConnectStorageServerVDSCommand(HostName = worker1, StorageServerConnectionManagementVDSParameters:{hostId='d009f919-b817-4220-874e-edb0e072faa1', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='ISCSI', connectionList='[StorageServerConnections:{id='4dd97e5d-c162-4997-8eda-3d8881c44e31', connection='172.40.1.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='6e52a5bb-0157-4cbe-baa3-cfc8001d35b2', connection='172.40.2.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='87194270-bb0e-49d8-9700-17436f2a3e28', connection='172.40.1.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a1', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='ef8e2fbd-cbf6-45e9-8e83-f85a50001c2d', connection='172.40.2.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b1', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 317c3ffd
...
2022-03-16 22:30:40,836+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1111) [e8778752-1bc1-4ba9-b9c0-ce651d35d824] Command 'ConnectStorageServerVDSCommand(HostName = worker1, StorageServerConnectionManagementVDSParameters:{hostId='d009f919-b817-4220-874e-edb0e072faa1', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='ISCSI', connectionList='[StorageServerConnections:{id='4dd97e5d-c162-4997-8eda-3d8881c44e31', connection='172.40.1.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='6e52a5bb-0157-4cbe-baa3-cfc8001d35b2', connection='172.40.2.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='87194270-bb0e-49d8-9700-17436f2a3e28', connection='172.40.1.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a1', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='ef8e2fbd-cbf6-45e9-8e83-f85a50001c2d', connection='172.40.2.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b1', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'})' execution failed: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues
How can I make this work?
I read that offload iSCSI is supported, but all the documentation I found covers software iSCSI.
Any hints welcome.
thanks
Moving from oVirt 4.4.4 to 4.4.10
by ayoub souihel
Dear all,
I hope you are doing well.
I have a cluster of 2 nodes and one standalone Manager, all running on top of CentOS 8.3, which is already EOL. I would like to upgrade to the latest version, which is based on CentOS Stream, but I am really confused about the best way to do it. I have drafted this action plan; please advise:
- Redeploy a new engine on CentOS Stream.
- Restore the oVirt backup on this new engine.
- Redeploy one host with 4.4.10.
- Add the node to the cluster.
- Move the VMs to the new host.
- Upgrade the second node.
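For the backup and restore steps, the usual tool is engine-backup. A command outline follows — it only runs on the engine hosts, and the flags are as I recall them from the 4.4 documentation, so double-check `engine-backup --help` on your version:

```shell
# On the old 4.4.4 engine:
engine-backup --mode=backup --scope=all \
              --file=engine-backup.tar.bz2 --log=backup.log

# On the freshly installed CentOS Stream engine, before running engine-setup:
engine-backup --mode=restore --file=engine-backup.tar.bz2 --log=restore.log \
              --provision-all-databases --restore-permissions
engine-setup
```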
Thank you in advance.
Regards,
oVirt Nodes 'Setting Host state to Non-Operational' - looking for the cause.
by simon@justconnect.ie
2 days ago I found that 2 of the 3 oVirt nodes had been set to 'Non-Operational'. GlusterFS seemed to be OK from the command line, but the oVirt engine WebUI was reporting 2 out of 3 bricks per volume as down, and the event logs were filling up with the following types of messages.
****************************************************
Failed to connect Host ddmovirtprod03 to the Storage Domains data03.
The error message for connection ddmovirtprod03-strg:/data03 returned by VDSM was: Problem while trying to mount target
Failed to connect Host ddmovirtprod03 to Storage Server
Host ddmovirtprod03 cannot access the Storage Domain(s) data03 attached to the Data Center DDM_Production_DC. Setting Host state to Non-Operational.
Failed to connect Host ddmovirtprod03 to Storage Pool
Host ddmovirtprod01 reports about one of the Active Storage Domains as Problematic.
Host ddmovirtprod01 cannot access the Storage Domain(s) data03 attached to the Data Center DDM_Production_DC. Setting Host state to Non-Operational.
Failed to connect Host ddmovirtprod01 to Storage Pool DDM_Production_DC
****************************************************
The following is from the vdsm.log on host01:
****************************************************
[root@ddmovirtprod01 vdsm]# tail -f /var/log/vdsm/vdsm.log | grep "WARN"
2022-03-15 11:37:14,299+0000 WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:data03/.prob-6c101766-4e5d-40c6-8fa8-0f7e3b3e931e', error: 'Stale file handle' (init:461)
2022-03-15 11:37:24,313+0000 WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-c3fa017b-94dc-47d1-89a4-8ee046509a32', error: 'Stale file handle' (init:461)
2022-03-15 11:37:34,325+0000 WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-e173ecac-4d4d-4b59-a437-61eb5d0beb83', error: 'Stale file handle' (init:461)
2022-03-15 11:37:44,337+0000 WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-baf13698-0f43-4672-90a4-86cecdf9f8d0', error: 'Stale file handle' (init:461)
2022-03-15 11:37:54,350+0000 WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-1e92fdfd-d8e9-48b4-84a9-a2b84fc0d14c', error: 'Stale file handle' (init_:461)
****************************************************
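The `.prob-<uuid>` files in those messages are vdsm's storage-domain health probe; when the Gluster mount goes stale, that create fails with 'Stale file handle'. A minimal sketch of the same kind of probe, which can be pointed at a mount path to check it by hand (here aimed at a throwaway temp directory):

```python
import os
import tempfile
import uuid

def probe_mount(mount_point):
    """Try to create and remove a vdsm-style probe file; returns False on any
    failure (a stale Gluster/NFS mount typically raises OSError ESTALE)."""
    path = os.path.join(mount_point, ".prob-%s" % uuid.uuid4())
    try:
        fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
        os.close(fd)
        os.unlink(path)
        return True
    except OSError:
        return False

print(probe_mount(tempfile.gettempdir()))  # -> True on a healthy filesystem
```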
After trying different methods to resolve it without success, I did the following.
1. Moved any VM disks using Storage Domain data03 onto other Storage Domains.
2. Placed the data03 Storage Domain into Maintenance mode.
3. Placed host03 into Maintenance mode, stopped Gluster services and rebooted it.
4. Ensured all bricks were up, the peers were connected and healing had started.
5. Once the Gluster volumes were healed I activated host03, at which point host01 also activated.
6. Host01 was showing as disconnected on most bricks, so I rebooted it, which resolved this.
7. I activated Storage Domain data03 without issue.
The system has been left for 24 hours with no further issues.
The issue is now resolved, but it would be helpful to know what happened to cause the problems with Storage Domain data03, and where I should look to confirm it.
Regards
Simon...
ovirt-node-ng state "Bond status: NONE"
by Renaud RAKOTOMALALA
Hello,
I am trying to add a new ovirt-node-ng 4.4.10 node to my cluster managed by an ovirt-engine version 4.4.10.
My cluster is composed of other ovirt-node-ng which have been successively updated from version 4.4.4 to version 4.4.10 without any problem.
This new node integrates normally into the cluster; however, when I look at the network status in the "Network interfaces" tab, I see that all interfaces are "down".
There is an icon on the "bond0" interface that says: "Bond state: NONE".
I compared the content of "/etc/sysconfig/network-scripts" between a hypervisor that works and the one that has the problem, and I noticed that a whole bunch of files are missing, in particular the "ifup/ifdown..." scripts. The folder contains only the cluster-specific files plus the "ovirtmgmt" interface.
The hypervisor that has the problem seems to be perfectly functional, and ovirt-engine does not report any problem.
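As a side note, on EL8-based nodes networking is handled by NetworkManager/nmstate, so missing ifup/ifdown scripts under /etc/sysconfig/network-scripts are expected there rather than a symptom. To inspect the bond from the host itself, /proc/net/bonding/bond0 is still authoritative; a small parser sketch (the sample text is handmade in the usual layout of that file, with invented interface names):

```python
# Handmade sample in the usual /proc/net/bonding/bond0 layout:
SAMPLE = """\
Ethernet Channel Bonding Driver: v3.7.1
Bonding Mode: fault-tolerance (active-backup)
MII Status: up
Slave Interface: eno1
MII Status: up
Slave Interface: eno2
MII Status: down
"""

def bond_summary(text):
    """Summarize bond health: overall MII status plus per-slave status."""
    pairs = [l.split(": ", 1) for l in text.splitlines() if ": " in l]
    mii = [v for k, v in pairs if k == "MII Status"]
    slaves = [v for k, v in pairs if k == "Slave Interface"]
    # the first "MII Status" line refers to the bond itself, the rest to slaves
    return {"bond_up": mii[0] == "up", "slaves": dict(zip(slaves, mii[1:]))}

print(bond_summary(SAMPLE))
```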
Have you already encountered this type of problem?
Cheers,
Renaud
setup Ovirt-node without internet connection
by david
Hi
I am unable to add an oVirt node to the cluster without an internet connection.
The node was installed from a pre-built ISO image:
ovirt-node-ng-installer-4.4.10-2022030308.el8.iso
The node has no internet connection.
The error message displayed:
"Host test-node2 installation failed. Task Ensure Python3 is installed for
CentOS/RHEL8 hosts failed to execute. Please check logs for more details:
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20220315080157-test-node2.imp.int-b8648e98-c2eb-481c-8155-38319cb041f7.log"
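The failing task runs the package manager on the host, so my assumption is that with no internet access the deploy can only succeed if the node can reach a package source on the local network (the host-deploy log referenced in the error should confirm this). A sketch of a repo file pointing the node at an internal mirror — the baseurl is a placeholder, not a real mirror:

```ini
# /etc/yum.repos.d/local-mirror.repo -- baseurl is a placeholder
[local-baseos]
name=Local BaseOS mirror
baseurl=http://mirror.example.lan/centos-stream/8-stream/BaseOS/x86_64/os/
enabled=1
gpgcheck=0
```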