I believe I have found the relevant error in the engine.log.

__________________________________________________________________________________________________________________________________________________________

Exception: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Cannot create Logical Volume: u'vgname=0e4ca0da-1721-4ea7-92af-25233a8679e0 lvname=bb20b34d-96b9-4dfe-b122-0172546e51ce err=[
  '/dev/mapper/36589cfc0000003413cb667d8cd46ffb8: read failed after 0 of 4096 at 0: Input/output error',
  '/dev/mapper/36589cfc0000003413cb667d8cd46ffb8: read failed after 0 of 4096 at 858993393664: Input/output error',
  '/dev/mapper/36589cfc0000003413cb667d8cd46ffb8: read failed after 0 of 4096 at 858993451008: Input/output error',
  'WARNING: Error counts reached a limit of 3. Device /dev/mapper/36589cfc0000003413cb667d8cd46ffb8 was disabled',
  'Logical Volume "bb20b34d-96b9-4dfe-b122-0172546e51ce" already exists in volume group "0e4ca0da-1721-4ea7-92af-25233a8679e0"'
]', code = 550'

__________________________________________________________________________________________________________________________________________________________
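Before touching LVM it may be worth confirming the multipath device itself is healthy, since the log also shows read failures on it before the "already exists" message. A minimal sketch, with the WWID taken from the log above; the commands are printed with echo so the sketch is safe to run as-is (drop the echo to actually execute):

```shell
DEV=36589cfc0000003413cb667d8cd46ffb8   # multipath WWID from the log above

# Show path states -- all paths should report "active ready":
echo multipath -ll "$DEV"

# If the outage was transient, force a reload of the multipath maps:
echo multipath -r
```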


So my question is: how do I safely delete that already-existing Logical Volume, so that I can move the disk from my iSCSI storage to the FC? This VM used to exist on the FC before I migrated, which is how I ended up in this situation.
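Not an authoritative answer, but the usual LVM sequence for a confirmed-stale LV looks like the sketch below. The VG/LV names are copied from the engine.log excerpt; the commands are printed with echo as a safety net, to be run for real only after verifying that nothing (no snapshot, no running VM) still references the LV:

```shell
# VG/LV names from the engine.log excerpt above.
VG=0e4ca0da-1721-4ea7-92af-25233a8679e0
LV=bb20b34d-96b9-4dfe-b122-0172546e51ce

# 1. Inspect the leftover LV first (size, tags) and compare with the live disk:
echo lvs -o lv_name,lv_size,lv_tags "$VG"

# 2. Deactivate it, then remove it from the volume group:
echo lvchange -an "$VG/$LV"
echo lvremove "$VG/$LV"
```

lvremove will refuse to act on an active LV, so the lvchange step matters on any host that still has it open.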



On 12/03/2018 01:13 PM, Jacob Green wrote:

    Any thoughts on how I remove an un-imported VM from my Fibre Channel storage? I cannot move my working VMs back to the FC, and I believe it is because the older un-imported version is creating a conflict.


On 11/29/2018 03:12 PM, Jacob Green wrote:

    Ok, so here is the situation: before moving/importing our primary storage domain, I exported a few VMs so they would not need to go down during the "big migration". They are now residing on some iSCSI storage. Now that the Fibre Channel storage is back in place, I cannot move the VMs' disks to the Fibre Channel, because there is an unimported version of each VM residing on the Fibre Channel. I need to delete or remove those VMs from the Fibre Channel so I can move the working VMs back to it.

I hope that makes sense, but essentially I have a current duplicate VM running on iSCSI that I need to move to the Fibre Channel. However, because the VM used to exist on the Fibre Channel and has the same disk name, I cannot move it there. It also seems odd to me that there is no way to clear the VMs from the storage without importing them. There must be a way?


Thank you.



On 11/28/2018 10:14 PM, Jacob Green wrote:

Hey, wanted to thank you for your reply, and to let you know that late after I sent this email, my colleague and I figured out we needed to enable the FCoE key in the oVirt manager and tell oVirt that the eno51 and eno52 interfaces are FCoE.
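For the archives, the setup we ended up with matches the FCoE procedure from the oVirt docs as I understand it; a sketch, where the property regex is the documented default and may need adjusting if your engine already defines other custom network properties:

```shell
# On each host (assumption: vdsm-hook-fcoe is the package providing the hook):
yum install vdsm-hook-fcoe

# On the engine, expose the "fcoe" custom network property, then restart:
engine-config -s UserDefinedNetworkCustomProperties='fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*$'
systemctl restart ovirt-engine

# Finally, in Setup Host Networks, set the custom property fcoe=enable
# on the FCoE interfaces (eno51 and eno52 in our case).
```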


However, I ran into another issue. Now that our Fibre Channel storage is imported to the new environment and we are able to import VMs, there are some VMs that we will not be importing, but we see no way to delete them from the storage in the GUI. Or I am just missing it.


TLDR: How does one delete VMs that are available for import, without importing them from the storage domain?
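The REST API can at least show what is still sitting unregistered on a domain. A hypothetical sketch against the v4 API as I understand it; the engine URL, credentials, and IDs are all placeholders, the DELETE permanently discards that VM's data on the domain, and the curl commands are echoed so the sketch is safe to run as-is:

```shell
ENGINE=https://engine.example.com/ovirt-engine/api   # placeholder engine URL
SD_ID='<storage-domain-uuid>'                        # placeholder storage domain ID

# List VMs still unregistered on the storage domain:
echo curl -s -k -u 'admin@internal:password' \
  "$ENGINE/storagedomains/$SD_ID/vms?unregistered=true"

# Remove one without importing it (destroys its OVF/disks on that domain):
echo curl -s -k -u 'admin@internal:password' -X DELETE \
  "$ENGINE/storagedomains/$SD_ID/vms/<vm-id>"
```

If I recall correctly, the 4.2 Administration Portal exposes the same operation under Storage > Domains > (domain) > VM Import, where a Remove button is available for unregistered VMs.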


Thank you.



On 11/28/2018 01:26 AM, Luca 'remix_tj' Lorenzetto wrote:
On Wed, Nov 28, 2018 at 6:54 AM Jacob Green <jgreen@aasteel.com> wrote:
Any help or insight into Fibre Channel with oVirt 4.2 would be greatly
appreciated.

Hello Jacob,

We're running a cluster of 6 HP BL460 G9 with Virtual Connect without issues.

[root@kvmsv003 ~]# dmidecode | grep -A3 '^System Information'
System Information
Manufacturer: HP
Product Name: ProLiant BL460c Gen9
Version: Not Specified

We have, however, a different kind of CNA:

[root@kvmsv003 ~]# cat /sys/class/fc_host/host1/device/scsi_host/host1/modeldesc
HP FlexFabric 20Gb 2-port 650FLB Adapter

But I see it is running the same module you're reporting:

[root@kvmsv003 ~]# lsmod | grep bnx
bnx2fc                103061  0
cnic                   67392  1 bnx2fc
libfcoe                58854  2 fcoe,bnx2fc
libfc                 116357  3 fcoe,libfcoe,bnx2fc
scsi_transport_fc      64007  4 fcoe,lpfc,libfc,bnx2fc
[root@fapikvmpdsv003 ~]#

Since the FCoE connection is managed directly by Virtual Connect, I don't
get any FCoE information shown by fcoeadm:

[root@kvmsv003 ~]# fcoeadm -i
[root@kvmsv003 ~]#

Are you sure you set up the right configuration on the Virtual Connect side?

Luca


-- 
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690


_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PR2ABJDQLY7JIR3SDSMYWXMHPE2RSBIH/


