Re: Yuriy about NVMe over fabrics for oVirt
by Nir Soffer
On Thu, Nov 2, 2023 at 4:40 PM Yuriy <ykhokhlov(a)gmail.com> wrote:
> Hi Nir!
>
> This is Yuriy.
> We agreed to continue the subject via email.
>
So the options are:
1. Using Managed Block Storage (cinderlib) with a driver that supports
NVMe/TCP.
The latest oVirt has the needed changes to configure this. Benny and I
tested with the Lightbits[1] driver in a virtualized environment. This is
basically a POC that may or may not work for you, or may require more work
that you will have to do yourself, since not much development is happening
in oVirt now.
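For reference, a Managed Block Storage domain is configured in the webadmin
UI by passing driver options as key/value pairs. A rough sketch of what the
Lightbits options could look like (option names taken from the Cinder
lightos driver, values are placeholders - check the driver documentation
before relying on any of these):

    volume_driver = cinder.volume.drivers.lightos.LightOSVolumeDriver
    lightos_api_address = <Lightbits API endpoint>
    lightos_api_port = 443
    lightos_jwt = <API token>
    lightos_default_num_replicas = 3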
2. Using the devices via multipath
Legacy storage domains are based on multipath. It may be possible to use
multipath on top of NVMe devices, and in this case they look like normal
LUNs, so you can create a storage domain from such devices.
oVirt will not handle the connections for you, and all the devices must be
connected to all nodes at the same time, just like FC/iSCSI LUNs. You will
likely not get the performance benefit of NVMe/TCP.
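To illustrate, a minimal sketch of connecting such a device manually with
nvme-cli (address and NQN are placeholders); note that dm-multipath can
only claim the device if native NVMe multipath is disabled
(nvme_core.multipath=N on the kernel command line):

    # connect to the NVMe/TCP target (placeholder address/NQN)
    nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2023-01.com.example:subsys1

    # the namespace shows up as a local block device
    nvme list

    # with dm-multipath configured, the device is mapped like any other LUN
    multipath -ll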
3. Using host devices
If what you need is to use some devices (which happen to be connected via
NVMe/TCP), maybe you can attach them to a VM directly (using host devices).
This gives the best possible performance but no features (snapshots,
backup, live migration, live storage migration, etc.).
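oVirt drives this from the webadmin UI, but for the record, a hedged sketch
of handing a host block device straight to a VM at the libvirt level (VM
name and device path are placeholders, and this illustrates the underlying
mechanism rather than the exact oVirt flow):

    # attach the NVMe namespace to a running VM as a raw block device
    # (placeholder VM name "myvm" and device path)
    cat > nvme-disk.xml <<'EOF'
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/nvme0n1'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    EOF
    virsh attach-device myvm nvme-disk.xml --live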
[1] https://www.lightbitslabs.com/
Nir
Minor PR submitted to clean up a 404 on the website
by Geoff O'Callaghan
Hi Dev Team
I'm not sure if my email made it to the list, so I'm re-posting it for the
oVirt list archives.
I reported an issue and created a PR to fix it. Just a minor problem:
Removed broken github link and added link to public repos by gocallag · Pull Request #3170 · oVirt/ovirt-site
If someone could review/merge, that would be great. I have a few others
lined up, and just want to get the process right.
Thanks
Geoff