On Sat, Jun 29, 2024 at 5:19 PM Yuriy <ykhokhlov(a)gmail.com> wrote:
Greetings to all!
Nir, there are some rumors here that RH directly or indirectly wants to resume support
for oVirt. Do you know anything about it?
I have not heard about it, but I have not been working in this area for a while.
There are others investing in oVirt:
https://careers.team.blue/jobs/4597190-java-python-developer-virtualizati...
Maybe Jean-Louis can add more info on this.
Also the users(a)ovirt.org list seems to be very active recently.
Nir
Best wishes!
Yuriy.
On Thu, Nov 23, 2023, 9:01 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>
> On Thu, Nov 2, 2023 at 4:40 PM Yuriy <ykhokhlov(a)gmail.com> wrote:
>>
>> Hi Nir!
>>
>> This is Yuriy.
>> We agreed to continue the discussion via email.
>
>
> So the options are:
>
> 1. Using Managed Block Storage (cinderlib) with a driver that supports NVMe/TCP.
>
> Latest oVirt has the needed changes to configure this. Benny and I tested
> with the Lightbits[1] driver in a virtualized environment. This is basically
> a POC that may or may not work for you, or may require more work that you
> will have to do yourself, since not much development is happening in oVirt
> now.
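>
> For reference, here is a minimal sketch of exercising the Lightbits driver
> with cinderlib directly, outside oVirt, to check connectivity before setting
> up Managed Block Storage. The lightos_* option names come from the Cinder
> LightOS driver and all values are placeholders, so check them against your
> Cinder version:
>
>     import cinderlib as cl
>
>     # Cinderlib keeps volume metadata in a persistence database;
>     # a local sqlite file is enough for a quick test.
>     cl.setup(persistence_config={
>         'storage': 'db',
>         'connection': 'sqlite:///cinderlib-test.sqlite',
>     })
>
>     # Driver options below are assumptions based on the Cinder LightOS
>     # driver; adjust them to your environment.
>     backend = cl.Backend(
>         volume_backend_name='lightbits',
>         volume_driver='cinder.volume.drivers.lightos.LightOSVolumeDriver',
>         lightos_api_address='192.0.2.10',
>         lightos_api_port=443,
>         lightos_jwt='<api-token>',
>     )
>
>     # Create, attach, and clean up a test volume.
>     vol = backend.create_volume(size=10)
>     conn = vol.attach()
>     print('attached at', conn.path)
>     vol.detach()
>     vol.delete()
>
> In oVirt itself the same driver options go into the driver options of the
> Managed Block Storage domain when you create it in the Admin Portal.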
>
> 2. Using the devices via multipath
>
> Legacy storage domains are based on multipath. It may be possible to use
> multipath on top of NVMe devices, and in that case they look like normal
> LUNs, so you can create a storage domain from such devices.
>
> oVirt will not handle connections for you, and all the devices must be
> connected to all nodes at the same time, just like FC/iSCSI LUNs. You will
> likely not get the performance benefit of NVMe/TCP.
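>
> If the multipath devices do show up like normal LUNs, creating a storage
> domain from one can also be scripted with the Python SDK. This is only a
> sketch, assuming the device is exposed like an FC LUN; the engine URL, host
> name and LUN WWID are placeholders:
>
>     import ovirtsdk4 as sdk
>     import ovirtsdk4.types as types
>
>     connection = sdk.Connection(
>         url='https://engine.example.com/ovirt-engine/api',
>         username='admin@internal',
>         password='password',
>         ca_file='ca.pem',
>     )
>
>     sds_service = connection.system_service().storage_domains_service()
>
>     # Create a data domain from the multipath device, addressed by its WWID.
>     sd = sds_service.add(
>         types.StorageDomain(
>             name='nvme-multipath-data',
>             type=types.StorageDomainType.DATA,
>             host=types.Host(name='host1'),
>             storage=types.HostStorage(
>                 type=types.StorageType.FCP,
>                 logical_units=[
>                     types.LogicalUnit(id='<device wwid>'),
>                 ],
>             ),
>         ),
>     )
>
>     connection.close()
>
> The new domain still has to be attached to a data center before it can be
> used, like any other FC domain.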
>
> 3. Using host devices
>
> If what you need is to use some devices (which happen to be connected via
> NVMe/TCP), maybe you can attach them to a VM directly (using host devices).
> This gives the best possible performance but no features (snapshots, backup,
> live migration, live storage migration, etc.).
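>
> Attaching a host device with the Python SDK would look roughly like this.
> Again only a sketch: the VM must be pinned to the host that sees the device,
> and the device name below is a made-up placeholder, so list the host's
> devices first to find the real one:
>
>     import ovirtsdk4 as sdk
>     import ovirtsdk4.types as types
>
>     connection = sdk.Connection(
>         url='https://engine.example.com/ovirt-engine/api',
>         username='admin@internal',
>         password='password',
>         ca_file='ca.pem',
>     )
>
>     system_service = connection.system_service()
>
>     # List the devices seen by the host to find the one to pass through.
>     hosts_service = system_service.hosts_service()
>     host = hosts_service.list(search='name=host1')[0]
>     for dev in hosts_service.host_service(host.id).devices_service().list():
>         print(dev.name)
>
>     # Attach the chosen device to the VM (placeholder name below).
>     vms_service = system_service.vms_service()
>     vm = vms_service.list(search='name=test-vm')[0]
>     vm_devices = vms_service.vm_service(vm.id).host_devices_service()
>     vm_devices.add(types.HostDevice(name='pci_0000_3b_00_0'))
>
>     connection.close()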
>
> [1] https://www.lightbitslabs.com/
>
> Nir