On Wed, Feb 23, 2022 at 6:24 PM Muli Ben-Yehuda <muli@lightbitslabs.com> wrote:
>
> Thanks for the detailed instructions, Nir. I'm going to scrounge up some hardware.
> By the way, if anyone else would like to work on NVMe/TCP support, for NVMe/TCP target you can either use Lightbits (talk to me offline for details) or use the upstream Linux NVMe/TCP target. Lightbits is a clustered storage system while upstream is a single target, but the client side should be close enough for vdsm/ovirt purposes.
I played with NVMe/TCP a little bit, using qemu to create a virtual
NVMe disk, exporting it with the kernel NVMe/TCP target on one VM, and
consuming it on another VM:
https://futurewei-cloud.github.io/ARM-Datacenter/qemu/nvme-of-tcp-vms/
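
For anyone who wants to try this quickly, the initiator side flow is
roughly the following. This is only a sketch of what I did by hand, not
os_brick code; the NQN and address are made-up examples:

    # Connect to an NVMe/TCP target and locate the controller that the
    # kernel created for it. NQN/address/port below are example values.
    import glob
    import subprocess

    TARGET_NQN = "nqn.2022-02.com.example:nvme:target1"  # example NQN
    TARGET_ADDR = "192.0.2.10"                           # example address
    TARGET_PORT = "4420"                                 # standard NVMe/TCP port

    def connect_target():
        # Same as: nvme connect -t tcp -n <nqn> -a <addr> -s 4420
        subprocess.run(
            ["nvme", "connect", "-t", "tcp",
             "-n", TARGET_NQN, "-a", TARGET_ADDR, "-s", TARGET_PORT],
            check=True)

    def find_controller(nqn=TARGET_NQN):
        # Every NVMe controller exposes its subsystem NQN in sysfs.
        for ctrl in glob.glob("/sys/class/nvme/nvme*"):
            with open(ctrl + "/subsysnqn") as f:
                if f.read().strip() == nqn:
                    return ctrl  # e.g. /sys/class/nvme/nvme7
        return None
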
One question about device naming - do we always get the same device name
on all hosts?
To support VM migration, every device must have a unique name in the cluster.
With multipath we always have a unique name, since we disable "user
friendly names", so we always have:
/dev/mapper/{wwid}
With rbd we also do not use /dev/rbdN but a unique path:
/dev/rbd/poolname/volume-vol-id
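
For NVMe we may be able to do something similar, since the kernel
exposes a globally unique id per namespace in sysfs. A quick sketch of
looking it up (the device name is just an example):

    import os

    def nvme_wwid(device):
        # e.g. device = "/dev/nvme7n42" -> read /sys/block/nvme7n42/wwid,
        # which returns something like "uuid...." or "eui....".
        name = os.path.basename(device)
        with open(f"/sys/block/{name}/wwid") as f:
            return f.read().strip()

I think udev also creates /dev/disk/by-id/nvme-* links based on this id,
but I did not verify that they are identical on all hosts.
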
How do we ensure a cluster-unique device path? If os_brick does not handle
it, we can do it in oVirt, for example:
/run/vdsm/managedvolumes/{uuid} -> /dev/nvme7n42
but I think this should be handled in cinderlib, since OpenStack has
the same problem with migration.