On 2020/8/14 1:16 PM, Yan Zhao wrote:
On Thu, Aug 13, 2020 at 12:24:50PM +0800, Jason Wang wrote:
> On 2020/8/10 3:46 PM, Yan Zhao wrote:
>>> driver is it handled by?
>> It looks like devlink is specific to network devices; in
>> devlink.h it says
>> include/uapi/linux/devlink.h - Network physical device Netlink
>> interface,
>
> Actually not. I think there was some discussion last year, and the
> conclusion was to remove this comment.
>
> It supports IB and probably vDPA in the future.
>
hmm... sorry, I didn't find the discussion you refer to, only the
discussion below about why devlink was added.
https://www.mail-archive.com/netdev@vger.kernel.org/msg95801.html
>This doesn't seem to be too much related to networking? Why can't something
>like this be in sysfs?
It is related to networking quite a bit. There have been a couple of
iterations of this, including sysfs and configfs implementations. A
consensus was reached that this should be done via netlink. I believe
netlink is really the best fit for this purpose; sysfs is not a good
idea.
See the discussion here:
https://patchwork.ozlabs.org/project/netdev/patch/20191115223355.1277139-...
https://www.mail-archive.com/netdev@vger.kernel.org/msg96102.html
>there is already a way to change eth/ib via
>echo 'eth' > /sys/bus/pci/drivers/mlx4_core/0000:02:00.0/mlx4_port1
>
>sounds like this is another way to achieve the same?
It is. However, the current way is driver-specific, which is not correct.
For mlx5 we need the same, and it cannot be done in this way. So devlink is
the correct way to go.
https://lwn.net/Articles/674867/
There is a need for some userspace API that would allow exposing things
that are not directly related to any device class like net_device or
ib_device, but rather chip-wide/switch-ASIC-wide stuff.
Use cases:
1) get/set of port type (Ethernet/InfiniBand)
2) monitoring of hardware messages to and from chip
3) setting up port splitters - split a port into multiple ones and squash them
again, enabling usage of a splitter cable
4) setting up shared buffers - shared among multiple ports within one chip
We can actually also retrieve the same information through sysfs, e.g.
|- [path to device]
|--- migration
| |--- self
| | |---device_api
| | |---mdev_type
| | |---software_version
| | |---device_id
| | |---aggregator
| |--- compatible
| | |---device_api
| | |---mdev_type
| | |---software_version
| | |---device_id
| | |---aggregator
Yes, but:
- You need one file per attribute (one syscall per attribute)
- Each attribute is coupled with a kobject
All of the above seems unnecessary.
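Just to make the cost concrete, here is a minimal userspace sketch that
walks the layout proposed above (illustrative only; the attribute names are
taken from your tree, and the "[path to device]" part is left as an
argument):

#include <stdio.h>
#include <string.h>

/*
 * Read one attribute from "<path to device>/migration/self/".
 * Every attribute costs its own open()/read()/close() round trip.
 */
static int read_attr(const char *dev, const char *name, char *buf, size_t len)
{
	char path[512];
	FILE *f;

	snprintf(path, sizeof(path), "%s/migration/self/%s", dev, name);
	f = fopen(path, "r");			/* open()  */
	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {		/* read()  */
		fclose(f);
		return -1;
	}
	fclose(f);				/* close() */
	buf[strcspn(buf, "\n")] = '\0';
	return 0;
}

int main(int argc, char **argv)
{
	const char *attrs[] = { "device_api", "mdev_type", "software_version",
				"device_id", "aggregator" };
	char val[128];

	if (argc < 2)
		return 1;

	/* One syscall round trip per attribute, per device. */
	for (size_t i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
		if (!read_attr(argv[1], attrs[i], val, sizeof(val)))
			printf("%s = %s\n", attrs[i], val);
	return 0;
}

With netlink, the same set of attributes could be fetched (or dumped for
many devices at once) in a single message exchange.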
Another point: as we discussed in another thread, it's really hard to
make sure the above API works for all types of devices and frameworks. So
having a vendor-specific API looks much better.
>> I feel like it's not very appropriate for a GPU driver to use
>> this interface. Is that right?
>
> I think not, though most of the users are switch or Ethernet devices. It
> doesn't prevent you from inventing new abstractions.
So we would need to patch the devlink core and the userspace devlink tool?
e.g. devlink migration
It's quite flexible: you can extend devlink, invent your own, or let the
management software talk devlink directly.
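For example, if devlink did not fit, a vendor could register its own
generic netlink family. A rough kernel-side sketch (every name below, the
family, the command and the attribute, is made up purely for illustration,
not an existing interface):

#include <linux/kernel.h>
#include <linux/module.h>
#include <net/genetlink.h>

/* Hypothetical attributes carried by the vendor migration family. */
enum {
	VEND_MIG_ATTR_UNSPEC,
	VEND_MIG_ATTR_VERSION,		/* nul-terminated version/compat blob */
	__VEND_MIG_ATTR_MAX,
};
#define VEND_MIG_ATTR_MAX (__VEND_MIG_ATTR_MAX - 1)

/* Hypothetical commands. */
enum {
	VEND_MIG_CMD_UNSPEC,
	VEND_MIG_CMD_GET_VERSION,
};

static const struct nla_policy vend_mig_policy[VEND_MIG_ATTR_MAX + 1] = {
	[VEND_MIG_ATTR_VERSION] = { .type = NLA_NUL_STRING },
};

static int vend_mig_get_version(struct sk_buff *skb, struct genl_info *info)
{
	/* Build and send a reply carrying VEND_MIG_ATTR_VERSION here. */
	return 0;
}

static const struct genl_ops vend_mig_ops[] = {
	{
		.cmd	= VEND_MIG_CMD_GET_VERSION,
		.doit	= vend_mig_get_version,
	},
};

static struct genl_family vend_mig_family = {
	.name		= "vendor_migration",
	.version	= 1,
	.maxattr	= VEND_MIG_ATTR_MAX,
	.policy		= vend_mig_policy,
	.module		= THIS_MODULE,
	.ops		= vend_mig_ops,
	.n_ops		= ARRAY_SIZE(vend_mig_ops),
};

static int __init vend_mig_init(void)
{
	return genl_register_family(&vend_mig_family);
}

static void __exit vend_mig_exit(void)
{
	genl_unregister_family(&vend_mig_family);
}

module_init(vend_mig_init);
module_exit(vend_mig_exit);
MODULE_LICENSE("GPL");

The userspace devlink tool would of course not know about such a family,
so the management software (or a small vendor tool) would have to speak it
directly.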
> Note that devlink is based on netlink, and netlink has been widely used
> by various subsystems other than networking.
The advantage of netlink that I see is that it can monitor device status and
notify the upper layer that the migration database needs to be updated.
I may have missed something, but why is this needed?
From the device point of view, the following capabilities should be
sufficient to support live migration:
- set/get device state
- report dirty page tracking
- set/get capability
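Something roughly like the following is what I have in mind (purely
hypothetical C; none of these names exist in any current framework):

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/*
 * Hypothetical per-device contract for live migration, matching the
 * three items above: device state, dirty page tracking, capabilities.
 */
struct mig_dirty_bitmap {
	uint64_t start_pfn;		/* first page frame covered       */
	uint64_t npages;		/* number of pages covered        */
	unsigned long *bitmap;		/* one bit per page, set = dirty  */
};

struct mig_device_ops {
	/* Save/restore opaque device state. */
	ssize_t (*get_state)(void *dev, void *buf, size_t len);
	ssize_t (*set_state)(void *dev, const void *buf, size_t len);

	/* Report pages the device has dirtied since the last call. */
	int (*report_dirty)(void *dev, struct mig_dirty_bitmap *bm);

	/* Query/select a capability (e.g. a compatibility/version blob). */
	ssize_t (*get_cap)(void *dev, uint32_t cap, void *buf, size_t len);
	int (*set_cap)(void *dev, uint32_t cap, const void *buf, size_t len);
};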
But I'm not sure whether OpenStack would like to use this capability.
As Sean said, it's heavy for OpenStack. It's heavy for the vendor driver
as well :)
Well, it depends on several factors. Just counting LOCs, sysfs-based
attributes are not lightweight.
Thanks
And devlink monitor now listens for the notifications and dumps the state
changes. If we want to use it, we need to let it forward the notifications
and the dumped info to OpenStack, right?
Thanks
Yan