This series adds support for the use of user virtual addresses in the
vDPA simulator devices.
The main reason for this change is to lift the pinning of all guest memory, which is especially unnecessary for virtio devices implemented in software.
The next step would be to generalize the code in vdpa_sim to allow implementing in-kernel software devices, similar to vhost, but using vDPA so that we can reuse the same software stack (e.g. in QEMU) for both HW and SW devices.
For example, we have never merged vhost-blk, and lately there has been renewed interest in it. So it would be nice to implement it directly with vDPA, reusing the same code in the VMM for both HW and SW vDPA block devices.
The main blocker (addressed by this series) was the pinning of all guest memory, which prevented guest memory overcommit.
There are still some TODOs to be fixed, but I would like to have your feedback
on this RFC.
Thanks,
Stefano
Note: this series is based on Linux v6.1 plus a couple of fixes (which I needed to run the libblkio tests) that are already posted but not yet merged.
Tree available here: https://gitlab.com/sgarzarella/linux/-/tree/vdpa-sim-use-va
Stefano Garzarella (6):
vdpa: add bind_mm callback
vhost-vdpa: use bind_mm device callback
vringh: support VA with iotlb
vdpa_sim: make devices agnostic for work management
vdpa_sim: use kthread worker
vdpa_sim: add support for user VA
drivers/vdpa/vdpa_sim/vdpa_sim.h | 7 +-
include/linux/vdpa.h | 8 +
include/linux/vringh.h | 5 +-
drivers/vdpa/mlx5/core/resources.c | 3 +-
drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +-
drivers/vdpa/vdpa_sim/vdpa_sim.c | 132 +++++++++++++-
drivers/vdpa/vdpa_sim/vdpa_sim_blk.c | 6 +-
drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 6 +-
drivers/vhost/vdpa.c | 22 +++
drivers/vhost/vringh.c | 250 +++++++++++++++++++++------
10 files changed, 370 insertions(+), 71 deletions(-)
--
2.38.1
This new optional callback is used to bind the device to a specific
address space so the vDPA framework can use VA when this callback
is implemented.
Suggested-by: Jason Wang <[email protected]>
Signed-off-by: Stefano Garzarella <[email protected]>
---
include/linux/vdpa.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 6d0f5e4e82c2..34388e21ef3f 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -282,6 +282,12 @@ struct vdpa_map_file {
* @iova: iova to be unmapped
* @size: size of the area
* Returns integer: success (0) or error (< 0)
+ * @bind_mm: Bind the device to a specific address space
+ * so the vDPA framework can use VA when this
+ * callback is implemented. (optional)
+ * @vdev: vdpa device
+ * @mm: address space to bind
+ * @owner: process that owns the address space
* @free: Free resources that belongs to vDPA (optional)
* @vdev: vdpa device
*/
@@ -341,6 +347,8 @@ struct vdpa_config_ops {
u64 iova, u64 size);
int (*set_group_asid)(struct vdpa_device *vdev, unsigned int group,
unsigned int asid);
+ int (*bind_mm)(struct vdpa_device *vdev, struct mm_struct *mm,
+ struct task_struct *owner);
/* Free device resources */
void (*free)(struct vdpa_device *vdev);
--
2.38.1
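For context, here is a minimal sketch of how a simulator-style driver might implement the new callback. This is illustrative only: vdpasim_bind_mm() and the mm_bound field are assumptions for the example, not part of this patch.

#include <linux/sched/mm.h>

static int vdpasim_bind_mm(struct vdpa_device *vdev, struct mm_struct *mm,
			   struct task_struct *owner)
{
	struct vdpasim *vdpasim = vdpa_to_sim(vdev);

	/* Keep a reference so the address space cannot go away while the
	 * device is bound to it; it would be dropped again on unbind/free.
	 */
	mmgrab(mm);
	vdpasim->mm_bound = mm;

	return 0;
}

The vhost-vdpa side (patch 2) would then invoke this callback with the mm of the process that owns the device.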
vDPA supports the possibility of using user VAs in the iotlb messages. So, let's add support for user VAs in vringh so that the vDPA simulators can use them.
Signed-off-by: Stefano Garzarella <[email protected]>
---
include/linux/vringh.h | 5 +-
drivers/vdpa/mlx5/core/resources.c | 3 +-
drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +-
drivers/vdpa/vdpa_sim/vdpa_sim.c | 4 +-
drivers/vhost/vringh.c | 250 +++++++++++++++++++++++------
5 files changed, 207 insertions(+), 57 deletions(-)
diff --git a/include/linux/vringh.h b/include/linux/vringh.h
index 212892cf9822..c70962f16b1f 100644
--- a/include/linux/vringh.h
+++ b/include/linux/vringh.h
@@ -32,6 +32,9 @@ struct vringh {
/* Can we get away with weak barriers? */
bool weak_barriers;
+ /* Use user's VA */
+ bool use_va;
+
/* Last available index we saw (ie. where we're up to). */
u16 last_avail_idx;
@@ -279,7 +282,7 @@ void vringh_set_iotlb(struct vringh *vrh, struct vhost_iotlb *iotlb,
spinlock_t *iotlb_lock);
int vringh_init_iotlb(struct vringh *vrh, u64 features,
- unsigned int num, bool weak_barriers,
+ unsigned int num, bool weak_barriers, bool use_va,
struct vring_desc *desc,
struct vring_avail *avail,
struct vring_used *used);
diff --git a/drivers/vdpa/mlx5/core/resources.c b/drivers/vdpa/mlx5/core/resources.c
index 9800f9bec225..e0bab3458b40 100644
--- a/drivers/vdpa/mlx5/core/resources.c
+++ b/drivers/vdpa/mlx5/core/resources.c
@@ -233,7 +233,8 @@ static int init_ctrl_vq(struct mlx5_vdpa_dev *mvdev)
if (!mvdev->cvq.iotlb)
return -ENOMEM;
- vringh_set_iotlb(&mvdev->cvq.vring, mvdev->cvq.iotlb, &mvdev->cvq.iommu_lock);
+ vringh_set_iotlb(&mvdev->cvq.vring, mvdev->cvq.iotlb,
+ &mvdev->cvq.iommu_lock);
return 0;
}
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 90913365def4..81ba0867e2c8 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -2504,7 +2504,7 @@ static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev)
if (mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ))
err = vringh_init_iotlb(&cvq->vring, mvdev->actual_features,
- MLX5_CVQ_MAX_ENT, false,
+ MLX5_CVQ_MAX_ENT, false, false,
(struct vring_desc *)(uintptr_t)cvq->desc_addr,
(struct vring_avail *)(uintptr_t)cvq->driver_addr,
(struct vring_used *)(uintptr_t)cvq->device_addr);
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index b20689f8fe89..2e0ee7280aa8 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -67,7 +67,7 @@ static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx)
{
struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx];
- vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, false,
+ vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, false, false,
(struct vring_desc *)(uintptr_t)vq->desc_addr,
(struct vring_avail *)
(uintptr_t)vq->driver_addr,
@@ -87,7 +87,7 @@ static void vdpasim_vq_reset(struct vdpasim *vdpasim,
vq->cb = NULL;
vq->private = NULL;
vringh_init_iotlb(&vq->vring, vdpasim->dev_attr.supported_features,
- VDPASIM_QUEUE_MAX, false, NULL, NULL, NULL);
+ VDPASIM_QUEUE_MAX, false, false, NULL, NULL, NULL);
vq->vring.notify = NULL;
}
diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index 11f59dd06a74..c1f77dc93482 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -1094,15 +1094,99 @@ EXPORT_SYMBOL(vringh_need_notify_kern);
#if IS_REACHABLE(CONFIG_VHOST_IOTLB)
-static int iotlb_translate(const struct vringh *vrh,
- u64 addr, u64 len, u64 *translated,
- struct bio_vec iov[],
- int iov_size, u32 perm)
+static int iotlb_translate_va(const struct vringh *vrh,
+ u64 addr, u64 len, u64 *translated,
+ struct iovec iov[],
+ int iov_size, u32 perm)
{
struct vhost_iotlb_map *map;
struct vhost_iotlb *iotlb = vrh->iotlb;
+ u64 s = 0, last = addr + len - 1;
+ int ret = 0;
+
+ spin_lock(vrh->iotlb_lock);
+
+ while (len > s) {
+ u64 size;
+
+ if (unlikely(ret >= iov_size)) {
+ ret = -ENOBUFS;
+ break;
+ }
+
+ map = vhost_iotlb_itree_first(iotlb, addr, last);
+ if (!map || map->start > addr) {
+ ret = -EINVAL;
+ break;
+ } else if (!(map->perm & perm)) {
+ ret = -EPERM;
+ break;
+ }
+
+ size = map->size - addr + map->start;
+ iov[ret].iov_len = min(len - s, size);
+ iov[ret].iov_base = (void __user *)(unsigned long)
+ (map->addr + addr - map->start);
+ s += size;
+ addr += size;
+ ++ret;
+ }
+
+ spin_unlock(vrh->iotlb_lock);
+
+ if (translated)
+ *translated = min(len, s);
+
+ return ret;
+}
+
+static inline int copy_from_va(const struct vringh *vrh, void *dst, void *src,
+ u64 len, u64 *translated)
+{
+ struct iovec iov[16];
+ struct iov_iter iter;
+ int ret;
+
+ ret = iotlb_translate_va(vrh, (u64)(uintptr_t)src, len, translated, iov,
+ ARRAY_SIZE(iov), VHOST_MAP_RO);
+ if (ret == -ENOBUFS)
+ ret = ARRAY_SIZE(iov);
+ else if (ret < 0)
+ return ret;
+
+ iov_iter_init(&iter, READ, iov, ret, *translated);
+
+ return copy_from_iter(dst, *translated, &iter);
+}
+
+static inline int copy_to_va(const struct vringh *vrh, void *dst, void *src,
+ u64 len, u64 *translated)
+{
+ struct iovec iov[16];
+ struct iov_iter iter;
+ int ret;
+
+ ret = iotlb_translate_va(vrh, (u64)(uintptr_t)dst, len, translated, iov,
+ ARRAY_SIZE(iov), VHOST_MAP_WO);
+ if (ret == -ENOBUFS)
+ ret = ARRAY_SIZE(iov);
+ else if (ret < 0)
+ return ret;
+
+ iov_iter_init(&iter, WRITE, iov, ret, *translated);
+
+ return copy_to_iter(src, *translated, &iter);
+}
+
+static int iotlb_translate_pa(const struct vringh *vrh,
+ u64 addr, u64 len, u64 *translated,
+ struct bio_vec iov[],
+ int iov_size, u32 perm)
+{
+ struct vhost_iotlb_map *map;
+ struct vhost_iotlb *iotlb = vrh->iotlb;
+ u64 s = 0, last = addr + len - 1;
int ret = 0;
- u64 s = 0;
spin_lock(vrh->iotlb_lock);
@@ -1114,8 +1198,7 @@ static int iotlb_translate(const struct vringh *vrh,
break;
}
- map = vhost_iotlb_itree_first(iotlb, addr,
- addr + len - 1);
+ map = vhost_iotlb_itree_first(iotlb, addr, last);
if (!map || map->start > addr) {
ret = -EINVAL;
break;
@@ -1143,28 +1226,61 @@ static int iotlb_translate(const struct vringh *vrh,
return ret;
}
+static inline int copy_from_pa(const struct vringh *vrh, void *dst, void *src,
+ u64 len, u64 *translated)
+{
+ struct bio_vec iov[16];
+ struct iov_iter iter;
+ int ret;
+
+ ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)src, len, translated, iov,
+ ARRAY_SIZE(iov), VHOST_MAP_RO);
+ if (ret == -ENOBUFS)
+ ret = ARRAY_SIZE(iov);
+ else if (ret < 0)
+ return ret;
+
+ iov_iter_bvec(&iter, READ, iov, ret, *translated);
+
+ return copy_from_iter(dst, *translated, &iter);
+}
+
+static inline int copy_to_pa(const struct vringh *vrh, void *dst, void *src,
+ u64 len, u64 *translated)
+{
+ struct bio_vec iov[16];
+ struct iov_iter iter;
+ int ret;
+
+ ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)dst, len, translated, iov,
+ ARRAY_SIZE(iov), VHOST_MAP_WO);
+ if (ret == -ENOBUFS)
+ ret = ARRAY_SIZE(iov);
+ else if (ret < 0)
+ return ret;
+
+ iov_iter_bvec(&iter, WRITE, iov, ret, *translated);
+
+ return copy_to_iter(src, *translated, &iter);
+}
+
static inline int copy_from_iotlb(const struct vringh *vrh, void *dst,
void *src, size_t len)
{
u64 total_translated = 0;
while (total_translated < len) {
- struct bio_vec iov[16];
- struct iov_iter iter;
u64 translated;
int ret;
- ret = iotlb_translate(vrh, (u64)(uintptr_t)src,
- len - total_translated, &translated,
- iov, ARRAY_SIZE(iov), VHOST_MAP_RO);
- if (ret == -ENOBUFS)
- ret = ARRAY_SIZE(iov);
- else if (ret < 0)
- return ret;
-
- iov_iter_bvec(&iter, READ, iov, ret, translated);
+ if (vrh->use_va) {
+ ret = copy_from_va(vrh, dst, src,
+ len - total_translated, &translated);
+ } else {
+ ret = copy_from_pa(vrh, dst, src,
+ len - total_translated, &translated);
+ }
- ret = copy_from_iter(dst, translated, &iter);
if (ret < 0)
return ret;
@@ -1182,22 +1298,17 @@ static inline int copy_to_iotlb(const struct vringh *vrh, void *dst,
u64 total_translated = 0;
while (total_translated < len) {
- struct bio_vec iov[16];
- struct iov_iter iter;
u64 translated;
int ret;
- ret = iotlb_translate(vrh, (u64)(uintptr_t)dst,
- len - total_translated, &translated,
- iov, ARRAY_SIZE(iov), VHOST_MAP_WO);
- if (ret == -ENOBUFS)
- ret = ARRAY_SIZE(iov);
- else if (ret < 0)
- return ret;
-
- iov_iter_bvec(&iter, WRITE, iov, ret, translated);
+ if (vrh->use_va) {
+ ret = copy_to_va(vrh, dst, src,
+ len - total_translated, &translated);
+ } else {
+ ret = copy_to_pa(vrh, dst, src,
+ len - total_translated, &translated);
+ }
- ret = copy_to_iter(src, translated, &iter);
if (ret < 0)
return ret;
@@ -1212,20 +1323,36 @@ static inline int copy_to_iotlb(const struct vringh *vrh, void *dst,
static inline int getu16_iotlb(const struct vringh *vrh,
u16 *val, const __virtio16 *p)
{
- struct bio_vec iov;
- void *kaddr, *from;
int ret;
/* Atomic read is needed for getu16 */
- ret = iotlb_translate(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL,
- &iov, 1, VHOST_MAP_RO);
- if (ret < 0)
- return ret;
+ if (vrh->use_va) {
+ struct iovec iov;
+
+ ret = iotlb_translate_va(vrh, (u64)(uintptr_t)p, sizeof(*p),
+ NULL, &iov, 1, VHOST_MAP_RO);
+ if (ret < 0)
+ return ret;
- kaddr = kmap_atomic(iov.bv_page);
- from = kaddr + iov.bv_offset;
- *val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
- kunmap_atomic(kaddr);
+ ret = __get_user(*val, (__virtio16 *)iov.iov_base);
+ if (ret)
+ return ret;
+
+ *val = vringh16_to_cpu(vrh, *val);
+ } else {
+ struct bio_vec iov;
+ void *kaddr, *from;
+
+ ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)p, sizeof(*p),
+ NULL, &iov, 1, VHOST_MAP_RO);
+ if (ret < 0)
+ return ret;
+
+ kaddr = kmap_atomic(iov.bv_page);
+ from = kaddr + iov.bv_offset;
+ *val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
+ kunmap_atomic(kaddr);
+ }
return 0;
}
@@ -1233,20 +1360,36 @@ static inline int getu16_iotlb(const struct vringh *vrh,
static inline int putu16_iotlb(const struct vringh *vrh,
__virtio16 *p, u16 val)
{
- struct bio_vec iov;
- void *kaddr, *to;
int ret;
/* Atomic write is needed for putu16 */
- ret = iotlb_translate(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL,
- &iov, 1, VHOST_MAP_WO);
- if (ret < 0)
- return ret;
+ if (vrh->use_va) {
+ struct iovec iov;
- kaddr = kmap_atomic(iov.bv_page);
- to = kaddr + iov.bv_offset;
- WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
- kunmap_atomic(kaddr);
+ ret = iotlb_translate_va(vrh, (u64)(uintptr_t)p, sizeof(*p),
+ NULL, &iov, 1, VHOST_MAP_WO);
+ if (ret < 0)
+ return ret;
+
+ val = cpu_to_vringh16(vrh, val);
+
+ ret = __put_user(val, (__virtio16 *)iov.iov_base);
+ if (ret)
+ return ret;
+ } else {
+ struct bio_vec iov;
+ void *kaddr, *to;
+
+ ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL,
+ &iov, 1, VHOST_MAP_WO);
+ if (ret < 0)
+ return ret;
+
+ kaddr = kmap_atomic(iov.bv_page);
+ to = kaddr + iov.bv_offset;
+ WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
+ kunmap_atomic(kaddr);
+ }
return 0;
}
@@ -1308,6 +1451,7 @@ static inline int putused_iotlb(const struct vringh *vrh,
* @features: the feature bits for this ring.
* @num: the number of elements.
* @weak_barriers: true if we only need memory barriers, not I/O.
+ * @use_va: true if IOTLB contains user VA
* @desc: the userpace descriptor pointer.
* @avail: the userpace avail pointer.
* @used: the userpace used pointer.
@@ -1315,11 +1459,13 @@ static inline int putused_iotlb(const struct vringh *vrh,
* Returns an error if num is invalid.
*/
int vringh_init_iotlb(struct vringh *vrh, u64 features,
- unsigned int num, bool weak_barriers,
+ unsigned int num, bool weak_barriers, bool use_va,
struct vring_desc *desc,
struct vring_avail *avail,
struct vring_used *used)
{
+ vrh->use_va = use_va;
+
return vringh_init_kern(vrh, features, num, weak_barriers,
desc, avail, used);
}
--
2.38.1
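To make the new use_va parameter concrete, here is a hedged sketch of a queue-ready helper for a device whose IOTLB entries map user VAs; the function and variable names are invented for the example:

#include <linux/vringh.h>

static int example_queue_ready(struct vringh *vrh, struct vhost_iotlb *iotlb,
			       spinlock_t *iotlb_lock, u64 features,
			       unsigned int num, u64 desc_addr,
			       u64 driver_addr, u64 device_addr)
{
	/* Attach the IOTLB used to translate ring and buffer addresses. */
	vringh_set_iotlb(vrh, iotlb, iotlb_lock);

	/* use_va = true: translations are treated as user VAs, so vringh
	 * takes the iotlb_translate_va()/iovec path added above instead
	 * of building bio_vecs and kmapping pages.
	 */
	return vringh_init_iotlb(vrh, features, num,
				 false /* weak_barriers */, true /* use_va */,
				 (struct vring_desc *)(uintptr_t)desc_addr,
				 (struct vring_avail *)(uintptr_t)driver_addr,
				 (struct vring_used *)(uintptr_t)device_addr);
}

Existing callers (the mlx5 CVQ and vdpa_sim hunks above) pass use_va = false and keep the current page-based behavior.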
On Thu, Dec 15, 2022 at 12:30 AM Stefano Garzarella <[email protected]> wrote:
>
> This new optional callback is used to bind the device to a specific
> address space so the vDPA framework can use VA when this callback
> is implemented.
>
> Suggested-by: Jason Wang <[email protected]>
> Signed-off-by: Stefano Garzarella <[email protected]>
> ---
> include/linux/vdpa.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> index 6d0f5e4e82c2..34388e21ef3f 100644
> --- a/include/linux/vdpa.h
> +++ b/include/linux/vdpa.h
> @@ -282,6 +282,12 @@ struct vdpa_map_file {
> * @iova: iova to be unmapped
> * @size: size of the area
> * Returns integer: success (0) or error (< 0)
> + * @bind_mm: Bind the device to a specific address space
> + * so the vDPA framework can use VA when this
> + * callback is implemented. (optional)
> + * @vdev: vdpa device
> + * @mm: address space to bind
Do we need an unbind, or does a NULL mm mean unbind?
> + * @owner: process that owns the address space
Any reason we need the task_struct here?
Thanks
> * @free: Free resources that belongs to vDPA (optional)
> * @vdev: vdpa device
> */
> @@ -341,6 +347,8 @@ struct vdpa_config_ops {
> u64 iova, u64 size);
> int (*set_group_asid)(struct vdpa_device *vdev, unsigned int group,
> unsigned int asid);
> + int (*bind_mm)(struct vdpa_device *vdev, struct mm_struct *mm,
> + struct task_struct *owner);
>
> /* Free device resources */
> void (*free)(struct vdpa_device *vdev);
> --
> 2.38.1
>
On Fri, Dec 16, 2022 at 02:37:45PM +0800, Jason Wang wrote:
>On Thu, Dec 15, 2022 at 12:30 AM Stefano Garzarella <[email protected]> wrote:
>>
>> This new optional callback is used to bind the device to a specific
>> address space so the vDPA framework can use VA when this callback
>> is implemented.
>>
>> Suggested-by: Jason Wang <[email protected]>
>> Signed-off-by: Stefano Garzarella <[email protected]>
>> ---
>> include/linux/vdpa.h | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
>> index 6d0f5e4e82c2..34388e21ef3f 100644
>> --- a/include/linux/vdpa.h
>> +++ b/include/linux/vdpa.h
>> @@ -282,6 +282,12 @@ struct vdpa_map_file {
>> * @iova: iova to be unmapped
>> * @size: size of the area
>> * Returns integer: success (0) or error (< 0)
>> + * @bind_mm: Bind the device to a specific address space
>> + * so the vDPA framework can use VA when this
>> + * callback is implemented. (optional)
>> + * @vdev: vdpa device
>> + * @mm: address space to bind
>
>Do we need an unbind, or does a NULL mm mean unbind?
Yep, your comment in patch 6 makes it necessary. I will add it!
>
>> + * @owner: process that owns the address space
>
>Any reason we need the task_struct here?
Mainly to attach the kthread to the process cgroups, but that part is
still a TODO since I need to understand it better.
Maybe we can remove the task_struct here and use `current` directly in
the callback.
Thanks,
Stefano
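As an illustration, the simplified variant without the task_struct could look like this on the vhost-vdpa side (a sketch only, not the posted patch):

static int vhost_vdpa_bind_mm(struct vhost_vdpa *v)
{
	struct vdpa_device *vdpa = v->vdpa;
	const struct vdpa_config_ops *ops = vdpa->config;

	if (!vdpa->use_va || !ops->bind_mm)
		return 0;

	/* The process issuing the ioctl is the owner, so a driver that
	 * needs the task (e.g. for cgroups) could just use current in
	 * the callback instead of a dedicated owner argument.
	 */
	return ops->bind_mm(vdpa, v->vdev.mm);
}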
On 2022/12/16 16:17, Stefano Garzarella wrote:
> On Fri, Dec 16, 2022 at 02:37:45PM +0800, Jason Wang wrote:
>> On Thu, Dec 15, 2022 at 12:30 AM Stefano Garzarella
>> <[email protected]> wrote:
>>>
>>> This new optional callback is used to bind the device to a specific
>>> address space so the vDPA framework can use VA when this callback
>>> is implemented.
>>>
>>> Suggested-by: Jason Wang <[email protected]>
>>> Signed-off-by: Stefano Garzarella <[email protected]>
>>> ---
>>> include/linux/vdpa.h | 8 ++++++++
>>> 1 file changed, 8 insertions(+)
>>>
>>> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
>>> index 6d0f5e4e82c2..34388e21ef3f 100644
>>> --- a/include/linux/vdpa.h
>>> +++ b/include/linux/vdpa.h
>>> @@ -282,6 +282,12 @@ struct vdpa_map_file {
>>> * @iova: iova to be unmapped
>>> * @size: size of the area
>>> * Returns integer: success (0) or error (< 0)
>>> + * @bind_mm: Bind the device to a specific address space
>>> + * so the vDPA framework can use VA when this
>>> + * callback is implemented. (optional)
>>> + * @vdev: vdpa device
>>> + * @mm: address space to bind
>>
>> Do we need an unbind, or does a NULL mm mean unbind?
>
> Yep, your comment in patch 6 makes it necessary. I will add it!
>
>>
>>> + * @owner: process that owns the address space
>>
>> Any reason we need the task_struct here?
>
> Mainly to attach the kthread to the process cgroups, but that part is
> still a TODO since I need to understand it better.
Ok I see.
>
> Maybe we can remove the task_struct here and use `current` directly in
> the callback.
Yes, it's easier to start without cgroups, and we can add that on top.
Thanks
>
> Thanks,
> Stefano
>
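For reference, vhost already handles the cgroup part by queuing a work item on its worker thread and attaching the worker to the owner's cgroups from there (see vhost_attach_cgroups() in drivers/vhost/vhost.c). A rough sketch of the same idea with a kthread worker, as used later in this series; the struct and function names are invented, and only cgroup_attach_task_all() is the existing kernel helper:

#include <linux/cgroup.h>
#include <linux/kthread.h>

struct example_attach_work {
	struct kthread_work work;
	struct task_struct *owner;	/* saved at open time, i.e. current */
	int ret;
};

/* Runs on the worker kthread: here current is the worker itself, so
 * move it into all of the owner's cgroups.
 */
static void example_attach_cgroups_fn(struct kthread_work *work)
{
	struct example_attach_work *s =
		container_of(work, struct example_attach_work, work);

	s->ret = cgroup_attach_task_all(s->owner, current);
}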