The remap of fill and completion rings was frowned upon because they
control the usage of the UMEM, which does not support concurrent use.
At the same time, this restriction disallowed remapping these rings
into another process.

A possible use case is that the user wants to transfer socket/UMEM
ownership to another process (via SYS_pidfd_getfd) and would then
also need to remap these rings.

This has no impact on current usage and just relaxes the remap
limitation.
Signed-off-by: Nuno Gonçalves <[email protected]>
---
V3 -> V4: Remove undesired format changes
V2 -> V3: Call READ_ONCE for each variable and not for the ternary operator
V1 -> V2: Format and comment changes
net/xdp/xsk.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 2ac58b282b5eb..cc1e7f15fa731 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -1301,9 +1301,10 @@ static int xsk_mmap(struct file *file, struct socket *sock,
 	loff_t offset = (loff_t)vma->vm_pgoff << PAGE_SHIFT;
 	unsigned long size = vma->vm_end - vma->vm_start;
 	struct xdp_sock *xs = xdp_sk(sock->sk);
+	int state = READ_ONCE(xs->state);
 	struct xsk_queue *q = NULL;
 
-	if (READ_ONCE(xs->state) != XSK_READY)
+	if (state != XSK_READY && state != XSK_BOUND)
 		return -EBUSY;
 
 	if (offset == XDP_PGOFF_RX_RING) {
@@ -1314,9 +1315,11 @@ static int xsk_mmap(struct file *file, struct socket *sock,
 		/* Matches the smp_wmb() in XDP_UMEM_REG */
 		smp_rmb();
 		if (offset == XDP_UMEM_PGOFF_FILL_RING)
-			q = READ_ONCE(xs->fq_tmp);
+			q = state == XSK_READY ? READ_ONCE(xs->fq_tmp) :
+						 READ_ONCE(xs->pool->fq);
 		else if (offset == XDP_UMEM_PGOFF_COMPLETION_RING)
-			q = READ_ONCE(xs->cq_tmp);
+			q = state == XSK_READY ? READ_ONCE(xs->cq_tmp) :
+						 READ_ONCE(xs->pool->cq);
 	}
 
 	if (!q)
--
2.40.0
On Fri, Mar 24, 2023 at 10:02:22AM +0000, Nuno Gonçalves wrote:
> The remap of fill and completion rings was frowned upon because they
> control the usage of the UMEM, which does not support concurrent use.
> At the same time, this restriction disallowed remapping these rings
> into another process.
>
> A possible use case is that the user wants to transfer socket/UMEM
> ownership to another process (via SYS_pidfd_getfd) and would then
> also need to remap these rings.
>
> This has no impact on current usage and just relaxes the remap
> limitation.
>
> Signed-off-by: Nuno Gonçalves <[email protected]>
> ---
> V3 -> V4: Remove undesired format changes
> V2 -> V3: Call READ_ONCE for each variable and not for the ternary operator
> V1 -> V2: Format and comment changes
thanks, it now looks good to me, i applied this locally and it builds, so:
Reviewed-by: Maciej Fijalkowski <[email protected]>
but i am giving a last call to Magnus since he was acking this before.
>
> net/xdp/xsk.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 2ac58b282b5eb..cc1e7f15fa731 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -1301,9 +1301,10 @@ static int xsk_mmap(struct file *file, struct socket *sock,
> loff_t offset = (loff_t)vma->vm_pgoff << PAGE_SHIFT;
> unsigned long size = vma->vm_end - vma->vm_start;
> struct xdp_sock *xs = xdp_sk(sock->sk);
> + int state = READ_ONCE(xs->state);
> struct xsk_queue *q = NULL;
>
> - if (READ_ONCE(xs->state) != XSK_READY)
> + if (state != XSK_READY && state != XSK_BOUND)
> return -EBUSY;
>
> if (offset == XDP_PGOFF_RX_RING) {
> @@ -1314,9 +1315,11 @@ static int xsk_mmap(struct file *file, struct socket *sock,
> /* Matches the smp_wmb() in XDP_UMEM_REG */
> smp_rmb();
> if (offset == XDP_UMEM_PGOFF_FILL_RING)
> - q = READ_ONCE(xs->fq_tmp);
> + q = state == XSK_READY ? READ_ONCE(xs->fq_tmp) :
> + READ_ONCE(xs->pool->fq);
> else if (offset == XDP_UMEM_PGOFF_COMPLETION_RING)
> - q = READ_ONCE(xs->cq_tmp);
> + q = state == XSK_READY ? READ_ONCE(xs->cq_tmp) :
> + READ_ONCE(xs->pool->cq);
> }
>
> if (!q)
> --
> 2.40.0
>
On Fri, 24 Mar 2023 at 13:22, Maciej Fijalkowski
<[email protected]> wrote:
>
> On Fri, Mar 24, 2023 at 10:02:22AM +0000, Nuno Gonçalves wrote:
> > The remap of fill and completion rings was frowned upon because they
> > control the usage of the UMEM, which does not support concurrent use.
> > At the same time, this restriction disallowed remapping these rings
> > into another process.
> >
> > A possible use case is that the user wants to transfer socket/UMEM
> > ownership to another process (via SYS_pidfd_getfd) and would then
> > also need to remap these rings.
> >
> > This has no impact on current usage and just relaxes the remap
> > limitation.
> >
> > Signed-off-by: Nuno Gonçalves <[email protected]>
> > ---
> > V3 -> V4: Remove undesired format changes
> > V2 -> V3: Call READ_ONCE for each variable and not for the ternary operator
> > V1 -> V2: Format and comment changes
>
> thanks, it now looks good to me, i applied this locally and it builds, so:
> Reviewed-by: Maciej Fijalkowski <[email protected]>
>
> but i am giving a last call to Magnus since he was acking this before.
I have already acked it, but I can do it twice.
Acked-by: Magnus Karlsson <[email protected]>
> >
> > net/xdp/xsk.c | 9 ++++++---
> > 1 file changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index 2ac58b282b5eb..cc1e7f15fa731 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -1301,9 +1301,10 @@ static int xsk_mmap(struct file *file, struct socket *sock,
> > loff_t offset = (loff_t)vma->vm_pgoff << PAGE_SHIFT;
> > unsigned long size = vma->vm_end - vma->vm_start;
> > struct xdp_sock *xs = xdp_sk(sock->sk);
> > + int state = READ_ONCE(xs->state);
> > struct xsk_queue *q = NULL;
> >
> > - if (READ_ONCE(xs->state) != XSK_READY)
> > + if (state != XSK_READY && state != XSK_BOUND)
> > return -EBUSY;
> >
> > if (offset == XDP_PGOFF_RX_RING) {
> > @@ -1314,9 +1315,11 @@ static int xsk_mmap(struct file *file, struct socket *sock,
> > /* Matches the smp_wmb() in XDP_UMEM_REG */
> > smp_rmb();
> > if (offset == XDP_UMEM_PGOFF_FILL_RING)
> > - q = READ_ONCE(xs->fq_tmp);
> > + q = state == XSK_READY ? READ_ONCE(xs->fq_tmp) :
> > + READ_ONCE(xs->pool->fq);
> > else if (offset == XDP_UMEM_PGOFF_COMPLETION_RING)
> > - q = READ_ONCE(xs->cq_tmp);
> > + q = state == XSK_READY ? READ_ONCE(xs->cq_tmp) :
> > + READ_ONCE(xs->pool->cq);
> > }
> >
> > if (!q)
> > --
> > 2.40.0
> >
Hello:
This patch was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <[email protected]>:
On Fri, 24 Mar 2023 10:02:22 +0000 you wrote:
> The remap of fill and completion rings was frowned upon because they
> control the usage of the UMEM, which does not support concurrent use.
> At the same time, this restriction disallowed remapping these rings
> into another process.
>
> A possible use case is that the user wants to transfer socket/UMEM
> ownership to another process (via SYS_pidfd_getfd) and would then
> also need to remap these rings.
>
> [...]
Here is the summary with links:
- [bpf-next,V4] xsk: allow remap of fill and/or completion rings
https://git.kernel.org/bpf/bpf-next/c/5f5a7d8d8bd4
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html