2023-10-16 03:17:48

by 黄杰

Subject: [PATCH v2 net-next] xsk: Avoid starving xsk at the end of the list

In the previous implementation, when multiple xsk sockets were
associated with a single xsk_buff_pool, a situation could arise
where the xsk socket at the front of the xsk_tx_list kept consuming
tx descriptors while the xsk sockets at the back of the list were
starved. This could result in issues such as the inability to
transmit packets, increased latency, and jitter. To address this
problem, we introduce a new variable called tx_budget_cache, which
limits each xsk socket to transmitting at most MAX_XSK_TX_BUDGET tx
descriptors before the others get a turn. This ensures equitable
opportunities for the xsk sockets further down the list to send tx
descriptors. The value of MAX_XSK_TX_BUDGET is tentatively set to 16.

Signed-off-by: Albert Huang <[email protected]>
---
include/net/xdp_sock.h | 6 ++++++
net/xdp/xsk.c | 18 ++++++++++++++++++
2 files changed, 24 insertions(+)

diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index 69b472604b86..f617ff54e38c 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -44,6 +44,7 @@ struct xsk_map {
struct xdp_sock __rcu *xsk_map[];
};

+#define MAX_XSK_TX_BUDGET 16
struct xdp_sock {
/* struct sock must be the first member of struct xdp_sock */
struct sock sk;
@@ -63,6 +64,11 @@ struct xdp_sock {

struct xsk_queue *tx ____cacheline_aligned_in_smp;
struct list_head tx_list;
+ /* Record the actual number of times xsk has transmitted a tx
+ * descriptor, with a maximum limit not exceeding MAX_XSK_TX_BUDGET
+ */
+ u32 tx_budget_cache;
+
/* Protects generic receive. */
spinlock_t rx_lock;

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index f5e96e0d6e01..087f2675333c 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);

bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
{
+ u32 xsk_full_count = 0;
struct xdp_sock *xs;

rcu_read_lock();
+again:
list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
+ if (xs->tx_budget_cache >= MAX_XSK_TX_BUDGET) {
+ xsk_full_count++;
+ continue;
+ }
+
if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
if (xskq_has_descs(xs->tx))
xskq_cons_release(xs->tx);
continue;
}

+ xs->tx_budget_cache++;
+
/* This is the backpressure mechanism for the Tx path.
* Reserve space in the completion queue and only proceed
* if there is space in it. This avoids having to implement
@@ -436,6 +445,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
return true;
}

+ if (unlikely(xsk_full_count > 0)) {
+ list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
+ xs->tx_budget_cache = 0;
+ }
+ xsk_full_count = 0;
+ goto again;
+ }
+
out:
rcu_read_unlock();
return false;
@@ -1230,6 +1247,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
xs->zc = xs->umem->zc;
xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG);
xs->queue_id = qid;
+ xs->tx_budget_cache = 0;
xp_add_xsk(xs->pool, xs);

out_unlock:
--
2.20.1


2023-10-16 06:41:44

by Magnus Karlsson

Subject: Re: [PATCH v2 net-next] xsk: Avoid starving xsk at the end of the list

On Mon, 16 Oct 2023 at 05:17, Albert Huang
<[email protected]> wrote:
>
> In the previous implementation, when multiple xsk sockets were
> associated with a single xsk_buff_pool, a situation could arise
> where the xsk_tx_list maintained data at the front for one xsk
> socket while starving the xsk sockets at the back of the list.
> This could result in issues such as the inability to transmit packets,
> increased latency, and jitter. To address this problem, we introduced
> a new variable called tx_budget_cache, which limits each xsk to transmit
> a maximum of MAX_XSK_TX_BUDGET tx descriptors. This allocation ensures
> equitable opportunities for subsequent xsk sockets to send tx descriptors.
> The value of MAX_XSK_TX_BUDGET is temporarily set to 16.

Hi Albert. Yes, you are correct that there is nothing hindering this
from happening in the code at the moment, so let us fix it.

> Signed-off-by: Albert Huang <[email protected]>
> ---
> include/net/xdp_sock.h | 6 ++++++
> net/xdp/xsk.c | 18 ++++++++++++++++++
> 2 files changed, 24 insertions(+)
>
> diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> index 69b472604b86..f617ff54e38c 100644
> --- a/include/net/xdp_sock.h
> +++ b/include/net/xdp_sock.h
> @@ -44,6 +44,7 @@ struct xsk_map {
> struct xdp_sock __rcu *xsk_map[];
> };
>
> +#define MAX_XSK_TX_BUDGET 16

I think something like MAX_PER_SOCKET_BUDGET would be clearer.

> struct xdp_sock {
> /* struct sock must be the first member of struct xdp_sock */
> struct sock sk;
> @@ -63,6 +64,11 @@ struct xdp_sock {
>
> struct xsk_queue *tx ____cacheline_aligned_in_smp;
> struct list_head tx_list;
> + /* Record the actual number of times xsk has transmitted a tx
> + * descriptor, with a maximum limit not exceeding MAX_XSK_TX_BUDGET
> + */
> + u32 tx_budget_cache;
> +
> /* Protects generic receive. */
> spinlock_t rx_lock;
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index f5e96e0d6e01..087f2675333c 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);
>
> bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> {
> + u32 xsk_full_count = 0;

A bool would be enough here;

> struct xdp_sock *xs;
>
> rcu_read_lock();
> +again:
> list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> + if (xs->tx_budget_cache >= MAX_XSK_TX_BUDGET) {
> + xsk_full_count++;
> + continue;
> + }

The problem here is that the fixed MAX_XSK_TX_BUDGET is only useful
for the <= 2 socket case. If I have 3 sockets sharing a
netdev/queue_id, the first two sockets can still starve the third one
since the total budget per send is 32. You need to go through the list
of sockets at the beginning to compute MAX_XSK_TX_BUDGET dynamically
before each call, or cache this value somehow, in the pool for
example. Actually, the refcount in the buf_pool will tell you how many
sockets are sharing the same buf_pool. Try using that to form
MAX_XSK_TX_BUDGET on the fly.
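
As a rough illustration of the refcount idea (the pool's users refcount
field, the helper name, and its placement are assumptions for this
sketch, not a reference to existing code; TX_BATCH_SIZE is the existing
32-packet cap in net/xdp/xsk.c):

	/* Sketch: derive the per-socket budget from the number of sockets
	 * sharing the pool, keeping the combined budget near the 32-packet
	 * budget of a single send.
	 */
	static u32 xsk_per_socket_budget(struct xsk_buff_pool *pool)
	{
		u32 nr_socks = refcount_read(&pool->users);	/* assumed field */

		return max(1U, TX_BATCH_SIZE / nr_socks);
	}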

Another, simpler way of accomplishing this would be to just reorder
the list every time: put the first socket last in the list every time.
The drawback of this is that you need to hold the xsk_tx_list_lock
while doing this, so it might be slower. The per-socket batch size
would also be 32 and you would not get "fairness" over a single call
to sendto(). Would that be a problem for you?

> +
> if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
> if (xskq_has_descs(xs->tx))
> xskq_cons_release(xs->tx);
> continue;
> }
>
> + xs->tx_budget_cache++;
> +
> /* This is the backpressure mechanism for the Tx path.
> * Reserve space in the completion queue and only proceed
> * if there is space in it. This avoids having to implement
> @@ -436,6 +445,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> return true;
> }
>
> + if (unlikely(xsk_full_count > 0)) {
> + list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> + xs->tx_budget_cache = 0;
> + }
> + xsk_full_count = 0;
> + goto again;
> + }
> +
> out:
> rcu_read_unlock();
> return false;
> @@ -1230,6 +1247,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
> xs->zc = xs->umem->zc;
> xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG);
> xs->queue_id = qid;
> + xs->tx_budget_cache = 0;
> xp_add_xsk(xs->pool, xs);
>
> out_unlock:
> --
> 2.20.1
>
>

2023-10-16 08:56:08

by 黄杰

Subject: Re: Re: [PATCH v2 net-next] xsk: Avoid starving xsk at the end of the list

Magnus Karlsson <[email protected]> wrote on Monday, 16 Oct 2023 at 14:41:
>
> On Mon, 16 Oct 2023 at 05:17, Albert Huang
> <[email protected]> wrote:
> >
> > In the previous implementation, when multiple xsk sockets were
> > associated with a single xsk_buff_pool, a situation could arise
> > where the xsk_tx_list maintained data at the front for one xsk
> > socket while starving the xsk sockets at the back of the list.
> > This could result in issues such as the inability to transmit packets,
> > increased latency, and jitter. To address this problem, we introduced
> > a new variable called tx_budget_cache, which limits each xsk to transmit
> > a maximum of MAX_XSK_TX_BUDGET tx descriptors. This allocation ensures
> > equitable opportunities for subsequent xsk sockets to send tx descriptors.
> > The value of MAX_XSK_TX_BUDGET is temporarily set to 16.
>
> Hi Albert. Yes you are correct that there is nothing hindering this to
> happen in the code at the moment, so let us fix it.
>
thanks.

> > Signed-off-by: Albert Huang <[email protected]>
> > ---
> > include/net/xdp_sock.h | 6 ++++++
> > net/xdp/xsk.c | 18 ++++++++++++++++++
> > 2 files changed, 24 insertions(+)
> >
> > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > index 69b472604b86..f617ff54e38c 100644
> > --- a/include/net/xdp_sock.h
> > +++ b/include/net/xdp_sock.h
> > @@ -44,6 +44,7 @@ struct xsk_map {
> > struct xdp_sock __rcu *xsk_map[];
> > };
> >
> > +#define MAX_XSK_TX_BUDGET 16
>
> I think something like MAX_PER_SOCKET_BUDGET would be clearer.
>

OK, this will be considered in the next patch.

> > struct xdp_sock {
> > /* struct sock must be the first member of struct xdp_sock */
> > struct sock sk;
> > @@ -63,6 +64,11 @@ struct xdp_sock {
> >
> > struct xsk_queue *tx ____cacheline_aligned_in_smp;
> > struct list_head tx_list;
> > + /* Record the actual number of times xsk has transmitted a tx
> > + * descriptor, with a maximum limit not exceeding MAX_XSK_TX_BUDGET
> > + */
> > + u32 tx_budget_cache;
> > +
> > /* Protects generic receive. */
> > spinlock_t rx_lock;
> >
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index f5e96e0d6e01..087f2675333c 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);
> >
> > bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > {
> > + u32 xsk_full_count = 0;
>
> Enough with a bool;
>
> > struct xdp_sock *xs;
> >
> > rcu_read_lock();
> > +again:
> > list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > + if (xs->tx_budget_cache >= MAX_XSK_TX_BUDGET) {
> > + xsk_full_count++;
> > + continue;
> > + }
>
> The problem here is that the fixed MAX_XSK_TX_BUDGET is only useful
> for the <= 2 socket case. If I have 3 sockets sharing a
> netdev/queue_id, the two first sockets can still starve the third one
> since the total budget per send is 32.

Why is there a limit of 32? I'm not quite clear on the implications of this.
Did I miss something?
BR
Albert

>You need to go through the list
> of sockets in the beginning to compute the MAX_XSK_TX_BUDGET to
> compute this dynamically before each call. Or cache this value
> somehow, in the pool for example. Actually, the refcount in the
> buf_pool will tell you how many sockets are sharing the same buf_pool.
> Try using that to form MAX_XSK_TX_BUDGET on the fly.
>
> Another simpler way of accomplishing this would be to just reorder the
> list every time. Put the first socket last in the list every time. The
> drawback of this is that you need to hold the xsk_tx_list_lock while
> doing this so might be slower. The per socket batch size would also be
> 32 and you would not receive "fairness" over a single call to
> sendto(). Would that be a problem for you?
>

Yes, I did consider this approach, but I abandoned it because it would
lose the performance advantage of lock-free operation (RCU read).
thanks
Albert


> > +
> > if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
> > if (xskq_has_descs(xs->tx))
> > xskq_cons_release(xs->tx);
> > continue;
> > }
> >
> > + xs->tx_budget_cache++;
> > +
> > /* This is the backpressure mechanism for the Tx path.
> > * Reserve space in the completion queue and only proceed
> > * if there is space in it. This avoids having to implement
> > @@ -436,6 +445,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > return true;
> > }
> >
> > + if (unlikely(xsk_full_count > 0)) {
> > + list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > + xs->tx_budget_cache = 0;
> > + }
> > + xsk_full_count = 0;
> > + goto again;
> > + }

This section of code is only entered when no TX descriptors could be
acquired and xsk_full_count > 0.

> > +
> > out:
> > rcu_read_unlock();
> > return false;
> > @@ -1230,6 +1247,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
> > xs->zc = xs->umem->zc;
> > xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG);
> > xs->queue_id = qid;
> > + xs->tx_budget_cache = 0;
> > xp_add_xsk(xs->pool, xs);
> >
> > out_unlock:
> > --
> > 2.20.1
> >
> >

2023-10-16 09:14:25

by Magnus Karlsson

Subject: Re: Re: [PATCH v2 net-next] xsk: Avoid starving xsk at the end of the list

On Mon, 16 Oct 2023 at 10:54, 黄杰 <[email protected]> wrote:
>
> Magnus Karlsson <[email protected]> 于2023年10月16日周一 14:41写道:
> >
> > On Mon, 16 Oct 2023 at 05:17, Albert Huang
> > <[email protected]> wrote:
> > >
> > > In the previous implementation, when multiple xsk sockets were
> > > associated with a single xsk_buff_pool, a situation could arise
> > > where the xsk_tx_list maintained data at the front for one xsk
> > > socket while starving the xsk sockets at the back of the list.
> > > This could result in issues such as the inability to transmit packets,
> > > increased latency, and jitter. To address this problem, we introduced
> > > a new variable called tx_budget_cache, which limits each xsk to transmit
> > > a maximum of MAX_XSK_TX_BUDGET tx descriptors. This allocation ensures
> > > equitable opportunities for subsequent xsk sockets to send tx descriptors.
> > > The value of MAX_XSK_TX_BUDGET is temporarily set to 16.
> >
> > Hi Albert. Yes you are correct that there is nothing hindering this to
> > happen in the code at the moment, so let us fix it.
> >
> thanks.
>
> > > Signed-off-by: Albert Huang <[email protected]>
> > > ---
> > > include/net/xdp_sock.h | 6 ++++++
> > > net/xdp/xsk.c | 18 ++++++++++++++++++
> > > 2 files changed, 24 insertions(+)
> > >
> > > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > > index 69b472604b86..f617ff54e38c 100644
> > > --- a/include/net/xdp_sock.h
> > > +++ b/include/net/xdp_sock.h
> > > @@ -44,6 +44,7 @@ struct xsk_map {
> > > struct xdp_sock __rcu *xsk_map[];
> > > };
> > >
> > > +#define MAX_XSK_TX_BUDGET 16
> >
> > I think something like MAX_PER_SOCKET_BUDGET would be clearer.
> >
>
> OK, this will be considered in the next patch.
>
> > > struct xdp_sock {
> > > /* struct sock must be the first member of struct xdp_sock */
> > > struct sock sk;
> > > @@ -63,6 +64,11 @@ struct xdp_sock {
> > >
> > > struct xsk_queue *tx ____cacheline_aligned_in_smp;
> > > struct list_head tx_list;
> > > + /* Record the actual number of times xsk has transmitted a tx
> > > + * descriptor, with a maximum limit not exceeding MAX_XSK_TX_BUDGET
> > > + */
> > > + u32 tx_budget_cache;
> > > +
> > > /* Protects generic receive. */
> > > spinlock_t rx_lock;
> > >
> > > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > > index f5e96e0d6e01..087f2675333c 100644
> > > --- a/net/xdp/xsk.c
> > > +++ b/net/xdp/xsk.c
> > > @@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);
> > >
> > > bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > > {
> > > + u32 xsk_full_count = 0;
> >
> > Enough with a bool;
> >
> > > struct xdp_sock *xs;
> > >
> > > rcu_read_lock();
> > > +again:
> > > list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > > + if (xs->tx_budget_cache >= MAX_XSK_TX_BUDGET) {
> > > + xsk_full_count++;
> > > + continue;
> > > + }
> >
> > The problem here is that the fixed MAX_XSK_TX_BUDGET is only useful
> > for the <= 2 socket case. If I have 3 sockets sharing a
> > netdev/queue_id, the two first sockets can still starve the third one
> > since the total budget per send is 32.
>
> Why is there a limit of 32? I'm not quite clear on the implications of these,
> Did I miss something?
> BR
> Albert

There is a define TX_BATCH_SIZE 32 that controls the max number of
packets a sendto() call can send before it exits. It is used in
__xsk_generic_xmit().
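
Roughly, the budget check in __xsk_generic_xmit() works like the sketch
below (skb allocation, transmission and error handling trimmed; see
net/xdp/xsk.c for the real code):

	#define TX_BATCH_SIZE	32

	u32 max_batch = TX_BATCH_SIZE;

	while (xskq_cons_peek_desc(xs->tx, &desc, xs->pool)) {
		if (max_batch-- == 0) {
			err = -EAGAIN;	/* per-sendto() budget exhausted */
			goto out;
		}
		/* build one skb for this descriptor and hand it to the driver */
	}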

> >You need to go through the list
> > of sockets in the beginning to compute the MAX_XSK_TX_BUDGET to
> > compute this dynamically before each call. Or cache this value
> > somehow, in the pool for example. Actually, the refcount in the
> > buf_pool will tell you how many sockets are sharing the same buf_pool.
> > Try using that to form MAX_XSK_TX_BUDGET on the fly.
> >
> > Another simpler way of accomplishing this would be to just reorder the
> > list every time. Put the first socket last in the list every time. The
> > drawback of this is that you need to hold the xsk_tx_list_lock while
> > doing this so might be slower. The per socket batch size would also be
> > 32 and you would not receive "fairness" over a single call to
> > sendto(). Would that be a problem for you?
> >
>
> Yes, I did consider this approach, but I abandoned it because it would lose
> the performance advantages of lock-free operations(RCU read)
> thanks
> Albert

OK, then let us not consider it and try to make your current approach work.

>
> > > +
> > > if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
> > > if (xskq_has_descs(xs->tx))
> > > xskq_cons_release(xs->tx);
> > > continue;
> > > }
> > >
> > > + xs->tx_budget_cache++;
> > > +
> > > /* This is the backpressure mechanism for the Tx path.
> > > * Reserve space in the completion queue and only proceed
> > > * if there is space in it. This avoids having to implement
> > > @@ -436,6 +445,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > > return true;
> > > }
> > >
> > > + if (unlikely(xsk_full_count > 0)) {
> > > + list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > > + xs->tx_budget_cache = 0;
> > > + }
> > > + xsk_full_count = 0;
> > > + goto again;
> > > + }
>
> this section of code only enters when it's unable to acquire any TX
> descriptors and
> xsk_full_count > 0.
>
> > > +
> > > out:
> > > rcu_read_unlock();
> > > return false;
> > > @@ -1230,6 +1247,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
> > > xs->zc = xs->umem->zc;
> > > xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG);
> > > xs->queue_id = qid;
> > > + xs->tx_budget_cache = 0;
> > > xp_add_xsk(xs->pool, xs);
> > >
> > > out_unlock:
> > > --
> > > 2.20.1
> > >
> > >

2023-10-17 06:56:08

by 黄杰

Subject: Re: [PATCH v2 net-next] xsk: Avoid starving xsk at the end of the list

Magnus Karlsson <[email protected]> wrote on Monday, 16 Oct 2023 at 17:13:
>
> On Mon, 16 Oct 2023 at 10:54, 黄杰 <[email protected]> wrote:
> >
> > Magnus Karlsson <[email protected]> 于2023年10月16日周一 14:41写道:
> > >
> > > On Mon, 16 Oct 2023 at 05:17, Albert Huang
> > > <[email protected]> wrote:
> > > >
> > > > In the previous implementation, when multiple xsk sockets were
> > > > associated with a single xsk_buff_pool, a situation could arise
> > > > where the xsk_tx_list maintained data at the front for one xsk
> > > > socket while starving the xsk sockets at the back of the list.
> > > > This could result in issues such as the inability to transmit packets,
> > > > increased latency, and jitter. To address this problem, we introduced
> > > > a new variable called tx_budget_cache, which limits each xsk to transmit
> > > > a maximum of MAX_XSK_TX_BUDGET tx descriptors. This allocation ensures
> > > > equitable opportunities for subsequent xsk sockets to send tx descriptors.
> > > > The value of MAX_XSK_TX_BUDGET is temporarily set to 16.
> > >
> > > Hi Albert. Yes you are correct that there is nothing hindering this to
> > > happen in the code at the moment, so let us fix it.
> > >
> > thanks.
> >
> > > > Signed-off-by: Albert Huang <[email protected]>
> > > > ---
> > > > include/net/xdp_sock.h | 6 ++++++
> > > > net/xdp/xsk.c | 18 ++++++++++++++++++
> > > > 2 files changed, 24 insertions(+)
> > > >
> > > > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > > > index 69b472604b86..f617ff54e38c 100644
> > > > --- a/include/net/xdp_sock.h
> > > > +++ b/include/net/xdp_sock.h
> > > > @@ -44,6 +44,7 @@ struct xsk_map {
> > > > struct xdp_sock __rcu *xsk_map[];
> > > > };
> > > >
> > > > +#define MAX_XSK_TX_BUDGET 16
> > >
> > > I think something like MAX_PER_SOCKET_BUDGET would be clearer.
> > >
> >
> > OK, this will be considered in the next patch.
> >
> > > > struct xdp_sock {
> > > > /* struct sock must be the first member of struct xdp_sock */
> > > > struct sock sk;
> > > > @@ -63,6 +64,11 @@ struct xdp_sock {
> > > >
> > > > struct xsk_queue *tx ____cacheline_aligned_in_smp;
> > > > struct list_head tx_list;
> > > > + /* Record the actual number of times xsk has transmitted a tx
> > > > + * descriptor, with a maximum limit not exceeding MAX_XSK_TX_BUDGET
> > > > + */
> > > > + u32 tx_budget_cache;
> > > > +
> > > > /* Protects generic receive. */
> > > > spinlock_t rx_lock;
> > > >
> > > > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > > > index f5e96e0d6e01..087f2675333c 100644
> > > > --- a/net/xdp/xsk.c
> > > > +++ b/net/xdp/xsk.c
> > > > @@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);
> > > >
> > > > bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > > > {
> > > > + u32 xsk_full_count = 0;
> > >
> > > Enough with a bool;
> > >
> > > > struct xdp_sock *xs;
> > > >
> > > > rcu_read_lock();
> > > > +again:
> > > > list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > > > + if (xs->tx_budget_cache >= MAX_XSK_TX_BUDGET) {
> > > > + xsk_full_count++;
> > > > + continue;
> > > > + }
> > >
> > > The problem here is that the fixed MAX_XSK_TX_BUDGET is only useful
> > > for the <= 2 socket case. If I have 3 sockets sharing a
> > > netdev/queue_id, the two first sockets can still starve the third one
> > > since the total budget per send is 32.
> >
> > Why is there a limit of 32? I'm not quite clear on the implications of these,
> > Did I miss something?
> > BR
> > Albert
>
> There is a define TX_BATCH_SIZE 32 that controls the max number of
> packets a sendto() call can send before it exits. It is used in
> __xsk_generic_xmit().

OK, I got it. I missed the logic here. I will reconsider the logic in this part.
Thanks
BR
Albert

>
> > >You need to go through the list
> > > of sockets in the beginning to compute the MAX_XSK_TX_BUDGET to
> > > compute this dynamically before each call. Or cache this value
> > > somehow, in the pool for example. Actually, the refcount in the
> > > buf_pool will tell you how many sockets are sharing the same buf_pool.
> > > Try using that to form MAX_XSK_TX_BUDGET on the fly.
> > >
> > > Another simpler way of accomplishing this would be to just reorder the
> > > list every time. Put the first socket last in the list every time. The
> > > drawback of this is that you need to hold the xsk_tx_list_lock while
> > > doing this so might be slower. The per socket batch size would also be
> > > 32 and you would not receive "fairness" over a single call to
> > > sendto(). Would that be a problem for you?
> > >
> >
> > Yes, I did consider this approach, but I abandoned it because it would lose
> > the performance advantages of lock-free operations(RCU read)
> > thanks
> > Albert
>
> OK, then let us not consider it and try to make your current approach work.
>
> >
> > > > +
> > > > if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
> > > > if (xskq_has_descs(xs->tx))
> > > > xskq_cons_release(xs->tx);
> > > > continue;
> > > > }
> > > >
> > > > + xs->tx_budget_cache++;
> > > > +
> > > > /* This is the backpressure mechanism for the Tx path.
> > > > * Reserve space in the completion queue and only proceed
> > > > * if there is space in it. This avoids having to implement
> > > > @@ -436,6 +445,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > > > return true;
> > > > }
> > > >
> > > > + if (unlikely(xsk_full_count > 0)) {
> > > > + list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > > > + xs->tx_budget_cache = 0;
> > > > + }
> > > > + xsk_full_count = 0;
> > > > + goto again;
> > > > + }
> >
> > this section of code only enters when it's unable to acquire any TX
> > descriptors and
> > xsk_full_count > 0.
> >
> > > > +
> > > > out:
> > > > rcu_read_unlock();
> > > > return false;
> > > > @@ -1230,6 +1247,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
> > > > xs->zc = xs->umem->zc;
> > > > xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG);
> > > > xs->queue_id = qid;
> > > > + xs->tx_budget_cache = 0;
> > > > xp_add_xsk(xs->pool, xs);
> > > >
> > > > out_unlock:
> > > > --
> > > > 2.20.1
> > > >
> > > >

2023-10-19 08:42:29

by 黄杰

Subject: Re: [External] Re: [PATCH v2 net-next] xsk: Avoid starving xsk at the end of the list

Magnus Karlsson <[email protected]> wrote on Monday, 16 Oct 2023 at 14:41:
>
> On Mon, 16 Oct 2023 at 05:17, Albert Huang
> <[email protected]> wrote:
> >
> > In the previous implementation, when multiple xsk sockets were
> > associated with a single xsk_buff_pool, a situation could arise
> > where the xsk_tx_list maintained data at the front for one xsk
> > socket while starving the xsk sockets at the back of the list.
> > This could result in issues such as the inability to transmit packets,
> > increased latency, and jitter. To address this problem, we introduced
> > a new variable called tx_budget_cache, which limits each xsk to transmit
> > a maximum of MAX_XSK_TX_BUDGET tx descriptors. This allocation ensures
> > equitable opportunities for subsequent xsk sockets to send tx descriptors.
> > The value of MAX_XSK_TX_BUDGET is temporarily set to 16.
>
> Hi Albert. Yes you are correct that there is nothing hindering this to
> happen in the code at the moment, so let us fix it.
>
> > Signed-off-by: Albert Huang <[email protected]>
> > ---
> > include/net/xdp_sock.h | 6 ++++++
> > net/xdp/xsk.c | 18 ++++++++++++++++++
> > 2 files changed, 24 insertions(+)
> >
> > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > index 69b472604b86..f617ff54e38c 100644
> > --- a/include/net/xdp_sock.h
> > +++ b/include/net/xdp_sock.h
> > @@ -44,6 +44,7 @@ struct xsk_map {
> > struct xdp_sock __rcu *xsk_map[];
> > };
> >
> > +#define MAX_XSK_TX_BUDGET 16
>
> I think something like MAX_PER_SOCKET_BUDGET would be clearer.
>
> > struct xdp_sock {
> > /* struct sock must be the first member of struct xdp_sock */
> > struct sock sk;
> > @@ -63,6 +64,11 @@ struct xdp_sock {
> >
> > struct xsk_queue *tx ____cacheline_aligned_in_smp;
> > struct list_head tx_list;
> > + /* Record the actual number of times xsk has transmitted a tx
> > + * descriptor, with a maximum limit not exceeding MAX_XSK_TX_BUDGET
> > + */
> > + u32 tx_budget_cache;
> > +
> > /* Protects generic receive. */
> > spinlock_t rx_lock;
> >
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index f5e96e0d6e01..087f2675333c 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);
> >
> > bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > {
> > + u32 xsk_full_count = 0;
>
> Enough with a bool;
>
> > struct xdp_sock *xs;
> >
> > rcu_read_lock();
> > +again:
> > list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > + if (xs->tx_budget_cache >= MAX_XSK_TX_BUDGET) {
> > + xsk_full_count++;
> > + continue;
> > + }
>
> The problem here is that the fixed MAX_XSK_TX_BUDGET is only useful
> for the <= 2 socket case. If I have 3 sockets sharing a
> netdev/queue_id, the two first sockets can still starve the third one
> since the total budget per send is 32. You need to go through the list
> of sockets in the beginning to compute the MAX_XSK_TX_BUDGET to
> compute this dynamically before each call. Or cache this value
> somehow, in the pool for example. Actually, the refcount in the
> buf_pool will tell you how many sockets are sharing the same buf_pool.
> Try using that to form MAX_XSK_TX_BUDGET on the fly.
>
> Another simpler way of accomplishing this would be to just reorder the
> list every time. Put the first socket last in the list every time. The
> drawback of this is that you need to hold the xsk_tx_list_lock while
> doing this so might be slower. The per socket batch size would also be
> 32 and you would not receive "fairness" over a single call to
> sendto(). Would that be a problem for you?
>

Currently, there are two paths in the kernel that consume TX queue descriptors:

1. Native XSK
xsk_tx_peek_desc
xskq_cons_peek_desc

In the first scenario, TX descriptors are consumed by sequentially
traversing pool->xsk_tx_list, without any code logic to ensure
fairness. This can lead to starvation, making it the top priority for
us to address.

2. Generic XSK
__xsk_sendmsg (or xsk_poll)
xsk_generic_xmit
__xsk_generic_xmit
xskq_cons_peek_desc

In the second scenario, TX descriptors are consumed via sendto().
Currently, __xsk_generic_xmit sends a maximum of 32 TX descriptors per
call, and the process scheduling strategy already ensures a certain
level of fairness. In this scenario, should we consider not addressing
it and instead prioritize the first scenario?
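
To make the first scenario concrete, here is a hypothetical zero-copy
driver TX routine (illustration only; example_zc_xmit and free_slots
are invented names, while xsk_tx_peek_desc()/xsk_tx_release() are the
existing pool APIs):

	/* Every xsk_tx_peek_desc() call walks pool->xsk_tx_list from the
	 * head, so a socket at the front that always has descriptors
	 * queued can satisfy every peek while the sockets behind it are
	 * never served.
	 */
	static void example_zc_xmit(struct xsk_buff_pool *pool, u32 free_slots)
	{
		struct xdp_desc desc;

		while (free_slots-- && xsk_tx_peek_desc(pool, &desc)) {
			/* post desc.addr / desc.len to the hardware TX ring */
		}
		xsk_tx_release(pool);
	}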

Additionally, based on my understanding, there should not be
applications concurrently using generic
XSK and native XSK on the same pool.

Magnus, how do you view this issue? I'm concerned that striving for
absolute fairness might introduce
additional complexity in the logic.

BR
Albert



> > +
> > if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
> > if (xskq_has_descs(xs->tx))
> > xskq_cons_release(xs->tx);
> > continue;
> > }
> >
> > + xs->tx_budget_cache++;
> > +
> > /* This is the backpressure mechanism for the Tx path.
> > * Reserve space in the completion queue and only proceed
> > * if there is space in it. This avoids having to implement
> > @@ -436,6 +445,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > return true;
> > }
> >
> > + if (unlikely(xsk_full_count > 0)) {
> > + list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > + xs->tx_budget_cache = 0;
> > + }
> > + xsk_full_count = 0;
> > + goto again;
> > + }
> > +
> > out:
> > rcu_read_unlock();
> > return false;
> > @@ -1230,6 +1247,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
> > xs->zc = xs->umem->zc;
> > xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG);
> > xs->queue_id = qid;
> > + xs->tx_budget_cache = 0;
> > xp_add_xsk(xs->pool, xs);
> >
> > out_unlock:
> > --
> > 2.20.1
> >
> >

2023-10-19 09:13:12

by Magnus Karlsson

Subject: Re: [External] Re: [PATCH v2 net-next] xsk: Avoid starving xsk at the end of the list

On Thu, 19 Oct 2023 at 10:41, 黄杰 <[email protected]> wrote:
>
> Magnus Karlsson <[email protected]> 于2023年10月16日周一 14:41写道:
> >
> > On Mon, 16 Oct 2023 at 05:17, Albert Huang
> > <[email protected]> wrote:
> > >
> > > In the previous implementation, when multiple xsk sockets were
> > > associated with a single xsk_buff_pool, a situation could arise
> > > where the xsk_tx_list maintained data at the front for one xsk
> > > socket while starving the xsk sockets at the back of the list.
> > > This could result in issues such as the inability to transmit packets,
> > > increased latency, and jitter. To address this problem, we introduced
> > > a new variable called tx_budget_cache, which limits each xsk to transmit
> > > a maximum of MAX_XSK_TX_BUDGET tx descriptors. This allocation ensures
> > > equitable opportunities for subsequent xsk sockets to send tx descriptors.
> > > The value of MAX_XSK_TX_BUDGET is temporarily set to 16.
> >
> > Hi Albert. Yes you are correct that there is nothing hindering this to
> > happen in the code at the moment, so let us fix it.
> >
> > > Signed-off-by: Albert Huang <[email protected]>
> > > ---
> > > include/net/xdp_sock.h | 6 ++++++
> > > net/xdp/xsk.c | 18 ++++++++++++++++++
> > > 2 files changed, 24 insertions(+)
> > >
> > > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > > index 69b472604b86..f617ff54e38c 100644
> > > --- a/include/net/xdp_sock.h
> > > +++ b/include/net/xdp_sock.h
> > > @@ -44,6 +44,7 @@ struct xsk_map {
> > > struct xdp_sock __rcu *xsk_map[];
> > > };
> > >
> > > +#define MAX_XSK_TX_BUDGET 16
> >
> > I think something like MAX_PER_SOCKET_BUDGET would be clearer.
> >
> > > struct xdp_sock {
> > > /* struct sock must be the first member of struct xdp_sock */
> > > struct sock sk;
> > > @@ -63,6 +64,11 @@ struct xdp_sock {
> > >
> > > struct xsk_queue *tx ____cacheline_aligned_in_smp;
> > > struct list_head tx_list;
> > > + /* Record the actual number of times xsk has transmitted a tx
> > > + * descriptor, with a maximum limit not exceeding MAX_XSK_TX_BUDGET
> > > + */
> > > + u32 tx_budget_cache;
> > > +
> > > /* Protects generic receive. */
> > > spinlock_t rx_lock;
> > >
> > > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > > index f5e96e0d6e01..087f2675333c 100644
> > > --- a/net/xdp/xsk.c
> > > +++ b/net/xdp/xsk.c
> > > @@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);
> > >
> > > bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > > {
> > > + u32 xsk_full_count = 0;
> >
> > Enough with a bool;
> >
> > > struct xdp_sock *xs;
> > >
> > > rcu_read_lock();
> > > +again:
> > > list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > > + if (xs->tx_budget_cache >= MAX_XSK_TX_BUDGET) {
> > > + xsk_full_count++;
> > > + continue;
> > > + }
> >
> > The problem here is that the fixed MAX_XSK_TX_BUDGET is only useful
> > for the <= 2 socket case. If I have 3 sockets sharing a
> > netdev/queue_id, the two first sockets can still starve the third one
> > since the total budget per send is 32. You need to go through the list
> > of sockets in the beginning to compute the MAX_XSK_TX_BUDGET to
> > compute this dynamically before each call. Or cache this value
> > somehow, in the pool for example. Actually, the refcount in the
> > buf_pool will tell you how many sockets are sharing the same buf_pool.
> > Try using that to form MAX_XSK_TX_BUDGET on the fly.
> >
> > Another simpler way of accomplishing this would be to just reorder the
> > list every time. Put the first socket last in the list every time. The
> > drawback of this is that you need to hold the xsk_tx_list_lock while
> > doing this so might be slower. The per socket batch size would also be
> > 32 and you would not receive "fairness" over a single call to
> > sendto(). Would that be a problem for you?
> >
>
> Currently, there are two paths in the kernel that consume TX queue descriptors:
>
> 1、Native XSK
> xsk_tx_peek_desc
> xskq_cons_peek_desc
>
> In the first scenario, we consume TX descriptors by sequentially
> traversing the pool->xsk_tx_list
> without any implicit code logic to ensure fairness. This can lead to a
> scenario of starvation,
> making it a top priority for us to address.
>
> 2、Generic XSK
> __xsk_sendmsg (or xsk_poll)
> xsk_generic_xmit
> __xsk_generic_xmit
> xskq_cons_peek_desc
>
> In the second scenario, TX descriptors are consumed by using sendto.
> Currently, __xsk_generic_xmit
> sends a maximum of 32 TX descriptors each time, and the process
> scheduling strategy already
> ensures a certain level of fairness. In this scenario, should we
> consider not addressing it and
> instead prioritize the first scenario?

Agree. The first scenario is the problematic one. One problem we have
to solve there is that the batch size is up to the driver in the
zero-copy case, so the xsk core has no idea what it is. Maybe introduce
a pointer that tells us which socket to get packets from first, and
make sure this pointer gets updated to the next socket in the list
every time the function is exited? Please make sure that the one-socket
case is not hurt.
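
A rough sketch of that rotation idea (pool->tx_start and next_sock()
are invented names for illustration; completion-queue backpressure and
the synchronization needed to keep tx_start valid across socket
add/remove are omitted):

	bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
	{
		struct xdp_sock *start, *xs;
		bool found = false;

		rcu_read_lock();
		xs = start = rcu_dereference(pool->tx_start);	/* assumed field */
		if (!xs)
			goto out;

		do {
			if (xskq_cons_peek_desc(xs->tx, desc, pool)) {
				found = true;
				break;
			}
			xs = next_sock(pool, xs);	/* assumed helper, wraps around */
		} while (xs != start);

		/* the next call starts after the socket served this time */
		rcu_assign_pointer(pool->tx_start, next_sock(pool, xs));
	out:
		rcu_read_unlock();
		return found;
	}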

> Additionally, based on my understanding, there should not be
> applications concurrently using generic
> XSK and native XSK on the same pool.

That is correct.

> Magnus, how do you view this issue? I'm concerned that striving for
> absolute fairness might introduce
> additional complexity in the logic.

Is this a problem that you have observed or need to guard against in
an application? If so, let us fix it.

> BR
> Albert
>
>
>
> > > +
> > > if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
> > > if (xskq_has_descs(xs->tx))
> > > xskq_cons_release(xs->tx);
> > > continue;
> > > }
> > >
> > > + xs->tx_budget_cache++;
> > > +
> > > /* This is the backpressure mechanism for the Tx path.
> > > * Reserve space in the completion queue and only proceed
> > > * if there is space in it. This avoids having to implement
> > > @@ -436,6 +445,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > > return true;
> > > }
> > >
> > > + if (unlikely(xsk_full_count > 0)) {
> > > + list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > > + xs->tx_budget_cache = 0;
> > > + }
> > > + xsk_full_count = 0;
> > > + goto again;
> > > + }
> > > +
> > > out:
> > > rcu_read_unlock();
> > > return false;
> > > @@ -1230,6 +1247,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
> > > xs->zc = xs->umem->zc;
> > > xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG);
> > > xs->queue_id = qid;
> > > + xs->tx_budget_cache = 0;
> > > xp_add_xsk(xs->pool, xs);
> > >
> > > out_unlock:
> > > --
> > > 2.20.1
> > >
> > >

2023-10-23 09:38:06

by 黄杰

[permalink] [raw]
Subject: Re: [PATCH v2 net-next] xsk: Avoid starving xsk at the end of the list

Magnus Karlsson <[email protected]> wrote on Thursday, 19 Oct 2023 at 17:13:
>
> On Thu, 19 Oct 2023 at 10:41, 黄杰 <[email protected]> wrote:
> >
> > Magnus Karlsson <[email protected]> 于2023年10月16日周一 14:41写道:
> > >
> > > On Mon, 16 Oct 2023 at 05:17, Albert Huang
> > > <[email protected]> wrote:
> > > >
> > > > In the previous implementation, when multiple xsk sockets were
> > > > associated with a single xsk_buff_pool, a situation could arise
> > > > where the xsk_tx_list maintained data at the front for one xsk
> > > > socket while starving the xsk sockets at the back of the list.
> > > > This could result in issues such as the inability to transmit packets,
> > > > increased latency, and jitter. To address this problem, we introduced
> > > > a new variable called tx_budget_cache, which limits each xsk to transmit
> > > > a maximum of MAX_XSK_TX_BUDGET tx descriptors. This allocation ensures
> > > > equitable opportunities for subsequent xsk sockets to send tx descriptors.
> > > > The value of MAX_XSK_TX_BUDGET is temporarily set to 16.
> > >
> > > Hi Albert. Yes you are correct that there is nothing hindering this to
> > > happen in the code at the moment, so let us fix it.
> > >
> > > > Signed-off-by: Albert Huang <[email protected]>
> > > > ---
> > > > include/net/xdp_sock.h | 6 ++++++
> > > > net/xdp/xsk.c | 18 ++++++++++++++++++
> > > > 2 files changed, 24 insertions(+)
> > > >
> > > > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > > > index 69b472604b86..f617ff54e38c 100644
> > > > --- a/include/net/xdp_sock.h
> > > > +++ b/include/net/xdp_sock.h
> > > > @@ -44,6 +44,7 @@ struct xsk_map {
> > > > struct xdp_sock __rcu *xsk_map[];
> > > > };
> > > >
> > > > +#define MAX_XSK_TX_BUDGET 16
> > >
> > > I think something like MAX_PER_SOCKET_BUDGET would be clearer.
> > >
> > > > struct xdp_sock {
> > > > /* struct sock must be the first member of struct xdp_sock */
> > > > struct sock sk;
> > > > @@ -63,6 +64,11 @@ struct xdp_sock {
> > > >
> > > > struct xsk_queue *tx ____cacheline_aligned_in_smp;
> > > > struct list_head tx_list;
> > > > + /* Record the actual number of times xsk has transmitted a tx
> > > > + * descriptor, with a maximum limit not exceeding MAX_XSK_TX_BUDGET
> > > > + */
> > > > + u32 tx_budget_cache;
> > > > +
> > > > /* Protects generic receive. */
> > > > spinlock_t rx_lock;
> > > >
> > > > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > > > index f5e96e0d6e01..087f2675333c 100644
> > > > --- a/net/xdp/xsk.c
> > > > +++ b/net/xdp/xsk.c
> > > > @@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);
> > > >
> > > > bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > > > {
> > > > + u32 xsk_full_count = 0;
> > >
> > > Enough with a bool;
> > >
> > > > struct xdp_sock *xs;
> > > >
> > > > rcu_read_lock();
> > > > +again:
> > > > list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > > > + if (xs->tx_budget_cache >= MAX_XSK_TX_BUDGET) {
> > > > + xsk_full_count++;
> > > > + continue;
> > > > + }
> > >
> > > The problem here is that the fixed MAX_XSK_TX_BUDGET is only useful
> > > for the <= 2 socket case. If I have 3 sockets sharing a
> > > netdev/queue_id, the two first sockets can still starve the third one
> > > since the total budget per send is 32. You need to go through the list
> > > of sockets in the beginning to compute the MAX_XSK_TX_BUDGET to
> > > compute this dynamically before each call. Or cache this value
> > > somehow, in the pool for example. Actually, the refcount in the
> > > buf_pool will tell you how many sockets are sharing the same buf_pool.
> > > Try using that to form MAX_XSK_TX_BUDGET on the fly.
> > >
> > > Another simpler way of accomplishing this would be to just reorder the
> > > list every time. Put the first socket last in the list every time. The
> > > drawback of this is that you need to hold the xsk_tx_list_lock while
> > > doing this so might be slower. The per socket batch size would also be
> > > 32 and you would not receive "fairness" over a single call to
> > > sendto(). Would that be a problem for you?
> > >
> >
> > Currently, there are two paths in the kernel that consume TX queue descriptors:
> >
> > 1、Native XSK
> > xsk_tx_peek_desc
> > xskq_cons_peek_desc
> >
> > In the first scenario, we consume TX descriptors by sequentially
> > traversing the pool->xsk_tx_list
> > without any implicit code logic to ensure fairness. This can lead to a
> > scenario of starvation,
> > making it a top priority for us to address.
> >
> > 2、Generic XSK
> > __xsk_sendmsg (or xsk_poll)
> > xsk_generic_xmit
> > __xsk_generic_xmit
> > xskq_cons_peek_desc
> >
> > In the second scenario, TX descriptors are consumed by using sendto.
> > Currently, __xsk_generic_xmit
> > sends a maximum of 32 TX descriptors each time, and the process
> > scheduling strategy already
> > ensures a certain level of fairness. In this scenario, should we
> > consider not addressing it and
> > instead prioritize the first scenario?
>
> Agree. The first scenario is the problematic one. One problem we have
> to solve there is that the batch size is up to the driver in the
> zero-copy case so the xsk core has no idea. Maybe introduce a pointer
> that tells us what socket to get packets from first and make sure this
> pointer gets updated to the next socket in the list every time the
> function is exited? Please make sure that the one socket case is not
> hurt.

The method of "introducing a pointer that tells us which socket to get
packets from first" is useful, but it requires us to manage socket
additions and removals. This would introduce
locking operations.

So it seems that the following code is simple enough and appears to
solve the problem:

1. During each iteration, check whether the socket currently being
traversed has exhausted its quota. If it has, skip it and continue
iterating through the remaining sockets.
2. If all sockets have been traversed and no available tx descriptors
have been found, consider whether it is time to start a fresh
iteration.
3. The logic for a fresh iteration is to check whether any socket used
up its quota during the traversal. If so, set the tx_budget_cache of
all sockets to 0 and begin a new iteration of the list.

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index f5e96e0d6e01..2cf2822e9d16 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);

bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
{
+ bool xsk_cache_full = false;
struct xdp_sock *xs;

rcu_read_lock();
+again:
list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
+ if (xs->tx_budget_cache >= MAX_PER_SOCKET_BUDGET) {
+ xsk_cache_full = true;
+ continue;
+ }
+
if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
if (xskq_has_descs(xs->tx))
xskq_cons_release(xs->tx);
continue;
}

+ xs->tx_budget_cache++;
+
/* This is the backpressure mechanism for the Tx path.
* Reserve space in the completion queue and only proceed
* if there is space in it. This avoids having to implement
@@ -436,6 +445,15 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
return true;
}

+not_found:
+ if (xsk_cache_full == true) {
+ list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
+ xs->tx_budget_cache = 0;
+ }
+ xsk_cache_full = false;
+ goto again;
+ }
+
out:
rcu_read_unlock();
return false;

Although this method cannot achieve perfect fairness, it prevents any
socket from starving.
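
For example, with three sockets sharing the pool and
MAX_PER_SOCKET_BUDGET = 16, the first socket is picked at most 16 times
before it starts being skipped, then the second, then the third; the
tx_budget_cache counters are only reset once a full walk finds every
remaining socket either over budget or without descriptors, so a socket
at the back of the list gets served before any socket at the front gets
a second batch of 16.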

>
> > Additionally, based on my understanding, there should not be
> > applications concurrently using generic
> > XSK and native XSK on the same pool.
>
> That is correct.
>
> > Magnus, how do you view this issue? I'm concerned that striving for
> > absolute fairness might introduce
> > additional complexity in the logic.
>
> Is this a problem that you have observed or need to guard against in
> an application? If so, let us fix it.

Currently, we are facing the first scenario.

>
> > BR
> > Albert
> >
> >
> >
> > > > +
> > > > if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
> > > > if (xskq_has_descs(xs->tx))
> > > > xskq_cons_release(xs->tx);
> > > > continue;
> > > > }
> > > >
> > > > + xs->tx_budget_cache++;
> > > > +
> > > > /* This is the backpressure mechanism for the Tx path.
> > > > * Reserve space in the completion queue and only proceed
> > > > * if there is space in it. This avoids having to implement
> > > > @@ -436,6 +445,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > > > return true;
> > > > }
> > > >
> > > > + if (unlikely(xsk_full_count > 0)) {
> > > > + list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > > > + xs->tx_budget_cache = 0;
> > > > + }
> > > > + xsk_full_count = 0;
> > > > + goto again;
> > > > + }
> > > > +
> > > > out:
> > > > rcu_read_unlock();
> > > > return false;
> > > > @@ -1230,6 +1247,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
> > > > xs->zc = xs->umem->zc;
> > > > xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG);
> > > > xs->queue_id = qid;
> > > > + xs->tx_budget_cache = 0;
> > > > xp_add_xsk(xs->pool, xs);
> > > >
> > > > out_unlock:
> > > > --
> > > > 2.20.1
> > > >
> > > >

2023-10-23 10:25:29

by Magnus Karlsson

Subject: Re: [PATCH v2 net-next] xsk: Avoid starving xsk at the end of the list

On Mon, 23 Oct 2023 at 11:37, 黄杰 <[email protected]> wrote:
>
> Magnus Karlsson <[email protected]> 于2023年10月19日周四 17:13写道:
> >
> > On Thu, 19 Oct 2023 at 10:41, 黄杰 <[email protected]> wrote:
> > >
> > > Magnus Karlsson <[email protected]> 于2023年10月16日周一 14:41写道:
> > > >
> > > > On Mon, 16 Oct 2023 at 05:17, Albert Huang
> > > > <[email protected]> wrote:
> > > > >
> > > > > In the previous implementation, when multiple xsk sockets were
> > > > > associated with a single xsk_buff_pool, a situation could arise
> > > > > where the xsk_tx_list maintained data at the front for one xsk
> > > > > socket while starving the xsk sockets at the back of the list.
> > > > > This could result in issues such as the inability to transmit packets,
> > > > > increased latency, and jitter. To address this problem, we introduced
> > > > > a new variable called tx_budget_cache, which limits each xsk to transmit
> > > > > a maximum of MAX_XSK_TX_BUDGET tx descriptors. This allocation ensures
> > > > > equitable opportunities for subsequent xsk sockets to send tx descriptors.
> > > > > The value of MAX_XSK_TX_BUDGET is temporarily set to 16.
> > > >
> > > > Hi Albert. Yes you are correct that there is nothing hindering this to
> > > > happen in the code at the moment, so let us fix it.
> > > >
> > > > > Signed-off-by: Albert Huang <[email protected]>
> > > > > ---
> > > > > include/net/xdp_sock.h | 6 ++++++
> > > > > net/xdp/xsk.c | 18 ++++++++++++++++++
> > > > > 2 files changed, 24 insertions(+)
> > > > >
> > > > > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > > > > index 69b472604b86..f617ff54e38c 100644
> > > > > --- a/include/net/xdp_sock.h
> > > > > +++ b/include/net/xdp_sock.h
> > > > > @@ -44,6 +44,7 @@ struct xsk_map {
> > > > > struct xdp_sock __rcu *xsk_map[];
> > > > > };
> > > > >
> > > > > +#define MAX_XSK_TX_BUDGET 16
> > > >
> > > > I think something like MAX_PER_SOCKET_BUDGET would be clearer.
> > > >
> > > > > struct xdp_sock {
> > > > > /* struct sock must be the first member of struct xdp_sock */
> > > > > struct sock sk;
> > > > > @@ -63,6 +64,11 @@ struct xdp_sock {
> > > > >
> > > > > struct xsk_queue *tx ____cacheline_aligned_in_smp;
> > > > > struct list_head tx_list;
> > > > > + /* Record the actual number of times xsk has transmitted a tx
> > > > > + * descriptor, with a maximum limit not exceeding MAX_XSK_TX_BUDGET
> > > > > + */
> > > > > + u32 tx_budget_cache;
> > > > > +
> > > > > /* Protects generic receive. */
> > > > > spinlock_t rx_lock;
> > > > >
> > > > > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > > > > index f5e96e0d6e01..087f2675333c 100644
> > > > > --- a/net/xdp/xsk.c
> > > > > +++ b/net/xdp/xsk.c
> > > > > @@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);
> > > > >
> > > > > bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > > > > {
> > > > > + u32 xsk_full_count = 0;
> > > >
> > > > Enough with a bool;
> > > >
> > > > > struct xdp_sock *xs;
> > > > >
> > > > > rcu_read_lock();
> > > > > +again:
> > > > > list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > > > > + if (xs->tx_budget_cache >= MAX_XSK_TX_BUDGET) {
> > > > > + xsk_full_count++;
> > > > > + continue;
> > > > > + }
> > > >
> > > > The problem here is that the fixed MAX_XSK_TX_BUDGET is only useful
> > > > for the <= 2 socket case. If I have 3 sockets sharing a
> > > > netdev/queue_id, the two first sockets can still starve the third one
> > > > since the total budget per send is 32. You need to go through the list
> > > > of sockets in the beginning to compute the MAX_XSK_TX_BUDGET to
> > > > compute this dynamically before each call. Or cache this value
> > > > somehow, in the pool for example. Actually, the refcount in the
> > > > buf_pool will tell you how many sockets are sharing the same buf_pool.
> > > > Try using that to form MAX_XSK_TX_BUDGET on the fly.
> > > >
> > > > Another simpler way of accomplishing this would be to just reorder the
> > > > list every time. Put the first socket last in the list every time. The
> > > > drawback of this is that you need to hold the xsk_tx_list_lock while
> > > > doing this so might be slower. The per socket batch size would also be
> > > > 32 and you would not receive "fairness" over a single call to
> > > > sendto(). Would that be a problem for you?
> > > >
> > >
> > > Currently, there are two paths in the kernel that consume TX queue descriptors:
> > >
> > > 1、Native XSK
> > > xsk_tx_peek_desc
> > > xskq_cons_peek_desc
> > >
> > > In the first scenario, we consume TX descriptors by sequentially
> > > traversing the pool->xsk_tx_list
> > > without any implicit code logic to ensure fairness. This can lead to a
> > > scenario of starvation,
> > > making it a top priority for us to address.
> > >
> > > 2、Generic XSK
> > > __xsk_sendmsg (or xsk_poll)
> > > xsk_generic_xmit
> > > __xsk_generic_xmit
> > > xskq_cons_peek_desc
> > >
> > > In the second scenario, TX descriptors are consumed by using sendto.
> > > Currently, __xsk_generic_xmit
> > > sends a maximum of 32 TX descriptors each time, and the process
> > > scheduling strategy already
> > > ensures a certain level of fairness. In this scenario, should we
> > > consider not addressing it and
> > > instead prioritize the first scenario?
> >
> > Agree. The first scenario is the problematic one. One problem we have
> > to solve there is that the batch size is up to the driver in the
> > zero-copy case so the xsk core has no idea. Maybe introduce a pointer
> > that tells us what socket to get packets from first and make sure this
> > pointer gets updated to the next socket in the list every time the
> > function is exited? Please make sure that the one socket case is not
> > hurt.
>
> The method of "introducing a pointer that tells us which socket to get
> packets from first" is useful, but it requires us to manage socket
> additions and removals. This would introduce
> locking operations.
>
> So it seems that the following code is simple enough and appears to
> solve the problem:
>
> 1.During each iteration, check if the current socket being traversed
> has exhausted its quota. If it has, skip it and continue iterating
> through the remaining sockets.
> 2.If all sockets have been traversed, and no available transmission
> descriptors (tx desc) have been found, consider whether it's time to
> start a fresh iteration.
> 3.The logic for a fresh iteration involves checking if any socket has
> used up its quota during the traversal. If any socket has reached its
> quota, set the tx_budget_cache of all sockets to 0 and begin a new
> iteration of the list.
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index f5e96e0d6e01..2cf2822e9d16 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -413,16 +413,25 @@ EXPORT_SYMBOL(xsk_tx_release);
>
> bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> {
> + bool xsk_cache_full = false;
> struct xdp_sock *xs;
>
> rcu_read_lock();
> +again:
> list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> + if (xs->tx_budget_cache >= MAX_PER_SOCKET_BUDGET) {

The problem here is what to set this MAX_PER_SOCKET_BUDGET to? We do
not want to penalize the one socket per buf_pool case, so this would
then have to be very large. But this might not be a problem as all new
drivers are using the batched interface (since Maciej is forcing
everyone to use it :-) ), and it will only go down this path that you
are modifying in the multiple sockets per buf_pool case. So I think
it's fine. But still, what is a good value? 32 or 64?

> + xsk_cache_full = true;
> + continue;
> + }
> +
> if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
> if (xskq_has_descs(xs->tx))
> xskq_cons_release(xs->tx);
> continue;
> }
>
> + xs->tx_budget_cache++;
> +
> /* This is the backpressure mechanism for the Tx path.
> * Reserve space in the completion queue and only proceed
> * if there is space in it. This avoids having to implement
> @@ -436,6 +445,15 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool,
> struct xdp_desc *desc)
> return true;
> }
>
> +not_found:

You are not using this label, but I am probably not seeing all the code here.

> + if (xsk_cache_full == true) {
> + list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> + xs->tx_budget_cache = 0;
> + }
> + xsk_cache_full = false;
> + goto again;
> + }
> +
> out:
> rcu_read_unlock();
> return false;
>
> Although this method cannot achieve perfect fairness, it prevents any
> sockets from starving

That is perfectly fine. We should not aim for perfect fairness since
that would be prohibitively expensive. If someone wants that, they can
implement that code on top of this.

Your approach looks good to me. Please produce a patch.

Thanks!

> >
> > > Additionally, based on my understanding, there should not be
> > > applications concurrently using generic
> > > XSK and native XSK on the same pool.
> >
> > That is correct.
> >
> > > Magnus, how do you view this issue? I'm concerned that striving for
> > > absolute fairness might introduce
> > > additional complexity in the logic.
> >
> > Is this a problem that you have observed or need to guard against in
> > an application? If so, let us fix it.
>
> Currently, we are facing issue 1.
>
> >
> > > BR
> > > Albert
> > >
> > >
> > >
> > > > > +
> > > > > if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
> > > > > if (xskq_has_descs(xs->tx))
> > > > > xskq_cons_release(xs->tx);
> > > > > continue;
> > > > > }
> > > > >
> > > > > + xs->tx_budget_cache++;
> > > > > +
> > > > > /* This is the backpressure mechanism for the Tx path.
> > > > > * Reserve space in the completion queue and only proceed
> > > > > * if there is space in it. This avoids having to implement
> > > > > @@ -436,6 +445,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
> > > > > return true;
> > > > > }
> > > > >
> > > > > + if (unlikely(xsk_full_count > 0)) {
> > > > > + list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> > > > > + xs->tx_budget_cache = 0;
> > > > > + }
> > > > > + xsk_full_count = 0;
> > > > > + goto again;
> > > > > + }
> > > > > +
> > > > > out:
> > > > > rcu_read_unlock();
> > > > > return false;
> > > > > @@ -1230,6 +1247,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
> > > > > xs->zc = xs->umem->zc;
> > > > > xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG);
> > > > > xs->queue_id = qid;
> > > > > + xs->tx_budget_cache = 0;
> > > > > xp_add_xsk(xs->pool, xs);
> > > > >
> > > > > out_unlock:
> > > > > --
> > > > > 2.20.1
> > > > >
> > > > >