From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net v2] net: sched: fix packet stuck problem for lockless qdisc
Date: Wed, 24 Mar 2021 10:24:37 +0800
Message-ID: <1616552677-39016-1-git-send-email-linyunsheng@huawei.com>

The lockless qdisc has the below concurrency problem:

        cpu0                    cpu1
          .                       .
     q->enqueue                   .
          .                       .
   qdisc_run_begin()              .
          .                       .
     dequeue_skb()                .
          .                       .
   sch_direct_xmit()              .
          .                       .
          .                  q->enqueue
          .              qdisc_run_begin()
          .            return and do nothing
          .                       .
    qdisc_run_end()               .

cpu1 enqueues a skb without calling __qdisc_run(), because cpu0 has not
released the lock yet and spin_trylock() returns false for cpu1 in
qdisc_run_begin(); and cpu0 does not see the skb enqueued by cpu1 when
calling dequeue_skb(), because cpu1 may enqueue the skb after cpu0 has
called dequeue_skb() and before cpu0 calls qdisc_run_end().

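For context (this is not part of the patch), the pre-patch fast path that
the diagram refers to looks roughly as below; qdisc_run() is essentially as
defined in include/net/pkt_sched.h, and qdisc_run_begin() is reconstructed
from the unchanged context lines in the diff further down, so treat it as
an illustrative sketch rather than an exact copy:

        /* Caller side (e.g. __dev_xmit_skb() right after q->enqueue()):
         * the whole run is silently skipped when qdisc_run_begin() fails.
         */
        static inline void qdisc_run(struct Qdisc *q)
        {
                if (qdisc_run_begin(q)) {
                        __qdisc_run(q);
                        qdisc_run_end(q);
                }
        }

        /* Pre-patch qdisc_run_begin(): for a TCQ_F_NOLOCK qdisc a failed
         * spin_trylock() simply returns false, so a skb enqueued while the
         * lock is held stays queued with nobody scheduled to send it.
         */
        static inline bool qdisc_run_begin(struct Qdisc *qdisc)
        {
                if (qdisc->flags & TCQ_F_NOLOCK) {
                        if (!spin_trylock(&qdisc->seqlock))
                                return false;
                        WRITE_ONCE(qdisc->empty, false);
                } else if (qdisc_is_running(qdisc)) {
                        return false;
                }
                /* lockdep/seqcount bookkeeping elided */
                return true;
        }
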
The lockless qdisc has another concurrency problem when tx_action is
involved:

cpu0(serving tx_action)        cpu1                 cpu2
          .                      .                    .
          .                 q->enqueue                .
          .              qdisc_run_begin()            .
          .                dequeue_skb()              .
          .                      .               q->enqueue
          .                      .                    .
          .              sch_direct_xmit()            .
          .                      .             qdisc_run_begin()
          .                      .           return and do nothing
          .                      .                    .
clear __QDISC_STATE_SCHED        .                    .
   qdisc_run_begin()             .                    .
 return and do nothing           .                    .
          .                      .                    .
          .               qdisc_run_end()             .

This patch fixes the above data races by:
1. Reading the flag before doing spin_trylock().
2. If the first spin_trylock() returns false and the flag was not set
   before that first spin_trylock(), setting the flag and retrying
   another spin_trylock(), in case the other CPU may not see the new
   flag after it releases the lock.
3. Rescheduling if the flag is set after the lock is released at the
   end of qdisc_run_end().

For the tx_action case, the flag is also set when cpu1 is at the end of
qdisc_run_end(), so tx_action will be rescheduled again to dequeue the
skb enqueued by cpu2.

The flag is only cleared before retrying a dequeue when the dequeue
returns NULL, in order to reduce the overhead of the double
spin_trylock() above and of calling __netif_schedule().

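As an aside (not part of the patch), the resulting handshake can be
modelled in plain C11 roughly as below. The names and the atomic_flag
"lock" are stand-ins for qdisc->seqlock and the new state bit, and for
brevity the model clears the flag in run_end(), whereas the patch clears
it in pfifo_fast_dequeue() before retrying the dequeue:

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        static atomic_flag seqlock = ATOMIC_FLAG_INIT; /* models qdisc->seqlock */
        static atomic_bool need_resched;               /* models the new state bit */

        static bool trylock(void) { return !atomic_flag_test_and_set(&seqlock); }
        static void unlock(void)  { atomic_flag_clear(&seqlock); }

        static bool run_begin(void)
        {
                bool dont_retry = atomic_load(&need_resched);

                if (trylock())
                        return true;

                /* Lock is busy.  If the flag was already set before our
                 * first attempt, the holder (or a rescheduled tx_action)
                 * will pick the work up, so back off.
                 */
                if (dont_retry)
                        return false;

                atomic_store(&need_resched, true);

                /* Retry: the holder may have dropped the lock before the
                 * flag set above became visible to it.
                 */
                return trylock();
        }

        static void run_end(void)
        {
                unlock();

                /* If someone raised the flag while we held the lock, their
                 * work must be serviced; the kernel calls __netif_schedule()
                 * here, the model just reports it.
                 */
                if (atomic_exchange(&need_resched, false))
                        puts("reschedule");
        }

Driving run_begin()/run_end() from two threads shows the point of the
double trylock: either the second trylock succeeds, or the unlock path
observes the flag, so an enqueue can no longer be left unserviced.
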
The performance impact of this patch, tested using pktgen and a dummy
netdev with the pfifo_fast qdisc attached:

 threads  without+this_patch   with+this_patch     delta
    1         2.6Mpps              2.6Mpps          +0.0%
    2         3.9Mpps              3.8Mpps          -2.5%
    4         5.6Mpps              5.6Mpps          -0.0%
    8         2.7Mpps              2.8Mpps          +3.7%
   16         2.2Mpps              2.2Mpps          +0.0%

Fixes: 6b3ba9146fe6 ("net: sched: allow qdiscs to handle locking")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
V2: Avoid the overhead of fixing the data race as much as possible.
---
 include/net/sch_generic.h | 48 ++++++++++++++++++++++++++++++++++++++++++++++-
 net/sched/sch_generic.c   | 12 ++++++++++++
 2 files changed, 59 insertions(+), 1 deletion(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index f7a6e14..09a755d 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -36,6 +36,7 @@ struct qdisc_rate_table {
 enum qdisc_state_t {
 	__QDISC_STATE_SCHED,
 	__QDISC_STATE_DEACTIVATED,
+	__QDISC_STATE_NEED_RESCHEDULE,
 };
 
 struct qdisc_size_table {
@@ -159,12 +160,42 @@ static inline bool qdisc_is_empty(const struct Qdisc *qdisc)
 static inline bool qdisc_run_begin(struct Qdisc *qdisc)
 {
 	if (qdisc->flags & TCQ_F_NOLOCK) {
+		bool dont_retry = test_bit(__QDISC_STATE_NEED_RESCHEDULE,
+					   &qdisc->state);
+
+		if (spin_trylock(&qdisc->seqlock))
+			goto out;
+
+		/* If the flag was set before doing the spin_trylock() and
+		 * the above spin_trylock() returned false, it means the other
+		 * cpu holding the lock will do the dequeuing for us, or it
+		 * will see the flag set after releasing the lock and
+		 * reschedule net_tx_action() to do the dequeuing.
+		 */
+		if (dont_retry)
+			return false;
+
+		/* We could do set_bit() before the first spin_trylock()
+		 * and avoid doing the second spin_trylock() completely, but
+		 * then we could have multiple cpus doing the set_bit(). Here
+		 * dont_retry is used to avoid the set_bit() and the second
+		 * spin_trylock(), which gives a 5% performance improvement
+		 * over doing the set_bit() before the first spin_trylock().
+		 */
+		set_bit(__QDISC_STATE_NEED_RESCHEDULE,
+			&qdisc->state);
+
+		/* Retry again in case the other CPU may not see the new flag
+		 * after it releases the lock at the end of qdisc_run_end().
+		 */
 		if (!spin_trylock(&qdisc->seqlock))
 			return false;
 		WRITE_ONCE(qdisc->empty, false);
 	} else if (qdisc_is_running(qdisc)) {
 		return false;
 	}
+
+out:
 	/* Variant of write_seqcount_begin() telling lockdep a trylock
 	 * was attempted.
 	 */
@@ -176,8 +207,23 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
 static inline void qdisc_run_end(struct Qdisc *qdisc)
 {
 	write_seqcount_end(&qdisc->running);
-	if (qdisc->flags & TCQ_F_NOLOCK)
+	if (qdisc->flags & TCQ_F_NOLOCK) {
 		spin_unlock(&qdisc->seqlock);
+
+		/* qdisc_run_end() is protected by the RCU lock, and
+		 * qdisc reset will do a synchronize_net() after
+		 * setting __QDISC_STATE_DEACTIVATED, so testing
+		 * the below two bits separately should be fine.
+		 * For the qdisc_run() in net_tx_action() case, we
+		 * really should provide rcu protection explicitly
+		 * for documentation purposes or PREEMPT_RCU.
+		 */
+		if (unlikely(test_bit(__QDISC_STATE_NEED_RESCHEDULE,
+				      &qdisc->state) &&
+			     !test_bit(__QDISC_STATE_DEACTIVATED,
+				       &qdisc->state)))
+			__netif_schedule(qdisc);
+	}
 }
 
 static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 44991ea..7e3426b 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -640,8 +640,10 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
 {
 	struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
 	struct sk_buff *skb = NULL;
+	bool need_retry = true;
 	int band;
 
+retry:
 	for (band = 0; band < PFIFO_FAST_BANDS && !skb; band++) {
 		struct skb_array *q = band2list(priv, band);
 
@@ -652,6 +654,16 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
 	}
 	if (likely(skb)) {
 		qdisc_update_stats_at_dequeue(qdisc, skb);
+	} else if (need_retry &&
+		   test_and_clear_bit(__QDISC_STATE_NEED_RESCHEDULE,
+				      &qdisc->state)) {
+		/* do another dequeue after clearing the flag to
+		 * avoid calling __netif_schedule().
+		 */
+		smp_mb__after_atomic();
+		need_retry = false;
+
+		goto retry;
 	} else {
 		WRITE_ONCE(qdisc->empty, true);
 	}
--
2.7.4