From: Cong Wang
Date: Tue, 1 Sep 2020 11:24:43 -0700
Subject: Re: [PATCH net-next] net: sch_generic: aviod concurrent reset and enqueue op for lockless qdisc
To: Yunsheng Lin
Cc: Jamal Hadi Salim, Jiri Pirko, David Miller, Jakub Kicinski, Linux Kernel Network Developers, LKML, linuxarm@huawei.com
List-ID: linux-kernel@vger.kernel.org

On Mon, Aug 31, 2020 at 5:59 PM Yunsheng Lin wrote:
>
> Currently there is concurrent reset and enqueue operation for the
> same lockless qdisc when there is no lock to synchronize the
> q->enqueue() in __dev_xmit_skb() with the qdisc reset operation in
> qdisc_deactivate() called by dev_deactivate_queue(), which may cause
> out-of-bounds access for priv->ring[] in hns3 driver if user has
> requested a smaller queue num when __dev_xmit_skb() still enqueue a
> skb with a larger queue_mapping after the corresponding qdisc is
> reset, and call hns3_nic_net_xmit() with that skb later.

Can you be more specific here? Which call path requests a smaller
tx queue num?
If you mean netif_set_real_num_tx_queues(), clearly we already have a
synchronize_net() there.

>
> Avoid the above concurrent op by calling synchronize_rcu_tasks()
> after assigning new qdisc to dev_queue->qdisc and before calling
> qdisc_deactivate() to make sure skb with larger queue_mapping
> enqueued to old qdisc will always be reset when qdisc_deactivate()
> is called.

Like Eric said, it is not nice to call such a blocking function when
we have a large number of TX queues. Possibly we just need to add a
synchronize_net() as in netif_set_real_num_tx_queues(), if it is
missing.

Thanks.