From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next] net: link_watch: prevent starvation when processing linkwatch wq
Date: Mon, 27 May 2019 09:47:54 +0800
Message-ID: <1558921674-158349-1-git-send-email-linyunsheng@huawei.com>

When a user has configured a large number of virtual netdevs, such as
4K vlans, a carrier on/off operation on the real netdev will also
cause the link state of each of its virtual netdevs to be processed
in linkwatch.
Currently, the processing is done in a work queue, which may starve
other work items on the same work queue. This patch releases the CPU
after the link watch worker has processed a fixed number of netdevs'
link watch events, and schedules the work queue again when there are
still link watch events remaining.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 net/core/link_watch.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/net/core/link_watch.c b/net/core/link_watch.c
index 7f51efb..06276ff 100644
--- a/net/core/link_watch.c
+++ b/net/core/link_watch.c
@@ -168,9 +168,16 @@ static void linkwatch_do_dev(struct net_device *dev)
 
 static void __linkwatch_run_queue(int urgent_only)
 {
+#define MAX_DO_DEV_PER_LOOP	100
+
+	int do_dev = MAX_DO_DEV_PER_LOOP;
 	struct net_device *dev;
 	LIST_HEAD(wrk);
 
+	/* Give urgent case more budget */
+	if (urgent_only)
+		do_dev += MAX_DO_DEV_PER_LOOP;
+
 	/*
 	 * Limit the number of linkwatch events to one
 	 * per second so that a runaway driver does not
@@ -189,7 +196,7 @@ static void __linkwatch_run_queue(int urgent_only)
 	spin_lock_irq(&lweventlist_lock);
 	list_splice_init(&lweventlist, &wrk);
 
-	while (!list_empty(&wrk)) {
+	while (!list_empty(&wrk) && do_dev > 0) {
 
 		dev = list_first_entry(&wrk, struct net_device, link_watch_list);
 		list_del_init(&dev->link_watch_list);
@@ -201,8 +208,13 @@ static void __linkwatch_run_queue(int urgent_only)
 		spin_unlock_irq(&lweventlist_lock);
 		linkwatch_do_dev(dev);
 		spin_lock_irq(&lweventlist_lock);
+
+		do_dev--;
 	}
 
+	/* Add the remaining work back to lweventlist */
+	list_splice_init(&wrk, &lweventlist);
+
 	if (!list_empty(&lweventlist))
 		linkwatch_schedule_work(0);
 	spin_unlock_irq(&lweventlist_lock);
-- 
2.8.1
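
[Editor's note: for readers less familiar with the pattern the patch
uses, below is a minimal user-space sketch of the same budget-and-requeue
idea: process at most a fixed number of queued items per invocation, and
put the remainder back so a later pass can finish them. The names here
(work_item, run_queue, MAX_PER_LOOP) are illustrative only; this is not
the kernel's linkwatch or list API.]

#include <stdio.h>
#include <stdlib.h>

#define MAX_PER_LOOP 100

struct work_item {
	int id;
	struct work_item *next;
};

/* Process up to 'budget' items from *head; leftovers stay queued. */
static int run_queue(struct work_item **head, int budget)
{
	while (*head && budget > 0) {
		struct work_item *item = *head;

		*head = item->next;	/* dequeue one item */
		printf("processed item %d\n", item->id);
		free(item);
		budget--;
	}

	/* A non-empty list means the caller must "reschedule" us. */
	return *head != NULL;
}

int main(void)
{
	struct work_item *head = NULL;

	/* Queue 250 items, more than one budget's worth. */
	for (int i = 250; i > 0; i--) {
		struct work_item *item = malloc(sizeof(*item));

		if (!item)
			return 1;
		item->id = i;
		item->next = head;
		head = item;
	}

	/* Each pass stands in for one work-queue invocation. */
	while (run_queue(&head, MAX_PER_LOOP))
		printf("budget exhausted, rescheduling\n");

	return 0;
}

The point mirrored from the patch is that leftover work is spliced back
onto the shared list before rescheduling, so no event is lost when the
budget runs out, while the CPU is yielded between passes.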