Subject: Re: [PATCH net-next] net: link_watch: prevent starvation when processing linkwatch wq
To: Stephen Hemminger
From: Yunsheng Lin
Date: Tue, 28 May 2019 09:04:18 +0800
References: <1558921674-158349-1-git-send-email-linyunsheng@huawei.com> <20190527075838.5a65abf9@hermes.lan>
In-Reply-To: <20190527075838.5a65abf9@hermes.lan>
On 2019/5/27 22:58, Stephen Hemminger wrote:
> On Mon, 27 May 2019 09:47:54 +0800
> Yunsheng Lin wrote:
>
>> When a user has configured a large number of virtual netdevs, such
>> as 4K vlans, a carrier on/off operation on the real netdev also
>> causes the link state of all its virtual netdevs to be processed
>> by linkwatch. Currently the processing is done in a work queue,
>> which may starve other work queues of a worker.
>>
>> This patch releases the cpu after the link watch worker has
>> processed a fixed number of netdev link watch events, and
>> schedules the work queue again when there are still link watch
>> events remaining.
>>
>> Signed-off-by: Yunsheng Lin
>
> Why not put link watch in its own workqueue so it is scheduled
> separately from the system workqueue?

From testing and debugging, with a normal workqueue the work runs on
the cpu where it was queued, even when it has its own workqueue
instead of the system workqueue. So if that cpu is busy processing
linkwatch events, other work queued on the same cpu cannot run.

Using an unbound workqueue may solve the cpu starvation problem. But
__linkwatch_run_queue is called with rtnl_lock held, so if it takes a
long time to run, everything else that needs rtnl_lock is unable to
make progress.
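
By "unbound workqueue" I mean something like the sketch below. The
queue name and the helper are illustrative only, not actual
link_watch code:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *lw_wq;

static int lw_wq_create(void)
{
	/* WQ_UNBOUND lets the work run on any cpu rather than the
	 * cpu that queued it, so a busy submitting cpu no longer
	 * pins the linkwatch processing. */
	lw_wq = alloc_workqueue("linkwatch", WQ_UNBOUND, 0);
	return lw_wq ? 0 : -ENOMEM;
}

The work would then be queued with queue_delayed_work(lw_wq, ...)
instead of schedule_delayed_work(). That removes the per-cpu pinning,
but the rtnl_lock serialization described above remains.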
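
For comparison, the batching idea in the patch is roughly the
following untested sketch. LW_BATCH_LIMIT and the two lw_* helpers
are made-up names for illustration; the real change operates inside
__linkwatch_run_queue():

#include <linux/rtnetlink.h>
#include <linux/workqueue.h>

#define LW_BATCH_LIMIT	100	/* illustrative per-run budget */

static bool lw_have_pending(void);	/* hypothetical helper */
static void lw_process_one(void);	/* hypothetical helper */

static void lw_event_sketch(struct work_struct *dummy);
static DECLARE_DELAYED_WORK(lw_work_sketch, lw_event_sketch);

static void lw_event_sketch(struct work_struct *dummy)
{
	int budget = LW_BATCH_LIMIT;

	rtnl_lock();
	while (lw_have_pending() && budget-- > 0)
		lw_process_one();
	rtnl_unlock();

	/* Events left over: give the cpu back and reschedule, so
	 * other work queued on this cpu's worker gets to run. */
	if (lw_have_pending())
		schedule_delayed_work(&lw_work_sketch, 0);
}

The budget keeps each invocation bounded, and the zero-delay
reschedule lets other queued work run in between batches.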