From: Yunsheng Lin
To: David Miller
Subject: Re: [PATCH net-next] net: link_watch: prevent starvation when processing linkwatch wq
Date: Wed, 29 May 2019 16:59:13 +0800
Message-ID: <6e9b41c9-6edb-be7f-07ee-5480162a227e@huawei.com>
In-Reply-To: <20190528.235806.323127882998745493.davem@davemloft.net>
References: <1558921674-158349-1-git-send-email-linyunsheng@huawei.com> <20190528.235806.323127882998745493.davem@davemloft.net>
X-Mailing-List: linux-kernel@vger.kernel.org
On 2019/5/29 14:58, David Miller wrote:
> From: Yunsheng Lin
> Date: Mon, 27 May 2019 09:47:54 +0800
>
>> When a user has configured a large number of virtual netdevs, such
>> as 4K VLANs, a carrier on/off operation on the real netdev will also
>> cause its virtual netdevs' link states to be processed in linkwatch.
>> Currently, the processing is done in a workqueue, which may starve
>> workers needed by other workqueues.
>>
>> This patch releases the CPU after the link watch worker has processed
>> a fixed number of netdev link watch events, and schedules the
>> workqueue again when link watch events remain.
>>
>> Signed-off-by: Yunsheng Lin
>
> Why not rtnl_unlock(); yield(); rtnl_lock(); every "100" events
> processed?
>
> That seems better than adding all of this overhead to reschedule the
> workqueue every 100 items.

One minor concern: the above solution does not seem to solve the CPU
starvation for other normal workqueue work scheduled on the same CPU as
linkwatch. Maybe I misunderstand the workqueue, or there is some other
consideration here? :)

Anyway, I will implement it as you suggested and test it before posting
v2. Thanks.