Subject: Re: [PATCH net-next] net: link_watch: prevent starvation when processing linkwatch wq
From: Yunsheng Lin
To: David Miller
CC: , , , , , , , , "xuwei (O)"
References: <1558921674-158349-1-git-send-email-linyunsheng@huawei.com> <20190528.235806.323127882998745493.davem@davemloft.net> <6e9b41c9-6edb-be7f-07ee-5480162a227e@huawei.com>
Message-ID: <5c06e5dd-cfb1-870c-a0a3-42397b59c734@huawei.com>
Date: Tue, 25 Jun 2019 10:28:04 +0800
In-Reply-To: <6e9b41c9-6edb-be7f-07ee-5480162a227e@huawei.com>
On 2019/5/29 16:59, Yunsheng Lin wrote:
> On 2019/5/29 14:58, David Miller wrote:
>> From: Yunsheng Lin
>> Date: Mon, 27 May 2019 09:47:54 +0800
>>
>>> When a user has configured a large number of virtual netdevs, such
>>> as 4K VLANs, a carrier on/off operation on the real netdev will also
>>> cause the link state of its virtual netdevs to be processed in
>>> linkwatch. Currently this processing is done in a work queue, which
>>> may starve other work queues of worker time.
>>>
>>> This patch releases the CPU after the link watch worker has processed
>>> a fixed number of netdev link watch events, and reschedules the work
>>> queue when link watch events still remain.
>>>
>>> Signed-off-by: Yunsheng Lin
>>
>> Why not rtnl_unlock(); yield(); rtnl_lock(); every "100" events
>> processed?
>>
>> That seems better than adding all of this overhead to reschedule the
>> workqueue every 100 items.
>
> One minor concern: the above solution does not seem to solve the CPU
> starvation of other normal workqueues scheduled on the same CPU as
> linkwatch. Maybe I misunderstand the workqueue, or there is another
> consideration here? :)
>
> Anyway, I will implement it as you suggested and test it before posting V2.
> Thanks.

Hi, David

I stress tested the above solution with a lot of VLAN devices and qemu-kvm
with VFs in passthrough mode. The linkwatch wq sometimes blocks the
irqfd_inject wq when they are scheduled on the same CPU, which may cause
an interrupt delay problem for the VM.

Rescheduling the workqueue every 100 items does allow the irqfd_inject wq
to run sooner, which alleviates the interrupt delay problem for the VM.

So it is OK for me to fall back to rescheduling the link watch wq every
100 items, or is there a better way to fix this properly?
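
To make the trade-off concrete, below is a rough sketch (not the actual
patch) of the event loop in question, simplified from
__linkwatch_run_queue() in net/core/link_watch.c. LW_BUDGET and
process_one_event() are made-up names for illustration; lweventlist,
linkwatch_work, dev->link_watch_list, rtnl_lock()/rtnl_unlock(), yield()
and schedule_delayed_work() mirror the real kernel symbols. Approach (a)
is the rtnl_unlock(); yield(); rtnl_lock() suggestion, approach (b) is the
reschedule-the-wq idea from the original patch.

/*
 * Sketch only, simplified from net/core/link_watch.c; not the actual patch.
 */
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/sched.h>
#include <linux/workqueue.h>

#define LW_BUDGET 100			/* events to handle before backing off */

static LIST_HEAD(lweventlist);		/* pending link watch events */
static void linkwatch_event(struct work_struct *work);
static DECLARE_DELAYED_WORK(linkwatch_work, linkwatch_event);

/* Illustrative stand-in for linkwatch_do_dev(): push carrier state change. */
static void process_one_event(struct net_device *dev)
{
	netdev_state_change(dev);	/* requires RTNL held */
}

static void linkwatch_event(struct work_struct *work)
{
	int processed = 0;
	struct net_device *dev;

	rtnl_lock();
	while (!list_empty(&lweventlist)) {
		if (++processed > LW_BUDGET) {
			/*
			 * Approach (a): drop the RTNL and yield the CPU
			 * every LW_BUDGET events, then carry on in the
			 * same work item.
			 */
			rtnl_unlock();
			yield();
			rtnl_lock();
			processed = 0;
			/*
			 * Approach (b) would instead requeue the work and
			 * return after dropping the RTNL, letting other
			 * work items on this CPU (e.g. irqfd_inject) run:
			 *	schedule_delayed_work(&linkwatch_work, 0);
			 */
		}
		dev = list_first_entry(&lweventlist, struct net_device,
				       link_watch_list);
		list_del_init(&dev->link_watch_list);
		process_one_event(dev);
	}
	rtnl_unlock();
}

With approach (a) the linkwatch work item keeps occupying its worker until
the event list is drained, which is why other work items queued on the same
CPU can still be delayed; approach (b) returns to the workqueue after each
budget, at the cost of the requeue overhead David mentioned.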