Subject: Re: [PATCH net-next 08/11] net: hns3: add interrupt affinity support for misc interrupt
From: Yunsheng Lin
To: Salil Mehta, tanhuazhong, "davem@davemloft.net"
Cc: "netdev@vger.kernel.org", "linux-kernel@vger.kernel.org", "Zhuangyuzeng (Yisen)", Linuxarm, "lipeng (Y)"
Date: Mon, 29 Jul 2019 18:37:18 +0800
References: <1563938327-9865-1-git-send-email-tanhuazhong@huawei.com> <1563938327-9865-9-git-send-email-tanhuazhong@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2019/7/29 17:18, Salil Mehta wrote:
>> From: tanhuazhong
>> Sent: Wednesday, July 24, 2019 4:19 AM
>> To: davem@davemloft.net
>> Cc: netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Salil Mehta;
>> Zhuangyuzeng (Yisen); Linuxarm; linyunsheng; lipeng (Y); tanhuazhong
>> Subject: [PATCH net-next 08/11] net: hns3: add interrupt affinity support for
>> misc interrupt
>>
>> From: Yunsheng Lin
>>
>> The misc interrupt is used to schedule the reset and mailbox
>> subtasks, and a 1 second timer is used to schedule the service
>> subtask, which does periodic work.
>>
>> This patch sets the above three subtasks' affinity using the
>> misc interrupt's affinity.
>>
>> This patch also sets up an affinity notifier for the misc interrupt
>> to allow the user to change the above three subtasks' affinity.
>>
>> Signed-off-by: Yunsheng Lin
>> Signed-off-by: Peng Li
>> Signed-off-by: Huazhong Tan
>> ---
>>  .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 59 ++++++++++++++++++++--
>>  .../ethernet/hisilicon/hns3/hns3pf/hclge_main.h |  4 ++
>>  2 files changed, 59 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
>> index f345095..fe45986 100644
>> --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
>> +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
>> @@ -1270,6 +1270,12 @@ static int hclge_configure(struct hclge_dev *hdev)
>>
>>  	hclge_init_kdump_kernel_config(hdev);
>>
>> +	/* Set the initial affinity based on the PCI function number */
>> +	i = cpumask_weight(cpumask_of_node(dev_to_node(&hdev->pdev->dev)));
>> +	i = i ? PCI_FUNC(hdev->pdev->devfn) % i : 0;
>> +	cpumask_set_cpu(cpumask_local_spread(i, dev_to_node(&hdev->pdev->dev)),
>> +			&hdev->affinity_mask);
>> +
>>  	return ret;
>>  }
>>
>> @@ -2502,14 +2508,16 @@ static void hclge_mbx_task_schedule(struct hclge_dev *hdev)
>>  {
>>  	if (!test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state) &&
>>  	    !test_and_set_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state))
>> -		schedule_work(&hdev->mbx_service_task);
>> +		queue_work_on(cpumask_first(&hdev->affinity_mask), system_wq,
>
> Why do we have to use the system workqueue here? This could adversely
> affect other work scheduled that is not related to the HNS3 driver.
> The mailbox is internal to the driver, and depending on the utilization
> of the mbx channel (which could be heavy if many VMs are running), this
> might stress other system tasks as well. Have we thought about this?

If I understand CMWQ correctly, it is better to reuse the system workqueue
when the workqueue needed by the HNS3 driver shares the same properties as
the system workqueue, because the Concurrency Managed Workqueue mechanism
ensures that workqueues with the same properties share the same worker pool.

Some drivers (e.g. i40e) allocate a dedicated workqueue with WQ_MEM_RECLAIM;
I am not sure what use case requires WQ_MEM_RECLAIM. Anyway, if the HNS3
driver needs to allocate a new workqueue, we need to figure out the use case
first.
>> +			      &hdev->mbx_service_task);
>>  }
>>
>>  static void hclge_reset_task_schedule(struct hclge_dev *hdev)
>>  {
>>  	if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
>>  	    !test_and_set_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state))
>> -		schedule_work(&hdev->rst_service_task);
>> +		queue_work_on(cpumask_first(&hdev->affinity_mask), system_wq,
>> +			      &hdev->rst_service_task);
>>  }
>>
>>  static void hclge_task_schedule(struct hclge_dev *hdev)
>> @@ -2517,7 +2525,8 @@ static void hclge_task_schedule(struct hclge_dev *hdev)
>>  	if (!test_bit(HCLGE_STATE_DOWN, &hdev->state) &&
>>  	    !test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
>>  	    !test_and_set_bit(HCLGE_STATE_SERVICE_SCHED, &hdev->state))
>> -		(void)schedule_work(&hdev->service_task);
>> +		queue_work_on(cpumask_first(&hdev->affinity_mask), system_wq,
>
> Same here.
>
> Salil.