Subject: Re: [PATCH] nvme-pci: assign separate irq vectors for adminq and ioq0
To: Keith Busch
Cc: axboe@fb.com, linux-kernel@vger.kernel.org, hch@lst.de, linux-nvme@lists.infradead.org, sagi@grimberg.me
References: <1519721177-2099-1-git-send-email-jianchao.w.wang@oracle.com> <20180227151311.GD10832@localhost.localdomain> <9252f0a1-f3e5-414b-db49-e8053dfa48a6@oracle.com> <20180228152741.GA16002@localhost.localdomain>
From: "jianchao.wang" <jianchao.w.wang@oracle.com>
Message-ID: <8066e06c-90f4-c21b-e36f-89f6e8ca28c5@oracle.com>
Date: Wed, 28 Feb 2018 23:42:21 +0800
In-Reply-To: <20180228152741.GA16002@localhost.localdomain>
Hi Keith

Thanks for your kind response and direction.

On 02/28/2018 11:27 PM, Keith Busch wrote:
> On Wed, Feb 28, 2018 at 10:53:31AM +0800, jianchao.wang wrote:
>> On 02/27/2018 11:13 PM, Keith Busch wrote:
>>> On Tue, Feb 27, 2018 at 04:46:17PM +0800, Jianchao Wang wrote:
>>>> Currently, adminq and ioq0 share the same irq vector. This is
>>>> unfair for both adminq and ioq0.
>>>> - For adminq, its completion irq has to be bound on cpu0.
>>>> - For ioq0, when the irq fires for io completion, the adminq irq
>>>>   action has to be checked also.
>>>
>>> This change log could use some improvements. Why is it bad if the
>>> admin interrupt's affinity is with cpu0?
>>
>> The adminq interrupt should be able to fire anywhere.
>> Do we have any reason to bind it to cpu0?
>
> Your patch will have the admin vector CPU affinity mask set to
> 0xff..ff. The first set bit for an online CPU is the one the IRQ handler
> will run on, so the admin queue will still only run on CPU 0.

Hmmm... yes. When I tested with only one irq vector, I got the following result:

124:     0     0     253541     0     0     0     0     0   IR-PCI-MSI 1048576-edge   nvme0q0, nvme0q1

>>> Are you able to measure _any_ performance difference on IO queue 1 vs IO
>>> queue 2 that you can attribute to IO queue 1's sharing vector 0?
>>
>> Actually, I didn't get any performance improvement on my own NVMe card.
>> But it may be needed on some enterprise cards, especially when the media
>> is persistent memory. nvme_irq will be invoked twice when the ioq0 irq
>> fires, which introduces another unnecessary access to the cq entry's
>> DMA address.
>
> A CPU reading its own memory isn't a DMA. It's just a cheap memory read.

Oh sorry, my bad. I meant it is an access to a DMA-mapped address, which is uncached:

nvme_irq
  -> nvme_process_cq
    -> nvme_read_cqe
      -> nvme_cqe_valid

static inline bool nvme_cqe_valid(struct nvme_queue *nvmeq, u16 head,
		u16 phase)
{
	return (le16_to_cpu(nvmeq->cqes[head].status) & 1) == phase;
}

Sincerely
Jianchao
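P.S. As a side note on the direction the patch is heading: one way to give the
admin queue its own vector while keeping managed affinity for the I/O queues
is the .pre_vectors field of struct irq_affinity. The sketch below is only an
illustration under that assumption; the function and parameter names
(nvme_setup_irqs_sketch, pdev, nr_io_queues) are made up here and are not the
actual patch:

/*
 * Minimal sketch: reserve vector 0 for the admin queue via .pre_vectors,
 * so that managed affinity spreading applies only to the I/O queue vectors.
 */
#include <linux/interrupt.h>
#include <linux/pci.h>

static int nvme_setup_irqs_sketch(struct pci_dev *pdev,
		unsigned int nr_io_queues)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* vector 0: adminq, excluded from spreading */
	};
	int nr;

	/* Ask for one admin vector plus one vector per I/O queue. */
	nr = pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
	if (nr < 0)
		return nr;

	/* The admin queue uses vector 0; I/O queue i uses vector i + 1. */
	return nr;
}

Here vector 0 keeps a default, non-managed affinity (so the point above about
where its handler actually runs still applies), while the I/O queue vectors
are spread across the online CPUs by the core.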