Subject: Re: [PATCH] nvme: Acknowledge completion queue on each iteration
From: Sinan Kaya <okaya@codeaurora.org>
To: Keith Busch
Cc: linux-nvme@lists.infradead.org, timur@codeaurora.org,
    linux-arm-msm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    Jens Axboe, Christoph Hellwig, Sagi Grimberg,
    linux-kernel@vger.kernel.org
Date: Tue, 18 Jul 2017 14:52:26 -0400

On 7/18/2017 10:36 AM, Keith Busch wrote:
> On Mon, Jul 17, 2017 at 07:07:00PM -0400, okaya@codeaurora.org wrote:
>> Maybe I need to understand the design better. I was curious why the
>> completion and submission queues were protected by a single lock,
>> causing lock contention.
>
> Ideally the queues are tied to CPUs, so you couldn't have one thread
> submitting to a particular queue-pair while another thread is reaping
> completions from it. Such a setup wouldn't get lock contention.

I do see that the NVMe driver sets up a completion interrupt on each
CPU core; no problem there. However, I don't think you can guarantee
that a single CPU core will always be the only one targeting a given
submission queue, especially with asynchronous I/O.

The lock contention counters from CONFIG_LOCK_STAT point at
nvmeq->lock in my fio tests. Did I miss something?

> Some machines have so many CPUs, though, that sharing hardware queues
> is required. We've experimented with separate submission and
> completion locks for such cases, but I've never seen improved
> performance as a result.

I have also experimented with multiple locks and saw no significant
gains. I was curious, though, whether somebody else had a better
implementation than mine.

-- 
Sinan Kaya
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm
Technologies, Inc. Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
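
For illustration, here is a minimal sketch of the single-lock
arrangement the thread is discussing. This is not the actual
drivers/nvme/host/pci.c code; all demo_* struct, field, and function
names are invented for the example. The point is that the submission
path and the completion interrupt serialize on the same spinlock, which
is exactly where CONFIG_LOCK_STAT would report contention once several
threads issue asynchronous I/O against the same queue pair:

/*
 * Sketch only: NOT the actual NVMe driver code. Submission and
 * completion share one spinlock (assume spin_lock_init() was called
 * at queue allocation), so a submitter on one core and the
 * completion IRQ on another serialize against each other.
 */
#include <linux/spinlock.h>
#include <linux/interrupt.h>
#include <linux/types.h>

struct demo_nvme_queue {
	spinlock_t lock;	/* guards both SQ and CQ state */
	u16 sq_tail;		/* next free submission queue slot */
	u16 cq_head;		/* next completion queue entry to reap */
	u8  cq_phase;		/* expected CQ phase bit */
};

/* Submission path, called on the issuing CPU (process context). */
static void demo_submit_cmd(struct demo_nvme_queue *nvmeq)
{
	unsigned long flags;

	spin_lock_irqsave(&nvmeq->lock, flags);
	/* copy the command into the SQ slot and ring the SQ doorbell */
	nvmeq->sq_tail++;
	spin_unlock_irqrestore(&nvmeq->lock, flags);
}

/* Completion path: the queue's interrupt handler. */
static irqreturn_t demo_irq(int irq, void *data)
{
	struct demo_nvme_queue *nvmeq = data;

	spin_lock(&nvmeq->lock);
	/* reap CQ entries, flip cq_phase on wrap, write the CQ doorbell */
	nvmeq->cq_head++;
	spin_unlock(&nvmeq->lock);

	return IRQ_HANDLED;
}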
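
And a sketch of the split-lock experiment both posters describe having
tried, again with hypothetical names: the SQ and CQ each get their own
lock, so submitting and reaping no longer serialize against each other:

/*
 * Sketch of a split-lock variant, same invented demo_* names as above.
 * One lock serializes submitters, the other the completion path.
 */
struct demo_nvme_queue_split {
	spinlock_t sq_lock;	/* serializes submitters */
	spinlock_t cq_lock;	/* serializes the completion path */
	u16 sq_tail;
	u16 cq_head;
	u8  cq_phase;
};

static void demo_submit_cmd_split(struct demo_nvme_queue_split *nvmeq)
{
	unsigned long flags;

	spin_lock_irqsave(&nvmeq->sq_lock, flags);
	nvmeq->sq_tail++;	/* queue the command, ring the SQ doorbell */
	spin_unlock_irqrestore(&nvmeq->sq_lock, flags);
}

static irqreturn_t demo_irq_split(int irq, void *data)
{
	struct demo_nvme_queue_split *nvmeq = data;

	spin_lock(&nvmeq->cq_lock);
	nvmeq->cq_head++;	/* reap completions, write the CQ doorbell */
	spin_unlock(&nvmeq->cq_lock);

	return IRQ_HANDLED;
}

Note why this split can fail to help, consistent with what both sides
measured: if the queue pair really is driven by one CPU, the submit and
complete paths rarely run concurrently, so there was little contention
to remove; and if queues are shared across CPUs, the submitters still
pile up on sq_lock.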