Date: Thu, 28 Apr 2022 16:36:03 +0200
From: Christoph Hellwig
To: Thomas Weißschuh
Cc: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: Re: [PATCH] nvme-pci: fix host memory buffer allocation size
Message-ID: <20220428143603.GA20460@lst.de>
References: <20220428101922.14216-1-linux@weissschuh.net>
In-Reply-To: <20220428101922.14216-1-linux@weissschuh.net>

On Thu, Apr 28, 2022 at 12:19:22PM +0200, Thomas Weißschuh wrote:
> We want to allocate the smallest possible number of buffers with the
> largest possible size (1 buffer of size "hmpre").
>
> Previously we were allocating as many buffers as possible of the
> smallest possible size. This also led to "hmpre" not being satisfied,
> as not enough buffer slots were available.
>
> Signed-off-by: Thomas Weißschuh
> ---
>
> Also discussed at https://lore.kernel.org/linux-nvme/f94565db-f217-4a56-83c3-c6429807185c@t-8ch.de/
>
>  drivers/nvme/host/pci.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 3aacf1c0d5a5..0546523cc20b 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2090,7 +2090,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
>  
>  static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
>  {
> -	u64 min_chunk = min_t(u64, preferred, PAGE_SIZE * MAX_ORDER_NR_PAGES);
> +	u64 min_chunk = max_t(u64, preferred, PAGE_SIZE * MAX_ORDER_NR_PAGES);

preferred is based on the HMPRE field in the spec, which documents the
preferred size. So the max here would not make any sense at all.
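
To make the arithmetic concrete, here is a small userspace sketch
(illustration only, not driver code: the 4 MiB contiguous-allocation cap
stands in for PAGE_SIZE * MAX_ORDER_NR_PAGES on a typical config, and the
1 GiB HMPRE value is made up) of what the clamp produces with min_t()
versus the proposed max_t():

/*
 * Userspace illustration only -- not the nvme driver code.  The 4 MiB
 * cap is an assumption standing in for PAGE_SIZE * MAX_ORDER_NR_PAGES,
 * and the 1 GiB preferred size is an example HMPRE value.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_CONTIGUOUS	(4ULL << 20)	/* stand-in for PAGE_SIZE * MAX_ORDER_NR_PAGES */

static uint64_t min_u64(uint64_t a, uint64_t b) { return a < b ? a : b; }
static uint64_t max_u64(uint64_t a, uint64_t b) { return a > b ? a : b; }

int main(void)
{
	uint64_t preferred = 1024ULL << 20;	/* controller advertises 1 GiB via HMPRE */

	/* Current code: the starting chunk is the smaller of the preferred
	 * total and the largest contiguous allocation we expect to get. */
	printf("min_t start chunk: %" PRIu64 " MiB\n",
	       min_u64(preferred, MAX_CONTIGUOUS) >> 20);

	/* Proposed max_t: the very first chunk request is either larger than
	 * HMPRE or larger than a single contiguous allocation can be. */
	printf("max_t start chunk: %" PRIu64 " MiB\n",
	       max_u64(preferred, MAX_CONTIGUOUS) >> 20);

	return 0;
}

With min_t() the surrounding code starts from the largest chunk it can
reasonably allocate, never more than the preferred total, and works its
way down to smaller chunks; with max_t() the very first chunk request
would already exceed either HMPRE or what a single contiguous allocation
can provide.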