Date: Thu, 1 Mar 2018 16:25:42 -0500
From: Jerome Glisse <jglisse@redhat.com>
To: Logan Gunthorpe
Cc: Benjamin Herrenschmidt, linux-kernel@vger.kernel.org,
    linux-pci@vger.kernel.org, linux-nvme@lists.infradead.org,
    linux-rdma@vger.kernel.org, linux-nvdimm@lists.01.org,
    linux-block@vger.kernel.org, Stephen Bates, Christoph Hellwig,
    Jens Axboe, Keith Busch, Sagi Grimberg, Bjorn Helgaas,
    Jason Gunthorpe, Max Gurtovoy, Dan Williams, Alex Williamson,
    Oliver OHalloran
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
Message-ID: <20180301212541.GD6742@redhat.com>
References: <20180228234006.21093-1-logang@deltatee.com>
 <1519876489.4592.3.camel@kernel.crashing.org>
 <1519876569.4592.4.camel@au1.ibm.com>
 <8e808448-fc01-5da0-51e7-1a6657d5a23a@deltatee.com>
 <1519936195.4592.18.camel@au1.ibm.com>
 <20180301205548.GA6742@redhat.com>
 <20180301211036.GB6742@redhat.com>
 <8ed955f8-55c9-a2bd-1d58-90bf1dcfa055@deltatee.com>
In-Reply-To: <8ed955f8-55c9-a2bd-1d58-90bf1dcfa055@deltatee.com>

On Thu, Mar 01, 2018 at 02:15:01PM -0700, Logan Gunthorpe wrote:
> 
> 
> On 01/03/18 02:10 PM, Jerome Glisse wrote:
> > It seems people miss-understand HMM :( you do not have to use all of
> > its features. If all you care about is having struct page then just
> > use that for instance in your case only use those following 3 functions:
> > 
> > hmm_devmem_add() or hmm_devmem_add_resource() and hmm_devmem_remove()
> > for cleanup.
> 
> To what benefit over just using devm_memremap_pages()? If I'm using the hmm
> interface and disabling all the features, I don't see the point. We've also
> cleaned up the devm_memremap_pages() interface to be more usefully generic
> in such a way that I'd hope HMM starts using it too and gets rid of the code
> duplication.
> 

The first HMM variant finds a hole by itself and does not require a resource
as an input parameter. Beside that, devm_memremap_pages() does not do the
right thing internally for PCIe device memory: last time I checked it always
creates a linear mapping of the range, i.e. HMM calls add_pages() while
devm_memremap_pages() calls arch_add_memory().

When I upstreamed HMM, Dan did not want me to touch devm_memremap_pages() to
match my needs. I am more than happy to modify devm_memremap_pages() so it
also handles the HMM needs.

Note that the intention of HMM is to be a middle layer between the low level
infrastructure and device drivers. The idea is that such an impedance layer
should make it easier down the road to change how things are handled down
below without having to touch many device drivers.

Cheers,
Jérôme
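
P.S. To make the difference concrete, here is a rough sketch of the two
registration paths I am talking about (assuming roughly the current 4.16-era
signatures; my_devmem_ops, dev, res and ref below are placeholders, not code
from Logan's patchset):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/hmm.h>
#include <linux/ioport.h>
#include <linux/memremap.h>
#include <linux/percpu-refcount.h>

/* Placeholder ops; a real driver would fill in .fault and .free. */
static const struct hmm_devmem_ops my_devmem_ops;

static int register_device_pages(struct device *dev, struct resource *res,
				 struct percpu_ref *ref, bool use_hmm)
{
	if (use_hmm) {
		/*
		 * HMM path: only a size is needed, HMM finds a hole in the
		 * physical address space by itself and uses add_pages(), so
		 * the range does not end up in the kernel linear mapping.
		 */
		struct hmm_devmem *devmem;

		devmem = hmm_devmem_add(&my_devmem_ops, dev,
					resource_size(res));
		return PTR_ERR_OR_ZERO(devmem);
	}

	/*
	 * devm_memremap_pages() path: the caller must supply the resource,
	 * and the range goes through arch_add_memory(), i.e. it also gets a
	 * linear mapping.
	 */
	return PTR_ERR_OR_ZERO(devm_memremap_pages(dev, res, ref, NULL));
}

Cleanup on the first path is hmm_devmem_remove(), on the second it is tied to
the devm lifetime of the device.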