Subject: Re: [PATCH v3 0/5] mm/hmm/nouveau: add PMD system memory mapping
From: Ralph Campbell <rcampbell@nvidia.com>
To: Jason Gunthorpe
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Andrew Morton, Shuah Khan, Ben Skeggs
Date: Fri, 10 Jul 2020 13:13:46 -0700
Message-ID: <886151ff-c8e7-7b49-cc8d-c0e32fdc770b@nvidia.com>
In-Reply-To: <20200710192704.GA2128670@nvidia.com>
References: <20200701225352.9649-1-rcampbell@nvidia.com> <20200710192704.GA2128670@nvidia.com>
List-ID: linux-kernel@vger.kernel.org

On 7/10/20 12:27 PM, Jason Gunthorpe wrote:
> On Wed, Jul 01, 2020 at 03:53:47PM -0700, Ralph Campbell wrote:
>> The goal for this series is to introduce the hmm_pfn_to_map_order()
>> function. This allows a device driver to know that a given 4K PFN is
>> actually mapped by the CPU using a larger sized CPU page table entry and
>> therefore the device driver can safely map system memory using larger
>> device MMU PTEs.
>> The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's
>> hmm tree. These were originally part of a larger series:
>> https://lore.kernel.org/linux-mm/20200619215649.32297-1-rcampbell@nvidia.com/
>>
>> Changes in v3:
>> Replaced the HMM_PFN_P[MU]D flags with hmm_pfn_to_map_order() to
>> indicate the size of the CPU mapping.
>>
>> Changes in v2:
>> Make the hmm_range_fault() API changes into a separate series and add
>> two output flags for PMD/PUD instead of a single compound page flag as
>> suggested by Jason Gunthorpe.
>> Make the nouveau page table changes a separate patch as suggested by
>> Ben Skeggs.
>> Only add support for 2MB nouveau mappings initially since changing the
>> 1:1 CPU/GPU page table size assumptions requires a bigger set of changes.
>> Rebase to 5.8.0-rc3.
>>
>> Ralph Campbell (5):
>>   nouveau/hmm: fault one page at a time
>>   mm/hmm: add hmm_mapping order
>>   nouveau: fix mapping 2MB sysmem pages
>>   nouveau/hmm: support mapping large sysmem pages
>>   hmm: add tests for HMM_PFN_PMD flag
>
> Applied to hmm.git.
>
> I edited the comment for hmm_pfn_to_map_order() and added a function
> to compute the field.
>
> Thanks,
> Jason

Looks good, thanks.