Subject: Re: [PATCH 0/6] nouveau/hmm: add support for mapping large pages
To: Jason Gunthorpe
CC: Jerome Glisse, John Hubbard, Christoph Hellwig, Ben Skeggs,
    Andrew Morton, Shuah Khan
References: <20200508192009.15302-1-rcampbell@nvidia.com>
 <20200525134118.GA2536@ziepe.ca>
From: Ralph Campbell
Message-ID: <4743ec6e-a5a0-16ac-a1b8-992f851515f0@nvidia.com>
Date: Tue, 26 May 2020 10:32:48 -0700
In-Reply-To: <20200525134118.GA2536@ziepe.ca>

On 5/25/20 6:41 AM, Jason Gunthorpe wrote:
> On Fri, May 08, 2020 at 12:20:03PM -0700, Ralph Campbell wrote:
>> hmm_range_fault() returns an array of page frame numbers and flags for
>> how the pages are mapped in the requested process' page tables.
>> The PFN can be used to get the struct page with hmm_pfn_to_page() and
>> the page size order can be determined with compound_order(page), but
>> if the page is larger than order 0 (PAGE_SIZE), there is no indication
>> that the page is mapped using a larger page size. To be fully general,
>> hmm_range_fault() would need to return the mapping size to handle
>> cases like a 1GB compound page being mapped with 2MB PMD entries.
>> However, the most common case is that the mapping size is the same as
>> the underlying compound page size.
>> This series adds a new output flag to indicate this so that callers
>> know it is safe to use a large device page table mapping if one is
>> available. Nouveau and the HMM tests are updated to use the new flag.
>>
>> Note that this series depends on a patch queued in Ben Skeggs' nouveau
>> tree ("nouveau/hmm: map pages after migration") and the patches queued
>> in Jason's HMM tree.
>> There is also a patch outstanding ("nouveau/hmm: fix nouveau_dmem_chunk
>> allocations") that is independent of the above and could be applied
>> before or after.
>
> Did Christoph and Matt's remarks get addressed here?

Both questioned the need to add the HMM_PFN_COMPOUND flag to the
hmm_range_fault() output array, saying that the PFN can be used to get
the struct page pointer and the page can be examined to determine the
page size. My response is that while that is true, it is also important
that the device only access the same parts of a large page that the
process/CPU has access to. There are places where a large page is
mapped with smaller page table entries, such as when a page is shared
by multiple processes. After I explained this, I haven't seen any
further comments from Christoph and Matt.

I'm still looking for reviews, acks, or suggested changes.

> I think ODP could use something like this; currently it checks every
> page to get back to the huge page size, and this flag would optimize
> that.
>
> Jason
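To make the intended use concrete, here is a rough sketch of how a
driver fault handler could consume the flag. This is illustrative only,
not code from the series: dev_map_page() is a made-up stand-in for the
driver's device page table code, and error handling is omitted.

static int dev_fault_range(struct hmm_range *range)
{
	unsigned long i, npages = (range->end - range->start) >> PAGE_SHIFT;

	for (i = 0; i < npages; ++i) {
		unsigned long hmm_pfn = range->hmm_pfns[i];
		struct page *page;
		unsigned int order = 0;

		if (!(hmm_pfn & HMM_PFN_VALID))
			continue;
		page = hmm_pfn_to_page(hmm_pfn);
		/*
		 * Only use compound_order() for the device mapping size
		 * when HMM_PFN_COMPOUND says the CPU mapping covers the
		 * whole compound page; otherwise map at order 0, since
		 * the CPU may only have access to part of the page.
		 */
		if (hmm_pfn & HMM_PFN_COMPOUND)
			order = compound_order(compound_head(page));

		/* Hypothetical driver hook that builds the GPU mapping. */
		dev_map_page(page, range->start + (i << PAGE_SHIFT), order);

		/* A large mapping covers 2^order consecutive PFN slots. */
		if (order)
			i += (1UL << order) - 1;
	}
	return 0;
}

Without the flag, the loop above would have to walk the CPU page tables
again (or conservatively map at order 0) to learn the mapping size,
which is the extra per-page work Jason mentions ODP doing today.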