From: Souptick Joarder
Date: Thu, 7 Feb 2019 21:19:47 +0530
Subject: Re: [PATCHv2 1/9] mm: Introduce new vm_insert_range and vm_insert_range_buggy API
To: Mike Rapoport
Cc: Andrew Morton, Matthew Wilcox, Michal Hocko, "Kirill A. Shutemov",
 vbabka@suse.cz, Rik van Riel, Stephen Rothwell, rppt@linux.vnet.ibm.com,
 Peter Zijlstra, Russell King - ARM Linux, robin.murphy@arm.com,
 iamjoonsoo.kim@lge.com, treding@nvidia.com, Kees Cook, Marek Szyprowski,
 stefanr@s5r6.in-berlin.de, hjc@rock-chips.com, Heiko Stuebner,
 airlied@linux.ie, oleksandr_andrushchenko@epam.com, joro@8bytes.org,
 pawel@osciak.com, Kyungmin Park, mchehab@kernel.org, Boris Ostrovsky,
 Juergen Gross, linux-kernel@vger.kernel.org, Linux-MM,
 linux-arm-kernel@lists.infradead.org, linux1394-devel@lists.sourceforge.net,
 dri-devel@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
 xen-devel@lists.xen.org, iommu@lists.linux-foundation.org,
 linux-media@vger.kernel.org
In-Reply-To: <20190131083842.GE28876@rapoport-lnx>
References: <20190131030812.GA2174@jordon-HP-15-Notebook-PC>
 <20190131083842.GE28876@rapoport-lnx>

Hi Mike,

On Thu, Jan 31, 2019 at 2:09 PM Mike Rapoport wrote:
>
> On Thu, Jan 31, 2019 at 08:38:12AM +0530, Souptick Joarder wrote:
> > Previously, drivers had their own way of mapping
> > a range of kernel pages/memory into a user vma, and this was done by
> > invoking vm_insert_page() within a loop.
> >
> > As this pattern is common across different drivers, it can
> > be generalized by creating new functions and using them across
> > the drivers.
> >
> > vm_insert_range() is the API which could be used to map
> > kernel memory/pages in drivers which have considered vm_pgoff.
> >
> > vm_insert_range_buggy() is the API which could be used to map
> > a range of kernel memory/pages in drivers which have not considered
> > vm_pgoff. vm_pgoff is passed as 0 by default for those drivers.
> >
> > We _could_ then at a later date "fix" these drivers which are using
> > vm_insert_range_buggy() to behave according to the normal vm_pgoff
> > offsetting simply by removing the _buggy suffix on the function
> > name and, if that causes regressions, it gives us an easy way to revert.
> >
> > Signed-off-by: Souptick Joarder
> > Suggested-by: Russell King
> > Suggested-by: Matthew Wilcox
> > ---
> >  include/linux/mm.h |  4 +++
> >  mm/memory.c        | 81 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  mm/nommu.c         | 14 ++++++++++
> >  3 files changed, 99 insertions(+)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 80bb640..25752b0 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2565,6 +2565,10 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
> >  int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
> >                     unsigned long pfn, unsigned long size, pgprot_t);
> >  int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
> > +int vm_insert_range(struct vm_area_struct *vma, struct page **pages,
> > +                   unsigned long num);
> > +int vm_insert_range_buggy(struct vm_area_struct *vma, struct page **pages,
> > +                   unsigned long num);
> >  vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
> >                     unsigned long pfn);
> >  vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned
> > long addr,
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index e11ca9d..0a4bf57 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -1520,6 +1520,87 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
> >  }
> >  EXPORT_SYMBOL(vm_insert_page);
> >
> > +/**
> > + * __vm_insert_range - insert range of kernel pages into user vma
> > + * @vma: user vma to map to
> > + * @pages: pointer to array of source kernel pages
> > + * @num: number of pages in page array
> > + * @offset: user's requested vm_pgoff
> > + *
> > + * This allows drivers to insert range of kernel pages they've allocated
> > + * into a user vma.
> > + *
> > + * If we fail to insert any page into the vma, the function will return
> > + * immediately leaving any previously inserted pages present. Callers
> > + * from the mmap handler may immediately return the error as their caller
> > + * will destroy the vma, removing any successfully inserted pages. Other
> > + * callers should make their own arrangements for calling unmap_region().
> > + *
> > + * Context: Process context.
> > + * Return: 0 on success and error code otherwise.
> > + */
> > +static int __vm_insert_range(struct vm_area_struct *vma, struct page **pages,
> > +                            unsigned long num, unsigned long offset)
> > +{
> > +     unsigned long count = vma_pages(vma);
> > +     unsigned long uaddr = vma->vm_start;
> > +     int ret, i;
> > +
> > +     /* Fail if the user requested offset is beyond the end of the object */
> > +     if (offset > num)
> > +             return -ENXIO;
> > +
> > +     /* Fail if the user requested size exceeds available object size */
> > +     if (count > num - offset)
> > +             return -ENXIO;
> > +
> > +     for (i = 0; i < count; i++) {
> > +             ret = vm_insert_page(vma, uaddr, pages[offset + i]);
> > +             if (ret < 0)
> > +                     return ret;
> > +             uaddr += PAGE_SIZE;
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +/**
> > + * vm_insert_range - insert range of kernel pages starts with non zero offset
> > + * @vma: user vma to map to
> > + * @pages: pointer to array of source kernel pages
> > + * @num: number of pages in page array
> > + *
> > + * Maps an object consisting of `num' `pages', catering for the user's
> > + * requested vm_pgoff
> > + *
>
> The elaborate description you've added to __vm_insert_range() is better put
> here, as this is the "public" function.
>
> > + * Context: Process context. Called by mmap handlers.
> > + * Return: 0 on success and error code otherwise.
> > + */
> > +int vm_insert_range(struct vm_area_struct *vma, struct page **pages,
> > +                  unsigned long num)
> > +{
> > +     return __vm_insert_range(vma, pages, num, vma->vm_pgoff);
> > +}
> > +EXPORT_SYMBOL(vm_insert_range);
> > +
> > +/**
> > + * vm_insert_range_buggy - insert range of kernel pages starts with zero offset
> > + * @vma: user vma to map to
> > + * @pages: pointer to array of source kernel pages
> > + * @num: number of pages in page array
> > + *
> > + * Maps a set of pages, always starting at page[0]
>
> Here I'd add something like:
>
> Similar to vm_insert_range(), except that it explicitly sets @vm_pgoff to
> 0.
> This function is intended for the drivers that did not consider
> @vm_pgoff.

Just thought to take your opinion on the documentation before placing it
in v3. Does it look fine?

+/**
+ * __vm_insert_range - insert range of kernel pages into user vma
+ * @vma: user vma to map to
+ * @pages: pointer to array of source kernel pages
+ * @num: number of pages in page array
+ * @offset: user's requested vm_pgoff
+ *
+ * This allows drivers to insert a range of kernel pages into a user vma.
+ *
+ * Return: 0 on success and error code otherwise.
+ */
+static int __vm_insert_range(struct vm_area_struct *vma, struct page **pages,
+                            unsigned long num, unsigned long offset)

+/**
+ * vm_insert_range - insert range of kernel pages starting with non-zero offset
+ * @vma: user vma to map to
+ * @pages: pointer to array of source kernel pages
+ * @num: number of pages in page array
+ *
+ * Maps an object consisting of `num' `pages', catering for the user's
+ * requested vm_pgoff
+ *
+ * If we fail to insert any page into the vma, the function will return
+ * immediately leaving any previously inserted pages present. Callers
+ * from the mmap handler may immediately return the error as their caller
+ * will destroy the vma, removing any successfully inserted pages. Other
+ * callers should make their own arrangements for calling unmap_region().
+ *
+ * Context: Process context. Called by mmap handlers.
+ * Return: 0 on success and error code otherwise.
+ */
+int vm_insert_range(struct vm_area_struct *vma, struct page **pages,
+                   unsigned long num)

+/**
+ * vm_insert_range_buggy - insert range of kernel pages starting with zero offset
+ * @vma: user vma to map to
+ * @pages: pointer to array of source kernel pages
+ * @num: number of pages in page array
+ *
+ * Similar to vm_insert_range(), except that it explicitly sets @vm_pgoff to
+ * 0. This function is intended for the drivers that did not consider
+ * @vm_pgoff.
+ *
+ * Context: Process context. Called by mmap handlers.
+ * Return: 0 on success and error code otherwise.
+ */
+int vm_insert_range_buggy(struct vm_area_struct *vma, struct page **pages,
+                         unsigned long num)