Date: Tue, 13 Aug 2019 09:50:29 -0500
From: Dimitri Sivanich <sivanich@hpe.com>
To: Bharath Vedartham <linux.bhar@gmail.com>
Cc: jhubbard@nvidia.com, gregkh@linuxfoundation.org, sivanich@hpe.com,
	arnd@arndb.de, ira.weiny@intel.com, jglisse@redhat.com,
	william.kucharski@oracle.com, hch@lst.de,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel-mentees@lists.linuxfoundation.org
Subject: Re: [Linux-kernel-mentees][PATCH v5 1/1] sgi-gru: Remove *pte_lookup functions, Convert to get_user_page*()
Message-ID: <20190813145029.GA32451@hpe.com>
References: <1565379497-29266-1-git-send-email-linux.bhar@gmail.com>
 <1565379497-29266-2-git-send-email-linux.bhar@gmail.com>
In-Reply-To: <1565379497-29266-2-git-send-email-linux.bhar@gmail.com>

Bharath, I do not believe that __get_user_pages_fast will work for the
atomic case. There is no guarantee that current->mm will be the correct
one for the process in question, as the process might have moved away
from the cpu that is handling interrupts for its context. (A concrete
sketch of this concern follows the quoted patch below.)

On Sat, Aug 10, 2019 at 01:08:17AM +0530, Bharath Vedartham wrote:
> For pages that were retained via get_user_pages*(), release those pages
> via the new put_user_page*() routines, instead of via put_page() or
> release_pages().
> 
> This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
> ("mm: introduce put_user_page*(), placeholder versions").
> 
> As part of this conversion, the *pte_lookup functions can be removed and
> easily replaced with get_user_pages_fast() functions. In the case of
> atomic lookup, __get_user_pages_fast() is used, because it does not fall
> back to the slow path, get_user_pages(). get_user_pages_fast(), on the
> other hand, first calls __get_user_pages_fast(), but then falls back to
> the slow path if __get_user_pages_fast() fails.
> 
> Also: remove unnecessary CONFIG_HUGETLB ifdefs.
> 
> Cc: Ira Weiny <ira.weiny@intel.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Jérôme Glisse <jglisse@redhat.com>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Dimitri Sivanich <sivanich@hpe.com>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: William Kucharski <william.kucharski@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: linux-kernel-mentees@lists.linuxfoundation.org
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
> Signed-off-by: Bharath Vedartham <linux.bhar@gmail.com>
> ---
> This is a fold of the 3 patches in the v2 patch series.
> The review tags were given to the individual patches.
> 
> Changes since v3
> 	- Used gup flags in get_user_pages_fast rather than
> 	  boolean flags.
> Changes since v4
> 	- Updated changelog per John Hubbard's feedback.
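
For reference, here is a minimal sketch of the lookup-and-release pattern
the changelog describes, using only the calls the patch itself uses; the
helper name and return conventions are illustrative, not code from the
patch or the driver:

	/* Sketch only; real code would live in grufault.c (needs <linux/mm.h>). */
	static int gru_gup_sketch(unsigned long vaddr, int write,
				  int atomic, unsigned long *paddr)
	{
		struct page *page;

		if (atomic) {
			/* Walks page tables without sleeping; returns pages pinned. */
			if (__get_user_pages_fast(vaddr, 1, write, &page) != 1)
				return -EAGAIN;	/* caller retries in non-atomic context */
		} else {
			/* Tries the fast path first, then falls back to slow gup. */
			if (get_user_pages_fast(vaddr, 1,
						write ? FOLL_WRITE : 0, &page) != 1)
				return -EFAULT;
		}

		*paddr = page_to_phys(page);
		put_user_page(page);	/* pairs with get_user_pages*(), per fc1d8e7cca2d */
		return 0;
	}

The point of the split is that __get_user_pages_fast() never takes
mmap_sem and never sleeps, so it is the only variant usable in interrupt
context.
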
> ---
>  drivers/misc/sgi-gru/grufault.c | 112 +++++++++-------------------------
>  1 file changed, 24 insertions(+), 88 deletions(-)
> 
> diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
> index 4b713a8..304e9c5 100644
> --- a/drivers/misc/sgi-gru/grufault.c
> +++ b/drivers/misc/sgi-gru/grufault.c
> @@ -166,96 +166,20 @@ static void get_clear_fault_map(struct gru_state *gru,
>  }
>  
>  /*
> - * Atomic (interrupt context) & non-atomic (user context) functions to
> - * convert a vaddr into a physical address. The size of the page
> - * is returned in pageshift.
> - * 	returns:
> - * 		  0 - successful
> - * 		< 0 - error code
> - * 		  1 - (atomic only) try again in non-atomic context
> - */
> -static int non_atomic_pte_lookup(struct vm_area_struct *vma,
> -				 unsigned long vaddr, int write,
> -				 unsigned long *paddr, int *pageshift)
> -{
> -	struct page *page;
> -
> -#ifdef CONFIG_HUGETLB_PAGE
> -	*pageshift = is_vm_hugetlb_page(vma) ? HPAGE_SHIFT : PAGE_SHIFT;
> -#else
> -	*pageshift = PAGE_SHIFT;
> -#endif
> -	if (get_user_pages(vaddr, 1, write ? FOLL_WRITE : 0, &page, NULL) <= 0)
> -		return -EFAULT;
> -	*paddr = page_to_phys(page);
> -	put_page(page);
> -	return 0;
> -}
> -
> -/*
> - * atomic_pte_lookup
> + * mmap_sem is already held on entry to this function. This guarantees
> + * existence of the page tables.
>   *
> - * Convert a user virtual address to a physical address
>   * Only supports Intel large pages (2MB only) on x86_64.
> - *	ZZZ - hugepage support is incomplete
> - *
> - * NOTE: mmap_sem is already held on entry to this function. This
> - * guarantees existence of the page tables.
> + * ZZZ - hugepage support is incomplete.
>   */
> -static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr,
> -	int write, unsigned long *paddr, int *pageshift)
> -{
> -	pgd_t *pgdp;
> -	p4d_t *p4dp;
> -	pud_t *pudp;
> -	pmd_t *pmdp;
> -	pte_t pte;
> -
> -	pgdp = pgd_offset(vma->vm_mm, vaddr);
> -	if (unlikely(pgd_none(*pgdp)))
> -		goto err;
> -
> -	p4dp = p4d_offset(pgdp, vaddr);
> -	if (unlikely(p4d_none(*p4dp)))
> -		goto err;
> -
> -	pudp = pud_offset(p4dp, vaddr);
> -	if (unlikely(pud_none(*pudp)))
> -		goto err;
> -
> -	pmdp = pmd_offset(pudp, vaddr);
> -	if (unlikely(pmd_none(*pmdp)))
> -		goto err;
> -#ifdef CONFIG_X86_64
> -	if (unlikely(pmd_large(*pmdp)))
> -		pte = *(pte_t *) pmdp;
> -	else
> -#endif
> -		pte = *pte_offset_kernel(pmdp, vaddr);
> -
> -	if (unlikely(!pte_present(pte) ||
> -		     (write && (!pte_write(pte) || !pte_dirty(pte)))))
> -		return 1;
> -
> -	*paddr = pte_pfn(pte) << PAGE_SHIFT;
> -#ifdef CONFIG_HUGETLB_PAGE
> -	*pageshift = is_vm_hugetlb_page(vma) ? HPAGE_SHIFT : PAGE_SHIFT;
> -#else
> -	*pageshift = PAGE_SHIFT;
> -#endif
> -	return 0;
> -
> -err:
> -	return 1;
> -}
> -
>  static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
>  		    int write, int atomic, unsigned long *gpa, int *pageshift)
>  {
>  	struct mm_struct *mm = gts->ts_mm;
>  	struct vm_area_struct *vma;
>  	unsigned long paddr;
> -	int ret, ps;
> +	int ret;
> +	struct page *page;
>  
>  	vma = find_vma(mm, vaddr);
>  	if (!vma)
> @@ -263,21 +187,33 @@ static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
>  
>  	/*
>  	 * Atomic lookup is faster & usually works even if called in non-atomic
> -	 * context.
> +	 * context. get_user_pages_fast does atomic lookup before falling back to
> +	 * slow gup.
>  	 */
>  	rmb();	/* Must/check ms_range_active before loading PTEs */
> -	ret = atomic_pte_lookup(vma, vaddr, write, &paddr, &ps);
> -	if (ret) {
> -		if (atomic)
> +	if (atomic) {
> +		ret = __get_user_pages_fast(vaddr, 1, write, &page);
> +		if (!ret)
>  			goto upm;
> -		if (non_atomic_pte_lookup(vma, vaddr, write, &paddr, &ps))
> +	} else {
> +		ret = get_user_pages_fast(vaddr, 1, write ? FOLL_WRITE : 0, &page);
> +		if (!ret)
>  			goto inval;
>  	}
> +
> +	paddr = page_to_phys(page);
> +	put_user_page(page);
> +
> +	if (unlikely(is_vm_hugetlb_page(vma)))
> +		*pageshift = HPAGE_SHIFT;
> +	else
> +		*pageshift = PAGE_SHIFT;
> +
>  	if (is_gru_paddr(paddr))
>  		goto inval;
> -	paddr = paddr & ~((1UL << ps) - 1);
> +	paddr = paddr & ~((1UL << *pageshift) - 1);
>  	*gpa = uv_soc_phys_ram_to_gpa(paddr);
> -	*pageshift = ps;
> +
>  	return VTOP_SUCCESS;
>  
>  inval:
> -- 
> 2.7.4
> 
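
To make the atomic-case concern above concrete: __get_user_pages_fast()
resolves vaddr through current->mm's page tables, while the GRU context
is tied to gts->ts_mm. A hypothetical guard in gru_vtop() (not something
this patch contains) would have to bail out to the non-atomic path
whenever the two differ, along the lines of:

	if (atomic) {
		/*
		 * Fast gup pins pages via current->mm. In interrupt context
		 * current may be unrelated to the task that owns this GRU
		 * context, so take the non-atomic path unless the mm being
		 * translated is the one running on this cpu.
		 */
		if (gts->ts_mm != current->mm)
			goto upm;
		ret = __get_user_pages_fast(vaddr, 1, write, &page);
		if (!ret)
			goto upm;
	}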