Date: Thu, 8 Aug 2019 16:41:38 -0700
From: Ira Weiny
To: John Hubbard
Cc: Vlastimil Babka, Michal Hocko, Andrew Morton, Christoph Hellwig,
    Jan Kara, Jason Gunthorpe, Jerome Glisse, LKML, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Dan Williams, Daniel Black,
    Matthew Wilcox, Mike Kravetz
Subject: Re: [PATCH 1/3] mm/mlock.c: convert put_page() to put_user_page*()
Message-ID: <20190808234138.GA15908@iweiny-DESK2.sc.intel.com>

On Thu, Aug 08, 2019 at 03:59:15PM -0700, John Hubbard wrote:
> On 8/8/19 12:20 PM, John Hubbard wrote:
> > On 8/8/19 4:09 AM, Vlastimil Babka wrote:
> >> On 8/8/19 8:21 AM, Michal Hocko wrote:
> >>> On Wed 07-08-19 16:32:08, John Hubbard wrote:
> >>>> On 8/7/19 4:01 AM, Michal Hocko wrote:
> >>>>> On Mon 05-08-19 15:20:17, john.hubbard@gmail.com wrote:
> >>>>>> From: John Hubbard
> >>>> Actually, I think follow_page_mask() gets all the pages, right? And the
> >>>> get_page() in __munlock_pagevec_fill() is there to allow a pagevec_release()
> >>>> later.
> >>>
> >>> Maybe I am misreading the code (looking at Linus tree) but munlock_vma_pages_range
> >>> calls follow_page for the start address and then if not THP tries to
> >>> fill up the pagevec with few more pages (up to end), do the shortcut
> >>> via manual pte walk as an optimization and use generic get_page there.
> >>
> >
> > Yes, I see it finally, thanks. :)
> >
> >> That's true. However, I'm not sure munlocking is where the
> >> put_user_page() machinery is intended to be used anyway? These are
> >> short-term pins for struct page manipulation, not e.g. dirtying of page
> >> contents. Reading commit fc1d8e7cca2d I don't think this case falls
> >> within the reasoning there. Perhaps not all GUP users should be
> >> converted to the planned separate GUP tracking, and instead we should
> >> have a GUP/follow_page_mask() variant that keeps using get_page/put_page?
> >>
> >
> > Interesting. So far, the approach has been to get all the gup callers to
> > release via put_user_page(), but if we add in Jan's and Ira's vaddr_pin_pages()
> > wrapper, then maybe we could leave some sites unconverted.
> >
> > However, in order to do so, we would have to change things so that we have
> > one set of APIs (gup) that do *not* increment a pin count, and another set
> > (vaddr_pin_pages) that do.
> >
> > Is that where we want to go...?
> >
> Oh, and meanwhile, I'm leaning toward a cheap fix: just use gup_fast() instead
> of get_page(), and also fix the releasing code. So this incremental patch, on
> top of the existing one, should do it:
>
> diff --git a/mm/mlock.c b/mm/mlock.c
> index b980e6270e8a..2ea272c6fee3 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -318,18 +318,14 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>  		/*
>  		 * We won't be munlocking this page in the next phase
>  		 * but we still need to release the follow_page_mask()
> -		 * pin. We cannot do it under lru_lock however. If it's
> -		 * the last pin, __page_cache_release() would deadlock.
> +		 * pin.
>  		 */
> -		pagevec_add(&pvec_putback, pvec->pages[i]);
> +		put_user_page(pages[i]);
>  		pvec->pages[i] = NULL;
>  	}
>  	__mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
>  	spin_unlock_irq(&zone->zone_pgdat->lru_lock);
>
> -	/* Now we can release pins of pages that we are not munlocking */
> -	pagevec_release(&pvec_putback);
> -

I'm not an expert but this skips a call to lru_add_drain().
Is that ok?
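For reference, here is roughly what the dropped pagevec_release() path does; this is a paraphrase of __pagevec_release() in mm/swap.c as I read it, not the verbatim source, and pagevec_release() only calls it when the pagevec is non-empty. A bare put_user_page() loop drops the pins but never does the lru_add_drain() step:

        /*
         * Rough paraphrase of __pagevec_release() (mm/swap.c), for
         * discussion only; details may differ from the actual source.
         */
        void __pagevec_release(struct pagevec *pvec)
        {
                if (!pvec->percpu_pvec_drained) {
                        /* Flush per-cpu LRU pagevecs before dropping the pins */
                        lru_add_drain();
                        pvec->percpu_pvec_drained = true;
                }
                release_pages(pvec->pages, pagevec_count(pvec));
                pagevec_reinit(pvec);
        }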
>  	/* Phase 2: page munlock */
>  	for (i = 0; i < nr; i++) {
>  		struct page *page = pvec->pages[i];
> @@ -394,6 +390,8 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
>  	start += PAGE_SIZE;
>  	while (start < end) {
>  		struct page *page = NULL;
> +		int ret;
> +
>  		pte++;
>  		if (pte_present(*pte))
>  			page = vm_normal_page(vma, start, *pte);
> @@ -411,7 +409,13 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
>  		if (PageTransCompound(page))
>  			break;
>
> -		get_page(page);
> +		/*
> +		 * Use get_user_pages_fast(), instead of get_page() so that the
> +		 * releasing code can unconditionally call put_user_page().
> +		 */
> +		ret = get_user_pages_fast(start, 1, 0, &page);
> +		if (ret != 1)
> +			break;

I like the idea of making this a get/put pair but I'm feeling uneasy about
how this is really supposed to work.

For sure the GUP/PUP was supposed to be separate from [get|put]_page.

Ira

>  		/*
>  		 * Increase the address that will be returned *before* the
>  		 * eventual break due to pvec becoming full by adding the page
>
>
> thanks,
> --
> John Hubbard
> NVIDIA