Date: Wed, 7 Aug 2019 13:01:47 +0200
From: Michal Hocko
To: john.hubbard@gmail.com
Cc: Andrew Morton, Christoph Hellwig, Ira Weiny, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, LKML, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	John Hubbard, Dan Williams, Daniel Black, Matthew Wilcox, Mike Kravetz
Subject: Re: [PATCH 1/3] mm/mlock.c: convert put_page() to put_user_page*()
Message-ID: <20190807110147.GT11812@dhcp22.suse.cz>
References: <20190805222019.28592-1-jhubbard@nvidia.com>
	<20190805222019.28592-2-jhubbard@nvidia.com>
In-Reply-To: <20190805222019.28592-2-jhubbard@nvidia.com>

On Mon 05-08-19 15:20:17, john.hubbard@gmail.com wrote:
> From: John Hubbard
> 
> For pages that were retained via get_user_pages*(), release those pages
> via the new put_user_page*() routines, instead of via put_page() or
> release_pages().

Hmm, this is an interesting code path. There seems to be a mix of pages
in play here. We get one page via follow_page_mask(), but the other pages
in the range are filled in by __munlock_pagevec_fill(), which does a
direct pte walk. Is using put_user_page() correct in this case? Could you
explain why in the changelog?

> This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
> ("mm: introduce put_user_page*(), placeholder versions").
> 
> Cc: Dan Williams
> Cc: Daniel Black
> Cc: Jan Kara
> Cc: Jérôme Glisse
> Cc: Matthew Wilcox
> Cc: Mike Kravetz
> Signed-off-by: John Hubbard
> ---
>  mm/mlock.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/mlock.c b/mm/mlock.c
> index a90099da4fb4..b980e6270e8a 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -345,7 +345,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>  				get_page(page); /* for putback_lru_page() */
>  				__munlock_isolated_page(page);
>  				unlock_page(page);
> -				put_page(page); /* from follow_page_mask() */
> +				put_user_page(page); /* from follow_page_mask() */
>  			}
>  		}
>  	}
> @@ -467,7 +467,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
>  		if (page && !IS_ERR(page)) {
>  			if (PageTransTail(page)) {
>  				VM_BUG_ON_PAGE(PageMlocked(page), page);
> -				put_page(page); /* follow_page_mask() */
> +				put_user_page(page); /* follow_page_mask() */
>  			} else if (PageTransHuge(page)) {
>  				lock_page(page);
>  				/*
> @@ -478,7 +478,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
>  				 */
>  				page_mask = munlock_vma_page(page);
>  				unlock_page(page);
> -				put_page(page); /* follow_page_mask() */
> +				put_user_page(page); /* follow_page_mask() */
>  			} else {
>  				/*
>  				 * Non-huge pages are handled in batches via
> -- 
> 2.22.0

-- 
Michal Hocko
SUSE Labs
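
For readers who want the shape of the code path being discussed, below is a
rough sketch of the non-huge branch of munlock_vma_pages_range(). It is a
paraphrase rather than the actual mm/mlock.c source: argument lists are
trimmed, surrounding code is omitted, and only the reference-counting steps
are kept.

	while (start < end) {
		/* Reference A: taken via follow_page_mask() (a gup-style pin). */
		page = follow_page(vma, start, FOLL_GET | FOLL_DUMP);

		if (page && !IS_ERR(page) && !PageTransHuge(page)) {
			pagevec_add(&pvec, page);

			/*
			 * References B..N: __munlock_pagevec_fill() walks the
			 * page table directly and takes plain get_page()
			 * references on the pages that follow in the range.
			 */
			start = __munlock_pagevec_fill(&pvec, vma, /* ... */);

			/*
			 * __munlock_pagevec() releases every page in the pagevec
			 * the same way, so the put_page() converted by the patch
			 * is reached both for reference A and for the plain
			 * get_page() references from the pte walk - which is the
			 * question raised above.
			 */
			__munlock_pagevec(&pvec, zone);
		}
	}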