From: Souptick Joarder
Date: Fri, 19 Jun 2020 08:42:39 +0530
Subject: Re: [RFC PATCH] xen/privcmd: Convert get_user_pages*() to pin_user_pages*()
To: Boris Ostrovsky
Cc: Juergen Gross, sstabellini@kernel.org, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, John Hubbard, paul@xen.org
References: <1592363698-4266-1-git-send-email-jrdr.linux@gmail.com>

On Wed, Jun 17, 2020 at 11:29 PM Boris Ostrovsky wrote:
>
> On 6/16/20 11:14 PM, Souptick Joarder wrote:
> > In 2019, we introduced pin_user_pages*() and now we are converting
> > get_user_pages*() to the new API as appropriate. [1] and [2] can be
> > referred to for more information.
> >
> > [1] Documentation/core-api/pin_user_pages.rst
> >
> > [2] "Explicit pinning of user-space pages":
> >     https://lwn.net/Articles/807108/
> >
> > Signed-off-by: Souptick Joarder
> > Cc: John Hubbard
> > ---
> > Hi,
> >
> > I have compile-tested this patch but am unable to run-time test it,
> > so any testing help is much appreciated.
> >
> > I also have a question: why is the existing code not marking the
> > pages dirty (since it did FOLL_WRITE)?
>
>
> Indeed, seems to me it should. Paul?
>
>
> >
> >  drivers/xen/privcmd.c | 7 ++-----
> >  1 file changed, 2 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> > index a250d11..543739e 100644
> > --- a/drivers/xen/privcmd.c
> > +++ b/drivers/xen/privcmd.c
> > @@ -594,7 +594,7 @@ static int lock_pages(
> >               if (requested > nr_pages)
> >                       return -ENOSPC;
> >
> > -             pinned = get_user_pages_fast(
> > +             pinned = pin_user_pages_fast(
> >                       (unsigned long) kbufs[i].uptr,
> >                       requested, FOLL_WRITE, pages);
> >               if (pinned < 0)
> > @@ -614,10 +614,7 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
> >       if (!pages)
> >               return;
> >
> > -     for (i = 0; i < nr_pages; i++) {
> > -             if (pages[i])
> > -                     put_page(pages[i]);
> > -     }
> > +     unpin_user_pages(pages, nr_pages);
>
>
> Why are you no longer checking for valid pages?

My understanding is that if lock_pages() ends up pinning only part of the
requested pages, we should pass the number of pages actually pinned to
unlock_pages(), not nr_pages. That avoids the extra check to validate
pages[i], and if lock_pages() returns 0 on success, all of pages[i] are
valid anyway. I will try to correct this in v2.

But I agree there is no harm in checking pages[i], and I believe
unpin_user_pages() is the right place to do so. John, any thoughts?
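
For reference, here is a rough, untested sketch of that direction, against
drivers/xen/privcmd.c. The *pinned out-parameter (and its name) is my own
invention for illustration, and the switch to unpin_user_pages_dirty_lock()
is an assumption that would also cover the open question about dirtying
pages pinned with FOLL_WRITE; the posted patch itself only uses
unpin_user_pages().

static int lock_pages(
        struct privcmd_dm_op_buf kbufs[], unsigned int num,
        struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
{
        unsigned int i;

        for (i = 0; i < num; i++) {
                unsigned int requested;
                int page_count;

                requested = DIV_ROUND_UP(
                        offset_in_page(kbufs[i].uptr) + kbufs[i].size,
                        PAGE_SIZE);
                if (requested > nr_pages)
                        return -ENOSPC;

                page_count = pin_user_pages_fast(
                        (unsigned long) kbufs[i].uptr,
                        requested, FOLL_WRITE, pages);
                if (page_count < 0)
                        return page_count;

                /* Record how many pages were actually pinned so the
                 * error path can unpin exactly that many. */
                *pinned += page_count;
                nr_pages -= page_count;
                pages += page_count;
        }

        return 0;
}

static void unlock_pages(struct page *pages[], unsigned int nr_pages)
{
        if (!pages)
                return;

        /* Assumption on my part: mark the pages dirty on unpin because
         * they were pinned with FOLL_WRITE; the posted patch keeps plain
         * unpin_user_pages(). */
        unpin_user_pages_dirty_lock(pages, nr_pages, true);
}

The caller, privcmd_ioctl_dm_op(), would then keep a local
"unsigned int pinned = 0", pass &pinned to lock_pages(), and call
unlock_pages(pages, pinned) on its out path, so a partial pin never
touches the NULL slots at the end of pages[].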