Date: Sat, 2 Mar 2019 11:44:03 -0800
From: Ira Weiny
To: john.hubbard@gmail.com
Cc: linux-mm@kvack.org, Andrew Morton, LKML, John Hubbard, Jason Gunthorpe,
	Doug Ledford, linux-rdma@vger.kernel.org, artemyko@mellanox.com
Subject: Re: [PATCH v2] RDMA/umem: minor bug fix and cleanup in error handling paths
Message-ID: <20190302194402.GA24732@iweiny-DESK2.sc.intel.com>
References: <20190302032726.11769-2-jhubbard@nvidia.com>
 <20190302202435.31889-1-jhubbard@nvidia.com>
In-Reply-To: <20190302202435.31889-1-jhubbard@nvidia.com>

FWIW I don't have ODP hardware either, so I can't test this either.

On Sat, Mar 02, 2019 at 12:24:35PM -0800, john.hubbard@gmail.com wrote:
> From: John Hubbard
>
> 1. Bug fix: the error handling release pages starting
>    at the first page that experienced an error.
>
> 2. Refinement: release_pages() is better than put_page()
>    in a loop.
>
> 3. Dead code removal: the check for (user_virt & ~page_mask)
>    is checking for a condition that can never happen,
>    because earlier:
>
>        user_virt = user_virt & page_mask;
>
>    ...so, remove that entire phrase.
>
> 4. Minor: As long as I'm here, shorten up a couple of long lines
>    in the same function, without harming the ability to
>    grep for the printed error message.
>
> Cc: Ira Weiny
> Cc: Jason Gunthorpe
> Cc: Andrew Morton
> Cc: Doug Ledford
> Cc: linux-rdma@vger.kernel.org
> Cc: linux-mm@kvack.org
> Signed-off-by: John Hubbard
> ---
>
> v2: Fixes a kbuild test robot reported build failure, by directly
>     including pagemap.h
>
>  drivers/infiniband/core/umem_odp.c | 25 ++++++++++---------------
>  1 file changed, 10 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
> index acb882f279cb..83872c1f3f2c 100644
> --- a/drivers/infiniband/core/umem_odp.c
> +++ b/drivers/infiniband/core/umem_odp.c
> @@ -40,6 +40,7 @@
>  #include
>  #include
>  #include
> +#include <linux/pagemap.h>
>
>  #include
>  #include
> @@ -648,25 +649,17 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
>
> 		if (npages < 0) {
> 			if (npages != -EAGAIN)
> -				pr_warn("fail to get %zu user pages with error %d\n", gup_num_pages, npages);
> +				pr_warn("fail to get %zu user pages with error %d\n",
> +					gup_num_pages, npages);
> 			else
> -				pr_debug("fail to get %zu user pages with error %d\n", gup_num_pages, npages);
> +				pr_debug("fail to get %zu user pages with error %d\n",
> +					 gup_num_pages, npages);
> 			break;
> 		}
>
> 		bcnt -= min_t(size_t, npages << PAGE_SHIFT, bcnt);
> 		mutex_lock(&umem_odp->umem_mutex);
> 		for (j = 0; j < npages; j++, user_virt += PAGE_SIZE) {
> -			if (user_virt & ~page_mask) {
> -				p += PAGE_SIZE;
> -				if (page_to_phys(local_page_list[j]) != p) {
> -					ret = -EFAULT;
> -					break;
> -				}
> -				put_page(local_page_list[j]);
> -				continue;
> -			}
> -

I think this is trying to account for compound pages.  (i.e. page_mask
could represent more than PAGE_SIZE, which is what user_virt is being
incremented by.)  But putting the page in that case seems to be the
wrong thing to do?

Yes, this was added by Artemy [1], now cc'ed.

> 			ret = ib_umem_odp_map_dma_single_page(
> 					umem_odp, k, local_page_list[j],
> 					access_mask, current_seq);
> @@ -684,9 +677,11 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
> 		mutex_unlock(&umem_odp->umem_mutex);
>
> 		if (ret < 0) {
> -			/* Release left over pages when handling errors. */
> -			for (++j; j < npages; ++j)
> -				put_page(local_page_list[j]);
> +			/*
> +			 * Release pages, starting at the the first page
> +			 * that experienced an error.
> +			 */
> +			release_pages(&local_page_list[j], npages - j);

My concern here is that release_pages() handles compound pages, perhaps
differently from the above code, so calling it may not work?

But I've not really spent a lot of time on it... :-/

Ira
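P.S. To make points 1 and 2 of the changelog concrete, here is a tiny
userspace sketch (stub helpers standing in for put_page()/release_pages(),
and a made-up failure index); it models only the index arithmetic, not the
kernel code:

#include <stdio.h>

#define NPAGES 8

/* Stand-ins that only print which array entry they would drop. */
static void put_page(int idx)
{
	printf("put_page(page[%d])\n", idx);
}

static void release_pages(int first, int nr)
{
	int i;

	for (i = 0; i < nr; i++)
		printf("release_pages -> page[%d]\n", first + i);
}

int main(void)
{
	int j = 3;	/* pretend the mapping of page[3] failed */
	int i;

	/* Old cleanup, "for (++j; j < npages; ++j)": never touches page[3]. */
	for (i = j + 1; i < NPAGES; i++)
		put_page(i);

	/* New cleanup: releases everything from the failing entry onward. */
	release_pages(j, NPAGES - j);

	return 0;
}

Running it shows the old loop dropping page[4]..page[7] only, while the
release_pages() call starts at page[3], which is what the changelog means
by "starting at the first page that experienced an error".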
[1] commit 403cd12e2cf759ead5cbdcb62bf9872b9618d400
    Author: Artemy Kovalyov
    Date:   Wed Apr 5 09:23:55 2017 +0300

        IB/umem: Add contiguous ODP support

        Currenlty ODP supports only regular MMU pages.
        Add ODP support for regions consisting of physically contiguous
        chunks of arbitrary order (huge pages for instance) to improve
        performance.

        Signed-off-by: Artemy Kovalyov
        Signed-off-by: Leon Romanovsky
        Signed-off-by: Doug Ledford
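To illustrate the compound-page concern with made-up numbers (a rough
userspace sketch, not the kernel code): if page_mask covers a whole huge
page while the loop advances user_virt by PAGE_SIZE, then
(user_virt & ~page_mask) is non-zero for every tail page, so after [1] the
branch removed above is reachable:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE	4096ULL
#define HPAGE_SIZE	(512 * PAGE_SIZE)	/* pretend a 2 MiB compound page */

int main(void)
{
	uint64_t page_mask = ~(HPAGE_SIZE - 1);	/* mask for the compound-page order */
	uint64_t user_virt = 2 * HPAGE_SIZE;	/* already aligned, as after user_virt &= page_mask */
	int j;

	/* The loop steps by PAGE_SIZE, so only the first iteration lands on
	 * the head of the huge page; the rest hit offsets inside it. */
	for (j = 0; j < 4; j++, user_virt += PAGE_SIZE)
		printf("j=%d user_virt=%#llx offset=%#llx -> %s\n",
		       j,
		       (unsigned long long)user_virt,
		       (unsigned long long)(user_virt & ~page_mask),
		       (user_virt & ~page_mask) ? "tail (old branch put_page'd it)"
						: "head (mapped as usual)");
	return 0;
}

Whether putting those tail entries one at a time (as the old branch did) or
handing the whole range to release_pages() is the right thing is exactly the
question above.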