Subject: Re: [PATCH 4/4] RDMA/umem: batch page unpin in __ib_mem_release()
To: Joao Martins <joao.m.martins@oracle.com>
CC: Andrew Morton, Jason Gunthorpe, Doug Ledford, Matthew Wilcox
References: <20210203220025.8568-1-joao.m.martins@oracle.com> <20210203220025.8568-5-joao.m.martins@oracle.com>
From: John Hubbard
Message-ID: <4ed92932-8cf2-97ab-7296-6efee51fc555@nvidia.com>
Date: Wed, 3 Feb 2021 16:15:53 -0800
In-Reply-To: <20210203220025.8568-5-joao.m.martins@oracle.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2/3/21 2:00 PM, Joao Martins wrote:
> Use the newly added unpin_user_page_range_dirty_lock()
> for more quickly unpinning a consecutive range of pages
> represented as compound pages. This will also calculate
> the number of pages to unpin (for the tail pages which
> match the head page) and thus batch the refcount update.
>
> Running a test program which calls mr reg/unreg on a 1G in size
> and measures cost of both operations together (in a guest using rxe)
> with THP and hugetlbfs:

In the patch subject line:

s/__ib_mem_release/__ib_umem_release/

>
> Before:
> 590 rounds in 5.003 sec: 8480.335 usec / round
> 6898 rounds in 60.001 sec: 8698.367 usec / round
>
> After:
> 2631 rounds in 5.001 sec: 1900.618 usec / round
> 31625 rounds in 60.001 sec: 1897.267 usec / round
>
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> ---
>  drivers/infiniband/core/umem.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> index 2dde99a9ba07..ea4ebb3261d9 100644
> --- a/drivers/infiniband/core/umem.c
> +++ b/drivers/infiniband/core/umem.c
> @@ -47,17 +47,17 @@
>
>  static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int dirty)
>  {
> -        struct sg_page_iter sg_iter;
> -        struct page *page;
> +        bool make_dirty = umem->writable && dirty;
> +        struct scatterlist *sg;
> +        int i;

Maybe unsigned int is better, so as to perfectly match the scatterlist.length.

>
>          if (umem->nmap > 0)
>                  ib_dma_unmap_sg(dev, umem->sg_head.sgl, umem->sg_nents,
>                                  DMA_BIDIRECTIONAL);
>
> -        for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
> -                page = sg_page_iter_page(&sg_iter);
> -                unpin_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
> -        }
> +        for_each_sg(umem->sg_head.sgl, sg, umem->nmap, i)

The change from umem->sg_nents to umem->nmap looks OK, although we should get
IB people to verify that there is not some odd bug or reason to leave it as is.

> +                unpin_user_page_range_dirty_lock(sg_page(sg),
> +                                DIV_ROUND_UP(sg->length, PAGE_SIZE), make_dirty);

Is it really OK to refer directly to sg->length? The scatterlist library goes
to some effort to avoid having callers directly access the struct member
variables.

Actually, the for_each_sg() code and its behavior with sg->length and
sg_page(sg) confuses me because I'm new to it, and I don't quite understand
how this works. Especially with SG_CHAIN. I'm assuming that you've monitored
/proc/vmstat for nr_foll_pin* ?

>
>          sg_free_table(&umem->sg_head);
>  }
>

thanks,
--
John Hubbard
NVIDIA
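
For readers following along, below is a sketch of __ib_umem_release() as it
reads once the quoted hunk is applied, reconstructed from the diff above.
The field names (umem->writable, umem->nmap, umem->sg_nents, umem->sg_head)
come from the hunk itself; the includes and the rest of umem.c are assumed
rather than shown in the patch. Comments added here spell out where the
batching happens.

/* Assumed includes, matching what umem.c would already pull in. */
#include <linux/mm.h>           /* unpin_user_page_range_dirty_lock() */
#include <linux/scatterlist.h>  /* for_each_sg(), sg_page(), sg_free_table() */
#include <linux/dma-direction.h>
#include <rdma/ib_umem.h>
#include <rdma/ib_verbs.h>      /* ib_dma_unmap_sg() */

static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem,
                              int dirty)
{
        bool make_dirty = umem->writable && dirty;
        struct scatterlist *sg;
        int i;

        if (umem->nmap > 0)
                ib_dma_unmap_sg(dev, umem->sg_head.sgl, umem->sg_nents,
                                DMA_BIDIRECTIONAL);

        /*
         * One unpin call per scatterlist entry instead of one per page:
         * sg->length is the byte length of the entry, so
         * DIV_ROUND_UP(sg->length, PAGE_SIZE) is the number of pages it
         * covers, and unpin_user_page_range_dirty_lock() batches the
         * refcount updates across the compound pages in that range.
         */
        for_each_sg(umem->sg_head.sgl, sg, umem->nmap, i)
                unpin_user_page_range_dirty_lock(sg_page(sg),
                                DIV_ROUND_UP(sg->length, PAGE_SIZE),
                                make_dirty);

        sg_free_table(&umem->sg_head);
}

On the SG_CHAIN question raised above: for_each_sg() advances with sg_next(),
which follows chain entries transparently, so the loop body only ever sees
entries that carry pages; that is standard scatterlist behavior rather than
anything this patch changes.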