From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Maor Gottlieb, Christoph Hellwig, Daniel Vetter, David Airlie,
    dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
    Jani Nikula, Joonas Lahtinen, linux-kernel@vger.kernel.org,
    linux-rdma@vger.kernel.org, Rodrigo Vivi, Roland Scheidegger,
    Tvrtko Ursulin, VMware Graphics
Subject: [PATCH rdma-next v5 4/4] RDMA/umem: Move to allocate SG table from pages
Date: Sun, 4 Oct 2020 18:43:40 +0300
Message-Id: <20201004154340.1080481-5-leon@kernel.org>
In-Reply-To: <20201004154340.1080481-1-leon@kernel.org>
References: <20201004154340.1080481-1-leon@kernel.org>

From: Maor Gottlieb

Remove the implementation of ib_umem_add_sg_table() and instead call
__sg_alloc_table_from_pages(), which already has the logic to merge
contiguous pages.

Besides removing duplicated functionality, this significantly reduces
the memory consumption of the SG table. Prior to this patch, the SG
table was allocated in advance without taking contiguous pages into
account, so on a system using 2MB huge pages it would contain 512 times
more SG entries than needed. E.g. for a 100GB memory registration:

	         Number of entries    Size
	Before   26214400             600.0MB
	After    51200                1.2MB

Signed-off-by: Maor Gottlieb
Signed-off-by: Leon Romanovsky
---
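A quick sanity check of the table above, as a sketch only: it assumes a
4KB base page size and roughly 24 bytes per struct scatterlist entry
(the entry size is an assumption here, not stated in the patch, and
varies by architecture and kernel config). Before the change there is
one SG entry per 4KB page; after it, contiguous pages are merged so a
fully huge-page-backed region needs one entry per 2MB range, a 512x
reduction in entries.

#include <stdio.h>

int main(void)
{
	unsigned long long region   = 100ULL << 30; /* 100GB registration */
	unsigned long long entry_sz = 24;           /* assumed bytes per SG entry */

	/* Before: one SG entry per 4KB page. */
	unsigned long long before = region / (4ULL << 10);
	/* After: contiguous pages merged, one SG entry per 2MB huge page. */
	unsigned long long after = region / (2ULL << 20);

	printf("Before: %llu entries, %.1fMB\n", before, before * entry_sz / 1048576.0);
	printf("After:  %llu entries, %.1fMB\n", after, after * entry_sz / 1048576.0);
	return 0;
}

This reproduces the Before/After rows of the table (26214400 entries /
600.0MB versus 51200 entries / 1.2MB).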
 drivers/infiniband/core/umem.c | 94 +++++-----------------------------
 1 file changed, 12 insertions(+), 82 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index c1ab6a4f2bc3..e9fecbdf391b 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -61,73 +61,6 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	sg_free_table(&umem->sg_head);
 }
 
-/* ib_umem_add_sg_table - Add N contiguous pages to scatter table
- *
- * sg: current scatterlist entry
- * page_list: array of npage struct page pointers
- * npages: number of pages in page_list
- * max_seg_sz: maximum segment size in bytes
- * nents: [out] number of entries in the scatterlist
- *
- * Return new end of scatterlist
- */
-static struct scatterlist *ib_umem_add_sg_table(struct scatterlist *sg,
-						struct page **page_list,
-						unsigned long npages,
-						unsigned int max_seg_sz,
-						int *nents)
-{
-	unsigned long first_pfn;
-	unsigned long i = 0;
-	bool update_cur_sg = false;
-	bool first = !sg_page(sg);
-
-	/* Check if new page_list is contiguous with end of previous page_list.
-	 * sg->length here is a multiple of PAGE_SIZE and sg->offset is 0.
-	 */
-	if (!first && (page_to_pfn(sg_page(sg)) + (sg->length >> PAGE_SHIFT) ==
-		       page_to_pfn(page_list[0])))
-		update_cur_sg = true;
-
-	while (i != npages) {
-		unsigned long len;
-		struct page *first_page = page_list[i];
-
-		first_pfn = page_to_pfn(first_page);
-
-		/* Compute the number of contiguous pages we have starting
-		 * at i
-		 */
-		for (len = 0; i != npages &&
-			      first_pfn + len == page_to_pfn(page_list[i]) &&
-			      len < (max_seg_sz >> PAGE_SHIFT);
-		     len++)
-			i++;
-
-		/* Squash N contiguous pages from page_list into current sge */
-		if (update_cur_sg) {
-			if ((max_seg_sz - sg->length) >= (len << PAGE_SHIFT)) {
-				sg_set_page(sg, sg_page(sg),
-					    sg->length + (len << PAGE_SHIFT),
-					    0);
-				update_cur_sg = false;
-				continue;
-			}
-			update_cur_sg = false;
-		}
-
-		/* Squash N contiguous pages into next sge or first sge */
-		if (!first)
-			sg = sg_next(sg);
-
-		(*nents)++;
-		sg_set_page(sg, first_page, len << PAGE_SHIFT, 0);
-		first = false;
-	}
-
-	return sg;
-}
-
 /**
  * ib_umem_find_best_pgsz - Find best HW page size to use for this MR
  *
@@ -217,7 +150,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	struct mm_struct *mm;
 	unsigned long npages;
 	int ret;
-	struct scatterlist *sg;
+	struct scatterlist *sg = NULL;
 	unsigned int gup_flags = FOLL_WRITE;
 
 	/*
@@ -272,15 +205,9 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 
 	cur_base = addr & PAGE_MASK;
 
-	ret = sg_alloc_table(&umem->sg_head, npages, GFP_KERNEL);
-	if (ret)
-		goto vma;
-
 	if (!umem->writable)
 		gup_flags |= FOLL_FORCE;
 
-	sg = umem->sg_head.sgl;
-
 	while (npages) {
 		cond_resched();
 		ret = pin_user_pages_fast(cur_base,
@@ -292,15 +219,19 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 			goto umem_release;
 
 		cur_base += ret * PAGE_SIZE;
-		npages -= ret;
-
-		sg = ib_umem_add_sg_table(sg, page_list, ret,
-			dma_get_max_seg_size(device->dma_device),
-			&umem->sg_nents);
+		npages -= ret;
+		sg = __sg_alloc_table_from_pages(
+			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
+			dma_get_max_seg_size(device->dma_device), sg, npages,
+			GFP_KERNEL);
+		umem->sg_nents = umem->sg_head.nents;
+		if (IS_ERR(sg)) {
+			unpin_user_pages_dirty_lock(page_list, ret, 0);
+			ret = PTR_ERR(sg);
+			goto umem_release;
+		}
 	}
 
-	sg_mark_end(sg);
-
 	if (access & IB_ACCESS_RELAXED_ORDERING)
 		dma_attr |= DMA_ATTR_WEAK_ORDERING;
 
@@ -318,7 +249,6 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 
 umem_release:
 	__ib_umem_release(device, umem, 0);
-vma:
 	atomic64_sub(ib_umem_num_pages(umem), &mm->pinned_vm);
 out:
 	free_page((unsigned long) page_list);
-- 
2.26.2