From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Maor Gottlieb, Dennis Dalessandro, linux-kernel@vger.kernel.org,
	linux-rdma@vger.kernel.org, Mike Marciniszyn, Yishai Hadas, Zhu Yanjun
Subject: [PATCH rdma-next v1 2/2] RDMA: Use dma_map_sgtable for map umem pages
Date: Tue, 29 Jun 2021 11:40:02 +0300
Message-Id: <70cbe6ddc2aa9bc5efb96d3c932d76fb2d68a50c.1624955710.git.leonro@nvidia.com>
X-Mailer: git-send-email 2.31.1
X-Mailing-List: linux-kernel@vger.kernel.org

From: Maor Gottlieb

In order to avoid incorrect usage of sg_table fields, change umem to use
dma_map_sgtable to map the pages for DMA. Since dma_map_sgtable updates the
nents field (the number of DMA entries), the nmap variable is no longer
needed, so do some cleanups accordingly.

Signed-off-by: Maor Gottlieb
Signed-off-by: Leon Romanovsky
---
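A note for reviewers, kept below the '---' fold so it stays out of git history:
the sketch below is illustrative only and is not part of this patch. It shows
the calling pattern the series converts umem users to: map the whole sg_table
once with ib_dma_map_sgtable_attrs() and then walk the DMA-mapped entries with
for_each_sgtable_dma_sg(), relying on sg_head.nents being maintained by the
core instead of a driver-side nmap counter. The function name demo_map_umem()
and the pr_debug() output are made up for the example.

#include <linux/scatterlist.h>
#include <rdma/ib_umem.h>
#include <rdma/ib_verbs.h>

/* Illustrative sketch -- not part of this patch. */
static int demo_map_umem(struct ib_device *dev, struct ib_umem *umem)
{
	struct scatterlist *sg;
	unsigned int i;
	int ret;

	/* Map every entry of the umem's sg_table; nents is updated inside. */
	ret = ib_dma_map_sgtable_attrs(dev, &umem->sg_head,
				       DMA_BIDIRECTIONAL, 0);
	if (ret)
		return ret;

	/* Walk only the DMA-mapped entries (sg_head.nents of them). */
	for_each_sgtable_dma_sg(&umem->sg_head, sg, i)
		pr_debug("dma addr 0x%llx len %u\n",
			 (unsigned long long)sg_dma_address(sg),
			 sg_dma_len(sg));

	ib_dma_unmap_sgtable_attrs(dev, &umem->sg_head, DMA_BIDIRECTIONAL, 0);
	return 0;
}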
 drivers/infiniband/core/umem.c        | 29 ++++++++++-----------------
 drivers/infiniband/core/umem_dmabuf.c |  1 -
 drivers/infiniband/hw/mlx4/mr.c       |  4 ++--
 drivers/infiniband/hw/mlx5/mr.c       |  3 ++-
 drivers/infiniband/sw/rdmavt/mr.c     |  2 +-
 drivers/infiniband/sw/rxe/rxe_mr.c    |  3 ++-
 include/rdma/ib_umem.h                |  5 ++---
 include/rdma/ib_verbs.h               | 28 ++++++++++++++++++++++++++
 8 files changed, 48 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 0eb40025075f..f620d5b6b0e1 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -51,11 +51,11 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int dirty)
 	struct scatterlist *sg;
 	unsigned int i;
 
-	if (umem->nmap > 0)
-		ib_dma_unmap_sg(dev, umem->sg_head.sgl, umem->sg_nents,
-				DMA_BIDIRECTIONAL);
+	if (dirty)
+		ib_dma_unmap_sgtable_attrs(dev, &umem->sg_head,
+					   DMA_BIDIRECTIONAL, 0);
 
-	for_each_sg(umem->sg_head.sgl, sg, umem->sg_nents, i)
+	for_each_sgtable_sg(&umem->sg_head, sg, i)
 		unpin_user_page_range_dirty_lock(sg_page(sg),
 			DIV_ROUND_UP(sg->length, PAGE_SIZE), make_dirty);
 
@@ -111,7 +111,7 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 	/* offset into first SGL */
 	pgoff = umem->address & ~PAGE_MASK;
 
-	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, i) {
+	for_each_sgtable_dma_sg(&umem->sg_head, sg, i) {
 		/* Walk SGL and reduce max page size if VA/PA bits differ
 		 * for any address.
 		 */
@@ -121,7 +121,7 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 		 * the maximum possible page size as the low bits of the iova
 		 * must be zero when starting the next chunk.
 		 */
-		if (i != (umem->nmap - 1))
+		if (i != (umem->sg_head.nents - 1))
 			mask |= va;
 		pgoff = 0;
 	}
@@ -230,7 +230,6 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 				0, ret << PAGE_SHIFT,
 				ib_dma_max_seg_size(device), sg, npages,
 				GFP_KERNEL);
-		umem->sg_nents = umem->sg_head.nents;
 		if (IS_ERR(sg)) {
 			unpin_user_pages_dirty_lock(page_list, ret, 0);
 			ret = PTR_ERR(sg);
@@ -241,16 +240,10 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	if (access & IB_ACCESS_RELAXED_ORDERING)
 		dma_attr |= DMA_ATTR_WEAK_ORDERING;
 
-	umem->nmap =
-		ib_dma_map_sg_attrs(device, umem->sg_head.sgl, umem->sg_nents,
-				    DMA_BIDIRECTIONAL, dma_attr);
-
-	if (!umem->nmap) {
-		ret = -ENOMEM;
+	ret = ib_dma_map_sgtable_attrs(device, &umem->sg_head,
+				       DMA_BIDIRECTIONAL, dma_attr);
+	if (ret)
 		goto umem_release;
-	}
-
-	ret = 0;
 	goto out;
 
 umem_release:
@@ -310,8 +303,8 @@ int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offset,
 		return -EINVAL;
 	}
 
-	ret = sg_pcopy_to_buffer(umem->sg_head.sgl, umem->sg_nents, dst, length,
-				 offset + ib_umem_offset(umem));
+	ret = sg_pcopy_to_buffer(umem->sg_head.sgl, umem->sg_head.orig_nents,
+				 dst, length, offset + ib_umem_offset(umem));
 	if (ret < 0)
 		return ret;
 
diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
index 0d65ce146fc4..cd2dd1f39aa7 100644
--- a/drivers/infiniband/core/umem_dmabuf.c
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -57,7 +57,6 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
 
 	umem_dmabuf->umem.sg_head.sgl = umem_dmabuf->first_sg;
 	umem_dmabuf->umem.sg_head.nents = nmap;
-	umem_dmabuf->umem.nmap = nmap;
 	umem_dmabuf->sgt = sgt;
 
 wait_fence:
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index 50becc0e4b62..ab5dc8eac7f8 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -200,7 +200,7 @@ int mlx4_ib_umem_write_mtt(struct mlx4_ib_dev *dev, struct mlx4_mtt *mtt,
 	mtt_shift = mtt->page_shift;
 	mtt_size = 1ULL << mtt_shift;
 
-	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, i) {
+	for_each_sgtable_dma_sg(&umem->sg_head, sg, i) {
 		if (cur_start_addr + len == sg_dma_address(sg)) {
 			/* still the same block */
 			len += sg_dma_len(sg);
@@ -273,7 +273,7 @@ int mlx4_ib_umem_calc_optimal_mtt_size(struct ib_umem *umem, u64 start_va,
 
 	*num_of_mtts = ib_umem_num_dma_blocks(umem, PAGE_SIZE);
 
-	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, i) {
+	for_each_sgtable_dma_sg(&umem->sg_head, sg, i) {
 		/*
 		 * Initialization - save the first chunk start as the
 		 * current_block_start - block means contiguous pages.
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 3263851ea574..4954fb9eb6dc 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1226,7 +1226,8 @@ int mlx5_ib_update_mr_pas(struct mlx5_ib_mr *mr, unsigned int flags)
 	orig_sg_length = sg.length;
 
 	cur_mtt = mtt;
-	rdma_for_each_block (mr->umem->sg_head.sgl, &biter, mr->umem->nmap,
+	rdma_for_each_block (mr->umem->sg_head.sgl, &biter,
+			     mr->umem->sg_head.nents,
 			     BIT(mr->page_shift)) {
 		if (cur_mtt == (void *)mtt + sg.length) {
 			dma_sync_single_for_device(ddev, sg.addr, sg.length,
diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
index 34b7af6ab9c2..d955c8c4acc4 100644
--- a/drivers/infiniband/sw/rdmavt/mr.c
+++ b/drivers/infiniband/sw/rdmavt/mr.c
@@ -410,7 +410,7 @@ struct ib_mr *rvt_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	mr->mr.page_shift = PAGE_SHIFT;
 	m = 0;
 	n = 0;
-	for_each_sg_page (umem->sg_head.sgl, &sg_iter, umem->nmap, 0) {
+	for_each_sg_page (umem->sg_head.sgl, &sg_iter, umem->sg_head.nents, 0) {
 		void *vaddr;
 
 		vaddr = page_address(sg_page_iter_page(&sg_iter));
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 6aabcb4de235..a269085e0946 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -142,7 +142,8 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
 	if (length > 0) {
 		buf = map[0]->buf;
 
-		for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0) {
+		for_each_sg_page(umem->sg_head.sgl, &sg_iter,
+				 umem->sg_head.nents, 0) {
 			if (num_buf >= RXE_BUF_PER_MAP) {
 				map++;
 				buf = map[0]->buf;
diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
index 676c57f5ca80..c754b1a31cc9 100644
--- a/include/rdma/ib_umem.h
+++ b/include/rdma/ib_umem.h
@@ -27,8 +27,6 @@ struct ib_umem {
 	u32 is_dmabuf : 1;
 	struct work_struct	work;
 	struct sg_table sg_head;
-	int             nmap;
-	unsigned int    sg_nents;
 };
 
 struct ib_umem_dmabuf {
@@ -77,7 +75,8 @@ static inline void __rdma_umem_block_iter_start(struct ib_block_iter *biter,
 						struct ib_umem *umem,
 						unsigned long pgsz)
 {
-	__rdma_block_iter_start(biter, umem->sg_head.sgl, umem->nmap, pgsz);
+	__rdma_block_iter_start(biter, umem->sg_head.sgl, umem->sg_head.nents,
+				pgsz);
 }
 
 /**
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 371df1c80aeb..2dba30849731 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -4057,6 +4057,34 @@ static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
 				      dma_attrs);
 }
 
+/**
+ * ib_dma_map_sgtable_attrs - Map a scatter/gather table to DMA addresses
+ * @dev: The device for which the DMA addresses are to be created
+ * @sg: The sg_table object describing the buffer
+ * @direction: The direction of the DMA
+ * @attrs: Optional DMA attributes for the map operation
+ */
+static inline int ib_dma_map_sgtable_attrs(struct ib_device *dev,
+					   struct sg_table *sgt,
+					   enum dma_data_direction direction,
+					   unsigned long dma_attrs)
+{
+	if (ib_uses_virt_dma(dev)) {
+		ib_dma_virt_map_sg(dev, sgt->sgl, sgt->orig_nents);
+		return 0;
+	}
+	return dma_map_sgtable(dev->dma_device, sgt, direction, dma_attrs);
+}
+
+static inline void ib_dma_unmap_sgtable_attrs(struct ib_device *dev,
+					      struct sg_table *sgt,
+					      enum dma_data_direction direction,
+					      unsigned long dma_attrs)
+{
+	if (!ib_uses_virt_dma(dev))
+		dma_unmap_sgtable(dev->dma_device, sgt, direction, dma_attrs);
+}
+
 /**
  * ib_dma_map_sg - Map a scatter/gather list to DMA addresses
  * @dev: The device for which the DMA addresses are to be created
-- 
2.31.1