From: Logan Gunthorpe <logang@deltatee.com>
To: Jiasen Lin <linjiasen@hygon.cn>, linux-kernel@vger.kernel.org, linux-ntb@googlegroups.com, jdmason@kudzu.us
Cc: allenbh@gmail.com, dave.jiang@intel.com
Date: Tue, 19 Nov 2019 09:50:08 -0700
In-Reply-To: <1574136121-7941-1-git-send-email-linjiasen@hygon.cn>
References: <1574136121-7941-1-git-send-email-linjiasen@hygon.cn>
Subject: Re: [PATCH v2] NTB: ntb_perf: Fix address err in perf_copy_chunk

On 2019-11-18 9:02 p.m., Jiasen Lin wrote:
> peer->outbuf is a virtual address obtained by ioremap; it cannot be
> converted to a physical address by virt_to_page and page_to_phys.
> That conversion results in DMA errors, because the destination address
> produced by page_to_phys is invalid.
>
> This patch saves the MMIO address of NTB BARx in perf_setup_peer_mw
> and maps the BAR space to a DMA address after we assign the DMA channel.
> The destination address of each DMA descriptor is then filled with this
> DMA address, to guarantee that memory write requests fall into the
> memory window of NTB BARx with the IOMMU both enabled and disabled.
>
> Changes since v1:
> * Map the NTB BARx MMIO address to a DMA address after assigning the
>   DMA channel, to ensure the destination address is valid. (per
>   suggestion from Logan)
>
> Fixes: 5648e56d03fa ("NTB: ntb_perf: Add full multi-port NTB API support")
> Signed-off-by: Jiasen Lin <linjiasen@hygon.cn>

Thanks, looks good to me except for the one nit below.
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>

> ---
>  drivers/ntb/test/ntb_perf.c | 69 ++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 56 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
> index e9b7c2d..dfca7e1 100644
> --- a/drivers/ntb/test/ntb_perf.c
> +++ b/drivers/ntb/test/ntb_perf.c
> @@ -149,7 +149,8 @@ struct perf_peer {
>  	u64 outbuf_xlat;
>  	resource_size_t outbuf_size;
>  	void __iomem *outbuf;
> -
> +	phys_addr_t out_phys_addr;
> +	dma_addr_t dma_dst_addr;
>  	/* Inbound MW params */
>  	dma_addr_t inbuf_xlat;
>  	resource_size_t inbuf_size;
> @@ -776,7 +777,8 @@ static void perf_dma_copy_callback(void *data)
>  }
>
>  static int perf_copy_chunk(struct perf_thread *pthr,
> -			   void __iomem *dst, void *src, size_t len)
> +			   void __iomem *dst, void *src, size_t len,
> +			   dma_addr_t dst_dma_addr)
>  {
>  	struct dma_async_tx_descriptor *tx;
>  	struct dmaengine_unmap_data *unmap;
> @@ -807,8 +809,7 @@ static int perf_copy_chunk(struct perf_thread *pthr,
>  	}
>  	unmap->to_cnt = 1;
>
> -	unmap->addr[1] = dma_map_page(dma_dev, virt_to_page(dst),
> -				      offset_in_page(dst), len, DMA_FROM_DEVICE);
> +	unmap->addr[1] = dst_dma_addr;
>  	if (dma_mapping_error(dma_dev, unmap->addr[1])) {
>  		ret = -EIO;
>  		goto err_free_resource;
> @@ -865,6 +866,7 @@ static int perf_init_test(struct perf_thread *pthr)
>  {
>  	struct perf_ctx *perf = pthr->perf;
>  	dma_cap_mask_t dma_mask;
> +	struct perf_peer *peer = pthr->perf->test_peer;
>
>  	pthr->src = kmalloc_node(perf->test_peer->outbuf_size, GFP_KERNEL,
>  				 dev_to_node(&perf->ntb->dev));
> @@ -882,15 +884,33 @@ static int perf_init_test(struct perf_thread *pthr)
>  	if (!pthr->dma_chan) {
>  		dev_err(&perf->ntb->dev, "%d: Failed to get DMA channel\n",
>  			pthr->tidx);
> -		atomic_dec(&perf->tsync);
> -		wake_up(&perf->twait);
> -		kfree(pthr->src);
> -		return -ENODEV;
> +		goto err_free;
> +	}
> +	peer->dma_dst_addr =
> +		dma_map_resource(pthr->dma_chan->device->dev,
> +				 peer->out_phys_addr, peer->outbuf_size,
> +				 DMA_FROM_DEVICE, 0);
> +	if (dma_mapping_error(pthr->dma_chan->device->dev,
> +			      peer->dma_dst_addr)) {
> +		dev_err(pthr->dma_chan->device->dev, "%d: Failed to map DMA addr\n",
> +			pthr->tidx);
> +		peer->dma_dst_addr = 0;
> +		dma_release_channel(pthr->dma_chan);
> +		goto err_free;
>  	}
> +	dev_dbg(pthr->dma_chan->device->dev, "%d: Map MMIO %pa to DMA addr %pad\n",
> +		pthr->tidx,
> +		&peer->out_phys_addr,
> +		&peer->dma_dst_addr);
>
>  	atomic_set(&pthr->dma_sync, 0);
> -
>  	return 0;
> +
> +err_free:
> +	atomic_dec(&perf->tsync);
> +	wake_up(&perf->twait);
> +	kfree(pthr->src);
> +	return -ENODEV;
>  }
>
>  static int perf_run_test(struct perf_thread *pthr)
> @@ -901,6 +921,8 @@ static int perf_run_test(struct perf_thread *pthr)
>  	u64 total_size, chunk_size;
>  	void *flt_src;
>  	int ret = 0;
> +	dma_addr_t flt_dma_addr;
> +	dma_addr_t bnd_dma_addr;
>
>  	total_size = 1ULL << total_order;
>  	chunk_size = 1ULL << chunk_order;
> @@ -910,11 +932,15 @@ static int perf_run_test(struct perf_thread *pthr)
>  	bnd_dst = peer->outbuf + peer->outbuf_size;
>  	flt_dst = peer->outbuf;
>
> +	flt_dma_addr = peer->dma_dst_addr;
> +	bnd_dma_addr = peer->dma_dst_addr + peer->outbuf_size;
> +
>  	pthr->duration = ktime_get();
>
>  	/* Copied field is cleared on test launch stage */
>  	while (pthr->copied < total_size) {
> -		ret = perf_copy_chunk(pthr, flt_dst, flt_src, chunk_size);
> +		ret = perf_copy_chunk(pthr, flt_dst, flt_src, chunk_size,
> +				      flt_dma_addr);
>  		if (ret) {
>  			dev_err(&perf->ntb->dev, "%d: Got error %d on test\n",
>  				pthr->tidx, ret);
> @@ -925,8 +951,15 @@ static int perf_run_test(struct perf_thread *pthr)
>
>  		flt_dst += chunk_size;
>  		flt_src += chunk_size;
> -		if (flt_dst >= bnd_dst || flt_dst < peer->outbuf) {
> +		flt_dma_addr += chunk_size;
> +
> +		if (flt_dst >= bnd_dst ||
> +		    flt_dst < peer->outbuf ||
> +		    flt_dma_addr >= bnd_dma_addr ||

Nit: I'm pretty sure the check against bnd_dma_addr is redundant with the check on bnd_dst.

> +		    flt_dma_addr < peer->dma_dst_addr) {
> +
>  			flt_dst = peer->outbuf;
> +			flt_dma_addr = peer->dma_dst_addr;
>  			flt_src = pthr->src;
>  		}
>
> @@ -978,8 +1011,13 @@ static void perf_clear_test(struct perf_thread *pthr)
>  	 * We call it anyway just to be sure of the transfers completion.
>  	 */
>  	(void)dmaengine_terminate_sync(pthr->dma_chan);
> -
> -	dma_release_channel(pthr->dma_chan);
> +	if (pthr->perf->test_peer->dma_dst_addr)
> +		dma_unmap_resource(pthr->dma_chan->device->dev,
> +				   pthr->perf->test_peer->dma_dst_addr,
> +				   pthr->perf->test_peer->outbuf_size,
> +				   DMA_FROM_DEVICE, 0);
> +	if (pthr->dma_chan)
> +		dma_release_channel(pthr->dma_chan);
>
>  no_dma_notify:
>  	atomic_dec(&perf->tsync);
> @@ -1195,6 +1233,9 @@ static ssize_t perf_dbgfs_read_info(struct file *filep, char __user *ubuf,
>  			 "\tOut buffer addr 0x%pK\n", peer->outbuf);
>
>  	pos += scnprintf(buf + pos, buf_size - pos,
> +			 "\tOut buff phys addr %pa[p]\n", &peer->out_phys_addr);
> +
> +	pos += scnprintf(buf + pos, buf_size - pos,
>  			 "\tOut buffer size %pa\n", &peer->outbuf_size);
>
>  	pos += scnprintf(buf + pos, buf_size - pos,
> @@ -1388,6 +1429,8 @@ static int perf_setup_peer_mw(struct perf_peer *peer)
>  	if (!peer->outbuf)
>  		return -ENOMEM;
>
> +	peer->out_phys_addr = phys_addr;
> +
>  	if (max_mw_size && peer->outbuf_size > max_mw_size) {
>  		peer->outbuf_size = max_mw_size;
>  		dev_warn(&peer->perf->ntb->dev,
>
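The nit can be checked with a small userspace model of the perf_run_test() wrap loop (plain C, not kernel code; the buffer size, chunk size, and base addresses below are made up): flt_dst and flt_dma_addr start at their respective bases and advance in lockstep by chunk_size, so their offsets from the base are always equal and the upper-bound comparison on the DMA address can never fire on an iteration where the comparison on the virtual address does not.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Returns 1 if, over `steps` chunk advances with wraparound, the
 * DMA-side upper-bound check ever disagreed with the virtual-side
 * check; 0 if they always agreed (i.e. the DMA check is redundant).
 * Base addresses are arbitrary stand-ins for peer->outbuf and
 * peer->dma_dst_addr. */
int checks_ever_disagree(size_t outbuf_size, size_t chunk_size, int steps)
{
	uintptr_t outbuf = 0x1000;         /* hypothetical __iomem base */
	uintptr_t dma_dst = 0x80000000u;   /* hypothetical DMA base */
	uintptr_t bnd_dst = outbuf + outbuf_size;
	uintptr_t bnd_dma = dma_dst + outbuf_size;
	uintptr_t flt_dst = outbuf;
	uintptr_t flt_dma = dma_dst;
	int i;

	for (i = 0; i < steps; i++) {
		/* Lockstep advance, as in the patched loop */
		flt_dst += chunk_size;
		flt_dma += chunk_size;

		if ((flt_dst >= bnd_dst) != (flt_dma >= bnd_dma))
			return 1;

		if (flt_dst >= bnd_dst) {
			flt_dst = outbuf;
			flt_dma = dma_dst;
		}
	}
	return 0;
}
```

Because both cursors are reset together on wrap, the invariant (flt_dst - outbuf) == (flt_dma - dma_dst) holds on every iteration, which is why the two checks always agree, whether or not chunk_size divides outbuf_size evenly.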