From: Logan Gunthorpe
To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, Ira Weiny, John Hubbard, Don Dutile,
	Matthew Wilcox, Daniel Vetter, Logan Gunthorpe
Date: Fri, 6 Nov 2020 10:00:30 -0700
Message-Id: <20201106170036.18713-10-logang@deltatee.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201106170036.18713-1-logang@deltatee.com>
References: <20201106170036.18713-1-logang@deltatee.com>
Subject: [RFC PATCH 09/15] nvme-pci: Convert to using dma_map_sg for p2pdma pages

Switch to using sg_dma_p2pdma_len() in the places where sg_dma_len() is
used. Then replace the calls to pci_p2pdma_[un]map_sg() with calls to
dma_[un]map_sg() passing the DMA_ATTR_P2PDMA attribute.
This should be equivalent, though support will be somewhat narrower:
only dma-direct and dma-iommu implement DMA_ATTR_P2PDMA at this point.
Using DMA_ATTR_P2PDMA is safe here because the block layer restricts
requests to well under 2GB, so there is no way for a segment to be
greater than 2GB.

Signed-off-by: Logan Gunthorpe
---
 drivers/nvme/host/pci.c | 30 ++++++++++++------------------
 1 file changed, 12 insertions(+), 18 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index ef7ce464a48d..26976bdf4af0 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -528,12 +528,8 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
 
 	WARN_ON_ONCE(!iod->nents);
 
-	if (is_pci_p2pdma_page(sg_page(iod->sg)))
-		pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents,
-				    rq_dma_dir(req));
-	else
-		dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req));
-
+	dma_unmap_sg_attrs(dev->dev, iod->sg, iod->nents, rq_dma_dir(req),
+			   DMA_ATTR_P2PDMA);
 
 	if (iod->npages == 0)
 		dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0],
@@ -570,7 +566,7 @@ static void nvme_print_sgl(struct scatterlist *sgl, int nents)
 		pr_warn("sg[%d] phys_addr:%pad offset:%d length:%d "
 			"dma_address:%pad dma_length:%d\n",
 			i, &phys, sg->offset, sg->length, &sg_dma_address(sg),
-			sg_dma_len(sg));
+			sg_dma_p2pdma_len(sg));
 	}
 }
 
@@ -581,7 +577,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
 	struct dma_pool *pool;
 	int length = blk_rq_payload_bytes(req);
 	struct scatterlist *sg = iod->sg;
-	int dma_len = sg_dma_len(sg);
+	int dma_len = sg_dma_p2pdma_len(sg);
 	u64 dma_addr = sg_dma_address(sg);
 	int offset = dma_addr & (NVME_CTRL_PAGE_SIZE - 1);
 	__le64 *prp_list;
@@ -601,7 +597,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
 	} else {
 		sg = sg_next(sg);
 		dma_addr = sg_dma_address(sg);
-		dma_len = sg_dma_len(sg);
+		dma_len = sg_dma_p2pdma_len(sg);
 	}
 
 	if (length <= NVME_CTRL_PAGE_SIZE) {
@@ -650,7 +646,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
 			goto bad_sgl;
 		sg = sg_next(sg);
 		dma_addr = sg_dma_address(sg);
-		dma_len = sg_dma_len(sg);
+		dma_len = sg_dma_p2pdma_len(sg);
 	}
 done:
 
@@ -670,7 +666,7 @@ static void nvme_pci_sgl_set_data(struct nvme_sgl_desc *sge,
 		struct scatterlist *sg)
 {
 	sge->addr = cpu_to_le64(sg_dma_address(sg));
-	sge->length = cpu_to_le32(sg_dma_len(sg));
+	sge->length = cpu_to_le32(sg_dma_p2pdma_len(sg));
 	sge->type = NVME_SGL_FMT_DATA_DESC << 4;
 }
 
@@ -814,14 +810,12 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
 	if (!iod->nents)
 		goto out;
 
-	if (is_pci_p2pdma_page(sg_page(iod->sg)))
-		nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg,
-				iod->nents, rq_dma_dir(req), DMA_ATTR_NO_WARN);
-	else
-		nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
-					     rq_dma_dir(req), DMA_ATTR_NO_WARN);
-	if (!nr_mapped)
+	nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
+			rq_dma_dir(req), DMA_ATTR_NO_WARN | DMA_ATTR_P2PDMA);
+	if (!nr_mapped) {
+		ret = BLK_STS_IOERR;
 		goto out;
+	}
 
 	iod->use_sgl = nvme_pci_use_sgls(dev, req);
 	if (iod->use_sgl)
-- 
2.20.1