From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Christoph Hellwig, Keith Busch, Marc Orr
Subject: [PATCH 5.4 03/72] nvme-pci: refactor nvme_unmap_data
Date: Fri, 5 Mar 2021 13:21:05 +0100
Message-Id: <20210305120857.514738570@linuxfoundation.org>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20210305120857.341630346@linuxfoundation.org>
References: <20210305120857.341630346@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Christoph Hellwig

commit 9275c206f88e5c49cb3e71932c81c8561083db9e upstream.

Split out three helpers from nvme_unmap_data that will allow finer
grained unwinding from nvme_map_data.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Marc Orr
Signed-off-by: Marc Orr
Signed-off-by: Greg Kroah-Hartman
---
 drivers/nvme/host/pci.c |   77 ++++++++++++++++++++++++++++++------------------
 1 file changed, 49 insertions(+), 28 deletions(-)

--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -528,50 +528,71 @@ static inline bool nvme_pci_use_sgls(str
 	return true;
 }
 
-static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
+static void nvme_free_prps(struct nvme_dev *dev, struct request *req)
 {
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	const int last_prp = dev->ctrl.page_size / sizeof(__le64) - 1;
-	dma_addr_t dma_addr = iod->first_dma, next_dma_addr;
+	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	dma_addr_t dma_addr = iod->first_dma;
 	int i;
 
-	if (iod->dma_len) {
-		dma_unmap_page(dev->dev, dma_addr, iod->dma_len,
-			       rq_dma_dir(req));
-		return;
+	for (i = 0; i < iod->npages; i++) {
+		__le64 *prp_list = nvme_pci_iod_list(req)[i];
+		dma_addr_t next_dma_addr = le64_to_cpu(prp_list[last_prp]);
+
+		dma_pool_free(dev->prp_page_pool, prp_list, dma_addr);
+		dma_addr = next_dma_addr;
 	}
 
-	WARN_ON_ONCE(!iod->nents);
+}
 
-	if (is_pci_p2pdma_page(sg_page(iod->sg)))
-		pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents,
-				    rq_dma_dir(req));
-	else
-		dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req));
+static void nvme_free_sgls(struct nvme_dev *dev, struct request *req)
+{
+	const int last_sg = SGES_PER_PAGE - 1;
+	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	dma_addr_t dma_addr = iod->first_dma;
+	int i;
 
+	for (i = 0; i < iod->npages; i++) {
+		struct nvme_sgl_desc *sg_list = nvme_pci_iod_list(req)[i];
+		dma_addr_t next_dma_addr = le64_to_cpu((sg_list[last_sg]).addr);
 
-	if (iod->npages == 0)
-		dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0],
-			dma_addr);
+		dma_pool_free(dev->prp_page_pool, sg_list, dma_addr);
+		dma_addr = next_dma_addr;
+	}
 
-	for (i = 0; i < iod->npages; i++) {
-		void *addr = nvme_pci_iod_list(req)[i];
+}
 
-		if (iod->use_sgl) {
-			struct nvme_sgl_desc *sg_list = addr;
+static void nvme_unmap_sg(struct nvme_dev *dev, struct request *req)
+{
+	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 
-			next_dma_addr =
-				le64_to_cpu((sg_list[SGES_PER_PAGE - 1]).addr);
-		} else {
-			__le64 *prp_list = addr;
+	if (is_pci_p2pdma_page(sg_page(iod->sg)))
+		pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents,
+				    rq_dma_dir(req));
+	else
+		dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req));
+}
 
-			next_dma_addr = le64_to_cpu(prp_list[last_prp]);
-		}
+static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
+{
+	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 
-		dma_pool_free(dev->prp_page_pool, addr, dma_addr);
-		dma_addr = next_dma_addr;
+	if (iod->dma_len) {
+		dma_unmap_page(dev->dev, iod->first_dma, iod->dma_len,
+			       rq_dma_dir(req));
+		return;
 	}
 
+	WARN_ON_ONCE(!iod->nents);
+
+	nvme_unmap_sg(dev, req);
+	if (iod->npages == 0)
+		dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0],
+			      iod->first_dma);
+	else if (iod->use_sgl)
+		nvme_free_sgls(dev, req);
+	else
+		nvme_free_prps(dev, req);
 	mempool_free(iod->sg, dev->iod_mempool);
 }
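
For readers outside the driver, the pattern shared by nvme_free_prps() and
nvme_free_sgls() above is a walk over a chain of descriptor pages in which
each page's last slot holds the bus address of the next page, so that slot
must be read before the page is handed back to its pool; the refactor keeps
that read-before-free ordering while splitting the PRP and SGL cases apart.
The standalone sketch below models only that walk, not the driver itself:
ENTRIES_PER_PAGE, build_chain() and free_chain() are made-up stand-ins, and
plain calloc()/free() replace dma_pool_alloc()/dma_pool_free().

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define ENTRIES_PER_PAGE 8			/* driver: page_size / sizeof(__le64) */
#define LAST_ENTRY (ENTRIES_PER_PAGE - 1)	/* driver: last_prp / last_sg */

/* Build npages descriptor pages; each page's last slot stores the
 * "address" of the next page, mimicking a chained PRP/SGL list. */
static uint64_t **build_chain(int npages)
{
	uint64_t **pages = calloc(npages, sizeof(*pages));
	int i;

	for (i = 0; i < npages; i++)
		pages[i] = calloc(ENTRIES_PER_PAGE, sizeof(uint64_t));
	for (i = 0; i < npages - 1; i++)
		pages[i][LAST_ENTRY] = (uint64_t)(uintptr_t)pages[i + 1];
	return pages;
}

/* Mirror of the loop in nvme_free_prps()/nvme_free_sgls(): fetch the
 * next page's address out of the last slot, then free the current page. */
static void free_chain(uint64_t **pages, int npages)
{
	uint64_t dma_addr = (uint64_t)(uintptr_t)pages[0];	/* driver: iod->first_dma */
	int i;

	for (i = 0; i < npages; i++) {
		uint64_t next_dma_addr = pages[i][LAST_ENTRY];

		printf("freeing page %d at address 0x%llx\n",
		       i, (unsigned long long)dma_addr);
		free(pages[i]);		/* driver: dma_pool_free(dev->prp_page_pool, ...) */
		dma_addr = next_dma_addr;
	}
	free(pages);
}

int main(void)
{
	uint64_t **chain = build_chain(4);

	free_chain(chain, 4);
	return 0;
}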