From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Chaitanya Kulkarni, Damien Le Moal, Christoph Hellwig, Sasha Levin
Subject: [PATCH 5.10 231/289] nvmet: add lba to sect conversion helpers
Date: Mon, 17 May 2021 16:02:36 +0200
Message-Id: <20210517140312.946121942@linuxfoundation.org>
In-Reply-To: <20210517140305.140529752@linuxfoundation.org>
References: <20210517140305.140529752@linuxfoundation.org>
From: Chaitanya Kulkarni

[ Upstream commit 193fcf371f9e3705c14a0bf1d4bfc44af0f7c124 ]

In this preparation patch, add helpers to convert LBAs to sectors and sectors to LBAs. This is needed to eliminate code duplication in the ZBD backend. Use these helpers in the block device backend.

Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Signed-off-by: Christoph Hellwig
Signed-off-by: Sasha Levin
---
 drivers/nvme/target/io-cmd-bdev.c |  8 +++-----
 drivers/nvme/target/nvmet.h       | 10 ++++++++++
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 125dde3f410e..23095bdfce06 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -256,8 +256,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	if (is_pci_p2pdma_page(sg_page(req->sg)))
 		op |= REQ_NOMERGE;
 
-	sector = le64_to_cpu(req->cmd->rw.slba);
-	sector <<= (req->ns->blksize_shift - 9);
+	sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
 
 	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
 		bio = &req->b.inline_bio;
@@ -345,7 +344,7 @@ static u16 nvmet_bdev_discard_range(struct nvmet_req *req,
 	int ret;
 
 	ret = __blkdev_issue_discard(ns->bdev,
-			le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
+			nvmet_lba_to_sect(ns, range->slba),
 			le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
 			GFP_KERNEL, 0, bio);
 	if (ret && ret != -EOPNOTSUPP) {
@@ -414,8 +413,7 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
 	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
-	sector = le64_to_cpu(write_zeroes->slba) <<
-		(req->ns->blksize_shift - 9);
+	sector = nvmet_lba_to_sect(req->ns, write_zeroes->slba);
 	nr_sector = (((sector_t)le16_to_cpu(write_zeroes->length) + 1) <<
 			(req->ns->blksize_shift - 9));
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 559a15ccc322..c585f4152535 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -601,4 +601,14 @@ static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
 	return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
 }
 
+static inline __le64 nvmet_sect_to_lba(struct nvmet_ns *ns, sector_t sect)
+{
+	return cpu_to_le64(sect >> (ns->blksize_shift - SECTOR_SHIFT));
+}
+
+static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
+{
+	return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
+}
+
 #endif /* _NVMET_H */
-- 
2.30.2