From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Chaitanya Kulkarni,
    Damien Le Moal, Christoph Hellwig, Sasha Levin
Subject: [PATCH 5.11 271/329] nvmet: add lba to sect conversion helpers
Date: Mon, 17 May 2021 16:03:02 +0200
Message-Id: <20210517140311.290856014@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210517140302.043055203@linuxfoundation.org>
References: <20210517140302.043055203@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Chaitanya Kulkarni

[ Upstream commit 193fcf371f9e3705c14a0bf1d4bfc44af0f7c124 ]

In this preparation patch, we add helpers to convert LBAs to sectors and
sectors to LBAs. This is needed to eliminate code duplication in the ZBD
backend. Use these helpers in the block device backend.
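For illustration only (an editor's sketch, not part of the upstream commit):
the helpers boil down to a shift by (blksize_shift - SECTOR_SHIFT). Assuming
a namespace with 4096-byte logical blocks (blksize_shift = 12) and the
kernel's 512-byte sector (SECTOR_SHIFT = 9), a minimal user-space model of
the same arithmetic, with the on-the-wire __le64 handling omitted, looks
like this:

/* Editor's sketch: models the conversion done by the nvmet helpers.
 * blksize_shift = 12 is an assumed example value, not taken from the patch. */
#include <stdio.h>
#include <stdint.h>

#define SECTOR_SHIFT 9	/* 512-byte kernel sectors */

static uint64_t sect_to_lba(unsigned int blksize_shift, uint64_t sect)
{
	/* how many logical blocks fit in the given number of 512-byte sectors */
	return sect >> (blksize_shift - SECTOR_SHIFT);
}

static uint64_t lba_to_sect(unsigned int blksize_shift, uint64_t lba)
{
	/* byte offset of the LBA, expressed in 512-byte sectors */
	return lba << (blksize_shift - SECTOR_SHIFT);
}

int main(void)
{
	unsigned int blksize_shift = 12;	/* assumed: 4096-byte logical blocks */

	/* LBA 10 on a 4K-block namespace starts at 512-byte sector 80 ... */
	printf("lba 10 -> sector %llu\n",
	       (unsigned long long)lba_to_sect(blksize_shift, 10));
	/* ... and sector 80 maps back to LBA 10. */
	printf("sector 80 -> lba %llu\n",
	       (unsigned long long)sect_to_lba(blksize_shift, 80));
	return 0;
}

The helpers added in nvmet.h differ only in that the LBA travels as a
little-endian __le64, hence the cpu_to_le64()/le64_to_cpu() wrappers, and in
using SECTOR_SHIFT rather than the bare constant 9 used by the open-coded
conversions they replace.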
Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Signed-off-by: Christoph Hellwig
Signed-off-by: Sasha Levin
---
 drivers/nvme/target/io-cmd-bdev.c |  8 +++-----
 drivers/nvme/target/nvmet.h       | 10 ++++++++++
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 125dde3f410e..23095bdfce06 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -256,8 +256,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	if (is_pci_p2pdma_page(sg_page(req->sg)))
 		op |= REQ_NOMERGE;
 
-	sector = le64_to_cpu(req->cmd->rw.slba);
-	sector <<= (req->ns->blksize_shift - 9);
+	sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
 
 	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
 		bio = &req->b.inline_bio;
@@ -345,7 +344,7 @@ static u16 nvmet_bdev_discard_range(struct nvmet_req *req,
 	int ret;
 
 	ret = __blkdev_issue_discard(ns->bdev,
-			le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
+			nvmet_lba_to_sect(ns, range->slba),
 			le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
 			GFP_KERNEL, 0, bio);
 	if (ret && ret != -EOPNOTSUPP) {
@@ -414,8 +413,7 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
 	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
-	sector = le64_to_cpu(write_zeroes->slba) <<
-		(req->ns->blksize_shift - 9);
+	sector = nvmet_lba_to_sect(req->ns, write_zeroes->slba);
 	nr_sector = (((sector_t)le16_to_cpu(write_zeroes->length) + 1) <<
 		(req->ns->blksize_shift - 9));
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 592763732065..8776dd1a0490 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -603,4 +603,14 @@ static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
 	return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
 }
 
+static inline __le64 nvmet_sect_to_lba(struct nvmet_ns *ns, sector_t sect)
+{
+	return cpu_to_le64(sect >> (ns->blksize_shift - SECTOR_SHIFT));
+}
+
+static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
+{
+	return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
+}
+
 #endif /* _NVMET_H */
-- 
2.30.2