Changelog:
v7 -> v8
Remove wrongly added tags.
v6 -> v7
1. Remove UFS feature layer.
2. Cleanup for sparse error.
v5 -> v6
Change base commit to b53293fa662e28ae0cdd40828dc641c09f133405
v4 -> v5
Delete unused macro define.
v3 -> v4
1. Cleanup.
v2 -> v3
1. Add checking of the input module parameter value.
2. Change base commit from 5.8/scsi-queue to 5.9/scsi-queue.
3. Cleanup for unused variables and label.
v1 -> v2
1. Change the full boilerplate text to SPDX style.
2. Adopt dynamic allocation for sub-region data structure.
3. Cleanup.
NAND flash memory-based storage devices use Flash Translation Layer (FTL)
to translate logical addresses of I/O requests to corresponding flash
memory addresses. Mobile storage devices typically have RAM of
constrained size and thus lack the memory to keep the whole mapping table.
Therefore, mapping tables are partially retrieved from NAND flash on
demand, causing random-read performance degradation.
To improve random read performance, JESD220-3 (HPB v1.0) proposes
Host Performance Booster (HPB), which uses host system memory as a cache
for the FTL mapping table. With HPB, FTL map data can be read from host
memory faster than from NAND flash memory.
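As an illustration of the cost HPB removes (this is not code from this
series; all names and sizes are hypothetical), a device-side FTL with a
small map cache pays an extra NAND read whenever a lookup misses the
cached window:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: a device-side FTL keeps the full L2P table in
 * NAND and caches only a small window in device RAM.  A lookup outside
 * the cached window costs an extra NAND read before the user data can
 * be fetched -- the overhead HPB avoids by caching map entries in host
 * DRAM instead.  Names and sizes are illustrative only. */
#define CACHE_ENTRIES 4

static uint64_t nand_l2p[16];            /* full table, "in NAND" */
static uint64_t cache[CACHE_ENTRIES];    /* small device-RAM window */
static int cache_base = -1;              /* first LPN in the window */
static int extra_nand_reads;             /* map-load cost counter */

static uint64_t l2p_lookup(int lpn)
{
	if (cache_base < 0 || lpn < cache_base ||
	    lpn >= cache_base + CACHE_ENTRIES) {
		/* miss: load the window containing lpn from NAND */
		extra_nand_reads++;
		cache_base = (lpn / CACHE_ENTRIES) * CACHE_ENTRIES;
		memcpy(cache, &nand_l2p[cache_base], sizeof(cache));
	}
	return cache[lpn - cache_base];
}
```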
The current version supports only the device control mode (DCM).
This patch series consists of 3 parts to support the HPB feature:
1) HPB probe and initialization process
2) READ -> HPB READ using cached map information
3) L2P (logical to physical) map management
In the HPB probe and init process, the UFS device information is
queried. After the supported features are checked, the HPB data
structures are initialized according to the device information.
The HPB converts a read I/O targeting an active sub-region, whose map
is cached, into an HPB READ.
The HPB manages the L2P map using information received from the
device. For an active sub-region, the HPB caches the map through a
ufshpb_map request. For an inactive region, the HPB discards the L2P
map.
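A minimal sketch of this activate/deactivate decision, mirroring the
HPB_RGN_ACTIVE/HPB_RGN_INACTIVE states defined in ufshpb.h (the map
load is stubbed out here; in the driver it is done through a
ufshpb_map request):

```c
#include <assert.h>

/* Illustrative sketch only, not driver code: a region's L2P map is
 * cached when the device recommends activation and discarded when the
 * device recommends deactivation. */
enum rgn_state { RGN_INACTIVE, RGN_ACTIVE };

struct region {
	enum rgn_state state;
	int map_cached;		/* stands in for the cached L2P entries */
};

static void activate_region(struct region *rgn)
{
	rgn->map_cached = 1;	/* the driver would issue a ufshpb_map request */
	rgn->state = RGN_ACTIVE;
}

static void deactivate_region(struct region *rgn)
{
	rgn->map_cached = 0;	/* discard the cached L2P map */
	rgn->state = RGN_INACTIVE;
}
```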
When a write I/O occurs in an active sub-region, the associated entry
in the dirty bitmap is marked dirty to prevent stale reads.
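The stale-read guard can be sketched as follows (hypothetical names,
not taken from the driver): a write sets the entry's bit in a
per-sub-region dirty bitmap, and a later read consults that bit to
decide whether the cached physical address may be stale:

```c
#include <assert.h>
#include <limits.h>

/* Illustrative per-sub-region dirty bitmap; entry count and helper
 * names are assumptions for the sketch. */
#define ENTRIES_PER_SRGN 64
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

static unsigned long dirty[(ENTRIES_PER_SRGN + BITS_PER_WORD - 1) /
			   BITS_PER_WORD];

/* a write I/O to this entry marks it dirty */
static void mark_entry_dirty(unsigned int entry)
{
	dirty[entry / BITS_PER_WORD] |= 1UL << (entry % BITS_PER_WORD);
}

/* a read I/O checks the bit; a set bit means fall back to normal READ */
static int entry_is_dirty(unsigned int entry)
{
	return !!(dirty[entry / BITS_PER_WORD] &
		  (1UL << (entry % BITS_PER_WORD)));
}
```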
HPB has been shown to improve random read performance by 58% to 67%. [1]
This patch series is based on the 5.9/scsi-queue branch.
[1]:
https://www.usenix.org/conference/hotstorage17/program/presentation/jeong
Daejun Park (4):
scsi: ufs: Add UFS feature related parameter
scsi: ufs: Introduce HPB feature
scsi: ufs: L2P map management for HPB read
scsi: ufs: Prepare HPB read for cached sub-region
drivers/scsi/ufs/Kconfig | 18 +
drivers/scsi/ufs/Makefile | 1 +
drivers/scsi/ufs/ufs.h | 12 +
drivers/scsi/ufs/ufshcd.c | 42 +
drivers/scsi/ufs/ufshcd.h | 9 +
drivers/scsi/ufs/ufshpb.c | 1926 ++++++++++++++++++++++++++++++++++++++++
drivers/scsi/ufs/ufshpb.h | 241 +++++
7 files changed, 2249 insertions(+)
create mode 100644 drivers/scsi/ufs/ufshpb.c
create mode 100644 drivers/scsi/ufs/ufshpb.h
This patch adds parameters to be used by the UFS feature and the HPB
module.
Reviewed-by: Can Guo <[email protected]>
Tested-by: Bean Huo <[email protected]>
Signed-off-by: Daejun Park <[email protected]>
---
drivers/scsi/ufs/ufs.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
index f8ab16f30fdc..ae557b8d3eba 100644
--- a/drivers/scsi/ufs/ufs.h
+++ b/drivers/scsi/ufs/ufs.h
@@ -122,6 +122,7 @@ enum flag_idn {
QUERY_FLAG_IDN_WB_EN = 0x0E,
QUERY_FLAG_IDN_WB_BUFF_FLUSH_EN = 0x0F,
QUERY_FLAG_IDN_WB_BUFF_FLUSH_DURING_HIBERN8 = 0x10,
+ QUERY_FLAG_IDN_HPB_RESET = 0x11,
};
/* Attribute idn for Query requests */
@@ -195,6 +196,9 @@ enum unit_desc_param {
UNIT_DESC_PARAM_PHY_MEM_RSRC_CNT = 0x18,
UNIT_DESC_PARAM_CTX_CAPABILITIES = 0x20,
UNIT_DESC_PARAM_LARGE_UNIT_SIZE_M1 = 0x22,
+ UNIT_DESC_HPB_LU_MAX_ACTIVE_REGIONS = 0x23,
+ UNIT_DESC_HPB_LU_PIN_REGION_START_OFFSET = 0x25,
+ UNIT_DESC_HPB_LU_NUM_PIN_REGIONS = 0x27,
UNIT_DESC_PARAM_WB_BUF_ALLOC_UNITS = 0x29,
};
@@ -235,6 +239,8 @@ enum device_desc_param {
DEVICE_DESC_PARAM_PSA_MAX_DATA = 0x25,
DEVICE_DESC_PARAM_PSA_TMT = 0x29,
DEVICE_DESC_PARAM_PRDCT_REV = 0x2A,
+ DEVICE_DESC_PARAM_HPB_VER = 0x40,
+ DEVICE_DESC_PARAM_HPB_CONTROL = 0x42,
DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP = 0x4F,
DEVICE_DESC_PARAM_WB_PRESRV_USRSPC_EN = 0x53,
DEVICE_DESC_PARAM_WB_TYPE = 0x54,
@@ -283,6 +289,10 @@ enum geometry_desc_param {
GEOMETRY_DESC_PARAM_ENM4_MAX_NUM_UNITS = 0x3E,
GEOMETRY_DESC_PARAM_ENM4_CAP_ADJ_FCTR = 0x42,
GEOMETRY_DESC_PARAM_OPT_LOG_BLK_SIZE = 0x44,
+ GEOMETRY_DESC_HPB_REGION_SIZE = 0x48,
+ GEOMETRY_DESC_HPB_NUMBER_LU = 0x49,
+ GEOMETRY_DESC_HPB_SUBREGION_SIZE = 0x4A,
+ GEOMETRY_DESC_HPB_DEVICE_MAX_ACTIVE_REGIONS = 0x4B,
GEOMETRY_DESC_PARAM_WB_MAX_ALLOC_UNITS = 0x4F,
GEOMETRY_DESC_PARAM_WB_MAX_WB_LUNS = 0x53,
GEOMETRY_DESC_PARAM_WB_BUFF_CAP_ADJ = 0x54,
@@ -327,6 +337,7 @@ enum {
/* Possible values for dExtendedUFSFeaturesSupport */
enum {
+ UFS_DEV_HPB_SUPPORT = BIT(7),
UFS_DEV_WRITE_BOOSTER_SUP = BIT(8),
};
@@ -537,6 +548,7 @@ struct ufs_dev_info {
u8 *model;
u16 wspecversion;
u32 clk_gating_wait_us;
+ u8 b_ufs_feature_sup;
u32 d_ext_ufs_feature_sup;
u8 b_wb_buffer_type;
u32 d_wb_alloc_units;
--
2.17.1
This patch introduces the HPB feature and adds HPB function calls to
the UFS core driver.
The minimum size of the memory pool used by the HPB is implemented as a
Kconfig parameter (SCSI_UFS_HPB_HOST_MEM), so that it is configurable.
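The resize rule described in the Kconfig help can be sketched as
follows (the function name is hypothetical; the driver only guarantees
the behavior, not this exact helper):

```c
#include <assert.h>

/* Illustrative sketch: the configured pool size (in KB) is honored
 * only up to what the device geometry actually requires; anything
 * larger is clamped down by the kernel. */
static unsigned int hpb_host_mem_kb(unsigned int configured_kb,
				    unsigned int required_kb)
{
	return configured_kb > required_kb ? required_kb : configured_kb;
}
```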
Tested-by: Bean Huo <[email protected]>
Signed-off-by: Daejun Park <[email protected]>
---
drivers/scsi/ufs/Kconfig | 18 +
drivers/scsi/ufs/Makefile | 1 +
drivers/scsi/ufs/ufshcd.c | 42 +++
drivers/scsi/ufs/ufshcd.h | 9 +
drivers/scsi/ufs/ufshpb.c | 738 ++++++++++++++++++++++++++++++++++++++
drivers/scsi/ufs/ufshpb.h | 169 +++++++++
6 files changed, 977 insertions(+)
create mode 100644 drivers/scsi/ufs/ufshpb.c
create mode 100644 drivers/scsi/ufs/ufshpb.h
diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
index f6394999b98c..33296478f411 100644
--- a/drivers/scsi/ufs/Kconfig
+++ b/drivers/scsi/ufs/Kconfig
@@ -182,3 +182,21 @@ config SCSI_UFS_CRYPTO
Enabling this makes it possible for the kernel to use the crypto
capabilities of the UFS device (if present) to perform crypto
operations on data being transferred to/from the device.
+
+config SCSI_UFS_HPB
+ bool "Support UFS Host Performance Booster"
+ depends on SCSI_UFSHCD
+ help
+ The UFS HPB feature improves random read performance. It caches the
+ UFS device's L2P map in host DRAM. The driver issues HPB READ
+ commands carrying the physical page number, bypassing the FTL's L2P
+ address translation.
+
+config SCSI_UFS_HPB_HOST_MEM
+ int "Host-side cached memory size (KB) for HPB support"
+ default 32
+ depends on SCSI_UFS_HPB
+ help
+ The minimum size of the memory pool used by the HPB module. It is
+ configurable by the user. If this value is larger than the required
+ memory size, the kernel resizes the cached memory pool.
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index 4679af1b564e..663e17cee359 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -11,6 +11,7 @@ obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o
ufshcd-core-y += ufshcd.o ufs-sysfs.o
ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o
ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
+ufshcd-core-$(CONFIG_SCSI_UFS_HPB) += ufshpb.o
obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o
obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 307622284239..c60a6bf6ddc6 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -234,6 +234,17 @@ static int ufshcd_wb_ctrl(struct ufs_hba *hba, bool enable);
static int ufshcd_wb_toggle_flush_during_h8(struct ufs_hba *hba, bool set);
static inline void ufshcd_wb_toggle_flush(struct ufs_hba *hba, bool enable);
+#ifndef CONFIG_SCSI_UFS_HPB
+static void ufshpb_resume(struct ufs_hba *hba) {}
+static void ufshpb_suspend(struct ufs_hba *hba) {}
+static void ufshpb_reset(struct ufs_hba *hba) {}
+static void ufshpb_reset_host(struct ufs_hba *hba) {}
+static void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
+static void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
+static void ufshpb_remove(struct ufs_hba *hba) {}
+static void ufshpb_scan_feature(struct ufs_hba *hba) {}
+#endif
+
static inline bool ufshcd_valid_tag(struct ufs_hba *hba, int tag)
{
return tag >= 0 && tag < hba->nutrs;
@@ -2559,6 +2570,8 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
ufshcd_comp_scsi_upiu(hba, lrbp);
+ ufshpb_prep(hba, lrbp);
+
err = ufshcd_map_sg(hba, lrbp);
if (err) {
lrbp->cmd = NULL;
@@ -4681,6 +4694,19 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth)
return scsi_change_queue_depth(sdev, depth);
}
+static void ufshcd_hpb_configure(struct ufs_hba *hba, struct scsi_device *sdev)
+{
+ /* skip well-known LU */
+ if (sdev->lun >= UFS_UPIU_MAX_UNIT_NUM_ID)
+ return;
+
+ if (!(hba->dev_info.b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT))
+ return;
+
+ atomic_inc(&hba->ufsf.slave_conf_cnt);
+ wake_up(&hba->ufsf.sdev_wait);
+}
+
/**
* ufshcd_slave_configure - adjust SCSI device configurations
* @sdev: pointer to SCSI device
@@ -4690,6 +4716,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
struct ufs_hba *hba = shost_priv(sdev->host);
struct request_queue *q = sdev->request_queue;
+ ufshcd_hpb_configure(hba, sdev);
+
blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
if (ufshcd_is_rpm_autosuspend_allowed(hba))
@@ -4818,6 +4846,9 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
*/
pm_runtime_get_noresume(hba->dev);
}
+
+ if (scsi_status == SAM_STAT_GOOD)
+ ufshpb_rsp_upiu(hba, lrbp);
break;
case UPIU_TRANSACTION_REJECT_UPIU:
/* TODO: handle Reject UPIU Response */
@@ -6569,6 +6600,8 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
* Stop the host controller and complete the requests
* cleared by h/w
*/
+ ufshpb_reset_host(hba);
+
ufshcd_hba_stop(hba);
spin_lock_irqsave(hba->host->host_lock, flags);
@@ -7003,6 +7036,7 @@ static int ufs_get_device_desc(struct ufs_hba *hba)
/* getting Specification Version in big endian format */
dev_info->wspecversion = desc_buf[DEVICE_DESC_PARAM_SPEC_VER] << 8 |
desc_buf[DEVICE_DESC_PARAM_SPEC_VER + 1];
+ dev_info->b_ufs_feature_sup = desc_buf[DEVICE_DESC_PARAM_UFS_FEAT];
model_index = desc_buf[DEVICE_DESC_PARAM_PRDCT_NAME];
@@ -7373,6 +7407,7 @@ static int ufshcd_add_lus(struct ufs_hba *hba)
}
ufs_bsg_probe(hba);
+ ufshpb_scan_feature(hba);
scsi_scan_host(hba->host);
pm_runtime_put_sync(hba->dev);
@@ -7461,6 +7496,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool async)
/* Enable Auto-Hibernate if configured */
ufshcd_auto_hibern8_enable(hba);
+ ufshpb_reset(hba);
out:
trace_ufshcd_init(dev_name(hba->dev), ret,
@@ -8229,6 +8265,8 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
req_link_state = UIC_LINK_OFF_STATE;
}
+ ufshpb_suspend(hba);
+
/*
* If we can't transition into any of the low power modes
* just gate the clocks.
@@ -8350,6 +8388,7 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
hba->clk_gating.is_suspended = false;
hba->dev_info.b_rpm_dev_flush_capable = false;
ufshcd_release(hba);
+ ufshpb_resume(hba);
out:
if (hba->dev_info.b_rpm_dev_flush_capable) {
schedule_delayed_work(&hba->rpm_dev_flush_recheck_work,
@@ -8446,6 +8485,8 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
/* Enable Auto-Hibernate if configured */
ufshcd_auto_hibern8_enable(hba);
+ ufshpb_resume(hba);
+
if (hba->dev_info.b_rpm_dev_flush_capable) {
hba->dev_info.b_rpm_dev_flush_capable = false;
cancel_delayed_work(&hba->rpm_dev_flush_recheck_work);
@@ -8670,6 +8711,7 @@ EXPORT_SYMBOL(ufshcd_shutdown);
void ufshcd_remove(struct ufs_hba *hba)
{
ufs_bsg_remove(hba);
+ ufshpb_remove(hba);
ufs_sysfs_remove_nodes(hba->dev);
blk_cleanup_queue(hba->tmf_queue);
blk_mq_free_tag_set(&hba->tmf_tag_set);
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index b2ef18f1b746..904c19796e09 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -47,6 +47,9 @@
#include "ufs.h"
#include "ufs_quirks.h"
#include "ufshci.h"
+#ifdef CONFIG_SCSI_UFS_HPB
+#include "ufshpb.h"
+#endif
#define UFSHCD "ufshcd"
#define UFSHCD_DRIVER_VERSION "0.2"
@@ -579,6 +582,11 @@ struct ufs_hba_variant_params {
u32 wb_flush_threshold;
};
+struct ufsf_feature_info {
+ atomic_t slave_conf_cnt;
+ wait_queue_head_t sdev_wait;
+};
+
/**
* struct ufs_hba - per adapter private structure
* @mmio_base: UFSHCI base register address
@@ -757,6 +765,7 @@ struct ufs_hba {
bool wb_enabled;
struct delayed_work rpm_dev_flush_recheck_work;
+ struct ufsf_feature_info ufsf;
#ifdef CONFIG_SCSI_UFS_CRYPTO
union ufs_crypto_capabilities crypto_capabilities;
union ufs_crypto_cap_entry *crypto_cap_array;
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
new file mode 100644
index 000000000000..e1f9c68ae415
--- /dev/null
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -0,0 +1,738 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Universal Flash Storage Host Performance Booster
+ *
+ * Copyright (C) 2017-2018 Samsung Electronics Co., Ltd.
+ *
+ * Authors:
+ * Yongmyung Lee <[email protected]>
+ * Jinyoung Choi <[email protected]>
+ */
+
+#include <asm/unaligned.h>
+#include <linux/async.h>
+
+#include "ufshcd.h"
+#include "ufshpb.h"
+
+/* SYSFS functions */
+#define ufshpb_sysfs_attr_show_func(__name) \
+static ssize_t __name##_show(struct ufshpb_lu *hpb, char *buf) \
+{ \
+ return snprintf(buf, PAGE_SIZE, "%d\n", \
+ atomic_read(&hpb->stats.__name)); \
+}
+
+#define HPB_ATTR_RO(_name) \
+ struct ufshpb_sysfs_entry hpb_attr_##_name = __ATTR_RO(_name)
+
+/* HPB enabled lu list */
+static LIST_HEAD(lh_hpb_lu);
+
+static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb);
+
+static inline int ufshpb_is_valid_srgn(struct ufshpb_region *rgn,
+ struct ufshpb_subregion *srgn)
+{
+ return rgn->rgn_state != HPB_RGN_INACTIVE &&
+ srgn->srgn_state == HPB_SRGN_VALID;
+}
+
+static inline int ufshpb_get_state(struct ufshpb_lu *hpb)
+{
+ return atomic_read(&hpb->hpb_state);
+}
+
+static inline void ufshpb_set_state(struct ufshpb_lu *hpb, int state)
+{
+ atomic_set(&hpb->hpb_state, state);
+}
+
+void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+{
+}
+
+void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+{
+}
+
+static void ufshpb_init_subregion_tbl(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn)
+{
+ int srgn_idx;
+
+ for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+ struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx;
+
+ srgn->rgn_idx = rgn->rgn_idx;
+ srgn->srgn_idx = srgn_idx;
+ srgn->srgn_state = HPB_SRGN_UNUSED;
+ }
+}
+
+static inline int ufshpb_alloc_subregion_tbl(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn,
+ int srgn_cnt)
+{
+ rgn->srgn_tbl = kvcalloc(srgn_cnt, sizeof(struct ufshpb_subregion),
+ GFP_KERNEL);
+ if (!rgn->srgn_tbl)
+ return -ENOMEM;
+
+ rgn->srgn_cnt = srgn_cnt;
+ return 0;
+}
+
+static void ufshpb_init_lu_parameter(struct ufs_hba *hba,
+ struct ufshpb_lu *hpb,
+ struct ufshpb_dev_info *hpb_dev_info,
+ struct ufshpb_lu_info *hpb_lu_info)
+{
+ u32 entries_per_rgn;
+ u64 rgn_mem_size;
+
+ hpb->lu_pinned_start = hpb_lu_info->pinned_start;
+ hpb->lu_pinned_end = hpb_lu_info->num_pinned ?
+ (hpb_lu_info->pinned_start + hpb_lu_info->num_pinned - 1)
+ : PINNED_NOT_SET;
+
+ rgn_mem_size = (1ULL << hpb_dev_info->rgn_size) * HPB_RGN_SIZE_UNIT
+ / HPB_ENTRY_BLOCK_SIZE * HPB_ENTRY_SIZE;
+ hpb->srgn_mem_size = (1ULL << hpb_dev_info->srgn_size)
+ * HPB_RGN_SIZE_UNIT / HPB_ENTRY_BLOCK_SIZE * HPB_ENTRY_SIZE;
+
+ entries_per_rgn = rgn_mem_size / HPB_ENTRY_SIZE;
+ hpb->entries_per_rgn_shift = ilog2(entries_per_rgn);
+ hpb->entries_per_rgn_mask = entries_per_rgn - 1;
+
+ hpb->entries_per_srgn = hpb->srgn_mem_size / HPB_ENTRY_SIZE;
+ hpb->entries_per_srgn_shift = ilog2(hpb->entries_per_srgn);
+ hpb->entries_per_srgn_mask = hpb->entries_per_srgn - 1;
+
+ hpb->srgns_per_rgn = rgn_mem_size / hpb->srgn_mem_size;
+
+ hpb->rgns_per_lu = DIV_ROUND_UP(hpb_lu_info->num_blocks,
+ (rgn_mem_size / HPB_ENTRY_SIZE));
+ hpb->srgns_per_lu = DIV_ROUND_UP(hpb_lu_info->num_blocks,
+ (hpb->srgn_mem_size / HPB_ENTRY_SIZE));
+
+ hpb->pages_per_srgn = hpb->srgn_mem_size / PAGE_SIZE;
+
+ dev_info(hba->dev, "ufshpb(%d): region memory size - %llu (bytes)\n",
+ hpb->lun, rgn_mem_size);
+ dev_info(hba->dev, "ufshpb(%d): subregion memory size - %u (bytes)\n",
+ hpb->lun, hpb->srgn_mem_size);
+ dev_info(hba->dev, "ufshpb(%d): total blocks per lu - %d\n",
+ hpb->lun, hpb_lu_info->num_blocks);
+ dev_info(hba->dev, "ufshpb(%d): subregions per region - %d, regions per lu - %u",
+ hpb->lun, hpb->srgns_per_rgn, hpb->rgns_per_lu);
+}
+
+static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
+{
+ struct ufshpb_region *rgn_table, *rgn;
+ int rgn_idx, i;
+ int ret = 0;
+
+ rgn_table = kvcalloc(hpb->rgns_per_lu, sizeof(struct ufshpb_region),
+ GFP_KERNEL);
+ if (!rgn_table)
+ return -ENOMEM;
+
+ hpb->rgn_tbl = rgn_table;
+
+ for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+ int srgn_cnt = hpb->srgns_per_rgn;
+
+ rgn = rgn_table + rgn_idx;
+ rgn->rgn_idx = rgn_idx;
+
+ if (rgn_idx == hpb->rgns_per_lu - 1)
+ srgn_cnt = ((hpb->srgns_per_lu - 1) %
+ hpb->srgns_per_rgn) + 1;
+
+ ret = ufshpb_alloc_subregion_tbl(hpb, rgn, srgn_cnt);
+ if (ret)
+ goto release_srgn_table;
+ ufshpb_init_subregion_tbl(hpb, rgn);
+
+ rgn->rgn_state = HPB_RGN_INACTIVE;
+ }
+
+ return 0;
+
+release_srgn_table:
+ for (i = 0; i < rgn_idx; i++) {
+ rgn = rgn_table + i;
+ if (rgn->srgn_tbl)
+ kvfree(rgn->srgn_tbl);
+ }
+ kvfree(rgn_table);
+ return ret;
+}
+
+static void ufshpb_destroy_subregion_tbl(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn)
+{
+ int srgn_idx;
+
+ for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+ struct ufshpb_subregion *srgn;
+
+ srgn = rgn->srgn_tbl + srgn_idx;
+ srgn->srgn_state = HPB_SRGN_UNUSED;
+ }
+}
+
+static void ufshpb_destroy_region_tbl(struct ufshpb_lu *hpb)
+{
+ int rgn_idx;
+
+ for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+ struct ufshpb_region *rgn;
+
+ rgn = hpb->rgn_tbl + rgn_idx;
+ if (rgn->rgn_state != HPB_RGN_INACTIVE) {
+ rgn->rgn_state = HPB_RGN_INACTIVE;
+
+ ufshpb_destroy_subregion_tbl(hpb, rgn);
+ }
+
+ kvfree(rgn->srgn_tbl);
+ }
+
+ kvfree(hpb->rgn_tbl);
+}
+
+static void ufshpb_stat_init(struct ufshpb_lu *hpb)
+{
+ atomic_set(&hpb->stats.hit_cnt, 0);
+ atomic_set(&hpb->stats.miss_cnt, 0);
+ atomic_set(&hpb->stats.rb_noti_cnt, 0);
+ atomic_set(&hpb->stats.rb_active_cnt, 0);
+ atomic_set(&hpb->stats.rb_inactive_cnt, 0);
+ atomic_set(&hpb->stats.map_req_cnt, 0);
+}
+
+struct ufshpb_sysfs_entry {
+ struct attribute attr;
+ ssize_t (*show)(struct ufshpb_lu *hpb, char *page);
+ ssize_t (*store)(struct ufshpb_lu *hpb, const char *page, size_t len);
+};
+
+ufshpb_sysfs_attr_show_func(hit_cnt);
+ufshpb_sysfs_attr_show_func(miss_cnt);
+ufshpb_sysfs_attr_show_func(rb_noti_cnt);
+ufshpb_sysfs_attr_show_func(rb_active_cnt);
+ufshpb_sysfs_attr_show_func(rb_inactive_cnt);
+ufshpb_sysfs_attr_show_func(map_req_cnt);
+
+static HPB_ATTR_RO(hit_cnt);
+static HPB_ATTR_RO(miss_cnt);
+static HPB_ATTR_RO(rb_noti_cnt);
+static HPB_ATTR_RO(rb_active_cnt);
+static HPB_ATTR_RO(rb_inactive_cnt);
+static HPB_ATTR_RO(map_req_cnt);
+
+static struct attribute *hpb_dev_attrs[] = {
+ &hpb_attr_hit_cnt.attr,
+ &hpb_attr_miss_cnt.attr,
+ &hpb_attr_rb_noti_cnt.attr,
+ &hpb_attr_rb_active_cnt.attr,
+ &hpb_attr_rb_inactive_cnt.attr,
+ &hpb_attr_map_req_cnt.attr,
+ NULL,
+};
+
+static struct attribute_group ufshpb_sysfs_group = {
+ .attrs = hpb_dev_attrs,
+};
+
+static ssize_t ufshpb_attr_show(struct kobject *kobj, struct attribute *attr,
+ char *page)
+{
+ struct ufshpb_sysfs_entry *entry;
+ struct ufshpb_lu *hpb;
+ ssize_t error;
+
+ entry = container_of(attr, struct ufshpb_sysfs_entry, attr);
+ hpb = container_of(kobj, struct ufshpb_lu, kobj);
+
+ if (!entry->show)
+ return -EIO;
+
+ mutex_lock(&hpb->sysfs_lock);
+ error = entry->show(hpb, page);
+ mutex_unlock(&hpb->sysfs_lock);
+ return error;
+}
+
+static ssize_t ufshpb_attr_store(struct kobject *kobj, struct attribute *attr,
+ const char *page, size_t len)
+{
+ struct ufshpb_sysfs_entry *entry;
+ struct ufshpb_lu *hpb;
+ ssize_t error;
+
+ entry = container_of(attr, struct ufshpb_sysfs_entry, attr);
+ hpb = container_of(kobj, struct ufshpb_lu, kobj);
+
+ if (!entry->store)
+ return -EIO;
+
+ mutex_lock(&hpb->sysfs_lock);
+ error = entry->store(hpb, page, len);
+ mutex_unlock(&hpb->sysfs_lock);
+ return error;
+}
+
+static const struct sysfs_ops ufshpb_sysfs_ops = {
+ .show = ufshpb_attr_show,
+ .store = ufshpb_attr_store,
+};
+
+static struct kobj_type ufshpb_ktype = {
+ .sysfs_ops = &ufshpb_sysfs_ops,
+ .release = NULL,
+};
+
+static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb)
+{
+ int ret;
+
+ ufshpb_stat_init(hpb);
+
+ kobject_init(&hpb->kobj, &ufshpb_ktype);
+ mutex_init(&hpb->sysfs_lock);
+
+ ret = kobject_add(&hpb->kobj, kobject_get(&hba->dev->kobj),
+ "ufshpb_lu%d", hpb->lun);
+
+ if (ret)
+ return ret;
+
+ ret = sysfs_create_group(&hpb->kobj, &ufshpb_sysfs_group);
+
+ if (ret) {
+ dev_err(hba->dev, "ufshpb_lu%d create file error\n", hpb->lun);
+ return ret;
+ }
+
+ dev_info(hba->dev, "ufshpb_lu%d sysfs adds uevent", hpb->lun);
+ kobject_uevent(&hpb->kobj, KOBJ_ADD);
+
+ return 0;
+}
+
+static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb,
+ struct ufshpb_dev_info *hpb_dev_info)
+{
+ int ret;
+
+ spin_lock_init(&hpb->hpb_state_lock);
+
+ ret = ufshpb_alloc_region_tbl(hba, hpb);
+ if (ret)
+ return ret;
+
+ ret = ufshpb_create_sysfs(hba, hpb);
+ if (ret)
+ goto release_rgn_table;
+
+ return 0;
+
+release_rgn_table:
+ ufshpb_destroy_region_tbl(hpb);
+ return ret;
+}
+
+static struct ufshpb_lu *ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
+ struct ufshpb_dev_info *hpb_dev_info,
+ struct ufshpb_lu_info *hpb_lu_info)
+{
+ struct ufshpb_lu *hpb;
+ int ret;
+
+ hpb = kzalloc(sizeof(struct ufshpb_lu), GFP_KERNEL);
+ if (!hpb)
+ return NULL;
+
+ hpb->ufsf = &hba->ufsf;
+ hpb->lun = lun;
+
+ ufshpb_init_lu_parameter(hba, hpb, hpb_dev_info, hpb_lu_info);
+
+ ret = ufshpb_lu_hpb_init(hba, hpb, hpb_dev_info);
+ if (ret) {
+ dev_err(hba->dev, "hpb lu init failed. ret %d", ret);
+ goto release_hpb;
+ }
+
+ return hpb;
+
+release_hpb:
+ kfree(hpb);
+ return NULL;
+}
+
+static void ufshpb_issue_hpb_reset_query(struct ufs_hba *hba)
+{
+ int err;
+ int retries;
+
+ for (retries = 0; retries < HPB_RESET_REQ_RETRIES; retries++) {
+ err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_SET_FLAG,
+ QUERY_FLAG_IDN_HPB_RESET, 0, NULL);
+ if (err)
+ dev_dbg(hba->dev,
+ "%s: failed with error %d, retries %d\n",
+ __func__, err, retries);
+ else
+ break;
+ }
+
+ if (err) {
+ dev_err(hba->dev,
+ "%s setting fHpbReset flag failed with error %d\n",
+ __func__, err);
+ return;
+ }
+}
+
+static void ufshpb_check_hpb_reset_query(struct ufs_hba *hba)
+{
+ int err;
+ bool flag_res = true;
+ int try = 0;
+
+ /* wait for the device to complete HPB reset query */
+ do {
+ if (++try == HPB_RESET_REQ_RETRIES)
+ break;
+
+ dev_info(hba->dev,
+ "%s start flag reset polling %d times\n",
+ __func__, try);
+
+ /* Poll fHpbReset flag to be cleared */
+ err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_READ_FLAG,
+ QUERY_FLAG_IDN_HPB_RESET, 0, &flag_res);
+ usleep_range(1000, 1100);
+ } while (flag_res);
+
+ if (err) {
+ dev_err(hba->dev,
+ "%s reading fHpbReset flag failed with error %d\n",
+ __func__, err);
+ return;
+ }
+
+ if (flag_res) {
+ dev_err(hba->dev,
+ "%s fHpbReset was not cleared by the device\n",
+ __func__);
+ }
+}
+
+void ufshpb_reset(struct ufs_hba *hba)
+{
+ struct ufshpb_lu *hpb;
+
+ list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu)
+ ufshpb_set_state(hpb, HPB_PRESENT);
+}
+
+void ufshpb_reset_host(struct ufs_hba *hba)
+{
+ struct ufshpb_lu *hpb;
+
+ dev_info(hba->dev, "ufshpb run reset_host");
+
+ list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu)
+ ufshpb_set_state(hpb, HPB_RESET);
+}
+
+void ufshpb_suspend(struct ufs_hba *hba)
+{
+ struct ufshpb_lu *hpb;
+
+ dev_info(hba->dev, "ufshpb goto suspend");
+
+ list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu)
+ ufshpb_set_state(hpb, HPB_SUSPEND);
+}
+
+void ufshpb_resume(struct ufs_hba *hba)
+{
+ struct ufshpb_lu *hpb;
+
+ dev_info(hba->dev, "ufshpb resume");
+
+ list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu)
+ ufshpb_set_state(hpb, HPB_PRESENT);
+}
+
+static int ufshpb_read_desc(struct ufs_hba *hba, u8 desc_id, u8 desc_index,
+ u8 selector, u8 *desc_buf)
+{
+ int err = 0;
+ int size;
+
+ ufshcd_map_desc_id_to_length(hba, desc_id, &size);
+
+ pm_runtime_get_sync(hba->dev);
+
+ err = ufshcd_query_descriptor_retry(hba, UPIU_QUERY_OPCODE_READ_DESC,
+ desc_id, desc_index,
+ selector,
+ desc_buf, &size);
+ if (err)
+ dev_err(hba->dev, "read desc failed: %d, id %d, idx %d\n",
+ err, desc_id, desc_index);
+
+ pm_runtime_put_sync(hba->dev);
+
+ return err;
+}
+
+static int ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf,
+ struct ufshpb_dev_info *hpb_dev_info)
+{
+ int hpb_device_max_active_rgns = 0;
+ int hpb_num_lu;
+
+ hpb_num_lu = geo_buf[GEOMETRY_DESC_HPB_NUMBER_LU];
+ if (hpb_num_lu == 0) {
+ dev_err(hba->dev, "No HPB LU supported\n");
+ return -ENODEV;
+ }
+
+ hpb_dev_info->rgn_size = geo_buf[GEOMETRY_DESC_HPB_REGION_SIZE];
+ hpb_dev_info->srgn_size = geo_buf[GEOMETRY_DESC_HPB_SUBREGION_SIZE];
+ hpb_device_max_active_rgns =
+ get_unaligned_be16(geo_buf +
+ GEOMETRY_DESC_HPB_DEVICE_MAX_ACTIVE_REGIONS);
+
+ if (hpb_dev_info->rgn_size == 0 || hpb_dev_info->srgn_size == 0 ||
+ hpb_device_max_active_rgns == 0) {
+ dev_err(hba->dev, "No HPB supported device\n");
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int ufshpb_get_dev_info(struct ufs_hba *hba,
+ struct ufshpb_dev_info *hpb_dev_info,
+ u8 *desc_buf)
+{
+ int ret;
+ int version;
+ u8 hpb_mode;
+
+ ret = ufshpb_read_desc(hba, QUERY_DESC_IDN_DEVICE, 0, 0, desc_buf);
+ if (ret) {
+ dev_err(hba->dev, "%s: idn: %d query request failed\n",
+ __func__, QUERY_DESC_IDN_DEVICE);
+ return -ENODEV;
+ }
+
+ hpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL];
+ if (hpb_mode == HPB_HOST_CONTROL) {
+ dev_err(hba->dev, "%s: host control mode is not supported.\n",
+ __func__);
+ return -ENODEV;
+ }
+
+ version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER);
+ if (version != HPB_SUPPORT_VERSION) {
+ dev_err(hba->dev, "%s: HPB version %x is not supported.\n",
+ __func__, version);
+ return -ENODEV;
+ }
+
+ /*
+ * Get the number of user logical units to check whether all
+ * scsi_devices have finished initialization
+ */
+ hpb_dev_info->num_lu = desc_buf[DEVICE_DESC_PARAM_NUM_LU];
+
+ ret = ufshpb_read_desc(hba, QUERY_DESC_IDN_GEOMETRY, 0, 0, desc_buf);
+ if (ret) {
+ dev_err(hba->dev, "%s: idn: %d query request failed\n",
+ __func__, QUERY_DESC_IDN_GEOMETRY);
+ return ret;
+ }
+
+ ret = ufshpb_get_geo_info(hba, desc_buf, hpb_dev_info);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static int ufshpb_get_lu_info(struct ufs_hba *hba, int lun,
+ struct ufshpb_lu_info *hpb_lu_info,
+ u8 *desc_buf)
+{
+ u16 max_active_rgns;
+ u8 lu_enable;
+ int ret;
+
+ ret = ufshpb_read_desc(hba, QUERY_DESC_IDN_UNIT, lun, 0, desc_buf);
+ if (ret) {
+ dev_err(hba->dev,
+ "%s: idn: %d lun: %d query request failed",
+ __func__, QUERY_DESC_IDN_UNIT, lun);
+ return ret;
+ }
+
+ lu_enable = desc_buf[UNIT_DESC_PARAM_LU_ENABLE];
+ if (lu_enable != LU_ENABLED_HPB_FUNC)
+ return -ENODEV;
+
+ max_active_rgns = get_unaligned_be16(
+ desc_buf + UNIT_DESC_HPB_LU_MAX_ACTIVE_REGIONS);
+ if (!max_active_rgns) {
+ dev_err(hba->dev,
+ "lun %d wrong number of max active regions\n", lun);
+ return -ENODEV;
+ }
+
+ hpb_lu_info->num_blocks = get_unaligned_be64(
+ desc_buf + UNIT_DESC_PARAM_LOGICAL_BLK_COUNT);
+ hpb_lu_info->pinned_start = get_unaligned_be16(
+ desc_buf + UNIT_DESC_HPB_LU_PIN_REGION_START_OFFSET);
+ hpb_lu_info->num_pinned = get_unaligned_be16(
+ desc_buf + UNIT_DESC_HPB_LU_NUM_PIN_REGIONS);
+ hpb_lu_info->max_active_rgns = max_active_rgns;
+
+ return 0;
+}
+
+static void ufshpb_scan_hpb_lu(struct ufs_hba *hba,
+ struct ufshpb_dev_info *hpb_dev_info,
+ u8 *desc_buf)
+{
+ struct scsi_device *sdev;
+ struct ufshpb_lu *hpb;
+ int find_hpb_lu = 0;
+ int ret;
+
+ shost_for_each_device(sdev, hba->host) {
+ struct ufshpb_lu_info hpb_lu_info = { 0 };
+ int lun = sdev->lun;
+
+ if (lun >= hba->dev_info.max_lu_supported)
+ continue;
+
+ ret = ufshpb_get_lu_info(hba, lun, &hpb_lu_info, desc_buf);
+ if (ret)
+ continue;
+
+ hpb = ufshpb_alloc_hpb_lu(hba, lun, hpb_dev_info,
+ &hpb_lu_info);
+ if (!hpb)
+ continue;
+
+ hpb->sdev_ufs_lu = sdev;
+ sdev->hostdata = hpb;
+
+ list_add_tail(&hpb->list_hpb_lu, &lh_hpb_lu);
+ find_hpb_lu++;
+ }
+
+ if (!find_hpb_lu)
+ return;
+
+ ufshpb_check_hpb_reset_query(hba);
+
+ list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
+ dev_info(hba->dev, "set state to present\n");
+ ufshpb_set_state(hpb, HPB_PRESENT);
+ }
+}
+
+static void ufshpb_init(void *data, async_cookie_t cookie)
+{
+ struct ufsf_feature_info *ufsf = (struct ufsf_feature_info *)data;
+ struct ufs_hba *hba;
+ struct ufshpb_dev_info hpb_dev_info = { 0 };
+ char *desc_buf;
+ int ret;
+
+ hba = container_of(ufsf, struct ufs_hba, ufsf);
+
+ desc_buf = kzalloc(QUERY_DESC_MAX_SIZE, GFP_KERNEL);
+ if (!desc_buf)
+ goto release_desc_buf;
+
+ ret = ufshpb_get_dev_info(hba, &hpb_dev_info, desc_buf);
+ if (ret)
+ goto release_desc_buf;
+
+ /*
+ * Because HPB driver uses scsi_device data structure,
+ * we should wait at this point until finishing initialization of all
+ * scsi devices. Even if timeout occurs, HPB driver will search
+ * the scsi_device list on struct scsi_host (shost->__host list_head)
+ * and can find out HPB logical units in all scsi_devices
+ */
+ wait_event_timeout(hba->ufsf.sdev_wait,
+ (atomic_read(&hba->ufsf.slave_conf_cnt)
+ == hpb_dev_info.num_lu),
+ SDEV_WAIT_TIMEOUT);
+
+ ufshpb_issue_hpb_reset_query(hba);
+
+ dev_dbg(hba->dev, "ufshpb: slave count %d, lu count %d\n",
+ atomic_read(&hba->ufsf.slave_conf_cnt), hpb_dev_info.num_lu);
+
+ ufshpb_scan_hpb_lu(hba, &hpb_dev_info, desc_buf);
+
+release_desc_buf:
+ kfree(desc_buf);
+}
+
+static inline void ufshpb_remove_sysfs(struct ufshpb_lu *hpb)
+{
+ kobject_uevent(&hpb->kobj, KOBJ_REMOVE);
+ dev_info(&hpb->sdev_ufs_lu->sdev_dev,
+ "ufshpb removes sysfs lu %d %p", hpb->lun, &hpb->kobj);
+ kobject_del(&hpb->kobj);
+}
+
+void ufshpb_remove(struct ufs_hba *hba)
+{
+ struct ufshpb_lu *hpb, *n_hpb;
+ struct ufsf_feature_info *ufsf;
+ struct scsi_device *sdev;
+
+ ufsf = &hba->ufsf;
+
+ list_for_each_entry_safe(hpb, n_hpb, &lh_hpb_lu, list_hpb_lu) {
+ ufshpb_set_state(hpb, HPB_FAILED);
+
+ sdev = hpb->sdev_ufs_lu;
+ sdev->hostdata = NULL;
+
+ ufshpb_destroy_region_tbl(hpb);
+
+ list_del_init(&hpb->list_hpb_lu);
+ ufshpb_remove_sysfs(hpb);
+
+ kfree(hpb);
+ }
+
+ dev_info(hba->dev, "ufshpb: remove success\n");
+}
+
+void ufshpb_scan_feature(struct ufs_hba *hba)
+{
+ init_waitqueue_head(&hba->ufsf.sdev_wait);
+ atomic_set(&hba->ufsf.slave_conf_cnt, 0);
+
+ if (hba->dev_info.wspecversion >= HPB_SUPPORT_VERSION &&
+ (hba->dev_info.b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT))
+ async_schedule(ufshpb_init, &hba->ufsf);
+}
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
new file mode 100644
index 000000000000..b91b447ed0c8
--- /dev/null
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -0,0 +1,169 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Universal Flash Storage Host Performance Booster
+ *
+ * Copyright (C) 2017-2018 Samsung Electronics Co., Ltd.
+ *
+ * Authors:
+ * Yongmyung Lee <[email protected]>
+ * Jinyoung Choi <[email protected]>
+ */
+
+#ifndef _UFSHPB_H_
+#define _UFSHPB_H_
+
+/* hpb response UPIU macro */
+#define MAX_ACTIVE_NUM 2
+#define MAX_INACTIVE_NUM 2
+#define HPB_RSP_NONE 0x00
+#define HPB_RSP_REQ_REGION_UPDATE 0x01
+#define HPB_RSP_DEV_RESET 0x02
+#define DEV_DATA_SEG_LEN 0x14
+#define DEV_SENSE_SEG_LEN 0x12
+#define DEV_DES_TYPE 0x80
+#define DEV_ADDITIONAL_LEN 0x10
+
+/* hpb map & entries macro */
+#define HPB_RGN_SIZE_UNIT 512
+#define HPB_ENTRY_BLOCK_SIZE 4096
+#define HPB_ENTRY_SIZE 0x8
+#define PINNED_NOT_SET U32_MAX
+
+/* hpb support chunk size */
+#define HPB_MULTI_CHUNK_HIGH 1
+
+/* hpb vendor-defined opcode */
+#define UFSHPB_READ 0xF8
+#define UFSHPB_READ_BUFFER 0xF9
+#define UFSHPB_READ_BUFFER_ID 0x01
+#define HPB_READ_BUFFER_CMD_LENGTH 10
+#define LU_ENABLED_HPB_FUNC 0x02
+
+#define SDEV_WAIT_TIMEOUT (10 * HZ)
+#define MAP_REQ_TIMEOUT (30 * HZ)
+#define HPB_RESET_REQ_RETRIES 10
+#define HPB_RESET_REQ_MSLEEP 2
+
+#define HPB_SUPPORT_VERSION 0x100
+
+enum UFSHPB_MODE {
+ HPB_HOST_CONTROL,
+ HPB_DEVICE_CONTROL,
+};
+
+enum UFSHPB_STATE {
+ HPB_PRESENT = 1,
+ HPB_SUSPEND,
+ HPB_FAILED,
+ HPB_RESET,
+};
+
+enum HPB_RGN_STATE {
+ HPB_RGN_INACTIVE,
+ HPB_RGN_ACTIVE,
+ /* pinned regions are always active */
+ HPB_RGN_PINNED,
+};
+
+enum HPB_SRGN_STATE {
+ HPB_SRGN_UNUSED,
+ HPB_SRGN_INVALID,
+ HPB_SRGN_VALID,
+ HPB_SRGN_ISSUED,
+};
+
+/**
+ * struct ufshpb_dev_info - UFSHPB device related info
+ * @num_lu: the number of user logical units, used to check whether all LUs
+ *	finished initialization
+ * @rgn_size: device reported HPB region size
+ * @srgn_size: device reported HPB sub-region size
+ */
+struct ufshpb_dev_info {
+ int num_lu;
+ int rgn_size;
+ int srgn_size;
+};
+
+/**
+ * struct ufshpb_lu_info - UFSHPB logical unit related info
+ * @num_blocks: the number of logical blocks
+ * @pinned_start: the start region number of the pinned region
+ * @num_pinned: the number of pinned regions
+ * @max_active_rgns: maximum number of active regions
+ */
+struct ufshpb_lu_info {
+ int num_blocks;
+ int pinned_start;
+ int num_pinned;
+ int max_active_rgns;
+};
+
+struct ufshpb_subregion {
+ enum HPB_SRGN_STATE srgn_state;
+ int rgn_idx;
+ int srgn_idx;
+};
+
+struct ufshpb_region {
+ struct ufshpb_subregion *srgn_tbl;
+ enum HPB_RGN_STATE rgn_state;
+ int rgn_idx;
+ int srgn_cnt;
+};
+
+struct ufshpb_stats {
+ atomic_t hit_cnt;
+ atomic_t miss_cnt;
+ atomic_t rb_noti_cnt;
+ atomic_t rb_active_cnt;
+ atomic_t rb_inactive_cnt;
+ atomic_t map_req_cnt;
+};
+
+struct ufshpb_lu {
+ int lun;
+ struct scsi_device *sdev_ufs_lu;
+ struct ufshpb_region *rgn_tbl;
+
+ struct kobject kobj;
+ struct mutex sysfs_lock;
+
+ spinlock_t hpb_state_lock;
+ atomic_t hpb_state; /* hpb_state_lock */
+
+ /* pinned region information */
+ u32 lu_pinned_start;
+ u32 lu_pinned_end;
+
+ /* HPB related configuration */
+ u32 rgns_per_lu;
+ u32 srgns_per_lu;
+ int srgns_per_rgn;
+ u32 srgn_mem_size;
+ u32 entries_per_rgn_mask;
+ u32 entries_per_rgn_shift;
+ u32 entries_per_srgn;
+ u32 entries_per_srgn_mask;
+ u32 entries_per_srgn_shift;
+ u32 pages_per_srgn;
+
+ struct ufshpb_stats stats;
+
+ struct ufsf_feature_info *ufsf;
+ struct list_head list_hpb_lu;
+};
+
+struct ufs_hba;
+struct ufshcd_lrb;
+
+void ufshpb_resume(struct ufs_hba *hba);
+void ufshpb_suspend(struct ufs_hba *hba);
+void ufshpb_reset(struct ufs_hba *hba);
+void ufshpb_reset_host(struct ufs_hba *hba);
+void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
+void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
+void ufshpb_scan_feature(struct ufs_hba *hba);
+void ufshpb_remove(struct ufs_hba *hba);
+
+#endif /* End of Header */
--
2.17.1
This patch adds L2P map management to the HPB module.
The HPB divides the logical address space into regions, and each region
consists of several sub-regions. The sub-region is the basic unit in which
L2P mapping is managed: the driver loads L2P mapping data per sub-region,
and a loaded sub-region is in the active state. The driver unloads L2P
mapping data in units of whole regions; an unloaded region is in the
inactive state.
Sub-region/region candidates to be loaded and unloaded are delivered by
the UFS device: the device recommends sub-regions to activate and regions
to inactivate via sense data.
The HPB module performs L2P map management on the host based on the
delivered information.
A pinned region is a pre-set region on the UFS device that is always in
the active state.
The data structures for map-data requests and the L2P map use the mempool
API, minimizing allocation overhead while avoiding static allocation.
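A kernel mempool guarantees a minimum reserve of pre-allocated elements and
refills that reserve when elements are freed, so allocations under memory
pressure can still succeed. As a rough userspace analogue (this is not the
kernel mempool API; POOL_MIN and all names here are illustrative):

```c
#include <stdlib.h>

#define POOL_MIN 4 /* hypothetical minimum number of reserved elements */

struct pool {
	void *reserved[POOL_MIN]; /* pre-allocated emergency elements */
	int navail;
	size_t elem_size;
};

static struct pool *pool_create(size_t elem_size)
{
	struct pool *p = malloc(sizeof(*p));

	if (!p)
		return NULL;
	p->elem_size = elem_size;
	for (p->navail = 0; p->navail < POOL_MIN; p->navail++) {
		p->reserved[p->navail] = malloc(elem_size);
		if (!p->reserved[p->navail])
			break; /* a partial reserve is still usable */
	}
	return p;
}

/* Try a normal allocation first; fall back to the reserve on failure. */
static void *pool_alloc(struct pool *p)
{
	void *e = malloc(p->elem_size);

	if (e)
		return e;
	return p->navail > 0 ? p->reserved[--p->navail] : NULL;
}

/* Freed elements refill the reserve before going back to the allocator. */
static void pool_free(struct pool *p, void *e)
{
	if (p->navail < POOL_MIN)
		p->reserved[p->navail++] = e;
	else
		free(e);
}
```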
The map_work worker manages activation and inactivation through two
"to-do" lists that each HPB LU maintains:
hpb->lh_inact_rgn - regions to be inactivated
hpb->lh_act_srgn - sub-regions to be activated
These lists are populated on I/O completion.
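In miniature, the completion-time bookkeeping and the worker's drain order
(inactivation before activation, as in ufshpb_map_work_handler()) can be
sketched with plain arrays standing in for the kernel list_head machinery;
all names and the sign-based encoding below are illustrative only:

```c
#define MAX_TODO 8

/* Miniature stand-ins for hpb->lh_inact_rgn and hpb->lh_act_srgn. */
struct todo_lists {
	int inact_rgn[MAX_TODO]; int n_inact;
	int act_srgn[MAX_TODO];  int n_act;
};

/* Called on I/O completion when the device recommends an inactivation. */
static void note_inactivate(struct todo_lists *t, int rgn)
{
	if (t->n_inact < MAX_TODO)
		t->inact_rgn[t->n_inact++] = rgn;
}

/* Called on I/O completion when the device recommends an activation. */
static void note_activate(struct todo_lists *t, int srgn)
{
	if (t->n_act < MAX_TODO)
		t->act_srgn[t->n_act++] = srgn;
}

/*
 * The worker drains inactivation first, then activation. Processed ids
 * are recorded in order: negative means eviction, positive means map
 * load. Returns the number of items processed.
 */
static int map_work(struct todo_lists *t, int *order, int max)
{
	int n = 0, i;

	for (i = 0; i < t->n_inact && n < max; i++)
		order[n++] = -t->inact_rgn[i];
	t->n_inact = 0;
	for (i = 0; i < t->n_act && n < max; i++)
		order[n++] = t->act_srgn[i];
	t->n_act = 0;
	return n;
}
```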
Tested-by: Bean Huo <[email protected]>
Signed-off-by: Daejun Park <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 973 +++++++++++++++++++++++++++++++++++++-
drivers/scsi/ufs/ufshpb.h | 72 +++
2 files changed, 1039 insertions(+), 6 deletions(-)
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index e1f9c68ae415..25cd7153f102 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -26,6 +26,14 @@ static ssize_t __name##_show(struct ufshpb_lu *hpb, char *buf) \
#define HPB_ATTR_RO(_name) \
struct ufshpb_sysfs_entry hpb_attr_##_name = __ATTR_RO(_name)
+/* memory management */
+static struct kmem_cache *ufshpb_mctx_cache;
+static mempool_t *ufshpb_mctx_pool;
+static mempool_t *ufshpb_page_pool;
+static unsigned int ufshpb_host_map_kbytes;
+
+static struct workqueue_struct *ufshpb_wq;
+
/* HPB enabled lu list */
static LIST_HEAD(lh_hpb_lu);
@@ -38,6 +46,62 @@ static inline int ufshpb_is_valid_srgn(struct ufshpb_region *rgn,
srgn->srgn_state == HPB_SRGN_VALID;
}
+static inline bool ufshpb_is_general_lun(int lun)
+{
+ return lun < UFS_UPIU_MAX_UNIT_NUM_ID;
+}
+
+static inline bool
+ufshpb_is_pinned_region(struct ufshpb_lu *hpb, int rgn_idx)
+{
+ if (hpb->lu_pinned_end != PINNED_NOT_SET &&
+ rgn_idx >= hpb->lu_pinned_start &&
+ rgn_idx <= hpb->lu_pinned_end)
+ return true;
+
+ return false;
+}
+
+static bool ufshpb_is_empty_rsp_lists(struct ufshpb_lu *hpb)
+{
+ bool ret = true;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ if (!list_empty(&hpb->lh_inact_rgn) || !list_empty(&hpb->lh_act_srgn))
+ ret = false;
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+
+ return ret;
+}
+
+static inline int ufshpb_may_field_valid(struct ufs_hba *hba,
+ struct ufshcd_lrb *lrbp,
+ struct ufshpb_rsp_field *rsp_field)
+{
+ if (be16_to_cpu(rsp_field->sense_data_len) != DEV_SENSE_SEG_LEN ||
+ rsp_field->desc_type != DEV_DES_TYPE ||
+ rsp_field->additional_len != DEV_ADDITIONAL_LEN ||
+ rsp_field->hpb_type == HPB_RSP_NONE ||
+ rsp_field->active_rgn_cnt > MAX_ACTIVE_NUM ||
+ rsp_field->inactive_rgn_cnt > MAX_INACTIVE_NUM ||
+ (!rsp_field->active_rgn_cnt && !rsp_field->inactive_rgn_cnt))
+ return -EINVAL;
+
+ if (!ufshpb_is_general_lun(lrbp->lun)) {
+ dev_warn(hba->dev, "ufshpb: lun(%d) not supported\n",
+ lrbp->lun);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline struct ufshpb_lu *ufshpb_get_hpb_data(struct scsi_cmnd *cmd)
+{
+ return cmd->device->hostdata;
+}
+
static inline int ufshpb_get_state(struct ufshpb_lu *hpb)
{
return atomic_read(&hpb->hpb_state);
@@ -52,8 +116,737 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
{
}
+static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
+ struct ufshpb_subregion *srgn)
+{
+ struct ufshpb_req *map_req;
+ struct request *req;
+ struct bio *bio;
+
+ map_req = kmem_cache_alloc(hpb->map_req_cache, GFP_KERNEL);
+ if (!map_req)
+ return NULL;
+
+ req = blk_get_request(hpb->sdev_ufs_lu->request_queue,
+ REQ_OP_SCSI_IN, BLK_MQ_REQ_PREEMPT);
+ if (IS_ERR(req))
+ goto free_map_req;
+
+ bio = bio_alloc(GFP_KERNEL, hpb->pages_per_srgn);
+ if (!bio) {
+ blk_put_request(req);
+ goto free_map_req;
+ }
+
+ map_req->hpb = hpb;
+ map_req->req = req;
+ map_req->bio = bio;
+
+ map_req->rgn_idx = srgn->rgn_idx;
+ map_req->srgn_idx = srgn->srgn_idx;
+ map_req->mctx = srgn->mctx;
+ map_req->lun = hpb->lun;
+
+ return map_req;
+
+free_map_req:
+ kmem_cache_free(hpb->map_req_cache, map_req);
+ return NULL;
+}
+
+static inline void ufshpb_put_map_req(struct ufshpb_lu *hpb,
+ struct ufshpb_req *map_req)
+{
+ bio_put(map_req->bio);
+ blk_put_request(map_req->req);
+ kmem_cache_free(hpb->map_req_cache, map_req);
+}
+
+static inline int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
+ struct ufshpb_subregion *srgn)
+{
+ WARN_ON(!srgn->mctx);
+ bitmap_zero(srgn->mctx->ppn_dirty, hpb->entries_per_srgn);
+ return 0;
+}
+
+static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
+ int srgn_idx)
+{
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+
+ rgn = hpb->rgn_tbl + rgn_idx;
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ list_del_init(&rgn->list_inact_rgn);
+
+ if (list_empty(&srgn->list_act_srgn))
+ list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+}
+
+static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)
+{
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+ int srgn_idx;
+
+ rgn = hpb->rgn_tbl + rgn_idx;
+
+ for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ list_del_init(&srgn->list_act_srgn);
+ }
+
+ if (list_empty(&rgn->list_inact_rgn))
+ list_add_tail(&rgn->list_inact_rgn, &hpb->lh_inact_rgn);
+}
+
+static void ufshpb_activate_subregion(struct ufshpb_lu *hpb,
+ struct ufshpb_subregion *srgn)
+{
+ struct ufshpb_region *rgn;
+
+ /*
+	 * If there is no mctx in the subregion after the I/O for
+	 * HPB_READ_BUFFER has progressed, the region to which the
+	 * subregion belongs was evicted.
+	 * Make sure the region is not evicted while I/O is in progress.
+ */
+ WARN_ON(!srgn->mctx);
+
+ rgn = hpb->rgn_tbl + srgn->rgn_idx;
+
+ if (unlikely(rgn->rgn_state == HPB_RGN_INACTIVE)) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "region %d subregion %d evicted\n",
+ srgn->rgn_idx, srgn->srgn_idx);
+ return;
+ }
+ srgn->srgn_state = HPB_SRGN_VALID;
+}
+
+static void ufshpb_map_req_compl_fn(struct request *req, blk_status_t error)
+{
+ struct ufshpb_req *map_req = (struct ufshpb_req *) req->end_io_data;
+ struct ufshpb_lu *hpb = map_req->hpb;
+ struct ufshpb_subregion *srgn;
+ unsigned long flags;
+
+ srgn = hpb->rgn_tbl[map_req->rgn_idx].srgn_tbl +
+ map_req->srgn_idx;
+
+ spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+ ufshpb_activate_subregion(hpb, srgn);
+ spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+
+ ufshpb_put_map_req(map_req->hpb, map_req);
+}
+
+static inline void ufshpb_set_read_buf_cmd(unsigned char *cdb, int rgn_idx,
+ int srgn_idx, int srgn_mem_size)
+{
+ cdb[0] = UFSHPB_READ_BUFFER;
+ cdb[1] = UFSHPB_READ_BUFFER_ID;
+
+ put_unaligned_be16(rgn_idx, &cdb[2]);
+ put_unaligned_be16(srgn_idx, &cdb[4]);
+ put_unaligned_be24(srgn_mem_size, &cdb[6]);
+
+ cdb[9] = 0x00;
+}
+
+static int ufshpb_map_req_add_bio_page(struct ufshpb_lu *hpb,
+ struct request_queue *q, struct bio *bio,
+ struct ufshpb_map_ctx *mctx)
+{
+ int i, ret = 0;
+
+ for (i = 0; i < hpb->pages_per_srgn; i++) {
+ ret = bio_add_pc_page(q, bio, mctx->m_page[i], PAGE_SIZE, 0);
+ if (ret != PAGE_SIZE) {
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+ "bio_add_pc_page fail %d\n", ret);
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+static int ufshpb_execute_map_req(struct ufshpb_lu *hpb,
+ struct ufshpb_req *map_req)
+{
+ struct request_queue *q;
+ struct request *req;
+ struct scsi_request *rq;
+ int ret = 0;
+
+ q = hpb->sdev_ufs_lu->request_queue;
+ ret = ufshpb_map_req_add_bio_page(hpb, q, map_req->bio,
+ map_req->mctx);
+ if (ret) {
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+ "map_req_add_bio_page fail %d - %d\n",
+ map_req->rgn_idx, map_req->srgn_idx);
+ return ret;
+ }
+
+ req = map_req->req;
+
+ blk_rq_append_bio(req, &map_req->bio);
+
+ req->timeout = 0;
+ req->end_io_data = (void *)map_req;
+
+ rq = scsi_req(req);
+ ufshpb_set_read_buf_cmd(rq->cmd, map_req->rgn_idx,
+ map_req->srgn_idx, hpb->srgn_mem_size);
+ rq->cmd_len = HPB_READ_BUFFER_CMD_LENGTH;
+
+ blk_execute_rq_nowait(q, NULL, req, 1, ufshpb_map_req_compl_fn);
+
+ atomic_inc(&hpb->stats.map_req_cnt);
+ return 0;
+}
+
+static struct ufshpb_map_ctx *ufshpb_get_map_ctx(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_map_ctx *mctx;
+ int i, j;
+
+ mctx = mempool_alloc(ufshpb_mctx_pool, GFP_KERNEL);
+ if (!mctx)
+ return NULL;
+
+ mctx->m_page = kmem_cache_alloc(hpb->m_page_cache, GFP_KERNEL);
+ if (!mctx->m_page)
+ goto release_mctx;
+
+ mctx->ppn_dirty = bitmap_zalloc(hpb->entries_per_srgn, GFP_KERNEL);
+ if (!mctx->ppn_dirty)
+ goto release_m_page;
+
+ for (i = 0; i < hpb->pages_per_srgn; i++) {
+ mctx->m_page[i] = mempool_alloc(ufshpb_page_pool, GFP_KERNEL);
+ if (!mctx->m_page[i]) {
+ for (j = 0; j < i; j++)
+ mempool_free(mctx->m_page[j], ufshpb_page_pool);
+ goto release_ppn_dirty;
+ }
+ clear_page(page_address(mctx->m_page[i]));
+ }
+
+ return mctx;
+
+release_ppn_dirty:
+ bitmap_free(mctx->ppn_dirty);
+release_m_page:
+ kmem_cache_free(hpb->m_page_cache, mctx->m_page);
+release_mctx:
+ mempool_free(mctx, ufshpb_mctx_pool);
+ return NULL;
+}
+
+static inline void ufshpb_put_map_ctx(struct ufshpb_lu *hpb,
+ struct ufshpb_map_ctx *mctx)
+{
+ int i;
+
+ for (i = 0; i < hpb->pages_per_srgn; i++)
+ mempool_free(mctx->m_page[i], ufshpb_page_pool);
+
+ bitmap_free(mctx->ppn_dirty);
+ kmem_cache_free(hpb->m_page_cache, mctx->m_page);
+ mempool_free(mctx, ufshpb_mctx_pool);
+}
+
+static int ufshpb_check_issue_state_srgns(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn)
+{
+ struct ufshpb_subregion *srgn;
+ int srgn_idx;
+
+ for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ if (srgn->srgn_state == HPB_SRGN_ISSUED)
+ return -EPERM;
+ }
+ return 0;
+}
+
+static inline void ufshpb_add_lru_info(struct victim_select_info *lru_info,
+ struct ufshpb_region *rgn)
+{
+ rgn->rgn_state = HPB_RGN_ACTIVE;
+ list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
+ atomic_inc(&lru_info->active_cnt);
+}
+
+static inline void ufshpb_hit_lru_info(struct victim_select_info *lru_info,
+ struct ufshpb_region *rgn)
+{
+ list_move_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
+}
+
+static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
+{
+ struct victim_select_info *lru_info = &hpb->lru_info;
+ struct ufshpb_region *rgn, *victim_rgn = NULL;
+
+ list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) {
+ WARN_ON(!rgn);
+ if (ufshpb_check_issue_state_srgns(hpb, rgn))
+ continue;
+
+ victim_rgn = rgn;
+ break;
+ }
+
+ return victim_rgn;
+}
+
+static inline void ufshpb_cleanup_lru_info(struct victim_select_info *lru_info,
+ struct ufshpb_region *rgn)
+{
+ list_del_init(&rgn->list_lru_rgn);
+ rgn->rgn_state = HPB_RGN_INACTIVE;
+ atomic_dec(&lru_info->active_cnt);
+}
+
+static inline void ufshpb_purge_active_subregion(struct ufshpb_lu *hpb,
+ struct ufshpb_subregion *srgn)
+{
+ if (srgn->srgn_state != HPB_SRGN_UNUSED) {
+ ufshpb_put_map_ctx(hpb, srgn->mctx);
+ srgn->srgn_state = HPB_SRGN_UNUSED;
+ srgn->mctx = NULL;
+ }
+}
+
+static void __ufshpb_evict_region(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn)
+{
+ struct victim_select_info *lru_info;
+ struct ufshpb_subregion *srgn;
+ int srgn_idx;
+
+ lru_info = &hpb->lru_info;
+
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "evict region %d\n", rgn->rgn_idx);
+
+ ufshpb_cleanup_lru_info(lru_info, rgn);
+
+ for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ ufshpb_purge_active_subregion(hpb, srgn);
+ }
+}
+
+static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
+{
+ unsigned long flags;
+ int ret = 0;
+
+ spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+ if (rgn->rgn_state == HPB_RGN_PINNED) {
+ dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
+			 "pinned region cannot be dropped. region %d\n",
+ rgn->rgn_idx);
+ goto out;
+ }
+ if (!list_empty(&rgn->list_lru_rgn)) {
+ if (ufshpb_check_issue_state_srgns(hpb, rgn)) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ __ufshpb_evict_region(hpb, rgn);
+ }
+out:
+ spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+ return ret;
+}
+
+static inline struct
+ufshpb_rsp_field *ufshpb_get_hpb_rsp(struct ufshcd_lrb *lrbp)
+{
+ return (struct ufshpb_rsp_field *)&lrbp->ucd_rsp_ptr->sr.sense_data_len;
+}
+
+static int ufshpb_issue_map_req(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn,
+ struct ufshpb_subregion *srgn)
+{
+ struct ufshpb_req *map_req;
+ unsigned long flags;
+ int ret;
+ int err = -EAGAIN;
+ bool alloc_required = false;
+ enum HPB_SRGN_STATE state = HPB_SRGN_INVALID;
+
+ spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+ /*
+ * Since the region state change occurs only in the map_work,
+	 * the state of the region cannot be HPB_RGN_INACTIVE at this point.
+	 * The region state must only be changed in the map_work.
+ */
+ WARN_ON(rgn->rgn_state == HPB_RGN_INACTIVE);
+
+ if (srgn->srgn_state == HPB_SRGN_UNUSED)
+ alloc_required = true;
+
+ /*
+	 * If the subregion is already in the ISSUED state, a device-side
+	 * event (e.g., GC or wear-leveling) occurred and an HPB response
+	 * for map loading was received.
+	 * In this case, after the current HPB_READ_BUFFER finishes, the
+	 * next HPB_READ_BUFFER is issued again to obtain the latest
+	 * map data.
+ */
+ if (srgn->srgn_state == HPB_SRGN_ISSUED)
+ goto unlock_out;
+
+ srgn->srgn_state = HPB_SRGN_ISSUED;
+ spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+
+ if (alloc_required) {
+ WARN_ON(srgn->mctx);
+ srgn->mctx = ufshpb_get_map_ctx(hpb);
+ if (!srgn->mctx) {
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+ "get map_ctx failed. region %d - %d\n",
+ rgn->rgn_idx, srgn->srgn_idx);
+ state = HPB_SRGN_UNUSED;
+ goto change_srgn_state;
+ }
+ }
+
+ ufshpb_clear_dirty_bitmap(hpb, srgn);
+ map_req = ufshpb_get_map_req(hpb, srgn);
+ if (!map_req)
+ goto change_srgn_state;
+
+ ret = ufshpb_execute_map_req(hpb, map_req);
+ if (ret) {
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+ "%s: issue map_req failed: %d, region %d - %d\n",
+ __func__, ret, srgn->rgn_idx, srgn->srgn_idx);
+ goto free_map_req;
+ }
+ return 0;
+
+free_map_req:
+ ufshpb_put_map_req(hpb, map_req);
+change_srgn_state:
+ spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+ srgn->srgn_state = state;
+unlock_out:
+ spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+ return err;
+}
+
+static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
+{
+ struct ufshpb_region *victim_rgn;
+ struct victim_select_info *lru_info = &hpb->lru_info;
+ unsigned long flags;
+ int ret = 0;
+
+ spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+ /*
+	 * If the region already belongs to the lru_list, just move it to
+	 * the MRU position (tail of the list), because the region is
+	 * already in the active state.
+ */
+ if (!list_empty(&rgn->list_lru_rgn)) {
+ ufshpb_hit_lru_info(lru_info, rgn);
+ goto out;
+ }
+
+ if (rgn->rgn_state == HPB_RGN_INACTIVE) {
+ if (atomic_read(&lru_info->active_cnt)
+ == lru_info->max_lru_active_cnt) {
+ /*
+			 * If the maximum number of active regions is
+			 * reached, evict the least recently used region.
+			 * This case may occur when the device responds
+			 * with the eviction information late.
+			 * It is okay to evict the least recently used
+			 * region, because the device can detect that the
+			 * region was dropped once the driver stops
+			 * issuing HPB_READ for it.
+ */
+ victim_rgn = ufshpb_victim_lru_info(hpb);
+ if (!victim_rgn) {
+ dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
+					 "cannot get victim region\n");
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+				 "LRU full (%d), choose victim %d\n",
+ atomic_read(&lru_info->active_cnt),
+ victim_rgn->rgn_idx);
+ __ufshpb_evict_region(hpb, victim_rgn);
+ }
+
+ /*
+		 * When a region is added to the lru_info list_head, it is
+		 * guaranteed that all of its subregions have been assigned
+		 * an mctx. If the allocation failed, the mctx is requested
+		 * again without the region being added to the lru_info
+		 * list_head.
+ */
+ ufshpb_add_lru_info(lru_info, rgn);
+ }
+out:
+ spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+ return ret;
+}
+
+static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
+ struct ufshpb_rsp_field *rsp_field)
+{
+ int i, rgn_idx, srgn_idx;
+
+ /*
+	 * If the same region appears as both active and inactive, the
+	 * region is inactivated. The device can detect this (region
+	 * inactivated) and will respond with the proper active region
+	 * information.
+ */
+ spin_lock(&hpb->rsp_list_lock);
+ for (i = 0; i < rsp_field->active_rgn_cnt; i++) {
+ rgn_idx =
+ be16_to_cpu(rsp_field->hpb_active_field[i].active_rgn);
+ srgn_idx =
+ be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn);
+
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+ "activate(%d) region %d - %d\n", i, rgn_idx, srgn_idx);
+ ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
+ atomic_inc(&hpb->stats.rb_active_cnt);
+ }
+
+ for (i = 0; i < rsp_field->inactive_rgn_cnt; i++) {
+ rgn_idx = be16_to_cpu(rsp_field->hpb_inactive_field[i]);
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+ "inactivate(%d) region %d\n", i, rgn_idx);
+ ufshpb_update_inactive_info(hpb, rgn_idx);
+ atomic_inc(&hpb->stats.rb_inactive_cnt);
+ }
+ spin_unlock(&hpb->rsp_list_lock);
+
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "Noti: #ACT %u #INACT %u\n",
+ rsp_field->active_rgn_cnt, rsp_field->inactive_rgn_cnt);
+
+ queue_work(ufshpb_wq, &hpb->map_work);
+}
+
+/* routine : isr (ufs) */
void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
{
+ struct ufshpb_lu *hpb;
+ struct ufshpb_rsp_field *rsp_field;
+ int data_seg_len;
+
+ data_seg_len = be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_2)
+ & MASK_RSP_UPIU_DATA_SEG_LEN;
+
+	/* To flush the remaining rsp_list entries, queue the map_work task */
+ if (!data_seg_len) {
+ if (!ufshpb_is_general_lun(lrbp->lun))
+ return;
+
+ hpb = ufshpb_get_hpb_data(lrbp->cmd);
+ if (!hpb)
+ return;
+
+ if (!ufshpb_is_empty_rsp_lists(hpb))
+ queue_work(ufshpb_wq, &hpb->map_work);
+ return;
+ }
+
+ /* Check HPB_UPDATE_ALERT */
+ if (!(lrbp->ucd_rsp_ptr->header.dword_2 &
+ UPIU_HEADER_DWORD(0, 2, 0, 0)))
+ return;
+
+ rsp_field = ufshpb_get_hpb_rsp(lrbp);
+ if (ufshpb_may_field_valid(hba, lrbp, rsp_field))
+ return;
+
+ hpb = ufshpb_get_hpb_data(lrbp->cmd);
+ if (!hpb)
+ return;
+
+ atomic_inc(&hpb->stats.rb_noti_cnt);
+
+ switch (rsp_field->hpb_type) {
+ case HPB_RSP_REQ_REGION_UPDATE:
+ WARN_ON(data_seg_len != DEV_DATA_SEG_LEN);
+ ufshpb_rsp_req_region_update(hpb, rsp_field);
+ break;
+ case HPB_RSP_DEV_RESET:
+ dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
+ "UFS device lost HPB information during PM.\n");
+ break;
+ default:
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+ "hpb_type is not available: %d\n",
+ rsp_field->hpb_type);
+ break;
+ }
+}
+
+static void ufshpb_add_active_list(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn,
+ struct ufshpb_subregion *srgn)
+{
+ if (!list_empty(&rgn->list_inact_rgn))
+ return;
+
+ if (!list_empty(&srgn->list_act_srgn)) {
+ list_move(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+ return;
+ }
+
+ list_add(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+}
+
+static void ufshpb_add_pending_evict_list(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn,
+ struct list_head *pending_list)
+{
+ struct ufshpb_subregion *srgn;
+ int srgn_idx;
+
+ if (!list_empty(&rgn->list_inact_rgn))
+ return;
+
+ for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ if (!list_empty(&srgn->list_act_srgn))
+ return;
+ }
+
+ list_add_tail(&rgn->list_inact_rgn, pending_list);
+}
+
+static void ufshpb_run_active_subregion_list(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+ unsigned long flags;
+ int ret = 0;
+
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ while ((srgn = list_first_entry_or_null(&hpb->lh_act_srgn,
+ struct ufshpb_subregion,
+ list_act_srgn))) {
+ list_del_init(&srgn->list_act_srgn);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+
+ rgn = hpb->rgn_tbl + srgn->rgn_idx;
+ ret = ufshpb_add_region(hpb, rgn);
+ if (ret)
+ goto active_failed;
+
+ ret = ufshpb_issue_map_req(hpb, rgn, srgn);
+ if (ret) {
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+ "issue map_req failed. ret %d, region %d - %d\n",
+ ret, rgn->rgn_idx, srgn->srgn_idx);
+ goto active_failed;
+ }
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ }
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+ return;
+
+active_failed:
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev, "region %d - %d, will retry\n",
+ rgn->rgn_idx, srgn->srgn_idx);
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ ufshpb_add_active_list(hpb, rgn, srgn);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+}
+
+static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_region *rgn;
+ unsigned long flags;
+ int ret;
+ LIST_HEAD(pending_list);
+
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ while ((rgn = list_first_entry_or_null(&hpb->lh_inact_rgn,
+ struct ufshpb_region,
+ list_inact_rgn))) {
+ list_del_init(&rgn->list_inact_rgn);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+
+ ret = ufshpb_evict_region(hpb, rgn);
+ if (ret) {
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ ufshpb_add_pending_evict_list(hpb, rgn, &pending_list);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+ }
+
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ }
+
+ list_splice(&pending_list, &hpb->lh_inact_rgn);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+}
+
+static void ufshpb_map_work_handler(struct work_struct *work)
+{
+ struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work);
+
+ ufshpb_run_inactive_region_list(hpb);
+ ufshpb_run_active_subregion_list(hpb);
+}
+
+/*
+ * This function does not need to hold any locks (hpb_state_lock,
+ * rsp_list_lock, etc.) because it is only called during initialization.
+ */
+static int ufshpb_init_pinned_active_region(struct ufs_hba *hba,
+ struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn)
+{
+ struct ufshpb_subregion *srgn;
+ int srgn_idx, i;
+ int err = 0;
+
+ for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ srgn->mctx = ufshpb_get_map_ctx(hpb);
+ srgn->srgn_state = HPB_SRGN_INVALID;
+ if (!srgn->mctx) {
+ dev_err(hba->dev,
+ "alloc mctx for pinned region failed\n");
+ goto release;
+ }
+
+ list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+ }
+
+ rgn->rgn_state = HPB_RGN_PINNED;
+ return 0;
+
+release:
+ for (i = 0; i < srgn_idx; i++) {
+ srgn = rgn->srgn_tbl + i;
+ ufshpb_put_map_ctx(hpb, srgn->mctx);
+ }
+ return err;
}
static void ufshpb_init_subregion_tbl(struct ufshpb_lu *hpb,
@@ -64,6 +857,8 @@ static void ufshpb_init_subregion_tbl(struct ufshpb_lu *hpb,
for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx;
+ INIT_LIST_HEAD(&srgn->list_act_srgn);
+
srgn->rgn_idx = rgn->rgn_idx;
srgn->srgn_idx = srgn_idx;
srgn->srgn_state = HPB_SRGN_UNUSED;
@@ -95,6 +890,8 @@ static void ufshpb_init_lu_parameter(struct ufs_hba *hba,
hpb->lu_pinned_end = hpb_lu_info->num_pinned ?
(hpb_lu_info->pinned_start + hpb_lu_info->num_pinned - 1)
: PINNED_NOT_SET;
+ hpb->lru_info.max_lru_active_cnt =
+ hpb_lu_info->max_active_rgns - hpb_lu_info->num_pinned;
rgn_mem_size = (1ULL << hpb_dev_info->rgn_size) * HPB_RGN_SIZE_UNIT
/ HPB_ENTRY_BLOCK_SIZE * HPB_ENTRY_SIZE;
@@ -147,6 +944,9 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
rgn = rgn_table + rgn_idx;
rgn->rgn_idx = rgn_idx;
+ INIT_LIST_HEAD(&rgn->list_inact_rgn);
+ INIT_LIST_HEAD(&rgn->list_lru_rgn);
+
if (rgn_idx == hpb->rgns_per_lu - 1)
srgn_cnt = ((hpb->srgns_per_lu - 1) %
hpb->srgns_per_rgn) + 1;
@@ -156,7 +956,13 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
goto release_srgn_table;
ufshpb_init_subregion_tbl(hpb, rgn);
- rgn->rgn_state = HPB_RGN_INACTIVE;
+ if (ufshpb_is_pinned_region(hpb, rgn_idx)) {
+ ret = ufshpb_init_pinned_active_region(hba, hpb, rgn);
+ if (ret)
+ goto release_srgn_table;
+ } else {
+ rgn->rgn_state = HPB_RGN_INACTIVE;
+ }
}
return 0;
@@ -180,7 +986,10 @@ static void ufshpb_destroy_subregion_tbl(struct ufshpb_lu *hpb,
struct ufshpb_subregion *srgn;
srgn = rgn->srgn_tbl + srgn_idx;
- srgn->srgn_state = HPB_SRGN_UNUSED;
+ if (srgn->srgn_state != HPB_SRGN_UNUSED) {
+ srgn->srgn_state = HPB_SRGN_UNUSED;
+ ufshpb_put_map_ctx(hpb, srgn->mctx);
+ }
}
}
@@ -330,10 +1139,36 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb,
int ret;
spin_lock_init(&hpb->hpb_state_lock);
+ spin_lock_init(&hpb->rsp_list_lock);
+
+ INIT_LIST_HEAD(&hpb->lru_info.lh_lru_rgn);
+ INIT_LIST_HEAD(&hpb->lh_act_srgn);
+ INIT_LIST_HEAD(&hpb->lh_inact_rgn);
+ INIT_LIST_HEAD(&hpb->list_hpb_lu);
+
+ INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
+
+ hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
+ sizeof(struct ufshpb_req), 0, 0, NULL);
+ if (!hpb->map_req_cache) {
+ dev_err(hba->dev, "ufshpb(%d) ufshpb_req_cache create fail",
+ hpb->lun);
+ return -ENOMEM;
+ }
+
+ hpb->m_page_cache = kmem_cache_create("ufshpb_m_page_cache",
+ sizeof(struct page *) * hpb->pages_per_srgn,
+ 0, 0, NULL);
+ if (!hpb->m_page_cache) {
+ dev_err(hba->dev, "ufshpb(%d) ufshpb_m_page_cache create fail",
+ hpb->lun);
+ ret = -ENOMEM;
+ goto release_req_cache;
+ }
ret = ufshpb_alloc_region_tbl(hba, hpb);
if (ret)
- return ret;
+ goto release_m_page_cache;
ret = ufshpb_create_sysfs(hba, hpb);
if (ret)
@@ -343,6 +1178,10 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb,
release_rgn_table:
ufshpb_destroy_region_tbl(hpb);
+release_m_page_cache:
+ kmem_cache_destroy(hpb->m_page_cache);
+release_req_cache:
+ kmem_cache_destroy(hpb->map_req_cache);
return ret;
}
@@ -375,6 +1214,33 @@ static struct ufshpb_lu *ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
return NULL;
}
+static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_region *rgn, *next_rgn;
+ struct ufshpb_subregion *srgn, *next_srgn;
+ unsigned long flags;
+
+ /*
+	 * If a device reset occurred, any remaining HPB region information
+	 * may be stale. Discarding the HPB response lists left over after
+	 * the reset prevents unnecessary work.
+ */
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ list_for_each_entry_safe(rgn, next_rgn, &hpb->lh_inact_rgn,
+ list_inact_rgn)
+ list_del_init(&rgn->list_inact_rgn);
+
+ list_for_each_entry_safe(srgn, next_srgn, &hpb->lh_act_srgn,
+ list_act_srgn)
+ list_del_init(&srgn->list_act_srgn);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+}
+
+static inline void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
+{
+ cancel_work_sync(&hpb->map_work);
+}
+
static void ufshpb_issue_hpb_reset_query(struct ufs_hba *hba)
{
int err;
@@ -448,8 +1314,11 @@ void ufshpb_reset_host(struct ufs_hba *hba)
dev_info(hba->dev, "ufshpb run reset_host");
- list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu)
+ list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
ufshpb_set_state(hpb, HPB_RESET);
+ ufshpb_cancel_jobs(hpb);
+ ufshpb_discard_rsp_lists(hpb);
+ }
}
void ufshpb_suspend(struct ufs_hba *hba)
@@ -458,8 +1327,10 @@ void ufshpb_suspend(struct ufs_hba *hba)
dev_info(hba->dev, "ufshpb goto suspend");
- list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu)
+ list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
ufshpb_set_state(hpb, HPB_SUSPEND);
+ ufshpb_cancel_jobs(hpb);
+ }
}
void ufshpb_resume(struct ufs_hba *hba)
@@ -468,8 +1339,11 @@ void ufshpb_resume(struct ufs_hba *hba)
dev_info(hba->dev, "ufshpb resume");
- list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu)
+ list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
ufshpb_set_state(hpb, HPB_PRESENT);
+ if (!ufshpb_is_empty_rsp_lists(hpb))
+ queue_work(ufshpb_wq, &hpb->map_work);
+ }
}
static int ufshpb_read_desc(struct ufs_hba *hba, u8 desc_id, u8 desc_index,
@@ -617,6 +1491,8 @@ static void ufshpb_scan_hpb_lu(struct ufs_hba *hba,
struct scsi_device *sdev;
struct ufshpb_lu *hpb;
int find_hpb_lu = 0;
+ int tot_active_srgn_pages = 0;
+ int pool_size;
int ret;
shost_for_each_device(sdev, hba->host) {
@@ -635,6 +1511,9 @@ static void ufshpb_scan_hpb_lu(struct ufs_hba *hba,
if (!hpb)
continue;
+ tot_active_srgn_pages += hpb_lu_info.max_active_rgns *
+ hpb->srgns_per_rgn * hpb->pages_per_srgn;
+
hpb->sdev_ufs_lu = sdev;
sdev->hostdata = hpb;
@@ -647,10 +1526,78 @@ static void ufshpb_scan_hpb_lu(struct ufs_hba *hba,
ufshpb_check_hpb_reset_query(hba);
+ pool_size = DIV_ROUND_UP(ufshpb_host_map_kbytes * 1024, PAGE_SIZE);
+ if (pool_size > tot_active_srgn_pages) {
+ dev_info(hba->dev,
+ "reset pool_size to %lu KB.\n",
+ tot_active_srgn_pages * PAGE_SIZE / 1024);
+ mempool_resize(ufshpb_mctx_pool, tot_active_srgn_pages);
+ mempool_resize(ufshpb_page_pool, tot_active_srgn_pages);
+ }
+
list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
dev_info(hba->dev, "set state to present\n");
ufshpb_set_state(hpb, HPB_PRESENT);
+
+ if ((hpb->lu_pinned_end - hpb->lu_pinned_start) > 0) {
+ dev_info(hba->dev,
+ "loading pinned regions %d - %d\n",
+ hpb->lu_pinned_start, hpb->lu_pinned_end);
+ queue_work(ufshpb_wq, &hpb->map_work);
+ }
+ }
+}
+
+static int ufshpb_init_mem_wq(void)
+{
+ int ret;
+ unsigned int pool_size;
+
+ ufshpb_mctx_cache = kmem_cache_create("ufshpb_mctx_cache",
+ sizeof(struct ufshpb_map_ctx),
+ 0, 0, NULL);
+ if (!ufshpb_mctx_cache) {
+ pr_err("ufshpb: cannot init mctx cache\n");
+ return -ENOMEM;
+ }
+
+ ufshpb_host_map_kbytes = CONFIG_SCSI_UFS_HPB_HOST_MEM;
+ pool_size = DIV_ROUND_UP(ufshpb_host_map_kbytes * 1024, PAGE_SIZE);
+ pr_info("%s:%d ufshpb_host_map_kbytes %u pool_size %u\n",
+ __func__, __LINE__, ufshpb_host_map_kbytes, pool_size);
+
+ ufshpb_mctx_pool = mempool_create_slab_pool(pool_size,
+ ufshpb_mctx_cache);
+ if (!ufshpb_mctx_pool) {
+ pr_err("ufshpb: cannot init mctx pool\n");
+ ret = -ENOMEM;
+ goto release_mctx_cache;
+ }
+
+ ufshpb_page_pool = mempool_create_page_pool(pool_size, 0);
+ if (!ufshpb_page_pool) {
+ pr_err("ufshpb: cannot init page pool\n");
+ ret = -ENOMEM;
+ goto release_mctx_pool;
+ }
+
+ ufshpb_wq = alloc_workqueue("ufshpb-wq",
+ WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
+ if (!ufshpb_wq) {
+ pr_err("ufshpb: alloc workqueue failed\n");
+ ret = -ENOMEM;
+ goto release_page_pool;
}
+
+ return 0;
+
+release_page_pool:
+ mempool_destroy(ufshpb_page_pool);
+release_mctx_pool:
+ mempool_destroy(ufshpb_mctx_pool);
+release_mctx_cache:
+ kmem_cache_destroy(ufshpb_mctx_cache);
+ return ret;
}
static void ufshpb_init(void *data, async_cookie_t cookie)
@@ -661,6 +1608,9 @@ static void ufshpb_init(void *data, async_cookie_t cookie)
char *desc_buf;
int ret;
+ if (ufshpb_init_mem_wq())
+ return;
+
hba = container_of(ufsf, struct ufs_hba, ufsf);
desc_buf = kzalloc(QUERY_DESC_MAX_SIZE, GFP_KERNEL);
@@ -716,14 +1666,25 @@ void ufshpb_remove(struct ufs_hba *hba)
sdev = hpb->sdev_ufs_lu;
sdev->hostdata = NULL;
+ ufshpb_cancel_jobs(hpb);
+
ufshpb_destroy_region_tbl(hpb);
+ kmem_cache_destroy(hpb->map_req_cache);
+ kmem_cache_destroy(hpb->m_page_cache);
+
list_del_init(&hpb->list_hpb_lu);
ufshpb_remove_sysfs(hpb);
kfree(hpb);
}
+ mempool_destroy(ufshpb_page_pool);
+ mempool_destroy(ufshpb_mctx_pool);
+ kmem_cache_destroy(ufshpb_mctx_cache);
+
+ destroy_workqueue(ufshpb_wq);
+
dev_info(hba->dev, "ufshpb: remove success\n");
}
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index b91b447ed0c8..4ed091f5bd57 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -99,10 +99,36 @@ struct ufshpb_lu_info {
int max_active_rgns;
};
+struct ufshpb_active_field {
+ __be16 active_rgn;
+ __be16 active_srgn;
+} __packed;
+
+struct ufshpb_rsp_field {
+ __be16 sense_data_len;
+ u8 desc_type;
+ u8 additional_len;
+ u8 hpb_type;
+ u8 reserved;
+ u8 active_rgn_cnt;
+ u8 inactive_rgn_cnt;
+ struct ufshpb_active_field hpb_active_field[2];
+ __be16 hpb_inactive_field[2];
+} __packed;
+
+struct ufshpb_map_ctx {
+ struct page **m_page;
+ unsigned long *ppn_dirty;
+};
+
struct ufshpb_subregion {
+ struct ufshpb_map_ctx *mctx;
enum HPB_SRGN_STATE srgn_state;
int rgn_idx;
int srgn_idx;
+
+ /* below information is used by rsp_list */
+ struct list_head list_act_srgn;
};
struct ufshpb_region {
@@ -110,6 +136,39 @@ struct ufshpb_region {
enum HPB_RGN_STATE rgn_state;
int rgn_idx;
int srgn_cnt;
+
+ /* below information is used by rsp_list */
+ struct list_head list_inact_rgn;
+
+ /* below information is used by lru */
+ struct list_head list_lru_rgn;
+};
+
+/**
+ * struct ufshpb_req - UFSHPB READ BUFFER (for caching map) request structure
+ * @req: block layer request for READ BUFFER
+ * @bio: bio for holding map page
+ * @hpb: ufshpb_lu structure related to the L2P map
+ * @mctx: L2P map information
+ * @rgn_idx: target region index
+ * @srgn_idx: target sub-region index
+ * @lun: target logical unit number
+ */
+struct ufshpb_req {
+ struct request *req;
+ struct bio *bio;
+ struct ufshpb_lu *hpb;
+ struct ufshpb_map_ctx *mctx;
+
+ unsigned int rgn_idx;
+ unsigned int srgn_idx;
+ unsigned int lun;
+};
+
+struct victim_select_info {
+ struct list_head lh_lru_rgn;
+ int max_lru_active_cnt; /* supported hpb #region - pinned #region */
+ atomic_t active_cnt;
};
struct ufshpb_stats {
@@ -132,6 +191,16 @@ struct ufshpb_lu {
spinlock_t hpb_state_lock;
atomic_t hpb_state; /* hpb_state_lock */
+ spinlock_t rsp_list_lock;
+ struct list_head lh_act_srgn; /* rsp_list_lock */
+ struct list_head lh_inact_rgn; /* rsp_list_lock */
+
+ /* cached L2P map management worker */
+ struct work_struct map_work;
+
+ /* for selecting victim */
+ struct victim_select_info lru_info;
+
/* pinned region information */
u32 lu_pinned_start;
u32 lu_pinned_end;
@@ -150,6 +219,9 @@ struct ufshpb_lu {
struct ufshpb_stats stats;
+ struct kmem_cache *map_req_cache;
+ struct kmem_cache *m_page_cache;
+
struct ufsf_feature_info *ufsf;
struct list_head list_hpb_lu;
};
--
2.17.1
This patch changes the read I/O to the HPB read I/O.
If the logical address of the read I/O belongs to active sub-region, the
HPB driver modifies the read I/O command to HPB read. It modifies the UPIU
command of UFS instead of modifying the existing SCSI command.
In the HPB version 1.0, the maximum read I/O size that can be converted to
HPB read is 4KB.
The dirty map of the active sub-region prevents an incorrect HPB read that
would use a stale physical page number, i.e. one invalidated by a previous
write I/O.
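The stale-PPN guard can be illustrated with a minimal userspace sketch: a reduced, hypothetical stand-in for the per-sub-region `ppn_dirty` bitmap that `ufshpb_set_ppn_dirty()` maintains and `ufshpb_test_ppn_dirty()` checks (entry count and names are simplified, not the driver's real layout):

```c
#include <assert.h>
#include <stdbool.h>

/* Reduced stand-in: one sub-region with 64 L2P entries and one dirty
 * bit per entry (the driver keeps this in srgn->mctx->ppn_dirty). */
#define ENTRIES_PER_SRGN 64

static unsigned char ppn_dirty[ENTRIES_PER_SRGN / 8];

/* WRITE/DISCARD path: mark the touched entries dirty. */
static void set_dirty(int offset, int cnt)
{
	for (int i = offset; i < offset + cnt; i++)
		ppn_dirty[i / 8] |= 1u << (i % 8);
}

/* READ path: if any entry in the range is dirty, the cached PPN may be
 * stale, so the command must stay a normal READ10. */
static bool test_dirty(int offset, int cnt)
{
	for (int i = offset; i < offset + cnt; i++)
		if (ppn_dirty[i / 8] & (1u << (i % 8)))
			return true;
	return false;
}
```

A read whose LPN range overlaps any set bit falls back to a normal READ10, since the cached PPN may no longer match the device-side mapping.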
Tested-by: Bean Huo <[email protected]>
Signed-off-by: Daejun Park <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 227 ++++++++++++++++++++++++++++++++++++++
1 file changed, 227 insertions(+)
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 25cd7153f102..fe24b2277621 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -46,6 +46,22 @@ static inline int ufshpb_is_valid_srgn(struct ufshpb_region *rgn,
srgn->srgn_state == HPB_SRGN_VALID;
}
+static inline bool ufshpb_is_read_cmd(struct scsi_cmnd *cmd)
+{
+ return req_op(cmd->request) == REQ_OP_READ;
+}
+
+static inline bool ufshpb_is_write_discard_cmd(struct scsi_cmnd *cmd)
+{
+ return op_is_write(req_op(cmd->request)) ||
+ op_is_discard(req_op(cmd->request));
+}
+
+static inline bool ufshpb_is_support_chunk(int transfer_len)
+{
+ return transfer_len <= HPB_MULTI_CHUNK_HIGH;
+}
+
static inline bool ufshpb_is_general_lun(int lun)
{
return lun < UFS_UPIU_MAX_UNIT_NUM_ID;
@@ -112,8 +128,219 @@ static inline void ufshpb_set_state(struct ufshpb_lu *hpb, int state)
atomic_set(&hpb->hpb_state, state);
}
+static inline u32 ufshpb_get_lpn(struct scsi_cmnd *cmnd)
+{
+ return blk_rq_pos(cmnd->request) >>
+ (ilog2(cmnd->device->sector_size) - 9);
+}
+
+static inline unsigned int ufshpb_get_len(struct scsi_cmnd *cmnd)
+{
+ return blk_rq_sectors(cmnd->request) >>
+ (ilog2(cmnd->device->sector_size) - 9);
+}
+
+static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
+ int srgn_idx, int srgn_offset, int cnt)
+{
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+ int set_bit_len;
+ int bitmap_len = hpb->entries_per_srgn;
+
+next_srgn:
+ rgn = hpb->rgn_tbl + rgn_idx;
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ if ((srgn_offset + cnt) > bitmap_len)
+ set_bit_len = bitmap_len - srgn_offset;
+ else
+ set_bit_len = cnt;
+
+ if (rgn->rgn_state != HPB_RGN_INACTIVE &&
+ srgn->srgn_state == HPB_SRGN_VALID)
+ bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len);
+
+ srgn_offset = 0;
+ if (++srgn_idx == hpb->srgns_per_rgn) {
+ srgn_idx = 0;
+ rgn_idx++;
+ }
+
+ cnt -= set_bit_len;
+ if (cnt > 0)
+ goto next_srgn;
+
+ WARN_ON(cnt < 0);
+}
+
+static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
+ int srgn_idx, int srgn_offset, int cnt)
+{
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+ int bitmap_len = hpb->entries_per_srgn;
+ int bit_len;
+
+next_srgn:
+ rgn = hpb->rgn_tbl + rgn_idx;
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ if (!ufshpb_is_valid_srgn(rgn, srgn))
+ return true;
+
+ /*
+ * If the region state is active, mctx must be allocated.
+ * In this case, check whether the region has been evicted or
+ * the mctx allocation failed.
+ */
+ WARN_ON(!srgn->mctx);
+
+ if ((srgn_offset + cnt) > bitmap_len)
+ bit_len = bitmap_len - srgn_offset;
+ else
+ bit_len = cnt;
+
+ if (find_next_bit(srgn->mctx->ppn_dirty,
+ bit_len, srgn_offset) >= srgn_offset)
+ return true;
+
+ srgn_offset = 0;
+ if (++srgn_idx == hpb->srgns_per_rgn) {
+ srgn_idx = 0;
+ rgn_idx++;
+ }
+
+ cnt -= bit_len;
+ if (cnt > 0)
+ goto next_srgn;
+
+ return false;
+}
+
+static u64 ufshpb_get_ppn(struct ufshpb_lu *hpb,
+ struct ufshpb_map_ctx *mctx, int pos, int *error)
+{
+ u64 *ppn_table;
+ struct page *page;
+ int index, offset;
+
+ index = pos / (PAGE_SIZE / HPB_ENTRY_SIZE);
+ offset = pos % (PAGE_SIZE / HPB_ENTRY_SIZE);
+
+ page = mctx->m_page[index];
+ if (unlikely(!page)) {
+ *error = -ENOMEM;
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "error. cannot find page in mctx\n");
+ return 0;
+ }
+
+ ppn_table = page_address(page);
+ if (unlikely(!ppn_table)) {
+ *error = -ENOMEM;
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "error. cannot get ppn_table\n");
+ return 0;
+ }
+
+ return ppn_table[offset];
+}
+
+static inline void
+ufshpb_get_pos_from_lpn(struct ufshpb_lu *hpb, unsigned long lpn, int *rgn_idx,
+ int *srgn_idx, int *offset)
+{
+ int rgn_offset;
+
+ *rgn_idx = lpn >> hpb->entries_per_rgn_shift;
+ rgn_offset = lpn & hpb->entries_per_rgn_mask;
+ *srgn_idx = rgn_offset >> hpb->entries_per_srgn_shift;
+ *offset = rgn_offset & hpb->entries_per_srgn_mask;
+}
+
+static void
+ufshpb_set_hpb_read_to_upiu(struct ufshpb_lu *hpb, struct ufshcd_lrb *lrbp,
+ u32 lpn, u64 ppn, unsigned int transfer_len)
+{
+ unsigned char *cdb = lrbp->ucd_req_ptr->sc.cdb;
+
+ cdb[0] = UFSHPB_READ;
+
+ put_unaligned_be64(ppn, &cdb[6]);
+ cdb[14] = transfer_len;
+}
+
+/* routine : READ10 -> HPB_READ */
void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
{
+ struct ufshpb_lu *hpb;
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+ struct scsi_cmnd *cmd = lrbp->cmd;
+ u32 lpn;
+ u64 ppn;
+ unsigned long flags;
+ int transfer_len, rgn_idx, srgn_idx, srgn_offset;
+ int err = 0;
+
+ hpb = ufshpb_get_hpb_data(cmd);
+ if (!hpb)
+ return;
+
+ WARN_ON(hpb->lun != cmd->device->lun);
+ if (!ufshpb_is_write_discard_cmd(cmd) &&
+ !ufshpb_is_read_cmd(cmd))
+ return;
+
+ transfer_len = ufshpb_get_len(cmd);
+ if (unlikely(!transfer_len))
+ return;
+
+ lpn = ufshpb_get_lpn(cmd);
+ ufshpb_get_pos_from_lpn(hpb, lpn, &rgn_idx, &srgn_idx, &srgn_offset);
+ rgn = hpb->rgn_tbl + rgn_idx;
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ /* If command type is WRITE or DISCARD, set bitmap as dirty */
+ if (ufshpb_is_write_discard_cmd(cmd)) {
+ spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+ ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
+ transfer_len);
+ spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+ return;
+ }
+
+ WARN_ON(!ufshpb_is_read_cmd(cmd));
+
+ if (!ufshpb_is_support_chunk(transfer_len))
+ return;
+
+ spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+ if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
+ transfer_len)) {
+ atomic_inc(&hpb->stats.miss_cnt);
+ spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+ return;
+ }
+
+ ppn = ufshpb_get_ppn(hpb, srgn->mctx, srgn_offset, &err);
+ spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+ if (unlikely(err)) {
+ /*
+ * In this case, the region state is active,
+ * but the ppn table is not allocated.
+ * Make sure that ppn table must be allocated on
+ * active state.
+ */
+ WARN_ON(true);
+ dev_err(hba->dev, "ufshpb_get_ppn failed. err %d\n", err);
+ return;
+ }
+
+ ufshpb_set_hpb_read_to_upiu(hpb, lrbp, lpn, ppn, transfer_len);
+
+ atomic_inc(&hpb->stats.hit_cnt);
}
static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
--
2.17.1
Hi Avri
what is your plan for this series patchset?
Bean
>
> Hi Avri
> what is your plan for this series patchset?
I already acked it.
Waiting for the senior members to decide.
Thanks,
Avri
>
> Bean
On Thu, 2020-08-06 at 09:56 +0000, Avri Altman wrote:
> >
> > Hi Avri
> > what is your plan for this series patchset?
>
> I already acked it.
> Waiting for the senior members to decide.
>
> Thanks,
> Avri
>
> >
> > Bean
>
>
we didn't see your Acked-by in patchwork; would you like to add it?
Just as a reminder that you have agreed to mainline this series
patchset.
thanks,
Bean
>
> On Thu, 2020-08-06 at 09:56 +0000, Avri Altman wrote:
> > >
> > > Hi Avri
> > > what is your plan for this series patchset?
> >
> > I already acked it.
> > Waiting for the senior members to decide.
> >
> > Thanks,
> > Avri
> >
> > >
> > > Bean
> >
> >
> we didn't see you Acked-by in the pathwork, would you like to add them?
> Just for reminding us that you have agreed to mainline this series
> patchset.
I acked it - https://www.spinics.net/lists/linux-scsi/msg144660.html
And asked Martin to move forward - https://www.spinics.net/lists/linux-scsi/msg144738.html
Which he did, and got some sparse errors: https://www.spinics.net/lists/linux-scsi/msg144977.html
Which I asked Daejun to fix - https://www.spinics.net/lists/linux-scsi/msg144987.html
For the next chain of events I guess you can follow by yourself.
Thanks,
Avri
On Thu, 2020-08-06 at 10:12 +0000, Avri Altman wrote:
> > >
> >
> > we didn't see you Acked-by in the pathwork, would you like to add
> > them?
> > Just for reminding us that you have agreed to mainline this series
> > patchset.
>
> I acked it - https://www.spinics.net/lists/linux-scsi/msg144660.html
> And asked Martin to move forward -
> https://www.spinics.net/lists/linux-scsi/msg144738.html
> Which he did, and got some sparse errors:
> https://www.spinics.net/lists/linux-scsi/msg144977.html
> Which I asked Daejun to fix -
> https://www.spinics.net/lists/linux-scsi/msg144987.html
>
> For the next chain of events I guess you can follow by yourself.
>
> Thanks,
> Avri
Avri
Sorry for confusing you. Yes, I knew that, and I am following it.
I meant adding an Acked-by tag to the patchset itself, so that your ack
shows up in patchwork and others can see it without tracing back
through the email history.
Hi Daejun
I think you can add Avri's Acked-by tag in your patchset, just for
quickly moving forward and reminding.
thanks,
Bean
>
> On Thu, 2020-08-06 at 10:12 +0000, Avri Altman wrote:
> > > >
> > >
> > > we didn't see you Acked-by in the pathwork, would you like to add
> > > them?
> > > Just for reminding us that you have agreed to mainline this series
> > > patchset.
> >
> > I acked it - https://www.spinics.net/lists/linux-scsi/msg144660.html
> > And asked Martin to move forward -
> > https://www.spinics.net/lists/linux-scsi/msg144738.html
> > Which he did, and got some sparse errors:
> > https://www.spinics.net/lists/linux-scsi/msg144977.html
> > Which I asked Daejun to fix -
> > https://www.spinics.net/lists/linux-scsi/msg144987.html
> >
> > For the next chain of events I guess you can follow by yourself.
> >
> > Thanks,
> > Avri
>
> Avri
> Sorry for making you confusing. yes, I knew that, and following.
> I mean Acked-by tag in the patchset, then we see your acked in the
> patchwork, and let others know that you acked it, rather than going
> backtrack history email.
>
> Hi Daejun
> I think you can add Avri's Acked-by tag in your patchset, just for
> quickly moving forward and reminding.
Ahhh - One moment please -
While rebasing the v8 on my platform, I noticed some substantial changes since v6.
e.g. the hpb lun ref counting isn't there anymore, as well as some more stuff.
While those changes might be for the best, I think any Tested-by tag should be re-assigned.
Anyway, as for myself, I am not planning to put any more time in this,
until there is a clear decision where this series is going to.
Martin - Are you considering merging the HPB feature into the mainline kernel eventually?
Thanks,
Avri
>
> thanks,
> Bean
> -----Original Message-----
> From: Avri Altman <[email protected]>
> Sent: 06 August 2020 19:27
> To: Bean Huo <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]; ALIM AKHTAR
> <[email protected]>
> Cc: [email protected]; [email protected]; Sang-yoon Oh
> <[email protected]>; Sung-Jun Park
> <[email protected]>; yongmyung lee
> <[email protected]>; Jinyoung CHOI <[email protected]>;
> Adel Choi <[email protected]>; BoRam Shin
> <[email protected]>
> Subject: RE: [PATCH v8 0/4] scsi: ufs: Add Host Performance Booster Support
>
>
> >
> > On Thu, 2020-08-06 at 10:12 +0000, Avri Altman wrote:
> > > > >
> > > >
> > > > we didn't see you Acked-by in the pathwork, would you like to add
> > > > them?
> > > > Just for reminding us that you have agreed to mainline this series
> > > > patchset.
> > >
> > > I acked it - https://www.spinics.net/lists/linux-scsi/msg144660.html
> > > And asked Martin to move forward -
> > > https://www.spinics.net/lists/linux-scsi/msg144738.html
> > > Which he did, and got some sparse errors:
> > > https://www.spinics.net/lists/linux-scsi/msg144977.html
> > > Which I asked Daejun to fix -
> > > https://www.spinics.net/lists/linux-scsi/msg144987.html
> > >
> > > For the next chain of events I guess you can follow by yourself.
> > >
> > > Thanks,
> > > Avri
> >
> > Avri
> > Sorry for making you confusing. yes, I knew that, and following.
> > I mean Acked-by tag in the patchset, then we see your acked in the
> > patchwork, and let others know that you acked it, rather than going
> > backtrack history email.
> >
> > Hi Daejun
> > I think you can add Avri's Acked-by tag in your patchset, just for
> > quickly moving forward and reminding.
> Ahhh - One moment please -
> While rebasing the v8 on my platform, I noticed some substantial changes since
> v6.
> e.g. the hpb lun ref counting isn't there anymore, as well as some more stuff.
> While those changes might be only for the best, I think any tested-by tag should
> be re-assign.
>
> Anyway, as for myself, I am not planning to put any more time in this, until there
> is a clear decision where this series is going to.
>
> Martin - Are you considering to merge the HPB feature eventually to mainline
> kernel?
>
V8 has removed the "UFS feature layer", which was the main topic of discussion. What else do we think is blocking this from going mainline?
Bart / Martin, any thought?
> Thanks,
> Avri
> >
> > thanks,
> > Bean
On 2020-08-06 09:36, Alim Akhtar wrote:
> V8 has removed the "UFS feature layer" which was the main topic of discussion. What else we thing is blocking this to be in mainline?
> Bart / Martin, any thought?
Thank you for having posted a version with the UFS feature layer removed. I
will try to find some time this weekend to review version 8 of this patch
series.
Bart.
Hi Avri,
> >
> > On Thu, 2020-08-06 at 10:12 +0000, Avri Altman wrote:
> > > > >
> > > >
> > > > we didn't see you Acked-by in the pathwork, would you like to add
> > > > them?
> > > > Just for reminding us that you have agreed to mainline this series
> > > > patchset.
> > >
> > > I acked it - https://www.spinics.net/lists/linux-scsi/msg144660.html
> > > And asked Martin to move forward -
> > > https://www.spinics.net/lists/linux-scsi/msg144738.html
> > > Which he did, and got some sparse errors:
> > > https://www.spinics.net/lists/linux-scsi/msg144977.html
> > > Which I asked Daejun to fix -
> > > https://www.spinics.net/lists/linux-scsi/msg144987.html
> > >
> > > For the next chain of events I guess you can follow by yourself.
> > >
> > > Thanks,
> > > Avri
> >
> > Avri
> > Sorry for making you confusing. yes, I knew that, and following.
> > I mean Acked-by tag in the patchset, then we see your acked in the
> > patchwork, and let others know that you acked it, rather than going
> > backtrack history email.
> >
> > Hi Daejun
> > I think you can add Avri's Acked-by tag in your patchset, just for
> > quickly moving forward and reminding.
> Ahhh - One moment please -
> While rebasing the v8 on my platform, I noticed some substantial changes since v6.
> e.g. the hpb lun ref counting isn't there anymore, as well as some more stuff.
> While those changes might be only for the best, I think any tested-by tag should be re-assign.
In this version, HPB is no longer a loadable module; it is built into the
UFS core driver. So I removed the reference counter for HPB.
> Anyway, as for myself, I am not planning to put any more time in this,
> until there is a clear decision where this series is going to.
>
> Martin - Are you considering to merge the HPB feature eventually to mainline kernel?
>
> Thanks,
> Avri
> >
> > thanks,
> > Bean
>
>
Thanks,
Daejun
Avri,
> Martin - Are you considering to merge the HPB feature eventually to
> mainline kernel?
I promise to take a look at the new series. But I can't say I'm a big
fan of how this feature was defined in the spec.
And - as discussed a couple of weeks ago - I would still like to see
some supporting evidence from a realistic workload and not a synthetic
benchmark.
--
Martin K. Petersen Oracle Linux Engineering
On 2020-08-06 02:11, Daejun Park wrote:
> +static void ufshpb_issue_hpb_reset_query(struct ufs_hba *hba)
> +{
> + int err;
> + int retries;
> +
> + for (retries = 0; retries < HPB_RESET_REQ_RETRIES; retries++) {
> + err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_SET_FLAG,
> + QUERY_FLAG_IDN_HPB_RESET, 0, NULL);
> + if (err)
> + dev_dbg(hba->dev,
> + "%s: failed with error %d, retries %d\n",
> + __func__, err, retries);
> + else
> + break;
> + }
> +
> + if (err) {
> + dev_err(hba->dev,
> + "%s setting fHpbReset flag failed with error %d\n",
> + __func__, err);
> + return;
> + }
> +}
Please change the "break" into an early return, remove the last
occurrence "if (err)" and remove the final return statement.
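The suggested restructuring might look like the sketch below. To keep it standalone, the device query is replaced by a hypothetical mock (the real function calls ufshcd_query_flag() and returns void; this sketch returns the last error so the control flow can be exercised):

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>

#define HPB_RESET_REQ_RETRIES 10

/* Hypothetical mock of ufshcd_query_flag(): fail twice, then succeed. */
static int mock_set_reset_flag_calls;

static int mock_set_reset_flag(void)
{
	return ++mock_set_reset_flag_calls <= 2 ? -EIO : 0;
}

/* Restructured as suggested: return as soon as the flag is set, so the
 * trailing "if (err)" block and the final return statement disappear. */
static int ufshpb_issue_hpb_reset_query_sketch(void)
{
	int err = 0;
	int retries;

	for (retries = 0; retries < HPB_RESET_REQ_RETRIES; retries++) {
		err = mock_set_reset_flag();
		if (!err)
			return 0;	/* early return replaces "break" */
		fprintf(stderr, "retry %d failed with error %d\n",
			retries, err);
	}
	return err;
}
```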
> +static void ufshpb_check_hpb_reset_query(struct ufs_hba *hba)
> +{
> + int err;
> + bool flag_res = true;
> + int try = 0;
> +
> + /* wait for the device to complete HPB reset query */
> + do {
> + if (++try == HPB_RESET_REQ_RETRIES)
> + break;
> +
> + dev_info(hba->dev,
> + "%s start flag reset polling %d times\n",
> + __func__, try);
> +
> + /* Poll fHpbReset flag to be cleared */
> + err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_READ_FLAG,
> + QUERY_FLAG_IDN_HPB_RESET, 0, &flag_res);
> + usleep_range(1000, 1100);
> + } while (flag_res);
> +
> + if (err) {
> + dev_err(hba->dev,
> + "%s reading fHpbReset flag failed with error %d\n",
> + __func__, err);
> + return;
> + }
> +
> + if (flag_res) {
> + dev_err(hba->dev,
> + "%s fHpbReset was not cleared by the device\n",
> + __func__);
> + }
> +}
Should "polling %d times" perhaps be changed into "attempt %d"?
The "if (err)" statement may be reached without "err" having been
initialized. Please fix.
Additionally, please change the do-while loop into a for-loop, e.g. as
follows:
for (try = 0; try < HPB_RESET_REQ_RETRIES; try++)
...
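A possible shape of the fixed polling loop, again with a hypothetical mock standing in for the fHpbReset flag read so the sketch runs standalone:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define HPB_RESET_REQ_RETRIES 10

/* Hypothetical mock of reading fHpbReset: reads back set twice,
 * then clear. */
static int mock_polls;

static int mock_read_reset_flag(bool *flag_res)
{
	*flag_res = ++mock_polls < 3;
	return 0;
}

/* Restructured as suggested: err is always initialized before it is
 * tested, and the do-while becomes a bounded for-loop. */
static int ufshpb_check_hpb_reset_query_sketch(void)
{
	bool flag_res = true;
	int err = 0;
	int try;

	for (try = 0; try < HPB_RESET_REQ_RETRIES && flag_res; try++) {
		err = mock_read_reset_flag(&flag_res);
		if (err)
			return err;
		/* the real driver sleeps here: usleep_range(1000, 1100) */
	}
	return flag_res ? -ETIMEDOUT : 0;
}
```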
Thanks,
Bart.
On 2020-08-06 02:02, Daejun Park wrote:
> @@ -537,6 +548,7 @@ struct ufs_dev_info {
> u8 *model;
> u16 wspecversion;
> u32 clk_gating_wait_us;
> + u8 b_ufs_feature_sup;
> u32 d_ext_ufs_feature_sup;
> u8 b_wb_buffer_type;
> u32 d_wb_alloc_units;
>
Hmm ... shouldn't this variable be introduced in the patch that introduces
the code that sets and uses this variable?
How about making it clear in the patch subject that this patch adds protocol
constants related to HPB?
Otherwise this patch looks good to me.
Bart.
On 2020-08-06 02:11, Daejun Park wrote:
> This is a patch for the HPB feature.
> This patch adds HPB function calls to UFS core driver.
>
> The mininum size of the memory pool used in the HPB is implemented as a
^^^^^^^
minimum?
> Kconfig parameter (SCSI_UFS_HPB_HOST_MEM), so that it can be configurable.
> +config SCSI_UFS_HPB
> + bool "Support UFS Host Performance Booster"
> + depends on SCSI_UFSHCD
> + help
> + A UFS HPB Feature improves random read performance. It caches
^ ^^^^^^^
The? feature?
> + L2P map of UFS to host DRAM. The driver uses HPB read command
> + by piggybacking physical page number for bypassing FTL's L2P address
> + translation.
Please explain what L2P and FTL mean. Not everyone is familiar with SSD
internals.
> +config SCSI_UFS_HPB_HOST_MEM
> + int "Host-side cached memory size (KB) for HPB support"
> + default 32
> + depends on SCSI_UFS_HPB
> + help
> + The mininum size of the memory pool used in the HPB module. It can
> + be configurable by the user. If this value is larger than required
> + memory size, kernel resizes cached memory size.
^^^^^^^ ^^^^^^^^^^^^^^^^^^
reduces? cache size?
Please make this a kernel module parameter instead of a compile-time constant.
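A sketch of what the runtime-configurable sizing could look like; module_param() appears only in a comment since this fragment mocks the kernel environment, and the clamp mirrors the scan-time pool resize in the patch:

```c
#include <assert.h>

#define PAGE_SIZE 4096u
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* In the driver this would become a runtime knob, e.g.:
 *   module_param(ufshpb_host_map_kbytes, uint, 0444);
 * rather than the CONFIG_SCSI_UFS_HPB_HOST_MEM compile-time constant. */
static unsigned int ufshpb_host_map_kbytes = 32;

/* Mirrors the clamp in ufshpb_scan_hpb_lu(): never keep more pool
 * pages than all active sub-regions could ever map. */
static unsigned int ufshpb_pool_size_pages(unsigned int tot_active_srgn_pages)
{
	unsigned int pool_size =
		DIV_ROUND_UP(ufshpb_host_map_kbytes * 1024, PAGE_SIZE);

	return pool_size > tot_active_srgn_pages ?
		tot_active_srgn_pages : pool_size;
}
```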
> +#ifndef CONFIG_SCSI_UFS_HPB
> +static void ufshpb_resume(struct ufs_hba *hba) {}
> +static void ufshpb_suspend(struct ufs_hba *hba) {}
> +static void ufshpb_reset(struct ufs_hba *hba) {}
> +static void ufshpb_reset_host(struct ufs_hba *hba) {}
> +static void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
> +static void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
> +static void ufshpb_remove(struct ufs_hba *hba) {}
> +static void ufshpb_scan_feature(struct ufs_hba *hba) {}
> +#endif
Please move these definitions into ufshpb.h since that is the
recommended Linux kernel coding style.
> diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
> index b2ef18f1b746..904c19796e09 100644
> --- a/drivers/scsi/ufs/ufshcd.h
> +++ b/drivers/scsi/ufs/ufshcd.h
> @@ -47,6 +47,9 @@
> #include "ufs.h"
> #include "ufs_quirks.h"
> #include "ufshci.h"
> +#ifdef CONFIG_SCSI_UFS_HPB
> +#include "ufshpb.h"
> +#endif
Please move #ifdef CONFIG_SCSI_UFS_HPB / #endif into ufshpb.h. From
Documentation/process/4.Coding.rst: "As a general rule, #ifdef use
should be confined to header files whenever possible."
> +struct ufsf_feature_info {
> + atomic_t slave_conf_cnt;
> + wait_queue_head_t sdev_wait;
> +};
Please add a comment above this data structure that explains the role
of this data structure and also what "ufsf" stands for.
> +static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb);
I don't think that this forward declaration is necessary so please leave it
out.
> +static inline int ufshpb_is_valid_srgn(struct ufshpb_region *rgn,
> + struct ufshpb_subregion *srgn)
> +{
> + return rgn->rgn_state != HPB_RGN_INACTIVE &&
> + srgn->srgn_state == HPB_SRGN_VALID;
> +}
Please do not declare functions inside .c files inline but instead let
the compiler decide which functions to inline. Modern compilers are really
good at this.
> +static struct kobj_type ufshpb_ktype = {
> + .sysfs_ops = &ufshpb_sysfs_ops,
> + .release = NULL,
> +};
If the release method of a kobj_type is NULL that is a strong sign that
there is something wrong ...
> +static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb)
> +{
> + int ret;
> +
> + ufshpb_stat_init(hpb);
> +
> + kobject_init(&hpb->kobj, &ufshpb_ktype);
> + mutex_init(&hpb->sysfs_lock);
> +
> + ret = kobject_add(&hpb->kobj, kobject_get(&hba->dev->kobj),
> + "ufshpb_lu%d", hpb->lun);
> +
> + if (ret)
> + return ret;
> +
> + ret = sysfs_create_group(&hpb->kobj, &ufshpb_sysfs_group);
> +
> + if (ret) {
> + dev_err(hba->dev, "ufshpb_lu%d create file error\n", hpb->lun);
> + return ret;
> + }
> +
> + dev_info(hba->dev, "ufshpb_lu%d sysfs adds uevent", hpb->lun);
> + kobject_uevent(&hpb->kobj, KOBJ_ADD);
> +
> + return 0;
> +}
Please attach these sysfs attributes to /sys/class/scsi_device/*/device
instead of creating a new kobject. Consider using the following
scsi_host_template member to define LUN sysfs attributes:
/*
* Pointer to the SCSI device attribute groups for this host,
* NULL terminated.
*/
const struct attribute_group **sdev_groups;
> +static void ufshpb_scan_hpb_lu(struct ufs_hba *hba,
> + struct ufshpb_dev_info *hpb_dev_info,
> + u8 *desc_buf)
> +{
> + struct scsi_device *sdev;
> + struct ufshpb_lu *hpb;
> + int find_hpb_lu = 0;
> + int ret;
> +
> + shost_for_each_device(sdev, hba->host) {
> + struct ufshpb_lu_info hpb_lu_info = { 0 };
> + int lun = sdev->lun;
> +
> + if (lun >= hba->dev_info.max_lu_supported)
> + continue;
> +
> + ret = ufshpb_get_lu_info(hba, lun, &hpb_lu_info, desc_buf);
> + if (ret)
> + continue;
> +
> + hpb = ufshpb_alloc_hpb_lu(hba, lun, hpb_dev_info,
> + &hpb_lu_info);
> + if (!hpb)
> + continue;
> +
> + hpb->sdev_ufs_lu = sdev;
> + sdev->hostdata = hpb;
> +
> + list_add_tail(&hpb->list_hpb_lu, &lh_hpb_lu);
> + find_hpb_lu++;
> + }
> +
> + if (!find_hpb_lu)
> + return;
> +
> + ufshpb_check_hpb_reset_query(hba);
> +
> + list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
> + dev_info(hba->dev, "set state to present\n");
> + ufshpb_set_state(hpb, HPB_PRESENT);
> + }
> +}
Please remove the loop from the above function, make this function accept a
SCSI device pointer as argument and call this function from
ufshcd_slave_configure() or ufshcd_hpb_configure().
> +static void ufshpb_init(void *data, async_cookie_t cookie)
> +{
> + struct ufsf_feature_info *ufsf = (struct ufsf_feature_info *)data;
> + struct ufs_hba *hba;
> + struct ufshpb_dev_info hpb_dev_info = { 0 };
> + char *desc_buf;
> + int ret;
> +
> + hba = container_of(ufsf, struct ufs_hba, ufsf);
> +
> + desc_buf = kzalloc(QUERY_DESC_MAX_SIZE, GFP_KERNEL);
> + if (!desc_buf)
> + goto release_desc_buf;
> +
> + ret = ufshpb_get_dev_info(hba, &hpb_dev_info, desc_buf);
> + if (ret)
> + goto release_desc_buf;
> +
> + /*
> + * Because HPB driver uses scsi_device data structure,
> + * we should wait at this point until finishing initialization of all
> + * scsi devices. Even if timeout occurs, HPB driver will search
> + * the scsi_device list on struct scsi_host (shost->__host list_head)
> + * and can find out HPB logical units in all scsi_devices
> + */
> + wait_event_timeout(hba->ufsf.sdev_wait,
> + (atomic_read(&hba->ufsf.slave_conf_cnt)
> + == hpb_dev_info.num_lu),
> + SDEV_WAIT_TIMEOUT);
> +
> + ufshpb_issue_hpb_reset_query(hba);
> +
> + dev_dbg(hba->dev, "ufshpb: slave count %d, lu count %d\n",
> + atomic_read(&hba->ufsf.slave_conf_cnt), hpb_dev_info.num_lu);
> +
> + ufshpb_scan_hpb_lu(hba, &hpb_dev_info, desc_buf);
> +
> +release_desc_buf:
> + kfree(desc_buf);
> +}
Since the UFS driver calls scsi_scan_host() from ufshcd_add_lus(), do you
agree that the above wait_event_timeout() call can be eliminated by splitting
ufshpb_init() into two functions and by calling the part below
wait_event_timeout() after scsi_scan_host() has finished?
> +void ufshpb_remove(struct ufs_hba *hba)
> +{
> + struct ufshpb_lu *hpb, *n_hpb;
> + struct ufsf_feature_info *ufsf;
> + struct scsi_device *sdev;
> +
> + ufsf = &hba->ufsf;
> +
> + list_for_each_entry_safe(hpb, n_hpb, &lh_hpb_lu, list_hpb_lu) {
> + ufshpb_set_state(hpb, HPB_FAILED);
> +
> + sdev = hpb->sdev_ufs_lu;
> + sdev->hostdata = NULL;
> +
> + ufshpb_destroy_region_tbl(hpb);
> +
> + list_del_init(&hpb->list_hpb_lu);
> + ufshpb_remove_sysfs(hpb);
> +
> + kfree(hpb);
> + }
> +
> + dev_info(hba->dev, "ufshpb: remove success\n");
> +}
Should the code in the body of the above loop perhaps be called from inside
ufshcd_slave_destroy()?
> +void ufshpb_scan_feature(struct ufs_hba *hba)
> +{
> + init_waitqueue_head(&hba->ufsf.sdev_wait);
> + atomic_set(&hba->ufsf.slave_conf_cnt, 0);
> +
> + if (hba->dev_info.wspecversion >= HPB_SUPPORT_VERSION &&
> + (hba->dev_info.b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT))
> + async_schedule(ufshpb_init, &hba->ufsf);
> +}
Why does this function use async_schedule()?
Thanks,
Bart.
On 2020-08-06 02:15, Daejun Park wrote:
> + req->end_io_data = (void *)map_req;
Please leave the (void *) cast out since explicit casts from a non-void
to a void pointer are not necessary in C.
> +static inline struct
> +ufshpb_rsp_field *ufshpb_get_hpb_rsp(struct ufshcd_lrb *lrbp)
> +{
> + return (struct ufshpb_rsp_field *)&lrbp->ucd_rsp_ptr->sr.sense_data_len;
> +}
Please introduce a union in struct utp_cmd_rsp instead of using casts
to reinterpret a part of a data structure.
> +/* routine : isr (ufs) */
The above comment looks very cryptic. Should it perhaps be expanded?
> +struct ufshpb_active_field {
> + __be16 active_rgn;
> + __be16 active_srgn;
> +} __packed;
Since "__packed" is not necessary for the above data structure, please
remove it. Note: a typical approach in the Linux kernel to verify that
the compiler has not inserted any padding bytes is to add a BUILD_BUG_ON()
statement in an initialization function that verifies the size of ABI data
structures. See also the output of the following command:
git grep -nH 'BUILD_BUG_ON.sizeof.*!='
> +struct ufshpb_rsp_field {
> + __be16 sense_data_len;
> + u8 desc_type;
> + u8 additional_len;
> + u8 hpb_type;
> + u8 reserved;
> + u8 active_rgn_cnt;
> + u8 inactive_rgn_cnt;
> + struct ufshpb_active_field hpb_active_field[2];
> + __be16 hpb_inactive_field[2];
> +} __packed;
I think the above __packed statement should also be left out.
Thanks,
Bart.
On 2020-08-06 02:18, Daejun Park wrote:
> +static inline u32 ufshpb_get_lpn(struct scsi_cmnd *cmnd)
> +{
> + return blk_rq_pos(cmnd->request) >>
> + (ilog2(cmnd->device->sector_size) - 9);
> +}
Please use sectors_to_logical() from drivers/scsi/sd.h instead of open-coding
that function.
> +static inline unsigned int ufshpb_get_len(struct scsi_cmnd *cmnd)
> +{
> + return blk_rq_sectors(cmnd->request) >>
> + (ilog2(cmnd->device->sector_size) - 9);
> +}
Same comment here.
> +/* routine : READ10 -> HPB_READ */
Please expand this comment.
Thanks,
Bart.
Hi Bart,
> On 2020-08-06 02:02, Daejun Park wrote:
> > @@ -537,6 +548,7 @@ struct ufs_dev_info {
> > u8 *model;
> > u16 wspecversion;
> > u32 clk_gating_wait_us;
> > + u8 b_ufs_feature_sup;
> > u32 d_ext_ufs_feature_sup;
> > u8 b_wb_buffer_type;
> > u32 d_wb_alloc_units;
> >
>
> Hmm ... shouldn't this variable be introduced in the patch that introduces
> the code that sets and uses this variable?
OK, I will move this variable to patch 2/4.
> How about making it clear in the patch subject that this patch adds protocol
> constants related to HPB?
The subject will be changed:
"Add UFS feature related parameter" -> "Add constants related to HPB"
> Otherwise this patch looks good to me.
>
> Bart.
Thanks,
Daejun
Hi Bart,
On 2020-08-06 02:11, Daejun Park wrote:
> > +static void ufshpb_issue_hpb_reset_query(struct ufs_hba *hba)
> > +{
> > + int err;
> > + int retries;
> > +
> > + for (retries = 0; retries < HPB_RESET_REQ_RETRIES; retries++) {
> > + err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_SET_FLAG,
> > + QUERY_FLAG_IDN_HPB_RESET, 0, NULL);
> > + if (err)
> > + dev_dbg(hba->dev,
> > + "%s: failed with error %d, retries %d\n",
> > + __func__, err, retries);
> > + else
> > + break;
> > + }
> > +
> > + if (err) {
> > + dev_err(hba->dev,
> > + "%s setting fHpbReset flag failed with error %d\n",
> > + __func__, err);
> > + return;
> > + }
> > +}
>
> Please change the "break" into an early return, remove the last
> occurrence "if (err)" and remove the final return statement.
OK, I will.
>
> > +static void ufshpb_check_hpb_reset_query(struct ufs_hba *hba)
> > +{
> > + int err;
> > + bool flag_res = true;
> > + int try = 0;
> > +
> > + /* wait for the device to complete HPB reset query */
> > + do {
> > + if (++try == HPB_RESET_REQ_RETRIES)
> > + break;
> > +
> > + dev_info(hba->dev,
> > + "%s start flag reset polling %d times\n",
> > + __func__, try);
> > +
> > + /* Poll fHpbReset flag to be cleared */
> > + err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_READ_FLAG,
> > + QUERY_FLAG_IDN_HPB_RESET, 0, &flag_res);
> > + usleep_range(1000, 1100);
> > + } while (flag_res);
> > +
> > + if (err) {
> > + dev_err(hba->dev,
> > + "%s reading fHpbReset flag failed with error %d\n",
> > + __func__, err);
> > + return;
> > + }
> > +
> > + if (flag_res) {
> > + dev_err(hba->dev,
> > + "%s fHpbReset was not cleared by the device\n",
> > + __func__);
> > + }
> > +}
>
> Should "polling %d times" perhaps be changed into "attempt %d"?
I will change it.
> The "if (err)" statement may be reached without "err" having been
> initialized. Please fix.
OK, I will initialize err to 0.
> Additionally, please change the do-while loop into a for-loop, e.g. as
> follows:
>
> for (try = 0; try < HPB_RESET_REQ_RETRIES; try++)
> ...
OK, I will change do-while to for-loop.
Thanks,
Daejun
Hi Bart,
On 2020-08-06 02:11, Daejun Park wrote:
> > This is a patch for the HPB feature.
> > This patch adds HPB function calls to UFS core driver.
> >
> > The mininum size of the memory pool used in the HPB is implemented as a
> ^^^^^^^
> minimum?
I will fix it.
> > Kconfig parameter (SCSI_UFS_HPB_HOST_MEM), so that it can be configurable.
>
> > +config SCSI_UFS_HPB
> > + bool "Support UFS Host Performance Booster"
> > + depends on SCSI_UFSHCD
> > + help
> > + A UFS HPB Feature improves random read performance. It caches
> ^ ^^^^^^^
> The? feature?
I will fix it.
> > + L2P map of UFS to host DRAM. The driver uses HPB read command
> > + by piggybacking physical page number for bypassing FTL's L2P address
> > + translation.
>
> Please explain what L2P and FTL mean. Not everyone is familiar with SSD
> internals.
I added the full names of the abbreviations:
L2P (logical to physical) map of UFS to host DRAM. The driver uses HPB read command
^^^^^^^^^^^^^^^^^^^
by piggybacking physical page number for bypassing FTL (flash translation layer)
^^^^^^^^^^^^^^^^^^^^^^^^
> > +config SCSI_UFS_HPB_HOST_MEM
> > + int "Host-side cached memory size (KB) for HPB support"
> > + default 32
> > + depends on SCSI_UFS_HPB
> > + help
> > + The mininum size of the memory pool used in the HPB module. It can
> > + be configurable by the user. If this value is larger than required
> > + memory size, kernel resizes cached memory size.
> ^^^^^^^ ^^^^^^^^^^^^^^^^^^
> reduces? cache size?
>
> Please make this a kernel module parameter instead of a compile-time constant.
OK, I will change it.
> > +#ifndef CONFIG_SCSI_UFS_HPB
> > +static void ufshpb_resume(struct ufs_hba *hba) {}
> > +static void ufshpb_suspend(struct ufs_hba *hba) {}
> > +static void ufshpb_reset(struct ufs_hba *hba) {}
> > +static void ufshpb_reset_host(struct ufs_hba *hba) {}
> > +static void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
> > +static void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
> > +static void ufshpb_remove(struct ufs_hba *hba) {}
> > +static void ufshpb_scan_feature(struct ufs_hba *hba) {}
> > +#endif
>
> Please move these definitions into ufshpb.h since that is the
> recommended Linux kernel coding style.
OK, I will move them.
> > diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
> > index b2ef18f1b746..904c19796e09 100644
> > --- a/drivers/scsi/ufs/ufshcd.h
> > +++ b/drivers/scsi/ufs/ufshcd.h
> > @@ -47,6 +47,9 @@
> > #include "ufs.h"
> > #include "ufs_quirks.h"
> > #include "ufshci.h"
> > +#ifdef CONFIG_SCSI_UFS_HPB
> > +#include "ufshpb.h"
> > +#endif
>
> Please move #ifdef CONFIG_SCSI_UFS_HPB / #endif into ufshpb.h. From
> Documentation/process/4.Coding.rst: "As a general rule, #ifdef use
> should be confined to header files whenever possible."
OK, I will fix it.
> > +struct ufsf_feature_info {
> > + atomic_t slave_conf_cnt;
> > + wait_queue_head_t sdev_wait;
> > +};
>
> Please add a comment above this data structure that explains the role
> of this data structure and also what "ufsf" stands for.
"ufsf" is stands for ufs feature. I wiil add comments for the data structure.
> > +static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb);
>
> I don't think that this forward declaration is necessary so please leave it
> out.
OK, I will remove it.
> > +static inline int ufshpb_is_valid_srgn(struct ufshpb_region *rgn,
> > + struct ufshpb_subregion *srgn)
> > +{
> > + return rgn->rgn_state != HPB_RGN_INACTIVE &&
> > + srgn->srgn_state == HPB_SRGN_VALID;
> > +}
>
> Please do not declare functions inside .c files inline but instead let
> the compiler decide which functions to inline. Modern compilers are really
> good at this.
I didn't know about it. Thanks.
> > +static struct kobj_type ufshpb_ktype = {
> > + .sysfs_ops = &ufshpb_sysfs_ops,
> > + .release = NULL,
> > +};
>
> If the release method of a kobj_type is NULL that is a strong sign that
> there is something wrong ...
>
> > +static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb)
> > +{
> > + int ret;
> > +
> > + ufshpb_stat_init(hpb);
> > +
> > + kobject_init(&hpb->kobj, &ufshpb_ktype);
> > + mutex_init(&hpb->sysfs_lock);
> > +
> > + ret = kobject_add(&hpb->kobj, kobject_get(&hba->dev->kobj),
> > + "ufshpb_lu%d", hpb->lun);
> > +
> > + if (ret)
> > + return ret;
> > +
> > + ret = sysfs_create_group(&hpb->kobj, &ufshpb_sysfs_group);
> > +
> > + if (ret) {
> > + dev_err(hba->dev, "ufshpb_lu%d create file error\n", hpb->lun);
> > + return ret;
> > + }
> > +
> > + dev_info(hba->dev, "ufshpb_lu%d sysfs adds uevent", hpb->lun);
> > + kobject_uevent(&hpb->kobj, KOBJ_ADD);
> > +
> > + return 0;
> > +}
>
> Please attach these sysfs attributes to /sys/class/scsi_device/*/device
> instead of creating a new kobject. Consider using the following
> scsi_host_template member to define LUN sysfs attributes:
I am not rejecting your comment, but I added the kobject to distinguish
the HPB-related attributes from the other attributes.
If you think that is pointless, I'll fix it.
> /*
> * Pointer to the SCSI device attribute groups for this host,
> * NULL terminated.
> */
> const struct attribute_group **sdev_groups;
>
> > +static void ufshpb_scan_hpb_lu(struct ufs_hba *hba,
> > + struct ufshpb_dev_info *hpb_dev_info,
> > + u8 *desc_buf)
> > +{
> > + struct scsi_device *sdev;
> > + struct ufshpb_lu *hpb;
> > + int find_hpb_lu = 0;
> > + int ret;
> > +
> > + shost_for_each_device(sdev, hba->host) {
> > + struct ufshpb_lu_info hpb_lu_info = { 0 };
> > + int lun = sdev->lun;
> > +
> > + if (lun >= hba->dev_info.max_lu_supported)
> > + continue;
> > +
> > + ret = ufshpb_get_lu_info(hba, lun, &hpb_lu_info, desc_buf);
> > + if (ret)
> > + continue;
> > +
> > + hpb = ufshpb_alloc_hpb_lu(hba, lun, hpb_dev_info,
> > + &hpb_lu_info);
> > + if (!hpb)
> > + continue;
> > +
> > + hpb->sdev_ufs_lu = sdev;
> > + sdev->hostdata = hpb;
> > +
> > + list_add_tail(&hpb->list_hpb_lu, &lh_hpb_lu);
> > + find_hpb_lu++;
> > + }
> > +
> > + if (!find_hpb_lu)
> > + return;
> > +
> > + ufshpb_check_hpb_reset_query(hba);
> > +
> > + list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
> > + dev_info(hba->dev, "set state to present\n");
> > + ufshpb_set_state(hpb, HPB_PRESENT);
> > + }
> > +}
>
> Please remove the loop from the above function, make this function accept a
> SCSI device pointer as argument and call this function from
> ufshcd_slave_configure() or ufshcd_hpb_configure().
I will move the loop to ufshcd_hpb_configure().
>
> > +static void ufshpb_init(void *data, async_cookie_t cookie)
> > +{
> > + struct ufsf_feature_info *ufsf = (struct ufsf_feature_info *)data;
> > + struct ufs_hba *hba;
> > + struct ufshpb_dev_info hpb_dev_info = { 0 };
> > + char *desc_buf;
> > + int ret;
> > +
> > + hba = container_of(ufsf, struct ufs_hba, ufsf);
> > +
> > + desc_buf = kzalloc(QUERY_DESC_MAX_SIZE, GFP_KERNEL);
> > + if (!desc_buf)
> > + goto release_desc_buf;
> > +
> > + ret = ufshpb_get_dev_info(hba, &hpb_dev_info, desc_buf);
> > + if (ret)
> > + goto release_desc_buf;
> > +
> > + /*
> > + * Because HPB driver uses scsi_device data structure,
> > + * we should wait at this point until finishing initialization of all
> > + * scsi devices. Even if timeout occurs, HPB driver will search
> > + * the scsi_device list on struct scsi_host (shost->__host list_head)
> > + * and can find out HPB logical units in all scsi_devices
> > + */
> > + wait_event_timeout(hba->ufsf.sdev_wait,
> > + (atomic_read(&hba->ufsf.slave_conf_cnt)
> > + == hpb_dev_info.num_lu),
> > + SDEV_WAIT_TIMEOUT);
> > +
> > + ufshpb_issue_hpb_reset_query(hba);
> > +
> > + dev_dbg(hba->dev, "ufshpb: slave count %d, lu count %d\n",
> > + atomic_read(&hba->ufsf.slave_conf_cnt), hpb_dev_info.num_lu);
> > +
> > + ufshpb_scan_hpb_lu(hba, &hpb_dev_info, desc_buf);
> > +
> > +release_desc_buf:
> > + kfree(desc_buf);
> > +}
>
> Since the UFS driver calls scsi_scan_host() from ufshcd_add_lus(), do you
> agree that the above wait_event_timeout() call can be eliminated by splitting
> ufshpb_init() into two functions and by calling the part below
> wait_event_timeout() after scsi_scan_host() has finished?
Yes, I agree the above wait_event_timeout() call can be eliminated.
I will change these functions.
> > +void ufshpb_remove(struct ufs_hba *hba)
> > +{
> > + struct ufshpb_lu *hpb, *n_hpb;
> > + struct ufsf_feature_info *ufsf;
> > + struct scsi_device *sdev;
> > +
> > + ufsf = &hba->ufsf;
> > +
> > + list_for_each_entry_safe(hpb, n_hpb, &lh_hpb_lu, list_hpb_lu) {
> > + ufshpb_set_state(hpb, HPB_FAILED);
> > +
> > + sdev = hpb->sdev_ufs_lu;
> > + sdev->hostdata = NULL;
> > +
> > + ufshpb_destroy_region_tbl(hpb);
> > +
> > + list_del_init(&hpb->list_hpb_lu);
> > + ufshpb_remove_sysfs(hpb);
> > +
> > + kfree(hpb);
> > + }
> > +
> > + dev_info(hba->dev, "ufshpb: remove success\n");
> > +}
>
> Should the code in the body of the above loop perhaps be called from inside
> ufshcd_slave_destroy()?
Moving the other code in the loop is a good idea, but removing the attributes
is a problem. To avoid adding a new kobject, I will try to use
sysfs_merge_group() to add the attributes. Deleting merged attributes requires
sysfs_unmerge_group(), but sysfs_remove_groups() is called before ufshcd_slave_destroy().
> > +void ufshpb_scan_feature(struct ufs_hba *hba)
> > +{
> > + init_waitqueue_head(&hba->ufsf.sdev_wait);
> > + atomic_set(&hba->ufsf.slave_conf_cnt, 0);
> > +
> > + if (hba->dev_info.wspecversion >= HPB_SUPPORT_VERSION &&
> > + (hba->dev_info.b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT))
> > + async_schedule(ufshpb_init, &hba->ufsf);
> > +}
>
> Why does this function use async_schedule()?
Since the wait_event_timeout() call will be removed, the async_schedule() call will be changed as well.
Thanks,
Daejun
On 2020-08-06 02:18, Daejun Park wrote:
> > +static inline u32 ufshpb_get_lpn(struct scsi_cmnd *cmnd)
> > +{
> > + return blk_rq_pos(cmnd->request) >>
> > + (ilog2(cmnd->device->sector_size) - 9);
> > +}
>
> Please use sectors_to_logical() from drivers/scsi/sd.h instead of open-coding
> that function.
OK, I will.
> > +static inline unsigned int ufshpb_get_len(struct scsi_cmnd *cmnd)
> > +{
> > + return blk_rq_sectors(cmnd->request) >>
> > + (ilog2(cmnd->device->sector_size) - 9);
> > +}
>
> Same comment here.
OK
> > +/* routine : READ10 -> HPB_READ */
>
> Please expand this comment.
OK
Thanks,
Daejun
On 2020-08-06 02:15, Daejun Park wrote:
> > + req->end_io_data = (void *)map_req;
>
> Please leave the (void *) cast out since explicit casts from a non-void
> to a void pointer are not necessary in C.
OK, I will fix it.
> > +static inline struct
> > +ufshpb_rsp_field *ufshpb_get_hpb_rsp(struct ufshcd_lrb *lrbp)
> > +{
> > + return (struct ufshpb_rsp_field *)&lrbp->ucd_rsp_ptr->sr.sense_data_len;
> > +}
>
> Please introduce a union in struct utp_cmd_rsp instead of using casts
> to reinterpret a part of a data structure.
OK. I will introduce a union in struct utp_cmd_rsp and use it.
> > +/* routine : isr (ufs) */
>
> The above comment looks very cryptic. Should it perhaps be expanded?
>
> > +struct ufshpb_active_field {
> > + __be16 active_rgn;
> > + __be16 active_srgn;
> > +} __packed;
>
> Since "__packed" is not necessary for the above data structure, please
> remove it. Note: a typical approach in the Linux kernel to verify that
> the compiler has not inserted any padding bytes is to add a BUILD_BUG_ON()
> statement in an initialization function that verifies the size of ABI data
> structures. See also the output of the following command:
>
> git grep -nH 'BUILD_BUG_ON.sizeof.*!='
OK, I didn't know about it. Thanks.
> > +struct ufshpb_rsp_field {
> > + __be16 sense_data_len;
> > + u8 desc_type;
> > + u8 additional_len;
> > + u8 hpb_type;
> > + u8 reserved;
> > + u8 active_rgn_cnt;
> > + u8 inactive_rgn_cnt;
> > + struct ufshpb_active_field hpb_active_field[2];
> > + __be16 hpb_inactive_field[2];
> > +} __packed;
>
> I think the above __packed statement should also be left out.
OK, I will remove it.
Thanks,
Daejun
On 2020-08-12 20:00, Daejun Park wrote:
> On 2020-08-06 02:11, Daejun Park wrote:
>>> +static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb)
>>> +{
>>> + int ret;
>>> +
>>> + ufshpb_stat_init(hpb);
>>> +
>>> + kobject_init(&hpb->kobj, &ufshpb_ktype);
>>> + mutex_init(&hpb->sysfs_lock);
>>> +
>>> + ret = kobject_add(&hpb->kobj, kobject_get(&hba->dev->kobj),
>>> + "ufshpb_lu%d", hpb->lun);
>>> +
>>> + if (ret)
>>> + return ret;
>>> +
>>> + ret = sysfs_create_group(&hpb->kobj, &ufshpb_sysfs_group);
>>> +
>>> + if (ret) {
>>> + dev_err(hba->dev, "ufshpb_lu%d create file error\n", hpb->lun);
>>> + return ret;
>>> + }
>>> +
>>> + dev_info(hba->dev, "ufshpb_lu%d sysfs adds uevent", hpb->lun);
>>> + kobject_uevent(&hpb->kobj, KOBJ_ADD);
>>> +
>>> + return 0;
>>> +}
>>
>> Please attach these sysfs attributes to /sys/class/scsi_device/*/device
>> instead of creating a new kobject. Consider using the following
>> scsi_host_template member to define LUN sysfs attributes:
>
> I am not rejecting your comment, but I added the kobject to distinguish
> the HPB-related attributes from the other attributes.
> If you think that is pointless, I'll fix it.
Hi Daejun,
I see two reasons to add these sysfs attributes under
/sys/class/scsi_device/*/device:
- This makes the behavior of the UFS driver similar to that of other Linux
SCSI LLD drivers.
- This makes it easier for people who want to write udev rules that read
from these attributes. Since ufshpb_lu%d is attached to the UFS controller
it is not clear to me which attributes will appear first in sysfs - the
SCSI device attributes or the ufshpb_lu%d attributes. If there are only
SCSI device attributes there is no such ambiguity and hence authors of
udev rules won't have to worry about this race condition.
>>> +void ufshpb_remove(struct ufs_hba *hba)
>>> +{
>>> + struct ufshpb_lu *hpb, *n_hpb;
>>> + struct ufsf_feature_info *ufsf;
>>> + struct scsi_device *sdev;
>>> +
>>> + ufsf = &hba->ufsf;
>>> +
>>> + list_for_each_entry_safe(hpb, n_hpb, &lh_hpb_lu, list_hpb_lu) {
>>> + ufshpb_set_state(hpb, HPB_FAILED);
>>> +
>>> + sdev = hpb->sdev_ufs_lu;
>>> + sdev->hostdata = NULL;
>>> +
>>> + ufshpb_destroy_region_tbl(hpb);
>>> +
>>> + list_del_init(&hpb->list_hpb_lu);
>>> + ufshpb_remove_sysfs(hpb);
>>> +
>>> + kfree(hpb);
>>> + }
>>> +
>>> + dev_info(hba->dev, "ufshpb: remove success\n");
>>> +}
>>
>> Should the code in the body of the above loop perhaps be called from inside
>> ufshcd_slave_destroy()?
>
> Moving the other code in the loop is a good idea, but removing the attributes
> is a problem. To avoid adding a new kobject, I will try to use
> sysfs_merge_group() to add the attributes. Deleting merged attributes requires
> sysfs_unmerge_group(), but sysfs_remove_groups() is called before ufshcd_slave_destroy().
Hmm ... I don't see why the sdev_groups host template attribute can't be used?
Please don't use sysfs_merge_group() and sysfs_unmerge_group() because that
would create a race condition against udev rules if these functions are called
after the device core has emitted a KOBJ_ADD event.
Thanks,
Bart.