2021-02-24 12:45:35

by Daejun Park

Subject: [PATCH v24 0/4] scsi: ufs: Add Host Performance Booster Support

Changelog:

v23 -> v24
1. Fix build error reported by kernel test robot.

v22 -> v23
1. Add compatibility support for HPB 1.0.
2. Fix read id for single HPB read command.
3. Fix number of pre-allocated requests for write buffer.
4. Add fast path for response UPIU that has the same LUN in sense data.
5. Remove WARN_ON to prevent kernel crash.
6. Fix wrong argument for read buffer command.

v21 -> v22
1. Add support for processing response UPIU in suspend state.
2. Add support for HPB hints from other LUs.
3. Add sending write buffer with 0x03 after HPB init.

v20 -> v21
1. Add bMAX_DATA_SIZE_FOR_HPB_SINGLE_CMD attr. and fHPBen flag support.

v19 -> v20
1. Add documentation for sysfs entries of hpb->stat.
2. Fix read buffer command for under-sized sub-region.
3. Fix wrong condition checking for kick map work.
4. Delete redundant response UPIU checking.
5. Add LUN checking in response UPIU.
6. Fix possible deadlock problem due to runtime PM.
7. Add instant changing of sub-region state from response UPIU.
8. Fix endian problem in prefetched PPN.
9. Add JESD220-3A (HPB v2.0) support.

v18 -> v19
1. Fix null pointer error when printing sysfs from non-HPB LU.
2. Apply HPB read opcode in lrbp->cmd->cmnd (from Can Guo's review).
3. Rebase the patch on 5.12/scsi-queue.

v17 -> v18
Fix build error reported by kernel test robot.

v16 -> v17
1. Rename hpb_state_lock to rgn_state_lock and move it to corresponding
patch.
2. Remove redundant information messages.

v15 -> v16
1. Add missing sysfs ABI documentation.

v14 -> v15
1. Remove duplicated sysfs ABI entries in documentation.
2. Add experiment result of HPB performance testing with iozone.

v13 -> v14
1. Clean up code as commented in Greg's review.
2. Add documentation for sysfs entries (from Greg's review).
3. Add experiment result of HPB performance testing.

v12 -> v13
1. Clean up code per comments from Can Guo.
2. Add HPB related descriptor/flag/attributes in sysfs.
3. Change base commit from 5.10/scsi-queue to 5.11/scsi-queue.

v11 -> v12
1. Return an error value when HPB fails to initialize a pinned active
region.
2. Disable the HPB feature if HPB fails to allocate essential memory
and a workqueue.
3. Change to the proper sub-region state when the region is already
evicted.

v10 -> v11
Add a newline at the end of the Kconfig file.

v9 -> v10
1. Fixed 64-bit division error.
2. Fixed problems commented on in Bart's review.

v8 -> v9
1. Change sysfs initialization.
2. Change reading of descriptors during HPB initialization.
3. Fixed problems commented on in Bart's review.
4. Change base commit from 5.9/scsi-queue to 5.10/scsi-queue.

v7 -> v8
Remove wrongly added tags.

v6 -> v7
1. Remove UFS feature layer.
2. Cleanup for sparse error.

v5 -> v6
Change base commit to b53293fa662e28ae0cdd40828dc641c09f133405

v4 -> v5
Delete unused macro definition.

v3 -> v4
1. Cleanup.

v2 -> v3
1. Add checking of the input module parameter value.
2. Change base commit from 5.8/scsi-queue to 5.9/scsi-queue.
3. Cleanup for unused variables and label.

v1 -> v2
1. Change the full boilerplate text to SPDX style.
2. Adopt dynamic allocation for sub-region data structure.
3. Cleanup.

NAND flash memory-based storage devices use Flash Translation Layer (FTL)
to translate logical addresses of I/O requests to corresponding flash
memory addresses. Mobile storage devices typically have RAM of
constrained size and thus lack the memory to keep the whole mapping table.
Therefore, mapping tables are partially retrieved from NAND flash on
demand, causing random-read performance degradation.

To improve random-read performance, JESD220-3 (HPB v1.0) proposes HPB
(Host Performance Booster), which uses host system memory as a cache for
the FTL mapping table. With HPB, FTL data can be read from host memory
faster than from NAND flash memory.
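
As a rough sketch of the idea (hypothetical names and a flat array for
illustration only; the actual driver manages this per sub-region, as the
patches below show):

    #include <stdint.h>

    #define CACHED_ENTRIES 4096                 /* illustrative size */

    static uint64_t l2p_cache[CACHED_ENTRIES];  /* 0 = entry not cached */

    /*
     * On a hit, the host can put the PPN directly into the HPB READ
     * command, so the device does not need to load its map from NAND.
     */
    static int lookup_ppn(uint64_t lpn, uint64_t *ppn)
    {
        if (lpn >= CACHED_ENTRIES || l2p_cache[lpn] == 0)
            return -1;      /* miss: device resolves the map itself */
        *ppn = l2p_cache[lpn];
        return 0;
    }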

The current version supports only the device control mode (DCM).
This patch series consists of three parts to support the HPB feature:

1) HPB probe and initialization process
2) READ -> HPB READ using cached map information
3) L2P (logical to physical) map management

During the HPB probe and init process, the UFS device information is
queried. After checking supported features, the data structures for HPB
are initialized according to the device information.

A read I/O in an active sub-region, where the map is cached, is changed
into an HPB READ by the HPB.

The HPB manages the L2P map using information received from the
device. For an active sub-region, the HPB caches the map through a
ufshpb_map request. For an inactive region, the HPB discards the L2P
map. When a write I/O occurs in an active sub-region, the associated
entries are marked in a dirty bitmap to prevent stale reads (sketched
below).
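
A minimal sketch of that dirty tracking, assuming one bit per L2P entry
in a sub-region (sizes and names are hypothetical; the driver keeps such
a bitmap in each sub-region's map context):

    #include <limits.h>

    #define ENTRIES_PER_SRGN 4096                       /* illustrative */
    #define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

    static unsigned long ppn_dirty[ENTRIES_PER_SRGN / BITS_PER_WORD];

    static void mark_dirty(unsigned int entry)          /* on write I/O */
    {
        ppn_dirty[entry / BITS_PER_WORD] |= 1UL << (entry % BITS_PER_WORD);
    }

    static int is_dirty(unsigned int entry)     /* before issuing HPB READ */
    {
        return !!(ppn_dirty[entry / BITS_PER_WORD] &
                  (1UL << (entry % BITS_PER_WORD)));
    }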

HPB has been shown to improve random-read performance by 58-67%. [1]

[1]:
https://www.usenix.org/conference/hotstorage17/program/presentation/jeong

Daejun Park (4):
scsi: ufs: Introduce HPB feature
scsi: ufs: L2P map management for HPB read
scsi: ufs: Prepare HPB read for cached sub-region
scsi: ufs: Add HPB 2.0 support

Documentation/ABI/testing/sysfs-driver-ufs | 150 ++
drivers/scsi/ufs/Kconfig | 9 +
drivers/scsi/ufs/Makefile | 1 +
drivers/scsi/ufs/ufs-sysfs.c | 20 +
drivers/scsi/ufs/ufs.h | 54 +-
drivers/scsi/ufs/ufshcd.c | 69 +-
drivers/scsi/ufs/ufshcd.h | 29 +
drivers/scsi/ufs/ufshpb.c | 2379 ++++++++++++++++++++
drivers/scsi/ufs/ufshpb.h | 276 +++
9 files changed, 2985 insertions(+), 2 deletions(-)
create mode 100644 drivers/scsi/ufs/ufshpb.c
create mode 100644 drivers/scsi/ufs/ufshpb.h

--
2.25.1


2021-02-24 12:46:38

by Daejun Park

Subject: [PATCH v24 2/4] scsi: ufs: L2P map management for HPB read

This patch adds L2P map management to the HPB module.

The HPB divides logical addresses into several regions. A region
consists of several sub-regions. The sub-region is the basic unit in
which L2P mapping is managed. The driver loads the L2P mapping data of
each sub-region; a loaded sub-region is in the active state. The HPB
driver unloads L2P mapping data per region; an unloaded region is in
the inactive state.
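
The index arithmetic behind this division is straightforward; a sketch
with made-up geometry (the real driver derives these sizes from device
descriptors and uses shifts, see ufshpb_get_pos_from_lpn()):

    #include <stdint.h>

    #define ENTRIES_PER_RGN  65536  /* illustrative: L2P entries per region */
    #define ENTRIES_PER_SRGN  4096  /* illustrative: entries per sub-region */

    static void lpn_to_pos(uint64_t lpn, int *rgn_idx, int *srgn_idx,
                           int *srgn_offset)
    {
        *rgn_idx     = lpn / ENTRIES_PER_RGN;
        *srgn_idx    = (lpn % ENTRIES_PER_RGN) / ENTRIES_PER_SRGN;
        *srgn_offset = lpn % ENTRIES_PER_SRGN;
    }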

Sub-region/region candidates to be loaded and unloaded are delivered by
the UFS device. The UFS device delivers the recommended active
sub-regions and inactive regions to the driver using sense data.
The HPB module performs L2P map management on the host based on the
delivered information.

A pinned region is a pre-set region on the UFS device that is always
in the active state.

The data structures for map data requests and the L2P map use the
mempool API, minimizing allocation overhead while avoiding static
allocation.

The minimum size of the memory pool used by the HPB is implemented
as a module parameter, so that it is configurable by the user.

To guarantee a minimum memory pool size of 4MB (i.e. 1024 pages with
4KB pages): ufshpb_host_map_kbytes=4096

The map_work manages activation/inactivation via two "to-do" lists.
Each HPB LUN maintains two "to-do" lists:
hpb->lh_inact_rgn - regions to be inactivated, and
hpb->lh_act_srgn - sub-regions to be activated
These lists are maintained on I/O completion.
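
In sketch form, the completion path moves entries between these lists
roughly as follows (this mirrors ufshpb_update_active_info() and
ufshpb_update_inactive_info() in the diff below; locking omitted):

    /* activate: cancel a pending inactivation of the whole region,
     * then queue the sub-region for map loading, if not queued yet */
    list_del_init(&rgn->list_inact_rgn);
    if (list_empty(&srgn->list_act_srgn))
        list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);

    /* inactivate: cancel pending sub-region activations, then queue
     * the region for eviction, if not queued yet */
    for_each_sub_region(rgn, srgn_idx, srgn)
        list_del_init(&srgn->list_act_srgn);
    if (list_empty(&rgn->list_inact_rgn))
        list_add_tail(&rgn->list_inact_rgn, &hpb->lh_inact_rgn);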

Reviewed-by: Bart Van Assche <[email protected]>
Reviewed-by: Can Guo <[email protected]>
Acked-by: Avri Altman <[email protected]>
Tested-by: Bean Huo <[email protected]>
Signed-off-by: Daejun Park <[email protected]>
---
drivers/scsi/ufs/ufs.h | 36 ++
drivers/scsi/ufs/ufshcd.c | 4 +
drivers/scsi/ufs/ufshpb.c | 1091 ++++++++++++++++++++++++++++++++++++-
drivers/scsi/ufs/ufshpb.h | 65 +++
4 files changed, 1181 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
index 65563635e20e..957763db1006 100644
--- a/drivers/scsi/ufs/ufs.h
+++ b/drivers/scsi/ufs/ufs.h
@@ -472,6 +472,41 @@ struct utp_cmd_rsp {
u8 sense_data[UFS_SENSE_SIZE];
};

+struct ufshpb_active_field {
+ __be16 active_rgn;
+ __be16 active_srgn;
+};
+#define HPB_ACT_FIELD_SIZE 4
+
+/**
+ * struct utp_hpb_rsp - Response UPIU structure
+ * @residual_transfer_count: Residual transfer count DW-3
+ * @reserved1: Reserved double words DW-4 to DW-7
+ * @sense_data_len: Sense data length DW-8 U16
+ * @desc_type: Descriptor type of sense data
+ * @additional_len: Additional length of sense data
+ * @hpb_op: HPB operation type
+ * @lun: LUN of response UPIU
+ * @active_rgn_cnt: Active region count
+ * @inactive_rgn_cnt: Inactive region count
+ * @hpb_active_field: Recommended HPB regions and subregions to activate
+ * @hpb_inactive_field: HPB regions to be inactivated
+ */
+struct utp_hpb_rsp {
+ __be32 residual_transfer_count;
+ __be32 reserved1[4];
+ __be16 sense_data_len;
+ u8 desc_type;
+ u8 additional_len;
+ u8 hpb_op;
+ u8 lun;
+ u8 active_rgn_cnt;
+ u8 inactive_rgn_cnt;
+ struct ufshpb_active_field hpb_active_field[2];
+ __be16 hpb_inactive_field[2];
+};
+#define UTP_HPB_RSP_SIZE 40
+
/**
* struct utp_upiu_rsp - general upiu response structure
* @header: UPIU header structure DW-0 to DW-2
@@ -482,6 +517,7 @@ struct utp_upiu_rsp {
struct utp_upiu_header header;
union {
struct utp_cmd_rsp sr;
+ struct utp_hpb_rsp hr;
struct utp_upiu_query qr;
};
};
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 49b3d5d24fa6..5852ff44c3cc 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -5021,6 +5021,9 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
*/
pm_runtime_get_noresume(hba->dev);
}
+
+ if (scsi_status == SAM_STAT_GOOD)
+ ufshpb_rsp_upiu(hba, lrbp);
break;
case UPIU_TRANSACTION_REJECT_UPIU:
/* TODO: handle Reject UPIU Response */
@@ -9221,6 +9224,7 @@ EXPORT_SYMBOL(ufshcd_shutdown);
void ufshcd_remove(struct ufs_hba *hba)
{
ufs_bsg_remove(hba);
+ ufshpb_remove(hba);
ufs_sysfs_remove_nodes(hba->dev);
blk_cleanup_queue(hba->tmf_queue);
blk_mq_free_tag_set(&hba->tmf_tag_set);
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index a36ca89ccd8b..5d7b60c273cc 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -16,6 +16,16 @@
#include "ufshpb.h"
#include "../sd.h"

+/* memory management */
+static struct kmem_cache *ufshpb_mctx_cache;
+static mempool_t *ufshpb_mctx_pool;
+static mempool_t *ufshpb_page_pool;
+/* A cache size of 2MB can cache PPNs covering a 1GB range. */
+static unsigned int ufshpb_host_map_kbytes = 2048;
+static int tot_active_srgn_pages;
+
+static struct workqueue_struct *ufshpb_wq;
+
bool ufshpb_is_allowed(struct ufs_hba *hba)
{
return !(hba->ufshpb_dev.hpb_disabled);
@@ -36,14 +46,892 @@ static void ufshpb_set_state(struct ufshpb_lu *hpb, int state)
atomic_set(&hpb->hpb_state, state);
}

+static bool ufshpb_is_general_lun(int lun)
+{
+ return lun < UFS_UPIU_MAX_UNIT_NUM_ID;
+}
+
+static bool
+ufshpb_is_pinned_region(struct ufshpb_lu *hpb, int rgn_idx)
+{
+ if (hpb->lu_pinned_end != PINNED_NOT_SET &&
+ rgn_idx >= hpb->lu_pinned_start &&
+ rgn_idx <= hpb->lu_pinned_end)
+ return true;
+
+ return false;
+}
+
+static void ufshpb_kick_map_work(struct ufshpb_lu *hpb)
+{
+ bool ret = false;
+ unsigned long flags;
+
+ if (ufshpb_get_state(hpb) != HPB_PRESENT)
+ return;
+
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ if (!list_empty(&hpb->lh_inact_rgn) || !list_empty(&hpb->lh_act_srgn))
+ ret = true;
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+
+ if (ret)
+ queue_work(ufshpb_wq, &hpb->map_work);
+}
+
+static bool ufshpb_is_hpb_rsp_valid(struct ufs_hba *hba,
+ struct ufshcd_lrb *lrbp,
+ struct utp_hpb_rsp *rsp_field)
+{
+ /* Check HPB_UPDATE_ALERT */
+ if (!(lrbp->ucd_rsp_ptr->header.dword_2 &
+ UPIU_HEADER_DWORD(0, 2, 0, 0)))
+ return false;
+
+ if (be16_to_cpu(rsp_field->sense_data_len) != DEV_SENSE_SEG_LEN ||
+ rsp_field->desc_type != DEV_DES_TYPE ||
+ rsp_field->additional_len != DEV_ADDITIONAL_LEN ||
+ rsp_field->active_rgn_cnt > MAX_ACTIVE_NUM ||
+ rsp_field->inactive_rgn_cnt > MAX_INACTIVE_NUM ||
+ rsp_field->hpb_op == HPB_RSP_NONE ||
+ (rsp_field->hpb_op == HPB_RSP_REQ_REGION_UPDATE &&
+ !rsp_field->active_rgn_cnt && !rsp_field->inactive_rgn_cnt))
+ return false;
+
+ if (!ufshpb_is_general_lun(rsp_field->lun)) {
+ dev_warn(hba->dev, "ufshpb: lun(%d) not supported\n",
+ lrbp->lun);
+ return false;
+ }
+
+ return true;
+}
+
+static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
+ struct ufshpb_subregion *srgn)
+{
+ struct ufshpb_req *map_req;
+ struct request *req;
+ struct bio *bio;
+ int retries = HPB_MAP_REQ_RETRIES;
+
+ map_req = kmem_cache_alloc(hpb->map_req_cache, GFP_KERNEL);
+ if (!map_req)
+ return NULL;
+
+retry:
+ req = blk_get_request(hpb->sdev_ufs_lu->request_queue,
+ REQ_OP_SCSI_IN, BLK_MQ_REQ_NOWAIT);
+
+ if ((PTR_ERR(req) == -EWOULDBLOCK) && (--retries > 0)) {
+ usleep_range(3000, 3100);
+ goto retry;
+ }
+
+ if (IS_ERR(req))
+ goto free_map_req;
+
+ bio = bio_alloc(GFP_KERNEL, hpb->pages_per_srgn);
+ if (!bio) {
+ blk_put_request(req);
+ goto free_map_req;
+ }
+
+ map_req->hpb = hpb;
+ map_req->req = req;
+ map_req->bio = bio;
+
+ map_req->rgn_idx = srgn->rgn_idx;
+ map_req->srgn_idx = srgn->srgn_idx;
+ map_req->mctx = srgn->mctx;
+
+ return map_req;
+
+free_map_req:
+ kmem_cache_free(hpb->map_req_cache, map_req);
+ return NULL;
+}
+
+static void ufshpb_put_map_req(struct ufshpb_lu *hpb,
+ struct ufshpb_req *map_req)
+{
+ bio_put(map_req->bio);
+ blk_put_request(map_req->req);
+ kmem_cache_free(hpb->map_req_cache, map_req);
+}
+
+static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
+ struct ufshpb_subregion *srgn)
+{
+ u32 num_entries = hpb->entries_per_srgn;
+
+ if (!srgn->mctx) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "no mctx in region %d subregion %d.\n",
+ srgn->rgn_idx, srgn->srgn_idx);
+ return -1;
+ }
+
+ if (unlikely(srgn->is_last))
+ num_entries = hpb->last_srgn_entries;
+
+ bitmap_zero(srgn->mctx->ppn_dirty, num_entries);
+ return 0;
+}
+
+static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
+ int srgn_idx)
+{
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+
+ rgn = hpb->rgn_tbl + rgn_idx;
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ list_del_init(&rgn->list_inact_rgn);
+
+ if (list_empty(&srgn->list_act_srgn))
+ list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+}
+
+static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)
+{
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+ int srgn_idx;
+
+ rgn = hpb->rgn_tbl + rgn_idx;
+
+ for_each_sub_region(rgn, srgn_idx, srgn)
+ list_del_init(&srgn->list_act_srgn);
+
+ if (list_empty(&rgn->list_inact_rgn))
+ list_add_tail(&rgn->list_inact_rgn, &hpb->lh_inact_rgn);
+}
+
+static void ufshpb_activate_subregion(struct ufshpb_lu *hpb,
+ struct ufshpb_subregion *srgn)
+{
+ struct ufshpb_region *rgn;
+
+ /*
+ * If there is no mctx in the subregion
+ * after the HPB_READ_BUFFER I/O has completed, the region to which
+ * the subregion belongs was evicted.
+ * Make sure the region is not evicted while I/O is in progress.
+ */
+ if (!srgn->mctx) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "no mctx in region %d subregion %d.\n",
+ srgn->rgn_idx, srgn->srgn_idx);
+ srgn->srgn_state = HPB_SRGN_INVALID;
+ return;
+ }
+
+ rgn = hpb->rgn_tbl + srgn->rgn_idx;
+
+ if (unlikely(rgn->rgn_state == HPB_RGN_INACTIVE)) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "region %d subregion %d evicted\n",
+ srgn->rgn_idx, srgn->srgn_idx);
+ srgn->srgn_state = HPB_SRGN_INVALID;
+ return;
+ }
+ srgn->srgn_state = HPB_SRGN_VALID;
+}
+
+static void ufshpb_map_req_compl_fn(struct request *req, blk_status_t error)
+{
+ struct ufshpb_req *map_req = (struct ufshpb_req *) req->end_io_data;
+ struct ufshpb_lu *hpb = map_req->hpb;
+ struct ufshpb_subregion *srgn;
+ unsigned long flags;
+
+ srgn = hpb->rgn_tbl[map_req->rgn_idx].srgn_tbl +
+ map_req->srgn_idx;
+
+ ufshpb_clear_dirty_bitmap(hpb, srgn);
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+ ufshpb_activate_subregion(hpb, srgn);
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+ ufshpb_put_map_req(map_req->hpb, map_req);
+}
+
+static void ufshpb_set_read_buf_cmd(unsigned char *cdb, int rgn_idx,
+ int srgn_idx, int srgn_mem_size)
+{
+ cdb[0] = UFSHPB_READ_BUFFER;
+ cdb[1] = UFSHPB_READ_BUFFER_ID;
+
+ put_unaligned_be16(rgn_idx, &cdb[2]);
+ put_unaligned_be16(srgn_idx, &cdb[4]);
+ put_unaligned_be24(srgn_mem_size, &cdb[6]);
+
+ cdb[9] = 0x00;
+}
+
+static int ufshpb_execute_map_req(struct ufshpb_lu *hpb,
+ struct ufshpb_req *map_req, bool last)
+{
+ struct request_queue *q;
+ struct request *req;
+ struct scsi_request *rq;
+ int mem_size = hpb->srgn_mem_size;
+ int ret = 0;
+ int i;
+
+ q = hpb->sdev_ufs_lu->request_queue;
+ for (i = 0; i < hpb->pages_per_srgn; i++) {
+ ret = bio_add_pc_page(q, map_req->bio, map_req->mctx->m_page[i],
+ PAGE_SIZE, 0);
+ if (ret != PAGE_SIZE) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "bio_add_pc_page fail %d - %d\n",
+ map_req->rgn_idx, map_req->srgn_idx);
+ return ret;
+ }
+ }
+
+ req = map_req->req;
+
+ blk_rq_append_bio(req, &map_req->bio);
+
+ req->end_io_data = map_req;
+
+ rq = scsi_req(req);
+
+ if (unlikely(last))
+ mem_size = hpb->last_srgn_entries * HPB_ENTRY_SIZE;
+
+ ufshpb_set_read_buf_cmd(rq->cmd, map_req->rgn_idx,
+ map_req->srgn_idx, mem_size);
+ rq->cmd_len = HPB_READ_BUFFER_CMD_LENGTH;
+
+ blk_execute_rq_nowait(q, NULL, req, 1, ufshpb_map_req_compl_fn);
+
+ hpb->stats.map_req_cnt++;
+ return 0;
+}
+
+static struct ufshpb_map_ctx *ufshpb_get_map_ctx(struct ufshpb_lu *hpb,
+ bool last)
+{
+ struct ufshpb_map_ctx *mctx;
+ u32 num_entries = hpb->entries_per_srgn;
+ int i, j;
+
+ mctx = mempool_alloc(ufshpb_mctx_pool, GFP_KERNEL);
+ if (!mctx)
+ return NULL;
+
+ mctx->m_page = kmem_cache_alloc(hpb->m_page_cache, GFP_KERNEL);
+ if (!mctx->m_page)
+ goto release_mctx;
+
+ if (unlikely(last))
+ num_entries = hpb->last_srgn_entries;
+
+ mctx->ppn_dirty = bitmap_zalloc(num_entries, GFP_KERNEL);
+ if (!mctx->ppn_dirty)
+ goto release_m_page;
+
+ for (i = 0; i < hpb->pages_per_srgn; i++) {
+ mctx->m_page[i] = mempool_alloc(ufshpb_page_pool, GFP_KERNEL);
+ if (!mctx->m_page[i]) {
+ for (j = 0; j < i; j++)
+ mempool_free(mctx->m_page[j], ufshpb_page_pool);
+ goto release_ppn_dirty;
+ }
+ clear_page(page_address(mctx->m_page[i]));
+ }
+
+ return mctx;
+
+release_ppn_dirty:
+ bitmap_free(mctx->ppn_dirty);
+release_m_page:
+ kmem_cache_free(hpb->m_page_cache, mctx->m_page);
+release_mctx:
+ mempool_free(mctx, ufshpb_mctx_pool);
+ return NULL;
+}
+
+static void ufshpb_put_map_ctx(struct ufshpb_lu *hpb,
+ struct ufshpb_map_ctx *mctx)
+{
+ int i;
+
+ for (i = 0; i < hpb->pages_per_srgn; i++)
+ mempool_free(mctx->m_page[i], ufshpb_page_pool);
+
+ bitmap_free(mctx->ppn_dirty);
+ kmem_cache_free(hpb->m_page_cache, mctx->m_page);
+ mempool_free(mctx, ufshpb_mctx_pool);
+}
+
+static int ufshpb_check_srgns_issue_state(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn)
+{
+ struct ufshpb_subregion *srgn;
+ int srgn_idx;
+
+ for_each_sub_region(rgn, srgn_idx, srgn)
+ if (srgn->srgn_state == HPB_SRGN_ISSUED)
+ return -EPERM;
+
+ return 0;
+}
+
+static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
+ struct ufshpb_region *rgn)
+{
+ rgn->rgn_state = HPB_RGN_ACTIVE;
+ list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
+ atomic_inc(&lru_info->active_cnt);
+}
+
+static void ufshpb_hit_lru_info(struct victim_select_info *lru_info,
+ struct ufshpb_region *rgn)
+{
+ list_move_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
+}
+
+static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
+{
+ struct victim_select_info *lru_info = &hpb->lru_info;
+ struct ufshpb_region *rgn, *victim_rgn = NULL;
+
+ list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) {
+ if (!rgn) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "%s: no region allocated\n",
+ __func__);
+ return NULL;
+ }
+ if (ufshpb_check_srgns_issue_state(hpb, rgn))
+ continue;
+
+ victim_rgn = rgn;
+ break;
+ }
+
+ return victim_rgn;
+}
+
+static void ufshpb_cleanup_lru_info(struct victim_select_info *lru_info,
+ struct ufshpb_region *rgn)
+{
+ list_del_init(&rgn->list_lru_rgn);
+ rgn->rgn_state = HPB_RGN_INACTIVE;
+ atomic_dec(&lru_info->active_cnt);
+}
+
+static void ufshpb_purge_active_subregion(struct ufshpb_lu *hpb,
+ struct ufshpb_subregion *srgn)
+{
+ if (srgn->srgn_state != HPB_SRGN_UNUSED) {
+ ufshpb_put_map_ctx(hpb, srgn->mctx);
+ srgn->srgn_state = HPB_SRGN_UNUSED;
+ srgn->mctx = NULL;
+ }
+}
+
+static void __ufshpb_evict_region(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn)
+{
+ struct victim_select_info *lru_info;
+ struct ufshpb_subregion *srgn;
+ int srgn_idx;
+
+ lru_info = &hpb->lru_info;
+
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "evict region %d\n", rgn->rgn_idx);
+
+ ufshpb_cleanup_lru_info(lru_info, rgn);
+
+ for_each_sub_region(rgn, srgn_idx, srgn)
+ ufshpb_purge_active_subregion(hpb, srgn);
+}
+
+static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
+{
+ unsigned long flags;
+ int ret = 0;
+
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+ if (rgn->rgn_state == HPB_RGN_PINNED) {
+ dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
+ "pinned region cannot drop-out. region %d\n",
+ rgn->rgn_idx);
+ goto out;
+ }
+ if (!list_empty(&rgn->list_lru_rgn)) {
+ if (ufshpb_check_srgns_issue_state(hpb, rgn)) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ __ufshpb_evict_region(hpb, rgn);
+ }
+out:
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+ return ret;
+}
+
+static int ufshpb_issue_map_req(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn,
+ struct ufshpb_subregion *srgn)
+{
+ struct ufshpb_req *map_req;
+ unsigned long flags;
+ int ret;
+ int err = -EAGAIN;
+ bool alloc_required = false;
+ enum HPB_SRGN_STATE state = HPB_SRGN_INVALID;
+
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+
+ if (ufshpb_get_state(hpb) != HPB_PRESENT) {
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+ "%s: ufshpb state is not PRESENT\n", __func__);
+ goto unlock_out;
+ }
+
+ if ((rgn->rgn_state == HPB_RGN_INACTIVE) &&
+ (srgn->srgn_state == HPB_SRGN_INVALID)) {
+ err = 0;
+ goto unlock_out;
+ }
+
+ if (srgn->srgn_state == HPB_SRGN_UNUSED)
+ alloc_required = true;
+
+ /*
+ * If the subregion is already ISSUED state,
+ * a specific event (e.g., GC or wear-leveling, etc.) occurs in
+ * the device and HPB response for map loading is received.
+ * In this case, after finishing the HPB_READ_BUFFER,
+ * the next HPB_READ_BUFFER is performed again to obtain the latest
+ * map data.
+ */
+ if (srgn->srgn_state == HPB_SRGN_ISSUED)
+ goto unlock_out;
+
+ srgn->srgn_state = HPB_SRGN_ISSUED;
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+ if (alloc_required) {
+ if (srgn->mctx) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "map_ctx is already allocated. region %d - %d\n",
+ rgn->rgn_idx, srgn->srgn_idx);
+ ufshpb_put_map_ctx(hpb, srgn->mctx);
+ }
+ srgn->mctx = ufshpb_get_map_ctx(hpb, srgn->is_last);
+ if (!srgn->mctx) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "get map_ctx failed. region %d - %d\n",
+ rgn->rgn_idx, srgn->srgn_idx);
+ state = HPB_SRGN_UNUSED;
+ goto change_srgn_state;
+ }
+ }
+
+ map_req = ufshpb_get_map_req(hpb, srgn);
+ if (!map_req)
+ goto change_srgn_state;
+
+
+ ret = ufshpb_execute_map_req(hpb, map_req, srgn->is_last);
+ if (ret) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "%s: issue map_req failed: %d, region %d - %d\n",
+ __func__, ret, srgn->rgn_idx, srgn->srgn_idx);
+ goto free_map_req;
+ }
+ return 0;
+
+free_map_req:
+ ufshpb_put_map_req(hpb, map_req);
+change_srgn_state:
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+ srgn->srgn_state = state;
+unlock_out:
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+ return err;
+}
+
+static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
+{
+ struct ufshpb_region *victim_rgn;
+ struct victim_select_info *lru_info = &hpb->lru_info;
+ unsigned long flags;
+ int ret = 0;
+
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+ /*
+ * If the region already belongs to the LRU list, just move it to
+ * the MRU end of the list, because the region is already in the
+ * active state.
+ */
+ if (!list_empty(&rgn->list_lru_rgn)) {
+ ufshpb_hit_lru_info(lru_info, rgn);
+ goto out;
+ }
+
+ if (rgn->rgn_state == HPB_RGN_INACTIVE) {
+ if (atomic_read(&lru_info->active_cnt) ==
+ lru_info->max_lru_active_cnt) {
+ /*
+ * If the maximum number of active regions
+ * is exceeded, evict the least recently used region.
+ * This case may occur when the device responds late to the
+ * eviction information.
+ * It is okay to evict the least recently used region, because
+ * the device can detect the eviction once the host stops
+ * issuing HPB READ for it.
+ */
+ victim_rgn = ufshpb_victim_lru_info(hpb);
+ if (!victim_rgn) {
+ dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
+ "cannot get victim region error\n");
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+ "LRU full (%d), choose victim %d\n",
+ atomic_read(&lru_info->active_cnt),
+ victim_rgn->rgn_idx);
+ __ufshpb_evict_region(hpb, victim_rgn);
+ }
+
+ /*
+ * When a region is added to the lru_info list_head, it is
+ * guaranteed that all of its subregions have been assigned an
+ * mctx. On failure, mctx allocation is retried without the
+ * region being added to the lru_info list_head.
+ */
+ ufshpb_add_lru_info(lru_info, rgn);
+ }
+out:
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+ return ret;
+}
+
+static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
+ struct utp_hpb_rsp *rsp_field)
+{
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+ int i, rgn_i, srgn_i;
+
+ BUILD_BUG_ON(sizeof(struct ufshpb_active_field) != HPB_ACT_FIELD_SIZE);
+ /*
+ * If the active region and the inactive region are the same,
+ * we will inactivate this region.
+ * The device can detect this (the region was inactivated) and
+ * will respond with the proper active region information.
+ */
+ for (i = 0; i < rsp_field->active_rgn_cnt; i++) {
+ rgn_i =
+ be16_to_cpu(rsp_field->hpb_active_field[i].active_rgn);
+ srgn_i =
+ be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn);
+
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+ "activate(%d) region %d - %d\n", i, rgn_i, srgn_i);
+
+ spin_lock(&hpb->rsp_list_lock);
+ ufshpb_update_active_info(hpb, rgn_i, srgn_i);
+ spin_unlock(&hpb->rsp_list_lock);
+
+ rgn = hpb->rgn_tbl + rgn_i;
+ srgn = rgn->srgn_tbl + srgn_i;
+
+ /* blocking HPB_READ */
+ spin_lock(&hpb->rgn_state_lock);
+ if (srgn->srgn_state == HPB_SRGN_VALID)
+ srgn->srgn_state = HPB_SRGN_INVALID;
+ spin_unlock(&hpb->rgn_state_lock);
+ hpb->stats.rb_active_cnt++;
+ }
+
+ for (i = 0; i < rsp_field->inactive_rgn_cnt; i++) {
+ rgn_i = be16_to_cpu(rsp_field->hpb_inactive_field[i]);
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+ "inactivate(%d) region %d\n", i, rgn_i);
+
+ spin_lock(&hpb->rsp_list_lock);
+ ufshpb_update_inactive_info(hpb, rgn_i);
+ spin_unlock(&hpb->rsp_list_lock);
+
+ rgn = hpb->rgn_tbl + rgn_i;
+
+ spin_lock(&hpb->rgn_state_lock);
+ if (rgn->rgn_state != HPB_RGN_INACTIVE) {
+ for (srgn_i = 0; srgn_i < rgn->srgn_cnt; srgn_i++) {
+ srgn = rgn->srgn_tbl + srgn_i;
+ if (srgn->srgn_state == HPB_SRGN_VALID)
+ srgn->srgn_state = HPB_SRGN_INVALID;
+ }
+ }
+ spin_unlock(&hpb->rgn_state_lock);
+
+ hpb->stats.rb_inactive_cnt++;
+ }
+
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "Noti: #ACT %u #INACT %u\n",
+ rsp_field->active_rgn_cnt, rsp_field->inactive_rgn_cnt);
+
+ if (ufshpb_get_state(hpb) == HPB_PRESENT)
+ queue_work(ufshpb_wq, &hpb->map_work);
+}
+
+/*
+ * This function parses the recommended active subregion information in the
+ * sense data field of a response UPIU with SAM_STAT_GOOD status.
+ */
+void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+{
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(lrbp->cmd->device);
+ struct utp_hpb_rsp *rsp_field = &lrbp->ucd_rsp_ptr->hr;
+ int data_seg_len;
+
+ if (unlikely(lrbp->lun != rsp_field->lun)) {
+ struct scsi_device *sdev;
+ bool found = false;
+
+ __shost_for_each_device(sdev, hba->host) {
+ hpb = ufshpb_get_hpb_data(sdev);
+
+ if (!hpb)
+ continue;
+
+ if (rsp_field->lun == hpb->lun) {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ return;
+ }
+
+ if (!hpb)
+ return;
+
+ if ((ufshpb_get_state(hpb) != HPB_PRESENT) &&
+ (ufshpb_get_state(hpb) != HPB_SUSPEND)) {
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+ "%s: ufshpb state is not PRESENT/SUSPEND\n",
+ __func__);
+ return;
+ }
+
+ data_seg_len = be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_2)
+ & MASK_RSP_UPIU_DATA_SEG_LEN;
+
+ /* To flush the remaining rsp_list, we queue the map_work task */
+ if (!data_seg_len) {
+ if (!ufshpb_is_general_lun(hpb->lun))
+ return;
+
+ ufshpb_kick_map_work(hpb);
+ return;
+ }
+
+ BUILD_BUG_ON(sizeof(struct utp_hpb_rsp) != UTP_HPB_RSP_SIZE);
+
+ if (!ufshpb_is_hpb_rsp_valid(hba, lrbp, rsp_field))
+ return;
+
+ hpb->stats.rb_noti_cnt++;
+
+ switch (rsp_field->hpb_op) {
+ case HPB_RSP_REQ_REGION_UPDATE:
+ if (data_seg_len != DEV_DATA_SEG_LEN)
+ dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
+ "%s: data seg length is not same.\n",
+ __func__);
+ ufshpb_rsp_req_region_update(hpb, rsp_field);
+ break;
+ case HPB_RSP_DEV_RESET:
+ dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
+ "UFS device lost HPB information during PM.\n");
+ break;
+ default:
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+ "hpb_op is not available: %d\n",
+ rsp_field->hpb_op);
+ break;
+ }
+}
+
+static void ufshpb_add_active_list(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn,
+ struct ufshpb_subregion *srgn)
+{
+ if (!list_empty(&rgn->list_inact_rgn))
+ return;
+
+ if (!list_empty(&srgn->list_act_srgn)) {
+ list_move(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+ return;
+ }
+
+ list_add(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+}
+
+static void ufshpb_add_pending_evict_list(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn,
+ struct list_head *pending_list)
+{
+ struct ufshpb_subregion *srgn;
+ int srgn_idx;
+
+ if (!list_empty(&rgn->list_inact_rgn))
+ return;
+
+ for_each_sub_region(rgn, srgn_idx, srgn)
+ if (!list_empty(&srgn->list_act_srgn))
+ return;
+
+ list_add_tail(&rgn->list_inact_rgn, pending_list);
+}
+
+static void ufshpb_run_active_subregion_list(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+ unsigned long flags;
+ int ret = 0;
+
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ while ((srgn = list_first_entry_or_null(&hpb->lh_act_srgn,
+ struct ufshpb_subregion,
+ list_act_srgn))) {
+ if (ufshpb_get_state(hpb) == HPB_SUSPEND)
+ break;
+
+ list_del_init(&srgn->list_act_srgn);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+
+ rgn = hpb->rgn_tbl + srgn->rgn_idx;
+ ret = ufshpb_add_region(hpb, rgn);
+ if (ret)
+ goto active_failed;
+
+ ret = ufshpb_issue_map_req(hpb, rgn, srgn);
+ if (ret) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "issue map_req failed. ret %d, region %d - %d\n",
+ ret, rgn->rgn_idx, srgn->srgn_idx);
+ goto active_failed;
+ }
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ }
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+ return;
+
+active_failed:
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev, "failed to activate region %d - %d, will retry\n",
+ rgn->rgn_idx, srgn->srgn_idx);
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ ufshpb_add_active_list(hpb, rgn, srgn);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+}
+
+static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_region *rgn;
+ unsigned long flags;
+ int ret;
+ LIST_HEAD(pending_list);
+
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ while ((rgn = list_first_entry_or_null(&hpb->lh_inact_rgn,
+ struct ufshpb_region,
+ list_inact_rgn))) {
+ if (ufshpb_get_state(hpb) == HPB_SUSPEND)
+ break;
+
+ list_del_init(&rgn->list_inact_rgn);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+
+ ret = ufshpb_evict_region(hpb, rgn);
+ if (ret) {
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ ufshpb_add_pending_evict_list(hpb, rgn, &pending_list);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+ }
+
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ }
+
+ list_splice(&pending_list, &hpb->lh_inact_rgn);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+}
+
+static void ufshpb_map_work_handler(struct work_struct *work)
+{
+ struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work);
+
+ if (ufshpb_get_state(hpb) != HPB_PRESENT) {
+ dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+ "%s: ufshpb state is not PRESENT\n", __func__);
+ return;
+ }
+
+ ufshpb_run_inactive_region_list(hpb);
+ ufshpb_run_active_subregion_list(hpb);
+}
+
+/*
+ * This function does not need to hold locks (rgn_state_lock,
+ * rsp_list_lock, etc.) because it is only called during init.
+ */
+static int ufshpb_init_pinned_active_region(struct ufs_hba *hba,
+ struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn)
+{
+ struct ufshpb_subregion *srgn;
+ int srgn_idx, i;
+ int err = 0;
+
+ for_each_sub_region(rgn, srgn_idx, srgn) {
+ srgn->mctx = ufshpb_get_map_ctx(hpb, srgn->is_last);
+ srgn->srgn_state = HPB_SRGN_INVALID;
+ if (!srgn->mctx) {
+ err = -ENOMEM;
+ dev_err(hba->dev,
+ "alloc mctx for pinned region failed\n");
+ goto release;
+ }
+
+ list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+ }
+
+ rgn->rgn_state = HPB_RGN_PINNED;
+ return 0;
+
+release:
+ for (i = 0; i < srgn_idx; i++) {
+ srgn = rgn->srgn_tbl + i;
+ ufshpb_put_map_ctx(hpb, srgn->mctx);
+ }
+ return err;
+}
+
static void ufshpb_init_subregion_tbl(struct ufshpb_lu *hpb,
struct ufshpb_region *rgn, bool last)
{
int srgn_idx;
struct ufshpb_subregion *srgn;

- for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
- srgn = rgn->srgn_tbl + srgn_idx;
+ for_each_sub_region(rgn, srgn_idx, srgn) {
+ INIT_LIST_HEAD(&srgn->list_act_srgn);

srgn->rgn_idx = rgn->rgn_idx;
srgn->srgn_idx = srgn_idx;
@@ -78,6 +966,8 @@ static void ufshpb_lu_parameter_init(struct ufs_hba *hba,
hpb->lu_pinned_end = hpb_lu_info->num_pinned ?
(hpb_lu_info->pinned_start + hpb_lu_info->num_pinned - 1)
: PINNED_NOT_SET;
+ hpb->lru_info.max_lru_active_cnt =
+ hpb_lu_info->max_active_rgns - hpb_lu_info->num_pinned;

rgn_mem_size = (1ULL << hpb_dev_info->rgn_size) * HPB_RGN_SIZE_UNIT
* HPB_ENTRY_SIZE;
@@ -129,6 +1019,9 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
rgn = rgn_table + rgn_idx;
rgn->rgn_idx = rgn_idx;

+ INIT_LIST_HEAD(&rgn->list_inact_rgn);
+ INIT_LIST_HEAD(&rgn->list_lru_rgn);
+
if (rgn_idx == hpb->rgns_per_lu - 1) {
srgn_cnt = ((hpb->srgns_per_lu - 1) %
hpb->srgns_per_rgn) + 1;
@@ -140,7 +1033,13 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
goto release_srgn_table;
ufshpb_init_subregion_tbl(hpb, rgn, last_srgn);

- rgn->rgn_state = HPB_RGN_INACTIVE;
+ if (ufshpb_is_pinned_region(hpb, rgn_idx)) {
+ ret = ufshpb_init_pinned_active_region(hba, hpb, rgn);
+ if (ret)
+ goto release_srgn_table;
+ } else {
+ rgn->rgn_state = HPB_RGN_INACTIVE;
+ }
}

return 0;
@@ -159,13 +1058,13 @@ static void ufshpb_destroy_subregion_tbl(struct ufshpb_lu *hpb,
struct ufshpb_region *rgn)
{
int srgn_idx;
+ struct ufshpb_subregion *srgn;

- for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
- struct ufshpb_subregion *srgn;
-
- srgn = rgn->srgn_tbl + srgn_idx;
- srgn->srgn_state = HPB_SRGN_UNUSED;
- }
+ for_each_sub_region(rgn, srgn_idx, srgn)
+ if (srgn->srgn_state != HPB_SRGN_UNUSED) {
+ srgn->srgn_state = HPB_SRGN_UNUSED;
+ ufshpb_put_map_ctx(hpb, srgn->mctx);
+ }
}

static void ufshpb_destroy_region_tbl(struct ufshpb_lu *hpb)
@@ -239,11 +1138,47 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
{
int ret;

+ spin_lock_init(&hpb->rgn_state_lock);
+ spin_lock_init(&hpb->rsp_list_lock);
+
+ INIT_LIST_HEAD(&hpb->lru_info.lh_lru_rgn);
+ INIT_LIST_HEAD(&hpb->lh_act_srgn);
+ INIT_LIST_HEAD(&hpb->lh_inact_rgn);
+ INIT_LIST_HEAD(&hpb->list_hpb_lu);
+
+ INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
+
+ hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
+ sizeof(struct ufshpb_req), 0, 0, NULL);
+ if (!hpb->map_req_cache) {
+ dev_err(hba->dev, "ufshpb(%d) ufshpb_req_cache create fail",
+ hpb->lun);
+ return -ENOMEM;
+ }
+
+ hpb->m_page_cache = kmem_cache_create("ufshpb_m_page_cache",
+ sizeof(struct page *) * hpb->pages_per_srgn,
+ 0, 0, NULL);
+ if (!hpb->m_page_cache) {
+ dev_err(hba->dev, "ufshpb(%d) ufshpb_m_page_cache create fail",
+ hpb->lun);
+ ret = -ENOMEM;
+ goto release_req_cache;
+ }
+
ret = ufshpb_alloc_region_tbl(hba, hpb);
+ if (ret)
+ goto release_m_page_cache;

ufshpb_stat_init(hpb);

return 0;
+
+release_m_page_cache:
+ kmem_cache_destroy(hpb->m_page_cache);
+release_req_cache:
+ kmem_cache_destroy(hpb->map_req_cache);
+ return ret;
}

static struct ufshpb_lu *
@@ -275,6 +1210,33 @@ ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
return NULL;
}

+static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_region *rgn, *next_rgn;
+ struct ufshpb_subregion *srgn, *next_srgn;
+ unsigned long flags;
+
+ /*
+ * If a device reset occurred, the remaining HPB region information
+ * may be stale. Discarding the HPB response lists that remain after
+ * reset prevents unnecessary work.
+ */
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ list_for_each_entry_safe(rgn, next_rgn, &hpb->lh_inact_rgn,
+ list_inact_rgn)
+ list_del_init(&rgn->list_inact_rgn);
+
+ list_for_each_entry_safe(srgn, next_srgn, &hpb->lh_act_srgn,
+ list_act_srgn)
+ list_del_init(&srgn->list_act_srgn);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+}
+
+static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
+{
+ cancel_work_sync(&hpb->map_work);
+}
+
static bool ufshpb_check_hpb_reset_query(struct ufs_hba *hba)
{
int err = 0;
@@ -318,7 +1280,7 @@ void ufshpb_reset(struct ufs_hba *hba)
struct scsi_device *sdev;

shost_for_each_device(sdev, hba->host) {
- hpb = sdev->hostdata;
+ hpb = ufshpb_get_hpb_data(sdev);
if (!hpb)
continue;

@@ -335,13 +1297,15 @@ void ufshpb_reset_host(struct ufs_hba *hba)
struct scsi_device *sdev;

shost_for_each_device(sdev, hba->host) {
- hpb = sdev->hostdata;
+ hpb = ufshpb_get_hpb_data(sdev);
if (!hpb)
continue;

if (ufshpb_get_state(hpb) != HPB_PRESENT)
continue;
ufshpb_set_state(hpb, HPB_RESET);
+ ufshpb_cancel_jobs(hpb);
+ ufshpb_discard_rsp_lists(hpb);
}
}

@@ -351,13 +1315,14 @@ void ufshpb_suspend(struct ufs_hba *hba)
struct scsi_device *sdev;

shost_for_each_device(sdev, hba->host) {
- hpb = sdev->hostdata;
+ hpb = ufshpb_get_hpb_data(sdev);
if (!hpb)
continue;

if (ufshpb_get_state(hpb) != HPB_PRESENT)
continue;
ufshpb_set_state(hpb, HPB_SUSPEND);
+ ufshpb_cancel_jobs(hpb);
}
}

@@ -367,7 +1332,7 @@ void ufshpb_resume(struct ufs_hba *hba)
struct scsi_device *sdev;

shost_for_each_device(sdev, hba->host) {
- hpb = sdev->hostdata;
+ hpb = ufshpb_get_hpb_data(sdev);
if (!hpb)
continue;

@@ -375,6 +1340,7 @@ void ufshpb_resume(struct ufs_hba *hba)
(ufshpb_get_state(hpb) != HPB_SUSPEND))
continue;
ufshpb_set_state(hpb, HPB_PRESENT);
+ ufshpb_kick_map_work(hpb);
}
}

@@ -427,7 +1393,7 @@ static int ufshpb_get_lu_info(struct ufs_hba *hba, int lun,

void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev)
{
- struct ufshpb_lu *hpb = sdev->hostdata;
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);

if (!hpb)
return;
@@ -437,8 +1403,13 @@ void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev)
sdev = hpb->sdev_ufs_lu;
sdev->hostdata = NULL;

+ ufshpb_cancel_jobs(hpb);
+
ufshpb_destroy_region_tbl(hpb);

+ kmem_cache_destroy(hpb->map_req_cache);
+ kmem_cache_destroy(hpb->m_page_cache);
+
list_del_init(&hpb->list_hpb_lu);

kfree(hpb);
@@ -446,24 +1417,41 @@ void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev)

static void ufshpb_hpb_lu_prepared(struct ufs_hba *hba)
{
+ int pool_size;
struct ufshpb_lu *hpb;
struct scsi_device *sdev;
bool init_success;

+ if (tot_active_srgn_pages == 0) {
+ ufshpb_remove(hba);
+ return;
+ }
+
init_success = !ufshpb_check_hpb_reset_query(hba);

+ pool_size = PAGE_ALIGN(ufshpb_host_map_kbytes * 1024) / PAGE_SIZE;
+ if (pool_size > tot_active_srgn_pages) {
+ mempool_resize(ufshpb_mctx_pool, tot_active_srgn_pages);
+ mempool_resize(ufshpb_page_pool, tot_active_srgn_pages);
+ }
+
shost_for_each_device(sdev, hba->host) {
- hpb = sdev->hostdata;
+ hpb = ufshpb_get_hpb_data(sdev);
if (!hpb)
continue;

if (init_success) {
ufshpb_set_state(hpb, HPB_PRESENT);
+ if ((hpb->lu_pinned_end - hpb->lu_pinned_start) > 0)
+ queue_work(ufshpb_wq, &hpb->map_work);
} else {
dev_err(hba->dev, "destroy HPB lu %d\n", hpb->lun);
ufshpb_destroy_lu(hba, sdev);
}
}
+
+ if (!init_success)
+ ufshpb_remove(hba);
}

void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev)
@@ -485,6 +1473,9 @@ void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev)
if (!hpb)
goto out;

+ tot_active_srgn_pages += hpb_lu_info.max_active_rgns *
+ hpb->srgns_per_rgn * hpb->pages_per_srgn;
+
hpb->sdev_ufs_lu = sdev;
sdev->hostdata = hpb;

@@ -494,6 +1485,57 @@ void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev)
ufshpb_hpb_lu_prepared(hba);
}

+static int ufshpb_init_mem_wq(void)
+{
+ int ret;
+ unsigned int pool_size;
+
+ ufshpb_mctx_cache = kmem_cache_create("ufshpb_mctx_cache",
+ sizeof(struct ufshpb_map_ctx),
+ 0, 0, NULL);
+ if (!ufshpb_mctx_cache) {
+ pr_err("ufshpb: cannot init mctx cache\n");
+ return -ENOMEM;
+ }
+
+ pool_size = PAGE_ALIGN(ufshpb_host_map_kbytes * 1024) / PAGE_SIZE;
+ pr_info("%s:%d ufshpb_host_map_kbytes %u pool_size %u\n",
+ __func__, __LINE__, ufshpb_host_map_kbytes, pool_size);
+
+ ufshpb_mctx_pool = mempool_create_slab_pool(pool_size,
+ ufshpb_mctx_cache);
+ if (!ufshpb_mctx_pool) {
+ pr_err("ufshpb: cannot init mctx pool\n");
+ ret = -ENOMEM;
+ goto release_mctx_cache;
+ }
+
+ ufshpb_page_pool = mempool_create_page_pool(pool_size, 0);
+ if (!ufshpb_page_pool) {
+ pr_err("ufshpb: cannot init page pool\n");
+ ret = -ENOMEM;
+ goto release_mctx_pool;
+ }
+
+ ufshpb_wq = alloc_workqueue("ufshpb-wq",
+ WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
+ if (!ufshpb_wq) {
+ pr_err("ufshpb: alloc workqueue failed\n");
+ ret = -ENOMEM;
+ goto release_page_pool;
+ }
+
+ return 0;
+
+release_page_pool:
+ mempool_destroy(ufshpb_page_pool);
+release_mctx_pool:
+ mempool_destroy(ufshpb_mctx_pool);
+release_mctx_cache:
+ kmem_cache_destroy(ufshpb_mctx_cache);
+ return ret;
+}
+
void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf)
{
struct ufshpb_dev_info *hpb_info = &hba->ufshpb_dev;
@@ -558,7 +1600,13 @@ void ufshpb_init(struct ufs_hba *hba)
if (!ufshpb_is_allowed(hba))
return;

+ if (ufshpb_init_mem_wq()) {
+ hpb_dev_info->hpb_disabled = true;
+ return;
+ }
+
atomic_set(&hpb_dev_info->slave_conf_cnt, hpb_dev_info->num_lu);
+ tot_active_srgn_pages = 0;
/* issue HPB reset query */
for (try = 0; try < HPB_RESET_REQ_RETRIES; try++) {
ret = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_SET_FLAG,
@@ -567,3 +1615,16 @@ void ufshpb_init(struct ufs_hba *hba)
break;
}
}
+
+void ufshpb_remove(struct ufs_hba *hba)
+{
+ mempool_destroy(ufshpb_page_pool);
+ mempool_destroy(ufshpb_mctx_pool);
+ kmem_cache_destroy(ufshpb_mctx_cache);
+
+ destroy_workqueue(ufshpb_wq);
+}
+
+module_param(ufshpb_host_map_kbytes, uint, 0644);
+MODULE_PARM_DESC(ufshpb_host_map_kbytes,
+ "ufshpb host mapping memory kilo-bytes for ufshpb memory-pool");
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 11f5b018af51..aaffc8968afd 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -40,6 +40,7 @@
#define LU_ENABLED_HPB_FUNC 0x02

#define HPB_RESET_REQ_RETRIES 10
+#define HPB_MAP_REQ_RETRIES 5

#define HPB_SUPPORT_VERSION 0x100

@@ -83,11 +84,19 @@ struct ufshpb_lu_info {
int max_active_rgns;
};

+struct ufshpb_map_ctx {
+ struct page **m_page;
+ unsigned long *ppn_dirty;
+};
+
struct ufshpb_subregion {
+ struct ufshpb_map_ctx *mctx;
enum HPB_SRGN_STATE srgn_state;
int rgn_idx;
int srgn_idx;
bool is_last;
+ /* below information is used by rsp_list */
+ struct list_head list_act_srgn;
};

struct ufshpb_region {
@@ -95,6 +104,43 @@ struct ufshpb_region {
enum HPB_RGN_STATE rgn_state;
int rgn_idx;
int srgn_cnt;
+
+ /* below information is used by rsp_list */
+ struct list_head list_inact_rgn;
+
+ /* below information is used by lru */
+ struct list_head list_lru_rgn;
+};
+
+#define for_each_sub_region(rgn, i, srgn) \
+ for ((i) = 0; \
+ ((i) < (rgn)->srgn_cnt) && ((srgn) = &(rgn)->srgn_tbl[i]); \
+ (i)++)
+
+/**
+ * struct ufshpb_req - UFSHPB READ BUFFER (for caching map) request structure
+ * @req: block layer request for READ BUFFER
+ * @bio: bio for holding map page
+ * @hpb: ufshpb_lu structure related to the L2P map
+ * @mctx: L2P map information
+ * @rgn_idx: target region index
+ * @srgn_idx: target sub-region index
+ * @lun: target logical unit number
+ */
+struct ufshpb_req {
+ struct request *req;
+ struct bio *bio;
+ struct ufshpb_lu *hpb;
+ struct ufshpb_map_ctx *mctx;
+
+ unsigned int rgn_idx;
+ unsigned int srgn_idx;
+};
+
+struct victim_select_info {
+ struct list_head lh_lru_rgn; /* LRU list of regions */
+ int max_lru_active_cnt; /* supported hpb #region - pinned #region */
+ atomic_t active_cnt;
};

struct ufshpb_stats {
@@ -109,10 +155,22 @@ struct ufshpb_stats {
struct ufshpb_lu {
int lun;
struct scsi_device *sdev_ufs_lu;
+
+ spinlock_t rgn_state_lock; /* for protect rgn/srgn state */
struct ufshpb_region *rgn_tbl;

atomic_t hpb_state;

+ spinlock_t rsp_list_lock;
+ struct list_head lh_act_srgn; /* hold rsp_list_lock */
+ struct list_head lh_inact_rgn; /* hold rsp_list_lock */
+
+ /* cached L2P map management worker */
+ struct work_struct map_work;
+
+ /* for selecting victim */
+ struct victim_select_info lru_info;
+
/* pinned region information */
u32 lu_pinned_start;
u32 lu_pinned_end;
@@ -132,6 +190,9 @@ struct ufshpb_lu {

struct ufshpb_stats stats;

+ struct kmem_cache *map_req_cache;
+ struct kmem_cache *m_page_cache;
+
struct list_head list_hpb_lu;
};

@@ -139,6 +200,7 @@ struct ufs_hba;
struct ufshcd_lrb;

#ifndef CONFIG_SCSI_UFS_HPB
+static void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
static void ufshpb_resume(struct ufs_hba *hba) {}
static void ufshpb_suspend(struct ufs_hba *hba) {}
static void ufshpb_reset(struct ufs_hba *hba) {}
@@ -146,10 +208,12 @@ static void ufshpb_reset_host(struct ufs_hba *hba) {}
static void ufshpb_init(struct ufs_hba *hba) {}
static void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev) {}
static void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev) {}
+static void ufshpb_remove(struct ufs_hba *hba) {}
static bool ufshpb_is_allowed(struct ufs_hba *hba) { return false; }
static void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf) {}
static void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) {}
#else
+void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
void ufshpb_resume(struct ufs_hba *hba);
void ufshpb_suspend(struct ufs_hba *hba);
void ufshpb_reset(struct ufs_hba *hba);
@@ -157,6 +221,7 @@ void ufshpb_reset_host(struct ufs_hba *hba);
void ufshpb_init(struct ufs_hba *hba);
void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev);
void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev);
+void ufshpb_remove(struct ufs_hba *hba);
bool ufshpb_is_allowed(struct ufs_hba *hba);
void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf);
void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf);
--
2.25.1

2021-02-24 12:46:44

by Daejun Park

Subject: [PATCH v24 4/4] scsi: ufs: Add HPB 2.0 support

This patch adds support for HPB 2.0.

HPB 2.0 supports reads of varying sizes, from 4KB to 512KB. A read of
up to 32KB is issued as a single HPB READ command. A read of 36KB to
512KB is issued as a combination of a WRITE BUFFER command and an HPB
READ command, so that more PPNs can be delivered (see the sketch
below). The WRITE BUFFER command may not be issued immediately due to
busy tags. To use HPB read more aggressively, the driver can requeue
the WRITE BUFFER command. The requeue threshold is implemented as a
timeout and can be modified via the requeue_timeout_ms entry in sysfs.
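
A sketch of that dispatch rule (boundaries assume 4KB L2P entries; the
actual limits come from bMAX_DATA_SIZE_FOR_HPB_SINGLE_CMD and the
sysfs-tunable transfer_len range):

    enum read_kind { NORMAL_READ, HPB_SINGLE_READ, HPB_PRE_REQ_READ };

    static enum read_kind classify_read(unsigned int len) /* 4KB blocks */
    {
        if (len <= 8)           /* up to 32KB: single HPB READ */
            return HPB_SINGLE_READ;
        if (len <= 128)         /* 36KB..512KB: WRITE BUFFER + HPB READ */
            return HPB_PRE_REQ_READ;
        return NORMAL_READ;     /* otherwise keep the normal READ */
    }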

Signed-off-by: Daejun Park <[email protected]>
---
Documentation/ABI/testing/sysfs-driver-ufs | 35 +-
drivers/scsi/ufs/ufs-sysfs.c | 2 +
drivers/scsi/ufs/ufs.h | 3 +-
drivers/scsi/ufs/ufshcd.c | 18 +-
drivers/scsi/ufs/ufshcd.h | 7 +
drivers/scsi/ufs/ufshpb.c | 616 +++++++++++++++++++--
drivers/scsi/ufs/ufshpb.h | 67 ++-
7 files changed, 669 insertions(+), 79 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs
index bf5cb8846de1..0017eaf89cbe 100644
--- a/Documentation/ABI/testing/sysfs-driver-ufs
+++ b/Documentation/ABI/testing/sysfs-driver-ufs
@@ -1253,14 +1253,14 @@ Description: This entry shows the number of HPB pinned regions assigned to

The file is read only.

-What: /sys/class/scsi_device/*/device/hpb_sysfs/hit_cnt
+What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/hit_cnt
Date: February 2021
Contact: Daejun Park <[email protected]>
Description: This entry shows the number of reads that changed to HPB read.

The file is read only.

-What: /sys/class/scsi_device/*/device/hpb_sysfs/miss_cnt
+What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/miss_cnt
Date: February 2021
Contact: Daejun Park <[email protected]>
Description: This entry shows the number of reads that cannot be changed to
@@ -1268,7 +1268,7 @@ Description: This entry shows the number of reads that cannot be changed to

The file is read only.

-What: /sys/class/scsi_device/*/device/hpb_sysfs/rb_noti_cnt
+What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/rb_noti_cnt
Date: February 2021
Contact: Daejun Park <[email protected]>
Description: This entry shows the number of response UPIUs that has
@@ -1276,7 +1276,7 @@ Description: This entry shows the number of response UPIUs that has

The file is read only.

-What: /sys/class/scsi_device/*/device/hpb_sysfs/rb_active_cnt
+What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/rb_active_cnt
Date: February 2021
Contact: Daejun Park <[email protected]>
Description: This entry shows the number of active sub-regions recommended by
@@ -1284,7 +1284,7 @@ Description: This entry shows the number of active sub-regions recommended by

The file is read only.

-What: /sys/class/scsi_device/*/device/hpb_sysfs/rb_inactive_cnt
+What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/rb_inactive_cnt
Date: February 2021
Contact: Daejun Park <[email protected]>
Description: This entry shows the number of inactive regions recommended by
@@ -1292,10 +1292,33 @@ Description: This entry shows the number of inactive regions recommended by

The file is read only.

-What: /sys/class/scsi_device/*/device/hpb_sysfs/map_req_cnt
+What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/map_req_cnt
Date: February 2021
Contact: Daejun Park <[email protected]>
Description: This entry shows the number of read buffer commands for
activating sub-regions recommended by response UPIUs.

The file is read only.
+
+What: /sys/class/scsi_device/*/device/hpb_param_sysfs/requeue_timeout_ms
+Date: February 2021
+Contact: Daejun Park <[email protected]>
+Description: This entry shows the requeue timeout threshold for write buffer
+ command in ms. This value can be changed by writing proper integer to
+ this entry.
+
+What: /sys/bus/platform/drivers/ufshcd/*/attributes/max_data_size_hpb_single_cmd
+Date: February 2021
+Contact: Daejun Park <[email protected]>
+Description: This entry shows the maximum HPB data size for using single HPB
+ command.
+
+ === ========
+ 00h 4KB
+ 01h 8KB
+ 02h 12KB
+ ...
+ FFh 1024KB
+ === ========
+
+ The file is read only.
diff --git a/drivers/scsi/ufs/ufs-sysfs.c b/drivers/scsi/ufs/ufs-sysfs.c
index 2546e7a1ac4f..00fb519406cf 100644
--- a/drivers/scsi/ufs/ufs-sysfs.c
+++ b/drivers/scsi/ufs/ufs-sysfs.c
@@ -841,6 +841,7 @@ out: \
static DEVICE_ATTR_RO(_name)

UFS_ATTRIBUTE(boot_lun_enabled, _BOOT_LU_EN);
+UFS_ATTRIBUTE(max_data_size_hpb_single_cmd, _MAX_HPB_SINGLE_CMD);
UFS_ATTRIBUTE(current_power_mode, _POWER_MODE);
UFS_ATTRIBUTE(active_icc_level, _ACTIVE_ICC_LVL);
UFS_ATTRIBUTE(ooo_data_enabled, _OOO_DATA_EN);
@@ -864,6 +865,7 @@ UFS_ATTRIBUTE(wb_cur_buf, _CURR_WB_BUFF_SIZE);

static struct attribute *ufs_sysfs_attributes[] = {
&dev_attr_boot_lun_enabled.attr,
+ &dev_attr_max_data_size_hpb_single_cmd.attr,
&dev_attr_current_power_mode.attr,
&dev_attr_active_icc_level.attr,
&dev_attr_ooo_data_enabled.attr,
diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
index 957763db1006..e0b748777a1b 100644
--- a/drivers/scsi/ufs/ufs.h
+++ b/drivers/scsi/ufs/ufs.h
@@ -123,12 +123,13 @@ enum flag_idn {
QUERY_FLAG_IDN_WB_BUFF_FLUSH_EN = 0x0F,
QUERY_FLAG_IDN_WB_BUFF_FLUSH_DURING_HIBERN8 = 0x10,
QUERY_FLAG_IDN_HPB_RESET = 0x11,
+ QUERY_FLAG_IDN_HPB_EN = 0x12,
};

/* Attribute idn for Query requests */
enum attr_idn {
QUERY_ATTR_IDN_BOOT_LU_EN = 0x00,
- QUERY_ATTR_IDN_RESERVED = 0x01,
+ QUERY_ATTR_IDN_MAX_HPB_SINGLE_CMD = 0x01,
QUERY_ATTR_IDN_POWER_MODE = 0x02,
QUERY_ATTR_IDN_ACTIVE_ICC_LVL = 0x03,
QUERY_ATTR_IDN_OOO_DATA_EN = 0x04,
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 851c01a26207..a04fb9072f69 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -2656,7 +2656,12 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)

lrbp->req_abort_skip = false;

- ufshpb_prep(hba, lrbp);
+ err = ufshpb_prep(hba, lrbp);
+ if (err == -EAGAIN) {
+ lrbp->cmd = NULL;
+ ufshcd_release(hba);
+ goto out;
+ }

ufshcd_comp_scsi_upiu(hba, lrbp);

@@ -3110,7 +3115,7 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode,
*
* Returns 0 for success, non-zero in case of failure
*/
-static int ufshcd_query_attr_retry(struct ufs_hba *hba,
+int ufshcd_query_attr_retry(struct ufs_hba *hba,
enum query_opcode opcode, enum attr_idn idn, u8 index, u8 selector,
u32 *attr_val)
{
@@ -7447,8 +7452,14 @@ static int ufs_get_device_desc(struct ufs_hba *hba)

if (dev_info->wspecversion >= UFS_DEV_HPB_SUPPORT_VERSION &&
(b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT)) {
- dev_info->hpb_enabled = true;
+ bool hpb_en = false;
+
ufshpb_get_dev_info(hba, desc_buf);
+
+ err = ufshcd_query_flag_retry(hba, UPIU_QUERY_OPCODE_READ_FLAG,
+ QUERY_FLAG_IDN_HPB_EN, 0, &hpb_en);
+ if (ufshpb_is_legacy(hba) || (!err && hpb_en))
+ dev_info->hpb_enabled = true;
}

err = ufshcd_read_string_desc(hba, model_index,
@@ -8019,6 +8030,7 @@ static const struct attribute_group *ufshcd_driver_groups[] = {
&ufs_sysfs_lun_attributes_group,
#ifdef CONFIG_SCSI_UFS_HPB
&ufs_sysfs_hpb_stat_group,
+ &ufs_sysfs_hpb_param_group,
#endif
NULL,
};
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index 961fc5b77943..7d85517464ef 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -654,6 +654,8 @@ struct ufs_hba_variant_params {
* @srgn_size: device reported HPB sub-region size
* @slave_conf_cnt: counter to check all lu finished initialization
* @hpb_disabled: flag to check if HPB is disabled
+ * @max_hpb_single_cmd: maximum size of single HPB command
+ * @is_legacy: flag to check HPB 1.0
*/
struct ufshpb_dev_info {
int num_lu;
@@ -661,6 +663,8 @@ struct ufshpb_dev_info {
int srgn_size;
atomic_t slave_conf_cnt;
bool hpb_disabled;
+ int max_hpb_single_cmd;
+ bool is_legacy;
};
#endif

@@ -1091,6 +1095,9 @@ int ufshcd_read_desc_param(struct ufs_hba *hba,
u8 param_offset,
u8 *param_read_buf,
u8 param_size);
+int ufshcd_query_attr_retry(struct ufs_hba *hba, enum query_opcode opcode,
+ enum attr_idn idn, u8 index, u8 selector,
+ u32 *attr_val);
int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode,
enum attr_idn idn, u8 index, u8 selector, u32 *attr_val);
int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode,
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 1a4a9c548ef3..ddcf6789b3ed 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -31,6 +31,11 @@ bool ufshpb_is_allowed(struct ufs_hba *hba)
return !(hba->ufshpb_dev.hpb_disabled);
}

+bool ufshpb_is_legacy(struct ufs_hba *hba)
+{
+ return hba->ufshpb_dev.is_legacy;
+}
+
static struct ufshpb_lu *ufshpb_get_hpb_data(struct scsi_device *sdev)
{
return sdev->hostdata;
@@ -64,9 +69,19 @@ static bool ufshpb_is_write_or_discard_cmd(struct scsi_cmnd *cmd)
op_is_discard(req_op(cmd->request));
}

-static bool ufshpb_is_support_chunk(int transfer_len)
+static bool ufshpb_is_support_chunk(struct ufshpb_lu *hpb, int transfer_len)
{
- return transfer_len <= HPB_MULTI_CHUNK_HIGH;
+ return transfer_len <= hpb->pre_req_max_tr_len;
+}
+
+/*
+ * In this driver, the WRITE BUFFER command supports 36KB (len=9) to 512KB
+ * (len=128) by default. The transfer_len range can be changed through sysfs.
+ */
+static inline bool ufshpb_is_required_wb(struct ufshpb_lu *hpb, int len)
+{
+ return (len >= hpb->pre_req_min_tr_len &&
+ len <= hpb->pre_req_max_tr_len);
}

static bool ufshpb_is_general_lun(int lun)
@@ -74,8 +89,7 @@ static bool ufshpb_is_general_lun(int lun)
return lun < UFS_UPIU_MAX_UNIT_NUM_ID;
}

-static bool
-ufshpb_is_pinned_region(struct ufshpb_lu *hpb, int rgn_idx)
+static bool ufshpb_is_pinned_region(struct ufshpb_lu *hpb, int rgn_idx)
{
if (hpb->lu_pinned_end != PINNED_NOT_SET &&
rgn_idx >= hpb->lu_pinned_start &&
@@ -264,7 +278,8 @@ ufshpb_get_pos_from_lpn(struct ufshpb_lu *hpb, unsigned long lpn, int *rgn_idx,

static void
ufshpb_set_hpb_read_to_upiu(struct ufshpb_lu *hpb, struct ufshcd_lrb *lrbp,
- u32 lpn, u64 ppn, unsigned int transfer_len)
+ u32 lpn, u64 ppn, unsigned int transfer_len,
+ int read_id)
{
unsigned char *cdb = lrbp->cmd->cmnd;

@@ -273,15 +288,269 @@ ufshpb_set_hpb_read_to_upiu(struct ufshpb_lu *hpb, struct ufshcd_lrb *lrbp,
/* ppn value is stored as big-endian in the host memory */
memcpy(&cdb[6], &ppn, sizeof(u64));
cdb[14] = transfer_len;
+ cdb[15] = read_id;

lrbp->cmd->cmd_len = UFS_CDB_SIZE;
}

+static inline void ufshpb_set_write_buf_cmd(unsigned char *cdb,
+ unsigned long lpn, unsigned int len,
+ int read_id)
+{
+ cdb[0] = UFSHPB_WRITE_BUFFER;
+ cdb[1] = UFSHPB_WRITE_BUFFER_PREFETCH_ID;
+
+ put_unaligned_be32(lpn, &cdb[2]);
+ cdb[6] = read_id;
+ put_unaligned_be16(len * HPB_ENTRY_SIZE, &cdb[7]);
+
+ cdb[9] = 0x00; /* Control = 0x00 */
+}
+
+static struct ufshpb_req *ufshpb_get_pre_req(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_req *pre_req;
+
+ if (hpb->num_inflight_pre_req >= hpb->throttle_pre_req) {
+ dev_info(&hpb->sdev_ufs_lu->sdev_dev,
+ "pre_req throttle. inflight %d throttle %d",
+ hpb->num_inflight_pre_req, hpb->throttle_pre_req);
+ return NULL;
+ }
+
+ pre_req = list_first_entry_or_null(&hpb->lh_pre_req_free,
+ struct ufshpb_req, list_req);
+ if (!pre_req) {
+ dev_info(&hpb->sdev_ufs_lu->sdev_dev, "There is no pre_req");
+ return NULL;
+ }
+
+ list_del_init(&pre_req->list_req);
+ hpb->num_inflight_pre_req++;
+
+ return pre_req;
+}
+
+static inline void ufshpb_put_pre_req(struct ufshpb_lu *hpb,
+ struct ufshpb_req *pre_req)
+{
+ pre_req->req = NULL;
+ pre_req->bio = NULL;
+ list_add_tail(&pre_req->list_req, &hpb->lh_pre_req_free);
+ hpb->num_inflight_pre_req--;
+}
+
+static void ufshpb_pre_req_compl_fn(struct request *req, blk_status_t error)
+{
+ struct ufshpb_req *pre_req = (struct ufshpb_req *)req->end_io_data;
+ struct ufshpb_lu *hpb = pre_req->hpb;
+ unsigned long flags;
+ struct scsi_sense_hdr sshdr;
+
+ if (error) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev, "block status %d", error);
+ scsi_normalize_sense(pre_req->sense, SCSI_SENSE_BUFFERSIZE,
+ &sshdr);
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "code %x sense_key %x asc %x ascq %x",
+ sshdr.response_code,
+ sshdr.sense_key, sshdr.asc, sshdr.ascq);
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "byte4 %x byte5 %x byte6 %x additional_len %x",
+ sshdr.byte4, sshdr.byte5,
+ sshdr.byte6, sshdr.additional_length);
+ }
+
+ bio_put(pre_req->bio);
+ blk_mq_free_request(req);
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+ ufshpb_put_pre_req(pre_req->hpb, pre_req);
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+}
+
+static int ufshpb_prep_entry(struct ufshpb_req *pre_req, struct page *page)
+{
+ struct ufshpb_lu *hpb = pre_req->hpb;
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+ u64 *addr;
+ int offset = 0;
+ int copied;
+ unsigned long lpn = pre_req->wb.lpn;
+ int rgn_idx, srgn_idx, srgn_offset;
+ unsigned long flags;
+
+ addr = page_address(page);
+ ufshpb_get_pos_from_lpn(hpb, lpn, &rgn_idx, &srgn_idx, &srgn_offset);
+
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+
+next_offset:
+ rgn = hpb->rgn_tbl + rgn_idx;
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ if (!ufshpb_is_valid_srgn(rgn, srgn))
+ goto mctx_error;
+
+ if (!srgn->mctx)
+ goto mctx_error;
+
+ copied = ufshpb_fill_ppn_from_page(hpb, srgn->mctx, srgn_offset,
+ pre_req->wb.len - offset,
+ &addr[offset]);
+
+ if (copied < 0)
+ goto mctx_error;
+
+ offset += copied;
+ srgn_offset += offset;
+
+ if (srgn_offset == hpb->entries_per_srgn) {
+ srgn_offset = 0;
+
+ if (++srgn_idx == hpb->srgns_per_rgn) {
+ srgn_idx = 0;
+ rgn_idx++;
+ }
+ }
+
+ if (offset < pre_req->wb.len)
+ goto next_offset;
+
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+ return 0;
+mctx_error:
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+ return -ENOMEM;
+}
+
+static int ufshpb_pre_req_add_bio_page(struct ufshpb_lu *hpb,
+ struct request_queue *q,
+ struct ufshpb_req *pre_req)
+{
+ struct page *page = pre_req->wb.m_page;
+ struct bio *bio = pre_req->bio;
+ int entries_bytes, ret;
+
+ if (!page)
+ return -ENOMEM;
+
+ if (ufshpb_prep_entry(pre_req, page))
+ return -ENOMEM;
+
+ entries_bytes = pre_req->wb.len * sizeof(u64);
+
+ ret = bio_add_pc_page(q, bio, page, entries_bytes, 0);
+ if (ret != entries_bytes) {
+ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+ "bio_add_pc_page fail: %d", ret);
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+static inline int ufshpb_get_read_id(struct ufshpb_lu *hpb)
+{
+ if (++hpb->cur_read_id >= MAX_HPB_READ_ID)
+ hpb->cur_read_id = 0;
+ return hpb->cur_read_id;
+}
+
+static int ufshpb_execute_pre_req(struct ufshpb_lu *hpb, struct scsi_cmnd *cmd,
+ struct ufshpb_req *pre_req, int read_id)
+{
+ struct scsi_device *sdev = cmd->device;
+ struct request_queue *q = sdev->request_queue;
+ struct request *req;
+ struct scsi_request *rq;
+ struct bio *bio = pre_req->bio;
+
+ pre_req->hpb = hpb;
+ pre_req->wb.lpn = sectors_to_logical(cmd->device,
+ blk_rq_pos(cmd->request));
+ pre_req->wb.len = sectors_to_logical(cmd->device,
+ blk_rq_sectors(cmd->request));
+ if (ufshpb_pre_req_add_bio_page(hpb, q, pre_req))
+ return -ENOMEM;
+
+ req = pre_req->req;
+
+ /* 1. request setup */
+ blk_rq_append_bio(req, &bio);
+ req->rq_disk = NULL;
+ req->end_io_data = (void *)pre_req;
+ req->end_io = ufshpb_pre_req_compl_fn;
+
+ /* 2. scsi_request setup */
+ rq = scsi_req(req);
+ rq->retries = 1;
+
+ ufshpb_set_write_buf_cmd(rq->cmd, pre_req->wb.lpn, pre_req->wb.len,
+ read_id);
+ rq->cmd_len = scsi_command_size(rq->cmd);
+
+ if (blk_insert_cloned_request(q, req) != BLK_STS_OK)
+ return -EAGAIN;
+
+ hpb->stats.pre_req_cnt++;
+
+ return 0;
+}
+
+static int ufshpb_issue_pre_req(struct ufshpb_lu *hpb, struct scsi_cmnd *cmd,
+ int *read_id)
+{
+ struct ufshpb_req *pre_req;
+ struct request *req = NULL;
+ struct bio *bio = NULL;
+ unsigned long flags;
+ int _read_id;
+ int ret = 0;
+
+ req = blk_get_request(cmd->device->request_queue,
+ REQ_OP_SCSI_OUT | REQ_SYNC, BLK_MQ_REQ_NOWAIT);
+ if (IS_ERR(req))
+ return -EAGAIN;
+
+ bio = bio_alloc(GFP_ATOMIC, 1);
+ if (!bio) {
+ blk_put_request(req);
+ return -EAGAIN;
+ }
+
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+ pre_req = ufshpb_get_pre_req(hpb);
+ if (!pre_req) {
+ ret = -EAGAIN;
+ goto unlock_out;
+ }
+ _read_id = ufshpb_get_read_id(hpb);
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+ pre_req->req = req;
+ pre_req->bio = bio;
+
+ ret = ufshpb_execute_pre_req(hpb, cmd, pre_req, _read_id);
+ if (ret)
+ goto free_pre_req;
+
+ *read_id = _read_id;
+
+ return ret;
+free_pre_req:
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+ ufshpb_put_pre_req(hpb, pre_req);
+unlock_out:
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+ bio_put(bio);
+ blk_put_request(req);
+ return ret;
+}
+
/*
* This function will set up HPB read command using host-side L2P map data.
- * In HPB v1.0, maximum size of HPB read command is 4KB.
*/
-void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
{
struct ufshpb_lu *hpb;
struct ufshpb_region *rgn;
@@ -291,26 +560,27 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
u64 ppn;
unsigned long flags;
int transfer_len, rgn_idx, srgn_idx, srgn_offset;
+ int read_id = 0;
int err = 0;

hpb = ufshpb_get_hpb_data(cmd->device);
if (!hpb)
- return;
+ return -ENODEV;

if (ufshpb_get_state(hpb) != HPB_PRESENT) {
dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
"%s: ufshpb state is not PRESENT", __func__);
- return;
+ return -ENODEV;
}

if (!ufshpb_is_write_or_discard_cmd(cmd) &&
!ufshpb_is_read_cmd(cmd))
- return;
+ return 0;

transfer_len = sectors_to_logical(cmd->device,
blk_rq_sectors(cmd->request));
if (unlikely(!transfer_len))
- return;
+ return 0;

lpn = sectors_to_logical(cmd->device, blk_rq_pos(cmd->request));
ufshpb_get_pos_from_lpn(hpb, lpn, &rgn_idx, &srgn_idx, &srgn_offset);
@@ -323,18 +593,18 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
transfer_len);
spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
- return;
+ return 0;
}

- if (!ufshpb_is_support_chunk(transfer_len))
- return;
+ if (!ufshpb_is_support_chunk(hpb, transfer_len))
+ return 0;

spin_lock_irqsave(&hpb->rgn_state_lock, flags);
if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
transfer_len)) {
hpb->stats.miss_cnt++;
spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
- return;
+ return 0;
}

err = ufshpb_fill_ppn_from_page(hpb, srgn->mctx, srgn_offset, 1, &ppn);
@@ -347,28 +617,46 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
* active state.
*/
dev_err(hba->dev, "get ppn failed. err %d\n", err);
- return;
+ return err;
+ }
+
+ if (!ufshpb_is_legacy(hba) &&
+ ufshpb_is_required_wb(hpb, transfer_len)) {
+ err = ufshpb_issue_pre_req(hpb, cmd, &read_id);
+ if (err) {
+ unsigned long timeout;
+
+ timeout = cmd->jiffies_at_alloc + msecs_to_jiffies(
+ hpb->params.requeue_timeout_ms);
+
+ if (time_before(jiffies, timeout))
+ return -EAGAIN;
+
+ hpb->stats.miss_cnt++;
+ return 0;
+ }
}

- ufshpb_set_hpb_read_to_upiu(hpb, lrbp, lpn, ppn, transfer_len);
+ ufshpb_set_hpb_read_to_upiu(hpb, lrbp, lpn, ppn, transfer_len, read_id);

hpb->stats.hit_cnt++;
+ return 0;
}
-static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
- struct ufshpb_subregion *srgn)
+
+static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb,
+ int rgn_idx, enum req_opf dir)
{
- struct ufshpb_req *map_req;
+ struct ufshpb_req *rq;
struct request *req;
- struct bio *bio;
int retries = HPB_MAP_REQ_RETRIES;

- map_req = kmem_cache_alloc(hpb->map_req_cache, GFP_KERNEL);
- if (!map_req)
+ rq = kmem_cache_alloc(hpb->map_req_cache, GFP_KERNEL);
+ if (!rq)
return NULL;

retry:
- req = blk_get_request(hpb->sdev_ufs_lu->request_queue,
- REQ_OP_SCSI_IN, BLK_MQ_REQ_NOWAIT);
+ req = blk_get_request(hpb->sdev_ufs_lu->request_queue, dir,
+ BLK_MQ_REQ_NOWAIT);

if ((PTR_ERR(req) == -EWOULDBLOCK) && (--retries > 0)) {
usleep_range(3000, 3100);
@@ -376,35 +664,54 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
}

if (IS_ERR(req))
- goto free_map_req;
+ goto free_rq;
+
+ rq->hpb = hpb;
+ rq->req = req;
+ rq->rb.rgn_idx = rgn_idx;
+
+ return rq;
+
+free_rq:
+ kmem_cache_free(hpb->map_req_cache, rq);
+ return NULL;
+}
+
+static void ufshpb_put_req(struct ufshpb_lu *hpb, struct ufshpb_req *rq)
+{
+ blk_put_request(rq->req);
+ kmem_cache_free(hpb->map_req_cache, rq);
+}
+
+static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
+ struct ufshpb_subregion *srgn)
+{
+ struct ufshpb_req *map_req;
+ struct bio *bio;
+
+ map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_SCSI_IN);
+ if (!map_req)
+ return NULL;

bio = bio_alloc(GFP_KERNEL, hpb->pages_per_srgn);
if (!bio) {
- blk_put_request(req);
- goto free_map_req;
+ ufshpb_put_req(hpb, map_req);
+ return NULL;
}

- map_req->hpb = hpb;
- map_req->req = req;
map_req->bio = bio;

- map_req->rgn_idx = srgn->rgn_idx;
- map_req->srgn_idx = srgn->srgn_idx;
- map_req->mctx = srgn->mctx;
+ map_req->rb.srgn_idx = srgn->srgn_idx;
+ map_req->rb.mctx = srgn->mctx;

return map_req;
-
-free_map_req:
- kmem_cache_free(hpb->map_req_cache, map_req);
- return NULL;
}

static void ufshpb_put_map_req(struct ufshpb_lu *hpb,
struct ufshpb_req *map_req)
{
bio_put(map_req->bio);
- blk_put_request(map_req->req);
- kmem_cache_free(hpb->map_req_cache, map_req);
+ ufshpb_put_req(hpb, map_req);
}

static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
@@ -487,6 +794,13 @@ static void ufshpb_activate_subregion(struct ufshpb_lu *hpb,
srgn->srgn_state = HPB_SRGN_VALID;
}

+static void ufshpb_umap_req_compl_fn(struct request *req, blk_status_t error)
+{
+ struct ufshpb_req *umap_req = (struct ufshpb_req *)req->end_io_data;
+
+ ufshpb_put_req(umap_req->hpb, umap_req);
+}
+
static void ufshpb_map_req_compl_fn(struct request *req, blk_status_t error)
{
struct ufshpb_req *map_req = (struct ufshpb_req *) req->end_io_data;
@@ -494,8 +808,8 @@ static void ufshpb_map_req_compl_fn(struct request *req, blk_status_t error)
struct ufshpb_subregion *srgn;
unsigned long flags;

- srgn = hpb->rgn_tbl[map_req->rgn_idx].srgn_tbl +
- map_req->srgn_idx;
+ srgn = hpb->rgn_tbl[map_req->rb.rgn_idx].srgn_tbl +
+ map_req->rb.srgn_idx;

ufshpb_clear_dirty_bitmap(hpb, srgn);
spin_lock_irqsave(&hpb->rgn_state_lock, flags);
@@ -505,6 +819,16 @@ static void ufshpb_map_req_compl_fn(struct request *req, blk_status_t error)
ufshpb_put_map_req(map_req->hpb, map_req);
}

+static void ufshpb_set_unmap_cmd(unsigned char *cdb, int rgn_idx, bool single)
+{
+ cdb[0] = UFSHPB_WRITE_BUFFER;
+ cdb[1] = single ? UFSHPB_WRITE_BUFFER_INACT_SINGLE_ID :
+ UFSHPB_WRITE_BUFFER_INACT_ALL_ID;
+ if (single)
+ put_unaligned_be16(rgn_idx, &cdb[2]);
+ cdb[9] = 0x00;
+}
+
static void ufshpb_set_read_buf_cmd(unsigned char *cdb, int rgn_idx,
int srgn_idx, int srgn_mem_size)
{
@@ -518,6 +842,26 @@ static void ufshpb_set_read_buf_cmd(unsigned char *cdb, int rgn_idx,
cdb[9] = 0x00;
}

+static int ufshpb_execute_umap_all_req(struct ufshpb_lu *hpb,
+ struct ufshpb_req *umap_req)
+{
+ struct request_queue *q;
+ struct request *req;
+ struct scsi_request *rq;
+
+ q = hpb->sdev_ufs_lu->request_queue;
+ req = umap_req->req;
+ req->timeout = 0;
+ req->end_io_data = (void *)umap_req;
+ rq = scsi_req(req);
+ ufshpb_set_unmap_cmd(rq->cmd, 0, false);
+ rq->cmd_len = HPB_WRITE_BUFFER_CMD_LENGTH;
+
+ blk_execute_rq_nowait(q, NULL, req, 1, ufshpb_umap_req_compl_fn);
+
+ return 0;
+}
+
static int ufshpb_execute_map_req(struct ufshpb_lu *hpb,
struct ufshpb_req *map_req, bool last)
{
@@ -530,12 +874,12 @@ static int ufshpb_execute_map_req(struct ufshpb_lu *hpb,

q = hpb->sdev_ufs_lu->request_queue;
for (i = 0; i < hpb->pages_per_srgn; i++) {
- ret = bio_add_pc_page(q, map_req->bio, map_req->mctx->m_page[i],
+ ret = bio_add_pc_page(q, map_req->bio, map_req->rb.mctx->m_page[i],
PAGE_SIZE, 0);
if (ret != PAGE_SIZE) {
dev_err(&hpb->sdev_ufs_lu->sdev_dev,
"bio_add_pc_page fail %d - %d\n",
- map_req->rgn_idx, map_req->srgn_idx);
+ map_req->rb.rgn_idx, map_req->rb.srgn_idx);
return ret;
}
}
@@ -551,8 +895,8 @@ static int ufshpb_execute_map_req(struct ufshpb_lu *hpb,
if (unlikely(last))
mem_size = hpb->last_srgn_entries * HPB_ENTRY_SIZE;

- ufshpb_set_read_buf_cmd(rq->cmd, map_req->rgn_idx,
- map_req->srgn_idx, mem_size);
+ ufshpb_set_read_buf_cmd(rq->cmd, map_req->rb.rgn_idx,
+ map_req->rb.srgn_idx, mem_size);
rq->cmd_len = HPB_READ_BUFFER_CMD_LENGTH;

blk_execute_rq_nowait(q, NULL, req, 1, ufshpb_map_req_compl_fn);
@@ -684,6 +1028,24 @@ static void ufshpb_purge_active_subregion(struct ufshpb_lu *hpb,
}
}

+static int ufshpb_issue_umap_all_req(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_req *umap_req;
+
+ umap_req = ufshpb_get_req(hpb, 0, REQ_OP_SCSI_OUT);
+ if (!umap_req)
+ return -ENOMEM;
+
+ if (ufshpb_execute_umap_all_req(hpb, umap_req))
+ goto free_umap_req;
+
+ return 0;
+
+free_umap_req:
+ ufshpb_put_req(hpb, umap_req);
+ return -EAGAIN;
+}
+
static void __ufshpb_evict_region(struct ufshpb_lu *hpb,
struct ufshpb_region *rgn)
{
@@ -1209,6 +1571,16 @@ static void ufshpb_lu_parameter_init(struct ufs_hba *hba,
u32 entries_per_rgn;
u64 rgn_mem_size, tmp;

+ /* for pre_req */
+ if (hpb_dev_info->max_hpb_single_cmd)
+ hpb->pre_req_min_tr_len = hpb_dev_info->max_hpb_single_cmd;
+ else
+ hpb->pre_req_min_tr_len = HPB_MULTI_CHUNK_LOW;
+ hpb->pre_req_max_tr_len = max(HPB_MULTI_CHUNK_HIGH,
+ hpb->pre_req_min_tr_len);
+
+ hpb->cur_read_id = 0;
+
hpb->lu_pinned_start = hpb_lu_info->pinned_start;
hpb->lu_pinned_end = hpb_lu_info->num_pinned ?
(hpb_lu_info->pinned_start + hpb_lu_info->num_pinned - 1)
@@ -1356,7 +1728,7 @@ ufshpb_sysfs_attr_show_func(rb_active_cnt);
ufshpb_sysfs_attr_show_func(rb_inactive_cnt);
ufshpb_sysfs_attr_show_func(map_req_cnt);

-static struct attribute *hpb_dev_attrs[] = {
+static struct attribute *hpb_dev_stat_attrs[] = {
&dev_attr_hit_cnt.attr,
&dev_attr_miss_cnt.attr,
&dev_attr_rb_noti_cnt.attr,
@@ -1367,10 +1739,109 @@ static struct attribute *hpb_dev_attrs[] = {
};

struct attribute_group ufs_sysfs_hpb_stat_group = {
- .name = "hpb_sysfs",
- .attrs = hpb_dev_attrs,
+ .name = "hpb_stat_sysfs",
+ .attrs = hpb_dev_stat_attrs,
};

+/* SYSFS functions */
+#define ufshpb_sysfs_param_show_func(__name) \
+static ssize_t __name##_show(struct device *dev, \
+ struct device_attribute *attr, char *buf) \
+{ \
+ struct scsi_device *sdev = to_scsi_device(dev); \
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); \
+ if (!hpb) \
+ return -ENODEV; \
+ \
+ return sysfs_emit(buf, "%d\n", hpb->params.__name); \
+}
+
+ufshpb_sysfs_param_show_func(requeue_timeout_ms);
+static ssize_t
+requeue_timeout_ms_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+ int val;
+
+ if (!hpb)
+ return -ENODEV;
+
+ if (kstrtouint(buf, 0, &val))
+ return -EINVAL;
+
+ if (val <= 0)
+ return -EINVAL;
+
+ hpb->params.requeue_timeout_ms = val;
+
+ return count;
+}
+static DEVICE_ATTR_RW(requeue_timeout_ms);
+
+static struct attribute *hpb_dev_param_attrs[] = {
+ &dev_attr_requeue_timeout_ms.attr,
+ NULL,
+};
+
+struct attribute_group ufs_sysfs_hpb_param_group = {
+ .name = "hpb_param_sysfs",
+ .attrs = hpb_dev_param_attrs,
+};
+
+static int ufshpb_pre_req_mempool_init(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_req *pre_req = NULL;
+ int qd = hpb->sdev_ufs_lu->queue_depth / 2;
+ int i, j;
+
+ INIT_LIST_HEAD(&hpb->lh_pre_req_free);
+
+ hpb->pre_req = kcalloc(qd, sizeof(struct ufshpb_req), GFP_KERNEL);
+ hpb->throttle_pre_req = qd;
+ hpb->num_inflight_pre_req = 0;
+
+ if (!hpb->pre_req)
+ goto release_mem;
+
+ for (i = 0; i < qd; i++) {
+ pre_req = hpb->pre_req + i;
+ INIT_LIST_HEAD(&pre_req->list_req);
+ pre_req->req = NULL;
+ pre_req->bio = NULL;
+
+ pre_req->wb.m_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ if (!pre_req->wb.m_page) {
+ for (j = 0; j < i; j++)
+ __free_page(hpb->pre_req[j].wb.m_page);
+
+ goto release_mem;
+ }
+ list_add_tail(&pre_req->list_req, &hpb->lh_pre_req_free);
+ }
+
+ return 0;
+release_mem:
+ kfree(hpb->pre_req);
+ return -ENOMEM;
+}
+
+static void ufshpb_pre_req_mempool_destroy(struct ufshpb_lu *hpb)
+{
+ struct ufshpb_req *pre_req = NULL;
+ int i;
+
+ for (i = 0; i < hpb->throttle_pre_req; i++) {
+ pre_req = hpb->pre_req + i;
+ if (pre_req->wb.m_page)
+ __free_page(hpb->pre_req[i].wb.m_page);
+ list_del_init(&pre_req->list_req);
+ }
+
+ kfree(hpb->pre_req);
+}
+
static void ufshpb_stat_init(struct ufshpb_lu *hpb)
{
hpb->stats.hit_cnt = 0;
@@ -1381,6 +1852,11 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb)
hpb->stats.map_req_cnt = 0;
}

+static void ufshpb_param_init(struct ufshpb_lu *hpb)
+{
+ hpb->params.requeue_timeout_ms = HPB_REQUEUE_TIME_MS;
+}
+
static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
{
int ret;
@@ -1413,14 +1889,24 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
goto release_req_cache;
}

+ ret = ufshpb_pre_req_mempool_init(hpb);
+ if (ret) {
+ dev_err(hba->dev, "ufshpb(%d) pre_req_mempool init fail",
+ hpb->lun);
+ goto release_m_page_cache;
+ }
+
ret = ufshpb_alloc_region_tbl(hba, hpb);
if (ret)
- goto release_m_page_cache;
+ goto release_pre_req_mempool;

ufshpb_stat_init(hpb);
+ ufshpb_param_init(hpb);

return 0;

+release_pre_req_mempool:
+ ufshpb_pre_req_mempool_destroy(hpb);
release_m_page_cache:
kmem_cache_destroy(hpb->m_page_cache);
release_req_cache:
@@ -1429,7 +1915,7 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
}

static struct ufshpb_lu *
-ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
+ufshpb_alloc_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev,
struct ufshpb_dev_info *hpb_dev_info,
struct ufshpb_lu_info *hpb_lu_info)
{
@@ -1440,7 +1926,8 @@ ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
if (!hpb)
return NULL;

- hpb->lun = lun;
+ hpb->lun = sdev->lun;
+ hpb->sdev_ufs_lu = sdev;

ufshpb_lu_parameter_init(hba, hpb, hpb_dev_info, hpb_lu_info);

@@ -1450,6 +1937,7 @@ ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
goto release_hpb;
}

+ sdev->hostdata = hpb;
return hpb;

release_hpb:
@@ -1652,6 +2140,7 @@ void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev)

ufshpb_cancel_jobs(hpb);

+ ufshpb_pre_req_mempool_destroy(hpb);
ufshpb_destroy_region_tbl(hpb);

kmem_cache_destroy(hpb->map_req_cache);
@@ -1691,6 +2180,7 @@ static void ufshpb_hpb_lu_prepared(struct ufs_hba *hba)
ufshpb_set_state(hpb, HPB_PRESENT);
if ((hpb->lu_pinned_end - hpb->lu_pinned_start) > 0)
queue_work(ufshpb_wq, &hpb->map_work);
+ ufshpb_issue_umap_all_req(hpb);
} else {
dev_err(hba->dev, "destroy HPB lu %d\n", hpb->lun);
ufshpb_destroy_lu(hba, sdev);
@@ -1715,7 +2205,7 @@ void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev)
if (ret)
goto out;

- hpb = ufshpb_alloc_hpb_lu(hba, lun, &hba->ufshpb_dev,
+ hpb = ufshpb_alloc_hpb_lu(hba, sdev, &hba->ufshpb_dev,
&hpb_lu_info);
if (!hpb)
goto out;
@@ -1723,9 +2213,6 @@ void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev)
tot_active_srgn_pages += hpb_lu_info.max_active_rgns *
hpb->srgns_per_rgn * hpb->pages_per_srgn;

- hpb->sdev_ufs_lu = sdev;
- sdev->hostdata = hpb;
-
out:
/* All LUs are initialized */
if (atomic_dec_and_test(&hba->ufshpb_dev.slave_conf_cnt))
@@ -1812,8 +2299,9 @@ void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf)
void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf)
{
struct ufshpb_dev_info *hpb_dev_info = &hba->ufshpb_dev;
- int version;
+ int version, ret;
u8 hpb_mode;
+ u32 max_hpb_single_cmd = 0;

hpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL];
if (hpb_mode == HPB_HOST_CONTROL) {
@@ -1824,13 +2312,27 @@ void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf)
}

version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER);
- if (version != HPB_SUPPORT_VERSION) {
+ if ((version != HPB_SUPPORT_VERSION) &&
+ (version != HPB_SUPPORT_LEGACY_VERSION)) {
dev_err(hba->dev, "%s: HPB %x version is not supported.\n",
__func__, version);
hpb_dev_info->hpb_disabled = true;
return;
}

+ if (version == HPB_SUPPORT_LEGACY_VERSION)
+ hpb_dev_info->is_legacy = true;
+
+ pm_runtime_get_sync(hba->dev);
+ ret = ufshcd_query_attr_retry(hba, UPIU_QUERY_OPCODE_READ_ATTR,
+ QUERY_ATTR_IDN_MAX_HPB_SINGLE_CMD, 0, 0, &max_hpb_single_cmd);
+ pm_runtime_put_sync(hba->dev);
+
+ if (ret)
+ dev_err(hba->dev, "%s: failed to query max size of single HPB command",
+ __func__);
+ hpb_dev_info->max_hpb_single_cmd = max_hpb_single_cmd;
+
/*
* Get the number of user logical unit to check whether all
* scsi_device finish initialization
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index c70e73546e35..c0578e49a00b 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -30,19 +30,28 @@
#define PINNED_NOT_SET U32_MAX

/* hpb support chunk size */
-#define HPB_MULTI_CHUNK_HIGH 1
+#define HPB_MULTI_CHUNK_LOW 9
+#define HPB_MULTI_CHUNK_HIGH 128

/* hpb vender defined opcode */
#define UFSHPB_READ 0xF8
#define UFSHPB_READ_BUFFER 0xF9
#define UFSHPB_READ_BUFFER_ID 0x01
+#define UFSHPB_WRITE_BUFFER 0xFA
+#define UFSHPB_WRITE_BUFFER_INACT_SINGLE_ID 0x01
+#define UFSHPB_WRITE_BUFFER_PREFETCH_ID 0x02
+#define UFSHPB_WRITE_BUFFER_INACT_ALL_ID 0x03
+#define HPB_WRITE_BUFFER_CMD_LENGTH 10
+#define MAX_HPB_READ_ID 0x7F
#define HPB_READ_BUFFER_CMD_LENGTH 10
#define LU_ENABLED_HPB_FUNC 0x02

#define HPB_RESET_REQ_RETRIES 10
#define HPB_MAP_REQ_RETRIES 5
+#define HPB_REQUEUE_TIME_MS 3

-#define HPB_SUPPORT_VERSION 0x100
+#define HPB_SUPPORT_VERSION 0x200
+#define HPB_SUPPORT_LEGACY_VERSION 0x100

enum UFSHPB_MODE {
HPB_HOST_CONTROL,
@@ -118,23 +127,39 @@ struct ufshpb_region {
(i)++)

/**
- * struct ufshpb_req - UFSHPB READ BUFFER (for caching map) request structure
- * @req: block layer request for READ BUFFER
- * @bio: bio for holding map page
- * @hpb: ufshpb_lu structure that related to the L2P map
+ * struct ufshpb_req - HPB related request structure (write/read buffer)
+ * @req: block layer request structure
+ * @bio: bio for this request
+ * @hpb: ufshpb_lu structure that this request relates to
+ * @list_req: ufshpb_req mempool list
+ * @sense: sense data for this request
* @mctx: L2P map information
* @rgn_idx: target region index
* @srgn_idx: target sub-region index
* @lun: target logical unit number
+ * @m_page: L2P map information data for pre-request
+ * @len: length of host-side cached L2P map in m_page
+ * @lpn: start LPN of L2P map in m_page
*/
struct ufshpb_req {
struct request *req;
struct bio *bio;
struct ufshpb_lu *hpb;
- struct ufshpb_map_ctx *mctx;
-
- unsigned int rgn_idx;
- unsigned int srgn_idx;
+ struct list_head list_req;
+ char sense[SCSI_SENSE_BUFFERSIZE];
+ union {
+ struct {
+ struct ufshpb_map_ctx *mctx;
+ unsigned int rgn_idx;
+ unsigned int srgn_idx;
+ unsigned int lun;
+ } rb;
+ struct {
+ struct page *m_page;
+ unsigned int len;
+ unsigned long lpn;
+ } wb;
+ };
};

struct victim_select_info {
@@ -143,6 +168,10 @@ struct victim_select_info {
atomic_t active_cnt;
};

+struct ufshpb_params {
+ unsigned int requeue_timeout_ms;
+};
+
struct ufshpb_stats {
u64 hit_cnt;
u64 miss_cnt;
@@ -150,6 +179,7 @@ struct ufshpb_stats {
u64 rb_active_cnt;
u64 rb_inactive_cnt;
u64 map_req_cnt;
+ u64 pre_req_cnt;
};

struct ufshpb_lu {
@@ -165,6 +195,15 @@ struct ufshpb_lu {
struct list_head lh_act_srgn; /* hold rsp_list_lock */
struct list_head lh_inact_rgn; /* hold rsp_list_lock */

+ /* pre request information */
+ struct ufshpb_req *pre_req;
+ int num_inflight_pre_req;
+ int throttle_pre_req;
+ struct list_head lh_pre_req_free;
+ int cur_read_id;
+ int pre_req_min_tr_len;
+ int pre_req_max_tr_len;
+
/* cached L2P map management worker */
struct work_struct map_work;

@@ -189,6 +228,7 @@ struct ufshpb_lu {
u32 pages_per_srgn;

struct ufshpb_stats stats;
+ struct ufshpb_params params;

struct kmem_cache *map_req_cache;
struct kmem_cache *m_page_cache;
@@ -200,7 +240,7 @@ struct ufs_hba;
struct ufshcd_lrb;

#ifndef CONFIG_SCSI_UFS_HPB
-static void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
+static int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) { return 0; }
static void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
static void ufshpb_resume(struct ufs_hba *hba) {}
static void ufshpb_suspend(struct ufs_hba *hba) {}
@@ -213,8 +253,9 @@ static void ufshpb_remove(struct ufs_hba *hba) {}
static bool ufshpb_is_allowed(struct ufs_hba *hba) { return false; }
static void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf) {}
static void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) {}
+static bool ufshpb_is_legacy(struct ufs_hba *hba) { return false; }
#else
-void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
+int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
void ufshpb_resume(struct ufs_hba *hba);
void ufshpb_suspend(struct ufs_hba *hba);
@@ -227,7 +268,9 @@ void ufshpb_remove(struct ufs_hba *hba);
bool ufshpb_is_allowed(struct ufs_hba *hba);
void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf);
void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf);
+bool ufshpb_is_legacy(struct ufs_hba *hba);
extern struct attribute_group ufs_sysfs_hpb_stat_group;
+extern struct attribute_group ufs_sysfs_hpb_param_group;
#endif

#endif /* End of Header */
--
2.25.1

2021-02-24 13:10:06

by Avri Altman

[permalink] [raw]
Subject: RE: [PATCH v24 4/4] scsi: ufs: Add HPB 2.0 support


> if (dev_info->wspecversion >= UFS_DEV_HPB_SUPPORT_VERSION &&
> (b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT)) {
> - dev_info->hpb_enabled = true;
> + bool hpb_en = false;
> +
> ufshpb_get_dev_info(hba, desc_buf);
> +
> + err = ufshcd_query_flag_retry(hba,
> UPIU_QUERY_OPCODE_READ_FLAG,
> + QUERY_FLAG_IDN_HPB_EN, 0, &hpb_en);
> + if (ufshpb_is_legacy(hba) || (!err && hpb_en))
If is_legacy, you shouldn't send fHPBen in the first place, rather than ignoring its failure.
Also, using a boolean limits you to HPB 2.0 vs. HPB 1.0.
What would you do with the new flags/attributes/descriptors that HPB 3.0 will introduce?

> + dev_info->hpb_enabled = true;
> }
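
One way to restructure the check along those lines (a sketch using the same
surrounding variables from the hunk above; not necessarily how the next
version will look):

	if (ufshpb_is_legacy(hba)) {
		dev_info->hpb_enabled = true;
	} else {
		err = ufshcd_query_flag_retry(hba, UPIU_QUERY_OPCODE_READ_FLAG,
					      QUERY_FLAG_IDN_HPB_EN, 0, &hpb_en);
		if (!err && hpb_en)
			dev_info->hpb_enabled = true;
	}

That way fHPBen is only read on devices that actually implement it.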

2021-02-25 06:16:00

by Daejun Park

[permalink] [raw]
Subject: RE: RE: [PATCH v24 4/4] scsi: ufs: Add HPB 2.0 support

> > if (dev_info->wspecversion >= UFS_DEV_HPB_SUPPORT_VERSION &&
> > (b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT)) {
> > - dev_info->hpb_enabled = true;
> > + bool hpb_en = false;
> > +
> > ufshpb_get_dev_info(hba, desc_buf);
> > +
> > + err = ufshcd_query_flag_retry(hba,
> > UPIU_QUERY_OPCODE_READ_FLAG,
> > + QUERY_FLAG_IDN_HPB_EN, 0, &hpb_en);
> > + if (ufshpb_is_legacy(hba) || (!err && hpb_en))
> If is_legacy you shouldn't send fHPBen in the first place, not ignoring its failure.
OK,

> Also, using a Boolean is limiting you to HPB2.0 vs. HPB1.0.
> What would you do in new flags/attributes/descriptors that HPB3.0 will introduce?
It could be managed as an enum, but a boolean is enough for now, I think.
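
A minimal sketch of what such an enum might look like (hypothetical names,
not part of this series):

	enum ufshpb_spec_ver {
		UFSHPB_VER_1_0,		/* JESD220-3, single 4KB HPB READ */
		UFSHPB_VER_2_0,		/* JESD220-3A, HPB READ + WRITE BUFFER */
	};

ufshpb_is_legacy() would then compare against UFSHPB_VER_1_0, and HPB 3.0
would only add an enumerator plus its version-specific handling.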

> > + dev_info->hpb_enabled = true;
> > }

Thanks,
Daejun

2021-02-25 15:04:00

by Bean Huo

[permalink] [raw]
Subject: Re: [PATCH v24 2/4] scsi: ufs: L2P map management for HPB read

On Wed, 2021-02-24 at 13:54 +0900, Daejun Park wrote:
> +static int ufshpb_init_mem_wq(void)
> +{
> + int ret;
> + unsigned int pool_size;
> +
> + ufshpb_mctx_cache = kmem_cache_create("ufshpb_mctx_cache",
> + sizeof(struct
> ufshpb_map_ctx),
> + 0, 0, NULL);
> + if (!ufshpb_mctx_cache) {
> + pr_err("ufshpb: cannot init mctx cache\n");
> + return -ENOMEM;
> + }
> +
> + pool_size = PAGE_ALIGN(ufshpb_host_map_kbytes * 1024) /
> PAGE_SIZE;
> + pr_info("%s:%d ufshpb_host_map_kbytes %u pool_size %u\n",
> + __func__, __LINE__, ufshpb_host_map_kbytes,
> pool_size);
> +

I think printing the function name is not appropriate during boot.
Also, one HPB instance is associated with one HBA; if there are two UFS
controllers, how can I differentiate between them?

Bean

2021-02-26 04:57:05

by Daejun Park

[permalink] [raw]
Subject: RE: Re: [PATCH v24 2/4] scsi: ufs: L2P map management for HPB read

> > +static int ufshpb_init_mem_wq(void)
> > +{
> > + int ret;
> > + unsigned int pool_size;
> > +
> > + ufshpb_mctx_cache = kmem_cache_create("ufshpb_mctx_cache",
> > + sizeof(struct
> > ufshpb_map_ctx),
> > + 0, 0, NULL);
> > + if (!ufshpb_mctx_cache) {
> > + pr_err("ufshpb: cannot init mctx cache\n");
> > + return -ENOMEM;
> > + }
> > +
> > + pool_size = PAGE_ALIGN(ufshpb_host_map_kbytes * 1024) /
> > PAGE_SIZE;
> > + pr_info("%s:%d ufshpb_host_map_kbytes %u pool_size %u\n",
> > + __func__, __LINE__, ufshpb_host_map_kbytes,
> > pool_size);
> > +
>
> I think printing the function name is not appropriate during boot.
> Also, one HPB instance is associated with one HBA; if there are two UFS
> controllers, how can I differentiate between them?

I will use dev_info instead of pr_info.
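
For example, something like this (a sketch; it assumes the message is
printed where the hba pointer is available, since ufshpb_init_mem_wq() is
module-wide today):

	dev_info(hba->dev, "ufshpb_host_map_kbytes %u pool_size %u\n",
		 ufshpb_host_map_kbytes, pool_size);

dev_info() prefixes the message with the device name, so two UFS
controllers can be told apart.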

Thanks,
Daejun

> Bean