2021-01-28 00:02:25

by Avri Altman

Subject: [PATCH 0/8] Add Host control mode to HPB

The HPB spec defines 2 control modes - device control mode and host
control mode. As opposed to device control mode, in which the host obeys
whatever recommendations it receives from the device, in host control
mode the host uses its own algorithms to decide which regions should
be activated or inactivated.

We kept the host-managed heuristic simple and concise.

Aside from adding by-spec functionality, host control mode entails
some further potential benefits: it makes the hpb logic transparent and
readable, allows tuning / scaling of its various parameters, and
utilizes system-wide info to optimize the HPB potential.

This series is based on Samsung's V18 device-control HPB1.0 driver, see
msg-id: [email protected]
in lore.kernel.org. The patches are also available in wdc ufs repo:
https://github.com/westerndigitalcorporation/WDC-UFS-REPO/tree/hpb-v18

This version was tested on Galaxy S20 and Xiaomi Mi10 Pro.
Your meticulous review and testing are most welcome and appreciated.

Thanks,
Avri

Avri Altman (8):
scsi: ufshpb: Cache HPB Control mode on init
scsi: ufshpb: Add host control mode support to rsp_upiu
scsi: ufshpb: Add region's reads counter
scsi: ufshpb: Make eviction depends on region's reads
scsi: ufshpb: Region inactivation in host mode
scsi: ufshpb: Add hpb dev reset response
scsi: ufshpb: Add "Cold" regions timer
scsi: ufshpb: Add support for host control mode

drivers/scsi/ufs/ufshpb.c | 430 +++++++++++++++++++++++++++++++++-----
drivers/scsi/ufs/ufshpb.h | 23 ++
2 files changed, 406 insertions(+), 47 deletions(-)

--
2.25.1


2021-01-28 00:02:27

by Avri Altman

Subject: [PATCH 2/8] scsi: ufshpb: Add host control mode support to rsp_upiu

Activations / inactivations recommended by the device are subject to
some limitations in host control mode - add those checks.
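The gist of these limitations can be sketched in plain C (a simplified, hypothetical model; the names and types are illustrative, not the driver's actual symbols):

```c
#include <stdbool.h>

enum rgn_state { RGN_INACTIVE, RGN_ACTIVE };

struct rgn {
	enum rgn_state state;
	bool dirty;
};

/*
 * Should the host accept a device recommendation to activate a
 * subregion of @rgn?  In host control mode only active, clean regions
 * qualify; dirty regions are left for the host's own heuristic.
 */
static bool accept_activation_rec(const struct rgn *rgn, bool host_mode)
{
	if (!host_mode)
		return true;	/* device control mode: always obey */
	if (rgn->state != RGN_ACTIVE)
		return false;	/* only already-active regions qualify */
	if (rgn->dirty)
		return false;	/* host decides about dirty regions */
	return true;
}
```

In device control mode nothing changes - the host keeps obeying the device; the extra filtering (and the blanket refusal of device-initiated inactivations) only applies in host control mode.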

Signed-off-by: Avri Altman <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 35 +++++++++++++++++++++++++++++++++++
drivers/scsi/ufs/ufshpb.h | 6 ++++++
2 files changed, 41 insertions(+)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 183bdf35f2d0..5fa1f5bc08e6 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -140,6 +140,8 @@ static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
else
set_bit_len = cnt;

+ set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+
if (rgn->rgn_state != HPB_RGN_INACTIVE &&
srgn->srgn_state == HPB_SRGN_VALID)
bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len);
@@ -201,6 +203,11 @@ static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
return false;
}

+static inline bool is_rgn_dirty(struct ufshpb_region *rgn)
+{
+ return test_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+}
+
static u64 ufshpb_get_ppn(struct ufshpb_lu *hpb,
struct ufshpb_map_ctx *mctx, int pos, int *error)
{
@@ -380,8 +387,12 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb,
static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
struct ufshpb_subregion *srgn)
{
+ struct ufshpb_region *rgn;
+
WARN_ON(!srgn->mctx);
bitmap_zero(srgn->mctx->ppn_dirty, hpb->entries_per_srgn);
+ rgn = hpb->rgn_tbl + srgn->rgn_idx;
+ clear_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
return 0;
}

@@ -814,17 +825,39 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
*/
spin_lock(&hpb->rsp_list_lock);
for (i = 0; i < rsp_field->active_rgn_cnt; i++) {
+ struct ufshpb_region *rgn;
+
rgn_idx =
be16_to_cpu(rsp_field->hpb_active_field[i].active_rgn);
srgn_idx =
be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn);

+ rgn = hpb->rgn_tbl + rgn_idx;
+ if (ufshpb_mode == HPB_HOST_CONTROL &&
+ (rgn->rgn_state != HPB_RGN_ACTIVE || is_rgn_dirty(rgn))) {
+ /*
+ * in host control mode, subregion activation
+ * recommendations are only allowed for active regions.
+ * Also, ignore recommendations for dirty regions - the
+ * host will make decisions concerning those on its own
+ */
+ continue;
+ }
+
dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
"activate(%d) region %d - %d\n", i, rgn_idx, srgn_idx);
ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
hpb->stats.rb_active_cnt++;
}

+ if (ufshpb_mode == HPB_HOST_CONTROL) {
+ /*
+ * in host control mode the device is not allowed to inactivate
+ * regions
+ */
+ goto out_unlock;
+ }
+
for (i = 0; i < rsp_field->inactive_rgn_cnt; i++) {
rgn_idx = be16_to_cpu(rsp_field->hpb_inactive_field[i]);
dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
@@ -832,6 +865,8 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
ufshpb_update_inactive_info(hpb, rgn_idx);
hpb->stats.rb_inactive_cnt++;
}
+
+out_unlock:
spin_unlock(&hpb->rsp_list_lock);

dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "Noti: #ACT %u #INACT %u\n",
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 2c43a03b66b6..8a34b0f42754 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -48,6 +48,11 @@ enum UFSHPB_MODE {
HPB_DEVICE_CONTROL,
};

+enum HPB_RGN_FLAGS {
+ RGN_FLAG_UPDATE = 0,
+ RGN_FLAG_DIRTY,
+};
+
enum UFSHPB_STATE {
HPB_PRESENT = 1,
HPB_SUSPEND,
@@ -109,6 +114,7 @@ struct ufshpb_region {

/* below information is used by lru */
struct list_head list_lru_rgn;
+ unsigned long rgn_flags;
};

#define for_each_sub_region(rgn, i, srgn) \
--
2.25.1

2021-01-28 00:02:28

by Avri Altman

Subject: [PATCH 6/8] scsi: ufshpb: Add hpb dev reset response

The spec does not define the host's expected response when the device
sends an hpb dev reset response (oper 0x2).

Mark all active hpb regions for update; the actual update will take
place upon the next read to each region.
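A minimal model of this mark-then-update-on-read scheme (illustrative names; the real driver uses atomic bitops on rgn_flags under the appropriate locks, and defers the marking to a workqueue):

```c
#include <stdbool.h>

#define RGN_FLAG_UPDATE 0x1UL

struct rgn {
	unsigned long flags;
};

/* dev reset response: flag every region in the LU for an update */
static void mark_all_for_update(struct rgn *rgns, int cnt)
{
	int i;

	for (i = 0; i < cnt; i++)
		rgns[i].flags |= RGN_FLAG_UPDATE;
}

/*
 * Read path: consume the flag; returns true exactly once per mark,
 * which re-triggers an activation trial for that region.
 */
static bool consume_update_flag(struct rgn *rgn)
{
	bool was_set = rgn->flags & RGN_FLAG_UPDATE;

	rgn->flags &= ~RGN_FLAG_UPDATE;
	return was_set;
}
```

Deferring the map update to the next read keeps the reset path cheap: regions that are never read again are simply never refreshed.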

Signed-off-by: Avri Altman <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 55 ++++++++++++++++++++++++++++++++++++---
drivers/scsi/ufs/ufshpb.h | 1 +
2 files changed, 53 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index c09c8dce0745..cb99b57b4319 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -17,6 +17,7 @@
#include "../sd.h"

#define WORK_PENDING 0
+#define RESET_PENDING 1
#define ACTIVATION_THRSHLD 4 /* 4 IOs */
#define EVICTION_THRSHLD (ACTIVATION_THRSHLD << 6) /* 256 IOs */

@@ -344,7 +345,8 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
* in host control mode, reads are the main source for
* activation trials.
*/
- if (reads == ACTIVATION_THRSHLD) {
+ if (reads == ACTIVATION_THRSHLD ||
+ test_and_clear_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags)) {
spin_lock_irqsave(&hpb->rsp_list_lock, flags);
ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
hpb->stats.rb_active_cnt++;
@@ -1061,6 +1063,24 @@ void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
case HPB_RSP_DEV_RESET:
dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
"UFS device lost HPB information during PM.\n");
+
+ if (ufshpb_mode == HPB_HOST_CONTROL) {
+ struct ufshpb_lu *h;
+ struct scsi_device *sdev;
+
+ shost_for_each_device(sdev, hba->host) {
+ h = sdev->hostdata;
+ if (!h)
+ continue;
+
+ if (test_and_set_bit(RESET_PENDING,
+ &h->work_data_bits))
+ continue;
+
+ schedule_work(&h->ufshpb_lun_reset_work);
+ }
+ }
+
break;
default:
dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
@@ -1193,6 +1213,28 @@ static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
}

+static void ufshpb_reset_work_handler(struct work_struct *work)
+{
+ struct ufshpb_lu *hpb;
+ struct victim_select_info *lru_info;
+ struct ufshpb_region *rgn, *next_rgn;
+ unsigned long flags;
+
+ hpb = container_of(work, struct ufshpb_lu, ufshpb_lun_reset_work);
+
+ lru_info = &hpb->lru_info;
+
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+
+ list_for_each_entry_safe(rgn, next_rgn, &lru_info->lh_lru_rgn,
+ list_lru_rgn)
+ set_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags);
+
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+ clear_bit(RESET_PENDING, &hpb->work_data_bits);
+}
+
static void ufshpb_normalization_work_handler(struct work_struct *work)
{
struct ufshpb_lu *hpb;
@@ -1379,6 +1421,8 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
} else {
rgn->rgn_state = HPB_RGN_INACTIVE;
}
+
+ rgn->rgn_flags = 0;
}

return 0;
@@ -1487,9 +1531,12 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
INIT_LIST_HEAD(&hpb->list_hpb_lu);

INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
- if (ufshpb_mode == HPB_HOST_CONTROL)
+ if (ufshpb_mode == HPB_HOST_CONTROL) {
INIT_WORK(&hpb->ufshpb_normalization_work,
ufshpb_normalization_work_handler);
+ INIT_WORK(&hpb->ufshpb_lun_reset_work,
+ ufshpb_reset_work_handler);
+ }

hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -1576,8 +1623,10 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)

static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
{
- if (ufshpb_mode == HPB_HOST_CONTROL)
+ if (ufshpb_mode == HPB_HOST_CONTROL) {
+ cancel_work_sync(&hpb->ufshpb_lun_reset_work);
cancel_work_sync(&hpb->ufshpb_normalization_work);
+ }
cancel_work_sync(&hpb->map_work);
}

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 47a8877f9cdb..4bf77169af00 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -183,6 +183,7 @@ struct ufshpb_lu {
/* for selecting victim */
struct victim_select_info lru_info;
struct work_struct ufshpb_normalization_work;
+ struct work_struct ufshpb_lun_reset_work;
unsigned long work_data_bits;

/* pinned region information */
--
2.25.1

2021-01-28 00:02:28

by Avri Altman

Subject: [PATCH 7/8] scsi: ufshpb: Add "Cold" regions timer

In order not to hang on to “cold” regions, we shall inactivate a
region that has had no READ access for a predefined amount of time -
READ_TO_MS. For that purpose we shall monitor the active regions list,
polling it every POLLING_INTERVAL_MS. On timeout expiry we shall add
the region to the "to-be-inactivated" list, unless it is clean and has
not exhausted its READ_TO_EXPIRIES budget - another parameter.

All this does not apply to pinned regions.
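The per-region decision made on each polling pass can be sketched as follows (hypothetical names; the driver uses ktime_t deadlines and an atomic expiry counter, and moves expired regions to a local list before queueing them for inactivation):

```c
#include <stdbool.h>

#define READ_TO_MS 1000

struct rgn {
	long long read_timeout;	/* absolute deadline, in ms */
	int expiries_left;	/* READ_TO_EXPIRIES budget */
	bool dirty;
};

/*
 * Called every POLLING_INTERVAL_MS for each active region.
 * Returns true when @rgn should be moved to the expired list; a clean
 * region with budget left merely gets its timeout window renewed.
 */
static bool cold_rgn_expired(struct rgn *rgn, long long now_ms)
{
	if (now_ms <= rgn->read_timeout)
		return false;		/* not timed out yet */

	if (rgn->dirty || --rgn->expiries_left == 0)
		return true;		/* inactivate it */

	rgn->read_timeout = now_ms + READ_TO_MS;	/* another window */
	return false;
}
```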

Signed-off-by: Avri Altman <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 75 ++++++++++++++++++++++++++++++++++++++-
drivers/scsi/ufs/ufshpb.h | 7 ++++
2 files changed, 81 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index cb99b57b4319..482f01c3b3ee 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -18,8 +18,12 @@

#define WORK_PENDING 0
#define RESET_PENDING 1
+#define TIMEOUT_WORK_PENDING 2
#define ACTIVATION_THRSHLD 4 /* 4 IOs */
#define EVICTION_THRSHLD (ACTIVATION_THRSHLD << 6) /* 256 IOs */
+#define READ_TO_MS 1000
+#define READ_TO_EXPIRIES 100
+#define POLLING_INTERVAL_MS 200

/* memory management */
static struct kmem_cache *ufshpb_mctx_cache;
@@ -671,12 +675,69 @@ static int ufshpb_check_srgns_issue_state(struct ufshpb_lu *hpb,
return 0;
}

+static void ufshpb_read_to_handler(struct work_struct *work)
+{
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct ufshpb_lu *hpb;
+ struct victim_select_info *lru_info;
+ struct ufshpb_region *rgn, *next_rgn;
+ unsigned long flags;
+ LIST_HEAD(expired_list);
+
+ hpb = container_of(dwork, struct ufshpb_lu, ufshpb_read_to_work);
+
+ if (test_and_set_bit(TIMEOUT_WORK_PENDING, &hpb->work_data_bits))
+ return;
+
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+
+ lru_info = &hpb->lru_info;
+
+ list_for_each_entry_safe(rgn, next_rgn, &lru_info->lh_lru_rgn,
+ list_lru_rgn) {
+ bool timedout = ktime_after(ktime_get(), rgn->read_timeout);
+ bool dirty, expired;
+
+ if (!timedout)
+ continue;
+
+ dirty = is_rgn_dirty(rgn);
+ expired = atomic_dec_and_test(&rgn->read_timeout_expiries);
+
+ if (dirty || expired)
+ list_add(&rgn->list_expired_rgn, &expired_list);
+ else
+ rgn->read_timeout = ktime_add_ms(ktime_get(),
+ READ_TO_MS);
+ }
+
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+ list_for_each_entry_safe(rgn, next_rgn, &expired_list,
+ list_expired_rgn) {
+ list_del_init(&rgn->list_expired_rgn);
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
+ hpb->stats.rb_inactive_cnt++;
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+ }
+
+ clear_bit(TIMEOUT_WORK_PENDING, &hpb->work_data_bits);
+
+ schedule_delayed_work(&hpb->ufshpb_read_to_work,
+ msecs_to_jiffies(POLLING_INTERVAL_MS));
+}
+
static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
- struct ufshpb_region *rgn)
+ struct ufshpb_region *rgn)
{
rgn->rgn_state = HPB_RGN_ACTIVE;
list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
atomic_inc(&lru_info->active_cnt);
+ if (ufshpb_mode == HPB_HOST_CONTROL) {
+ rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS);
+ atomic_set(&rgn->read_timeout_expiries, READ_TO_EXPIRIES);
+ }
}

static void ufshpb_hit_lru_info(struct victim_select_info *lru_info,
@@ -1404,6 +1465,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)

INIT_LIST_HEAD(&rgn->list_inact_rgn);
INIT_LIST_HEAD(&rgn->list_lru_rgn);
+ INIT_LIST_HEAD(&rgn->list_expired_rgn);

if (rgn_idx == hpb->rgns_per_lu - 1)
srgn_cnt = ((hpb->srgns_per_lu - 1) %
@@ -1536,6 +1598,8 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
ufshpb_normalization_work_handler);
INIT_WORK(&hpb->ufshpb_lun_reset_work,
ufshpb_reset_work_handler);
+ INIT_DELAYED_WORK(&hpb->ufshpb_read_to_work,
+ ufshpb_read_to_handler);
}

hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
@@ -1562,6 +1626,10 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)

ufshpb_stat_init(hpb);

+ if (ufshpb_mode == HPB_HOST_CONTROL)
+ schedule_delayed_work(&hpb->ufshpb_read_to_work,
+ msecs_to_jiffies(POLLING_INTERVAL_MS));
+
return 0;

release_m_page_cache:
@@ -1624,6 +1692,7 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
{
if (ufshpb_mode == HPB_HOST_CONTROL) {
+ cancel_delayed_work_sync(&hpb->ufshpb_read_to_work);
cancel_work_sync(&hpb->ufshpb_lun_reset_work);
cancel_work_sync(&hpb->ufshpb_normalization_work);
}
@@ -1734,6 +1803,10 @@ void ufshpb_resume(struct ufs_hba *hba)
continue;
ufshpb_set_state(hpb, HPB_PRESENT);
ufshpb_kick_map_work(hpb);
+ if (ufshpb_mode == HPB_HOST_CONTROL)
+ schedule_delayed_work(&hpb->ufshpb_read_to_work,
+ msecs_to_jiffies(POLLING_INTERVAL_MS));
+
}
}

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 4bf77169af00..86c16a4127bd 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -121,6 +121,12 @@ struct ufshpb_region {

/* region reads - for host mode */
atomic64_t reads;
+
+ /* region "cold" timer - for host mode */
+ ktime_t read_timeout;
+ atomic_t read_timeout_expiries;
+ struct list_head list_expired_rgn;
+
};

#define for_each_sub_region(rgn, i, srgn) \
@@ -184,6 +190,7 @@ struct ufshpb_lu {
struct victim_select_info lru_info;
struct work_struct ufshpb_normalization_work;
struct work_struct ufshpb_lun_reset_work;
+ struct delayed_work ufshpb_read_to_work;
unsigned long work_data_bits;

/* pinned region information */
--
2.25.1

2021-01-28 00:02:28

by Avri Altman

Subject: [PATCH 4/8] scsi: ufshpb: Make eviction depends on region's reads

In host mode, eviction is considered an extreme measure.
Verify that the entering region has enough reads, and that the exiting
region has far fewer reads.
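Taken together, the two checks amount to a single predicate (a hypothetical helper; in the patch the two halves live in different places - ufshpb_add_region() gates the entering region, ufshpb_victim_lru_info() filters victim candidates):

```c
#include <stdbool.h>

#define ACTIVATION_THRSHLD 4				/* 4 IOs */
#define EVICTION_THRSHLD (ACTIVATION_THRSHLD << 6)	/* 256 IOs */

/*
 * May a region with @entering_reads evict a victim with @victim_reads?
 * The entering region must have accumulated at least EVICTION_THRSHLD
 * reads, and a victim qualifies only while it sits at no more than
 * half that threshold - i.e. it is performing clearly worse.
 */
static bool eviction_allowed(long long entering_reads,
			     long long victim_reads)
{
	return entering_reads >= EVICTION_THRSHLD &&
	       victim_reads <= (EVICTION_THRSHLD >> 1);
}
```

With these thresholds, failed eviction trials are expected and cheap, which is why the patch drops such regions from the "to-be-activated" list instead of retrying them.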

Signed-off-by: Avri Altman <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 44 ++++++++++++++++++++++++++++++++++++---
1 file changed, 41 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 51c3607166bc..a16c0f2d5fac 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -18,6 +18,7 @@

#define WORK_PENDING 0
#define ACTIVATION_THRSHLD 4 /* 4 IOs */
+#define EVICTION_THRSHLD (ACTIVATION_THRSHLD << 6) /* 256 IOs */

/* memory management */
static struct kmem_cache *ufshpb_mctx_cache;
@@ -639,6 +640,14 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
if (ufshpb_check_srgns_issue_state(hpb, rgn))
continue;

+ /*
+ * in host control mode, verify that the exiting region
+ * has less reads
+ */
+ if (ufshpb_mode == HPB_HOST_CONTROL &&
+ atomic64_read(&rgn->reads) > (EVICTION_THRSHLD >> 1))
+ continue;
+
victim_rgn = rgn;
break;
}
@@ -789,7 +798,7 @@ static int ufshpb_issue_map_req(struct ufshpb_lu *hpb,

static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
{
- struct ufshpb_region *victim_rgn;
+ struct ufshpb_region *victim_rgn = NULL;
struct victim_select_info *lru_info = &hpb->lru_info;
unsigned long flags;
int ret = 0;
@@ -817,6 +826,17 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
* because the device could detect this region
* by not issuing HPB_READ
*/
+
+ /*
+ * in host control mode, verify that the entering
+ * region has enough reads
+ */
+ if (ufshpb_mode == HPB_HOST_CONTROL &&
+ atomic64_read(&rgn->reads) < EVICTION_THRSHLD) {
+ ret = -EACCES;
+ goto out;
+ }
+
victim_rgn = ufshpb_victim_lru_info(hpb);
if (!victim_rgn) {
dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
@@ -1024,8 +1044,13 @@ static void ufshpb_run_active_subregion_list(struct ufshpb_lu *hpb)

rgn = hpb->rgn_tbl + srgn->rgn_idx;
ret = ufshpb_add_region(hpb, rgn);
- if (ret)
- goto active_failed;
+ if (ret) {
+ if (ret == -EACCES) {
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ continue;
+ }
+ goto add_region_failed;
+ }

ret = ufshpb_issue_map_req(hpb, rgn, srgn);
if (ret) {
@@ -1039,9 +1064,22 @@ static void ufshpb_run_active_subregion_list(struct ufshpb_lu *hpb)
spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
return;

+add_region_failed:
+ if (ufshpb_mode == HPB_HOST_CONTROL) {
+ /*
+ * In host control mode, it is common that eviction trials
+ * fail, either because the entering region didn't have enough
+ * reads, or a poor-performing exiting region wasn't found.
+ * No need to re-insert those regions to the "to-be-activated"
+ * list.
+ */
+ return;
+ }
+
active_failed:
dev_err(&hpb->sdev_ufs_lu->sdev_dev, "failed to activate region %d - %d, will retry\n",
rgn->rgn_idx, srgn->srgn_idx);
+
spin_lock_irqsave(&hpb->rsp_list_lock, flags);
ufshpb_add_active_list(hpb, rgn, srgn);
spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
--
2.25.1

2021-01-28 00:02:35

by Avri Altman

Subject: [PATCH 5/8] scsi: ufshpb: Region inactivation in host mode

In host mode, the host is expected to send an HPB-WRITE-BUFFER with
buffer-id = 0x1 when it inactivates a region.

Use the map-requests pool, as there is no point in assigning a
designated cache for umap-requests.

Signed-off-by: Avri Altman <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 120 +++++++++++++++++++++++++++++++-------
drivers/scsi/ufs/ufshpb.h | 4 ++
2 files changed, 104 insertions(+), 20 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index a16c0f2d5fac..c09c8dce0745 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -386,49 +386,66 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
hpb->stats.hit_cnt++;
}

+static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb,
+ int rgn_idx, enum req_opf dir)
+{
+ struct ufshpb_req *rq;
+ struct request *req;
+
+ rq = kmem_cache_alloc(hpb->map_req_cache, GFP_KERNEL);
+ if (!rq)
+ return NULL;
+
+ req = blk_get_request(hpb->sdev_ufs_lu->request_queue,
+ dir, 0);
+ if (IS_ERR(req))
+ goto free_rq;
+
+ rq->hpb = hpb;
+ rq->req = req;
+ rq->rgn_idx = rgn_idx;
+
+ return rq;
+
+free_rq:
+ kmem_cache_free(hpb->map_req_cache, rq);
+ return NULL;
+}
+
+static void ufshpb_put_req(struct ufshpb_lu *hpb, struct ufshpb_req *rq)
+{
+ blk_put_request(rq->req);
+ kmem_cache_free(hpb->map_req_cache, rq);
+}
+
static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
struct ufshpb_subregion *srgn)
{
struct ufshpb_req *map_req;
- struct request *req;
struct bio *bio;

- map_req = kmem_cache_alloc(hpb->map_req_cache, GFP_KERNEL);
+ map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_SCSI_IN);
if (!map_req)
return NULL;

- req = blk_get_request(hpb->sdev_ufs_lu->request_queue,
- REQ_OP_SCSI_IN, 0);
- if (IS_ERR(req))
- goto free_map_req;
-
bio = bio_alloc(GFP_KERNEL, hpb->pages_per_srgn);
if (!bio) {
- blk_put_request(req);
- goto free_map_req;
+ ufshpb_put_req(hpb, map_req);
+ return NULL;
}

- map_req->hpb = hpb;
- map_req->req = req;
map_req->bio = bio;
-
- map_req->rgn_idx = srgn->rgn_idx;
map_req->srgn_idx = srgn->srgn_idx;
map_req->mctx = srgn->mctx;

return map_req;
-
-free_map_req:
- kmem_cache_free(hpb->map_req_cache, map_req);
- return NULL;
}

static void ufshpb_put_map_req(struct ufshpb_lu *hpb,
- struct ufshpb_req *map_req)
+ struct ufshpb_req *map_req)
{
bio_put(map_req->bio);
- blk_put_request(map_req->req);
- kmem_cache_free(hpb->map_req_cache, map_req);
+ ufshpb_put_req(hpb, map_req);
}

static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
@@ -483,6 +500,13 @@ static void ufshpb_activate_subregion(struct ufshpb_lu *hpb,
srgn->srgn_state = HPB_SRGN_VALID;
}

+static void ufshpb_umap_req_compl_fn(struct request *req, blk_status_t error)
+{
+ struct ufshpb_req *umap_req = (struct ufshpb_req *)req->end_io_data;
+
+ ufshpb_put_req(umap_req->hpb, umap_req);
+}
+
static void ufshpb_map_req_compl_fn(struct request *req, blk_status_t error)
{
struct ufshpb_req *map_req = (struct ufshpb_req *) req->end_io_data;
@@ -501,6 +525,14 @@ static void ufshpb_map_req_compl_fn(struct request *req, blk_status_t error)
ufshpb_put_map_req(map_req->hpb, map_req);
}

+static void ufshpb_set_write_buf_cmd(unsigned char *cdb, int rgn_idx)
+{
+ cdb[0] = UFSHPB_WRITE_BUFFER;
+ cdb[1] = UFSHPB_WRITE_BUFFER_ID;
+ put_unaligned_be16(rgn_idx, &cdb[2]);
+ cdb[9] = 0x00;
+}
+
static void ufshpb_set_read_buf_cmd(unsigned char *cdb, int rgn_idx,
int srgn_idx, int srgn_mem_size)
{
@@ -514,6 +546,27 @@ static void ufshpb_set_read_buf_cmd(unsigned char *cdb, int rgn_idx,
cdb[9] = 0x00;
}

+static int ufshpb_execute_umap_req(struct ufshpb_lu *hpb,
+ struct ufshpb_req *umap_req)
+{
+ struct request_queue *q;
+ struct request *req;
+ struct scsi_request *rq;
+
+ q = hpb->sdev_ufs_lu->request_queue;
+ req = umap_req->req;
+ req->timeout = 0;
+ req->end_io_data = (void *)umap_req;
+ rq = scsi_req(req);
+ ufshpb_set_write_buf_cmd(rq->cmd, umap_req->rgn_idx);
+ rq->cmd_len = HPB_WRITE_BUFFER_CMD_LENGTH;
+
+ blk_execute_rq_nowait(q, NULL, req, 1, ufshpb_umap_req_compl_fn);
+
+ hpb->stats.umap_req_cnt++;
+ return 0;
+}
+
static int ufshpb_execute_map_req(struct ufshpb_lu *hpb,
struct ufshpb_req *map_req)
{
@@ -673,6 +726,25 @@ static void ufshpb_purge_active_subregion(struct ufshpb_lu *hpb,
}
}

+static int ufshpb_issue_umap_req(struct ufshpb_lu *hpb,
+ struct ufshpb_region *rgn)
+{
+ struct ufshpb_req *umap_req;
+
+ umap_req = ufshpb_get_req(hpb, rgn->rgn_idx, REQ_OP_SCSI_OUT);
+ if (!umap_req)
+ return -ENOMEM;
+
+ if (ufshpb_execute_umap_req(hpb, umap_req))
+ goto free_umap_req;
+
+ return 0;
+
+free_umap_req:
+ ufshpb_put_req(hpb, umap_req);
+ return -EAGAIN;
+}
+
static void __ufshpb_evict_region(struct ufshpb_lu *hpb,
struct ufshpb_region *rgn)
{
@@ -680,6 +752,11 @@ static void __ufshpb_evict_region(struct ufshpb_lu *hpb,
struct ufshpb_subregion *srgn;
int srgn_idx;

+
+ if (ufshpb_mode == HPB_HOST_CONTROL &&
+ ufshpb_issue_umap_req(hpb, rgn))
+ return;
+
lru_info = &hpb->lru_info;

dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "evict region %d\n", rgn->rgn_idx);
@@ -1368,6 +1445,7 @@ ufshpb_sysfs_attr_show_func(rb_noti_cnt);
ufshpb_sysfs_attr_show_func(rb_active_cnt);
ufshpb_sysfs_attr_show_func(rb_inactive_cnt);
ufshpb_sysfs_attr_show_func(map_req_cnt);
+ufshpb_sysfs_attr_show_func(umap_req_cnt);

static struct attribute *hpb_dev_attrs[] = {
&dev_attr_hit_cnt.attr,
@@ -1376,6 +1454,7 @@ static struct attribute *hpb_dev_attrs[] = {
&dev_attr_rb_active_cnt.attr,
&dev_attr_rb_inactive_cnt.attr,
&dev_attr_map_req_cnt.attr,
+ &dev_attr_umap_req_cnt.attr,
NULL,
};

@@ -1392,6 +1471,7 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb)
hpb->stats.rb_active_cnt = 0;
hpb->stats.rb_inactive_cnt = 0;
hpb->stats.map_req_cnt = 0;
+ hpb->stats.umap_req_cnt = 0;
}

static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index b0e78728af38..47a8877f9cdb 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -35,8 +35,11 @@
/* hpb vender defined opcode */
#define UFSHPB_READ 0xF8
#define UFSHPB_READ_BUFFER 0xF9
+#define UFSHPB_WRITE_BUFFER 0xFA
#define UFSHPB_READ_BUFFER_ID 0x01
+#define UFSHPB_WRITE_BUFFER_ID 0x01
#define HPB_READ_BUFFER_CMD_LENGTH 10
+#define HPB_WRITE_BUFFER_CMD_LENGTH 10
#define LU_ENABLED_HPB_FUNC 0x02

#define HPB_RESET_REQ_RETRIES 10
@@ -158,6 +161,7 @@ struct ufshpb_stats {
u64 rb_active_cnt;
u64 rb_inactive_cnt;
u64 map_req_cnt;
+ u64 umap_req_cnt;
};

struct ufshpb_lu {
--
2.25.1

2021-01-28 00:03:30

by Avri Altman

Subject: [PATCH 8/8] scsi: ufshpb: Add support for host control mode

Support devices that report they are using host control mode.

Signed-off-by: Avri Altman <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 6 ------
1 file changed, 6 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 482f01c3b3ee..3ea9f7079189 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -2035,12 +2035,6 @@ void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf)
int version;

ufshpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL];
- if (ufshpb_mode == HPB_HOST_CONTROL) {
- dev_err(hba->dev, "%s: host control mode is not supported.\n",
- __func__);
- hpb_dev_info->hpb_disabled = true;
- return;
- }

version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER);
if (version != HPB_SUPPORT_VERSION) {
--
2.25.1

2021-01-28 00:03:30

by Avri Altman

Subject: [PATCH 3/8] scsi: ufshpb: Add region's reads counter

In host control mode, reads are the major source of activation trials.
Keep track of read counters, for both active and inactive regions.

We reset the read counter upon write - we are only interested in "clean"
reads.

Less intuitively, we also reset it upon a region's deactivation. Region
deactivation is often the result of eviction: one region becomes active
at the expense of another. This happens once the max-active-regions
limit has been crossed. If we don’t reset the counter, we will trigger
a lot of thrashing of the HPB database, since a few reads (or even one)
to the region that was just deactivated will trigger a re-activation
trial.

Keep those counters normalized, as we are using the reads as a
comparative score to make various decisions.
If during consecutive normalizations an active region has exhausted its
reads - inactivate it.
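The normalization pass described above boils down to this per-region step (simplified sketch, hypothetical names; the driver uses atomic64 counters and takes rsp_list_lock before queueing the inactivation):

```c
#include <stdbool.h>

enum rgn_state { RGN_INACTIVE, RGN_ACTIVE };

struct rgn {
	enum rgn_state state;
	long long reads;
};

/*
 * One normalization step: halve the region's comparative read score.
 * Returns true when @rgn is active but its score has dropped to zero,
 * i.e. it should be queued for inactivation.
 */
static bool normalize_rgn(struct rgn *rgn)
{
	rgn->reads >>= 1;
	return rgn->state == RGN_ACTIVE && rgn->reads == 0;
}
```

Halving rather than zeroing keeps the relative ordering between hot and lukewarm regions across normalization rounds, while still letting truly idle active regions decay to zero and get evicted.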

Signed-off-by: Avri Altman <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 96 +++++++++++++++++++++++++++++++++------
drivers/scsi/ufs/ufshpb.h | 5 ++
2 files changed, 86 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 5fa1f5bc08e6..51c3607166bc 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -16,6 +16,9 @@
#include "ufshpb.h"
#include "../sd.h"

+#define WORK_PENDING 0
+#define ACTIVATION_THRSHLD 4 /* 4 IOs */
+
/* memory management */
static struct kmem_cache *ufshpb_mctx_cache;
static mempool_t *ufshpb_mctx_pool;
@@ -261,6 +264,21 @@ ufshpb_set_hpb_read_to_upiu(struct ufshpb_lu *hpb, struct ufshcd_lrb *lrbp,
cdb[14] = transfer_len;
}

+static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
+ int srgn_idx)
+{
+ struct ufshpb_region *rgn;
+ struct ufshpb_subregion *srgn;
+
+ rgn = hpb->rgn_tbl + rgn_idx;
+ srgn = rgn->srgn_tbl + srgn_idx;
+
+ list_del_init(&rgn->list_inact_rgn);
+
+ if (list_empty(&srgn->list_act_srgn))
+ list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+}
+
/*
* This function will set up HPB read command using host-side L2P map data.
* In HPB v1.0, maximum size of HPB read command is 4KB.
@@ -276,6 +294,7 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
unsigned long flags;
int transfer_len, rgn_idx, srgn_idx, srgn_offset;
int err = 0;
+ u64 reads;

hpb = ufshpb_get_hpb_data(cmd->device);
if (!hpb)
@@ -306,12 +325,39 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
transfer_len);
spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+ if (ufshpb_mode == HPB_HOST_CONTROL)
+ atomic64_set(&rgn->reads, 0);
+
return;
}

+ if (ufshpb_mode == HPB_HOST_CONTROL)
+ reads = atomic64_inc_return(&rgn->reads);
+
if (!ufshpb_is_support_chunk(transfer_len))
return;

+ if (ufshpb_mode == HPB_HOST_CONTROL) {
+ /*
+ * in host control mode, reads are the main source for
+ * activation trials.
+ */
+ if (reads == ACTIVATION_THRSHLD) {
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
+ hpb->stats.rb_active_cnt++;
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+ "activate region %d-%d\n", rgn_idx, srgn_idx);
+ }
+
+ /* keep those counters normalized */
+ if (reads > hpb->entries_per_srgn &&
+ !test_and_set_bit(WORK_PENDING, &hpb->work_data_bits))
+ schedule_work(&hpb->ufshpb_normalization_work);
+ }
+
spin_lock_irqsave(&hpb->rgn_state_lock, flags);
if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
transfer_len)) {
@@ -396,21 +442,6 @@ static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
return 0;
}

-static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
- int srgn_idx)
-{
- struct ufshpb_region *rgn;
- struct ufshpb_subregion *srgn;
-
- rgn = hpb->rgn_tbl + rgn_idx;
- srgn = rgn->srgn_tbl + srgn_idx;
-
- list_del_init(&rgn->list_inact_rgn);
-
- if (list_empty(&srgn->list_act_srgn))
- list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
-}
-
static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)
{
struct ufshpb_region *rgn;
@@ -646,6 +677,9 @@ static void __ufshpb_evict_region(struct ufshpb_lu *hpb,

ufshpb_cleanup_lru_info(lru_info, rgn);

+ if (ufshpb_mode == HPB_HOST_CONTROL)
+ atomic64_set(&rgn->reads, 0);
+
for_each_sub_region(rgn, srgn_idx, srgn)
ufshpb_purge_active_subregion(hpb, srgn);
}
@@ -1044,6 +1078,33 @@ static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
}

+static void ufshpb_normalization_work_handler(struct work_struct *work)
+{
+ struct ufshpb_lu *hpb;
+ int rgn_idx;
+
+ hpb = container_of(work, struct ufshpb_lu, ufshpb_normalization_work);
+
+ for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+ struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx;
+ u64 reads = atomic64_read(&rgn->reads);
+
+ if (reads)
+ atomic64_set(&rgn->reads, reads >> 1);
+
+ if (rgn->rgn_state != HPB_RGN_ACTIVE ||
+ atomic64_read(&rgn->reads))
+ continue;
+
+ /* if region is active but has no reads - inactivate it */
+ spin_lock(&hpb->rsp_list_lock);
+ ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
+ spin_unlock(&hpb->rsp_list_lock);
+ }
+
+ clear_bit(WORK_PENDING, &hpb->work_data_bits);
+}
+
static void ufshpb_map_work_handler(struct work_struct *work)
{
struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work);
@@ -1308,6 +1369,9 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
INIT_LIST_HEAD(&hpb->list_hpb_lu);

INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
+ if (ufshpb_mode == HPB_HOST_CONTROL)
+ INIT_WORK(&hpb->ufshpb_normalization_work,
+ ufshpb_normalization_work_handler);

hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -1394,6 +1458,8 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)

static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
{
+ if (ufshpb_mode == HPB_HOST_CONTROL)
+ cancel_work_sync(&hpb->ufshpb_normalization_work);
cancel_work_sync(&hpb->map_work);
}

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 8a34b0f42754..b0e78728af38 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -115,6 +115,9 @@ struct ufshpb_region {
/* below information is used by lru */
struct list_head list_lru_rgn;
unsigned long rgn_flags;
+
+ /* region reads - for host mode */
+ atomic64_t reads;
};

#define for_each_sub_region(rgn, i, srgn) \
@@ -175,6 +178,8 @@ struct ufshpb_lu {

/* for selecting victim */
struct victim_select_info lru_info;
+ struct work_struct ufshpb_normalization_work;
+ unsigned long work_data_bits;

/* pinned region information */
u32 lu_pinned_start;
--
2.25.1

2021-01-28 00:04:19

by Greg Kroah-Hartman

Subject: Re: [PATCH 7/8] scsi: ufshpb: Add "Cold" regions timer

On Wed, Jan 27, 2021 at 05:12:16PM +0200, Avri Altman wrote:
> In order not to hang on to “cold” regions, we shall inactivate a
> region that has no READ access for a predefined amount of time -
> READ_TO_MS. For that purpose we shall monitor the active regions list,
> polling it on every POLLING_INTERVAL_MS. On timeout expiry we shall add
> the region to the "to-be-inactivated" list, unless it is clean and did
> not exhaust its READ_TO_EXPIRIES - another parameter.
>
> All this does not apply to pinned regions.
>
> Signed-off-by: Avri Altman <[email protected]>
> ---
> drivers/scsi/ufs/ufshpb.c | 75 ++++++++++++++++++++++++++++++++++++++-
> drivers/scsi/ufs/ufshpb.h | 7 ++++
> 2 files changed, 81 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
> index cb99b57b4319..482f01c3b3ee 100644
> --- a/drivers/scsi/ufs/ufshpb.c
> +++ b/drivers/scsi/ufs/ufshpb.c
> @@ -18,8 +18,12 @@
>
> #define WORK_PENDING 0
> #define RESET_PENDING 1
> +#define TIMEOUT_WORK_PENDING 2
> #define ACTIVATION_THRSHLD 4 /* 4 IOs */
> #define EVICTION_THRSHLD (ACTIVATION_THRSHLD << 6) /* 256 IOs */
> +#define READ_TO_MS 1000
> +#define READ_TO_EXPIRIES 100
> +#define POLLING_INTERVAL_MS 200
>
> /* memory management */
> static struct kmem_cache *ufshpb_mctx_cache;
> @@ -671,12 +675,69 @@ static int ufshpb_check_srgns_issue_state(struct ufshpb_lu *hpb,
> return 0;
> }
>
> +static void ufshpb_read_to_handler(struct work_struct *work)
> +{
> + struct delayed_work *dwork = to_delayed_work(work);
> + struct ufshpb_lu *hpb;
> + struct victim_select_info *lru_info;
> + struct ufshpb_region *rgn, *next_rgn;
> + unsigned long flags;
> + LIST_HEAD(expired_list);
> +
> + hpb = container_of(dwork, struct ufshpb_lu, ufshpb_read_to_work);
> +
> + if (test_and_set_bit(TIMEOUT_WORK_PENDING, &hpb->work_data_bits))
> + return;
> +
> + spin_lock_irqsave(&hpb->rgn_state_lock, flags);
> +
> + lru_info = &hpb->lru_info;
> +
> + list_for_each_entry_safe(rgn, next_rgn, &lru_info->lh_lru_rgn,
> + list_lru_rgn) {
> + bool timedout = ktime_after(ktime_get(), rgn->read_timeout);
> + bool dirty, expired;
> +
> + if (!timedout)
> + continue;
> +
> + dirty = is_rgn_dirty(rgn);
> + expired = atomic_dec_and_test(&rgn->read_timeout_expiries);
> +
> + if (dirty || expired)
> + list_add(&rgn->list_expired_rgn, &expired_list);
> + else
> + rgn->read_timeout = ktime_add_ms(ktime_get(),
> + READ_TO_MS);
> + }
> +
> + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
> +
> + list_for_each_entry_safe(rgn, next_rgn, &expired_list,
> + list_expired_rgn) {
> + list_del_init(&rgn->list_expired_rgn);
> + spin_lock_irqsave(&hpb->rsp_list_lock, flags);
> + ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
> + hpb->stats.rb_inactive_cnt++;
> + spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
> + }
> +
> + clear_bit(TIMEOUT_WORK_PENDING, &hpb->work_data_bits);
> +
> + schedule_delayed_work(&hpb->ufshpb_read_to_work,
> + msecs_to_jiffies(POLLING_INTERVAL_MS));
> +}
> +
> static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
> - struct ufshpb_region *rgn)
> + struct ufshpb_region *rgn)
> {
> rgn->rgn_state = HPB_RGN_ACTIVE;
> list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
> atomic_inc(&lru_info->active_cnt);
> + if (ufshpb_mode == HPB_HOST_CONTROL) {
> + rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS);
> + atomic_set(&rgn->read_timeout_expiries, READ_TO_EXPIRIES);
> + }
> }
>
> static void ufshpb_hit_lru_info(struct victim_select_info *lru_info,
> @@ -1404,6 +1465,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
>
> INIT_LIST_HEAD(&rgn->list_inact_rgn);
> INIT_LIST_HEAD(&rgn->list_lru_rgn);
> + INIT_LIST_HEAD(&rgn->list_expired_rgn);
>
> if (rgn_idx == hpb->rgns_per_lu - 1)
> srgn_cnt = ((hpb->srgns_per_lu - 1) %
> @@ -1536,6 +1598,8 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
> ufshpb_normalization_work_handler);
> INIT_WORK(&hpb->ufshpb_lun_reset_work,
> ufshpb_reset_work_handler);
> + INIT_DELAYED_WORK(&hpb->ufshpb_read_to_work,
> + ufshpb_read_to_handler);
> }
>
> hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
> @@ -1562,6 +1626,10 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
>
> ufshpb_stat_init(hpb);
>
> + if (ufshpb_mode == HPB_HOST_CONTROL)
> + schedule_delayed_work(&hpb->ufshpb_read_to_work,
> + msecs_to_jiffies(POLLING_INTERVAL_MS));
> +
> return 0;
>
> release_m_page_cache:
> @@ -1624,6 +1692,7 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
> static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
> {
> if (ufshpb_mode == HPB_HOST_CONTROL) {
> + cancel_delayed_work_sync(&hpb->ufshpb_read_to_work);
> cancel_work_sync(&hpb->ufshpb_lun_reset_work);
> cancel_work_sync(&hpb->ufshpb_normalization_work);
> }
> @@ -1734,6 +1803,10 @@ void ufshpb_resume(struct ufs_hba *hba)
> continue;
> ufshpb_set_state(hpb, HPB_PRESENT);
> ufshpb_kick_map_work(hpb);
> + if (ufshpb_mode == HPB_HOST_CONTROL)
> + schedule_delayed_work(&hpb->ufshpb_read_to_work,
> + msecs_to_jiffies(POLLING_INTERVAL_MS));
> +
> }
> }
>
> diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
> index 4bf77169af00..86c16a4127bd 100644
> --- a/drivers/scsi/ufs/ufshpb.h
> +++ b/drivers/scsi/ufs/ufshpb.h
> @@ -121,6 +121,12 @@ struct ufshpb_region {
>
> /* region reads - for host mode */
> atomic64_t reads;
> +
> + /* region "cold" timer - for host mode */
> + ktime_t read_timeout;
> + atomic_t read_timeout_expiries;

Why does this have to be an atomic when you have a lock to protect this
structure already taken?

thanks,

greg k-h

2021-01-28 00:04:46

by Greg Kroah-Hartman

Subject: Re: [PATCH 2/8] scsi: ufshpb: Add host control mode support to rsp_upiu

On Wed, Jan 27, 2021 at 05:12:11PM +0200, Avri Altman wrote:
> There are some limitations to activations / inactivations in host
> control mode - Add those.

I can not understand this changelog text, please make it more
descriptive as I have no way to know how to review this patch :(

thanks,

greg k-h

2021-01-28 00:04:57

by Avri Altman

Subject: [PATCH 1/8] scsi: ufshpb: Cache HPB Control mode on init

We will use it later, when we need to differentiate between device
and host control modes.

Signed-off-by: Avri Altman <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index d3e6c5b32328..183bdf35f2d0 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -26,6 +26,8 @@ static int tot_active_srgn_pages;

static struct workqueue_struct *ufshpb_wq;

+static enum UFSHPB_MODE ufshpb_mode;
+
bool ufshpb_is_allowed(struct ufs_hba *hba)
{
return !(hba->ufshpb_dev.hpb_disabled);
@@ -1690,10 +1692,9 @@ void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf)
{
struct ufshpb_dev_info *hpb_dev_info = &hba->ufshpb_dev;
int version;
- u8 hpb_mode;

- hpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL];
- if (hpb_mode == HPB_HOST_CONTROL) {
+ ufshpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL];
+ if (ufshpb_mode == HPB_HOST_CONTROL) {
dev_err(hba->dev, "%s: host control mode is not supported.\n",
__func__);
hpb_dev_info->hpb_disabled = true;
--
2.25.1

2021-01-31 07:16:10

by Avri Altman

Subject: RE: [PATCH 2/8] scsi: ufshpb: Add host control mode support to rsp_upiu

>
>
> On Wed, Jan 27, 2021 at 05:12:11PM +0200, Avri Altman wrote:
> > There are some limitations to activations / inactivations in host
> > control mode - Add those.
>
> I can not understand this changelog text, please make it more
> descriptive as I have no way to know how to review this patch :(
Done.

2021-01-31 07:39:44

by Avri Altman

Subject: RE: [PATCH 7/8] scsi: ufshpb: Add "Cold" regions timer

> > + /* region "cold" timer - for host mode */
> > + ktime_t read_timeout;
> > + atomic_t read_timeout_expiries;
>
> Why does this have to be an atomic when you have a lock to protect this
> structure already taken?
Done.

You are right, it is protected by the hpb state lock. Will fix it.

Thanks,
Avri

2021-02-01 04:27:29

by Daejun Park

Subject: RE: [PATCH 3/8] scsi: ufshpb: Add region's reads counter

Hi Avri,

Thanks for adding HCM support on HPB.
I have some opinion for this patch.

> +#define WORK_PENDING 0
> +#define ACTIVATION_THRSHLD 4 /* 4 IOs */
Rather than fixing it with a macro, how about using sysfs and making it
configurable?

> @@ -306,12 +325,39 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
> ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
> transfer_len);
> spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
> +
> + if (ufshpb_mode == HPB_HOST_CONTROL)
> + atomic64_set(&rgn->reads, 0);
> +
> return;
> }
>
> + if (ufshpb_mode == HPB_HOST_CONTROL)
> + reads = atomic64_inc_return(&rgn->reads);
> +
> if (!ufshpb_is_support_chunk(transfer_len))
> return; <- *this*
>
> + if (ufshpb_mode == HPB_HOST_CONTROL) {
> + /*
> + * in host control mode, reads are the main source for
> + * activation trials.
> + */
> + if (reads == ACTIVATION_THRSHLD) {
If the chunk size is not supported, we can never activate this region:
the function may return before reaching this statement.

> diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
> index 8a34b0f42754..b0e78728af38 100644
> --- a/drivers/scsi/ufs/ufshpb.h
> +++ b/drivers/scsi/ufs/ufshpb.h
> @@ -115,6 +115,9 @@ struct ufshpb_region {
> /* below information is used by lru */
> struct list_head list_lru_rgn;
> unsigned long rgn_flags;
> +
> + /* region reads - for host mode */
> + atomic64_t reads;
I think 32 bits are sufficient, because the counter is normalized by the
worker at a fixed interval.

Thanks,
Daejun

2021-02-01 04:32:47

by Daejun Park

Subject: RE: [PATCH 4/8] scsi: ufshpb: Make eviction depends on region's reads

Hi Avri,

> + /*
> + * in host control mode, verify that the exiting region
> + * has less reads
> + */
> + if (ufshpb_mode == HPB_HOST_CONTROL &&
> + atomic64_read(&rgn->reads) > (EVICTION_THRSHLD >> 1))
Why do we use a shifted value to verify fewer reads? I think we should use a
separate value for this check.

Thanks,
Daejun

2021-02-01 05:18:17

by Daejun Park

Subject: RE: [PATCH 6/8] scsi: ufshpb: Add hpb dev reset response

Hi Avri,

> + list_for_each_entry_safe(rgn, next_rgn, &lru_info->lh_lru_rgn,
> + list_lru_rgn)
How about replacing list_for_each_entry_safe with list_for_each_entry?

Thanks,
Daejun

2021-02-01 07:15:59

by Avri Altman

Subject: RE: [PATCH 3/8] scsi: ufshpb: Add region's reads counter

> > +#define WORK_PENDING 0
> > +#define ACTIVATION_THRSHLD 4 /* 4 IOs */
> Rather than fixing it with a macro, how about using sysfs and making it
> configurable?
Yes.
I will add a patch making all the logic configurable.
As all those are hpb-related parameters, I think module parameters are more adequate.


>
> > @@ -306,12 +325,39 @@ void ufshpb_prep(struct ufs_hba *hba, struct
> ufshcd_lrb *lrbp)
> > ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
> > transfer_len);
> > spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
> > +
> > + if (ufshpb_mode == HPB_HOST_CONTROL)
> > + atomic64_set(&rgn->reads, 0);
> > +
> > return;
> > }
> >
> > + if (ufshpb_mode == HPB_HOST_CONTROL)
> > + reads = atomic64_inc_return(&rgn->reads);
> > +
> > if (!ufshpb_is_support_chunk(transfer_len))
> > return; <- *this*
> >
> > + if (ufshpb_mode == HPB_HOST_CONTROL) {
> > + /*
> > + * in host control mode, reads are the main source for
> > + * activation trials.
> > + */
> > + if (reads == ACTIVATION_THRSHLD) {
> If the chunk size is not supported, we can never activate this region:
> the function may return before reaching this statement.
Yes.
I already noticed that replying to Greg.
Fixed that when I dropped the use of atomic variables.

>
> > diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
> > index 8a34b0f42754..b0e78728af38 100644
> > --- a/drivers/scsi/ufs/ufshpb.h
> > +++ b/drivers/scsi/ufs/ufshpb.h
> > @@ -115,6 +115,9 @@ struct ufshpb_region {
> > /* below information is used by lru */
> > struct list_head list_lru_rgn;
> > unsigned long rgn_flags;
> > +
> > + /* region reads - for host mode */
> > + atomic64_t reads;
> I think 32 bits are sufficient, because the counter is normalized by the
> worker at a fixed interval.
Done.

2021-02-01 07:16:27

by Avri Altman

Subject: RE: [PATCH 4/8] scsi: ufshpb: Make eviction depends on region's reads

>
>
> Hi Avri,
>
> > + /*
> > + * in host control mode, verify that the exiting region
> > + * has less reads
> > + */
> > + if (ufshpb_mode == HPB_HOST_CONTROL &&
> > + atomic64_read(&rgn->reads) > (EVICTION_THRSHLD >> 1))
> Why do we use a shifted value to verify fewer reads? I think we should use a
> separate value for this check.
Yes.
Will make every logic parameters configurable.

2021-02-01 07:24:22

by Avri Altman

Subject: RE: [PATCH 6/8] scsi: ufshpb: Add hpb dev reset response

>
> Hi Avri,
>
> > + list_for_each_entry_safe(rgn, next_rgn, &lru_info->lh_lru_rgn,
> > + list_lru_rgn)
> How about replace list_for_each_entry_safe to list_for_each_entry?
Done.
Can also use the relaxed version in the timeout handler (patch 7/8).

Thanks,
Avri

2021-02-01 07:32:56

by Greg Kroah-Hartman

Subject: Re: [PATCH 3/8] scsi: ufshpb: Add region's reads counter

On Mon, Feb 01, 2021 at 07:12:53AM +0000, Avri Altman wrote:
> > > +#define WORK_PENDING 0
> > > +#define ACTIVATION_THRSHLD 4 /* 4 IOs */
> > Rather than fixing it with macro, how about using sysfs and make it
> > configurable?
> Yes.
> I will add a patch making all the logic configurable.
> As all those are hpb-related parameters, I think module parameters are more adequate.

No, this is not the 1990's, please never add new module parameters to
drivers, if only for the basic problem that they do not work on a
per-device basis, but on a per-driver basis, which is what you almost
never want.

But why would you want to change this value, why can't the driver "just
work" and not need manual intervention?

thanks,

greg k-h

2021-02-01 07:57:34

by Avri Altman

Subject: RE: [PATCH 3/8] scsi: ufshpb: Add region's reads counter

>
> On Mon, Feb 01, 2021 at 07:12:53AM +0000, Avri Altman wrote:
> > > > +#define WORK_PENDING 0
> > > > +#define ACTIVATION_THRSHLD 4 /* 4 IOs */
> > > Rather than fixing it with macro, how about using sysfs and make it
> > > configurable?
> > Yes.
> > I will add a patch making all the logic configurable.
> > As all those are hpb-related parameters, I think module parameters are
> more adequate.
>
> No, this is not the 1990's, please never add new module parameters to
> drivers. If not for the basic problem of they do not work on a
> per-device basis, but on a per-driver basis, which is what you almost
> never want.
OK.

>
> But why would you want to change this value, why can't the driver "just
> work" and not need manual intervention?
It does.
But those are knobs each vendor may want to tweak,
so the logic can be optimized for the device's internal implementation.

Tweaking the parameters, as well as the entire logic, is really an endless task.
Some logic works better for some scenarios, while falling behind on others.

How about leaving it for now, to be elaborated in the future?
Maybe it can even be part of a scheme to make the logic proprietary?

2021-02-01 08:03:15

by Greg Kroah-Hartman

Subject: Re: [PATCH 3/8] scsi: ufshpb: Add region's reads counter

On Mon, Feb 01, 2021 at 07:51:19AM +0000, Avri Altman wrote:
> >
> > On Mon, Feb 01, 2021 at 07:12:53AM +0000, Avri Altman wrote:
> > > > > +#define WORK_PENDING 0
> > > > > +#define ACTIVATION_THRSHLD 4 /* 4 IOs */
> > > > Rather than fixing it with macro, how about using sysfs and make it
> > > > configurable?
> > > Yes.
> > > I will add a patch making all the logic configurable.
> > > As all those are hpb-related parameters, I think module parameters are
> > more adequate.
> >
> > No, this is not the 1990's, please never add new module parameters to
> > drivers. If not for the basic problem of they do not work on a
> > per-device basis, but on a per-driver basis, which is what you almost
> > never want.
> OK.
>
> >
> > But why would you want to change this value, why can't the driver "just
> > work" and not need manual intervention?
> It is.
> But those are a knobs each vendor may want to tweak,
> So it'll be optimized with its internal device's implementation.
>
> Tweaking the parameters, as well as the entire logic, is really an endless task.
> Some logic works better for some scenarios, while falling behind on others.

Shouldn't the hardware know how to handle this dynamically? If not, how
is a user going to know?

> How about leaving it for now, to be elaborated in the future?

I do not care, just do not make it a module parameter for the reason
that does not work on a per-device basis.

> Maybe it can even be part of a scheme to make the logic proprietary?

What do you mean by "proprietary"?

thanks,

greg k-h

2021-02-01 08:22:44

by Avri Altman

Subject: RE: [PATCH 3/8] scsi: ufshpb: Add region's reads counter

>
> On Mon, Feb 01, 2021 at 07:51:19AM +0000, Avri Altman wrote:
> > >
> > > On Mon, Feb 01, 2021 at 07:12:53AM +0000, Avri Altman wrote:
> > > > > > +#define WORK_PENDING 0
> > > > > > +#define ACTIVATION_THRSHLD 4 /* 4 IOs */
> > > > > Rather than fixing it with macro, how about using sysfs and make it
> > > > > configurable?
> > > > Yes.
> > > > I will add a patch making all the logic configurable.
> > > > As all those are hpb-related parameters, I think module parameters are
> > > more adequate.
> > >
> > > No, this is not the 1990's, please never add new module parameters to
> > > drivers. If not for the basic problem of they do not work on a
> > > per-device basis, but on a per-driver basis, which is what you almost
> > > never want.
> > OK.
> >
> > >
> > > But why would you want to change this value, why can't the driver "just
> > > work" and not need manual intervention?
> > It is.
> > But those are a knobs each vendor may want to tweak,
> > So it'll be optimized with its internal device's implementation.
> >
> > Tweaking the parameters, as well as the entire logic, is really an endless
> task.
> > Some logic works better for some scenarios, while falling behind on others.
>
> Shouldn't the hardware know how to handle this dynamically? If not, how
> is a user going to know?
There is one "brain".
It is either in the device (device control mode), or in the host (host control mode).
The "brain" decides which region is active, thus carrying the physical address along with the logical -
minimizing context switches in the device's RAM.

There can be up to N active regions.
Activation and deactivation has its overhead.
So basically it is a constraint-optimization problem.

>
> > How about leaving it for now, to be elaborated it in the future?
>
> I do not care, just do not make it a module parameter for the reason
> that does not work on a per-device basis.
OK. Will make it a sysfs entry per hpb-lun, as Daejun proposed.

Thanks,
Avri

2021-02-01 08:47:48

by Greg Kroah-Hartman

Subject: Re: [PATCH 3/8] scsi: ufshpb: Add region's reads counter

On Mon, Feb 01, 2021 at 08:17:59AM +0000, Avri Altman wrote:
> >
> > On Mon, Feb 01, 2021 at 07:51:19AM +0000, Avri Altman wrote:
> > > >
> > > > On Mon, Feb 01, 2021 at 07:12:53AM +0000, Avri Altman wrote:
> > > > > > > +#define WORK_PENDING 0
> > > > > > > +#define ACTIVATION_THRSHLD 4 /* 4 IOs */
> > > > > > Rather than fixing it with macro, how about using sysfs and make it
> > > > > > configurable?
> > > > > Yes.
> > > > > I will add a patch making all the logic configurable.
> > > > > As all those are hpb-related parameters, I think module parameters are
> > > > more adequate.
> > > >
> > > > No, this is not the 1990's, please never add new module parameters to
> > > > drivers. If not for the basic problem of they do not work on a
> > > > per-device basis, but on a per-driver basis, which is what you almost
> > > > never want.
> > > OK.
> > >
> > > >
> > > > But why would you want to change this value, why can't the driver "just
> > > > work" and not need manual intervention?
> > > It is.
> > > But those are a knobs each vendor may want to tweak,
> > > So it'll be optimized with its internal device's implementation.
> > >
> > > Tweaking the parameters, as well as the entire logic, is really an endless
> > task.
> > > Some logic works better for some scenarios, while falling behind on others.
> >
> > Shouldn't the hardware know how to handle this dynamically? If not, how
> > is a user going to know?
> There is one "brain".
> It is either in the device - in device mode, Or in the host - in host mode control.
> The "brain" decides which region is active, thus carrying the physical address along with the logical -
> minimizing context switches in the device's RAM.
>
> There can be up to N active regions.
> Activation and deactivation has its overhead.
> So basically it is a constraint-optimization problem.

So how do you solve it? And how would you expect a user to solve it if
the kernel can not?

You better document the heck out of these configuration options :)

thanks,

greg k-h

2021-02-01 09:07:40

by Avri Altman

Subject: RE: [PATCH 3/8] scsi: ufshpb: Add region's reads counter


> On Mon, Feb 01, 2021 at 08:17:59AM +0000, Avri Altman wrote:
> > >
> > > On Mon, Feb 01, 2021 at 07:51:19AM +0000, Avri Altman wrote:
> > > > >
> > > > > On Mon, Feb 01, 2021 at 07:12:53AM +0000, Avri Altman wrote:
> > > > > > > > +#define WORK_PENDING 0
> > > > > > > > +#define ACTIVATION_THRSHLD 4 /* 4 IOs */
> > > > > > > Rather than fixing it with macro, how about using sysfs and make it
> > > > > > > configurable?
> > > > > > Yes.
> > > > > > I will add a patch making all the logic configurable.
> > > > > > As all those are hpb-related parameters, I think module parameters
> are
> > > > > more adequate.
> > > > >
> > > > > No, this is not the 1990's, please never add new module parameters to
> > > > > drivers. If not for the basic problem of they do not work on a
> > > > > per-device basis, but on a per-driver basis, which is what you almost
> > > > > never want.
> > > > OK.
> > > >
> > > > >
> > > > > But why would you want to change this value, why can't the driver
> "just
> > > > > work" and not need manual intervention?
> > > > It is.
> > > > But those are a knobs each vendor may want to tweak,
> > > > So it'll be optimized with its internal device's implementation.
> > > >
> > > > Tweaking the parameters, as well as the entire logic, is really an endless
> > > task.
> > > > Some logic works better for some scenarios, while falling behind on
> others.
> > >
> > > Shouldn't the hardware know how to handle this dynamically? If not, how
> > > is a user going to know?
> > There is one "brain".
> > It is either in the device - in device mode, Or in the host - in host mode
> control.
> > The "brain" decides which region is active, thus carrying the physical address
> along with the logical -
> > minimizing context switches in the device's RAM.
> >
> > There can be up to N active regions.
> > Activation and deactivation has its overhead.
> > So basically it is a constraint-optimization problem.
>
> So how do you solve it? And how would you expect a user to solve it if
> the kernel can not?
>
> You better document the heck out of these configuration options :)
Yes. Will do.

Thanks,
Avri