v7 -> v8:
- restore Daejun's atomic argument to ufshpb_get_req (v31)
- Add Daejun's Reviewed-by tag
v6 -> v7:
- address CanG's comments
- add one more patch to transform set_dirty to iterate_rgn
- rebase on Daejun's v32
v5 -> v6:
- address CanG's comments
- rebase on Daejun's v29
v4 -> v5:
- address Daejun's comments
- Control the number of inflight map requests
v3 -> v4:
- rebase on Daejun's v25
v2 -> v3:
- Address Greg's and Can's comments
- rebase on Daejun's v21
v1 -> v2:
- address Greg's and Daejun's comments
- add patch 9 making host mode parameters configurable
- rebase on Daejun's v19
The HPB spec defines 2 control modes - device control mode and host
control mode. As opposed to device control mode, in which the host
obeys whatever recommendations it receives from the device, in host
control mode the host uses its own algorithms to decide which regions
should be activated or inactivated.
We kept the host-managed heuristic simple and concise.
Aside from adding by-spec functionality, host control mode entails some
further potential benefits: it makes the HPB logic transparent and
readable, allows tuning / scaling its various parameters, and can
utilize system-wide info to optimize the HPB potential.
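
In a nutshell, the heuristic reduces to comparing per-(sub)region read
counters against a few thresholds. A standalone C sketch using this
series' default values (illustrative only, not the driver code):

        #include <stdbool.h>

        /* Default thresholds from this series; all configurable via sysfs. */
        #define ACTIVATION_THRESHOLD    8                               /* reads */
        #define EVICTION_THLD_ENTER     (ACTIVATION_THRESHOLD << 5)     /* 256 reads */
        #define EVICTION_THLD_EXIT      (ACTIVATION_THRESHOLD << 4)     /* 128 reads */

        /* A subregion becomes an activation candidate once its clean-read
         * counter crosses the threshold. */
        bool should_activate(unsigned int srgn_reads)
        {
                return srgn_reads >= ACTIVATION_THRESHOLD;
        }

        /* Eviction is an extreme measure: the entering region must be hot
         * enough, and the victim cold enough. */
        bool may_evict(unsigned int entering_reads, unsigned int exiting_reads)
        {
                return entering_reads >= EVICTION_THLD_ENTER &&
                       exiting_reads <= EVICTION_THLD_EXIT;
        }
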
This series is based on Samsung's V32 device-control HPB2.0 driver.
This version was tested on Galaxy S20 and Xiaomi Mi10 Pro.
Your meticulous review and testing are most welcome and appreciated.
Thanks,
Avri
Avri Altman (11):
scsi: ufshpb: Cache HPB Control mode on init
scsi: ufshpb: Add host control mode support to rsp_upiu
scsi: ufshpb: Transform set_dirty to iterate_rgn
scsi: ufshpb: Add reads counter
scsi: ufshpb: Make eviction depend on region's reads
scsi: ufshpb: Region inactivation in host mode
scsi: ufshpb: Add hpb dev reset response
scsi: ufshpb: Add "Cold" regions timer
scsi: ufshpb: Limit the number of inflight map requests
scsi: ufshpb: Add support for host control mode
scsi: ufshpb: Make host mode parameters configurable
Documentation/ABI/testing/sysfs-driver-ufs | 84 ++-
drivers/scsi/ufs/ufshcd.h | 2 +
drivers/scsi/ufs/ufshpb.c | 566 ++++++++++++++++++++-
drivers/scsi/ufs/ufshpb.h | 44 ++
4 files changed, 661 insertions(+), 35 deletions(-)
--
2.25.1
We can use this commit to elaborate some more on the host control mode
logic, explaining what role each and every variable plays.
While at it, make those parameters configurable.
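
For illustration, a minimal userspace sketch that tunes one of the new
parameters. The 0:0:0:0 device path and the value 16 are made up -
adjust both to the actual setup:

        #include <stdio.h>

        /* Hypothetical scsi_device path - varies per system. */
        #define PARAM_DIR "/sys/class/scsi_device/0:0:0:0/device/hpb_param_sysfs"

        int main(void)
        {
                FILE *f = fopen(PARAM_DIR "/activation_thld", "w");

                if (!f) {
                        perror("fopen");
                        return 1;
                }
                /* require 16 clean reads before a region is considered
                 * for activation */
                fprintf(f, "%d\n", 16);
                fclose(f);
                return 0;
        }
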
Signed-off-by: Avri Altman <[email protected]>
Reviewed-by: Daejun Park <[email protected]>
---
Documentation/ABI/testing/sysfs-driver-ufs | 84 +++++-
drivers/scsi/ufs/ufshpb.c | 288 +++++++++++++++++++--
drivers/scsi/ufs/ufshpb.h | 20 ++
3 files changed, 365 insertions(+), 27 deletions(-)
diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs
index 419adf450b89..133af2114165 100644
--- a/Documentation/ABI/testing/sysfs-driver-ufs
+++ b/Documentation/ABI/testing/sysfs-driver-ufs
@@ -1323,14 +1323,76 @@ Description: This entry shows the maximum HPB data size for using single HPB
The file is read only.
-What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_enable
-Date: March 2021
-Contact: Daejun Park <[email protected]>
-Description: This entry shows the status of HPB.
-
- == ============================
- 0 HPB is not enabled.
- 1 HPB is enabled
- == ============================
-
- The file is read only.
+What: /sys/class/scsi_device/*/device/hpb_param_sysfs/activation_thld
+Date: February 2021
+Contact: Avri Altman <[email protected]>
+Description: In host control mode, reads are the major source of activation
+ trials. Once this threshold has been met, the region is added to
+ the "to-be-activated" list. Since we reset the read counter upon
+ write, this includes sending an HPB READ BUFFER command to update
+ the region's ppn as well.
+
+What: /sys/class/scsi_device/*/device/hpb_param_sysfs/normalization_factor
+Date: February 2021
+Contact: Avri Altman <[email protected]>
+Description: In host control mode, we think of the regions as "buckets".
+ Those buckets are filled with reads and emptied on write.
+ We use entries_per_srgn - the number of blocks in a subregion -
+ as our bucket size. This applies because HPB1.0 only concerns
+ single-block reads. Once the bucket size is crossed, we trigger
+ a normalization work - not only to avoid overflow, but mainly
+ because we want to keep those counters normalized, as we are
+ using those reads as a comparative score, to make various
+ decisions. The normalization divides (shifts right) the read
+ counter by the normalization_factor. If during consecutive
+ normalizations an active region has exhausted its reads - inactivate it.
+
+What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_enter
+Date: February 2021
+Contact: Avri Altman <[email protected]>
+Description: Region deactivation is often due to the fact that eviction took
+ place: a region becomes active at the expense of another. This
+ happens when the max-active-regions limit has been crossed.
+ In host mode, eviction is considered an extreme measure. We
+ want to verify that the entering region has enough reads, and
+ that the exiting region has far fewer reads. eviction_thld_enter
+ is the minimum number of reads that a region must have in order
+ to be considered as a candidate for evicting another region.
+
+What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_exit
+Date: February 2021
+Contact: Avri Altman <[email protected]>
+Description: Same as above, for the exiting region. A region is considered
+ a candidate to be evicted only if it has fewer reads than
+ eviction_thld_exit.
+
+What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_ms
+Date: February 2021
+Contact: Avri Altman <[email protected]>
+Description: In order not to hang on to "cold" regions, we inactivate a
+ region that has seen no READ access for a predefined amount of
+ time - read_timeout_ms. If read_timeout_ms has expired and the
+ region is dirty, it is less likely that we can make any use of
+ HPB-READing it, so we inactivate it. Still, deactivation has
+ its overhead, and we may still benefit from HPB-READing this
+ region if it is clean - see read_timeout_expiries.
+
+What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_expiries
+Date: February 2021
+Contact: Avri Altman <[email protected]>
+Description: If the region read timeout has expired, but the region is clean,
+ just rewind its timer for another spin. Do that as long as it
+ is clean and has not exhausted its read_timeout_expiries threshold.
+
+What: /sys/class/scsi_device/*/device/hpb_param_sysfs/timeout_polling_interval_ms
+Date: February 2021
+Contact: Avri Altman <[email protected]>
+Description: The frequency at which the delayed worker that checks the
+ read_timeouts is awakened.
+
+What: /sys/class/scsi_device/*/device/hpb_param_sysfs/inflight_map_req
+Date: February 2021
+Contact: Avri Altman <[email protected]>
+Description: In host control mode, the host is the originator of map requests.
+ To avoid flooding the device with map requests, use a simple
+ throttling mechanism that limits the number of inflight map requests.
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 3c4001486cf1..d22f3ad8fc32 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -17,7 +17,6 @@
#include "../sd.h"
#define ACTIVATION_THRESHOLD 8 /* 8 IOs */
-#define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 5) /* 256 IOs */
#define READ_TO_MS 1000
#define READ_TO_EXPIRIES 100
#define POLLING_INTERVAL_MS 200
@@ -194,7 +193,7 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx,
} else {
srgn->reads++;
rgn->reads++;
- if (srgn->reads == ACTIVATION_THRESHOLD)
+ if (srgn->reads == hpb->params.activation_thld)
activate = true;
}
spin_unlock(&rgn->rgn_lock);
@@ -743,10 +742,11 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
struct bio *bio;
if (hpb->is_hcm &&
- hpb->num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT) {
+ hpb->num_inflight_map_req >= hpb->params.inflight_map_req) {
dev_info(&hpb->sdev_ufs_lu->sdev_dev,
"map_req throttle. inflight %d throttle %d",
- hpb->num_inflight_map_req, THROTTLE_MAP_REQ_DEFAULT);
+ hpb->num_inflight_map_req,
+ hpb->params.inflight_map_req);
return NULL;
}
@@ -1053,6 +1053,7 @@ static void ufshpb_read_to_handler(struct work_struct *work)
struct victim_select_info *lru_info = &hpb->lru_info;
struct ufshpb_region *rgn, *next_rgn;
unsigned long flags;
+ unsigned int poll;
LIST_HEAD(expired_list);
if (test_and_set_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits))
@@ -1071,7 +1072,7 @@ static void ufshpb_read_to_handler(struct work_struct *work)
list_add(&rgn->list_expired_rgn, &expired_list);
else
rgn->read_timeout = ktime_add_ms(ktime_get(),
- READ_TO_MS);
+ hpb->params.read_timeout_ms);
}
}
@@ -1089,8 +1090,9 @@ static void ufshpb_read_to_handler(struct work_struct *work)
clear_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits);
+ poll = hpb->params.timeout_polling_interval_ms;
schedule_delayed_work(&hpb->ufshpb_read_to_work,
- msecs_to_jiffies(POLLING_INTERVAL_MS));
+ msecs_to_jiffies(poll));
}
static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
@@ -1100,8 +1102,11 @@ static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
atomic_inc(&lru_info->active_cnt);
if (rgn->hpb->is_hcm) {
- rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS);
- rgn->read_timeout_expiries = READ_TO_EXPIRIES;
+ rgn->read_timeout =
+ ktime_add_ms(ktime_get(),
+ rgn->hpb->params.read_timeout_ms);
+ rgn->read_timeout_expiries =
+ rgn->hpb->params.read_timeout_expiries;
}
}
@@ -1130,7 +1135,8 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
* in host control mode, verify that the exiting region
* has less reads
*/
- if (hpb->is_hcm && rgn->reads > (EVICTION_THRESHOLD >> 1))
+ if (hpb->is_hcm &&
+ rgn->reads > hpb->params.eviction_thld_exit)
continue;
victim_rgn = rgn;
@@ -1351,7 +1357,8 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
* in host control mode, verify that the entering
* region has enough reads
*/
- if (hpb->is_hcm && rgn->reads < EVICTION_THRESHOLD) {
+ if (hpb->is_hcm &&
+ rgn->reads < hpb->params.eviction_thld_enter) {
ret = -EACCES;
goto out;
}
@@ -1702,6 +1709,7 @@ static void ufshpb_normalization_work_handler(struct work_struct *work)
struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu,
ufshpb_normalization_work);
int rgn_idx;
+ u8 factor = hpb->params.normalization_factor;
for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx;
@@ -1712,7 +1720,7 @@ static void ufshpb_normalization_work_handler(struct work_struct *work)
for (srgn_idx = 0; srgn_idx < hpb->srgns_per_rgn; srgn_idx++) {
struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx;
- srgn->reads >>= 1;
+ srgn->reads >>= factor;
rgn->reads += srgn->reads;
}
spin_unlock(&rgn->rgn_lock);
@@ -2036,8 +2044,247 @@ requeue_timeout_ms_store(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RW(requeue_timeout_ms);
+ufshpb_sysfs_param_show_func(activation_thld);
+static ssize_t
+activation_thld_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+ int val;
+
+ if (!hpb)
+ return -ENODEV;
+
+ if (!hpb->is_hcm)
+ return -EOPNOTSUPP;
+
+ if (kstrtouint(buf, 0, &val))
+ return -EINVAL;
+
+ if (val <= 0)
+ return -EINVAL;
+
+ hpb->params.activation_thld = val;
+
+ return count;
+}
+static DEVICE_ATTR_RW(activation_thld);
+
+ufshpb_sysfs_param_show_func(normalization_factor);
+static ssize_t
+normalization_factor_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+ int val;
+
+ if (!hpb)
+ return -ENODEV;
+
+ if (!hpb->is_hcm)
+ return -EOPNOTSUPP;
+
+ if (kstrtouint(buf, 0, &val))
+ return -EINVAL;
+
+ if (val <= 0 || val > ilog2(hpb->entries_per_srgn))
+ return -EINVAL;
+
+ hpb->params.normalization_factor = val;
+
+ return count;
+}
+static DEVICE_ATTR_RW(normalization_factor);
+
+ufshpb_sysfs_param_show_func(eviction_thld_enter);
+static ssize_t
+eviction_thld_enter_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+ int val;
+
+ if (!hpb)
+ return -ENODEV;
+
+ if (!hpb->is_hcm)
+ return -EOPNOTSUPP;
+
+ if (kstrtouint(buf, 0, &val))
+ return -EINVAL;
+
+ if (val <= hpb->params.eviction_thld_exit)
+ return -EINVAL;
+
+ hpb->params.eviction_thld_enter = val;
+
+ return count;
+}
+static DEVICE_ATTR_RW(eviction_thld_enter);
+
+ufshpb_sysfs_param_show_func(eviction_thld_exit);
+static ssize_t
+eviction_thld_exit_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+ int val;
+
+ if (!hpb)
+ return -ENODEV;
+
+ if (!hpb->is_hcm)
+ return -EOPNOTSUPP;
+
+ if (kstrtouint(buf, 0, &val))
+ return -EINVAL;
+
+ if (val <= hpb->params.activation_thld)
+ return -EINVAL;
+
+ hpb->params.eviction_thld_exit = val;
+
+ return count;
+}
+static DEVICE_ATTR_RW(eviction_thld_exit);
+
+ufshpb_sysfs_param_show_func(read_timeout_ms);
+static ssize_t
+read_timeout_ms_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+ int val;
+
+ if (!hpb)
+ return -ENODEV;
+
+ if (!hpb->is_hcm)
+ return -EOPNOTSUPP;
+
+ if (kstrtouint(buf, 0, &val))
+ return -EINVAL;
+
+ /* read_timeout >> timeout_polling_interval */
+ if (val < hpb->params.timeout_polling_interval_ms * 2)
+ return -EINVAL;
+
+ hpb->params.read_timeout_ms = val;
+
+ return count;
+}
+static DEVICE_ATTR_RW(read_timeout_ms);
+
+ufshpb_sysfs_param_show_func(read_timeout_expiries);
+static ssize_t
+read_timeout_expiries_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+ int val;
+
+ if (!hpb)
+ return -ENODEV;
+
+ if (!hpb->is_hcm)
+ return -EOPNOTSUPP;
+
+ if (kstrtouint(buf, 0, &val))
+ return -EINVAL;
+
+ if (val <= 0)
+ return -EINVAL;
+
+ hpb->params.read_timeout_expiries = val;
+
+ return count;
+}
+static DEVICE_ATTR_RW(read_timeout_expiries);
+
+ufshpb_sysfs_param_show_func(timeout_polling_interval_ms);
+static ssize_t
+timeout_polling_interval_ms_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+ int val;
+
+ if (!hpb)
+ return -ENODEV;
+
+ if (!hpb->is_hcm)
+ return -EOPNOTSUPP;
+
+ if (kstrtouint(buf, 0, &val))
+ return -EINVAL;
+
+ /* timeout_polling_interval << read_timeout */
+ if (val <= 0 || val > hpb->params.read_timeout_ms / 2)
+ return -EINVAL;
+
+ hpb->params.timeout_polling_interval_ms = val;
+
+ return count;
+}
+static DEVICE_ATTR_RW(timeout_polling_interval_ms);
+
+ufshpb_sysfs_param_show_func(inflight_map_req);
+static ssize_t inflight_map_req_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+ int val;
+
+ if (!hpb)
+ return -ENODEV;
+
+ if (!hpb->is_hcm)
+ return -EOPNOTSUPP;
+
+ if (kstrtouint(buf, 0, &val))
+ return -EINVAL;
+
+ if (val <= 0 || val > hpb->sdev_ufs_lu->queue_depth - 1)
+ return -EINVAL;
+
+ hpb->params.inflight_map_req = val;
+
+ return count;
+}
+static DEVICE_ATTR_RW(inflight_map_req);
+
+static void ufshpb_hcm_param_init(struct ufshpb_lu *hpb)
+{
+ hpb->params.activation_thld = ACTIVATION_THRESHOLD;
+ hpb->params.normalization_factor = 1;
+ hpb->params.eviction_thld_enter = (ACTIVATION_THRESHOLD << 5);
+ hpb->params.eviction_thld_exit = (ACTIVATION_THRESHOLD << 4);
+ hpb->params.read_timeout_ms = READ_TO_MS;
+ hpb->params.read_timeout_expiries = READ_TO_EXPIRIES;
+ hpb->params.timeout_polling_interval_ms = POLLING_INTERVAL_MS;
+ hpb->params.inflight_map_req = THROTTLE_MAP_REQ_DEFAULT;
+}
+
static struct attribute *hpb_dev_param_attrs[] = {
&dev_attr_requeue_timeout_ms.attr,
+ &dev_attr_activation_thld.attr,
+ &dev_attr_normalization_factor.attr,
+ &dev_attr_eviction_thld_enter.attr,
+ &dev_attr_eviction_thld_exit.attr,
+ &dev_attr_read_timeout_ms.attr,
+ &dev_attr_read_timeout_expiries.attr,
+ &dev_attr_timeout_polling_interval_ms.attr,
+ &dev_attr_inflight_map_req.attr,
NULL,
};
@@ -2121,6 +2368,8 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb)
static void ufshpb_param_init(struct ufshpb_lu *hpb)
{
hpb->params.requeue_timeout_ms = HPB_REQUEUE_TIME_MS;
+ if (hpb->is_hcm)
+ ufshpb_hcm_param_init(hpb);
}
static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
@@ -2175,9 +2424,13 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
ufshpb_stat_init(hpb);
ufshpb_param_init(hpb);
- if (hpb->is_hcm)
+ if (hpb->is_hcm) {
+ unsigned int poll;
+
+ poll = hpb->params.timeout_polling_interval_ms;
schedule_delayed_work(&hpb->ufshpb_read_to_work,
- msecs_to_jiffies(POLLING_INTERVAL_MS));
+ msecs_to_jiffies(poll));
+ }
return 0;
@@ -2356,10 +2609,13 @@ void ufshpb_resume(struct ufs_hba *hba)
continue;
ufshpb_set_state(hpb, HPB_PRESENT);
ufshpb_kick_map_work(hpb);
- if (hpb->is_hcm)
- schedule_delayed_work(&hpb->ufshpb_read_to_work,
- msecs_to_jiffies(POLLING_INTERVAL_MS));
+ if (hpb->is_hcm) {
+ unsigned int poll =
+ hpb->params.timeout_polling_interval_ms;
+ schedule_delayed_work(&hpb->ufshpb_read_to_work,
+ msecs_to_jiffies(poll));
+ }
}
}
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index cfa0abac21db..68a5af0ff682 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -185,8 +185,28 @@ struct victim_select_info {
atomic_t active_cnt;
};
+/**
+ * ufshpb_params - ufs hpb parameters
+ * @requeue_timeout_ms - requeue threshold of wb command (0x2)
+ * @activation_thld - min reads [IOs] to activate/update a region
+ * @normalization_factor - shift right the region's reads
+ * @eviction_thld_enter - min reads [IOs] for the entering region in eviction
+ * @eviction_thld_exit - max reads [IOs] for the exiting region in eviction
+ * @read_timeout_ms - timeout [ms] from the last read IO to the region
+ * @read_timeout_expiries - allowable number of timeout expiries
+ * @timeout_polling_interval_ms - frequency in which timeouts are checked
+ * @inflight_map_req - number of inflight map requests
+ */
struct ufshpb_params {
unsigned int requeue_timeout_ms;
+ unsigned int activation_thld;
+ unsigned int normalization_factor;
+ unsigned int eviction_thld_enter;
+ unsigned int eviction_thld_exit;
+ unsigned int read_timeout_ms;
+ unsigned int read_timeout_expiries;
+ unsigned int timeout_polling_interval_ms;
+ unsigned int inflight_map_req;
};
struct ufshpb_stats {
--
2.25.1
Support devices that report they are using host control mode.
Signed-off-by: Avri Altman <[email protected]>
Reviewed-by: Daejun Park <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 0f72ea3f5d71..3c4001486cf1 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -2587,12 +2587,6 @@ void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf)
u32 max_hpb_single_cmd = HPB_MULTI_CHUNK_LOW;
hpb_dev_info->control_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL];
- if (hpb_dev_info->control_mode == HPB_HOST_CONTROL) {
- dev_err(hba->dev, "%s: host control mode is not supported.\n",
- __func__);
- hpb_dev_info->hpb_disabled = true;
- return;
- }
version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER);
if ((version != HPB_SUPPORT_VERSION) &&
--
2.25.1
In host control mode, reads are the major source of activation trials.
Keep track of those read counters, for both active as well as inactive
regions.
We reset the read counter upon write - we are only interested in "clean"
reads.
Keep those counters normalized, as we are using those reads as a
comparative score, to make various decisions.
If during consecutive normalizations an active region has exhausted its
reads - inactivate it.
While at it, protect the {active,inactive}_count stats by adding them
into the applicable handler.
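
A standalone sketch of the normalization arithmetic (plain C; the
counter values and subregion count are made up for illustration):

        #include <stdio.h>

        #define SRGNS_PER_RGN   4
        #define NORM_FACTOR     1       /* shift right by 1, i.e. halve */

        int main(void)
        {
                unsigned int srgn_reads[SRGNS_PER_RGN] = { 9, 3, 0, 20 };
                unsigned int rgn_reads = 0;
                int i;

                /* 9,3,0,20 -> 4,1,0,10; the region total becomes 15. An
                 * active region whose total drops to 0 is queued for
                 * inactivation. */
                for (i = 0; i < SRGNS_PER_RGN; i++) {
                        srgn_reads[i] >>= NORM_FACTOR;
                        rgn_reads += srgn_reads[i];
                }
                printf("region reads after normalization: %u\n", rgn_reads);
                return 0;
        }
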
Signed-off-by: Avri Altman <[email protected]>
Reviewed-by: Daejun Park <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 94 ++++++++++++++++++++++++++++++++++++---
drivers/scsi/ufs/ufshpb.h | 9 ++++
2 files changed, 97 insertions(+), 6 deletions(-)
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 252fcfb48862..3ab66421dc00 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -16,6 +16,8 @@
#include "ufshpb.h"
#include "../sd.h"
+#define ACTIVATION_THRESHOLD 8 /* 8 IOs */
+
/* memory management */
static struct kmem_cache *ufshpb_mctx_cache;
static mempool_t *ufshpb_mctx_pool;
@@ -26,6 +28,9 @@ static int tot_active_srgn_pages;
static struct workqueue_struct *ufshpb_wq;
+static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
+ int srgn_idx);
+
bool ufshpb_is_allowed(struct ufs_hba *hba)
{
return !(hba->ufshpb_dev.hpb_disabled);
@@ -148,7 +153,7 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx,
int srgn_offset, int cnt, bool set_dirty)
{
struct ufshpb_region *rgn;
- struct ufshpb_subregion *srgn;
+ struct ufshpb_subregion *srgn, *prev_srgn = NULL;
int set_bit_len;
int bitmap_len;
unsigned long flags;
@@ -167,15 +172,39 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx,
else
set_bit_len = cnt;
- if (set_dirty)
- set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
-
spin_lock_irqsave(&hpb->rgn_state_lock, flags);
if (set_dirty && rgn->rgn_state != HPB_RGN_INACTIVE &&
srgn->srgn_state == HPB_SRGN_VALID)
bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len);
spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+ if (hpb->is_hcm && prev_srgn != srgn) {
+ bool activate = false;
+
+ spin_lock(&rgn->rgn_lock);
+ if (set_dirty) {
+ rgn->reads -= srgn->reads;
+ srgn->reads = 0;
+ set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+ } else {
+ srgn->reads++;
+ rgn->reads++;
+ if (srgn->reads == ACTIVATION_THRESHOLD)
+ activate = true;
+ }
+ spin_unlock(&rgn->rgn_lock);
+
+ if (activate) {
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+ dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+ "activate region %d-%d\n", rgn_idx, srgn_idx);
+ }
+
+ prev_srgn = srgn;
+ }
+
srgn_offset = 0;
if (++srgn_idx == hpb->srgns_per_rgn) {
srgn_idx = 0;
@@ -604,6 +633,19 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
if (!ufshpb_is_support_chunk(hpb, transfer_len))
return 0;
+ if (hpb->is_hcm) {
+ /*
+ * in host control mode, reads are the main source for
+ * activation trials.
+ */
+ ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset,
+ transfer_len, false);
+
+ /* keep those counters normalized */
+ if (rgn->reads > hpb->entries_per_srgn)
+ schedule_work(&hpb->ufshpb_normalization_work);
+ }
+
spin_lock_irqsave(&hpb->rgn_state_lock, flags);
if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
transfer_len)) {
@@ -755,6 +797,8 @@ static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
if (list_empty(&srgn->list_act_srgn))
list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+
+ hpb->stats.rb_active_cnt++;
}
static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)
@@ -770,6 +814,8 @@ static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)
if (list_empty(&rgn->list_inact_rgn))
list_add_tail(&rgn->list_inact_rgn, &hpb->lh_inact_rgn);
+
+ hpb->stats.rb_inactive_cnt++;
}
static void ufshpb_activate_subregion(struct ufshpb_lu *hpb,
@@ -1090,6 +1136,7 @@ static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
rgn->rgn_idx);
goto out;
}
+
if (!list_empty(&rgn->list_lru_rgn)) {
if (ufshpb_check_srgns_issue_state(hpb, rgn)) {
ret = -EBUSY;
@@ -1284,7 +1331,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
if (srgn->srgn_state == HPB_SRGN_VALID)
srgn->srgn_state = HPB_SRGN_INVALID;
spin_unlock(&hpb->rgn_state_lock);
- hpb->stats.rb_active_cnt++;
}
if (hpb->is_hcm) {
@@ -1316,7 +1362,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
}
spin_unlock(&hpb->rgn_state_lock);
- hpb->stats.rb_inactive_cnt++;
}
out:
@@ -1515,6 +1560,36 @@ static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
}
+static void ufshpb_normalization_work_handler(struct work_struct *work)
+{
+ struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu,
+ ufshpb_normalization_work);
+ int rgn_idx;
+
+ for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+ struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx;
+ int srgn_idx;
+
+ spin_lock(&rgn->rgn_lock);
+ rgn->reads = 0;
+ for (srgn_idx = 0; srgn_idx < hpb->srgns_per_rgn; srgn_idx++) {
+ struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx;
+
+ srgn->reads >>= 1;
+ rgn->reads += srgn->reads;
+ }
+ spin_unlock(&rgn->rgn_lock);
+
+ if (rgn->rgn_state != HPB_RGN_ACTIVE || rgn->reads)
+ continue;
+
+ /* if region is active but has no reads - inactivate it */
+ spin_lock(&hpb->rsp_list_lock);
+ ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
+ spin_unlock(&hpb->rsp_list_lock);
+ }
+}
+
static void ufshpb_map_work_handler(struct work_struct *work)
{
struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work);
@@ -1674,6 +1749,8 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
rgn = rgn_table + rgn_idx;
rgn->rgn_idx = rgn_idx;
+ spin_lock_init(&rgn->rgn_lock);
+
INIT_LIST_HEAD(&rgn->list_inact_rgn);
INIT_LIST_HEAD(&rgn->list_lru_rgn);
@@ -1915,6 +1992,9 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
INIT_LIST_HEAD(&hpb->list_hpb_lu);
INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
+ if (hpb->is_hcm)
+ INIT_WORK(&hpb->ufshpb_normalization_work,
+ ufshpb_normalization_work_handler);
hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -2014,6 +2094,8 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
{
+ if (hpb->is_hcm)
+ cancel_work_sync(&hpb->ufshpb_normalization_work);
cancel_work_sync(&hpb->map_work);
}
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 032672114881..87495e59fcf1 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -106,6 +106,10 @@ struct ufshpb_subregion {
int rgn_idx;
int srgn_idx;
bool is_last;
+
+ /* subregion reads - for host mode */
+ unsigned int reads;
+
/* below information is used by rsp_list */
struct list_head list_act_srgn;
};
@@ -123,6 +127,10 @@ struct ufshpb_region {
struct list_head list_lru_rgn;
unsigned long rgn_flags;
#define RGN_FLAG_DIRTY 0
+
+ /* region reads - for host mode */
+ spinlock_t rgn_lock;
+ unsigned int reads;
};
#define for_each_sub_region(rgn, i, srgn) \
@@ -212,6 +220,7 @@ struct ufshpb_lu {
/* for selecting victim */
struct victim_select_info lru_info;
+ struct work_struct ufshpb_normalization_work;
/* pinned region information */
u32 lu_pinned_start;
--
2.25.1
In order not to hang on to "cold" regions, we inactivate a region that
has seen no READ access for a predefined amount of time - READ_TO_MS.
For that purpose we monitor the active regions list, polling it every
POLLING_INTERVAL_MS. On timeout expiry we add the region to the
"to-be-inactivated" list, unless it is clean and has not exhausted its
READ_TO_EXPIRIES budget - another parameter.
All this does not apply to pinned regions.
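
The per-region decision on every polling tick boils down to the
following (a standalone C sketch; the names mirror the patch, but this
is not the driver code):

        #include <stdbool.h>
        #include <stdio.h>

        /* Simplified stand-in for the driver's region state. */
        struct rgn {
                bool dirty;
                unsigned int read_timeout_expiries;
                long long read_timeout;         /* absolute deadline, ms */
        };

        /* true: move the region to the expired ("to-be-inactivated") list;
         * false: leave it active (possibly with a re-armed timer). */
        bool rgn_timed_out(struct rgn *rgn, long long now, long long read_to_ms)
        {
                if (now <= rgn->read_timeout)
                        return false;           /* not expired yet */

                rgn->read_timeout_expiries--;
                if (rgn->dirty || rgn->read_timeout_expiries == 0)
                        return true;            /* inactivate */

                rgn->read_timeout = now + read_to_ms;   /* clean: rewind */
                return false;
        }

        int main(void)
        {
                struct rgn clean = { .dirty = false,
                                     .read_timeout_expiries = 100,
                                     .read_timeout = 1000 };

                /* a clean region past its deadline is merely re-armed */
                printf("expired: %d\n", rgn_timed_out(&clean, 2000, 1000));
                printf("new deadline: %lld\n", clean.read_timeout);
                return 0;
        }
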
Signed-off-by: Avri Altman <[email protected]>
Reviewed-by: Daejun Park <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 74 +++++++++++++++++++++++++++++++++++++--
drivers/scsi/ufs/ufshpb.h | 8 +++++
2 files changed, 79 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 0352d269c1e9..8d067cf72ae2 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -18,6 +18,9 @@
#define ACTIVATION_THRESHOLD 8 /* 8 IOs */
#define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 5) /* 256 IOs */
+#define READ_TO_MS 1000
+#define READ_TO_EXPIRIES 100
+#define POLLING_INTERVAL_MS 200
/* memory management */
static struct kmem_cache *ufshpb_mctx_cache;
@@ -1032,12 +1035,63 @@ static int ufshpb_check_srgns_issue_state(struct ufshpb_lu *hpb,
return 0;
}
+static void ufshpb_read_to_handler(struct work_struct *work)
+{
+ struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu,
+ ufshpb_read_to_work.work);
+ struct victim_select_info *lru_info = &hpb->lru_info;
+ struct ufshpb_region *rgn, *next_rgn;
+ unsigned long flags;
+ LIST_HEAD(expired_list);
+
+ if (test_and_set_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits))
+ return;
+
+ spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+
+ list_for_each_entry_safe(rgn, next_rgn, &lru_info->lh_lru_rgn,
+ list_lru_rgn) {
+ bool timedout = ktime_after(ktime_get(), rgn->read_timeout);
+
+ if (timedout) {
+ rgn->read_timeout_expiries--;
+ if (is_rgn_dirty(rgn) ||
+ rgn->read_timeout_expiries == 0)
+ list_add(&rgn->list_expired_rgn, &expired_list);
+ else
+ rgn->read_timeout = ktime_add_ms(ktime_get(),
+ READ_TO_MS);
+ }
+ }
+
+ spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+ list_for_each_entry_safe(rgn, next_rgn, &expired_list,
+ list_expired_rgn) {
+ list_del_init(&rgn->list_expired_rgn);
+ spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+ ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
+ spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+ }
+
+ ufshpb_kick_map_work(hpb);
+
+ clear_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits);
+
+ schedule_delayed_work(&hpb->ufshpb_read_to_work,
+ msecs_to_jiffies(POLLING_INTERVAL_MS));
+}
+
static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
struct ufshpb_region *rgn)
{
rgn->rgn_state = HPB_RGN_ACTIVE;
list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
atomic_inc(&lru_info->active_cnt);
+ if (rgn->hpb->is_hcm) {
+ rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS);
+ rgn->read_timeout_expiries = READ_TO_EXPIRIES;
+ }
}
static void ufshpb_hit_lru_info(struct victim_select_info *lru_info,
@@ -1825,6 +1879,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
INIT_LIST_HEAD(&rgn->list_inact_rgn);
INIT_LIST_HEAD(&rgn->list_lru_rgn);
+ INIT_LIST_HEAD(&rgn->list_expired_rgn);
if (rgn_idx == hpb->rgns_per_lu - 1) {
srgn_cnt = ((hpb->srgns_per_lu - 1) %
@@ -1846,6 +1901,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
}
rgn->rgn_flags = 0;
+ rgn->hpb = hpb;
}
return 0;
@@ -2069,9 +2125,12 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
INIT_LIST_HEAD(&hpb->list_hpb_lu);
INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
- if (hpb->is_hcm)
+ if (hpb->is_hcm) {
INIT_WORK(&hpb->ufshpb_normalization_work,
ufshpb_normalization_work_handler);
+ INIT_DELAYED_WORK(&hpb->ufshpb_read_to_work,
+ ufshpb_read_to_handler);
+ }
hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -2105,6 +2164,10 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
ufshpb_stat_init(hpb);
ufshpb_param_init(hpb);
+ if (hpb->is_hcm)
+ schedule_delayed_work(&hpb->ufshpb_read_to_work,
+ msecs_to_jiffies(POLLING_INTERVAL_MS));
+
return 0;
release_pre_req_mempool:
@@ -2171,9 +2234,10 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
{
- if (hpb->is_hcm)
+ if (hpb->is_hcm) {
+ cancel_delayed_work_sync(&hpb->ufshpb_read_to_work);
cancel_work_sync(&hpb->ufshpb_normalization_work);
-
+ }
cancel_work_sync(&hpb->map_work);
}
@@ -2281,6 +2345,10 @@ void ufshpb_resume(struct ufs_hba *hba)
continue;
ufshpb_set_state(hpb, HPB_PRESENT);
ufshpb_kick_map_work(hpb);
+ if (hpb->is_hcm)
+ schedule_delayed_work(&hpb->ufshpb_read_to_work,
+ msecs_to_jiffies(POLLING_INTERVAL_MS));
+
}
}
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index b863540e28d6..448062a94760 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -115,6 +115,7 @@ struct ufshpb_subregion {
};
struct ufshpb_region {
+ struct ufshpb_lu *hpb;
struct ufshpb_subregion *srgn_tbl;
enum HPB_RGN_STATE rgn_state;
int rgn_idx;
@@ -132,6 +133,10 @@ struct ufshpb_region {
/* region reads - for host mode */
spinlock_t rgn_lock;
unsigned int reads;
+ /* region "cold" timer - for host mode */
+ ktime_t read_timeout;
+ unsigned int read_timeout_expiries;
+ struct list_head list_expired_rgn;
};
#define for_each_sub_region(rgn, i, srgn) \
@@ -223,6 +228,9 @@ struct ufshpb_lu {
/* for selecting victim */
struct victim_select_info lru_info;
struct work_struct ufshpb_normalization_work;
+ struct delayed_work ufshpb_read_to_work;
+ unsigned long work_data_bits;
+#define TIMEOUT_WORK_RUNNING 0
/* pinned region information */
u32 lu_pinned_start;
--
2.25.1
In host control mode, the host is the originator of map requests. To
avoid flooding the device with map requests, use a simple throttling
mechanism that limits the number of inflight map requests.
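
The throttle itself is just a counter checked on the issue path, as in
this standalone sketch (no locking shown; in the driver the counter
lives in struct ufshpb_lu):

        #include <stdbool.h>
        #include <stdio.h>

        #define THROTTLE_MAP_REQ_DEFAULT        1

        static int num_inflight_map_req;

        /* returns false when throttled - the caller simply gets no map
         * request, and the activation trial is retried later */
        static bool map_req_may_issue(void)
        {
                if (num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT)
                        return false;
                num_inflight_map_req++;
                return true;
        }

        static void map_req_done(void)
        {
                num_inflight_map_req--;
        }

        int main(void)
        {
                bool first = map_req_may_issue();       /* true */
                bool second = map_req_may_issue();      /* false: throttled */

                printf("first=%d second=%d\n", first, second);
                map_req_done();
                return 0;
        }
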
Signed-off-by: Avri Altman <[email protected]>
Reviewed-by: Daejun Park <[email protected]>
---
drivers/scsi/ufs/ufshpb.c | 11 +++++++++++
drivers/scsi/ufs/ufshpb.h | 1 +
2 files changed, 12 insertions(+)
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 8d067cf72ae2..0f72ea3f5d71 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -21,6 +21,7 @@
#define READ_TO_MS 1000
#define READ_TO_EXPIRIES 100
#define POLLING_INTERVAL_MS 200
+#define THROTTLE_MAP_REQ_DEFAULT 1
/* memory management */
static struct kmem_cache *ufshpb_mctx_cache;
@@ -741,6 +742,14 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
struct ufshpb_req *map_req;
struct bio *bio;
+ if (hpb->is_hcm &&
+ hpb->num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT) {
+ dev_info(&hpb->sdev_ufs_lu->sdev_dev,
+ "map_req throttle. inflight %d throttle %d",
+ hpb->num_inflight_map_req, THROTTLE_MAP_REQ_DEFAULT);
+ return NULL;
+ }
+
map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_SCSI_IN, false);
if (!map_req)
return NULL;
@@ -755,6 +764,7 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
map_req->rb.srgn_idx = srgn->srgn_idx;
map_req->rb.mctx = srgn->mctx;
+ hpb->num_inflight_map_req++;
return map_req;
}
@@ -764,6 +774,7 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb,
{
bio_put(map_req->bio);
ufshpb_put_req(hpb, map_req);
+ hpb->num_inflight_map_req--;
}
static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 448062a94760..cfa0abac21db 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -217,6 +217,7 @@ struct ufshpb_lu {
struct ufshpb_req *pre_req;
int num_inflight_pre_req;
int throttle_pre_req;
+ int num_inflight_map_req;
struct list_head lh_pre_req_free;
int cur_read_id;
int pre_req_min_tr_len;
--
2.25.1