As reported in [0], DMA mappings whose size exceeds the IOMMU IOVA caching
limit may suffer a significant performance hit.
This series introduces a new DMA mapping API, dma_opt_mapping_size(), so
that drivers may know this limit when performance is a factor in the
mapping.
Only the SCSI SAS transport code is modified to use this limit. For now I
did not want to touch other hosts, as I am concerned that this change
could cause a performance regression for them.
I also added a patch for libata-scsi as it does not currently honour the
shost max_sectors limit.
[0] https://lore.kernel.org/linux-iommu/[email protected]/
Changes since v5:
- Rebase to Linux 5.19-rc6
- Add Martin's tags to unmodified patches (thanks!)
- Apply DMA opt limit to max_sectors in sd driver, and not max_hw_sectors
Changes since v4:
- tweak libata and other patch titles
- Add Robin's tag (thanks!)
- Clarify description of new DMA mapping API
Changes since v3:
- Apply max DMA optimal limit to SAS hosts only
Note: Even though "scsi: core: Cap shost max_sectors only once when
adding" is a subset of a previous patch, I did not carry over the
Reviewed-by tags
- Rebase on v5.19-rc4
John Garry (6):
dma-mapping: Add dma_opt_mapping_size()
dma-iommu: Add iommu_dma_opt_mapping_size()
scsi: core: Cap shost max_sectors according to DMA limits only once
scsi: sd: Allow max_sectors be capped at DMA optimal size limit
scsi: scsi_transport_sas: Cap shost opt_sectors according to DMA
optimal limit
ata: libata-scsi: Cap ata_device->max_sectors according to
shost->max_sectors
Documentation/core-api/dma-api.rst | 14 ++++++++++++++
drivers/ata/libata-scsi.c | 1 +
drivers/iommu/dma-iommu.c | 6 ++++++
drivers/iommu/iova.c | 5 +++++
drivers/scsi/hosts.c | 5 +++++
drivers/scsi/scsi_lib.c | 4 ----
drivers/scsi/scsi_transport_sas.c | 6 ++++++
drivers/scsi/sd.c | 2 ++
include/linux/dma-map-ops.h | 1 +
include/linux/dma-mapping.h | 5 +++++
include/linux/iova.h | 2 ++
include/scsi/scsi_host.h | 1 +
kernel/dma/mapping.c | 12 ++++++++++++
13 files changed, 60 insertions(+), 4 deletions(-)
--
2.35.3
Streaming DMA mappings involving an IOMMU may be much slower for larger
total mapping sizes. This is because every IOMMU DMA mapping requires an
IOVA to be allocated and freed. IOVA sizes above a certain limit are not
cached, which can have a big impact on DMA mapping performance.
Provide an API for device drivers to query this "optimal" limit, such that
they may try to produce mappings which don't exceed it.
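As a hedged illustration of the intended use (foo_max_transfer_bytes and
FOO_MAX_XFER_BYTES are hypothetical names, only dma_opt_mapping_size()
itself comes from this patch), a driver might cap its per-request mapping
length like so:

    #include <linux/dma-mapping.h>
    #include <linux/minmax.h>
    #include <linux/sizes.h>

    #define FOO_MAX_XFER_BYTES	SZ_1M	/* hypothetical hardware limit */

    /*
     * Cap the per-request transfer length at the optimal DMA mapping
     * size, so that each streaming mapping stays within the range the
     * IOMMU layer can service quickly.
     */
    static size_t foo_max_transfer_bytes(struct device *dma_dev)
    {
            size_t opt = dma_opt_mapping_size(dma_dev);

            return min_t(size_t, FOO_MAX_XFER_BYTES, opt);
    }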
Signed-off-by: John Garry <[email protected]>
Reviewed-by: Damien Le Moal <[email protected]>
Acked-by: Martin K. Petersen <[email protected]>
---
Documentation/core-api/dma-api.rst | 14 ++++++++++++++
include/linux/dma-map-ops.h | 1 +
include/linux/dma-mapping.h | 5 +++++
kernel/dma/mapping.c | 12 ++++++++++++
4 files changed, 32 insertions(+)
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 6d6d0edd2d27..829f20a193ca 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -204,6 +204,20 @@ Returns the maximum size of a mapping for the device. The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.
+::
+
+ size_t
+ dma_opt_mapping_size(struct device *dev);
+
+Returns the maximum optimal size of a mapping for the device.
+
+Mapping larger buffers may take much longer in certain scenarios. In
+addition, for high-rate short-lived streaming mappings, the upfront time
+spent on the mapping may account for an appreciable part of the total
+request lifetime. As such, if splitting larger requests incurs no
+significant performance penalty, then device drivers are advised to
+limit the total DMA streaming mapping length to the returned value.
+
::
bool
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 0d5b06b3a4a6..98ceba6fa848 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -69,6 +69,7 @@ struct dma_map_ops {
int (*dma_supported)(struct device *dev, u64 mask);
u64 (*get_required_mask)(struct device *dev);
size_t (*max_mapping_size)(struct device *dev);
+ size_t (*opt_mapping_size)(void);
unsigned long (*get_merge_boundary)(struct device *dev);
};
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index dca2b1355bb1..fe3849434b2a 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -144,6 +144,7 @@ int dma_set_mask(struct device *dev, u64 mask);
int dma_set_coherent_mask(struct device *dev, u64 mask);
u64 dma_get_required_mask(struct device *dev);
size_t dma_max_mapping_size(struct device *dev);
+size_t dma_opt_mapping_size(struct device *dev);
bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
unsigned long dma_get_merge_boundary(struct device *dev);
struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
@@ -266,6 +267,10 @@ static inline size_t dma_max_mapping_size(struct device *dev)
{
return 0;
}
+static inline size_t dma_opt_mapping_size(struct device *dev)
+{
+ return 0;
+}
static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
{
return false;
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index db7244291b74..1bfe11b1edb6 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -773,6 +773,18 @@ size_t dma_max_mapping_size(struct device *dev)
}
EXPORT_SYMBOL_GPL(dma_max_mapping_size);
+size_t dma_opt_mapping_size(struct device *dev)
+{
+ const struct dma_map_ops *ops = get_dma_ops(dev);
+ size_t size = SIZE_MAX;
+
+ if (ops && ops->opt_mapping_size)
+ size = ops->opt_mapping_size();
+
+ return min(dma_max_mapping_size(dev), size);
+}
+EXPORT_SYMBOL_GPL(dma_opt_mapping_size);
+
bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
{
const struct dma_map_ops *ops = get_dma_ops(dev);
--
2.35.3
Add the IOMMU callback for the DMA mapping API dma_opt_mapping_size(),
which allows drivers to know the optimal mapping limit and thus limit
their requested IOVA lengths.
This value is based on the IOVA rcache range limit, as IOVAs allocated
above this limit must always be newly allocated, which may be quite slow.
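As a rough worked example, assuming 4 KiB pages and the mainline
IOVA_RANGE_CACHE_MAX_SIZE of 6, the limit evaluates to
PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1) = 4096 << 5 = 128 KiB, so any
mapping larger than 128 KiB always takes the slow IOVA allocation path.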
Signed-off-by: John Garry <[email protected]>
Reviewed-by: Damien Le Moal <[email protected]>
Acked-by: Robin Murphy <[email protected]>
Acked-by: Martin K. Petersen <[email protected]>
---
drivers/iommu/dma-iommu.c | 6 ++++++
drivers/iommu/iova.c | 5 +++++
include/linux/iova.h | 2 ++
3 files changed, 13 insertions(+)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index f90251572a5d..9e1586447ee8 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1459,6 +1459,11 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
}
+static size_t iommu_dma_opt_mapping_size(void)
+{
+ return iova_rcache_range();
+}
+
static const struct dma_map_ops iommu_dma_ops = {
.alloc = iommu_dma_alloc,
.free = iommu_dma_free,
@@ -1479,6 +1484,7 @@ static const struct dma_map_ops iommu_dma_ops = {
.map_resource = iommu_dma_map_resource,
.unmap_resource = iommu_dma_unmap_resource,
.get_merge_boundary = iommu_dma_get_merge_boundary,
+ .opt_mapping_size = iommu_dma_opt_mapping_size,
};
/*
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index db77aa675145..9f00b58d546e 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -26,6 +26,11 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
static void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
static void free_iova_rcaches(struct iova_domain *iovad);
+unsigned long iova_rcache_range(void)
+{
+ return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
+}
+
static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
{
struct iova_domain *iovad;
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 320a70e40233..c6ba6d95d79c 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -79,6 +79,8 @@ static inline unsigned long iova_pfn(struct iova_domain *iovad, dma_addr_t iova)
int iova_cache_get(void);
void iova_cache_put(void);
+unsigned long iova_rcache_range(void);
+
void free_iova(struct iova_domain *iovad, unsigned long pfn);
void __free_iova(struct iova_domain *iovad, struct iova *iova);
struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
--
2.35.3
The shost->max_sectors value is repeatedly capped according to the host
DMA mapping limit for each sdev in __scsi_init_queue(). This is
unnecessary, so cap it only once, when adding the host.
Signed-off-by: John Garry <[email protected]>
Reviewed-by: Damien Le Moal <[email protected]>
Acked-by: Martin K. Petersen <[email protected]>
---
drivers/scsi/hosts.c | 5 +++++
drivers/scsi/scsi_lib.c | 4 ----
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
index 8352f90d997d..d04bd2c7c9f1 100644
--- a/drivers/scsi/hosts.c
+++ b/drivers/scsi/hosts.c
@@ -236,6 +236,11 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
shost->dma_dev = dma_dev;
+ if (dma_dev->dma_mask) {
+ shost->max_sectors = min_t(unsigned int, shost->max_sectors,
+ dma_max_mapping_size(dma_dev) >> SECTOR_SHIFT);
+ }
+
error = scsi_mq_setup_tags(shost);
if (error)
goto fail;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 6ffc9e4258a8..6ce8acea322a 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1884,10 +1884,6 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
blk_queue_max_integrity_segments(q, shost->sg_prot_tablesize);
}
- if (dev->dma_mask) {
- shost->max_sectors = min_t(unsigned int, shost->max_sectors,
- dma_max_mapping_size(dev) >> SECTOR_SHIFT);
- }
blk_queue_max_hw_sectors(q, shost->max_sectors);
blk_queue_segment_boundary(q, shost->dma_boundary);
dma_set_seg_boundary(dev, shost->dma_boundary);
--
2.35.3
Streaming DMA mappings may be considerably slower when mappings go through
an IOMMU and the total mapping length is relatively long. This is because
the IOMMU IOVA code allocates and frees an IOVA for each mapping, which may
affect performance.
For performance reasons set the request queue max_sectors from
dma_opt_mapping_size(), which knows this mapping limit.
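Carrying the earlier worked example forward (again assuming 4 KiB pages and
the mainline rcache limit), dma_opt_mapping_size() >> SECTOR_SHIFT gives
131072 >> 9 = 256, so shost->opt_sectors ends up capped at 256 sectors on
such a system, provided shost->max_sectors is not already smaller.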
Signed-off-by: John Garry <[email protected]>
---
drivers/scsi/scsi_transport_sas.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
index 12bff64dade6..2f88c61216ee 100644
--- a/drivers/scsi/scsi_transport_sas.c
+++ b/drivers/scsi/scsi_transport_sas.c
@@ -225,6 +225,7 @@ static int sas_host_setup(struct transport_container *tc, struct device *dev,
{
struct Scsi_Host *shost = dev_to_shost(dev);
struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
+ struct device *dma_dev = shost->dma_dev;
INIT_LIST_HEAD(&sas_host->rphy_list);
mutex_init(&sas_host->lock);
@@ -236,6 +237,11 @@ static int sas_host_setup(struct transport_container *tc, struct device *dev,
dev_printk(KERN_ERR, dev, "fail to a bsg device %d\n",
shost->host_no);
+ if (dma_dev->dma_mask) {
+ shost->opt_sectors = min_t(unsigned int, shost->max_sectors,
+ dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
+ }
+
return 0;
}
--
2.35.3
ATA devices (struct ata_device) have a max_sectors field which is
configured internally in libata. This is then used to (re)configure the
associated sdev request queue's max_sectors value, overriding what was
earlier set in __scsi_init_queue(). In __scsi_init_queue() the max_sectors
value is set according to shost limits, which include the host DMA mapping
limit.
Cap the ata_device max_sectors according to shost->max_sectors to respect
this shost limit.
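For example, an LBA48 SATA disk typically has dev->max_sectors set to
ATA_MAX_SECTORS_LBA48 (65535), which would otherwise override a much
smaller shost->max_sectors derived from the host DMA mapping limit.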
Signed-off-by: John Garry <[email protected]>
Acked-by: Damien Le Moal <[email protected]>
Acked-by: Martin K. Petersen <[email protected]>
---
drivers/ata/libata-scsi.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index 86dbb1cdfabd..24a43d540d9f 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -1060,6 +1060,7 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
dev->flags |= ATA_DFLAG_NO_UNLOAD;
/* configure max sectors */
+ dev->max_sectors = min(dev->max_sectors, sdev->host->max_sectors);
blk_queue_max_hw_sectors(q, dev->max_sectors);
if (dev->class == ATA_DEV_ATAPI) {
--
2.35.3
Streaming DMA mappings may be considerably slower when mappings go through
an IOMMU and the total mapping length is relatively long. This is because
the IOMMU IOVA code allocates and frees an IOVA for each mapping, which may
affect performance.
Add a new member, Scsi_Host.opt_sectors, which holds the optimal host
max_sectors, and use this value to cap the request queue max_sectors when
it is set.
It could be considered to have the request queue's io_opt value initially
set to Scsi_Host.opt_sectors in __scsi_init_queue(), but that is not
really the purpose of io_opt.
Finally, even though the Scsi_Host.opt_sectors value should never be
greater than the request queue max_hw_sectors value, continue to limit to
max_hw_sectors for safety.
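In effect, since Scsi_Host is zero-initialized, opt_sectors is zero for
any host which does not set it, and the min_not_zero() below leaves rw_max
untouched there; for a SAS host with the 128 KiB optimal limit from the
earlier worked example, rw_max is capped at 256 sectors even when the
device and max_hw_sectors would allow more.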
Signed-off-by: John Garry <[email protected]>
---
drivers/scsi/sd.c | 2 ++
include/scsi/scsi_host.h | 1 +
2 files changed, 3 insertions(+)
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index a1a2ac09066f..3eaee1f7aaca 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3296,6 +3296,8 @@ static int sd_revalidate_disk(struct gendisk *disk)
(sector_t)BLK_DEF_MAX_SECTORS);
}
+ rw_max = min_not_zero(rw_max, sdp->host->opt_sectors);
+
/* Do not exceed controller limit */
rw_max = min(rw_max, queue_max_hw_sectors(q));
diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
index 667d889b92b5..d32a84b2bb40 100644
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -607,6 +607,7 @@ struct Scsi_Host {
short unsigned int sg_tablesize;
short unsigned int sg_prot_tablesize;
unsigned int max_sectors;
+ unsigned int opt_sectors;
unsigned int max_segment_size;
unsigned long dma_boundary;
unsigned long virt_boundary_mask;
--
2.35.3
On 7/14/22 20:15, John Garry wrote:
> Streaming DMA mappings may be considerably slower when mappings go through
> an IOMMU and the total mapping length is relatively long. This is because
> the IOMMU IOVA code allocates and frees an IOVA for each mapping, which may
> affect performance.
>
> For performance reasons set the request queue max_sectors from
> dma_opt_mapping_size(), which knows this mapping limit.
>
> Signed-off-by: John Garry <[email protected]>
> ---
> drivers/scsi/scsi_transport_sas.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
> index 12bff64dade6..2f88c61216ee 100644
> --- a/drivers/scsi/scsi_transport_sas.c
> +++ b/drivers/scsi/scsi_transport_sas.c
> @@ -225,6 +225,7 @@ static int sas_host_setup(struct transport_container *tc, struct device *dev,
> {
> struct Scsi_Host *shost = dev_to_shost(dev);
> struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
> + struct device *dma_dev = shost->dma_dev;
>
> INIT_LIST_HEAD(&sas_host->rphy_list);
> mutex_init(&sas_host->lock);
> @@ -236,6 +237,11 @@ static int sas_host_setup(struct transport_container *tc, struct device *dev,
> dev_printk(KERN_ERR, dev, "fail to a bsg device %d\n",
> shost->host_no);
>
> + if (dma_dev->dma_mask) {
> + shost->opt_sectors = min_t(unsigned int, shost->max_sectors,
> + dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
> + }
> +
> return 0;
> }
>
Reviewed-by: Damien Le Moal <[email protected]>
--
Damien Le Moal
Western Digital Research
On 7/14/22 20:15, John Garry wrote:
> Streaming DMA mappings may be considerably slower when mappings go through
> an IOMMU and the total mapping length is relatively long. This is because
> the IOMMU IOVA code allocates and frees an IOVA for each mapping, which may
> affect performance.
>
> Add a new member, Scsi_Host.opt_sectors, which holds the optimal host
> max_sectors, and use this value to cap the request queue max_sectors when
> it is set.
>
> It could be considered to have the request queue's io_opt value initially
> set to Scsi_Host.opt_sectors in __scsi_init_queue(), but that is not
> really the purpose of io_opt.
>
> Finally, even though the Scsi_Host.opt_sectors value should never be
> greater than the request queue max_hw_sectors value, continue to limit to
> max_hw_sectors for safety.
>
> Signed-off-by: John Garry <[email protected]>
> ---
> drivers/scsi/sd.c | 2 ++
> include/scsi/scsi_host.h | 1 +
> 2 files changed, 3 insertions(+)
>
> diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
> index a1a2ac09066f..3eaee1f7aaca 100644
> --- a/drivers/scsi/sd.c
> +++ b/drivers/scsi/sd.c
> @@ -3296,6 +3296,8 @@ static int sd_revalidate_disk(struct gendisk *disk)
> (sector_t)BLK_DEF_MAX_SECTORS);
> }
>
> + rw_max = min_not_zero(rw_max, sdp->host->opt_sectors);
> +
Adding a comment explaining what the cap is would be nice.
> /* Do not exceed controller limit */
> rw_max = min(rw_max, queue_max_hw_sectors(q));
>
> diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
> index 667d889b92b5..d32a84b2bb40 100644
> --- a/include/scsi/scsi_host.h
> +++ b/include/scsi/scsi_host.h
> @@ -607,6 +607,7 @@ struct Scsi_Host {
> short unsigned int sg_tablesize;
> short unsigned int sg_prot_tablesize;
> unsigned int max_sectors;
> + unsigned int opt_sectors;
> unsigned int max_segment_size;
> unsigned long dma_boundary;
> unsigned long virt_boundary_mask;
Otherwise, looks good.
Reviewed-by: Damien Le Moal <[email protected]>
--
Damien Le Moal
Western Digital Research
John,
> diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
> index a1a2ac09066f..3eaee1f7aaca 100644
> --- a/drivers/scsi/sd.c
> +++ b/drivers/scsi/sd.c
> @@ -3296,6 +3296,8 @@ static int sd_revalidate_disk(struct gendisk *disk)
> (sector_t)BLK_DEF_MAX_SECTORS);
> }
>
> + rw_max = min_not_zero(rw_max, sdp->host->opt_sectors);
> +
> /* Do not exceed controller limit */
> rw_max = min(rw_max, queue_max_hw_sectors(q));
I'm OK with this approach.
Acked-by: Martin K. Petersen <[email protected]>
--
Martin K. Petersen Oracle Linux Engineering
Thanks,
applied to the dma-mapping tree.
On 18/07/2022 11:47, Damien Le Moal wrote:
>> diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
>> index a1a2ac09066f..3eaee1f7aaca 100644
>> --- a/drivers/scsi/sd.c
>> +++ b/drivers/scsi/sd.c
>> @@ -3296,6 +3296,8 @@ static int sd_revalidate_disk(struct gendisk *disk)
>> (sector_t)BLK_DEF_MAX_SECTORS);
>> }
Hi Damien,
>>
>> + rw_max = min_not_zero(rw_max, sdp->host->opt_sectors);
>> +
> Adding a comment explaining what the cap is would be nice.
>
Christoph has now applied this (thanks, BTW), so would you like me to
follow up with a patch on top with a comment?
Thanks,
John
On Tue, Jul 19, 2022 at 08:05:30AM +0100, John Garry wrote:
> Christoph has now applied this (thanks, BTW), so would you like me to
> follow up with a patch on top with a comment?
Please do.
On 19/07/2022 08:10, Christoph Hellwig wrote:
> On Tue, Jul 19, 2022 at 08:05:30AM +0100, John Garry wrote:
>> Christoph has now applied this (thanks, BTW), so would you like me to
>> follow up with a patch on top with a comment?
> Please do.
ok, fine, I'll do it now.
Just saying in case it's an issue - I was looking at
http://git.infradead.org/users/hch/dma-mapping.git/log/refs/heads/for-next
and the order is not the same as in this series, which would cause an
intermediate build breakage at 9f5ec52ae501 ("scsi: scsi_transport_sas:
cap shost opt_sectors according to DMA optimal limit")
Cheers,
John
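For reference, the follow-up comment John agreed to add might read
something like this (a hedged sketch only, not the actual posted patch):

    	/*
    	 * Limit the default to the SCSI host optimal sector limit, if
    	 * set. Requests larger than this limit may see a DMA mapping
    	 * performance penalty.
    	 */
    	rw_max = min_not_zero(rw_max, sdp->host->opt_sectors);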
On Tue, Jul 19, 2022 at 10:10:22AM +0100, John Garry wrote:
> Just saying in case it's an issue - I was looking at
> http://git.infradead.org/users/hch/dma-mapping.git/log/refs/heads/for-next
> and the order is not the same as in this series, which would cause an
> intermediate build breakage at 9f5ec52ae501 ("scsi: scsi_transport_sas: cap
> shost opt_sectors according to DMA optimal limit")
No idea what git-am did there, I've fixed it up now.