2021-10-05 15:24:23

by Longpeng(Mike)

Subject: [PATCH v2 0/2] iommu/vt-d: boost the mapping process

Hi guys,

We found that __domain_mapping() takes too long when the
memory region is large, so this patchset tries to make it
faster. The performance numbers can be found in PATCH 2;
please review when you are free, thanks.

Changes v1 -> v2:
- Fix compile warning on i386 [Baolu]

Longpeng(Mike) (2):
iommu/vt-d: convert the return type of first_pte_in_page to bool
iommu/vt-d: avoid duplicated removing in __domain_mapping

drivers/iommu/intel/iommu.c | 12 +++++++-----
include/linux/intel-iommu.h | 8 +++++++-
2 files changed, 14 insertions(+), 6 deletions(-)

--
1.8.3.1


2021-10-05 15:24:25

by Longpeng(Mike)

Subject: [PATCH v2 1/2] iommu/vt-d: convert the return type of first_pte_in_page to bool

first_pte_in_page() returns a boolean value, so let's convert its
return type to bool.

Signed-off-by: Longpeng(Mike) <[email protected]>
---
include/linux/intel-iommu.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 05a65eb..a590b00 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -708,7 +708,7 @@ static inline bool dma_pte_superpage(struct dma_pte *pte)
return (pte->val & DMA_PTE_LARGE_PAGE);
}

-static inline int first_pte_in_page(struct dma_pte *pte)
+static inline bool first_pte_in_page(struct dma_pte *pte)
{
return !((unsigned long)pte & ~VTD_PAGE_MASK);
}
--
1.8.3.1

2021-10-05 15:27:58

by Longpeng(Mike)

Subject: [PATCH v2 2/2] iommu/vt-d: avoid duplicated removing in __domain_mapping

__domain_mapping() always removes the pages in the range from
'iov_pfn' to 'end_pfn', but the 'end_pfn' is always the last pfn
of the range that the caller wants to map.

This introduces many duplicated removals and makes the
map operation take too long, for example:

Map iova=0x100000,nr_pages=0x7d61800
iov_pfn: 0x100000, end_pfn: 0x7e617ff
iov_pfn: 0x140000, end_pfn: 0x7e617ff
iov_pfn: 0x180000, end_pfn: 0x7e617ff
iov_pfn: 0x1c0000, end_pfn: 0x7e617ff
iov_pfn: 0x200000, end_pfn: 0x7e617ff
...
it takes about 50ms in total.

We can reduce the cost by recalculating 'end_pfn' and limiting it
to the boundary of the end of the current pte page.

Map iova=0x100000,nr_pages=0x7d61800
iov_pfn: 0x100000, end_pfn: 0x13ffff
iov_pfn: 0x140000, end_pfn: 0x17ffff
iov_pfn: 0x180000, end_pfn: 0x1bffff
iov_pfn: 0x1c0000, end_pfn: 0x1fffff
iov_pfn: 0x200000, end_pfn: 0x23ffff
...
it needs only 9ms now.

Signed-off-by: Longpeng(Mike) <[email protected]>
---
drivers/iommu/intel/iommu.c | 12 +++++++-----
include/linux/intel-iommu.h | 6 ++++++
2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index d75f59a..87cbf34 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -2354,12 +2354,18 @@ static void switch_to_super_page(struct dmar_domain *domain,
return -ENOMEM;
first_pte = pte;

+ lvl_pages = lvl_to_nr_pages(largepage_lvl);
+ BUG_ON(nr_pages < lvl_pages);
+
/* It is large page*/
if (largepage_lvl > 1) {
unsigned long end_pfn;
+ unsigned long pages_to_remove;

pteval |= DMA_PTE_LARGE_PAGE;
- end_pfn = ((iov_pfn + nr_pages) & level_mask(largepage_lvl)) - 1;
+ pages_to_remove = min_t(unsigned long, nr_pages,
+ nr_pte_to_next_page(pte) * lvl_pages);
+ end_pfn = iov_pfn + pages_to_remove - 1;
switch_to_super_page(domain, iov_pfn, end_pfn, largepage_lvl);
} else {
pteval &= ~(uint64_t)DMA_PTE_LARGE_PAGE;
@@ -2381,10 +2387,6 @@ static void switch_to_super_page(struct dmar_domain *domain,
WARN_ON(1);
}

- lvl_pages = lvl_to_nr_pages(largepage_lvl);
-
- BUG_ON(nr_pages < lvl_pages);
-
nr_pages -= lvl_pages;
iov_pfn += lvl_pages;
phys_pfn += lvl_pages;
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index a590b00..623b407 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -713,6 +713,12 @@ static inline bool first_pte_in_page(struct dma_pte *pte)
return !((unsigned long)pte & ~VTD_PAGE_MASK);
}

+static inline int nr_pte_to_next_page(struct dma_pte *pte)
+{
+ return first_pte_in_page(pte) ? BIT_ULL(VTD_STRIDE_SHIFT) :
+ (struct dma_pte *)ALIGN((unsigned long)pte, VTD_PAGE_SIZE) - pte;
+}
+
extern struct dmar_drhd_unit * dmar_find_matched_drhd_unit(struct pci_dev *dev);
extern int dmar_find_matched_atsr_unit(struct pci_dev *dev);

--
1.8.3.1

2021-10-07 06:22:28

by Lu Baolu

Subject: Re: [PATCH v2 1/2] iommu/vt-d: convert the return type of first_pte_in_page to bool

On 2021/10/5 23:23, Longpeng(Mike) wrote:
> first_pte_in_page() returns boolean value, so let's convert its
> return type to bool.
>
> Signed-off-by: Longpeng(Mike) <[email protected]>
> ---
> include/linux/intel-iommu.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
> index 05a65eb..a590b00 100644
> --- a/include/linux/intel-iommu.h
> +++ b/include/linux/intel-iommu.h
> @@ -708,7 +708,7 @@ static inline bool dma_pte_superpage(struct dma_pte *pte)
> return (pte->val & DMA_PTE_LARGE_PAGE);
> }
>
> -static inline int first_pte_in_page(struct dma_pte *pte)
> +static inline bool first_pte_in_page(struct dma_pte *pte)
> {
> return !((unsigned long)pte & ~VTD_PAGE_MASK);
> }
>

Probably,

return IS_ALIGNED((unsigned long)pte, VTD_PAGE_SIZE);

looks neater?

Best regards,
baolu

2021-10-07 11:45:10

by Longpeng(Mike)

Subject: RE: [PATCH v2 1/2] iommu/vt-d: convert the return type of first_pte_in_page to bool



> -----Original Message-----
> From: Lu Baolu [mailto:[email protected]]
> Sent: Thursday, October 7, 2021 2:18 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <[email protected]>; [email protected]; [email protected];
> [email protected]
> Cc: [email protected]; [email protected];
> [email protected]; Gonglei (Arei) <[email protected]>
> Subject: Re: [PATCH v2 1/2] iommu/vt-d: convert the return type of
> first_pte_in_page to bool
>
> On 2021/10/5 23:23, Longpeng(Mike) wrote:
> > first_pte_in_page() returns boolean value, so let's convert its
> > return type to bool.
> >
> > Signed-off-by: Longpeng(Mike) <[email protected]>
> > ---
> > include/linux/intel-iommu.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
> > index 05a65eb..a590b00 100644
> > --- a/include/linux/intel-iommu.h
> > +++ b/include/linux/intel-iommu.h
> > @@ -708,7 +708,7 @@ static inline bool dma_pte_superpage(struct dma_pte *pte)
> > return (pte->val & DMA_PTE_LARGE_PAGE);
> > }
> >
> > -static inline int first_pte_in_page(struct dma_pte *pte)
> > +static inline bool first_pte_in_page(struct dma_pte *pte)
> > {
> > return !((unsigned long)pte & ~VTD_PAGE_MASK);
> > }
> >
>
> Probably,
>
> return IS_ALIGNED((unsigned long)pte, VTD_PAGE_SIZE);
>
> looks neater?
>

Looks better! I'll include this optimization in v3.

> Best regards,
> baolu