2022-03-22 07:07:17

by David Stevens

Subject: [PATCH v2] iommu/vt-d: calculate mask for non-aligned flushes

From: David Stevens <[email protected]>

Calculate the appropriate mask for non-size-aligned page selective
invalidation. Since psi uses the mask value to mask out the lower order
bits of the target address, properly flushing the iotlb requires using a
mask value such that [pfn, pfn+pages) all lie within the flushed
size-aligned region. This is not normally an issue because iova.c
always allocates iovas that are aligned to their size. However, iovas
which come from other sources (e.g. userspace via VFIO) may not be
aligned.

Signed-off-by: David Stevens <[email protected]>
---
v1 -> v2:
- Calculate an appropriate mask for non-size-aligned iovas instead
of falling back to domain selective flush.

drivers/iommu/intel/iommu.c | 27 ++++++++++++++++++++++++---
1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 5b196cfe9ed2..ab2273300346 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1717,7 +1717,8 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
unsigned long pfn, unsigned int pages,
int ih, int map)
{
- unsigned int mask = ilog2(__roundup_pow_of_two(pages));
+ unsigned int aligned_pages = __roundup_pow_of_two(pages);
+ unsigned int mask = ilog2(aligned_pages);
uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
u16 did = domain->iommu_did[iommu->seq_id];

@@ -1729,10 +1730,30 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
if (domain_use_first_level(domain)) {
domain_flush_piotlb(iommu, domain, addr, pages, ih);
} else {
+ unsigned long bitmask = aligned_pages - 1;
+
+ /*
+ * PSI masks the low order bits of the base address. If the
+ * address isn't aligned to the mask, then compute a mask value
+ * needed to ensure the target range is flushed.
+ */
+ if (unlikely(bitmask & pfn)) {
+ unsigned long end_pfn = pfn + pages - 1, shared_bits;
+
+ /*
+ * Since end_pfn <= pfn + bitmask, the only way bits
+ * higher than bitmask can differ in pfn and end_pfn is
+ * by carrying. This means after masking out bitmask,
+ * high bits starting with the first set bit in
+ * shared_bits are all equal in both pfn and end_pfn.
+ */
+ shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
+ mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
+ }
+
/*
* Fallback to domain selective flush if no PSI support or
- * the size is too big. PSI requires page size to be 2 ^ x,
- * and the base address is naturally aligned to the size.
+ * the size is too big.
*/
if (!cap_pgsel_inv(iommu->cap) ||
mask > cap_max_amask_val(iommu->cap))
--
2.35.1.894.gb6a874cedc-goog


2022-03-25 18:15:18

by Zhang, Tina

Subject: RE: [PATCH v2] iommu/vt-d: calculate mask for non-aligned flushes



> -----Original Message-----
> From: iommu <[email protected]> On Behalf Of
> Tian, Kevin
> Sent: Friday, March 25, 2022 2:14 PM
> To: David Stevens <[email protected]>; Lu Baolu
> <[email protected]>
> Cc: [email protected]; [email protected]
> Subject: RE: [PATCH v2] iommu/vt-d: calculate mask for non-aligned flushes
>
> > From: David Stevens
> > Sent: Tuesday, March 22, 2022 2:36 PM
> >
> > From: David Stevens <[email protected]>
> >
> > Calculate the appropriate mask for non-size-aligned page selective
> > invalidation. Since psi uses the mask value to mask out the lower
> > order bits of the target address, properly flushing the iotlb requires
> > using a mask value such that [pfn, pfn+pages) all lie within the
> > flushed size-aligned region. This is not normally an issue because
> > iova.c always allocates iovas that are aligned to their size. However,
> > iovas which come from other sources (e.g. userspace via VFIO) may not
> > be aligned.
> >
> > Signed-off-by: David Stevens <[email protected]>
> > ---
> > v1 -> v2:
> > - Calculate an appropriate mask for non-size-aligned iovas instead
> > of falling back to domain selective flush.
> >
> > drivers/iommu/intel/iommu.c | 27 ++++++++++++++++++++++++---
> > 1 file changed, 24 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> > index 5b196cfe9ed2..ab2273300346 100644
> > --- a/drivers/iommu/intel/iommu.c
> > +++ b/drivers/iommu/intel/iommu.c
> > @@ -1717,7 +1717,8 @@ static void iommu_flush_iotlb_psi(struct
> > intel_iommu *iommu,
> > unsigned long pfn, unsigned int pages,
> > int ih, int map)
> > {
> > - unsigned int mask = ilog2(__roundup_pow_of_two(pages));
> > + unsigned int aligned_pages = __roundup_pow_of_two(pages);
> > + unsigned int mask = ilog2(aligned_pages);
> > uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
> > u16 did = domain->iommu_did[iommu->seq_id];
> >
> > @@ -1729,10 +1730,30 @@ static void iommu_flush_iotlb_psi(struct
> > intel_iommu *iommu,
> > if (domain_use_first_level(domain)) {
> > domain_flush_piotlb(iommu, domain, addr, pages, ih);
> > } else {
> > + unsigned long bitmask = aligned_pages - 1;
> > +
> > + /*
> > + * PSI masks the low order bits of the base address. If the
> > + * address isn't aligned to the mask, then compute a mask
> > value
> > + * needed to ensure the target range is flushed.
> > + */
> > + if (unlikely(bitmask & pfn)) {
> > + unsigned long end_pfn = pfn + pages - 1, shared_bits;
> > +
> > + /*
> > + * Since end_pfn <= pfn + bitmask, the only way bits
> > + * higher than bitmask can differ in pfn and end_pfn
> > is
> > + * by carrying. This means after masking out bitmask,
> > + * high bits starting with the first set bit in
> > + * shared_bits are all equal in both pfn and end_pfn.
> > + */
> > + shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
> > + mask = shared_bits ? __ffs(shared_bits) :
> > BITS_PER_LONG;
> > + }
>
> While it works, I wonder whether the following is simpler in terms of readability:
>
> } else {
> + /*
> + * PSI masks the low order bits of the base address. If the
> + * address isn't aligned to the mask and [pfn, pfn+pages)
> + * don't all lie within the flushed size-aligned region,
> + * simply increment the mask by one to cover the trailing
> pages.
> + */
> + if (unlikely((pfn & (aligned_pages - 1)) &&
> + (pfn + pages - 1 >= ALIGN(pfn, aligned_pages))))
> + mask++;

According to the VT-d spec, increasing the mask means more low-order bits of the pfn are masked out, so simply incrementing the mask number might not flush the intended range.
This second version does take that into account.
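
For illustration, here is a minimal userspace sketch (not driver code; the flushed-region computation is written out by hand and the pfn/mask values are only examples) of what the mask means: the hardware ignores the low mask bits of the page address, so the flushed region is the naturally aligned 2^mask-page block containing the pfn. Growing the mask widens that block around the base address; it does not extend it past the original end of the range.

#include <stdio.h>
#include <stdint.h>

/*
 * Flushed region implied by a PSI request: the low `mask` bits of the
 * page address are ignored, so the region is the naturally aligned
 * 2^mask-page block containing pfn.
 */
static void show_flushed_region(uint64_t pfn, unsigned int mask)
{
	uint64_t start = pfn & ~((1ULL << mask) - 1);
	uint64_t end = start + (1ULL << mask) - 1;

	printf("mask=%u flushes pfns [0x%llx, 0x%llx]\n",
	       mask, (unsigned long long)start, (unsigned long long)end);
}

int main(void)
{
	show_flushed_region(0x17f, 1);	/* [0x17e, 0x17f] */
	show_flushed_region(0x17f, 2);	/* [0x17c, 0x17f]: still misses 0x180 */
	show_flushed_region(0x17f, 8);	/* [0x100, 0x1ff]: covers 0x17f and 0x180 */
	return 0;
}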

BR,
Tina
>
> Thanks
> Kevin

2022-03-25 18:17:28

by David Stevens

Subject: Re: [PATCH v2] iommu/vt-d: calculate mask for non-aligned flushes

On Fri, Mar 25, 2022 at 4:15 PM Zhang, Tina <[email protected]> wrote:
>
>
>
> > -----Original Message-----
> > From: iommu <[email protected]> On Behalf Of
> > Tian, Kevin
> > Sent: Friday, March 25, 2022 2:14 PM
> > To: David Stevens <[email protected]>; Lu Baolu
> > <[email protected]>
> > Cc: [email protected]; [email protected]
> > Subject: RE: [PATCH v2] iommu/vt-d: calculate mask for non-aligned flushes
> >
> > > From: David Stevens
> > > Sent: Tuesday, March 22, 2022 2:36 PM
> > >
> > > From: David Stevens <[email protected]>
> > >
> > > Calculate the appropriate mask for non-size-aligned page selective
> > > invalidation. Since psi uses the mask value to mask out the lower
> > > order bits of the target address, properly flushing the iotlb requires
> > > using a mask value such that [pfn, pfn+pages) all lie within the
> > > flushed size-aligned region. This is not normally an issue because
> > > iova.c always allocates iovas that are aligned to their size. However,
> > > iovas which come from other sources (e.g. userspace via VFIO) may not
> > > be aligned.
> > >
> > > Signed-off-by: David Stevens <[email protected]>
> > > ---
> > > v1 -> v2:
> > > - Calculate an appropriate mask for non-size-aligned iovas instead
> > > of falling back to domain selective flush.
> > >
> > > drivers/iommu/intel/iommu.c | 27 ++++++++++++++++++++++++---
> > > 1 file changed, 24 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> > > index 5b196cfe9ed2..ab2273300346 100644
> > > --- a/drivers/iommu/intel/iommu.c
> > > +++ b/drivers/iommu/intel/iommu.c
> > > @@ -1717,7 +1717,8 @@ static void iommu_flush_iotlb_psi(struct
> > > intel_iommu *iommu,
> > > unsigned long pfn, unsigned int pages,
> > > int ih, int map)
> > > {
> > > - unsigned int mask = ilog2(__roundup_pow_of_two(pages));
> > > + unsigned int aligned_pages = __roundup_pow_of_two(pages);
> > > + unsigned int mask = ilog2(aligned_pages);
> > > uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
> > > u16 did = domain->iommu_did[iommu->seq_id];
> > >
> > > @@ -1729,10 +1730,30 @@ static void iommu_flush_iotlb_psi(struct
> > > intel_iommu *iommu,
> > > if (domain_use_first_level(domain)) {
> > > domain_flush_piotlb(iommu, domain, addr, pages, ih);
> > > } else {
> > > + unsigned long bitmask = aligned_pages - 1;
> > > +
> > > + /*
> > > + * PSI masks the low order bits of the base address. If the
> > > + * address isn't aligned to the mask, then compute a mask
> > > value
> > > + * needed to ensure the target range is flushed.
> > > + */
> > > + if (unlikely(bitmask & pfn)) {
> > > + unsigned long end_pfn = pfn + pages - 1, shared_bits;
> > > +
> > > + /*
> > > + * Since end_pfn <= pfn + bitmask, the only way bits
> > > + * higher than bitmask can differ in pfn and end_pfn
> > > is
> > > + * by carrying. This means after masking out bitmask,
> > > + * high bits starting with the first set bit in
> > > + * shared_bits are all equal in both pfn and end_pfn.
> > > + */
> > > + shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
> > > + mask = shared_bits ? __ffs(shared_bits) :
> > > BITS_PER_LONG;
> > > + }
> >
> > While it works, I wonder whether the following is simpler in terms of readability:
> >
> > } else {
> > + /*
> > + * PSI masks the low order bits of the base address. If the
> > + * address isn't aligned to the mask and [pfn, pfn+pages)
> > + * don't all lie within the flushed size-aligned region,
> > + * simply increment the mask by one to cover the trailing
> > pages.
> > + */
> > + if (unlikely((pfn & (aligned_pages - 1)) &&
> > + (pfn + pages - 1 >= ALIGN(pfn, aligned_pages))))
> > + mask++;
>
> According to the VT-d spec, increasing the mask means more low-order bits of the pfn are masked out, so simply incrementing the mask number might not flush the intended range.
> This second version does take that into account.
>

Right, this is what the more complicated code handles. For a concrete
example, if pfn=0x17f and pages=2, just doing mask+1 would only flush
[0x17c, 0x17f], which still misses 0x180. To ensure 0x180 is flushed,
mask needs to be 8.
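
As a standalone sketch of that computation (a userspace illustration, not the kernel code itself; __builtin_ctzl stands in for the kernel's __ffs, and the values are just this example):

#include <stdio.h>

int main(void)
{
	unsigned long pfn = 0x17f, pages = 2;
	unsigned long aligned_pages = 2;		/* __roundup_pow_of_two(2) */
	unsigned long bitmask = aligned_pages - 1;	/* 0x1 */
	unsigned long end_pfn = pfn + pages - 1;	/* 0x180 */
	unsigned long shared_bits;
	unsigned int mask;

	/*
	 * 0x17f ^ 0x180 = 0xff, so bits 0-7 differ between pfn and end_pfn;
	 * bit 8 is the lowest bit they share after masking out bitmask.
	 */
	shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
	mask = shared_bits ? __builtin_ctzl(shared_bits)
			   : 8 * sizeof(unsigned long);	/* BITS_PER_LONG */

	/*
	 * Prints 8: PSI then flushes the aligned region [0x100, 0x1ff],
	 * which covers both 0x17f and 0x180.
	 */
	printf("mask = %u\n", mask);
	return 0;
}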

-David

2022-03-25 19:44:32

by Tian, Kevin

Subject: RE: [PATCH v2] iommu/vt-d: calculate mask for non-aligned flushes

> From: David Stevens
> Sent: Tuesday, March 22, 2022 2:36 PM
>
> From: David Stevens <[email protected]>
>
> Calculate the appropriate mask for non-size-aligned page selective
> invalidation. Since psi uses the mask value to mask out the lower order
> bits of the target address, properly flushing the iotlb requires using a
> mask value such that [pfn, pfn+pages) all lie within the flushed
> size-aligned region. This is not normally an issue because iova.c
> always allocates iovas that are aligned to their size. However, iovas
> which come from other sources (e.g. userspace via VFIO) may not be
> aligned.
>
> Signed-off-by: David Stevens <[email protected]>
> ---
> v1 -> v2:
> - Calculate an appropriate mask for non-size-aligned iovas instead
> of falling back to domain selective flush.
>
> drivers/iommu/intel/iommu.c | 27 ++++++++++++++++++++++++---
> 1 file changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 5b196cfe9ed2..ab2273300346 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -1717,7 +1717,8 @@ static void iommu_flush_iotlb_psi(struct
> intel_iommu *iommu,
> unsigned long pfn, unsigned int pages,
> int ih, int map)
> {
> - unsigned int mask = ilog2(__roundup_pow_of_two(pages));
> + unsigned int aligned_pages = __roundup_pow_of_two(pages);
> + unsigned int mask = ilog2(aligned_pages);
> uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
> u16 did = domain->iommu_did[iommu->seq_id];
>
> @@ -1729,10 +1730,30 @@ static void iommu_flush_iotlb_psi(struct
> intel_iommu *iommu,
> if (domain_use_first_level(domain)) {
> domain_flush_piotlb(iommu, domain, addr, pages, ih);
> } else {
> + unsigned long bitmask = aligned_pages - 1;
> +
> + /*
> + * PSI masks the low order bits of the base address. If the
> + * address isn't aligned to the mask, then compute a mask
> value
> + * needed to ensure the target range is flushed.
> + */
> + if (unlikely(bitmask & pfn)) {
> + unsigned long end_pfn = pfn + pages - 1, shared_bits;
> +
> + /*
> + * Since end_pfn <= pfn + bitmask, the only way bits
> + * higher than bitmask can differ in pfn and end_pfn
> is
> + * by carrying. This means after masking out bitmask,
> + * high bits starting with the first set bit in
> + * shared_bits are all equal in both pfn and end_pfn.
> + */
> + shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
> + mask = shared_bits ? __ffs(shared_bits) :
> BITS_PER_LONG;
> + }

While it works, I wonder whether the following is simpler in terms of readability:

} else {
+ /*
+ * PSI masks the low order bits of the base address. If the
+ * address isn't aligned to the mask and [pfn, pfn+pages)
+ * don't all lie within the flushed size-aligned region,
+ * simply increment the mask by one to cover the trailing pages.
+ */
+ if (unlikely((pfn & (aligned_pages - 1)) &&
+ (pfn + pages - 1 >= ALIGN(pfn, aligned_pages))))
+ mask++;

Thanks
Kevin

2022-03-25 20:04:20

by Tian, Kevin

Subject: RE: [PATCH v2] iommu/vt-d: calculate mask for non-aligned flushes

> From: David Stevens <[email protected]>
> Sent: Friday, March 25, 2022 3:43 PM
> On Fri, Mar 25, 2022 at 4:15 PM Zhang, Tina <[email protected]> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: iommu <[email protected]> On Behalf Of
> > > Tian, Kevin
> > > Sent: Friday, March 25, 2022 2:14 PM
> > > To: David Stevens <[email protected]>; Lu Baolu
> > > <[email protected]>
> > > Cc: [email protected]; [email protected]
> > > Subject: RE: [PATCH v2] iommu/vt-d: calculate mask for non-aligned
> flushes
> > >
> > > > From: David Stevens
> > > > Sent: Tuesday, March 22, 2022 2:36 PM
> > > >
> > > > From: David Stevens <[email protected]>
> > > >
> > > > Calculate the appropriate mask for non-size-aligned page selective
> > > > invalidation. Since psi uses the mask value to mask out the lower
> > > > order bits of the target address, properly flushing the iotlb requires
> > > > using a mask value such that [pfn, pfn+pages) all lie within the
> > > > flushed size-aligned region. This is not normally an issue because
> > > > iova.c always allocates iovas that are aligned to their size. However,
> > > > iovas which come from other sources (e.g. userspace via VFIO) may not
> > > > be aligned.
> > > >
> > > > Signed-off-by: David Stevens <[email protected]>
> > > > ---
> > > > v1 -> v2:
> > > > - Calculate an appropriate mask for non-size-aligned iovas instead
> > > > of falling back to domain selective flush.
> > > >
> > > > drivers/iommu/intel/iommu.c | 27 ++++++++++++++++++++++++---
> > > > 1 file changed, 24 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/drivers/iommu/intel/iommu.c
> b/drivers/iommu/intel/iommu.c
> > > > index 5b196cfe9ed2..ab2273300346 100644
> > > > --- a/drivers/iommu/intel/iommu.c
> > > > +++ b/drivers/iommu/intel/iommu.c
> > > > @@ -1717,7 +1717,8 @@ static void iommu_flush_iotlb_psi(struct
> > > > intel_iommu *iommu,
> > > > unsigned long pfn, unsigned int pages,
> > > > int ih, int map)
> > > > {
> > > > - unsigned int mask = ilog2(__roundup_pow_of_two(pages));
> > > > + unsigned int aligned_pages = __roundup_pow_of_two(pages);
> > > > + unsigned int mask = ilog2(aligned_pages);
> > > > uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
> > > > u16 did = domain->iommu_did[iommu->seq_id];
> > > >
> > > > @@ -1729,10 +1730,30 @@ static void iommu_flush_iotlb_psi(struct
> > > > intel_iommu *iommu,
> > > > if (domain_use_first_level(domain)) {
> > > > domain_flush_piotlb(iommu, domain, addr, pages, ih);
> > > > } else {
> > > > + unsigned long bitmask = aligned_pages - 1;
> > > > +
> > > > + /*
> > > > + * PSI masks the low order bits of the base address. If the
> > > > + * address isn't aligned to the mask, then compute a mask
> > > > value
> > > > + * needed to ensure the target range is flushed.
> > > > + */
> > > > + if (unlikely(bitmask & pfn)) {
> > > > + unsigned long end_pfn = pfn + pages - 1, shared_bits;
> > > > +
> > > > + /*
> > > > + * Since end_pfn <= pfn + bitmask, the only way bits
> > > > + * higher than bitmask can differ in pfn and end_pfn
> > > > is
> > > > + * by carrying. This means after masking out bitmask,
> > > > + * high bits starting with the first set bit in
> > > > + * shared_bits are all equal in both pfn and end_pfn.
> > > > + */
> > > > + shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
> > > > + mask = shared_bits ? __ffs(shared_bits) :
> > > > BITS_PER_LONG;
> > > > + }
> > >
> > > While it works, I wonder whether the following is simpler in terms of readability:
> > >
> > > } else {
> > > + /*
> > > + * PSI masks the low order bits of the base address. If the
> > > + * address isn't aligned to the mask and [pfn, pfn+pages)
> > > + * don't all lie within the flushed size-aligned region,
> > > + * simply increment the mask by one to cover the trailing
> > > pages.
> > > + */
> > > + if (unlikely((pfn & (aligned_pages - 1)) &&
> > > + (pfn + pages - 1 >= ALIGN(pfn, aligned_pages))))
> > > + mask++;
> >
> > According to the VT-d spec, increasing the mask means more low-order bits of
> > the pfn are masked out, so simply incrementing the mask number might not
> > flush the intended range.
> > This second version does take that into account.
> >
>
> Right, this is what the more complicated code handles. For a concrete
> example, if pfn=0x17f and pages=2, just doing mask+1 would only flush
> [0x17c, 0x17f], which still misses 0x180. To ensure 0x180 is flushed,
> mask needs to be 8.
>

Indeed! Obviously I overlooked the trick here. Then here is:

Reviewed-by: Kevin Tian <[email protected]>

2022-03-28 22:51:15

by Lu Baolu

Subject: Re: [PATCH v2] iommu/vt-d: calculate mask for non-aligned flushes

Hi David,

On 2022/3/22 14:35, David Stevens wrote:
> From: David Stevens <[email protected]>
>
> Calculate the appropriate mask for non-size-aligned page selective
> invalidation. Since psi uses the mask value to mask out the lower order
> bits of the target address, properly flushing the iotlb requires using a
> mask value such that [pfn, pfn+pages) all lie within the flushed
> size-aligned region. This is not normally an issue because iova.c
> always allocates iovas that are aligned to their size. However, iovas
> which come from other sources (e.g. userspace via VFIO) may not be
> aligned.

This is a bug fix, right? Can you please add "Fixes" and "Cc stable" tags?

>
> Signed-off-by: David Stevens <[email protected]>
> ---
> v1 -> v2:
> - Calculate an appropriate mask for non-size-aligned iovas instead
> of falling back to domain selective flush.
>
> drivers/iommu/intel/iommu.c | 27 ++++++++++++++++++++++++---
> 1 file changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 5b196cfe9ed2..ab2273300346 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -1717,7 +1717,8 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
> unsigned long pfn, unsigned int pages,
> int ih, int map)
> {
> - unsigned int mask = ilog2(__roundup_pow_of_two(pages));
> + unsigned int aligned_pages = __roundup_pow_of_two(pages);
> + unsigned int mask = ilog2(aligned_pages);
> uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
> u16 did = domain->iommu_did[iommu->seq_id];
>
> @@ -1729,10 +1730,30 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
> if (domain_use_first_level(domain)) {
> domain_flush_piotlb(iommu, domain, addr, pages, ih);
> } else {
> + unsigned long bitmask = aligned_pages - 1;
> +
> + /*
> + * PSI masks the low order bits of the base address. If the
> + * address isn't aligned to the mask, then compute a mask value
> + * needed to ensure the target range is flushed.
> + */
> + if (unlikely(bitmask & pfn)) {
> + unsigned long end_pfn = pfn + pages - 1, shared_bits;
> +
> + /*
> + * Since end_pfn <= pfn + bitmask, the only way bits
> + * higher than bitmask can differ in pfn and end_pfn is
> + * by carrying. This means after masking out bitmask,
> + * high bits starting with the first set bit in
> + * shared_bits are all equal in both pfn and end_pfn.
> + */
> + shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
> + mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;

Can you please add some lines to the commit message to explain how this
magic line works? It would be easier for people to understand if you
included a real example. :-)

Best regards,
baolu

> + }
> +
> /*
> * Fallback to domain selective flush if no PSI support or
> - * the size is too big. PSI requires page size to be 2 ^ x,
> - * and the base address is naturally aligned to the size.
> + * the size is too big.
> */
> if (!cap_pgsel_inv(iommu->cap) ||
> mask > cap_max_amask_val(iommu->cap))