Check whether the IOMMU has a not-present cache and flush it if it does.
Signed-off-by: Tom Murphy <[email protected]>
---
drivers/iommu/amd_iommu.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index f7cdd2ab7f11..91fe5cb10f50 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1637,6 +1637,11 @@ static int iommu_map_page(struct protection_domain *dom,
 
 	update_domain(dom);
 
+	if (unlikely(amd_iommu_np_cache && !dom->updated)) {
+		domain_flush_pages(dom, bus_addr, page_size);
+		domain_flush_complete(dom);
+	}
+
 	/* Everything flushed out, free pages now */
 	free_page_list(freelist);
--
2.17.1
On Wed, Apr 24, 2019 at 05:50:51PM +0100, Tom Murphy wrote:
> Check whether the IOMMU has a not-present cache and flush it if it does.
>
> Signed-off-by: Tom Murphy <[email protected]>
> ---
> drivers/iommu/amd_iommu.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index f7cdd2ab7f11..91fe5cb10f50 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -1637,6 +1637,11 @@ static int iommu_map_page(struct protection_domain *dom,
>  
>  	update_domain(dom);
>  
> +	if (unlikely(amd_iommu_np_cache && !dom->updated)) {
> +		domain_flush_pages(dom, bus_addr, page_size);
> +		domain_flush_complete(dom);
> +	}
> +
The iommu_map_page function is called once per physical page that is
mapped, so in the worst case for every 4k mapping established. So it is
not the right place to put this check in.
From a quick glance, this check belongs in the map_sg() and
amd_iommu_map() functions, but without the dom->updated check.
Besides, to really support systems with np-cache in a way that doesn't
destroy all performance, the driver also needs range-flushes for IOTLBs.
Currently it can only flush a 4k page or the full address space of a
domain. But that doesn't mean we shouldn't fix the missing flushes now.
So please re-send the patch with the check at the two places I pointed
out above.
Thanks,
Joerg
> The iommu_map_page function is called once per physical page that is
> mapped, so in the worst case for every 4k mapping established. So it is
> not the right place to put this check in.
Ah, you're right, that was careless of me.
> From a quick glance, this check belongs in the map_sg() and
> amd_iommu_map() functions, but without the dom->updated check.
>
> Besides, to really support systems with np-cache in a way that doesn't
> destroy all performance, the driver also needs range-flushes for IOTLBs.
I am working on another patch, as part of the iommu ops series, to improve
the Intel IOTLB flushing; it should cover this too.
> Currently it can only flush a 4k page or the full address space of a
> domain. But that doesn't mean we shouldn't fix the missing flushes now.
>
> So please re-send the patch with the check at the two places I pointed
> out above.
will do
On Sat, Apr 27, 2019 at 03:20:35PM +0100, Tom Murphy wrote:
> I am working on another patch, as part of the iommu ops series, to improve
> the Intel IOTLB flushing; it should cover this too.
So are you looking into converting the intel-iommu driver to use
dma-iommu as well? That would be great!
On Mon, Apr 29, 2019 at 12:59 PM Christoph Hellwig <[email protected]> wrote:
>
> On Sat, Apr 27, 2019 at 03:20:35PM +0100, Tom Murphy wrote:
> > I am working on another patch, as part of the iommu ops series, to improve
> > the Intel IOTLB flushing; it should cover this too.
>
> So are you looking into converting the intel-iommu driver to use
> dma-iommu as well? That would be great!
Yes. My patches depend on the "iommu/vt-d: Delegate DMA domain to
generic iommu" patch, which is currently under review.
On Mon, Apr 29, 2019 at 01:17:44PM +0100, Tom Murphy wrote:
> Yes. My patches depend on the "iommu/vt-d: Delegate DMA domain to
> generic iommu" patch which is currently being reviewed.
Nice!