Subject: Re: [PATCH 5/7] iommu/dma: add support for non-strict mode
From: "Leizhen (ThunderTown)"
To: Robin Murphy, Will Deacon, Matthias Brugger, Rob Clark, Joerg Roedel,
	linux-mediatek, linux-arm-msm, linux-arm-kernel, iommu, linux-kernel
CC: Hanjun Guo, Libin, Guozhu Li, Xinwei Hu
Date: Mon, 4 Jun 2018 20:04:28 +0800
Message-ID: <5B152ACC.1080709@huawei.com>
In-Reply-To: <65cfe2f9-eb23-d81c-270e-ae80e96b6009@arm.com>
References: <1527752569-18020-1-git-send-email-thunder.leizhen@huawei.com>
	<1527752569-18020-6-git-send-email-thunder.leizhen@huawei.com>
	<65cfe2f9-eb23-d81c-270e-ae80e96b6009@arm.com>

On 2018/5/31 21:04, Robin Murphy wrote:
> On 31/05/18 08:42, Zhen Lei wrote:
>> 1. Save the related domain pointer in struct iommu_dma_cookie, so that
>>    iovad is capable of calling domain->ops->flush_iotlb_all to flush the TLB.
>> 2. Define a new iommu capability: IOMMU_CAP_NON_STRICT, which is used to
>>    indicate that the iommu domain supports non-strict mode.
>> 3. During the iommu domain initialization phase, call capable() to check
>>    whether it supports non-strict mode. If so, call init_iova_flush_queue
>>    to register the iovad->flush_cb callback.
>> 4. All unmap (including iova-free) APIs finally invoke __iommu_dma_unmap
>>    --> iommu_dma_free_iova. Use iovad->flush_cb to check whether the
>>    related iommu supports non-strict mode or not, and use
>>    IOMMU_DOMAIN_IS_STRICT to make sure that IOMMU_DOMAIN_UNMANAGED domains
>>    always follow strict mode.
>
> Once again, this is a whole load of complexity for a property which could
> just be statically encoded at allocation, e.g. in the cookie type.

That's right. Passing the domain to the static function iommu_dma_free_iova
will be better.
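Something like the following rough sketch, for example (untested, only meant
to illustrate the direction; it reuses the helpers already touched by this
patch, and the exact signature may still change):

static void iommu_dma_free_iova(struct iommu_domain *domain,
		dma_addr_t iova, size_t size)
{
	struct iommu_dma_cookie *cookie = domain->iova_cookie;
	struct iova_domain *iovad = &cookie->iovad;

	/* The MSI case is only ever cleaning up its most recent allocation */
	if (cookie->type == IOMMU_DMA_MSI_COOKIE) {
		cookie->msi_iova -= size;
	} else if (!IOMMU_DOMAIN_IS_STRICT(domain) && iovad->flush_cb) {
		/* Non-strict: defer the IOVA to the flush queue */
		queue_iova(iovad, iova_pfn(iovad, iova),
				size >> iova_shift(iovad), 0);
	} else {
		free_iova_fast(iovad, iova_pfn(iovad, iova),
				size >> iova_shift(iovad));
	}
}

The callers would then pass the domain instead of the cookie, so the
cookie->domain back-pointer and the extra cookie_alloc() parameter can be
dropped.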
>
>> Signed-off-by: Zhen Lei
>> ---
>>   drivers/iommu/dma-iommu.c | 29 ++++++++++++++++++++++++++---
>>   include/linux/iommu.h     |  3 +++
>>   2 files changed, 29 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 4e885f7..2e116d9 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -55,6 +55,7 @@ struct iommu_dma_cookie {
>>   	};
>>   	struct list_head		msi_page_list;
>>   	spinlock_t			msi_lock;
>> +	struct iommu_domain		*domain;
>>   };
>>   static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
>> @@ -64,7 +65,8 @@ static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
>>   	return PAGE_SIZE;
>>   }
>> -static struct iommu_dma_cookie *cookie_alloc(enum iommu_dma_cookie_type type)
>> +static struct iommu_dma_cookie *cookie_alloc(struct iommu_domain *domain,
>> +					     enum iommu_dma_cookie_type type)
>>   {
>>   	struct iommu_dma_cookie *cookie;
>> @@ -73,6 +75,7 @@ static struct iommu_dma_cookie *cookie_alloc(enum iommu_dma_cookie_type type)
>>   		spin_lock_init(&cookie->msi_lock);
>>   		INIT_LIST_HEAD(&cookie->msi_page_list);
>>   		cookie->type = type;
>> +		cookie->domain = domain;
>>   	}
>>   	return cookie;
>>   }
>> @@ -94,7 +97,7 @@ int iommu_get_dma_cookie(struct iommu_domain *domain)
>>   	if (domain->iova_cookie)
>>   		return -EEXIST;
>> -	domain->iova_cookie = cookie_alloc(IOMMU_DMA_IOVA_COOKIE);
>> +	domain->iova_cookie = cookie_alloc(domain, IOMMU_DMA_IOVA_COOKIE);
>>   	if (!domain->iova_cookie)
>>   		return -ENOMEM;
>> @@ -124,7 +127,7 @@ int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
>>   	if (domain->iova_cookie)
>>   		return -EEXIST;
>> -	cookie = cookie_alloc(IOMMU_DMA_MSI_COOKIE);
>> +	cookie = cookie_alloc(domain, IOMMU_DMA_MSI_COOKIE);
>>   	if (!cookie)
>>   		return -ENOMEM;
>> @@ -261,6 +264,17 @@ static int iova_reserve_iommu_regions(struct device *dev,
>>   	return ret;
>>   }
>> +static void iova_flush_iotlb_all(struct iova_domain *iovad)
>
> iommu_dma_flush...

OK

>
>> +{
>> +	struct iommu_dma_cookie *cookie;
>> +	struct iommu_domain *domain;
>> +
>> +	cookie = container_of(iovad, struct iommu_dma_cookie, iovad);
>> +	domain = cookie->domain;
>> +
>> +	domain->ops->flush_iotlb_all(domain);
>> +}
>> +
>>   /**
>>    * iommu_dma_init_domain - Initialise a DMA mapping domain
>>    * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
>> @@ -276,6 +290,7 @@ static int iova_reserve_iommu_regions(struct device *dev,
>>   int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>>   		u64 size, struct device *dev)
>>   {
>> +	const struct iommu_ops *ops = domain->ops;
>>   	struct iommu_dma_cookie *cookie = domain->iova_cookie;
>>   	struct iova_domain *iovad = &cookie->iovad;
>>   	unsigned long order, base_pfn, end_pfn;
>> @@ -313,6 +328,11 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>>   	init_iova_domain(iovad, 1UL << order, base_pfn);
>> +	if (ops->capable && ops->capable(IOMMU_CAP_NON_STRICT)) {
>> +		BUG_ON(!ops->flush_iotlb_all);
>> +		init_iova_flush_queue(iovad, iova_flush_iotlb_all, NULL);
>> +	}
>> +
>>   	return iova_reserve_iommu_regions(dev, domain);
>>   }
>>   EXPORT_SYMBOL(iommu_dma_init_domain);
>> @@ -392,6 +412,9 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
>>   	/* The MSI case is only ever cleaning up its most recent allocation */
>>   	if (cookie->type == IOMMU_DMA_MSI_COOKIE)
>>   		cookie->msi_iova -= size;
>> +	else if (!IOMMU_DOMAIN_IS_STRICT(cookie->domain) && iovad->flush_cb)
>> +		queue_iova(iovad, iova_pfn(iovad, iova),
>> +				size >> iova_shift(iovad), 0);
>>   	else
>>   		free_iova_fast(iovad, iova_pfn(iovad, iova),
>>   				size >> iova_shift(iovad));
>> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
>> index 39b3150..01ff569 100644
>> --- a/include/linux/iommu.h
>> +++ b/include/linux/iommu.h
>> @@ -87,6 +87,8 @@ struct iommu_domain_geometry {
>>   					 __IOMMU_DOMAIN_DMA_API)
>>   #define IOMMU_STRICT		1
>> +#define IOMMU_DOMAIN_IS_STRICT(domain)	\
>> +		(domain->type == IOMMU_DOMAIN_UNMANAGED)
>>   struct iommu_domain {
>>   	unsigned type;
>> @@ -103,6 +105,7 @@ enum iommu_cap {
>>   					   transactions */
>>   	IOMMU_CAP_INTR_REMAP,		/* IOMMU supports interrupt isolation */
>>   	IOMMU_CAP_NOEXEC,		/* IOMMU_NOEXEC flag */
>> +	IOMMU_CAP_NON_STRICT,		/* IOMMU supports non-strict mode */
>
> This isn't a property of the IOMMU, it depends purely on the driver
> implementation. I think it also doesn't matter anyway - if a caller asks for
> lazy unmapping on their domain but the IOMMU driver just does strict unmaps
> anyway because that's all it supports, there's no actual harm done.
>
> Robin.
>
>>   };
>>   /*
>>
>
> .
>

-- 
Thanks!
BestRegards