Subject: Re: [PATCH v14 06/13] iommu/smmuv3: Allow stage 1 invalidation with unmanaged ASIDs
From: Kunkun Jiang
To: Eric Auger
Cc: Keqian Zhu
References: <20210223205634.604221-1-eric.auger@redhat.com> <20210223205634.604221-7-eric.auger@redhat.com>
Message-ID: <901720e6-6ca5-eb9a-1f24-0ca479bcfecc@huawei.com>
Date: Thu, 1 Apr 2021 20:37:52 +0800
In-Reply-To: <20210223205634.604221-7-eric.auger@redhat.com>

Hi Eric,

On 2021/2/24 4:56, Eric Auger wrote:
> With nested stage support, soon we will need to invalidate
> S1 contexts and ranges tagged with an unmanaged asid, this
> latter being managed by the guest. So let's introduce 2 helpers
> that allow to invalidate with externally managed ASIDs
>
> Signed-off-by: Eric Auger
>
> ---
>
> v13 -> v14
> - Actually send the NH_ASID command (reported by Xingang Wang)
> ---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 38 ++++++++++++++++-----
>  1 file changed, 29 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index 5579ec4fccc8..4c19a1114de4 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -1843,9 +1843,9 @@ int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
>  }
>
>  /* IO_PGTABLE API */
> -static void arm_smmu_tlb_inv_context(void *cookie)
> +static void __arm_smmu_tlb_inv_context(struct arm_smmu_domain *smmu_domain,
> +                                       int ext_asid)
>  {
> -        struct arm_smmu_domain *smmu_domain = cookie;
>          struct arm_smmu_device *smmu = smmu_domain->smmu;
>          struct arm_smmu_cmdq_ent cmd;
>
> @@ -1856,7 +1856,13 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>           * insertion to guarantee those are observed before the TLBI. Do be
>           * careful, 007.
>           */
> -        if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> +        if (ext_asid >= 0) { /* guest stage 1 invalidation */
> +                cmd.opcode = CMDQ_OP_TLBI_NH_ASID;
> +                cmd.tlbi.asid = ext_asid;
> +                cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
> +                arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +                arm_smmu_cmdq_issue_sync(smmu);
> +        } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>                  arm_smmu_tlb_inv_asid(smmu, smmu_domain->s1_cfg.cd.asid);
>          } else {
>                  cmd.opcode = CMDQ_OP_TLBI_S12_VMALL;
> @@ -1867,6 +1873,13 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>          arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
>  }
>
> +static void arm_smmu_tlb_inv_context(void *cookie)
> +{
> +        struct arm_smmu_domain *smmu_domain = cookie;
> +
> +        __arm_smmu_tlb_inv_context(smmu_domain, -1);
> +}
> +
>  static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
>                                       unsigned long iova, size_t size,
>                                       size_t granule,
> @@ -1926,9 +1939,10 @@ static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
>          arm_smmu_cmdq_batch_submit(smmu, &cmds);
>  }
>

Here is the relevant part of __arm_smmu_tlb_inv_range():

>         if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
>                 /* Get the leaf page size */
>                 tg = __ffs(smmu_domain->domain.pgsize_bitmap);
>
>                 /* Convert page size of 12,14,16 (log2) to 1,2,3 */
>                 cmd->tlbi.tg = (tg - 10) / 2;
>
>                 /* Determine what level the granule is at */
>                 cmd->tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
>
>                 num_pages = size >> tg;
>         }

When pSMMU supports RIL, we get the leaf page size from
__ffs(smmu_domain->domain.pgsize_bitmap). In nested mode, this is
determined by the host PAGE_SIZE. If the host kernel and guest kernel
have different translation granules (e.g. host 16K, guest 4K),
__arm_smmu_tlb_inv_range() will issue an incorrect TLBI command.

Do you have any idea about this issue? (A small standalone sketch of
the mismatch is appended after the quoted patch below.)

Best Regards,
Kunkun Jiang

> -static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
> -                                          size_t granule, bool leaf,
> -                                          struct arm_smmu_domain *smmu_domain)
> +static void
> +arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
> +                              size_t granule, bool leaf, int ext_asid,
> +                              struct arm_smmu_domain *smmu_domain)
>  {
>          struct arm_smmu_cmdq_ent cmd = {
>                  .tlbi = {
> @@ -1936,7 +1950,12 @@ static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
>                  },
>          };
>
> -        if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> +        if (ext_asid >= 0) { /* guest stage 1 invalidation */
> +                cmd.opcode = smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H ?
> +                             CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
> +                cmd.tlbi.asid = ext_asid;
> +                cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
> +        } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>                  cmd.opcode = smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H ?
>                               CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
>                  cmd.tlbi.asid = smmu_domain->s1_cfg.cd.asid;
> @@ -1944,6 +1963,7 @@ static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
>                  cmd.opcode = CMDQ_OP_TLBI_S2_IPA;
>                  cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
>          }
> +
>          __arm_smmu_tlb_inv_range(&cmd, iova, size, granule, smmu_domain);
>
>          /*
> @@ -1982,7 +2002,7 @@ static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
>  static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
>                                    size_t granule, void *cookie)
>  {
> -        arm_smmu_tlb_inv_range_domain(iova, size, granule, false, cookie);
> +        arm_smmu_tlb_inv_range_domain(iova, size, granule, false, -1, cookie);
>  }
>
>  static const struct iommu_flush_ops arm_smmu_flush_ops = {
> @@ -2523,7 +2543,7 @@ static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
>
>          arm_smmu_tlb_inv_range_domain(gather->start,
>                                        gather->end - gather->start + 1,
> -                                      gather->pgsize, true, smmu_domain);
> +                                      gather->pgsize, true, -1, smmu_domain);
>  }
>
>  static phys_addr_t
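
To make the 16K-host/4K-guest mismatch concrete, here is a minimal
standalone sketch. It is not kernel code: the helper encode_range_inv()
and the example values are made up for illustration, and
__builtin_ctzl() stands in for the kernel's __ffs()/ilog2(); it only
mirrors the TG/TTL/num_pages arithmetic quoted above.

#include <stdio.h>

/*
 * Hypothetical userspace helper mirroring the RIL field computation in
 * __arm_smmu_tlb_inv_range(). pgsize_bitmap plays the role of
 * smmu_domain->domain.pgsize_bitmap.
 */
static void encode_range_inv(unsigned long pgsize_bitmap, unsigned long size,
                             unsigned long granule)
{
        unsigned int tg = __builtin_ctzl(pgsize_bitmap);  /* log2 of leaf page size */
        unsigned int tg_field = (tg - 10) / 2;            /* 12,14,16 -> 1,2,3 */
        unsigned int ttl = 4 - ((__builtin_ctzl(granule) - 3) / (tg - 3));
        unsigned long num_pages = size >> tg;

        printf("leaf=2^%u TG=%u TTL=%u num_pages=%lu\n",
               tg, tg_field, ttl, num_pages);
}

int main(void)
{
        /* Guest invalidates a single 4K page: size = granule = 0x1000. */

        /* Host also uses 4K pages: the encoding describes one 4K page. */
        encode_range_inv(1UL << 12, 0x1000, 0x1000);
        /* -> leaf=2^12 TG=1 TTL=3 num_pages=1 */

        /*
         * Host uses 16K pages: the leaf page size is taken from the host's
         * pgsize_bitmap, so the command now claims a 16K granule and
         * num_pages becomes 0x1000 >> 14 == 0, which no longer describes
         * the guest's 4K stage-1 invalidation.
         */
        encode_range_inv(1UL << 14, 0x1000, 0x1000);
        /* -> leaf=2^14 TG=2 TTL=4 num_pages=0 */
        return 0;
}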