Subject: Re: [PATCH v14 05/13] iommu/smmuv3: Implement attach/detach_pasid_table
To: Auger Eric
References: <20210223205634.604221-1-eric.auger@redhat.com> <20210223205634.604221-6-eric.auger@redhat.com> <5a22a597-0fba-edcc-bcf0-50d92346af08@huawei.com> <31290c71-25d9-2b49-fb4d-7250ed9f70e7@redhat.com>
From: Keqian Zhu
Message-ID: <0769efb0-0a22-7cb1-b831-ec75845dde98@huawei.com>
Date: Mon, 22 Mar 2021 14:23:48 +0800
In-Reply-To: <31290c71-25d9-2b49-fb4d-7250ed9f70e7@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Eric,

On 2021/3/19 21:15, Auger Eric wrote:
> Hi Keqian,
>
> On 3/2/21 9:35 AM, Keqian Zhu wrote:
>> Hi Eric,
>>
>> On 2021/2/24 4:56, Eric Auger wrote:
>>> On attach_pasid_table() we program STE S1 related info set
>>> by the guest into the actual physical STEs. At minimum
>>> we need to program the context descriptor GPA and compute
>>> whether the stage1 is translated/bypassed or aborted.
>>>
>>> On detach, the stage 1 config is unset and the abort flag is
>>> unset.
>>>
>>> Signed-off-by: Eric Auger
>>>
>> [...]
>>
>>> +
>>> +	/*
>>> +	 * we currently support a single CD so s1fmt and s1dss
>>> +	 * fields are also ignored
>>> +	 */
>>> +	if (cfg->pasid_bits)
>>> +		goto out;
>>> +
>>> +	smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
>> Only the "cdtab_dma" field of "cdcfg" is set here, so we are not able to locate a specific CD using arm_smmu_get_cd_ptr().
>>
>> Maybe we'd better use a specialized function to fill the other fields of "cdcfg", or add a sanity check in arm_smmu_get_cd_ptr()
>> to prevent calling it under nested mode?
>>
>> As we currently only call arm_smmu_get_cd_ptr() during finalise_s1(), no problem arises. Just a suggestion ;-)
>
> Forgive me for the delay. Yes, I can indeed make sure that code is not
> called in nested mode. Could you please detail why you would need to
> call arm_smmu_get_cd_ptr()?
I accidentally called this function in nested mode while verifying the SMMU MPAM feature. :)

Yes, in nested mode the context descriptor is owned by the guest, so the hypervisor does not need to care about its content. Maybe we'd better add an explicit comment to arm_smmu_get_cd_ptr() so that anyone touching the code pays attention to this?
:)

Thanks,
Keqian

>
> Thanks
>
> Eric
>>
>> Thanks,
>> Keqian
>>
>>
>>> +		smmu_domain->s1_cfg.set = true;
>>> +		smmu_domain->abort = false;
>>> +		break;
>>> +	default:
>>> +		goto out;
>>> +	}
>>> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>>> +	list_for_each_entry(master, &smmu_domain->devices, domain_head)
>>> +		arm_smmu_install_ste_for_dev(master);
>>> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>>> +	ret = 0;
>>> +out:
>>> +	mutex_unlock(&smmu_domain->init_mutex);
>>> +	return ret;
>>> +}
>>> +
>>> +static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
>>> +{
>>> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>>> +	struct arm_smmu_master *master;
>>> +	unsigned long flags;
>>> +
>>> +	mutex_lock(&smmu_domain->init_mutex);
>>> +
>>> +	if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
>>> +		goto unlock;
>>> +
>>> +	smmu_domain->s1_cfg.set = false;
>>> +	smmu_domain->abort = false;
>>> +
>>> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>>> +	list_for_each_entry(master, &smmu_domain->devices, domain_head)
>>> +		arm_smmu_install_ste_for_dev(master);
>>> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>>> +
>>> +unlock:
>>> +	mutex_unlock(&smmu_domain->init_mutex);
>>> +}
>>> +
>>>  static bool arm_smmu_dev_has_feature(struct device *dev,
>>>  			             enum iommu_dev_features feat)
>>>  {
>>> @@ -2939,6 +3026,8 @@ static struct iommu_ops arm_smmu_ops = {
>>>  	.of_xlate		= arm_smmu_of_xlate,
>>>  	.get_resv_regions	= arm_smmu_get_resv_regions,
>>>  	.put_resv_regions	= generic_iommu_put_resv_regions,
>>> +	.attach_pasid_table	= arm_smmu_attach_pasid_table,
>>> +	.detach_pasid_table	= arm_smmu_detach_pasid_table,
>>>  	.dev_has_feat		= arm_smmu_dev_has_feature,
>>>  	.dev_feat_enabled	= arm_smmu_dev_feature_enabled,
>>>  	.dev_enable_feat	= arm_smmu_dev_enable_feature,
>>>
>>
>
> .
>