From: Robin Murphy
To: Ashish Mhetre, thierry.reding@gmail.com, vdumpa@nvidia.com,
 will@kernel.org, joro@8bytes.org, jonathanh@nvidia.com,
 linux-tegra@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Cc: Snikam@nvidia.com, nicolinc@nvidia.com, Pritesh Raithatha
Subject: Re: [Patch v2] iommu: arm-smmu: disable large page mappings for Nvidia arm-smmu
Date: Thu, 21 Apr 2022 12:49:11 +0100
Message-ID: <9cc46406-0f78-de27-fa0c-65d86ff05561@arm.com>
In-Reply-To: <20220421081504.24678-1-amhetre@nvidia.com>
References: <20220421081504.24678-1-amhetre@nvidia.com>

On 2022-04-21 09:15, Ashish Mhetre wrote:
> Tegra194 and Tegra234 SoCs have an erratum that causes walk cache
> entries to not be invalidated correctly. The problem is that the walk
> cache index generated for an IOVA is not the same across translation
> and invalidation requests, which leads to page faults when a PMD entry
> is released during unmap and then populated with a new PTE table by a
> subsequent map request. Disabling large page mappings avoids the
> release of the PMD entry and prevents translations from seeing a stale
> PMD entry in the walk cache.
> Fix this by limiting page mappings to PAGE_SIZE for Tegra194 and
> Tegra234 devices.
> This is the recommended fix from the Tegra hardware design team.

Acked-by: Robin Murphy <robin.murphy@arm.com>

> Co-developed-by: Pritesh Raithatha
> Signed-off-by: Pritesh Raithatha
> Signed-off-by: Ashish Mhetre
> ---
> Changes in v2:
> - Using init_context() to override pgsize_bitmap instead of a new function
>
>  drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c | 30 ++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c b/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
> index 01e9b50b10a1..87bf522b9d2e 100644
> --- a/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
> +++ b/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
> @@ -258,6 +258,34 @@ static void nvidia_smmu_probe_finalize(struct arm_smmu_device *smmu, struct devi
>  			dev_name(dev), err);
>  }
>  
> +static int nvidia_smmu_init_context(struct arm_smmu_domain *smmu_domain,
> +				    struct io_pgtable_cfg *pgtbl_cfg,
> +				    struct device *dev)
> +{
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	const struct device_node *np = smmu->dev->of_node;
> +
> +	/*
> +	 * Tegra194 and Tegra234 SoCs have an erratum that causes walk cache
> +	 * entries to not be invalidated correctly. The problem is that the
> +	 * walk cache index generated for an IOVA is not the same across
> +	 * translation and invalidation requests, which leads to page faults
> +	 * when a PMD entry is released during unmap and then populated with
> +	 * a new PTE table by a subsequent map request. Disabling large page
> +	 * mappings avoids the release of the PMD entry and prevents
> +	 * translations from seeing a stale PMD entry in the walk cache.
> +	 * Fix this by limiting the page mappings to PAGE_SIZE on Tegra194
> +	 * and Tegra234.
> +	 */
> +	if (of_device_is_compatible(np, "nvidia,tegra234-smmu") ||
> +	    of_device_is_compatible(np, "nvidia,tegra194-smmu")) {
> +		smmu->pgsize_bitmap = PAGE_SIZE;
> +		pgtbl_cfg->pgsize_bitmap = smmu->pgsize_bitmap;
> +	}
> +
> +	return 0;
> +}
> +
>  static const struct arm_smmu_impl nvidia_smmu_impl = {
>  	.read_reg = nvidia_smmu_read_reg,
>  	.write_reg = nvidia_smmu_write_reg,
> @@ -268,10 +296,12 @@ static const struct arm_smmu_impl nvidia_smmu_impl = {
>  	.global_fault = nvidia_smmu_global_fault,
>  	.context_fault = nvidia_smmu_context_fault,
>  	.probe_finalize = nvidia_smmu_probe_finalize,
> +	.init_context = nvidia_smmu_init_context,
>  };
>  
>  static const struct arm_smmu_impl nvidia_smmu_single_impl = {
>  	.probe_finalize = nvidia_smmu_probe_finalize,
> +	.init_context = nvidia_smmu_init_context,
>  };
>  
>  struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu)
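
For anyone less familiar with the arm-smmu impl framework: the
init_context() hook runs while the io_pgtable_cfg for a new domain is
still being assembled, which is why overriding pgsize_bitmap there is
sufficient; no page tables exist yet when the override takes effect. A
simplified sketch of the call site, paraphrased from
arm_smmu_init_domain_context() in arm-smmu.c (error paths and unrelated
setup elided, so this is not the verbatim kernel code):

/* Simplified paraphrase of the hook's call site, not verbatim code */
static int arm_smmu_init_domain_context(struct iommu_domain *domain,
					struct arm_smmu_device *smmu,
					struct device *dev)
{
	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
	struct io_pgtable_cfg pgtbl_cfg = {
		/* Seeded from the (possibly already clamped) device value */
		.pgsize_bitmap	= smmu->pgsize_bitmap,
		/* ...ias/oas/tlb/iommu_dev fields elided... */
	};
	int ret;

	/*
	 * Vendor hook: this is where nvidia_smmu_init_context() gets a
	 * chance to shrink pgsize_bitmap before any page tables exist.
	 */
	if (smmu->impl && smmu->impl->init_context) {
		ret = smmu->impl->init_context(smmu_domain, &pgtbl_cfg, dev);
		if (ret)
			return ret;
	}

	/* ...io-pgtable ops are then allocated from the final pgtbl_cfg... */
	return 0;
}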
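
And as for why clamping pgsize_bitmap is enough to stop PMD-level block
mappings from ever being created: the core code only ever maps in sizes
drawn from that bitmap, picking the largest size that both the IOVA
alignment and the remaining length allow. A hypothetical helper, loosely
modeled on iommu_pgsize() in drivers/iommu/iommu.c (the name pick_pgsize
and the exact masking are illustrative, not the kernel's actual code):

/*
 * Illustrative only -- loosely modeled on the core code's size
 * selection, and assumes len is at least the smallest supported page
 * size. With pgsize_bitmap == PAGE_SIZE the only candidate bit is
 * PAGE_SHIFT, so every mapping is a last-level PTE and no PMD/block
 * entry is ever installed, hence none is ever freed on unmap.
 */
static size_t pick_pgsize(unsigned long pgsize_bitmap,
			  unsigned long iova, size_t len)
{
	unsigned long candidates;

	/* Page sizes no larger than the remaining length... */
	candidates = pgsize_bitmap & GENMASK(__fls(len), 0);

	/* ...and no larger than the IOVA's alignment permits */
	if (iova)
		candidates &= GENMASK(__ffs(iova), 0);

	/* Use the largest size that survived both filters */
	return BIT(__fls(candidates));
}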