From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Robin Murphy, Krishna Reddy, Pritesh Raithatha, Ashish Mhetre, Will Deacon, Sasha Levin
Subject: [PATCH 5.15 047/102] iommu: arm-smmu: disable large page mappings for Nvidia arm-smmu
Date: Mon, 16 May 2022 21:36:21 +0200
Message-Id: <20220516193625.349499464@linuxfoundation.org>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220516193623.989270214@linuxfoundation.org>
References: <20220516193623.989270214@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Ashish Mhetre

[ Upstream commit 4a25f2ea0e030b2fc852c4059a50181bfc5b2f57 ]

Tegra194 and Tegra234 SoCs have an erratum that causes walk cache
entries to not be invalidated correctly. The problem is that the walk
cache index generated for an IOVA is not the same across translation
and invalidation requests. This leads to page faults when a PMD entry
is released during unmap and repopulated with a new PTE table on a
subsequent map request.
Disabling large page mappings avoids the release of PMD entries and
prevents translations from seeing stale PMD entries in the walk cache.
Fix this by limiting page mappings to PAGE_SIZE for Tegra194 and
Tegra234 devices. This is the fix recommended by the Tegra hardware
design team.

Acked-by: Robin Murphy
Reviewed-by: Krishna Reddy
Co-developed-by: Pritesh Raithatha
Signed-off-by: Pritesh Raithatha
Signed-off-by: Ashish Mhetre
Link: https://lore.kernel.org/r/20220421081504.24678-1-amhetre@nvidia.com
Signed-off-by: Will Deacon
Signed-off-by: Sasha Levin
---
 drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c | 30 ++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c b/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
index 01e9b50b10a1..87bf522b9d2e 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
@@ -258,6 +258,34 @@ static void nvidia_smmu_probe_finalize(struct arm_smmu_device *smmu, struct devi
 		dev_name(dev), err);
 }
 
+static int nvidia_smmu_init_context(struct arm_smmu_domain *smmu_domain,
+				    struct io_pgtable_cfg *pgtbl_cfg,
+				    struct device *dev)
+{
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	const struct device_node *np = smmu->dev->of_node;
+
+	/*
+	 * Tegra194 and Tegra234 SoCs have the erratum that causes walk cache
+	 * entries to not be invalidated correctly. The problem is that the walk
+	 * cache index generated for IOVA is not same across translation and
+	 * invalidation requests. This is leading to page faults when PMD entry
+	 * is released during unmap and populated with new PTE table during
+	 * subsequent map request. Disabling large page mappings avoids the
+	 * release of PMD entry and avoid translations seeing stale PMD entry in
+	 * walk cache.
+	 * Fix this by limiting the page mappings to PAGE_SIZE on Tegra194 and
+	 * Tegra234.
+	 */
+	if (of_device_is_compatible(np, "nvidia,tegra234-smmu") ||
+	    of_device_is_compatible(np, "nvidia,tegra194-smmu")) {
+		smmu->pgsize_bitmap = PAGE_SIZE;
+		pgtbl_cfg->pgsize_bitmap = smmu->pgsize_bitmap;
+	}
+
+	return 0;
+}
+
 static const struct arm_smmu_impl nvidia_smmu_impl = {
 	.read_reg = nvidia_smmu_read_reg,
 	.write_reg = nvidia_smmu_write_reg,
@@ -268,10 +296,12 @@ static const struct arm_smmu_impl nvidia_smmu_impl = {
 	.global_fault = nvidia_smmu_global_fault,
 	.context_fault = nvidia_smmu_context_fault,
 	.probe_finalize = nvidia_smmu_probe_finalize,
+	.init_context = nvidia_smmu_init_context,
 };
 
 static const struct arm_smmu_impl nvidia_smmu_single_impl = {
 	.probe_finalize = nvidia_smmu_probe_finalize,
+	.init_context = nvidia_smmu_init_context,
 };
 
 struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu)
-- 
2.35.1
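
To illustrate why restricting pgsize_bitmap is sufficient, the
user-space sketch below mimics the page-size selection the IOMMU core
performs for each map request (compare iommu_pgsize() in
drivers/iommu/iommu.c). The helpers mask_low() and pick_pgsize() are
invented for the example and are not kernel APIs. With 2M block
mappings available, a 2M-aligned request is satisfied with a single
PMD block entry; with the bitmap limited to 4K it becomes 512 PTEs.

/*
 * Illustrative user-space sketch only, not kernel code: shows how the
 * largest page size compatible with a mapping's size and IOVA
 * alignment is chosen from pgsize_bitmap.
 */
#include <stdio.h>
#include <stddef.h>

#define SZ_4K 0x1000UL
#define SZ_2M 0x200000UL

/* Bits 0..h set, like the kernel's GENMASK(h, 0). */
static unsigned long mask_low(unsigned int h)
{
	return (h >= 63) ? ~0UL : (1UL << (h + 1)) - 1;
}

/* Largest page size in @bitmap usable for a mapping at @iova of @size. */
static size_t pick_pgsize(unsigned long bitmap, unsigned long iova,
			  size_t size)
{
	/* Discard page sizes larger than the mapping itself... */
	unsigned long pgsizes = bitmap & mask_low(63 - __builtin_clzl(size));

	/* ...and larger than the IOVA's natural alignment allows. */
	if (iova)
		pgsizes &= mask_low(__builtin_ctzl(iova));

	/* Take the biggest size that survived both filters. */
	return (size_t)1 << (63 - __builtin_clzl(pgsizes));
}

int main(void)
{
	unsigned long iova = 0x80200000UL;	/* 2M-aligned IOVA */
	size_t size = SZ_2M;			/* 2M mapping request */

	/* With block mappings allowed, a 2M PMD block entry is used... */
	printf("4K|2M bitmap -> %#zx\n",
	       pick_pgsize(SZ_4K | SZ_2M, iova, size));
	/* ...after the workaround, the same request uses 4K PTEs only. */
	printf("4K-only bitmap -> %#zx\n",
	       pick_pgsize(SZ_4K, iova, size));
	return 0;
}

Because every mapping is then built from 4K PTEs, the PMD level always
points at a PTE table and is never released on unmap, so the walk
cache cannot retain a stale PMD entry for later translations to hit.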