Subject: Re: [PATCH v8 1/3] iommu/arm-smmu: add NVIDIA implementation for dual ARM MMU-500 usage
From: Jon Hunter
To: Robin Murphy, Krishna Reddy
Date: Tue, 30 Jun 2020 16:17:44 +0100
References: <20200630001051.12350-1-vdumpa@nvidia.com> <20200630001051.12350-2-vdumpa@nvidia.com> <53bfa5c8-c32d-6fa3-df60-a18ab33ca1c2@nvidia.com>

On 30/06/2020 15:53, Robin Murphy wrote:
> On 2020-06-30 09:19, Jon Hunter wrote:
>>
>> On 30/06/2020 01:10, Krishna Reddy wrote:
>>> NVIDIA's Tegra194 SoC uses two ARM MMU-500s together to interleave
>>> IOVA accesses across them.
>>> Add NVIDIA implementation for dual ARM MMU-500s and add new compatible
>>> string for Tegra194 SoC SMMU topology.
>>
>> There is no description here of the 3rd SMMU that you mention below.
>> I think that we should describe the full picture here.
>>
>>> Signed-off-by: Krishna Reddy

...

>>> +static void nvidia_smmu_tlb_sync(struct arm_smmu_device *smmu, int page,
>>> +				 int sync, int status)
>>> +{
>>> +	unsigned int delay;
>>> +
>>> +	arm_smmu_writel(smmu, page, sync, 0);
>>> +
>>> +	for (delay = 1; delay < TLB_LOOP_TIMEOUT_IN_US; delay *= 2) {
>>
>> So we are doubling the delay every time? Is this better than just using
>> the same delay on each loop?
>
> This is the same logic as the main driver (see 8513c8930069) - the sync
> is expected to complete relatively quickly, hence why we have the inner
> spin loop to avoid the delay entirely in the typical case, and the
> longer it's taking, the more likely it is that something's wrong and it
> will never complete anyway. Realistically, a heavily loaded SMMU at a
> modest clock rate might take us through a couple of iterations of the
> outer loop, but beyond that we're pretty much just killing time until
> we declare it wedged and give up, and by then there's not much point in
> burning power frantically hammering on the interconnect.

Ah OK. Then maybe we should move the definitions for TLB_LOOP_TIMEOUT
and TLB_SPIN_COUNT into arm-smmu.h so that we can use them directly in
this file instead of redefining them. Then it may be clearer that these
are part of the main driver.

>>> +struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu)
>>> +{
>>> +	unsigned int i;
>>> +	struct nvidia_smmu *nvidia_smmu;
>>> +	struct platform_device *pdev = to_platform_device(smmu->dev);
>>> +
>>> +	nvidia_smmu = devm_kzalloc(smmu->dev, sizeof(*nvidia_smmu), GFP_KERNEL);
>>> +	if (!nvidia_smmu)
>>> +		return ERR_PTR(-ENOMEM);
>>> +
>>> +	nvidia_smmu->smmu = *smmu;
>>> +	/* Instance 0 is ioremapped by arm-smmu.c after this function returns */
>>> +	nvidia_smmu->num_inst = 1;
>>> +
>>> +	for (i = 1; i < MAX_SMMU_INSTANCES; i++) {
>>> +		struct resource *res;
>>> +
>>> +		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
>>> +		if (!res)
>>> +			break;
>>> +
>>> +		nvidia_smmu->bases[i] = devm_ioremap_resource(smmu->dev, res);
>>> +		if (IS_ERR(nvidia_smmu->bases[i]))
>>> +			return ERR_CAST(nvidia_smmu->bases[i]);
>>> +
>>> +		nvidia_smmu->num_inst++;
>>> +	}
>>> +
>>> +	nvidia_smmu->smmu.impl = &nvidia_smmu_impl;
>>> +	/*
>>> +	 * Free the arm_smmu_device struct allocated in arm-smmu.c.
>>> +	 * Once this function returns, arm-smmu.c would use arm_smmu_device
>>> +	 * allocated as part of nvidia_smmu struct.
>>> +	 */
>>> +	devm_kfree(smmu->dev, smmu);
>>
>> Why don't we just store the pointer of the smmu struct passed to this
>> function in the nvidia_smmu struct? Then we would not need to free it
>> here. In other words, make ...
>>
>>   struct nvidia_smmu {
>>   	struct arm_smmu_device	*smmu;
>>   	unsigned int		num_inst;
>>   	void __iomem		*bases[MAX_SMMU_INSTANCES];
>>   };
>>
>> This seems more appropriate than copying the struct and freeing memory
>> allocated elsewhere.
>
> But then how do you get back to struct nvidia_smmu given just a pointer
> to struct arm_smmu_device?

Ah yes, of course, that is what I was missing. I wondered what was going
on here. So I think we should add a nice comment in the above function
explaining why we are copying this and cannot simply store the pointer.

Cheers
Jon

-- 
nvpublic
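
For reference, the idiom Robin is alluding to is the usual container_of()
pattern: struct arm_smmu_device is embedded inside struct nvidia_smmu
(rather than pointed to), so the wrapper can be recovered from the
arm_smmu_device pointer that the core arm-smmu code passes around. A
minimal sketch of that idea follows; the to_nvidia_smmu() helper name,
the MAX_SMMU_INSTANCES value and the include lines are illustrative
assumptions, not necessarily what the final patch uses.

#include <linux/kernel.h>	/* container_of() */
#include "arm-smmu.h"		/* struct arm_smmu_device */

#define MAX_SMMU_INSTANCES	3	/* illustrative value */

struct nvidia_smmu {
	struct arm_smmu_device	smmu;	/* embedded; the core driver only ever sees this member */
	unsigned int		num_inst;
	void __iomem		*bases[MAX_SMMU_INSTANCES];
};

/*
 * Recover the wrapper from the embedded member. This is why the init
 * hook copies *smmu into the new allocation and frees the original,
 * rather than just storing the pointer: container_of() only works if
 * the arm_smmu_device actually lives inside the nvidia_smmu.
 */
static inline struct nvidia_smmu *to_nvidia_smmu(struct arm_smmu_device *smmu)
{
	return container_of(smmu, struct nvidia_smmu, smmu);
}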