Date: Fri, 15 Jun 2018 17:52:32 +0100
From: Will Deacon
To: Vivek Gautam
Cc: robin.murphy@arm.com, joro@8bytes.org, linux-arm-kernel@lists.infradead.org,
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    linux-arm-msm@vger.kernel.org, pdaly@codeaurora.org
Subject: Re: [PATCH 1/1] iommu/arm-smmu: Add support to use Last level cache
Message-ID: <20180615165232.GE2202@arm.com>
References: <20180615105329.26800-1-vivek.gautam@codeaurora.org>
In-Reply-To: <20180615105329.26800-1-vivek.gautam@codeaurora.org>

Hi Vivek,

On Fri, Jun 15, 2018 at 04:23:29PM +0530, Vivek Gautam wrote:
> Qualcomm SoCs have an additional level of cache called the
> System cache or Last level cache [1]. This cache sits right
> before the DDR and is tightly coupled with the memory controller.
> The cache is available to all the clients present in the SoC.
> Clients request their slices from this system cache, activate
> them, and can then start using them. For clients behind an SMMU
> to start using the system cache for DMA buffers and the related
> page tables [2], a few memory attributes need to be set
> accordingly.
> This change makes the related memory Outer-Shareable, and updates
> the MAIR with the necessary protection attributes.
>
> The MAIR attribute requirements are:
>     Inner Cacheability = 0
>     Outer Cacheability = 1, Write-Back Write-Allocate
>     Outer Shareability = 1

Hmm, so is this cache coherent with the CPU or not? Why aren't normal
non-cacheable mappings allocated in the LLC by default?

> diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
> index f7a96bcf94a6..8058e7205034 100644
> --- a/drivers/iommu/arm-smmu.c
> +++ b/drivers/iommu/arm-smmu.c
> @@ -249,6 +249,7 @@ struct arm_smmu_domain {
>  	struct mutex		init_mutex; /* Protects smmu pointer */
>  	spinlock_t		cb_lock; /* Serialises ATS1* ops and TLB syncs */
>  	struct iommu_domain	domain;
> +	bool			has_sys_cache;
>  };
>  
>  struct arm_smmu_option_prop {
> @@ -862,6 +863,8 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
>  
>  	if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
>  		pgtbl_cfg.quirks = IO_PGTABLE_QUIRK_NO_DMA;
> +	if (smmu_domain->has_sys_cache)
> +		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_SYS_CACHE;
>  
>  	smmu_domain->smmu = smmu;
>  	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
> @@ -1477,6 +1480,9 @@ static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
>  	case DOMAIN_ATTR_NESTING:
>  		*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
>  		return 0;
> +	case DOMAIN_ATTR_USE_SYS_CACHE:
> +		*((int *)data) = smmu_domain->has_sys_cache;
> +		return 0;

I really don't like exposing this to clients directly like this,
particularly as there aren't any in-tree users. I would prefer that we
provide a way for the io-pgtable code to have its MAIR values overridden
so that all non-coherent DMA ends up using the system cache.

Will
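
For reference, the standalone sketch below shows the MAIR encoding the
commit message asks for, built the same way io-pgtable-arm builds its
MAIR_EL1-format register (one attribute byte per index). The 0x44/0xff/0x04
bytes match the NC/CACHE/DEV encodings io-pgtable-arm already programs; the
fourth byte (0xf4: Outer Write-Back read/write-allocate, Inner
Non-cacheable) and the MAIR_IDX_SYS_CACHE name are illustrative assumptions,
not identifiers from the posted patch. The Outer-Shareable requirement does
not live in MAIR at all; it is carried in the SH field of each page-table
entry.

/*
 * mair_sketch.c - standalone illustration, not kernel code.
 *
 * Builds a MAIR_EL1-format value containing the three attributes
 * io-pgtable-arm programs today, plus a hypothetical fourth entry for
 * "Inner Non-cacheable, Outer Write-Back read/write-allocate", which is
 * the cacheability the commit message requests for LLC-backed DMA.
 */
#include <stdio.h>
#include <stdint.h>

/* Attribute bytes as encoded for MAIR (upper nibble: outer, lower: inner). */
#define MAIR_ATTR_NC		0x44ULL	/* Normal, Inner/Outer Non-cacheable  */
#define MAIR_ATTR_WBRWA		0xffULL	/* Normal, Inner/Outer WB RW-allocate */
#define MAIR_ATTR_DEVICE	0x04ULL	/* Device-nGnRE                       */
#define MAIR_ATTR_INC_OWBRWA	0xf4ULL	/* Inner NC, Outer WB RW-allocate (hypothetical) */

/* One byte per attribute index in the 64-bit register. */
#define MAIR_IDX_NC		0
#define MAIR_IDX_CACHE		1
#define MAIR_IDX_DEV		2
#define MAIR_IDX_SYS_CACHE	3	/* hypothetical index for LLC mappings */

#define MAIR_ATTR_SHIFT(idx)	((idx) << 3)

int main(void)
{
	uint64_t mair;

	/* The three attributes io-pgtable-arm sets up unconditionally. */
	mair  = MAIR_ATTR_NC     << MAIR_ATTR_SHIFT(MAIR_IDX_NC);
	mair |= MAIR_ATTR_WBRWA  << MAIR_ATTR_SHIFT(MAIR_IDX_CACHE);
	mair |= MAIR_ATTR_DEVICE << MAIR_ATTR_SHIFT(MAIR_IDX_DEV);

	/* Hypothetical extra attribute for non-coherent, LLC-allocating DMA. */
	mair |= MAIR_ATTR_INC_OWBRWA << MAIR_ATTR_SHIFT(MAIR_IDX_SYS_CACHE);

	printf("MAIR = 0x%016llx\n", (unsigned long long)mair);
	return 0;
}

Running it prints MAIR = 0x00000000f404ff44. An io-pgtable-level override
of the kind suggested above would ultimately need to produce a value of
this shape, and attach the new attribute index (plus the Outer-Shareable
SH bits) to the page-table entries used for non-coherent DMA.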