Subject: Re: [PATCH 4/5] iommu/arm-smmu: Make way to add Qcom's smmu-500 errata handling
To: Vivek Gautam, joro@8bytes.org, andy.gross@linaro.org, will.deacon@arm.com,
 bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org
Cc: mark.rutland@arm.com, david.brown@linaro.org, tfiga@chromium.org,
 swboyd@chromium.org, linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
References: <20180814105528.20592-1-vivek.gautam@codeaurora.org>
 <20180814105528.20592-5-vivek.gautam@codeaurora.org>
From: Robin Murphy
Date: Tue, 14 Aug 2018 17:59:08 +0100
In-Reply-To: <20180814105528.20592-5-vivek.gautam@codeaurora.org>

On 14/08/18 11:55, Vivek Gautam wrote:
> Cleanup to re-use some of the stuff
> 
> Signed-off-by: Vivek Gautam
> ---
>  drivers/iommu/arm-smmu.c | 32 +++++++++++++++++++++++++-------
>  1 file changed, 25 insertions(+), 7 deletions(-)

I think the overall diffstat would be an awful lot smaller if the 
erratum workaround just had its own readl_poll_timeout() as it does in 
the vendor kernel (see the rough sketch at the end of this mail). The 
burst-polling loop is for minimising latency in high-throughput 
situations, and if you're in a workaround which has to lock *every* 
register write and issue two firmware calls around each sync, I think 
you're already well out of that game.

> diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
> index 32e86df80428..75c146751c87 100644
> --- a/drivers/iommu/arm-smmu.c
> +++ b/drivers/iommu/arm-smmu.c
> @@ -391,21 +391,31 @@ static void __arm_smmu_free_bitmap(unsigned long *map, int idx)
>  	clear_bit(idx, map);
>  }
>  
> -/* Wait for any pending TLB invalidations to complete */
> -static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu,
> -				void __iomem *sync, void __iomem *status)
> +static int __arm_smmu_tlb_sync_wait(void __iomem *status)
>  {
>  	unsigned int spin_cnt, delay;
>  
> -	writel_relaxed(0, sync);
>  	for (delay = 1; delay < TLB_LOOP_TIMEOUT; delay *= 2) {
>  		for (spin_cnt = TLB_SPIN_COUNT; spin_cnt > 0; spin_cnt--) {
>  			if (!(readl_relaxed(status) & sTLBGSTATUS_GSACTIVE))
> -				return;
> +				return 0;
>  			cpu_relax();
>  		}
>  		udelay(delay);
>  	}
> +
> +	return -EBUSY;
> +}
> +
> +/* Wait for any pending TLB invalidations to complete */
> +static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu,
> +				void __iomem *sync, void __iomem *status)
> +{
> +	writel_relaxed(0, sync);
> +
> +	if (!__arm_smmu_tlb_sync_wait(status))
> +		return;
> +
>  	dev_err_ratelimited(smmu->dev,
>  			    "TLB sync timed out -- SMMU may be deadlocked\n");
>  }
> @@ -461,8 +471,9 @@ static void arm_smmu_tlb_inv_context_s2(void *cookie)
>  	arm_smmu_tlb_sync_global(smmu);
>  }
>  
> -static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> -					  size_t granule, bool leaf, void *cookie)
> +static void __arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> +					    size_t granule, bool leaf,
> +					    void *cookie)
>  {
>  	struct arm_smmu_domain *smmu_domain = cookie;
>  	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
> @@ -498,6 +509,13 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  	}
>  }
>  
> +static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> +					  size_t granule, bool leaf,
> +					  void *cookie)
> +{
> +	__arm_smmu_tlb_inv_range_nosync(iova, size, granule, leaf, cookie);
> +}
> +

AFAICS even after patch #5 this does absolutely nothing except make the 
code needlessly harder to read :(

Robin.

>  /*
>   * On MMU-401 at least, the cost of firing off multiple TLBIVMIDs appears
>   * almost negligible, but the benefit of getting the first one in as far ahead
> 
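
P.S. To make the readl_poll_timeout() point concrete, here is a rough 
sketch only, not the vendor-kernel code: the qcom_smmu_tlb_sync() name 
and the firmware-call placeholders are made up for illustration, and it 
uses the _atomic variant since the sync paths run under a spinlock.

#include <linux/iopoll.h>

static void qcom_smmu_tlb_sync(struct arm_smmu_device *smmu,
			       void __iomem *sync, void __iomem *status)
{
	u32 reg;

	/* hypothetical: erratum stall-disable firmware call here */

	writel_relaxed(0, sync);

	/*
	 * Poll every 1us, with the existing TLB_LOOP_TIMEOUT (in us) as
	 * the overall budget; returns non-zero on timeout.
	 */
	if (readl_poll_timeout_atomic(status, reg,
				      !(reg & sTLBGSTATUS_GSACTIVE),
				      1, TLB_LOOP_TIMEOUT))
		dev_err_ratelimited(smmu->dev,
				    "TLB sync timed out -- SMMU may be deadlocked\n");

	/* hypothetical: erratum stall-re-enable/resume firmware call here */
}

The point is that the timeout handling then collapses into a single 
helper call in the erratum path, so none of the burst-polling 
refactoring above would be needed.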