From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jonathan Marek,
    Akhil P Oommen, Rob Clark
Subject: [PATCH 5.10 101/131] drm/msm/a6xx: update/fix CP_PROTECT initialization
Date: Mon, 14 Jun 2021 12:27:42 +0200
Message-Id: <20210614102656.430438265@linuxfoundation.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210614102652.964395392@linuxfoundation.org>
References: <20210614102652.964395392@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Jonathan Marek

commit 408434036958699a7f50ddec984f7ba33e11a8f5 upstream.

Update CP_PROTECT register programming based on downstream.

A6XX_PROTECT_RW is renamed to A6XX_PROTECT_NORDWR to make things aligned
and also be more clear about what it does.

Note that this required switching to use the CP_ALWAYS_ON_COUNTER, as the
GMU counter is not accessible from the cmdstream. This also means using
the CP counter for the msm_gpu_submit_flush() tracepoint (as catapult
depends on being able to compare this to the start/end values captured
in the cmdstream). This may need to be revisited when IFPC is enabled.

Also, compared to downstream, this opens up CP_PERFCTR_CP_SEL, as the
userspace performance tooling (fdperf and pps-producer) expects to be
able to configure the CP counters.
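For context on the rename: each CP_PROTECT entry packs one protected
span into a single 32-bit value. A minimal sketch of the encoding,
taking A6XX_PROTECT_NORDWR from the a6xx_gpu.h hunk below and assuming
A6XX_PROTECT_RDONLY is the same layout minus the read-protect bit
(bit 31), per the header's comments:

  /* bit 31      - also block reads (NORDWR); clear for RDONLY
   * bits 30..18 - _len, registers protected after _reg (the 0x3FFF
   *               mask allows 14 bits, but values used stay within 13)
   * bits 17..0  - _reg, offset of the first protected register
   */
  #define A6XX_PROTECT_NORDWR(_reg, _len) \
          ((1 << 31) | \
          (((_len) & 0x3FFF) << 18) | ((_reg) & 0x3FFFF))

  /* assumed counterpart, not shown in this diff: */
  #define A6XX_PROTECT_RDONLY(_reg, _len) \
          ((((_len) & 0x3FFF) << 18) | ((_reg) & 0x3FFFF))

As a worked example, A6XX_PROTECT_NORDWR(0x00800, 0x0082) from the new
tables expands to 0x80000000 | (0x82 << 18) | 0x800 == 0x82080800,
blocking both reads and writes for registers 0x800..0x882.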
Fixes: 4b565ca5a2cb ("drm/msm: Add A6XX device support")
Signed-off-by: Jonathan Marek
Reviewed-by: Akhil P Oommen
Link: https://lore.kernel.org/r/20210513171431.18632-5-jonathan@marek.ca
[switch to CP_ALWAYS_ON_COUNTER, open up CP_PERFCNTR_CP_SEL, and spiff up commit msg]
Signed-off-by: Rob Clark
Signed-off-by: Greg Kroah-Hartman
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 151 +++++++++++++++++++++++++---------
 drivers/gpu/drm/msm/adreno/a6xx_gpu.h |   2 
 2 files changed, 113 insertions(+), 40 deletions(-)

--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -154,7 +154,7 @@ static void a6xx_submit(struct msm_gpu *
 	 * GPU registers so we need to add 0x1a800 to the register value on A630
 	 * to get the right value from PM4.
 	 */
-	get_stats_counter(ring, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L + 0x1a800,
+	get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
 		rbmemptr_stats(ring, index, alwayson_start));
 
 	/* Invalidate CCU depth and color */
@@ -184,7 +184,7 @@ static void a6xx_submit(struct msm_gpu *
 	get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP_0_LO,
 		rbmemptr_stats(ring, index, cpcycles_end));
-	get_stats_counter(ring, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L + 0x1a800,
+	get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
 		rbmemptr_stats(ring, index, alwayson_end));
 
 	/* Write the fence to the scratch register */
@@ -203,8 +203,8 @@ static void a6xx_submit(struct msm_gpu *
 	OUT_RING(ring, submit->seqno);
 
 	trace_msm_gpu_submit_flush(submit,
-		gmu_read64(&a6xx_gpu->gmu, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L,
-			REG_A6XX_GMU_ALWAYS_ON_COUNTER_H));
+		gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
+			REG_A6XX_CP_ALWAYS_ON_COUNTER_HI));
 
 	a6xx_flush(gpu, ring);
 }
@@ -459,6 +459,113 @@ static void a6xx_set_hwcg(struct msm_gpu
 	gpu_write(gpu, REG_A6XX_RBBM_CLOCK_CNTL, state ? clock_cntl_on : 0);
 }
 
+/* For a615, a616, a618, A619, a630, a640 and a680 */
+static const u32 a6xx_protect[] = {
+	A6XX_PROTECT_RDONLY(0x00000, 0x04ff),
+	A6XX_PROTECT_RDONLY(0x00501, 0x0005),
+	A6XX_PROTECT_RDONLY(0x0050b, 0x02f4),
+	A6XX_PROTECT_NORDWR(0x0050e, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00510, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00534, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00800, 0x0082),
+	A6XX_PROTECT_NORDWR(0x008a0, 0x0008),
+	A6XX_PROTECT_NORDWR(0x008ab, 0x0024),
+	A6XX_PROTECT_RDONLY(0x008de, 0x00ae),
+	A6XX_PROTECT_NORDWR(0x00900, 0x004d),
+	A6XX_PROTECT_NORDWR(0x0098d, 0x0272),
+	A6XX_PROTECT_NORDWR(0x00e00, 0x0001),
+	A6XX_PROTECT_NORDWR(0x00e03, 0x000c),
+	A6XX_PROTECT_NORDWR(0x03c00, 0x00c3),
+	A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x08630, 0x01cf),
+	A6XX_PROTECT_NORDWR(0x08e00, 0x0000),
+	A6XX_PROTECT_NORDWR(0x08e08, 0x0000),
+	A6XX_PROTECT_NORDWR(0x08e50, 0x001f),
+	A6XX_PROTECT_NORDWR(0x09624, 0x01db),
+	A6XX_PROTECT_NORDWR(0x09e70, 0x0001),
+	A6XX_PROTECT_NORDWR(0x09e78, 0x0187),
+	A6XX_PROTECT_NORDWR(0x0a630, 0x01cf),
+	A6XX_PROTECT_NORDWR(0x0ae02, 0x0000),
+	A6XX_PROTECT_NORDWR(0x0ae50, 0x032f),
+	A6XX_PROTECT_NORDWR(0x0b604, 0x0000),
+	A6XX_PROTECT_NORDWR(0x0be02, 0x0001),
+	A6XX_PROTECT_NORDWR(0x0be20, 0x17df),
+	A6XX_PROTECT_NORDWR(0x0f000, 0x0bff),
+	A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x11c00, 0x0000), /* note: infinite range */
+};
+
+/* These are for a620 and a650 */
+static const u32 a650_protect[] = {
+	A6XX_PROTECT_RDONLY(0x00000, 0x04ff),
+	A6XX_PROTECT_RDONLY(0x00501, 0x0005),
+	A6XX_PROTECT_RDONLY(0x0050b, 0x02f4),
+	A6XX_PROTECT_NORDWR(0x0050e, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00510, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00534, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00800, 0x0082),
+	A6XX_PROTECT_NORDWR(0x008a0, 0x0008),
+	A6XX_PROTECT_NORDWR(0x008ab, 0x0024),
+	A6XX_PROTECT_RDONLY(0x008de, 0x00ae),
+	A6XX_PROTECT_NORDWR(0x00900, 0x004d),
+	A6XX_PROTECT_NORDWR(0x0098d, 0x0272),
+	A6XX_PROTECT_NORDWR(0x00e00, 0x0001),
+	A6XX_PROTECT_NORDWR(0x00e03, 0x000c),
+	A6XX_PROTECT_NORDWR(0x03c00, 0x00c3),
+	A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x08630, 0x01cf),
+	A6XX_PROTECT_NORDWR(0x08e00, 0x0000),
+	A6XX_PROTECT_NORDWR(0x08e08, 0x0000),
+	A6XX_PROTECT_NORDWR(0x08e50, 0x001f),
+	A6XX_PROTECT_NORDWR(0x08e80, 0x027f),
+	A6XX_PROTECT_NORDWR(0x09624, 0x01db),
+	A6XX_PROTECT_NORDWR(0x09e60, 0x0011),
+	A6XX_PROTECT_NORDWR(0x09e78, 0x0187),
+	A6XX_PROTECT_NORDWR(0x0a630, 0x01cf),
+	A6XX_PROTECT_NORDWR(0x0ae02, 0x0000),
+	A6XX_PROTECT_NORDWR(0x0ae50, 0x032f),
+	A6XX_PROTECT_NORDWR(0x0b604, 0x0000),
+	A6XX_PROTECT_NORDWR(0x0b608, 0x0007),
+	A6XX_PROTECT_NORDWR(0x0be02, 0x0001),
+	A6XX_PROTECT_NORDWR(0x0be20, 0x17df),
+	A6XX_PROTECT_NORDWR(0x0f000, 0x0bff),
+	A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x18400, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x1a800, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x1f400, 0x0443),
+	A6XX_PROTECT_RDONLY(0x1f844, 0x007b),
+	A6XX_PROTECT_NORDWR(0x1f887, 0x001b),
+	A6XX_PROTECT_NORDWR(0x1f8c0, 0x0000), /* note: infinite range */
+};
+
+static void a6xx_set_cp_protect(struct msm_gpu *gpu)
+{
+	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+	const u32 *regs = a6xx_protect;
+	unsigned i, count = ARRAY_SIZE(a6xx_protect), count_max = 32;
+
+	BUILD_BUG_ON(ARRAY_SIZE(a6xx_protect) > 32);
+	BUILD_BUG_ON(ARRAY_SIZE(a650_protect) > 48);
+
+	if (adreno_is_a650(adreno_gpu)) {
+		regs = a650_protect;
+		count = ARRAY_SIZE(a650_protect);
+		count_max = 48;
+	}
+
+	/*
+	 * Enable access protection to privileged registers, fault on an access
+	 * protect violation and select the last span to protect from the start
+	 * address all the way to the end of the register address space
+	 */
+	gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, BIT(0) | BIT(1) | BIT(3));
+
+	for (i = 0; i < count - 1; i++)
+		gpu_write(gpu, REG_A6XX_CP_PROTECT(i), regs[i]);
+	/* last CP_PROTECT to have "infinite" length on the last entry */
+	gpu_write(gpu, REG_A6XX_CP_PROTECT(count_max - 1), regs[i]);
+}
+
 static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -722,41 +829,7 @@ static int a6xx_hw_init(struct msm_gpu *
 	}
 
 	/* Protect registers from the CP */
-	gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, 0x00000003);
-
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(0),
-			A6XX_PROTECT_RDONLY(0x600, 0x51));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(1), A6XX_PROTECT_RW(0xae50, 0x2));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(2), A6XX_PROTECT_RW(0x9624, 0x13));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(3), A6XX_PROTECT_RW(0x8630, 0x8));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(4), A6XX_PROTECT_RW(0x9e70, 0x1));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(5), A6XX_PROTECT_RW(0x9e78, 0x187));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(6), A6XX_PROTECT_RW(0xf000, 0x810));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(7),
-			A6XX_PROTECT_RDONLY(0xfc00, 0x3));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(8), A6XX_PROTECT_RW(0x50e, 0x0));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(9), A6XX_PROTECT_RDONLY(0x50f, 0x0));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(10), A6XX_PROTECT_RW(0x510, 0x0));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(11),
-			A6XX_PROTECT_RDONLY(0x0, 0x4f9));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(12),
-			A6XX_PROTECT_RDONLY(0x501, 0xa));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(13),
-			A6XX_PROTECT_RDONLY(0x511, 0x44));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(14), A6XX_PROTECT_RW(0xe00, 0xe));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(15), A6XX_PROTECT_RW(0x8e00, 0x0));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(16), A6XX_PROTECT_RW(0x8e50, 0xf));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(17), A6XX_PROTECT_RW(0xbe02, 0x0));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(18),
-			A6XX_PROTECT_RW(0xbe20, 0x11f3));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(19), A6XX_PROTECT_RW(0x800, 0x82));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(20), A6XX_PROTECT_RW(0x8a0, 0x8));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(21), A6XX_PROTECT_RW(0x8ab, 0x19));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(22), A6XX_PROTECT_RW(0x900, 0x4d));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(23), A6XX_PROTECT_RW(0x98d, 0x76));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(24),
-			A6XX_PROTECT_RDONLY(0x980, 0x4));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(25), A6XX_PROTECT_RW(0xa630, 0x0));
+	a6xx_set_cp_protect(gpu);
 
 	/* Enable expanded apriv for targets that support it */
 	if (gpu->hw_apriv) {
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -37,7 +37,7 @@ struct a6xx_gpu {
  * REG_CP_PROTECT_REG(n) - this will block both reads and writes for _len
  * registers starting at _reg.
  */
-#define A6XX_PROTECT_RW(_reg, _len) \
+#define A6XX_PROTECT_NORDWR(_reg, _len) \
 	((1 << 31) | \
 	(((_len) & 0x3FFF) << 18) | ((_reg) & 0x3FFFF))
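
As a reading aid for the protect tables above, a small standalone
decoder for one packed entry. Illustrative only, not driver code; it
assumes the length field occupies bits 30..18, i.e. that bit 31 acts
solely as the read-protect flag:

  #include <stdbool.h>
  #include <stdio.h>

  /* Inverse of the A6XX_PROTECT_* encoding sketched earlier. */
  static void decode_protect(unsigned val)
  {
          unsigned reg = val & 0x3FFFF;        /* bits 17..0: first register */
          unsigned len = (val >> 18) & 0x1FFF; /* bits 30..18: span length */
          bool no_read = val & (1u << 31);     /* NORDWR vs RDONLY */

          printf("0x%05x..0x%05x %s\n", reg, reg + len,
                 no_read ? "reads and writes blocked" : "read-only");
  }

  int main(void)
  {
          decode_protect(0x82080800); /* A6XX_PROTECT_NORDWR(0x00800, 0x0082) */
          return 0;
  }

This prints "0x00800..0x00882 reads and writes blocked", matching the
a6xx_protect table entry for the 0x800 range.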