From: Jonathan Marek <jonathan@marek.ca>
To: freedreno@lists.freedesktop.org
Cc: Rob Clark, Sean Paul, Abhinav Kumar, David Airlie, Daniel Vetter,
	Dan Carpenter, Akhil P Oommen, Jordan Crouse, Vladimir Lypak,
	Yangtao Li, Christian König, Dmitry Baryshkov, Douglas Anderson,
	linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
	dri-devel@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 2/4] drm/msm/adreno: use a single register offset for gpu_read64/gpu_write64
Date: Sun, 27 Mar 2022 16:25:55 -0400
Message-Id: <20220327202643.4053-3-jonathan@marek.ca>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20220327202643.4053-1-jonathan@marek.ca>
References: <20220327202643.4053-1-jonathan@marek.ca>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The high half of 64-bit registers is always at +1 offset, so make these
helpers more convenient by removing the unnecessary "hi" register
argument.

Signed-off-by: Jonathan Marek <jonathan@marek.ca>
---
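A note for reviewers, as a quick illustration of the resulting calling
convention: callers name only the low register and the helper accesses the
high half at reg + 1, e.g. the ringbuffer setup in a5xx_hw_init() below goes
from

	gpu_write64(gpu, REG_A5XX_CP_RB_BASE, REG_A5XX_CP_RB_BASE_HI,
		gpu->rb[0]->iova);

to

	gpu_write64(gpu, REG_A5XX_CP_RB_BASE, gpu->rb[0]->iova);
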
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c       |  3 +--
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c       | 27 ++++++++-------------
 drivers/gpu/drm/msm/adreno/a5xx_preempt.c   |  4 +--
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c       | 25 ++++++-------------
 drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c |  3 +--
 drivers/gpu/drm/msm/msm_gpu.h               | 12 ++++-----
 6 files changed, 27 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index 0c6b2a6d0b4c9..da5e18bd74a45 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -606,8 +606,7 @@ static int a4xx_pm_suspend(struct msm_gpu *gpu) {
 
 static int a4xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
 {
-	*value = gpu_read64(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_LO,
-		REG_A4XX_RBBM_PERFCTR_CP_0_HI);
+	*value = gpu_read64(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_LO);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 407f50a15faa4..1916cb759cd5c 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -605,11 +605,9 @@ static int a5xx_ucode_init(struct msm_gpu *gpu)
 		a5xx_ucode_check_version(a5xx_gpu, a5xx_gpu->pfp_bo);
 	}
 
-	gpu_write64(gpu, REG_A5XX_CP_ME_INSTR_BASE_LO,
-		REG_A5XX_CP_ME_INSTR_BASE_HI, a5xx_gpu->pm4_iova);
+	gpu_write64(gpu, REG_A5XX_CP_ME_INSTR_BASE_LO, a5xx_gpu->pm4_iova);
 
-	gpu_write64(gpu, REG_A5XX_CP_PFP_INSTR_BASE_LO,
-		REG_A5XX_CP_PFP_INSTR_BASE_HI, a5xx_gpu->pfp_iova);
+	gpu_write64(gpu, REG_A5XX_CP_PFP_INSTR_BASE_LO, a5xx_gpu->pfp_iova);
 
 	return 0;
 }
@@ -868,8 +866,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
 	 * memory rendering at this point in time and we don't want to block off
 	 * part of the virtual memory space.
 	 */
-	gpu_write64(gpu, REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO,
-		REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_HI, 0x00000000);
+	gpu_write64(gpu, REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO, 0x00000000);
 	gpu_write(gpu, REG_A5XX_RBBM_SECVID_TSB_TRUSTED_SIZE, 0x00000000);
 
 	/* Put the GPU into 64 bit by default */
@@ -908,8 +905,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
 		return ret;
 
 	/* Set the ringbuffer address */
-	gpu_write64(gpu, REG_A5XX_CP_RB_BASE, REG_A5XX_CP_RB_BASE_HI,
-		gpu->rb[0]->iova);
+	gpu_write64(gpu, REG_A5XX_CP_RB_BASE, gpu->rb[0]->iova);
 
 	/*
 	 * If the microcode supports the WHERE_AM_I opcode then we can use that
@@ -936,7 +932,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
 		}
 
 		gpu_write64(gpu, REG_A5XX_CP_RB_RPTR_ADDR,
-			REG_A5XX_CP_RB_RPTR_ADDR_HI, shadowptr(a5xx_gpu, gpu->rb[0]));
+			shadowptr(a5xx_gpu, gpu->rb[0]));
 	} else if (gpu->nr_rings > 1) {
 		/* Disable preemption if WHERE_AM_I isn't available */
 		a5xx_preempt_fini(gpu);
@@ -1239,9 +1235,9 @@ static void a5xx_fault_detect_irq(struct msm_gpu *gpu)
 		gpu_read(gpu, REG_A5XX_RBBM_STATUS),
 		gpu_read(gpu, REG_A5XX_CP_RB_RPTR),
 		gpu_read(gpu, REG_A5XX_CP_RB_WPTR),
-		gpu_read64(gpu, REG_A5XX_CP_IB1_BASE, REG_A5XX_CP_IB1_BASE_HI),
+		gpu_read64(gpu, REG_A5XX_CP_IB1_BASE),
 		gpu_read(gpu, REG_A5XX_CP_IB1_BUFSZ),
-		gpu_read64(gpu, REG_A5XX_CP_IB2_BASE, REG_A5XX_CP_IB2_BASE_HI),
+		gpu_read64(gpu, REG_A5XX_CP_IB2_BASE),
 		gpu_read(gpu, REG_A5XX_CP_IB2_BUFSZ));
 
 	/* Turn off the hangcheck timer to keep it from bothering us */
@@ -1427,8 +1423,7 @@ static int a5xx_pm_suspend(struct msm_gpu *gpu)
 
 static int a5xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
 {
-	*value = gpu_read64(gpu, REG_A5XX_RBBM_ALWAYSON_COUNTER_LO,
-		REG_A5XX_RBBM_ALWAYSON_COUNTER_HI);
+	*value = gpu_read64(gpu, REG_A5XX_RBBM_ALWAYSON_COUNTER_LO);
 
 	return 0;
 }
@@ -1465,8 +1460,7 @@ static int a5xx_crashdumper_run(struct msm_gpu *gpu,
 	if (IS_ERR_OR_NULL(dumper->ptr))
 		return -EINVAL;
 
-	gpu_write64(gpu, REG_A5XX_CP_CRASH_SCRIPT_BASE_LO,
-		REG_A5XX_CP_CRASH_SCRIPT_BASE_HI, dumper->iova);
+	gpu_write64(gpu, REG_A5XX_CP_CRASH_SCRIPT_BASE_LO, dumper->iova);
 
 	gpu_write(gpu, REG_A5XX_CP_CRASH_DUMP_CNTL, 1);
 
@@ -1670,8 +1664,7 @@ static unsigned long a5xx_gpu_busy(struct msm_gpu *gpu)
 	if (pm_runtime_get_if_in_use(&gpu->pdev->dev) == 0)
 		return 0;
 
-	busy_cycles = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_RBBM_0_LO,
-		REG_A5XX_RBBM_PERFCTR_RBBM_0_HI);
+	busy_cycles = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_RBBM_0_LO);
 	busy_time = busy_cycles - gpu->devfreq.busy_cycles;
 	do_div(busy_time, clk_get_rate(gpu->core_clk) / 1000000);
 
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
index 8abc9a2b114a2..7658e89844b46 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
@@ -137,7 +137,6 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu)
 
 	/* Set the address of the incoming preemption record */
 	gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_LO,
-		REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_HI,
 		a5xx_gpu->preempt_iova[ring->id]);
 
 	a5xx_gpu->next_ring = ring;
@@ -211,8 +210,7 @@ void a5xx_preempt_hw_init(struct msm_gpu *gpu)
 	}
 
 	/* Write a 0 to signal that we aren't switching pagetables */
-	gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_SMMU_INFO_LO,
-		REG_A5XX_CP_CONTEXT_SWITCH_SMMU_INFO_HI, 0);
+	gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_SMMU_INFO_LO, 0);
 
 	/* Reset the preemption state */
 	set_preempt_state(a5xx_gpu, PREEMPT_NONE);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 83c31b2ad865b..a624cb2df233b 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -246,8 +246,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 	OUT_RING(ring, submit->seqno);
 
 	trace_msm_gpu_submit_flush(submit,
-		gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
-			REG_A6XX_CP_ALWAYS_ON_COUNTER_HI));
+		gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO));
 
 	a6xx_flush(gpu, ring);
 }
@@ -878,8 +877,7 @@ static int a6xx_ucode_init(struct msm_gpu *gpu)
 		}
 	}
 
-	gpu_write64(gpu, REG_A6XX_CP_SQE_INSTR_BASE,
-		REG_A6XX_CP_SQE_INSTR_BASE+1, a6xx_gpu->sqe_iova);
+	gpu_write64(gpu, REG_A6XX_CP_SQE_INSTR_BASE, a6xx_gpu->sqe_iova);
 
 	return 0;
 }
@@ -926,8 +924,7 @@ static int hw_init(struct msm_gpu *gpu)
 	 * memory rendering at this point in time and we don't want to block off
 	 * part of the virtual memory space.
 	 */
-	gpu_write64(gpu, REG_A6XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO,
-		REG_A6XX_RBBM_SECVID_TSB_TRUSTED_BASE_HI, 0x00000000);
+	gpu_write64(gpu, REG_A6XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO, 0x00000000);
 	gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_TRUSTED_SIZE, 0x00000000);
 
 	/* Turn on 64 bit addressing for all blocks */
@@ -976,11 +973,8 @@ static int hw_init(struct msm_gpu *gpu)
 
 	if (!adreno_is_a650_family(adreno_gpu)) {
 		/* Set the GMEM VA range [0x100000:0x100000 + gpu->gmem - 1] */
-		gpu_write64(gpu, REG_A6XX_UCHE_GMEM_RANGE_MIN_LO,
-			REG_A6XX_UCHE_GMEM_RANGE_MIN_HI, 0x00100000);
-
+		gpu_write64(gpu, REG_A6XX_UCHE_GMEM_RANGE_MIN_LO, 0x00100000);
 		gpu_write64(gpu, REG_A6XX_UCHE_GMEM_RANGE_MAX_LO,
-			REG_A6XX_UCHE_GMEM_RANGE_MAX_HI,
 			0x00100000 + adreno_gpu->gmem - 1);
 	}
 
@@ -1072,8 +1066,7 @@ static int hw_init(struct msm_gpu *gpu)
 		goto out;
 
 	/* Set the ringbuffer address */
-	gpu_write64(gpu, REG_A6XX_CP_RB_BASE, REG_A6XX_CP_RB_BASE_HI,
-		gpu->rb[0]->iova);
+	gpu_write64(gpu, REG_A6XX_CP_RB_BASE, gpu->rb[0]->iova);
 
 	/* Targets that support extended APRIV can use the RPTR shadow from
 	 * hardware but all the other ones need to disable the feature. Targets
@@ -1105,7 +1098,6 @@ static int hw_init(struct msm_gpu *gpu)
 	}
 
 	gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR_LO,
-		REG_A6XX_CP_RB_RPTR_ADDR_HI,
 		shadowptr(a6xx_gpu, gpu->rb[0]));
 	}
 
@@ -1394,9 +1386,9 @@ static void a6xx_fault_detect_irq(struct msm_gpu *gpu)
 		gpu_read(gpu, REG_A6XX_RBBM_STATUS),
 		gpu_read(gpu, REG_A6XX_CP_RB_RPTR),
 		gpu_read(gpu, REG_A6XX_CP_RB_WPTR),
-		gpu_read64(gpu, REG_A6XX_CP_IB1_BASE, REG_A6XX_CP_IB1_BASE_HI),
+		gpu_read64(gpu, REG_A6XX_CP_IB1_BASE),
 		gpu_read(gpu, REG_A6XX_CP_IB1_REM_SIZE),
-		gpu_read64(gpu, REG_A6XX_CP_IB2_BASE, REG_A6XX_CP_IB2_BASE_HI),
+		gpu_read64(gpu, REG_A6XX_CP_IB2_BASE),
 		gpu_read(gpu, REG_A6XX_CP_IB2_REM_SIZE));
 
 	/* Turn off the hangcheck timer to keep it from bothering us */
@@ -1607,8 +1599,7 @@ static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
 	/* Force the GPU power on so we can read this register */
 	a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
 
-	*value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
-		REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
+	*value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO);
 
 	a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
index 55f443328d8e7..c61b233aff09b 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
@@ -147,8 +147,7 @@ static int a6xx_crashdumper_run(struct msm_gpu *gpu,
 	/* Make sure all pending memory writes are posted */
 	wmb();
 
-	gpu_write64(gpu, REG_A6XX_CP_CRASH_SCRIPT_BASE_LO,
-		REG_A6XX_CP_CRASH_SCRIPT_BASE_HI, dumper->iova);
+	gpu_write64(gpu, REG_A6XX_CP_CRASH_SCRIPT_BASE_LO, dumper->iova);
 
 	gpu_write(gpu, REG_A6XX_CP_CRASH_DUMP_CNTL, 1);
 
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 02419f2ca2bc5..f7fca687d45de 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -503,7 +503,7 @@ static inline void gpu_rmw(struct msm_gpu *gpu, u32 reg, u32 mask, u32 or)
 	msm_rmw(gpu->mmio + (reg << 2), mask, or);
 }
 
-static inline u64 gpu_read64(struct msm_gpu *gpu, u32 lo, u32 hi)
+static inline u64 gpu_read64(struct msm_gpu *gpu, u32 reg)
 {
 	u64 val;
 
@@ -521,17 +521,17 @@ static inline u64 gpu_read64(struct msm_gpu *gpu, u32 lo, u32 hi)
 	 * when the lo is read, so make sure to read the lo first to trigger
 	 * that
 	 */
-	val = (u64) msm_readl(gpu->mmio + (lo << 2));
-	val |= ((u64) msm_readl(gpu->mmio + (hi << 2)) << 32);
+	val = (u64) msm_readl(gpu->mmio + (reg << 2));
+	val |= ((u64) msm_readl(gpu->mmio + ((reg + 1) << 2)) << 32);
 
 	return val;
 }
 
-static inline void gpu_write64(struct msm_gpu *gpu, u32 lo, u32 hi, u64 val)
+static inline void gpu_write64(struct msm_gpu *gpu, u32 reg, u64 val)
 {
 	/* Why not a writeq here? Read the screed above */
-	msm_writel(lower_32_bits(val), gpu->mmio + (lo << 2));
-	msm_writel(upper_32_bits(val), gpu->mmio + (hi << 2));
+	msm_writel(lower_32_bits(val), gpu->mmio + (reg << 2));
+	msm_writel(upper_32_bits(val), gpu->mmio + ((reg + 1) << 2));
 }
 
 int msm_gpu_pm_suspend(struct msm_gpu *gpu);
-- 
2.26.1