From: Rob Herring
To: Will Deacon, Catalin Marinas, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Jiri Olsa
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	Alexander Shishkin, Namhyung Kim, Raphael Gault, Mark Rutland,
	Jonathan Cameron, Ian Rogers, honnappa.nagarahalli@arm.com
Subject: [PATCH v3 03/10] arm64: perf: Enable pmu counter direct access for perf event on armv8
Date: Fri, 11 Sep 2020 15:51:11 -0600
Message-Id: <20200911215118.2887710-4-robh@kernel.org>
In-Reply-To: <20200911215118.2887710-1-robh@kernel.org>
References: <20200911215118.2887710-1-robh@kernel.org>

From: Raphael Gault

Keep track of events opened with direct access to the hardware counters
and modify permissions while they are open.

The strategy used here is the same one x86 uses: every time an event is
mapped, the permissions are set if required. The atomic field added to
the mm_context keeps track of the events opened and deactivates the
permissions when all of them are unmapped.

We also need to update the permissions in the context-switch code so
that tasks keep the right permissions.

Signed-off-by: Raphael Gault
Signed-off-by: Rob Herring
---
v2:
 - Move mapped/unmapped into arm64 code. Fixes arm32.
 - Rebase on cap_user_time_short changes

Changes from Raphael's v4:
 - Drop homogeneous check
 - Disable access for chained counters
 - Set pmc_width in user page

(A sketch of the expected userspace usage follows the patch, for reference.)
---
 arch/arm64/include/asm/mmu.h         |  5 ++++
 arch/arm64/include/asm/mmu_context.h |  2 ++
 arch/arm64/include/asm/perf_event.h  | 14 ++++++++++
 arch/arm64/kernel/perf_event.c       | 41 ++++++++++++++++++++++++++++
 4 files changed, 62 insertions(+)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index a7a5ecaa2e83..52cfdb676f06 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -19,6 +19,11 @@
 typedef struct {
 	atomic64_t	id;
+	/*
+	 * non-zero if userspace has direct access to the
+	 * hardware counters.
+	 */
+	atomic_t	pmu_direct_access;
 #ifdef CONFIG_COMPAT
 	void		*sigpage;
 #endif
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index f2d7537d6f83..d24589ecb07a 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -224,6 +225,7 @@ static inline void __switch_mm(struct mm_struct *next)
 	}
 
 	check_and_switch_context(next);
+	perf_switch_user_access(next);
 }
 
 static inline void
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index 2c2d7dbe8a02..a025d9595d51 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -8,6 +8,7 @@
 #include
 #include
+#include
 
 #define	ARMV8_PMU_MAX_COUNTERS	32
 #define	ARMV8_PMU_COUNTER_MASK	(ARMV8_PMU_MAX_COUNTERS - 1)
@@ -251,4 +252,17 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs);
 	(regs)->pstate = PSR_MODE_EL1h;	\
 }
 
+static inline void perf_switch_user_access(struct mm_struct *mm)
+{
+	if (!IS_ENABLED(CONFIG_PERF_EVENTS))
+		return;
+
+	if (atomic_read(&mm->context.pmu_direct_access)) {
+		write_sysreg(ARMV8_PMU_USERENR_ER|ARMV8_PMU_USERENR_CR,
+			     pmuserenr_el0);
+	} else {
+		write_sysreg(0, pmuserenr_el0);
+	}
+}
+
 #endif
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 2727d126cecd..cf44591f5be1 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -836,6 +836,41 @@ static int armv8pmu_access_event_idx(struct perf_event *event)
 	return event->hw.idx;
 }
 
+static void refresh_pmuserenr(void *mm)
+{
+	perf_switch_user_access(mm);
+}
+
+static void armv8pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
+{
+	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR))
+		return;
+
+	/*
+	 * This function relies on not being called concurrently in two
+	 * tasks in the same mm.  Otherwise one task could observe
+	 * pmu_direct_access > 1 and return all the way back to
+	 * userspace with user access disabled while another task is still
+	 * doing on_each_cpu_mask() to enable user access.
+	 *
+	 * For now, this can't happen because all callers hold mmap_lock
+	 * for write.  If this changes, we'll need a different solution.
+	 */
+	lockdep_assert_held_write(&mm->mmap_lock);
+
+	if (atomic_inc_return(&mm->context.pmu_direct_access) == 1)
+		on_each_cpu(refresh_pmuserenr, mm, 1);
+}
+
+static void armv8pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
+{
+	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR))
+		return;
+
+	if (atomic_dec_and_test(&mm->context.pmu_direct_access))
+		on_each_cpu_mask(mm_cpumask(mm), refresh_pmuserenr, NULL, 1);
+}
+
 /*
  * Add an event filter to a given event.
  */
@@ -1112,6 +1147,8 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 	cpu_pmu->filter_match		= armv8pmu_filter_match;
 
 	cpu_pmu->pmu.event_idx		= armv8pmu_access_event_idx;
+	cpu_pmu->pmu.event_mapped	= armv8pmu_event_mapped;
+	cpu_pmu->pmu.event_unmapped	= armv8pmu_event_unmapped;
 
 	cpu_pmu->name			= name;
 	cpu_pmu->map_event		= map_event;
@@ -1272,6 +1309,10 @@ void arch_perf_update_userpage(struct perf_event *event,
 	userpg->cap_user_time = 0;
 	userpg->cap_user_time_zero = 0;
 	userpg->cap_user_time_short = 0;
+	userpg->cap_user_rdpmc = !!(event->hw.flags & ARMPMU_EL0_RD_CNTR);
+
+	if (userpg->cap_user_rdpmc)
+		userpg->pmc_width = armv8pmu_event_is_64bit(event) ? 64 : 32;
 
 	do {
 		rd = sched_clock_read_begin(&seq);
-- 
2.25.1
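
For reference, the snippet below is a rough sketch (not part of this patch) of
how a userspace task is expected to consume the capability enabled here: it
mmaps the event, which calls armv8pmu_event_mapped() and enables PMUSERENR_EL0
for this mm, then reads the counter directly using the index, offset and
pmc_width fields published in the perf_event_mmap_page under its seqlock. The
PMSELR_EL0/PMXEVCNTR_EL0 access and the "index - 1" register mapping are
assumptions based on the generic mmap-page conventions, not something this
patch defines, and the helper names are made up for illustration.

/* Userspace sketch, arm64 only; minimal error handling. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static uint64_t read_evcntr(uint32_t index)
{
	uint64_t val;

	/*
	 * Assumed mapping: mmap-page index N selects PMEVCNTR<N-1>_EL0.
	 * The cycle counter (PMCCNTR_EL0) would need a separate path.
	 */
	asm volatile("msr pmselr_el0, %0" :: "r" ((uint64_t)(index - 1)));
	asm volatile("isb");
	asm volatile("mrs %0, pmxevcntr_el0" : "=r" (val));
	return val;
}

int main(void)
{
	struct perf_event_attr attr;
	struct perf_event_mmap_page *pc;
	uint64_t count, offset;
	uint32_t seq, index, width;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	attr.exclude_kernel = 1;

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return 1;

	/* mmap'ing the event is what bumps pmu_direct_access for this mm */
	pc = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
	if (pc == MAP_FAILED)
		return 1;

	do {
		seq = pc->lock;
		asm volatile("dmb ishld" ::: "memory");

		index = pc->index;
		offset = pc->offset;
		width = pc->pmc_width;
		if (!pc->cap_user_rdpmc || !index) {
			/* No direct access: fall back to read(fd, ...) */
			close(fd);
			return 1;
		}
		count = read_evcntr(index);
		count <<= 64 - width;		/* drop bits beyond pmc_width */
		count >>= 64 - width;
		count += offset;

		asm volatile("dmb ishld" ::: "memory");
	} while (pc->lock != seq);

	printf("instructions: %llu\n", (unsigned long long)count);
	close(fd);
	return 0;
}

If the retry loop observes cap_user_rdpmc == 0 or index == 0 (for example
because the event is not currently scheduled on a counter, or is a chained
counter, which this series excludes from direct access), the reader is
expected to fall back to the read() syscall.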