2023-03-15 05:17:12

by Anshuman Khandual

Subject: [PATCH V9 00/10] arm64/perf: Enable branch stack sampling

This series enables perf branch stack sampling support on the arm64 platform
via a new arch feature called Branch Record Buffer Extension (BRBE). All
relevant register definitions can be found here:

https://developer.arm.com/documentation/ddi0601/2021-12/AArch64-Registers

This series applies on 6.3-rc1 after applying the following patch from Mark
which allows enums in SysregFields blocks in sysreg tools.

https://lore.kernel.org/all/[email protected]/

Changes in V9:

- Fixed build problem with has_branch_stack() in arm64 header
- BRBINF_EL1 definition has been changed from 'Sysreg' to 'SysregFields'
- Renamed all BRBINF_EL1 call sites as BRBINFx_EL1
- Dropped static const char branch_filter_error_msg[]
- Implemented a positive list check for BRBE supported perf branch filters
- Added a comment in armv8pmu_handle_irq()
- Implemented per-cpu allocation for struct branch_record records
- Skipped looping through bank 1 if an invalid record is detected in bank 0
- Added comment in armv8pmu_branch_read() explaining prohibited region etc
- Added comment warning about erroneously marking transactions as aborted
- Replaced the first argument (perf_branch_entry) in capture_brbe_flags()
- Dropped the last argument (idx) in capture_brbe_flags()
- Dropped the brbcr argument from capture_brbe_flags()
- Used perf_sample_save_brstack() to capture branch records for perf_sample_data
- Added comment explaining rationale for setting BRBCR_EL1_FZP for user only traces
- Dropped BRBE prohibited state mechanism while in armv8pmu_branch_read()
- Implemented event task context based branch records save mechanism

Changes in V8:

https://lore.kernel.org/all/[email protected]/

- Replaced arm_pmu->features with arm_pmu->has_branch_stack, updated its helper
- Added a comment and line break before arm_pmu->private element
- Added WARN_ON_ONCE() in helpers i.e armv8pmu_branch_[read|valid|enable|disable]()
- Dropped comments in armv8pmu_enable_event() and armv8pmu_disable_event()
- Replaced open bank encoding in BRBFCR_EL1 with SYS_FIELD_PREP()
- Changed brbe_hw_attr->brbe_version from 'bool' to 'int'
- Updated pr_warn() as pr_warn_once() with values in brbe_get_perf_[type|priv]()
- Replaced all pr_warn_once() as pr_debug_once() in armv8pmu_branch_valid()
- Added a comment in branch_type_to_brbcr() for the BRBCR_EL1 privilege settings
- Modified the comment related to BRBINFx_EL1.LASTFAILED in capture_brbe_flags()
- Changed brbe_get_perf_entry_type() into brbe_set_perf_entry_type()
- Renamed brbe_valid() as brbe_record_is_complete()
- Renamed brbe_source() as brbe_record_is_source_only()
- Renamed brbe_target() as brbe_record_is_target_only()
- Inverted checks for !brbe_record_is_[target|source]_only() for info capture
- Replaced 'fetch' with 'get' in all helpers that extract field value
- Dropped 'static int brbe_current_bank' optimization in select_brbe_bank()
- Dropped select_brbe_bank_index() completely, added capture_branch_entry()
- Process captured branch entries in two separate loops one for each BRBE bank
- Moved branch_records_alloc() inside armv8pmu_probe_pmu()
- Added a forward declaration for the helper has_branch_stack()
- Added new callbacks armv8pmu_private_alloc() and armv8pmu_private_free()
- Updated armv8pmu_probe_pmu() to allocate the private structure before SMP call

Changes in V7:

https://lore.kernel.org/all/[email protected]/

- Folded [PATCH 7/7] into [PATCH 3/7] which enables branch stack sampling event
- Defined BRBFCR_EL1_BRANCH_FILTERS, BRBCR_EL1_DEFAULT_CONFIG in the header
- Defined BRBFCR_EL1_DEFAULT_CONFIG in the header
- Updated BRBCR_EL1_DEFAULT_CONFIG with BRBCR_EL1_FZP
- Defined BRBCR_EL1_DEFAULT_TS in the header
- Updated BRBCR_EL1_DEFAULT_CONFIG with BRBCR_EL1_DEFAULT_TS
- Moved BRBCR_EL1_DEFAULT_CONFIG check inside branch_type_to_brbcr()
- Moved down BRBCR_EL1_CC, BRBCR_EL1_MPRED later in branch_type_to_brbcr()
- Also set BRBE in paused state in armv8pmu_branch_disable()
- Dropped brbe_paused(), set_brbe_paused() helpers
- Extracted error string via branch_filter_error_msg[] for armv8pmu_branch_valid()
- Replaced brbe_v1p1 with brbe_version in struct brbe_hw_attr
- Added valid_brbe_[cc, format, version]() helpers
- Split a separate brbe_attributes_probe() from armv8pmu_branch_probe()
- Capture event->attr.branch_sample_type earlier in armv8pmu_branch_valid()
- Defined enum brbe_bank_idx with possible values for BRBE bank indices
- Changed armpmu->hw_attr into armpmu->private
- Added missing space in stub definition for armv8pmu_branch_valid()
- Replaced both kmalloc() with kzalloc()
- Added BRBE_BANK_MAX_ENTRIES
- Updated comment for capture_brbe_flags()
- Updated comment for struct brbe_hw_attr
- Dropped space after type cast in couple of places
- Replaced inverse with negation for testing BRBCR_EL1_FZP in armv8pmu_branch_read()
- Captured cpuc->branches->branch_entries[idx] in a local variable
- Dropped saved_priv from armv8pmu_branch_read()
- Reorganize PERF_SAMPLE_BRANCH_NO_[CYCLES|FLAGS] related configuration
- Replaced with FIELD_GET() and FIELD_PREP() wherever applicable
- Replaced BRBCR_EL1_TS_PHYSICAL with BRBCR_EL1_TS_VIRTUAL
- Moved valid_brbe_nr(), valid_brbe_cc(), valid_brbe_format(), valid_brbe_version(),
select_brbe_bank(), select_brbe_bank_index() helpers inside the C implementation
- Reorganized brbe_valid_nr() and dropped the pr_warn() message
- Changed probe sequence in brbe_attributes_probe()
- Added 'brbcr' argument into capture_brbe_flags() to ascertain correct state
- Disable BRBE before disabling the PMU event counter
- Enable PERF_SAMPLE_BRANCH_HV filters when is_kernel_in_hyp_mode()
- Guard armv8pmu_reset() & armv8pmu_sched_task() with arm_pmu_branch_stack_supported()

Changes in V6:

https://lore.kernel.org/linux-arm-kernel/[email protected]/

- Restore the exception level privilege after reading the branch records
- Unpause the buffer after reading the branch records
- Decouple BRBCR_EL1_EXCEPTION/ERTN from perf event privilege level
- Reworked BRBE implementation and branch stack sampling support on arm pmu
- BRBE implementation is now part of overall ARMV8 PMU implementation
- BRBE implementation moved from drivers/perf/ to inside arch/arm64/kernel/
- CONFIG_ARM_BRBE_PMU renamed as CONFIG_ARM64_BRBE in arch/arm64/Kconfig
- File moved - drivers/perf/arm_pmu_brbe.c -> arch/arm64/kernel/brbe.c
- File moved - drivers/perf/arm_pmu_brbe.h -> arch/arm64/kernel/brbe.h
- BRBE name has been dropped from struct arm_pmu and struct hw_pmu_events
- BRBE name has been abstracted out as 'branches' in arm_pmu and hw_pmu_events
- BRBE name has been abstracted out as 'branches' in ARMV8 PMU implementation
- Added sched_task() callback into struct arm_pmu
- Added 'hw_attr' into struct arm_pmu encapsulating possible PMU HW attributes
- Dropped explicit attributes brbe_(v1p1, nr, cc, format) from struct arm_pmu
- Dropped brbfcr, brbcr, registers scratch area from struct hw_pmu_events
- Dropped brbe_users, brbe_context tracking in struct hw_pmu_events
- Added 'features' tracking into struct arm_pmu with ARM_PMU_BRANCH_STACK flag
- armpmu->hw_attr maps into 'struct brbe_hw_attr' inside BRBE implementation
- Set ARM_PMU_BRANCH_STACK in 'arm_pmu->features' after successful BRBE probe
- Added armv8pmu_branch_reset() inside armv8pmu_branch_enable()
- Dropped brbe_supported() as events will be rejected via ARM_PMU_BRANCH_STACK
- Dropped set_brbe_disabled() as well
- Reformatted armv8pmu_branch_valid() warnings while rejecting unsupported events

Changes in V5:

https://lore.kernel.org/linux-arm-kernel/[email protected]/

- Changed BRBCR_EL1.VIRTUAL from 0b1 to 0b01
- Changed BRBFCR_EL1.EnL into BRBFCR_EL1.EnI
- Changed config ARM_BRBE_PMU from 'tristate' to 'bool'

Changes in V4:

https://lore.kernel.org/all/[email protected]/

- Changed ../tools/sysreg declarations as suggested
- Set PERF_SAMPLE_BRANCH_STACK in data.sample_flags
- Dropped perfmon_capable() check in armpmu_event_init()
- s/pr_warn_once/pr_info in armpmu_event_init()
- Added brbe_format element into struct pmu_hw_events
- Changed v1p1 as brbe_v1p1 in struct pmu_hw_events
- Dropped pr_info() from arm64_pmu_brbe_probe(), solved LOCKDEP warning

Changes in V3:

https://lore.kernel.org/all/[email protected]/

- Moved brbe_stack off the stack; it is now dynamically allocated
- Return PERF_BR_PRIV_UNKNOWN instead of -1 in brbe_fetch_perf_priv()
- Moved BRBIDR0, BRBCR, BRBFCR registers and fields into tools/sysreg
- Created dummy BRBINF_EL1 field definitions in tools/sysreg
- Dropped ARMPMU_EVT_PRIV framework which cached perfmon_capable()
- Both exception and exception return branch records are now captured
only if the event has PERF_SAMPLE_BRANCH_KERNEL, which would have already
been checked in generic perf via perf_allow_kernel()

Changes in V2:

https://lore.kernel.org/all/[email protected]/

- Dropped branch sample filter helpers consolidation patch from this series
- Added new hw_perf_event.flags element ARMPMU_EVT_PRIV to cache perfmon_capable()
- Use cached perfmon_capable() while configuring BRBE branch record filters

Changes in V1:

https://lore.kernel.org/linux-arm-kernel/[email protected]/

- Added CONFIG_PERF_EVENTS wrapper for all branch sample filter helpers
- Process new perf branch types via PERF_BR_EXTEND_ABI

Changes in RFC V2:

https://lore.kernel.org/linux-arm-kernel/[email protected]/

- Added branch_sample_priv() while consolidating other branch sample filter helpers
- Changed all SYS_BRBXXXN_EL1 register definition encodings per Marc
- Changed the BRBE driver as per proposed BRBE related perf ABI changes (V5)
- Added documentation for struct arm_pmu changes, updated commit message
- Updated commit message for BRBE detection infrastructure patch
- PERF_SAMPLE_BRANCH_KERNEL gets checked during arm event init (outside the driver)
- Branch privilege state capture mechanism has now moved inside the driver

Changes in RFC V1:

https://lore.kernel.org/all/[email protected]/

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: James Clark <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Suzuki Poulose <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]

Anshuman Khandual (10):
drivers: perf: arm_pmu: Add new sched_task() callback
arm64/perf: Add BRBE registers and fields
arm64/perf: Add branch stack support in struct arm_pmu
arm64/perf: Add branch stack support in struct pmu_hw_events
arm64/perf: Add branch stack support in ARMV8 PMU
arm64/perf: Enable branch stack events via FEAT_BRBE
arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack()
arm64/perf: Add struct brbe_regset helper functions
arm64/perf: Implement branch records save on task sched out
arm64/perf: Implement branch records save on PMU IRQ

arch/arm64/Kconfig | 11 +
arch/arm64/include/asm/perf_event.h | 46 ++
arch/arm64/include/asm/sysreg.h | 103 ++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/brbe.c | 758 ++++++++++++++++++++++++++++
arch/arm64/kernel/brbe.h | 270 ++++++++++
arch/arm64/kernel/perf_event.c | 106 +++-
arch/arm64/tools/sysreg | 159 ++++++
drivers/perf/arm_pmu.c | 12 +-
include/linux/perf/arm_pmu.h | 22 +-
10 files changed, 1462 insertions(+), 26 deletions(-)
create mode 100644 arch/arm64/kernel/brbe.c
create mode 100644 arch/arm64/kernel/brbe.h

--
2.25.1



2023-03-15 05:17:20

by Anshuman Khandual

Subject: [PATCH V9 01/10] drivers: perf: arm_pmu: Add new sched_task() callback

This adds armpmu_sched_task() as the generic pmu's sched_task() override,
which in turn invokes the new arm_pmu.sched_task() callback when the given
arm_pmu instance provides one. This new callback will be used while enabling
BRBE in the ARMV8 PMU.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
drivers/perf/arm_pmu.c | 9 +++++++++
include/linux/perf/arm_pmu.h | 1 +
2 files changed, 10 insertions(+)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 15bd1e34a88e..aada47e3b126 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -517,6 +517,14 @@ static int armpmu_event_init(struct perf_event *event)
return __hw_perf_event_init(event);
}

+static void armpmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+{
+ struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
+
+ if (armpmu->sched_task)
+ armpmu->sched_task(pmu_ctx, sched_in);
+}
+
static void armpmu_enable(struct pmu *pmu)
{
struct arm_pmu *armpmu = to_arm_pmu(pmu);
@@ -858,6 +866,7 @@ struct arm_pmu *armpmu_alloc(void)
}

pmu->pmu = (struct pmu) {
+ .sched_task = armpmu_sched_task,
.pmu_enable = armpmu_enable,
.pmu_disable = armpmu_disable,
.event_init = armpmu_event_init,
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 525b5d64e394..f7fbd162ca4c 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -100,6 +100,7 @@ struct arm_pmu {
void (*stop)(struct arm_pmu *);
void (*reset)(void *);
int (*map_event)(struct perf_event *event);
+ void (*sched_task)(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
int num_events;
bool secure_access; /* 32-bit ARM only */
#define ARMV8_PMUV3_MAX_COMMON_EVENTS 0x40
--
2.25.1


2023-03-15 05:19:13

by Anshuman Khandual

Subject: [PATCH V9 02/10] arm64/perf: Add BRBE registers and fields

This adds BRBE related register definitions and various associated field
macros. These will be used by the BRBE driver which is added subsequently
in this series.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: [email protected]
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/arm64/include/asm/sysreg.h | 103 +++++++++++++++++++++
arch/arm64/tools/sysreg | 159 ++++++++++++++++++++++++++++++++
2 files changed, 262 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 9e3ecba3c4e6..b3bc03ee22bd 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -165,6 +165,109 @@
#define SYS_DBGDTRTX_EL0 sys_reg(2, 3, 0, 5, 0)
#define SYS_DBGVCR32_EL2 sys_reg(2, 4, 0, 7, 0)

+#define __SYS_BRBINFO(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 0))
+#define __SYS_BRBSRC(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 1))
+#define __SYS_BRBTGT(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 2))
+
+#define SYS_BRBINF0_EL1 __SYS_BRBINFO(0)
+#define SYS_BRBINF1_EL1 __SYS_BRBINFO(1)
+#define SYS_BRBINF2_EL1 __SYS_BRBINFO(2)
+#define SYS_BRBINF3_EL1 __SYS_BRBINFO(3)
+#define SYS_BRBINF4_EL1 __SYS_BRBINFO(4)
+#define SYS_BRBINF5_EL1 __SYS_BRBINFO(5)
+#define SYS_BRBINF6_EL1 __SYS_BRBINFO(6)
+#define SYS_BRBINF7_EL1 __SYS_BRBINFO(7)
+#define SYS_BRBINF8_EL1 __SYS_BRBINFO(8)
+#define SYS_BRBINF9_EL1 __SYS_BRBINFO(9)
+#define SYS_BRBINF10_EL1 __SYS_BRBINFO(10)
+#define SYS_BRBINF11_EL1 __SYS_BRBINFO(11)
+#define SYS_BRBINF12_EL1 __SYS_BRBINFO(12)
+#define SYS_BRBINF13_EL1 __SYS_BRBINFO(13)
+#define SYS_BRBINF14_EL1 __SYS_BRBINFO(14)
+#define SYS_BRBINF15_EL1 __SYS_BRBINFO(15)
+#define SYS_BRBINF16_EL1 __SYS_BRBINFO(16)
+#define SYS_BRBINF17_EL1 __SYS_BRBINFO(17)
+#define SYS_BRBINF18_EL1 __SYS_BRBINFO(18)
+#define SYS_BRBINF19_EL1 __SYS_BRBINFO(19)
+#define SYS_BRBINF20_EL1 __SYS_BRBINFO(20)
+#define SYS_BRBINF21_EL1 __SYS_BRBINFO(21)
+#define SYS_BRBINF22_EL1 __SYS_BRBINFO(22)
+#define SYS_BRBINF23_EL1 __SYS_BRBINFO(23)
+#define SYS_BRBINF24_EL1 __SYS_BRBINFO(24)
+#define SYS_BRBINF25_EL1 __SYS_BRBINFO(25)
+#define SYS_BRBINF26_EL1 __SYS_BRBINFO(26)
+#define SYS_BRBINF27_EL1 __SYS_BRBINFO(27)
+#define SYS_BRBINF28_EL1 __SYS_BRBINFO(28)
+#define SYS_BRBINF29_EL1 __SYS_BRBINFO(29)
+#define SYS_BRBINF30_EL1 __SYS_BRBINFO(30)
+#define SYS_BRBINF31_EL1 __SYS_BRBINFO(31)
+
+#define SYS_BRBSRC0_EL1 __SYS_BRBSRC(0)
+#define SYS_BRBSRC1_EL1 __SYS_BRBSRC(1)
+#define SYS_BRBSRC2_EL1 __SYS_BRBSRC(2)
+#define SYS_BRBSRC3_EL1 __SYS_BRBSRC(3)
+#define SYS_BRBSRC4_EL1 __SYS_BRBSRC(4)
+#define SYS_BRBSRC5_EL1 __SYS_BRBSRC(5)
+#define SYS_BRBSRC6_EL1 __SYS_BRBSRC(6)
+#define SYS_BRBSRC7_EL1 __SYS_BRBSRC(7)
+#define SYS_BRBSRC8_EL1 __SYS_BRBSRC(8)
+#define SYS_BRBSRC9_EL1 __SYS_BRBSRC(9)
+#define SYS_BRBSRC10_EL1 __SYS_BRBSRC(10)
+#define SYS_BRBSRC11_EL1 __SYS_BRBSRC(11)
+#define SYS_BRBSRC12_EL1 __SYS_BRBSRC(12)
+#define SYS_BRBSRC13_EL1 __SYS_BRBSRC(13)
+#define SYS_BRBSRC14_EL1 __SYS_BRBSRC(14)
+#define SYS_BRBSRC15_EL1 __SYS_BRBSRC(15)
+#define SYS_BRBSRC16_EL1 __SYS_BRBSRC(16)
+#define SYS_BRBSRC17_EL1 __SYS_BRBSRC(17)
+#define SYS_BRBSRC18_EL1 __SYS_BRBSRC(18)
+#define SYS_BRBSRC19_EL1 __SYS_BRBSRC(19)
+#define SYS_BRBSRC20_EL1 __SYS_BRBSRC(20)
+#define SYS_BRBSRC21_EL1 __SYS_BRBSRC(21)
+#define SYS_BRBSRC22_EL1 __SYS_BRBSRC(22)
+#define SYS_BRBSRC23_EL1 __SYS_BRBSRC(23)
+#define SYS_BRBSRC24_EL1 __SYS_BRBSRC(24)
+#define SYS_BRBSRC25_EL1 __SYS_BRBSRC(25)
+#define SYS_BRBSRC26_EL1 __SYS_BRBSRC(26)
+#define SYS_BRBSRC27_EL1 __SYS_BRBSRC(27)
+#define SYS_BRBSRC28_EL1 __SYS_BRBSRC(28)
+#define SYS_BRBSRC29_EL1 __SYS_BRBSRC(29)
+#define SYS_BRBSRC30_EL1 __SYS_BRBSRC(30)
+#define SYS_BRBSRC31_EL1 __SYS_BRBSRC(31)
+
+#define SYS_BRBTGT0_EL1 __SYS_BRBTGT(0)
+#define SYS_BRBTGT1_EL1 __SYS_BRBTGT(1)
+#define SYS_BRBTGT2_EL1 __SYS_BRBTGT(2)
+#define SYS_BRBTGT3_EL1 __SYS_BRBTGT(3)
+#define SYS_BRBTGT4_EL1 __SYS_BRBTGT(4)
+#define SYS_BRBTGT5_EL1 __SYS_BRBTGT(5)
+#define SYS_BRBTGT6_EL1 __SYS_BRBTGT(6)
+#define SYS_BRBTGT7_EL1 __SYS_BRBTGT(7)
+#define SYS_BRBTGT8_EL1 __SYS_BRBTGT(8)
+#define SYS_BRBTGT9_EL1 __SYS_BRBTGT(9)
+#define SYS_BRBTGT10_EL1 __SYS_BRBTGT(10)
+#define SYS_BRBTGT11_EL1 __SYS_BRBTGT(11)
+#define SYS_BRBTGT12_EL1 __SYS_BRBTGT(12)
+#define SYS_BRBTGT13_EL1 __SYS_BRBTGT(13)
+#define SYS_BRBTGT14_EL1 __SYS_BRBTGT(14)
+#define SYS_BRBTGT15_EL1 __SYS_BRBTGT(15)
+#define SYS_BRBTGT16_EL1 __SYS_BRBTGT(16)
+#define SYS_BRBTGT17_EL1 __SYS_BRBTGT(17)
+#define SYS_BRBTGT18_EL1 __SYS_BRBTGT(18)
+#define SYS_BRBTGT19_EL1 __SYS_BRBTGT(19)
+#define SYS_BRBTGT20_EL1 __SYS_BRBTGT(20)
+#define SYS_BRBTGT21_EL1 __SYS_BRBTGT(21)
+#define SYS_BRBTGT22_EL1 __SYS_BRBTGT(22)
+#define SYS_BRBTGT23_EL1 __SYS_BRBTGT(23)
+#define SYS_BRBTGT24_EL1 __SYS_BRBTGT(24)
+#define SYS_BRBTGT25_EL1 __SYS_BRBTGT(25)
+#define SYS_BRBTGT26_EL1 __SYS_BRBTGT(26)
+#define SYS_BRBTGT27_EL1 __SYS_BRBTGT(27)
+#define SYS_BRBTGT28_EL1 __SYS_BRBTGT(28)
+#define SYS_BRBTGT29_EL1 __SYS_BRBTGT(29)
+#define SYS_BRBTGT30_EL1 __SYS_BRBTGT(30)
+#define SYS_BRBTGT31_EL1 __SYS_BRBTGT(31)
+
#define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0)
#define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5)
#define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index dd5a9c7e310f..d74d9dbe18a7 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -924,6 +924,165 @@ UnsignedEnum 3:0 BT
EndEnum
EndSysreg

+
+SysregFields BRBINFx_EL1
+Res0 63:47
+Field 46 CCU
+Field 45:32 CC
+Res0 31:18
+Field 17 LASTFAILED
+Field 16 T
+Res0 15:14
+Enum 13:8 TYPE
+ 0b000000 UNCOND_DIR
+ 0b000001 INDIR
+ 0b000010 DIR_LINK
+ 0b000011 INDIR_LINK
+ 0b000101 RET_SUB
+ 0b000111 RET_EXCPT
+ 0b001000 COND_DIR
+ 0b100001 DEBUG_HALT
+ 0b100010 CALL
+ 0b100011 TRAP
+ 0b100100 SERROR
+ 0b100110 INST_DEBUG
+ 0b100111 DATA_DEBUG
+ 0b101010 ALGN_FAULT
+ 0b101011 INST_FAULT
+ 0b101100 DATA_FAULT
+ 0b101110 IRQ
+ 0b101111 FIQ
+ 0b111001 DEBUG_EXIT
+EndEnum
+Enum 7:6 EL
+ 0b00 EL0
+ 0b01 EL1
+ 0b10 EL2
+ 0b11 EL3
+EndEnum
+Field 5 MPRED
+Res0 4:2
+Enum 1:0 VALID
+ 0b00 NONE
+ 0b01 TARGET
+ 0b10 SOURCE
+ 0b11 FULL
+EndEnum
+EndSysregFields
+
+Sysreg BRBCR_EL1 2 1 9 0 0
+Res0 63:24
+Field 23 EXCEPTION
+Field 22 ERTN
+Res0 21:9
+Field 8 FZP
+Res0 7
+Enum 6:5 TS
+ 0b01 VIRTUAL
+ 0b10 GST_PHYSICAL
+ 0b11 PHYSICAL
+EndEnum
+Field 4 MPRED
+Field 3 CC
+Res0 2
+Field 1 E1BRE
+Field 0 E0BRE
+EndSysreg
+
+Sysreg BRBFCR_EL1 2 1 9 0 1
+Res0 63:30
+Enum 29:28 BANK
+ 0b00 FIRST
+ 0b01 SECOND
+EndEnum
+Res0 27:23
+Field 22 CONDDIR
+Field 21 DIRCALL
+Field 20 INDCALL
+Field 19 RTN
+Field 18 INDIRECT
+Field 17 DIRECT
+Field 16 EnI
+Res0 15:8
+Field 7 PAUSED
+Field 6 LASTFAILED
+Res0 5:0
+EndSysreg
+
+Sysreg BRBTS_EL1 2 1 9 0 2
+Field 63:0 TS
+EndSysreg
+
+Sysreg BRBINFINJ_EL1 2 1 9 1 0
+Res0 63:47
+Field 46 CCU
+Field 45:32 CC
+Res0 31:18
+Field 17 LASTFAILED
+Field 16 T
+Res0 15:14
+Enum 13:8 TYPE
+ 0b000000 UNCOND_DIR
+ 0b000001 INDIR
+ 0b000010 DIR_LINK
+ 0b000011 INDIR_LINK
+ 0b000101 RET_SUB
+ 0b000111 RET_EXCPT
+ 0b001000 COND_DIR
+ 0b100001 DEBUG_HALT
+ 0b100010 CALL
+ 0b100011 TRAP
+ 0b100100 SERROR
+ 0b100110 INST_DEBUG
+ 0b100111 DATA_DEBUG
+ 0b101010 ALGN_FAULT
+ 0b101011 INST_FAULT
+ 0b101100 DATA_FAULT
+ 0b101110 IRQ
+ 0b101111 FIQ
+ 0b111001 DEBUG_EXIT
+EndEnum
+Enum 7:6 EL
+ 0b00 EL0
+ 0b01 EL1
+ 0b10 EL2
+ 0b11 EL3
+EndEnum
+Field 5 MPRED
+Res0 4:2
+Enum 1:0 VALID
+ 0b00 NONE
+ 0b01 TARGET
+ 0b10 SOURCE
+ 0b11 FULL
+EndEnum
+EndSysreg
+
+Sysreg BRBSRCINJ_EL1 2 1 9 1 1
+Field 63:0 ADDRESS
+EndSysreg
+
+Sysreg BRBTGTINJ_EL1 2 1 9 1 2
+Field 63:0 ADDRESS
+EndSysreg
+
+Sysreg BRBIDR0_EL1 2 1 9 2 0
+Res0 63:16
+Enum 15:12 CC
+ 0b0101 20_BIT
+EndEnum
+Enum 11:8 FORMAT
+ 0b0000 0
+EndEnum
+Enum 7:0 NUMREC
+ 0b00001000 8
+ 0b00010000 16
+ 0b00100000 32
+ 0b01000000 64
+EndEnum
+EndSysreg
+
Sysreg ID_AA64ZFR0_EL1 3 0 0 4 4
Res0 63:60
UnsignedEnum 59:56 F64MM
--
2.25.1


2023-03-15 05:19:24

by Anshuman Khandual

Subject: [PATCH V9 03/10] arm64/perf: Add branch stack support in struct arm_pmu

This updates 'struct arm_pmu' for the branch stack sampling support added
later. It adds a new 'has_branch_stack' field to track whether a given PMU
supports branch stack sampling, and a 'private' element to encapsulate
implementation specific attributes of a given 'struct arm_pmu'. This also
adds a helper arm_pmu_branch_stack_supported().

This also enables perf branch stack sampling events on any 'struct arm_pmu'
that supports the feature, by removing the current gate in
armpmu_event_init() which blocks such events unconditionally. Instead, the
support is now ascertained via arm_pmu_branch_stack_supported().

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
drivers/perf/arm_pmu.c | 3 +--
include/linux/perf/arm_pmu.h | 12 +++++++++++-
2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index aada47e3b126..d4a4f2bd89a5 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -510,8 +510,7 @@ static int armpmu_event_init(struct perf_event *event)
!cpumask_test_cpu(event->cpu, &armpmu->supported_cpus))
return -ENOENT;

- /* does not support taken branch sampling */
- if (has_branch_stack(event))
+ if (has_branch_stack(event) && !arm_pmu_branch_stack_supported(armpmu))
return -EOPNOTSUPP;

return __hw_perf_event_init(event);
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index f7fbd162ca4c..0da745eaf426 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -102,7 +102,9 @@ struct arm_pmu {
int (*map_event)(struct perf_event *event);
void (*sched_task)(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
int num_events;
- bool secure_access; /* 32-bit ARM only */
+ unsigned int secure_access : 1, /* 32-bit ARM only */
+ has_branch_stack: 1, /* 64-bit ARM only */
+ reserved : 30;
#define ARMV8_PMUV3_MAX_COMMON_EVENTS 0x40
DECLARE_BITMAP(pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
#define ARMV8_PMUV3_EXT_COMMON_EVENT_BASE 0x4000
@@ -118,8 +120,16 @@ struct arm_pmu {

/* Only to be used by ACPI probing code */
unsigned long acpi_cpuid;
+
+ /* Implementation specific attributes */
+ void *private;
};

+static inline bool arm_pmu_branch_stack_supported(struct arm_pmu *armpmu)
+{
+ return armpmu->has_branch_stack;
+}
+
#define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu))

u64 armpmu_event_update(struct perf_event *event);
--
2.25.1


2023-03-15 05:19:57

by Anshuman Khandual

Subject: [PATCH V9 04/10] arm64/perf: Add branch stack support in struct pmu_hw_events

This adds a branch records buffer pointer to 'struct pmu_hw_events', which
can be used to capture branch records during a PMU interrupt. This per-CPU
pointer must be allocated before use.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
include/linux/perf/arm_pmu.h | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 0da745eaf426..694b241e456c 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -44,6 +44,13 @@ static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_47BIT) == ARMPMU_EVT_47BIT);
}, \
}

+#define MAX_BRANCH_RECORDS 64
+
+struct branch_records {
+ struct perf_branch_stack branch_stack;
+ struct perf_branch_entry branch_entries[MAX_BRANCH_RECORDS];
+};
+
/* The events for a given PMU register set. */
struct pmu_hw_events {
/*
@@ -70,6 +77,8 @@ struct pmu_hw_events {
struct arm_pmu *percpu_pmu;

int irq;
+
+ struct branch_records *branches;
};

enum armpmu_attr_groups {
--
2.25.1


2023-03-15 05:20:54

by Anshuman Khandual

Subject: [PATCH V9 06/10] arm64/perf: Enable branch stack events via FEAT_BRBE

This enables branch stack sampling events in the ARMV8 PMU, via the
architecture feature FEAT_BRBE aka Branch Record Buffer Extension. This
defines the required branch helper functions armv8pmu_branch_xxx() and the
implementation here is wrapped with a new config option CONFIG_ARM64_BRBE.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/arm64/Kconfig | 11 +
arch/arm64/include/asm/perf_event.h | 11 +
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/brbe.c | 571 ++++++++++++++++++++++++++++
arch/arm64/kernel/brbe.h | 257 +++++++++++++
arch/arm64/kernel/perf_event.c | 21 +-
6 files changed, 869 insertions(+), 3 deletions(-)
create mode 100644 arch/arm64/kernel/brbe.c
create mode 100644 arch/arm64/kernel/brbe.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1023e896d46b..7004d03079dd 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1381,6 +1381,17 @@ config HW_PERF_EVENTS
def_bool y
depends on ARM_PMU

+config ARM64_BRBE
+ bool "Enable support for Branch Record Buffer Extension (BRBE)"
+ depends on PERF_EVENTS && ARM64 && ARM_PMU
+ default y
+ help
+ Enable perf support for Branch Record Buffer Extension (BRBE) which
+ records all branches taken in an execution path. This supports some
+ branch types and privilege based filtering. It captured additional
+ relevant information such as cycle count, misprediction and branch
+ type, branch privilege level etc.
+
# Supported by clang >= 7.0 or GCC >= 12.0.0
config CC_HAVE_SHADOW_CALL_STACK
def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18)
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index 463f23c3484f..8077b1fabe29 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -280,6 +280,16 @@ struct perf_event;
#ifdef CONFIG_PERF_EVENTS
static inline bool has_branch_stack(struct perf_event *event);

+#ifdef CONFIG_ARM64_BRBE
+void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event);
+bool armv8pmu_branch_valid(struct perf_event *event);
+void armv8pmu_branch_enable(struct perf_event *event);
+void armv8pmu_branch_disable(struct perf_event *event);
+void armv8pmu_branch_probe(struct arm_pmu *arm_pmu);
+void armv8pmu_branch_reset(void);
+int armv8pmu_private_alloc(struct arm_pmu *arm_pmu);
+void armv8pmu_private_free(struct arm_pmu *arm_pmu);
+#else
static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
{
WARN_ON_ONCE(!has_branch_stack(event));
@@ -307,3 +317,4 @@ static inline int armv8pmu_private_alloc(struct arm_pmu *arm_pmu) { return 0; }
static inline void armv8pmu_private_free(struct arm_pmu *arm_pmu) { }
#endif
#endif
+#endif
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index ceba6792f5b3..6ee7ccb61621 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -46,6 +46,7 @@ obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_ARM64_MODULE_PLTS) += module-plts.o
obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o
obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o
+obj-$(CONFIG_ARM64_BRBE) += brbe.o
obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o
obj-$(CONFIG_CPU_PM) += sleep.o suspend.o
obj-$(CONFIG_CPU_IDLE) += cpuidle.o
diff --git a/arch/arm64/kernel/brbe.c b/arch/arm64/kernel/brbe.c
new file mode 100644
index 000000000000..c37118983751
--- /dev/null
+++ b/arch/arm64/kernel/brbe.c
@@ -0,0 +1,571 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Branch Record Buffer Extension Driver.
+ *
+ * Copyright (C) 2022 ARM Limited
+ *
+ * Author: Anshuman Khandual <[email protected]>
+ */
+#include "brbe.h"
+
+static bool valid_brbe_nr(int brbe_nr)
+{
+ return brbe_nr == BRBIDR0_EL1_NUMREC_8 ||
+ brbe_nr == BRBIDR0_EL1_NUMREC_16 ||
+ brbe_nr == BRBIDR0_EL1_NUMREC_32 ||
+ brbe_nr == BRBIDR0_EL1_NUMREC_64;
+}
+
+static bool valid_brbe_cc(int brbe_cc)
+{
+ return brbe_cc == BRBIDR0_EL1_CC_20_BIT;
+}
+
+static bool valid_brbe_format(int brbe_format)
+{
+ return brbe_format == BRBIDR0_EL1_FORMAT_0;
+}
+
+static bool valid_brbe_version(int brbe_version)
+{
+ return brbe_version == ID_AA64DFR0_EL1_BRBE_IMP ||
+ brbe_version == ID_AA64DFR0_EL1_BRBE_BRBE_V1P1;
+}
+
+static void select_brbe_bank(int bank)
+{
+ u64 brbfcr;
+
+ WARN_ON(bank > BRBE_BANK_IDX_1);
+ brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ brbfcr &= ~BRBFCR_EL1_BANK_MASK;
+ brbfcr |= SYS_FIELD_PREP(BRBFCR_EL1, BANK, bank);
+ write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
+ isb();
+}
+
+/*
+ * Generic perf branch filters supported on BRBE
+ *
+ * New branch filters need to be evaluated to determine whether they can be
+ * supported on BRBE. This ensures that such branch filters are not silently
+ * accepted, only to fail later. PERF_SAMPLE_BRANCH_HV is a special case that
+ * is selectively supported only on platforms where the kernel is in hyp mode.
+ */
+#define BRBE_EXCLUDE_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_ABORT_TX | \
+ PERF_SAMPLE_BRANCH_IN_TX | \
+ PERF_SAMPLE_BRANCH_NO_TX | \
+ PERF_SAMPLE_BRANCH_CALL_STACK)
+
+#define BRBE_ALLOWED_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_USER | \
+ PERF_SAMPLE_BRANCH_KERNEL | \
+ PERF_SAMPLE_BRANCH_HV | \
+ PERF_SAMPLE_BRANCH_ANY | \
+ PERF_SAMPLE_BRANCH_ANY_CALL | \
+ PERF_SAMPLE_BRANCH_ANY_RETURN | \
+ PERF_SAMPLE_BRANCH_IND_CALL | \
+ PERF_SAMPLE_BRANCH_COND | \
+ PERF_SAMPLE_BRANCH_IND_JUMP | \
+ PERF_SAMPLE_BRANCH_CALL | \
+ PERF_SAMPLE_BRANCH_NO_FLAGS | \
+ PERF_SAMPLE_BRANCH_NO_CYCLES | \
+ PERF_SAMPLE_BRANCH_TYPE_SAVE | \
+ PERF_SAMPLE_BRANCH_HW_INDEX | \
+ PERF_SAMPLE_BRANCH_PRIV_SAVE)
+
+#define BRBE_PERF_BRANCH_FILTERS (BRBE_ALLOWED_BRANCH_FILTERS | \
+ BRBE_EXCLUDE_BRANCH_FILTERS)
+
+bool armv8pmu_branch_valid(struct perf_event *event)
+{
+ u64 branch_type = event->attr.branch_sample_type;
+
+ /*
+ * Ensure both perf branch filter allowed and exclude
+ * masks are always in sync with the generic perf ABI.
+ */
+ BUILD_BUG_ON(BRBE_PERF_BRANCH_FILTERS != (PERF_SAMPLE_BRANCH_MAX - 1));
+
+ if (branch_type & ~BRBE_ALLOWED_BRANCH_FILTERS) {
+ pr_debug_once("requested branch filter not supported 0x%llx\n", branch_type);
+ return false;
+ }
+
+	/*
+	 * If the event does not have at least one of the privilege
+	 * branch filters as in PERF_SAMPLE_BRANCH_PLM_ALL, the core
+	 * perf will adjust its value based on the perf event's
+	 * existing privilege level via attr.exclude_[user|kernel|hv].
+	 *
+	 * As event->attr.branch_sample_type might have been changed
+	 * by the time the event reaches here, it is not possible to
+	 * tell whether the event originally requested HV privilege
+	 * or whether core perf added it. Just report this situation
+	 * once and keep ignoring any further instances.
+	 */
+ if ((branch_type & PERF_SAMPLE_BRANCH_HV) && !is_kernel_in_hyp_mode())
+ pr_debug_once("hypervisor privilege filter not supported 0x%llx\n", branch_type);
+
+ return true;
+}
+
+int armv8pmu_private_alloc(struct arm_pmu *arm_pmu)
+{
+ struct brbe_hw_attr *brbe_attr = kzalloc(sizeof(struct brbe_hw_attr), GFP_KERNEL);
+
+ if (!brbe_attr)
+ return -ENOMEM;
+
+ arm_pmu->private = brbe_attr;
+ return 0;
+}
+
+void armv8pmu_private_free(struct arm_pmu *arm_pmu)
+{
+ kfree(arm_pmu->private);
+}
+
+static int brbe_attributes_probe(struct arm_pmu *armpmu, u32 brbe)
+{
+ struct brbe_hw_attr *brbe_attr = (struct brbe_hw_attr *)armpmu->private;
+ u64 brbidr = read_sysreg_s(SYS_BRBIDR0_EL1);
+
+ brbe_attr->brbe_version = brbe;
+ brbe_attr->brbe_format = brbe_get_format(brbidr);
+ brbe_attr->brbe_cc = brbe_get_cc_bits(brbidr);
+ brbe_attr->brbe_nr = brbe_get_numrec(brbidr);
+
+ if (!valid_brbe_version(brbe_attr->brbe_version) ||
+ !valid_brbe_format(brbe_attr->brbe_format) ||
+ !valid_brbe_cc(brbe_attr->brbe_cc) ||
+ !valid_brbe_nr(brbe_attr->brbe_nr))
+ return -EOPNOTSUPP;
+
+ return 0;
+}
+
+void armv8pmu_branch_probe(struct arm_pmu *armpmu)
+{
+ u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
+ u32 brbe;
+
+ brbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT);
+ if (!brbe)
+ return;
+
+ if (brbe_attributes_probe(armpmu, brbe))
+ return;
+
+ armpmu->has_branch_stack = 1;
+}
+
+static u64 branch_type_to_brbfcr(int branch_type)
+{
+ u64 brbfcr = 0;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY) {
+ brbfcr |= BRBFCR_EL1_BRANCH_FILTERS;
+ return brbfcr;
+ }
+
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) {
+ brbfcr |= BRBFCR_EL1_INDCALL;
+ brbfcr |= BRBFCR_EL1_DIRCALL;
+ }
+
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN)
+ brbfcr |= BRBFCR_EL1_RTN;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL)
+ brbfcr |= BRBFCR_EL1_INDCALL;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_COND)
+ brbfcr |= BRBFCR_EL1_CONDDIR;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP)
+ brbfcr |= BRBFCR_EL1_INDIRECT;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_CALL)
+ brbfcr |= BRBFCR_EL1_DIRCALL;
+
+ return brbfcr;
+}
+
+static u64 branch_type_to_brbcr(int branch_type)
+{
+ u64 brbcr = BRBCR_EL1_DEFAULT_TS;
+
+	/*
+	 * BRBE would not strictly need to be paused on a PMU interrupt
+	 * while tracing only the user space, because the CPU is already
+	 * inside the prohibited region by the time the handler runs.
+	 * But many cycles can elapse between the PMU overflow and the
+	 * interrupt actually being taken, by which time the branch
+	 * records of interest may have been overwritten. So enable the
+	 * pause on PMU interrupt mechanism even for user only traces.
+	 */
+ brbcr |= BRBCR_EL1_FZP;
+
+ /*
+ * When running in the hyp mode, writing into BRBCR_EL1
+ * actually writes into BRBCR_EL2 instead. Field E2BRE
+ * is also at the same position as E1BRE.
+ */
+ if (branch_type & PERF_SAMPLE_BRANCH_USER)
+ brbcr |= BRBCR_EL1_E0BRE;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_KERNEL)
+ brbcr |= BRBCR_EL1_E1BRE;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_HV) {
+ if (is_kernel_in_hyp_mode())
+ brbcr |= BRBCR_EL1_E1BRE;
+ }
+
+ if (!(branch_type & PERF_SAMPLE_BRANCH_NO_CYCLES))
+ brbcr |= BRBCR_EL1_CC;
+
+ if (!(branch_type & PERF_SAMPLE_BRANCH_NO_FLAGS))
+ brbcr |= BRBCR_EL1_MPRED;
+
+	/*
+	 * The exception and exception return branches could be
+	 * captured, irrespective of the perf event's privilege.
+	 * If the perf event does not have enough privilege for
+	 * a given exception level, then addresses that fall
+	 * under that exception level will be reported as zero
+	 * in the captured branch record, creating source-only
+	 * or target-only records.
+	 */
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY) {
+ brbcr |= BRBCR_EL1_EXCEPTION;
+ brbcr |= BRBCR_EL1_ERTN;
+ }
+
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL)
+ brbcr |= BRBCR_EL1_EXCEPTION;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN)
+ brbcr |= BRBCR_EL1_ERTN;
+
+ return brbcr & BRBCR_EL1_DEFAULT_CONFIG;
+}
+
+void armv8pmu_branch_enable(struct perf_event *event)
+{
+ u64 branch_type = event->attr.branch_sample_type;
+ u64 brbfcr, brbcr;
+
+ brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ brbfcr &= ~BRBFCR_EL1_DEFAULT_CONFIG;
+ brbfcr |= branch_type_to_brbfcr(branch_type);
+ write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
+ isb();
+
+ brbcr = read_sysreg_s(SYS_BRBCR_EL1);
+ brbcr &= ~BRBCR_EL1_DEFAULT_CONFIG;
+ brbcr |= branch_type_to_brbcr(branch_type);
+ write_sysreg_s(brbcr, SYS_BRBCR_EL1);
+ isb();
+ armv8pmu_branch_reset();
+}
+
+void armv8pmu_branch_disable(struct perf_event *event)
+{
+ u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ u64 brbcr = read_sysreg_s(SYS_BRBCR_EL1);
+
+ brbcr &= ~(BRBCR_EL1_E0BRE | BRBCR_EL1_E1BRE);
+ brbfcr |= BRBFCR_EL1_PAUSED;
+ write_sysreg_s(brbcr, SYS_BRBCR_EL1);
+ write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
+ isb();
+}
+
+static void brbe_set_perf_entry_type(struct perf_branch_entry *entry, u64 brbinf)
+{
+ int brbe_type = brbe_get_type(brbinf);
+
+ switch (brbe_type) {
+ case BRBINFx_EL1_TYPE_UNCOND_DIR:
+ entry->type = PERF_BR_UNCOND;
+ break;
+ case BRBINFx_EL1_TYPE_INDIR:
+ entry->type = PERF_BR_IND;
+ break;
+ case BRBINFx_EL1_TYPE_DIR_LINK:
+ entry->type = PERF_BR_CALL;
+ break;
+ case BRBINFx_EL1_TYPE_INDIR_LINK:
+ entry->type = PERF_BR_IND_CALL;
+ break;
+ case BRBINFx_EL1_TYPE_RET_SUB:
+ entry->type = PERF_BR_RET;
+ break;
+ case BRBINFx_EL1_TYPE_COND_DIR:
+ entry->type = PERF_BR_COND;
+ break;
+ case BRBINFx_EL1_TYPE_CALL:
+ entry->type = PERF_BR_CALL;
+ break;
+ case BRBINFx_EL1_TYPE_TRAP:
+ entry->type = PERF_BR_SYSCALL;
+ break;
+ case BRBINFx_EL1_TYPE_RET_EXCPT:
+ entry->type = PERF_BR_ERET;
+ break;
+ case BRBINFx_EL1_TYPE_IRQ:
+ entry->type = PERF_BR_IRQ;
+ break;
+ case BRBINFx_EL1_TYPE_DEBUG_HALT:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_ARM64_DEBUG_HALT;
+ break;
+ case BRBINFx_EL1_TYPE_SERROR:
+ entry->type = PERF_BR_SERROR;
+ break;
+ case BRBINFx_EL1_TYPE_INST_DEBUG:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_ARM64_DEBUG_INST;
+ break;
+ case BRBINFx_EL1_TYPE_DATA_DEBUG:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_ARM64_DEBUG_DATA;
+ break;
+ case BRBINFx_EL1_TYPE_ALGN_FAULT:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_NEW_FAULT_ALGN;
+ break;
+ case BRBINFx_EL1_TYPE_INST_FAULT:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_NEW_FAULT_INST;
+ break;
+ case BRBINFx_EL1_TYPE_DATA_FAULT:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_NEW_FAULT_DATA;
+ break;
+ case BRBINFx_EL1_TYPE_FIQ:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_ARM64_FIQ;
+ break;
+ case BRBINFx_EL1_TYPE_DEBUG_EXIT:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_ARM64_DEBUG_EXIT;
+ break;
+ default:
+ pr_warn_once("%d - unknown branch type captured\n", brbe_type);
+ entry->type = PERF_BR_UNKNOWN;
+ break;
+ }
+}
+
+static int brbe_get_perf_priv(u64 brbinf)
+{
+ int brbe_el = brbe_get_el(brbinf);
+
+ switch (brbe_el) {
+ case BRBINFx_EL1_EL_EL0:
+ return PERF_BR_PRIV_USER;
+ case BRBINFx_EL1_EL_EL1:
+ return PERF_BR_PRIV_KERNEL;
+ case BRBINFx_EL1_EL_EL2:
+ if (is_kernel_in_hyp_mode())
+ return PERF_BR_PRIV_KERNEL;
+ return PERF_BR_PRIV_HV;
+ default:
+ pr_warn_once("%d - unknown branch privilege captured\n", brbe_el);
+ return PERF_BR_PRIV_UNKNOWN;
+ }
+}
+
+static void capture_brbe_flags(struct perf_branch_entry *entry, struct perf_event *event,
+ u64 brbinf)
+{
+ if (branch_sample_type(event))
+ brbe_set_perf_entry_type(entry, brbinf);
+
+ if (!branch_sample_no_cycles(event))
+ entry->cycles = brbe_get_cycles(brbinf);
+
+ if (!branch_sample_no_flags(event)) {
+ /*
+ * BRBINFx_EL1.LASTFAILED indicates that a TME transaction failed (or
+ * was cancelled) prior to this record, and some number of records
+ * prior to this one, may have been generated during an attempt to
+ * execute the transaction.
+ *
+ * We will remove such entries later in process_branch_aborts().
+ */
+ entry->abort = brbe_get_lastfailed(brbinf);
+
+		/*
+		 * This information (i.e transaction state and mispredicts)
+		 * is available only for source-only and complete branch
+		 * records.
+		 */
+ if (brbe_record_is_complete(brbinf) ||
+ brbe_record_is_source_only(brbinf)) {
+ entry->mispred = brbe_get_mispredict(brbinf);
+ entry->predicted = !entry->mispred;
+ entry->in_tx = brbe_get_in_tx(brbinf);
+ }
+ }
+
+ if (branch_sample_priv(event)) {
+		/*
+		 * The branch privilege level is available only for
+		 * target-only and complete branch records.
+		 */
+ if (brbe_record_is_complete(brbinf) ||
+ brbe_record_is_target_only(brbinf))
+ entry->priv = brbe_get_perf_priv(brbinf);
+ }
+}
+
+/*
+ * A branch record with BRBINFx_EL1.LASTFAILED set implies that all
+ * preceding consecutive branch records that were in a transaction
+ * (i.e their BRBINFx_EL1.TX set) have been aborted.
+ *
+ * Similarly BRBFCR_EL1.LASTFAILED set indicates that all preceding
+ * consecutive branch records up to the last record, which were in a
+ * transaction (i.e their BRBINFx_EL1.TX set), have been aborted.
+ *
+ * --------------------------------- -------------------
+ * | 00 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX success]
+ * --------------------------------- -------------------
+ * | 01 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX success]
+ * --------------------------------- -------------------
+ * | 02 | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 0 |
+ * --------------------------------- -------------------
+ * | 03 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed]
+ * --------------------------------- -------------------
+ * | 04 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed]
+ * --------------------------------- -------------------
+ * | 05 | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 1 |
+ * --------------------------------- -------------------
+ * | .. | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 0 |
+ * --------------------------------- -------------------
+ * | 61 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed]
+ * --------------------------------- -------------------
+ * | 62 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed]
+ * --------------------------------- -------------------
+ * | 63 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed]
+ * --------------------------------- -------------------
+ *
+ * BRBFCR_EL1.LASTFAILED == 1
+ *
+ * BRBFCR_EL1.LASTFAILED marks as aborted all those consecutive, in
+ * transaction branch records near the end of the BRBE buffer.
+ *
+ * The architecture does not guarantee a non transaction (TX = 0)
+ * branch record between two different transactions. So it is possible
+ * that a subsequent lastfailed record (TX = 0, LF = 1) might
+ * erroneously mark more transactions than required as aborted.
+ */
+static void process_branch_aborts(struct pmu_hw_events *cpuc)
+{
+ struct brbe_hw_attr *brbe_attr = (struct brbe_hw_attr *)cpuc->percpu_pmu->private;
+ u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ bool lastfailed = !!(brbfcr & BRBFCR_EL1_LASTFAILED);
+ int idx = brbe_attr->brbe_nr - 1;
+ struct perf_branch_entry *entry;
+
+ do {
+ entry = &cpuc->branches->branch_entries[idx];
+ if (entry->in_tx) {
+ entry->abort = lastfailed;
+ } else {
+ lastfailed = entry->abort;
+ entry->abort = false;
+ }
+ } while (idx--, idx >= 0);
+}
+
+void armv8pmu_branch_reset(void)
+{
+ asm volatile(BRB_IALL);
+ isb();
+}
+
+static bool capture_branch_entry(struct pmu_hw_events *cpuc,
+ struct perf_event *event, int idx)
+{
+ struct perf_branch_entry *entry = &cpuc->branches->branch_entries[idx];
+ u64 brbinf = get_brbinf_reg(idx);
+
+	/*
+	 * An invalid record means there are no valid entries left
+	 * in the buffer. Abort the branch record processing here
+	 * to save some cycles and also reduce the capture/process
+	 * load overall, including for the user space.
+	 */
+ if (brbe_invalid(brbinf))
+ return false;
+
+ perf_clear_branch_entry_bitfields(entry);
+ if (brbe_record_is_complete(brbinf)) {
+ entry->from = get_brbsrc_reg(idx);
+ entry->to = get_brbtgt_reg(idx);
+ } else if (brbe_record_is_source_only(brbinf)) {
+ entry->from = get_brbsrc_reg(idx);
+ entry->to = 0;
+ } else if (brbe_record_is_target_only(brbinf)) {
+ entry->from = 0;
+ entry->to = get_brbtgt_reg(idx);
+ }
+ capture_brbe_flags(entry, event, brbinf);
+ return true;
+}
+
+void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
+{
+ struct brbe_hw_attr *brbe_attr = (struct brbe_hw_attr *)cpuc->percpu_pmu->private;
+ u64 brbfcr, brbcr;
+ int idx, loop1_idx1, loop1_idx2, loop2_idx1, loop2_idx2, count;
+
+ brbcr = read_sysreg_s(SYS_BRBCR_EL1);
+ brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+
+ /* Ensure pause on PMU interrupt is enabled */
+ WARN_ON_ONCE(!(brbcr & BRBCR_EL1_FZP));
+
+ /* Pause the buffer */
+ write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
+ isb();
+
+ /* Determine the indices for each loop */
+ loop1_idx1 = BRBE_BANK0_IDX_MIN;
+ if (brbe_attr->brbe_nr <= BRBE_BANK_MAX_ENTRIES) {
+ loop1_idx2 = brbe_attr->brbe_nr - 1;
+ loop2_idx1 = BRBE_BANK1_IDX_MIN;
+ loop2_idx2 = BRBE_BANK0_IDX_MAX;
+ } else {
+ loop1_idx2 = BRBE_BANK0_IDX_MAX;
+ loop2_idx1 = BRBE_BANK1_IDX_MIN;
+ loop2_idx2 = brbe_attr->brbe_nr - 1;
+ }
+
+ /* Loop through bank 0 */
+ select_brbe_bank(BRBE_BANK_IDX_0);
+ for (idx = 0, count = loop1_idx1; count <= loop1_idx2; idx++, count++) {
+ if (!capture_branch_entry(cpuc, event, idx))
+ goto skip_bank_1;
+ }
+
+ /* Loop through bank 1 */
+ select_brbe_bank(BRBE_BANK_IDX_1);
+ for (count = loop2_idx1; count <= loop2_idx2; idx++, count++) {
+ if (!capture_branch_entry(cpuc, event, idx))
+ break;
+ }
+
+skip_bank_1:
+ cpuc->branches->branch_stack.nr = idx;
+ cpuc->branches->branch_stack.hw_idx = -1ULL;
+ process_branch_aborts(cpuc);
+
+ /* Unpause the buffer */
+ write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
+ isb();
+ armv8pmu_branch_reset();
+}
diff --git a/arch/arm64/kernel/brbe.h b/arch/arm64/kernel/brbe.h
new file mode 100644
index 000000000000..a47480eec070
--- /dev/null
+++ b/arch/arm64/kernel/brbe.h
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Branch Record Buffer Extension Helpers.
+ *
+ * Copyright (C) 2022 ARM Limited
+ *
+ * Author: Anshuman Khandual <[email protected]>
+ */
+#define pr_fmt(fmt) "brbe: " fmt
+
+#include <linux/perf/arm_pmu.h>
+
+#define BRBFCR_EL1_BRANCH_FILTERS (BRBFCR_EL1_DIRECT | \
+ BRBFCR_EL1_INDIRECT | \
+ BRBFCR_EL1_RTN | \
+ BRBFCR_EL1_INDCALL | \
+ BRBFCR_EL1_DIRCALL | \
+ BRBFCR_EL1_CONDDIR)
+
+#define BRBFCR_EL1_DEFAULT_CONFIG (BRBFCR_EL1_BANK_MASK | \
+ BRBFCR_EL1_PAUSED | \
+ BRBFCR_EL1_EnI | \
+ BRBFCR_EL1_BRANCH_FILTERS)
+
+/*
+ * BRBTS_EL1 is currently not used for the branch stack implementation,
+ * but BRBCR_EL1.TS needs to have a valid value from all available
+ * options. BRBCR_EL1_TS_VIRTUAL is selected for this.
+ */
+#define BRBCR_EL1_DEFAULT_TS FIELD_PREP(BRBCR_EL1_TS_MASK, BRBCR_EL1_TS_VIRTUAL)
+
+#define BRBCR_EL1_DEFAULT_CONFIG (BRBCR_EL1_EXCEPTION | \
+ BRBCR_EL1_ERTN | \
+ BRBCR_EL1_CC | \
+ BRBCR_EL1_MPRED | \
+ BRBCR_EL1_E1BRE | \
+ BRBCR_EL1_E0BRE | \
+ BRBCR_EL1_FZP | \
+ BRBCR_EL1_DEFAULT_TS)
+/*
+ * BRBE Instructions
+ *
+ * BRB_IALL : Invalidate the entire buffer
+ * BRB_INJ : Inject latest branch record derived from [BRBSRCINJ, BRBTGTINJ, BRBINFINJ]
+ */
+#define BRB_IALL __emit_inst(0xD5000000 | sys_insn(1, 1, 7, 2, 4) | (0x1f))
+#define BRB_INJ __emit_inst(0xD5000000 | sys_insn(1, 1, 7, 2, 5) | (0x1f))
+
+/*
+ * BRBE Buffer Organization
+ *
+ * The BRBE buffer is arranged as multiple banks of 32 branch record
+ * entries each. An individual branch record in a given bank can be
+ * accessed by selecting the bank in BRBFCR_EL1.BANK and then reading
+ * the register set i.e [BRBSRC, BRBTGT, BRBINF] with indices [0..31].
+ *
+ * Bank 0
+ *
+ * --------------------------------- ------
+ * | 00 | BRBSRC | BRBTGT | BRBINF | | 00 |
+ * --------------------------------- ------
+ * | 01 | BRBSRC | BRBTGT | BRBINF | | 01 |
+ * --------------------------------- ------
+ * | .. | BRBSRC | BRBTGT | BRBINF | | .. |
+ * --------------------------------- ------
+ * | 31 | BRBSRC | BRBTGT | BRBINF | | 31 |
+ * --------------------------------- ------
+ *
+ * Bank 1
+ *
+ * --------------------------------- ------
+ * | 32 | BRBSRC | BRBTGT | BRBINF | | 00 |
+ * --------------------------------- ------
+ * | 33 | BRBSRC | BRBTGT | BRBINF | | 01 |
+ * --------------------------------- ------
+ * | .. | BRBSRC | BRBTGT | BRBINF | | .. |
+ * --------------------------------- ------
+ * | 63 | BRBSRC | BRBTGT | BRBINF | | 31 |
+ * --------------------------------- ------
+ */
+#define BRBE_BANK_MAX_ENTRIES 32
+
+#define BRBE_BANK0_IDX_MIN 0
+#define BRBE_BANK0_IDX_MAX 31
+#define BRBE_BANK1_IDX_MIN 32
+#define BRBE_BANK1_IDX_MAX 63
+
+struct brbe_hw_attr {
+ int brbe_version;
+ int brbe_cc;
+ int brbe_nr;
+ int brbe_format;
+};
+
+enum brbe_bank_idx {
+ BRBE_BANK_IDX_INVALID = -1,
+ BRBE_BANK_IDX_0,
+ BRBE_BANK_IDX_1,
+ BRBE_BANK_IDX_MAX
+};
+
+#define RETURN_READ_BRBSRCN(n) \
+ read_sysreg_s(SYS_BRBSRC##n##_EL1)
+
+#define RETURN_READ_BRBTGTN(n) \
+ read_sysreg_s(SYS_BRBTGT##n##_EL1)
+
+#define RETURN_READ_BRBINFN(n) \
+ read_sysreg_s(SYS_BRBINF##n##_EL1)
+
+#define BRBE_REGN_CASE(n, case_macro) \
+ case n: return case_macro(n); break
+
+#define BRBE_REGN_SWITCH(x, case_macro) \
+ do { \
+ switch (x) { \
+ BRBE_REGN_CASE(0, case_macro); \
+ BRBE_REGN_CASE(1, case_macro); \
+ BRBE_REGN_CASE(2, case_macro); \
+ BRBE_REGN_CASE(3, case_macro); \
+ BRBE_REGN_CASE(4, case_macro); \
+ BRBE_REGN_CASE(5, case_macro); \
+ BRBE_REGN_CASE(6, case_macro); \
+ BRBE_REGN_CASE(7, case_macro); \
+ BRBE_REGN_CASE(8, case_macro); \
+ BRBE_REGN_CASE(9, case_macro); \
+ BRBE_REGN_CASE(10, case_macro); \
+ BRBE_REGN_CASE(11, case_macro); \
+ BRBE_REGN_CASE(12, case_macro); \
+ BRBE_REGN_CASE(13, case_macro); \
+ BRBE_REGN_CASE(14, case_macro); \
+ BRBE_REGN_CASE(15, case_macro); \
+ BRBE_REGN_CASE(16, case_macro); \
+ BRBE_REGN_CASE(17, case_macro); \
+ BRBE_REGN_CASE(18, case_macro); \
+ BRBE_REGN_CASE(19, case_macro); \
+ BRBE_REGN_CASE(20, case_macro); \
+ BRBE_REGN_CASE(21, case_macro); \
+ BRBE_REGN_CASE(22, case_macro); \
+ BRBE_REGN_CASE(23, case_macro); \
+ BRBE_REGN_CASE(24, case_macro); \
+ BRBE_REGN_CASE(25, case_macro); \
+ BRBE_REGN_CASE(26, case_macro); \
+ BRBE_REGN_CASE(27, case_macro); \
+ BRBE_REGN_CASE(28, case_macro); \
+ BRBE_REGN_CASE(29, case_macro); \
+ BRBE_REGN_CASE(30, case_macro); \
+ BRBE_REGN_CASE(31, case_macro); \
+ default: \
+ pr_warn("unknown register index\n"); \
+ return -1; \
+ } \
+ } while (0)
+
+static inline int buffer_to_brbe_idx(int buffer_idx)
+{
+ return buffer_idx % BRBE_BANK_MAX_ENTRIES;
+}
+
+static inline u64 get_brbsrc_reg(int buffer_idx)
+{
+ int brbe_idx = buffer_to_brbe_idx(buffer_idx);
+
+ BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBSRCN);
+}
+
+static inline u64 get_brbtgt_reg(int buffer_idx)
+{
+ int brbe_idx = buffer_to_brbe_idx(buffer_idx);
+
+ BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBTGTN);
+}
+
+static inline u64 get_brbinf_reg(int buffer_idx)
+{
+ int brbe_idx = buffer_to_brbe_idx(buffer_idx);
+
+ BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBINFN);
+}
+
+static inline u64 brbe_record_valid(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_VALID_MASK, brbinf);
+}
+
+static inline bool brbe_invalid(u64 brbinf)
+{
+ return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_NONE;
+}
+
+static inline bool brbe_record_is_complete(u64 brbinf)
+{
+ return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_FULL;
+}
+
+static inline bool brbe_record_is_source_only(u64 brbinf)
+{
+ return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_SOURCE;
+}
+
+static inline bool brbe_record_is_target_only(u64 brbinf)
+{
+ return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_TARGET;
+}
+
+static inline int brbe_get_in_tx(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_T_MASK, brbinf);
+}
+
+static inline int brbe_get_mispredict(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_MPRED_MASK, brbinf);
+}
+
+static inline int brbe_get_lastfailed(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_LASTFAILED_MASK, brbinf);
+}
+
+static inline int brbe_get_cycles(u64 brbinf)
+{
+ /*
+ * Captured cycle count is unknown and hence
+ * should not be passed on to the user space.
+ */
+ if (brbinf & BRBINFx_EL1_CCU)
+ return 0;
+
+ return FIELD_GET(BRBINFx_EL1_CC_MASK, brbinf);
+}
+
+static inline int brbe_get_type(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_TYPE_MASK, brbinf);
+}
+
+static inline int brbe_get_el(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_EL_MASK, brbinf);
+}
+
+static inline int brbe_get_numrec(u64 brbidr)
+{
+ return FIELD_GET(BRBIDR0_EL1_NUMREC_MASK, brbidr);
+}
+
+static inline int brbe_get_format(u64 brbidr)
+{
+ return FIELD_GET(BRBIDR0_EL1_FORMAT_MASK, brbidr);
+}
+
+static inline int brbe_get_cc_bits(u64 brbidr)
+{
+ return FIELD_GET(BRBIDR0_EL1_CC_MASK, brbidr);
+}
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 6d7c4f91cbf7..b074502835a2 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -861,6 +861,10 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
if (!armpmu_event_set_period(event))
continue;

+ /*
+ * PMU IRQ should remain asserted until all branch records
+ * are captured and processed into struct perf_sample_data.
+ */
if (has_branch_stack(event)) {
WARN_ON(!cpuc->branches);
armv8pmu_branch_read(cpuc, event);
@@ -1191,14 +1195,25 @@ static void __armv8pmu_probe_pmu(void *info)

static int branch_records_alloc(struct arm_pmu *armpmu)
{
+ struct branch_records __percpu *tmp_alloc_ptr;
+ struct branch_records *records;
struct pmu_hw_events *events;
int cpu;

+ tmp_alloc_ptr = alloc_percpu_gfp(struct branch_records, GFP_KERNEL);
+ if (!tmp_alloc_ptr)
+ return -ENOMEM;
+
+	/*
+	 * FIXME: Memory allocated via tmp_alloc_ptr gets completely
+	 * consumed here and is never required to be freed up later.
+	 * Hence losing access to the on stack 'tmp_alloc_ptr' is
+	 * acceptable. Otherwise this alloc handle would have to be
+	 * saved somewhere.
+	 */
for_each_possible_cpu(cpu) {
events = per_cpu_ptr(armpmu->hw_events, cpu);
- events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
- if (!events->branches)
- return -ENOMEM;
+ records = per_cpu_ptr(tmp_alloc_ptr, cpu);
+ events->branches = records;
}
return 0;
}
--
2.25.1


2023-03-15 05:21:06

by Anshuman Khandual

Subject: [PATCH V9 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack()

Short running processes, i.e those getting very little cpu run time each time
they get scheduled on, might not accumulate many branch records before a PMU
IRQ actually happens. This increases the possibility of such processes losing
most of their branch records while being scheduled in and out of various cpus
on the system.

All branch records that occur during a process's cpu run time need to be
saved when the process gets scheduled out. This requires an event context
specific buffer for such storage.

This adds PERF_ATTACH_TASK_DATA flag unconditionally, for all branch stack
sampling events, which would allocate task_ctx_data during its event init.
This also creates a platform specific task_ctx_data kmem cache which will
serve such allocation requests.

This adds a new structure 'arm64_perf_task_context' which encapsulates brbe
register set for maximum possible BRBE entries on the HW along with a valid
records tracking element.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/arm64/kernel/brbe.c | 13 +++++++++++++
arch/arm64/kernel/brbe.h | 13 +++++++++++++
arch/arm64/kernel/perf_event.c | 8 ++++++--
3 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/brbe.c b/arch/arm64/kernel/brbe.c
index c37118983751..a8b4f89b5d00 100644
--- a/arch/arm64/kernel/brbe.c
+++ b/arch/arm64/kernel/brbe.c
@@ -109,20 +109,33 @@ bool armv8pmu_branch_valid(struct perf_event *event)
return true;
}

+static inline struct kmem_cache *
+arm64_create_brbe_task_ctx_kmem_cache(size_t size)
+{
+ return kmem_cache_create("arm64_brbe_task_ctx", size, 0, 0, NULL);
+}
+
int armv8pmu_private_alloc(struct arm_pmu *arm_pmu)
{
struct brbe_hw_attr *brbe_attr = kzalloc(sizeof(struct brbe_hw_attr), GFP_KERNEL);
+ size_t size = sizeof(struct arm64_perf_task_context);

if (!brbe_attr)
return -ENOMEM;

arm_pmu->private = brbe_attr;
+ arm_pmu->pmu.task_ctx_cache = arm64_create_brbe_task_ctx_kmem_cache(size);
+ if (!arm_pmu->pmu.task_ctx_cache) {
+ kfree(arm_pmu->private);
+ return -ENOMEM;
+ }
return 0;
}

void armv8pmu_private_free(struct arm_pmu *arm_pmu)
{
kfree(arm_pmu->private);
+ kmem_cache_destroy(arm_pmu->pmu.task_ctx_cache);
}

static int brbe_attributes_probe(struct arm_pmu *armpmu, u32 brbe)
diff --git a/arch/arm64/kernel/brbe.h b/arch/arm64/kernel/brbe.h
index a47480eec070..4a72c2ba7140 100644
--- a/arch/arm64/kernel/brbe.h
+++ b/arch/arm64/kernel/brbe.h
@@ -80,12 +80,25 @@
* --------------------------------- ------
*/
#define BRBE_BANK_MAX_ENTRIES 32
+#define BRBE_MAX_BANK 2
+#define BRBE_MAX_ENTRIES (BRBE_BANK_MAX_ENTRIES * BRBE_MAX_BANK)

#define BRBE_BANK0_IDX_MIN 0
#define BRBE_BANK0_IDX_MAX 31
#define BRBE_BANK1_IDX_MIN 32
#define BRBE_BANK1_IDX_MAX 63

+struct brbe_regset {
+ unsigned long brbsrc;
+ unsigned long brbtgt;
+ unsigned long brbinf;
+};
+
+struct arm64_perf_task_context {
+ struct brbe_regset store[BRBE_MAX_ENTRIES];
+ int nr_brbe_records;
+};
+
struct brbe_hw_attr {
int brbe_version;
int brbe_cc;
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index b074502835a2..c100731c52a0 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1067,8 +1067,12 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
&armv8_pmuv3_perf_cache_map,
ARMV8_PMU_EVTYPE_EVENT);

- if (has_branch_stack(event) && !armv8pmu_branch_valid(event))
- return -EOPNOTSUPP;
+ if (has_branch_stack(event)) {
+ if (!armv8pmu_branch_valid(event))
+ return -EOPNOTSUPP;
+
+ event->attach_state |= PERF_ATTACH_TASK_DATA;
+ }

/*
* CHAIN events only work when paired with an adjacent counter, and it
--
2.25.1


2023-03-15 05:21:09

by Anshuman Khandual

Subject: [PATCH V9 08/10] arm64/perf: Add struct brbe_regset helper functions

The primary abstraction level for fetching branch records from the BRBE HW
has been changed to 'struct brbe_regset', which contains storage for all
three BRBE registers i.e BRBSRC, BRBTGT, BRBINF. Whether branch record
processing happens in the task sched out path, or in the PMU IRQ handling
path, these registers need to be extracted from the HW. Afterwards both live
and stored sets need to be stitched together to create the final branch
records set. This adds the required helper functions for such operations.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/arm64/kernel/brbe.c | 163 +++++++++++++++++++++++++++++++++++++++
1 file changed, 163 insertions(+)

diff --git a/arch/arm64/kernel/brbe.c b/arch/arm64/kernel/brbe.c
index a8b4f89b5d00..34bc58ef8062 100644
--- a/arch/arm64/kernel/brbe.c
+++ b/arch/arm64/kernel/brbe.c
@@ -44,6 +44,169 @@ static void select_brbe_bank(int bank)
isb();
}

+/*
+ * This scans over the BRBE register banks and captures individual branch
+ * records [BRBSRC, BRBTGT, BRBINF] into a pre-allocated 'struct brbe_regset'
+ * buffer, until an invalid one gets encountered. The caller of this function
+ * needs to ensure BRBE is in an appropriate state before the records can be
+ * captured.
+ */
+static int capture_brbe_regset(struct brbe_hw_attr *brbe_attr, struct brbe_regset *buf)
+{
+ int loop1_idx1, loop1_idx2, loop2_idx1, loop2_idx2;
+ int idx, count;
+
+ loop1_idx1 = BRBE_BANK0_IDX_MIN;
+ if (brbe_attr->brbe_nr <= BRBE_BANK_MAX_ENTRIES) {
+ loop1_idx2 = brbe_attr->brbe_nr - 1;
+ loop2_idx1 = BRBE_BANK1_IDX_MIN;
+ loop2_idx2 = BRBE_BANK0_IDX_MAX;
+ } else {
+ loop1_idx2 = BRBE_BANK0_IDX_MAX;
+ loop2_idx1 = BRBE_BANK1_IDX_MIN;
+ loop2_idx2 = brbe_attr->brbe_nr - 1;
+ }
+
+ select_brbe_bank(BRBE_BANK_IDX_0);
+ for (idx = 0, count = loop1_idx1; count <= loop1_idx2; idx++, count++) {
+ buf[idx].brbinf = get_brbinf_reg(idx);
+		/*
+		 * An invalid record means there are no valid entries
+		 * left in the buffer. Abort the branch record
+		 * processing here to save some cycles and also reduce
+		 * the capture/process load overall.
+		 */
+ if (brbe_invalid(buf[idx].brbinf))
+ return idx;
+
+ buf[idx].brbsrc = get_brbsrc_reg(idx);
+ buf[idx].brbtgt = get_brbtgt_reg(idx);
+ }
+
+ select_brbe_bank(BRBE_BANK_IDX_1);
+ for (count = loop2_idx1; count <= loop2_idx2; idx++, count++) {
+ buf[idx].brbinf = get_brbinf_reg(idx);
+		/*
+		 * An invalid record means there are no valid entries
+		 * left in the buffer. Abort the branch record
+		 * processing here to save some cycles and also reduce
+		 * the capture/process load overall.
+		 */
+ if (brbe_invalid(buf[idx].brbinf))
+ return idx;
+
+ buf[idx].brbsrc = get_brbsrc_reg(idx);
+ buf[idx].brbtgt = get_brbtgt_reg(idx);
+ }
+ return idx;
+}
+
+static inline void copy_brbe_regset(struct brbe_regset *src, int src_idx,
+ struct brbe_regset *dst, int dst_idx)
+{
+ dst[dst_idx].brbinf = src[src_idx].brbinf;
+ dst[dst_idx].brbsrc = src[src_idx].brbsrc;
+ dst[dst_idx].brbtgt = src[src_idx].brbtgt;
+}
+
+/*
+ * This function concatenates branch records from the stored and live
+ * buffers, up to a maximum of nr_max records, with the stored buffer
+ * holding the resultant set. The concatenated buffer contains all the
+ * branch records from the live buffer, and as many from the stored
+ * buffer as fit, provided the combined length does not exceed 'nr_max'.
+ *
+ * Stored records Live records
+ * ------------------------------------------------^
+ * | S0 | L0 | Newest |
+ * --------------------------------- |
+ * | S1 | L1 | |
+ * --------------------------------- |
+ * | S2 | L2 | |
+ * --------------------------------- |
+ * | S3 | L3 | |
+ * --------------------------------- |
+ * | S4 | L4 | nr_max
+ * --------------------------------- |
+ * | | L5 | |
+ * --------------------------------- |
+ * | | L6 | |
+ * --------------------------------- |
+ * | | L7 | |
+ * --------------------------------- |
+ * | | | |
+ * --------------------------------- |
+ * | | | Oldest |
+ * ------------------------------------------------V
+ *
+ *
+ * S0 is the newest of the stored records, whereas L7 is the oldest of
+ * the live records. Unless the live buffer is detected as being full,
+ * thus potentially dropping off some older records, L7 and S0 records
+ * are contiguous in time for a user task context. The stitched buffer
+ * here represents the maximum possible branch records, contiguous in time.
+ *
+ * Stored records Live records
+ * ------------------------------------------------^
+ * | L0 | L0 | Newest |
+ * --------------------------------- |
+ * | L1 | L1 | |
+ * --------------------------------- |
+ * | L2 | L2 | |
+ * --------------------------------- |
+ * | L3 | L3 | |
+ * --------------------------------- |
+ * | L4 | L4 | nr_max
+ * --------------------------------- |
+ * | L5 | L5 | |
+ * --------------------------------- |
+ * | L6 | L6 | |
+ * --------------------------------- |
+ * | L7 | L7 | |
+ * --------------------------------- |
+ * | S0 | | |
+ * --------------------------------- |
+ * | S1 | | Oldest |
+ * ------------------------------------------------V
+ * | S2 | <----|
+ * ----------------- |
+ * | S3 | <----| Dropped off after nr_max
+ * ----------------- |
+ * | S4 | <----|
+ * -----------------
+ */
+static int stitch_stored_live_entries(struct brbe_regset *stored,
+ struct brbe_regset *live,
+ int nr_stored, int nr_live,
+ int nr_max)
+{
+ int nr_total, nr_excess, nr_last, i;
+
+ nr_total = nr_stored + nr_live;
+ nr_excess = nr_total - nr_max;
+
+ /* Stored branch records in stitched buffer */
+ if (nr_live == nr_max)
+ nr_stored = 0;
+ else if (nr_excess > 0)
+ nr_stored -= nr_excess;
+
+ /* Stitched buffer branch records length */
+ if (nr_total > nr_max)
+ nr_last = nr_max;
+ else
+ nr_last = nr_total;
+
+ /* Move stored branch records towards the tail, oldest first so that
+ * overlapping moves never clobber an unread source entry */
+ for (i = nr_stored - 1; i >= 0; i--)
+ copy_brbe_regset(stored, i, stored, nr_last - nr_stored + i);
+
+ /* Copy live branch records */
+ for (i = 0; i < nr_live; i++)
+ copy_brbe_regset(live, i, stored, i);
+
+ return nr_last;
+}
+
/*
* Generic perf branch filters supported on BRBE
*
--
2.25.1


2023-03-15 05:25:57

by Anshuman Khandual

Subject: [PATCH V9 09/10] arm64/perf: Implement branch records save on task sched out

This modifies the current armv8pmu_sched_task() to implement a branch records
save mechanism via armv8pmu_branch_save() when a task schedules out of a cpu.
BRBE is paused and disabled for all exception levels before branch records
get captured, which are then concatenated with the stored records already
present in the task context, maintaining contiguity. The final length of the
concatenated buffer never exceeds the implemented BRBE length.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/arm64/include/asm/perf_event.h | 2 ++
arch/arm64/kernel/brbe.c | 30 +++++++++++++++++++++++++++++
arch/arm64/kernel/perf_event.c | 14 ++++++++++++--
3 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index 8077b1fabe29..9ad0c6aabc07 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -289,6 +289,7 @@ void armv8pmu_branch_probe(struct arm_pmu *arm_pmu);
void armv8pmu_branch_reset(void);
int armv8pmu_private_alloc(struct arm_pmu *arm_pmu);
void armv8pmu_private_free(struct arm_pmu *arm_pmu);
+void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx);
#else
static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
{
@@ -315,6 +316,7 @@ static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { }
static inline void armv8pmu_branch_reset(void) { }
static inline int armv8pmu_private_alloc(struct arm_pmu *arm_pmu) { return 0; }
static inline void armv8pmu_private_free(struct arm_pmu *arm_pmu) { }
+static inline void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx) { }
#endif
#endif
#endif
diff --git a/arch/arm64/kernel/brbe.c b/arch/arm64/kernel/brbe.c
index 34bc58ef8062..3dcb4407b92a 100644
--- a/arch/arm64/kernel/brbe.c
+++ b/arch/arm64/kernel/brbe.c
@@ -207,6 +207,36 @@ static int stitch_stored_live_entries(struct brbe_regset *stored,
return nr_last;
}

+static int brbe_branch_save(struct brbe_hw_attr *brbe_attr, struct brbe_regset *live)
+{
+ u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ int nr_live;
+
+ write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
+ isb();
+
+ nr_live = capture_brbe_regset(brbe_attr, live);
+
+ write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
+ isb();
+
+ return nr_live;
+}
+
+void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx)
+{
+ struct brbe_hw_attr *brbe_attr = (struct brbe_hw_attr *)arm_pmu->private;
+ struct arm64_perf_task_context *task_ctx = ctx;
+ struct brbe_regset live[BRBE_MAX_ENTRIES];
+ int nr_live, nr_store;
+
+ nr_live = brbe_branch_save(brbe_attr, live);
+ nr_store = task_ctx->nr_brbe_records;
+ nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
+ nr_live, brbe_attr->brbe_nr);
+ task_ctx->nr_brbe_records = nr_store;
+}
+
/*
* Generic perf branch filters supported on BRBE
*
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index c100731c52a0..2fbed575e747 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -972,9 +972,19 @@ static int armv8pmu_user_event_idx(struct perf_event *event)
static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
{
struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
+ void *task_ctx = pmu_ctx->task_ctx_data;

- if (sched_in && arm_pmu_branch_stack_supported(armpmu))
- armv8pmu_branch_reset();
+ if (arm_pmu_branch_stack_supported(armpmu)) {
+ /* Save branch records in task_ctx on sched out */
+ if (task_ctx && !sched_in) {
+ armv8pmu_branch_save(armpmu, task_ctx);
+ return;
+ }
+
+ /* Reset branch records on sched in */
+ if (sched_in)
+ armv8pmu_branch_reset();
+ }
}

/*
--
2.25.1


2023-03-15 05:41:47

by Anshuman Khandual

Subject: [PATCH V9 05/10] arm64/perf: Add branch stack support in ARMV8 PMU

This enables support for branch stack sampling events in the ARMV8 PMU,
checking has_branch_stack() on the event inside 'struct arm_pmu' callbacks.
These branch stack helpers armv8pmu_branch_XXXXX() are just dummy functions
for now. While here, this also defines arm_pmu's sched_task() callback with
armv8pmu_sched_task(), which resets the branch record buffer on a sched_in.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/arm64/include/asm/perf_event.h | 33 +++++++++++++
arch/arm64/kernel/perf_event.c | 77 ++++++++++++++++++++---------
2 files changed, 87 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index 3eaf462f5752..463f23c3484f 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -273,4 +273,37 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs);
(regs)->pstate = PSR_MODE_EL1h; \
}

+struct pmu_hw_events;
+struct arm_pmu;
+struct perf_event;
+
+#ifdef CONFIG_PERF_EVENTS
+static inline bool has_branch_stack(struct perf_event *event);
+
+static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
+{
+ WARN_ON_ONCE(!has_branch_stack(event));
+}
+
+static inline bool armv8pmu_branch_valid(struct perf_event *event)
+{
+ WARN_ON_ONCE(!has_branch_stack(event));
+ return false;
+}
+
+static inline void armv8pmu_branch_enable(struct perf_event *event)
+{
+ WARN_ON_ONCE(!has_branch_stack(event));
+}
+
+static inline void armv8pmu_branch_disable(struct perf_event *event)
+{
+ WARN_ON_ONCE(!has_branch_stack(event));
+}
+
+static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { }
+static inline void armv8pmu_branch_reset(void) { }
+static inline int armv8pmu_private_alloc(struct arm_pmu *arm_pmu) { return 0; }
+static inline void armv8pmu_private_free(struct arm_pmu *arm_pmu) { }
+#endif
#endif
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index dde06c0f97f3..6d7c4f91cbf7 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -769,38 +769,21 @@ static void armv8pmu_enable_event(struct perf_event *event)
* Enable counter and interrupt, and set the counter to count
* the event that we're interested in.
*/
-
- /*
- * Disable counter
- */
armv8pmu_disable_event_counter(event);
-
- /*
- * Set event.
- */
armv8pmu_write_event_type(event);
-
- /*
- * Enable interrupt for this counter
- */
armv8pmu_enable_event_irq(event);
-
- /*
- * Enable counter
- */
armv8pmu_enable_event_counter(event);
+
+ if (has_branch_stack(event))
+ armv8pmu_branch_enable(event);
}

static void armv8pmu_disable_event(struct perf_event *event)
{
- /*
- * Disable counter
- */
- armv8pmu_disable_event_counter(event);
+ if (has_branch_stack(event))
+ armv8pmu_branch_disable(event);

- /*
- * Disable interrupt for this counter
- */
+ armv8pmu_disable_event_counter(event);
armv8pmu_disable_event_irq(event);
}

@@ -878,6 +861,12 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
if (!armpmu_event_set_period(event))
continue;

+ if (has_branch_stack(event)) {
+ WARN_ON(!cpuc->branches);
+ armv8pmu_branch_read(cpuc, event);
+ perf_sample_save_brstack(&data, event, &cpuc->branches->branch_stack);
+ }
+
/*
* Perf event overflow will queue the processing of the event as
* an irq_work which will be taken care of in the handling of
@@ -976,6 +965,14 @@ static int armv8pmu_user_event_idx(struct perf_event *event)
return event->hw.idx;
}

+static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+{
+ struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
+
+ if (sched_in && arm_pmu_branch_stack_supported(armpmu))
+ armv8pmu_branch_reset();
+}
+
/*
* Add an event filter to a given event.
*/
@@ -1046,6 +1043,9 @@ static void armv8pmu_reset(void *info)
pmcr |= ARMV8_PMU_PMCR_LP;

armv8pmu_pmcr_write(pmcr);
+
+ if (arm_pmu_branch_stack_supported(cpu_pmu))
+ armv8pmu_branch_reset();
}

static int __armv8_pmuv3_map_event(struct perf_event *event,
@@ -1063,6 +1063,9 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
&armv8_pmuv3_perf_cache_map,
ARMV8_PMU_EVTYPE_EVENT);

+ if (has_branch_stack(event) && !armv8pmu_branch_valid(event))
+ return -EOPNOTSUPP;
+
/*
* CHAIN events only work when paired with an adjacent counter, and it
* never makes sense for a user to open one in isolation, as they'll be
@@ -1183,6 +1186,21 @@ static void __armv8pmu_probe_pmu(void *info)
cpu_pmu->reg_pmmir = read_cpuid(PMMIR_EL1);
else
cpu_pmu->reg_pmmir = 0;
+ armv8pmu_branch_probe(cpu_pmu);
+}
+
+static int branch_records_alloc(struct arm_pmu *armpmu)
+{
+ struct pmu_hw_events *events;
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ events = per_cpu_ptr(armpmu->hw_events, cpu);
+ events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
+ if (!events->branches)
+ return -ENOMEM;
+ }
+ return 0;
}

static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
@@ -1193,12 +1211,24 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
};
int ret;

+ ret = armv8pmu_private_alloc(cpu_pmu);
+ if (ret)
+ return ret;
+
ret = smp_call_function_any(&cpu_pmu->supported_cpus,
__armv8pmu_probe_pmu,
&probe, 1);
if (ret)
return ret;

+ if (arm_pmu_branch_stack_supported(cpu_pmu)) {
+ ret = branch_records_alloc(cpu_pmu);
+ if (ret)
+ return ret;
+ } else {
+ armv8pmu_private_free(cpu_pmu);
+ }
+
return probe.present ? 0 : -ENODEV;
}

@@ -1262,6 +1292,7 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
cpu_pmu->set_event_filter = armv8pmu_set_event_filter;

cpu_pmu->pmu.event_idx = armv8pmu_user_event_idx;
+ cpu_pmu->sched_task = armv8pmu_sched_task;

cpu_pmu->name = name;
cpu_pmu->map_event = map_event;
--
2.25.1


2023-03-15 05:41:51

by Anshuman Khandual

Subject: [PATCH V9 10/10] arm64/perf: Implement branch records save on PMU IRQ

This modifies armv8pmu_branch_read() to concatenate live entries with the
task context's stored entries, and then process the resultant buffer to
create the perf branch entry array for perf_sample_data. It follows the
same principle as the task sched out path.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/arm64/kernel/brbe.c | 75 +++++++++++++++-------------------------
1 file changed, 28 insertions(+), 47 deletions(-)

diff --git a/arch/arm64/kernel/brbe.c b/arch/arm64/kernel/brbe.c
index 3dcb4407b92a..652af6668d37 100644
--- a/arch/arm64/kernel/brbe.c
+++ b/arch/arm64/kernel/brbe.c
@@ -693,41 +693,45 @@ void armv8pmu_branch_reset(void)
isb();
}

-static bool capture_branch_entry(struct pmu_hw_events *cpuc,
- struct perf_event *event, int idx)
+static void brbe_regset_branch_entries(struct pmu_hw_events *cpuc, struct perf_event *event,
+ struct brbe_regset *regset, int idx)
{
struct perf_branch_entry *entry = &cpuc->branches->branch_entries[idx];
- u64 brbinf = get_brbinf_reg(idx);
-
- /*
- * There are no valid entries anymore on the buffer.
- * Abort the branch record processing to save some
- * cycles and also reduce the capture/process load
- * for the user space as well.
- */
- if (brbe_invalid(brbinf))
- return false;
+ u64 brbinf = regset[idx].brbinf;

perf_clear_branch_entry_bitfields(entry);
if (brbe_record_is_complete(brbinf)) {
- entry->from = get_brbsrc_reg(idx);
- entry->to = get_brbtgt_reg(idx);
+ entry->from = regset[idx].brbsrc;
+ entry->to = regset[idx].brbtgt;
} else if (brbe_record_is_source_only(brbinf)) {
- entry->from = get_brbsrc_reg(idx);
+ entry->from = regset[idx].brbsrc;
entry->to = 0;
} else if (brbe_record_is_target_only(brbinf)) {
entry->from = 0;
- entry->to = get_brbtgt_reg(idx);
+ entry->to = regset[idx].brbtgt;
}
capture_brbe_flags(entry, event, brbinf);
- return true;
+}
+
+static void process_branch_entries(struct pmu_hw_events *cpuc, struct perf_event *event,
+ struct brbe_regset *regset, int nr_regset)
+{
+ int idx;
+
+ for (idx = 0; idx < nr_regset; idx++)
+ brbe_regset_branch_entries(cpuc, event, regset, idx);
+
+ cpuc->branches->branch_stack.nr = nr_regset;
+ cpuc->branches->branch_stack.hw_idx = -1ULL;
}

void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
{
struct brbe_hw_attr *brbe_attr = (struct brbe_hw_attr *)cpuc->percpu_pmu->private;
+ struct arm64_perf_task_context *task_ctx = event->pmu_ctx->task_ctx_data;
+ struct brbe_regset live[BRBE_MAX_ENTRIES];
+ int nr_live, nr_store;
u64 brbfcr, brbcr;
- int idx, loop1_idx1, loop1_idx2, loop2_idx1, loop2_idx2, count;

brbcr = read_sysreg_s(SYS_BRBCR_EL1);
brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
@@ -739,36 +743,13 @@ void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
isb();

- /* Determine the indices for each loop */
- loop1_idx1 = BRBE_BANK0_IDX_MIN;
- if (brbe_attr->brbe_nr <= BRBE_BANK_MAX_ENTRIES) {
- loop1_idx2 = brbe_attr->brbe_nr - 1;
- loop2_idx1 = BRBE_BANK1_IDX_MIN;
- loop2_idx2 = BRBE_BANK0_IDX_MAX;
- } else {
- loop1_idx2 = BRBE_BANK0_IDX_MAX;
- loop2_idx1 = BRBE_BANK1_IDX_MIN;
- loop2_idx2 = brbe_attr->brbe_nr - 1;
- }
-
- /* Loop through bank 0 */
- select_brbe_bank(BRBE_BANK_IDX_0);
- for (idx = 0, count = loop1_idx1; count <= loop1_idx2; idx++, count++) {
- if (!capture_branch_entry(cpuc, event, idx))
- goto skip_bank_1;
- }
-
- /* Loop through bank 1 */
- select_brbe_bank(BRBE_BANK_IDX_1);
- for (count = loop2_idx1; count <= loop2_idx2; idx++, count++) {
- if (!capture_branch_entry(cpuc, event, idx))
- break;
- }
-
-skip_bank_1:
- cpuc->branches->branch_stack.nr = idx;
- cpuc->branches->branch_stack.hw_idx = -1ULL;
+ nr_live = capture_brbe_regset(brbe_attr, live);
+ nr_store = task_ctx->nr_brbe_records;
+ nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
+ nr_live, brbe_attr->brbe_nr);
+ process_branch_entries(cpuc, event, task_ctx->store, nr_store);
process_branch_aborts(cpuc);
+ task_ctx->nr_brbe_records = 0;

/* Unpause the buffer */
write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
--
2.25.1


2023-03-21 19:03:53

by Mark Brown

Subject: Re: [PATCH V9 00/10] arm64/perf: Enable branch stack sampling

On Wed, Mar 15, 2023 at 10:44:34AM +0530, Anshuman Khandual wrote:
> This series enables perf branch stack sampling support on arm64 platform
> via a new arch feature called Branch Record Buffer Extension (BRBE). All
> relevant register definitions could be accessed here.
>
> https://developer.arm.com/documentation/ddi0601/2021-12/AArch64-Registers

While looking at another feature I noticed that HFGITR_EL2 has two traps
for BRBE instructions, nBRBINJ and nBRBIALL which trap BRB INJ and BRB
IALL. Even if we don't use those right now does it make sense to
document a requirement for those traps to be disabled now in case we
need them later, and do so during EL2 setup for KVM guests? That could
always be done incrementally.

I've got a patch adding the definition of that register to sysreg which
I should be sending shortly, no need to duplicate that effort.


Attachments:
(No filename) (870.00 B)
signature.asc (488.00 B)

2023-03-23 04:28:20

by Anshuman Khandual

Subject: Re: [PATCH V9 00/10] arm64/perf: Enable branch stack sampling

Hello Mark,

On 3/22/23 00:32, Mark Brown wrote:
> On Wed, Mar 15, 2023 at 10:44:34AM +0530, Anshuman Khandual wrote:
>> This series enables perf branch stack sampling support on arm64 platform
>> via a new arch feature called Branch Record Buffer Extension (BRBE). All
>> relevant register definitions could be accessed here.
>>
>> https://developer.arm.com/documentation/ddi0601/2021-12/AArch64-Registers
>
> While looking at another feature I noticed that HFGITR_EL2 has two traps
> for BRBE instructions, nBRBINJ and nBRBIALL which trap BRB INJ and BRB
> IALL. Even if we don't use those right now does it make sense to

Right, the current branch stack sampling experiments have been on the EL2 host itself.

> document a requirement for those traps to be disabled now in case we
> need them later, and do so during EL2 setup for KVM guests? That could
> always be done incrementally.
Unlike all other instruction trap enable fields in SYS_HFGITR_EL2, these BRBE
instruction fields are actually inverted in semantics, i.e. the particular
fields need to be set for these traps to be disabled in EL2.

SYS_HFGITR_EL2.nBRBIALL
SYS_HFGITR_EL2.nBRBINJ

By default the entire SYS_HFGITR_EL2 is cleared during init, and that would
prevent a guest from using BRBE.

init_kernel_el()
init_el2()
init_el2_state()
__init_el2_fgt()
........
msr_s SYS_HFGITR_EL2, xzr
........

I guess something like the following (untested) needs to be done, to enable
BRBE in guests.

diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index 037724b19c5c..309708127a2a 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -161,6 +161,15 @@
msr_s SYS_HFGWTR_EL2, x0
msr_s SYS_HFGITR_EL2, xzr

+ mrs x1, id_aa64dfr0_el1
+ ubfx x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
+ cbz x1, .Lskip_brbe_\@
+ mov x0, xzr
+ orr x0, x0, #HFGITR_EL2_nBRBIALL
+ orr x0, x0, #HFGITR_EL2_nBRBINJ
+ msr_s SYS_HFGITR_EL2, x0
+
+.Lskip_brbe_\@:
mrs x1, id_aa64pfr0_el1 // AMU traps UNDEF without AMU
ubfx x1, x1, #ID_AA64PFR0_EL1_AMU_SHIFT, #4
cbz x1, .Lskip_fgt_\@
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b3bc03ee22bd..3b939c42f3b8 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -527,6 +527,9 @@
#define SYS_HFGITR_EL2 sys_reg(3, 4, 1, 1, 6)
#define SYS_HACR_EL2 sys_reg(3, 4, 1, 1, 7)

+#define HFGITR_EL2_nBRBIALL (BIT(56))
+#define HFGITR_EL2_nBRBINJ (BIT(55))
+
#define SYS_TTBR0_EL2 sys_reg(3, 4, 2, 0, 0)
#define SYS_TTBR1_EL2 sys_reg(3, 4, 2, 0, 1)
#define SYS_TCR_EL2 sys_reg(3, 4, 2, 0, 2)


>
> I've got a patch adding the definition of that register to sysreg which
> I should be sending shortly, no need to duplicate that effort.

Sure, I assume you are moving the existing definition for SYS_HFGITR_EL2 along
with all its fields from ../include/asm/sysreg.h to ../tools/sysreg. Right, it
makes sense.

- Anshuman

2023-03-23 13:21:47

by Mark Brown

Subject: Re: [PATCH V9 00/10] arm64/perf: Enable branch stack sampling

On Thu, Mar 23, 2023 at 09:55:47AM +0530, Anshuman Khandual wrote:
> On 3/22/23 00:32, Mark Brown wrote:

> > document a requirement for those traps to be disabled now in case we
> > need them later, and do so during EL2 setup for KVM guests? That could
> > always be done incrementally.

> Unlike all other instruction trap enable fields in SYS_HFGITR_EL2, these BRBE
> instructions ones are actually inverted in semantics i.e the particular fields
> need to be set for these traps to be disabled in EL2.

Right, for backwards compatibility all newly added fields are trap by
default.

> SYS_HFGITR_EL2.nBRBIALL
> SYS_HFGITR_EL2.nBRBINJ

> By default entire SYS_HFGITR_EL2 is set as cleared during init and that would
> prevent a guest from using BRBE.

It should prevent the host as well shouldn't it?

> I guess something like the following (untested) needs to be done, to enable
> BRBE in guests.

> + mrs x1, id_aa64dfr0_el1
> + ubfx x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
> + cbz x1, .Lskip_brbe_\@
> + mov x0, xzr
> + orr x0, x0, #HFGITR_EL2_nBRBIALL
> + orr x0, x0, #HFGITR_EL2_nBRBINJ
> + msr_s SYS_HFGITR_EL2, x0
> +
> +.Lskip_brbe_\@:

Yes, looks roughly what I'd expect.

> > I've got a patch adding the definition of that register to sysreg which
> > I should be sending shortly, no need to duplicate that effort.

> Sure, I assume you are moving the existing definition for SYS_HFGITR_EL2 along
> with all its fields from ../include/asm/sysreg.h to ../tools/sysreg. Right, it
> makes sense.

No fields at the minute but yes, like the other conversions.


Attachments:
(No filename) (1.64 kB)
signature.asc (499.00 B)

2023-03-24 03:34:36

by Anshuman Khandual

Subject: Re: [PATCH V9 00/10] arm64/perf: Enable branch stack sampling



On 3/23/23 18:24, Mark Brown wrote:
> On Thu, Mar 23, 2023 at 09:55:47AM +0530, Anshuman Khandual wrote:
>> On 3/22/23 00:32, Mark Brown wrote:
>
>>> document a requirement for those traps to be disabled now in case we
>>> need them later, and do so during EL2 setup for KVM guests? That could
>>> always be done incrementally.
>
>> Unlike all other instruction trap enable fields in SYS_HFGITR_EL2, these BRBE
>> instructions ones are actually inverted in semantics i.e the particular fields
>> need to be set for these traps to be disabled in EL2.
>
> Right, for backwards compatibility all newly added fields are trap by
> default.

Okay

>
>> SYS_HFGITR_EL2.nBRBIALL
>> SYS_HFGITR_EL2.nBRBINJ
>
>> By default entire SYS_HFGITR_EL2 is set as cleared during init and that would
>> prevent a guest from using BRBE.
>
> It should prevent the host as well shouldn't it?

In an EL2 host environment, BRBE gets enabled either in EL2 (kernel/hv) or
in EL0 (user space); it never gets enabled at EL1. Moreover, the BRBIALL/BRBINJ
instructions are always executed while inside EL2 (kernel/hv). Hence, how
could these instructions cause a trap in EL2?

>
>> I guess something like the following (untested) needs to be done, to enable
>> BRBE in guests.
>
>> + mrs x1, id_aa64dfr0_el1
>> + ubfx x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
>> + cbz x1, .Lskip_brbe_\@
>> + mov x0, xzr
>> + orr x0, x0, #HFGITR_EL2_nBRBIALL
>> + orr x0, x0, #HFGITR_EL2_nBRBINJ
>> + msr_s SYS_HFGITR_EL2, x0
>> +
>> +.Lskip_brbe_\@:
>
> Yes, looks roughly what I'd expect.

I could send a standalone patch after your latest series [1], which disables
the BRBINJ/BRBIALL instruction traps in EL2 to enable BRBE usage in guests.

https://lore.kernel.org/all/[email protected]/T/

>
>>> I've got a patch adding the definition of that register to sysreg which
>>> I should be sending shortly, no need to duplicate that effort.
>
>> Sure, I assume you are moving the existing definition for SYS_HFGITR_EL2 along
>> with all its fields from ../include/asm/sysreg.h to ../tools/sysreg. Right, it
>> makes sense.
>
> No fields at the minute but yes, like the other conversions.

Sure.

2023-03-24 11:41:49

by Mark Brown

Subject: Re: [PATCH V9 00/10] arm64/perf: Enable branch stack sampling

On Fri, Mar 24, 2023 at 08:50:32AM +0530, Anshuman Khandual wrote:
> On 3/23/23 18:24, Mark Brown wrote:
> > On Thu, Mar 23, 2023 at 09:55:47AM +0530, Anshuman Khandual wrote:

> >> By default entire SYS_HFGITR_EL2 is set as cleared during init and that would
> >> prevent a guest from using BRBE.

> > It should prevent the host as well shouldn't it?

> In a EL2 host environment, BRBE is being enabled either in EL2 (kernel/hv) or
> in EL0 (user space), it never gets enabled on EL1. Moreover BRBIALL/BRBINJ
> instructions are always executed while being inside EL2 (kernel/hv). Hence how
> could these instructions cause trap in EL2 ?

Ah, I see - I didn't realise this couldn't run at EL1.

> > Yes, looks roughly what I'd expect.

> I could send an stand alone patch after your latest series [1], which disables
> BRBINJ/BRBIALL instruction trap in EL2 to enable BRBE usage in the guest.

Sounds reasonable enough to me.


Attachments:
(No filename) (949.00 B)
signature.asc (499.00 B)

2023-04-11 13:05:46

by Will Deacon

Subject: Re: [PATCH V9 00/10] arm64/perf: Enable branch stack sampling

Hi Anshuman

On Wed, Mar 15, 2023 at 10:44:34AM +0530, Anshuman Khandual wrote:
> This series enables perf branch stack sampling support on arm64 platform
> via a new arch feature called Branch Record Buffer Extension (BRBE). All
> relevant register definitions could be accessed here.
>
> https://developer.arm.com/documentation/ddi0601/2021-12/AArch64-Registers
>
> This series applies on 6.3-rc1 after applying the following patch from Mark
> which allows enums in SysregFields blocks in sysreg tools.
>
> https://lore.kernel.org/all/[email protected]/

As mentioned by Mark at:

https://lore.kernel.org/r/ZB2sGrsbr58ttoWI@FVFF77S0Q05N

this conflicts with supporting PMUv3 on AArch32. Please can you rebase onto
for-next/perf, which will mean moving this driver back into drivers/perf/
now?

Thanks,

Will

2023-04-12 08:53:22

by Yang Shen

Subject: Re: [PATCH V9 02/10] arm64/perf: Add BRBE registers and fields



On 2023/3/15 13:14, Anshuman Khandual wrote:
> This adds BRBE related register definitions and various other related field
> macros there in. These will be used subsequently in a BRBE driver which is
> being added later on.
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Reviewed-by: Mark Brown <[email protected]>
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> arch/arm64/include/asm/sysreg.h | 103 +++++++++++++++++++++
> arch/arm64/tools/sysreg | 159 ++++++++++++++++++++++++++++++++
> 2 files changed, 262 insertions(+)
>
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 9e3ecba3c4e6..b3bc03ee22bd 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -165,6 +165,109 @@
> #define SYS_DBGDTRTX_EL0 sys_reg(2, 3, 0, 5, 0)
> #define SYS_DBGVCR32_EL2 sys_reg(2, 4, 0, 7, 0)
>
> +#define __SYS_BRBINFO(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 0))
> +#define __SYS_BRBSRC(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 1))
> +#define __SYS_BRBTGT(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 2))
> +
> +#define SYS_BRBINF0_EL1 __SYS_BRBINFO(0)
> +#define SYS_BRBINF1_EL1 __SYS_BRBINFO(1)
> +#define SYS_BRBINF2_EL1 __SYS_BRBINFO(2)
> +#define SYS_BRBINF3_EL1 __SYS_BRBINFO(3)
> +#define SYS_BRBINF4_EL1 __SYS_BRBINFO(4)
> +#define SYS_BRBINF5_EL1 __SYS_BRBINFO(5)
> +#define SYS_BRBINF6_EL1 __SYS_BRBINFO(6)
> +#define SYS_BRBINF7_EL1 __SYS_BRBINFO(7)
> +#define SYS_BRBINF8_EL1 __SYS_BRBINFO(8)
> +#define SYS_BRBINF9_EL1 __SYS_BRBINFO(9)
> +#define SYS_BRBINF10_EL1 __SYS_BRBINFO(10)
> +#define SYS_BRBINF11_EL1 __SYS_BRBINFO(11)
> +#define SYS_BRBINF12_EL1 __SYS_BRBINFO(12)
> +#define SYS_BRBINF13_EL1 __SYS_BRBINFO(13)
> +#define SYS_BRBINF14_EL1 __SYS_BRBINFO(14)
> +#define SYS_BRBINF15_EL1 __SYS_BRBINFO(15)
> +#define SYS_BRBINF16_EL1 __SYS_BRBINFO(16)
> +#define SYS_BRBINF17_EL1 __SYS_BRBINFO(17)
> +#define SYS_BRBINF18_EL1 __SYS_BRBINFO(18)
> +#define SYS_BRBINF19_EL1 __SYS_BRBINFO(19)
> +#define SYS_BRBINF20_EL1 __SYS_BRBINFO(20)
> +#define SYS_BRBINF21_EL1 __SYS_BRBINFO(21)
> +#define SYS_BRBINF22_EL1 __SYS_BRBINFO(22)
> +#define SYS_BRBINF23_EL1 __SYS_BRBINFO(23)
> +#define SYS_BRBINF24_EL1 __SYS_BRBINFO(24)
> +#define SYS_BRBINF25_EL1 __SYS_BRBINFO(25)
> +#define SYS_BRBINF26_EL1 __SYS_BRBINFO(26)
> +#define SYS_BRBINF27_EL1 __SYS_BRBINFO(27)
> +#define SYS_BRBINF28_EL1 __SYS_BRBINFO(28)
> +#define SYS_BRBINF29_EL1 __SYS_BRBINFO(29)
> +#define SYS_BRBINF30_EL1 __SYS_BRBINFO(30)
> +#define SYS_BRBINF31_EL1 __SYS_BRBINFO(31)
> +
> +#define SYS_BRBSRC0_EL1 __SYS_BRBSRC(0)
> +#define SYS_BRBSRC1_EL1 __SYS_BRBSRC(1)
> +#define SYS_BRBSRC2_EL1 __SYS_BRBSRC(2)
> +#define SYS_BRBSRC3_EL1 __SYS_BRBSRC(3)
> +#define SYS_BRBSRC4_EL1 __SYS_BRBSRC(4)
> +#define SYS_BRBSRC5_EL1 __SYS_BRBSRC(5)
> +#define SYS_BRBSRC6_EL1 __SYS_BRBSRC(6)
> +#define SYS_BRBSRC7_EL1 __SYS_BRBSRC(7)
> +#define SYS_BRBSRC8_EL1 __SYS_BRBSRC(8)
> +#define SYS_BRBSRC9_EL1 __SYS_BRBSRC(9)
> +#define SYS_BRBSRC10_EL1 __SYS_BRBSRC(10)
> +#define SYS_BRBSRC11_EL1 __SYS_BRBSRC(11)
> +#define SYS_BRBSRC12_EL1 __SYS_BRBSRC(12)
> +#define SYS_BRBSRC13_EL1 __SYS_BRBSRC(13)
> +#define SYS_BRBSRC14_EL1 __SYS_BRBSRC(14)
> +#define SYS_BRBSRC15_EL1 __SYS_BRBSRC(15)
> +#define SYS_BRBSRC16_EL1 __SYS_BRBSRC(16)
> +#define SYS_BRBSRC17_EL1 __SYS_BRBSRC(17)
> +#define SYS_BRBSRC18_EL1 __SYS_BRBSRC(18)
> +#define SYS_BRBSRC19_EL1 __SYS_BRBSRC(19)
> +#define SYS_BRBSRC20_EL1 __SYS_BRBSRC(20)
> +#define SYS_BRBSRC21_EL1 __SYS_BRBSRC(21)
> +#define SYS_BRBSRC22_EL1 __SYS_BRBSRC(22)
> +#define SYS_BRBSRC23_EL1 __SYS_BRBSRC(23)
> +#define SYS_BRBSRC24_EL1 __SYS_BRBSRC(24)
> +#define SYS_BRBSRC25_EL1 __SYS_BRBSRC(25)
> +#define SYS_BRBSRC26_EL1 __SYS_BRBSRC(26)
> +#define SYS_BRBSRC27_EL1 __SYS_BRBSRC(27)
> +#define SYS_BRBSRC28_EL1 __SYS_BRBSRC(28)
> +#define SYS_BRBSRC29_EL1 __SYS_BRBSRC(29)
> +#define SYS_BRBSRC30_EL1 __SYS_BRBSRC(30)
> +#define SYS_BRBSRC31_EL1 __SYS_BRBSRC(31)
> +
> +#define SYS_BRBTGT0_EL1 __SYS_BRBTGT(0)
> +#define SYS_BRBTGT1_EL1 __SYS_BRBTGT(1)
> +#define SYS_BRBTGT2_EL1 __SYS_BRBTGT(2)
> +#define SYS_BRBTGT3_EL1 __SYS_BRBTGT(3)
> +#define SYS_BRBTGT4_EL1 __SYS_BRBTGT(4)
> +#define SYS_BRBTGT5_EL1 __SYS_BRBTGT(5)
> +#define SYS_BRBTGT6_EL1 __SYS_BRBTGT(6)
> +#define SYS_BRBTGT7_EL1 __SYS_BRBTGT(7)
> +#define SYS_BRBTGT8_EL1 __SYS_BRBTGT(8)
> +#define SYS_BRBTGT9_EL1 __SYS_BRBTGT(9)
> +#define SYS_BRBTGT10_EL1 __SYS_BRBTGT(10)
> +#define SYS_BRBTGT11_EL1 __SYS_BRBTGT(11)
> +#define SYS_BRBTGT12_EL1 __SYS_BRBTGT(12)
> +#define SYS_BRBTGT13_EL1 __SYS_BRBTGT(13)
> +#define SYS_BRBTGT14_EL1 __SYS_BRBTGT(14)
> +#define SYS_BRBTGT15_EL1 __SYS_BRBTGT(15)
> +#define SYS_BRBTGT16_EL1 __SYS_BRBTGT(16)
> +#define SYS_BRBTGT17_EL1 __SYS_BRBTGT(17)
> +#define SYS_BRBTGT18_EL1 __SYS_BRBTGT(18)
> +#define SYS_BRBTGT19_EL1 __SYS_BRBTGT(19)
> +#define SYS_BRBTGT20_EL1 __SYS_BRBTGT(20)
> +#define SYS_BRBTGT21_EL1 __SYS_BRBTGT(21)
> +#define SYS_BRBTGT22_EL1 __SYS_BRBTGT(22)
> +#define SYS_BRBTGT23_EL1 __SYS_BRBTGT(23)
> +#define SYS_BRBTGT24_EL1 __SYS_BRBTGT(24)
> +#define SYS_BRBTGT25_EL1 __SYS_BRBTGT(25)
> +#define SYS_BRBTGT26_EL1 __SYS_BRBTGT(26)
> +#define SYS_BRBTGT27_EL1 __SYS_BRBTGT(27)
> +#define SYS_BRBTGT28_EL1 __SYS_BRBTGT(28)
> +#define SYS_BRBTGT29_EL1 __SYS_BRBTGT(29)
> +#define SYS_BRBTGT30_EL1 __SYS_BRBTGT(30)
> +#define SYS_BRBTGT31_EL1 __SYS_BRBTGT(31)
> +
> #define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0)
> #define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5)
> #define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6)
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index dd5a9c7e310f..d74d9dbe18a7 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -924,6 +924,165 @@ UnsignedEnum 3:0 BT
> EndEnum
> EndSysreg
>
> +
> +SysregFields BRBINFx_EL1
> +Res0 63:47
> +Field 46 CCU
> +Field 45:32 CC
> +Res0 31:18
> +Field 17 LASTFAILED
> +Field 16 T
> +Res0 15:14
> +Enum 13:8 TYPE

Hi Anshuman,

I hit a problem when building a kernel based on 6.3-rc1. Here is
the error log:
    GEN     Makefile
    GEN     arch/arm64/include/generated/asm/sysreg-defs.h
Error at 936: unexpected Enum (inside SysregFields)

I think this is because 'SysregFields' doesn't support the 'Enum'
region type.
The problem goes away when I roll this part back to v7.

Do I need to apply some extra patches or change some configuration?

Thanks,
Yang

> + 0b000000 UNCOND_DIR
> + 0b000001 INDIR
> + 0b000010 DIR_LINK
> + 0b000011 INDIR_LINK
> + 0b000101 RET_SUB
> + 0b000111 RET_EXCPT
> + 0b001000 COND_DIR
> + 0b100001 DEBUG_HALT
> + 0b100010 CALL
> + 0b100011 TRAP
> + 0b100100 SERROR
> + 0b100110 INST_DEBUG
> + 0b100111 DATA_DEBUG
> + 0b101010 ALGN_FAULT
> + 0b101011 INST_FAULT
> + 0b101100 DATA_FAULT
> + 0b101110 IRQ
> + 0b101111 FIQ
> + 0b111001 DEBUG_EXIT
> +EndEnum
> +Enum 7:6 EL
> + 0b00 EL0
> + 0b01 EL1
> + 0b10 EL2
> + 0b11 EL3
> +EndEnum
> +Field 5 MPRED
> +Res0 4:2
> +Enum 1:0 VALID
> + 0b00 NONE
> + 0b01 TARGET
> + 0b10 SOURCE
> + 0b11 FULL
> +EndEnum
> +EndSysregFields
> +
> +Sysreg BRBCR_EL1 2 1 9 0 0
> +Res0 63:24
> +Field 23 EXCEPTION
> +Field 22 ERTN
> +Res0 21:9
> +Field 8 FZP
> +Res0 7
> +Enum 6:5 TS
> + 0b01 VIRTUAL
> + 0b10 GST_PHYSICAL
> + 0b11 PHYSICAL
> +EndEnum
> +Field 4 MPRED
> +Field 3 CC
> +Res0 2
> +Field 1 E1BRE
> +Field 0 E0BRE
> +EndSysreg
> +
> +Sysreg BRBFCR_EL1 2 1 9 0 1
> +Res0 63:30
> +Enum 29:28 BANK
> + 0b00 FIRST
> + 0b01 SECOND
> +EndEnum
> +Res0 27:23
> +Field 22 CONDDIR
> +Field 21 DIRCALL
> +Field 20 INDCALL
> +Field 19 RTN
> +Field 18 INDIRECT
> +Field 17 DIRECT
> +Field 16 EnI
> +Res0 15:8
> +Field 7 PAUSED
> +Field 6 LASTFAILED
> +Res0 5:0
> +EndSysreg
> +
> +Sysreg BRBTS_EL1 2 1 9 0 2
> +Field 63:0 TS
> +EndSysreg
> +
> +Sysreg BRBINFINJ_EL1 2 1 9 1 0
> +Res0 63:47
> +Field 46 CCU
> +Field 45:32 CC
> +Res0 31:18
> +Field 17 LASTFAILED
> +Field 16 T
> +Res0 15:14
> +Enum 13:8 TYPE
> + 0b000000 UNCOND_DIR
> + 0b000001 INDIR
> + 0b000010 DIR_LINK
> + 0b000011 INDIR_LINK
> + 0b000101 RET_SUB
> + 0b000111 RET_EXCPT
> + 0b001000 COND_DIR
> + 0b100001 DEBUG_HALT
> + 0b100010 CALL
> + 0b100011 TRAP
> + 0b100100 SERROR
> + 0b100110 INST_DEBUG
> + 0b100111 DATA_DEBUG
> + 0b101010 ALGN_FAULT
> + 0b101011 INST_FAULT
> + 0b101100 DATA_FAULT
> + 0b101110 IRQ
> + 0b101111 FIQ
> + 0b111001 DEBUG_EXIT
> +EndEnum
> +Enum 7:6 EL
> + 0b00 EL0
> + 0b01 EL1
> + 0b10 EL2
> + 0b11 EL3
> +EndEnum
> +Field 5 MPRED
> +Res0 4:2
> +Enum 1:0 VALID
> + 0b00 NONE
> + 0b01 TARGET
> + 0b10 SOURCE
> + 0b11 FULL
> +EndEnum
> +EndSysreg
> +
> +Sysreg BRBSRCINJ_EL1 2 1 9 1 1
> +Field 63:0 ADDRESS
> +EndSysreg
> +
> +Sysreg BRBTGTINJ_EL1 2 1 9 1 2
> +Field 63:0 ADDRESS
> +EndSysreg
> +
> +Sysreg BRBIDR0_EL1 2 1 9 2 0
> +Res0 63:16
> +Enum 15:12 CC
> + 0b0101 20_BIT
> +EndEnum
> +Enum 11:8 FORMAT
> + 0b0000 0
> +EndEnum
> +Enum 7:0 NUMREC
> + 0b00001000 8
> + 0b00010000 16
> + 0b00100000 32
> + 0b01000000 64
> +EndEnum
> +EndSysreg
> +
> Sysreg ID_AA64ZFR0_EL1 3 0 0 4 4
> Res0 63:60
> UnsignedEnum 59:56 F64MM

2023-05-15 06:51:02

by Anshuman Khandual

[permalink] [raw]
Subject: Re: [PATCH V9 02/10] arm64/perf: Add BRBE registers and fields



On 4/12/23 14:02, Yang Shen wrote:
>
>
> 在 2023/3/15 13:14, Anshuman Khandual 写道:
>> This adds BRBE related register definitions and various other related field
>> macros there in. These will be used subsequently in a BRBE driver which is
>> being added later on.
>>
>> Cc: Catalin Marinas <[email protected]>
>> Cc: Will Deacon <[email protected]>
>> Cc: Marc Zyngier <[email protected]>
>> Cc: Mark Rutland <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Reviewed-by: Mark Brown <[email protected]>
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>>   arch/arm64/include/asm/sysreg.h | 103 +++++++++++++++++++++
>>   arch/arm64/tools/sysreg         | 159 ++++++++++++++++++++++++++++++++
>>   2 files changed, 262 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
>> index 9e3ecba3c4e6..b3bc03ee22bd 100644
>> --- a/arch/arm64/include/asm/sysreg.h
>> +++ b/arch/arm64/include/asm/sysreg.h
>> @@ -165,6 +165,109 @@
>>   #define SYS_DBGDTRTX_EL0        sys_reg(2, 3, 0, 5, 0)
>>   #define SYS_DBGVCR32_EL2        sys_reg(2, 4, 0, 7, 0)
>> +#define __SYS_BRBINFO(n)        sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 0))
>> +#define __SYS_BRBSRC(n)            sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 1))
>> +#define __SYS_BRBTGT(n)            sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 2))
>> +
>> +#define SYS_BRBINF0_EL1            __SYS_BRBINFO(0)
>> +#define SYS_BRBINF1_EL1            __SYS_BRBINFO(1)
>> +#define SYS_BRBINF2_EL1            __SYS_BRBINFO(2)
>> +#define SYS_BRBINF3_EL1            __SYS_BRBINFO(3)
>> +#define SYS_BRBINF4_EL1            __SYS_BRBINFO(4)
>> +#define SYS_BRBINF5_EL1            __SYS_BRBINFO(5)
>> +#define SYS_BRBINF6_EL1            __SYS_BRBINFO(6)
>> +#define SYS_BRBINF7_EL1            __SYS_BRBINFO(7)
>> +#define SYS_BRBINF8_EL1            __SYS_BRBINFO(8)
>> +#define SYS_BRBINF9_EL1            __SYS_BRBINFO(9)
>> +#define SYS_BRBINF10_EL1        __SYS_BRBINFO(10)
>> +#define SYS_BRBINF11_EL1        __SYS_BRBINFO(11)
>> +#define SYS_BRBINF12_EL1        __SYS_BRBINFO(12)
>> +#define SYS_BRBINF13_EL1        __SYS_BRBINFO(13)
>> +#define SYS_BRBINF14_EL1        __SYS_BRBINFO(14)
>> +#define SYS_BRBINF15_EL1        __SYS_BRBINFO(15)
>> +#define SYS_BRBINF16_EL1        __SYS_BRBINFO(16)
>> +#define SYS_BRBINF17_EL1        __SYS_BRBINFO(17)
>> +#define SYS_BRBINF18_EL1        __SYS_BRBINFO(18)
>> +#define SYS_BRBINF19_EL1        __SYS_BRBINFO(19)
>> +#define SYS_BRBINF20_EL1        __SYS_BRBINFO(20)
>> +#define SYS_BRBINF21_EL1        __SYS_BRBINFO(21)
>> +#define SYS_BRBINF22_EL1        __SYS_BRBINFO(22)
>> +#define SYS_BRBINF23_EL1        __SYS_BRBINFO(23)
>> +#define SYS_BRBINF24_EL1        __SYS_BRBINFO(24)
>> +#define SYS_BRBINF25_EL1        __SYS_BRBINFO(25)
>> +#define SYS_BRBINF26_EL1        __SYS_BRBINFO(26)
>> +#define SYS_BRBINF27_EL1        __SYS_BRBINFO(27)
>> +#define SYS_BRBINF28_EL1        __SYS_BRBINFO(28)
>> +#define SYS_BRBINF29_EL1        __SYS_BRBINFO(29)
>> +#define SYS_BRBINF30_EL1        __SYS_BRBINFO(30)
>> +#define SYS_BRBINF31_EL1        __SYS_BRBINFO(31)
>> +
>> +#define SYS_BRBSRC0_EL1            __SYS_BRBSRC(0)
>> +#define SYS_BRBSRC1_EL1            __SYS_BRBSRC(1)
>> +#define SYS_BRBSRC2_EL1            __SYS_BRBSRC(2)
>> +#define SYS_BRBSRC3_EL1            __SYS_BRBSRC(3)
>> +#define SYS_BRBSRC4_EL1            __SYS_BRBSRC(4)
>> +#define SYS_BRBSRC5_EL1            __SYS_BRBSRC(5)
>> +#define SYS_BRBSRC6_EL1            __SYS_BRBSRC(6)
>> +#define SYS_BRBSRC7_EL1            __SYS_BRBSRC(7)
>> +#define SYS_BRBSRC8_EL1            __SYS_BRBSRC(8)
>> +#define SYS_BRBSRC9_EL1            __SYS_BRBSRC(9)
>> +#define SYS_BRBSRC10_EL1        __SYS_BRBSRC(10)
>> +#define SYS_BRBSRC11_EL1        __SYS_BRBSRC(11)
>> +#define SYS_BRBSRC12_EL1        __SYS_BRBSRC(12)
>> +#define SYS_BRBSRC13_EL1        __SYS_BRBSRC(13)
>> +#define SYS_BRBSRC14_EL1        __SYS_BRBSRC(14)
>> +#define SYS_BRBSRC15_EL1        __SYS_BRBSRC(15)
>> +#define SYS_BRBSRC16_EL1        __SYS_BRBSRC(16)
>> +#define SYS_BRBSRC17_EL1        __SYS_BRBSRC(17)
>> +#define SYS_BRBSRC18_EL1        __SYS_BRBSRC(18)
>> +#define SYS_BRBSRC19_EL1        __SYS_BRBSRC(19)
>> +#define SYS_BRBSRC20_EL1        __SYS_BRBSRC(20)
>> +#define SYS_BRBSRC21_EL1        __SYS_BRBSRC(21)
>> +#define SYS_BRBSRC22_EL1        __SYS_BRBSRC(22)
>> +#define SYS_BRBSRC23_EL1        __SYS_BRBSRC(23)
>> +#define SYS_BRBSRC24_EL1        __SYS_BRBSRC(24)
>> +#define SYS_BRBSRC25_EL1        __SYS_BRBSRC(25)
>> +#define SYS_BRBSRC26_EL1        __SYS_BRBSRC(26)
>> +#define SYS_BRBSRC27_EL1        __SYS_BRBSRC(27)
>> +#define SYS_BRBSRC28_EL1        __SYS_BRBSRC(28)
>> +#define SYS_BRBSRC29_EL1        __SYS_BRBSRC(29)
>> +#define SYS_BRBSRC30_EL1        __SYS_BRBSRC(30)
>> +#define SYS_BRBSRC31_EL1        __SYS_BRBSRC(31)
>> +
>> +#define SYS_BRBTGT0_EL1            __SYS_BRBTGT(0)
>> +#define SYS_BRBTGT1_EL1            __SYS_BRBTGT(1)
>> +#define SYS_BRBTGT2_EL1            __SYS_BRBTGT(2)
>> +#define SYS_BRBTGT3_EL1            __SYS_BRBTGT(3)
>> +#define SYS_BRBTGT4_EL1            __SYS_BRBTGT(4)
>> +#define SYS_BRBTGT5_EL1            __SYS_BRBTGT(5)
>> +#define SYS_BRBTGT6_EL1            __SYS_BRBTGT(6)
>> +#define SYS_BRBTGT7_EL1            __SYS_BRBTGT(7)
>> +#define SYS_BRBTGT8_EL1            __SYS_BRBTGT(8)
>> +#define SYS_BRBTGT9_EL1            __SYS_BRBTGT(9)
>> +#define SYS_BRBTGT10_EL1        __SYS_BRBTGT(10)
>> +#define SYS_BRBTGT11_EL1        __SYS_BRBTGT(11)
>> +#define SYS_BRBTGT12_EL1        __SYS_BRBTGT(12)
>> +#define SYS_BRBTGT13_EL1        __SYS_BRBTGT(13)
>> +#define SYS_BRBTGT14_EL1        __SYS_BRBTGT(14)
>> +#define SYS_BRBTGT15_EL1        __SYS_BRBTGT(15)
>> +#define SYS_BRBTGT16_EL1        __SYS_BRBTGT(16)
>> +#define SYS_BRBTGT17_EL1        __SYS_BRBTGT(17)
>> +#define SYS_BRBTGT18_EL1        __SYS_BRBTGT(18)
>> +#define SYS_BRBTGT19_EL1        __SYS_BRBTGT(19)
>> +#define SYS_BRBTGT20_EL1        __SYS_BRBTGT(20)
>> +#define SYS_BRBTGT21_EL1        __SYS_BRBTGT(21)
>> +#define SYS_BRBTGT22_EL1        __SYS_BRBTGT(22)
>> +#define SYS_BRBTGT23_EL1        __SYS_BRBTGT(23)
>> +#define SYS_BRBTGT24_EL1        __SYS_BRBTGT(24)
>> +#define SYS_BRBTGT25_EL1        __SYS_BRBTGT(25)
>> +#define SYS_BRBTGT26_EL1        __SYS_BRBTGT(26)
>> +#define SYS_BRBTGT27_EL1        __SYS_BRBTGT(27)
>> +#define SYS_BRBTGT28_EL1        __SYS_BRBTGT(28)
>> +#define SYS_BRBTGT29_EL1        __SYS_BRBTGT(29)
>> +#define SYS_BRBTGT30_EL1        __SYS_BRBTGT(30)
>> +#define SYS_BRBTGT31_EL1        __SYS_BRBTGT(31)
>> +
>>   #define SYS_MIDR_EL1            sys_reg(3, 0, 0, 0, 0)
>>   #define SYS_MPIDR_EL1            sys_reg(3, 0, 0, 0, 5)
>>   #define SYS_REVIDR_EL1            sys_reg(3, 0, 0, 0, 6)
>> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
>> index dd5a9c7e310f..d74d9dbe18a7 100644
>> --- a/arch/arm64/tools/sysreg
>> +++ b/arch/arm64/tools/sysreg
>> @@ -924,6 +924,165 @@ UnsignedEnum    3:0    BT
>>   EndEnum
>>   EndSysreg
>>   +
>> +SysregFields BRBINFx_EL1
>> +Res0    63:47
>> +Field    46    CCU
>> +Field    45:32    CC
>> +Res0    31:18
>> +Field    17    LASTFAILED
>> +Field    16    T
>> +Res0    15:14
>> +Enum    13:8        TYPE
>
> Hi Anshuman,
>
> I hit a problem when building a kernel based on 6.3-rc1. Here is the error log:
>     GEN     Makefile
>     GEN     arch/arm64/include/generated/asm/sysreg-defs.h
> Error at 936: unexpected Enum (inside SysregFields)
>
> I think this is because 'SysregFields' doesn't support the 'Enum' region type.
> The problem goes away when I roll this part back to v7.
>
> Do I need to apply some extra patches or change some configuration?

Yes, the following patch was required, but it has since been merged mainline.

https://lore.kernel.org/all/[email protected]/

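One detail worth flagging in the __SYS_BRB* helpers from this patch: C's '+' operator binds tighter than '>>', so an op2 expression written as `x >> 2 + 1` parses as `x >> (2 + 1)`. The shift therefore needs its own parentheses, otherwise the BRBSRC/BRBTGT accessors for records 0-15 collapse onto the wrong op2 values. A small demonstration (the helper names here are invented for illustration):

```c
#include <assert.h>

/*
 * op2 for BRB record n is intended to be ((n & 0x10) >> 2) + op,
 * where op is 0 for BRBINF, 1 for BRBSRC and 2 for BRBTGT.
 */
static int op2_unparenthesized(int n, int op)
{
	return (n & 0x10) >> 2 + op;	/* parses as (n & 0x10) >> (2 + op) */
}

static int op2_intended(int n, int op)
{
	return ((n & 0x10) >> 2) + op;
}
```

For op == 0 the two agree by accident, which is why the BRBINF accessors look fine while the unparenthesized BRBSRC/BRBTGT ones silently alias other registers.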
2023-05-15 06:58:06

by Anshuman Khandual

[permalink] [raw]
Subject: Re: [PATCH V9 00/10] arm64/perf: Enable branch stack sampling



On 4/11/23 18:33, Will Deacon wrote:
> Hi Anshuman
>
> On Wed, Mar 15, 2023 at 10:44:34AM +0530, Anshuman Khandual wrote:
>> This series enables perf branch stack sampling support on arm64 platform
>> via a new arch feature called Branch Record Buffer Extension (BRBE). All
>> relevant register definitions could be accessed here.
>>
>> https://developer.arm.com/documentation/ddi0601/2021-12/AArch64-Registers
>>
>> This series applies on 6.3-rc1 after applying the following patch from Mark
>> which allows enums in SysregFields blocks in sysreg tools.
>>
>> https://lore.kernel.org/all/[email protected]/
>
> As mentioned by Mark at:
>
> https://lore.kernel.org/r/ZB2sGrsbr58ttoWI@FVFF77S0Q05N
>
> this conflicts with supporting PMUv3 on AArch32. Please can you rebase onto
> for-next/perf, which will mean moving this driver back into drivers/perf/
> now?

Hi Will,

I am back from a long vacation, will go through the earlier discussions
on this and rework the series as required.

- Anshuman

2023-05-23 15:12:54

by James Clark

[permalink] [raw]
Subject: Re: [PATCH V9 10/10] arm64/perf: Implement branch records save on PMU IRQ



On 23/05/2023 15:39, James Clark wrote:
>
>
> On 15/03/2023 05:14, Anshuman Khandual wrote:
>> This modifies armv8pmu_branch_read() to concatenate live entries along with
>> task context stored entries and then process the resultant buffer to create
>> a perf branch entry array for perf_sample_data. It follows the same principle
>> as task sched-out.
>>
>> Cc: Catalin Marinas <[email protected]>
>> Cc: Will Deacon <[email protected]>
>> Cc: Mark Rutland <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>
> [...]
>
>> void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
>> {
>> struct brbe_hw_attr *brbe_attr = (struct brbe_hw_attr *)cpuc->percpu_pmu->private;
>> + struct arm64_perf_task_context *task_ctx = event->pmu_ctx->task_ctx_data;
>> + struct brbe_regset live[BRBE_MAX_ENTRIES];
>> + int nr_live, nr_store;
>> u64 brbfcr, brbcr;
>> - int idx, loop1_idx1, loop1_idx2, loop2_idx1, loop2_idx2, count;
>>
>> brbcr = read_sysreg_s(SYS_BRBCR_EL1);
>> brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
>> @@ -739,36 +743,13 @@ void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
>> write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
>> isb();
>>
>> - /* Determine the indices for each loop */
>> - loop1_idx1 = BRBE_BANK0_IDX_MIN;
>> - if (brbe_attr->brbe_nr <= BRBE_BANK_MAX_ENTRIES) {
>> - loop1_idx2 = brbe_attr->brbe_nr - 1;
>> - loop2_idx1 = BRBE_BANK1_IDX_MIN;
>> - loop2_idx2 = BRBE_BANK0_IDX_MAX;
>> - } else {
>> - loop1_idx2 = BRBE_BANK0_IDX_MAX;
>> - loop2_idx1 = BRBE_BANK1_IDX_MIN;
>> - loop2_idx2 = brbe_attr->brbe_nr - 1;
>> - }
>> -
>> - /* Loop through bank 0 */
>> - select_brbe_bank(BRBE_BANK_IDX_0);
>> - for (idx = 0, count = loop1_idx1; count <= loop1_idx2; idx++, count++) {
>> - if (!capture_branch_entry(cpuc, event, idx))
>> - goto skip_bank_1;
>> - }
>> -
>> - /* Loop through bank 1 */
>> - select_brbe_bank(BRBE_BANK_IDX_1);
>> - for (count = loop2_idx1; count <= loop2_idx2; idx++, count++) {
>> - if (!capture_branch_entry(cpuc, event, idx))
>> - break;
>> - }
>> -
>> -skip_bank_1:
>> - cpuc->branches->branch_stack.nr = idx;
>> - cpuc->branches->branch_stack.hw_idx = -1ULL;
>> + nr_live = capture_brbe_regset(brbe_attr, live);
>> + nr_store = task_ctx->nr_brbe_records;
>> + nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
>> + nr_live, brbe_attr->brbe_nr);
>> + process_branch_entries(cpuc, event, task_ctx->store, nr_store);
>
> Hi Anshuman,
>
> With the following command I get a crash:
>
> perf record --branch-filter any,save_type -a -- ls
>
> [ 101.171822] Unable to handle kernel NULL pointer dereference at
> virtual address 0000000000000600
> ...
> [145380.414654] Call trace:
> [145380.414739] armv8pmu_branch_read+0x7c/0x578
> [145380.414895] armv8pmu_handle_irq+0x104/0x1c0
> [145380.415043] armpmu_dispatch_irq+0x38/0x70
> [145380.415209] __handle_irq_event_percpu+0x124/0x3b8
> [145380.415392] handle_irq_event+0x54/0xc8
> [145380.415567] handle_fasteoi_irq+0x100/0x1e0
> [145380.415718] generic_handle_domain_irq+0x38/0x58
> [145380.415895] gic_handle_irq+0x5c/0x130
> [145380.416025] call_on_irq_stack+0x24/0x58
> [145380.416173] el1_interrupt+0x74/0xc0
> [145380.416321] el1h_64_irq_handler+0x18/0x28
> [145380.416475] el1h_64_irq+0x64/0x68
> [145380.416604] smp_call_function_single+0xe8/0x1f0
> [145380.416745] event_function_call+0xbc/0x1c8
> [145380.416919] _perf_event_enable+0x84/0xa0
> [145380.417069] perf_ioctl+0xe8/0xd68
> [145380.417204] __arm64_sys_ioctl+0x9c/0xe0
> [145380.417353] invoke_syscall+0x4c/0x120
> [145380.417523] el0_svc_common+0xd0/0x120
> [145380.417693] do_el0_svc+0x3c/0xb8
> [145380.417859] el0_svc+0x50/0xc0
> [145380.418004] el0t_64_sync_handler+0x84/0xf0
> [145380.418160] el0t_64_sync+0x190/0x198
>
> When using --branch-filter any,u without -a it seems to be fine so could
> be that task_ctx is null in per-cpu mode, or something to do with the
> userspace only flag?
>
> I'm also wondering if it's possible to collapse some of the last 5
> commits? They seem to mostly modify things in brbe.c which is a new file
> so the history probably isn't important at this point it just makes it a
> bit harder to review.
>

I realised I just tested V9 instead of V10, but I diffed them and don't
see anything that would change this issue, so it's probably present in both versions.

>> process_branch_aborts(cpuc);
>> + task_ctx->nr_brbe_records = 0;
>>
>> /* Unpause the buffer */
>> write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);

2023-05-23 15:14:07

by James Clark

[permalink] [raw]
Subject: Re: [PATCH V9 10/10] arm64/perf: Implement branch records save on PMU IRQ



On 15/03/2023 05:14, Anshuman Khandual wrote:
> This modifies armv8pmu_branch_read() to concatenate live entries along with
> task context stored entries and then process the resultant buffer to create
> a perf branch entry array for perf_sample_data. It follows the same principle
> as task sched-out.
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---

[...]

> void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
> {
> struct brbe_hw_attr *brbe_attr = (struct brbe_hw_attr *)cpuc->percpu_pmu->private;
> + struct arm64_perf_task_context *task_ctx = event->pmu_ctx->task_ctx_data;
> + struct brbe_regset live[BRBE_MAX_ENTRIES];
> + int nr_live, nr_store;
> u64 brbfcr, brbcr;
> - int idx, loop1_idx1, loop1_idx2, loop2_idx1, loop2_idx2, count;
>
> brbcr = read_sysreg_s(SYS_BRBCR_EL1);
> brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
> @@ -739,36 +743,13 @@ void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
> write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
> isb();
>
> - /* Determine the indices for each loop */
> - loop1_idx1 = BRBE_BANK0_IDX_MIN;
> - if (brbe_attr->brbe_nr <= BRBE_BANK_MAX_ENTRIES) {
> - loop1_idx2 = brbe_attr->brbe_nr - 1;
> - loop2_idx1 = BRBE_BANK1_IDX_MIN;
> - loop2_idx2 = BRBE_BANK0_IDX_MAX;
> - } else {
> - loop1_idx2 = BRBE_BANK0_IDX_MAX;
> - loop2_idx1 = BRBE_BANK1_IDX_MIN;
> - loop2_idx2 = brbe_attr->brbe_nr - 1;
> - }
> -
> - /* Loop through bank 0 */
> - select_brbe_bank(BRBE_BANK_IDX_0);
> - for (idx = 0, count = loop1_idx1; count <= loop1_idx2; idx++, count++) {
> - if (!capture_branch_entry(cpuc, event, idx))
> - goto skip_bank_1;
> - }
> -
> - /* Loop through bank 1 */
> - select_brbe_bank(BRBE_BANK_IDX_1);
> - for (count = loop2_idx1; count <= loop2_idx2; idx++, count++) {
> - if (!capture_branch_entry(cpuc, event, idx))
> - break;
> - }
> -
> -skip_bank_1:
> - cpuc->branches->branch_stack.nr = idx;
> - cpuc->branches->branch_stack.hw_idx = -1ULL;
> + nr_live = capture_brbe_regset(brbe_attr, live);
> + nr_store = task_ctx->nr_brbe_records;
> + nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
> + nr_live, brbe_attr->brbe_nr);
> + process_branch_entries(cpuc, event, task_ctx->store, nr_store);

Hi Anshuman,

With the following command I get a crash:

perf record --branch-filter any,save_type -a -- ls

[ 101.171822] Unable to handle kernel NULL pointer dereference at
virtual address 0000000000000600
...
[145380.414654] Call trace:
[145380.414739] armv8pmu_branch_read+0x7c/0x578
[145380.414895] armv8pmu_handle_irq+0x104/0x1c0
[145380.415043] armpmu_dispatch_irq+0x38/0x70
[145380.415209] __handle_irq_event_percpu+0x124/0x3b8
[145380.415392] handle_irq_event+0x54/0xc8
[145380.415567] handle_fasteoi_irq+0x100/0x1e0
[145380.415718] generic_handle_domain_irq+0x38/0x58
[145380.415895] gic_handle_irq+0x5c/0x130
[145380.416025] call_on_irq_stack+0x24/0x58
[145380.416173] el1_interrupt+0x74/0xc0
[145380.416321] el1h_64_irq_handler+0x18/0x28
[145380.416475] el1h_64_irq+0x64/0x68
[145380.416604] smp_call_function_single+0xe8/0x1f0
[145380.416745] event_function_call+0xbc/0x1c8
[145380.416919] _perf_event_enable+0x84/0xa0
[145380.417069] perf_ioctl+0xe8/0xd68
[145380.417204] __arm64_sys_ioctl+0x9c/0xe0
[145380.417353] invoke_syscall+0x4c/0x120
[145380.417523] el0_svc_common+0xd0/0x120
[145380.417693] do_el0_svc+0x3c/0xb8
[145380.417859] el0_svc+0x50/0xc0
[145380.418004] el0t_64_sync_handler+0x84/0xf0
[145380.418160] el0t_64_sync+0x190/0x198

When using --branch-filter any,u without -a it seems to be fine so could
be that task_ctx is null in per-cpu mode, or something to do with the
userspace only flag?

I'm also wondering if it's possible to collapse some of the last 5
commits? They seem to mostly modify things in brbe.c which is a new file
so the history probably isn't important at this point it just makes it a
bit harder to review.

> process_branch_aborts(cpuc);
> + task_ctx->nr_brbe_records = 0;
>
> /* Unpause the buffer */
> write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);

2023-05-24 03:15:15

by Anshuman Khandual

[permalink] [raw]
Subject: Re: [PATCH V9 10/10] arm64/perf: Implement branch records save on PMU IRQ



On 5/23/23 20:09, James Clark wrote:
>
>
> On 15/03/2023 05:14, Anshuman Khandual wrote:
>> This modifies armv8pmu_branch_read() to concatenate live entries along with
>> task context stored entries and then process the resultant buffer to create
>> a perf branch entry array for perf_sample_data. It follows the same principle
>> as task sched-out.
>>
>> Cc: Catalin Marinas <[email protected]>
>> Cc: Will Deacon <[email protected]>
>> Cc: Mark Rutland <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>
> [...]
>
>> void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
>> {
>> struct brbe_hw_attr *brbe_attr = (struct brbe_hw_attr *)cpuc->percpu_pmu->private;
>> + struct arm64_perf_task_context *task_ctx = event->pmu_ctx->task_ctx_data;
>> + struct brbe_regset live[BRBE_MAX_ENTRIES];
>> + int nr_live, nr_store;
>> u64 brbfcr, brbcr;
>> - int idx, loop1_idx1, loop1_idx2, loop2_idx1, loop2_idx2, count;
>>
>> brbcr = read_sysreg_s(SYS_BRBCR_EL1);
>> brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
>> @@ -739,36 +743,13 @@ void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
>> write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
>> isb();
>>
>> - /* Determine the indices for each loop */
>> - loop1_idx1 = BRBE_BANK0_IDX_MIN;
>> - if (brbe_attr->brbe_nr <= BRBE_BANK_MAX_ENTRIES) {
>> - loop1_idx2 = brbe_attr->brbe_nr - 1;
>> - loop2_idx1 = BRBE_BANK1_IDX_MIN;
>> - loop2_idx2 = BRBE_BANK0_IDX_MAX;
>> - } else {
>> - loop1_idx2 = BRBE_BANK0_IDX_MAX;
>> - loop2_idx1 = BRBE_BANK1_IDX_MIN;
>> - loop2_idx2 = brbe_attr->brbe_nr - 1;
>> - }
>> -
>> - /* Loop through bank 0 */
>> - select_brbe_bank(BRBE_BANK_IDX_0);
>> - for (idx = 0, count = loop1_idx1; count <= loop1_idx2; idx++, count++) {
>> - if (!capture_branch_entry(cpuc, event, idx))
>> - goto skip_bank_1;
>> - }
>> -
>> - /* Loop through bank 1 */
>> - select_brbe_bank(BRBE_BANK_IDX_1);
>> - for (count = loop2_idx1; count <= loop2_idx2; idx++, count++) {
>> - if (!capture_branch_entry(cpuc, event, idx))
>> - break;
>> - }
>> -
>> -skip_bank_1:
>> - cpuc->branches->branch_stack.nr = idx;
>> - cpuc->branches->branch_stack.hw_idx = -1ULL;
>> + nr_live = capture_brbe_regset(brbe_attr, live);
>> + nr_store = task_ctx->nr_brbe_records;
>> + nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
>> + nr_live, brbe_attr->brbe_nr);
>> + process_branch_entries(cpuc, event, task_ctx->store, nr_store);
>
> Hi Anshuman,
>
> With the following command I get a crash:
>
> perf record --branch-filter any,save_type -a -- ls
>
> [ 101.171822] Unable to handle kernel NULL pointer dereference at
> virtual address 0000000000000600
> ...
> [145380.414654] Call trace:
> [145380.414739] armv8pmu_branch_read+0x7c/0x578
> [145380.414895] armv8pmu_handle_irq+0x104/0x1c0
> [145380.415043] armpmu_dispatch_irq+0x38/0x70
> [145380.415209] __handle_irq_event_percpu+0x124/0x3b8
> [145380.415392] handle_irq_event+0x54/0xc8
> [145380.415567] handle_fasteoi_irq+0x100/0x1e0
> [145380.415718] generic_handle_domain_irq+0x38/0x58
> [145380.415895] gic_handle_irq+0x5c/0x130
> [145380.416025] call_on_irq_stack+0x24/0x58
> [145380.416173] el1_interrupt+0x74/0xc0
> [145380.416321] el1h_64_irq_handler+0x18/0x28
> [145380.416475] el1h_64_irq+0x64/0x68
> [145380.416604] smp_call_function_single+0xe8/0x1f0
> [145380.416745] event_function_call+0xbc/0x1c8
> [145380.416919] _perf_event_enable+0x84/0xa0
> [145380.417069] perf_ioctl+0xe8/0xd68
> [145380.417204] __arm64_sys_ioctl+0x9c/0xe0
> [145380.417353] invoke_syscall+0x4c/0x120
> [145380.417523] el0_svc_common+0xd0/0x120
> [145380.417693] do_el0_svc+0x3c/0xb8
> [145380.417859] el0_svc+0x50/0xc0
> [145380.418004] el0t_64_sync_handler+0x84/0xf0
> [145380.418160] el0t_64_sync+0x190/0x198
>
> When using --branch-filter any,u without -a it seems to be fine so could
> be that task_ctx is null in per-cpu mode, or something to do with the
> userspace only flag?

Indeed, task_ctx is NULL in per-cpu mode, because armv8pmu_branch_read()
dereferenced event->pmu_ctx->task_ctx_data, which is never allocated for
per-cpu events. The following change fixes the problem.

diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
index 9e441141a2c3..c8fd581eacf9 100644
--- a/drivers/perf/arm_brbe.c
+++ b/drivers/perf/arm_brbe.c
@@ -744,12 +744,17 @@ void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
isb();

nr_live = capture_brbe_regset(brbe_attr, live);
- nr_store = task_ctx->nr_brbe_records;
- nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
- nr_live, brbe_attr->brbe_nr);
- process_branch_entries(cpuc, event, task_ctx->store, nr_store);
+ if (event->ctx->task) {
+ nr_store = task_ctx->nr_brbe_records;
+ nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
+ nr_live, brbe_attr->brbe_nr);
+ process_branch_entries(cpuc, event, task_ctx->store, nr_store);
+ task_ctx->nr_brbe_records = 0;
+ } else {
+ process_branch_entries(cpuc, event, live, nr_live);
+ }
+
process_branch_aborts(cpuc);
- task_ctx->nr_brbe_records = 0;

/* Unpause the buffer */
write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);


>
> I'm also wondering if it's possible to collapse some of the last 5
> commits? They seem to mostly modify things in brbe.c which is a new file
> so the history probably isn't important at this point it just makes it a
> bit harder to review.

[PATCH 6/10] enables base perf branch stack sampling on the arm64 platform via
BRBE, and the subsequent patches represent a logical progression up to the
point where the save-stitch mechanism is implemented for both the normal PMU
IRQ and the task switch callbacks.
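For intuition, the save-stitch step the series builds towards can be sketched as follows. This is an illustrative reconstruction from the discussion, not the series' code: the struct layout, the helper name, and the exact capping policy are assumptions, and the real stitch_stored_live_entries() may differ.

```c
#include <assert.h>
#include <string.h>

#define BRBE_MAX_ENTRIES 64

struct brbe_regset {
	unsigned long long brbsrc;
	unsigned long long brbtgt;
	unsigned long long brbinf;
};

/*
 * Live records captured at the PMU IRQ are the newest, so they go in
 * front; records saved at the last sched-out follow, and the total is
 * capped at the buffer depth nr_max (nr_live is assumed <= nr_max).
 * Returns the stitched count, with the result left in 'store'.
 */
static int stitch_sketch(struct brbe_regset *store, const struct brbe_regset *live,
			 int nr_store, int nr_live, int nr_max)
{
	int nr_keep = nr_store;

	if (nr_live + nr_keep > nr_max)
		nr_keep = nr_max - nr_live;
	if (nr_keep < 0)
		nr_keep = 0;

	/* Shift the older stored records out of the way, then copy in the live ones. */
	memmove(&store[nr_live], &store[0], nr_keep * sizeof(*store));
	memcpy(&store[0], &live[0], nr_live * sizeof(*live));
	return nr_live + nr_keep;
}
```

With that shape, the per-cpu fix above amounts to skipping the stitch entirely and handing the live records straight to process_branch_entries() when the event has no task context.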

>
>> process_branch_aborts(cpuc);
>> + task_ctx->nr_brbe_records = 0;
>>
>> /* Unpause the buffer */
>> write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);