Date: Fri, 14 Jun 2024 16:01:36 -0700
Message-Id: <20240614230146.3783221-29-irogers@google.com>
In-Reply-To: <20240614230146.3783221-1-irogers@google.com>
References: <20240614230146.3783221-1-irogers@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: [PATCH v1
28/37] perf vendor events: Add/update sapphirerapids events/metrics
From: Ian Rogers
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Kan Liang, Maxime Coquelin, Alexandre Torgue, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Cc: Weilin Wang, Caleb Biggers

Update events from v1.20 to v1.23. Update TMA metrics from v4.7 to v4.8.

Bring in the event updates
v1.23: https://github.com/intel/perfmon/commit/6ace93281c0f573b90d3f8f624486ad59dde1c93
v1.22: https://github.com/intel/perfmon/commit/356eba05c07c4d54ed5b92c1164ce00fab545636

The TMA 4.8 information was added in:
https://github.com/intel/perfmon/commit/59194d4d90ca50a3fcb2de0d82b9f6fc0c9a5736

Add counter information. The most recent RFC patch set using this
information:
https://lore.kernel.org/lkml/20240412210756.309828-1-weilin.wang@intel.com/

New events are: EXE_ACTIVITY.2_3_PORTS_UTIL, ICACHE_DATA.STALL_PERIODS, L2_TRANS.L2_WB, MEM_TRANS_RETIRED.LOAD_LATENCY_GT_1024, OFFCORE_REQUESTS.DEMAND_CODE_RD, OFFCORE_REQUESTS.DEMAND_RFO, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD, OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD, RS.EMPTY_RESOURCE, SW_PREFETCH_ACCESS.ANY, UOPS_ISSUED.CYCLES.
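The counter information takes the form of a new "Counter" field on each event below, giving the comma-separated list of general-purpose counter indices the event can be scheduled on. A minimal sketch (not part of this patch; the helper name and the trimmed sample entries are illustrative) of how a consumer might parse that field:

```python
import json

# Two entries trimmed from the cache.json hunks in this patch; only the
# fields visible in the diff are kept.
events = json.loads("""
[
    {"EventName": "L1D.HWPF_MISS", "EventCode": "0x51", "Counter": "0,1,2,3"},
    {"EventName": "LONGEST_LAT_CACHE.MISS", "EventCode": "0x2e",
     "Counter": "0,1,2,3,4,5,6,7"}
]
""")

def usable_counters(event):
    # "Counter" is a comma-separated string of counter indices; turn it
    # into a set so a scheduler sketch could intersect it with free counters.
    return {int(c) for c in event["Counter"].split(",")}

by_name = {e["EventName"]: usable_counters(e) for e in events}
```

Here `by_name["L1D.HWPF_MISS"]` is restricted to counters 0-3, while `LONGEST_LAT_CACHE.MISS` may use all eight general-purpose counters.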
Co-authored-by: Weilin Wang
Co-authored-by: Caleb Biggers
Signed-off-by: Ian Rogers
---
 tools/perf/pmu-events/arch/x86/mapfile.csv | 2 +-
 .../arch/x86/sapphirerapids/cache.json | 161 +-
 .../arch/x86/sapphirerapids/counter.json | 82 +
 .../x86/sapphirerapids/floating-point.json | 28 +
 .../arch/x86/sapphirerapids/frontend.json | 50 +
 .../arch/x86/sapphirerapids/memory.json | 50 +
 .../arch/x86/sapphirerapids/metricgroups.json | 13 +
 .../arch/x86/sapphirerapids/other.json | 48 +
 .../arch/x86/sapphirerapids/pipeline.json | 133 ++
 .../arch/x86/sapphirerapids/spr-metrics.json | 411 ++---
 .../arch/x86/sapphirerapids/uncore-cache.json | 1244 ++++++++++++++
 .../arch/x86/sapphirerapids/uncore-cxl.json | 110 ++
 .../sapphirerapids/uncore-interconnect.json | 1427 +++++++++++++++++
 .../arch/x86/sapphirerapids/uncore-io.json | 679 ++++++++
 .../x86/sapphirerapids/uncore-memory.json | 742 +++++++++
 .../arch/x86/sapphirerapids/uncore-power.json | 49 +
 .../x86/sapphirerapids/virtual-memory.json | 20 +
 17 files changed, 5001 insertions(+), 248 deletions(-)
 create mode 100644 tools/perf/pmu-events/arch/x86/sapphirerapids/counter.json

diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv
index 51765cc94a3b..fb83c9a1bc5d 100644
--- a/tools/perf/pmu-events/arch/x86/mapfile.csv
+++ b/tools/perf/pmu-events/arch/x86/mapfile.csv
@@ -26,7 +26,7 @@ GenuineIntel-6-1[AEF],v4,nehalemep,core
 GenuineIntel-6-2E,v4,nehalemex,core
 GenuineIntel-6-A7,v1.03,rocketlake,core
 GenuineIntel-6-2A,v19,sandybridge,core
-GenuineIntel-6-8F,v1.20,sapphirerapids,core
+GenuineIntel-6-8F,v1.23,sapphirerapids,core
 GenuineIntel-6-AF,v1.02,sierraforest,core
 GenuineIntel-6-(37|4A|4C|4D|5A),v15,silvermont,core
 GenuineIntel-6-(4E|5E|8E|9E|A5|A6),v58,skylake,core
diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/cache.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/cache.json
index b0447aad0dfc..eec7bf6ebd53 100644
---
a/tools/perf/pmu-events/arch/x86/sapphirerapids/cache.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/cache.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "L1D.HWPF_MISS", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "L1D.HWPF_MISS", "SampleAfterValue": "1000003", @@ -8,6 +9,7 @@ }, { "BriefDescription": "Counts the number of cache lines replaced in L1 data cache.", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "L1D.REPLACEMENT", "PublicDescription": "Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.", @@ -16,6 +18,7 @@ }, { "BriefDescription": "Number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability.", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "L1D_PEND_MISS.FB_FULL", "PublicDescription": "Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", @@ -24,6 +27,7 @@ }, { "BriefDescription": "Number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability.", + "Counter": "0,1,2,3", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0x48", @@ -34,6 +38,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event L1D_PEND_MISS.L2_STALLS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x48", "EventName": "L1D_PEND_MISS.L2_STALL", @@ -42,6 +47,7 @@ }, { "BriefDescription": "Number of cycles a demand request has waited due to L1D due to lack of L2 resources.", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "L1D_PEND_MISS.L2_STALLS", "PublicDescription": "Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources.
Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", @@ -50,6 +56,7 @@ }, { "BriefDescription": "Number of L1D misses that are outstanding", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "L1D_PEND_MISS.PENDING", "PublicDescription": "Counts number of L1D misses that are outstanding in each cycle, that is each cycle the number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.", @@ -58,6 +65,7 @@ }, { "BriefDescription": "Cycles with L1D load Misses outstanding.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x48", "EventName": "L1D_PEND_MISS.PENDING_CYCLES", @@ -67,6 +75,7 @@ }, { "BriefDescription": "L2 cache lines filling L2", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "L2_LINES_IN.ALL", "PublicDescription": "Counts the number of L2 cache lines filling the L2. Counting does not cover rejects.", @@ -74,14 +83,17 @@ "UMask": "0x1f" }, { - "BriefDescription": "L2_LINES_OUT.NON_SILENT", + "BriefDescription": "Modified cache lines that are evicted by L2 cache when triggered by an L2 cache fill.", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "L2_LINES_OUT.NON_SILENT", + "PublicDescription": "Counts the number of lines that are evicted by L2 cache when triggered by an L2 cache fill. Those lines are in Modified state.
Modified lines are written back to L3", "SampleAfterValue": "200003", "UMask": "0x2" }, { "BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache when triggered by an L2 cache fill.", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "L2_LINES_OUT.SILENT", "PublicDescription": "Counts the number of lines that are silently dropped by L2 cache when triggered by an L2 cache fill. These lines are typically in Shared or Exclusive state. A non-threaded event.", @@ -90,6 +102,7 @@ }, { "BriefDescription": "Cache lines that have been L2 hardware prefetched but not used by demand accesses", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "L2_LINES_OUT.USELESS_HWPF", "PublicDescription": "Counts the number of cache lines that have been prefetched by the L2 hardware prefetcher but not used by demand access when evicted from the L2 cache", @@ -98,6 +111,7 @@ }, { "BriefDescription": "All accesses to L2 cache [This event is alias to L2_RQSTS.REFERENCES]", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_REQUEST.ALL", "PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.REFERENCES]", @@ -106,6 +120,7 @@ }, { "BriefDescription": "Read requests with true-miss in L2 cache. [This event is alias to L2_RQSTS.MISS]", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_REQUEST.MISS", "PublicDescription": "Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses.
[This event is alias to L2_RQSTS.MISS]", @@ -114,6 +129,7 @@ }, { "BriefDescription": "L2 code requests", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_CODE_RD", "PublicDescription": "Counts the total number of L2 code requests.", @@ -122,6 +138,7 @@ }, { "BriefDescription": "Demand Data Read access L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD", "PublicDescription": "Counts Demand Data Read requests accessing the L2 cache. These requests may hit or miss L2 cache. True-miss exclude misses that were merged with ongoing L2 misses. An access is counted once.", @@ -130,6 +147,7 @@ }, { "BriefDescription": "Demand requests that miss L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_DEMAND_MISS", "PublicDescription": "Counts demand requests that miss L2 cache.", @@ -138,6 +156,7 @@ }, { "BriefDescription": "Demand requests to L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES", "PublicDescription": "Counts demand requests to L2 cache.", @@ -146,6 +165,7 @@ }, { "BriefDescription": "L2_RQSTS.ALL_HWPF", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_HWPF", "SampleAfterValue": "200003", @@ -153,6 +173,7 @@ }, { "BriefDescription": "RFO requests to L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_RFO", "PublicDescription": "Counts the total number of RFO (read for ownership) requests to L2 cache.
L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.", @@ -161,6 +182,7 @@ }, { "BriefDescription": "L2 cache hits when fetching instructions, code reads.", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.CODE_RD_HIT", "PublicDescription": "Counts L2 cache hits when fetching instructions, code reads.", @@ -169,6 +191,7 @@ }, { "BriefDescription": "L2 cache misses when fetching instructions", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.CODE_RD_MISS", "PublicDescription": "Counts L2 cache misses when fetching instructions.", @@ -177,6 +200,7 @@ }, { "BriefDescription": "Demand Data Read requests that hit L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT", "PublicDescription": "Counts the number of demand Data Read requests initiated by load instructions that hit L2 cache.", @@ -185,6 +209,7 @@ }, { "BriefDescription": "Demand Data Read miss L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS", "PublicDescription": "Counts demand Data Read requests with true-miss in the L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. An access is counted once.", @@ -193,6 +218,7 @@ }, { "BriefDescription": "L2_RQSTS.HWPF_MISS", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.HWPF_MISS", "SampleAfterValue": "200003", @@ -200,6 +226,7 @@ }, { "BriefDescription": "Read requests with true-miss in L2 cache. [This event is alias to L2_REQUEST.MISS]", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.MISS", "PublicDescription": "Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses.
[This event is alias to L2_REQUEST.MISS]", @@ -208,6 +235,7 @@ }, { "BriefDescription": "All accesses to L2 cache [This event is alias to L2_REQUEST.ALL]", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.REFERENCES", "PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.ALL]", @@ -216,6 +244,7 @@ }, { "BriefDescription": "RFO requests that hit L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.RFO_HIT", "PublicDescription": "Counts the RFO (Read-for-Ownership) requests that hit L2 cache.", @@ -224,6 +253,7 @@ }, { "BriefDescription": "RFO requests that miss L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.RFO_MISS", "PublicDescription": "Counts the RFO (Read-for-Ownership) requests that miss L2 cache.", @@ -232,6 +262,7 @@ }, { "BriefDescription": "SW prefetch requests that hit L2 cache.", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.SWPF_HIT", "PublicDescription": "Counts Software prefetch requests that hit the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.", @@ -240,14 +271,25 @@ }, { "BriefDescription": "SW prefetch requests that miss L2 cache.", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.SWPF_MISS", "PublicDescription": "Counts Software prefetch requests that miss the L2 cache.
Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.", "SampleAfterValue": "200003", "UMask": "0x28" }, + { + "BriefDescription": "L2 writebacks that access L2 cache", + "Counter": "0,1,2,3", + "EventCode": "0x23", + "EventName": "L2_TRANS.L2_WB", + "PublicDescription": "Counts L2 writebacks that access L2 cache.", + "SampleAfterValue": "200003", + "UMask": "0x40" + }, { "BriefDescription": "Core-originated cacheable requests that missed L3 (Except hardware prefetches to the L3)", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0x2e", "EventName": "LONGEST_LAT_CACHE.MISS", "PublicDescription": "Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.", @@ -256,6 +298,7 @@ }, { "BriefDescription": "Core-originated cacheable requests that refer to L3 (Except hardware prefetches to the L3)", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0x2e", "EventName": "LONGEST_LAT_CACHE.REFERENCE", "PublicDescription": "Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2.
It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.", @@ -264,6 +307,7 @@ }, { "BriefDescription": "Retired load instructions.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_INST_RETIRED.ALL_LOADS", @@ -274,6 +318,7 @@ }, { "BriefDescription": "Retired store instructions.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_INST_RETIRED.ALL_STORES", @@ -284,6 +329,7 @@ }, { "BriefDescription": "All retired memory instructions.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_INST_RETIRED.ANY", @@ -294,6 +340,7 @@ }, { "BriefDescription": "Retired load instructions with locked access.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_INST_RETIRED.LOCK_LOADS", @@ -304,6 +351,7 @@ }, { "BriefDescription": "Retired load instructions that split across a cacheline boundary.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_INST_RETIRED.SPLIT_LOADS", @@ -314,6 +362,7 @@ }, { "BriefDescription": "Retired store instructions that split across a cacheline boundary.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_INST_RETIRED.SPLIT_STORES", @@ -324,6 +373,7 @@ }, { "BriefDescription": "Retired load instructions that miss the STLB.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS", @@ -334,6 +384,7 @@ }, { "BriefDescription": "Retired store instructions that miss the STLB.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_INST_RETIRED.STLB_MISS_STORES", @@ -344,6 +395,7 @@ }, { "BriefDescription": "Completed demand load uops that miss the L1 d-cache.", + "Counter": "0,1,2,3", "EventCode": "0x43", "EventName": "MEM_LOAD_COMPLETED.L1_MISS_ANY", "PublicDescription": "Number of completed demand load requests that missed the L1 data cache
including shadow misses (FB hits, merge to an ongoing L1D miss)", @@ -352,6 +404,7 @@ }, { "BriefDescription": "Retired load instructions whose data sources were HitM responses from shared L3", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd2", "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD", @@ -362,6 +415,7 @@ }, { "BriefDescription": "Retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd2", "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS", @@ -372,6 +426,7 @@ }, { "BriefDescription": "Retired load instructions whose data sources were hits in L3 without snoops required", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd2", "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE", @@ -382,6 +437,7 @@ }, { "BriefDescription": "Retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd2", "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD", @@ -392,6 +448,7 @@ }, { "BriefDescription": "Retired load instructions which data sources missed L3 but serviced from local dram", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd3", "EventName": "MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM", @@ -402,6 +459,7 @@ }, { "BriefDescription": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd3", "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM", @@ -411,6 +469,7 @@ }, { "BriefDescription": "Retired load instructions whose data sources was forwarded from a remote cache", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd3", "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD", @@ -421,6 +480,7 @@ }, { "BriefDescription": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd3", "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM", @@ -430,6 +490,7 @@ }, {
"BriefDescription": "Retired load instructions with remote Intel(R) Optane(TM) DC persistent memory as the data source where the data request missed all caches.", + "Counter": "0,1,2,3", "EventCode": "0xd3", "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM", "PEBS": "1", @@ -439,6 +500,7 @@ }, { "BriefDescription": "Retired instructions with at least 1 uncacheable load or lock.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd4", "EventName": "MEM_LOAD_MISC_RETIRED.UC", @@ -449,6 +511,7 @@ }, { "BriefDescription": "Number of completed demand load requests that missed the L1, but hit the FB(fill buffer), because a preceding miss to the same cacheline initiated the line to be brought into L1, but data is not yet ready in L1.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd1", "EventName": "MEM_LOAD_RETIRED.FB_HIT", @@ -459,6 +522,7 @@ }, { "BriefDescription": "Retired load instructions with L1 cache hits as data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd1", "EventName": "MEM_LOAD_RETIRED.L1_HIT", @@ -469,6 +533,7 @@ }, { "BriefDescription": "Retired load instructions missed L1 cache as data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd1", "EventName": "MEM_LOAD_RETIRED.L1_MISS", @@ -479,6 +544,7 @@ }, { "BriefDescription": "Retired load instructions with L2 cache hits as data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd1", "EventName": "MEM_LOAD_RETIRED.L2_HIT", @@ -489,6 +555,7 @@ }, { "BriefDescription": "Retired load instructions missed L2 cache as data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd1", "EventName": "MEM_LOAD_RETIRED.L2_MISS", @@ -499,6 +566,7 @@ }, { "BriefDescription": "Retired load instructions with L3 cache hits as data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd1", "EventName": "MEM_LOAD_RETIRED.L3_HIT", @@ -509,6 +577,7 @@ }, { "BriefDescription": "Retired load instructions missed L3
cache as data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd1", "EventName": "MEM_LOAD_RETIRED.L3_MISS", @@ -519,6 +588,7 @@ }, { "BriefDescription": "Retired load instructions with local Intel(R) Optane(TM) DC persistent memory as the data source where the data request missed all caches.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xd1", "EventName": "MEM_LOAD_RETIRED.LOCAL_PMM", @@ -529,6 +599,7 @@ }, { "BriefDescription": "MEM_STORE_RETIRED.L2_HIT", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "MEM_STORE_RETIRED.L2_HIT", "SampleAfterValue": "200003", @@ -536,6 +607,7 @@ }, { "BriefDescription": "Retired memory uops for any access", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xe5", "EventName": "MEM_UOP_RETIRED.ANY", "PublicDescription": "Number of retired micro-operations (uops) for load or store memory accesses", @@ -544,6 +616,7 @@ }, { "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit in the L3 or were snooped from another core's caches on the same socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_CODE_RD.L3_HIT", "MSRIndex": "0x1a6,0x1a7", @@ -553,6 +626,7 @@ }, { "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that resulted in a snoop hit a modified line in another core's caches which forwarded the data.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -562,6 +636,7 @@ }, { "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_CODE_RD.SNC_CACHE.HITM", "MSRIndex": "0x1a6,0x1a7", @@ -571,6 +646,7 @@ }, {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_CODE_RD.SNC_CACHE.HIT_WITH_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -580,6 +656,7 @@ }, { "BriefDescription": "Counts demand data reads that hit in the L3 or were snooped from another core's caches on the same socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.L3_HIT", "MSRIndex": "0x1a6,0x1a7", @@ -589,6 +666,7 @@ }, { "BriefDescription": "Counts demand data reads that resulted in a snoop hit a modified line in another core's caches which forwarded the data.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -598,6 +676,7 @@ }, { "BriefDescription": "Counts demand data reads that resulted in a snoop that hit in another core, which did not forward the data.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -607,6 +686,7 @@ }, { "BriefDescription": "Counts demand data reads that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -616,6 +696,7 @@ }, { "BriefDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.REMOTE_CACHE.SNOOP_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -625,6 +706,7 @@ }, {
"BriefDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.REMOTE_CACHE.SNOOP_HIT_WITH_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -634,6 +716,7 @@ }, { "BriefDescription": "Counts demand data reads that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.SNC_CACHE.HITM", "MSRIndex": "0x1a6,0x1a7", @@ -643,6 +726,7 @@ }, { "BriefDescription": "Counts demand data reads that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.SNC_CACHE.HIT_WITH_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -652,6 +736,7 @@ }, { "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit in the L3 or were snooped from another core's caches on the same socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_RFO.L3_HIT", "MSRIndex": "0x1a6,0x1a7", @@ -661,6 +746,7 @@ }, { "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop hit a modified line in another core's caches which forwarded the data.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -670,6 +756,7 @@ }, { "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership
(PREFETCHW) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_RFO.SNC_CACHE.HITM", "MSRIndex": "0x1a6,0x1a7", @@ -679,6 +766,7 @@ }, { "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_RFO.SNC_CACHE.HIT_WITH_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -688,6 +776,7 @@ }, { "BriefDescription": "Counts hardware prefetches to the L3 only that hit in the L3 or were snooped from another core's caches on the same socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.HWPF_L3.L3_HIT", "MSRIndex": "0x1a6,0x1a7", @@ -697,6 +786,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches on the same socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.L3_HIT", "MSRIndex": "0x1a6,0x1a7", @@ -706,6 +796,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit a modified line in another core's caches which forwarded the data.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -715,6 +806,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core
caches (L1 or L2) that resulted in a snoop that hit in another core, which did not forward the data.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HIT_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -724,6 +816,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HIT_WITH_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -733,6 +826,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop was sent and data was returned (Modified or Not Modified).", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -742,6 +836,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -751,6 +846,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7", @@ -760,6 +856,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.SNC_CACHE.HITM", "MSRIndex": "0x1a6,0x1a7", @@ -769,6 +866,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.SNC_CACHE.HIT_WITH_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -778,6 +876,7 @@ }, { "BriefDescription": "Counts demand reads for ownership (RFO), hardware prefetch RFOs (which bring data to L2), and software prefetches for exclusive ownership (PREFETCHW) that hit to a (M)odified cacheline in the L3 or snoop filter.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.RFO_TO_CORE.L3_HIT_M", "MSRIndex": "0x1a6,0x1a7", @@ -787,6 +886,7 @@ }, { "BriefDescription": "Counts streaming stores that hit in the L3 or were snooped from another core's caches on the same socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.STREAMING_WR.L3_HIT", "MSRIndex": "0x1a6,0x1a7", @@ -796,6 +896,7 @@ }, { "BriefDescription": "OFFCORE_REQUESTS.ALL_REQUESTS", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "OFFCORE_REQUESTS.ALL_REQUESTS", "SampleAfterValue": "100003", @@ -803,22 +904,43 @@ }, { "BriefDescription": "Demand and prefetch data reads", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName":
"OFFCORE_REQUESTS.DATA_RD", "PublicDescription": "Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.", "SampleAfterValue": "100003", "UMask": "0x8" }, + { + "BriefDescription": "Cacheable and noncacheable code read requests", + "Counter": "0,1,2,3", + "EventCode": "0x21", + "EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD", + "PublicDescription": "Counts both cacheable and non-cacheable code read requests.", + "SampleAfterValue": "100003", + "UMask": "0x2" + }, { "BriefDescription": "Demand Data Read requests sent to uncore", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD", "PublicDescription": "Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.", "SampleAfterValue": "100003", "UMask": "0x1" }, + { + "BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM", + "Counter": "0,1,2,3", + "EventCode": "0x21", + "EventName": "OFFCORE_REQUESTS.DEMAND_RFO", + "PublicDescription": "Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.", + "SampleAfterValue": "100003", + "UMask": "0x4" + }, { "BriefDescription": "This event is deprecated.
Refer to new event OFFCORE_REQUESTS_OUTSTANDING.DATA_RD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x20", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD", @@ -827,14 +949,26 @@ }, { "BriefDescription": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x20", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD", "SampleAfterValue": "1000003", "UMask": "0x8" }, + { + "BriefDescription": "Cycles with offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore.", + "Counter": "0,1,2,3", + "CounterMask": "1", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD", + "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation).
See the corresponding Umask under OFFCORE_REQUESTS.", + "SampleAfterValue": "1000003", + "UMask": "0x2" + }, { "BriefDescription": "Cycles where at least 1 outstanding demand data read request is pending.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x20", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD", @@ -843,6 +977,7 @@ }, { "BriefDescription": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x20", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO", @@ -851,13 +986,24 @@ }, { "BriefDescription": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD", "SampleAfterValue": "1000003", "UMask": "0x8" }, + { + "BriefDescription": "Offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore, every cycle.", + "Counter": "0,1,2,3", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD", + "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.", + "SampleAfterValue": "1000003", + "UMask": "0x2" + }, { "BriefDescription": "For every cycle, increments by the number of outstanding demand data read requests pending.", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD", "PublicDescription": "For every cycle, increments by the number of outstanding demand data read requests pending.
Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.", @@ -866,14 +1012,24 @@ }, { "BriefDescription": "Counts bus locks, accounts for cache line split locks and UC locks.", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "SQ_MISC.BUS_LOCK", "PublicDescription": "Counts the more expensive bus lock needed to enforce cache coherency for certain memory accesses that need to be done atomically. Can be created by issuing an atomic instruction (via the LOCK prefix) which causes a cache line split or accesses uncacheable memory.", "SampleAfterValue": "100003", "UMask": "0x10" }, + { + "BriefDescription": "Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.", + "Counter": "0,1,2,3", + "EventCode": "0x40", + "EventName": "SW_PREFETCH_ACCESS.ANY", + "SampleAfterValue": "100003", + "UMask": "0xf" + }, { "BriefDescription": "Number of PREFETCHNTA instructions executed.", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "SW_PREFETCH_ACCESS.NTA", "PublicDescription": "Counts the number of PREFETCHNTA instructions executed.", @@ -882,6 +1038,7 @@ }, { "BriefDescription": "Number of PREFETCHW instructions executed.", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "SW_PREFETCH_ACCESS.PREFETCHW", "PublicDescription": "Counts the number of PREFETCHW instructions executed.", @@ -890,6 +1047,7 @@ }, { "BriefDescription": "Number of PREFETCHT0 instructions executed.", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "SW_PREFETCH_ACCESS.T0", "PublicDescription": "Counts the number of PREFETCHT0 instructions executed.", @@ -898,6 +1056,7 @@ }, { "BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "SW_PREFETCH_ACCESS.T1_T2", "PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2
instructions executed.", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/counter.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/counter.json new file mode 100644 index 000000000000..088d5954747c --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/counter.json @@ -0,0 +1,82 @@ +[ + { + "Unit": "core", + "CountersNumFixed": "4", + "CountersNumGeneric": "8" + }, + { + "Unit": "PCU", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "IRP", + "CountersNumFixed": "0", + "CountersNumGeneric": "2" + }, + { + "Unit": "M2PCIe", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "IIO", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "iMC", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "M2M", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "M3UPI", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "UPI", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "CHA", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "CXLCM", + "CountersNumFixed": "0", + "CountersNumGeneric": "8" + }, + { + "Unit": "CXLDP", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "MCHBM", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "M2HBM", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "UBOX", + "CountersNumFixed": "0", + "CountersNumGeneric": "2" + }, + { + "Unit": "MDF", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/floating-point.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/floating-point.json index 1bdefaf96287..bc475e163227 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/floating-point.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/floating-point.json @@ -1,6 +1,7
@@ [ { "BriefDescription": "ARITH.FPDIV_ACTIVE", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0xb0", "EventName": "ARITH.FPDIV_ACTIVE", @@ -9,6 +10,7 @@ }, { "BriefDescription": "Counts all microcode FP assists.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc1", "EventName": "ASSISTS.FP", "PublicDescription": "Counts all microcode Floating Point assists.", @@ -17,6 +19,7 @@ }, { "BriefDescription": "ASSISTS.SSE_AVX_MIX", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc1", "EventName": "ASSISTS.SSE_AVX_MIX", "SampleAfterValue": "1000003", @@ -24,6 +27,7 @@ }, { "BriefDescription": "FP_ARITH_DISPATCHED.PORT_0 [This event is alias to FP_ARITH_DISPATCHED.V0]", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb3", "EventName": "FP_ARITH_DISPATCHED.PORT_0", "SampleAfterValue": "2000003", @@ -31,6 +35,7 @@ }, { "BriefDescription": "FP_ARITH_DISPATCHED.PORT_1 [This event is alias to FP_ARITH_DISPATCHED.V1]", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb3", "EventName": "FP_ARITH_DISPATCHED.PORT_1", "SampleAfterValue": "2000003", @@ -38,6 +43,7 @@ }, { "BriefDescription": "FP_ARITH_DISPATCHED.PORT_5 [This event is alias to FP_ARITH_DISPATCHED.V2]", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb3", "EventName": "FP_ARITH_DISPATCHED.PORT_5", "SampleAfterValue": "2000003", @@ -45,6 +51,7 @@ }, { "BriefDescription": "FP_ARITH_DISPATCHED.V0 [This event is alias to FP_ARITH_DISPATCHED.PORT_0]", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb3", "EventName": "FP_ARITH_DISPATCHED.V0", "SampleAfterValue": "2000003", @@ -52,6 +59,7 @@ }, { "BriefDescription": "FP_ARITH_DISPATCHED.V1 [This event is alias to FP_ARITH_DISPATCHED.PORT_1]", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb3", "EventName": "FP_ARITH_DISPATCHED.V1", "SampleAfterValue": "2000003", @@ -59,6 +67,7 @@ }, { "BriefDescription": "FP_ARITH_DISPATCHED.V2 [This event is alias to FP_ARITH_DISPATCHED.PORT_5]", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb3",
"EventName": "FP_ARITH_DISPATCHED.V2", "SampleAfterValue": "2000003", @@ -66,6 +75,7 @@ }, { "BriefDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE", "PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -74,6 +84,7 @@ }, { "BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.
DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE", "PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -82,6 +93,7 @@ }, { "BriefDescription": "Counts number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE", "PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.
The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -90,6 +102,7 @@ }, { "BriefDescription": "Counts number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE", "PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -98,6 +111,7 @@ }, { "BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.4_FLOPS", "PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -106,6 +120,7 @@ }, { "BriefDescription": "Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.
The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -114,6 +129,7 @@ }, { "BriefDescription": "Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE", "PublicDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -122,6 +138,7 @@ }, { "BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB.
DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.8_FLOPS", "PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -130,6 +147,7 @@ }, { "BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar, double and single precision floating-point: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 RANGE SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.SCALAR", "PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.
The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -138,6 +156,7 @@ }, { "BriefDescription": "Counts number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE", "PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -146,6 +165,7 @@ }, { "BriefDescription": "Counts number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE", "PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation.
Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -154,6 +174,7 @@ }, { "BriefDescription": "Number of any Vector retired FP arithmetic instructions", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.VECTOR", "PublicDescription": "Number of any Vector retired FP arithmetic instructions. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", @@ -162,6 +183,7 @@ }, { "BriefDescription": "FP_ARITH_INST_RETIRED2.128B_PACKED_HALF", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xcf", "EventName": "FP_ARITH_INST_RETIRED2.128B_PACKED_HALF", "SampleAfterValue": "100003", @@ -169,6 +191,7 @@ }, { "BriefDescription": "FP_ARITH_INST_RETIRED2.256B_PACKED_HALF", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xcf", "EventName": "FP_ARITH_INST_RETIRED2.256B_PACKED_HALF", "SampleAfterValue": "100003", @@ -176,6 +199,7 @@ }, { "BriefDescription": "FP_ARITH_INST_RETIRED2.512B_PACKED_HALF", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xcf", "EventName": "FP_ARITH_INST_RETIRED2.512B_PACKED_HALF", "SampleAfterValue": "100003", @@ -183,6 +207,7 @@ }, { "BriefDescription": "FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xcf", "EventName": "FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF", "SampleAfterValue": "100003", @@ -190,6 +215,7 @@ }, { "BriefDescription": "Number of all Scalar Half-Precision FP arithmetic instructions(1) retired - regular and complex.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xcf", "EventName": "FP_ARITH_INST_RETIRED2.SCALAR", "PublicDescription": "FP_ARITH_INST_RETIRED2.SCALAR", @@ -198,6 +224,7 @@ }, { "BriefDescription": "FP_ARITH_INST_RETIRED2.SCALAR_HALF", + "Counter":
"0,1,2,3,4,5,6,7", "EventCode": "0xcf", "EventName": "FP_ARITH_INST_RETIRED2.SCALAR_HALF", "SampleAfterValue": "100003", @@ -205,6 +232,7 @@ }, { "BriefDescription": "Number of all Vector (also called packed) Hal= f-Precision FP arithmetic instructions(1) retired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xcf", "EventName": "FP_ARITH_INST_RETIRED2.VECTOR", "PublicDescription": "FP_ARITH_INST_RETIRED2.VECTOR", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/frontend.json b/= tools/perf/pmu-events/arch/x86/sapphirerapids/frontend.json index 93d99318a623..f6e3e40a3b20 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/frontend.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/frontend.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Clears due to Unknown Branches.", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "BACLEARS.ANY", "PublicDescription": "Number of times the front-end is resteered w= hen it finds a branch instruction in a fetch line. This is called Unknown B= ranch which occurs for the first time a branch instruction is fetched or wh= en the branch is not tracked by the BPU (Branch Prediction Unit) anymore.", @@ -9,6 +10,7 @@ }, { "BriefDescription": "Stalls caused by changing prefix length of th= e instruction.", + "Counter": "0,1,2,3", "EventCode": "0x87", "EventName": "DECODE.LCP", "PublicDescription": "Counts cycles that the Instruction Length de= coder (ILD) stalls occurred due to dynamically changing prefix length of th= e decoded instruction (by operand size prefix instruction 0x66, address siz= e prefix instruction 0x67 or REX.W for Intel64). Count is proportional to t= he number of prefixes in a 16B-line. 
This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk.", @@ -17,6 +19,7 @@ }, { "BriefDescription": "Cycles the Microcode Sequencer is busy.", + "Counter": "0,1,2,3", "EventCode": "0x87", "EventName": "DECODE.MS_BUSY", "SampleAfterValue": "500009", @@ -24,6 +27,7 @@ }, { "BriefDescription": "DSB-to-MITE switch true penalty cycles.", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES", "PublicDescription": "Decode Stream Buffer (DSB) is a Uop-cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.", @@ -32,6 +36,7 @@ }, { "BriefDescription": "Retired Instructions who experienced DSB miss.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.ANY_DSB_MISS", "MSRIndex": "0x3F7", @@ -43,6 +48,7 @@ }, { "BriefDescription": "Retired Instructions who experienced a critical DSB miss.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.DSB_MISS", "MSRIndex": "0x3F7", @@ -54,6 +60,7 @@ }, { "BriefDescription": "Retired Instructions who experienced iTLB true miss.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.ITLB_MISS", "MSRIndex": "0x3F7", @@ -65,6 +72,7 @@ }, { "BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.L1I_MISS", "MSRIndex": "0x3F7", @@ -76,6 +84,7 @@ }, { "BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.L2_MISS", "MSRIndex": "0x3F7", @@ -87,6 +96,7 @@ }, { "BriefDescription": "Retired instructions after front-end starvation of at least 1 cycle", + "Counter": 
"0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_1", "MSRIndex": "0x3F7", @@ -98,6 +108,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 128 cycles= which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_128", "MSRIndex": "0x3F7", @@ -109,6 +120,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 16 cycles = which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_16", "MSRIndex": "0x3F7", @@ -120,6 +132,7 @@ }, { "BriefDescription": "Retired instructions after front-end starvati= on of at least 2 cycles", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_2", "MSRIndex": "0x3F7", @@ -131,6 +144,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 256 cycles= which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_256", "MSRIndex": "0x3F7", @@ -142,6 +156,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end had at least 1 bubble-slot for a period of 2= cycles which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1", "MSRIndex": "0x3F7", @@ -153,6 +168,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 32 cycles = which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", 
"EventName": "FRONTEND_RETIRED.LATENCY_GE_32", "MSRIndex": "0x3F7", @@ -164,6 +180,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 4 cycles w= hich was not interrupted by a back-end stall.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_4", "MSRIndex": "0x3F7", @@ -175,6 +192,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 512 cycles= which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_512", "MSRIndex": "0x3F7", @@ -186,6 +204,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 64 cycles = which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_64", "MSRIndex": "0x3F7", @@ -197,6 +216,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 8 cycles w= hich was not interrupted by a back-end stall.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_8", "MSRIndex": "0x3F7", @@ -208,6 +228,7 @@ }, { "BriefDescription": "FRONTEND_RETIRED.MS_FLOWS", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.MS_FLOWS", "MSRIndex": "0x3F7", @@ -218,6 +239,7 @@ }, { "BriefDescription": "Retired Instructions who experienced STLB (2n= d level TLB) true miss.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.STLB_MISS", "MSRIndex": "0x3F7", @@ -229,6 +251,7 @@ }, { "BriefDescription": "FRONTEND_RETIRED.UNKNOWN_BRANCH", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc6", "EventName": 
"FRONTEND_RETIRED.UNKNOWN_BRANCH", "MSRIndex": "0x3F7", @@ -239,14 +262,26 @@ }, { "BriefDescription": "Cycles where a code fetch is stalled due to L= 1 instruction cache miss.", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "ICACHE_DATA.STALLS", "PublicDescription": "Counts cycles where a code line fetch is sta= lled due to an L1 instruction cache miss. The decode pipeline works at a 32= Byte granularity.", "SampleAfterValue": "500009", "UMask": "0x4" }, + { + "BriefDescription": "ICACHE_DATA.STALL_PERIODS", + "Counter": "0,1,2,3", + "CounterMask": "1", + "EdgeDetect": "1", + "EventCode": "0x80", + "EventName": "ICACHE_DATA.STALL_PERIODS", + "SampleAfterValue": "500009", + "UMask": "0x4" + }, { "BriefDescription": "Cycles where a code fetch is stalled due to L= 1 instruction cache tag miss.", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "ICACHE_TAG.STALLS", "PublicDescription": "Counts cycles where a code fetch is stalled = due to L1 instruction cache tag miss.", @@ -255,6 +290,7 @@ }, { "BriefDescription": "Cycles Decode Stream Buffer (DSB) is deliveri= ng any Uop", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.DSB_CYCLES_ANY", @@ -264,6 +300,7 @@ }, { "BriefDescription": "Cycles DSB is delivering optimal number of Uo= ps", + "Counter": "0,1,2,3", "CounterMask": "6", "EventCode": "0x79", "EventName": "IDQ.DSB_CYCLES_OK", @@ -273,6 +310,7 @@ }, { "BriefDescription": "Uops delivered to Instruction Decode Queue (I= DQ) from the Decode Stream Buffer (DSB) path", + "Counter": "0,1,2,3", "EventCode": "0x79", "EventName": "IDQ.DSB_UOPS", "PublicDescription": "Counts the number of uops delivered to Instr= uction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.", @@ -281,6 +319,7 @@ }, { "BriefDescription": "Cycles MITE is delivering any Uop", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.MITE_CYCLES_ANY", @@ -290,6 +329,7 @@ }, { "BriefDescription": "Cycles MITE 
is delivering optimal number of Uops", + "Counter": "0,1,2,3", "CounterMask": "6", "EventCode": "0x79", "EventName": "IDQ.MITE_CYCLES_OK", @@ -299,6 +339,7 @@ }, { "BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path", + "Counter": "0,1,2,3", "EventCode": "0x79", "EventName": "IDQ.MITE_UOPS", "PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).", @@ -307,6 +348,7 @@ }, { "BriefDescription": "Cycles when uops are being delivered to IDQ while MS is busy", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.MS_CYCLES_ANY", @@ -316,6 +358,7 @@ }, { "BriefDescription": "Number of switches from DSB or MITE to the MS", + "Counter": "0,1,2,3", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0x79", @@ -326,6 +369,7 @@ }, { "BriefDescription": "Uops delivered to IDQ while MS is busy", + "Counter": "0,1,2,3", "EventCode": "0x79", "EventName": "IDQ.MS_UOPS", "PublicDescription": "Counts the total number of uops delivered by the Microcode Sequencer (MS).", @@ -334,6 +378,7 @@ }, { "BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CORE]", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0x9c", "EventName": "IDQ_BUBBLES.CORE", "PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle. 
[This event is alias to IDQ_UOPS_NOT_DELIVERED.CORE]", @@ -342,6 +387,7 @@ }, { "BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE]", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "6", "EventCode": "0x9c", "EventName": "IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE", @@ -351,6 +397,7 @@ }, { "BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK]", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0x9c", "EventName": "IDQ_BUBBLES.CYCLES_FE_WAS_OK", @@ -361,6 +408,7 @@ }, { "BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled [This event is alias to IDQ_BUBBLES.CORE]", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0x9c", "EventName": "IDQ_UOPS_NOT_DELIVERED.CORE", "PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle. 
[This event is alias to IDQ_BUBBLES.CORE]", @@ -369,6 +417,7 @@ }, { "BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled [This event is alias to IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE]", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "6", "EventCode": "0x9c", "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE", @@ -378,6 +427,7 @@ }, { "BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled [This event is alias to IDQ_BUBBLES.CYCLES_FE_WAS_OK]", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0x9c", "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/memory.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/memory.json index 5420f529f491..2ea19539291b 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/memory.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/memory.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Execution stalls while L3 cache miss demand load is outstanding.", + "Counter": "0,1,2,3", "CounterMask": "6", "EventCode": "0xa3", "EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS", @@ -9,6 +10,7 @@ }, { "BriefDescription": "Number of machine clears due to memory ordering conflicts.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc3", "EventName": "MACHINE_CLEARS.MEMORY_ORDERING", "PublicDescription": "Counts the number of Machine Clears detected dye to memory ordering. 
Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture", @@ -17,6 +19,7 @@ }, { "BriefDescription": "Cycles while L1 cache miss demand load is outstanding.", + "Counter": "0,1,2,3", "CounterMask": "2", "EventCode": "0x47", "EventName": "MEMORY_ACTIVITY.CYCLES_L1D_MISS", @@ -25,6 +28,7 @@ }, { "BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.", + "Counter": "0,1,2,3", "CounterMask": "3", "EventCode": "0x47", "EventName": "MEMORY_ACTIVITY.STALLS_L1D_MISS", @@ -33,6 +37,7 @@ }, { "BriefDescription": "Execution stalls while L2 cache miss demand cacheable load request is outstanding.", + "Counter": "0,1,2,3", "CounterMask": "5", "EventCode": "0x47", "EventName": "MEMORY_ACTIVITY.STALLS_L2_MISS", @@ -42,6 +47,7 @@ }, { "BriefDescription": "Execution stalls while L3 cache miss demand cacheable load request is outstanding.", + "Counter": "0,1,2,3", "CounterMask": "9", "EventCode": "0x47", "EventName": "MEMORY_ACTIVITY.STALLS_L3_MISS", @@ -49,8 +55,22 @@ "SampleAfterValue": "1000003", "UMask": "0x9" }, + { + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles.", + "Counter": "1,2,3,4,5,6,7", + "Data_LA": "1", + "EventCode": "0xcd", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_1024", + "MSRIndex": "0x3F6", + "MSRValue": "0x400", + "PEBS": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles. 
Reported latency may be longer than just the memory latency.", + "SampleAfterValue": "53", + "UMask": "0x1" + }, { "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.", + "Counter": "1,2,3,4,5,6,7", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128", @@ -63,6 +83,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.", + "Counter": "1,2,3,4,5,6,7", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16", @@ -75,6 +96,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.", + "Counter": "1,2,3,4,5,6,7", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256", @@ -87,6 +109,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.", + "Counter": "1,2,3,4,5,6,7", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32", @@ -99,6 +122,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.", + "Counter": "1,2,3,4,5,6,7", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4", @@ -111,6 +135,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.", + "Counter": "1,2,3,4,5,6,7", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512", @@ -123,6 +148,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.", + "Counter": "1,2,3,4,5,6,7", "Data_LA": "1", "EventCode": "0xcd", 
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64", @@ -135,6 +161,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the laten= cy from first dispatch to completion is greater than 8 cycles.", + "Counter": "1,2,3,4,5,6,7", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8", @@ -147,6 +174,7 @@ }, { "BriefDescription": "Retired memory store access operations. A PDi= st event for PEBS Store Latency Facility.", + "Counter": "0", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.STORE_SAMPLE", @@ -157,6 +185,7 @@ }, { "BriefDescription": "Counts demand instruction fetches and L1 inst= ruction cache prefetches that were not supplied by the local socket's L1, L= 2, or L3 caches.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_CODE_RD.L3_MISS", "MSRIndex": "0x1a6,0x1a7", @@ -166,6 +195,7 @@ }, { "BriefDescription": "Counts demand data reads that were not suppli= ed by the local socket's L1, L2, or L3 caches.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.L3_MISS", "MSRIndex": "0x1a6,0x1a7", @@ -175,6 +205,7 @@ }, { "BriefDescription": "Counts demand reads for ownership (RFO) reque= sts and software prefetches for exclusive ownership (PREFETCHW) that were n= ot supplied by the local socket's L1, L2, or L3 caches.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_RFO.L3_MISS", "MSRIndex": "0x1a6,0x1a7", @@ -184,6 +215,7 @@ }, { "BriefDescription": "Counts hardware prefetches to the L3 only tha= t missed the local socket's L1, L2, and L3 caches.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.HWPF_L3.L3_MISS", "MSRIndex": "0x1a6,0x1a7", @@ -193,6 +225,7 @@ }, { "BriefDescription": "Counts hardware prefetches to the L3 only tha= t were not supplied by the local socket's L1, L2, or L3 caches and the cach= eline is homed locally.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", 
"EventName": "OCR.HWPF_L3.L3_MISS_LOCAL", "MSRIndex": "0x1a6,0x1a7", @@ -202,6 +235,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that were not supplied by the local socket's L1, L2, or L3 caches.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.L3_MISS", "MSRIndex": "0x1a6,0x1a7", @@ -211,6 +245,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that were not supplied by the local socket's L1, L2, or L3 caches and t= he cacheline is homed locally.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL", "MSRIndex": "0x1a6,0x1a7", @@ -220,6 +255,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that missed the L3 Cache and were supplied by the local socket (DRAM or= PMM), whether or not in Sub NUMA Cluster(SNC) Mode. 
In SNC Mode counts PMM or DRAM accesses that are controlled by the close or distant SNC Cluster. It does not count misses to the L3 which go to Local CXL Type 2 Memory or Local Non DRAM.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL_SOCKET", "MSRIndex": "0x1a6,0x1a7", @@ -229,6 +265,7 @@ }, { "BriefDescription": "Counts streaming stores that missed the local socket's L1, L2, and L3 caches.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.STREAMING_WR.L3_MISS", "MSRIndex": "0x1a6,0x1a7", @@ -238,6 +275,7 @@ }, { "BriefDescription": "Counts streaming stores that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.STREAMING_WR.L3_MISS_LOCAL", "MSRIndex": "0x1a6,0x1a7", @@ -247,6 +285,7 @@ }, { "BriefDescription": "Counts demand data read requests that miss the L3 cache.", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD", "SampleAfterValue": "100003", @@ -254,6 +293,7 @@ }, { "BriefDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache.", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD", "PublicDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache. 
Note that this does not capture all elapsed cycles while requests are outstanding - only cycles from when the requests were known by the requesting core to have missed the L3 cache.", @@ -262,6 +302,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc9", "EventName": "RTM_RETIRED.ABORTED", "PEBS": "1", @@ -271,6 +312,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc9", "EventName": "RTM_RETIRED.ABORTED_EVENTS", "PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).", @@ -279,6 +321,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc9", "EventName": "RTM_RETIRED.ABORTED_MEM", "PublicDescription": "Counts the number of times an RTM execution aborted due to various memory events (e.g. 
read/write capacity and conflicts).", @@ -287,6 +330,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc9", "EventName": "RTM_RETIRED.ABORTED_MEMTYPE", "PublicDescription": "Counts the number of times an RTM execution aborted due to incompatible memory type.", @@ -295,6 +339,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc9", "EventName": "RTM_RETIRED.ABORTED_UNFRIENDLY", "PublicDescription": "Counts the number of times an RTM execution aborted due to HLE-unfriendly instructions.", @@ -303,6 +348,7 @@ }, { "BriefDescription": "Number of times an RTM execution successfully committed", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc9", "EventName": "RTM_RETIRED.COMMIT", "PublicDescription": "Counts the number of times RTM commit succeeded.", @@ -311,6 +357,7 @@ }, { "BriefDescription": "Number of times an RTM execution started.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc9", "EventName": "RTM_RETIRED.START", "PublicDescription": "Counts the number of times we entered an RTM region. 
Does not count nested transactions.", @@ -319,6 +366,7 @@ }, { "BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional reads", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "TX_MEM.ABORT_CAPACITY_READ", "PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional reads", @@ -327,6 +375,7 @@ }, { "BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional writes.", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "TX_MEM.ABORT_CAPACITY_WRITE", "PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional writes.", @@ -335,6 +384,7 @@ }, { "BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "TX_MEM.ABORT_CONFLICT", "PublicDescription": "Counts the number of times a TSX line had a cache conflict.", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/metricgroups.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/metricgroups.json index 81e5ca1c3078..e1de6c2675c4 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/metricgroups.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/metricgroups.json @@ -5,8 +5,21 @@ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", + "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", + "BvBO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", + "BvCB": 
"Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", "C0Wait": "Grouping from Top-down Microarchitecture Analysis Metrics s= preadsheet", "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metric= s spreadsheet", + "CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metr= ics spreadsheet", "CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics = spreadsheet", "Compute": "Grouping from Top-down Microarchitecture Analysis Metrics = spreadsheet", "Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spre= adsheet", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/other.json b/too= ls/perf/pmu-events/arch/x86/sapphirerapids/other.json index 442ef3807a9d..05d8f14956ee 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/other.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/other.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "ASSISTS.PAGE_FAULT", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc1", "EventName": "ASSISTS.PAGE_FAULT", "SampleAfterValue": "1000003", @@ -8,6 +9,7 @@ }, { "BriefDescription": "Counts the cycles where the AMX (Advance Matr= ix Extension) unit is busy performing an operation.", + "Counter": 
"0,1,2,3,4,5,6,7", "EventCode": "0xb7", "EventName": "EXE.AMX_BUSY", "SampleAfterValue": "2000003", @@ -15,6 +17,7 @@ }, { "BriefDescription": "Counts demand instruction fetches and L1 inst= ruction cache prefetches that have any type of response.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -24,6 +27,7 @@ }, { "BriefDescription": "Counts demand instruction fetches and L1 inst= ruction cache prefetches that were supplied by DRAM.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_CODE_RD.DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -33,6 +37,7 @@ }, { "BriefDescription": "Counts demand instruction fetches and L1 inst= ruction cache prefetches that were supplied by DRAM attached to this socket= , unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM= accesses that are controlled by the close SNC Cluster.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -42,6 +47,7 @@ }, { "BriefDescription": "Counts demand instruction fetches and L1 inst= ruction cache prefetches that were supplied by DRAM on a distant memory con= troller of this socket when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_CODE_RD.SNC_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -51,6 +57,7 @@ }, { "BriefDescription": "Counts demand data reads that have any type o= f response.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -60,6 +67,7 @@ }, { "BriefDescription": "Counts demand data reads that were supplied b= y DRAM.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -69,6 +77,7 @@ }, { "BriefDescription": "Counts demand data reads that were supplied b= y DRAM attached to this socket, 
unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -78,6 +87,7 @@ }, { "BriefDescription": "Counts demand data reads that were supplied by PMM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM accesses that are controlled by the close or distant SNC Cluster.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.LOCAL_SOCKET_PMM", "MSRIndex": "0x1a6,0x1a7", @@ -87,6 +97,7 @@ }, { "BriefDescription": "Counts demand data reads that were supplied by PMM.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.PMM", "MSRIndex": "0x1a6,0x1a7", @@ -96,6 +107,7 @@ }, { "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to another socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.REMOTE_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -105,6 +117,7 @@ }, { "BriefDescription": "Counts demand data reads that were supplied by PMM attached to another socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.REMOTE_PMM", "MSRIndex": "0x1a6,0x1a7", @@ -114,6 +127,7 @@ }, { "BriefDescription": "Counts demand data reads that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.SNC_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -123,6 +137,7 @@ }, { "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE", "MSRIndex": 
"0x1a6,0x1a7", @@ -132,6 +147,7 @@ }, { "BriefDescription": "Counts demand reads for ownership (RFO) reque= sts and software prefetches for exclusive ownership (PREFETCHW) that were s= upplied by DRAM.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_RFO.DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -141,6 +157,7 @@ }, { "BriefDescription": "Counts demand reads for ownership (RFO) reque= sts and software prefetches for exclusive ownership (PREFETCHW) that were s= upplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mo= de. In SNC Mode counts only those DRAM accesses that are controlled by the= close SNC Cluster.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -150,6 +167,7 @@ }, { "BriefDescription": "Counts demand reads for ownership (RFO) reque= sts and software prefetches for exclusive ownership (PREFETCHW) that were s= upplied by DRAM on a distant memory controller of this socket when the syst= em is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_RFO.SNC_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -159,6 +177,7 @@ }, { "BriefDescription": "Counts data load hardware prefetch requests t= o the L1 data cache that have any type of response.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.HWPF_L1D.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -168,6 +187,7 @@ }, { "BriefDescription": "Counts hardware prefetches (which bring data = to L2) that have any type of response.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.HWPF_L2.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -177,6 +197,7 @@ }, { "BriefDescription": "Counts hardware prefetches to the L3 only tha= t have any type of response.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.HWPF_L3.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -186,6 +207,7 @@ }, { "BriefDescription": "Counts 
hardware prefetches to the L3 only tha= t were not supplied by the local socket's L1, L2, or L3 caches and the cach= eline was homed in a remote socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.HWPF_L3.REMOTE", "MSRIndex": "0x1a6,0x1a7", @@ -195,6 +217,7 @@ }, { "BriefDescription": "Counts writebacks of modified cachelines and = streaming stores that have any type of response.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.MODIFIED_WRITE.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -204,6 +227,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that have any type of response.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -213,6 +237,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that were supplied by DRAM.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -222,6 +247,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that were supplied by DRAM attached to this socket, unless in Sub NUMA = Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are co= ntrolled by the close SNC Cluster.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.LOCAL_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -231,6 +257,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that were supplied by DRAM attached to this socket, whether or not in S= ub NUMA Cluster(SNC) Mode. 
In SNC Mode counts DRAM accesses that are contr= olled by the close or distant SNC Cluster.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -240,6 +267,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that were supplied by PMM attached to this socket, whether or not in Su= b NUMA Cluster(SNC) Mode. In SNC Mode counts PMM accesses that are control= led by the close or distant SNC Cluster.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_PMM", "MSRIndex": "0x1a6,0x1a7", @@ -249,6 +277,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that were not supplied by the local socket's L1, L2, or L3 caches and w= ere supplied by a remote socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.REMOTE", "MSRIndex": "0x1a6,0x1a7", @@ -258,6 +287,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that were supplied by DRAM attached to another socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.REMOTE_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -267,6 +297,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that were supplied by DRAM or PMM attached to another socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.REMOTE_MEMORY", "MSRIndex": "0x1a6,0x1a7", @@ -276,6 +307,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 
or = L2) that were supplied by PMM attached to another socket.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.REMOTE_PMM", "MSRIndex": "0x1a6,0x1a7", @@ -285,6 +317,7 @@ }, { "BriefDescription": "Counts all (cacheable) data read, code read a= nd RFO requests including demands and prefetches to the core caches (L1 or = L2) that were supplied by DRAM on a distant memory controller of this socke= t when the system is in SNC (sub-NUMA cluster) mode.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.READS_TO_CORE.SNC_DRAM", "MSRIndex": "0x1a6,0x1a7", @@ -294,6 +327,7 @@ }, { "BriefDescription": "Counts streaming stores that have any type of= response.", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.STREAMING_WR.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -303,6 +337,7 @@ }, { "BriefDescription": "Counts Demand RFOs, ItoM's, PREFECTHW's, Hard= ware RFO Prefetches to the L1/L2 and Streaming stores that likely resulted = in a store to Memory (DRAM or PMM)", + "Counter": "0,1,2,3", "EventCode": "0x2A,0x2B", "EventName": "OCR.WRITE_ESTIMATE.MEMORY", "MSRIndex": "0x1a6,0x1a7", @@ -312,6 +347,7 @@ }, { "BriefDescription": "Cycles when Reservation Station (RS) is empty= for the thread.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa5", "EventName": "RS.EMPTY", "PublicDescription": "Counts cycles during which the reservation s= tation (RS) is empty for this logical processor. This is usually caused whe= n the front-end pipeline runs into starvation periods (e.g. 
branch mispredi= ctions or i-cache misses)", @@ -320,6 +356,7 @@ }, { "BriefDescription": "Counts end of periods where the Reservation S= tation (RS) was empty.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0xa5", @@ -329,8 +366,17 @@ "SampleAfterValue": "100003", "UMask": "0x7" }, + { + "BriefDescription": "Cycles when Reservation Station (RS) is empty= due to a resource in the back-end", + "Counter": "0,1,2,3,4,5,6,7", + "EventCode": "0xa5", + "EventName": "RS.EMPTY_RESOURCE", + "SampleAfterValue": "1000003", + "UMask": "0x1" + }, { "BriefDescription": "This event is deprecated. Refer to new event = RS.EMPTY_COUNT", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "Deprecated": "1", "EdgeDetect": "1", @@ -342,6 +388,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = RS.EMPTY", + "Counter": "0,1,2,3,4,5,6,7", "Deprecated": "1", "EventCode": "0xa5", "EventName": "RS_EMPTY.CYCLES", @@ -350,6 +397,7 @@ }, { "BriefDescription": "Cycles the uncore cannot take further request= s", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x2d", "EventName": "XQ.FULL_CYCLES", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/pipeline.json b/= tools/perf/pmu-events/arch/x86/sapphirerapids/pipeline.json index e2086bedeca8..5d5811f26151 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/pipeline.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/pipeline.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "This event is deprecated. Refer to new event = ARITH.DIV_ACTIVE", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "Deprecated": "1", "EventCode": "0xb0", @@ -10,6 +11,7 @@ }, { "BriefDescription": "Cycles when divide unit is busy executing div= ide or square root operations.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0xb0", "EventName": "ARITH.DIV_ACTIVE", @@ -19,6 +21,7 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = ARITH.FPDIV_ACTIVE", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "Deprecated": "1", "EventCode": "0xb0", @@ -28,6 +31,7 @@ }, { "BriefDescription": "This event counts the cycles the integer divi= der is busy.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0xb0", "EventName": "ARITH.IDIV_ACTIVE", @@ -36,6 +40,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = ARITH.IDIV_ACTIVE", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "Deprecated": "1", "EventCode": "0xb0", @@ -45,6 +50,7 @@ }, { "BriefDescription": "Number of occurrences where a microcode assis= t is invoked by hardware.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc1", "EventName": "ASSISTS.ANY", "PublicDescription": "Counts the number of occurrences where a mic= rocode assist is invoked by hardware. Examples include AD (page Access Dirt= y), FP and AVX related assists.", @@ -53,6 +59,7 @@ }, { "BriefDescription": "All branch instructions retired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc4", "EventName": "BR_INST_RETIRED.ALL_BRANCHES", "PEBS": "1", @@ -61,6 +68,7 @@ }, { "BriefDescription": "Conditional branch instructions retired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc4", "EventName": "BR_INST_RETIRED.COND", "PEBS": "1", @@ -70,6 +78,7 @@ }, { "BriefDescription": "Not taken branch instructions retired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc4", "EventName": "BR_INST_RETIRED.COND_NTAKEN", "PEBS": "1", @@ -79,6 +88,7 @@ }, { "BriefDescription": "Taken conditional branch instructions retired= .", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc4", "EventName": "BR_INST_RETIRED.COND_TAKEN", "PEBS": "1", @@ -88,6 +98,7 @@ }, { "BriefDescription": "Far branch instructions retired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc4", "EventName": "BR_INST_RETIRED.FAR_BRANCH", "PEBS": "1", @@ -97,6 +108,7 @@ }, { "BriefDescription": "Indirect near branch instructions retired (ex= cluding 
returns)", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc4", "EventName": "BR_INST_RETIRED.INDIRECT", "PEBS": "1", @@ -106,6 +118,7 @@ }, { "BriefDescription": "Direct and indirect near call instructions re= tired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc4", "EventName": "BR_INST_RETIRED.NEAR_CALL", "PEBS": "1", @@ -115,6 +128,7 @@ }, { "BriefDescription": "Return instructions retired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc4", "EventName": "BR_INST_RETIRED.NEAR_RETURN", "PEBS": "1", @@ -124,6 +138,7 @@ }, { "BriefDescription": "Taken branch instructions retired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc4", "EventName": "BR_INST_RETIRED.NEAR_TAKEN", "PEBS": "1", @@ -133,6 +148,7 @@ }, { "BriefDescription": "All mispredicted branch instructions retired.= ", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc5", "EventName": "BR_MISP_RETIRED.ALL_BRANCHES", "PEBS": "1", @@ -141,6 +157,7 @@ }, { "BriefDescription": "Mispredicted conditional branch instructions = retired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc5", "EventName": "BR_MISP_RETIRED.COND", "PEBS": "1", @@ -150,6 +167,7 @@ }, { "BriefDescription": "Mispredicted non-taken conditional branch ins= tructions retired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc5", "EventName": "BR_MISP_RETIRED.COND_NTAKEN", "PEBS": "1", @@ -159,6 +177,7 @@ }, { "BriefDescription": "number of branch instructions retired that we= re mispredicted and taken.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc5", "EventName": "BR_MISP_RETIRED.COND_TAKEN", "PEBS": "1", @@ -168,6 +187,7 @@ }, { "BriefDescription": "Miss-predicted near indirect branch instructi= ons retired (excluding returns)", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc5", "EventName": "BR_MISP_RETIRED.INDIRECT", "PEBS": "1", @@ -177,6 +197,7 @@ }, { "BriefDescription": "Mispredicted indirect CALL retired.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc5", "EventName": "BR_MISP_RETIRED.INDIRECT_CALL", 
"PEBS": "1", @@ -186,6 +207,7 @@ }, { "BriefDescription": "Number of near branch instructions retired th= at were mispredicted and taken.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc5", "EventName": "BR_MISP_RETIRED.NEAR_TAKEN", "PEBS": "1", @@ -195,6 +217,7 @@ }, { "BriefDescription": "This event counts the number of mispredicted = ret instructions retired. Non PEBS", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc5", "EventName": "BR_MISP_RETIRED.RET", "PEBS": "1", @@ -204,6 +227,7 @@ }, { "BriefDescription": "Core clocks when the thread is in the C0.1 li= ght-weight slower wakeup time but more power saving optimized state.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xec", "EventName": "CPU_CLK_UNHALTED.C01", "PublicDescription": "Counts core clocks when the thread is in the= C0.1 light-weight slower wakeup time but more power saving optimized state= . This state can be entered via the TPAUSE or UMWAIT instructions.", @@ -212,6 +236,7 @@ }, { "BriefDescription": "Core clocks when the thread is in the C0.2 li= ght-weight faster wakeup time but less power saving optimized state.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xec", "EventName": "CPU_CLK_UNHALTED.C02", "PublicDescription": "Counts core clocks when the thread is in the= C0.2 light-weight faster wakeup time but less power saving optimized state= . 
This state can be entered via the TPAUSE or UMWAIT instructions.", @@ -220,6 +245,7 @@ }, { "BriefDescription": "Core clocks when the thread is in the C0.1 or= C0.2 or running a PAUSE in C0 ACPI state.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xec", "EventName": "CPU_CLK_UNHALTED.C0_WAIT", "PublicDescription": "Counts core clocks when the thread is in the= C0.1 or C0.2 power saving optimized states (TPAUSE or UMWAIT instructions)= or running the PAUSE instruction.", @@ -228,6 +254,7 @@ }, { "BriefDescription": "Cycle counts are evenly distributed between a= ctive threads in the Core.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xec", "EventName": "CPU_CLK_UNHALTED.DISTRIBUTED", "PublicDescription": "This event distributes cycle counts between = active hyperthreads, i.e., those in C0. A hyperthread becomes inactive whe= n it executes the HLT or MWAIT instructions. If all other hyperthreads are= inactive (or disabled or do not exist), all counts are attributed to this = hyperthread. To obtain the full count when the Core is active, sum the coun= ts from each hyperthread.", @@ -236,6 +263,7 @@ }, { "BriefDescription": "Core crystal clock cycles when this thread is= unhalted and the other thread is halted.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0x3c", "EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE", "PublicDescription": "Counts Core crystal clock cycles when curren= t thread is unhalted and the other thread is halted.", @@ -244,6 +272,7 @@ }, { "BriefDescription": "CPU_CLK_UNHALTED.PAUSE", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xec", "EventName": "CPU_CLK_UNHALTED.PAUSE", "SampleAfterValue": "2000003", @@ -251,6 +280,7 @@ }, { "BriefDescription": "CPU_CLK_UNHALTED.PAUSE_INST", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0xec", @@ -260,6 +290,7 @@ }, { "BriefDescription": "Core crystal clock cycles. 
Cycle counts are e= venly distributed between active threads in the Core.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0x3c", "EventName": "CPU_CLK_UNHALTED.REF_DISTRIBUTED", "PublicDescription": "This event distributes Core crystal clock cy= cle counts between active hyperthreads, i.e., those in C0 sleep-state. A hy= perthread becomes inactive when it executes the HLT or MWAIT instructions. = If one thread is active in a core, all counts are attributed to this hypert= hread. To obtain the full count when the Core is active, sum the counts fro= m each hyperthread.", @@ -268,6 +299,7 @@ }, { "BriefDescription": "Reference cycles when the core is not in halt= state.", + "Counter": "Fixed counter 2", "EventName": "CPU_CLK_UNHALTED.REF_TSC", "PublicDescription": "Counts the number of reference cycles when t= he core is not in a halt state. The core enters the halt state when it is r= unning the HLT instruction or the MWAIT instruction. This event is not affe= cted by core frequency changes (for example, P states, TM2 transitions) but= has the same incrementing frequency as the time stamp counter. This event = can approximate elapsed time while the core was not in a halt state. It is = counted on a dedicated fixed counter, leaving the eight programmable counte= rs available for other events. Note: On all current platforms this event st= ops counting during 'throttling (TM)' states duty off periods the processor= is 'halted'. The counter update is done at a lower clock rate then the co= re clock the overflow status bit for this counter may appear 'sticky'. Aft= er the counter has overflowed and software clears the overflow status bit a= nd resets the counter to less than MAX. The reset value to the counter is n= ot clocked immediately so the overflow status bit will flip 'high (1)' and = generate another PMI (if enabled) after which the reset value gets clocked = into the counter. 
Therefore, software will get the interrupt, read the over= flow status bit '1 for bit 34 while the counter value is less than MAX. Sof= tware should ignore this case.", "SampleAfterValue": "2000003", @@ -275,6 +307,7 @@ }, { "BriefDescription": "Reference cycles when the core is not in halt= state.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0x3c", "EventName": "CPU_CLK_UNHALTED.REF_TSC_P", "PublicDescription": "Counts the number of reference cycles when t= he core is not in a halt state. The core enters the halt state when it is r= unning the HLT instruction or the MWAIT instruction. This event is not affe= cted by core frequency changes (for example, P states, TM2 transitions) but= has the same incrementing frequency as the time stamp counter. This event = can approximate elapsed time while the core was not in a halt state. It is = counted on a dedicated fixed counter, leaving the four (eight when Hyperthr= eading is disabled) programmable counters available for other events. Note:= On all current platforms this event stops counting during 'throttling (TM)= ' states duty off periods the processor is 'halted'. The counter update is= done at a lower clock rate then the core clock the overflow status bit for= this counter may appear 'sticky'. After the counter has overflowed and so= ftware clears the overflow status bit and resets the counter to less than M= AX. The reset value to the counter is not clocked immediately so the overfl= ow status bit will flip 'high (1)' and generate another PMI (if enabled) af= ter which the reset value gets clocked into the counter. Therefore, softwar= e will get the interrupt, read the overflow status bit '1 for bit 34 while = the counter value is less than MAX. 
Software should ignore this case.", @@ -283,6 +316,7 @@ }, { "BriefDescription": "Core cycles when the thread is not in halt st= ate", + "Counter": "Fixed counter 1", "EventName": "CPU_CLK_UNHALTED.THREAD", "PublicDescription": "Counts the number of core cycles while the t= hread is not in a halt state. The thread enters the halt state when it is r= unning the HLT instruction. This event is a component in many key event rat= ios. The core frequency may change from time to time due to transitions ass= ociated with Enhanced Intel SpeedStep Technology or TM2. For this reason th= is event may have a changing ratio with regards to time. When the core freq= uency is constant, this event can approximate elapsed time while the core w= as not in the halt state. It is counted on a dedicated fixed counter, leavi= ng the eight programmable counters available for other events.", "SampleAfterValue": "2000003", @@ -290,6 +324,7 @@ }, { "BriefDescription": "Thread cycles when thread is not in halt stat= e", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0x3c", "EventName": "CPU_CLK_UNHALTED.THREAD_P", "PublicDescription": "This is an architectural event that counts t= he number of thread cycles while the thread is not in a halt state. The thr= ead enters the halt state when it is running the HLT instruction. The core = frequency may change from time to time due to power or thermal throttling. 
= For this reason, this event may have a changing ratio with regards to wall = clock time.", @@ -297,6 +332,7 @@ }, { "BriefDescription": "Cycles while L1 cache miss demand load is out= standing.", + "Counter": "0,1,2,3", "CounterMask": "8", "EventCode": "0xa3", "EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS", @@ -305,6 +341,7 @@ }, { "BriefDescription": "Cycles while L2 cache miss demand load is out= standing.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0xa3", "EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS", @@ -313,6 +350,7 @@ }, { "BriefDescription": "Cycles while memory subsystem has an outstand= ing load.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "16", "EventCode": "0xa3", "EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY", @@ -321,6 +359,7 @@ }, { "BriefDescription": "Execution stalls while L1 cache miss demand l= oad is outstanding.", + "Counter": "0,1,2,3", "CounterMask": "12", "EventCode": "0xa3", "EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS", @@ -329,6 +368,7 @@ }, { "BriefDescription": "Execution stalls while L2 cache miss demand l= oad is outstanding.", + "Counter": "0,1,2,3", "CounterMask": "5", "EventCode": "0xa3", "EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS", @@ -337,6 +377,7 @@ }, { "BriefDescription": "Total execution stalls.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "4", "EventCode": "0xa3", "EventName": "CYCLE_ACTIVITY.STALLS_TOTAL", @@ -345,14 +386,24 @@ }, { "BriefDescription": "Cycles total of 1 uop is executed on all port= s and Reservation Station was not empty.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa6", "EventName": "EXE_ACTIVITY.1_PORTS_UTIL", "PublicDescription": "Counts cycles during which a total of 1 uop = was executed on all ports and Reservation Station (RS) was not empty.", "SampleAfterValue": "2000003", "UMask": "0x2" }, + { + "BriefDescription": "Cycles total of 2 or 3 uops are executed on a= ll ports and Reservation Station (RS) was not empty.", + "Counter": "0,1,2,3,4,5,6,7", + "EventCode": "0xa6", 
+ "EventName": "EXE_ACTIVITY.2_3_PORTS_UTIL", + "SampleAfterValue": "2000003", + "UMask": "0xc" + }, { "BriefDescription": "Cycles total of 2 uops are executed on all po= rts and Reservation Station was not empty.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa6", "EventName": "EXE_ACTIVITY.2_PORTS_UTIL", "PublicDescription": "Counts cycles during which a total of 2 uops= were executed on all ports and Reservation Station (RS) was not empty.", @@ -361,6 +412,7 @@ }, { "BriefDescription": "Cycles total of 3 uops are executed on all po= rts and Reservation Station was not empty.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa6", "EventName": "EXE_ACTIVITY.3_PORTS_UTIL", "PublicDescription": "Cycles total of 3 uops are executed on all p= orts and Reservation Station (RS) was not empty.", @@ -369,6 +421,7 @@ }, { "BriefDescription": "Cycles total of 4 uops are executed on all po= rts and Reservation Station was not empty.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa6", "EventName": "EXE_ACTIVITY.4_PORTS_UTIL", "PublicDescription": "Cycles total of 4 uops are executed on all p= orts and Reservation Station (RS) was not empty.", @@ -377,6 +430,7 @@ }, { "BriefDescription": "Execution stalls while memory subsystem has a= n outstanding load.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "5", "EventCode": "0xa6", "EventName": "EXE_ACTIVITY.BOUND_ON_LOADS", @@ -385,6 +439,7 @@ }, { "BriefDescription": "Cycles where the Store Buffer was full and no= loads caused an execution stall.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "2", "EventCode": "0xa6", "EventName": "EXE_ACTIVITY.BOUND_ON_STORES", @@ -394,6 +449,7 @@ }, { "BriefDescription": "Cycles no uop executed while RS was not empty= , the SB was not full and there was no outstanding load.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa6", "EventName": "EXE_ACTIVITY.EXE_BOUND_0_PORTS", "PublicDescription": "Number of cycles total of 0 uops executed on= all ports, Reservation Station (RS) was not 
empty, the Store Buffer (SB) w= as not full and there was no outstanding load.", @@ -402,6 +458,7 @@ }, { "BriefDescription": "Instruction decoders utilized in a cycle", + "Counter": "0,1,2,3", "EventCode": "0x75", "EventName": "INST_DECODED.DECODERS", "PublicDescription": "Number of decoders utilized in a cycle when = the MITE (legacy decode pipeline) fetches instructions.", @@ -410,6 +467,7 @@ }, { "BriefDescription": "Number of instructions retired. Fixed Counter= - architectural event", + "Counter": "Fixed counter 0", "EventName": "INST_RETIRED.ANY", "PEBS": "1", "PublicDescription": "Counts the number of X86 instructions retire= d - an Architectural PerfMon event. Counting continues during hardware inte= rrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is co= unted by a designated fixed counter freeing up programmable counters to cou= nt other events. INST_RETIRED.ANY_P is counted by a programmable counter.", @@ -418,6 +476,7 @@ }, { "BriefDescription": "Number of instructions retired. General Count= er - architectural event", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc0", "EventName": "INST_RETIRED.ANY_P", "PEBS": "1", @@ -426,6 +485,7 @@ }, { "BriefDescription": "INST_RETIRED.MACRO_FUSED", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc0", "EventName": "INST_RETIRED.MACRO_FUSED", "PEBS": "1", @@ -434,6 +494,7 @@ }, { "BriefDescription": "Retired NOP instructions.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc0", "EventName": "INST_RETIRED.NOP", "PEBS": "1", @@ -443,6 +504,7 @@ }, { "BriefDescription": "Precise instruction retired with PEBS precise= -distribution", + "Counter": "Fixed counter 0", "EventName": "INST_RETIRED.PREC_DIST", "PEBS": "1", "PublicDescription": "A version of INST_RETIRED that allows for a = precise distribution of samples across instructions retired. It utilizes th= e Precise Distribution of Instructions Retired (PDIR++) feature to fix bias= in how retired instructions get sampled. 
Use on Fixed Counter 0.", @@ -451,6 +513,7 @@ }, { "BriefDescription": "Iterations of Repeat string retired instructi= ons.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc0", "EventName": "INST_RETIRED.REP_ITERATION", "PEBS": "1", @@ -460,6 +523,7 @@ }, { "BriefDescription": "Clears speculative count", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0xad", @@ -470,6 +534,7 @@ }, { "BriefDescription": "Counts cycles after recovery from a branch mi= sprediction or machine clear till the first uop is issued from the resteere= d path.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xad", "EventName": "INT_MISC.CLEAR_RESTEER_CYCLES", "PublicDescription": "Cycles after recovery from a branch mispredi= ction or machine clear till the first uop is issued from the resteered path= .", @@ -478,6 +543,7 @@ }, { "BriefDescription": "INT_MISC.MBA_STALLS", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xad", "EventName": "INT_MISC.MBA_STALLS", "SampleAfterValue": "1000003", @@ -485,6 +551,7 @@ }, { "BriefDescription": "Core cycles the allocator was stalled due to = recovery from earlier clear event for this thread", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xad", "EventName": "INT_MISC.RECOVERY_CYCLES", "PublicDescription": "Counts core cycles when the Resource allocat= or was stalled due to recovery from an earlier branch misprediction or mach= ine clear event.", @@ -493,6 +560,7 @@ }, { "BriefDescription": "Bubble cycles of BAClear (Unknown Branch).", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xad", "EventName": "INT_MISC.UNKNOWN_BRANCH_CYCLES", "MSRIndex": "0x3F7", @@ -502,6 +570,7 @@ }, { "BriefDescription": "TMA slots where uops got dropped", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xad", "EventName": "INT_MISC.UOP_DROPPING", "PublicDescription": "Estimated number of Top-down Microarchitectu= re Analysis slots that got dropped due to non front-end reasons", @@ -510,6 +579,7 @@ }, { "BriefDescription": 
"INT_VEC_RETIRED.128BIT", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xe7", "EventName": "INT_VEC_RETIRED.128BIT", "SampleAfterValue": "1000003", @@ -517,6 +587,7 @@ }, { "BriefDescription": "INT_VEC_RETIRED.256BIT", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xe7", "EventName": "INT_VEC_RETIRED.256BIT", "SampleAfterValue": "1000003", @@ -524,6 +595,7 @@ }, { "BriefDescription": "integer ADD, SUB, SAD 128-bit vector instruct= ions.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xe7", "EventName": "INT_VEC_RETIRED.ADD_128", "PublicDescription": "Number of retired integer ADD/SUB (regular o= r horizontal), SAD 128-bit vector instructions.", @@ -532,6 +604,7 @@ }, { "BriefDescription": "integer ADD, SUB, SAD 256-bit vector instruct= ions.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xe7", "EventName": "INT_VEC_RETIRED.ADD_256", "PublicDescription": "Number of retired integer ADD/SUB (regular o= r horizontal), SAD 256-bit vector instructions.", @@ -540,6 +613,7 @@ }, { "BriefDescription": "INT_VEC_RETIRED.MUL_256", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xe7", "EventName": "INT_VEC_RETIRED.MUL_256", "SampleAfterValue": "1000003", @@ -547,6 +621,7 @@ }, { "BriefDescription": "INT_VEC_RETIRED.SHUFFLES", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xe7", "EventName": "INT_VEC_RETIRED.SHUFFLES", "SampleAfterValue": "1000003", @@ -554,6 +629,7 @@ }, { "BriefDescription": "INT_VEC_RETIRED.VNNI_128", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xe7", "EventName": "INT_VEC_RETIRED.VNNI_128", "SampleAfterValue": "1000003", @@ -561,6 +637,7 @@ }, { "BriefDescription": "INT_VEC_RETIRED.VNNI_256", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xe7", "EventName": "INT_VEC_RETIRED.VNNI_256", "SampleAfterValue": "1000003", @@ -568,6 +645,7 @@ }, { "BriefDescription": "False dependencies in MOB due to partial comp= are on address.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "LD_BLOCKS.ADDRESS_ALIAS", "PublicDescription": "Counts the number of 
times a load got blocke= d due to false dependencies in MOB due to partial compare on address.", @@ -576,6 +654,7 @@ }, { "BriefDescription": "The number of times that split load operation= s are temporarily blocked because all resources for handling the split acce= sses are in use.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "LD_BLOCKS.NO_SR", "PublicDescription": "Counts the number of times that split load o= perations are temporarily blocked because all resources for handling the sp= lit accesses are in use.", @@ -584,6 +663,7 @@ }, { "BriefDescription": "Loads blocked due to overlapping with a prece= ding store that cannot be forwarded.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "LD_BLOCKS.STORE_FORWARD", "PublicDescription": "Counts the number of times where store forwa= rding was prevented for a load operation. The most common case is a load bl= ocked due to the address of memory access (partially) overlapping with a pr= eceding uncompleted store. Note: See the table of not supported store forwa= rds in the Optimization Guide.", @@ -592,6 +672,7 @@ }, { "BriefDescription": "Counts the number of demand load dispatches t= hat hit L1D fill buffer (FB) allocated for software prefetch.", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName": "LOAD_HIT_PREFETCH.SWPF", "PublicDescription": "Counts all not software-prefetch load dispat= ches that hit the fill buffer (FB) allocated for the software prefetch. It = can also be incremented by some lock instructions. 
So it should only be use= d with profiling so that the locks can be excluded by ASM (Assembly File) i= nspection of the nearby instructions.", @@ -600,6 +681,7 @@ }, { "BriefDescription": "Cycles Uops delivered by the LSD, but didn't = come from the decoder.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0xa8", "EventName": "LSD.CYCLES_ACTIVE", @@ -609,6 +691,7 @@ }, { "BriefDescription": "Cycles optimal number of Uops delivered by th= e LSD, but did not come from the decoder.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "6", "EventCode": "0xa8", "EventName": "LSD.CYCLES_OK", @@ -618,6 +701,7 @@ }, { "BriefDescription": "Number of Uops delivered by the LSD.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa8", "EventName": "LSD.UOPS", "PublicDescription": "Counts the number of uops delivered to the b= ack-end by the LSD(Loop Stream Detector).", @@ -626,6 +710,7 @@ }, { "BriefDescription": "Number of machine clears (nukes) of any type.= ", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0xc3", @@ -636,6 +721,7 @@ }, { "BriefDescription": "Self-modifying code (SMC) detected.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc3", "EventName": "MACHINE_CLEARS.SMC", "PublicDescription": "Counts self-modifying code (SMC) detected, w= hich causes a machine clear.", @@ -644,6 +730,7 @@ }, { "BriefDescription": "LFENCE instructions retired", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xe0", "EventName": "MISC2_RETIRED.LFENCE", "PublicDescription": "number of LFENCE retired instructions", @@ -652,6 +739,7 @@ }, { "BriefDescription": "Increments whenever there is an update to the= LBR array.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xcc", "EventName": "MISC_RETIRED.LBR_INSERTS", "PublicDescription": "Increments when an entry is added to the Las= t Branch Record (LBR) array (or removed from the array in case of RETURNs i= n call stack mode). 
The event requires LBR enable via IA32_DEBUGCTL MSR and= branch type selection via MSR_LBR_SELECT.", @@ -660,6 +748,7 @@ }, { "BriefDescription": "Cycles stalled due to no store buffers availa= ble. (not including draining form sync).", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa2", "EventName": "RESOURCE_STALLS.SB", "PublicDescription": "Counts allocation stall cycles caused by the= store buffer (SB) being full. This counts cycles that the pipeline back-en= d blocked uop delivery from the front-end.", @@ -668,6 +757,7 @@ }, { "BriefDescription": "Counts cycles where the pipeline is stalled d= ue to serializing operations.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa2", "EventName": "RESOURCE_STALLS.SCOREBOARD", "SampleAfterValue": "100003", @@ -675,6 +765,7 @@ }, { "BriefDescription": "TMA slots where no uops were being issued due= to lack of back-end resources.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa4", "EventName": "TOPDOWN.BACKEND_BOUND_SLOTS", "PublicDescription": "Number of slots in TMA method where no micro= -operations were being issued from front-end to back-end of the machine due= to lack of back-end resources.", @@ -683,6 +774,7 @@ }, { "BriefDescription": "TMA slots wasted due to incorrect speculation= s.", + "Counter": "0", "EventCode": "0xa4", "EventName": "TOPDOWN.BAD_SPEC_SLOTS", "PublicDescription": "Number of slots of TMA method that were wast= ed due to incorrect speculation. It covers all types of control-flow or dat= a-related mis-speculations.", @@ -691,6 +783,7 @@ }, { "BriefDescription": "TMA slots wasted due to incorrect speculation= by branch mispredictions", + "Counter": "0", "EventCode": "0xa4", "EventName": "TOPDOWN.BR_MISPREDICT_SLOTS", "PublicDescription": "Number of TMA slots that were wasted due to = incorrect speculation by (any type of) branch mispredictions. 
This event es= timates number of speculative operations that were issued but not retired a= s well as the out-of-order engine recovery past a branch misprediction.", @@ -699,6 +792,7 @@ }, { "BriefDescription": "TOPDOWN.MEMORY_BOUND_SLOTS", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa4", "EventName": "TOPDOWN.MEMORY_BOUND_SLOTS", "SampleAfterValue": "10000003", @@ -706,6 +800,7 @@ }, { "BriefDescription": "TMA slots available for an unhalted logical p= rocessor. Fixed counter - architectural event", + "Counter": "Fixed counter 3", "EventName": "TOPDOWN.SLOTS", "PublicDescription": "Number of available slots for an unhalted lo= gical processor. The event increments by machine-width of the narrowest pip= eline as employed by the Top-down Microarchitecture Analysis method (TMA). = The count is distributed among unhalted logical processors (hyper-threads) = who share the same physical core. Software can use this event as the denomi= nator for the top-level metrics of the TMA method. This architectural event= is counted on a designated fixed counter (Fixed Counter 3).", "SampleAfterValue": "10000003", @@ -713,6 +808,7 @@ }, { "BriefDescription": "TMA slots available for an unhalted logical p= rocessor. General counter - architectural event", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xa4", "EventName": "TOPDOWN.SLOTS_P", "PublicDescription": "Counts the number of available slots for an = unhalted logical processor. The event increments by machine-width of the na= rrowest pipeline as employed by the Top-down Microarchitecture Analysis met= hod. 
The count is distributed among unhalted logical processors (hyper-thre= ads) who share the same physical core.", @@ -721,6 +817,7 @@ }, { "BriefDescription": "UOPS_DECODED.DEC0_UOPS", + "Counter": "0,1,2,3", "EventCode": "0x76", "EventName": "UOPS_DECODED.DEC0_UOPS", "SampleAfterValue": "1000003", @@ -728,6 +825,7 @@ }, { "BriefDescription": "Uops executed on port 0", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb2", "EventName": "UOPS_DISPATCHED.PORT_0", "PublicDescription": "Number of uops dispatch to execution port 0= .", @@ -736,6 +834,7 @@ }, { "BriefDescription": "Uops executed on port 1", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb2", "EventName": "UOPS_DISPATCHED.PORT_1", "PublicDescription": "Number of uops dispatch to execution port 1= .", @@ -744,6 +843,7 @@ }, { "BriefDescription": "Uops executed on ports 2, 3 and 10", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb2", "EventName": "UOPS_DISPATCHED.PORT_2_3_10", "PublicDescription": "Number of uops dispatch to execution ports 2= , 3 and 10", @@ -752,6 +852,7 @@ }, { "BriefDescription": "Uops executed on ports 4 and 9", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb2", "EventName": "UOPS_DISPATCHED.PORT_4_9", "PublicDescription": "Number of uops dispatch to execution ports 4= and 9", @@ -760,6 +861,7 @@ }, { "BriefDescription": "Uops executed on ports 5 and 11", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb2", "EventName": "UOPS_DISPATCHED.PORT_5_11", "PublicDescription": "Number of uops dispatch to execution ports 5= and 11", @@ -768,6 +870,7 @@ }, { "BriefDescription": "Uops executed on port 6", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb2", "EventName": "UOPS_DISPATCHED.PORT_6", "PublicDescription": "Number of uops dispatch to execution port 6= .", @@ -776,6 +879,7 @@ }, { "BriefDescription": "Uops executed on ports 7 and 8", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb2", "EventName": "UOPS_DISPATCHED.PORT_7_8", "PublicDescription": "Number of uops dispatch to execution 
ports = 7 and 8.", @@ -784,6 +888,7 @@ }, { "BriefDescription": "Number of uops executed on the core.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.CORE", "PublicDescription": "Counts the number of uops executed from any = thread.", @@ -792,6 +897,7 @@ }, { "BriefDescription": "Cycles at least 1 micro-op is executed from a= ny thread on physical core.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1", @@ -801,6 +907,7 @@ }, { "BriefDescription": "Cycles at least 2 micro-op is executed from a= ny thread on physical core.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "2", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2", @@ -810,6 +917,7 @@ }, { "BriefDescription": "Cycles at least 3 micro-op is executed from a= ny thread on physical core.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "3", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3", @@ -819,6 +927,7 @@ }, { "BriefDescription": "Cycles at least 4 micro-op is executed from a= ny thread on physical core.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "4", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4", @@ -828,6 +937,7 @@ }, { "BriefDescription": "Cycles where at least 1 uop was executed per-= thread", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.CYCLES_GE_1", @@ -837,6 +947,7 @@ }, { "BriefDescription": "Cycles where at least 2 uops were executed pe= r-thread", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "2", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.CYCLES_GE_2", @@ -846,6 +957,7 @@ }, { "BriefDescription": "Cycles where at least 3 uops were executed pe= r-thread", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "3", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.CYCLES_GE_3", @@ -855,6 +967,7 @@ }, { "BriefDescription": "Cycles where at least 4 uops were executed pe= 
r-thread", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "4", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.CYCLES_GE_4", @@ -864,6 +977,7 @@ }, { "BriefDescription": "Counts number of cycles no uops were dispatch= ed to be executed on this thread.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.STALLS", @@ -874,6 +988,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UOPS_EXECUTED.STALLS", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "Deprecated": "1", "EventCode": "0xb1", @@ -884,6 +999,7 @@ }, { "BriefDescription": "Counts the number of uops to be executed per-= thread each cycle.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.THREAD", "SampleAfterValue": "2000003", @@ -891,6 +1007,7 @@ }, { "BriefDescription": "Counts the number of x87 uops dispatched.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xb1", "EventName": "UOPS_EXECUTED.X87", "PublicDescription": "Counts the number of x87 uops executed.", @@ -899,14 +1016,25 @@ }, { "BriefDescription": "Uops that RAT issues to RS", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xae", "EventName": "UOPS_ISSUED.ANY", "PublicDescription": "Counts the number of uops that the Resource = Allocation Table (RAT) issues to the Reservation Station (RS).", "SampleAfterValue": "2000003", "UMask": "0x1" }, + { + "BriefDescription": "UOPS_ISSUED.CYCLES", + "Counter": "0,1,2,3,4,5,6,7", + "CounterMask": "1", + "EventCode": "0xae", + "EventName": "UOPS_ISSUED.CYCLES", + "SampleAfterValue": "2000003", + "UMask": "0x1" + }, { "BriefDescription": "Cycles with retired uop(s).", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0xc2", "EventName": "UOPS_RETIRED.CYCLES", @@ -916,6 +1044,7 @@ }, { "BriefDescription": "Retired uops except the last uop of each inst= ruction.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc2", "EventName": "UOPS_RETIRED.HEAVY", "PublicDescription": "Counts the 
number of retired micro-operation= s (uops) except the last uop of each instruction. An instruction that is de= coded into less than two uops does not contribute to the count.", @@ -924,6 +1053,7 @@ }, { "BriefDescription": "UOPS_RETIRED.MS", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc2", "EventName": "UOPS_RETIRED.MS", "MSRIndex": "0x3F7", @@ -933,6 +1063,7 @@ }, { "BriefDescription": "Retirement slots used.", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc2", "EventName": "UOPS_RETIRED.SLOTS", "PublicDescription": "Counts the retirement slots used each cycle.= ", @@ -941,6 +1072,7 @@ }, { "BriefDescription": "Cycles without actually retired uops.", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0xc2", "EventName": "UOPS_RETIRED.STALLS", @@ -951,6 +1083,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UOPS_RETIRED.STALLS", + "Counter": "0,1,2,3,4,5,6,7", "CounterMask": "1", "Deprecated": "1", "EventCode": "0xc2", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json= b/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json index f8c0eac8b828..2b3b013ccb06 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json @@ -47,7 +47,7 @@ }, { "BriefDescription": "Percentage of time spent in the active CPU po= wer state C0", - "MetricExpr": "tma_info_system_cpu_utilization", + "MetricExpr": "tma_info_system_cpus_utilized", "MetricName": "cpu_utilization", "ScaleUnit": "100%" }, @@ -72,18 +72,54 @@ "PublicDescription": "Ratio of number of completed page walks (for= all page sizes) caused by demand data stores to the total number of comple= ted instructions. 
This implies it missed in the DTLB and further levels of = TLB.", "ScaleUnit": "1per_instr" }, + { + "BriefDescription": "Bandwidth observed by the integrated I/O traf= fic controller (IIO) of IO reads that are initiated by end device controlle= rs that are requesting memory from the CPU.", + "MetricExpr": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.ALL_PARTS * 4 / 1e= 6 / duration_time", + "MetricName": "iio_bandwidth_read", + "ScaleUnit": "1MB/s" + }, + { + "BriefDescription": "Bandwidth observed by the integrated I/O traf= fic controller (IIO) of IO writes that are initiated by end device controll= ers that are writing memory to the CPU.", + "MetricExpr": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.ALL_PARTS * 4 / 1= e6 / duration_time", + "MetricName": "iio_bandwidth_write", + "ScaleUnit": "1MB/s" + }, { "BriefDescription": "Bandwidth of IO reads that are initiated by e= nd device controllers that are requesting memory from the CPU.", "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1e6 / durati= on_time", "MetricName": "io_bandwidth_read", "ScaleUnit": "1MB/s" }, + { + "BriefDescription": "Bandwidth of IO reads that are initiated by e= nd device controllers that are requesting memory from the local CPU socket.= ", + "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_LOCAL * 64 / 1e6 / = duration_time", + "MetricName": "io_bandwidth_read_local", + "ScaleUnit": "1MB/s" + }, + { + "BriefDescription": "Bandwidth of IO reads that are initiated by e= nd device controllers that are requesting memory from a remote CPU socket."= , + "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_REMOTE * 64 / 1e6 /= duration_time", + "MetricName": "io_bandwidth_read_remote", + "ScaleUnit": "1MB/s" + }, { "BriefDescription": "Bandwidth of IO writes that are initiated by = end device controllers that are writing memory to the CPU.", "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM + UNC_CHA_TOR_INSERTS.= IO_ITOMCACHENEAR) * 64 / 1e6 / duration_time", "MetricName": "io_bandwidth_write", "ScaleUnit": "1MB/s" }, 
+    {
+        "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the local CPU socket.",
+        "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM_LOCAL + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_LOCAL) * 64 / 1e6 / duration_time",
+        "MetricName": "io_bandwidth_write_local",
+        "ScaleUnit": "1MB/s"
+    },
+    {
+        "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to a remote CPU socket.",
+        "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM_REMOTE + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_REMOTE) * 64 / 1e6 / duration_time",
+        "MetricName": "io_bandwidth_write_remote",
+        "ScaleUnit": "1MB/s"
+    },
     {
         "BriefDescription": "Percentage of inbound full cacheline writes initiated by end device controllers that miss the L3 cache.",
         "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM / UNC_CHA_TOR_INSERTS.IO_ITOM",
@@ -334,7 +370,7 @@
     {
         "BriefDescription": "This metric estimates fraction of cycles where the Advanced Matrix eXtensions (AMX) execution engine was busy with tile (arithmetic) operations",
         "MetricExpr": "EXE.AMX_BUSY / tma_info_core_core_clks",
-        "MetricGroup": "Compute;HPC;Server;TopdownL3;tma_L3_group;tma_core_bound_group",
+        "MetricGroup": "BvCB;Compute;HPC;Server;TopdownL3;tma_L3_group;tma_core_bound_group",
        "MetricName": "tma_amx_busy",
         "MetricThreshold": "tma_amx_busy > 0.5 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
         "ScaleUnit": "100%"
@@ -342,7 +378,7 @@
     {
         "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
         "MetricExpr": "78 * ASSISTS.ANY / tma_info_thread_slots",
-        "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+        "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
         "MetricName": "tma_assists",
         "MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the = CPU retired uops delivered by the Microcode_Sequencer as a result of Assist= s. Assists are long sequences of uops that are required in certain corner-c= ases for operations that cannot be handled natively by the execution pipeli= ne. For example; when working with very small floating point values (so-cal= led Denormals); the FP units are not set up to perform these operations nat= ively. Instead; a sequence of instructions to perform the computation on th= e Denormals is injected into the pipeline. Since these microcode sequences = might be dozens of uops long; Assists can be extremely deleterious to perfo= rmance and they can be avoided in many cases. Sample with: ASSISTS.ANY", @@ -360,7 +396,7 @@ "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", "DefaultMetricgroupName": "TopdownL1", "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topd= own\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0 * tma_inf= o_thread_slots", - "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group", + "MetricGroup": "BvOB;Default;TmaL1;TopdownL1;tma_L1_group", "MetricName": "tma_backend_bound", "MetricThreshold": "tma_backend_bound > 0.2", "MetricgroupNoGroup": "TopdownL1;Default", @@ -382,7 +418,7 @@ "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", "DefaultMetricgroupName": "TopdownL2", "MetricExpr": "topdown\\-br\\-mispredict / (topdown\\-fe\\-bound += topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0 * tm= a_info_thread_slots", - "MetricGroup": "BadSpec;BrMispredicts;Default;TmaL2;TopdownL2;tma_= L2_group;tma_bad_speculation_group;tma_issueBM", + "MetricGroup": "BadSpec;BrMispredicts;BvMP;Default;TmaL2;TopdownL2= ;tma_L2_group;tma_bad_speculation_group;tma_issueBM", "MetricName": 
"tma_branch_mispredicts", "MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_specula= tion > 0.15", "MetricgroupNoGroup": "TopdownL2;Default", @@ -434,8 +470,8 @@ }, { "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", - "MetricExpr": "(76 * tma_info_system_core_frequency * (MEM_LOAD_L3= _HIT_RETIRED.XSNP_FWD * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND= _DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))= ) + 75.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MI= SS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_in= fo_thread_clks", - "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;t= ma_issueSyncxn;tma_l3_bound_group", + "MetricExpr": "(76.6 * tma_info_system_core_frequency * (MEM_LOAD_= L3_HIT_RETIRED.XSNP_FWD * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMA= ND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD= ))) + 74.6 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_= MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_= info_thread_clks", + "MetricGroup": "BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_gr= oup;tma_issueSyncxn;tma_l3_bound_group", "MetricName": "tma_contested_accesses", "MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound = > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. 
Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. Related m= etrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote= _cache", @@ -454,8 +490,8 @@ }, { "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", - "MetricExpr": "75.5 * tma_info_system_core_frequency * (MEM_LOAD_L= 3_HIT_RETIRED.XSNP_NO_FWD + MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (1 - OCR.DEM= AND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR= .DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT= / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks", - "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSync= xn;tma_l3_bound_group", + "MetricExpr": "74.6 * tma_info_system_core_frequency * (MEM_LOAD_L= 3_HIT_RETIRED.XSNP_NO_FWD + MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (1 - OCR.DEM= AND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR= .DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT= / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks", + "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issu= eSyncxn;tma_l3_bound_group", "MetricName": "tma_data_sharing", "MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05= & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_NO_FWD. 
Related metrics: tma_contested_accesses, tma= _false_sharing, tma_machine_clears, tma_remote_cache", @@ -473,7 +509,7 @@ { "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", "MetricExpr": "ARITH.DIV_ACTIVE / tma_info_thread_clks", - "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group", + "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group", "MetricName": "tma_divider", "MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tm= a_backend_bound > 0.2)", "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_ACTIVE", @@ -503,13 +539,13 @@ "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_= latency_group;tma_issueFB", "MetricName": "tma_dsb_switches", "MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency >= 0.1 & tma_frontend_bound > 0.15)", - "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DS= B_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_mis= ses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. 
The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DS= B_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_ban= dwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_= info_inst_mix_iptb, tma_lcp", "ScaleUnit": "100%" }, { "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", "MetricExpr": "min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1= @ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - MEMOR= Y_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks", - "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_= l1_bound_group", + "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB= ;tma_l1_bound_group", "MetricName": "tma_dtlb_load", "MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (t= ma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. 
Related metrics: t= ma_dtlb_store, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_me= mory_synchronization", @@ -518,7 +554,7 @@ { "BriefDescription": "This metric roughly estimates the fraction of= cycles spent handling first-level data TLB store misses", "MetricExpr": "(7 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=3D1@ = + DTLB_STORE_MISSES.WALK_ACTIVE) / tma_info_core_core_clks", - "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_= store_bound_group", + "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB= ;tma_store_bound_group", "MetricName": "tma_dtlb_store", "MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2= & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric roughly estimates the fraction o= f cycles spent handling first-level data TLB store misses. As with ordinar= y data caching; focus on improving data locality and reducing working-set s= ize to reduce DTLB overhead. Additionally; consider using profile-guided o= ptimization (PGO) to collocate frequently-used data on the same page. Try = using larger page sizes for large amounts of frequently-used data. Sample w= ith: MEM_INST_RETIRED.STLB_MISS_STORES_PS. 
Related metrics: tma_dtlb_load, = tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchroniz= ation", @@ -526,8 +562,8 @@ }, { "BriefDescription": "This metric roughly estimates how often CPU w= as handling synchronizations due to False Sharing", - "MetricExpr": "80 * tma_info_system_core_frequency * OCR.DEMAND_RF= O.L3_HIT.SNOOP_HITM / tma_info_thread_clks", - "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;t= ma_issueSyncxn;tma_store_bound_group", + "MetricExpr": "81 * tma_info_system_core_frequency * OCR.DEMAND_RF= O.L3_HIT.SNOOP_HITM / tma_info_thread_clks", + "MetricGroup": "BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_gr= oup;tma_issueSyncxn;tma_store_bound_group", "MetricName": "tma_false_sharing", "MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > = 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric roughly estimates how often CPU = was handling synchronizations due to False Sharing. False Sharing is a mult= ithreading hiccup; where multiple Logical Processors contend on different d= ata-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L= 3_HIT.SNOOP_HITM. 
Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
@@ -536,7 +572,7 @@
     {
         "BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
         "MetricExpr": "L1D_PEND_MISS.FB_FULL / tma_info_thread_clks",
-        "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+        "MetricGroup": "BvMS;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
         "MetricName": "tma_fb_full",
         "MetricThreshold": "tma_fb_full > 0.3",
         "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
@@ -550,7 +586,7 @@
         "MetricName": "tma_fetch_bandwidth",
         "MetricThreshold": "tma_fetch_bandwidth > 0.2",
         "MetricgroupNoGroup": "TopdownL2;Default",
-        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
         "ScaleUnit": "100%"
     },
     {
@@ -602,7 +638,7 @@
     },
     {
         "BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
-        "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@ + FP_ARITH_INST_RETIRED2.VECTOR) / (tma_retiring * tma_info_thread_slots)",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.VECTOR + FP_ARITH_INST_RETIRED2.VECTOR) / (tma_retiring * tma_info_thread_slots)",
         "MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
         "MetricName": "tma_fp_vector",
         "MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -640,7 +676,7 @@
         "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
         "DefaultMetricgroupName": "TopdownL1",
         "MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots",
-        "MetricGroup": "Default;PGO;TmaL1;TopdownL1;tma_L1_group",
+        "MetricGroup": "BvFB;BvIO;Default;PGO;TmaL1;TopdownL1;tma_L1_group",
         "MetricName": "tma_frontend_bound",
         "MetricThreshold": "tma_frontend_bound > 0.15",
         "MetricgroupNoGroup": "TopdownL1;Default",
@@ -650,7 +686,7 @@
     {
         "BriefDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions",
         "MetricExpr": "tma_light_operations * INST_RETIRED.MACRO_FUSED / (tma_retiring * tma_info_thread_slots)",
-        "MetricGroup": "Branches;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+        "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
         "MetricName": "tma_fused_instructions",
         "MetricThreshold": "tma_fused_instructions > 0.1 & tma_light_operations > 0.6",
         "PublicDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. CMP+JCC or DEC+JCC are common examples of legacy fusions. {([MTL] Note new MOV+OP and Load+OP fusions appear under Other_Light_Ops in MTL!)}",
@@ -670,7 +706,7 @@
     {
         "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
         "MetricExpr": "ICACHE_DATA.STALLS / tma_info_thread_clks",
-        "MetricGroup": "BigFootprint;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+        "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
         "MetricName": "tma_icache_misses",
         "MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
         "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
@@ -724,24 +760,6 @@
         "MetricGroup": "BrMispredicts",
         "MetricName": "tma_info_bad_spec_spec_clears_ratio"
     },
-    {
-        "BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
-        "MetricExpr": "(100 * (1 - max(0, topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - topdown\\-mem\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)) / (((cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ + cpu@RS.EMPTY\\,umask\\=0x1@) / CPU_CLK_UNHALTED.THREAD * (CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS) / CPU_CLK_UNHALTED.THREAD * CPU_CLK_UNHALTED.THREAD + (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * cpu@EXE_ACTIVITY.2_PORTS_UTIL\\,umask\\=0xc@)) / CPU_CLK_UNHALTED.THREAD if ARITH.DIV_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS else (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * cpu@EXE_ACTIVITY.2_PORTS_UTIL\\,umask\\=0xc@) / CPU_CLK_UNHALTED.THREAD) if max(0, topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - topdown\\-mem\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)) < (((cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ + cpu@RS.EMPTY\\,umask\\=0x1@) / CPU_CLK_UNHALTED.THREAD * (CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS) / CPU_CLK_UNHALTED.THREAD * CPU_CLK_UNHALTED.THREAD + (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * cpu@EXE_ACTIVITY.2_PORTS_UTIL\\,umask\\=0xc@)) / CPU_CLK_UNHALTED.THREAD if ARITH.DIV_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS else (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * cpu@EXE_ACTIVITY.2_PORTS_UTIL\\,umask\\=0xc@) / CPU_CLK_UNHALTED.THREAD) else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0) + 0 * slots",
-        "MetricGroup": "Cor;SMT",
-        "MetricName": "tma_info_botlnk_core_bound_likely"
-    },
-    {
-        "BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck.",
-        "MetricExpr": "100 * (100 * ((topdown\\-fetch\\-lat / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / slots) * (DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / (ICACHE_DATA.STALLS / CPU_CLK_UNHALTED.THREAD + ICACHE_TAG.STALLS / CPU_CLK_UNHALTED.THREAD + (INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD + INT_MISC.UNKNOWN_BRANCH_CYCLES / CPU_CLK_UNHALTED.THREAD) + min(3 * cpu@UOPS_RETIRED.MS\\,cmask\\=0x1\\,edge\\=0x1@ / (UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY) / CPU_CLK_UNHALTED.THREAD, 1) + DECODE.LCP / CPU_CLK_UNHALTED.THREAD + DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) + max(0, topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / slots - (topdown\\-fetch\\-lat / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / slots)) * ((IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / (CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else CPU_CLK_UNHALTED.THREAD) / 2) / ((IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / (CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else CPU_CLK_UNHALTED.THREAD) / 2 + (IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / (CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else CPU_CLK_UNHALTED.THREAD) / 2)))",
-        "MetricGroup": "DSBmiss;Fed",
-        "MetricName": "tma_info_botlnk_dsb_misses"
-    },
-    {
-        "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck.",
-        "MetricExpr": "100 * (100 * ((topdown\\-fetch\\-lat / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / slots) * (ICACHE_DATA.STALLS / CPU_CLK_UNHALTED.THREAD) / (ICACHE_DATA.STALLS / CPU_CLK_UNHALTED.THREAD + ICACHE_TAG.STALLS / CPU_CLK_UNHALTED.THREAD + (INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD + INT_MISC.UNKNOWN_BRANCH_CYCLES / CPU_CLK_UNHALTED.THREAD) + min(3 * cpu@UOPS_RETIRED.MS\\,cmask\\=0x1\\,edge\\=0x1@ / (UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY) / CPU_CLK_UNHALTED.THREAD, 1) + DECODE.LCP / CPU_CLK_UNHALTED.THREAD + DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD)))",
-        "MetricGroup": "Fed;FetchLat;IcMiss",
-        "MetricName": "tma_info_botlnk_ic_misses"
-    },
     {
         "BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
         "MetricExpr": "(100 * (1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)",
@@ -749,13 +767,21 @@
         "MetricName": "tma_info_botlnk_l0_core_bound_likely",
         "MetricThreshold": "tma_info_botlnk_l0_core_bound_likely > 0.5"
     },
+    {
+        "BriefDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck",
+        "MetricExpr": "100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_mite)))",
+        "MetricGroup": "DSB;FetchBW;tma_issueFB",
+        "MetricName": "tma_info_botlnk_l2_dsb_bandwidth",
+        "MetricThreshold": "tma_info_botlnk_l2_dsb_bandwidth > 10",
+        "PublicDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+    },
     {
         "BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck",
         "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_mite))",
         "MetricGroup": "DSBmiss;Fed;tma_issueFB",
         "MetricName": "tma_info_botlnk_l2_dsb_misses",
         "MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10",
-        "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+        "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
     },
     {
         "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck",
@@ -765,39 +791,33 @@
         "MetricThreshold": "tma_info_botlnk_l2_ic_misses > 5",
         "PublicDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck. Related metrics: "
     },
-    {
-        "BriefDescription": "Total pipeline cost of \"useful operations\" - the baseline operations not covered by Branching_Overhead nor Irregular_Overhead.",
-        "MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES + BR_INST_RETIRED.NEAR_CALL) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
-        "MetricGroup": "Ret",
-        "MetricName": "tma_info_bottleneck_base_non_br",
-        "MetricThreshold": "tma_info_bottleneck_base_non_br > 20"
-    },
     {
         "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
         "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
-        "MetricGroup": "BigFootprint;Fed;Frontend;IcMiss;MemoryTLB",
+        "MetricGroup": "BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB",
         "MetricName": "tma_info_bottleneck_big_code",
         "MetricThreshold": "tma_info_bottleneck_big_code > 20"
     },
     {
-        "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
-        "MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + BR_INST_RETIRED.NEAR_CALL) / tma_info_thread_slots)",
-        "MetricGroup": "Ret",
+        "BriefDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA",
+        "MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots)",
+        "MetricGroup": "BvBO;Ret",
         "MetricName": "tma_info_bottleneck_branching_overhead",
-        "MetricThreshold": "tma_info_bottleneck_branching_overhead > 5"
+        "MetricThreshold": "tma_info_bottleneck_branching_overhead > 5",
+        "PublicDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls; loops and alignments. (A lower bound)"
     },
     {
         "BriefDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks",
-        "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
-        "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW",
+        "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
+        "MetricGroup": "BvMB;Mem;MemoryBW;Offcore;tma_issueBW",
         "MetricName": "tma_info_bottleneck_cache_memory_bandwidth",
         "MetricThreshold": "tma_info_bottleneck_cache_memory_bandwidth > 20",
         "PublicDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full"
     },
     {
         "BriefDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks",
-        "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
-        "MetricGroup": "Mem;MemoryLat;Offcore;tma_issueLat",
+        "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l1_hit_latency / (tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
+        "MetricGroup": "BvML;Mem;MemoryLat;Offcore;tma_issueLat",
         "MetricName": "tma_info_bottleneck_cache_memory_latency",
         "MetricThreshold": "tma_info_bottleneck_cache_memory_latency > 20",
         "PublicDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency"
@@ -805,30 +825,30 @@
     {
         "BriefDescription": "Total pipeline cost when the execution is compute-bound - an estimation",
         "MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * tma_amx_busy / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))",
-        "MetricGroup": "Cor;tma_issueComp",
+        "MetricGroup": "BvCB;Cor;tma_issueComp",
         "MetricName": "tma_info_bottleneck_compute_bound_est",
         "MetricThreshold": "tma_info_bottleneck_compute_bound_est > 20",
         "PublicDescription": "Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. Related metrics: "
     },
     {
-        "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
+        "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)",
         "MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - (1 - INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.MS\\,cmask\\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))) - tma_info_bottleneck_big_code",
-        "MetricGroup": "Fed;FetchBW;Frontend",
+        "MetricGroup": "BvFB;Fed;FetchBW;Frontend",
         "MetricName": "tma_info_bottleneck_instruction_fetch_bw",
         "MetricThreshold": "tma_info_bottleneck_instruction_fetch_bw > 20"
     },
     {
         "BriefDescription": "Total pipeline cost of irregular execution (e.g",
         "MetricExpr": "100 * ((1 - INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.MS\\,cmask\\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + cpu@RS.EMPTY\\,umask\\=1@ / tma_info_thread_clks * tma_ports_utilized_0) / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
-        "MetricGroup": "Bad;Cor;Ret;tma_issueMS",
+        "MetricGroup": "Bad;BvIO;Cor;Ret;tma_issueMS",
         "MetricName": "tma_info_bottleneck_irregular_overhead",
         "MetricThreshold": "tma_info_bottleneck_irregular_overhead > 10",
         "PublicDescription": "Total pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches"
     },
     {
         "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
-        "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
-        "MetricGroup": "Mem;MemoryTLB;Offcore;tma_issueTLB",
+        "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+        "MetricGroup": "BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB",
         "MetricName": "tma_info_bottleneck_memory_data_tlbs",
         "MetricThreshold": "tma_info_bottleneck_memory_data_tlbs > 20",
         "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization"
@@ -836,7 +856,7 @@
     {
         "BriefDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)",
         "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))",
-        "MetricGroup": "Mem;Offcore;tma_issueTLB",
+        "MetricGroup": "BvMS;Mem;Offcore;tma_issueTLB",
         "MetricName": "tma_info_bottleneck_memory_synchronization",
         "MetricThreshold": "tma_info_bottleneck_memory_synchronization > 10",
         "PublicDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs"
@@ -844,18 +864,25 @@
     {
         "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
         "MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
-        "MetricGroup": "Bad;BadSpec;BrMispredicts;tma_issueBM",
+        "MetricGroup": "Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM",
        "MetricName": "tma_info_bottleneck_mispredictions",
         "MetricThreshold": "tma_info_bottleneck_mispredictions > 20",
         "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers"
     },
     {
-        "BriefDescription": "Total pipeline cost of remaining bottlenecks (apart from those listed in the Info.Bottlenecks metrics class)",
-        "MetricExpr": "100 - (tma_info_bottleneck_big_code + tma_info_bottleneck_instruction_fetch_bw + tma_info_bottleneck_mispredictions + tma_info_bottleneck_cache_memory_bandwidth + tma_info_bottleneck_cache_memory_latency + tma_info_bottleneck_memory_data_tlbs + tma_info_bottleneck_memory_synchronization + tma_info_bottleneck_compute_bound_est + tma_info_bottleneck_irregular_overhead + tma_info_bottleneck_branching_overhead + tma_info_bottleneck_base_non_br)",
-        "MetricGroup": "Cor;Offcore",
+        "BriefDescription": "Total pipeline cost of remaining bottlenecks in the back-end",
+        "MetricExpr": "100 - (tma_info_bottleneck_big_code + tma_info_bottleneck_instruction_fetch_bw + tma_info_bottleneck_mispredictions + tma_info_bottleneck_cache_memory_bandwidth + tma_info_bottleneck_cache_memory_latency + tma_info_bottleneck_memory_data_tlbs + tma_info_bottleneck_memory_synchronization + tma_info_bottleneck_compute_bound_est + tma_info_bottleneck_irregular_overhead + tma_info_bottleneck_branching_overhead + tma_info_bottleneck_useful_work)",
+        "MetricGroup": "BvOB;Cor;Offcore",
         "MetricName": "tma_info_bottleneck_other_bottlenecks",
         "MetricThreshold": "tma_info_bottleneck_other_bottlenecks > 20",
-        "PublicDescription": "Total pipeline cost of remaining bottlenecks (apart from those listed in the Info.Bottlenecks metrics class). Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls."
+        "PublicDescription": "Total pipeline cost of remaining bottlenecks in the back-end. Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls."
+    },
+    {
+        "BriefDescription": "Total pipeline cost of \"useful operations\" - the portion of Retiring category not covered by Branching_Overhead nor Irregular_Overhead.",
+        "MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+        "MetricGroup": "BvUW;Ret",
+        "MetricName": "tma_info_bottleneck_useful_work",
+        "MetricThreshold": "tma_info_bottleneck_useful_work > 20"
     },
     {
         "BriefDescription": "Fraction of branches that are CALL or RET",
@@ -907,7 +934,7 @@
     },
     {
         "BriefDescription": "Floating Point Operations Per Cycle",
-        "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR_HALF + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF) + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.8_FLOPS) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF) / tma_info_core_core_clks",
         "MetricGroup": "Flops;Ret",
         "MetricName": "tma_info_core_flopc"
     },
@@ -930,7 +957,7 @@
         "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
         "MetricName": "tma_info_frontend_dsb_coverage",
         "MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 6 > 0.35",
-        "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
+        "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
     },
     {
         "BriefDescription": "Average number of cycles of a switch from the DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details.",
@@ -997,7 +1024,7 @@
     },
     {
         "BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
-        "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR + (cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@ + FP_ARITH_INST_RETIRED2.VECTOR))",
+        "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR + (FP_ARITH_INST_RETIRED.VECTOR + FP_ARITH_INST_RETIRED2.VECTOR))",
         "MetricGroup": "Flops;InsType",
         "MetricName": "tma_info_inst_mix_iparith",
         "MetricThreshold": "tma_info_inst_mix_iparith < 10",
@@ -1067,7 +1094,7 @@
     },
     {
         "BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
-        "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)",
+        "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR_HALF + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF) + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.8_FLOPS) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF)",
         "MetricGroup": "Flops;InsType",
         "MetricName": "tma_info_inst_mix_ipflop",
         "MetricThreshold": "tma_info_inst_mix_ipflop < 10"
@@ -1100,24 +1127,12 @@
         "MetricThreshold": "tma_info_inst_mix_ipswpf < 100"
     },
     {
-        "BriefDescription": "Instruction per taken branch",
+        "BriefDescription": "Instructions per taken branch",
         "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
         "MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
         "MetricName": "tma_info_inst_mix_iptb",
         "MetricThreshold": "tma_info_inst_mix_iptb < 13",
-        "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
-    },
-    {
-        "BriefDescription": "\"Bus lock\" per kilo instruction",
-        "MetricExpr": "tma_info_memory_mix_bus_lock_pki",
-        "MetricGroup": "Mem",
-        "MetricName": "tma_info_memory_bus_lock_pki"
-    },
-    {
-        "BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
-        "MetricExpr": "tma_info_memory_tlb_code_stlb_mpki",
-        "MetricGroup": "Fed;MemoryTLB",
-        "MetricName": "tma_info_memory_code_stlb_mpki"
+        "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
     },
     {
         "BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
@@ -1155,12 +1170,6 @@
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
     },
-    {
-        "BriefDescription": "Average Parallel L2 cache miss data reads",
-        "MetricExpr": "tma_info_memory_latency_data_l2_mlp",
-        "MetricGroup": "Memory_BW;Offcore",
-        "MetricName": "tma_info_memory_data_l2_mlp"
-    },
     {
         "BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
         "MetricExpr": "1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
@@ -1168,17 +1177,11 @@
         "MetricName": "tma_info_memory_fb_hpki"
     },
     {
-        "BriefDescription": "",
+        "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
         "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "tma_info_memory_l1d_cache_fill_bw"
     },
-    {
-        "BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
-        "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / (duration_time * 1e3 / 1e3)",
-        "MetricGroup": "Mem;MemoryBW",
-        "MetricName": "tma_info_memory_l1d_cache_fill_bw_2t"
-    },
     {
         "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
@@ -1192,29 +1195,11 @@
         "MetricName": "tma_info_memory_l1mpki_load"
     },
     {
-        "BriefDescription": "",
+        "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
         "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "tma_info_memory_l2_cache_fill_bw"
     },
-    {
-        "BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
-        "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / (duration_time * 1e3 / 1e3)",
-        "MetricGroup": "Mem;MemoryBW",
-        "MetricName": "tma_info_memory_l2_cache_fill_bw_2t"
-    },
-    {
-        "BriefDescription": "Rate of non silent evictions from the L2 cache per Kilo instruction",
-        "MetricExpr": "1e3 * L2_LINES_OUT.NON_SILENT / INST_RETIRED.ANY",
-        "MetricGroup": "L2Evicts;Mem;Server",
-        "MetricName": "tma_info_memory_l2_evictions_nonsilent_pki"
-    },
-    {
-        "BriefDescription": "Rate of silent evictions from the L2 cache per Kilo instruction where the evicted lines are dropped (no writeback to L3 or memory)",
-        "MetricExpr": "1e3 * L2_LINES_OUT.SILENT / INST_RETIRED.ANY",
-        "MetricGroup": "L2Evicts;Mem;Server",
-        "MetricName": "tma_info_memory_l2_evictions_silent_pki"
-    },
     {
         "BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
         "MetricExpr": "1e3 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY",
@@ -1246,29 +1231,23 @@
         "MetricName": "tma_info_memory_l2mpki_load"
     },
     {
-        "BriefDescription": "",
-        "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration_time",
-        "MetricGroup": "Mem;MemoryBW;Offcore",
-        "MetricName": "tma_info_memory_l3_cache_access_bw"
+        "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+        "MetricExpr": "1e3 * L2_RQSTS.RFO_MISS / INST_RETIRED.ANY",
+        "MetricGroup": "CacheMisses;Offcore",
+        "MetricName": "tma_info_memory_l2mpki_rfo"
     },
     {
-        "BriefDescription": "Average per-core data access bandwidth to the L3 cache [GB / sec]",
-        "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / (duration_time * 1e3 / 1e3)",
+        "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
+        "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration_time",
         "MetricGroup": "Mem;MemoryBW;Offcore",
-        "MetricName": "tma_info_memory_l3_cache_access_bw_2t"
+        "MetricName": "tma_info_memory_l3_cache_access_bw"
     },
     {
-        "BriefDescription": "",
+        "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
         "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "tma_info_memory_l3_cache_fill_bw"
     },
-    {
-        "BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
-        "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / (duration_time * 1e3 / 1e3)",
-        "MetricGroup": "Mem;MemoryBW",
-        "MetricName": "tma_info_memory_l3_cache_fill_bw_2t"
-    },
     {
         "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
@@ -1283,7 +1262,7 @@
     },
     {
         "BriefDescription": "Average Latency for L2 cache miss demand Loads",
-        "MetricExpr": "tma_info_memory_load_l2_miss_latency",
+        "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
         "MetricGroup": "Memory_Lat;Offcore",
         "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
     },
@@ -1293,29 +1272,11 @@
         "MetricGroup": "Memory_BW;Offcore",
         "MetricName": "tma_info_memory_latency_load_l2_mlp"
     },
-    {
-        "BriefDescription": "Average Latency for L3 cache miss demand Loads",
-        "MetricExpr": "tma_info_memory_load_l3_miss_latency",
-        "MetricGroup": "Memory_Lat;Offcore",
-        "MetricName": "tma_info_memory_latency_load_l3_miss_latency"
-    },
-    {
-        "BriefDescription": "Average Latency for L2 cache miss demand Loads",
-        "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
-        "MetricGroup": "Memory_Lat;Offcore",
-        "MetricName": "tma_info_memory_load_l2_miss_latency"
-    },
-    {
-        "BriefDescription": "Average Parallel L2 cache miss demand Loads",
-        "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,cmask\\=0x1@",
-        "MetricGroup": "Memory_BW;Offcore",
-        "MetricName":
"tma_info_memory_load_l2_mlp" - }, { "BriefDescription": "Average Latency for L3 cache miss demand Load= s", "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD= / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD", "MetricGroup": "Memory_Lat;Offcore", - "MetricName": "tma_info_memory_load_l3_miss_latency" + "MetricName": "tma_info_memory_latency_load_l3_miss_latency" }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", @@ -1323,12 +1284,6 @@ "MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricName": "tma_info_memory_load_miss_real_latency" }, - { - "BriefDescription": "STLB (2nd level TLB) data load speculative mi= sses per kilo instruction (misses of any page-size that complete the page w= alk)", - "MetricExpr": "tma_info_memory_tlb_load_stlb_mpki", - "MetricGroup": "Mem;MemoryTLB", - "MetricName": "tma_info_memory_load_stlb_mpki" - }, { "BriefDescription": "\"Bus lock\" per kilo instruction", "MetricExpr": "1e3 * SQ_MISC.BUS_LOCK / INST_RETIRED.ANY", @@ -1355,7 +1310,7 @@ }, { "BriefDescription": "Un-cacheable retired load per kilo instructio= n", - "MetricExpr": "tma_info_memory_uc_load_pki", + "MetricExpr": "1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY", "MetricGroup": "Mem", "MetricName": "tma_info_memory_mix_uc_load_pki" }, @@ -1366,51 +1321,6 @@ "MetricName": "tma_info_memory_mlp", "PublicDescription": "Memory-Level-Parallelism (average number of = L1 miss demand load when there is at least one such miss. 
Per-Logical Proce= ssor)" }, - { - "BriefDescription": "Off-core accesses per kilo instruction for mo= dified write requests", - "MetricExpr": "1e3 * OCR.MODIFIED_WRITE.ANY_RESPONSE / INST_RETIRE= D.ANY", - "MetricGroup": "Offcore", - "MetricName": "tma_info_memory_offcore_mwrite_any_pki" - }, - { - "BriefDescription": "Off-core accesses per kilo instruction for re= ads-to-core requests (speculative; including in-core HW prefetches)", - "MetricExpr": "1e3 * OCR.READS_TO_CORE.ANY_RESPONSE / INST_RETIRED= .ANY", - "MetricGroup": "CacheHits;Offcore", - "MetricName": "tma_info_memory_offcore_read_any_pki" - }, - { - "BriefDescription": "L3 cache misses per kilo instruction for read= s-to-core requests (speculative; including in-core HW prefetches)", - "MetricExpr": "1e3 * OCR.READS_TO_CORE.L3_MISS / INST_RETIRED.ANY"= , - "MetricGroup": "Offcore", - "MetricName": "tma_info_memory_offcore_read_l3m_pki" - }, - { - "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", - "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_P= ENDING + DTLB_STORE_MISSES.WALK_PENDING) / (4 * (CPU_CLK_UNHALTED.DISTRIBUT= ED if #SMT_on else CPU_CLK_UNHALTED.THREAD))", - "MetricGroup": "Mem;MemoryTLB", - "MetricName": "tma_info_memory_page_walks_utilization" - }, - { - "BriefDescription": "Average DRAM BW for Reads-to-Core (R2C) cover= ing for memory attached to local- and remote-socket", - "MetricExpr": "64 * OCR.READS_TO_CORE.DRAM / 1e9 / (duration_time = * 1e3 / 1e3)", - "MetricGroup": "HPC;Mem;MemoryBW;SoC", - "MetricName": "tma_info_memory_r2c_dram_bw", - "PublicDescription": "Average DRAM BW for Reads-to-Core (R2C) cove= ring for memory attached to local- and remote-socket. See R2C_Offcore_BW." 
- }, - { - "BriefDescription": "Average L3-cache miss BW for Reads-to-Core (R= 2C)", - "MetricExpr": "64 * OCR.READS_TO_CORE.L3_MISS / 1e9 / (duration_ti= me * 1e3 / 1e3)", - "MetricGroup": "HPC;Mem;MemoryBW;SoC", - "MetricName": "tma_info_memory_r2c_l3m_bw", - "PublicDescription": "Average L3-cache miss BW for Reads-to-Core (= R2C). This covering going to DRAM or other memory off-chip memory tears. Se= e R2C_Offcore_BW." - }, - { - "BriefDescription": "Average Off-core access BW for Reads-to-Core = (R2C)", - "MetricExpr": "64 * OCR.READS_TO_CORE.ANY_RESPONSE / 1e9 / (durati= on_time * 1e3 / 1e3)", - "MetricGroup": "HPC;Mem;MemoryBW;SoC", - "MetricName": "tma_info_memory_r2c_offcore_bw", - "PublicDescription": "Average Off-core access BW for Reads-to-Core= (R2C). R2C account for demand or prefetch load/RFO/code access that fill d= ata into the Core caches." - }, { "BriefDescription": "Average DRAM BW for Reads-to-Core (R2C) cover= ing for memory attached to local- and remote-socket", "MetricExpr": "64 * OCR.READS_TO_CORE.DRAM / 1e9 / duration_time", @@ -1432,12 +1342,6 @@ "MetricName": "tma_info_memory_soc_r2c_offcore_bw", "PublicDescription": "Average Off-core access BW for Reads-to-Core= (R2C). R2C account for demand or prefetch load/RFO/code access that fill d= ata into the Core caches." 
}, - { - "BriefDescription": "STLB (2nd level TLB) data store speculative m= isses per kilo instruction (misses of any page-size that complete the page = walk)", - "MetricExpr": "tma_info_memory_tlb_store_stlb_mpki", - "MetricGroup": "Mem;MemoryTLB", - "MetricName": "tma_info_memory_store_stlb_mpki" - }, { "BriefDescription": "STLB (2nd level TLB) code speculative misses = per kilo instruction (misses of any page-size that complete the page walk)"= , "MetricExpr": "1e3 * ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY= ", @@ -1464,17 +1368,23 @@ "MetricName": "tma_info_memory_tlb_store_stlb_mpki" }, { - "BriefDescription": "Un-cacheable retired load per kilo instructio= n", - "MetricExpr": "1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY", - "MetricGroup": "Mem", - "MetricName": "tma_info_memory_uc_load_pki" - }, - { - "BriefDescription": "", + "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per core", "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_G= E_1 / 2 if #SMT_on else cpu@UOPS_EXECUTED.THREAD\\,cmask\\=3D1@)", "MetricGroup": "Cor;Pipeline;PortsUtil;SMT", "MetricName": "tma_info_pipeline_execute" }, + { + "BriefDescription": "Average number of uops fetched from DSB per c= ycle", + "MetricExpr": "IDQ.DSB_UOPS / IDQ.DSB_CYCLES_ANY", + "MetricGroup": "Fed;FetchBW", + "MetricName": "tma_info_pipeline_fetch_dsb" + }, + { + "BriefDescription": "Average number of uops fetched from MITE per = cycle", + "MetricExpr": "IDQ.MITE_UOPS / IDQ.MITE_CYCLES_ANY", + "MetricGroup": "Fed;FetchBW", + "MetricName": "tma_info_pipeline_fetch_mite" + }, { "BriefDescription": "Instructions per a microcode Assist invocatio= n", "MetricExpr": "INST_RETIRED.ANY / ASSISTS.ANY", @@ -1511,13 +1421,13 @@ }, { "BriefDescription": "Average CPU Utilization (percentage)", - "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC", + "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online", "MetricGroup": "HPC;Summary", 
"MetricName": "tma_info_system_cpu_utilization" }, { "BriefDescription": "Average number of utilized CPUs", - "MetricExpr": "#num_cpus_online * tma_info_system_cpu_utilization"= , + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC", "MetricGroup": "Summary", "MetricName": "tma_info_system_cpus_utilized" }, @@ -1530,7 +1440,7 @@ }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_R= ETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARIT= H_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1= e9 / duration_time", + "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIR= ED2.SCALAR_HALF + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_= INST_RETIRED2.COMPLEX_SCALAR_HALF) + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 = * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.8_FLOPS)= + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST_RETIRED.51= 2B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF) / 1e9 / d= uration_time", "MetricGroup": "Cor;Flops;HPC", "MetricName": "tma_info_system_gflops", "PublicDescription": "Giga Floating Point Operations Per Second. 
A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width" @@ -1685,7 +1595,7 @@ "MetricThreshold": "tma_info_thread_uoppi > 1.05" }, { - "BriefDescription": "Instruction per taken branch", + "BriefDescription": "Uops per taken branch", "MetricExpr": "tma_retiring * tma_info_thread_slots / BR_INST_RETI= RED.NEAR_TAKEN", "MetricGroup": "Branches;Fed;FetchBW", "MetricName": "tma_info_thread_uptb", @@ -1721,7 +1631,7 @@ { "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", "MetricExpr": "ICACHE_TAG.STALLS / tma_info_thread_clks", - "MetricGroup": "BigFootprint;FetchLat;MemoryTLB;TopdownL3;tma_L3_g= roup;tma_fetch_latency_group", + "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma= _L3_group;tma_fetch_latency_group", "MetricName": "tma_itlb_misses", "MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > = 0.1 & tma_frontend_bound > 0.15)", "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTE= ND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS", @@ -1736,10 +1646,19 @@ "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB= _HIT_PS. 
Related metrics: tma_clears_resteers, tma_machine_clears, tma_micr= ocode_sequencer, tma_ms_switches, tma_ports_utilized_1", "ScaleUnit": "100%" }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les with demand load accesses that hit the L1 cache", + "MetricExpr": "min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETI= RED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLE= S_MEM_ANY - MEMORY_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks", + "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound= _group", + "MetricName": "tma_l1_hit_latency", + "MetricThreshold": "tma_l1_hit_latency > 0.1 & (tma_l1_bound > 0.1= & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", + "PublicDescription": "This metric roughly estimates fraction of cy= cles with demand load accesses that hit the L1 cache. The short latency of = the L1 data cache may be exposed in pointer-chasing memory access patterns = as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT", + "ScaleUnit": "100%" + }, { "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L1D_MISS - MEMORY_ACTIVITY.= STALLS_L2_MISS) / tma_info_thread_clks", - "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_gr= oup;tma_memory_bound_group", + "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_= L3_group;tma_memory_bound_group", "MetricName": "tma_l2_bound", "MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 = & tma_backend_bound > 0.2)", "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. 
Sample wi= th: MEM_LOAD_RETIRED.L2_HIT_PS", @@ -1756,8 +1675,8 @@ }, { "BriefDescription": "This metric estimates fraction of cycles with= demand load accesses that hit the L3 cache under unloaded scenarios (possi= bly L3 latency limited)", - "MetricExpr": "33 * tma_info_system_core_frequency * (MEM_LOAD_RET= IRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2))= / tma_info_thread_clks", - "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_= l3_bound_group", + "MetricExpr": "32.6 * tma_info_system_core_frequency * (MEM_LOAD_R= ETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2= )) / tma_info_thread_clks", + "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat= ;tma_l3_bound_group", "MetricName": "tma_l3_hit_latency", "MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.0= 5 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3= hits) will improve the latency; reduce contention with sibling physical co= res and increase performance. Note the value of this node may overlap with= its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tm= a_info_bottleneck_cache_memory_latency, tma_mem_latency", @@ -1769,7 +1688,7 @@ "MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_= group;tma_issueFB", "MetricName": "tma_lcp", "MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tm= a_frontend_bound > 0.15)", - "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs. 
Related metrics: tma_dsb_switches, tma_fetch_= bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, t= ma_info_inst_mix_iptb", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_= bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses,= tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb", "ScaleUnit": "100%" }, { @@ -1810,11 +1729,11 @@ }, { "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from local memory", - "MetricExpr": "71 * tma_info_system_core_frequency * MEM_LOAD_L3_M= ISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1= _MISS / 2) / tma_info_thread_clks", + "MetricExpr": "72 * tma_info_system_core_frequency * MEM_LOAD_L3_M= ISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1= _MISS / 2) / tma_info_thread_clks", "MetricGroup": "Server;TopdownL5;tma_L5_group;tma_mem_latency_grou= p", "MetricName": "tma_local_mem", "MetricThreshold": "tma_local_mem > 0.1 & (tma_mem_latency > 0.1 &= (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)= ))", - "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS= _RETIRED.LOCAL_DRAM_PS", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. 
Sample with: MEM_LOAD_L3_MISS= _RETIRED.LOCAL_DRAM", "ScaleUnit": "100%" }, { @@ -1823,14 +1742,14 @@ "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1= _bound_group", "MetricName": "tma_lock_latency", "MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 &= (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", - "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOA= DS_PS. Related metrics: tma_store_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOA= DS. Related metrics: tma_store_latency", "ScaleUnit": "100%" }, { "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", "DefaultMetricgroupName": "TopdownL2", "MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts= )", - "MetricGroup": "BadSpec;Default;MachineClears;TmaL2;TopdownL2;tma_= L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn", + "MetricGroup": "BadSpec;BvMS;Default;MachineClears;TmaL2;TopdownL2= ;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn", "MetricName": "tma_machine_clears", "MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation= > 0.15", "MetricgroupNoGroup": "TopdownL2;Default", @@ -1848,7 +1767,7 @@ { "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory - DRAM ([SPR-HBM] and/or HBM)", "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= 
UTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@) / tma_info_thread_clks", - "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_b= ound_group;tma_issueBW", + "MetricGroup": "BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_d= ram_bound_group;tma_issueBW", "MetricName": "tma_mem_bandwidth", "MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.= 1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuris= tic assumes that a similar off-core traffic is generated by all IA cores. T= his metric does not aggregate non-data-read requests by this logical proces= sor; requests from other IA Logical Processors/Physical Cores/sockets; or o= ther non-IA devices like GPU; hence the maximum external memory bandwidth l= imits may or may not be approached when this metric is flagged (see Uncore = counters for that). 
Related metrics: tma_fb_full, tma_info_bottleneck_cache= _memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full", @@ -1857,7 +1776,7 @@ { "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory - DRA= M ([SPR-HBM] and/or HBM)", "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth", - "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_= bound_group;tma_issueLat", + "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_= dram_bound_group;tma_issueLat", "MetricName": "tma_mem_latency", "MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 = & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory - DR= AM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from ot= her Logical Processors/Physical Cores/sockets (see Uncore counters for that= ). 
Related metrics: tma_info_bottleneck_cache_memory_latency, tma_l3_hit_la= tency", @@ -1903,7 +1822,7 @@ { "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Branch Misprediction= at execution stage", "MetricExpr": "tma_branch_mispredicts / tma_bad_speculation * INT_= MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks", - "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_L4_group;tma_b= ranch_resteers_group;tma_issueBM", + "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;= tma_branch_resteers_group;tma_issueBM", "MetricName": "tma_mispredicts_resteers", "MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_= resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))", "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Branch Mispredictio= n at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related m= etrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost= , tma_info_bottleneck_mispredictions", @@ -1939,7 +1858,7 @@ { "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions that were not fused", "MetricExpr": "tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHE= S - INST_RETIRED.MACRO_FUSED) / (tma_retiring * tma_info_thread_slots)", - "MetricGroup": "Branches;Pipeline;TopdownL3;tma_L3_group;tma_light= _operations_group", + "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_= light_operations_group", "MetricName": "tma_non_fused_branches", "MetricThreshold": "tma_non_fused_branches > 0.1 & tma_light_opera= tions > 0.6", "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring branch instructions that were not fused. Non-condit= ional branches like direct JMP or CALL would count here. 
Can be used to exa= mine fusible conditional jumps that were not fused.", @@ -1948,7 +1867,7 @@ { "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring NOP (no op) instructions", "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_reti= ring * tma_info_thread_slots)", - "MetricGroup": "Pipeline;TopdownL4;tma_L4_group;tma_other_light_op= s_group", + "MetricGroup": "BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_lig= ht_ops_group", "MetricName": "tma_nop_instructions", "MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_= ops > 0.3 & tma_light_operations > 0.6)", "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring NOP (no op) instructions. Compilers often use NOPs = for certain address alignments - e.g. start address of a function or loop b= ody. Sample with: INST_RETIRED.NOP", @@ -1966,7 +1885,7 @@ { "BriefDescription": "This metric estimates fraction of slots the C= PU was stalled due to other cases of misprediction (non-retired x86 branche= s or other types).", "MetricExpr": "max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.A= LL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001)", - "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mi= spredicts_group", + "MetricGroup": "BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_bran= ch_mispredicts_group", "MetricName": "tma_other_mispredicts", "MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mis= predicts > 0.1 & tma_bad_speculation > 0.15)", "ScaleUnit": "100%" @@ -1974,7 +1893,7 @@ { "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Nukes (Machine Clears) not related to memory ordering= .", "MetricExpr": "max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY= _ORDERING / MACHINE_CLEARS.COUNT), 0.0001)", - "MetricGroup": "Machine_Clears;TopdownL3;tma_L3_group;tma_machine_= clears_group", + "MetricGroup": 
"BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_mac= hine_clears_group", "MetricName": "tma_other_nukes", "MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears >= 0.1 & tma_bad_speculation > 0.15)", "ScaleUnit": "100%" @@ -2035,7 +1954,7 @@ }, { "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", - "MetricExpr": "(cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=3D0x80@ + = cpu@RS.EMPTY\\,umask\\=3D1@) / tma_info_thread_clks * (CYCLE_ACTIVITY.STALL= S_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS) / tma_info_thread_clks", + "MetricExpr": "(EXE_ACTIVITY.EXE_BOUND_0_PORTS + max(cpu@RS.EMPTY\= \,umask\\=3D1@ - RESOURCE_STALLS.SCOREBOARD, 0)) / tma_info_thread_clks * (= CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS) / tma_info_threa= d_clks", "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utiliza= tion_group", "MetricName": "tma_ports_utilized_0", "MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utiliz= ation > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))", @@ -2065,7 +1984,7 @@ "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise)", "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / tma_info_thread_clks", - "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utiliza= tion_group", + "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_ut= ilization_group", "MetricName": "tma_ports_utilized_3m", "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utili= zation > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))", "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 3 or more uops per cycle on all execution ports (Logica= l Processor cycles since ICL, 
Physical Core cycles otherwise). Sample with:= UOPS_EXECUTED.CYCLES_GE_3", @@ -2073,7 +1992,7 @@ }, { "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote cache in other socket= s including synchronizations issues", - "MetricExpr": "(135.5 * tma_info_system_core_frequency * MEM_LOAD_= L3_MISS_RETIRED.REMOTE_HITM + 135.5 * tma_info_system_core_frequency * MEM_= LOAD_L3_MISS_RETIRED.REMOTE_FWD) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_= RETIRED.L1_MISS / 2) / tma_info_thread_clks", + "MetricExpr": "(133 * tma_info_system_core_frequency * MEM_LOAD_L3= _MISS_RETIRED.REMOTE_HITM + 133 * tma_info_system_core_frequency * MEM_LOAD= _L3_MISS_RETIRED.REMOTE_FWD) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETI= RED.L1_MISS / 2) / tma_info_thread_clks", "MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_L5_group;tma_is= sueSyncxn;tma_mem_latency_group", "MetricName": "tma_remote_cache", "MetricThreshold": "tma_remote_cache > 0.05 & (tma_mem_latency > 0= .1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > = 0.2)))", @@ -2082,7 +2001,7 @@ }, { "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote memory", - "MetricExpr": "149 * tma_info_system_core_frequency * MEM_LOAD_L3_= MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.= L1_MISS / 2) / tma_info_thread_clks", + "MetricExpr": "153 * tma_info_system_core_frequency * MEM_LOAD_L3_= MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.= L1_MISS / 2) / tma_info_thread_clks", "MetricGroup": "Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latenc= y_group", "MetricName": "tma_remote_mem", "MetricThreshold": "tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 = & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2= )))", @@ -2093,7 +2012,7 @@ "BriefDescription": "This category represents fraction of slots ut= ilized by 
useful work i.e. issued uops that eventually get retired", "DefaultMetricgroupName": "TopdownL1", "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdow= n\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0 * tma_info_= thread_slots", - "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group", + "MetricGroup": "BvUW;Default;TmaL1;TopdownL1;tma_L1_group", "MetricName": "tma_retiring", "MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.= 1", "MetricgroupNoGroup": "TopdownL1;Default", @@ -2103,7 +2022,7 @@ { "BriefDescription": "This metric represents fraction of cycles the= CPU issue-pipeline was stalled due to serializing operations", "MetricExpr": "RESOURCE_STALLS.SCOREBOARD / tma_info_thread_clks += tma_c02_wait", - "MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_gr= oup;tma_issueSO", + "MetricGroup": "BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bou= nd_group;tma_issueSO", "MetricName": "tma_serializing_operation", "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bo= und > 0.1 & tma_backend_bound > 0.2)", "PublicDescription": "This metric represents fraction of cycles th= e CPU issue-pipeline was stalled due to serializing operations. Instruction= s like CPUID; WRMSR or LFENCE serialize the out-of-order execution which ma= y limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. 
Related metrics: tma_ms_switches", @@ -2149,7 +2068,7 @@ { "BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)", "MetricExpr": "(XQ.FULL_CYCLES + L1D_PEND_MISS.L2_STALLS) / tma_info_thread_clks", - "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group", + "MetricGroup": "BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group", "MetricName": "tma_sq_full", "MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth", @@ -2176,7 +2095,7 @@ { "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses", "MetricExpr": "(MEM_STORE_RETIRED.L2_HIT * 10 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks", - "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group", + "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group", "MetricName": "tma_store_latency", "MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses.
Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency", @@ -2219,7 +2138,7 @@ { "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears", "MetricExpr": "INT_MISC.UNKNOWN_BRANCH_CYCLES / tma_info_thread_clks", - "MetricGroup": "BigFootprint;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group", + "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group", "MetricName": "tma_unknown_branches", "MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))", "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-cache.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-cache.json index 25a2b9695135..a38db3e253f2 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-cache.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-cache.json @@ -1,8 +1,10 @@ [ { "BriefDescription": "CHA to iMC Bypass : Intermediate bypass Taken", + "Counter": "0,1,2,3", "EventCode": "0x57", "EventName": "UNC_CHA_BYPASS_CHA_IMC.INTERMEDIATE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA to iMC Bypass : Intermediate bypass Taken : Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC.
This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not. : Filter for transactions that succeeded in taking the intermediate bypass.", "UMask": "0x2", @@ -10,8 +12,10 @@ }, { "BriefDescription": "CHA to iMC Bypass : Not Taken", + "Counter": "0,1,2,3", "EventCode": "0x57", "EventName": "UNC_CHA_BYPASS_CHA_IMC.NOT_TAKEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA to iMC Bypass : Not Taken : Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not. : Filter for transactions that could not take the bypass, and issues a read to memory. Note that transactions that did not take the bypass but did not issue read to memory will not be counted.", "UMask": "0x4", @@ -19,8 +23,10 @@ }, { "BriefDescription": "CHA to iMC Bypass : Taken", + "Counter": "0,1,2,3", "EventCode": "0x57", "EventName": "UNC_CHA_BYPASS_CHA_IMC.TAKEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA to iMC Bypass : Taken : Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not.
: Filter for transactions that succeeded in taking the full bypass.", "UMask": "0x1", @@ -28,6 +34,7 @@ }, { "BriefDescription": "CHA Clockticks", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_CHA_CLOCKTICKS", "PerPkg": "1", @@ -36,6 +43,7 @@ }, { "BriefDescription": "CMS Clockticks", + "Counter": "0,1,2,3", "EventCode": "0xc0", "EventName": "UNC_CHA_CMS_CLOCKTICKS", "PerPkg": "1", @@ -43,8 +51,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued : Any Cycle with Multiple Snoops", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.ANY_GTONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Core Cross Snoops Issued : Any Cycle with Multiple Snoops : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).", "UMask": "0xf2", @@ -52,8 +62,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued : Any Single Snoop", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.ANY_ONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Core Cross Snoops Issued : Any Single Snoop : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.
However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).", "UMask": "0xf1", @@ -61,8 +73,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued : Multiple Core Requests", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.CORE_GTONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Core Cross Snoops Issued : Multiple Core Requests : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).", "UMask": "0x42", @@ -70,8 +84,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued : Single Core Requests", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.CORE_ONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Core Cross Snoops Issued : Single Core Requests : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines.
This event can be filtered based on who triggered the initial snoop(s).", "UMask": "0x41", @@ -79,8 +95,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued : Multiple Eviction", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.EVICT_GTONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Core Cross Snoops Issued : Multiple Eviction : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).", "UMask": "0x82", @@ -88,8 +106,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued : Single Eviction", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.EVICT_ONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Core Cross Snoops Issued : Single Eviction : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines.
This event can be filtered based on who triggered the initial snoop(s).", "UMask": "0x81", @@ -97,8 +117,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued : Multiple External Snoops", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.EXT_GTONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Core Cross Snoops Issued : Multiple External Snoops : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).", "UMask": "0x22", @@ -106,8 +128,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued : Single External Snoops", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.EXT_ONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Core Cross Snoops Issued : Single External Snoops : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines.
This event can be filtered based on who triggered the initial snoop(s).", "UMask": "0x21", @@ -115,8 +139,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued : Multiple Snoop Targets from Remote", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.REMOTE_GTONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Core Cross Snoops Issued : Multiple Snoop Targets from Remote : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).", "UMask": "0x12", @@ -124,8 +150,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued : Single Snoop Target from Remote", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.REMOTE_ONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Core Cross Snoops Issued : Single Snoop Target from Remote : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines.
This event can be filtered based on who triggered the initial snoop(s).", "UMask": "0x11", @@ -133,96 +161,120 @@ }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6e", "EventName": "UNC_CHA_DIRECT_GO.HA_SUPPRESS_DRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6e", "EventName": "UNC_CHA_DIRECT_GO.HA_SUPPRESS_NO_D2C", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6e", "EventName": "UNC_CHA_DIRECT_GO.HA_TOR_DEALLOC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6d", "EventName": "UNC_CHA_DIRECT_GO_OPC.EXTCMP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6d", "EventName": "UNC_CHA_DIRECT_GO_OPC.FAST_GO", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6d", "EventName": "UNC_CHA_DIRECT_GO_OPC.FAST_GO_PULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6d", "EventName": "UNC_CHA_DIRECT_GO_OPC.GO", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6d", "EventName": "UNC_CHA_DIRECT_GO_OPC.GO_PULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6d", "EventName": "UNC_CHA_DIRECT_GO_OPC.IDLE_DUE_SUPPRESS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6d",
"EventName": "UNC_CHA_DIRECT_GO_OPC.NOP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "Direct GO", + "Counter": "0,1,2,3", "EventCode": "0x6d", "EventName": "UNC_CHA_DIRECT_GO_OPC.PULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Multi-socket cacheline Directory state lookups; Snoop Not Needed", + "Counter": "0,1,2,3", "EventCode": "0x53", "EventName": "UNC_CHA_DIR_LOOKUP.NO_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts transactions that looked into the multi-socket cacheline Directory state, and therefore did not send a snoop because the Directory indicated it was not needed.", "UMask": "0x2", @@ -230,8 +282,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory state lookups; Snoop Needed", + "Counter": "0,1,2,3", "EventCode": "0x53", "EventName": "UNC_CHA_DIR_LOOKUP.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts transactions that looked into the multi-socket cacheline Directory state, and sent one or more snoops, because the Directory indicated it was needed.", "UMask": "0x1", @@ -239,6 +293,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory state updates; Directory Updated memory write from the HA pipe", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_CHA_DIR_UPDATE.HA", "PerPkg": "1", @@ -248,6 +303,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory state updates; Directory Updated memory write from TOR pipe", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_CHA_DIR_UPDATE.TOR", "PerPkg": "1", @@ -257,8 +313,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements : Down", + "Counter": "0,1,2,3", "EventCode": "0xba", "EventName": "UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_DN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles
IV was blocked in the TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x4", @@ -266,8 +324,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements : Up", + "Counter": "0,1,2,3", "EventCode": "0xba", "EventName": "UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x1", @@ -275,8 +335,10 @@ }, { "BriefDescription": "Read request from a remote socket which hit in the HitMe Cache to a line In the E state", + "Counter": "0,1,2,3", "EventCode": "0x5f", "EventName": "UNC_CHA_HITME_HIT.EX_RDS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts read requests from a remote socket which hit in the HitME cache (used to cache the multi-socket Directory state) to a line in the E(Exclusive) state. This includes the following read opcodes (RdCode, RdData, RdDataMigratory, RdCur, RdInv*, Inv*).", "UMask": "0x1", @@ -284,80 +346,100 @@ }, { "BriefDescription": "Counts Number of Hits in HitMe Cache : Shared hit and op is RdInvOwn, RdInv, Inv*", + "Counter": "0,1,2,3", "EventCode": "0x5f", "EventName": "UNC_CHA_HITME_HIT.SHARED_OWNREQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Counts Number of Hits in HitMe Cache : op is WbMtoE", + "Counter": "0,1,2,3", "EventCode": "0x5f", "EventName": "UNC_CHA_HITME_HIT.WBMTOE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Counts Number of Hits in HitMe Cache : op is WbMtoI, WbPushMtoI, WbFlush, or WbMtoS", + "Counter": "0,1,2,3", "EventCode": "0x5f", "EventName": "UNC_CHA_HITME_HIT.WBMTOI_OR_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Counts Number of times HitMe Cache is accessed : op is RdCode, RdData, RdDataMigratory,
RdCur, RdInvOwn, RdInv, Inv*", + "Counter": "0,1,2,3", "EventCode": "0x5e", "EventName": "UNC_CHA_HITME_LOOKUP.READ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Counts Number of times HitMe Cache is accessed : op is WbMtoE, WbMtoI, WbPushMtoI, WbFlush, or WbMtoS", + "Counter": "0,1,2,3", "EventCode": "0x5e", "EventName": "UNC_CHA_HITME_LOOKUP.WRITE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Counts Number of Misses in HitMe Cache : No SF/LLC HitS/F and op is RdInvOwn", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_CHA_HITME_MISS.NOTSHARED_RDINVOWN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "Counts Number of Misses in HitMe Cache : op is RdCode, RdData, RdDataMigratory, RdCur, RdInv, Inv*", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_CHA_HITME_MISS.READ_OR_INV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "Counts Number of Misses in HitMe Cache : SF/LLC HitS/F and op is RdInvOwn", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_CHA_HITME_MISS.SHARED_RDINVOWN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : Deallocate HitME$ on Reads without RspFwdI*", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_CHA_HITME_UPDATE.DEALLOCATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a local request", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_CHA_HITME_UPDATE.DEALLOCATE_RSPFWDI_LOC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a local request :
Received RspFwdI* for a local request, but converted HitME$ to SF entry", "UMask": "0x1", @@ -365,16 +447,20 @@ }, { "BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : Update HitMe Cache on RdInvOwn even if not RspFwdI*", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_CHA_HITME_UPDATE.RDINVOWN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a remote request", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_CHA_HITME_UPDATE.RSPFWDI_REM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a remote request : Updated HitME$ on RspFwdI* or local HitM/E received for a remote request", "UMask": "0x2", @@ -382,14 +468,17 @@ }, { "BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : Update HitMe Cache to SHARed", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_CHA_HITME_UPDATE.SHARED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Normal priority reads issued to the memory controller from the CHA", + "Counter": "0,1,2,3", "EventCode": "0x59", "EventName": "UNC_CHA_IMC_READS_COUNT.NORMAL", "PerPkg": "1", @@ -399,8 +488,10 @@ }, { "BriefDescription": "HA to iMC Reads Issued : ISOCH", + "Counter": "0,1,2,3", "EventCode": "0x59", "EventName": "UNC_CHA_IMC_READS_COUNT.PRIORITY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "HA to iMC Reads Issued : ISOCH : Count of the number of reads issued to any of the memory controller channels.
This can be filtered by the priority of the reads.", "UMask": "0x2", @@ -408,6 +499,7 @@ }, { "BriefDescription": "CHA to iMC Full Line Writes Issued; Full Line Non-ISOCH", + "Counter": "0,1,2,3", "EventCode": "0x5b", "EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL", "PerPkg": "1", @@ -417,8 +509,10 @@ }, { "BriefDescription": "CHA to iMC Full Line Writes Issued : ISOCH Full Line", + "Counter": "0,1,2,3", "EventCode": "0x5b", "EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL_PRIORITY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA to iMC Full Line Writes Issued : ISOCH Full Line : Counts the total number of full line writes issued from the HA into the memory controller.", "UMask": "0x4", @@ -426,8 +520,10 @@ }, { "BriefDescription": "CHA to iMC Full Line Writes Issued : Partial Non-ISOCH", + "Counter": "0,1,2,3", "EventCode": "0x5b", "EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA to iMC Full Line Writes Issued : Partial Non-ISOCH : Counts the total number of full line writes issued from the HA into the memory controller.", "UMask": "0x2", @@ -435,8 +531,10 @@ }, { "BriefDescription": "CHA to iMC Full Line Writes Issued : ISOCH Partial", + "Counter": "0,1,2,3", "EventCode": "0x5b", "EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL_PRIORITY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA to iMC Full Line Writes Issued : ISOCH Partial : Counts the total number of full line writes issued from the HA into the memory controller.", "UMask": "0x8", @@ -444,8 +542,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Any Request", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available.
Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Filters for any transaction originating from the IPQ or IRQ. This does not include lookups originating from the ISMQ.", "UMask": "0x1fffff", @@ -453,8 +553,10 @@ }, { "BriefDescription": "Cache Lookups : All transactions from Remote Agents", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.ALL_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : All transactions from Remote Agents : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.", "UMask": "0x17e0ff", @@ -462,16 +564,20 @@ }, { "BriefDescription": "Cache Lookups : All Requests", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.ANY_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : All Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing.
: Any local or remote transaction to the LLC, including prefetch.", "Unit": "CHA" }, { "BriefDescription": "Cache Lookups : CRd Requests", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.CODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : CRd Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.", "UMask": "0x1bd0ff", @@ -479,24 +585,30 @@ }, { "BriefDescription": "Cache Lookups : CRd Requests", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.CODE_READ_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : CRd Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.", "Unit": "CHA" }, { "BriefDescription": "Cache Lookups : Local non-prefetch requests", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.COREPREF_OR_DMND_LOCAL_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Local non-prefetch requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2.
This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Any local transaction to the LLC, not including prefetch", "Unit": "CHA" }, { "BriefDescription": "Cache and Snoop Filter Lookups; Data Read Request", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.DATA_RD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions", "UMask": "0x1bc1ff", @@ -504,8 +616,10 @@ }, { "BriefDescription": "Cache Lookups : Data Reads", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Data Reads : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match.
Otherwise, the event will c= ount nothing.", "UMask": "0x1fc1ff", @@ -513,16 +627,20 @@ }, { "BriefDescription": "Cache Lookups : Data Read Request", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Data Read Request : Counts t= he number of times the LLC was accessed - this includes code, data, prefetc= hes and hints coming from L2. This has numerous filters available. Note t= he non-standard filtering equation. This event will count requests that lo= okup the cache multiple times with multiple increments. One must ALWAYS se= t umask bit 0 and select a state or states to match. Otherwise, the event = will count nothing. : Read transactions.", "Unit": "CHA" }, { "BriefDescription": "Cache Lookups : Demand Data Reads, Core and L= LC prefetches", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Demand Data Reads, Core and = LLC prefetches : Counts the number of times the LLC was accessed - this inc= ludes code, data, prefetches and hints coming from L2. This has numerous f= ilters available. Note the non-standard filtering equation. This event wi= ll count requests that lookup the cache multiple times with multiple increm= ents. One must ALWAYS select a state or states (in the umask field) to mat= ch. Otherwise, the event will count nothing.", "UMask": "0x841ff", @@ -530,8 +648,10 @@ }, { "BriefDescription": "Cache Lookups : Data Read Misses", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_MISS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Data Read Misses : Counts th= e number of times the LLC was accessed - this includes code, data, prefetch= es and hints coming from L2. This has numerous filters available. Note th= e non-standard filtering equation. 
This event will count requests that loo= kup the cache multiple times with multiple increments. One must ALWAYS sel= ect a state or states (in the umask field) to match. Otherwise, the event = will count nothing.", "UMask": "0x1fc101", @@ -539,8 +659,10 @@ }, { "BriefDescription": "Cache Lookups : E State", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.E", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : E State : Counts the number = of times the LLC was accessed - this includes code, data, prefetches and hi= nts coming from L2. This has numerous filters available. Note the non-sta= ndard filtering equation. This event will count requests that lookup the c= ache multiple times with multiple increments. One must ALWAYS set umask bi= t 0 and select a state or states to match. Otherwise, the event will count= nothing. : Hit Exclusive State", "UMask": "0x20", @@ -548,8 +670,10 @@ }, { "BriefDescription": "Cache Lookups : F State", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : F State : Counts the number = of times the LLC was accessed - this includes code, data, prefetches and hi= nts coming from L2. This has numerous filters available. Note the non-sta= ndard filtering equation. This event will count requests that lookup the c= ache multiple times with multiple increments. One must ALWAYS set umask bi= t 0 and select a state or states to match. Otherwise, the event will count= nothing. 
: Hit Forward State", "UMask": "0x80", @@ -557,8 +681,10 @@ }, { "BriefDescription": "Cache Lookups : Flush or Invalidate Requests"= , + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.FLUSH_INV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Flush : Counts the number of= times the LLC was accessed - this includes code, data, prefetches and hint= s coming from L2. This has numerous filters available. Note the non-stand= ard filtering equation. This event will count requests that lookup the cac= he multiple times with multiple increments. One must ALWAYS set umask bit = 0 and select a state or states to match. Otherwise, the event will count n= othing.", "UMask": "0x1a44ff", @@ -566,16 +692,20 @@ }, { "BriefDescription": "Cache Lookups : Flush", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.FLUSH_OR_INV_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Flush : Counts the number of= times the LLC was accessed - this includes code, data, prefetches and hint= s coming from L2. This has numerous filters available. Note the non-stand= ard filtering equation. This event will count requests that lookup the cac= he multiple times with multiple increments. One must ALWAYS set umask bit = 0 and select a state or states to match. Otherwise, the event will count n= othing.", "Unit": "CHA" }, { "BriefDescription": "Cache Lookups : I State", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.I", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : I State : Counts the number = of times the LLC was accessed - this includes code, data, prefetches and hi= nts coming from L2. This has numerous filters available. Note the non-sta= ndard filtering equation. This event will count requests that lookup the c= ache multiple times with multiple increments. 
One must ALWAYS set umask bi= t 0 and select a state or states to match. Otherwise, the event will count= nothing. : Miss", "UMask": "0x1", @@ -583,16 +713,20 @@ }, { "BriefDescription": "Cache Lookups : Local LLC prefetch requests (= from LLC)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LLCPREF_LOCAL_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Local LLC prefetch requests = (from LLC) : Counts the number of times the LLC was accessed - this include= s code, data, prefetches and hints coming from L2. This has numerous filte= rs available. Note the non-standard filtering equation. This event will c= ount requests that lookup the cache multiple times with multiple increments= . One must ALWAYS set umask bit 0 and select a state or states to match. = Otherwise, the event will count nothing. : Any local LLC prefetch to the LL= C", "Unit": "CHA" }, { "BriefDescription": "Cache Lookups : Transactions homed locally", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCALLY_HOMED_ADDRESS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Transactions homed locally := Counts the number of times the LLC was accessed - this includes code, data= , prefetches and hints coming from L2. This has numerous filters available= . Note the non-standard filtering equation. This event will count request= s that lookup the cache multiple times with multiple increments. One must = ALWAYS set umask bit 0 and select a state or states to match. Otherwise, t= he event will count nothing. 
: Transaction whose address resides in the loc= al MC.", "UMask": "0xbdfff", @@ -600,8 +734,10 @@ }, { "BriefDescription": "Cache Lookups : CRd Requests that come from t= he local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_CODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : CRd Requests : Counts the nu= mber of times the LLC was accessed - this includes code, data, prefetches a= nd hints coming from L2. This has numerous filters available. Note the no= n-standard filtering equation. This event will count requests that lookup = the cache multiple times with multiple increments. One must ALWAYS set uma= sk bit 0 and select a state or states to match. Otherwise, the event will = count nothing. : Local or remote CRd transactions to the LLC. This include= s CRd prefetch.", "UMask": "0x19d0ff", @@ -609,8 +745,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Data Read Req= uest that come from the local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DATA_RD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS set umask bit 0 and select a state or = states to match. Otherwise, the event will count nothing. CHAFilter0[24:= 21,17] bits correspond to [FMESI] state. 
Read transactions", "UMask": "0x19c1ff", @@ -618,8 +756,10 @@ }, { "BriefDescription": "Cache Lookups : Demand CRd Requests that come= from the local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_CODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : CRd Requests : Counts the nu= mber of times the LLC was accessed - this includes code, data, prefetches a= nd hints coming from L2. This has numerous filters available. Note the no= n-standard filtering equation. This event will count requests that lookup = the cache multiple times with multiple increments. One must ALWAYS set uma= sk bit 0 and select a state or states to match. Otherwise, the event will = count nothing. : Local or remote CRd transactions to the LLC. This include= s CRd prefetch.", "UMask": "0x1850ff", @@ -627,8 +767,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Demand Data R= eads that come from the local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_DATA_RD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS set umask bit 0 and select a state or = states to match. Otherwise, the event will count nothing. CHAFilter0[24:= 21,17] bits correspond to [FMESI] state. 
Read transactions", "UMask": "0x1841ff", @@ -636,8 +778,10 @@ }, { "BriefDescription": "Cache Lookups : Demand RFO Requests that come= from the local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_RFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : RFO Requests : Counts the nu= mber of times the LLC was accessed - this includes code, data, prefetches a= nd hints coming from L2. This has numerous filters available. Note the no= n-standard filtering equation. This event will count requests that lookup = the cache multiple times with multiple increments. One must ALWAYS set uma= sk bit 0 and select a state or states to match. Otherwise, the event will = count nothing. : Local or remote RFO transactions to the LLC. This include= s RFO prefetch.", "UMask": "0x1848ff", @@ -645,16 +789,20 @@ }, { "BriefDescription": "Cache Lookups : Transactions homed locally", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Transactions homed locally := Counts the number of times the LLC was accessed - this includes code, data= , prefetches and hints coming from L2. This has numerous filters available= . Note the non-standard filtering equation. This event will count request= s that lookup the cache multiple times with multiple increments. One must = ALWAYS set umask bit 0 and select a state or states to match. Otherwise, t= he event will count nothing. 
: Transaction whose address resides in the loc= al MC.", "Unit": "CHA" }, { "BriefDescription": "Cache Lookups : Flush or Invalidate Requests = that come from the local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_FLUSH_INV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Flush : Counts the number of= times the LLC was accessed - this includes code, data, prefetches and hint= s coming from L2. This has numerous filters available. Note the non-stand= ard filtering equation. This event will count requests that lookup the cac= he multiple times with multiple increments. One must ALWAYS set umask bit = 0 and select a state or states to match. Otherwise, the event will count n= othing.", "UMask": "0x1844ff", @@ -662,8 +810,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Prefetch requ= ests to the LLC that come from the local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_LLC_PF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS set umask bit 0 and select a state or = states to match. Otherwise, the event will count nothing. CHAFilter0[24:= 21,17] bits correspond to [FMESI] state. 
Read transactions", "UMask": "0x189dff", @@ -671,8 +821,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Data Read Pre= fetches that come from the local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS set umask bit 0 and select a state or = states to match. Otherwise, the event will count nothing. CHAFilter0[24:= 21,17] bits correspond to [FMESI] state. Read transactions", "UMask": "0x199dff", @@ -680,8 +832,10 @@ }, { "BriefDescription": "Cache Lookups : CRd Prefetches that come from= the local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_CODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : CRd Requests : Counts the nu= mber of times the LLC was accessed - this includes code, data, prefetches a= nd hints coming from L2. This has numerous filters available. Note the no= n-standard filtering equation. This event will count requests that lookup = the cache multiple times with multiple increments. One must ALWAYS set uma= sk bit 0 and select a state or states to match. Otherwise, the event will = count nothing. : Local or remote CRd transactions to the LLC. 
This include= s CRd prefetch.", "UMask": "0x1910ff", @@ -689,8 +843,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Data Read Pre= fetches that come from the local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_DATA_RD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS set umask bit 0 and select a state or = states to match. Otherwise, the event will count nothing. CHAFilter0[24:= 21,17] bits correspond to [FMESI] state. Read transactions", "UMask": "0x1981ff", @@ -698,8 +854,10 @@ }, { "BriefDescription": "Cache Lookups : RFO Prefetches that come from= the local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_RFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : RFO Requests : Counts the nu= mber of times the LLC was accessed - this includes code, data, prefetches a= nd hints coming from L2. This has numerous filters available. Note the no= n-standard filtering equation. This event will count requests that lookup = the cache multiple times with multiple increments. One must ALWAYS set uma= sk bit 0 and select a state or states to match. Otherwise, the event will = count nothing. : Local or remote RFO transactions to the LLC. 
This include= s RFO prefetch.", "UMask": "0x1908ff", @@ -707,8 +865,10 @@ }, { "BriefDescription": "Cache Lookups : RFO Requests that come from t= he local socket (usually the core)", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_RFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : RFO Requests : Counts the nu= mber of times the LLC was accessed - this includes code, data, prefetches a= nd hints coming from L2. This has numerous filters available. Note the no= n-standard filtering equation. This event will count requests that lookup = the cache multiple times with multiple increments. One must ALWAYS set uma= sk bit 0 and select a state or states to match. Otherwise, the event will = count nothing. : Local or remote RFO transactions to the LLC. This include= s RFO prefetch.", "UMask": "0x19c8ff", @@ -716,8 +876,10 @@ }, { "BriefDescription": "Cache Lookups : M State", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.M", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : M State : Counts the number = of times the LLC was accessed - this includes code, data, prefetches and hi= nts coming from L2. This has numerous filters available. Note the non-sta= ndard filtering equation. This event will count requests that lookup the c= ache multiple times with multiple increments. One must ALWAYS set umask bi= t 0 and select a state or states to match. Otherwise, the event will count= nothing. : Hit Modified State", "UMask": "0x40", @@ -725,8 +887,10 @@ }, { "BriefDescription": "Cache Lookups : All Misses", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.MISS_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. 
= This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS select a state or states (in the umask= field) to match. Otherwise, the event will count nothing.", "UMask": "0x1fe001", @@ -734,24 +898,30 @@ }, { "BriefDescription": "Cache Lookups : Write Requests", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.OTHER_REQ_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Write Requests : Counts the = number of times the LLC was accessed - this includes code, data, prefetches= and hints coming from L2. This has numerous filters available. Note the = non-standard filtering equation. This event will count requests that looku= p the cache multiple times with multiple increments. One must ALWAYS set u= mask bit 0 and select a state or states to match. Otherwise, the event wil= l count nothing. : Writeback transactions from L2 to the LLC This includes= all write transactions -- both Cacheable and UC.", "Unit": "CHA" }, { "BriefDescription": "Cache Lookups : Remote non-snoop requests", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.PREF_OR_DMND_REMOTE_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Remote non-snoop requests : = Counts the number of times the LLC was accessed - this includes code, data,= prefetches and hints coming from L2. This has numerous filters available.= Note the non-standard filtering equation. This event will count requests= that lookup the cache multiple times with multiple increments. One must A= LWAYS set umask bit 0 and select a state or states to match. Otherwise, th= e event will count nothing. 
: Remote non-snoop transactions to the LLC.", "Unit": "CHA" }, { "BriefDescription": "Cache Lookups : Transactions homed remotely", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTELY_HOMED_ADDRESS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Transactions homed remotely = : Counts the number of times the LLC was accessed - this includes code, dat= a, prefetches and hints coming from L2. This has numerous filters availabl= e. Note the non-standard filtering equation. This event will count reques= ts that lookup the cache multiple times with multiple increments. One must= ALWAYS set umask bit 0 and select a state or states to match. Otherwise, = the event will count nothing. : Transaction whose address resides in a remo= te MC", "UMask": "0x15dfff", @@ -759,8 +929,10 @@ }, { "BriefDescription": "Cache Lookups : CRd Requests that come from a= Remote socket.", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_CODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : CRd Requests : Counts the nu= mber of times the LLC was accessed - this includes code, data, prefetches a= nd hints coming from L2. This has numerous filters available. Note the no= n-standard filtering equation. This event will count requests that lookup = the cache multiple times with multiple increments. One must ALWAYS set uma= sk bit 0 and select a state or states to match. Otherwise, the event will = count nothing. : Local or remote CRd transactions to the LLC. 
This include= s CRd prefetch.", "UMask": "0x1a10ff", @@ -768,8 +940,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Data Read Req= uests that come from a Remote socket", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_DATA_RD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS set umask bit 0 and select a state or = states to match. Otherwise, the event will count nothing. CHAFilter0[24:= 21,17] bits correspond to [FMESI] state. Read transactions", "UMask": "0x1a01ff", @@ -777,16 +951,20 @@ }, { "BriefDescription": "Cache Lookups : Transactions homed remotely", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Transactions homed remotely = : Counts the number of times the LLC was accessed - this includes code, dat= a, prefetches and hints coming from L2. This has numerous filters availabl= e. Note the non-standard filtering equation. This event will count reques= ts that lookup the cache multiple times with multiple increments. One must= ALWAYS set umask bit 0 and select a state or states to match. Otherwise, = the event will count nothing. 
: Transaction whose address resides in a remo= te MC", "Unit": "CHA" }, { "BriefDescription": "Cache Lookups : Flush or Invalidate requests = that come from a Remote socket.", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_FLUSH_INV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Flush : Counts the number of= times the LLC was accessed - this includes code, data, prefetches and hint= s coming from L2. This has numerous filters available. Note the non-stand= ard filtering equation. This event will count requests that lookup the cac= he multiple times with multiple increments. One must ALWAYS set umask bit = 0 and select a state or states to match. Otherwise, the event will count n= othing.", "UMask": "0x1a04ff", @@ -794,8 +972,10 @@ }, { "BriefDescription": "Cache Lookups : Filters Requests for those th= at write info into the cache that come from a remote socket", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_OTHER", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Write Requests : Counts the = number of times the LLC was accessed - this includes code, data, prefetches= and hints coming from L2. This has numerous filters available. Note the = non-standard filtering equation. This event will count requests that looku= p the cache multiple times with multiple increments. One must ALWAYS set u= mask bit 0 and select a state or states to match. Otherwise, the event wil= l count nothing. 
: Writeback transactions from L2 to the LLC This includes= all write transactions -- both Cacheable and UC.", "UMask": "0x1a02ff", @@ -803,8 +983,10 @@ }, { "BriefDescription": "Cache Lookups : RFO Requests that come from a= Remote socket.", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_RFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : RFO Requests : Counts the nu= mber of times the LLC was accessed - this includes code, data, prefetches a= nd hints coming from L2. This has numerous filters available. Note the no= n-standard filtering equation. This event will count requests that lookup = the cache multiple times with multiple increments. One must ALWAYS set uma= sk bit 0 and select a state or states to match. Otherwise, the event will = count nothing. : Local or remote RFO transactions to the LLC. This include= s RFO prefetch.", "UMask": "0x1a08ff", @@ -812,16 +994,20 @@ }, { "BriefDescription": "Cache Lookups : Remote snoop requests", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_SNOOP_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : Remote snoop requests : Coun= ts the number of times the LLC was accessed - this includes code, data, pre= fetches and hints coming from L2. This has numerous filters available. No= te the non-standard filtering equation. This event will count requests tha= t lookup the cache multiple times with multiple increments. One must ALWAY= S set umask bit 0 and select a state or states to match. Otherwise, the ev= ent will count nothing. 
: Remote snoop transactions to the LLC.", "Unit": "CHA" }, { "BriefDescription": "Cache and Snoop Filter Lookups; Snoop Request= s from a Remote Socket", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS set umask bit 0 and select a state or = states to match. Otherwise, the event will count nothing. CHAFilter0[24:= 21,17] bits correspond to [FMESI] state.; Filters for any transaction origi= nating from the IPQ or IRQ. This does not include lookups originating from= the ISMQ.", "UMask": "0x1c19ff", @@ -829,8 +1015,10 @@ }, { "BriefDescription": "Cache Lookups : RFO Requests", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.RFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : RFO Requests : Counts the nu= mber of times the LLC was accessed - this includes code, data, prefetches a= nd hints coming from L2. This has numerous filters available. Note the no= n-standard filtering equation. This event will count requests that lookup = the cache multiple times with multiple increments. One must ALWAYS set uma= sk bit 0 and select a state or states to match. Otherwise, the event will = count nothing. : Local or remote RFO transactions to the LLC. 
This include= s RFO prefetch.", "UMask": "0x1bc8ff", @@ -838,16 +1026,20 @@ }, { "BriefDescription": "Cache Lookups : RFO Request Filter", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.RFO_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS select a state or states (in the umask= field) to match. Otherwise, the event will count nothing. : Local or remo= te RFO transactions to the LLC. This includes RFO prefetch.", "Unit": "CHA" }, { "BriefDescription": "Cache Lookups : Locally HOMed RFOs - Demand a= nd Prefetches", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.RFO_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS select a state or states (in the umask= field) to match. Otherwise, the event will count nothing.", "UMask": "0x9c8ff", @@ -855,8 +1047,10 @@ }, { "BriefDescription": "Cache Lookups : S State", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.S", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : S State : Counts the number = of times the LLC was accessed - this includes code, data, prefetches and hi= nts coming from L2. This has numerous filters available. Note the non-sta= ndard filtering equation. This event will count requests that lookup the c= ache multiple times with multiple increments. 
One must ALWAYS set umask bi= t 0 and select a state or states to match. Otherwise, the event will count= nothing. : Hit Shared State", "UMask": "0x10", @@ -864,8 +1058,10 @@ }, { "BriefDescription": "Cache Lookups : SnoopFilter - E State", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.SF_E", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : SnoopFilter - E State : Coun= ts the number of times the LLC was accessed - this includes code, data, pre= fetches and hints coming from L2. This has numerous filters available. No= te the non-standard filtering equation. This event will count requests tha= t lookup the cache multiple times with multiple increments. One must ALWAY= S set umask bit 0 and select a state or states to match. Otherwise, the ev= ent will count nothing. : SF Hit Exclusive State", "UMask": "0x4", @@ -873,8 +1069,10 @@ }, { "BriefDescription": "Cache Lookups : SnoopFilter - H State", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.SF_H", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : SnoopFilter - H State : Coun= ts the number of times the LLC was accessed - this includes code, data, pre= fetches and hints coming from L2. This has numerous filters available. No= te the non-standard filtering equation. This event will count requests tha= t lookup the cache multiple times with multiple increments. One must ALWAY= S set umask bit 0 and select a state or states to match. Otherwise, the ev= ent will count nothing. : SF Hit HitMe State", "UMask": "0x8", @@ -882,8 +1080,10 @@ }, { "BriefDescription": "Cache Lookups : SnoopFilter - S State", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.SF_S", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cache Lookups : SnoopFilter - S State : Coun= ts the number of times the LLC was accessed - this includes code, data, pre= fetches and hints coming from L2. 
This has numerous filters available. No= te the non-standard filtering equation. This event will count requests tha= t lookup the cache multiple times with multiple increments. One must ALWAY= S set umask bit 0 and select a state or states to match. Otherwise, the ev= ent will count nothing. : SF Hit Shared State", "UMask": "0x2", @@ -891,8 +1091,10 @@ }, { "BriefDescription": "Cache Lookups : Writes", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.WRITE_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS select a state or states (in the umask= field) to match. Otherwise, the event will count nothing. : Requests that= install or change a line in the LLC. Examples: Writebacks from Core L2= 's and UPI. Prefetches into the LLC.", "UMask": "0x842ff", @@ -900,8 +1102,10 @@ }, { "BriefDescription": "Cache Lookups : Remote Writes", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.WRITE_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS select a state or states (in the umask= field) to match. 
Otherwise, the event will count nothing.", "UMask": "0x17c2ff", @@ -909,8 +1113,10 @@ }, { "BriefDescription": "Lines Victimized : Lines in E state", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.E_STATE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Lines in E state : Counts= the number of lines that were victimized on a fill. This can be filtered = by the state that the line was in.", "UMask": "0x2", @@ -918,8 +1124,10 @@ }, { "BriefDescription": "Lines Victimized : IA traffic", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.IA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : IA traffic : Counts the n= umber of lines that were victimized on a fill. This can be filtered by the= state that the line was in.", "UMask": "0x20", @@ -927,8 +1135,10 @@ }, { "BriefDescription": "Lines Victimized : IO traffic", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.IO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : IO traffic : Counts the n= umber of lines that were victimized on a fill. This can be filtered by the= state that the line was in.", "UMask": "0x10", @@ -936,8 +1146,10 @@ }, { "BriefDescription": "All LLC lines in E state that are victimized = on a fill from an IO device", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.IO_E", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. 
This can be filtered by the state that the line was in.", "UMask": "0x12", @@ -945,8 +1157,10 @@ }, { "BriefDescription": "All LLC lines in F or S state that are victim= ized on a fill from an IO device", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.IO_FS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x1c", @@ -954,8 +1168,10 @@ }, { "BriefDescription": "All LLC lines in M state that are victimized = on a fill from an IO device", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.IO_M", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x11", @@ -963,8 +1179,10 @@ }, { "BriefDescription": "All LLC lines in any state that are victimize= d on a fill from an IO device", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.IO_MESF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x1f", @@ -972,8 +1190,10 @@ }, { "BriefDescription": "Lines Victimized; Local - All Lines", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x200f", @@ -981,8 +1201,10 @@ }, { "BriefDescription": "Lines Victimized", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_E", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Counts the number of line= s that were victimized on a fill. 
This can be filtered by the state that t= he line was in.", "UMask": "0x2002", @@ -990,8 +1212,10 @@ }, { "BriefDescription": "Lines Victimized", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_M", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Counts the number of line= s that were victimized on a fill. This can be filtered by the state that t= he line was in.", "UMask": "0x2001", @@ -999,16 +1223,20 @@ }, { "BriefDescription": "Lines Victimized : Local Only", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_ONLY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Local Only : Counts the n= umber of lines that were victimized on a fill. This can be filtered by the= state that the line was in.", "Unit": "CHA" }, { "BriefDescription": "Lines Victimized", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_S", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Counts the number of line= s that were victimized on a fill. This can be filtered by the state that t= he line was in.", "UMask": "0x2004", @@ -1016,8 +1244,10 @@ }, { "BriefDescription": "Lines Victimized : Lines in M state", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.M_STATE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Lines in M state : Counts= the number of lines that were victimized on a fill. This can be filtered = by the state that the line was in.", "UMask": "0x1", @@ -1025,8 +1255,10 @@ }, { "BriefDescription": "Lines Victimized; Remote - All Lines", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. 
This can be filtered by the state that the line was in.", "UMask": "0x800f", @@ -1034,8 +1266,10 @@ }, { "BriefDescription": "Lines Victimized", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_E", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Counts the number of line= s that were victimized on a fill. This can be filtered by the state that t= he line was in.", "UMask": "0x8002", @@ -1043,8 +1277,10 @@ }, { "BriefDescription": "Lines Victimized", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_M", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Counts the number of line= s that were victimized on a fill. This can be filtered by the state that t= he line was in.", "UMask": "0x8001", @@ -1052,16 +1288,20 @@ }, { "BriefDescription": "Lines Victimized : Remote Only", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_ONLY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Remote Only : Counts the = number of lines that were victimized on a fill. This can be filtered by th= e state that the line was in.", "Unit": "CHA" }, { "BriefDescription": "Lines Victimized", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_S", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Counts the number of line= s that were victimized on a fill. This can be filtered by the state that t= he line was in.", "UMask": "0x8004", @@ -1069,8 +1309,10 @@ }, { "BriefDescription": "Lines Victimized : Lines in S State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.S_STATE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lines Victimized : Lines in S State : Counts= the number of lines that were victimized on a fill. 
This can be filtered = by the state that the line was in.", "UMask": "0x4", @@ -1078,8 +1320,10 @@ }, { "BriefDescription": "All LLC lines in E state that are victimized = on a fill", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_E", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x2", @@ -1087,8 +1331,10 @@ }, { "BriefDescription": "All LLC lines in M state that are victimized = on a fill", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_M", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x1", @@ -1096,8 +1342,10 @@ }, { "BriefDescription": "All LLC lines in S state that are victimized = on a fill", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_S", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. 
This can be filtered by the state that the line was in.", "UMask": "0x4", @@ -1105,8 +1353,10 @@ }, { "BriefDescription": "Cbo Misc : CV0 Prefetch Miss", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_CHA_MISC.CV0_PREF_MISS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cbo Misc : CV0 Prefetch Miss : Miscellaneous= events in the Cbo.", "UMask": "0x20", @@ -1114,8 +1364,10 @@ }, { "BriefDescription": "Cbo Misc : CV0 Prefetch Victim", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_CHA_MISC.CV0_PREF_VIC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cbo Misc : CV0 Prefetch Victim : Miscellaneo= us events in the Cbo.", "UMask": "0x10", @@ -1123,8 +1375,10 @@ }, { "BriefDescription": "Number of times that an RFO hit in S state.", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_CHA_MISC.RFO_HIT_S", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts when a RFO (the Read for Ownership is= sued before a write) request hit a cacheline in the S (Shared) state.", "UMask": "0x8", @@ -1132,8 +1386,10 @@ }, { "BriefDescription": "Cbo Misc : Silent Snoop Eviction", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_CHA_MISC.RSPI_WAS_FSE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cbo Misc : Silent Snoop Eviction : Miscellan= eous events in the Cbo. : Counts the number of times when a Snoop hit in FS= E states and triggered a silent eviction. This is useful because this info= rmation is lost in the PRE encodings.", "UMask": "0x1", @@ -1141,8 +1397,10 @@ }, { "BriefDescription": "Cbo Misc : Write Combining Aliasing", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_CHA_MISC.WC_ALIASING", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cbo Misc : Write Combining Aliasing : Miscel= laneous events in the Cbo. 
: Counts the number of times that a USWC write (= WCIL(F)) transaction hit in the LLC in M state, triggering a WBMtoI followe= d by the USWC write. This occurs when there is WC aliasing.", "UMask": "0x2", @@ -1150,8 +1408,10 @@ }, { "BriefDescription": "OSB Snoop Broadcast : Local InvItoE", + "Counter": "0,1,2,3", "EventCode": "0x55", "EventName": "UNC_CHA_OSB.LOCAL_INVITOE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "OSB Snoop Broadcast : Local InvItoE : Count = of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be b= roadcast. Does not count all the snoops generated by OSB.", "UMask": "0x1", @@ -1159,8 +1419,10 @@ }, { "BriefDescription": "OSB Snoop Broadcast : Local Rd", + "Counter": "0,1,2,3", "EventCode": "0x55", "EventName": "UNC_CHA_OSB.LOCAL_READ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "OSB Snoop Broadcast : Local Rd : Count of OS= B snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadc= ast. Does not count all the snoops generated by OSB.", "UMask": "0x2", @@ -1168,8 +1430,10 @@ }, { "BriefDescription": "OSB Snoop Broadcast : Off", + "Counter": "0,1,2,3", "EventCode": "0x55", "EventName": "UNC_CHA_OSB.OFF_PWRHEURISTIC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "OSB Snoop Broadcast : Off : Count of OSB sno= op broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. = Does not count all the snoops generated by OSB.", "UMask": "0x20", @@ -1177,8 +1441,10 @@ }, { "BriefDescription": "OSB Snoop Broadcast : Remote Rd", + "Counter": "0,1,2,3", "EventCode": "0x55", "EventName": "UNC_CHA_OSB.REMOTE_READ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "OSB Snoop Broadcast : Remote Rd : Count of O= SB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broad= cast. 
Does not count all the snoops generated by OSB.", "UMask": "0x4", @@ -1186,8 +1452,10 @@ }, { "BriefDescription": "OSB Snoop Broadcast : Remote Rd InvItoE", + "Counter": "0,1,2,3", "EventCode": "0x55", "EventName": "UNC_CHA_OSB.REMOTE_READINVITOE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "OSB Snoop Broadcast : Remote Rd InvItoE : Co= unt of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to = be broadcast. Does not count all the snoops generated by OSB.", "UMask": "0x8", @@ -1195,8 +1463,10 @@ }, { "BriefDescription": "OSB Snoop Broadcast : RFO HitS Snoop Broadcas= t", + "Counter": "0,1,2,3", "EventCode": "0x55", "EventName": "UNC_CHA_OSB.RFO_HITS_SNP_BCAST", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "OSB Snoop Broadcast : RFO HitS Snoop Broadca= st : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB sno= ops to be broadcast. Does not count all the snoops generated by OSB.", "UMask": "0x10", @@ -1204,32 +1474,40 @@ }, { "BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.LOCAL", + "Counter": "0,1,2,3", "EventCode": "0x65", "EventName": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.LOCAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.REMOTE", + "Counter": "0,1,2,3", "EventCode": "0x65", "EventName": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.SETCONFLICT", + "Counter": "0,1,2,3", "EventCode": "0x65", "EventName": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.SETCONFLICT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Memory Mode related events; Counts the number= of times CHA saw a Near Memory set conflict in SF/LLC", + "Counter": "0,1,2,3", "EventCode": "0x64", "EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.LLC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": 
"Near Memory evictions due to another read to= the same Near Memory set in the LLC.", "UMask": "0x2", @@ -1237,8 +1515,10 @@ }, { "BriefDescription": "Memory Mode related events; Counts the number= of times CHA saw a Near memory set conflict in SF/LLC", + "Counter": "0,1,2,3", "EventCode": "0x64", "EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.SF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Near Memory evictions due to another read to= the same Near Memory set in the SF", "UMask": "0x1", @@ -1246,8 +1526,10 @@ }, { "BriefDescription": "Memory Mode related events; Counts the number= of times CHA saw a Near Memory set conflict in TOR", + "Counter": "0,1,2,3", "EventCode": "0x64", "EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.TOR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Reject in the CHA due to a pending read t= o the same Near Memory set in the TOR.", "UMask": "0x4", @@ -1255,88 +1537,110 @@ }, { "BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.IODC", + "Counter": "0,1,2,3", "EventCode": "0x70", "EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.IODC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWR", + "Counter": "0,1,2,3", "EventCode": "0x70", "EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWRNI"= , + "Counter": "0,1,2,3", "EventCode": "0x70", "EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWRNI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_QOS.DDR4_FAST_INSERT", + "Counter": "0,1,2,3", "EventCode": "0x66", "EventName": "UNC_CHA_PMM_QOS.DDR4_FAST_INSERT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_QOS.REJ_IRQ", + "Counter": "0,1,2,3", "EventCode": 
"0x66", "EventName": "UNC_CHA_PMM_QOS.REJ_IRQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_QOS.SLOWTORQ_SKIP", + "Counter": "0,1,2,3", "EventCode": "0x66", "EventName": "UNC_CHA_PMM_QOS.SLOWTORQ_SKIP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_QOS.SLOW_INSERT", + "Counter": "0,1,2,3", "EventCode": "0x66", "EventName": "UNC_CHA_PMM_QOS.SLOW_INSERT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_QOS.THROTTLE", + "Counter": "0,1,2,3", "EventCode": "0x66", "EventName": "UNC_CHA_PMM_QOS.THROTTLE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_QOS.THROTTLE_IRQ", + "Counter": "0,1,2,3", "EventCode": "0x66", "EventName": "UNC_CHA_PMM_QOS.THROTTLE_IRQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_QOS.THROTTLE_PRQ", + "Counter": "0,1,2,3", "EventCode": "0x66", "EventName": "UNC_CHA_PMM_QOS.THROTTLE_PRQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_PMM_QOS_OCCUPANCY.DDR_FAST_FIFO", + "Counter": "0,1,2,3", "EventCode": "0x67", "EventName": "UNC_CHA_PMM_QOS_OCCUPANCY.DDR_FAST_FIFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": count # of FAST TOR Request inserted to ha= _tor_req_fifo", "UMask": "0x2", @@ -1344,16 +1648,20 @@ }, { "BriefDescription": "Number of SLOW TOR Request inserted to ha_pmm= _tor_req_fifo", + "Counter": "0,1,2,3", "EventCode": "0x67", "EventName": "UNC_CHA_PMM_QOS_OCCUPANCY.DDR_SLOW_FIFO", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty : MC0", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.MC0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC 
CHNx READ Credits Empty : MC0 : Coun= ts the number of times when there are no credits available for sending read= s from the CHA into the iMC. In order to send reads into the memory contro= ller, the HA must first acquire a credit for the iMC's AD Ingress queue. : = Filter for memory controller 0 only.", "UMask": "0x1", @@ -1361,8 +1669,10 @@ }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty : MC1", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.MC1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx READ Credits Empty : MC1 : Coun= ts the number of times when there are no credits available for sending read= s from the CHA into the iMC. In order to send reads into the memory contro= ller, the HA must first acquire a credit for the iMC's AD Ingress queue. : = Filter for memory controller 1 only.", "UMask": "0x2", @@ -1370,8 +1680,10 @@ }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty : MC2", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.MC2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx READ Credits Empty : MC2 : Coun= ts the number of times when there are no credits available for sending read= s from the CHA into the iMC. In order to send reads into the memory contro= ller, the HA must first acquire a credit for the iMC's AD Ingress queue. : = Filter for memory controller 2 only.", "UMask": "0x4", @@ -1379,8 +1691,10 @@ }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty : MC3", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.MC3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx READ Credits Empty : MC3 : Coun= ts the number of times when there are no credits available for sending read= s from the CHA into the iMC. In order to send reads into the memory contro= ller, the HA must first acquire a credit for the iMC's AD Ingress queue. 
: = Filter for memory controller 3 only.", "UMask": "0x8", @@ -1388,8 +1702,10 @@ }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty : MC4", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.MC4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx READ Credits Empty : MC4 : Coun= ts the number of times when there are no credits available for sending read= s from the CHA into the iMC. In order to send reads into the memory contro= ller, the HA must first acquire a credit for the iMC's AD Ingress queue. : = Filter for memory controller 4 only.", "UMask": "0x10", @@ -1397,8 +1713,10 @@ }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty : MC5", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.MC5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx READ Credits Empty : MC5 : Coun= ts the number of times when there are no credits available for sending read= s from the CHA into the iMC. In order to send reads into the memory contro= ller, the HA must first acquire a credit for the iMC's AD Ingress queue. 
: = Filter for memory controller 5 only.", "UMask": "0x20", @@ -1406,8 +1724,10 @@ }, { "BriefDescription": "Requests for exclusive ownership of a cache l= ine without receiving data", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.INVITOE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of requests coming f= rom a unit on this socket for exclusive ownership of a cache line without r= eceiving data (INVITOE) to the CHA.", "UMask": "0x30", @@ -1415,6 +1735,7 @@ }, { "BriefDescription": "Local requests for exclusive ownership of a c= ache line without receiving data", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.INVITOE_LOCAL", "PerPkg": "1", @@ -1424,6 +1745,7 @@ }, { "BriefDescription": "Remote requests for exclusive ownership of a = cache line without receiving data", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.INVITOE_REMOTE", "PerPkg": "1", @@ -1433,6 +1755,7 @@ }, { "BriefDescription": "Read requests made into the CHA", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.READS", "PerPkg": "1", @@ -1442,6 +1765,7 @@ }, { "BriefDescription": "Read requests from a unit on this socket", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.READS_LOCAL", "PerPkg": "1", @@ -1451,6 +1775,7 @@ }, { "BriefDescription": "Read requests from a remote socket", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.READS_REMOTE", "PerPkg": "1", @@ -1460,6 +1785,7 @@ }, { "BriefDescription": "Write requests made into the CHA", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.WRITES", "PerPkg": "1", @@ -1469,6 +1795,7 @@ }, { "BriefDescription": "Write Requests from a unit on this socket", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.WRITES_LOCAL", "PerPkg": "1", @@ -1478,6 +1805,7 @@ }, { "BriefDescription": "Read and Write 
Requests; Writes Remote", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.WRITES_REMOTE", "PerPkg": "1", @@ -1487,8 +1815,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations : IPQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.IPQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Allocations : IPQ : Count= s number of allocations per cycle into the specified Ingress queue.", "UMask": "0x4", @@ -1496,8 +1826,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations : IRQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.IRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Allocations : IRQ : Count= s number of allocations per cycle into the specified Ingress queue.", "UMask": "0x1", @@ -1505,8 +1837,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations : IRQ Rejected= ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.IRQ_REJ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Allocations : IRQ Rejecte= d : Counts number of allocations per cycle into the specified Ingress queue= .", "UMask": "0x2", @@ -1514,8 +1848,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations : PRQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.PRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Allocations : PRQ : Count= s number of allocations per cycle into the specified Ingress queue.", "UMask": "0x10", @@ -1523,8 +1859,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations : PRQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.PRQ_REJ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Allocations : PRQ : Count= s number of allocations per cycle into the specified Ingress queue.", "UMask": "0x20", @@ 
-1532,8 +1870,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations : RRQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.RRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Allocations : RRQ : Count= s number of allocations per cycle into the specified Ingress queue.", "UMask": "0x40", @@ -1541,8 +1881,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations : WBQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.WBQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Allocations : WBQ : Count= s number of allocations per cycle into the specified Ingress queue.", "UMask": "0x80", @@ -1550,8 +1892,10 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : AD= REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : A= D REQ on VN0 : No AD VN0 credit for generating a request", "UMask": "0x1", @@ -1559,8 +1903,10 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : AD= RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : A= D RSP on VN0 : No AD VN0 credit for generating a response", "UMask": "0x2", @@ -1568,8 +1914,10 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : No= n UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : N= on UPI AK Request : Can't inject AK ring message", "UMask": "0x40", @@ -1577,8 +1925,10 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL= NCB on VN0", + 
"Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : B= L NCB on VN0 : No BL VN0 credit for NCB", "UMask": "0x10", @@ -1586,8 +1936,10 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL= NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : B= L NCS on VN0 : No BL VN0 credit for NCS", "UMask": "0x20", @@ -1595,8 +1947,10 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL= RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : B= L RSP on VN0 : No BL VN0 credit for generating a response", "UMask": "0x4", @@ -1604,8 +1958,10 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL= WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : B= L WB on VN0 : No BL VN0 credit for generating a writeback", "UMask": "0x8", @@ -1613,8 +1969,10 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : No= n UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : N= on UPI IV Request : Can't inject IV ring message", "UMask": "0x80", @@ -1622,16 +1980,20 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : Al= low Snoop", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.ALLOW_SNP", + 
"Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : AN= Y0", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : A= NY0 : Any condition listed in the IPQ0 Reject counter was true", "UMask": "0x1", @@ -1639,16 +2001,20 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : HA= ", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : LL= C OR SF Way", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : L= LC OR SF Way : Way conflict with another request that caused the reject", "UMask": "0x20", @@ -1656,16 +2022,20 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : LL= C Victim", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : Ph= yAddr Match", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : P= hyAddr Match : Address match with an outstanding request that was rejected.= ", "UMask": "0x80", @@ -1673,8 +2043,10 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : SF= Victim", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IPQ Requests (from 
CMS) Rejected - Set 1 : SF Victim : Requests did not generate Snoop filter victim", "UMask": "0x8", @@ -1682,16 +2054,20 @@ }, { "BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : Victim", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0 : No AD VN0 credit for generating a request", "UMask": "0x1", @@ -1699,8 +2075,10 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0 : No AD VN0 credit for generating a response", "UMask": "0x2", @@ -1708,8 +2086,10 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request : Can't inject AK ring message", "UMask": "0x40", @@ -1717,8 +2097,10 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0 : No BL VN0 credit for NCB", "UMask": "0x10", @@ -1726,8 +2108,10 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0", + "Counter":
"0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : B= L NCS on VN0 : No BL VN0 credit for NCS", "UMask": "0x20", @@ -1735,8 +2119,10 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL= RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : B= L RSP on VN0 : No BL VN0 credit for generating a response", "UMask": "0x4", @@ -1744,8 +2130,10 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL= WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : B= L WB on VN0 : No BL VN0 credit for generating a writeback", "UMask": "0x8", @@ -1753,8 +2141,10 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : No= n UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : N= on UPI IV Request : Can't inject IV ring message", "UMask": "0x80", @@ -1762,16 +2152,20 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : Al= low Snoop", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : AN= Y0", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 1 : A= NY0 : Any condition listed in the IRQ0 Reject 
counter was true", "UMask": "0x1", @@ -1779,16 +2173,20 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : HA= ", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : LL= C or SF Way", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 1 : L= LC or SF Way : Way conflict with another request that caused the reject", "UMask": "0x20", @@ -1796,24 +2194,30 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : LL= C Victim", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; Phy= Addr Match", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : SF= Victim", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IRQ Requests (from CMS) Rejected - Set 1 : S= F Victim : Requests did not generate Snoop filter victim", "UMask": "0x8", @@ -1821,16 +2225,20 @@ }, { "BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : Vi= ctim", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "ISMQ Rejects - Set 0 : AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AD_REQ_VN0", + "Experimental": 
"1", "PerPkg": "1", "PublicDescription": "ISMQ Rejects - Set 0 : AD REQ on VN0 : Numbe= r of times a transaction flowing through the ISMQ had to retry. Transactio= n pass through the ISMQ as responses for requests that already exist in the= Cbo. Some examples include: when data is returned or when snoop responses= come back from the cores. : No AD VN0 credit for generating a request", "UMask": "0x1", @@ -1838,8 +2246,10 @@ }, { "BriefDescription": "ISMQ Rejects - Set 0 : AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Rejects - Set 0 : AD RSP on VN0 : Numbe= r of times a transaction flowing through the ISMQ had to retry. Transactio= n pass through the ISMQ as responses for requests that already exist in the= Cbo. Some examples include: when data is returned or when snoop responses= come back from the cores. : No AD VN0 credit for generating a response", "UMask": "0x2", @@ -1847,8 +2257,10 @@ }, { "BriefDescription": "ISMQ Rejects - Set 0 : Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Rejects - Set 0 : Non UPI AK Request : = Number of times a transaction flowing through the ISMQ had to retry. Trans= action pass through the ISMQ as responses for requests that already exist i= n the Cbo. Some examples include: when data is returned or when snoop resp= onses come back from the cores. : Can't inject AK ring message", "UMask": "0x40", @@ -1856,8 +2268,10 @@ }, { "BriefDescription": "ISMQ Rejects - Set 0 : BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Rejects - Set 0 : BL NCB on VN0 : Numbe= r of times a transaction flowing through the ISMQ had to retry. 
Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCB", "UMask": "0x10", @@ -1865,8 +2279,10 @@ }, { "BriefDescription": "ISMQ Rejects - Set 0 : BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Rejects - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCS", "UMask": "0x20", @@ -1874,8 +2290,10 @@ }, { "BriefDescription": "ISMQ Rejects - Set 0 : BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Rejects - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a response", "UMask": "0x4", @@ -1883,8 +2301,10 @@ }, { "BriefDescription": "ISMQ Rejects - Set 0 : BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Rejects - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.
: No BL VN0 credit for generating a writeback", "UMask": "0x8", @@ -1892,8 +2312,10 @@ }, { "BriefDescription": "ISMQ Rejects - Set 0 : Non UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Rejects - Set 0 : Non UPI IV Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject IV ring message", "UMask": "0x80", @@ -1901,8 +2323,10 @@ }, { "BriefDescription": "ISMQ Retries - Set 0 : AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Retries - Set 0 : AD REQ on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a request", "UMask": "0x1", @@ -1910,8 +2334,10 @@ }, { "BriefDescription": "ISMQ Retries - Set 0 : AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Retries - Set 0 : AD RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.
: No AD VN0 credit for generating a response", "UMask": "0x2", @@ -1919,8 +2345,10 @@ }, { "BriefDescription": "ISMQ Retries - Set 0 : Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Retries - Set 0 : Non UPI AK Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject AK ring message", "UMask": "0x40", @@ -1928,8 +2356,10 @@ }, { "BriefDescription": "ISMQ Retries - Set 0 : BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Retries - Set 0 : BL NCB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCB", "UMask": "0x10", @@ -1937,8 +2367,10 @@ }, { "BriefDescription": "ISMQ Retries - Set 0 : BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Retries - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.
: No BL VN0 credit for NCS", "UMask": "0x20", @@ -1946,8 +2378,10 @@ }, { "BriefDescription": "ISMQ Retries - Set 0 : BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Retries - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a response", "UMask": "0x4", @@ -1955,8 +2389,10 @@ }, { "BriefDescription": "ISMQ Retries - Set 0 : BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Retries - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a writeback", "UMask": "0x8", @@ -1964,8 +2400,10 @@ }, { "BriefDescription": "ISMQ Retries - Set 0 : Non UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Retries - Set 0 : Non UPI IV Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.
: Can't inject IV ring message", "UMask": "0x80", @@ -1973,8 +2411,10 @@ }, { "BriefDescription": "ISMQ Rejects - Set 1 : ANY0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_CHA_RxC_ISMQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Rejects - Set 1 : ANY0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Any condition listed in the ISMQ0 Reject counter was true", "UMask": "0x1", @@ -1982,8 +2422,10 @@ }, { "BriefDescription": "ISMQ Rejects - Set 1 : HA", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_CHA_RxC_ISMQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Rejects - Set 1 : HA : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.", "UMask": "0x2", @@ -1991,8 +2433,10 @@ }, { "BriefDescription": "ISMQ Retries - Set 1 : ANY0", + "Counter": "0,1,2,3", "EventCode": "0x2d", "EventName": "UNC_CHA_RxC_ISMQ1_RETRY.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Retries - Set 1 : ANY0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.
: Any condition listed in the ISMQ0 Reject counter was true", "UMask": "0x1", @@ -2000,8 +2444,10 @@ }, { "BriefDescription": "ISMQ Retries - Set 1 : HA", + "Counter": "0,1,2,3", "EventCode": "0x2d", "EventName": "UNC_CHA_RxC_ISMQ1_RETRY.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ISMQ Retries - Set 1 : HA : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.", "UMask": "0x2", @@ -2009,8 +2455,10 @@ }, { "BriefDescription": "Ingress (from CMS) Occupancy : IPQ", + "Counter": "0", "EventCode": "0x11", "EventName": "UNC_CHA_RxC_OCCUPANCY.IPQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Occupancy : IPQ : Counts number of entries in the specified Ingress queue in each cycle.", "UMask": "0x4", @@ -2018,8 +2466,10 @@ }, { "BriefDescription": "Ingress (from CMS) Occupancy : RRQ", + "Counter": "0", "EventCode": "0x11", "EventName": "UNC_CHA_RxC_OCCUPANCY.RRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Occupancy : RRQ : Counts number of entries in the specified Ingress queue in each cycle.", "UMask": "0x40", @@ -2027,8 +2477,10 @@ }, { "BriefDescription": "Ingress (from CMS) Occupancy : WBQ", + "Counter": "0", "EventCode": "0x11", "EventName": "UNC_CHA_RxC_OCCUPANCY.WBQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Occupancy : WBQ : Counts number of entries in the specified Ingress queue in each cycle.", "UMask": "0x80", @@ -2036,8 +2488,10 @@ }, { "BriefDescription": "Other Retries - Set 0 : AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 0 : AD REQ on VN0 : Retry Queue Inserts of
Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No AD VN0 credit for generating a request", "UMask": "0x1", @@ -2045,8 +2499,10 @@ }, { "BriefDescription": "Other Retries - Set 0 : AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 0 : AD RSP on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No AD VN0 credit for generating a response", "UMask": "0x2", @@ -2054,8 +2510,10 @@ }, { "BriefDescription": "Other Retries - Set 0 : Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 0 : Non UPI AK Request : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Can't inject AK ring message", "UMask": "0x40", @@ -2063,8 +2521,10 @@ }, { "BriefDescription": "Other Retries - Set 0 : BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 0 : BL NCB on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No BL VN0 credit for NCB", "UMask": "0x10", @@ -2072,8 +2532,10 @@ }, { "BriefDescription": "Other Retries - Set 0 : BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 0 : BL NCS on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No
BL VN0 credit for NCS", "UMask": "0x20", @@ -2081,8 +2543,10 @@ }, { "BriefDescription": "Other Retries - Set 0 : BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 0 : BL RSP on VN0 : Retr= y Queue Inserts of Transactions that were already in another Retry Q (sub-e= vents encode the reason for the next reject) : No BL VN0 credit for generat= ing a response", "UMask": "0x4", @@ -2090,8 +2554,10 @@ }, { "BriefDescription": "Other Retries - Set 0 : BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 0 : BL WB on VN0 : Retry= Queue Inserts of Transactions that were already in another Retry Q (sub-ev= ents encode the reason for the next reject) : No BL VN0 credit for generati= ng a writeback", "UMask": "0x8", @@ -2099,8 +2565,10 @@ }, { "BriefDescription": "Other Retries - Set 0 : Non UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 0 : Non UPI IV Request := Retry Queue Inserts of Transactions that were already in another Retry Q (= sub-events encode the reason for the next reject) : Can't inject IV ring me= ssage", "UMask": "0x80", @@ -2108,8 +2576,10 @@ }, { "BriefDescription": "Other Retries - Set 1 : Allow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 1 : Allow Snoop : Retry = Queue Inserts of Transactions that were already in another Retry Q (sub-eve= nts encode the reason for the next reject)", "UMask": "0x40", @@ -2117,8 +2587,10 @@ }, { "BriefDescription": "Other Retries - Set 1 : ANY0", + "Counter": 
"0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 1 : ANY0 : Retry Queue I= nserts of Transactions that were already in another Retry Q (sub-events enc= ode the reason for the next reject) : Any condition listed in the Other0 Re= ject counter was true", "UMask": "0x1", @@ -2126,8 +2598,10 @@ }, { "BriefDescription": "Other Retries - Set 1 : HA", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 1 : HA : Retry Queue Ins= erts of Transactions that were already in another Retry Q (sub-events encod= e the reason for the next reject)", "UMask": "0x2", @@ -2135,8 +2609,10 @@ }, { "BriefDescription": "Other Retries - Set 1 : LLC OR SF Way", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 1 : LLC OR SF Way : Retr= y Queue Inserts of Transactions that were already in another Retry Q (sub-e= vents encode the reason for the next reject) : Way conflict with another re= quest that caused the reject", "UMask": "0x20", @@ -2144,8 +2620,10 @@ }, { "BriefDescription": "Other Retries - Set 1 : LLC Victim", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 1 : LLC Victim : Retry Q= ueue Inserts of Transactions that were already in another Retry Q (sub-even= ts encode the reason for the next reject)", "UMask": "0x4", @@ -2153,8 +2631,10 @@ }, { "BriefDescription": "Other Retries - Set 1 : PhyAddr Match", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 1 : PhyAddr Match : Retr= y 
Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Address match with an outstanding request that was rejected.", "UMask": "0x80", @@ -2162,8 +2642,10 @@ }, { "BriefDescription": "Other Retries - Set 1 : SF Victim", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 1 : SF Victim : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Requests did not generate Snoop filter victim", "UMask": "0x8", @@ -2171,8 +2653,10 @@ }, { "BriefDescription": "Other Retries - Set 1 : Victim", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Other Retries - Set 1 : Victim : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)", "UMask": "0x10", @@ -2180,8 +2664,10 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0 : No AD VN0 credit for generating a request", "UMask": "0x1", @@ -2189,8 +2675,10 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0 : No AD VN0 credit for generating a response", "UMask": "0x2", @@ -2198,8 +2686,10 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request", + "Counter":
"0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : N= on UPI AK Request : Can't inject AK ring message", "UMask": "0x40", @@ -2207,8 +2697,10 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL= NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : B= L NCB on VN0 : No BL VN0 credit for NCB", "UMask": "0x10", @@ -2216,8 +2708,10 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL= NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : B= L NCS on VN0 : No BL VN0 credit for NCS", "UMask": "0x20", @@ -2225,8 +2719,10 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL= RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : B= L RSP on VN0 : No BL VN0 credit for generating a response", "UMask": "0x4", @@ -2234,8 +2730,10 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL= WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : B= L WB on VN0 : No BL VN0 credit for generating a writeback", "UMask": "0x8", @@ -2243,8 +2741,10 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : No= n UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.IV_NON_UPI", + "Experimental": 
"1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : N= on UPI IV Request : Can't inject IV ring message", "UMask": "0x80", @@ -2252,16 +2752,20 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : Al= low Snoop", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : AN= Y0", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : A= NY0 : Any condition listed in the PRQ0 Reject counter was true", "UMask": "0x1", @@ -2269,16 +2773,20 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : HA= ", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : LL= C OR SF Way", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : L= LC OR SF Way : Way conflict with another request that caused the reject", "UMask": "0x20", @@ -2286,16 +2794,20 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : LL= C Victim", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : Ph= yAddr Match", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : P= hyAddr Match : 
Address match with an outstanding request that was rejected.", "UMask": "0x80", @@ -2303,8 +2815,10 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : SF Victim", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : SF Victim : Requests did not generate Snoop filter victim", "UMask": "0x8", @@ -2312,16 +2826,20 @@ }, { "BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : Victim", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Request Queue Retries - Set 0 : AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 0 : AD REQ on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No AD VN0 credit for generating a request", "UMask": "0x1", @@ -2329,8 +2847,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 0 : AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 0 : AD RSP on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No AD VN0 credit for generating a response", "UMask": "0x2", @@ -2338,8 +2858,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 0 : Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 0 : Non UPI AK Request : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Can't inject AK
ring message", "UMask": "0x40", @@ -2347,8 +2869,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 0 : BL NCB on VN0= ", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 0 : BL NCB on VN= 0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ= ) : No BL VN0 credit for NCB", "UMask": "0x10", @@ -2356,8 +2880,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 0 : BL NCS on VN0= ", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 0 : BL NCS on VN= 0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ= ) : No BL VN0 credit for NCS", "UMask": "0x20", @@ -2365,8 +2891,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 0 : BL RSP on VN0= ", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 0 : BL RSP on VN= 0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ= ) : No BL VN0 credit for generating a response", "UMask": "0x4", @@ -2374,8 +2902,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 0 : BL WB on VN0"= , + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 0 : BL WB on VN0= : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)= : No BL VN0 credit for generating a writeback", "UMask": "0x8", @@ -2383,8 +2913,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 0 : Non UPI IV Re= quest", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.IV_NON_UPI", + "Experimental": "1", "PerPkg": 
"1", "PublicDescription": "Request Queue Retries - Set 0 : Non UPI IV R= equest : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for= ISMQ) : Can't inject IV ring message", "UMask": "0x80", @@ -2392,8 +2924,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 1 : Allow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x2b", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 1 : Allow Snoop = : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)"= , "UMask": "0x40", @@ -2401,8 +2935,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 1 : ANY0", + "Counter": "0,1,2,3", "EventCode": "0x2b", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 1 : ANY0 : REQUE= STQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Any c= ondition listed in the WBQ0 Reject counter was true", "UMask": "0x1", @@ -2410,8 +2946,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 1 : HA", + "Counter": "0,1,2,3", "EventCode": "0x2b", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 1 : HA : REQUEST= Q includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)", "UMask": "0x2", @@ -2419,8 +2957,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 1 : LLC OR SF Way= ", + "Counter": "0,1,2,3", "EventCode": "0x2b", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 1 : LLC OR SF Wa= y : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ= ) : Way conflict with another request that caused the reject", "UMask": "0x20", @@ -2428,8 +2968,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 1 : LLC Victim", + "Counter": "0,1,2,3", "EventCode": "0x2b", 
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 1 : LLC Victim := REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)", "UMask": "0x4", @@ -2437,8 +2979,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 1 : PhyAddr Match= ", + "Counter": "0,1,2,3", "EventCode": "0x2b", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 1 : PhyAddr Matc= h : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ= ) : Address match with an outstanding request that was rejected.", "UMask": "0x80", @@ -2446,8 +2990,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 1 : SF Victim", + "Counter": "0,1,2,3", "EventCode": "0x2b", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 1 : SF Victim : = REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : = Requests did not generate Snoop filter victim", "UMask": "0x8", @@ -2455,8 +3001,10 @@ }, { "BriefDescription": "Request Queue Retries - Set 1 : Victim", + "Counter": "0,1,2,3", "EventCode": "0x2b", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Request Queue Retries - Set 1 : Victim : REQ= UESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)", "UMask": "0x10", @@ -2464,8 +3012,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 0 : AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 0 : AD REQ on VN0 : Number= of times a transaction flowing through the RRQ (Remote Response Queue) had= to retry. 
: No AD VN0 credit for generating a request", "UMask": "0x1", @@ -2473,8 +3023,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 0 : AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 0 : AD RSP on VN0 : Number= of times a transaction flowing through the RRQ (Remote Response Queue) had= to retry. : No AD VN0 credit for generating a response", "UMask": "0x2", @@ -2482,8 +3034,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 0 : Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 0 : Non UPI AK Request : N= umber of times a transaction flowing through the RRQ (Remote Response Queue= ) had to retry. : Can't inject AK ring message", "UMask": "0x40", @@ -2491,8 +3045,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 0 : BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 0 : BL NCB on VN0 : Number= of times a transaction flowing through the RRQ (Remote Response Queue) had= to retry. : No BL VN0 credit for NCB", "UMask": "0x10", @@ -2500,8 +3056,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 0 : BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 0 : BL NCS on VN0 : Number= of times a transaction flowing through the RRQ (Remote Response Queue) had= to retry. 
: No BL VN0 credit for NCS", "UMask": "0x20", @@ -2509,8 +3067,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 0 : BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 0 : BL RSP on VN0 : Number= of times a transaction flowing through the RRQ (Remote Response Queue) had= to retry. : No BL VN0 credit for generating a response", "UMask": "0x4", @@ -2518,8 +3078,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 0 : BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 0 : BL WB on VN0 : Number = of times a transaction flowing through the RRQ (Remote Response Queue) had = to retry. : No BL VN0 credit for generating a writeback", "UMask": "0x8", @@ -2527,8 +3089,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 0 : Non UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 0 : Non UPI IV Request : N= umber of times a transaction flowing through the RRQ (Remote Response Queue= ) had to retry. 
: Can't inject IV ring message", "UMask": "0x80", @@ -2536,8 +3100,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 1 : Allow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 1 : Allow Snoop : Number o= f times a transaction flowing through the RRQ (Remote Response Queue) had t= o retry.", "UMask": "0x40", @@ -2545,8 +3111,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 1 : ANY0", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 1 : ANY0 : Number of times= a transaction flowing through the RRQ (Remote Response Queue) had to retry= . : Any condition listed in the RRQ0 Reject counter was true", "UMask": "0x1", @@ -2554,8 +3122,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 1 : HA", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 1 : HA : Number of times a= transaction flowing through the RRQ (Remote Response Queue) had to retry."= , "UMask": "0x2", @@ -2563,8 +3133,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 1 : LLC OR SF Way", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 1 : LLC OR SF Way : Number= of times a transaction flowing through the RRQ (Remote Response Queue) had= to retry. 
: Way conflict with another request that caused the reject", "UMask": "0x20", @@ -2572,8 +3144,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 1 : LLC Victim", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 1 : LLC Victim : Number of= times a transaction flowing through the RRQ (Remote Response Queue) had to= retry.", "UMask": "0x4", @@ -2581,8 +3155,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 1 : PhyAddr Match", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 1 : PhyAddr Match : Number= of times a transaction flowing through the RRQ (Remote Response Queue) had= to retry. : Address match with an outstanding request that was rejected.", "UMask": "0x80", @@ -2590,8 +3166,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 1 : SF Victim", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 1 : SF Victim : Number of = times a transaction flowing through the RRQ (Remote Response Queue) had to = retry. 
: Requests did not generate Snoop filter victim", "UMask": "0x8", @@ -2599,8 +3177,10 @@ }, { "BriefDescription": "RRQ Rejects - Set 1 : Victim", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RRQ Rejects - Set 1 : Victim : Number of tim= es a transaction flowing through the RRQ (Remote Response Queue) had to ret= ry.", "UMask": "0x10", @@ -2608,8 +3188,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 0 : AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 0 : AD REQ on VN0 : Number= of times a transaction flowing through the WBQ (Writeback Queue) had to re= try. : No AD VN0 credit for generating a request", "UMask": "0x1", @@ -2617,8 +3199,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 0 : AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 0 : AD RSP on VN0 : Number= of times a transaction flowing through the WBQ (Writeback Queue) had to re= try. : No AD VN0 credit for generating a response", "UMask": "0x2", @@ -2626,8 +3210,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 0 : Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 0 : Non UPI AK Request : N= umber of times a transaction flowing through the WBQ (Writeback Queue) had = to retry. 
: Can't inject AK ring message", "UMask": "0x40", @@ -2635,8 +3221,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 0 : BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 0 : BL NCB on VN0 : Number= of times a transaction flowing through the WBQ (Writeback Queue) had to re= try. : No BL VN0 credit for NCB", "UMask": "0x10", @@ -2644,8 +3232,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 0 : BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 0 : BL NCS on VN0 : Number= of times a transaction flowing through the WBQ (Writeback Queue) had to re= try. : No BL VN0 credit for NCS", "UMask": "0x20", @@ -2653,8 +3243,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 0 : BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 0 : BL RSP on VN0 : Number= of times a transaction flowing through the WBQ (Writeback Queue) had to re= try. : No BL VN0 credit for generating a response", "UMask": "0x4", @@ -2662,8 +3254,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 0 : BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 0 : BL WB on VN0 : Number = of times a transaction flowing through the WBQ (Writeback Queue) had to ret= ry. 
: No BL VN0 credit for generating a writeback", "UMask": "0x8", @@ -2671,8 +3265,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 0 : Non UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 0 : Non UPI IV Request : N= umber of times a transaction flowing through the WBQ (Writeback Queue) had = to retry. : Can't inject IV ring message", "UMask": "0x80", @@ -2680,8 +3276,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 1 : Allow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 1 : Allow Snoop : Number o= f times a transaction flowing through the WBQ (Writeback Queue) had to retr= y.", "UMask": "0x40", @@ -2689,8 +3287,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 1 : ANY0", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 1 : ANY0 : Number of times= a transaction flowing through the WBQ (Writeback Queue) had to retry. 
: An= y condition listed in the WBQ0 Reject counter was true", "UMask": "0x1", @@ -2698,8 +3298,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 1 : HA", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 1 : HA : Number of times a= transaction flowing through the WBQ (Writeback Queue) had to retry.", "UMask": "0x2", @@ -2707,8 +3309,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 1 : LLC OR SF Way", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 1 : LLC OR SF Way : Number= of times a transaction flowing through the WBQ (Writeback Queue) had to re= try. : Way conflict with another request that caused the reject", "UMask": "0x20", @@ -2716,8 +3320,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 1 : LLC Victim", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 1 : LLC Victim : Number of= times a transaction flowing through the WBQ (Writeback Queue) had to retry= .", "UMask": "0x4", @@ -2725,8 +3331,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 1 : PhyAddr Match", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 1 : PhyAddr Match : Number= of times a transaction flowing through the WBQ (Writeback Queue) had to re= try. 
: Address match with an outstanding request that was rejected.", "UMask": "0x80", @@ -2734,8 +3342,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 1 : SF Victim", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 1 : SF Victim : Number of = times a transaction flowing through the WBQ (Writeback Queue) had to retry.= : Requests did not generate Snoop filter victim", "UMask": "0x8", @@ -2743,8 +3353,10 @@ }, { "BriefDescription": "WBQ Rejects - Set 1 : Victim", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WBQ Rejects - Set 1 : Victim : Number of tim= es a transaction flowing through the WBQ (Writeback Queue) had to retry.", "UMask": "0x10", @@ -2752,8 +3364,10 @@ }, { "BriefDescription": "Snoops Sent : All", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoops Sent : All : Counts the number of sno= ops issued by the HA.", "UMask": "0x1", @@ -2761,8 +3375,10 @@ }, { "BriefDescription": "Snoops Sent : Broadcast snoop for Local Reque= sts", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.BCST_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoops Sent : Broadcast snoop for Local Requ= ests : Counts the number of snoops issued by the HA. : Counts the number of= broadcast snoops issued by the HA. 
This filter includes only requests comi= ng from local sockets.", "UMask": "0x10", @@ -2770,8 +3386,10 @@ }, { "BriefDescription": "Snoops Sent : Broadcast snoops for Remote Req= uests", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.BCST_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoops Sent : Broadcast snoops for Remote Re= quests : Counts the number of snoops issued by the HA. : Counts the number = of broadcast snoops issued by the HA.This filter includes only requests com= ing from remote sockets.", "UMask": "0x20", @@ -2779,8 +3397,10 @@ }, { "BriefDescription": "Snoops Sent : Directed snoops for Local Reque= sts", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.DIRECT_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoops Sent : Directed snoops for Local Requ= ests : Counts the number of snoops issued by the HA. : Counts the number of= directed snoops issued by the HA. This filter includes only requests comin= g from local sockets.", "UMask": "0x40", @@ -2788,8 +3408,10 @@ }, { "BriefDescription": "Snoops Sent : Directed snoops for Remote Requ= ests", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.DIRECT_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoops Sent : Directed snoops for Remote Req= uests : Counts the number of snoops issued by the HA. : Counts the number o= f directed snoops issued by the HA. This filter includes only requests comi= ng from remote sockets.", "UMask": "0x80", @@ -2797,8 +3419,10 @@ }, { "BriefDescription": "Snoops Sent : Broadcast or directed Snoops se= nt for Local Requests", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoops Sent : Broadcast or directed Snoops s= ent for Local Requests : Counts the number of snoops issued by the HA. 
: Co= unts the number of broadcast or directed snoops issued by the HA per reques= t. This filter includes only requests coming from the local socket.", "UMask": "0x4", @@ -2806,8 +3430,10 @@ }, { "BriefDescription": "Snoops Sent : Broadcast or directed Snoops se= nt for Remote Requests", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoops Sent : Broadcast or directed Snoops s= ent for Remote Requests : Counts the number of snoops issued by the HA. : C= ounts the number of broadcast or directed snoops issued by the HA per reque= st. This filter includes only requests coming from the remote socket.", "UMask": "0x8", @@ -2815,8 +3441,10 @@ }, { "BriefDescription": "Snoop Responses Received : RSPCNFLCT*", + "Counter": "0,1,2,3", "EventCode": "0x5c", "EventName": "UNC_CHA_SNOOP_RESP.RSPCNFLCT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received : RSPCNFLCT* : Coun= ts the total number of RspI snoop responses received. Whenever a snoops ar= e issued, one or more snoop responses will be returned depending on the top= ology of the system. In systems larger than 2s, when multiple snoops are = returned this will count all the snoops that are received. For example, if= 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of th= ese sub-events would increment by 1. : Filters for snoops responses of RspC= onflict. This is returned when a snoop finds an existing outstanding trans= action in a remote caching agent when it CAMs that caching agent. This tri= ggers conflict resolution hardware. 
This covers both RspCnflct and RspCnfl= ctWbI.", "UMask": "0x40", @@ -2824,8 +3452,10 @@ }, { "BriefDescription": "Snoop Responses Received : RspFwd", + "Counter": "0,1,2,3", "EventCode": "0x5c", "EventName": "UNC_CHA_SNOOP_RESP.RSPFWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received : RspFwd : Counts t= he total number of RspI snoop responses received. Whenever a snoops are is= sued, one or more snoop responses will be returned depending on the topolog= y of the system. In systems larger than 2s, when multiple snoops are retu= rned this will count all the snoops that are received. For example, if 3 s= noops were issued and returned RspI, RspS, and RspSFwd; then each of these = sub-events would increment by 1. : Filters for a snoop response of RspFwd t= o a CA request. This snoop response is only possible for RdCur when a snoo= p HITM/E in a remote caching agent and it directly forwards data to a reque= stor without changing the requestor's cache line state.", "UMask": "0x80", @@ -2833,8 +3463,10 @@ }, { "BriefDescription": "Snoop Responses Received : Rsp*Fwd*WB", + "Counter": "0,1,2,3", "EventCode": "0x5c", "EventName": "UNC_CHA_SNOOP_RESP.RSPFWDWB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received : Rsp*Fwd*WB : Coun= ts the total number of RspI snoop responses received. Whenever a snoops ar= e issued, one or more snoop responses will be returned depending on the top= ology of the system. In systems larger than 2s, when multiple snoops are = returned this will count all the snoops that are received. For example, if= 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of th= ese sub-events would increment by 1. : Filters for a snoop response of Rsp*= Fwd*WB. This snoop response is only used in 4s systems. 
It is used when a= snoop HITM's in a remote caching agent and it directly forwards data to a = requestor, and simultaneously returns data to the home to be written back t= o memory.", "UMask": "0x20", @@ -2842,8 +3474,10 @@ }, { "BriefDescription": "RspI Snoop Responses Received", + "Counter": "0,1,2,3", "EventCode": "0x5c", "EventName": "UNC_CHA_SNOOP_RESP.RSPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts when a transaction with the opcode ty= pe RspI Snoop Response was received which indicates the remote cache does n= ot have the data, or when the remote cache silently evicts data (such as wh= en an RFO: the Read for Ownership issued before a write hits non-modified d= ata).", "UMask": "0x1", @@ -2851,8 +3485,10 @@ }, { "BriefDescription": "RspIFwd Snoop Responses Received", + "Counter": "0,1,2,3", "EventCode": "0x5c", "EventName": "UNC_CHA_SNOOP_RESP.RSPIFWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts when a a transaction with the opcode = type RspIFwd Snoop Response was received which indicates a remote caching a= gent forwarded the data and the requesting agent is able to acquire the dat= a in E (Exclusive) or M (modified) states. This is commonly returned with = RFO (the Read for Ownership issued before a write) transactions. The snoop= could have either been to a cacheline in the M,E,F (Modified, Exclusive or= Forward) states.", "UMask": "0x4", @@ -2860,8 +3496,10 @@ }, { "BriefDescription": "RspS Snoop Responses Received", + "Counter": "0,1,2,3", "EventCode": "0x5c", "EventName": "UNC_CHA_SNOOP_RESP.RSPS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts when a transaction with the opcode ty= pe RspS Snoop Response was received which indicates when a remote cache has= data but is not forwarding it. It is a way to let the requesting socket k= now that it cannot allocate the data in E state. 
No data is sent with S Rs= pS.", "UMask": "0x2", @@ -2869,8 +3507,10 @@ }, { "BriefDescription": "RspSFwd Snoop Responses Received", + "Counter": "0,1,2,3", "EventCode": "0x5c", "EventName": "UNC_CHA_SNOOP_RESP.RSPSFWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts when a a transaction with the opcode = type RspSFwd Snoop Response was received which indicates a remote caching a= gent forwarded the data but held on to its current copy. This is common fo= r data and code reads that hit in a remote socket in E (Exclusive) or F (Fo= rward) state.", "UMask": "0x8", @@ -2878,8 +3518,10 @@ }, { "BriefDescription": "Snoop Responses Received : Rsp*WB", + "Counter": "0,1,2,3", "EventCode": "0x5c", "EventName": "UNC_CHA_SNOOP_RESP.RSPWB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received : Rsp*WB : Counts t= he total number of RspI snoop responses received. Whenever a snoops are is= sued, one or more snoop responses will be returned depending on the topolog= y of the system. In systems larger than 2s, when multiple snoops are retu= rned this will count all the snoops that are received. For example, if 3 s= noops were issued and returned RspI, RspS, and RspSFwd; then each of these = sub-events would increment by 1. : Filters for a snoop response of RspIWB o= r RspSWB. This is returned when a non-RFO request hits in M state. Data a= nd Code Reads can return either RspIWB or RspSWB depending on how the syste= m has been configured. 
InvItoE transactions will also return RspIWB becaus= e they must acquire ownership.", "UMask": "0x10", @@ -2887,8 +3529,10 @@ }, { "BriefDescription": "Snoop Responses Received Local : RspCnflct", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPCNFLCT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received Local : RspCnflct := Number of snoop responses received for a Local request : Filters for snoo= ps responses of RspConflict to local CA requests. This is returned when a = snoop finds an existing outstanding transaction in a remote caching agent w= hen it CAMs that caching agent. This triggers conflict resolution hardware= . This covers both RspCnflct and RspCnflctWbI.", "UMask": "0x40", @@ -2896,8 +3540,10 @@ }, { "BriefDescription": "Snoop Responses Received Local : RspFwd", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPFWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received Local : RspFwd : Nu= mber of snoop responses received for a Local request : Filters for a snoop= response of RspFwd to local CA requests. This snoop response is only poss= ible for RdCur when a snoop HITM/E in a remote caching agent and it directl= y forwards data to a requestor without changing the requestor's cache line = state.", "UMask": "0x80", @@ -2905,8 +3551,10 @@ }, { "BriefDescription": "Snoop Responses Received Local : Rsp*FWD*WB", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPFWDWB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received Local : Rsp*FWD*WB = : Number of snoop responses received for a Local request : Filters for a s= noop response of Rsp*Fwd*WB to local CA requests. This snoop response is o= nly used in 4s systems. 
It is used when a snoop HITM's in a remote caching= agent and it directly forwards data to a requestor, and simultaneously ret= urns data to the home to be written back to memory.", "UMask": "0x20", @@ -2914,8 +3562,10 @@ }, { "BriefDescription": "Snoop Responses Received Local : RspI", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received Local : RspI : Numb= er of snoop responses received for a Local request : Filters for snoops re= sponses of RspI to local CA requests. RspI is returned when the remote cac= he does not have the data, or when the remote cache silently evicts data (s= uch as when an RFO hits non-modified data).", "UMask": "0x1", @@ -2923,8 +3573,10 @@ }, { "BriefDescription": "Snoop Responses Received Local : RspIFwd", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPIFWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received Local : RspIFwd : N= umber of snoop responses received for a Local request : Filters for snoop = responses of RspIFwd to local CA requests. This is returned when a remote = caching agent forwards data and the requesting agent is able to acquire the= data in E or M states. This is commonly returned with RFO transactions. = It can be either a HitM or a HitFE.", "UMask": "0x4", @@ -2932,8 +3584,10 @@ }, { "BriefDescription": "Snoop Responses Received Local : RspS", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received Local : RspS : Numb= er of snoop responses received for a Local request : Filters for snoop res= ponses of RspS to local CA requests. RspS is returned when a remote cache = has data but is not forwarding it. It is a way to let the requesting socke= t know that it cannot allocate the data in E state. 
No data is sent with S RspS.", "UMask": "0x2", @@ -2941,8 +3595,10 @@ }, { "BriefDescription": "Snoop Responses Received Local : RspSFwd", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPSFWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received Local : RspSFwd : Number of snoop responses received for a Local request : Filters for a snoop response of RspSFwd to local CA requests. This is returned when a remote caching agent forwards data but holds on to its current copy. This is common for data and code reads that hit in a remote socket in E or F state.", "UMask": "0x8", @@ -2950,8 +3606,10 @@ }, { "BriefDescription": "Snoop Responses Received Local : Rsp*WB", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPWB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received Local : Rsp*WB : Number of snoop responses received for a Local request : Filters for a snoop response of RspIWB or RspSWB to local CA requests. This is returned when a non-RFO request hits in M state. Data and Code Reads can return either RspIWB or RspSWB depending on how the system has been configured.
InvItoE transactions will also return RspIWB because they must acquire ownership.", "UMask": "0x10", @@ -2959,56 +3617,70 @@ }, { "BriefDescription": "Misc Snoop Responses Received : MtoI RspIDataM", + "Counter": "0,1,2,3", "EventCode": "0x6b", "EventName": "UNC_CHA_SNOOP_RSP_MISC.MTOI_RSPDATAM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Misc Snoop Responses Received : MtoI RspIFwdM", + "Counter": "0,1,2,3", "EventCode": "0x6b", "EventName": "UNC_CHA_SNOOP_RSP_MISC.MTOI_RSPIFWDM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Misc Snoop Responses Received : Pull Data Partial - Hit LLC", + "Counter": "0,1,2,3", "EventCode": "0x6b", "EventName": "UNC_CHA_SNOOP_RSP_MISC.PULLDATAPTL_HITLLC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Misc Snoop Responses Received : Pull Data Partial - Hit SF", + "Counter": "0,1,2,3", "EventCode": "0x6b", "EventName": "UNC_CHA_SNOOP_RSP_MISC.PULLDATAPTL_HITSF", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Misc Snoop Responses Received : RspIFwdPtl Hit LLC", + "Counter": "0,1,2,3", "EventCode": "0x6b", "EventName": "UNC_CHA_SNOOP_RSP_MISC.RSPIFWDMPTL_HITLLC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Misc Snoop Responses Received : RspIFwdPtl Hit SF", + "Counter": "0,1,2,3", "EventCode": "0x6b", "EventName": "UNC_CHA_SNOOP_RSP_MISC.RSPIFWDMPTL_HITSF", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : All", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : All : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.", "UMask": "0xc001ffff", @@
-3016,16 +3688,20 @@ }, { "BriefDescription": "TOR Inserts : DDR Access", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : DDR Access : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : SF/LLC Evictions", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.EVICT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : SF/LLC Evictions : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. : TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)", "UMask": "0x2", @@ -3033,14 +3709,17 @@ }, { "BriefDescription": "TOR Inserts : Just Hits", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.HIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : Just Hits : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts; All from Local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA", "PerPkg": "1", @@ -3050,6 +3729,7 @@ }, { "BriefDescription": "TOR Inserts;CLFlush from Local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_CLFLUSH", "PerPkg": "1", @@ -3059,8 +3739,10 @@ }, { "BriefDescription": "TOR Inserts;CLFlushOpt from Local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_CLFLUSHOPT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.; CLFlushOpt events that are initiated from the Core",
"UMask": "0xc8d7ff01", @@ -3068,6 +3750,7 @@ }, { "BriefDescription": "TOR Inserts; CRd from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_CRD", "PerPkg": "1", @@ -3077,8 +3760,10 @@ }, { "BriefDescription": "TOR Inserts; CRd Pref from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_CRD_PREF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Code read prefetch from local I= A that misses in the snoop filter", "UMask": "0xc88fff01", @@ -3086,6 +3771,7 @@ }, { "BriefDescription": "TOR Inserts; DRd from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_DRD", "PerPkg": "1", @@ -3095,8 +3781,10 @@ }, { "BriefDescription": "TOR Inserts : DRd PTEs issued by iA Cores", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_DRDPTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : DRd PTEs issued by iA Cores du= e to a page walk : Counts the number of entries successfully inserted into = the TOR that match qualifications specified by the subevent. 
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc837ff01", @@ -3104,8 +3792,10 @@ }, { "BriefDescription": "TOR Inserts; DRd Opt from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_OPT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Data read opt from local IA that misses in the snoop filter", "UMask": "0xc827ff01", @@ -3113,8 +3803,10 @@ }, { "BriefDescription": "TOR Inserts; DRd Opt Pref from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_OPT_PREF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Data read opt prefetch from local IA that misses in the snoop filter", "UMask": "0xc8a7ff01", @@ -3122,6 +3814,7 @@ }, { "BriefDescription": "TOR Inserts; DRd Pref from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_PREF", "PerPkg": "1", @@ -3131,6 +3824,7 @@ }, { "BriefDescription": "TOR Inserts; Hits from Local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT", "PerPkg": "1", @@ -3140,6 +3834,7 @@ }, { "BriefDescription": "TOR Inserts; CRd hits from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD", "PerPkg": "1", @@ -3149,6 +3844,7 @@ }, { "BriefDescription": "TOR Inserts; CRd Pref hits from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD_PREF", "PerPkg": "1", @@ -3158,16 +3854,20 @@ }, { "BriefDescription": "All requests issued from IA cores to CXL accelerator memory regions that hit the LLC.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c0018101", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_HIT_CXL_ACC_LOCAL", + "Counter": "0,1,2,3", "EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c0008101", @@ -3175,6 +3875,7 @@ }, { "BriefDescription": "TOR Inserts; DRd hits from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD", "PerPkg": "1", @@ -3184,8 +3885,10 @@ }, { "BriefDescription": "TOR Inserts : DRd PTEs issued by iA Cores tha= t Hit the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRDPTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : DRd PTEs issued by iA Cores du= e to page walks that hit the LLC : Counts the number of entries successfull= y inserted into the TOR that match qualifications specified by the subevent= . Does not include addressless requests such as locks and interrupts.", "UMask": "0xc837fd01", @@ -3193,8 +3896,10 @@ }, { "BriefDescription": "TOR Inserts; DRd Opt hits from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_OPT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Data read opt from local IA tha= t hits in the snoop filter", "UMask": "0xc827fd01", @@ -3202,8 +3907,10 @@ }, { "BriefDescription": "TOR Inserts; DRd Opt Pref hits from local IA"= , + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_OPT_PREF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Data read opt prefetch from loc= al IA that hits in the snoop filter", "UMask": "0xc8a7fd01", @@ -3211,6 +3918,7 @@ }, { "BriefDescription": "TOR Inserts; DRd Pref hits from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_PREF", "PerPkg": "1", @@ -3220,8 +3928,10 @@ }, { "BriefDescription": "TOR Inserts : ItoMs issued by iA Cores that H= it LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": 
"UNC_CHA_TOR_INSERTS.IA_HIT_ITOM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xcc47fd01", @@ -3229,8 +3939,10 @@ }, { "BriefDescription": "TOR Inserts; LLCPrefCode hits from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFCODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Last level cache prefetch code = read from local IA that hits in the snoop filter", "UMask": "0xcccffd01", @@ -3238,8 +3950,10 @@ }, { "BriefDescription": "TOR Inserts; LLCPrefData hits from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFDATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Last level cache prefetch data = read from local IA that hits in the snoop filter", "UMask": "0xccd7fd01", @@ -3247,6 +3961,7 @@ }, { "BriefDescription": "TOR Inserts; LLCPrefRFO hits from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFRFO", "PerPkg": "1", @@ -3256,6 +3971,7 @@ }, { "BriefDescription": "TOR Inserts; RFO hits from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO", "PerPkg": "1", @@ -3265,6 +3981,7 @@ }, { "BriefDescription": "TOR Inserts; RFO Pref hits from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO_PREF", "PerPkg": "1", @@ -3274,8 +3991,10 @@ }, { "BriefDescription": "TOR Inserts;ItoM from Local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_ITOM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by 
the subevent.; ItoM events that are initiated from the Core", "UMask": "0xcc47ff01", @@ -3283,8 +4002,10 @@ }, { "BriefDescription": "TOR Inserts : ItoMCacheNears issued by iA Cores", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_ITOMCACHENEAR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xcd47ff01", @@ -3292,8 +4013,10 @@ }, { "BriefDescription": "TOR Inserts; LLCPrefCode from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFCODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Last level cache prefetch code read from local IA.", "UMask": "0xcccfff01", @@ -3301,6 +4024,7 @@ }, { "BriefDescription": "TOR Inserts; LLCPrefData from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFDATA", "PerPkg": "1", @@ -3310,6 +4034,7 @@ }, { "BriefDescription": "TOR Inserts; LLCPrefRFO from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFRFO", "PerPkg": "1", @@ -3319,6 +4044,7 @@ }, { "BriefDescription": "TOR Inserts; misses from Local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS", "PerPkg": "1", @@ -3328,6 +4054,7 @@ }, { "BriefDescription": "TOR Inserts for CRd misses from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD", "PerPkg": "1", @@ -3337,16 +4064,20 @@ }, { "BriefDescription": "CRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRDMORPH_CXL_ACC", + "Experimental": "1", "PerPkg": "1",
"UMask": "0x10c80b8201", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : CRd issued by iA Cores that Mis= sed the LLC - HOMed locally", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc80efe01", @@ -3354,6 +4085,7 @@ }, { "BriefDescription": "TOR Inserts; CRd Pref misses from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF", "PerPkg": "1", @@ -3363,8 +4095,10 @@ }, { "BriefDescription": "TOR Inserts : CRd_Prefs issued by iA Cores th= at Missed the LLC - HOMed locally", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc88efe01", @@ -3372,8 +4106,10 @@ }, { "BriefDescription": "TOR Inserts : CRd_Prefs issued by iA Cores th= at Missed the LLC - HOMed remotely", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. 
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc88f7e01", @@ -3381,8 +4117,10 @@ }, { "BriefDescription": "TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed remotely", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc80f7e01", @@ -3390,16 +4128,20 @@ }, { "BriefDescription": "All requests issued from IA cores to CXL accelerator memory regions that miss the LLC.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c0018201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_CXL_ACC_LOCAL", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c0008201", @@ -3407,6 +4149,7 @@ }, { "BriefDescription": "TOR Inserts for DRd misses from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD", "PerPkg": "1", @@ -3416,16 +4159,20 @@ }, { "BriefDescription": "DRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRDMORPH_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8138201", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : DRd PTEs issued by iA Cores that Missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRDPTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : DRd PTEs issued by
iA Cores due to a page walk that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc837fe01", @@ -3433,16 +4180,20 @@ }, { "BriefDescription": "DRds issued from an IA core which miss the L3 and target memory in a CXL type 2 memory expander card.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8178201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_CXL_ACC_LOCAL", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8168201", @@ -3450,6 +4201,7 @@ }, { "BriefDescription": "TOR Inserts for DRds issued by IA Cores targeting DDR Mem that Missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR", "PerPkg": "1", @@ -3459,6 +4211,7 @@ }, { "BriefDescription": "TOR Inserts for DRd misses from local IA targeting local memory", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL", "PerPkg": "1", @@ -3468,6 +4221,7 @@ }, { "BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL_DDR", "PerPkg": "1", @@ -3477,6 +4231,7 @@ }, { "BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL_PMM", "PerPkg": "1", @@ -3486,8 +4241,10 @@ }, { "BriefDescription": "TOR Inserts; DRd Opt misses from local IA", + "Counter": "0,1,2,3", "EventCode":
"0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Data read opt from local IA tha= t misses in the snoop filter", "UMask": "0xc827fe01", @@ -3495,8 +4252,10 @@ }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_CXL_ACC_L= OCAL", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8268201", @@ -3504,8 +4263,10 @@ }, { "BriefDescription": "TOR Inserts; DRd Opt Pref misses from local I= A", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Data read opt prefetch from loc= al IA that misses in the snoop filter", "UMask": "0xc8a7fe01", @@ -3513,8 +4274,10 @@ }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_CXL_= ACC_LOCAL", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_CXL_ACC_LOC= AL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8a68201", @@ -3522,6 +4285,7 @@ }, { "BriefDescription": "TOR Inserts for DRds issued by iA Cores targe= ting PMM Mem that Missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM", "PerPkg": "1", @@ -3531,6 +4295,7 @@ }, { "BriefDescription": "TOR Inserts for DRd Pref misses from local IA= ", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF", "PerPkg": "1", @@ -3540,16 +4305,20 @@ }, { "BriefDescription": "L2 data prefetches issued from an IA core whi= ch miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8978201", "Unit": "CHA" }, 
{ "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_CXL_ACC_= LOCAL", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8968201", @@ -3557,8 +4326,10 @@ }, { "BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores ta= rgeting DDR Mem that Missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8978601", @@ -3566,6 +4337,7 @@ }, { "BriefDescription": "TOR Inserts for DRd Pref misses from local IA= targeting local memory", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL", "PerPkg": "1", @@ -3575,8 +4347,10 @@ }, { "BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores ta= rgeting DDR Mem that Missed the LLC - HOMed locally", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8968601", @@ -3584,8 +4358,10 @@ }, { "BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores ta= rgeting PMM Mem that Missed the LLC - HOMed locally", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. 
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8968a01", @@ -3593,8 +4369,10 @@ }, { "BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8978a01", @@ -3602,6 +4380,7 @@ }, { "BriefDescription": "TOR Inserts for DRd Pref misses from local IA targeting remote memory", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE", "PerPkg": "1", @@ -3611,8 +4390,10 @@ }, { "BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8970601", @@ -3620,8 +4401,10 @@ }, { "BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8970a01", @@ -3629,6 +4412,7 @@ }, { "BriefDescription": "TOR Inserts for DRd misses from local IA targeting remote memory", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE", "PerPkg": "1", @@ -3638,6 +4422,7 @@ }, { "BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE_DDR", "PerPkg": "1", @@ -3647,6 +4432,7 @@ }, { "BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE_PMM", "PerPkg": "1", @@ -3656,8 +4442,10 @@ }, { "BriefDescription": "TOR Inserts : ItoMs issued by iA Cores that Missed LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_ITOM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xcc47fe01", @@ -3665,8 +4453,10 @@ }, { "BriefDescription": "TOR Inserts; LLCPrefCode misses from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFCODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts; Last level cache prefetch code read from local IA that misses in the snoop filter", "UMask": "0xcccffe01", @@ -3674,14 +4464,17 @@ }, { "BriefDescription": "LLC Prefetch Code transactions issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFCODE_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10cccf8201", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts; LLCPrefData misses from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA", "PerPkg": "1", @@ -3691,16 +4484,20 @@ }, { "BriefDescription": "LLC data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10ccd78201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA_CXL_ACC_LOCAL", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10ccd68201", @@ -3708,6 +4505,7 @@ }, { "BriefDescription": "TOR Inserts; LLCPrefRFO misses from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO", "PerPkg": "1", @@ -3717,16 +4515,20 @@ }, { "BriefDescription": "L2 RFO prefetches issued from an IA core which miss the L3 and target
memory in a CXL type 2 accelerator.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8878201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO_CXL_ACC_LOCAL", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8868201", @@ -3734,8 +4536,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8668601", @@ -3743,8 +4547,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8668a01", @@ -3752,8 +4558,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86e8601", @@ -3761,8 +4569,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86e8a01", @@ -3770,8 +4580,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCILF_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8670601", @@ -3779,8 +4591,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCILF_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.
= Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8670a01", @@ -3788,8 +4602,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLs issued by iA Cores target= ing DDR that missed the LLC - HOMed remotely", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCIL_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86f0601", @@ -3797,8 +4613,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLs issued by iA Cores target= ing PMM that missed the LLC - HOMed remotely", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCIL_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. 
= Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86f0a01", @@ -3806,6 +4624,7 @@ }, { "BriefDescription": "TOR Inserts; RFO misses from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO", "PerPkg": "1", @@ -3815,24 +4634,30 @@ }, { "BriefDescription": "RFO and L2 RFO prefetches issued from an IA c= ore which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFOMORPH_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8038201", "Unit": "CHA" }, { "BriefDescription": "RFOs issued from an IA core which miss the L3= and target memory in a CXL type 2 accelerator.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8078201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_CXL_ACC_LOCAL= ", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8068201", @@ -3840,6 +4665,7 @@ }, { "BriefDescription": "TOR Inserts RFO misses from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_LOCAL", "PerPkg": "1", @@ -3849,6 +4675,7 @@ }, { "BriefDescription": "TOR Inserts; RFO pref misses from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF", "PerPkg": "1", @@ -3858,16 +4685,20 @@ }, { "BriefDescription": "LLC RFO prefetches issued from an IA core whi= ch miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10ccc78201", "Unit": "CHA" }, { "BriefDescription": 
"UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_CXL_ACC_= LOCAL", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10ccc68201", @@ -3875,6 +4706,7 @@ }, { "BriefDescription": "TOR Inserts; RFO prefetch misses from local I= A", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_LOCAL", "PerPkg": "1", @@ -3884,6 +4716,7 @@ }, { "BriefDescription": "TOR Inserts; RFO prefetch misses from local I= A", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_REMOTE", "PerPkg": "1", @@ -3893,6 +4726,7 @@ }, { "BriefDescription": "TOR Inserts; RFO misses from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_REMOTE", "PerPkg": "1", @@ -3902,8 +4736,10 @@ }, { "BriefDescription": "TOR Inserts : UCRdFs issued by iA Cores that = Missed LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_UCRDF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc877de01", @@ -3911,8 +4747,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLs issued by iA Cores that M= issed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. 
= Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86ffe01", @@ -3920,8 +4758,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLF issued by iA Cores that M= issed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc867fe01", @@ -3929,8 +4769,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targe= ting DDR that missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8678601", @@ -3938,8 +4780,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targe= ting PMM that missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8678a01", @@ -3947,8 +4791,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLs issued by iA Cores target= ing DDR that missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. 
= Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86f8601", @@ -3956,8 +4802,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLs issued by iA Cores target= ing PMM that missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86f8a01", @@ -3965,8 +4813,10 @@ }, { "BriefDescription": "TOR Inserts : WiLs issued by iA Cores that Mi= ssed LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WIL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc87fde01", @@ -3974,6 +4824,7 @@ }, { "BriefDescription": "TOR Inserts; RFO from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_RFO", "PerPkg": "1", @@ -3983,6 +4834,7 @@ }, { "BriefDescription": "TOR Inserts; RFO pref from local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_RFO_PREF", "PerPkg": "1", @@ -3992,6 +4844,7 @@ }, { "BriefDescription": "TOR Inserts;SpecItoM from Local IA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_SPECITOM", "PerPkg": "1", @@ -4001,8 +4854,10 @@ }, { "BriefDescription": "TOR Inserts : WBEFtoEs issued by an IA Core. = Non Modified Write Backs", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_WBEFTOE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WbEFtoEs issued by iA Cores . 
(Non Modified= Write Backs) :Counts the number of entries successfully inserted into the= TOR that match qualifications specified by the subevent. Does not include= addressless requests such as locks and interrupts.", "UMask": "0xcc3fff01", @@ -4010,8 +4865,10 @@ }, { "BriefDescription": "TOR Inserts : WBEFtoEs issued by an IA Core. = Non Modified Write Backs", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_WBEFTOI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WbEFtoEs issued by iA Cores . (Non Modified= Write Backs) :Counts the number of entries successfully inserted into the= TOR that match qualifications specified by the subevent. Does not include= addressless requests such as locks and interrupts.", "UMask": "0xcc37ff01", @@ -4019,8 +4876,10 @@ }, { "BriefDescription": "TOR Inserts : WBEFtoEs issued by an IA Core. = Non Modified Write Backs", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_WBMTOE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WbEFtoEs issued by iA Cores . (Non Modified= Write Backs) :Counts the number of entries successfully inserted into the= TOR that match qualifications specified by the subevent. Does not include= addressless requests such as locks and interrupts.", "UMask": "0xcc2fff01", @@ -4028,8 +4887,10 @@ }, { "BriefDescription": "TOR Inserts : WbMtoIs issued by an iA Cores. = Modified Write Backs", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_WBMTOI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WbMtoIs issued by iA Cores . (Modified Writ= e Backs) :Counts the number of entries successfully inserted into the TOR = that match qualifications specified by the subevent. Does not include addr= essless requests such as locks and interrupts.", "UMask": "0xcc27ff01", @@ -4037,8 +4898,10 @@ }, { "BriefDescription": "TOR Inserts : WBEFtoEs issued by an IA Core. 
= Non Modified Write Backs", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_WBSTOI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WbEFtoEs issued by iA Cores . (Non Modified= Write Backs) :Counts the number of entries successfully inserted into the= TOR that match qualifications specified by the subevent. Does not include= addressless requests such as locks and interrupts.", "UMask": "0xcc67ff01", @@ -4046,8 +4909,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLs issued by iA Cores", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_WCIL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86fff01", @@ -4055,8 +4920,10 @@ }, { "BriefDescription": "TOR Inserts : WCiLF issued by iA Cores", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_WCILF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. 
= Does not include addressless requests such as locks and interrupts.", "UMask": "0xc867ff01", @@ -4064,6 +4931,7 @@ }, { "BriefDescription": "TOR Inserts; All from local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO", "PerPkg": "1", @@ -4073,6 +4941,7 @@ }, { "BriefDescription": "TOR Inserts : CLFlushes issued by IO Devices"= , + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_CLFLUSH", "PerPkg": "1", @@ -4082,6 +4951,7 @@ }, { "BriefDescription": "TOR Inserts; Hits from local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT", "PerPkg": "1", @@ -4091,6 +4961,7 @@ }, { "BriefDescription": "TOR Inserts; ItoM hits from local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_ITOM", "PerPkg": "1", @@ -4100,6 +4971,7 @@ }, { "BriefDescription": "TOR Inserts : ItoMCacheNears, indicating a pa= rtial write request, from IO Devices that hit the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR", "PerPkg": "1", @@ -4109,6 +4981,7 @@ }, { "BriefDescription": "TOR Inserts; RdCur and FsRdCur hits from loca= l IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_PCIRDCUR", "PerPkg": "1", @@ -4118,6 +4991,7 @@ }, { "BriefDescription": "TOR Inserts; RFO hits from local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_RFO", "PerPkg": "1", @@ -4127,6 +5001,7 @@ }, { "BriefDescription": "TOR Inserts for ItoM from local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM", "PerPkg": "1", @@ -4136,6 +5011,7 @@ }, { "BriefDescription": "TOR Inserts for ItoMCacheNears from IO device= s.", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR", "PerPkg": "1", @@ -4145,6 +5021,7 @@ }, { "BriefDescription": "ItoMCacheNear 
(partial write) transactions fr= om an IO device that addresses memory on the local socket", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_LOCAL", "PerPkg": "1", @@ -4154,6 +5031,7 @@ }, { "BriefDescription": "ItoMCacheNear (partial write) transactions fr= om an IO device that addresses memory on a remote socket", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_REMOTE", "PerPkg": "1", @@ -4163,6 +5041,7 @@ }, { "BriefDescription": "ItoM (write) transactions from an IO device t= hat addresses memory on the local socket", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM_LOCAL", "PerPkg": "1", @@ -4172,6 +5051,7 @@ }, { "BriefDescription": "ItoM (write) transactions from an IO device t= hat addresses memory on a remote socket", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM_REMOTE", "PerPkg": "1", @@ -4181,6 +5061,7 @@ }, { "BriefDescription": "TOR Inserts; Misses from local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS", "PerPkg": "1", @@ -4190,6 +5071,7 @@ }, { "BriefDescription": "TOR Inserts : ItoM, indicating a full cacheli= ne write request, from IO Devices that missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM", "PerPkg": "1", @@ -4199,6 +5081,7 @@ }, { "BriefDescription": "TOR Inserts : ItoMCacheNears, indicating a pa= rtial write request, from IO Devices that missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR", "PerPkg": "1", @@ -4208,6 +5091,7 @@ }, { "BriefDescription": "TOR Inserts; RdCur and FsRdCur requests from = local IO that miss LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR", "PerPkg": "1", @@ -4217,6 +5101,7 @@ }, { "BriefDescription": "TOR Inserts; RFO 
misses from local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RFO", "PerPkg": "1", @@ -4226,6 +5111,7 @@ }, { "BriefDescription": "TOR Inserts for RdCur from local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR", "PerPkg": "1", @@ -4235,6 +5121,7 @@ }, { "BriefDescription": "PCIRDCUR (read) transactions from an IO devic= e that addresses memory on a remote socket", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_LOCAL", "PerPkg": "1", @@ -4244,6 +5131,7 @@ }, { "BriefDescription": "PCIRDCUR (read) transactions from an IO devic= e that addresses memory on the local socket", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_REMOTE", "PerPkg": "1", @@ -4253,6 +5141,7 @@ }, { "BriefDescription": "TOR Inserts; RFO from local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_RFO", "PerPkg": "1", @@ -4262,6 +5151,7 @@ }, { "BriefDescription": "TOR Inserts : WbMtoIs issued by IO Devices", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_WBMTOI", "PerPkg": "1", @@ -4271,8 +5161,10 @@ }, { "BriefDescription": "TOR Inserts : IPQ", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IPQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : IPQ : Counts the number of ent= ries successfully inserted into the TOR that match qualifications specified= by the subevent.", "UMask": "0x8", @@ -4280,8 +5172,10 @@ }, { "BriefDescription": "TOR Inserts : IRQ - iA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IRQ_IA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : IRQ - iA : Counts the number o= f entries successfully inserted into the TOR that match qualifications spec= ified by the subevent. 
: From an iA Core", "UMask": "0x1", @@ -4289,8 +5183,10 @@ }, { "BriefDescription": "TOR Inserts : IRQ - Non iA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IRQ_NON_IA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : IRQ - Non iA : Counts the numb= er of entries successfully inserted into the TOR that match qualifications = specified by the subevent.", "UMask": "0x10", @@ -4298,24 +5194,30 @@ }, { "BriefDescription": "TOR Inserts : Just ISOC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.ISOC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : Just ISOC : Counts the number = of entries successfully inserted into the TOR that match qualifications spe= cified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : Just Local Targets", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.LOCAL_TGT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : Just Local Targets : Counts th= e number of entries successfully inserted into the TOR that match qualifica= tions specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : All from Local iA and IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.LOC_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : All from Local iA and IO : Cou= nts the number of entries successfully inserted into the TOR that match qua= lifications specified by the subevent. 
: All locally initiated requests", "UMask": "0xc000ff05", @@ -4323,8 +5225,10 @@ }, { "BriefDescription": "TOR Inserts : All from Local iA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.LOC_IA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : All from Local iA : Counts the= number of entries successfully inserted into the TOR that match qualificat= ions specified by the subevent. : All locally initiated requests from iA Co= res", "UMask": "0xc000ff01", @@ -4332,8 +5236,10 @@ }, { "BriefDescription": "TOR Inserts : All from Local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.LOC_IO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : All from Local IO : Counts the= number of entries successfully inserted into the TOR that match qualificat= ions specified by the subevent. : All locally generated IO traffic", "UMask": "0xc000ff04", @@ -4341,80 +5247,100 @@ }, { "BriefDescription": "TOR Inserts : Match the Opcode in b[29:19] of= the extended umask field", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.MATCH_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : Match the Opcode in b[29:19] o= f the extended umask field : Counts the number of entries successfully inse= rted into the TOR that match qualifications specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : Just Misses", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.MISS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : Just Misses : Counts the numbe= r of entries successfully inserted into the TOR that match qualifications s= pecified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : MMCFG Access", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.MMCFG", + "Experimental": "1", "PerPkg": "1", 
"PublicDescription": "TOR Inserts : MMCFG Access : Counts the numb= er of entries successfully inserted into the TOR that match qualifications = specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : MMIO Access", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.MMIO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : MMIO Access : Counts the numbe= r of entries successfully inserted into the TOR that match qualifications s= pecified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : Just NearMem", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.NEARMEM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : Just NearMem : Counts the numb= er of entries successfully inserted into the TOR that match qualifications = specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : Just NonCoherent", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.NONCOH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : Just NonCoherent : Counts the = number of entries successfully inserted into the TOR that match qualificati= ons specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : Just NotNearMem", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.NOT_NEARMEM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : Just NotNearMem : Counts the n= umber of entries successfully inserted into the TOR that match qualificatio= ns specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : PMM Access", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : PM Access : Counts the number = of entries successfully inserted into the TOR that match qualifications 
spe= cified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : Match the PreMorphed Opcode in = b[29:19] of the extended umask field", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.PREMORPH_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : Match the PreMorphed Opcode in= b[29:19] of the extended umask field : Counts the number of entries succes= sfully inserted into the TOR that match qualifications specified by the sub= event.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : PRQ - IOSF", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.PRQ_IOSF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : PRQ - IOSF : Counts the number= of entries successfully inserted into the TOR that match qualifications sp= ecified by the subevent. : From a PCIe Device", "UMask": "0x4", @@ -4422,8 +5348,10 @@ }, { "BriefDescription": "TOR Inserts : PRQ - Non IOSF", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.PRQ_NON_IOSF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : PRQ - Non IOSF : Counts the nu= mber of entries successfully inserted into the TOR that match qualification= s specified by the subevent.", "UMask": "0x20", @@ -4431,16 +5359,20 @@ }, { "BriefDescription": "TOR Inserts : Just Remote Targets", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.REMOTE_TGT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : Just Remote Targets : Counts t= he number of entries successfully inserted into the TOR that match qualific= ations specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts : All from Remote", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.REM_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : All from Remote : Counts the n= umber of 
entries successfully inserted into the TOR that match qualificatio= ns specified by the subevent. : All remote requests (e.g. snoops, writeback= s) that came from remote sockets", "UMask": "0xc001ffc8", @@ -4448,8 +5380,10 @@ }, { "BriefDescription": "TOR Inserts : All Snoops from Remote", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.REM_SNPS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : All Snoops from Remote : Count= s the number of entries successfully inserted into the TOR that match quali= fications specified by the subevent. : All snoops to this LLC that came fro= m remote sockets", "UMask": "0xc001ff08", @@ -4457,8 +5391,10 @@ }, { "BriefDescription": "TOR Inserts : RRQ", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.RRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : RRQ : Counts the number of ent= ries successfully inserted into the TOR that match qualifications specified= by the subevent.", "UMask": "0x40", @@ -4466,8 +5402,10 @@ }, { "BriefDescription": "TOR Inserts; All Snoops from Remote", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.SNPS_FROM_REM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. 
Al= l snoops to this LLC that came from remote sockets.", "UMask": "0xc001ff08", @@ -4475,8 +5413,10 @@ }, { "BriefDescription": "TOR Inserts : WBQ", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.WBQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Inserts : WBQ : Counts the number of ent= ries successfully inserted into the TOR that match qualifications specified= by the subevent.", "UMask": "0x80", @@ -4484,8 +5424,10 @@ }, { "BriefDescription": "TOR Occupancy : All", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : All : For each cycle, this e= vent accumulates the number of valid entries in the TOR that match qualific= ations specified by the subevent. T", "UMask": "0xc001ffff", @@ -4493,16 +5435,20 @@ }, { "BriefDescription": "TOR Occupancy : DDR Access", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : DDR Access : For each cycle,= this event accumulates the number of valid entries in the TOR that match q= ualifications specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : SF/LLC Evictions", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.EVICT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : SF/LLC Evictions : For each = cycle, this event accumulates the number of valid entries in the TOR that m= atch qualifications specified by the subevent. 
T : TOR allocation occurre= d as a result of SF/LLC evictions (came from the ISMQ)", "UMask": "0x2", @@ -4510,14 +5456,17 @@ }, { "BriefDescription": "TOR Occupancy : Just Hits", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.HIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : Just Hits : For each cycle, = this event accumulates the number of valid entries in the TOR that match qu= alifications specified by the subevent. T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy; All from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA", "PerPkg": "1", @@ -4527,6 +5476,7 @@ }, { "BriefDescription": "TOR Occupancy : CLFlushes issued by iA Cores"= , + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CLFLUSH", "PerPkg": "1", @@ -4536,8 +5486,10 @@ }, { "BriefDescription": "TOR Occupancy : CLFlushOpts issued by iA Core= s", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CLFLUSHOPT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : CLFlushOpts issued by iA Cor= es : For each cycle, this event accumulates the number of valid entries in = the TOR that match qualifications specified by the subevent. 
Does not i= nclude addressless requests such as locks and interrupts.", "UMask": "0xc8d7ff01", @@ -4545,6 +5497,7 @@ }, { "BriefDescription": "TOR Occupancy; CRd from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CRD", "PerPkg": "1", @@ -4554,8 +5507,10 @@ }, { "BriefDescription": "TOR Occupancy; CRd Pref from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CRD_PREF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Code read prefetch from local= IA that misses in the snoop filter", "UMask": "0xc88fff01", @@ -4563,6 +5518,7 @@ }, { "BriefDescription": "TOR Occupancy; DRd from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD", "PerPkg": "1", @@ -4572,8 +5528,10 @@ }, { "BriefDescription": "TOR Occupancy : DRdPte issued by iA Cores due= to a page walk", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRDPTE", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -4583,8 +5541,10 @@ }, { "BriefDescription": "TOR Occupancy; DRd Opt from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_OPT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Data read opt from local IA t= hat misses in the snoop filter", "UMask": "0xc827ff01", @@ -4592,8 +5552,10 @@ }, { "BriefDescription": "TOR Occupancy; DRd Opt Pref from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_OPT_PREF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Data read opt prefetch from l= ocal IA that misses in the snoop filter", "UMask": "0xc8a7ff01", @@ -4601,6 +5563,7 @@ }, { "BriefDescription": "TOR Occupancy; DRd Pref from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_PREF", "PerPkg": "1", @@ -4610,6 +5573,7 @@ 
}, { "BriefDescription": "TOR Occupancy; Hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT", "PerPkg": "1", @@ -4619,6 +5583,7 @@ }, { "BriefDescription": "TOR Occupancy; CRd hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD", "PerPkg": "1", @@ -4628,6 +5593,7 @@ }, { "BriefDescription": "TOR Occupancy; CRd Pref hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD_PREF", "PerPkg": "1", @@ -4637,16 +5603,20 @@ }, { "BriefDescription": "TOR Occupancy for All requests issued from IA= cores to CXL accelerator memory regions that hit the LLC.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c0018101", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CXL_ACC_LOCAL", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c0008101", @@ -4654,6 +5624,7 @@ }, { "BriefDescription": "TOR Occupancy; DRd hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD", "PerPkg": "1", @@ -4663,8 +5634,10 @@ }, { "BriefDescription": "TOR Occupancy : DRdPte issued by iA Cores due= to a page walk that hit the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRDPTE", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -4674,8 +5647,10 @@ }, { "BriefDescription": "TOR Occupancy; DRd Opt hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_OPT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Data read opt from local IA t= hat hits in the snoop filter", "UMask": "0xc827fd01", @@ -4683,8 +5658,10 @@ 
}, { "BriefDescription": "TOR Occupancy; DRd Opt Pref hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_OPT_PREF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Data read opt prefetch from local IA that hits in the snoop filter", "UMask": "0xc8a7fd01", @@ -4692,6 +5669,7 @@ }, { "BriefDescription": "TOR Occupancy; DRd Pref hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_PREF", "PerPkg": "1", @@ -4701,8 +5679,10 @@ }, { "BriefDescription": "TOR Occupancy : ItoMs issued by iA Cores that Hit LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_ITOM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores that Hit LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xcc47fd01", @@ -4710,8 +5690,10 @@ }, { "BriefDescription": "TOR Occupancy; LLCPrefCode hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFCODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Last level cache prefetch code read from local IA that hits in the snoop filter", "UMask": "0xcccffd01", @@ -4719,8 +5701,10 @@ }, { "BriefDescription": "TOR Occupancy; LLCPrefData hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFDATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Last level cache prefetch data read from local IA that hits in the snoop filter", "UMask": "0xccd7fd01", @@ -4728,6 +5712,7 @@ }, { "BriefDescription": "TOR Occupancy; LLCPrefRFO hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName":
"UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFRFO", "PerPkg": "1", @@ -4737,6 +5722,7 @@ }, { "BriefDescription": "TOR Occupancy; RFO hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO", "PerPkg": "1", @@ -4746,6 +5732,7 @@ }, { "BriefDescription": "TOR Occupancy; RFO Pref hits from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO_PREF", "PerPkg": "1", @@ -4755,8 +5742,10 @@ }, { "BriefDescription": "TOR Occupancy : ItoMs issued by iA Cores", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_ITOM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xcc47ff01", @@ -4764,8 +5753,10 @@ }, { "BriefDescription": "TOR Occupancy : ItoMCacheNears issued by iA Cores", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_ITOMCACHENEAR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : ItoMCacheNears issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xcd47ff01", @@ -4773,8 +5764,10 @@ }, { "BriefDescription": "TOR Occupancy; LLCPrefCode from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFCODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Last level cache prefetch data read from local IA.", "UMask": "0xcccfff01", @@ -4782,6 +5775,7 @@ }, { "BriefDescription": "TOR Occupancy; LLCPrefData from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFDATA", "PerPkg": "1", @@ -4791,6 +5785,7 @@ }, { "BriefDescription": "TOR Occupancy; LLCPrefRFO from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFRFO", "PerPkg": "1", @@ -4800,6 +5795,7 @@ }, { "BriefDescription": "TOR Occupancy; Misses from Local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS", "PerPkg": "1", @@ -4809,6 +5805,7 @@ }, { "BriefDescription": "TOR Occupancy; CRd misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD", "PerPkg": "1", @@ -4818,16 +5815,20 @@ }, { "BriefDescription": "TOR Occupancy for CRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRDMORPH_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c80b8201", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed locally", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that
match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc80efe01", @@ -4835,8 +5836,10 @@ }, { "BriefDescription": "TOR Occupancy; CRd Pref misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Code read prefetch from local IA that misses in the snoop filter", "UMask": "0xc88ffe01", @@ -4844,8 +5847,10 @@ }, { "BriefDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc88efe01", @@ -4853,8 +5858,10 @@ }, { "BriefDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc88f7e01", @@ -4862,8 +5869,10 @@ }, { "BriefDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed remotely", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc80f7e01", @@ -4871,16 +5880,20 @@ }, { "BriefDescription": "TOR Occupancy for All requests issued from IA cores to CXL accelerator memory regions that miss the LLC.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c0018201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CXL_ACC_LOCAL", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c0008201", @@ -4888,6 +5901,7 @@ }, { "BriefDescription": "TOR Occupancy for DRd misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD", "PerPkg": "1", @@ -4897,16 +5911,20 @@ }, { "BriefDescription": "TOR Occupancy for DRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRDMORPH_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8138201", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk that missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName":
"UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRDPTE", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -4916,16 +5934,20 @@ }, { "BriefDescription": "TOR Occupancy for DRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 memory expander card.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8178201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_CXL_ACC_LOCAL", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8168201", @@ -4933,6 +5955,7 @@ }, { "BriefDescription": "TOR Occupancy for DRds issued by iA Cores targeting DDR Mem that Missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR", "PerPkg": "1", @@ -4942,6 +5965,7 @@ }, { "BriefDescription": "TOR Occupancy for DRd misses from local IA targeting local memory", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL", "PerPkg": "1", @@ -4951,6 +5975,7 @@ }, { "BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL_DDR", "PerPkg": "1", @@ -4960,6 +5985,7 @@ }, { "BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL_PMM", "PerPkg": "1", @@ -4969,8 +5995,10 @@ }, { "BriefDescription": "TOR Occupancy; DRd Opt misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT", + "Experimental": "1", "PerPkg": "1", "PublicDescription":
"TOR Occupancy; Data read opt from local IA that misses in the snoop filter", "UMask": "0xc827fe01", @@ -4978,8 +6006,10 @@ }, { "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_CXL_ACC_LOCAL", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8268201", @@ -4987,8 +6017,10 @@ }, { "BriefDescription": "TOR Occupancy; DRd Opt Pref misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_PREF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Data read opt prefetch from local IA that misses in the snoop filter", "UMask": "0xc8a7fe01", @@ -4996,8 +6028,10 @@ }, { "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_PREF_CXL_ACC_LOCAL", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_PREF_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8a68201", @@ -5005,6 +6039,7 @@ }, { "BriefDescription": "TOR Occupancy for DRds issued by iA Cores targeting PMM Mem that Missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM", "PerPkg": "1", @@ -5014,6 +6049,7 @@ }, { "BriefDescription": "TOR Occupancy; DRd Pref misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF", "PerPkg": "1", @@ -5023,16 +6059,20 @@ }, { "BriefDescription": "TOR Occupancy for L2 data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8978201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_CXL_ACC_LOCAL", + "Counter": "0", "EventCode":
"0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8968201", @@ -5040,8 +6080,10 @@ }, { "BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8978601", @@ -5049,8 +6091,10 @@ }, { "BriefDescription": "TOR Occupancy; DRd Pref misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Data read prefetch from local IA that misses in the snoop filter", "UMask": "0xc896fe01", @@ -5058,8 +6102,10 @@ }, { "BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8968601", @@ -5067,8 +6113,10 @@ }, { "BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8968a01", @@ -5076,8 +6124,10 @@ }, { "BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8978a01", @@ -5085,6 +6135,7 @@ }, { "BriefDescription": "TOR Occupancy; DRd Pref misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE", "PerPkg": "1", @@ -5094,8 +6145,10 @@ }, { "BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8970601", @@ -5103,8 +6156,10 @@ }, { "BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8970a01", @@ -5112,6 +6167,7 @@ }, { "BriefDescription": "TOR Occupancy for DRd misses from local IA targeting remote memory", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE", "PerPkg": "1", @@ -5121,6 +6177,7 @@ }, { "BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE_DDR", "PerPkg": "1", @@ -5130,6 +6187,7 @@ }, { "BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE_PMM", "PerPkg": "1", @@ -5139,8 +6197,10 @@ }, { "BriefDescription": "TOR Occupancy : ItoMs issued by iA Cores that Missed LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_ITOM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores that Missed LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xcc47fe01", @@ -5148,8 +6208,10 @@ }, { "BriefDescription": "TOR Occupancy; LLCPrefCode misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFCODE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy; Last level cache prefetch code read from local IA that misses in the snoop filter", "UMask": "0xcccffe01", @@ -5157,14 +6219,17 @@ }, { "BriefDescription": "TOR Occupancy for LLC Prefetch Code transactions issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFCODE_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10cccf8201", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy; LLCPrefData misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA", "PerPkg": "1", @@ -5174,16 +6239,20 @@ }, { "BriefDescription": "TOR Occupancy for LLC data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10ccd78201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA_CXL_ACC_LOCAL", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10ccd68201", @@ -5191,6 +6260,7 @@ }, { "BriefDescription": "TOR Occupancy; LLCPrefRFO misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO", "PerPkg": "1", @@ -5200,16 +6270,20 @@ }, { "BriefDescription": "TOR Occupancy for L2 RFO prefetches issued from
an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8878201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO_CXL_ACC_LOCAL", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8868201", @@ -5217,8 +6291,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCILF_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8668601", @@ -5226,8 +6302,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCILF_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8668a01", @@ -5235,8 +6313,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCIL_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86e8601", @@ -5244,8 +6324,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCIL_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86e8a01", @@ -5253,8 +6335,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCILF_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8670601", @@ -5262,8 +6346,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCILF_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8670a01", @@ -5271,8 +6357,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCIL_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86f0601", @@ -5280,8 +6368,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCIL_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86f0a01", @@ -5289,6 +6379,7 @@ }, { "BriefDescription": "TOR Occupancy; RFO misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO", "PerPkg": "1", @@ -5298,24 +6389,30 @@ }, { "BriefDescription": "TOR Occupancy for RFO and L2 RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFOMORPH_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8038201", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy for RFOs issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10c8078201", "Unit": "CHA" }, { "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_CXL_ACC_LOCAL", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10c8068201", @@ -5323,6 +6420,7 @@ }, { "BriefDescription": "TOR Occupancy; RFO misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_LOCAL", "PerPkg": "1", @@ -5332,6 +6430,7 @@ }, { "BriefDescription": "TOR Occupancy; RFO prefetch misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF", "PerPkg": "1", @@ -5341,16 +6440,20 @@ }, { "BriefDescription": "TOR Occupancy for LLC RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_CXL_ACC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10ccc78201", "Unit":
"CHA" }, { "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_CXL_ACC_LOCAL", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_CXL_ACC_LOCAL", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x000", "UMask": "0x10ccc68201", @@ -5358,6 +6461,7 @@ }, { "BriefDescription": "TOR Occupancy; RFO prefetch misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_LOCAL", "PerPkg": "1", @@ -5367,6 +6471,7 @@ }, { "BriefDescription": "TOR Occupancy; RFO prefetch misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_REMOTE", "PerPkg": "1", @@ -5376,6 +6481,7 @@ }, { "BriefDescription": "TOR Occupancy; RFO misses from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_REMOTE", "PerPkg": "1", @@ -5385,8 +6491,10 @@ }, { "BriefDescription": "TOR Occupancy : UCRdFs issued by iA Cores that Missed LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_UCRDF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : UCRdFs issued by iA Cores that Missed LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc877de01", @@ -5394,8 +6502,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores that Missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86ffe01", @@ -5403,8 +6513,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLF issued by iA Cores that Missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLF issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc867fe01", @@ -5412,8 +6524,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8678601", @@ -5421,8 +6535,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8678a01", @@ -5430,8 +6546,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL_DDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86f8601", @@ -5439,8 +6557,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86f8a01", @@ -5448,8 +6568,10 @@ }, { "BriefDescription": "TOR Occupancy : WiLs issued by iA Cores that Missed LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WIL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WiLs issued by iA Cores that Missed LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc87fde01", @@ -5457,6 +6579,7 @@ }, { "BriefDescription": "TOR Occupancy; RFO from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_RFO", "PerPkg": "1", @@ -5466,6 +6589,7 @@ }, { "BriefDescription": "TOR Occupancy; RFO prefetch from local IA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_RFO_PREF", "PerPkg": "1", @@ -5475,6 +6599,7 @@ }, { "BriefDescription": "TOR Occupancy : SpecItoMs issued by iA Cores", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_SPECITOM", "PerPkg": "1", @@ -5484,8 +6609,10 @@ }, { "BriefDescription": "TOR Occupancy : WbMtoIs issued by iA Cores", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WBMTOI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WbMtoIs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xcc27ff01", @@ -5493,8 +6620,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WCIL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc86fff01", @@ -5502,8 +6631,10 @@ }, { "BriefDescription": "TOR Occupancy : WCiLF issued by iA Cores", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WCILF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WCiLF issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc867ff01", @@ -5511,6 +6642,7 @@ }, { "BriefDescription": "TOR Occupancy; All from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO", "PerPkg": "1", @@ -5520,8 +6652,10 @@ }, { "BriefDescription": "TOR Occupancy : CLFlushes issued by IO Devices", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_CLFLUSH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : CLFlushes issued by IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8c3ff04", @@ -5529,6 +6663,7 @@ }, { "BriefDescription": "TOR Occupancy; Hits from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT", "PerPkg": "1", @@ -5538,6 +6673,7 @@ }, { "BriefDescription": "TOR Occupancy; ITOM hits from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_ITOM", "PerPkg": "1", @@ -5547,6 +6683,7 @@ }, { "BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_ITOMCACHENEAR", "PerPkg": "1", @@ -5556,6 +6693,7 @@ }, { "BriefDescription": "TOR Occupancy; RdCur and FsRdCur hits from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_PCIRDCUR", "PerPkg": "1", @@ -5565,8 +6703,10 @@ }, { "BriefDescription": "TOR Occupancy; RFO hits from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_RFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : RFOs issued by IO Devices that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc803fd04", @@ -5574,6 +6714,7 @@ }, { "BriefDescription": "TOR Occupancy; ITOM from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_ITOM", "PerPkg": "1", @@ -5583,8 +6724,10 @@ }, { "BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_ITOMCACHENEAR", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -5594,6 +6737,7 @@ }, { "BriefDescription": "TOR Occupancy; Misses from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS", "PerPkg": "1", @@ -5603,6 +6747,7 @@ }, { "BriefDescription": "TOR Occupancy; ITOM misses from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM", "PerPkg": "1", @@ -5612,6 +6757,7 @@ }, { "BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR", "PerPkg": "1", @@ -5621,8 +6767,10 @@ }, { "BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC and targets local memory", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
Does not include addressless requests such as locks and interrupts.", "UMask": "0xcd42fe04", @@ -5630,8 +6778,10 @@ }, { "BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC and targets remote memory", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xcd437e04", @@ -5639,8 +6789,10 @@ }, { "BriefDescription": "TOR Occupancy; ITOM misses from local IO and targets local memory", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xcc42fe04", @@ -5648,8 +6800,10 @@ }, { "BriefDescription": "TOR Occupancy; ITOM misses from local IO and targets remote memory", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
Does not include addressless requests such as locks and interrupts.", "UMask": "0xcc437e04", @@ -5657,6 +6811,7 @@ }, { "BriefDescription": "TOR Occupancy; RdCur and FsRdCur misses from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR", "PerPkg": "1", @@ -5666,8 +6821,10 @@ }, { "BriefDescription": "TOR Occupancy; RdCur and FsRdCur misses from local IO and targets local memory", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8f2fe04", @@ -5675,8 +6832,10 @@ }, { "BriefDescription": "TOR Occupancy; RdCur and FsRdCur misses from local IO and targets remote memory", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
Does not include addressless requests such as locks and interrupts.", "UMask": "0xc8f37e04", @@ -5684,8 +6843,10 @@ }, { "BriefDescription": "TOR Occupancy; RFO misses from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : RFOs issued by IO Devices that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc803fe04", @@ -5693,6 +6854,7 @@ }, { "BriefDescription": "TOR Occupancy; RdCur and FsRdCur from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_PCIRDCUR", "PerPkg": "1", @@ -5702,8 +6864,10 @@ }, { "BriefDescription": "TOR Occupancy; ItoM from local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_RFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : RFOs issued by IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.", "UMask": "0xc803ff04", @@ -5711,8 +6875,10 @@ }, { "BriefDescription": "TOR Occupancy : WbMtoIs issued by IO Devices", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_WBMTOI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WbMtoIs issued by IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
Does not include addressless requests such as locks and interrupts.", "UMask": "0xcc23ff04", @@ -5720,8 +6886,10 @@ }, { "BriefDescription": "TOR Occupancy : IPQ", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IPQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : IPQ : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "UMask": "0x8", @@ -5729,8 +6897,10 @@ }, { "BriefDescription": "TOR Occupancy : IRQ - iA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IRQ_IA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : IRQ - iA : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : From an iA Core", "UMask": "0x1", @@ -5738,8 +6908,10 @@ }, { "BriefDescription": "TOR Occupancy : IRQ - Non iA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IRQ_NON_IA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : IRQ - Non iA : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "UMask": "0x10", @@ -5747,24 +6919,30 @@ }, { "BriefDescription": "TOR Occupancy : Just ISOC", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.ISOC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : Just ISOC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : Just Local Targets", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.LOCAL_TGT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : Just Local Targets : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : All from Local iA and IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : All from Local iA and IO : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : All locally initiated requests", "UMask": "0xc000ff05", @@ -5772,8 +6950,10 @@ }, { "BriefDescription": "TOR Occupancy : All from Local iA", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_IA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : All from Local iA : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : All locally initiated requests from iA Cores", "UMask": "0xc000ff01", @@ -5781,8 +6961,10 @@ }, { "BriefDescription": "TOR Occupancy : All from Local IO", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_IO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : All from Local IO : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
T : All locally generated IO traffic", "UMask": "0xc000ff04", @@ -5790,80 +6972,100 @@ }, { "BriefDescription": "TOR Occupancy : Match the Opcode in b[29:19] of the extended umask field", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.MATCH_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : Match the Opcode in b[29:19] of the extended umask field : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : Just Misses", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.MISS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : Just Misses : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : MMCFG Access", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.MMCFG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : MMCFG Access : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : MMIO Access", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.MMIO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : MMIO Access : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : Just NearMem", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.NEARMEM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : Just NearMem : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : Just NonCoherent", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.NONCOH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : Just NonCoherent : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : Just NotNearMem", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.NOT_NEARMEM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : Just NotNearMem : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : PMM Access", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : PMM Access : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : Match the PreMorphed Opcode in b[29:19] of the extended umask field", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.PREMORPH_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : Match the PreMorphed Opcode in b[29:19] of the extended umask field : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : PRQ - IOSF", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.PRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : PRQ - IOSF : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : From a PCIe Device", "UMask": "0x4", @@ -5871,8 +7073,10 @@ }, { "BriefDescription": "TOR Occupancy : PRQ - Non IOSF", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.PRQ_NON_IOSF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : PRQ - Non IOSF : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
T", "UMask": "0x20", @@ -5880,16 +7084,20 @@ }, { "BriefDescription": "TOR Occupancy : Just Remote Targets", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.REMOTE_TGT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : Just Remote Targets : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy : All from Remote", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.REM_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : All from Remote : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : All remote requests (e.g. snoops, writebacks) that came from remote sockets", "UMask": "0xc001ffc8", @@ -5897,8 +7105,10 @@ }, { "BriefDescription": "TOR Occupancy : All Snoops from Remote", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.REM_SNPS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : All Snoops from Remote : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : All snoops to this LLC that came from remote sockets", "UMask": "0xc001ff08", @@ -5906,8 +7116,10 @@ }, { "BriefDescription": "TOR Occupancy : RRQ", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.RRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : RRQ : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
T", "UMask": "0x40", @@ -5915,8 +7127,10 @@ }, { "BriefDescription": "TOR Occupancy; All Snoops from Remote", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.SNPS_FROM_REM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. All snoops to this LLC that came from remote sockets.", "UMask": "0xc001ff08", @@ -5924,8 +7138,10 @@ }, { "BriefDescription": "TOR Occupancy : WBQ", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.WBQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "TOR Occupancy : WBQ : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", "UMask": "0x80", @@ -5933,8 +7149,10 @@ }, { "BriefDescription": "WbPushMtoI : Pushed to LLC", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_CHA_WB_PUSH_MTOI.LLC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WbPushMtoI : Pushed to LLC : Counts the number of times when the CHA was received WbPushMtoI : Counts the number of times when the CHA was able to push WbPushMToI to LLC", "UMask": "0x1", @@ -5942,8 +7160,10 @@ }, { "BriefDescription": "WbPushMtoI : Pushed to Memory", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_CHA_WB_PUSH_MTOI.MEM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "WbPushMtoI : Pushed to Memory : Counts the number of times when the CHA was received WbPushMtoI : Counts the number of times when the CHA was unable to push WbPushMToI to LLC (hence pushed it to MEM)", "UMask": "0x2", @@ -5951,8 +7171,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC0", + "Counter": "0,1,2,3", "EventCode": "0x5a", "EventName": "UNC_CHA_WRITE_NO_CREDITS.MC0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx WRITE Credits 
Empty : MC0 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 0 only.", "UMask": "0x1", @@ -5960,8 +7182,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC1", + "Counter": "0,1,2,3", "EventCode": "0x5a", "EventName": "UNC_CHA_WRITE_NO_CREDITS.MC1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC1 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 1 only.", "UMask": "0x2", @@ -5969,8 +7193,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC2", + "Counter": "0,1,2,3", "EventCode": "0x5a", "EventName": "UNC_CHA_WRITE_NO_CREDITS.MC2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC2 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 2 only.", "UMask": "0x4", @@ -5978,8 +7204,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC3", + "Counter": "0,1,2,3", "EventCode": "0x5a", "EventName": "UNC_CHA_WRITE_NO_CREDITS.MC3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC3 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. 
In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 3 only.", "UMask": "0x8", @@ -5987,8 +7215,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC4", + "Counter": "0,1,2,3", "EventCode": "0x5a", "EventName": "UNC_CHA_WRITE_NO_CREDITS.MC4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC4 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 4 only.", "UMask": "0x10", @@ -5996,8 +7226,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC5", + "Counter": "0,1,2,3", "EventCode": "0x5a", "EventName": "UNC_CHA_WRITE_NO_CREDITS.MC5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC5 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 5 only.", "UMask": "0x20", @@ -6005,8 +7237,10 @@ }, { "BriefDescription": "XPT Prefetches : Dropped (on 0?) - Conflict", + "Counter": "0,1,2,3", "EventCode": "0x6f", "EventName": "UNC_CHA_XPT_PREF.DROP0_CONFLICT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "XPT Prefetches : Dropped (on 0?) - Conflict : Number of XPT prefetches dropped due to AD CMS write port contention", "UMask": "0x8", @@ -6014,8 +7248,10 @@ }, { "BriefDescription": "XPT Prefetches : Dropped (on 0?) - No Credits", + "Counter": "0,1,2,3", "EventCode": "0x6f", "EventName": "UNC_CHA_XPT_PREF.DROP0_NOCRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "XPT Prefetches : Dropped (on 0?) 
- No Credits : Number of XPT prefetches dropped due to lack of XPT AD egress credits", "UMask": "0x4", @@ -6023,8 +7259,10 @@ }, { "BriefDescription": "XPT Prefetches : Dropped (on 1?) - Conflict", + "Counter": "0,1,2,3", "EventCode": "0x6f", "EventName": "UNC_CHA_XPT_PREF.DROP1_CONFLICT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "XPT Prefetches : Dropped (on 1?) - Conflict : Number of XPT prefetches dropped due to AD CMS write port contention", "UMask": "0x80", @@ -6032,8 +7270,10 @@ }, { "BriefDescription": "XPT Prefetches : Dropped (on 1?) - No Credits", + "Counter": "0,1,2,3", "EventCode": "0x6f", "EventName": "UNC_CHA_XPT_PREF.DROP1_NOCRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "XPT Prefetches : Dropped (on 1?) - No Credits : Number of XPT prefetches dropped due to lack of XPT AD egress credits", "UMask": "0x40", @@ -6041,8 +7281,10 @@ }, { "BriefDescription": "XPT Prefetches : Sent (on 0?)", + "Counter": "0,1,2,3", "EventCode": "0x6f", "EventName": "UNC_CHA_XPT_PREF.SENT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "XPT Prefetches : Sent (on 0?) : Number of XPT prefetches sent", "UMask": "0x1", @@ -6050,8 +7292,10 @@ }, { "BriefDescription": "XPT Prefetches : Sent (on 1?)", + "Counter": "0,1,2,3", "EventCode": "0x6f", "EventName": "UNC_CHA_XPT_PREF.SENT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "XPT Prefetches : Sent (on 1?) 
: Number of XPT prefetches sent", "UMask": "0x10", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-cxl.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-cxl.json index f3e84fd88de3..ff81f3a6426a 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-cxl.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-cxl.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Counts the number of lfclk ticks", + "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0x01", "EventName": "UNC_CXLCM_CLOCKTICKS", "PerPkg": "1", @@ -9,390 +10,487 @@ }, { "BriefDescription": "Number of Allocation to Mem Rxx AGF 0", + "Counter": "4,5,6,7", "EventCode": "0x43", "EventName": "UNC_CXLCM_RxC_AGF_INSERTS.CACHE_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Req AGF0", + "Counter": "4,5,6,7", "EventCode": "0x43", "EventName": "UNC_CXLCM_RxC_AGF_INSERTS.CACHE_REQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Rsp AGF", + "Counter": "4,5,6,7", "EventCode": "0x43", "EventName": "UNC_CXLCM_RxC_AGF_INSERTS.CACHE_REQ1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Data AGF", + "Counter": "4,5,6,7", "EventCode": "0x43", "EventName": "UNC_CXLCM_RxC_AGF_INSERTS.CACHE_RSP0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Rsp AGF", + "Counter": "4,5,6,7", "EventCode": "0x43", "EventName": "UNC_CXLCM_RxC_AGF_INSERTS.CACHE_RSP1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Req AGF 1", + "Counter": "4,5,6,7", "EventCode": "0x43", "EventName": "UNC_CXLCM_RxC_AGF_INSERTS.MEM_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CXLCM" }, { 
"BriefDescription": "Number of Allocation to Mem Data AGF", + "Counter": "4,5,6,7", "EventCode": "0x43", "EventName": "UNC_CXLCM_RxC_AGF_INSERTS.MEM_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of Flits with AK set", + "Counter": "4,5,6,7", "EventCode": "0x4b", "EventName": "UNC_CXLCM_RxC_FLITS.AK_HDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of Flits with BE set", + "Counter": "4,5,6,7", "EventCode": "0x4b", "EventName": "UNC_CXLCM_RxC_FLITS.BE_HDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of control flits received", + "Counter": "4,5,6,7", "EventCode": "0x4b", "EventName": "UNC_CXLCM_RxC_FLITS.CTRL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of Headerless flits received", + "Counter": "4,5,6,7", "EventCode": "0x4b", "EventName": "UNC_CXLCM_RxC_FLITS.NO_HDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of protocol flits received", + "Counter": "4,5,6,7", "EventCode": "0x4b", "EventName": "UNC_CXLCM_RxC_FLITS.PROT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of Flits with SZ set", + "Counter": "4,5,6,7", "EventCode": "0x4b", "EventName": "UNC_CXLCM_RxC_FLITS.SZ_HDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of flits received", + "Counter": "4,5,6,7", "EventCode": "0x4b", "EventName": "UNC_CXLCM_RxC_FLITS.VALID", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of valid messages in the flit", + "Counter": "4,5,6,7", "EventCode": "0x4b", "EventName": "UNC_CXLCM_RxC_FLITS.VALID_MSG", + "Experimental": 
"1", "PerPkg": "1", "UMask": "0x80", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of CRC errors detected", + "Counter": "4,5,6,7", "EventCode": "0x40", "EventName": "UNC_CXLCM_RxC_MISC.CRC_ERRORS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of Init flits sent", + "Counter": "4,5,6,7", "EventCode": "0x40", "EventName": "UNC_CXLCM_RxC_MISC.INIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of LLCRD flits sent", + "Counter": "4,5,6,7", "EventCode": "0x40", "EventName": "UNC_CXLCM_RxC_MISC.LLCRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of Retry flits sent", + "Counter": "4,5,6,7", "EventCode": "0x40", "EventName": "UNC_CXLCM_RxC_MISC.RETRY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CXLCM" }, { "BriefDescription": "Number of cycles the Packing Buffer is Full", + "Counter": "4,5,6,7", "EventCode": "0x52", "EventName": "UNC_CXLCM_RxC_PACK_BUF_FULL.CACHE_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CXLCM" }, { "BriefDescription": "Number of cycles the Packing Buffer is Full", + "Counter": "4,5,6,7", "EventCode": "0x52", "EventName": "UNC_CXLCM_RxC_PACK_BUF_FULL.CACHE_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CXLCM" }, { "BriefDescription": "Number of cycles the Packing Buffer is Full", + "Counter": "4,5,6,7", "EventCode": "0x52", "EventName": "UNC_CXLCM_RxC_PACK_BUF_FULL.CACHE_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CXLCM" }, { "BriefDescription": "Number of cycles the Packing Buffer is Full", + "Counter": "4,5,6,7", "EventCode": "0x52", "EventName": "UNC_CXLCM_RxC_PACK_BUF_FULL.MEM_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CXLCM" }, { "BriefDescription": "Number of cycles the Packing Buffer is Full", + "Counter": 
"4,5,6,7", "EventCode": "0x52", "EventName": "UNC_CXLCM_RxC_PACK_BUF_FULL.MEM_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Data Packing buffer", + "Counter": "4,5,6,7", "EventCode": "0x41", "EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.CACHE_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Req Packing buffer", + "Counter": "4,5,6,7", "EventCode": "0x41", "EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.CACHE_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Rsp Packing buffer", + "Counter": "4,5,6,7", "EventCode": "0x41", "EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.CACHE_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Mem Data Packing buffer", + "Counter": "4,5,6,7", "EventCode": "0x41", "EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.MEM_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Mem Rxx Packing buffer", + "Counter": "4,5,6,7", "EventCode": "0x41", "EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.MEM_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CXLCM" }, { "BriefDescription": "Number of cycles of Not Empty for Cache Data Packing buffer", + "Counter": "4,5,6,7", "EventCode": "0x42", "EventName": "UNC_CXLCM_RxC_PACK_BUF_NE.CACHE_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CXLCM" }, { "BriefDescription": "Number of cycles of Not Empty for Cache Req Packing buffer", + "Counter": "4,5,6,7", "EventCode": "0x42", "EventName": "UNC_CXLCM_RxC_PACK_BUF_NE.CACHE_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CXLCM" }, { "BriefDescription": "Number of cycles of Not Empty for Cache Rsp Packing
buffer", + "Counter": "4,5,6,7", "EventCode": "0x42", "EventName": "UNC_CXLCM_RxC_PACK_BUF_NE.CACHE_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CXLCM" }, { "BriefDescription": "Number of cycles of Not Empty for Mem Data Packing buffer", + "Counter": "4,5,6,7", "EventCode": "0x42", "EventName": "UNC_CXLCM_RxC_PACK_BUF_NE.MEM_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CXLCM" }, { "BriefDescription": "Number of cycles of Not Empty for Mem Rxx Packing buffer", + "Counter": "4,5,6,7", "EventCode": "0x42", "EventName": "UNC_CXLCM_RxC_PACK_BUF_NE.MEM_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of Flits with AK set", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_CXLCM_TxC_FLITS.AK_HDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of Flits with BE set", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_CXLCM_TxC_FLITS.BE_HDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of control flits packed", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_CXLCM_TxC_FLITS.CTRL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of Headerless flits packed", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_CXLCM_TxC_FLITS.NO_HDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of protocol flits packed", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_CXLCM_TxC_FLITS.PROT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of Flits with SZ set", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_CXLCM_TxC_FLITS.SZ_HDR", + "Experimental": "1",
"PerPkg": "1", "UMask": "0x40", "Unit": "CXLCM" }, { "BriefDescription": "Count the number of flits packed", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_CXLCM_TxC_FLITS.VALID", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Data Packing buffer", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.CACHE_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Req Packing buffer", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.CACHE_REQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Rsp1 Packing buffer", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.CACHE_REQ1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Rsp0 Packing buffer", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.CACHE_RSP0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Cache Req Packing buffer", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.CACHE_RSP1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Mem Data Packing buffer", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.MEM_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CXLCM" }, { "BriefDescription": "Number of Allocation to Mem Rxx Packing buffer", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.MEM_REQ", + "Experimental": "1", "PerPkg": "1",
"UMask": "0x8", "Unit": "CXLCM" }, { "BriefDescription": "Counts the number of uclk ticks", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_CXLDP_CLOCKTICKS", "PerPkg": "1", @@ -401,48 +499,60 @@ }, { "BriefDescription": "Number of Allocation to M2S Data AGF", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLDP_TxC_AGF_INSERTS.M2S_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CXLDP" }, { "BriefDescription": "Number of Allocation to M2S Req AGF", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLDP_TxC_AGF_INSERTS.M2S_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CXLDP" }, { "BriefDescription": "Number of Allocation to U2C Data AGF", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLDP_TxC_AGF_INSERTS.U2C_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CXLDP" }, { "BriefDescription": "Number of Allocation to U2C Req AGF", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLDP_TxC_AGF_INSERTS.U2C_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CXLDP" }, { "BriefDescription": "Number of Allocation to U2C Rsp AGF 0", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLDP_TxC_AGF_INSERTS.U2C_RSP0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CXLDP" }, { "BriefDescription": "Number of Allocation to U2C Rsp AGF 1", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_CXLDP_TxC_AGF_INSERTS.U2C_RSP1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CXLDP" diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-interconnect.json index 22bb490e9666..8b1ae9540066 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-interconnect.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-interconnect.json @@ -1,8 +1,10 @@ [ { "BriefDescription":
"Total IRP occupancy of inbound read and write requests to coherent memory.", + "Counter": "0,1", "EventCode": "0x0f", "EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.MEM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Total IRP occupancy of inbound read and write requests to coherent memory. This is effectively the sum of read occupancy and write occupancy.", "UMask": "0x4", @@ -10,6 +12,7 @@ }, { "BriefDescription": "IRP Clockticks", + "Counter": "0,1", "EventCode": "0x01", "EventName": "UNC_I_CLOCKTICKS", "PerPkg": "1", @@ -18,6 +21,7 @@ }, { "BriefDescription": "FAF RF full", + "Counter": "0,1", "EventCode": "0x17", "EventName": "UNC_I_FAF_FULL", "PerPkg": "1", @@ -25,6 +29,7 @@ }, { "BriefDescription": "FAF - request insert from TC.", + "Counter": "0,1", "EventCode": "0x18", "EventName": "UNC_I_FAF_INSERTS", "PerPkg": "1", @@ -32,6 +37,7 @@ }, { "BriefDescription": "FAF occupancy", + "Counter": "0,1", "EventCode": "0x19", "EventName": "UNC_I_FAF_OCCUPANCY", "PerPkg": "1", @@ -39,6 +45,7 @@ }, { "BriefDescription": "FAF allocation -- sent to ADQ", + "Counter": "0,1", "EventCode": "0x16", "EventName": "UNC_I_FAF_TRANSACTIONS", "PerPkg": "1", @@ -46,14 +53,17 @@ }, { "BriefDescription": ": All Inserts Outbound (BL, AK, Snoops)", + "Counter": "0,1", "EventCode": "0x20", "EventName": "UNC_I_IRP_ALL.EVICTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "IRP" }, { "BriefDescription": ": All Inserts Inbound (p2p + faf + cset)", + "Counter": "0,1", "EventCode": "0x20", "EventName": "UNC_I_IRP_ALL.INBOUND_INSERTS", "PerPkg": "1", @@ -62,78 +72,97 @@ }, { "BriefDescription": ": All Inserts Outbound (BL, AK, Snoops)", + "Counter": "0,1", "EventCode": "0x20", "EventName": "UNC_I_IRP_ALL.OUTBOUND_INSERTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "IRP" }, { "BriefDescription": "Counts Timeouts - Set 0 : Cache Inserts of Atomic Transactions as Secondary", + "Counter": "0,1", "EventCode": "0x1e", "EventName":
"UNC_I_MISC0.2ND_ATOMIC_INSERT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "IRP" }, { "BriefDescription": "Counts Timeouts - Set 0 : Cache Inserts of Read Transactions as Secondary", + "Counter": "0,1", "EventCode": "0x1e", "EventName": "UNC_I_MISC0.2ND_RD_INSERT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "IRP" }, { "BriefDescription": "Counts Timeouts - Set 0 : Cache Inserts of Write Transactions as Secondary", + "Counter": "0,1", "EventCode": "0x1e", "EventName": "UNC_I_MISC0.2ND_WR_INSERT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "IRP" }, { "BriefDescription": "Counts Timeouts - Set 0 : Fastpath Rejects", + "Counter": "0,1", "EventCode": "0x1e", "EventName": "UNC_I_MISC0.FAST_REJ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "IRP" }, { "BriefDescription": "Counts Timeouts - Set 0 : Fastpath Requests", + "Counter": "0,1", "EventCode": "0x1e", "EventName": "UNC_I_MISC0.FAST_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "IRP" }, { "BriefDescription": "Counts Timeouts - Set 0 : Fastpath Transfers From Primary to Secondary", + "Counter": "0,1", "EventCode": "0x1e", "EventName": "UNC_I_MISC0.FAST_XFER", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "IRP" }, { "BriefDescription": "Counts Timeouts - Set 0 : Prefetch Ack Hints From Primary to Secondary", + "Counter": "0,1", "EventCode": "0x1e", "EventName": "UNC_I_MISC0.PF_ACK_HINT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "IRP" }, { "BriefDescription": "Counts Timeouts - Set 0 : Slow path fwpf didn't find prefetch", + "Counter": "0,1", "EventCode": "0x1e", "EventName": "UNC_I_MISC0.SLOWPATH_FWPF_NO_PRF", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "IRP" }, { "BriefDescription": "Misc Events - Set 1 : Lost Forward", + "Counter": "0,1", "EventCode": "0x1f", "EventName": "UNC_I_MISC1.LOST_FWD", "PerPkg": "1", @@ -143,8 +172,10 @@ }, {
"BriefDescription": "Misc Events - Set 1 : Received Invalid", + "Counter": "0,1", "EventCode": "0x1f", "EventName": "UNC_I_MISC1.SEC_RCVD_INVLD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Misc Events - Set 1 : Received Invalid : Secondary received a transfer that did not have sufficient MESI state", "UMask": "0x20", @@ -152,8 +183,10 @@ }, { "BriefDescription": "Misc Events - Set 1 : Received Valid", + "Counter": "0,1", "EventCode": "0x1f", "EventName": "UNC_I_MISC1.SEC_RCVD_VLD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Misc Events - Set 1 : Received Valid : Secondary received a transfer that did have sufficient MESI state", "UMask": "0x40", @@ -161,8 +194,10 @@ }, { "BriefDescription": "Misc Events - Set 1 : Slow Transfer of E Line", + "Counter": "0,1", "EventCode": "0x1f", "EventName": "UNC_I_MISC1.SLOW_E", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Misc Events - Set 1 : Slow Transfer of E Line : Secondary received a transfer that did have sufficient MESI state", "UMask": "0x4", @@ -170,8 +205,10 @@ }, { "BriefDescription": "Misc Events - Set 1 : Slow Transfer of I Line", + "Counter": "0,1", "EventCode": "0x1f", "EventName": "UNC_I_MISC1.SLOW_I", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Misc Events - Set 1 : Slow Transfer of I Line : Snoop took cacheline ownership before write from data was committed.", "UMask": "0x1", @@ -179,8 +216,10 @@ }, { "BriefDescription": "Misc Events - Set 1 : Slow Transfer of M Line", + "Counter": "0,1", "EventCode": "0x1f", "EventName": "UNC_I_MISC1.SLOW_M", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Misc Events - Set 1 : Slow Transfer of M Line : Snoop took cacheline ownership before write from data was committed.", "UMask": "0x8", @@ -188,8 +227,10 @@ }, { "BriefDescription": "Misc Events - Set 1 : Slow Transfer of S Line", + "Counter": "0,1", "EventCode": "0x1f", "EventName": "UNC_I_MISC1.SLOW_S", + "Experimental": "1",
"PerPkg": "1", "PublicDescription": "Misc Events - Set 1 : Slow Transfer of S Line : Secondary received a transfer that did not have sufficient MESI state", "UMask": "0x2", @@ -197,8 +238,10 @@ }, { "BriefDescription": "Responses to snoops of any type that hit M, E, S or I line in the IIO", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.ALL_HIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit M, E, S or I line in the IIO", "UMask": "0x7e", @@ -206,8 +249,10 @@ }, { "BriefDescription": "Responses to snoops of any type that hit E or S line in the IIO cache", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_ES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit E or S line in the IIO cache", "UMask": "0x74", @@ -215,8 +260,10 @@ }, { "BriefDescription": "Responses to snoops of any type that hit I line in the IIO cache", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_I", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit I line in the IIO cache", "UMask": "0x72", @@ -224,6 +271,7 @@ }, { "BriefDescription": "Responses to snoops of any type that hit M line in the IIO cache", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_M", "PerPkg": "1", @@ -233,8 +281,10 @@ }, { "BriefDescription": "Responses to snoops of any type that miss the IIO cache", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.ALL_MISS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Responses to snoops of any type (code, data, invalidate) that miss the IIO cache", "UMask": "0x71", @@ -242,62 +292,77 @@ }, { "BriefDescription": "Snoop Responses : Hit E or S", + "Counter": "0,1", "EventCode": "0x12", "EventName":
"UNC_I_SNOOP_RESP.HIT_ES", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses : Hit I", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.HIT_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses : Hit M", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.HIT_M", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses : Miss", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses : SnpCode", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.SNPCODE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses : SnpData", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.SNPDATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses : SnpInv", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.SNPINV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "IRP" }, { "BriefDescription": "Inbound write (fast path) requests received by the IRP.", + "Counter": "0,1", "EventCode": "0x11", "EventName": "UNC_I_TRANSACTIONS.WR_PREF", "PerPkg": "1", @@ -307,132 +372,167 @@ }, { "BriefDescription": "AK Egress Allocations", + "Counter": "0,1", "EventCode": "0x0b", "EventName": "UNC_I_TxC_AK_INSERTS", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL DRS Egress Cycles Full", + "Counter": "0,1", "EventCode": "0x05", "EventName": "UNC_I_TxC_BL_DRS_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL DRS Egress Inserts", + "Counter": "0,1", "EventCode": "0x02", "EventName":
"UNC_I_TxC_BL_DRS_INSERTS", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL DRS Egress Occupancy", + "Counter": "0,1", "EventCode": "0x08", "EventName": "UNC_I_TxC_BL_DRS_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCB Egress Cycles Full", + "Counter": "0,1", "EventCode": "0x06", "EventName": "UNC_I_TxC_BL_NCB_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCB Egress Inserts", + "Counter": "0,1", "EventCode": "0x03", "EventName": "UNC_I_TxC_BL_NCB_INSERTS", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCB Egress Occupancy", + "Counter": "0,1", "EventCode": "0x09", "EventName": "UNC_I_TxC_BL_NCB_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCS Egress Cycles Full", + "Counter": "0,1", "EventCode": "0x07", "EventName": "UNC_I_TxC_BL_NCS_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCS Egress Inserts", + "Counter": "0,1", "EventCode": "0x04", "EventName": "UNC_I_TxC_BL_NCS_INSERTS", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCS Egress Occupancy", + "Counter": "0,1", "EventCode": "0x0a", "EventName": "UNC_I_TxC_BL_NCS_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "UNC_I_TxR2_AD01_STALL_CREDIT_CYCLES", + "Counter": "0,1", "EventCode": "0x1c", "EventName": "UNC_I_TxR2_AD01_STALL_CREDIT_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": Counts the number times when it is not possible to issue a request to the M2PCIe because there are no Egress Credits available on AD0, A1 or AD0AD1 both.
Stalls on both AD0 and AD1 will count as 2", "Unit": "IRP" }, { "BriefDescription": "No AD0 Egress Credits Stalls", + "Counter": "0,1", "EventCode": "0x1a", "EventName": "UNC_I_TxR2_AD0_STALL_CREDIT_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No AD0 Egress Credits Stalls : Counts the number times when it is not possible to issue a request to the M2PCIe because there are no AD0 Egress Credits available.", "Unit": "IRP" }, { "BriefDescription": "No AD1 Egress Credits Stalls", + "Counter": "0,1", "EventCode": "0x1b", "EventName": "UNC_I_TxR2_AD1_STALL_CREDIT_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No AD1 Egress Credits Stalls : Counts the number times when it is not possible to issue a request to the M2PCIe because there are no AD1 Egress Credits available.", "Unit": "IRP" }, { "BriefDescription": "No BL Egress Credit Stalls", + "Counter": "0,1", "EventCode": "0x1d", "EventName": "UNC_I_TxR2_BL_STALL_CREDIT_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No BL Egress Credit Stalls : Counts the number times when it is not possible to issue data to the R2PCIe because there are no BL Egress Credits available.", "Unit": "IRP" }, { "BriefDescription": "Outbound Read Requests", + "Counter": "0,1", "EventCode": "0x0d", "EventName": "UNC_I_TxS_DATA_INSERTS_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Outbound Read Requests : Counts the number of requests issued to the switch (towards the devices).", "Unit": "IRP" }, { "BriefDescription": "Outbound Read Requests", + "Counter": "0,1", "EventCode": "0x0e", "EventName": "UNC_I_TxS_DATA_INSERTS_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Outbound Read Requests : Counts the number of requests issued to the switch (towards the devices).", "Unit": "IRP" }, { "BriefDescription": "Outbound Request Queue Occupancy", + "Counter": "0,1", "EventCode": "0x0c", "EventName": "UNC_I_TxS_REQUEST_OCCUPANCY", +
"Experimental": "1", "PerPkg": "1", "PublicDescription": "Outbound Request Queue Occupancy : Accumulates the number of outstanding outbound requests from the IRP to the switch (towards the devices). This can be used in conjunction with the allocations event in order to calculate average latency of outbound requests.", "Unit": "IRP" }, { "BriefDescription": "M2M Clockticks", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_M2M_CLOCKTICKS", "PerPkg": "1", @@ -441,6 +541,7 @@ }, { "BriefDescription": "CMS Clockticks", + "Counter": "0,1,2,3", "EventCode": "0xc0", "EventName": "UNC_M2M_CMS_CLOCKTICKS", "PerPkg": "1", @@ -448,16 +549,20 @@ }, { "BriefDescription": "Cycles when direct to core mode (which bypasses the CHA) was disabled", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_DIRSTATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "M2M" }, { "BriefDescription": "Cycles when direct to core mode, which bypasses the CHA, was disabled : Non Cisgress", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_DIRSTATE.NON_CISGRESS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles when direct to core mode, which bypasses the CHA, was disabled : Non Cisgress : Counts the number of time non cisgress D2C was not honoured by egress due to directory state constraints", "UMask": "0x2", @@ -465,39 +570,49 @@ }, { "BriefDescription": "Counts the time when FM didn't do d2c for fill reads (cross tile case)", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_NOTFORKED", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Number of reads in which direct to core transaction were overridden", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2M_DIRECT2CORE_TXN_OVERRIDE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription":
"Number of reads in which direct to core transaction was overridden : Cisgress", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2M_DIRECT2CORE_TXN_OVERRIDE.CISGRESS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Number of reads in which direct to core transaction was overridden : 2LM Hit?", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2M_DIRECT2CORE_TXN_OVERRIDE.PMM_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Number of times a direct to UPI transaction was overridden.", + "Counter": "0,1,2,3", "EventCode": "0x1C", "EventName": "UNC_M2M_DIRECT2UPITXN_OVERRIDE.PMM_HIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a direct to UPI transaction was overridden. : Counts the number of times D2K wasn't honored even though the incoming request had d2k set", "UMask": "0x1", @@ -505,24 +620,30 @@ }, { "BriefDescription": "Number of reads in which direct to Intel UPI transactions were overridden", + "Counter": "0,1,2,3", "EventCode": "0x1b", "EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_CREDITS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "M2M" }, { "BriefDescription": "Cycles when direct to Intel UPI was disabled", + "Counter": "0,1,2,3", "EventCode": "0x1a", "EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "M2M" }, { "BriefDescription": "Cycles when Direct2UPI was Disabled : Cisgress D2U Ignored", + "Counter": "0,1,2,3", "EventCode": "0x1A", "EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE.CISGRESS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles when Direct2UPI was Disabled : Cisgress D2U Ignored : Counts cisgress d2K that was not honored due to directory constraints", "UMask": "0x4", @@ -530,8 +651,10 @@ }, { "BriefDescription": "Cycles when Direct2UPI was Disabled : Egress
Ignored D2U", + "Counter": "0,1,2,3", "EventCode": "0x1A", "EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE.EGRESS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles when Direct2UPI was Disabled : Egress Ignored D2U : Counts the number of time D2K was not honoured by egress due to directory state constraints", "UMask": "0x1", @@ -539,8 +662,10 @@ }, { "BriefDescription": "Cycles when Direct2UPI was Disabled : Non Cisgress D2U Ignored", + "Counter": "0,1,2,3", "EventCode": "0x1A", "EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE.NON_CISGRESS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles when Direct2UPI was Disabled : Non Cisgress D2U Ignored : Counts non cisgress d2K that was not honored due to directory constraints", "UMask": "0x2", @@ -548,8 +673,10 @@ }, { "BriefDescription": "Messages sent direct to the Intel UPI", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2M_DIRECT2UPI_TAKEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times egress did D2K (Direct to KTI)", "UMask": "0x7", @@ -557,86 +684,107 @@ }, { "BriefDescription": "Number of reads that a message sent direct2 Intel UPI was overridden", + "Counter": "0,1,2,3", "EventCode": "0x1c", "EventName": "UNC_M2M_DIRECT2UPI_TXN_OVERRIDE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription": "Number of times a direct to UPI transaction was overridden.", + "Counter": "0,1,2,3", "EventCode": "0x1C", "EventName": "UNC_M2M_DIRECT2UPI_TXN_OVERRIDE.CISGRESS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Directory Hit : On NonDirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2M" }, { "BriefDescription": "Directory Hit : On NonDirty Line in I State", + "Counter": "0,1,2,3", "EventCode":
"0x1d", "EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "Directory Hit : On NonDirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2M" }, { "BriefDescription": "Directory Hit : On NonDirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2M" }, { "BriefDescription": "Directory Hit : On Dirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "Directory Hit : On Dirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Directory Hit : On Dirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Directory Hit : On Dirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Multi-socket cacheline Directory lookups (any state found)", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M2M_DIRECTORY_LOOKUP.ANY", "PerPkg": "1", @@ -646,6 +794,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory lookups (cacheline found in A state)", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_A", "PerPkg": "1", @@ -655,6 +804,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory
lookup (cacheline found in I state)", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_I", "PerPkg": "1", @@ -664,6 +814,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory lookup (cacheline found in S state)", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_S", "PerPkg": "1", @@ -673,86 +824,107 @@ }, { "BriefDescription": "Directory Miss : On NonDirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2M" }, { "BriefDescription": "Directory Miss : On NonDirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "Directory Miss : On NonDirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2M" }, { "BriefDescription": "Directory Miss : On NonDirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2M" }, { "BriefDescription": "Directory Miss : On Dirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "Directory Miss : On Dirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Directory Miss : On Dirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_P", + "Experimental": "1", "PerPkg":
"1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Directory Miss : On Dirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Multi-socket cacheline Directory update from A to I", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.A2I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x320", "Unit": "M2M" }, { "BriefDescription": "Multi-socket cacheline Directory update from A to S", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.A2S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x340", "Unit": "M2M" }, { "BriefDescription": "Multi-socket cacheline Directory update from/to Any state", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.ANY", "PerPkg": "1", @@ -761,8 +933,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.A_TO_I_HIT_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from A to I to non persistent memory (DRAM or HBM)", "UMask": "0x120", @@ -770,8 +944,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.A_TO_I_MISS_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 2lm miss data returns that would result in directory update from A to I to non persistent memory (DRAM or HBM)", "UMask": "0x220", @@ -779,8 +955,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.A_TO_S_HIT_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription":
"Counts 1lm or 2lm hit data returns that would result in directory update from A to S to non persistent memory (DRAM or HBM)", "UMask": "0x140", @@ -788,8 +966,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.A_TO_S_MISS_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 2lm miss data returns that would result in directory update from A to S to non persistent memory (DRAM or HBM)", "UMask": "0x240", @@ -797,8 +977,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.HIT_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts any 1lm or 2lm hit data return that would result in directory update to non persistent memory (DRAM or HBM)", "UMask": "0x101", @@ -806,24 +988,30 @@ }, { "BriefDescription": "Multi-socket cacheline Directory update from I to A", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.I2A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x304", "Unit": "M2M" }, { "BriefDescription": "Multi-socket cacheline Directory update from I to S", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.I2S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x302", "Unit": "M2M" }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.I_TO_A_HIT_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from I to A to non persistent memory (DRAM or HBM)", "UMask": "0x104", @@ -831,8 +1019,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName":
"UNC_M2M_DIRECTORY_UPDATE.I_TO_A_MISS_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 2lm miss data returns that would result in directory update from I to A to non persistent memory (DRAM or HBM)", "UMask": "0x204", @@ -840,8 +1030,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.I_TO_S_HIT_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from I to S to non persistent memory (DRAM or HBM)", "UMask": "0x102", @@ -849,8 +1041,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.I_TO_S_MISS_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 2lm miss data returns that would result in directory update from I to S to non persistent memory (DRAM or HBM)", "UMask": "0x202", @@ -858,8 +1052,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.MISS_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts any 2lm miss data return that would result in directory update to non persistent memory (DRAM or HBM)", "UMask": "0x201", @@ -867,24 +1063,30 @@ }, { "BriefDescription": "Multi-socket cacheline Directory update from S to A", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.S2A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x310", "Unit": "M2M" }, { "BriefDescription": "Multi-socket cacheline Directory update from S to I", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.S2I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x308", "Unit": "M2M" }, { "BriefDescription": "Multi-socket cacheline Directory
Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.S_TO_A_HIT_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from S to A to non persistent memory (DRAM or HBM)", "UMask": "0x110", @@ -892,8 +1094,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.S_TO_A_MISS_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 2lm miss data returns that would result in directory update from S to A to non persistent memory (DRAM or HBM)", "UMask": "0x210", @@ -901,8 +1105,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.S_TO_I_HIT_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from S to I to non persistent memory (DRAM or HBM)", "UMask": "0x108", @@ -910,8 +1116,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_DIRECTORY_UPDATE.S_TO_I_MISS_NON_PMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts 2lm miss data returns that would result in directory update from S to I to non persistent memory (DRAM or HBM)", "UMask": "0x208", @@ -919,8 +1127,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements : Down", + "Counter": "0,1,2,3", "EventCode": "0xba", "EventName": "UNC_M2M_EGRESS_ORDERING.IV_SNOOPGO_DN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x80000004", @@ -928,8 +1138,10 @@ }, {
"BriefDescription": "Egress Blocking due to Ordering requirements : Up", + "Counter": "0,1,2,3", "EventCode": "0xba", "EventName": "UNC_M2M_EGRESS_ORDERING.IV_SNOOPGO_UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x80000001", @@ -937,40 +1149,50 @@ }, { "BriefDescription": "Count when Starve Glocab counter is at 7", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2M_IGR_STARVE_WINNER.MASK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2M" }, { "BriefDescription": "Reads to iMC issued", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x304", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH0.TO_NM1LM", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH0.TO_NM1LM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x108", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH0.TO_NMCache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH0.TO_NMCache", + "Experimental": "1", "PerPkg": "1", "UMask": "0x110", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH0_ALL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH0_ALL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -979,24 +1201,30 @@ }, { "BriefDescription": "UNC_M2M_IMC_READS.CH0_FROM_TGR", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH0_FROM_TGR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x140", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH0_ISOCH", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH0_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x102", "Unit": "M2M" },
{ "BriefDescription": "UNC_M2M_IMC_READS.CH0_NORMAL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH0_NORMAL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1005,24 +1233,30 @@ }, { "BriefDescription": "UNC_M2M_IMC_READS.CH0_TO_DDR_AS_CACHE", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH0_TO_DDR_AS_CACHE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x110", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH0_TO_DDR_AS_MEM", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH0_TO_DDR_AS_MEM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x108", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH0_TO_PMM", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH0_TO_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1031,24 +1265,30 @@ }, { "BriefDescription": "UNC_M2M_IMC_READS.CH1.TO_NM1LM", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH1.TO_NM1LM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x208", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH1.TO_NMCache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH1.TO_NMCache", + "Experimental": "1", "PerPkg": "1", "UMask": "0x210", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH1_ALL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH1_ALL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1057,24 +1297,30 @@ }, { "BriefDescription": "UNC_M2M_IMC_READS.CH1_FROM_TGR", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH1_FROM_TGR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x240", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH1_ISOCH", + "Counter": "0,1,2,3", "EventCode": 
"0x24", "EventName": "UNC_M2M_IMC_READS.CH1_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x202", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH1_NORMAL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH1_NORMAL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1083,24 +1329,30 @@ }, { "BriefDescription": "UNC_M2M_IMC_READS.CH1_TO_DDR_AS_CACHE", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH1_TO_DDR_AS_CACHE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x210", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH1_TO_DDR_AS_MEM", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH1_TO_DDR_AS_MEM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x208", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.CH1_TO_PMM", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.CH1_TO_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1109,62 +1361,77 @@ }, { "BriefDescription": "UNC_M2M_IMC_READS.FROM_TGR", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.FROM_TGR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x340", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.ISOCH", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x302", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.NORMAL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.NORMAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x301", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.TO_DDR_AS_CACHE", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.TO_DDR_AS_CACHE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x310", "Unit": "M2M" }, { "BriefDescription": 
"UNC_M2M_IMC_READS.TO_DDR_AS_MEM", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.TO_DDR_AS_MEM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x308", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.TO_NM1LM", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.TO_NM1LM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x308", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.TO_NMCACHE", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.TO_NMCACHE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x310", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_READS.TO_PMM", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_IMC_READS.TO_PMM", "PerPkg": "1", @@ -1173,23 +1440,29 @@ }, { "BriefDescription": "All Writes - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1810", "Unit": "M2M" }, { "BriefDescription": "Non-Inclusive - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0.NI", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_WRITES.CH0_ALL", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_ALL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1198,15 +1471,19 @@ }, { "BriefDescription": "From TGR - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_FROM_TGR", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_WRITES.CH0_FULL", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_FULL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1215,30 +1492,38 @@ }, { "BriefDescription": "UNC_M2M_IMC_WRITES.CH0_FULL_ISOCH", + "Counter": "0,1,2,3", 
"EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_FULL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x804", "Unit": "M2M" }, { "BriefDescription": "Non-Inclusive - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_NI", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Non-Inclusive Miss - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_NI_MISS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_WRITES.CH0_PARTIAL", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_PARTIAL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1247,32 +1532,40 @@ }, { "BriefDescription": "UNC_M2M_IMC_WRITES.CH0_PARTIAL_ISOCH", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_PARTIAL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x808", "Unit": "M2M" }, { "BriefDescription": "DDR, acting as Cache - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_TO_DDR_AS_CACHE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x840", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_IMC_WRITES.CH0_TO_DDR_AS_MEM", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_TO_DDR_AS_MEM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x820", "Unit": "M2M" }, { "BriefDescription": "PMM - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH0_TO_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1282,15 +1575,19 @@ }, { "BriefDescription": "Non-Inclusive - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH1.NI", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "All Writes - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", 
"EventName": "UNC_M2M_IMC_WRITES.CH1_ALL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1299,15 +1596,19 @@ }, { "BriefDescription": "From TGR - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH1_FROM_TGR", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Full Line Non-ISOCH - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH1_FULL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1316,30 +1617,38 @@ }, { "BriefDescription": "ISOCH Full Line - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH1_FULL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1004", "Unit": "M2M" }, { "BriefDescription": "Non-Inclusive - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH1_NI", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Non-Inclusive Miss - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH1_NI_MISS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Partial Non-ISOCH - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH1_PARTIAL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1348,32 +1657,40 @@ }, { "BriefDescription": "ISOCH Partial - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH1_PARTIAL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1008", "Unit": "M2M" }, { "BriefDescription": "DDR, acting as Cache - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH1_TO_DDR_AS_CACHE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1040", "Unit": "M2M" }, { "BriefDescription": "DDR - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": 
"UNC_M2M_IMC_WRITES.CH1_TO_DDR_AS_MEM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1020", "Unit": "M2M" }, { "BriefDescription": "PMM - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.CH1_TO_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1383,75 +1700,94 @@ }, { "BriefDescription": "From TGR - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.FROM_TGR", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Full Non-ISOCH - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.FULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1801", "Unit": "M2M" }, { "BriefDescription": "ISOCH Full Line - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.FULL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1804", "Unit": "M2M" }, { "BriefDescription": "Non-Inclusive - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.NI", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Non-Inclusive Miss - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.NI_MISS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Partial Non-ISOCH - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.PARTIAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1802", "Unit": "M2M" }, { "BriefDescription": "ISOCH Partial - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.PARTIAL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1808", "Unit": "M2M" }, { "BriefDescription": "DDR, acting as Cache - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.TO_DDR_AS_CACHE", + "Experimental": "1", 
"PerPkg": "1", "UMask": "0x1840", "Unit": "M2M" }, { "BriefDescription": "DDR - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.TO_DDR_AS_MEM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1820", "Unit": "M2M" }, { "BriefDescription": "PMM - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_IMC_WRITES.TO_PMM", "PerPkg": "1", @@ -1460,143 +1796,179 @@ }, { "BriefDescription": "UNC_M2M_PREFCAM_CIS_DROPS", + "Counter": "0,1,2,3", "EventCode": "0x5c", "EventName": "UNC_M2M_PREFCAM_CIS_DROPS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Data Prefetches Dropped", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH0_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Data Prefetches Dropped", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH0_XPT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Data Prefetches Dropped", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH1_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "Data Prefetches Dropped", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH1_XPT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Data Prefetches Dropped : UPI - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.UPI_ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "M2M" }, { "BriefDescription": "Data Prefetches Dropped", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.XPT_ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "M2M" }, { "BriefDescription": ": UPI -
All Channels", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_M2M_PREFCAM_DEMAND_MERGE.UPI_ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "M2M" }, { "BriefDescription": ": XPT - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_M2M_PREFCAM_DEMAND_MERGE.XPT_ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "M2M" }, { "BriefDescription": "Demands Not Merged with CAMed Prefetches", + "Counter": "0,1,2,3", "EventCode": "0x5E", "EventName": "UNC_M2M_PREFCAM_DEMAND_NO_MERGE.RD_MERGED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2M" }, { "BriefDescription": "Demands Not Merged with CAMed Prefetches", + "Counter": "0,1,2,3", "EventCode": "0x5E", "EventName": "UNC_M2M_PREFCAM_DEMAND_NO_MERGE.WR_MERGED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2M" }, { "BriefDescription": "Demands Not Merged with CAMed Prefetches", + "Counter": "0,1,2,3", "EventCode": "0x5E", "EventName": "UNC_M2M_PREFCAM_DEMAND_NO_MERGE.WR_SQUASHED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "Prefetch CAM Inserts : UPI - Ch 0", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2M_PREFCAM_INSERTS.CH0_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Prefetch CAM Inserts : XPT - Ch 0", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2M_PREFCAM_INSERTS.CH0_XPT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Prefetch CAM Inserts : UPI - Ch 1", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2M_PREFCAM_INSERTS.CH1_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "Prefetch CAM Inserts : XPT - Ch 1", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2M_PREFCAM_INSERTS.CH1_XPT", + "Experimental": "1", "PerPkg": "1", "UMask": 
"0x4", "Unit": "M2M" }, { "BriefDescription": "Prefetch CAM Inserts : UPI - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2M_PREFCAM_INSERTS.UPI_ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "M2M" }, { "BriefDescription": "Prefetch CAM Inserts : XPT - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2M_PREFCAM_INSERTS.XPT_ALLCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Prefetch CAM Inserts : XPT -All Channels", "UMask": "0x5", @@ -1604,108 +1976,135 @@ }, { "BriefDescription": "Prefetch CAM Occupancy : All Channels", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_M2M_PREFCAM_OCCUPANCY.ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription": "Prefetch CAM Occupancy : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_M2M_PREFCAM_OCCUPANCY.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Prefetch CAM Occupancy : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_M2M_PREFCAM_OCCUPANCY.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "All Channels", + "Counter": "0,1,2,3", "EventCode": "0x5F", "EventName": "UNC_M2M_PREFCAM_RESP_MISS.ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription": ": Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x5f", "EventName": "UNC_M2M_PREFCAM_RESP_MISS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": ": Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x5f", "EventName": "UNC_M2M_PREFCAM_RESP_MISS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.1LM_POSTED", + "Counter": "0,1,2,3", "EventCode": "0x62", "EventName": 
"UNC_M2M_PREFCAM_RxC_DEALLOCS.1LM_POSTED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.CIS", + "Counter": "0,1,2,3", "EventCode": "0x62", "EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.CIS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.PMM_MEMMODE_ACCEPT", + "Counter": "0,1,2,3", "EventCode": "0x62", "EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.PMM_MEMMODE_ACCEPT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.SQUASHED", + "Counter": "0,1,2,3", "EventCode": "0x62", "EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.SQUASHED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "AD Ingress (from CMS) Occupancy - Prefetches", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_M2M_PREFCAM_RxC_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "AD Ingress (from CMS) : AD Ingress (from CMS) Allocations", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_M2M_RxC_AD_INSERTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "AD Ingress (from CMS) Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M2M_RxC_AD_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Clean NearMem Read Hit", + "Counter": "0,1,2,3", "EventCode": "0x1F", "EventName": "UNC_M2M_TAG_HIT.NM_RD_HIT_CLEAN", "PerPkg": "1", @@ -1715,6 +2114,7 @@ }, { "BriefDescription": "Dirty NearMem Read Hit", + "Counter": "0,1,2,3", "EventCode": "0x1F", "EventName": "UNC_M2M_TAG_HIT.NM_RD_HIT_DIRTY", "PerPkg": "1", @@ -1724,8 +2124,10 @@ }, { "BriefDescription": "Tag Hit : Clean NearMem Underfill Hit", + "Counter": "0,1,2,3", "EventCode": "0x1F", "EventName":
"UNC_M2M_TAG_HIT.NM_UFILL_HIT_CLEAN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Tag Hit indicates when a request sent to the iMC hit in Near Memory. : Counts clean underfill hits due to a partial write", "UMask": "0x4", @@ -1733,8 +2135,10 @@ }, { "BriefDescription": "Tag Hit : Dirty NearMem Underfill Hit", + "Counter": "0,1,2,3", "EventCode": "0x1F", "EventName": "UNC_M2M_TAG_HIT.NM_UFILL_HIT_DIRTY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Tag Hit indicates when a request sent to the iMC hit in Near Memory. : Counts dirty underfill read hits due to a partial write", "UMask": "0x8", @@ -1742,230 +2146,288 @@ }, { "BriefDescription": "UNC_M2M_TAG_MISS", + "Counter": "0,1,2,3", "EventCode": "0x4b", "EventName": "UNC_M2M_TAG_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription": "Number AD Ingress Credits", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_M2M_TGR_AD_CREDITS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Number BL Ingress Credits", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_M2M_TGR_BL_CREDITS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Tracker Inserts : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_M2M_TRACKER_INSERTS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x104", "Unit": "M2M" }, { "BriefDescription": "Tracker Inserts : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_M2M_TRACKER_INSERTS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x204", "Unit": "M2M" }, { "BriefDescription": "Tracker Occupancy : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Tracker Occupancy : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName":
"UNC_M2M_TRACKER_OCCUPANCY.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "WPQ Flush : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2M_WPQ_FLUSH.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "WPQ Flush : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2M_WPQ_FLUSH.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_M2M_WPQ_NO_REG_CRD.CHN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_M2M_WPQ_NO_REG_CRD.CHN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2M_WPQ_NO_SPEC_CRD.CHN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2M_WPQ_NO_SPEC_CRD.CHN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Inserts : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2M_WR_TRACKER_INSERTS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Inserts : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2M_WR_TRACKER_INSERTS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Not Empty : Channel 0", + "Counter": "0,1,2,3",
"EventCode": "0x35", "EventName": "UNC_M2M_WR_TRACKER_NE.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Not Empty : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_M2M_WR_TRACKER_NE.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Not Empty : Mirror", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_M2M_WR_TRACKER_NE.MIRR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_M2M_WR_TRACKER_NE.MIRR_NONTGR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_M2M_WR_TRACKER_NE.MIRR_PWR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Non-Posted Inserts : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x4d", "EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_INSERTS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Non-Posted Inserts : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x4d", "EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_INSERTS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_OCCUPANCY.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_OCCUPANCY.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2",
"Unit": "M2M" }, { "BriefDescription": "Write Tracker Posted Inserts : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "UNC_M2M_WR_TRACKER_POSTED_INSERTS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Posted Inserts : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "UNC_M2M_WR_TRACKER_POSTED_INSERTS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Posted Occupancy : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M2M_WR_TRACKER_POSTED_OCCUPANCY.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Posted Occupancy : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M2M_WR_TRACKER_POSTED_OCCUPANCY.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "CBox AD Credits Empty : Requests", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CBox AD Credits Empty : Requests : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)", "UMask": "0x4", @@ -1973,8 +2435,10 @@ }, { "BriefDescription": "CBox AD Credits Empty : Snoops", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CBox AD Credits Empty : Snoops : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)", "UMask": "0x8", @@ -1982,8 +2446,10 @@ }, { "BriefDescription": "CBox AD Credits Empty : VNA Messages", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.VNA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CBox AD Credits Empty : VNA Messages : No credits available to send to 
Cbox on the AD Ring (covers higher CBoxes)", "UMask": "0x1", @@ -1991,8 +2457,10 @@ }, { "BriefDescription": "CBox AD Credits Empty : Writebacks", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CBox AD Credits Empty : Writebacks : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)", "UMask": "0x2", @@ -2000,6 +2468,7 @@ }, { "BriefDescription": "M3UPI Clockticks", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_M3UPI_CLOCKTICKS", "PerPkg": "1", @@ -2008,31 +2477,39 @@ }, { "BriefDescription": "M3UPI CMS Clockticks", + "Counter": "0,1,2,3", "EventCode": "0xc0", "EventName": "UNC_M3UPI_CMS_CLOCKTICKS", + "Experimental": "1", "PerPkg": "1", "Unit": "M3UPI" }, { "BriefDescription": "D2C Sent", + "Counter": "0,1,2,3", "EventCode": "0x2b", "EventName": "UNC_M3UPI_D2C_SENT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "D2C Sent : Count cases BL sends direct to core", "Unit": "M3UPI" }, { "BriefDescription": "D2U Sent", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_M3UPI_D2U_SENT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "D2U Sent : Cases where SMI3 sends D2U command", "Unit": "M3UPI" }, { "BriefDescription": "Egress Blocking due to Ordering requirements : Down", + "Counter": "0,1,2,3", "EventCode": "0xba", "EventName": "UNC_M3UPI_EGRESS_ORDERING.IV_SNOOPGO_DN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x4", @@ -2040,8 +2517,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements : Up", + "Counter": "0,1,2,3", "EventCode": "0xba", "EventName": "UNC_M3UPI_EGRESS_ORDERING.IV_SNOOPGO_UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Egress Blocking due to 
Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x1", @@ -2049,8 +2528,10 @@ }, { "BriefDescription": "M2 BL Credits Empty : IIO0 and IIO1 share the same ring destination. (1 VN0 credit only)", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO1_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2 BL Credits Empty : IIO0 and IIO1 share the same ring destination. (1 VN0 credit only) : No vn0 and vna credits available to send to M2", "UMask": "0x1", @@ -2058,8 +2539,10 @@ }, { "BriefDescription": "M2 BL Credits Empty : IIO2", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO2_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2 BL Credits Empty : IIO2 : No vn0 and vna credits available to send to M2", "UMask": "0x2", @@ -2067,8 +2550,10 @@ }, { "BriefDescription": "M2 BL Credits Empty : IIO3", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO3_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2 BL Credits Empty : IIO3 : No vn0 and vna credits available to send to M2", "UMask": "0x4", @@ -2076,8 +2561,10 @@ }, { "BriefDescription": "M2 BL Credits Empty : IIO4", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO4_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2 BL Credits Empty : IIO4 : No vn0 and vna credits available to send to M2", "UMask": "0x8", @@ -2085,8 +2572,10 @@ }, { "BriefDescription": "M2 BL Credits Empty : IIO5", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO5_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2 BL Credits Empty : IIO5 : No vn0 and vna credits available to send to M2", "UMask": "0x10", @@ -2094,8 +2583,10 @@ }, { "BriefDescription": "M2 BL 
Credits Empty : All IIO targets for NCS are in single mask. ORs them together", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2 BL Credits Empty : All IIO targets for NCS are in single mask. ORs them together : No vn0 and vna credits available to send to M2", "UMask": "0x40", @@ -2103,8 +2594,10 @@ }, { "BriefDescription": "M2 BL Credits Empty : Selected M2p BL NCS credits", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.NCS_SEL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2 BL Credits Empty : Selected M2p BL NCS credits : No vn0 and vna credits available to send to M2", "UMask": "0x80", @@ -2112,8 +2605,10 @@ }, { "BriefDescription": "M2 BL Credits Empty : IIO5", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.UBOX_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2 BL Credits Empty : IIO5 : No vn0 and vna credits available to send to M2", "UMask": "0x20", @@ -2121,8 +2616,10 @@ }, { "BriefDescription": "Multi Slot Flit Received : AD - Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x3e", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi Slot Flit Received : AD - Slot 0 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x1", @@ -2130,8 +2627,10 @@ }, { "BriefDescription": "Multi Slot Flit Received : AD - Slot 1", + "Counter": "0,1,2,3", "EventCode": "0x3e", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi Slot Flit Received : AD - Slot 1 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x2", @@ -2139,8 +2638,10 @@ }, { "BriefDescription": "Multi Slot Flit Received : 
AD - Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x3e", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi Slot Flit Received : AD - Slot 2 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x4", @@ -2148,8 +2649,10 @@ }, { "BriefDescription": "Multi Slot Flit Received : AK - Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x3e", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AK_SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi Slot Flit Received : AK - Slot 0 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x10", @@ -2157,8 +2660,10 @@ }, { "BriefDescription": "Multi Slot Flit Received : AK - Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x3e", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AK_SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi Slot Flit Received : AK - Slot 2 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x20", @@ -2166,8 +2671,10 @@ }, { "BriefDescription": "Multi Slot Flit Received : BL - Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x3e", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.BL_SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi Slot Flit Received : BL - Slot 0 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x8", @@ -2175,8 +2682,10 @@ }, { "BriefDescription": "Lost Arb for VN0 : REQ on AD", + "Counter": "0", "EventCode": "0x4b", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN0 : REQ on AD : VN0 message requested but lost arbitration : Home (REQ) messages on AD. 
REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -2184,8 +2693,10 @@ }, { "BriefDescription": "Lost Arb for VN0 : RSP on AD", + "Counter": "0", "EventCode": "0x4b", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN0 : RSP on AD : VN0 message requested but lost arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -2193,8 +2704,10 @@ }, { "BriefDescription": "Lost Arb for VN0 : SNP on AD", + "Counter": "0", "EventCode": "0x4b", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN0 : SNP on AD : VN0 message requested but lost arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -2202,8 +2715,10 @@ }, { "BriefDescription": "Lost Arb for VN0 : NCB on BL", + "Counter": "0", "EventCode": "0x4b", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN0 : NCB on BL : VN0 message requested but lost arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. 
For example, non-coherent read data returns.", "UMask": "0x20", @@ -2211,8 +2726,10 @@ }, { "BriefDescription": "Lost Arb for VN0 : NCS on BL", + "Counter": "0", "EventCode": "0x4b", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN0 : NCS on BL : VN0 message requested but lost arbitration : Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -2220,8 +2737,10 @@ }, { "BriefDescription": "Lost Arb for VN0 : RSP on BL", + "Counter": "0", "EventCode": "0x4b", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN0 : RSP on BL : VN0 message requested but lost arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -2229,8 +2748,10 @@ }, { "BriefDescription": "Lost Arb for VN0 : WB on BL", + "Counter": "0", "EventCode": "0x4b", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN0 : WB on BL : VN0 message requested but lost arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -2238,8 +2759,10 @@ }, { "BriefDescription": "Lost Arb for VN1 : REQ on AD", + "Counter": "0", "EventCode": "0x4c", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN1 : REQ on AD : VN1 message requested but lost arbitration : Home (REQ) messages on AD. 
REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -2247,8 +2770,10 @@ }, { "BriefDescription": "Lost Arb for VN1 : RSP on AD", + "Counter": "0", "EventCode": "0x4c", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN1 : RSP on AD : VN1 message requested but lost arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -2256,8 +2781,10 @@ }, { "BriefDescription": "Lost Arb for VN1 : SNP on AD", + "Counter": "0", "EventCode": "0x4c", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN1 : SNP on AD : VN1 message requested but lost arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -2265,8 +2792,10 @@ }, { "BriefDescription": "Lost Arb for VN1 : NCB on BL", + "Counter": "0", "EventCode": "0x4c", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN1 : NCB on BL : VN1 message requested but lost arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. 
For example, non-coherent read data returns.", "UMask": "0x20", @@ -2274,8 +2803,10 @@ }, { "BriefDescription": "Lost Arb for VN1 : NCS on BL", + "Counter": "0", "EventCode": "0x4c", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN1 : NCS on BL : VN1 message requested but lost arbitration : Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -2283,8 +2814,10 @@ }, { "BriefDescription": "Lost Arb for VN1 : RSP on BL", + "Counter": "0", "EventCode": "0x4c", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN1 : RSP on BL : VN1 message requested but lost arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -2292,8 +2825,10 @@ }, { "BriefDescription": "Lost Arb for VN1 : WB on BL", + "Counter": "0", "EventCode": "0x4c", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Lost Arb for VN1 : WB on BL : VN1 message requested but lost arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. 
For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -2301,8 +2836,10 @@ }, { "BriefDescription": "Arb Miscellaneous : AD, BL Parallel Win VN0", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_M3UPI_RxC_ARB_MISC.ADBL_PARALLEL_WIN_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arb Miscellaneous : AD, BL Parallel Win VN0 : AD and BL messages won arbitration concurrently / in parallel", "UMask": "0x10", @@ -2310,8 +2847,10 @@ }, { "BriefDescription": "Arb Miscellaneous : AD, BL Parallel Win VN1", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_M3UPI_RxC_ARB_MISC.ADBL_PARALLEL_WIN_VN1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arb Miscellaneous : AD, BL Parallel Win VN1 : AD and BL messages won arbitration concurrently / in parallel", "UMask": "0x20", @@ -2319,8 +2858,10 @@ }, { "BriefDescription": "Arb Miscellaneous : Max Parallel Win", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_M3UPI_RxC_ARB_MISC.ALL_PARALLEL_WIN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arb Miscellaneous : Max Parallel Win : VN0 and VN1 arbitration sub-pipelines both produced AD and BL winners (maximum possible parallel winners)", "UMask": "0x80", @@ -2328,8 +2869,10 @@ }, { "BriefDescription": "Arb Miscellaneous : No Progress on Pending AD VN0", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_AD_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arb Miscellaneous : No Progress on Pending AD VN0 : Arbitration stage made no progress on pending ad vn0 messages because slotting stage cannot accept new message", "UMask": "0x1", @@ -2337,8 +2880,10 @@ }, { "BriefDescription": "Arb Miscellaneous : No Progress on Pending AD VN1", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_AD_VN1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arb 
Miscellaneous : No Progress on Pending AD VN1 : Arbitration stage made no progress on pending ad vn1 messages because slotting stage cannot accept new message", "UMask": "0x2", @@ -2346,8 +2891,10 @@ }, { "BriefDescription": "Arb Miscellaneous : No Progress on Pending BL VN0", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_BL_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arb Miscellaneous : No Progress on Pending BL VN0 : Arbitration stage made no progress on pending bl vn0 messages because slotting stage cannot accept new message", "UMask": "0x4", @@ -2355,8 +2902,10 @@ }, { "BriefDescription": "Arb Miscellaneous : No Progress on Pending BL VN1", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_BL_VN1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arb Miscellaneous : No Progress on Pending BL VN1 : Arbitration stage made no progress on pending bl vn1 messages because slotting stage cannot accept new message", "UMask": "0x8", @@ -2364,8 +2913,10 @@ }, { "BriefDescription": "Arb Miscellaneous : VN0, VN1 Parallel Win", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_M3UPI_RxC_ARB_MISC.VN01_PARALLEL_WIN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arb Miscellaneous : VN0, VN1 Parallel Win : VN0 and VN1 arbitration sub-pipelines had parallel winners (at least one AD or BL on each side)", "UMask": "0x40", @@ -2373,8 +2924,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0 : REQ on AD", + "Counter": "0", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN0 : REQ on AD : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Home (REQ) messages on AD. 
REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -2382,8 +2935,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0 : RSP on AD", + "Counter": "0", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN0 : RSP on AD : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -2391,8 +2946,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0 : SNP on AD", + "Counter": "0", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN0 : SNP on AD : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -2400,8 +2957,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0 : NCB on BL", + "Counter": "0", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN0 : NCB on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. 
For example, non-coherent read data returns.", "UMask": "0x20", @@ -2409,8 +2968,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0 : NCS on BL", + "Counter": "0", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN0 : NCS on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -2418,8 +2979,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0 : RSP on BL", + "Counter": "0", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN0 : RSP on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -2427,8 +2990,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0 : WB on BL", + "Counter": "0", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN0 : WB on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -2436,8 +3001,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1 : REQ on AD", + "Counter": "0", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN1 : REQ on AD : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Home (REQ) messages on AD. 
REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -2445,8 +3012,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1 : RSP on AD", + "Counter": "0", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN1 : RSP on AD : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -2454,8 +3023,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1 : SNP on AD", + "Counter": "0", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN1 : SNP on AD : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -2463,8 +3034,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1 : NCB on BL", + "Counter": "0", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN1 : NCB on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. 
For example, non-coherent read data returns.", "UMask": "0x20", @@ -2472,8 +3045,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1 : NCS on BL", + "Counter": "0", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN1 : NCS on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -2481,8 +3056,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1 : RSP on BL", + "Counter": "0", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN1 : RSP on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -2490,8 +3067,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1 : WB on BL", + "Counter": "0", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No Credits to Arb for VN1 : WB on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -2499,8 +3078,10 @@ }, { "BriefDescription": "Can't Arb for VN0 : REQ on AD", + "Counter": "0", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN0 : REQ on AD : VN0 message was not able to request arbitration while some other message won arbitration : Home (REQ) messages on AD. 
REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -2508,8 +3089,10 @@ }, { "BriefDescription": "Can't Arb for VN0 : RSP on AD", + "Counter": "0", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN0 : RSP on AD : VN0 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -2517,8 +3100,10 @@ }, { "BriefDescription": "Can't Arb for VN0 : SNP on AD", + "Counter": "0", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN0 : SNP on AD : VN0 message was not able to request arbitration while some other message won arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -2526,8 +3111,10 @@ }, { "BriefDescription": "Can't Arb for VN0 : NCB on BL", + "Counter": "0", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN0 : NCB on BL : VN0 message was not able to request arbitration while some other message won arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. 
For example, non-coherent read data returns.", "UMask": "0x20", @@ -2535,8 +3122,10 @@ }, { "BriefDescription": "Can't Arb for VN0 : NCS on BL", + "Counter": "0", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN0 : NCS on BL : VN0 message was not able to request arbitration while some other message won arbitration : Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -2544,8 +3133,10 @@ }, { "BriefDescription": "Can't Arb for VN0 : RSP on BL", + "Counter": "0", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN0 : RSP on BL : VN0 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -2553,8 +3144,10 @@ }, { "BriefDescription": "Can't Arb for VN0 : WB on BL", + "Counter": "0", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN0 : WB on BL : VN0 message was not able to request arbitration while some other message won arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -2562,8 +3155,10 @@ }, { "BriefDescription": "Can't Arb for VN1 : REQ on AD", + "Counter": "0", "EventCode": "0x4a", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN1 : REQ on AD : VN1 message was not able to request arbitration while some other message won arbitration : Home (REQ) messages on AD. 
REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -2571,8 +3166,10 @@ }, { "BriefDescription": "Can't Arb for VN1 : RSP on AD", + "Counter": "0", "EventCode": "0x4a", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN1 : RSP on AD : VN1 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -2580,8 +3177,10 @@ }, { "BriefDescription": "Can't Arb for VN1 : SNP on AD", + "Counter": "0", "EventCode": "0x4a", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN1 : SNP on AD : VN1 message was not able to request arbitration while some other message won arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -2589,8 +3188,10 @@ }, { "BriefDescription": "Can't Arb for VN1 : NCB on BL", + "Counter": "0", "EventCode": "0x4a", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN1 : NCB on BL : VN1 message was not able to request arbitration while some other message won arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. 
For example, non-coherent read data returns.", "UMask": "0x20", @@ -2598,8 +3199,10 @@ }, { "BriefDescription": "Can't Arb for VN1 : NCS on BL", + "Counter": "0", "EventCode": "0x4a", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN1 : NCS on BL : VN1 message was not able to request arbitration while some other message won arbitration : Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -2607,8 +3210,10 @@ }, { "BriefDescription": "Can't Arb for VN1 : RSP on BL", + "Counter": "0", "EventCode": "0x4a", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN1 : RSP on BL : VN1 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -2616,8 +3221,10 @@ }, { "BriefDescription": "Can't Arb for VN1 : WB on BL", + "Counter": "0", "EventCode": "0x4a", "EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Can't Arb for VN1 : WB on BL : VN1 message was not able to request arbitration while some other message won arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. 
For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -2625,8 +3232,10 @@ }, { "BriefDescription": "Ingress Queue Bypasses : AD to Slot 0 on BL Arb", + "Counter": "0,1,2", "EventCode": "0x40", "EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S0_BL_ARB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress Queue Bypasses : AD to Slot 0 on BL Arb : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to slot 0 of independent flit while bl message is in arbitration", "UMask": "0x2", @@ -2634,8 +3243,10 @@ }, { "BriefDescription": "Ingress Queue Bypasses : AD to Slot 0 on Idle", + "Counter": "0,1,2", "EventCode": "0x40", "EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S0_IDLE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress Queue Bypasses : AD to Slot 0 on Idle : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to slot 0 of independent flit while pipeline is idle", "UMask": "0x1", @@ -2643,8 +3254,10 @@ }, { "BriefDescription": "Ingress Queue Bypasses : AD + BL to Slot 1", + "Counter": "0,1,2", "EventCode": "0x40", "EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S1_BL_SLOT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress Queue Bypasses : AD + BL to Slot 1 : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to flit slot 1 while merging with bl message in same flit", "UMask": "0x4", @@ -2652,8 +3265,10 @@ }, { "BriefDescription": "Ingress Queue Bypasses : AD + BL to Slot 2", + "Counter": "0,1,2", "EventCode": "0x40", "EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S2_BL_SLOT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress Queue Bypasses : AD + BL to Slot 2 : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to flit slot 2 while merging with bl message in same flit", "UMask": "0x8", @@ -2661,8 +3276,10 @@ }, {
"BriefDescription": "Miscellaneous Credit Events : Any In BGF FIFO", + "Counter": "0", "EventCode": "0x5f", "EventName": "UNC_M3UPI_RxC_CRD_MISC.ANY_BGF_FIFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Miscellaneous Credit Events : Any In BGF FIFO : Indication that at least one packet (flit) is in the bgf (fifo only)", "UMask": "0x1", @@ -2670,8 +3287,10 @@ }, { "BriefDescription": "Miscellaneous Credit Events : Any in BGF Path", + "Counter": "0", "EventCode": "0x5f", "EventName": "UNC_M3UPI_RxC_CRD_MISC.ANY_BGF_PATH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Miscellaneous Credit Events : Any in BGF Path : Indication that at least one packet (flit) is in the bgf path (i.e. pipe to fifo)", "UMask": "0x2", @@ -2679,8 +3298,10 @@ }, { "BriefDescription": "Miscellaneous Credit Events", + "Counter": "0", "EventCode": "0x5f", "EventName": "UNC_M3UPI_RxC_CRD_MISC.LT1_FOR_D2K", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Miscellaneous Credit Events : d2k credit count is less than 1", "UMask": "0x10", @@ -2688,8 +3309,10 @@ }, { "BriefDescription": "Miscellaneous Credit Events", + "Counter": "0", "EventCode": "0x5f", "EventName": "UNC_M3UPI_RxC_CRD_MISC.LT2_FOR_D2K", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Miscellaneous Credit Events : d2k credit count is less than 2", "UMask": "0x20", @@ -2697,8 +3320,10 @@ }, { "BriefDescription": "Miscellaneous Credit Events : No D2K For Arb", + "Counter": "0", "EventCode": "0x5f", "EventName": "UNC_M3UPI_RxC_CRD_MISC.VN0_NO_D2K_FOR_ARB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Miscellaneous Credit Events : No D2K For Arb : VN0 BL RSP message was blocked from arbitration request due to lack of D2K CMP credit", "UMask": "0x4", @@ -2706,8 +3331,10 @@ }, { "BriefDescription": "Miscellaneous Credit Events", + "Counter": "0", "EventCode": "0x5f", "EventName": "UNC_M3UPI_RxC_CRD_MISC.VN1_NO_D2K_FOR_ARB", + "Experimental": "1",
"PerPkg": "1", "PublicDescription": "Miscellaneous Credit Events : VN1 BL RSP message was blocked from arbitration request due to lack of D2K CMP credits", "UMask": "0x8", @@ -2715,8 +3342,10 @@ }, { "BriefDescription": "Credit Occupancy : Credits Consumed", + "Counter": "0", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_OCC.CONSUMED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Credit Occupancy : Credits Consumed : number of remote vna credits consumed per cycle", "UMask": "0x80", @@ -2724,8 +3353,10 @@ }, { "BriefDescription": "Credit Occupancy : D2K Credits", + "Counter": "0", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_OCC.D2K_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Credit Occupancy : D2K Credits : D2K completion fifo credit occupancy (credits in use), accumulated across all cycles", "UMask": "0x10", @@ -2733,8 +3364,10 @@ }, { "BriefDescription": "Credit Occupancy : Packets in BGF FIFO", + "Counter": "0", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_OCC.FLITS_IN_FIFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Credit Occupancy : Packets in BGF FIFO : Occupancy of m3upi ingress -> upi link layer bgf; packets (flits) in fifo", "UMask": "0x2", @@ -2742,8 +3375,10 @@ }, { "BriefDescription": "Credit Occupancy : Packets in BGF Path", + "Counter": "0", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_OCC.FLITS_IN_PATH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Credit Occupancy : Packets in BGF Path : Occupancy of m3upi ingress -> upi link layer bgf; packets (flits) in path (i.e.
pipe to fifo or fifo)", "UMask": "0x4", @@ -2751,8 +3386,10 @@ }, { "BriefDescription": "Credit Occupancy", + "Counter": "0", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_OCC.P1P_FIFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Credit Occupancy : count of bl messages in pump-1-pending state, in completion fifo only", "UMask": "0x40", @@ -2760,8 +3397,10 @@ }, { "BriefDescription": "Credit Occupancy", + "Counter": "0", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_OCC.P1P_TOTAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Credit Occupancy : count of bl messages in pump-1-pending state, in marker table and in fifo", "UMask": "0x20", @@ -2769,8 +3408,10 @@ }, { "BriefDescription": "Credit Occupancy : Transmit Credits", + "Counter": "0", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_OCC.TxQ_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Credit Occupancy : Transmit Credits : Link layer transmit queue credit occupancy (credits in use), accumulated across all cycles", "UMask": "0x8", @@ -2778,8 +3419,10 @@ }, { "BriefDescription": "Credit Occupancy : VNA In Use", + "Counter": "0", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_OCC.VNA_IN_USE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Credit Occupancy : VNA In Use : Remote UPI VNA credit occupancy (number of credits in use), accumulated across all cycles", "UMask": "0x1", @@ -2787,8 +3430,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : REQ on AD", + "Counter": "0", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : REQ on AD : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent.
This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -2796,8 +3441,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on AD", + "Counter": "0", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on AD : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -2805,8 +3452,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : SNP on AD", + "Counter": "0", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : SNP on AD : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Snoops (SNP) messages on AD.
SNP is used for outgoing snoops.", "UMask": "0x2", @@ -2814,8 +3463,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCB on BL", + "Counter": "0", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCB on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.", "UMask": "0x20", @@ -2823,8 +3474,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCS on BL", + "Counter": "0", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCS on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.
: Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -2832,8 +3485,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on BL", + "Counter": "0", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -2841,8 +3496,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : WB on BL", + "Counter": "0", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : WB on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Data Response (WB) messages on BL.
WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -2850,8 +3507,10 @@ }, { "BriefDescription": "Data Flit Not Sent : All", + "Counter": "0", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Data Flit Not Sent : All : Data flit is ready for transmission but could not be sent : data flit is ready for transmission but could not be sent for any reason, e.g. low credits, low tsv, stall injection", "UMask": "0x1", @@ -2859,8 +3518,10 @@ }, { "BriefDescription": "Data Flit Not Sent : No BGF Credits", + "Counter": "0", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.NO_BGF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Data Flit Not Sent : No BGF Credits : Data flit is ready for transmission but could not be sent", "UMask": "0x8", @@ -2868,8 +3529,10 @@ }, { "BriefDescription": "Data Flit Not Sent : No TxQ Credits", + "Counter": "0", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.NO_TXQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Data Flit Not Sent : No TxQ Credits : Data flit is ready for transmission but could not be sent", "UMask": "0x10", @@ -2877,8 +3540,10 @@ }, { "BriefDescription": "Data Flit Not Sent : TSV High", + "Counter": "0", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.TSV_HI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Data Flit Not Sent : TSV High : Data flit is ready for transmission but could not be sent : data flit is ready for transmission but was not sent while tsv high", "UMask": "0x2", @@ -2886,8 +3551,10 @@ }, { "BriefDescription": "Data Flit Not Sent : Cycle valid for Flit", + "Counter": "0", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.VALID_FOR_FLIT", + "Experimental": "1",
"PerPkg": "1", "PublicDescription": "Data Flit Not Sent : Cycle valid for Flit : Data flit is ready for transmission but could not be sent : data flit is ready for transmission but was not sent while cycle is valid for flit transmission", "UMask": "0x4", @@ -2895,8 +3562,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence : Wait on Pump 0", + "Counter": "0", "EventCode": "0x57", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P0_WAIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Generating BL Data Flit Sequence : Wait on Pump 0 : generating bl data flit sequence; waiting for data pump 0", "UMask": "0x1", @@ -2904,8 +3573,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence", + "Counter": "0", "EventCode": "0x57", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_AT_LIMIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending logic is at capacity (pending table plus completion fifo at limit)", "UMask": "0x10", @@ -2913,8 +3584,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence", + "Counter": "0", "EventCode": "0x57", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_BUSY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending logic is tracking at least one message", "UMask": "0x8", @@ -2922,8 +3595,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence", + "Counter": "0", "EventCode": "0x57", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_FIFO_FULL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending completion fifo is full", "UMask": "0x40", @@ -2931,8 +3606,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence", + "Counter": "0", "EventCode": "0x57", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_HOLD_P0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending logic is
at or near capacity, such that pump-0-only bl messages are getting stalled in slotting stage", "UMask": "0x20", @@ -2940,8 +3617,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence", + "Counter": "0", "EventCode": "0x57", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_TO_LIMBO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Generating BL Data Flit Sequence : a bl message finished but is in limbo and moved to pump-1-pending logic", "UMask": "0x4", @@ -2949,8 +3628,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence : Wait on Pump 1", + "Counter": "0", "EventCode": "0x57", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1_WAIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Generating BL Data Flit Sequence : Wait on Pump 1 : generating bl data flit sequence; waiting for data pump 1", "UMask": "0x2", @@ -2958,8 +3639,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_HOLDOFF", + "Counter": "0", "EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_HOLDOFF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": slot 2 request naturally serviced during hold-off period", "UMask": "0x4", @@ -2967,8 +3650,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_SERVICE", + "Counter": "0", "EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_SERVICE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": slot 2 request forcibly serviced during service window", "UMask": "0x8", @@ -2976,8 +3661,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_RECEIVED", + "Counter": "0", "EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_RECEIVED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": slot 2 request received from link layer while idle (with no slot 2 request active immediately prior)", "UMask": "0x1", @@ -2985,8 +3672,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_WITHDRAWN", + "Counter": "0",
"EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_WITHDRAWN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": slot 2 request withdrawn during hold-off period or service window", "UMask": "0x2", @@ -2994,16 +3683,20 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit : All", + "Counter": "0", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "Slotting BL Message Into Header Flit : Needs Data Flit", + "Counter": "0", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.NEED_DATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Slotting BL Message Into Header Flit : Needs Data Flit : BL message requires data flit sequence", "UMask": "0x2", @@ -3011,8 +3704,10 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit : Wait on Pump 0", + "Counter": "0", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P0_WAIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Slotting BL Message Into Header Flit : Wait on Pump 0 : Waiting for header pump 0", "UMask": "0x4", @@ -3020,8 +3715,10 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1", + "Counter": "0", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 : Header pump 1 is not required for flit", "UMask": "0x10", @@ -3029,8 +3726,10 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Bubble", + "Counter": "0", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ_BUT_BUBBLE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Bubble : Header pump 1 is not required for flit but flit transmission delayed",
"UMask": "0x20", @@ -3038,8 +3737,10 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Not Avail", + "Counter": "0", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ_NOT_AVAIL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Not Avail : Header pump 1 is not required for flit and not available", "UMask": "0x40", @@ -3047,8 +3748,10 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit : Wait on Pump 1", + "Counter": "0", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_WAIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Slotting BL Message Into Header Flit : Wait on Pump 1 : Waiting for header pump 1", "UMask": "0x8", @@ -3056,8 +3759,10 @@ }, { "BriefDescription": "Flit Gen - Header 1 : Accumulate", + "Counter": "0", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 1 : Accumulate : Events related to Header Flit Generation - Set 1 : Header flit slotting control state machine is in any accumulate state; multi-message flit may be assembled over multiple cycles", "UMask": "0x1", @@ -3065,8 +3770,10 @@ }, { "BriefDescription": "Flit Gen - Header 1 : Accumulate Ready", + "Counter": "0", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM_READ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 1 : Accumulate Ready : Events related to Header Flit Generation - Set 1 : header flit slotting control state machine is in accum_ready state; flit is ready to send but transmission is blocked; more messages may be slotted into flit", "UMask": "0x2", @@ -3074,8 +3781,10 @@ }, { "BriefDescription": "Flit Gen - Header 1 : Accumulate Wasted", + "Counter": "0", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM_WASTED", +
"Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 1 : Accumulate Wasted : Events related to Header Flit Generation - Set 1 : Flit is being assembled over multiple cycles, but no additional message is being slotted into flit in current cycle; accumulate cycle is wasted", "UMask": "0x4", @@ -3083,8 +3792,10 @@ }, { "BriefDescription": "Flit Gen - Header 1 : Run-Ahead - Blocked", + "Counter": "0", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_BLOCKED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 1 : Run-Ahead - Blocked : Events related to Header Flit Generation - Set 1 : Header flit slotting entered run-ahead state; new header flit is started while transmission of prior, fully assembled flit is blocked", "UMask": "0x8", @@ -3092,8 +3803,10 @@ }, { "BriefDescription": "Flit Gen - Header 1", + "Counter": "0", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG1_AFTER", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 1 : Events related to Header Flit Generation - Set 1 : run-ahead mode: message was slotted only after run-ahead was over; run-ahead mode definitely wasted", "UMask": "0x80", @@ -3101,8 +3814,10 @@ }, { "BriefDescription": "Flit Gen - Header 1 : Run-Ahead - Message", + "Counter": "0", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG1_DURING", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 1 : Run-Ahead - Message : Events related to Header Flit Generation - Set 1 : run-ahead mode: one message slotted during run-ahead", "UMask": "0x10", @@ -3110,8 +3825,10 @@ }, { "BriefDescription": "Flit Gen - Header 1", + "Counter": "0", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG2_AFTER", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 1 : Events related to Header Flit Generation - Set 1 : run-ahead mode:
second message slotted immediately after run-ahead; potential run-ahead success", "UMask": "0x20", @@ -3119,8 +3836,10 @@ }, { "BriefDescription": "Flit Gen - Header 1", + "Counter": "0", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG2_SENT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 1 : Events related to Header Flit Generation - Set 1 : run-ahead mode: two (or three) message flit sent immediately after run-ahead; complete run-ahead success", "UMask": "0x40", @@ -3128,8 +3847,10 @@ }, { "BriefDescription": "Flit Gen - Header 2 : Parallel Ok", + "Counter": "0", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.PAR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 2 : Parallel Ok : Events related to Header Flit Generation - Set 2 : new header flit construction may proceed in parallel with data flit sequence", "UMask": "0x4", @@ -3137,8 +3858,10 @@ }, { "BriefDescription": "Flit Gen - Header 2 : Parallel Flit Finished", + "Counter": "0", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.PAR_FLIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 2 : Parallel Flit Finished : Events related to Header Flit Generation - Set 2 : header flit finished assembly in parallel with data flit sequence", "UMask": "0x10", @@ -3146,8 +3869,10 @@ }, { "BriefDescription": "Flit Gen - Header 2 : Parallel Message", + "Counter": "0", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.PAR_MSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 2 : Parallel Message : Events related to Header Flit Generation - Set 2 : message is slotted into header flit in parallel with data flit sequence", "UMask": "0x8", @@ -3155,8 +3880,10 @@ }, { "BriefDescription": "Flit Gen - Header 2 : Rate-matching Stall", + "Counter": "0", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.RMSTALL", +
"Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 2 : Rate-matching Stall : Events related to Header Flit Generation - Set 2 : Rate-matching stall injected", "UMask": "0x1", @@ -3164,8 +3891,10 @@ }, { "BriefDescription": "Flit Gen - Header 2 : Rate-matching Stall - No Message", + "Counter": "0", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.RMSTALL_NOMSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Flit Gen - Header 2 : Rate-matching Stall - No Message : Events related to Header Flit Generation - Set 2 : Rate matching stall injected, but no additional message slotted during stall cycle", "UMask": "0x2", @@ -3173,8 +3902,10 @@ }, { "BriefDescription": "Sent Header Flit : One Message", + "Counter": "0", "EventCode": "0x54", "EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.1_MSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Sent Header Flit : One Message : One message in flit; VNA or non-VNA flit", "UMask": "0x1", @@ -3182,8 +3913,10 @@ }, { "BriefDescription": "Sent Header Flit : One Message in non-VNA", + "Counter": "0", "EventCode": "0x54", "EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.1_MSG_VNX", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Sent Header Flit : One Message in non-VNA : One message in flit; non-VNA flit", "UMask": "0x8", @@ -3191,8 +3924,10 @@ }, { "BriefDescription": "Sent Header Flit : Two Messages", + "Counter": "0", "EventCode": "0x54", "EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.2_MSGS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Sent Header Flit : Two Messages : Two messages in flit; VNA flit", "UMask": "0x2", @@ -3200,8 +3935,10 @@ }, { "BriefDescription": "Sent Header Flit : Three Messages", + "Counter": "0", "EventCode": "0x54", "EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.3_MSGS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Sent Header Flit : Three Messages : Three messages in flit; VNA flit", "UMask": "0x4", @@
-3209,32 +3946,40 @@ }, { "BriefDescription": "Sent Header Flit : One Slot Taken", + "Counter": "0", "EventCode": "0x54", "EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.SLOTS_1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M3UPI" }, { "BriefDescription": "Sent Header Flit : Two Slots Taken", + "Counter": "0", "EventCode": "0x54", "EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.SLOTS_2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M3UPI" }, { "BriefDescription": "Sent Header Flit : All Slots Taken", + "Counter": "0", "EventCode": "0x54", "EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.SLOTS_3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M3UPI" }, { "BriefDescription": "Header Not Sent : All", + "Counter": "0", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Header Not Sent : All : header flit is ready for transmission but could not be sent : header flit is ready for transmission but could not be sent for any reason, e.g.
no credits, low tsv, stall injection", "UMask": "0x1", @@ -3242,8 +3987,10 @@ }, { "BriefDescription": "Header Not Sent : No BGF Credits", + "Counter": "0", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_BGF_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Header Not Sent : No BGF Credits : header flit is ready for transmission but could not be sent : No BGF credits available", "UMask": "0x8", @@ -3251,8 +3998,10 @@ }, { "BriefDescription": "Header Not Sent : No BGF Credits + No Extra Message Slotted", + "Counter": "0", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_BGF_NO_MSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Header Not Sent : No BGF Credits + No Extra Message Slotted : header flit is ready for transmission but could not be sent : No BGF credits available; no additional message slotted into flit", "UMask": "0x20", @@ -3260,8 +4009,10 @@ }, { "BriefDescription": "Header Not Sent : No TxQ Credits", + "Counter": "0", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_TXQ_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Header Not Sent : No TxQ Credits : header flit is ready for transmission but could not be sent : No TxQ credits available", "UMask": "0x10", @@ -3269,8 +4020,10 @@ }, { "BriefDescription": "Header Not Sent : No TxQ Credits + No Extra Message Slotted", + "Counter": "0", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_TXQ_NO_MSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Header Not Sent : No TxQ Credits + No Extra Message Slotted : header flit is ready for transmission but could not be sent : No TxQ credits available; no additional message slotted into flit", "UMask": "0x40", @@ -3278,8 +4031,10 @@ }, { "BriefDescription": "Header Not Sent : TSV High", + "Counter": "0", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.TSV_HI", + "Experimental": "1",
"PerPkg": "1", "PublicDescription": "Header Not Sent : TSV High : header flit is ready for transmission but could not be sent : header flit is ready for transmission but was not sent while tsv high", "UMask": "0x2", @@ -3287,8 +4042,10 @@ }, { "BriefDescription": "Header Not Sent : Cycle valid for Flit", + "Counter": "0", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.VALID_FOR_FLIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Header Not Sent : Cycle valid for Flit : header flit is ready for transmission but could not be sent : header flit is ready for transmission but was not sent while cycle is valid for flit transmission", "UMask": "0x4", @@ -3296,8 +4053,10 @@ }, { "BriefDescription": "Message Held : Can't Slot AD", + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_HELD.CANT_SLOT_AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Message Held : Can't Slot AD : some AD message could not be slotted (logical OR of all AD events under INGR_SLOT_CANT_MC_VN{0,1})", "UMask": "0x10", @@ -3305,8 +4064,10 @@ }, { "BriefDescription": "Message Held : Can't Slot BL", + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_HELD.CANT_SLOT_BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Message Held : Can't Slot BL : some BL message could not be slotted (logical OR of all BL events under INGR_SLOT_CANT_MC_VN{0,1})", "UMask": "0x20", @@ -3314,8 +4075,10 @@ }, { "BriefDescription": "Message Held : Parallel Attempt", + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_ATTEMPT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Message Held : Parallel Attempt : ad and bl messages attempted to slot into the same flit in parallel", "UMask": "0x4", @@ -3323,8 +4086,10 @@ }, { "BriefDescription": "Message Held : Parallel Success", + "Counter": "0,1,2", "EventCode": "0x50", "EventName":
"UNC_M3UPI_RxC_HELD.PARALLEL_SUCCESS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Message Held : Parallel Success : ad and bl messages were actually slotted into the same flit in parallel", "UMask": "0x8", @@ -3332,8 +4097,10 @@ }, { "BriefDescription": "Message Held : VN0", + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_HELD.VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Message Held : VN0 : vn0 message(s) that couldn't be slotted into last vn0 flit are held in slotting stage while processing vn1 flit", "UMask": "0x1", @@ -3341,8 +4108,10 @@ }, { "BriefDescription": "Message Held : VN1", + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_HELD.VN1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Message Held : VN1 : vn1 message(s) that couldn't be slotted into last vn1 flit are held in slotting stage while processing vn0 flit", "UMask": "0x2", @@ -3350,8 +4119,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit : REQ on AD", + "Counter": "0,1,2", "EventCode": "0x4e", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message can't slot into flit : REQ on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Home (REQ) messages on AD.
REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -3359,8 +4130,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit : RSP on AD", + "Counter": "0,1,2", "EventCode": "0x4e", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message can't slot into flit : RSP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on AD.  RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -3368,8 +4141,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit : SNP on AD", + "Counter": "0,1,2", "EventCode": "0x4e", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message can't slot into flit : SNP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Snoops (SNP) messages on AD.  SNP is used for outgoing snoops.", "UMask": "0x2", @@ -3377,8 +4152,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit : NCB on BL", + "Counter": "0,1,2", "EventCode": "0x4e", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message can't slot into flit : NCB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Broadcast (NCB) messages on BL.  NCB is generally used to transmit data without coherency.
For example, non-coherent read data returns.", "UMask": "0x20", @@ -3386,8 +4163,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit : NCS on BL", + "Counter": "0,1,2", "EventCode": "0x4e", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message can't slot into flit : NCS on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -3395,8 +4174,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit : RSP on BL", + "Counter": "0,1,2", "EventCode": "0x4e", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message can't slot into flit : RSP on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -3404,8 +4185,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit : WB on BL", + "Counter": "0,1,2", "EventCode": "0x4e", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message can't slot into flit : WB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Data Response (WB) messages on BL.  WB is generally used to transmit data with coherency.
For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -3413,8 +4196,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit : REQ on AD", + "Counter": "0,1,2", "EventCode": "0x4f", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message can't slot into flit : REQ on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Home (REQ) messages on AD.  REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -3422,8 +4207,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit : RSP on AD", + "Counter": "0,1,2", "EventCode": "0x4f", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message can't slot into flit : RSP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on AD.  RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -3431,8 +4218,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit : SNP on AD", + "Counter": "0,1,2", "EventCode": "0x4f", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message can't slot into flit : SNP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Snoops (SNP) messages on AD.
SNP is used for outgoing snoops.", "UMask": "0x2", @@ -3440,8 +4229,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit : NCB on BL", + "Counter": "0,1,2", "EventCode": "0x4f", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message can't slot into flit : NCB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Broadcast (NCB) messages on BL.  NCB is generally used to transmit data without coherency.  For example, non-coherent read data returns.", "UMask": "0x20", @@ -3449,8 +4240,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit : NCS on BL", + "Counter": "0,1,2", "EventCode": "0x4f", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message can't slot into flit : NCS on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -3458,8 +4251,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit : RSP on BL", + "Counter": "0,1,2", "EventCode": "0x4f", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message can't slot into flit : RSP on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on BL.
RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -3467,8 +4262,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit : WB on BL", + "Counter": "0,1,2", "EventCode": "0x4f", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message can't slot into flit : WB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Data Response (WB) messages on BL.  WB is generally used to transmit data with coherency.  For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -3476,8 +4273,10 @@ }, { "BriefDescription": "Remote VNA Credits : Any In Use", + "Counter": "0", "EventCode": "0x5a", "EventName": "UNC_M3UPI_RxC_VNA_CRD.ANY_IN_USE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Remote VNA Credits : Any In Use : At least one remote vna credit is in use", "UMask": "0x20", @@ -3485,8 +4284,10 @@ }, { "BriefDescription": "Remote VNA Credits : Corrected", + "Counter": "0", "EventCode": "0x5a", "EventName": "UNC_M3UPI_RxC_VNA_CRD.CORRECTED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Remote VNA Credits : Corrected : Number of remote vna credits corrected (local return) per cycle", "UMask": "0x1", @@ -3494,8 +4295,10 @@ }, { "BriefDescription": "Remote VNA Credits : Level < 1", + "Counter": "0", "EventCode": "0x5a", "EventName": "UNC_M3UPI_RxC_VNA_CRD.LT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Remote VNA Credits : Level < 1 : Remote vna credit level is less than 1 (i.e.
no vna credits available)", "UMask": "0x2", @@ -3503,8 +4306,10 @@ }, { "BriefDescription": "Remote VNA Credits : Level < 10", + "Counter": "0", "EventCode": "0x5a", "EventName": "UNC_M3UPI_RxC_VNA_CRD.LT10", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Remote VNA Credits : Level < 10 : remote vna credit level is less than 10; parallel vn0/vn1 arb not possible", "UMask": "0x10", @@ -3512,8 +4317,10 @@ }, { "BriefDescription": "Remote VNA Credits : Level < 4", + "Counter": "0", "EventCode": "0x5a", "EventName": "UNC_M3UPI_RxC_VNA_CRD.LT4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Remote VNA Credits : Level < 4 : Remote vna credit level is less than 4; bl (or ad requiring 4 vna) cannot arb on vna", "UMask": "0x4", @@ -3521,8 +4328,10 @@ }, { "BriefDescription": "Remote VNA Credits : Level < 5", + "Counter": "0", "EventCode": "0x5a", "EventName": "UNC_M3UPI_RxC_VNA_CRD.LT5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Remote VNA Credits : Level < 5 : Remote vna credit level is less than 5; parallel ad/bl arb on vna not possible", "UMask": "0x8", @@ -3530,8 +4339,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_ADBL_ALLOC_L5", + "Counter": "0", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_ADBL_ALLOC_L5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": remote vna credit count was less than 5 and allocation to ad or bl messages was required", "UMask": "0x2", @@ -3539,8 +4350,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_VN01_ALLOC_LT10", + "Counter": "0", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_VN01_ALLOC_LT10", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": remote vna credit count was less than 10 and allocation to vn0 or vn1 was required", "UMask": "0x1", @@ -3548,8 +4361,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_AD", + "Counter": "0", "EventCode": "0x59", "EventName":
"UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": on vn0, remote vna credits were allocated only to ad messages, not to bl", "UMask": "0x10", @@ -3557,8 +4372,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_BL", + "Counter": "0", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": on vn0, remote vna credits were allocated only to bl messages, not to ad", "UMask": "0x20", @@ -3566,8 +4383,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_ONLY", + "Counter": "0", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_ONLY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": remote vna credits were allocated only to vn0, not to vn1", "UMask": "0x4", @@ -3575,8 +4394,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_AD", + "Counter": "0", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": on vn1, remote vna credits were allocated only to ad messages, not to bl", "UMask": "0x40", @@ -3584,8 +4405,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_BL", + "Counter": "0", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": on vn1, remote vna credits were allocated only to bl messages, not to ad", "UMask": "0x80", @@ -3593,8 +4416,10 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_ONLY", + "Counter": "0", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_ONLY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": remote vna credits were allocated only to vn1, not to vn0", "UMask": "0x8", @@ -3602,8 +4427,10 @@ }, { "BriefDescription": "Failed ARB for AD : VN0 REQ Messages", + "Counter": "0", "EventCode": "0x30", "EventName":
"UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for AD : VN0 REQ Messages : AD arb but no win; arb request asserted but not won", "UMask": "0x1", @@ -3611,8 +4438,10 @@ }, { "BriefDescription": "Failed ARB for AD : VN0 RSP Messages", + "Counter": "0", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for AD : VN0 RSP Messages : AD arb but no win; arb request asserted but not won", "UMask": "0x4", @@ -3620,8 +4449,10 @@ }, { "BriefDescription": "Failed ARB for AD : VN0 SNP Messages", + "Counter": "0", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for AD : VN0 SNP Messages : AD arb but no win; arb request asserted but not won", "UMask": "0x2", @@ -3629,8 +4460,10 @@ }, { "BriefDescription": "Failed ARB for AD : VN0 WB Messages", + "Counter": "0", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for AD : VN0 WB Messages : AD arb but no win; arb request asserted but not won", "UMask": "0x8", @@ -3638,8 +4471,10 @@ }, { "BriefDescription": "Failed ARB for AD : VN1 REQ Messages", + "Counter": "0", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for AD : VN1 REQ Messages : AD arb but no win; arb request asserted but not won", "UMask": "0x10", @@ -3647,8 +4482,10 @@ }, { "BriefDescription": "Failed ARB for AD : VN1 RSP Messages", + "Counter": "0", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for AD : VN1 RSP Messages : AD arb but no win; arb request asserted but not won", "UMask": "0x40", @@ -3656,8 +4493,10 @@ }, { "BriefDescription":
"Failed ARB for AD : VN1 SNP Messages", + "Counter": "0", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for AD : VN1 SNP Messages : AD arb but no win; arb request asserted but not won", "UMask": "0x20", @@ -3665,8 +4504,10 @@ }, { "BriefDescription": "Failed ARB for AD : VN1 WB Messages", + "Counter": "0", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for AD : VN1 WB Messages : AD arb but no win; arb request asserted but not won", "UMask": "0x80", @@ -3674,8 +4515,10 @@ }, { "BriefDescription": "AD FlowQ Bypass", + "Counter": "0,1,2,3", "EventCode": "0x2C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -3684,8 +4527,10 @@ }, { "BriefDescription": "AD FlowQ Bypass", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)", "UMask": "0x1", @@ -3693,8 +4538,10 @@ }, { "BriefDescription": "AD FlowQ Bypass", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)", "UMask": "0x2", @@ -3702,8 +4549,10 @@ }, { "BriefDescription": "AD FlowQ Bypass", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed
(S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)", "UMask": "0x4", @@ -3711,8 +4560,10 @@ }, { "BriefDescription": "AD FlowQ Bypass", + "Counter": "0,1,2,3", "EventCode": "0x2c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.BL_EARLY_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)", "UMask": "0x8", @@ -3720,8 +4571,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty : VN0 REQ Messages", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Not Empty : VN0 REQ Messages : Number of cycles the AD Egress queue is Not Empty", "UMask": "0x1", @@ -3729,8 +4582,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty : VN0 RSP Messages", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Not Empty : VN0 RSP Messages : Number of cycles the AD Egress queue is Not Empty", "UMask": "0x4", @@ -3738,8 +4593,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty : VN0 SNP Messages", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Not Empty : VN0 SNP Messages : Number of cycles the AD Egress queue is Not Empty", "UMask": "0x2", @@ -3747,8 +4604,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty : VN0 WB Messages", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Not Empty : VN0 WB Messages : Number of cycles the AD Egress queue is Not Empty", "UMask": "0x8", @@ -3756,8 +4615,10 @@
}, { "BriefDescription": "AD Flow Q Not Empty : VN1 REQ Messages", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Not Empty : VN1 REQ Messages : Number of cycles the AD Egress queue is Not Empty", "UMask": "0x10", @@ -3765,8 +4626,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty : VN1 RSP Messages", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Not Empty : VN1 RSP Messages : Number of cycles the AD Egress queue is Not Empty", "UMask": "0x40", @@ -3774,8 +4637,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty : VN1 SNP Messages", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Not Empty : VN1 SNP Messages : Number of cycles the AD Egress queue is Not Empty", "UMask": "0x20", @@ -3783,8 +4648,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty : VN1 WB Messages", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Not Empty : VN1 WB Messages : Number of cycles the AD Egress queue is Not Empty", "UMask": "0x80", @@ -3792,8 +4659,10 @@ }, { "BriefDescription": "AD Flow Q Inserts : VN0 REQ Messages", + "Counter": "0,1,2,3", "EventCode": "0x2d", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Inserts : VN0 REQ Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency.  Only a single FlowQ queue can be tracked at any given time.
It is not possible to filter based on direction or polarity.", "UMask": "0x1", @@ -3801,8 +4670,10 @@ }, { "BriefDescription": "AD Flow Q Inserts : VN0 RSP Messages", + "Counter": "0,1,2,3", "EventCode": "0x2d", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Inserts : VN0 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency.  Only a single FlowQ queue can be tracked at any given time.  It is not possible to filter based on direction or polarity.", "UMask": "0x4", @@ -3810,8 +4681,10 @@ }, { "BriefDescription": "AD Flow Q Inserts : VN0 SNP Messages", + "Counter": "0,1,2,3", "EventCode": "0x2d", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Inserts : VN0 SNP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency.  Only a single FlowQ queue can be tracked at any given time.  It is not possible to filter based on direction or polarity.", "UMask": "0x2", @@ -3819,8 +4692,10 @@ }, { "BriefDescription": "AD Flow Q Inserts : VN0 WB Messages", + "Counter": "0,1,2,3", "EventCode": "0x2d", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Inserts : VN0 WB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency.  Only a single FlowQ queue can be tracked at any given time.
It is not possible to filter based on direction or polarity.", "UMask": "0x8", @@ -3828,8 +4703,10 @@ }, { "BriefDescription": "AD Flow Q Inserts : VN1 REQ Messages", + "Counter": "0,1,2,3", "EventCode": "0x2d", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Inserts : VN1 REQ Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency.  Only a single FlowQ queue can be tracked at any given time.  It is not possible to filter based on direction or polarity.", "UMask": "0x10", @@ -3837,8 +4714,10 @@ }, { "BriefDescription": "AD Flow Q Inserts : VN1 RSP Messages", + "Counter": "0,1,2,3", "EventCode": "0x2d", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Inserts : VN1 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency.  Only a single FlowQ queue can be tracked at any given time.  It is not possible to filter based on direction or polarity.", "UMask": "0x40", @@ -3846,8 +4725,10 @@ }, { "BriefDescription": "AD Flow Q Inserts : VN1 SNP Messages", + "Counter": "0,1,2,3", "EventCode": "0x2d", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Flow Q Inserts : VN1 SNP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency.  Only a single FlowQ queue can be tracked at any given time.
It is not possible to filter based on direction or polarity.", "UMask": "0x20", @@ -3855,78 +4736,98 @@ }, { "BriefDescription": "AD Flow Q Occupancy : VN0 REQ Messages", + "Counter": "0", "EventCode": "0x1c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy : VN0 RSP Messages", + "Counter": "0", "EventCode": "0x1c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy : VN0 SNP Messages", + "Counter": "0", "EventCode": "0x1c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy : VN0 WB Messages", + "Counter": "0", "EventCode": "0x1c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy : VN1 REQ Messages", + "Counter": "0", "EventCode": "0x1c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy : VN1 RSP Messages", + "Counter": "0", "EventCode": "0x1c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy : VN1 SNP Messages", + "Counter": "0", "EventCode": "0x1c", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M3UPI" }, { "BriefDescription": "AK Flow Q Inserts", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_M3UPI_TxC_AK_FLQ_INSERTS", + "Experimental": "1", "PerPkg": "1", "Unit": "M3UPI" }, { "BriefDescription": "AK Flow Q Occupancy", + "Counter": "0", "EventCode": "0x1e", 
"EventName": "UNC_M3UPI_TxC_AK_FLQ_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "M3UPI" }, { "BriefDescription": "Failed ARB for BL : VN0 NCB Messages", + "Counter": "0", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for BL : VN0 NCB Messages : BL arb but no win; arb request asserted but not won", "UMask": "0x4", @@ -3934,8 +4835,10 @@ }, { "BriefDescription": "Failed ARB for BL : VN0 NCS Messages", + "Counter": "0", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for BL : VN0 NCS Messages : BL arb but no win; arb request asserted but not won", "UMask": "0x8", @@ -3943,8 +4846,10 @@ }, { "BriefDescription": "Failed ARB for BL : VN0 RSP Messages", + "Counter": "0", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for BL : VN0 RSP Messages : BL arb but no win; arb request asserted but not won", "UMask": "0x1", @@ -3952,8 +4857,10 @@ }, { "BriefDescription": "Failed ARB for BL : VN0 WB Messages", + "Counter": "0", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for BL : VN0 WB Messages : BL arb but no win; arb request asserted but not won", "UMask": "0x2", @@ -3961,8 +4868,10 @@ }, { "BriefDescription": "Failed ARB for BL : VN1 NCS Messages", + "Counter": "0", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for BL : VN1 NCS Messages : BL arb but no win; arb request asserted but not won", "UMask": "0x40", @@ -3970,8 +4879,10 @@ }, { "BriefDescription": "Failed ARB for BL : VN1 NCB Messages", + "Counter": "0", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_NCS", +
"Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for BL : VN1 NCB Messages : BL arb but no win; arb request asserted but not won", "UMask": "0x80", @@ -3979,8 +4890,10 @@ }, { "BriefDescription": "Failed ARB for BL : VN1 RSP Messages", + "Counter": "0", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for BL : VN1 RSP Messages : BL arb but no win; arb request asserted but not won", "UMask": "0x10", @@ -3988,8 +4901,10 @@ }, { "BriefDescription": "Failed ARB for BL : VN1 WB Messages", + "Counter": "0", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Failed ARB for BL : VN1 WB Messages : BL arb but no win; arb request asserted but not won", "UMask": "0x20", @@ -3997,8 +4912,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty : VN0 REQ Messages", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Not Empty : VN0 REQ Messages : Number of cycles the BL Egress queue is Not Empty", "UMask": "0x1", @@ -4006,8 +4923,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty : VN0 RSP Messages", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Not Empty : VN0 RSP Messages : Number of cycles the BL Egress queue is Not Empty", "UMask": "0x4", @@ -4015,8 +4934,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty : VN0 SNP Messages", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Not Empty : VN0 SNP Messages : Number of cycles the BL Egress queue is Not Empty", "UMask": "0x2", @@ -4024,8 +4945,10 @@ }, { "BriefDescription": "BL
Flow Q Not Empty : VN0 WB Messages", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Not Empty : VN0 WB Messages : Number of cycles the BL Egress queue is Not Empty", "UMask": "0x8", @@ -4033,8 +4956,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty : VN1 REQ Messages", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Not Empty : VN1 REQ Messages : Number of cycles the BL Egress queue is Not Empty", "UMask": "0x10", @@ -4042,8 +4967,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty : VN1 RSP Messages", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Not Empty : VN1 RSP Messages : Number of cycles the BL Egress queue is Not Empty", "UMask": "0x40", @@ -4051,8 +4978,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty : VN1 SNP Messages", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Not Empty : VN1 SNP Messages : Number of cycles the BL Egress queue is Not Empty", "UMask": "0x20", @@ -4060,8 +4989,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty : VN1 WB Messages", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Not Empty : VN1 WB Messages : Number of cycles the BL Egress queue is Not Empty", "UMask": "0x80", @@ -4069,8 +5000,10 @@ }, { "BriefDescription": "BL Flow Q Inserts : VN0 RSP Messages", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL
Flow Q Inserts : VN0 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.", "UMask": "0x1", @@ -4078,8 +5011,10 @@ }, { "BriefDescription": "BL Flow Q Inserts : VN0 WB Messages", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Inserts : VN0 WB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.", "UMask": "0x2", @@ -4087,8 +5022,10 @@ }, { "BriefDescription": "BL Flow Q Inserts : VN0 NCS Messages", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Inserts : VN0 NCS Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.", "UMask": "0x8", @@ -4096,8 +5033,10 @@ }, { "BriefDescription": "BL Flow Q Inserts : VN0 NCB Messages", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Inserts : VN0 NCB Messages : Counts the number of allocations into the QPI FlowQ.
This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.", "UMask": "0x4", @@ -4105,8 +5044,10 @@ }, { "BriefDescription": "BL Flow Q Inserts : VN1 RSP Messages", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Inserts : VN1 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.", "UMask": "0x10", @@ -4114,8 +5055,10 @@ }, { "BriefDescription": "BL Flow Q Inserts : VN1 WB Messages", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Inserts : VN1 WB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.", "UMask": "0x20", @@ -4123,8 +5066,10 @@ }, { "BriefDescription": "BL Flow Q Inserts : VN1_NCB Messages", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Inserts : VN1_NCB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency.
Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.", "UMask": "0x80", @@ -4132,8 +5077,10 @@ }, { "BriefDescription": "BL Flow Q Inserts : VN1_NCS Messages", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Flow Q Inserts : VN1_NCS Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.", "UMask": "0x40", @@ -4141,120 +5088,150 @@ }, { "BriefDescription": "BL Flow Q Occupancy : VN0 NCB Messages", + "Counter": "0", "EventCode": "0x1d", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN0 NCS Messages", + "Counter": "0", "EventCode": "0x1d", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN0 RSP Messages", + "Counter": "0", "EventCode": "0x1d", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN0 WB Messages", + "Counter": "0", "EventCode": "0x1d", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN1_NCS Messages", + "Counter": "0", "EventCode": "0x1d", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN1_NCB Messages", +
"Counter": "0", "EventCode": "0x1d", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN1 RSP Messages", + "Counter": "0", "EventCode": "0x1d", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN1 WB Messages", + "Counter": "0", "EventCode": "0x1d", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN0 RSP Messages", + "Counter": "0", "EventCode": "0x1f", "EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN0_LOCAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN0 WB Messages", + "Counter": "0", "EventCode": "0x1f", "EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN0_THROUGH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN0 NCB Messages", + "Counter": "0", "EventCode": "0x1f", "EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN0_WRPULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN1 RSP Messages", + "Counter": "0", "EventCode": "0x1f", "EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN1_LOCAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN1 WB Messages", + "Counter": "0", "EventCode": "0x1f", "EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN1_THROUGH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy : VN1_NCS Messages", + "Counter": "0", "EventCode": "0x1f", "EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN1_WRPULL", + "Experimental": "1", 
"PerPkg": "1", "UMask": "0x40", "Unit": "M3UPI" }, { "BriefDescription": "UPI0 AD Credits Empty : VN0 REQ Messages", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 AD Credits Empty : VN0 REQ Messages : No credits available to send to UPIs on the AD Ring", "UMask": "0x2", @@ -4262,8 +5239,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty : VN0 RSP Messages", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 AD Credits Empty : VN0 RSP Messages : No credits available to send to UPIs on the AD Ring", "UMask": "0x8", @@ -4271,8 +5250,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty : VN0 SNP Messages", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 AD Credits Empty : VN0 SNP Messages : No credits available to send to UPIs on the AD Ring", "UMask": "0x4", @@ -4280,8 +5261,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty : VN1 REQ Messages", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 AD Credits Empty : VN1 REQ Messages : No credits available to send to UPIs on the AD Ring", "UMask": "0x10", @@ -4289,8 +5272,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty : VN1 RSP Messages", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 AD Credits Empty : VN1 RSP Messages : No credits available to send to UPIs on the AD Ring", "UMask": "0x40", @@ -4298,8 +5283,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty : VN1 SNP Messages", + "Counter":
"0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 AD Credits Empty : VN1 SNP Messages : No credits available to send to UPIs on the AD Ring", "UMask": "0x20", @@ -4307,8 +5294,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty : VNA", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VNA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 AD Credits Empty : VNA : No credits available to send to UPIs on the AD Ring", "UMask": "0x1", @@ -4316,8 +5305,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty : VN0 RSP Messages", + "Counter": "0", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_NCS_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 BL Credits Empty : VN0 RSP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)", "UMask": "0x4", @@ -4325,8 +5316,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty : VN0 REQ Messages", + "Counter": "0", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 BL Credits Empty : VN0 REQ Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)", "UMask": "0x2", @@ -4334,8 +5327,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty : VN0 SNP Messages", + "Counter": "0", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 BL Credits Empty : VN0 SNP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)", "UMask": "0x8", @@ -4343,8 +5338,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty : VN1 RSP Messages", + "Counter": "0", "EventCode": "0x21", "EventName":
"UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_NCS_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 BL Credits Empty : VN1 RSP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)", "UMask": "0x20", @@ -4352,8 +5349,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty : VN1 REQ Messages", + "Counter": "0", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 BL Credits Empty : VN1 REQ Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)", "UMask": "0x10", @@ -4361,8 +5360,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty : VN1 SNP Messages", + "Counter": "0", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 BL Credits Empty : VN1 SNP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)", "UMask": "0x40", @@ -4370,8 +5371,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty : VNA", + "Counter": "0", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VNA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "UPI0 BL Credits Empty : VNA : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)", "UMask": "0x1", @@ -4379,16 +5382,20 @@ }, { "BriefDescription": "FlowQ Generated Prefetch", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_M3UPI_UPI_PREFETCH_SPAWN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "FlowQ Generated Prefetch : Count cases where FlowQ causes spawn of Prefetch to iMC/SMI3 target", "Unit": "M3UPI" }, { "BriefDescription": "VN0 Credit Used : WB on BL", + "Counter": "0", "EventCode": "0x5b", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Credit Used :
WB on BL : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -4396,8 +5403,10 @@ }, { "BriefDescription": "VN0 Credit Used : NCB on BL", + "Counter": "0", "EventCode": "0x5b", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Credit Used : NCB on BL : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used.
Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.", "UMask": "0x20", @@ -4405,8 +5414,10 @@ }, { "BriefDescription": "VN0 Credit Used : REQ on AD", + "Counter": "0", "EventCode": "0x5b", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Credit Used : REQ on AD : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -4414,8 +5425,10 @@ }, { "BriefDescription": "VN0 Credit Used : RSP on AD", + "Counter": "0", "EventCode": "0x5b", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Credit Used : RSP on AD : Number of times a VN0 credit was used on the DRS message channel.
In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -4423,8 +5436,10 @@ }, { "BriefDescription": "VN0 Credit Used : SNP on AD", + "Counter": "0", "EventCode": "0x5b", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Credit Used : SNP on AD : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.
: Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -4432,8 +5447,10 @@ }, { "BriefDescription": "VN0 Credit Used : RSP on BL", + "Counter": "0", "EventCode": "0x5b", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Credit Used : RSP on BL : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -4441,8 +5458,10 @@ }, { "BriefDescription": "VN0 No Credits : WB on BL", + "Counter": "0", "EventCode": "0x5d", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 No Credits : WB on BL : Number of Cycles there were no VN0 Credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency.
For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -4450,8 +5469,10 @@ }, { "BriefDescription": "VN0 No Credits : NCB on BL", + "Counter": "0", "EventCode": "0x5d", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 No Credits : NCB on BL : Number of Cycles there were no VN0 Credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.", "UMask": "0x20", @@ -4459,8 +5480,10 @@ }, { "BriefDescription": "VN0 No Credits : REQ on AD", + "Counter": "0", "EventCode": "0x5d", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 No Credits : REQ on AD : Number of Cycles there were no VN0 Credits : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -4468,8 +5491,10 @@ }, { "BriefDescription": "VN0 No Credits : RSP on AD", + "Counter": "0", "EventCode": "0x5d", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 No Credits : RSP on AD : Number of Cycles there were no VN0 Credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -4477,8 +5502,10 @@ }, { "BriefDescription": "VN0 No Credits : SNP on AD", + "Counter": "0", "EventCode": "0x5d", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 No Credits : SNP on AD : Number of Cycles there were no VN0 Credits : Snoops (SNP) messages on AD.
SNP is used for outgoing snoops.", "UMask": "0x2", @@ -4486,8 +5513,10 @@ }, { "BriefDescription": "VN0 No Credits : RSP on BL", + "Counter": "0", "EventCode": "0x5d", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 No Credits : RSP on BL : Number of Cycles there were no VN0 Credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -4495,8 +5524,10 @@ }, { "BriefDescription": "VN1 Credit Used : WB on BL", + "Counter": "0", "EventCode": "0x5c", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 Credit Used : WB on BL : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency.
For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -4504,8 +5535,10 @@ }, { "BriefDescription": "VN1 Credit Used : NCB on BL", + "Counter": "0", "EventCode": "0x5c", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 Credit Used : NCB on BL : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.", "UMask": "0x20", @@ -4513,8 +5546,10 @@ }, { "BriefDescription": "VN1 Credit Used : REQ on AD", + "Counter": "0", "EventCode": "0x5c", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 Credit Used : REQ on AD : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance.
The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -4522,8 +5557,10 @@ }, { "BriefDescription": "VN1 Credit Used : RSP on AD", + "Counter": "0", "EventCode": "0x5c", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 Credit Used : RSP on AD : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on AD.
RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -4531,8 +5568,10 @@ }, { "BriefDescription": "VN1 Credit Used : SNP on AD", + "Counter": "0", "EventCode": "0x5c", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 Credit Used : SNP on AD : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -4540,8 +5579,10 @@ }, { "BriefDescription": "VN1 Credit Used : RSP on BL", + "Counter": "0", "EventCode": "0x5c", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 Credit Used : RSP on BL : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock.
Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -4549,8 +5590,10 @@ }, { "BriefDescription": "VN1 No Credits : WB on BL", + "Counter": "0", "EventCode": "0x5e", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 No Credits : WB on BL : Number of Cycles there were no VN1 Credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -4558,8 +5601,10 @@ }, { "BriefDescription": "VN1 No Credits : NCB on BL", + "Counter": "0", "EventCode": "0x5e", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 No Credits : NCB on BL : Number of Cycles there were no VN1 Credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.", "UMask": "0x20", @@ -4567,8 +5612,10 @@ }, { "BriefDescription": "VN1 No Credits : REQ on AD", + "Counter": "0", "EventCode": "0x5e", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 No Credits : REQ on AD : Number of Cycles there were no VN1 Credits : Home (REQ) messages on AD.
REQ is generally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -4576,8 +5623,10 @@ }, { "BriefDescription": "VN1 No Credits : RSP on AD", + "Counter": "0", "EventCode": "0x5e", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 No Credits : RSP on AD : Number of Cycles there were no VN1 Credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -4585,8 +5634,10 @@ }, { "BriefDescription": "VN1 No Credits : SNP on AD", + "Counter": "0", "EventCode": "0x5e", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 No Credits : SNP on AD : Number of Cycles there were no VN1 Credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -4594,8 +5645,10 @@ }, { "BriefDescription": "VN1 No Credits : RSP on BL", + "Counter": "0", "EventCode": "0x5e", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 No Credits : RSP on BL : Number of Cycles there were no VN1 Credits : Response (RSP) messages on BL. 
RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -4603,168 +5656,210 @@ }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN0", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x82", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN1", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa0", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN0", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x81", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN1", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x90", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN0", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x84", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN1", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc0", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN0", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN0", + "Experimental": "1", "PerPkg": "1", 
"UMask": "0x2", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN1", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN0", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN1", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN0", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN1", + "Counter": "0", "EventCode": "0x7e", "EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN0", + "Counter": "0", "EventCode": "0x7d", "EventName": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN1", + "Counter": "0", "EventCode": "0x7d", "EventName": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN0", + "Counter": "0", "EventCode": "0x7d", "EventName": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M3UPI" }, { "BriefDescription": 
"UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN1", + "Counter": "0", "EventCode": "0x7d", "EventName": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN0", + "Counter": "0", "EventCode": "0x7d", "EventName": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN1", + "Counter": "0", "EventCode": "0x7d", "EventName": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN0", + "Counter": "0", "EventCode": "0x7d", "EventName": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN1", + "Counter": "0", "EventCode": "0x7d", "EventName": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M3UPI" }, { "BriefDescription": "UNC_M3UPI_XPT_PFTCH.ARB", + "Counter": "0", "EventCode": "0x61", "EventName": "UNC_M3UPI_XPT_PFTCH.ARB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": xpt prefetch message is making arbitration request", "UMask": "0x4", @@ -4772,8 +5867,10 @@ }, { "BriefDescription": "UNC_M3UPI_XPT_PFTCH.ARRIVED", + "Counter": "0", "EventCode": "0x61", "EventName": "UNC_M3UPI_XPT_PFTCH.ARRIVED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": xpt prefetch message arrived in ingress pipeline", "UMask": "0x1", @@ -4781,8 +5878,10 @@ }, { "BriefDescription": "UNC_M3UPI_XPT_PFTCH.BYPASS", + "Counter": "0", "EventCode": "0x61", "EventName": "UNC_M3UPI_XPT_PFTCH.BYPASS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": xpt prefetch message took bypass path", "UMask": "0x2", @@ -4790,8 +5889,10 @@ }, { "BriefDescription": 
"UNC_M3UPI_XPT_PFTCH.FLITTED", + "Counter": "0", "EventCode": "0x61", "EventName": "UNC_M3UPI_XPT_PFTCH.FLITTED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": xpt prefetch message was slotted into flit (non bypass)", "UMask": "0x10", @@ -4799,8 +5900,10 @@ }, { "BriefDescription": "UNC_M3UPI_XPT_PFTCH.LOST_ARB", + "Counter": "0", "EventCode": "0x61", "EventName": "UNC_M3UPI_XPT_PFTCH.LOST_ARB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": xpt prefetch message lost arbitration", "UMask": "0x8", @@ -4808,8 +5911,10 @@ }, { "BriefDescription": "UNC_M3UPI_XPT_PFTCH.LOST_OLD", + "Counter": "0", "EventCode": "0x61", "EventName": "UNC_M3UPI_XPT_PFTCH.LOST_OLD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": xpt prefetch message was dropped because it became too old", "UMask": "0x20", @@ -4817,8 +5922,10 @@ }, { "BriefDescription": "UNC_M3UPI_XPT_PFTCH.LOST_QFULL", + "Counter": "0", "EventCode": "0x61", "EventName": "UNC_M3UPI_XPT_PFTCH.LOST_QFULL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": xpt prefetch message was dropped because it was overwritten by new message while prefetch queue was full", "UMask": "0x40", @@ -4826,8 +5933,10 @@ }, { "BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (AD Bounceable)", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_MDF_CRS_TxR_INSERTS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD Bounceable : Number of allocations into the CRS Egress", "UMask": "0x1", @@ -4835,8 +5944,10 @@ }, { "BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (AD credited)", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_MDF_CRS_TxR_INSERTS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD credited : Number of allocations into the CRS Egress", "UMask": "0x2", @@ -4844,8 +5955,10 @@ }, { 
"BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (AK)", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_MDF_CRS_TxR_INSERTS.AK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AK : Number of allocations into the CRS Egress", "UMask": "0x10", @@ -4853,8 +5966,10 @@ }, { "BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (AKC)", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_MDF_CRS_TxR_INSERTS.AKC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AKC : Number of allocations into the CRS Egress", "UMask": "0x40", @@ -4862,8 +5977,10 @@ }, { "BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (BL Bounceable)", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_MDF_CRS_TxR_INSERTS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL Bounceable : Number of allocations into the CRS Egress", "UMask": "0x4", @@ -4871,8 +5988,10 @@ }, { "BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (BL credited)", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_MDF_CRS_TxR_INSERTS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL credited : Number of allocations into the CRS Egress", "UMask": "0x8", @@ -4880,8 +5999,10 @@ }, { "BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (IV)", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_MDF_CRS_TxR_INSERTS.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IV : Number of allocations into the CRS Egress", "UMask": "0x20", @@ -4889,8 +6010,10 @@ }, { "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO Ingress (V-EMIB) (AD)", + 
"Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_MDF_CRS_TxR_V_BOUNCES.AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD : Number of cycles incoming messages from the vertical ring that are bounced at the SBO", "UMask": "0x1", @@ -4898,8 +6021,10 @@ }, { "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO Ingress (V-EMIB) (AK)", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_MDF_CRS_TxR_V_BOUNCES.AK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AK : Number of cycles incoming messages from the vertical ring that are bounced at the SBO", "UMask": "0x4", @@ -4907,8 +6032,10 @@ }, { "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO Ingress (V-EMIB) (AKC)", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_MDF_CRS_TxR_V_BOUNCES.AKC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AKC : Number of cycles incoming messages from the vertical ring that are bounced at the SBO", "UMask": "0x10", @@ -4916,8 +6043,10 @@ }, { "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO Ingress (V-EMIB) (BL)", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_MDF_CRS_TxR_V_BOUNCES.BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL : Number of cycles incoming messages from the vertical ring that are bounced at the SBO", "UMask": "0x2", @@ -4925,8 +6054,10 @@ }, { "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO Ingress (V-EMIB) (IV)", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_MDF_CRS_TxR_V_BOUNCES.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IV : Number of cycles incoming messages from the vertical ring that are bounced at the SBO", "UMask": "0x8", @@ -4934,8 +6065,10 @@ }, { "BriefDescription": 
"Counts the number of cycles when the distress signals are asserted based on SBO Ingress threshold", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_MDF_FAST_ASSERTED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD bnc : Counts the number of cycles when the distress signals are asserted based on SBO Ingress threshold", "UMask": "0x1", @@ -4943,8 +6076,10 @@ }, { "BriefDescription": "Counts the number of cycles when the distress signals are asserted based on SBO Ingress threshold", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_MDF_FAST_ASSERTED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL bnc : Counts the number of cycles when the distress signals are asserted based on SBO Ingress threshold", "UMask": "0x2", @@ -4952,6 +6087,7 @@ }, { "BriefDescription": "UPI Clockticks", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_UPI_CLOCKTICKS", "PerPkg": "1", @@ -4960,8 +6096,10 @@ }, { "BriefDescription": "Direct packet attempts : D2C", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2C", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Direct packet attempts : D2C : Counts the number of DRS packets that we attempted to do direct2core/direct2UPI on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.", "UMask": "0x1", @@ -4969,8 +6107,10 @@ }, { "BriefDescription": "Direct packet attempts : D2K", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2K", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Direct packet attempts : D2K : Counts the number of DRS packets that we attempted to do direct2core/direct2UPI on. 
There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.", "UMask": "0x2", @@ -4978,70 +6118,87 @@ }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ1", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ2", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ1", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ2", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ3", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "UPI" }, { "BriefDescription": 
"UNC_UPI_FLOWQ_NO_VNA_CRD.BL_VNA_EQ0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.BL_VNA_EQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "Cycles in L1", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_UPI_L1_POWER_CYCLES", "PerPkg": "1", @@ -5050,246 +6207,308 @@ }, { "BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.BGF_CRD", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_UPI_M3_BYP_BLOCKED.BGF_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AD_VNA_LE2", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AD_VNA_LE2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AK_VNA_LE3", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AK_VNA_LE3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_BL_VNA_EQ0", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_BL_VNA_EQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.GV_BLOCK", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_UPI_M3_BYP_BLOCKED.GV_BLOCK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_CRD_RETURN_BLOCKED", + "Counter": "0,1,2,3", "EventCode": "0x16", "EventName": "UNC_UPI_M3_CRD_RETURN_BLOCKED", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.BGF_CRD", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.BGF_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UPI" }, { "BriefDescription": 
"UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_BTW_2_THRESH", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_BTW_2_THRESH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_LE2", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_LE2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AK_VNA_LE3", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AK_VNA_LE3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_BTW_0_THRESH", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_BTW_0_THRESH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_EQ0", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_EQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.GV_BLOCK", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.GV_BLOCK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "UPI" }, { "BriefDescription": "Cycles where phy is not in L0, L0c, L0p, L1", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_UPI_PHY_INIT_CYCLES", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "L1 Req Nack", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_UPI_POWER_L1_NACK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "L1 Req Nack : Counts the number of times a link sends/receives a LinkReqNAck. 
When the UPI links would like to change power state, the Tx side initiates a request to the Rx side requesting to change states. These requests can either be accepted or denied. If the Rx side replies with an Ack, the power mode will change. If it replies with NAck, no change will take place. This can be filtered based on Rx and Tx. An Rx LinkReqNAck refers to receiving an NAck (meaning this agent's Tx originally requested the power change). A Tx LinkReqNAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent accepted it).", "Unit": "UPI" }, { "BriefDescription": "L1 Req (same as L1 Ack).", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_UPI_POWER_L1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "L1 Req (same as L1 Ack). : Counts the number of times a link sends/receives a LinkReqAck. When the UPI links would like to change power state, the Tx side initiates a request to the Rx side requesting to change states. These requests can either be accepted or denied. If the Rx side replies with an Ack, the power mode will change. If it replies with NAck, no change will take place. This can be filtered based on Rx and Tx. An Rx LinkReqAck refers to receiving an Ack (meaning this agent's Tx originally requested the power change). 
A Tx LinkReqAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent accepted it).", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.ACK", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.ACK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VN0", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VN1", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VNA", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VNA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "Cycles in L0p", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_UPI_RxL0P_POWER_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles in L0p : Number of UPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the UPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize UPI for snoops and their responses. Use edge detect to count the number of instances when the UPI link entered L0p. 
Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.", "Unit": "UPI" }, { "BriefDescription": "Cycles in L0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_UPI_RxL0_POWER_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles in L0 : Number of UPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_ANY_FLITS.DATA", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_UPI_RxL_ANY_FLITS.DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_ANY_FLITS.LLCRD", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_UPI_RxL_ANY_FLITS.LLCRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_ANY_FLITS.LLCTRL", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_UPI_RxL_ANY_FLITS.LLCTRL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_ANY_FLITS.NULL", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_UPI_RxL_ANY_FLITS.NULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_ANY_FLITS.PROTHDR", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_UPI_RxL_ANY_FLITS.PROTHDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_ANY_FLITS.SLOT0", + "Counter": "0,1,2,3", "EventCode": "0x4B", 
"EventName": "UNC_UPI_RxL_ANY_FLITS.SLOT0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_ANY_FLITS.SLOT1", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_UPI_RxL_ANY_FLITS.SLOT1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_ANY_FLITS.SLOT2", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_UPI_RxL_ANY_FLITS.SLOT2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.", "UMask": "0xe", @@ -5297,8 +6516,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass, Match Opcode", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Receive path of a UPI port. 
Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.", "UMask": "0x10e", @@ -5306,8 +6527,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.", "UMask": "0xf", @@ -5315,8 +6538,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard, Match Opcode", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Receive path of a UPI port. 
Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.", "UMask": "0x10f", @@ -5324,8 +6549,10 @@ }, { "BriefDescription": "RxQ Flit Buffer Bypassed : Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x31", "EventName": "UNC_UPI_RxL_BYPASSED.SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RxQ Flit Buffer Bypassed : Slot 0 : Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.", "UMask": "0x1", @@ -5333,8 +6560,10 @@ }, { "BriefDescription": "RxQ Flit Buffer Bypassed : Slot 1", + "Counter": "0,1,2,3", "EventCode": "0x31", "EventName": "UNC_UPI_RxL_BYPASSED.SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RxQ Flit Buffer Bypassed : Slot 1 : Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. 
If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.", "UMask": "0x2", @@ -5342,8 +6571,10 @@ }, { "BriefDescription": "RxQ Flit Buffer Bypassed : Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x31", "EventName": "UNC_UPI_RxL_BYPASSED.SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RxQ Flit Buffer Bypassed : Slot 2 : Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.", "UMask": "0x4", @@ -5351,40 +6582,50 @@ }, { "BriefDescription": "CRC Errors Detected", + "Counter": "0,1,2,3", "EventCode": "0x0b", "EventName": "UNC_UPI_RxL_CRC_ERRORS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CRC Errors Detected : Number of CRC errors detected in the UPI Agent. Each UPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the UPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it).", "Unit": "UPI" }, { "BriefDescription": "LLR Requests Sent", + "Counter": "0,1,2,3", "EventCode": "0x08", "EventName": "UNC_UPI_RxL_CRC_LLR_REQ_TRANSMIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "LLR Requests Sent : Number of LLR Requests that were transmitted. This should generally be <= the number of CRC errors detected. 
If multiple errors are detected before the Rx side receives a LLC_REQ_ACK from the Tx side, there is no need to send more LLR_REQ_NACKs.", "Unit": "UPI" }, { "BriefDescription": "VN0 Credit Consumed", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 Credit Consumed : Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.", "Unit": "UPI" }, { "BriefDescription": "VN1 Credit Consumed", + "Counter": "0,1,2,3", "EventCode": "0x3a", "EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VN1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 Credit Consumed : Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.", "Unit": "UPI" }, { "BriefDescription": "VNA Credit Consumed", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VNA", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -5393,6 +6634,7 @@ }, { "BriefDescription": "Valid Flits Received : All Data", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.ALL_DATA", "PerPkg": "1", @@ -5402,6 +6644,7 @@ }, { "BriefDescription": "Null FLITs received from any slot", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.ALL_NULL", "PerPkg": "1", @@ -5410,8 +6653,10 @@ }, { "BriefDescription": "Valid Flits Received : Data", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.DATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Received : Data : Shows legal flit time (hides impact of L0p and L0c).
: Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting.", "UMask": "0x8", @@ -5419,8 +6664,10 @@ }, { "BriefDescription": "Valid Flits Received : Idle", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.IDLE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Received : Idle : Shows legal flit time (hides impact of L0p and L0c).", "UMask": "0x47", @@ -5428,8 +6675,10 @@ }, { "BriefDescription": "Valid Flits Received : LLCRD Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.LLCRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Received : LLCRD Not Empty : Shows legal flit time (hides impact of L0p and L0c). : Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2", "UMask": "0x10", @@ -5437,8 +6686,10 @@ }, { "BriefDescription": "Valid Flits Received : LLCTRL", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.LLCTRL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Received : LLCTRL : Shows legal flit time (hides impact of L0p and L0c). : Equivalent to an idle packet.  Enables counting of slot 0 LLCTRL messages.", "UMask": "0x40", @@ -5446,6 +6697,7 @@ }, { "BriefDescription": "Valid Flits Received : All Non Data", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.NON_DATA", "PerPkg": "1", @@ -5455,8 +6707,10 @@ }, { "BriefDescription": "Valid Flits Received : Slot NULL or LLCRD Empty", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.NULL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Received : Slot NULL or LLCRD Empty : Shows legal flit time (hides impact of L0p and L0c). : LLCRD with all zeros is treated as NULL.
Slot 1 is not treated as NULL if slot 0 is a dual slot. This can apply to slot 0,1, or 2.", "UMask": "0x20", @@ -5464,8 +6718,10 @@ }, { "BriefDescription": "Valid Flits Received : Protocol Header", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.PROTHDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Received : Protocol Header : Shows legal flit time (hides impact of L0p and L0c). : Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)", "UMask": "0x80", @@ -5473,8 +6729,10 @@ }, { "BriefDescription": "Valid Flits Received : Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Received : Slot 0 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 0 - Other mask bits determine types of headers to count.", "UMask": "0x1", @@ -5482,8 +6740,10 @@ }, { "BriefDescription": "Valid Flits Received : Slot 1", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Received : Slot 1 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 1 - Other mask bits determine types of headers to count.", "UMask": "0x2", @@ -5491,8 +6751,10 @@ }, { "BriefDescription": "Valid Flits Received : Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_UPI_RxL_FLITS.SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Received : Slot 2 : Shows legal flit time (hides impact of L0p and L0c).
: Count Slot 2 - Other mask bits determine types of headers to count.", "UMask": "0x4", @@ -5500,8 +6762,10 @@ }, { "BriefDescription": "RxQ Flit Buffer Allocations : Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x30", "EventName": "UNC_UPI_RxL_INSERTS.SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RxQ Flit Buffer Allocations : Slot 0 : Number of allocations into the UPI Rx Flit Buffer.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.", "UMask": "0x1", @@ -5509,8 +6773,10 @@ }, { "BriefDescription": "RxQ Flit Buffer Allocations : Slot 1", + "Counter": "0,1,2,3", "EventCode": "0x30", "EventName": "UNC_UPI_RxL_INSERTS.SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RxQ Flit Buffer Allocations : Slot 1 : Number of allocations into the UPI Rx Flit Buffer.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.", "UMask": "0x2", @@ -5518,8 +6784,10 @@ }, { "BriefDescription": "RxQ Flit Buffer Allocations : Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x30", "EventName": "UNC_UPI_RxL_INSERTS.SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RxQ Flit Buffer Allocations : Slot 2 : Number of allocations into the UPI Rx Flit Buffer.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.
If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.", "UMask": "0x4", @@ -5527,8 +6795,10 @@ }, { "BriefDescription": "RxQ Occupancy - All Packets : Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RxQ Occupancy - All Packets : Slot 0 : Accumulates the number of elements in the UPI RxQ in each cycle.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.", "UMask": "0x1", @@ -5536,8 +6806,10 @@ }, { "BriefDescription": "RxQ Occupancy - All Packets : Slot 1", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RxQ Occupancy - All Packets : Slot 1 : Accumulates the number of elements in the UPI RxQ in each cycle.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.
This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.", "UMask": "0x2", @@ -5545,8 +6817,10 @@ }, { "BriefDescription": "RxQ Occupancy - All Packets : Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RxQ Occupancy - All Packets : Slot 2 : Accumulates the number of elements in the UPI RxQ in each cycle.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.", "UMask": "0x4", @@ -5554,214 +6828,268 @@ }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0", + "Counter": "0,1,2,3", "EventCode":
"0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTL", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.DFX", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.DFX", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RETRY", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RETRY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASS", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CRED", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CRED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.SPARE", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.SPARE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.TXQ", + "Counter": "0,1,2,3", "EventCode": "0x2a", 
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.TXQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "Cycles in L0p", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_UPI_TxL0P_POWER_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles in L0p : Number of UPI qfclk cycles spent in L0p power mode.  L0p is a mode where we disable 1/2 of the UPI lanes, decreasing our bandwidth in order to save power.  It increases snoop and data transfer latencies and decreases overall bandwidth.  This mode can be very useful in NUMA optimized workloads that largely only utilize UPI for snoops and their responses.  Use edge detect to count the number of instances when the UPI link entered L0p.  Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "Cycles in L0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_UPI_TxL0_POWER_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles in L0 : Number of UPI qfclk cycles spent in L0 power mode in the Link Layer.  L0 is the default mode which provides the highest performance with the most power.  Use edge detect to count the number of instances that the link entered L0.  Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.
The phy layer sometimes leaves L0 for training, which will not be captured by this event.", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL_ANY_FLITS.DATA", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_UPI_TxL_ANY_FLITS.DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL_ANY_FLITS.LLCRD", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_UPI_TxL_ANY_FLITS.LLCRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL_ANY_FLITS.LLCTRL", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_UPI_TxL_ANY_FLITS.LLCTRL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL_ANY_FLITS.NULL", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_UPI_TxL_ANY_FLITS.NULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL_ANY_FLITS.PROTHDR", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_UPI_TxL_ANY_FLITS.PROTHDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL_ANY_FLITS.SLOT0", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_UPI_TxL_ANY_FLITS.SLOT0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL_ANY_FLITS.SLOT1", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_UPI_TxL_ANY_FLITS.SLOT1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL_ANY_FLITS.SLOT2", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_UPI_TxL_ANY_FLITS.SLOT2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass", + "Counter": "0,1,2,3", "EventCode": "0x04", "EventName":
"UNC_UPI_TxL_BASIC_HDR_MATCH.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.", "UMask": "0xe", @@ -5769,8 +7097,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcode", + "Counter": "0,1,2,3", "EventCode": "0x04", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.", "UMask": "0x10e", @@ -5778,8 +7108,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard", + "Counter": "0,1,2,3", "EventCode": "0x04", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard : Matches on Transmit path of a UPI port.
Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.", "UMask": "0xf", @@ -5787,8 +7119,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcode", + "Counter": "0,1,2,3", "EventCode": "0x04", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.", "UMask": "0x10f", @@ -5796,14 +7130,17 @@ }, { "BriefDescription": "Tx Flit Buffer Bypassed", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_UPI_TxL_BYPASSED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Tx Flit Buffer Bypassed : Counts the number of times that an incoming flit was able to bypass the Tx flit buffer and pass directly out the UPI Link. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link.
However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.", "Unit": "UPI" }, { "BriefDescription": "Valid Flits Sent : All Data", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.ALL_DATA", "PerPkg": "1", @@ -5813,8 +7150,10 @@ }, { "BriefDescription": "Valid Flits Sent : All LLCRD Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.ALL_LLCRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : All LLCRD Not Empty : Shows legal flit time (hides impact of L0p and L0c).", "UMask": "0x17", @@ -5822,8 +7161,10 @@ }, { "BriefDescription": "Valid Flits Sent : All LLCTRL", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.ALL_LLCTRL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : All LLCTRL : Shows legal flit time (hides impact of L0p and L0c).", "UMask": "0x47", @@ -5831,6 +7172,7 @@ }, { "BriefDescription": "All Null Flits", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.ALL_NULL", "PerPkg": "1", @@ -5839,8 +7181,10 @@ }, { "BriefDescription": "Valid Flits Sent : All Protocol Header", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.ALL_PROTHDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : All Protocol Header : Shows legal flit time (hides impact of L0p and L0c).", "UMask": "0x87", @@ -5848,8 +7192,10 @@ }, { "BriefDescription": "Valid Flits Sent : Data", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.DATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : Data : Shows legal flit time (hides impact of L0p and L0c).
: Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting.", "UMask": "0x8", @@ -5857,8 +7203,10 @@ }, { "BriefDescription": "Valid Flits Sent : Idle", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.IDLE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : Idle : Shows legal flit time (hides impact of L0p and L0c).", "UMask": "0x47", @@ -5866,8 +7214,10 @@ }, { "BriefDescription": "Valid Flits Sent : LLCRD Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.LLCRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : LLCRD Not Empty : Shows legal flit time (hides impact of L0p and L0c). : Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2", "UMask": "0x10", @@ -5875,8 +7225,10 @@ }, { "BriefDescription": "Valid Flits Sent : LLCTRL", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.LLCTRL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : LLCTRL : Shows legal flit time (hides impact of L0p and L0c). : Equivalent to an idle packet.  Enables counting of slot 0 LLCTRL messages.", "UMask": "0x40", @@ -5884,6 +7236,7 @@ }, { "BriefDescription": "Valid Flits Sent : All Non Data", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.NON_DATA", "PerPkg": "1", @@ -5893,8 +7246,10 @@ }, { "BriefDescription": "Valid Flits Sent : Slot NULL or LLCRD Empty", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.NULL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : Slot NULL or LLCRD Empty : Shows legal flit time (hides impact of L0p and L0c). : LLCRD with all zeros is treated as NULL.
Slot 1 is not treated as NULL if slot 0 is a dual slot. This can apply to slot 0,1, or 2.", "UMask": "0x20", @@ -5902,8 +7257,10 @@ }, { "BriefDescription": "Valid Flits Sent : Protocol Header", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.PROTHDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : Protocol Header : Shows legal flit time (hides impact of L0p and L0c). : Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)", "UMask": "0x80", @@ -5911,8 +7268,10 @@ }, { "BriefDescription": "Valid Flits Sent : Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : Slot 0 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 0 - Other mask bits determine types of headers to count.", "UMask": "0x1", @@ -5920,8 +7279,10 @@ }, { "BriefDescription": "Valid Flits Sent : Slot 1", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : Slot 1 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 1 - Other mask bits determine types of headers to count.", "UMask": "0x2", @@ -5929,8 +7290,10 @@ }, { "BriefDescription": "Valid Flits Sent : Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_UPI_TxL_FLITS.SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Valid Flits Sent : Slot 2 : Shows legal flit time (hides impact of L0p and L0c).
: Count Slot 2 - Other mask bits determine types of headers to count.", "UMask": "0x4", @@ -5938,47 +7301,59 @@ }, { "BriefDescription": "Tx Flit Buffer Allocations", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_UPI_TxL_INSERTS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Tx Flit Buffer Allocations : Number of allocations into the UPI Tx Flit Buffer.  Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link.  However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.", "Unit": "UPI" }, { "BriefDescription": "Tx Flit Buffer Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_UPI_TxL_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Tx Flit Buffer Occupancy : Accumulates the number of flits in the TxQ.  Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link.  However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.
This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQ.", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01", + "Counter": "0,1,2,3", "EventCode": "0x45", "EventName": "UNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "VNA Credits Pending Return - Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_UPI_VNA_CREDIT_RETURN_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VNA Credits Pending Return - Occupancy : Number of VNA credits in the Rx side that are waiting to be returned back across the link.", "Unit": "UPI" }, { "BriefDescription": "Message Received : Doorbell", + "Counter": "0,1", "EventCode": "0x42", "EventName": "UNC_U_EVENT_MSG.DOORBELL_RCVD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UBOX" }, { "BriefDescription": "Message Received : Interrupt", + "Counter": "0,1", "EventCode": "0x42", "EventName": "UNC_U_EVENT_MSG.INT_PRIO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Message Received : Interrupt : Interrupts", "UMask": "0x10", @@ -5986,8 +7361,10 @@ }, { "BriefDescription": "Message Received : IPI", + "Counter": "0,1", "EventCode": "0x42", "EventName": "UNC_U_EVENT_MSG.IPI_RCVD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Message Received : IPI : Inter Processor Interrupts", "UMask": "0x4", @@ -5995,8 +7372,10 @@ }, { "BriefDescription": "Message Received : MSI", + "Counter": "0,1", "EventCode": "0x42", "EventName": "UNC_U_EVENT_MSG.MSI_RCVD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Message Received : MSI : Message Signaled Interrupts - interrupts sent by devices (including PCIe via IOxAPIC) (Socket Mode only)", "UMask": "0x2", @@ -6004,8 +7383,10 @@ }, { "BriefDescription": "Message Received : VLW", + "Counter": "0,1", "EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.VLW_RCVD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Message Received : VLW : Virtual Logical Wire (legacy) messages were received from Uncore.", "UMask": "0x1", @@ -6013,152 +7394,190 @@ }, { "BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCB", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCS", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCB", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCS", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCB", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCS", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCB", + "Counter": "0", "EventCode": "0x4d", "EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCS", + "Counter": "0", "EventCode": "0x4d", "EventName":
"UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC2.RxC_CYCLES_EMPTY_BL", + "Counter": "0", "EventCode": "0x4e", "EventName": "UNC_U_M2U_MISC2.RxC_CYCLES_EMPTY_BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC2.RxC_CYCLES_FULL_BL", + "Counter": "0", "EventCode": "0x4e", "EventName": "UNC_U_M2U_MISC2.RxC_CYCLES_FULL_BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCB", + "Counter": "0", "EventCode": "0x4e", "EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCS", + "Counter": "0", "EventCode": "0x4e", "EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AK", + "Counter": "0", "EventCode": "0x4e", "EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AKC", + "Counter": "0", "EventCode": "0x4e", "EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AKC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_BL", + "Counter": "0", "EventCode": "0x4e", "EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_FULL_BL", + "Counter": "0", "EventCode": "0x4e", "EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_FULL_BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "UBOX" }, { "BriefDescription": 
"UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AK", + "Counter": "0", "EventCode": "0x4f", "EventName": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AKC", + "Counter": "0", "EventCode": "0x4f", "EventName": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AKC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UBOX" }, { "BriefDescription": "Cycles PHOLD Assert to Ack : Assert to ACK", + "Counter": "0,1", "EventCode": "0x45", "EventName": "UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles PHOLD Assert to Ack : Assert to ACK := PHOLD cycles.", "UMask": "0x1", @@ -6166,32 +7585,40 @@ }, { "BriefDescription": "UNC_U_RACU_DRNG.PFTCH_BUF_EMPTY", + "Counter": "0", "EventCode": "0x4c", "EventName": "UNC_U_RACU_DRNG.PFTCH_BUF_EMPTY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_RACU_DRNG.RDRAND", + "Counter": "0", "EventCode": "0x4c", "EventName": "UNC_U_RACU_DRNG.RDRAND", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_RACU_DRNG.RDSEED", + "Counter": "0", "EventCode": "0x4c", "EventName": "UNC_U_RACU_DRNG.RDSEED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UBOX" }, { "BriefDescription": "RACU Request", + "Counter": "0,1", "EventCode": "0x46", "EventName": "UNC_U_RACU_REQUESTS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "RACU Request : Number outstanding register r= equests within message channel tracker", "Unit": "UBOX" diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-io.json b= /tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-io.json index 03596db87710..91013ced74aa 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-io.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-io.json @@ -1,134 +1,167 @@ [ { "BriefDescription": "Free 
running counter that increments for every 32 bytes of data sent from the IO agent to the SOC", + "Counter": "1", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_IN.PART0_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC", + "Counter": "2", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_IN.PART1_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x21", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC", + "Counter": "3", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_IN.PART2_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x22", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC", + "Counter": "4", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_IN.PART3_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x23", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC", + "Counter": "5", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_IN.PART4_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x24", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC", + "Counter": "6", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_IN.PART5_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x25", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC", + "Counter": "7", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_IN.PART6_FREERUN", + 
"Experimental": "1", "PerPkg": "1", "UMask": "0x26", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for ever= y 32 bytes of data sent from the IO agent to the SOC", + "Counter": "8", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_IN.PART7_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x27", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for ever= y 32 bytes of data sent from the IO agent to the SOC", + "Counter": "9", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_OUT.PART0_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x30", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for ever= y 32 bytes of data sent from the IO agent to the SOC", + "Counter": "10", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_OUT.PART1_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x31", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for ever= y 32 bytes of data sent from the IO agent to the SOC", + "Counter": "11", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_OUT.PART2_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x32", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for ever= y 32 bytes of data sent from the IO agent to the SOC", + "Counter": "12", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_OUT.PART3_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x33", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for ever= y 32 bytes of data sent from the IO agent to the SOC", + "Counter": "13", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_OUT.PART4_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x34", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for ever= y 32 bytes of data sent from the IO 
agent to the SOC", + "Counter": "14", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_OUT.PART5_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x35", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for ever= y 32 bytes of data sent from the IO agent to the SOC", + "Counter": "15", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_OUT.PART6_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x36", "Unit": "iio_free_running" }, { "BriefDescription": "Free running counter that increments for ever= y 32 bytes of data sent from the IO agent to the SOC", + "Counter": "16", "EventCode": "0xff", "EventName": "UNC_IIO_BANDWIDTH_OUT.PART7_FREERUN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x37", "Unit": "iio_free_running" }, { "BriefDescription": "IIO Clockticks", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_IIO_CLOCKTICKS", "PerPkg": "1", @@ -138,6 +171,7 @@ }, { "BriefDescription": "Free running counter that increments for IIO = clocktick", + "Counter": "0", "EventCode": "0xff", "EventName": "UNC_IIO_CLOCKTICKS_FREERUN", "PerPkg": "1", @@ -146,8 +180,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 0-7", + "Counter": "0,1,2,3", "EventCode": "0xc2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL_PARTS", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0xff", @@ -157,8 +193,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 0", + "Counter": "0,1,2,3", "EventCode": "0xc2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0001", @@ -168,8 +206,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 1", + "Counter": "0,1,2,3", "EventCode": "0xc2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": 
"0x0002", @@ -179,8 +219,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 2", + "Counter": "0,1,2,3", "EventCode": "0xc2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0004", @@ -190,8 +232,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 3", + "Counter": "0,1,2,3", "EventCode": "0xc2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0008", @@ -201,8 +245,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 4", + "Counter": "0,1,2,3", "EventCode": "0xc2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART4", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0010", @@ -212,8 +258,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 5", + "Counter": "0,1,2,3", "EventCode": "0xc2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART5", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0020", @@ -223,8 +271,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 6", + "Counter": "0,1,2,3", "EventCode": "0xc2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART6", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0040", @@ -234,8 +284,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 7", + "Counter": "0,1,2,3", "EventCode": "0xc2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART7", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0080", @@ -245,8 +297,10 @@ }, { "BriefDescription": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS", + "Counter": "2,3", "EventCode": "0xd5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "UMask": 
"0xff", @@ -254,8 +308,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Occupancy : Part 0", + "Counter": "2,3", "EventCode": "0xd5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0000", @@ -265,8 +321,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Occupancy : Part 1", + "Counter": "2,3", "EventCode": "0xd5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0000", @@ -276,8 +334,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Occupancy : Part 2", + "Counter": "2,3", "EventCode": "0xd5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0000", @@ -287,8 +347,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Occupancy : Part 3", + "Counter": "2,3", "EventCode": "0xd5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0000", @@ -298,8 +360,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Occupancy : Part 4", + "Counter": "2,3", "EventCode": "0xd5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART4", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0000", @@ -309,8 +373,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Occupancy : Part 5", + "Counter": "2,3", "EventCode": "0xd5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART5", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0000", @@ -320,8 +386,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Occupancy : Part 6", + "Counter": "2,3", "EventCode": "0xd5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART6", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0000", @@ -331,8 +399,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Occupancy : Part 7", + "Counter": "2,3", "EventCode": "0xd5", 
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART7", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0000", @@ -342,8 +412,10 @@ }, { "BriefDescription": "Read request for 4 bytes made by the CPU to I= IO Part0-7", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.ALL_PARTS", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00ff", @@ -352,6 +424,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by the CPU to I= IO Part0", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0", "FCMask": "0x07", @@ -363,6 +436,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by the CPU to I= IO Part1", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1", "FCMask": "0x07", @@ -374,6 +448,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by the CPU to I= IO Part2", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2", "FCMask": "0x07", @@ -385,6 +460,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by the CPU to I= IO Part3", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3", "FCMask": "0x07", @@ -396,6 +472,7 @@ }, { "BriefDescription": "Data requested by the CPU : Core reading from= Cards MMIO space", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART4", "FCMask": "0x07", @@ -407,6 +484,7 @@ }, { "BriefDescription": "Data requested by the CPU : Core reading from= Cards MMIO space", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART5", "FCMask": "0x07", @@ -418,6 +496,7 @@ }, { "BriefDescription": "Data requested by the CPU : Core reading from= Cards MMIO space", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART6", "FCMask": "0x07", @@ -429,6 +508,7 @@ }, { 
"BriefDescription": "Data requested by the CPU : Core reading from= Cards MMIO space", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART7", "FCMask": "0x07", @@ -440,8 +520,10 @@ }, { "BriefDescription": "Write request of 4 bytes made to IIO Part0-7 = by the CPU", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.ALL_PARTS", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00ff", @@ -450,8 +532,10 @@ }, { "BriefDescription": "Data requested by the CPU : Core writing to C= ards MMIO space", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.IOMMU0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0100", @@ -461,8 +545,10 @@ }, { "BriefDescription": "Data requested by the CPU : Core writing to C= ards MMIO space", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.IOMMU1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0200", @@ -472,6 +558,7 @@ }, { "BriefDescription": "Write request of 4 bytes made to IIO Part0 by= the CPU", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0", "FCMask": "0x07", @@ -483,6 +570,7 @@ }, { "BriefDescription": "Write request of 4 bytes made to IIO Part1 by= the CPU", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1", "FCMask": "0x07", @@ -494,6 +582,7 @@ }, { "BriefDescription": "Write request of 4 bytes made to IIO Part2 by= the CPU", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2", "FCMask": "0x07", @@ -505,6 +594,7 @@ }, { "BriefDescription": "Write request of 4 bytes made to IIO Part3 by= the CPU", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3", "FCMask": "0x07", @@ -516,6 +606,7 @@ }, { "BriefDescription": "Data requested by 
the CPU : Core writing to Cards MMIO space", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART4", "FCMask": "0x07", @@ -527,6 +618,7 @@ }, { "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART5", "FCMask": "0x07", @@ -538,6 +630,7 @@ }, { "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART6", "FCMask": "0x07", @@ -549,6 +642,7 @@ }, { "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART7", "FCMask": "0x07", @@ -560,8 +654,10 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part0", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0001", @@ -571,8 +667,10 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part0", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0002", @@ -582,8 +680,10 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part0", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0004", @@ -593,8 +693,10 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part0", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART3", + "Experimental": 
"1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0008", @@ -604,8 +706,10 @@ }, { "BriefDescription": "Data requested by the CPU : Another card (dif= ferent IIO stack) reading from this card.", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART4", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0010", @@ -615,8 +719,10 @@ }, { "BriefDescription": "Data requested by the CPU : Another card (dif= ferent IIO stack) reading from this card.", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART5", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0020", @@ -626,8 +732,10 @@ }, { "BriefDescription": "Data requested by the CPU : Another card (dif= ferent IIO stack) reading from this card.", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART6", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0040", @@ -637,8 +745,10 @@ }, { "BriefDescription": "Data requested by the CPU : Another card (dif= ferent IIO stack) reading from this card.", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART7", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0080", @@ -648,8 +758,10 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made to= IIO Part0 by a different IIO unit", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0001", @@ -659,8 +771,10 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made to= IIO Part0 by a different IIO unit", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0002", @@ -670,8 +784,10 @@ }, { "BriefDescription": "Peer to peer write request 
of 4 bytes made to IIO Part0 by a different IIO unit", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0004", @@ -681,8 +797,10 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made to IIO Part0 by a different IIO unit", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0008", @@ -692,8 +810,10 @@ }, { "BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART4", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0010", @@ -703,8 +823,10 @@ }, { "BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART5", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0020", @@ -714,8 +836,10 @@ }, { "BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART6", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0040", @@ -725,8 +849,10 @@ }, { "BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.", + "Counter": "2,3", "EventCode": "0xc0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART7", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0080", @@ -736,8 +862,10 @@ }, { "BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request", + "Counter": "0,1", "EventCode": "0x83", "EventName": 
"UNC_IIO_DATA_REQ_OF_CPU.CMPD.ALL_PARTS", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0xff", @@ -747,6 +875,7 @@ }, { "BriefDescription": "Data requested of the CPU : CmpD - device sen= ding completion to CPU request", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART0", "FCMask": "0x07", @@ -758,6 +887,7 @@ }, { "BriefDescription": "Data requested of the CPU : CmpD - device sen= ding completion to CPU request", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART1", "FCMask": "0x07", @@ -769,6 +899,7 @@ }, { "BriefDescription": "Data requested of the CPU : CmpD - device sen= ding completion to CPU request", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART2", "FCMask": "0x07", @@ -780,6 +911,7 @@ }, { "BriefDescription": "Data requested of the CPU : CmpD - device sen= ding completion to CPU request", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART3", "FCMask": "0x07", @@ -791,6 +923,7 @@ }, { "BriefDescription": "Data requested of the CPU : CmpD - device sen= ding completion to CPU request", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART4", "FCMask": "0x07", @@ -802,6 +935,7 @@ }, { "BriefDescription": "Data requested of the CPU : CmpD - device sen= ding completion to CPU request", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART5", "FCMask": "0x07", @@ -813,6 +947,7 @@ }, { "BriefDescription": "Data requested of the CPU : CmpD - device sen= ding completion to CPU request", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART6", "FCMask": "0x07", @@ -824,6 +959,7 @@ }, { "BriefDescription": "Data requested of the CPU : CmpD - device sen= ding completion to CPU request", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART7", "FCMask": 
"0x07", @@ -835,8 +971,10 @@ }, { "BriefDescription": "Read request for 4 bytes made by IIO Part0-7 = to Memory", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.ALL_PARTS", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00ff", @@ -845,6 +983,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by IIO Part0 to= Memory", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0", "FCMask": "0x07", @@ -856,6 +995,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by IIO Part1 to= Memory", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1", "FCMask": "0x07", @@ -867,6 +1007,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by IIO Part2 to= Memory", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2", "FCMask": "0x07", @@ -878,6 +1019,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by IIO Part3 to= Memory", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3", "FCMask": "0x07", @@ -889,6 +1031,7 @@ }, { "BriefDescription": "Data requested of the CPU : Card reading from= DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART4", "FCMask": "0x07", @@ -900,6 +1043,7 @@ }, { "BriefDescription": "Data requested of the CPU : Card reading from= DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART5", "FCMask": "0x07", @@ -911,6 +1055,7 @@ }, { "BriefDescription": "Data requested of the CPU : Card reading from= DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART6", "FCMask": "0x07", @@ -922,6 +1067,7 @@ }, { "BriefDescription": "Data requested of the CPU : Card reading from= DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART7", 
"FCMask": "0x07", @@ -933,8 +1079,10 @@ }, { "BriefDescription": "Write request of 4 bytes made by IIO Part0-7 = to Memory", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.ALL_PARTS", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00ff", @@ -943,6 +1091,7 @@ }, { "BriefDescription": "Write request of 4 bytes made by IIO Part0 to= Memory", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0", "FCMask": "0x07", @@ -954,6 +1103,7 @@ }, { "BriefDescription": "Write request of 4 bytes made by IIO Part1 to= Memory", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1", "FCMask": "0x07", @@ -965,6 +1115,7 @@ }, { "BriefDescription": "Write request of 4 bytes made by IIO Part2 to= Memory", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2", "FCMask": "0x07", @@ -976,6 +1127,7 @@ }, { "BriefDescription": "Write request of 4 bytes made by IIO Part3 to= Memory", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3", "FCMask": "0x07", @@ -987,6 +1139,7 @@ }, { "BriefDescription": "Data requested of the CPU : Card writing to D= RAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART4", "FCMask": "0x07", @@ -998,6 +1151,7 @@ }, { "BriefDescription": "Data requested of the CPU : Card writing to D= RAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART5", "FCMask": "0x07", @@ -1009,6 +1163,7 @@ }, { "BriefDescription": "Data requested of the CPU : Card writing to D= RAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART6", "FCMask": "0x07", @@ -1020,6 +1175,7 @@ }, { "BriefDescription": "Data requested of the CPU : Card writing to D= RAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": 
"UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART7", "FCMask": "0x07", @@ -1031,8 +1187,10 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made by= IIO Part0 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0001", @@ -1042,8 +1200,10 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made by= IIO Part0 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0002", @@ -1053,8 +1213,10 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made by= IIO Part0 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0004", @@ -1064,8 +1226,10 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made by= IIO Part0 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0008", @@ -1075,8 +1239,10 @@ }, { "BriefDescription": "Data requested of the CPU : Card writing to a= nother Card (same or different stack)", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART4", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0010", @@ -1086,8 +1252,10 @@ }, { "BriefDescription": "Data requested of the CPU : Card writing to a= nother Card (same or different stack)", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART5", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0020", @@ -1097,8 +1265,10 @@ }, { "BriefDescription": "Data requested of the CPU : Card writing to a= nother Card 
(same or different stack)", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART6", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0040", @@ -1108,8 +1278,10 @@ }, { "BriefDescription": "Data requested of the CPU : Card writing to a= nother Card (same or different stack)", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART7", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0080", @@ -1119,8 +1291,10 @@ }, { "BriefDescription": "Incoming arbitration requests : Passing data = to be written", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_IIO_INBOUND_ARB_REQ.DATA", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1130,8 +1304,10 @@ }, { "BriefDescription": "Incoming arbitration requests : Issuing final= read or write of line", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_IIO_INBOUND_ARB_REQ.FINAL_RD_WR", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1141,8 +1317,10 @@ }, { "BriefDescription": "Incoming arbitration requests : Processing re= sponse from IOMMU", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_IIO_INBOUND_ARB_REQ.IOMMU_HIT", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1152,8 +1330,10 @@ }, { "BriefDescription": "Incoming arbitration requests : Issuing to IO= MMU", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_IIO_INBOUND_ARB_REQ.IOMMU_REQ", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1163,8 +1343,10 @@ }, { "BriefDescription": "Incoming arbitration requests : Request Owner= ship", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_IIO_INBOUND_ARB_REQ.REQ_OWN", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1174,8 +1356,10 @@ }, { "BriefDescription": "Incoming arbitration 
requests : Writing line"= , + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_IIO_INBOUND_ARB_REQ.WR", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1185,8 +1369,10 @@ }, { "BriefDescription": "Incoming arbitration requests granted : Passi= ng data to be written", + "Counter": "0,1,2,3", "EventCode": "0x87", "EventName": "UNC_IIO_INBOUND_ARB_WON.DATA", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1196,8 +1382,10 @@ }, { "BriefDescription": "Incoming arbitration requests granted : Issui= ng final read or write of line", + "Counter": "0,1,2,3", "EventCode": "0x87", "EventName": "UNC_IIO_INBOUND_ARB_WON.FINAL_RD_WR", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1207,8 +1395,10 @@ }, { "BriefDescription": "Incoming arbitration requests granted : Proce= ssing response from IOMMU", + "Counter": "0,1,2,3", "EventCode": "0x87", "EventName": "UNC_IIO_INBOUND_ARB_WON.IOMMU_HIT", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1218,8 +1408,10 @@ }, { "BriefDescription": "Incoming arbitration requests granted : Issui= ng to IOMMU", + "Counter": "0,1,2,3", "EventCode": "0x87", "EventName": "UNC_IIO_INBOUND_ARB_WON.IOMMU_REQ", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1229,8 +1421,10 @@ }, { "BriefDescription": "Incoming arbitration requests granted : Reque= st Ownership", + "Counter": "0,1,2,3", "EventCode": "0x87", "EventName": "UNC_IIO_INBOUND_ARB_WON.REQ_OWN", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1240,8 +1434,10 @@ }, { "BriefDescription": "Incoming arbitration requests granted : Writi= ng line", + "Counter": "0,1,2,3", "EventCode": "0x87", "EventName": "UNC_IIO_INBOUND_ARB_WON.WR", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1251,8 +1447,10 @@ }, { "BriefDescription": ": IOTLB Hits to a 1G Page", + 
"Counter": "0", "EventCode": "0x40", "EventName": "UNC_IIO_IOMMU0.1G_HITS", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": ": IOTLB Hits to a 1G Page : Counts if a tran= saction to a 1G page, on its first lookup, hits the IOTLB.", @@ -1261,8 +1459,10 @@ }, { "BriefDescription": ": IOTLB Hits to a 2M Page", + "Counter": "0", "EventCode": "0x40", "EventName": "UNC_IIO_IOMMU0.2M_HITS", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": ": IOTLB Hits to a 2M Page : Counts if a tran= saction to a 2M page, on its first lookup, hits the IOTLB.", @@ -1271,8 +1471,10 @@ }, { "BriefDescription": ": IOTLB Hits to a 4K Page", + "Counter": "0", "EventCode": "0x40", "EventName": "UNC_IIO_IOMMU0.4K_HITS", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": ": IOTLB Hits to a 4K Page : Counts if a tran= saction to a 4K page, on its first lookup, hits the IOTLB.", @@ -1281,8 +1483,10 @@ }, { "BriefDescription": ": Context cache hits", + "Counter": "0", "EventCode": "0x40", "EventName": "UNC_IIO_IOMMU0.CTXT_CACHE_HITS", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": ": Context cache hits : Counts each time a fi= rst look up of the transaction hits the RCC.", @@ -1291,8 +1495,10 @@ }, { "BriefDescription": ": Context cache lookups", + "Counter": "0", "EventCode": "0x40", "EventName": "UNC_IIO_IOMMU0.CTXT_CACHE_LOOKUPS", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": ": Context cache lookups : Counts each time a= transaction looks up root context cache.", @@ -1301,8 +1507,10 @@ }, { "BriefDescription": ": IOTLB lookups first", + "Counter": "0", "EventCode": "0x40", "EventName": "UNC_IIO_IOMMU0.FIRST_LOOKUPS", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": ": IOTLB lookups first : Some transactions ha= ve to look up IOTLB multiple times. 
Counts the first time a request looks = up IOTLB.", @@ -1311,8 +1519,10 @@ }, { "BriefDescription": "IOTLB Fills (same as IOTLB miss)", + "Counter": "0", "EventCode": "0x40", "EventName": "UNC_IIO_IOMMU0.MISSES", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "IOTLB Fills (same as IOTLB miss) : When a tr= ansaction misses IOTLB, it does a page walk to look up memory and bring in = the relevant page translation. Counts when this page translation is written= to IOTLB.", @@ -1321,8 +1531,10 @@ }, { "BriefDescription": ": IOMMU memory access", + "Counter": "0", "EventCode": "0x41", "EventName": "UNC_IIO_IOMMU1.NUM_MEM_ACCESSES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": IOMMU memory access : IOMMU sends out memo= ry fetches when it misses the cache look up which is indicated by this sign= al. M2IOSF only uses low priority channel", "UMask": "0xc0", @@ -1330,8 +1542,10 @@ }, { "BriefDescription": ": PWC Hit to a 2M page", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_IOMMU1.PWC_1G_HITS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": PWC Hit to a 2M page : Counts each time a = transaction's first look up hits the SLPWC at the 2M level", "UMask": "0x4", @@ -1339,8 +1553,10 @@ }, { "BriefDescription": ": PWT Hit to a 256T page", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_IOMMU1.PWC_256T_HITS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": PWT Hit to a 256T page : Counts each time = a transaction's first look up hits the SLPWC at the 512G level", "UMask": "0x10", @@ -1348,8 +1564,10 @@ }, { "BriefDescription": ": PWC Hit to a 4K page", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_IOMMU1.PWC_2M_HITS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": PWC Hit to a 4K page : Counts each time a = transaction's first look up hits the SLPWC at the 4K level", "UMask": "0x2", @@ -1357,8 +1575,10 @@ }, { 
"BriefDescription": ": PWC Hit to a 1G page", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_IOMMU1.PWC_512G_HITS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": PWC Hit to a 1G page : Counts each time a = transaction's first look up hits the SLPWC at the 1G level", "UMask": "0x8", @@ -1366,8 +1586,10 @@ }, { "BriefDescription": ": PageWalk cache fill", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_IOMMU1.PWC_CACHE_FILLS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": PageWalk cache fill : When a transaction m= isses SLPWC, it does a page walk to look up memory and bring in the relevan= t page translation. When this page translation is written to SLPWC, ObsPwcF= illValid_nnnH is asserted.", "UMask": "0x20", @@ -1375,8 +1597,10 @@ }, { "BriefDescription": ": PageWalk cache lookup", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_IOMMU1.PWT_CACHE_LOOKUPS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": PageWalk cache lookup : Counts each time a= transaction looks up second level page walk cache.", "UMask": "0x1", @@ -1384,8 +1608,10 @@ }, { "BriefDescription": ": PWC Hit to a 2M page", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_IOMMU1.SLPWC_1G_HITS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": PWC Hit to a 2M page : Counts each time a = transaction's first look up hits the SLPWC at the 2M level", "UMask": "0x4", @@ -1393,8 +1619,10 @@ }, { "BriefDescription": ": PWC Hit to a 2M page", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_IOMMU1.SLPWC_256T_HITS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": ": PWC Hit to a 2M page : Counts each time a = transaction's first look up hits the SLPWC at the 2M level", "UMask": "0x10", @@ -1402,8 +1630,10 @@ }, { "BriefDescription": ": PWC Hit to a 1G page", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_IOMMU1.SLPWC_512G_HITS", + 
"Experimental": "1", "PerPkg": "1", "PublicDescription": ": PWC Hit to a 1G page : Counts each time a = transaction's first look up hits the SLPWC at the 1G level", "UMask": "0x8", @@ -1411,8 +1641,10 @@ }, { "BriefDescription": ": Global IOTLB invalidation cycles", + "Counter": "0,1,2,3", "EventCode": "0x43", "EventName": "UNC_IIO_IOMMU3.PWT_OCCUPANCY_MSB", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": ": Global IOTLB invalidation cycles : Indicat= es that IOMMU is doing global invalidation.", @@ -1421,8 +1653,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus : Non-PCIE bus", + "Counter": "0,1", "EventCode": "0x02", "EventName": "UNC_IIO_MASK_MATCH_AND.BUS0", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "AND Mask/match for debug bus : Non-PCIE bus = : Asserted if all bits specified by mask match", @@ -1431,8 +1665,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus : Non-PCIE bus a= nd PCIE bus", + "Counter": "0,1", "EventCode": "0x02", "EventName": "UNC_IIO_MASK_MATCH_AND.BUS0_BUS1", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "AND Mask/match for debug bus : Non-PCIE bus = and PCIE bus : Asserted if all bits specified by mask match", @@ -1441,8 +1677,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus : Non-PCIE bus a= nd !(PCIE bus)", + "Counter": "0,1", "EventCode": "0x02", "EventName": "UNC_IIO_MASK_MATCH_AND.BUS0_NOT_BUS1", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "AND Mask/match for debug bus : Non-PCIE bus = and !(PCIE bus) : Asserted if all bits specified by mask match", @@ -1451,8 +1689,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus : PCIE bus", + "Counter": "0,1", "EventCode": "0x02", "EventName": "UNC_IIO_MASK_MATCH_AND.BUS1", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "AND Mask/match for debug bus : PCIE bus : As= serted if all 
bits specified by mask match", @@ -1461,8 +1701,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus : !(Non-PCIE bus= ) and PCIE bus", + "Counter": "0,1", "EventCode": "0x02", "EventName": "UNC_IIO_MASK_MATCH_AND.NOT_BUS0_BUS1", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "AND Mask/match for debug bus : !(Non-PCIE bu= s) and PCIE bus : Asserted if all bits specified by mask match", @@ -1471,8 +1713,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus : !(Non-PCIE bus= ) and !(PCIE bus)", + "Counter": "0,1", "EventCode": "0x02", "EventName": "UNC_IIO_MASK_MATCH_AND.NOT_BUS0_NOT_BUS1", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "AND Mask/match for debug bus : !(Non-PCIE bu= s) and !(PCIE bus) : Asserted if all bits specified by mask match", @@ -1481,8 +1725,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus : Non-PCIE bus", + "Counter": "0,1", "EventCode": "0x03", "EventName": "UNC_IIO_MASK_MATCH_OR.BUS0", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "OR Mask/match for debug bus : Non-PCIE bus := Asserted if any bits specified by mask match", @@ -1491,8 +1737,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus : Non-PCIE bus an= d PCIE bus", + "Counter": "0,1", "EventCode": "0x03", "EventName": "UNC_IIO_MASK_MATCH_OR.BUS0_BUS1", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "OR Mask/match for debug bus : Non-PCIE bus a= nd PCIE bus : Asserted if any bits specified by mask match", @@ -1501,8 +1749,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus : Non-PCIE bus an= d !(PCIE bus)", + "Counter": "0,1", "EventCode": "0x03", "EventName": "UNC_IIO_MASK_MATCH_OR.BUS0_NOT_BUS1", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "OR Mask/match for debug bus : Non-PCIE bus a= nd !(PCIE bus) : Asserted if any bits specified by mask match", @@ -1511,8 +1761,10 
@@ }, { "BriefDescription": "OR Mask/match for debug bus : PCIE bus", + "Counter": "0,1", "EventCode": "0x03", "EventName": "UNC_IIO_MASK_MATCH_OR.BUS1", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "OR Mask/match for debug bus : PCIE bus : Ass= erted if any bits specified by mask match", @@ -1521,8 +1773,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus : !(Non-PCIE bus)= and PCIE bus", + "Counter": "0,1", "EventCode": "0x03", "EventName": "UNC_IIO_MASK_MATCH_OR.NOT_BUS0_BUS1", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "OR Mask/match for debug bus : !(Non-PCIE bus= ) and PCIE bus : Asserted if any bits specified by mask match", @@ -1531,8 +1785,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus : !(Non-PCIE bus)= and !(PCIE bus)", + "Counter": "0,1", "EventCode": "0x03", "EventName": "UNC_IIO_MASK_MATCH_OR.NOT_BUS0_NOT_BUS1", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "OR Mask/match for debug bus : !(Non-PCIE bus= ) and !(PCIE bus) : Asserted if any bits specified by mask match", @@ -1541,6 +1797,7 @@ }, { "BriefDescription": "Number requests PCIe makes of the main die : = All", + "Counter": "0,1,2,3", "EventCode": "0x85", "EventName": "UNC_IIO_NUM_REQ_OF_CPU.COMMIT.ALL", "FCMask": "0x07", @@ -1552,8 +1809,10 @@ }, { "BriefDescription": "Num requests sent by PCIe - by target : Abort= ", + "Counter": "0,1,2,3", "EventCode": "0x8e", "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.ABORT", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1562,8 +1821,10 @@ }, { "BriefDescription": "Num requests sent by PCIe - by target : Confi= ned P2P", + "Counter": "0,1,2,3", "EventCode": "0x8e", "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.CONFINED_P2P", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1572,8 +1833,10 @@ }, { "BriefDescription": "Num requests sent by PCIe - by target : Local= 
P2P", + "Counter": "0,1,2,3", "EventCode": "0x8e", "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.LOC_P2P", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1582,8 +1845,10 @@ }, { "BriefDescription": "Num requests sent by PCIe - by target : Multi= -cast", + "Counter": "0,1,2,3", "EventCode": "0x8e", "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MCAST", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1592,8 +1857,10 @@ }, { "BriefDescription": "Num requests sent by PCIe - by target : Memor= y", + "Counter": "0,1,2,3", "EventCode": "0x8e", "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MEM", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1602,8 +1869,10 @@ }, { "BriefDescription": "Num requests sent by PCIe - by target : MsgB"= , + "Counter": "0,1,2,3", "EventCode": "0x8e", "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MSGB", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1612,8 +1881,10 @@ }, { "BriefDescription": "Num requests sent by PCIe - by target : Remot= e P2P", + "Counter": "0,1,2,3", "EventCode": "0x8e", "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.REM_P2P", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1622,8 +1893,10 @@ }, { "BriefDescription": "Num requests sent by PCIe - by target : Ubox"= , + "Counter": "0,1,2,3", "EventCode": "0x8e", "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.UBOX", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1632,8 +1905,10 @@ }, { "BriefDescription": "ITC address map 1", + "Counter": "0,1,2,3", "EventCode": "0x8f", "EventName": "UNC_IIO_NUM_TGT_MATCHED_REQ_OF_CPU", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "UNC_IIO_NUM_TGT_MATCHED_REQ_OF_CPU", @@ -1641,8 +1916,10 @@ }, { "BriefDescription": "Outbound cacheline requests issued : 64B requ= ests issued to device", + "Counter": "0,1,2,3", 
"EventCode": "0xd0", "EventName": "UNC_IIO_OUTBOUND_CL_REQS_ISSUED.TO_IO", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1652,8 +1929,10 @@ }, { "BriefDescription": "Outbound TLP (transaction layer packet) reque= sts issued : To device", + "Counter": "0,1,2,3", "EventCode": "0xd1", "EventName": "UNC_IIO_OUTBOUND_TLP_REQS_ISSUED.TO_IO", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1663,8 +1942,10 @@ }, { "BriefDescription": "PWT occupancy. Does not include 9th bit of o= ccupancy (will undercount if PWT is greater than 255 per cycle).", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_IIO_PWT_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "PortMask": "0x0000", "PublicDescription": "PWT occupancy : Indicates how many page walk= s are outstanding at any point in time.", @@ -1673,8 +1954,10 @@ }, { "BriefDescription": "Request Ownership : PCIe Request complete", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.DATA", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1684,8 +1967,10 @@ }, { "BriefDescription": "Request Ownership : Writing line", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.FINAL_RD_WR", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1695,8 +1980,10 @@ }, { "BriefDescription": "Request Ownership : Issuing final read or wri= te of line", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.REQ_OWN", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1706,8 +1993,10 @@ }, { "BriefDescription": "Request Ownership : Passing data to be writte= n", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.WR", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1717,8 +2006,10 @@ }, { 
"BriefDescription": "Processing response from IOMMU : Passing data= to be written", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.FINAL_RD_WR", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1728,8 +2019,10 @@ }, { "BriefDescription": "Processing response from IOMMU : Issuing fina= l read or write of line", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.IOMMU_HIT", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1738,8 +2031,10 @@ }, { "BriefDescription": "Processing response from IOMMU : Request Owne= rship", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.IOMMU_REQ", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1749,8 +2044,10 @@ }, { "BriefDescription": "Processing response from IOMMU : Writing line= ", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.REQ_OWN", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1760,8 +2057,10 @@ }, { "BriefDescription": "PCIe Request - pass complete : Passing data t= o be written", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.DATA", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1771,8 +2070,10 @@ }, { "BriefDescription": "PCIe Request - pass complete : Issuing final = read or write of line", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.FINAL_RD_WR", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1782,8 +2083,10 @@ }, { "BriefDescription": "PCIe Request - pass complete : Request Owners= hip", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.REQ_OWN", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1793,8 
+2096,10 @@ }, { "BriefDescription": "PCIe Request - pass complete : Writing line", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.WR", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x00FF", @@ -1804,6 +2109,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by the CPU to IIO Part0", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0", "FCMask": "0x07", @@ -1815,6 +2121,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by the CPU to IIO Part1", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1", "FCMask": "0x07", @@ -1826,6 +2133,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by the CPU to IIO Part2", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2", "FCMask": "0x07", @@ -1837,6 +2145,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by the CPU to IIO Part3", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3", "FCMask": "0x07", @@ -1848,6 +2157,7 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : Co= re reading from Cards MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART4", "FCMask": "0x07", @@ -1859,6 +2169,7 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : Co= re reading from Cards MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART5", "FCMask": "0x07", @@ -1870,6 +2181,7 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : Co= re reading from Cards MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART6", "FCMask": "0x07", @@ -1881,6 
+2193,7 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : Co= re reading from Cards MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART7", "FCMask": "0x07", @@ -1892,6 +2205,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made to IIO Part0 by the CPU", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0", "FCMask": "0x07", @@ -1903,6 +2217,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made to IIO Part1 by the CPU", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1", "FCMask": "0x07", @@ -1914,6 +2229,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made to IIO Part2 by the CPU", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2", "FCMask": "0x07", @@ -1925,6 +2241,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made to IIO Part3 by the CPU", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3", "FCMask": "0x07", @@ -1936,6 +2253,7 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : Co= re writing to Cards MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART4", "FCMask": "0x07", @@ -1947,6 +2265,7 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : Co= re writing to Cards MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART5", "FCMask": "0x07", @@ -1958,6 +2277,7 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : Co= re writing to Cards MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART6", "FCMask": "0x07", @@ -1969,6 +2289,7 @@ }, { 
"BriefDescription": "Number Transactions requested by the CPU : Co= re writing to Cards MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART7", "FCMask": "0x07", @@ -1980,8 +2301,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : An= other card (different IIO stack) writing to this card.", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0001", @@ -1991,8 +2314,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : An= other card (different IIO stack) writing to this card.", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0002", @@ -2002,8 +2327,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : An= other card (different IIO stack) writing to this card.", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0004", @@ -2013,8 +2340,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : An= other card (different IIO stack) writing to this card.", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0008", @@ -2024,8 +2353,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : An= other card (different IIO stack) writing to this card.", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART4", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0010", @@ -2035,8 +2366,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : An= other card (different 
IIO stack) writing to this card.", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART5", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0020", @@ -2046,8 +2379,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : An= other card (different IIO stack) writing to this card.", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART6", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0040", @@ -2057,8 +2392,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU : An= other card (different IIO stack) writing to this card.", + "Counter": "0,1,2,3", "EventCode": "0xc1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART7", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0080", @@ -2068,6 +2405,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Cm= pD - device sending completion to CPU request", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART0", "FCMask": "0x07", @@ -2079,6 +2417,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Cm= pD - device sending completion to CPU request", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART1", "FCMask": "0x07", @@ -2090,6 +2429,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Cm= pD - device sending completion to CPU request", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART2", "FCMask": "0x07", @@ -2101,6 +2441,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Cm= pD - device sending completion to CPU request", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART3", "FCMask": "0x07", @@ -2112,6 +2453,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Cm= pD 
- device sending completion to CPU request", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART4", "FCMask": "0x07", @@ -2123,6 +2465,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Cm= pD - device sending completion to CPU request", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART5", "FCMask": "0x07", @@ -2134,6 +2477,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Cm= pD - device sending completion to CPU request", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART6", "FCMask": "0x07", @@ -2145,6 +2489,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Cm= pD - device sending completion to CPU request", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART7", "FCMask": "0x07", @@ -2156,6 +2501,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by IIO Part0 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART0", "FCMask": "0x07", @@ -2167,6 +2513,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by IIO Part1 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART1", "FCMask": "0x07", @@ -2178,6 +2525,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by IIO Part2 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART2", "FCMask": "0x07", @@ -2189,6 +2537,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by IIO Part3 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART3", "FCMask": "0x07", @@ -2200,6 +2549,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : 
Ca= rd reading from DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART4", "FCMask": "0x07", @@ -2211,6 +2561,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd reading from DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART5", "FCMask": "0x07", @@ -2222,6 +2573,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd reading from DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART6", "FCMask": "0x07", @@ -2233,6 +2585,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd reading from DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART7", "FCMask": "0x07", @@ -2244,6 +2597,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made by IIO Part0 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0", "FCMask": "0x07", @@ -2255,6 +2609,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made by IIO Part1 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1", "FCMask": "0x07", @@ -2266,6 +2621,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made by IIO Part2 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2", "FCMask": "0x07", @@ -2277,6 +2633,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made by IIO Part3 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3", "FCMask": "0x07", @@ -2288,6 +2645,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", 
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART4", "FCMask": "0x07", @@ -2299,6 +2657,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART5", "FCMask": "0x07", @@ -2310,6 +2669,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART6", "FCMask": "0x07", @@ -2321,6 +2681,7 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART7", "FCMask": "0x07", @@ -2332,8 +2693,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0001", @@ -2343,8 +2706,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0002", @@ -2354,8 +2719,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0004", @@ -2365,8 +2732,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": 
"UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0008", @@ -2376,8 +2745,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART4", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0010", @@ -2387,8 +2758,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART5", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0020", @@ -2398,8 +2771,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART6", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0040", @@ -2409,8 +2784,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU : Ca= rd writing to another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART7", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x0080", @@ -2420,6 +2797,7 @@ }, { "BriefDescription": "M2P Clockticks", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_M2P_CLOCKTICKS", "PerPkg": "1", @@ -2428,6 +2806,7 @@ }, { "BriefDescription": "CMS Clockticks", + "Counter": "0,1,2,3", "EventCode": "0xc0", "EventName": "UNC_M2P_CMS_CLOCKTICKS", "PerPkg": "1", @@ -2435,8 +2814,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements = : Down", + "Counter": "0,1,2,3", "EventCode": "0xba", "EventName": "UNC_M2P_EGRESS_ORDERING.IV_SNOOPGO_DN", + "Experimental": "1", 
"PerPkg": "1", "PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x4", @@ -2444,8 +2825,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements : Up", + "Counter": "0,1,2,3", "EventCode": "0xba", "EventName": "UNC_M2P_EGRESS_ORDERING.IV_SNOOPGO_UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x1", @@ -2453,8 +2836,10 @@ }, { "BriefDescription": "M2PCIe IIO Credit Acquired : DRS", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.DRS_0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credit Acquired : DRS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.", "UMask": "0x1", @@ -2462,8 +2847,10 @@ }, { "BriefDescription": "M2PCIe IIO Credit Acquired : DRS", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.DRS_1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credit Acquired : DRS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.
These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.", "UMask": "0x2", @@ -2471,8 +2858,10 @@ }, { "BriefDescription": "M2PCIe IIO Credit Acquired : NCB", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCB_0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credit Acquired : NCB : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCB message class.", "UMask": "0x4", @@ -2480,8 +2869,10 @@ }, { "BriefDescription": "M2PCIe IIO Credit Acquired : NCB", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCB_1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credit Acquired : NCB : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.
: Credits for transfer through CMS Port 0 to the IIO for the NCB message class.", "UMask": "0x8", @@ -2489,8 +2880,10 @@ }, { "BriefDescription": "M2PCIe IIO Credit Acquired : NCS", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCS_0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credit Acquired : NCS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCS message class.", "UMask": "0x10", @@ -2498,8 +2891,10 @@ }, { "BriefDescription": "M2PCIe IIO Credit Acquired : NCS", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCS_1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credit Acquired : NCS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly).
: Credit for transfer through CMS Port 0s to the IIO for the NCS message class.", "UMask": "0x20", @@ -2507,8 +2902,10 @@ }, { "BriefDescription": "M2PCIe IIO Failed to Acquire a Credit : DRS", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_M2P_IIO_CREDITS_REJECT.DRS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Failed to Acquire a Credit : DRS : Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly). : Credits to the IIO for the DRS message class.", "UMask": "0x8", @@ -2516,8 +2913,10 @@ }, { "BriefDescription": "M2PCIe IIO Failed to Acquire a Credit : NCB", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_M2P_IIO_CREDITS_REJECT.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Failed to Acquire a Credit : NCB : Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly).
: Credits to the IIO for the NCB message class.", "UMask": "0x10", @@ -2525,8 +2924,10 @@ }, { "BriefDescription": "M2PCIe IIO Failed to Acquire a Credit : NCS", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_M2P_IIO_CREDITS_REJECT.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Failed to Acquire a Credit : NCS : Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly). : Credits to the IIO for the NCS message class.", "UMask": "0x20", @@ -2534,8 +2935,10 @@ }, { "BriefDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 0", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_M2P_IIO_CREDITS_USED.DRS_0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 0 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly).
: Credits for transfer through CMS Port 0 to the IIO for the DRS message class.", "UMask": "0x1", @@ -2543,8 +2946,10 @@ }, { "BriefDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 1", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_M2P_IIO_CREDITS_USED.DRS_1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 1 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.", "UMask": "0x2", @@ -2552,8 +2957,10 @@ }, { "BriefDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 0", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_M2P_IIO_CREDITS_USED.NCB_0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 0 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly).
: Credits for transfer through CMS Port 0 to the IIO for the NCB message class.", "UMask": "0x4", @@ -2561,8 +2968,10 @@ }, { "BriefDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 1", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_M2P_IIO_CREDITS_USED.NCB_1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 1 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCB message class.", "UMask": "0x8", @@ -2570,8 +2979,10 @@ }, { "BriefDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 0", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_M2P_IIO_CREDITS_USED.NCS_0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 0 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly).
: Credits for transfer through CMS Port 0 to the IIO for the NCS message class.", "UMask": "0x10", @@ -2579,8 +2990,10 @@ }, { "BriefDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 1", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_M2P_IIO_CREDITS_USED.NCS_1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 1 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly). : Credit for transfer through CMS Port 0s to the IIO for the NCS message class.", "UMask": "0x20", @@ -2588,896 +3001,1120 @@ }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF0 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF0 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF1 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF1_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF1 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF1_NCS", + "Experimental": "1", "PerPkg":
"1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF2 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF2_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF2 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF2_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF3 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF3_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF3 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF3_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF4 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF4_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF4 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF4_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF5 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF5_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF5 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x47",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF5_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF0 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF0 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF1 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF1_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF1 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF1_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF2 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF2_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF2 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF2_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF3 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF3_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2PCIe" }, {
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF3 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF3_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF4 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x1a", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF4_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF4 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x1a", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF4_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF5 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x1a", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF5_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF5 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x1a", "EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF5_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Shared Credits Returned : Agent0", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M2P_LOCAL_P2P_SHAR_RETURNED.AGENT_0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Shared Credits Returned : Agent1", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M2P_LOCAL_P2P_SHAR_RETURNED.AGENT_1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Local P2P Shared Credits Returned : Agent2", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M2P_LOCAL_P2P_SHAR_RETURNED.AGENT_2", + "Experimental":
"1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent0", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent1", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent2", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent3", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent4", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent5", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF0 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF0 - NCS", + "Counter": "0,1,2,3",
"EventCode": "0x40", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF1 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF1_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF1 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF1_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF2 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF2_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF2 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF2_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF3 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF3_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF3 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF3_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF4 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF4_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Local
Shared P2P Credit Taken - 1 : M2IOSF4 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF4_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF5 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF5_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF5 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF5_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF0 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF0 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF1 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF1_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF1 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF1_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF2 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF2_NCB", +
"Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF2 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF2_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF3 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF3_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF3 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF3_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF4 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x4b", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF4_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF4 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x4b", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF4_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF5 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x4b", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF5_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF5 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x4b", "EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF5_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "P2P Credit Occupancy : All", + "Counter": "0,1",
"EventCode": "0x14", "EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "P2P Credit Occupancy : Local NCB", + "Counter": "0,1", "EventCode": "0x14", "EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.LOCAL_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "P2P Credit Occupancy : Local NCS", + "Counter": "0,1", "EventCode": "0x14", "EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.LOCAL_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "P2P Credit Occupancy : Remote NCB", + "Counter": "0,1", "EventCode": "0x14", "EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.REMOTE_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "P2P Credit Occupancy : Remote NCS", + "Counter": "0,1", "EventCode": "0x14", "EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.REMOTE_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Dedicated Credits Received : All", + "Counter": "0,1,2,3", "EventCode": "0x16", "EventName": "UNC_M2P_P2P_DED_RECEIVED.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Dedicated Credits Received : Local NCB", + "Counter": "0,1,2,3", "EventCode": "0x16", "EventName": "UNC_M2P_P2P_DED_RECEIVED.LOCAL_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Dedicated Credits Received : Local NCS", + "Counter": "0,1,2,3", "EventCode": "0x16", "EventName": "UNC_M2P_P2P_DED_RECEIVED.LOCAL_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Dedicated Credits Received : Remote NCB", + "Counter": "0,1,2,3", "EventCode": "0x16", "EventName": "UNC_M2P_P2P_DED_RECEIVED.REMOTE_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": 
"Dedicated Credits Received : Remote NCS", + "Counter": "0,1,2,3", "EventCode": "0x16", "EventName": "UNC_M2P_P2P_DED_RECEIVED.REMOTE_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Shared Credits Received : All", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_M2P_P2P_SHAR_RECEIVED.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Shared Credits Received : Local NCB", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_M2P_P2P_SHAR_RECEIVED.LOCAL_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Shared Credits Received : Local NCS", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_M2P_P2P_SHAR_RECEIVED.LOCAL_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Shared Credits Received : Remote NCB", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_M2P_P2P_SHAR_RECEIVED.REMOTE_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Shared Credits Received : Remote NCS", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_M2P_P2P_SHAR_RECEIVED.REMOTE_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI0 - DRS", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI0_DRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI0 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI0 - NCS", + "Counter": "0,1,2,3", "EventCode":
"0x48", "EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI1 - DRS", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI1_DRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI1 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI1_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI1 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI1_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Dedicated P2P Credit Taken - 1 : UPI2 - DRS", + "Counter": "0,1,2,3", "EventCode": "0x49", "EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_1.UPI2_DRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Dedicated P2P Credit Taken - 1 : UPI2 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x49", "EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_1.UPI2_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Dedicated P2P Credit Taken - 1 : UPI2 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x49", "EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_1.UPI2_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Remote P2P Dedicated Credits Returned : UPI0 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x1b", "EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Remote P2P Dedicated Credits
Returned : UPI0 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x1b", "EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Remote P2P Dedicated Credits Returned : UPI1 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x1b", "EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI1_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Remote P2P Dedicated Credits Returned : UPI1 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x1b", "EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI1_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Remote P2P Dedicated Credits Returned : UPI2 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x1b", "EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI2_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Remote P2P Dedicated Credits Returned : UPI2 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x1b", "EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI2_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2PCIe" }, { "BriefDescription": "Remote P2P Shared Credits Returned : Agent0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2P_REMOTE_P2P_SHAR_RETURNED.AGENT_0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Remote P2P Shared Credits Returned : Agent1", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2P_REMOTE_P2P_SHAR_RETURNED.AGENT_1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Remote P2P Shared Credits Returned : Agent2", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2P_REMOTE_P2P_SHAR_RETURNED.AGENT_2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P 
Credit Returned to credit ring : Agent0", + "Counter": "0,1,2,3", "EventCode": "0x45", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_RETURNED.AGENT_0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Returned to credit ring : Agent1", + "Counter": "0,1,2,3", "EventCode": "0x45", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_RETURNED.AGENT_1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Returned to credit ring : Agent2", + "Counter": "0,1,2,3", "EventCode": "0x45", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_RETURNED.AGENT_2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI0 - DRS", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI0_DRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI0 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI0 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI1 - DRS", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI1_DRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI1 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI1_NCB", + "Experimental": "1", "PerPkg": "1", 
"UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI1 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI1_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Taken - 1 : UPI2 - DRS", + "Counter": "0,1,2,3", "EventCode": "0x43", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_1.UPI2_DRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Taken - 1 : UPI2 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x43", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_1.UPI2_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Remote Shared P2P Credit Taken - 1 : UPI2 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x43", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_1.UPI2_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI0 - DRS", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI0_DRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI0 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI0 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI1 - DRS", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName": 
"UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI1_DRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI1 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI1_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI1 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI1_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Remote Shared P2P Credit - 1 : UPI2 - DRS", + "Counter": "0,1,2,3", "EventCode": "0x4d", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_1.UPI2_DRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Remote Shared P2P Credit - 1 : UPI2 - NCB", + "Counter": "0,1,2,3", "EventCode": "0x4d", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_1.UPI2_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "Waiting on Remote Shared P2P Credit - 1 : UPI2 - NCS", + "Counter": "0,1,2,3", "EventCode": "0x4d", "EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_1.UPI2_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2PCIe" }, { "BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M2P_RxC_CYCLES_NE.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.", "UMask": "0x80", @@ -3485,8 +4122,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M2P_RxC_CYCLES_NE.CHA_IDI", + "Experimental": "1", "PerPkg": "1", 
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.", "UMask": "0x1", @@ -3494,8 +4133,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M2P_RxC_CYCLES_NE.CHA_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.", "UMask": "0x2", @@ -3503,8 +4144,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M2P_RxC_CYCLES_NE.CHA_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.", "UMask": "0x4", @@ -3512,8 +4155,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M2P_RxC_CYCLES_NE.IIO_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.", "UMask": "0x20", @@ -3521,8 +4166,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M2P_RxC_CYCLES_NE.IIO_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.", "UMask": "0x40", @@ -3530,8 +4177,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M2P_RxC_CYCLES_NE.UPI_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not 
empty.", "UMask": "0x8", @@ -3539,8 +4188,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M2P_RxC_CYCLES_NE.UPI_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.", "UMask": "0x10", @@ -3548,8 +4199,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Inserts", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2P_RxC_INSERTS.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.", "UMask": "0x80", @@ -3557,8 +4210,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Inserts", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2P_RxC_INSERTS.CHA_IDI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.", "UMask": "0x1", @@ -3566,8 +4221,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Inserts", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2P_RxC_INSERTS.CHA_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. 
This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.", "UMask": "0x2", @@ -3575,8 +4232,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Inserts", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2P_RxC_INSERTS.CHA_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.", "UMask": "0x4", @@ -3584,8 +4243,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Inserts", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2P_RxC_INSERTS.IIO_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.", "UMask": "0x20", @@ -3593,8 +4254,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Inserts", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2P_RxC_INSERTS.IIO_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.", "UMask": "0x40", @@ -3602,8 +4265,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Inserts", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2P_RxC_INSERTS.UPI_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. 
This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.", "UMask": "0x8", @@ -3611,8 +4276,10 @@ }, { "BriefDescription": "Ingress (from CMS) Queue Inserts", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2P_RxC_INSERTS.UPI_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.", "UMask": "0x10", @@ -3620,24 +4287,30 @@ }, { "BriefDescription": "UNC_M2P_TxC_CREDITS.PMM", + "Counter": "0,1", "EventCode": "0x2d", "EventName": "UNC_M2P_TxC_CREDITS.PMM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2PCIe" }, { "BriefDescription": "UNC_M2P_TxC_CREDITS.PRQ", + "Counter": "0,1", "EventCode": "0x2d", "EventName": "UNC_M2P_TxC_CREDITS.PRQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2PCIe" }, { "BriefDescription": "Egress (to CMS) Cycles Full", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2P_TxC_CYCLES_FULL.PMM_BLOCK_0", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -3647,8 +4320,10 @@ }, { "BriefDescription": "Egress (to CMS) Cycles Full", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2P_TxC_CYCLES_FULL.PMM_BLOCK_1", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -3658,8 +4333,10 @@ }, { "BriefDescription": "Egress (to CMS) Cycles Not Empty", + "Counter": "0,1", "EventCode": "0x23", "EventName": "UNC_M2P_TxC_CYCLES_NE.PMM_DISTRESS_0", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -3669,8 +4346,10 @@ }, { "BriefDescription": "Egress (to CMS) Cycles Not Empty", + "Counter": "0,1", "EventCode": "0x23", "EventName": 
"UNC_M2P_TxC_CYCLES_NE.PMM_DISTRESS_1", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-memory.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-memory.json index 3ff9e9b722c8..aa06088dd26f 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-memory.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-memory.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Cycles - at UCLK", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_M2HBM_CLOCKTICKS", "PerPkg": "1", @@ -8,6 +9,7 @@ }, { "BriefDescription": "CMS Clockticks", + "Counter": "0,1,2,3", "EventCode": "0xc0", "EventName": "UNC_M2HBM_CMS_CLOCKTICKS", "PerPkg": "1", @@ -15,16 +17,20 @@ }, { "BriefDescription": "Cycles when direct to core mode (which bypasses the CHA) was disabled", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M2HBM_DIRECT2CORE_NOT_TAKEN_DIRSTATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "M2HBM" }, { "BriefDescription": "Cycles when direct to core mode, which bypasses the CHA, was disabled : Non Cisgress", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M2HBM_DIRECT2CORE_NOT_TAKEN_DIRSTATE.NON_CISGRESS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of time non cisgress D2C was not honoured by egress due to directory state constraints", "UMask": "0x2", @@ -32,47 +38,59 @@ }, { "BriefDescription": "Counts the time when FM didn't do d2c for fill reads (cross tile case)", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_M2HBM_DIRECT2CORE_NOT_TAKEN_NOTFORKED", + "Experimental": "1", "PerPkg": "1", "Unit": "M2HBM" }, { "BriefDescription": "Number of reads in which direct to core transaction were overridden", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2HBM_DIRECT2CORE_TXN_OVERRIDE", + "Experimental": "1", "PerPkg": "1", 
"UMask": "0x3", "Unit": "M2HBM" }, { "BriefDescription": "Number of reads in which direct to core transaction was overridden : Cisgress", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2HBM_DIRECT2CORE_TXN_OVERRIDE.CISGRESS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Number of reads in which direct to Intel UPI transactions were overridden", + "Counter": "0,1,2,3", "EventCode": "0x1b", "EventName": "UNC_M2HBM_DIRECT2UPI_NOT_TAKEN_CREDITS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "M2HBM" }, { "BriefDescription": "Cycles when direct to Intel UPI was disabled", + "Counter": "0,1,2,3", "EventCode": "0x1a", "EventName": "UNC_M2HBM_DIRECT2UPI_NOT_TAKEN_DIRSTATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "M2HBM" }, { "BriefDescription": "Cycles when Direct2UPI was Disabled : Cisgress D2U Ignored", + "Counter": "0,1,2,3", "EventCode": "0x1A", "EventName": "UNC_M2HBM_DIRECT2UPI_NOT_TAKEN_DIRSTATE.CISGRESS", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -82,8 +100,10 @@ }, { "BriefDescription": "Cycles when Direct2UPI was Disabled : Egress Ignored D2U", + "Counter": "0,1,2,3", "EventCode": "0x1A", "EventName": "UNC_M2HBM_DIRECT2UPI_NOT_TAKEN_DIRSTATE.EGRESS", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -93,8 +113,10 @@ }, { "BriefDescription": "Cycles when Direct2UPI was Disabled : Non Cisgress D2U Ignored", + "Counter": "0,1,2,3", "EventCode": "0x1A", "EventName": "UNC_M2HBM_DIRECT2UPI_NOT_TAKEN_DIRSTATE.NON_CISGRESS", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -104,86 +126,107 @@ }, { "BriefDescription": "Number of reads that a message sent direct2 Intel UPI was overridden", + "Counter": "0,1,2,3", "EventCode": "0x1c", "EventName": "UNC_M2HBM_DIRECT2UPI_TXN_OVERRIDE", + "Experimental": "1", "PerPkg": "1", 
"UMask": "0x3", "Unit": "M2HBM" }, { "BriefDescription": "Number of times a direct to UPI transaction was overridden.", + "Counter": "0,1,2,3", "EventCode": "0x1c", "EventName": "UNC_M2HBM_DIRECT2UPI_TXN_OVERRIDE.CISGRESS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Directory Hit : On NonDirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2HBM_DIRECTORY_HIT.CLEAN_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2HBM" }, { "BriefDescription": "Directory Hit : On NonDirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2HBM_DIRECTORY_HIT.CLEAN_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2HBM" }, { "BriefDescription": "Directory Hit : On NonDirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2HBM_DIRECTORY_HIT.CLEAN_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2HBM" }, { "BriefDescription": "Directory Hit : On NonDirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2HBM_DIRECTORY_HIT.CLEAN_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2HBM" }, { "BriefDescription": "Directory Hit : On Dirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2HBM_DIRECTORY_HIT.DIRTY_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2HBM" }, { "BriefDescription": "Directory Hit : On Dirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2HBM_DIRECTORY_HIT.DIRTY_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Directory Hit : On Dirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2HBM_DIRECTORY_HIT.DIRTY_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2HBM" }, { "BriefDescription": "Directory Hit : On Dirty Line in S 
State", + "Counter": "0,1,2,3", "EventCode": "0x1d", "EventName": "UNC_M2HBM_DIRECTORY_HIT.DIRTY_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Multi-socket cacheline Directory lookups (any state found)", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M2HBM_DIRECTORY_LOOKUP.ANY", "PerPkg": "1", @@ -193,6 +236,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory lookups (cacheline found in A state)", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M2HBM_DIRECTORY_LOOKUP.STATE_A", "PerPkg": "1", @@ -202,6 +246,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory lookup (cacheline found in I state)", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M2HBM_DIRECTORY_LOOKUP.STATE_I", "PerPkg": "1", @@ -211,6 +256,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory lookup (cacheline found in S state)", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M2HBM_DIRECTORY_LOOKUP.STATE_S", "PerPkg": "1", @@ -220,86 +266,107 @@ }, { "BriefDescription": "Directory Miss : On NonDirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2HBM_DIRECTORY_MISS.CLEAN_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2HBM" }, { "BriefDescription": "Directory Miss : On NonDirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2HBM_DIRECTORY_MISS.CLEAN_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2HBM" }, { "BriefDescription": "Directory Miss : On NonDirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2HBM_DIRECTORY_MISS.CLEAN_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2HBM" }, { "BriefDescription": "Directory Miss : On NonDirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2HBM_DIRECTORY_MISS.CLEAN_S", + "Experimental": "1", 
"PerPkg": "1", "UMask": "0x20", "Unit": "M2HBM" }, { "BriefDescription": "Directory Miss : On Dirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2HBM_DIRECTORY_MISS.DIRTY_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2HBM" }, { "BriefDescription": "Directory Miss : On Dirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2HBM_DIRECTORY_MISS.DIRTY_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Directory Miss : On Dirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2HBM_DIRECTORY_MISS.DIRTY_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2HBM" }, { "BriefDescription": "Directory Miss : On Dirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x1e", "EventName": "UNC_M2HBM_DIRECTORY_MISS.DIRTY_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Multi-socket cacheline Directory update from A to I", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A2I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x320", "Unit": "M2HBM" }, { "BriefDescription": "Multi-socket cacheline Directory update from A to S", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A2S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x340", "Unit": "M2HBM" }, { "BriefDescription": "Multi-socket cacheline Directory update from/to Any state", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.ANY", "PerPkg": "1", @@ -308,8 +375,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A_TO_I_HIT_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -319,8 +388,10 @@ }, { 
"BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A_TO_I_MISS_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -330,8 +401,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A_TO_S_HIT_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -341,8 +414,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A_TO_S_MISS_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -352,8 +427,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.HIT_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -363,24 +440,30 @@ }, { "BriefDescription": "Multi-socket cacheline Directory update from I to A", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I2A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x304", "Unit": "M2HBM" }, { "BriefDescription": "Multi-socket cacheline Directory update from I to S", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I2S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x302", "Unit": "M2HBM" }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I_TO_A_HIT_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -390,8 +473,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": 
"0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I_TO_A_MISS_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -401,8 +486,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I_TO_S_HIT_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -412,8 +499,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I_TO_S_MISS_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -423,8 +512,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.MISS_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -434,24 +525,30 @@ }, { "BriefDescription": "Multi-socket cacheline Directory update from S to A", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S2A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x310", "Unit": "M2HBM" }, { "BriefDescription": "Multi-socket cacheline Directory update from S to I", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S2I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x308", "Unit": "M2HBM" }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S_TO_A_HIT_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -461,8 +558,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": 
"UNC_M2HBM_DIRECTORY_UPDATE.S_TO_A_MISS_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -472,8 +571,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S_TO_I_HIT_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -483,8 +584,10 @@ }, { "BriefDescription": "Multi-socket cacheline Directory Updates", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S_TO_I_MISS_NON_PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -494,64 +597,80 @@ }, { "BriefDescription": "Count distress signalled on AkAd cmp message", + "Counter": "0,1,2,3", "EventCode": "0x67", "EventName": "UNC_M2HBM_DISTRESS.AD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2HBM" }, { "BriefDescription": "Count distress signalled on any packet type", + "Counter": "0,1,2,3", "EventCode": "0x67", "EventName": "UNC_M2HBM_DISTRESS.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Count distress signalled on Bl Cmp message", + "Counter": "0,1,2,3", "EventCode": "0x67", "EventName": "UNC_M2HBM_DISTRESS.BL_CMP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2HBM" }, { "BriefDescription": "Count distress signalled on NM fill write message", + "Counter": "0,1,2,3", "EventCode": "0x67", "EventName": "UNC_M2HBM_DISTRESS.CROSSTILE_NMWR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2HBM" }, { "BriefDescription": "Count distress signalled on D2Cha message", + "Counter": "0,1,2,3", "EventCode": "0x67", "EventName": "UNC_M2HBM_DISTRESS.D2CHA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2HBM" }, { "BriefDescription": "Count distress signalled on D2c message", + "Counter": "0,1,2,3", "EventCode": "0x67", 
"EventName": "UNC_M2HBM_DISTRESS.D2CORE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Count distress signalled on D2k message", + "Counter": "0,1,2,3", "EventCode": "0x67", "EventName": "UNC_M2HBM_DISTRESS.D2UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2HBM" }, { "BriefDescription": "Egress Blocking due to Ordering requirements : Down", + "Counter": "0,1,2,3", "EventCode": "0xba", "EventName": "UNC_M2HBM_EGRESS_ORDERING.IV_SNOOPGO_DN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x80000004", @@ -559,8 +678,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements : Up", + "Counter": "0,1,2,3", "EventCode": "0xba", "EventName": "UNC_M2HBM_EGRESS_ORDERING.IV_SNOOPGO_UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x80000001", @@ -568,8 +689,10 @@ }, { "BriefDescription": "Count when Starve Glocab counter is at 7", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2HBM_IGR_STARVE_WINNER.MASK7", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -578,32 +701,40 @@ }, { "BriefDescription": "Reads to iMC issued", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x304", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_READS.CH0.ALL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH0.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x104", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_READS.CH0.NORMAL", + "Counter": "0,1,2,3", 
"EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH0.NORMAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x101", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_READS.CH0_ALL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH0_ALL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -612,24 +743,30 @@ }, { "BriefDescription": "UNC_M2HBM_IMC_READS.CH0_FROM_TGR", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH0_FROM_TGR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x140", "Unit": "M2HBM" }, { "BriefDescription": "Critical Priority - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH0_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x102", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_READS.CH0_NORMAL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH0_NORMAL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -638,24 +775,30 @@ }, { "BriefDescription": "UNC_M2HBM_IMC_READS.CH1.ALL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH1.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x204", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_READS.CH1.NORMAL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH1.NORMAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x201", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_READS.CH1_ALL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH1_ALL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -664,24 +807,30 @@ }, { "BriefDescription": "From TGR - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH1_FROM_TGR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x240", "Unit": 
"M2HBM" }, { "BriefDescription": "Critical Priority - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH1_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x202", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_READS.CH1_NORMAL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.CH1_NORMAL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -690,64 +839,80 @@ }, { "BriefDescription": "From TGR - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.FROM_TGR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x340", "Unit": "M2HBM" }, { "BriefDescription": "Critical Priority - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x302", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_READS.NORMAL", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2HBM_IMC_READS.NORMAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x301", "Unit": "M2HBM" }, { "BriefDescription": "All Writes - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1810", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0.ALL", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x810", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0.FULL", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0.FULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x801", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0.PARTIAL", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0.PARTIAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x802", 
"Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0_ALL", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0_ALL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -756,15 +921,19 @@ }, { "BriefDescription": "From TGR - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0_FROM_TGR", + "Experimental": "1", "PerPkg": "1", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0_FULL", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0_FULL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -773,16 +942,20 @@ }, { "BriefDescription": "ISOCH Full Line - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0_FULL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x804", "Unit": "M2HBM" }, { "BriefDescription": "Non-Inclusive - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0_NI", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -790,8 +963,10 @@ }, { "BriefDescription": "Non-Inclusive Miss - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0_NI_MISS", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -799,8 +974,10 @@ }, { "BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0_PARTIAL", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0_PARTIAL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -809,40 +986,50 @@ }, { "BriefDescription": "ISOCH Partial - Ch0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH0_PARTIAL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x808", "Unit": "M2HBM" }, { "BriefDescription": "All Writes - Ch1", + "Counter": "0,1,2,3", 
"EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1010", "Unit": "M2HBM" }, { "BriefDescription": "Full Line Non-ISOCH - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1.FULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1001", "Unit": "M2HBM" }, { "BriefDescription": "Partial Non-ISOCH - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1.PARTIAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1002", "Unit": "M2HBM" }, { "BriefDescription": "All Writes - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1_ALL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -851,15 +1038,19 @@ }, { "BriefDescription": "From TGR - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1_FROM_TGR", + "Experimental": "1", "PerPkg": "1", "Unit": "M2HBM" }, { "BriefDescription": "Full Line Non-ISOCH - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1_FULL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -868,16 +1059,20 @@ }, { "BriefDescription": "ISOCH Full Line - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1_FULL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1004", "Unit": "M2HBM" }, { "BriefDescription": "Non-Inclusive - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1_NI", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -885,8 +1080,10 @@ }, { "BriefDescription": "Non-Inclusive Miss - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1_NI_MISS", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -894,8 +1091,10 @@ }, { 
"BriefDescription": "Partial Non-ISOCH - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1_PARTIAL", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -904,39 +1103,49 @@ }, { "BriefDescription": "ISOCH Partial - Ch1", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.CH1_PARTIAL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1008", "Unit": "M2HBM" }, { "BriefDescription": "From TGR - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.FROM_TGR", + "Experimental": "1", "PerPkg": "1", "Unit": "M2HBM" }, { "BriefDescription": "Full Non-ISOCH - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.FULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1801", "Unit": "M2HBM" }, { "BriefDescription": "ISOCH Full Line - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.FULL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1804", "Unit": "M2HBM" }, { "BriefDescription": "Non-Inclusive - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.NI", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -944,8 +1153,10 @@ }, { "BriefDescription": "Non-Inclusive Miss - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.NI_MISS", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -953,159 +1164,199 @@ }, { "BriefDescription": "Partial Non-ISOCH - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2HBM_IMC_WRITES.PARTIAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1802", "Unit": "M2HBM" }, { "BriefDescription": "ISOCH Partial - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": 
"UNC_M2HBM_IMC_WRITES.PARTIAL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1808", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_PREFCAM_CIS_DROPS", + "Counter": "0,1,2,3", "EventCode": "0x5c", "EventName": "UNC_M2HBM_PREFCAM_CIS_DROPS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2HBM" }, { "BriefDescription": "Data Prefetches Dropped", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.CH0_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Data Prefetches Dropped", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.CH0_XPT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Data Prefetches Dropped", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.CH1_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2HBM" }, { "BriefDescription": "Data Prefetches Dropped", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.CH1_XPT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2HBM" }, { "BriefDescription": "Data Prefetches Dropped : UPI - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.UPI_ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "M2HBM" }, { "BriefDescription": "Data Prefetches Dropped", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.XPT_ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "M2HBM" }, { "BriefDescription": ": UPI - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "UNC_M2HBM_PREFCAM_DEMAND_MERGE.UPI_ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "M2HBM" }, { "BriefDescription": ": XPT - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName":
"UNC_M2HBM_PREFCAM_DEMAND_MERGE.XPT_ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "M2HBM" }, { "BriefDescription": "Demands Not Merged with CAMed Prefetches", + "Counter": "0,1,2,3", "EventCode": "0x5e", "EventName": "UNC_M2HBM_PREFCAM_DEMAND_NO_MERGE.RD_MERGED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2HBM" }, { "BriefDescription": "Demands Not Merged with CAMed Prefetches", + "Counter": "0,1,2,3", "EventCode": "0x5e", "EventName": "UNC_M2HBM_PREFCAM_DEMAND_NO_MERGE.WR_MERGED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2HBM" }, { "BriefDescription": "Demands Not Merged with CAMed Prefetches", + "Counter": "0,1,2,3", "EventCode": "0x5e", "EventName": "UNC_M2HBM_PREFCAM_DEMAND_NO_MERGE.WR_SQUASHED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2HBM" }, { "BriefDescription": "Prefetch CAM Inserts : UPI - Ch 0", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2HBM_PREFCAM_INSERTS.CH0_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Prefetch CAM Inserts : XPT - Ch 0", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2HBM_PREFCAM_INSERTS.CH0_XPT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Prefetch CAM Inserts : UPI - Ch 1", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2HBM_PREFCAM_INSERTS.CH1_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2HBM" }, { "BriefDescription": "Prefetch CAM Inserts : XPT - Ch 1", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2HBM_PREFCAM_INSERTS.CH1_XPT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2HBM" }, { "BriefDescription": "Prefetch CAM Inserts : UPI - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2HBM_PREFCAM_INSERTS.UPI_ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "M2HBM" }, 
{ "BriefDescription": "Prefetch CAM Inserts : XPT - All Channels", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2HBM_PREFCAM_INSERTS.XPT_ALLCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Prefetch CAM Inserts : XPT -All Channels", "UMask": "0x5", @@ -1113,80 +1364,100 @@ }, { "BriefDescription": "Prefetch CAM Occupancy : All Channels", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_M2HBM_PREFCAM_OCCUPANCY.ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2HBM" }, { "BriefDescription": "Prefetch CAM Occupancy : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_M2HBM_PREFCAM_OCCUPANCY.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Prefetch CAM Occupancy : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_M2HBM_PREFCAM_OCCUPANCY.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "All Channels", + "Counter": "0,1,2,3", "EventCode": "0x5f", "EventName": "UNC_M2HBM_PREFCAM_RESP_MISS.ALLCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2HBM" }, { "BriefDescription": ": Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x5f", "EventName": "UNC_M2HBM_PREFCAM_RESP_MISS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": ": Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x5f", "EventName": "UNC_M2HBM_PREFCAM_RESP_MISS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.1LM_POSTED", + "Counter": "0,1,2,3", "EventCode": "0x62", "EventName": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.1LM_POSTED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.CIS", + "Counter": "0,1,2,3", "EventCode": "0x62", "EventName": 
"UNC_M2HBM_PREFCAM_RxC_DEALLOCS.CIS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.SQUASHED", + "Counter": "0,1,2,3", "EventCode": "0x62", "EventName": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.SQUASHED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "UNC_M2HBM_PREFCAM_RxC_OCCUPANCY", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_M2HBM_PREFCAM_RxC_OCCUPANCY", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1194,8 +1465,10 @@ }, { "BriefDescription": "AD Ingress (from CMS) : AD Ingress (from CMS) Allocations", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_M2HBM_RxC_AD.INSERTS", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1204,23 +1477,29 @@ }, { "BriefDescription": "AD Ingress (from CMS) : AD Ingress (from CMS) Allocations", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_M2HBM_RxC_AD_INSERTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "AD Ingress (from CMS) Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M2HBM_RxC_AD_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "M2HBM" }, { "BriefDescription": "BL Ingress (from CMS) : BL Ingress (from CMS) Allocations", + "Counter": "0,1,2,3", "EventCode": "0x04", "EventName": "UNC_M2HBM_RxC_BL.INSERTS", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1230,8 +1509,10 @@ }, { "BriefDescription": "BL Ingress (from CMS) : BL Ingress (from CMS) Allocations", + "Counter": "0,1,2,3", "EventCode": "0x04", "EventName": "UNC_M2HBM_RxC_BL_INSERTS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts anytime a BL packet is added to Ingress", "UMask": "0x1", @@ -1239,61 +1520,77 @@ }, { "BriefDescription": "BL Ingress (from CMS)
Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M2HBM_RxC_BL_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "M2HBM" }, { "BriefDescription": "Number AD Ingress Credits", + "Counter": "0,1,2,3", "EventCode": "0x2e", "EventName": "UNC_M2HBM_TGR_AD_CREDITS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2HBM" }, { "BriefDescription": "Number BL Ingress Credits", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_M2HBM_TGR_BL_CREDITS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2HBM" }, { "BriefDescription": "Tracker Inserts : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_M2HBM_TRACKER_INSERTS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x104", "Unit": "M2HBM" }, { "BriefDescription": "Tracker Inserts : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_M2HBM_TRACKER_INSERTS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x204", "Unit": "M2HBM" }, { "BriefDescription": "Tracker Occupancy : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_M2HBM_TRACKER_OCCUPANCY.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Tracker Occupancy : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_M2HBM_TRACKER_OCCUPANCY.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "AD Egress (to CMS) : AD Egress (to CMS) Allocations", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_M2HBM_TxC_AD.INSERTS", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1303,8 +1600,10 @@ }, { "BriefDescription": "AD Egress (to CMS) : AD Egress (to CMS) Allocations", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_M2HBM_TxC_AD_INSERTS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts anytime a AD packet is added to Egress", "UMask": "0x1", @@
-1312,15 +1611,19 @@ }, { "BriefDescription": "AD Egress (to CMS) Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x07", "EventName": "UNC_M2HBM_TxC_AD_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "M2HBM" }, { "BriefDescription": "BL Egress (to CMS) : Inserts - CMS0 - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x0E", "EventName": "UNC_M2HBM_TxC_BL.INSERTS_CMS0", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1330,8 +1633,10 @@ }, { "BriefDescription": "BL Egress (to CMS) : Inserts - CMS1 - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x0E", "EventName": "UNC_M2HBM_TxC_BL.INSERTS_CMS1", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -1341,160 +1646,200 @@ }, { "BriefDescription": "BL Egress (to CMS) Occupancy : All", + "Counter": "0,1,2,3", "EventCode": "0x0f", "EventName": "UNC_M2HBM_TxC_BL_OCCUPANCY.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2HBM" }, { "BriefDescription": "BL Egress (to CMS) Occupancy : Common Mesh Stop - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x0f", "EventName": "UNC_M2HBM_TxC_BL_OCCUPANCY.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "BL Egress (to CMS) Occupancy : Common Mesh Stop - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x0f", "EventName": "UNC_M2HBM_TxC_BL_OCCUPANCY.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "WPQ Flush : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2HBM_WPQ_FLUSH.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "WPQ Flush : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2HBM_WPQ_FLUSH.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "M2M and iMC WPQ Cycles w/Credits - Regular : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_M2HBM_WPQ_NO_REG_CRD.CHN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "M2M and iMC WPQ Cycles w/Credits - Regular : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_M2HBM_WPQ_NO_REG_CRD.CHN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "M2M and iMC WPQ Cycles w/Credits - Special : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2HBM_WPQ_NO_SPEC_CRD.CHN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "M2M and iMC WPQ Cycles w/Credits - Special : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2HBM_WPQ_NO_SPEC_CRD.CHN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Write Tracker Inserts : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2HBM_WR_TRACKER_INSERTS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Write Tracker Inserts : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2HBM_WR_TRACKER_INSERTS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Write Tracker Non-Posted Inserts : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x4d", "EventName": "UNC_M2HBM_WR_TRACKER_NONPOSTED_INSERTS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Write Tracker Non-Posted Inserts : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x4d", "EventName": "UNC_M2HBM_WR_TRACKER_NONPOSTED_INSERTS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName":
"UNC_M2HBM_WR_TRACKER_NONPOSTED_OCCUPANCY.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x4c", "EventName": "UNC_M2HBM_WR_TRACKER_NONPOSTED_OCCUPANCY.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Write Tracker Posted Inserts : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "UNC_M2HBM_WR_TRACKER_POSTED_INSERTS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Write Tracker Posted Inserts : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "UNC_M2HBM_WR_TRACKER_POSTED_INSERTS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Write Tracker Posted Occupancy : Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M2HBM_WR_TRACKER_POSTED_OCCUPANCY.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2HBM" }, { "BriefDescription": "Write Tracker Posted Occupancy : Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M2HBM_WR_TRACKER_POSTED_OCCUPANCY.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2HBM" }, { "BriefDescription": "Activate due to read, write, underfill, or bypass", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_MCHBM_ACT_COUNT.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. 
One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.", "UMask": "0xff", @@ -1502,8 +1847,10 @@ }, { "BriefDescription": "Activate due to read", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_MCHBM_ACT_COUNT.RD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.", "UMask": "0x11", @@ -1511,8 +1858,10 @@ }, { "BriefDescription": "HBM Activate Count : Activate due to Read in PCH0", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_MCHBM_ACT_COUNT.RD_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.", "UMask": "0x1", @@ -1520,8 +1869,10 @@ }, { "BriefDescription": "HBM Activate Count : Activate due to Read in PCH1", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_MCHBM_ACT_COUNT.RD_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. 
One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.", "UMask": "0x10", @@ -1529,8 +1880,10 @@ }, { "BriefDescription": "HBM Activate Count : Underfill Read transaction on Page Empty or Page Miss", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_MCHBM_ACT_COUNT.UFILL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.", "UMask": "0x44", @@ -1538,8 +1891,10 @@ }, { "BriefDescription": "HBM Activate Count", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_MCHBM_ACT_COUNT.UFILL_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.", "UMask": "0x4", @@ -1547,8 +1902,10 @@ }, { "BriefDescription": "HBM Activate Count", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_MCHBM_ACT_COUNT.UFILL_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. 
One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.", "UMask": "0x40", @@ -1556,8 +1913,10 @@ }, { "BriefDescription": "Activate due to write", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_MCHBM_ACT_COUNT.WR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.", "UMask": "0x22", @@ -1565,8 +1924,10 @@ }, { "BriefDescription": "HBM Activate Count : Activate due to Write in PCH0", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_MCHBM_ACT_COUNT.WR_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.", "UMask": "0x2", @@ -1574,8 +1935,10 @@ }, { "BriefDescription": "HBM Activate Count : Activate due to Write in PCH1", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_MCHBM_ACT_COUNT.WR_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. 
One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.", "UMask": "0x20", @@ -1583,16 +1946,20 @@ }, { "BriefDescription": "All CAS commands issued", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_MCHBM_CAS_COUNT.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0xff", "Unit": "MCHBM" }, { "BriefDescription": "Pseudo Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_MCHBM_CAS_COUNT.PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "HBM RD_CAS and WR_CAS Commands", "UMask": "0x40", @@ -1600,8 +1967,10 @@ }, { "BriefDescription": "Pseudo Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_MCHBM_CAS_COUNT.PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "HBM RD_CAS and WR_CAS Commands", "UMask": "0x80", @@ -1609,134 +1978,167 @@ }, { "BriefDescription": "Read CAS commands issued (regular and underfill)", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_MCHBM_CAS_COUNT.RD", + "Experimental": "1", "PerPkg": "1", "UMask": "0xcf", "Unit": "MCHBM" }, { "BriefDescription": "Regular read CAS commands with precharge", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_MCHBM_CAS_COUNT.RD_PRE_REG", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc2", "Unit": "MCHBM" }, { "BriefDescription": "Underfill read CAS commands with precharge", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_MCHBM_CAS_COUNT.RD_PRE_UNDERFILL", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc8", "Unit": "MCHBM" }, { "BriefDescription": "Regular read CAS commands issued (does not include underfills)", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_MCHBM_CAS_COUNT.RD_REG", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc1", "Unit": "MCHBM" }, { "BriefDescription": "Underfill read CAS commands issued", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName":
"UNC_MCHBM_CAS_COUNT.RD_UNDERFILL", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc4", "Unit": "MCHBM" }, { "BriefDescription": "Write CAS commands issued", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_MCHBM_CAS_COUNT.WR", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf0", "Unit": "MCHBM" }, { "BriefDescription": "HBM RD_CAS and WR_CAS Commands. : HBM WR_CAS commands w/o auto-pre", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_MCHBM_CAS_COUNT.WR_NONPRE", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd0", "Unit": "MCHBM" }, { "BriefDescription": "Write CAS commands with precharge", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_MCHBM_CAS_COUNT.WR_PRE", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe0", "Unit": "MCHBM" }, { "BriefDescription": "Pseudo Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.PCH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "MCHBM" }, { "BriefDescription": "Pseudo Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.PCH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "MCHBM" }, { "BriefDescription": "Read CAS Command in Interleaved Mode (32B)", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.RD_32B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc8", "Unit": "MCHBM" }, { "BriefDescription": "Read CAS Command in Regular Mode (64B) in Pseudochannel 0", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.RD_64B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc1", "Unit": "MCHBM" }, { "BriefDescription": "Underfill Read CAS Command in Interleaved Mode (32B)", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.RD_UFILL_32B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd0", "Unit": "MCHBM" }, { "BriefDescription":
"Underfill Read CAS Command in Regular Mode (64B) in Pseudochannel 1", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.RD_UFILL_64B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc2", "Unit": "MCHBM" }, { "BriefDescription": "Write CAS Command in Interleaved Mode (32B)", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.WR_32B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe0", "Unit": "MCHBM" }, { "BriefDescription": "Write CAS Command in Regular Mode (64B) in Pseudochannel 0", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.WR_64B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc4", "Unit": "MCHBM" }, { "BriefDescription": "IMC Clockticks at DCLK frequency", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_MCHBM_CLOCKTICKS", "PerPkg": "1", @@ -1745,8 +2147,10 @@ }, { "BriefDescription": "HBM Precharge All Commands", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_MCHBM_HBM_PREALL.PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times that the precharge all command was sent.", "UMask": "0x1", @@ -1754,8 +2158,10 @@ }, { "BriefDescription": "HBM Precharge All Commands", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_MCHBM_HBM_PREALL.PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times that the precharge all command was sent.", "UMask": "0x2", @@ -1763,8 +2169,10 @@ }, { "BriefDescription": "All Precharge Commands", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_MCHBM_HBM_PRE_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Precharge All Commands: Counts the number of times that the precharge all command was sent.", "UMask": "0x3", @@ -1772,15 +2180,19 @@ }, { "BriefDescription": "IMC Clockticks at HCLK frequency", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName":
"UNC_MCHBM_HCLOCKTICKS", + "Experimental": "1", "PerPkg": "1", "Unit": "MCHBM" }, { "BriefDescription": "All precharge events", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.", "UMask": "0xff", @@ -1788,8 +2200,10 @@ }, { "BriefDescription": "Precharge from MC page table", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.PGT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.", "UMask": "0x88", @@ -1797,8 +2211,10 @@ }, { "BriefDescription": "HBM Precharge commands. : Precharges from Page Table", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.PGT_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel. : Equivalent to PAGE_EMPTY", "UMask": "0x8", @@ -1806,8 +2222,10 @@ }, { "BriefDescription": "HBM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.PGT_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.", "UMask": "0x80", @@ -1815,8 +2233,10 @@ }, { "BriefDescription": "Precharge due to read on page miss", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.RD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.", "UMask": "0x11", @@ -1824,8 +2244,10 @@ }, { "BriefDescription": "HBM Precharge commands. : Precharge due to read", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.RD_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.
: Precharge from read bank scheduler", "UMask": "0x1", @@ -1833,8 +2255,10 @@ }, { "BriefDescription": "HBM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.RD_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.", "UMask": "0x10", @@ -1842,8 +2266,10 @@ }, { "BriefDescription": "HBM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.UFILL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.", "UMask": "0x44", @@ -1851,8 +2277,10 @@ }, { "BriefDescription": "HBM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.UFILL_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.", "UMask": "0x4", @@ -1860,8 +2288,10 @@ }, { "BriefDescription": "HBM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.UFILL_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.", "UMask": "0x40", @@ -1869,8 +2299,10 @@ }, { "BriefDescription": "Precharge due to write on page miss", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.WR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.", "UMask": "0x22", @@ -1878,8 +2310,10 @@ }, { "BriefDescription": "HBM Precharge commands. : Precharge due to write", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.WR_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.
: Precharge from write bank scheduler", "UMask": "0x2", @@ -1887,8 +2321,10 @@ }, { "BriefDescription": "HBM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_MCHBM_PRE_COUNT.WR_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.", "UMask": "0x20", @@ -1896,46 +2332,58 @@ }, { "BriefDescription": "Counts the number of cycles where the read buffer has greater than UMASK elements. NOTE: Umask must be set to the maximum number of elements in the queue (24 entries for SPR).", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_MCHBM_RDB_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "MCHBM" }, { "BriefDescription": "Counts the number of inserts into the read buffer.", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_MCHBM_RDB_INSERTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "MCHBM" }, { "BriefDescription": "Read Data Buffer Inserts", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_MCHBM_RDB_INSERTS.PCH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "MCHBM" }, { "BriefDescription": "Read Data Buffer Inserts", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_MCHBM_RDB_INSERTS.PCH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "MCHBM" }, { "BriefDescription": "Counts the number of elements in the read buffer per cycle.", + "Counter": "0,1,2,3", "EventCode": "0x1a", "EventName": "UNC_MCHBM_RDB_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "MCHBM" }, { "BriefDescription": "Read Pending Queue Allocations", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_MCHBM_RPQ_INSERTS.PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Read Pending Queue Allocations: Counts the number of allocations into the Read Pending Queue.
This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.", "UMask": "0x1", @@ -1943,8 +2391,10 @@ }, { "BriefDescription": "Read Pending Queue Allocations", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_MCHBM_RPQ_INSERTS.PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Read Pending Queue Allocations: Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.", "UMask": "0x2", @@ -1952,24 +2402,30 @@ }, { "BriefDescription": "Read Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_MCHBM_RPQ_OCCUPANCY_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Read Pending Queue Occupancy: Accumulates the occupancies of the Read Pending Queue each cycle. This can then be used to calculate both the average occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC.
They deallocate after the CAS command has been issued to memory.", "Unit": "MCHBM" }, { "BriefDescription": "Read Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x81", "EventName": "UNC_MCHBM_RPQ_OCCUPANCY_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Read Pending Queue Occupancy: Accumulates the occupancies of the Read Pending Queue each cycle. This can then be used to calculate both the average occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory.", "Unit": "MCHBM" }, { "BriefDescription": "Write Pending Queue Allocations", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_MCHBM_WPQ_INSERTS.PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Write Pending Queue Allocations: Counts the number of allocations into the Write Pending Queue. This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued.
Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC.", "UMask": "0x1", @@ -1977,8 +2433,10 @@ }, { "BriefDescription": "Write Pending Queue Allocations", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_MCHBM_WPQ_INSERTS.PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Write Pending Queue Allocations: Counts the number of allocations into the Write Pending Queue. This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC.", "UMask": "0x2", @@ -1986,24 +2444,30 @@ }, { "BriefDescription": "Write Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_MCHBM_WPQ_OCCUPANCY_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Write Pending Queue Occupancy: Accumulates the occupancies of the Write Pending Queue each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to memory.
Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies. So, we provide filtering based on if the request has posted or not. By using the not posted filter, we can track how long writes spent in the iMC before completions were sent to the HA. The posted filter, on the other hand, provides information about how much queueing is actually happening in the iMC for writes before they are actually issued to memory. High average occupancies will generally coincide with high write major mode counts.", "Unit": "MCHBM" }, { "BriefDescription": "Write Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "UNC_MCHBM_WPQ_OCCUPANCY_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Write Pending Queue Occupancy: Accumulates the occupancies of the Write Pending Queue each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to memory. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies. So, we provide filtering based on if the request has posted or not.
By using the not posted filter, we can track how long writes spent in the iMC before completions were sent to the HA. The posted filter, on the other hand, provides information about how much queueing is actually happening in the iMC for writes before they are actually issued to memory. High average occupancies will generally coincide with high write major mode counts.", "Unit": "MCHBM" }, { "BriefDescription": "Write Pending Queue CAM Match", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_MCHBM_WPQ_READ_HIT", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -2012,8 +2476,10 @@ }, { "BriefDescription": "Write Pending Queue CAM Match", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_MCHBM_WPQ_READ_HIT.PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Write Pending Queue CAM Match: Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.", "UMask": "0x1", @@ -2021,8 +2487,10 @@ }, { "BriefDescription": "Write Pending Queue CAM Match", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_MCHBM_WPQ_READ_HIT.PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Write Pending Queue CAM Match: Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address.
When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.", "UMask": "0x2", @@ -2030,8 +2498,10 @@ }, { "BriefDescription": "Write Pending Queue CAM Match", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_MCHBM_WPQ_WRITE_HIT", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -2040,8 +2510,10 @@ }, { "BriefDescription": "Write Pending Queue CAM Match", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_MCHBM_WPQ_WRITE_HIT.PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Write Pending Queue CAM Match: Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.", "UMask": "0x1", @@ -2049,8 +2521,10 @@ }, { "BriefDescription": "Write Pending Queue CAM Match", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_MCHBM_WPQ_WRITE_HIT.PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Write Pending Queue CAM Match: Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory.
Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.", "UMask": "0x2", @@ -2058,6 +2532,7 @@ }, { "BriefDescription": "Activate due to read, write, underfill, or bypass", + "Counter": "0,1,2,3", "EventCode": "0x02", "EventName": "UNC_M_ACT_COUNT.ALL", "PerPkg": "1", @@ -2067,6 +2542,7 @@ }, { "BriefDescription": "All DRAM CAS commands issued", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.ALL", "PerPkg": "1", @@ -2076,8 +2552,10 @@ }, { "BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : Pseudo Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : Pseudo Channel 0 : DRAM RD_CAS and WR_CAS Commands", "UMask": "0x40", @@ -2085,8 +2563,10 @@ }, { "BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : Pseudo Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : Pseudo Channel 1 : DRAM RD_CAS and WR_CAS Commands", "UMask": "0x80", @@ -2094,6 +2574,7 @@ }, { "BriefDescription": "All DRAM read CAS commands issued (including underfills)", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.RD", "PerPkg": "1", @@ -2103,8 +2584,10 @@ }, { "BriefDescription": "DRAM RD_CAS and WR_CAS Commands.", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.RD_PRE_REG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM RD_CAS and WR_CAS Commands.
: DRAM RD_CAS and WR_CAS Commands", "UMask": "0xc2", @@ -2112,8 +2595,10 @@ }, { "BriefDescription": "DRAM RD_CAS and WR_CAS Commands.", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.RD_PRE_UNDERFILL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS and WR_CAS Commands", "UMask": "0xc8", @@ -2121,8 +2606,10 @@ }, { "BriefDescription": "All DRAM read CAS commands issued (does not include underfills)", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.RD_REG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS commands w/out auto-pre : DRAM RD_CAS and WR_CAS Commands : Counts the total number or DRAM Read CAS commands issued on this channel. This includes both regular RD CAS commands as well as those with implicit Precharge. We do not filter based on major mode, as RD_CAS is not issued during WMM (with the exception of underfills).", "UMask": "0xc1", @@ -2130,8 +2617,10 @@ }, { "BriefDescription": "DRAM underfill read CAS commands issued", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.RD_UNDERFILL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : Underfill Read Issued : DRAM RD_CAS and WR_CAS Commands", "UMask": "0xc4", @@ -2139,6 +2628,7 @@ }, { "BriefDescription": "All DRAM write CAS commands issued", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.WR", "PerPkg": "1", @@ -2148,8 +2638,10 @@ }, { "BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/o auto-pre", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.WR_NONPRE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM RD_CAS and WR_CAS Commands.
: DRAM WR_CAS commands w/o auto-pre : DRAM RD_CAS and WR_CAS Commands", "UMask": "0xd0", @@ -2157,8 +2649,10 @@ }, { "BriefDescription": "DRAM RD_CAS and WR_CAS Commands.", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_M_CAS_COUNT.WR_PRE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS and WR_CAS Commands", "UMask": "0xe0", @@ -2166,70 +2660,87 @@ }, { "BriefDescription": "Pseudo Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_M_CAS_ISSUED_REQ_LEN.PCH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "iMC" }, { "BriefDescription": "Pseudo Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_M_CAS_ISSUED_REQ_LEN.PCH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "iMC" }, { "BriefDescription": "Read CAS Command in Interleaved Mode (32B)", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_M_CAS_ISSUED_REQ_LEN.RD_32B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc8", "Unit": "iMC" }, { "BriefDescription": "Read CAS Command in Regular Mode (64B) in Pseudochannel 0", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_M_CAS_ISSUED_REQ_LEN.RD_64B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc1", "Unit": "iMC" }, { "BriefDescription": "Underfill Read CAS Command in Interleaved Mode (32B)", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_M_CAS_ISSUED_REQ_LEN.RD_UFILL_32B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd0", "Unit": "iMC" }, { "BriefDescription": "Underfill Read CAS Command in Regular Mode (64B) in Pseudochannel 1", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_M_CAS_ISSUED_REQ_LEN.RD_UFILL_64B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc2", "Unit": "iMC" }, { "BriefDescription": "Write CAS Command in Interleaved Mode (32B)", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName":
"UNC_M_CAS_ISSUED_REQ_LEN.WR_32B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe0", "Unit": "iMC" }, { "BriefDescription": "Write CAS Command in Regular Mode (64B) in Pseudochannel 0", + "Counter": "0,1,2,3", "EventCode": "0x06", "EventName": "UNC_M_CAS_ISSUED_REQ_LEN.WR_64B", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc4", "Unit": "iMC" }, { "BriefDescription": "IMC Clockticks at DCLK frequency", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_M_CLOCKTICKS", "PerPkg": "1", @@ -2239,8 +2750,10 @@ }, { "BriefDescription": "DRAM Precharge All Commands", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M_DRAM_PRE_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM Precharge All Commands : Counts the number of times that the precharge all command was sent.", "UMask": "0x3", @@ -2248,6 +2761,7 @@ }, { "BriefDescription": "IMC Clockticks at HCLK frequency", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_M_HCLOCKTICKS", "PerPkg": "1", @@ -2256,30 +2770,37 @@ }, { "BriefDescription": "UNC_M_PCLS.RD", + "Counter": "0,1,2,3", "EventCode": "0xa0", "EventName": "UNC_M_PCLS.RD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "UNC_M_PCLS.TOTAL", + "Counter": "0,1,2,3", "EventCode": "0xa0", "EventName": "UNC_M_PCLS.TOTAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "UNC_M_PCLS.WR", + "Counter": "0,1,2,3", "EventCode": "0xa0", "EventName": "UNC_M_PCLS.WR", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "PMM Read Pending Queue inserts", + "Counter": "0,1,2,3", "EventCode": "0xe3", "EventName": "UNC_M_PMM_RPQ_INSERTS", "PerPkg": "1", @@ -2288,6 +2809,7 @@ }, { "BriefDescription": "PMM Read Pending Queue occupancy", + "Counter": "0,1,2,3", "EventCode": "0xe0", "EventName": "UNC_M_PMM_RPQ_OCCUPANCY.ALL_SCH0", "PerPkg": "1", @@ -2297,6 +2819,7 @@ }, {
"BriefDescription": "PMM Read Pending Queue occupancy", + "Counter": "0,1,2,3", "EventCode": "0xe0", "EventName": "UNC_M_PMM_RPQ_OCCUPANCY.ALL_SCH1", "PerPkg": "1", @@ -2306,8 +2829,10 @@ }, { "BriefDescription": "PMM Read Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xE0", "EventName": "UNC_M_PMM_RPQ_OCCUPANCY.GNT_WAIT_SCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PMM Read Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Read Pending Queue.", "UMask": "0x10", @@ -2315,8 +2840,10 @@ }, { "BriefDescription": "PMM Read Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xE0", "EventName": "UNC_M_PMM_RPQ_OCCUPANCY.GNT_WAIT_SCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PMM Read Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Read Pending Queue.", "UMask": "0x20", @@ -2324,8 +2851,10 @@ }, { "BriefDescription": "PMM Read Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xe0", "EventName": "UNC_M_PMM_RPQ_OCCUPANCY.NO_GNT_SCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the per cycle occupancy of the PMM Read Pending Queue.", "UMask": "0x4", @@ -2333,8 +2862,10 @@ }, { "BriefDescription": "PMM Read Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xe0", "EventName": "UNC_M_PMM_RPQ_OCCUPANCY.NO_GNT_SCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the per cycle occupancy of the PMM Read Pending Queue.", "UMask": "0x8", @@ -2342,13 +2873,16 @@ }, { "BriefDescription": "PMM (for IXP) Write Queue Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0xe5", "EventName": "UNC_M_PMM_WPQ_CYCLES_NE", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "PMM Write Pending Queue inserts", + "Counter": "0,1,2,3", "EventCode": "0xe7", "EventName": "UNC_M_PMM_WPQ_INSERTS", "PerPkg": "1", @@ -2357,6 +2891,7 @@ }, { "BriefDescription": "PMM
Write Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xe4", "EventName": "UNC_M_PMM_WPQ_OCCUPANCY.ALL", "PerPkg": "1", @@ -2366,6 +2901,7 @@ }, { "BriefDescription": "PMM Write Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xE4", "EventName": "UNC_M_PMM_WPQ_OCCUPANCY.ALL_SCH0", "PerPkg": "1", @@ -2375,6 +2911,7 @@ }, { "BriefDescription": "PMM Write Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xE4", "EventName": "UNC_M_PMM_WPQ_OCCUPANCY.ALL_SCH1", "PerPkg": "1", @@ -2384,8 +2921,10 @@ }, { "BriefDescription": "PMM (for IXP) Write Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xe4", "EventName": "UNC_M_PMM_WPQ_OCCUPANCY.CAS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PMM (for IXP) Write Pending Queue Occupancy : Accumulates the per cycle occupancy of the Write Pending Queue to the IXP DIMM.", "UMask": "0xc", @@ -2393,8 +2932,10 @@ }, { "BriefDescription": "PMM (for IXP) Write Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xe4", "EventName": "UNC_M_PMM_WPQ_OCCUPANCY.PWR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PMM (for IXP) Write Pending Queue Occupancy : Accumulates the per cycle occupancy of the Write Pending Queue to the IXP DIMM.", "UMask": "0x30", @@ -2402,16 +2943,20 @@ }, { "BriefDescription": "Channel PPD Cycles", + "Counter": "0,1,2,3", "EventCode": "0x85", "EventName": "UNC_M_POWER_CHANNEL_PPD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Channel PPD Cycles : Number of cycles when all the ranks in the channel are in PPD mode. If IBT=off is enabled, then this can be used to count those cycles.
If it is not enabled, then this c= an count the number of cycles when that could have been taken advantage of.= ", "Unit": "iMC" }, { "BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M_POWER_CKE_CYCLES.LOW_0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of = cycles spent in CKE ON mode. The filter allows you to select a rank to mon= itor. If multiple ranks are in CKE ON mode at one time, the counter will O= NLY increment by one rather than doing accumulation. Multiple counters wil= l need to be used to track multiple ranks simultaneously. There is no dist= inction between the different CKE modes (APD, PPDS, PPDF). This can be det= ermined based on the system programming. These events should commonly be u= sed with Invert to get the number of cycles in power saving mode. Edge Det= ect is also useful here. Make sure that you do NOT use Invert with Edge De= tect (this just confuses the system and is not necessary).", "UMask": "0x1", @@ -2419,8 +2964,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M_POWER_CKE_CYCLES.LOW_1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of = cycles spent in CKE ON mode. The filter allows you to select a rank to mon= itor. If multiple ranks are in CKE ON mode at one time, the counter will O= NLY increment by one rather than doing accumulation. Multiple counters wil= l need to be used to track multiple ranks simultaneously. There is no dist= inction between the different CKE modes (APD, PPDS, PPDF). This can be det= ermined based on the system programming. These events should commonly be u= sed with Invert to get the number of cycles in power saving mode. Edge Det= ect is also useful here. 
Make sure that you do NOT use Invert with Edge De= tect (this just confuses the system and is not necessary).", "UMask": "0x2", @@ -2428,8 +2975,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M_POWER_CKE_CYCLES.LOW_2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of = cycles spent in CKE ON mode. The filter allows you to select a rank to mon= itor. If multiple ranks are in CKE ON mode at one time, the counter will O= NLY increment by one rather than doing accumulation. Multiple counters wil= l need to be used to track multiple ranks simultaneously. There is no dist= inction between the different CKE modes (APD, PPDS, PPDF). This can be det= ermined based on the system programming. These events should commonly be u= sed with Invert to get the number of cycles in power saving mode. Edge Det= ect is also useful here. Make sure that you do NOT use Invert with Edge De= tect (this just confuses the system and is not necessary).", "UMask": "0x4", @@ -2437,8 +2986,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M_POWER_CKE_CYCLES.LOW_3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of = cycles spent in CKE ON mode. The filter allows you to select a rank to mon= itor. If multiple ranks are in CKE ON mode at one time, the counter will O= NLY increment by one rather than doing accumulation. Multiple counters wil= l need to be used to track multiple ranks simultaneously. There is no dist= inction between the different CKE modes (APD, PPDS, PPDF). This can be det= ermined based on the system programming. These events should commonly be u= sed with Invert to get the number of cycles in power saving mode. Edge Det= ect is also useful here. 
Make sure that you do NOT use Invert with Edge De= tect (this just confuses the system and is not necessary).", "UMask": "0x8", @@ -2446,8 +2997,10 @@ }, { "BriefDescription": "Throttle Cycles for Rank 0", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_M_POWER_CRIT_THROTTLE_CYCLES.SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Throttle Cycles for Rank 0 : Counts the numb= er of cycles while the iMC is being throttled by either thermal constraints= or by the PCU throttling. It is not possible to distinguish between the t= wo. This can be filtered by rank. If multiple ranks are selected and are = being throttled at the same time, the counter will only increment by 1. : T= hermal throttling is performed per DIMM. We support 3 DIMMs per channel. = This ID allows us to filter by ID.", "UMask": "0x1", @@ -2455,8 +3008,10 @@ }, { "BriefDescription": "Throttle Cycles for Rank 0", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_M_POWER_CRIT_THROTTLE_CYCLES.SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Throttle Cycles for Rank 0 : Counts the numb= er of cycles while the iMC is being throttled by either thermal constraints= or by the PCU throttling. It is not possible to distinguish between the t= wo. This can be filtered by rank. If multiple ranks are selected and are = being throttled at the same time, the counter will only increment by 1.", "UMask": "0x2", @@ -2464,14 +3019,17 @@ }, { "BriefDescription": "Clock-Enabled Self-Refresh", + "Counter": "0,1,2,3", "EventCode": "0x43", "EventName": "UNC_M_POWER_SELF_REFRESH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Clock-Enabled Self-Refresh : Counts the numb= er of cycles when the iMC is in self-refresh and the iMC still has a clock.= This happens in some package C-states. For example, the PCU may ask the = iMC to enter self-refresh even though some of the cores are still processin= g. One use of this is for Monroe technology. 
Self-refresh is required dur= ing package C3 and C6, but there is no clock in the iMC at this time, so it= is not possible to count these cases.", "Unit": "iMC" }, { "BriefDescription": "Precharge due to read, write, underfill, or P= GT.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.ALL", "PerPkg": "1", @@ -2481,6 +3039,7 @@ }, { "BriefDescription": "DRAM Precharge commands", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.PGT", "PerPkg": "1", @@ -2490,8 +3049,10 @@ }, { "BriefDescription": "DRAM Precharge commands. : Precharges from Pa= ge Table", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.PGT_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM Precharge commands. : Precharges from P= age Table : Counts the number of DRAM Precharge commands sent on this chann= el. : Equivalent to PAGE_EMPTY", "UMask": "0x8", @@ -2499,8 +3060,10 @@ }, { "BriefDescription": "DRAM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.PGT_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM Precharge commands. : Counts the number= of DRAM Precharge commands sent on this channel.", "UMask": "0x80", @@ -2508,6 +3071,7 @@ }, { "BriefDescription": "Precharge due to read on page miss", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.RD", "PerPkg": "1", @@ -2517,8 +3081,10 @@ }, { "BriefDescription": "DRAM Precharge commands. : Precharge due to r= ead", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.RD_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM Precharge commands. : Precharge due to = read : Counts the number of DRAM Precharge commands sent on this channel. 
:= Precharge from read bank scheduler", "UMask": "0x1", @@ -2526,8 +3092,10 @@ }, { "BriefDescription": "DRAM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.RD_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM Precharge commands. : Counts the number= of DRAM Precharge commands sent on this channel.", "UMask": "0x10", @@ -2535,8 +3103,10 @@ }, { "BriefDescription": "DRAM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.UFILL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM Precharge commands. : Counts the number= of DRAM Precharge commands sent on this channel.", "UMask": "0x44", @@ -2544,8 +3114,10 @@ }, { "BriefDescription": "DRAM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.UFILL_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM Precharge commands. : Counts the number= of DRAM Precharge commands sent on this channel.", "UMask": "0x4", @@ -2553,8 +3125,10 @@ }, { "BriefDescription": "DRAM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.UFILL_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM Precharge commands. : Counts the number= of DRAM Precharge commands sent on this channel.", "UMask": "0x40", @@ -2562,6 +3136,7 @@ }, { "BriefDescription": "Precharge due to write on page miss", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.WR", "PerPkg": "1", @@ -2571,8 +3146,10 @@ }, { "BriefDescription": "DRAM Precharge commands. : Precharge due to w= rite", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.WR_PCH0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM Precharge commands. : Precharge due to = write : Counts the number of DRAM Precharge commands sent on this channel. 
= : Precharge from write bank scheduler", "UMask": "0x2", @@ -2580,8 +3157,10 @@ }, { "BriefDescription": "DRAM Precharge commands.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "UNC_M_PRE_COUNT.WR_PCH1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "DRAM Precharge commands. : Counts the number= of DRAM Precharge commands sent on this channel.", "UMask": "0x20", @@ -2589,21 +3168,26 @@ }, { "BriefDescription": "Counts the number of cycles where the read bu= ffer has greater than UMASK elements. This includes reads to both DDR and = PMEM. NOTE: Umask must be set to the maximum number of elements in the que= ue (24 entries for SPR).", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M_RDB_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "Counts the number of inserts into the read bu= ffer destined for DDR. Does not count reads destined for PMEM.", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M_RDB_INSERTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "Read Data Buffer Inserts", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M_RDB_INSERTS.PCH0", "PerPkg": "1", @@ -2612,6 +3196,7 @@ }, { "BriefDescription": "Read Data Buffer Inserts", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M_RDB_INSERTS.PCH1", "PerPkg": "1", @@ -2620,45 +3205,56 @@ }, { "BriefDescription": "Counts the number of cycles where there's at = least one element in the read buffer. 
This includes reads to both DDR and = PMEM.", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M_RDB_NE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "Read Data Buffer Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M_RDB_NE.PCH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "Read Data Buffer Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M_RDB_NE.PCH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "Counts the number of cycles where there's at = least one element in the read buffer. This includes reads to both DDR and = PMEM.", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M_RDB_NOT_EMPTY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "Counts the number of elements in the read buf= fer, including reads to both DDR and PMEM.", + "Counter": "0,1,2,3", "EventCode": "0x1a", "EventName": "UNC_M_RDB_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "Read Pending Queue Allocations", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M_RPQ_INSERTS.PCH0", "PerPkg": "1", @@ -2668,6 +3264,7 @@ }, { "BriefDescription": "Read Pending Queue Allocations", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M_RPQ_INSERTS.PCH1", "PerPkg": "1", @@ -2677,6 +3274,7 @@ }, { "BriefDescription": "Read Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_M_RPQ_OCCUPANCY_PCH0", "PerPkg": "1", @@ -2685,6 +3283,7 @@ }, { "BriefDescription": "Read Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x81", "EventName": "UNC_M_RPQ_OCCUPANCY_PCH1", "PerPkg": "1", @@ -2693,294 +3292,368 @@ }, { "BriefDescription": "Scoreboard accepts", + "Counter": "0,1,2,3", "EventCode": "0xd2", "EventName": 
"UNC_M_SB_ACCESSES.ACCEPTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Accesses : Write Accepts", + "Counter": "0,1,2,3", "EventCode": "0xd2", "EventName": "UNC_M_SB_ACCESSES.FM_RD_CMPS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Accesses : Write Rejects", + "Counter": "0,1,2,3", "EventCode": "0xd2", "EventName": "UNC_M_SB_ACCESSES.FM_WR_CMPS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Accesses : FM read completions", + "Counter": "0,1,2,3", "EventCode": "0xd2", "EventName": "UNC_M_SB_ACCESSES.NM_RD_CMPS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Accesses : FM write completions", + "Counter": "0,1,2,3", "EventCode": "0xd2", "EventName": "UNC_M_SB_ACCESSES.NM_WR_CMPS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Accesses : Read Accepts", + "Counter": "0,1,2,3", "EventCode": "0xd2", "EventName": "UNC_M_SB_ACCESSES.RD_ACCEPTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Accesses : Read Rejects", + "Counter": "0,1,2,3", "EventCode": "0xd2", "EventName": "UNC_M_SB_ACCESSES.RD_REJECTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "Scoreboard rejects", + "Counter": "0,1,2,3", "EventCode": "0xd2", "EventName": "UNC_M_SB_ACCESSES.REJECTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Accesses : NM read completions", + "Counter": "0,1,2,3", "EventCode": "0xd2", "EventName": "UNC_M_SB_ACCESSES.WR_ACCEPTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Accesses : NM write completions", + "Counter": "0,1,2,3", "EventCode": "0xd2", 
"EventName": "UNC_M_SB_ACCESSES.WR_REJECTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": ": Alloc", + "Counter": "0,1,2,3", "EventCode": "0xd9", "EventName": "UNC_M_SB_CANARY.ALLOC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": ": Dealloc", + "Counter": "0,1,2,3", "EventCode": "0xd9", "EventName": "UNC_M_SB_CANARY.DEALLOC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Write Starved", + "Counter": "0,1,2,3", "EventCode": "0xd9", "EventName": "UNC_M_SB_CANARY.FM_RD_STARVED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "iMC" }, { "BriefDescription": ": Far Mem Write Starved", + "Counter": "0,1,2,3", "EventCode": "0xd9", "EventName": "UNC_M_SB_CANARY.FM_TGR_WR_STARVED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "iMC" }, { "BriefDescription": ": Far Mem Read Starved", + "Counter": "0,1,2,3", "EventCode": "0xd9", "EventName": "UNC_M_SB_CANARY.FM_WR_STARVED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "iMC" }, { "BriefDescription": ": Valid", + "Counter": "0,1,2,3", "EventCode": "0xd9", "EventName": "UNC_M_SB_CANARY.NM_RD_STARVED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Read Starved", + "Counter": "0,1,2,3", "EventCode": "0xd9", "EventName": "UNC_M_SB_CANARY.NM_WR_STARVED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": ": Reject", + "Counter": "0,1,2,3", "EventCode": "0xd9", "EventName": "UNC_M_SB_CANARY.VLD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Cycles Full", + "Counter": "0,1,2,3", "EventCode": "0xd1", "EventName": "UNC_M_SB_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Cycles Not-Empty", + "Counter": "0,1,2,3", 
"EventCode": "0xd0", "EventName": "UNC_M_SB_CYCLES_NE", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Inserts : Block region reads", + "Counter": "0,1,2,3", "EventCode": "0xd6", "EventName": "UNC_M_SB_INSERTS.BLOCK_RDS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Inserts : Block region writes", + "Counter": "0,1,2,3", "EventCode": "0xd6", "EventName": "UNC_M_SB_INSERTS.BLOCK_WRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Inserts : Persistent Mem reads", + "Counter": "0,1,2,3", "EventCode": "0xd6", "EventName": "UNC_M_SB_INSERTS.PMM_RDS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Inserts : Persistent Mem writes", + "Counter": "0,1,2,3", "EventCode": "0xd6", "EventName": "UNC_M_SB_INSERTS.PMM_WRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Inserts : Reads", + "Counter": "0,1,2,3", "EventCode": "0xd6", "EventName": "UNC_M_SB_INSERTS.RDS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Inserts : Writes", + "Counter": "0,1,2,3", "EventCode": "0xd6", "EventName": "UNC_M_SB_INSERTS.WRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Occupancy : Block region reads", + "Counter": "0,1,2,3", "EventCode": "0xd5", "EventName": "UNC_M_SB_OCCUPANCY.BLOCK_RDS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Occupancy : Block region writes", + "Counter": "0,1,2,3", "EventCode": "0xd5", "EventName": "UNC_M_SB_OCCUPANCY.BLOCK_WRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Occupancy : Persistent Mem reads", + "Counter": "0,1,2,3", "EventCode": "0xd5", 
"EventName": "UNC_M_SB_OCCUPANCY.PMM_RDS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Occupancy : Persistent Mem writes"= , + "Counter": "0,1,2,3", "EventCode": "0xd5", "EventName": "UNC_M_SB_OCCUPANCY.PMM_WRS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Occupancy : Reads", + "Counter": "0,1,2,3", "EventCode": "0xd5", "EventName": "UNC_M_SB_OCCUPANCY.RDS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Prefetch Inserts : All", + "Counter": "0,1,2,3", "EventCode": "0xda", "EventName": "UNC_M_SB_PREF_INSERTS.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Prefetch Inserts : DDR4", + "Counter": "0,1,2,3", "EventCode": "0xda", "EventName": "UNC_M_SB_PREF_INSERTS.DDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Prefetch Inserts : PMM", + "Counter": "0,1,2,3", "EventCode": "0xda", "EventName": "UNC_M_SB_PREF_INSERTS.PMM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Prefetch Occupancy : All", + "Counter": "0,1,2,3", "EventCode": "0xdb", "EventName": "UNC_M_SB_PREF_OCCUPANCY.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Prefetch Occupancy : DDR4", + "Counter": "0,1,2,3", "EventCode": "0xdb", "EventName": "UNC_M_SB_PREF_OCCUPANCY.DDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "Scoreboard Prefetch Occupancy : Persistent Me= m", + "Counter": "0,1,2,3", "EventCode": "0xDB", "EventName": "UNC_M_SB_PREF_OCCUPANCY.PMM", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -2989,230 +3662,287 @@ }, { "BriefDescription": "Number of Scoreboard Requests Rejected", + 
"Counter": "0,1,2,3", "EventCode": "0xd4", "EventName": "UNC_M_SB_REJECT.CANARY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "Number of Scoreboard Requests Rejected", + "Counter": "0,1,2,3", "EventCode": "0xd4", "EventName": "UNC_M_SB_REJECT.DDR_EARLY_CMP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "iMC" }, { "BriefDescription": "Number of Scoreboard Requests Rejected : FM r= equests rejected due to full address conflict", + "Counter": "0,1,2,3", "EventCode": "0xd4", "EventName": "UNC_M_SB_REJECT.FM_ADDR_CNFLT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "Number of Scoreboard Requests Rejected : NM r= equests rejected due to set conflict", + "Counter": "0,1,2,3", "EventCode": "0xd4", "EventName": "UNC_M_SB_REJECT.NM_SET_CNFLT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "Number of Scoreboard Requests Rejected : Patr= ol requests rejected due to set conflict", + "Counter": "0,1,2,3", "EventCode": "0xd4", "EventName": "UNC_M_SB_REJECT.PATROL_SET_CNFLT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": ": Far Mem Read - Set", + "Counter": "0,1,2,3", "EventCode": "0xd7", "EventName": "UNC_M_SB_STRV_ALLOC.FM_RD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Read - Clear", + "Counter": "0,1,2,3", "EventCode": "0xd7", "EventName": "UNC_M_SB_STRV_ALLOC.FM_TGR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": ": Far Mem Write - Set", + "Counter": "0,1,2,3", "EventCode": "0xd7", "EventName": "UNC_M_SB_STRV_ALLOC.FM_WR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Read - Set", + "Counter": "0,1,2,3", "EventCode": "0xd7", "EventName": "UNC_M_SB_STRV_ALLOC.NM_RD", + "Experimental": "1", "PerPkg": "1", 
"UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Write - Set", + "Counter": "0,1,2,3", "EventCode": "0xd7", "EventName": "UNC_M_SB_STRV_ALLOC.NM_WR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": ": Far Mem Read - Set", + "Counter": "0,1,2,3", "EventCode": "0xde", "EventName": "UNC_M_SB_STRV_DEALLOC.FM_RD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Read - Clear", + "Counter": "0,1,2,3", "EventCode": "0xde", "EventName": "UNC_M_SB_STRV_DEALLOC.FM_TGR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": ": Far Mem Write - Set", + "Counter": "0,1,2,3", "EventCode": "0xde", "EventName": "UNC_M_SB_STRV_DEALLOC.FM_WR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Read - Set", + "Counter": "0,1,2,3", "EventCode": "0xde", "EventName": "UNC_M_SB_STRV_DEALLOC.NM_RD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Write - Set", + "Counter": "0,1,2,3", "EventCode": "0xde", "EventName": "UNC_M_SB_STRV_DEALLOC.NM_WR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": ": Far Mem Read", + "Counter": "0,1,2,3", "EventCode": "0xd8", "EventName": "UNC_M_SB_STRV_OCC.FM_RD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Read - Clear", + "Counter": "0,1,2,3", "EventCode": "0xd8", "EventName": "UNC_M_SB_STRV_OCC.FM_TGR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": ": Far Mem Write", + "Counter": "0,1,2,3", "EventCode": "0xd8", "EventName": "UNC_M_SB_STRV_OCC.FM_WR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Read", + "Counter": "0,1,2,3", "EventCode": "0xd8", "EventName": 
"UNC_M_SB_STRV_OCC.NM_RD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": ": Near Mem Write", + "Counter": "0,1,2,3", "EventCode": "0xd8", "EventName": "UNC_M_SB_STRV_OCC.NM_WR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "UNC_M_SB_TAGGED.DDR4_CMP", + "Counter": "0,1,2,3", "EventCode": "0xdd", "EventName": "UNC_M_SB_TAGGED.DDR4_CMP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "UNC_M_SB_TAGGED.NEW", + "Counter": "0,1,2,3", "EventCode": "0xdd", "EventName": "UNC_M_SB_TAGGED.NEW", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "UNC_M_SB_TAGGED.OCC", + "Counter": "0,1,2,3", "EventCode": "0xdd", "EventName": "UNC_M_SB_TAGGED.OCC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "iMC" }, { "BriefDescription": "UNC_M_SB_TAGGED.PMM0_CMP", + "Counter": "0,1,2,3", "EventCode": "0xdd", "EventName": "UNC_M_SB_TAGGED.PMM0_CMP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "UNC_M_SB_TAGGED.PMM1_CMP", + "Counter": "0,1,2,3", "EventCode": "0xdd", "EventName": "UNC_M_SB_TAGGED.PMM1_CMP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "iMC" }, { "BriefDescription": "UNC_M_SB_TAGGED.PMM2_CMP", + "Counter": "0,1,2,3", "EventCode": "0xdd", "EventName": "UNC_M_SB_TAGGED.PMM2_CMP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "iMC" }, { "BriefDescription": "UNC_M_SB_TAGGED.RD_HIT", + "Counter": "0,1,2,3", "EventCode": "0xdd", "EventName": "UNC_M_SB_TAGGED.RD_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "UNC_M_SB_TAGGED.RD_MISS", + "Counter": "0,1,2,3", "EventCode": "0xdd", "EventName": "UNC_M_SB_TAGGED.RD_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "2LM Tag check hit in near memory cache (DDR4)= 
", + "Counter": "0,1,2,3", "EventCode": "0xd3", "EventName": "UNC_M_TAGCHK.HIT", "PerPkg": "1", @@ -3221,6 +3951,7 @@ }, { "BriefDescription": "2LM Tag check miss, no data at this line", + "Counter": "0,1,2,3", "EventCode": "0xd3", "EventName": "UNC_M_TAGCHK.MISS_CLEAN", "PerPkg": "1", @@ -3229,6 +3960,7 @@ }, { "BriefDescription": "2LM Tag check miss, existing data may be evic= ted to PMM", + "Counter": "0,1,2,3", "EventCode": "0xd3", "EventName": "UNC_M_TAGCHK.MISS_DIRTY", "PerPkg": "1", @@ -3237,6 +3969,7 @@ }, { "BriefDescription": "2LM Tag check hit due to memory read", + "Counter": "0,1,2,3", "EventCode": "0xd3", "EventName": "UNC_M_TAGCHK.NM_RD_HIT", "PerPkg": "1", @@ -3245,6 +3978,7 @@ }, { "BriefDescription": "2LM Tag check hit due to memory write", + "Counter": "0,1,2,3", "EventCode": "0xd3", "EventName": "UNC_M_TAGCHK.NM_WR_HIT", "PerPkg": "1", @@ -3253,6 +3987,7 @@ }, { "BriefDescription": "Write Pending Queue Allocations", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M_WPQ_INSERTS.PCH0", "PerPkg": "1", @@ -3262,6 +3997,7 @@ }, { "BriefDescription": "Write Pending Queue Allocations", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M_WPQ_INSERTS.PCH1", "PerPkg": "1", @@ -3271,6 +4007,7 @@ }, { "BriefDescription": "Write Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_M_WPQ_OCCUPANCY_PCH0", "PerPkg": "1", @@ -3279,6 +4016,7 @@ }, { "BriefDescription": "Write Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "UNC_M_WPQ_OCCUPANCY_PCH1", "PerPkg": "1", @@ -3287,8 +4025,10 @@ }, { "BriefDescription": "Write Pending Queue CAM Match", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M_WPQ_READ_HIT", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", @@ -3297,8 +4037,10 @@ }, { "BriefDescription": "Write Pending Queue CAM Match", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": 
"UNC_M_WPQ_WRITE_HIT", + "Experimental": "1", "FCMask": "0x00000000", "PerPkg": "1", "PortMask": "0x00000000", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-power.jso= n b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-power.json index 8948e85074f0..9482ddaea4d1 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-power.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-power.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "PCU PCLK Clockticks", + "Counter": "0,1,2,3", "EventCode": "0x01", "EventName": "UNC_P_CLOCKTICKS", "PerPkg": "1", @@ -9,187 +10,235 @@ }, { "BriefDescription": "UNC_P_CORE_TRANSITION_CYCLES", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_P_CORE_TRANSITION_CYCLES", + "Experimental": "1", "PerPkg": "1", "Unit": "PCU" }, { "BriefDescription": "UNC_P_DEMOTIONS", + "Counter": "0,1,2,3", "EventCode": "0x30", "EventName": "UNC_P_DEMOTIONS", + "Experimental": "1", "PerPkg": "1", "Unit": "PCU" }, { "BriefDescription": "Phase Shed 0 Cycles", + "Counter": "0,1,2,3", "EventCode": "0x75", "EventName": "UNC_P_FIVR_PS_PS0_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Phase Shed 0 Cycles : Cycles spent in phase-= shedding power state 0", "Unit": "PCU" }, { "BriefDescription": "Phase Shed 1 Cycles", + "Counter": "0,1,2,3", "EventCode": "0x76", "EventName": "UNC_P_FIVR_PS_PS1_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Phase Shed 1 Cycles : Cycles spent in phase-= shedding power state 1", "Unit": "PCU" }, { "BriefDescription": "Phase Shed 2 Cycles", + "Counter": "0,1,2,3", "EventCode": "0x77", "EventName": "UNC_P_FIVR_PS_PS2_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Phase Shed 2 Cycles : Cycles spent in phase-= shedding power state 2", "Unit": "PCU" }, { "BriefDescription": "Phase Shed 3 Cycles", + "Counter": "0,1,2,3", "EventCode": "0x78", "EventName": "UNC_P_FIVR_PS_PS3_CYCLES", + "Experimental": "1", "PerPkg": "1", 
"PublicDescription": "Phase Shed 3 Cycles : Cycles spent in phase-= shedding power state 3", "Unit": "PCU" }, { "BriefDescription": "AVX256 Frequency Clipping", + "Counter": "0,1,2,3", "EventCode": "0x49", "EventName": "UNC_P_FREQ_CLIP_AVX256", + "Experimental": "1", "PerPkg": "1", "Unit": "PCU" }, { "BriefDescription": "AVX512 Frequency Clipping", + "Counter": "0,1,2,3", "EventCode": "0x4a", "EventName": "UNC_P_FREQ_CLIP_AVX512", + "Experimental": "1", "PerPkg": "1", "Unit": "PCU" }, { "BriefDescription": "Thermal Strongest Upper Limit Cycles", + "Counter": "0,1,2,3", "EventCode": "0x04", "EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Thermal Strongest Upper Limit Cycles : Numbe= r of cycles any frequency is reduced due to a thermal limit. Count only if= throttling is occurring.", "Unit": "PCU" }, { "BriefDescription": "Power Strongest Upper Limit Cycles", + "Counter": "0,1,2,3", "EventCode": "0x05", "EventName": "UNC_P_FREQ_MAX_POWER_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Power Strongest Upper Limit Cycles : Counts = the number of cycles when power is the upper limit on frequency.", "Unit": "PCU" }, { "BriefDescription": "IO P Limit Strongest Lower Limit Cycles", + "Counter": "0,1,2,3", "EventCode": "0x73", "EventName": "UNC_P_FREQ_MIN_IO_P_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "IO P Limit Strongest Lower Limit Cycles : Co= unts the number of cycles when IO P Limit is preventing us from dropping th= e frequency lower. This algorithm monitors the needs to the IO subsystem o= n both local and remote sockets and will maintain a frequency high enough t= o maintain good IO BW. 
This is necessary for when all the IA cores on a so= cket are idle but a user still would like to maintain high IO Bandwidth.", "Unit": "PCU" }, { "BriefDescription": "Cycles spent changing Frequency", + "Counter": "0,1,2,3", "EventCode": "0x74", "EventName": "UNC_P_FREQ_TRANS_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles spent changing Frequency : Counts the= number of cycles when the system is changing frequency. This can not be f= iltered by thread ID. One can also use it with the occupancy counter that = monitors number of threads in C0 to estimate the performance impact that fr= equency transitions had on the system.", "Unit": "PCU" }, { "BriefDescription": "Memory Phase Shedding Cycles", + "Counter": "0,1,2,3", "EventCode": "0x2f", "EventName": "UNC_P_MEMORY_PHASE_SHEDDING_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Memory Phase Shedding Cycles : Counts the nu= mber of cycles that the PCU has triggered memory phase shedding. This is a= mode that can be run in the iMC physicals that saves power at the expense = of additional latency.", "Unit": "PCU" }, { "BriefDescription": "Package C State Residency - C0", + "Counter": "0,1,2,3", "EventCode": "0x2a", "EventName": "UNC_P_PKG_RESIDENCY_C0_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Package C State Residency - C0 : Counts the = number of cycles when the package was in C0. This event can be used in con= junction with edge detect to count C0 entrances (or exits using invert). R= esidency events do not include transition times.", "Unit": "PCU" }, { "BriefDescription": "Package C State Residency - C2E", + "Counter": "0,1,2,3", "EventCode": "0x2b", "EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Package C State Residency - C2E : Counts the= number of cycles when the package was in C2E. 
This event can be used in c= onjunction with edge detect to count C2E entrances (or exits using invert).= Residency events do not include transition times.", "Unit": "PCU" }, { "BriefDescription": "Package C State Residency - C6", + "Counter": "0,1,2,3", "EventCode": "0x2d", "EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Package C State Residency - C6 : Counts the = number of cycles when the package was in C6. This event can be used in con= junction with edge detect to count C6 entrances (or exits using invert). R= esidency events do not include transition times.", "Unit": "PCU" }, { "BriefDescription": "UNC_P_PMAX_THROTTLED_CYCLES", + "Counter": "0", "EventCode": "0x06", "EventName": "UNC_P_PMAX_THROTTLED_CYCLES", + "Experimental": "1", "PerPkg": "1", "Unit": "PCU" }, { "BriefDescription": "Number of cores in C0", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cores in C0 : This is an occupancy= event that tracks the number of cores that are in the chosen C-State. It = can be used by itself to get the average number of cores in that C-state wi= th thresholding to generate histograms, or with other PCU events and occupa= ncy triggering to capture other details.", "Unit": "PCU" }, { "BriefDescription": "Number of cores in C3", + "Counter": "0,1,2,3", "EventCode": "0x36", "EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cores in C3 : This is an occupancy= event that tracks the number of cores that are in the chosen C-State. 
It = can be used by itself to get the average number of cores in that C-state wi= th thresholding to generate histograms, or with other PCU events and occupa= ncy triggering to capture other details.", "Unit": "PCU" }, { "BriefDescription": "Number of cores in C6", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C6", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cores in C6 : This is an occupancy= event that tracks the number of cores that are in the chosen C-State. It = can be used by itself to get the average number of cores in that C-state wi= th thresholding to generate histograms, or with other PCU events and occupa= ncy triggering to capture other details.", "Unit": "PCU" }, { "BriefDescription": "External Prochot", + "Counter": "0,1,2,3", "EventCode": "0x0a", "EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "External Prochot : Counts the number of cycl= es that we are in external PROCHOT mode. This mode is triggered when a sen= sor off the die determines that something off-die (like DRAM) is too hot an= d must throttle to avoid damaging the chip.", "Unit": "PCU" }, { "BriefDescription": "Internal Prochot", + "Counter": "0,1,2,3", "EventCode": "0x09", "EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Internal Prochot : Counts the number of cycl= es that we are in Internal PROCHOT mode. 
This mode is triggered when a sen= sor on the die determines that we are too hot and must throttle to avoid da= maging the chip.", "Unit": "PCU" }, { "BriefDescription": "Total Core C State Transition Cycles", + "Counter": "0,1,2,3", "EventCode": "0x72", "EventName": "UNC_P_TOTAL_TRANSITION_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Total Core C State Transition Cycles : Numbe= r of cycles spent performing core C state transitions across all cores.", "Unit": "PCU" }, { "BriefDescription": "VR Hot", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_P_VR_HOT_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VR Hot : Number of cycles that a CPU SVID VR= is hot. Does not cover DRAM VRs", "Unit": "PCU" diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/virtual-memory.j= son b/tools/perf/pmu-events/arch/x86/sapphirerapids/virtual-memory.json index a1e3b8d2ebe7..609a9549cbf3 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/virtual-memory.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/virtual-memory.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Loads that miss the DTLB and hit the STLB.", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "DTLB_LOAD_MISSES.STLB_HIT", "PublicDescription": "Counts loads that miss the DTLB (Data TLB) a= nd hit the STLB (Second level TLB).", @@ -9,6 +10,7 @@ }, { "BriefDescription": "Cycles when at least one PMH is busy with a p= age walk for a demand load.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x12", "EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE", @@ -18,6 +20,7 @@ }, { "BriefDescription": "Load miss in all TLB levels causes a page wal= k that completes. (All page sizes)", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED", "PublicDescription": "Counts completed page walks (all page sizes= ) caused by demand data loads. This implies it missed in the DTLB and furth= er levels of TLB. 
The page walk can end with or without a fault.", @@ -26,6 +29,7 @@ }, { "BriefDescription": "Page walks completed due to a demand data loa= d to a 1G page.", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G", "PublicDescription": "Counts completed page walks (1G sizes) caus= ed by demand data loads. This implies address translations missed in the DT= LB and further levels of TLB. The page walk can end with or without a fault= .", @@ -34,6 +38,7 @@ }, { "BriefDescription": "Page walks completed due to a demand data loa= d to a 2M/4M page.", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M", "PublicDescription": "Counts completed page walks (2M/4M sizes) c= aused by demand data loads. This implies address translations missed in the= DTLB and further levels of TLB. The page walk can end with or without a fa= ult.", @@ -42,6 +47,7 @@ }, { "BriefDescription": "Page walks completed due to a demand data loa= d to a 4K page.", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K", "PublicDescription": "Counts completed page walks (4K sizes) caus= ed by demand data loads. This implies address translations missed in the DT= LB and further levels of TLB. 
The page walk can end with or without a fault= .", @@ -50,6 +56,7 @@ }, { "BriefDescription": "Number of page walks outstanding for a demand= load in the PMH each cycle.", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "DTLB_LOAD_MISSES.WALK_PENDING", "PublicDescription": "Counts the number of page walks outstanding = for a demand load in the PMH (Page Miss Handler) each cycle.", @@ -58,6 +65,7 @@ }, { "BriefDescription": "Stores that miss the DTLB and hit the STLB.", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "DTLB_STORE_MISSES.STLB_HIT", "PublicDescription": "Counts stores that miss the DTLB (Data TLB) = and hit the STLB (2nd Level TLB).", @@ -66,6 +74,7 @@ }, { "BriefDescription": "Cycles when at least one PMH is busy with a p= age walk for a store.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x13", "EventName": "DTLB_STORE_MISSES.WALK_ACTIVE", @@ -75,6 +84,7 @@ }, { "BriefDescription": "Store misses in all TLB levels causes a page = walk that completes. (All page sizes)", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED", "PublicDescription": "Counts completed page walks (all page sizes= ) caused by demand data stores. This implies it missed in the DTLB and furt= her levels of TLB. The page walk can end with or without a fault.", @@ -83,6 +93,7 @@ }, { "BriefDescription": "Page walks completed due to a demand data sto= re to a 1G page.", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G", "PublicDescription": "Counts completed page walks (1G sizes) caus= ed by demand data stores. This implies address translations missed in the D= TLB and further levels of TLB. 
The page walk can end with or without a faul= t.", @@ -91,6 +102,7 @@ }, { "BriefDescription": "Page walks completed due to a demand data sto= re to a 2M/4M page.", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M", "PublicDescription": "Counts completed page walks (2M/4M sizes) c= aused by demand data stores. This implies address translations missed in th= e DTLB and further levels of TLB. The page walk can end with or without a f= ault.", @@ -99,6 +111,7 @@ }, { "BriefDescription": "Page walks completed due to a demand data sto= re to a 4K page.", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K", "PublicDescription": "Counts completed page walks (4K sizes) caus= ed by demand data stores. This implies address translations missed in the D= TLB and further levels of TLB. The page walk can end with or without a faul= t.", @@ -107,6 +120,7 @@ }, { "BriefDescription": "Number of page walks outstanding for a store = in the PMH each cycle.", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "DTLB_STORE_MISSES.WALK_PENDING", "PublicDescription": "Counts the number of page walks outstanding = for a store in the PMH (Page Miss Handler) each cycle.", @@ -115,6 +129,7 @@ }, { "BriefDescription": "Instruction fetch requests that miss the ITLB= and hit the STLB.", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "ITLB_MISSES.STLB_HIT", "PublicDescription": "Counts instruction fetch requests that miss = the ITLB (Instruction TLB) and hit the STLB (Second-level TLB).", @@ -123,6 +138,7 @@ }, { "BriefDescription": "Cycles when at least one PMH is busy with a p= age walk for code (instruction fetch) request.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x11", "EventName": "ITLB_MISSES.WALK_ACTIVE", @@ -132,6 +148,7 @@ }, { "BriefDescription": "Code miss in all TLB levels causes a page wal= k that completes. 
(All page sizes)", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "ITLB_MISSES.WALK_COMPLETED", "PublicDescription": "Counts completed page walks (all page sizes)= caused by a code fetch. This implies it missed in the ITLB (Instruction TL= B) and further levels of TLB. The page walk can end with or without a fault= .", @@ -140,6 +157,7 @@ }, { "BriefDescription": "Code miss in all TLB levels causes a page wal= k that completes. (2M/4M)", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M", "PublicDescription": "Counts completed page walks (2M/4M page size= s) caused by a code fetch. This implies it missed in the ITLB (Instruction = TLB) and further levels of TLB. The page walk can end with or without a fau= lt.", @@ -148,6 +166,7 @@ }, { "BriefDescription": "Code miss in all TLB levels causes a page wal= k that completes. (4K)", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "ITLB_MISSES.WALK_COMPLETED_4K", "PublicDescription": "Counts completed page walks (4K page sizes) = caused by a code fetch. This implies it missed in the ITLB (Instruction TLB= ) and further levels of TLB. The page walk can end with or without a fault.= ", @@ -156,6 +175,7 @@ }, { "BriefDescription": "Number of page walks outstanding for an outst= anding code request in the PMH each cycle.", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "ITLB_MISSES.WALK_PENDING", "PublicDescription": "Counts the number of page walks outstanding = for an outstanding code (instruction fetch) request in the PMH (Page Miss H= andler) each cycle.", --=20 2.45.2.627.g7a2c4fd464-goog