Date: Fri, 14 Jun 2024 16:01:40 -0700
In-Reply-To: <20240614230146.3783221-1-irogers@google.com>
Message-Id: <20240614230146.3783221-33-irogers@google.com>
References: <20240614230146.3783221-1-irogers@google.com>
Subject: [PATCH v1 32/37] perf vendor events: Add/update skylakex events/metrics
From: Ian Rogers
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	Kan Liang, Maxime Coquelin, Alexandre Torgue,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Cc: Weilin Wang, Caleb Biggers

Update events from v1.33 to v1.35. Update TMA metrics from v4.7 to v4.8.

Bring in the v1.35 event updates:
https://github.com/intel/perfmon/commit/c99b60c147b96f40f96dd961abfae54909f47e5f

The TMA 4.8 information was added in:
https://github.com/intel/perfmon/commit/59194d4d90ca50a3fcb2de0d82b9f6fc0c9a5736

Add counter information. The most recent RFC patch set using this
information:
https://lore.kernel.org/lkml/20240412210756.309828-1-weilin.wang@intel.com/

Add the event SW_PREFETCH_ACCESS.ANY.
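For reference, a minimal sketch of exercising the newly added event once
this update is in place (assuming a Skylake-X machine and a perf binary
built with these pmu-events files; the command below is illustrative and
not part of the patch):

  # Count software prefetch instructions system-wide for one second.
  $ perf stat -e SW_PREFETCH_ACCESS.ANY -a -- sleep 1

The new "Counter": "0,1,2,3" fields record which general-purpose counters
an event may be scheduled on; the metric grouping work in the RFC linked
above consumes this counter information.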
Co-authored-by: Weilin Wang Co-authored-by: Caleb Biggers Signed-off-by: Ian Rogers --- tools/perf/pmu-events/arch/x86/mapfile.csv | 2 +- .../pmu-events/arch/x86/skylakex/cache.json | 155 + .../pmu-events/arch/x86/skylakex/counter.json | 52 + .../arch/x86/skylakex/floating-point.json | 13 + .../arch/x86/skylakex/frontend.json | 49 + .../pmu-events/arch/x86/skylakex/memory.json | 115 + .../arch/x86/skylakex/metricgroups.json | 13 + .../pmu-events/arch/x86/skylakex/other.json | 15 + .../arch/x86/skylakex/pipeline.json | 104 +- .../arch/x86/skylakex/skx-metrics.json | 310 +- .../arch/x86/skylakex/uncore-cache.json | 2274 +++++++++++++++ .../x86/skylakex/uncore-interconnect.json | 2521 +++++++++++++++++ .../arch/x86/skylakex/uncore-io.json | 703 +++++ .../arch/x86/skylakex/uncore-memory.json | 804 ++++++ .../arch/x86/skylakex/uncore-power.json | 50 + .../arch/x86/skylakex/virtual-memory.json | 28 + 16 files changed, 7019 insertions(+), 189 deletions(-) create mode 100644 tools/perf/pmu-events/arch/x86/skylakex/counter.json diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-ev= ents/arch/x86/mapfile.csv index 70631bcfa8eb..b5d40fa2a29f 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -30,7 +30,7 @@ GenuineIntel-6-8F,v1.23,sapphirerapids,core GenuineIntel-6-AF,v1.04,sierraforest,core GenuineIntel-6-(37|4A|4C|4D|5A),v15,silvermont,core GenuineIntel-6-(4E|5E|8E|9E|A5|A6),v59,skylake,core -GenuineIntel-6-55-[01234],v1.33,skylakex,core +GenuineIntel-6-55-[01234],v1.35,skylakex,core GenuineIntel-6-86,v1.22,snowridgex,core GenuineIntel-6-8[CD],v1.15,tigerlake,core GenuineIntel-6-2C,v5,westmereep-dp,core diff --git a/tools/perf/pmu-events/arch/x86/skylakex/cache.json b/tools/per= f/pmu-events/arch/x86/skylakex/cache.json index 14229f4b29d8..2ce070629c52 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/cache.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/cache.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "L1D data line replacements", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "L1D.REPLACEMENT", "PublicDescription": "Counts L1D data line replacements including = opportunistic replacements, and replacements that require stall-for-replace= or block-for-replace.", @@ -9,6 +10,7 @@ }, { "BriefDescription": "Number of times a request needed a FB entry b= ut there was no entry available for it. That is the FB unavailability was d= ominant reason for blocking the request. A request includes cacheable/uncac= heable demands that is load, store or SW prefetch.", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "L1D_PEND_MISS.FB_FULL", "PublicDescription": "Number of times a request needed a FB (Fill = Buffer) entry but there was no entry available for it. A request includes c= acheable/uncacheable demands that are load, store or SW prefetch instructio= ns.", @@ -17,6 +19,7 @@ }, { "BriefDescription": "L1D miss outstandings duration in cycles", + "Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "L1D_PEND_MISS.PENDING", "PublicDescription": "Counts duration of L1D miss outstanding, tha= t is each cycle number of Fill Buffers (FB) outstanding required by Demand = Reads. FB either is held by demand loads, or it is held by non-demand loads= and gets hit at least once by demand. 
The valid outstanding interval is de= fined until the FB deallocation by one of the following ways: from FB alloc= ation, if FB is allocated by demand from the demand Hit FB, if it is alloca= ted by hardware or software prefetch.Note: In the L1D, a Demand Read contai= ns cacheable or noncacheable demand loads, including ones causing cache-lin= e splits and reads due to page walks resulted from any request type.", @@ -25,6 +28,7 @@ }, { "BriefDescription": "Cycles with L1D load Misses outstanding.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x48", "EventName": "L1D_PEND_MISS.PENDING_CYCLES", @@ -35,6 +39,7 @@ { "AnyThread": "1", "BriefDescription": "Cycles with L1D load Misses outstanding from = any thread on physical core.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x48", "EventName": "L1D_PEND_MISS.PENDING_CYCLES_ANY", @@ -43,6 +48,7 @@ }, { "BriefDescription": "L2 cache lines filling L2", + "Counter": "0,1,2,3", "EventCode": "0xF1", "EventName": "L2_LINES_IN.ALL", "PublicDescription": "Counts the number of L2 cache lines filling = the L2. Counting does not cover rejects.", @@ -51,6 +57,7 @@ }, { "BriefDescription": "Counts the number of lines that are evicted b= y L2 cache when triggered by an L2 cache fill. Those lines can be either in= modified state or clean state. Modified lines may either be written back t= o L3 or directly written to memory and not allocated in L3. Clean lines ma= y either be allocated in L3 or dropped", + "Counter": "0,1,2,3", "EventCode": "0xF2", "EventName": "L2_LINES_OUT.NON_SILENT", "PublicDescription": "Counts the number of lines that are evicted = by L2 cache when triggered by an L2 cache fill. Those lines can be either i= n modified state or clean state. Modified lines may either be written back = to L3 or directly written to memory and not allocated in L3. Clean lines m= ay either be allocated in L3 or dropped.", @@ -59,6 +66,7 @@ }, { "BriefDescription": "Counts the number of lines that are silently = dropped by L2 cache when triggered by an L2 cache fill. These lines are typ= ically in Shared state. A non-threaded event.", + "Counter": "0,1,2,3", "EventCode": "0xF2", "EventName": "L2_LINES_OUT.SILENT", "SampleAfterValue": "200003", @@ -66,6 +74,7 @@ }, { "BriefDescription": "Counts the number of lines that have been har= dware prefetched but not used and now evicted by L2 cache", + "Counter": "0,1,2,3", "EventCode": "0xF2", "EventName": "L2_LINES_OUT.USELESS_HWPF", "SampleAfterValue": "200003", @@ -73,6 +82,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = L2_LINES_OUT.USELESS_HWPF", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xF2", "EventName": "L2_LINES_OUT.USELESS_PREF", @@ -81,6 +91,7 @@ }, { "BriefDescription": "L2 code requests", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_CODE_RD", "PublicDescription": "Counts the total number of L2 code requests.= ", @@ -89,6 +100,7 @@ }, { "BriefDescription": "Demand Data Read requests", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD", "PublicDescription": "Counts the number of demand Data Read reques= ts (including requests from L1D hardware prefetchers). These loads may hit = or miss L2 cache. 
Only non rejected loads are counted.", @@ -97,6 +109,7 @@ }, { "BriefDescription": "Demand requests that miss L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_DEMAND_MISS", "PublicDescription": "Demand requests that miss L2 cache.", @@ -105,6 +118,7 @@ }, { "BriefDescription": "Demand requests to L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES", "PublicDescription": "Demand requests to L2 cache.", @@ -113,6 +127,7 @@ }, { "BriefDescription": "Requests from the L1/L2/L3 hardware prefetche= rs or Load software prefetches", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_PF", "PublicDescription": "Counts the total number of requests from the= L2 hardware prefetchers.", @@ -121,6 +136,7 @@ }, { "BriefDescription": "RFO requests to L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.ALL_RFO", "PublicDescription": "Counts the total number of RFO (read for own= ership) requests to L2 cache. L2 RFO requests include both L1D demand RFO m= isses as well as L1D RFO prefetches.", @@ -129,6 +145,7 @@ }, { "BriefDescription": "L2 cache hits when fetching instructions, cod= e reads.", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.CODE_RD_HIT", "PublicDescription": "Counts L2 cache hits when fetching instructi= ons, code reads.", @@ -137,6 +154,7 @@ }, { "BriefDescription": "L2 cache misses when fetching instructions", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.CODE_RD_MISS", "PublicDescription": "Counts L2 cache misses when fetching instruc= tions.", @@ -145,6 +163,7 @@ }, { "BriefDescription": "Demand Data Read requests that hit L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT", "PublicDescription": "Counts the number of demand Data Read reques= ts, initiated by load instructions, that hit L2 cache", @@ -153,6 +172,7 @@ }, { "BriefDescription": "Demand Data Read miss L2, no rejects", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS", "PublicDescription": "Counts the number of demand Data Read reques= ts that miss L2 cache. 
Only not rejected loads are counted.", @@ -161,6 +181,7 @@ }, { "BriefDescription": "All requests that miss L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.MISS", "PublicDescription": "All requests that miss L2 cache.", @@ -169,6 +190,7 @@ }, { "BriefDescription": "Requests from the L1/L2/L3 hardware prefetche= rs or Load software prefetches that hit L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.PF_HIT", "PublicDescription": "Counts requests from the L1/L2/L3 hardware p= refetchers or Load software prefetches that hit L2 cache.", @@ -177,6 +199,7 @@ }, { "BriefDescription": "Requests from the L1/L2/L3 hardware prefetche= rs or Load software prefetches that miss L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.PF_MISS", "PublicDescription": "Counts requests from the L1/L2/L3 hardware p= refetchers or Load software prefetches that miss L2 cache.", @@ -185,6 +208,7 @@ }, { "BriefDescription": "All L2 requests", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.REFERENCES", "PublicDescription": "All L2 requests.", @@ -193,6 +217,7 @@ }, { "BriefDescription": "RFO requests that hit L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.RFO_HIT", "PublicDescription": "Counts the RFO (Read-for-Ownership) requests= that hit L2 cache.", @@ -201,6 +226,7 @@ }, { "BriefDescription": "RFO requests that miss L2 cache", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "L2_RQSTS.RFO_MISS", "PublicDescription": "Counts the RFO (Read-for-Ownership) requests= that miss L2 cache.", @@ -209,6 +235,7 @@ }, { "BriefDescription": "L2 writebacks that access L2 cache", + "Counter": "0,1,2,3", "EventCode": "0xF0", "EventName": "L2_TRANS.L2_WB", "PublicDescription": "Counts L2 writebacks that access L2 cache.", @@ -217,6 +244,7 @@ }, { "BriefDescription": "Core-originated cacheable demand requests mis= sed L3", + "Counter": "0,1,2,3", "Errata": "SKL057", "EventCode": "0x2E", "EventName": "LONGEST_LAT_CACHE.MISS", @@ -226,6 +254,7 @@ }, { "BriefDescription": "Core-originated cacheable demand requests tha= t refer to L3", + "Counter": "0,1,2,3", "Errata": "SKL057", "EventCode": "0x2E", "EventName": "LONGEST_LAT_CACHE.REFERENCE", @@ -235,6 +264,7 @@ }, { "BriefDescription": "Retired load instructions.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD0", "EventName": "MEM_INST_RETIRED.ALL_LOADS", @@ -245,6 +275,7 @@ }, { "BriefDescription": "Retired store instructions.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD0", "EventName": "MEM_INST_RETIRED.ALL_STORES", @@ -255,6 +286,7 @@ }, { "BriefDescription": "All retired memory instructions.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD0", "EventName": "MEM_INST_RETIRED.ANY", @@ -265,6 +297,7 @@ }, { "BriefDescription": "Retired load instructions with locked access.= ", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD0", "EventName": "MEM_INST_RETIRED.LOCK_LOADS", @@ -274,6 +307,7 @@ }, { "BriefDescription": "Retired load instructions that split across a= cacheline boundary.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD0", "EventName": "MEM_INST_RETIRED.SPLIT_LOADS", @@ -284,6 +318,7 @@ }, { "BriefDescription": "Retired store instructions that split across = a cacheline boundary.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD0", "EventName": "MEM_INST_RETIRED.SPLIT_STORES", @@ -294,6 +329,7 @@ }, { "BriefDescription": "Retired load instructions that miss the 
STLB.= ", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD0", "EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS", @@ -304,6 +340,7 @@ }, { "BriefDescription": "Retired store instructions that miss the STLB= .", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD0", "EventName": "MEM_INST_RETIRED.STLB_MISS_STORES", @@ -314,6 +351,7 @@ }, { "BriefDescription": "Retired load instructions which data sources = were L3 and cross-core snoop hits in on-pkg core cache", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD2", "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT", @@ -324,6 +362,7 @@ }, { "BriefDescription": "Retired load instructions which data sources = were HitM responses from shared L3", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD2", "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM", @@ -334,6 +373,7 @@ }, { "BriefDescription": "Retired load instructions which data sources = were L3 hit and cross-core snoop missed in on-pkg core cache.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD2", "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS", @@ -343,6 +383,7 @@ }, { "BriefDescription": "Retired load instructions which data sources = were hits in L3 without snoops required", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD2", "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE", @@ -353,6 +394,7 @@ }, { "BriefDescription": "Retired load instructions which data sources = missed L3 but serviced from local dram", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD3", "EventName": "MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM", @@ -363,6 +405,7 @@ }, { "BriefDescription": "Retired load instructions which data sources = missed L3 but serviced from remote dram", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD3", "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM", @@ -372,6 +415,7 @@ }, { "BriefDescription": "Retired load instructions whose data sources = was forwarded from a remote cache", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD3", "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD", @@ -382,6 +426,7 @@ }, { "BriefDescription": "Retired load instructions whose data sources = was remote HITM", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD3", "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM", @@ -392,6 +437,7 @@ }, { "BriefDescription": "Retired instructions with at least 1 uncachea= ble load or lock.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD4", "EventName": "MEM_LOAD_MISC_RETIRED.UC", @@ -401,6 +447,7 @@ }, { "BriefDescription": "Retired load instructions which data sources = were load missed L1 but hit FB due to preceding miss to the same cache line= with data not ready", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD1", "EventName": "MEM_LOAD_RETIRED.FB_HIT", @@ -411,6 +458,7 @@ }, { "BriefDescription": "Retired load instructions with L1 cache hits = as data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD1", "EventName": "MEM_LOAD_RETIRED.L1_HIT", @@ -421,6 +469,7 @@ }, { "BriefDescription": "Retired load instructions missed L1 cache as = data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD1", "EventName": "MEM_LOAD_RETIRED.L1_MISS", @@ -431,6 +480,7 @@ }, { "BriefDescription": "Retired load instructions with L2 cache hits = as data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD1", "EventName": "MEM_LOAD_RETIRED.L2_HIT", @@ -441,6 +491,7 @@ }, { "BriefDescription": "Retired load instructions missed L2 cache as = data 
sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD1", "EventName": "MEM_LOAD_RETIRED.L2_MISS", @@ -451,6 +502,7 @@ }, { "BriefDescription": "Retired load instructions with L3 cache hits = as data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD1", "EventName": "MEM_LOAD_RETIRED.L3_HIT", @@ -461,6 +513,7 @@ }, { "BriefDescription": "Retired load instructions missed L3 cache as = data sources", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xD1", "EventName": "MEM_LOAD_RETIRED.L3_MISS", @@ -471,6 +524,7 @@ }, { "BriefDescription": "Demand and prefetch data reads", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "OFFCORE_REQUESTS.ALL_DATA_RD", "PublicDescription": "Counts the demand and prefetch data reads. A= ll Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 p= refetchers). Counting also covers reads due to page walks resulted from any= request type.", @@ -479,6 +533,7 @@ }, { "BriefDescription": "Any memory transaction that reached the SQ.", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "OFFCORE_REQUESTS.ALL_REQUESTS", "PublicDescription": "Counts memory transactions reached the super= queue including requests initiated by the core, all L3 prefetches, page wa= lks, etc..", @@ -487,6 +542,7 @@ }, { "BriefDescription": "Cacheable and non-cacheable code read request= s", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD", "PublicDescription": "Counts both cacheable and non-cacheable code= read requests.", @@ -495,6 +551,7 @@ }, { "BriefDescription": "Demand Data Read requests sent to uncore", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD", "PublicDescription": "Counts the Demand Data Read requests sent to= uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determi= ne average latency in the uncore.", @@ -503,6 +560,7 @@ }, { "BriefDescription": "Demand RFO requests including regular RFOs, l= ocks, ItoM", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "OFFCORE_REQUESTS.DEMAND_RFO", "PublicDescription": "Counts the demand RFO (read for ownership) r= equests including regular RFOs, locks, ItoM.", @@ -511,6 +569,7 @@ }, { "BriefDescription": "Offcore requests buffer cannot take more entr= ies for this thread core.", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "OFFCORE_REQUESTS_BUFFER.SQ_FULL", "PublicDescription": "Counts the number of cases when the offcore = requests buffer cannot take more entries for the core. This can happen when= the superqueue does not contain eligible entries, or when L1D writeback pe= nding FIFO requests is full.Note: Writeback pending FIFO has six entries.", @@ -519,6 +578,7 @@ }, { "BriefDescription": "Offcore outstanding cacheable Core Data Read = transactions in SuperQueue (SQ), queue to uncore", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD", "PublicDescription": "Counts the number of offcore outstanding cac= heable Core Data Read transactions in the super queue every cycle. A transa= ction is considered to be in the Offcore outstanding state between L2 miss = and transaction completion sent to requestor (SQ de-allocation). 
See corres= ponding Umask under OFFCORE_REQUESTS.", @@ -527,6 +587,7 @@ }, { "BriefDescription": "Cycles when offcore outstanding cacheable Cor= e Data Read transactions are present in SuperQueue (SQ), queue to uncore.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD", @@ -536,6 +597,7 @@ }, { "BriefDescription": "Cycles with offcore outstanding Code Reads tr= ansactions in the SuperQueue (SQ), queue to uncore.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE= _RD", @@ -545,6 +607,7 @@ }, { "BriefDescription": "Cycles when offcore outstanding Demand Data R= ead transactions are present in SuperQueue (SQ), queue to uncore", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA= _RD", @@ -554,6 +617,7 @@ }, { "BriefDescription": "Cycles with offcore outstanding demand rfo re= ads transactions in SuperQueue (SQ), queue to uncore.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO"= , @@ -563,6 +627,7 @@ }, { "BriefDescription": "Offcore outstanding Code Reads transactions i= n the SuperQueue (SQ), queue to uncore, every cycle.", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD", "PublicDescription": "Counts the number of offcore outstanding Cod= e Reads transactions in the super queue every cycle. The 'Offcore outstandi= ng' state of the transaction lasts from the L2 miss until the sending trans= action completion to requestor (SQ deallocation). See the corresponding Uma= sk under OFFCORE_REQUESTS.", @@ -571,6 +636,7 @@ }, { "BriefDescription": "Offcore outstanding Demand Data Read transact= ions in uncore queue.", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD", "PublicDescription": "Counts the number of offcore outstanding Dem= and Data Read transactions in the super queue (SQ) every cycle. A transacti= on is considered to be in the Offcore outstanding state between L2 miss and= transaction completion sent to requestor. See the corresponding Umask unde= r OFFCORE_REQUESTS.Note: A prefetch promoted to Demand is counted from the = promotion point.", @@ -579,6 +645,7 @@ }, { "BriefDescription": "Cycles with at least 6 offcore outstanding De= mand Data Read transactions in uncore queue.", + "Counter": "0,1,2,3", "CounterMask": "6", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD_GE_6", @@ -587,6 +654,7 @@ }, { "BriefDescription": "Offcore outstanding demand rfo reads transact= ions in SuperQueue (SQ), queue to uncore, every cycle", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO", "PublicDescription": "Counts the number of offcore outstanding RFO= (store) transactions in the super queue (SQ) every cycle. A transaction is= considered to be in the Offcore outstanding state between L2 miss and tran= saction completion sent to requestor (SQ de-allocation). 
See corresponding = Umask under OFFCORE_REQUESTS.", @@ -595,6 +663,7 @@ }, { "BriefDescription": "Offcore response can be programmed only with = a specific pair of event select and counter MSR, and with specific event co= des and predefine mask bit value in a dedicated MSR to specify attributes o= f the offcore transaction", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE", "PublicDescription": "Offcore response can be programmed only with= a specific pair of event select and counter MSR, and with specific event c= odes and predefine mask bit value in a dedicated MSR to specify attributes = of the offcore transaction.", @@ -603,6 +672,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = have any response type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -612,6 +682,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = hit in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -621,6 +692,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = hit in the L3 and the snoop to one of the sibling cores hits the line in M = state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.HITM_OTHER_CORE"= , "MSRIndex": "0x1a6,0x1a7", @@ -630,6 +702,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = hit in the L3 and the snoop to one of the sibling cores hits the line in M = state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_N= O_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -639,6 +712,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = hit in the L3 and sibling core snoops are not needed as either the core-val= id bit is not set or the shared line is present in multiple cores.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED"= , "MSRIndex": "0x1a6,0x1a7", @@ -648,6 +722,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_HIT= _WITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_HIT_WITH_F= WD", "MSRIndex": "0x1a6,0x1a7", @@ -657,6 +732,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that have any = response type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -666,6 +742,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that hit in th= e L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -675,6 +752,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that hit in th= e L3 and the snoop to one of the sibling cores hits the line in M state and= the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.HITM_OTHER_CO= RE", "MSRIndex": "0x1a6,0x1a7", @@ -684,6 +762,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that hit in th= e L3 and the snoop to one of the sibling cores hits the line in M state and= the line is 
forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_COR= E_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -693,6 +772,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that hit in th= e L3 and sibling core snoops are not needed as either the core-valid bit is= not set or the shared line is present in multiple cores.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEED= ED", "MSRIndex": "0x1a6,0x1a7", @@ -702,6 +782,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_= HIT_WITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_HIT_WIT= H_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -711,6 +792,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that have any response t= ype.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -720,6 +802,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that hit in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -729,6 +812,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that hit in the L3 and t= he snoop to one of the sibling cores hits the line in M state and the line = is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.HITM_OTHER_CORE", "MSRIndex": "0x1a6,0x1a7", @@ -738,6 +822,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that hit in the L3 and t= he snoop to one of the sibling cores hits the line in M state and the line = is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO= _FWD", "MSRIndex": "0x1a6,0x1a7", @@ -747,6 +832,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that hit in the L3 and s= ibling core snoops are not needed as either the core-valid bit is not set o= r the shared line is present in multiple cores.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED", "MSRIndex": "0x1a6,0x1a7", @@ -756,6 +842,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_HIT_= WITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_HIT_WITH_FW= D", "MSRIndex": "0x1a6,0x1a7", @@ -765,6 +852,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.HIT_OTHER_C= ORE_FWD hit in the L3 and the snoop to one of the sibling cores hits the li= ne in E/S/F state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.HIT_OTHER_CORE_FWD= ", "MSRIndex": "0x1a6,0x1a7", @@ -774,6 +862,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that have a= ny response type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -783,6 +872,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that hit in= the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -792,6 +882,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that hit in= the L3 and the snoop to one of the sibling 
cores hits the line in M state = and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.HITM_OTHER_CORE", "MSRIndex": "0x1a6,0x1a7", @@ -801,6 +892,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that hit in= the L3 and the snoop to one of the sibling cores hits the line in M state = and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FW= D", "MSRIndex": "0x1a6,0x1a7", @@ -810,6 +902,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that hit in= the L3 and sibling core snoops are not needed as either the core-valid bit= is not set or the shared line is present in multiple cores.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED", "MSRIndex": "0x1a6,0x1a7", @@ -819,6 +912,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_HIT_WIT= H_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_HIT_WITH_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -828,6 +922,7 @@ }, { "BriefDescription": "Counts all demand code reads that have any re= sponse type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -837,6 +932,7 @@ }, { "BriefDescription": "Counts all demand code reads that hit in the = L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -846,6 +942,7 @@ }, { "BriefDescription": "Counts all demand code reads that hit in the = L3 and the snoop to one of the sibling cores hits the line in M state and t= he line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.HITM_OTHER_CO= RE", "MSRIndex": "0x1a6,0x1a7", @@ -855,6 +952,7 @@ }, { "BriefDescription": "Counts all demand code reads that hit in the = L3 and the snoop to one of the sibling cores hits the line in M state and t= he line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_COR= E_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -864,6 +962,7 @@ }, { "BriefDescription": "Counts all demand code reads that hit in the = L3 and sibling core snoops are not needed as either the core-valid bit is n= ot set or the shared line is present in multiple cores.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.NO_SNOOP_NEED= ED", "MSRIndex": "0x1a6,0x1a7", @@ -873,6 +972,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_= HIT_WITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_WIT= H_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -882,6 +982,7 @@ }, { "BriefDescription": "Counts demand data reads that have any respon= se type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -891,6 +992,7 @@ }, { "BriefDescription": "Counts demand data reads that hit in the L3."= , + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -900,6 +1002,7 @@ }, { "BriefDescription": "Counts demand data reads that 
hit in the L3 a= nd the snoop to one of the sibling cores hits the line in M state and the l= ine is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CO= RE", "MSRIndex": "0x1a6,0x1a7", @@ -909,6 +1012,7 @@ }, { "BriefDescription": "Counts demand data reads that hit in the L3 a= nd the snoop to one of the sibling cores hits the line in M state and the l= ine is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_COR= E_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -918,6 +1022,7 @@ }, { "BriefDescription": "Counts demand data reads that hit in the L3 a= nd sibling core snoops are not needed as either the core-valid bit is not s= et or the shared line is present in multiple cores.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.NO_SNOOP_NEED= ED", "MSRIndex": "0x1a6,0x1a7", @@ -927,6 +1032,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_= HIT_WITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WIT= H_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -936,6 +1042,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that hav= e any response type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -945,6 +1052,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that hit= in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -954,6 +1062,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that hit= in the L3 and the snoop to one of the sibling cores hits the line in M sta= te and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE", "MSRIndex": "0x1a6,0x1a7", @@ -963,6 +1072,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that hit= in the L3 and the snoop to one of the sibling cores hits the line in M sta= te and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_NO= _FWD", "MSRIndex": "0x1a6,0x1a7", @@ -972,6 +1082,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that hit= in the L3 and sibling core snoops are not needed as either the core-valid = bit is not set or the shared line is present in multiple cores.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.NO_SNOOP_NEEDED", "MSRIndex": "0x1a6,0x1a7", @@ -981,6 +1092,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HIT_= WITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FW= D", "MSRIndex": "0x1a6,0x1a7", @@ -990,6 +1102,7 @@ }, { "BriefDescription": "Counts L1 data cache hardware prefetch reques= ts and software prefetch requests that have any response type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -999,6 +1112,7 @@ }, { "BriefDescription": "Counts L1 data cache hardware prefetch reques= ts and software prefetch requests that hit in the L3.", + "Counter": 
"0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -1008,6 +1122,7 @@ }, { "BriefDescription": "Counts L1 data cache hardware prefetch reques= ts and software prefetch requests that hit in the L3 and the snoop to one o= f the sibling cores hits the line in M state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.HITM_OTHER_COR= E", "MSRIndex": "0x1a6,0x1a7", @@ -1017,6 +1132,7 @@ }, { "BriefDescription": "Counts L1 data cache hardware prefetch reques= ts and software prefetch requests that hit in the L3 and the snoop to one o= f the sibling cores hits the line in M state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE= _NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -1026,6 +1142,7 @@ }, { "BriefDescription": "Counts L1 data cache hardware prefetch reques= ts and software prefetch requests that hit in the L3 and sibling core snoop= s are not needed as either the core-valid bit is not set or the shared line= is present in multiple cores.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.NO_SNOOP_NEEDE= D", "MSRIndex": "0x1a6,0x1a7", @@ -1035,6 +1152,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.SNOOP_H= IT_WITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.SNOOP_HIT_WITH= _FWD", "MSRIndex": "0x1a6,0x1a7", @@ -1044,6 +1162,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that have any response type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -1053,6 +1172,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that hit in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -1062,6 +1182,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that hit in the L3 and the snoop to one of the sibling cores hits the= line in M state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.HITM_OTHER_COR= E", "MSRIndex": "0x1a6,0x1a7", @@ -1071,6 +1192,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that hit in the L3 and the snoop to one of the sibling cores hits the= line in M state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE= _NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -1080,6 +1202,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that hit in the L3 and sibling core snoops are not needed as either t= he core-valid bit is not set or the shared line is present in multiple core= s.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.NO_SNOOP_NEEDE= D", "MSRIndex": "0x1a6,0x1a7", @@ -1089,6 +1212,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_H= IT_WITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_HIT_WITH= _FWD", "MSRIndex": 
"0x1a6,0x1a7", @@ -1098,6 +1222,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to L2) R= FOs that have any response type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -1107,6 +1232,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to L2) R= FOs that hit in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -1116,6 +1242,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to L2) R= FOs that hit in the L3 and the snoop to one of the sibling cores hits the l= ine in M state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE", "MSRIndex": "0x1a6,0x1a7", @@ -1125,6 +1252,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to L2) R= FOs that hit in the L3 and the snoop to one of the sibling cores hits the l= ine in M state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_NO_= FWD", "MSRIndex": "0x1a6,0x1a7", @@ -1134,6 +1262,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to L2) R= FOs that hit in the L3 and sibling core snoops are not needed as either the= core-valid bit is not set or the shared line is present in multiple cores.= ", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.NO_SNOOP_NEEDED", "MSRIndex": "0x1a6,0x1a7", @@ -1143,6 +1272,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_HIT_W= ITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_HIT_WITH_FWD= ", "MSRIndex": "0x1a6,0x1a7", @@ -1152,6 +1282,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that have any response type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -1161,6 +1292,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that hit in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -1170,6 +1302,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that hit in the L3 and the snoop to one of the sibling core= s hits the line in M state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.HITM_OTHER_COR= E", "MSRIndex": "0x1a6,0x1a7", @@ -1179,6 +1312,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that hit in the L3 and the snoop to one of the sibling core= s hits the line in M state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE= _NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -1188,6 +1322,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that hit in the L3 and sibling core snoops are not needed a= s either the core-valid bit is not set or the shared line is present in mul= tiple cores.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", 
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.NO_SNOOP_NEEDE= D", "MSRIndex": "0x1a6,0x1a7", @@ -1197,6 +1332,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_H= IT_WITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_HIT_WITH= _FWD", "MSRIndex": "0x1a6,0x1a7", @@ -1206,6 +1342,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) RFOs that have any response type.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.ANY_RESPONSE", "MSRIndex": "0x1a6,0x1a7", @@ -1215,6 +1352,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) RFOs that hit in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -1224,6 +1362,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) RFOs that hit in the L3 and the snoop to one of the sibling cores hits= the line in M state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.HITM_OTHER_CORE", "MSRIndex": "0x1a6,0x1a7", @@ -1233,6 +1372,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) RFOs that hit in the L3 and the snoop to one of the sibling cores hits= the line in M state and the line is forwarded.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_NO_= FWD", "MSRIndex": "0x1a6,0x1a7", @@ -1242,6 +1382,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) RFOs that hit in the L3 and sibling core snoops are not needed as eith= er the core-valid bit is not set or the shared line is present in multiple = cores.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.NO_SNOOP_NEEDED", "MSRIndex": "0x1a6,0x1a7", @@ -1251,6 +1392,7 @@ }, { "BriefDescription": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_HIT_W= ITH_FWD", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_HIT_WITH_FWD= ", "MSRIndex": "0x1a6,0x1a7", @@ -1260,14 +1402,24 @@ }, { "BriefDescription": "Number of cache line split locks sent to unco= re.", + "Counter": "0,1,2,3", "EventCode": "0xF4", "EventName": "SQ_MISC.SPLIT_LOCK", "PublicDescription": "Counts the number of cache line split locks = sent to the uncore.", "SampleAfterValue": "100003", "UMask": "0x10" }, + { + "BriefDescription": "Counts the number of PREFETCHNTA, PREFETCHW, = PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.", + "Counter": "0,1,2,3", + "EventCode": "0x32", + "EventName": "SW_PREFETCH_ACCESS.ANY", + "SampleAfterValue": "2000003", + "UMask": "0xf" + }, { "BriefDescription": "Number of PREFETCHNTA instructions executed."= , + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "SW_PREFETCH_ACCESS.NTA", "SampleAfterValue": "2000003", @@ -1275,6 +1427,7 @@ }, { "BriefDescription": "Number of PREFETCHW instructions executed.", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "SW_PREFETCH_ACCESS.PREFETCHW", "SampleAfterValue": "2000003", @@ -1282,6 +1435,7 @@ }, { "BriefDescription": "Number of PREFETCHT0 instructions executed.", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "SW_PREFETCH_ACCESS.T0", "SampleAfterValue": "2000003", @@ -1289,6 +1443,7 @@ }, { "BriefDescription": "Number 
of PREFETCHT1 or PREFETCHT2 instructio= ns executed.", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "SW_PREFETCH_ACCESS.T1_T2", "SampleAfterValue": "2000003", diff --git a/tools/perf/pmu-events/arch/x86/skylakex/counter.json b/tools/p= erf/pmu-events/arch/x86/skylakex/counter.json new file mode 100644 index 000000000000..e94b76404856 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/skylakex/counter.json @@ -0,0 +1,52 @@ +[ + { + "Unit": "core", + "CountersNumFixed": "3", + "CountersNumGeneric": "4" + }, + { + "Unit": "CHA", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "IIO", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "IRP", + "CountersNumFixed": "0", + "CountersNumGeneric": "2" + }, + { + "Unit": "UPI", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "M2M", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "iMC", + "CountersNumFixed": "1", + "CountersNumGeneric": "4" + }, + { + "Unit": "M3UPI", + "CountersNumFixed": "0", + "CountersNumGeneric": "3" + }, + { + "Unit": "PCU", + "CountersNumFixed": "0", + "CountersNumGeneric": "4" + }, + { + "Unit": "UBOX", + "CountersNumFixed": "1", + "CountersNumGeneric": "2" + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json b/= tools/perf/pmu-events/arch/x86/skylakex/floating-point.json index 384b3c551a1f..25a864613c7d 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Counts once for most SIMD 128-bit packed comp= utational double precision floating-point instructions retired. Counts twic= e for DPP and FM(N)ADD/SUB instructions retired.", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE", "PublicDescription": "Counts once for most SIMD 128-bit packed com= putational double precision floating-point instructions retired; some instr= uctions will count twice as noted below. Each count represents 2 computati= on operations, one for each element. Applies to packed double precision fl= oating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DP= P FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perf= orm 2 calculations per element. The DAZ and FTZ flags in the MXCSR register= need to be set when using these events.", @@ -9,6 +10,7 @@ }, { "BriefDescription": "Counts once for most SIMD 128-bit packed comp= utational single precision floating-point instruction retired. Counts twice= for DPP and FM(N)ADD/SUB instructions retired.", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE", "PublicDescription": "Counts once for most SIMD 128-bit packed com= putational single precision floating-point instructions retired; some instr= uctions will count twice as noted below. Each count represents 4 computati= on operations, one for each element. Applies to packed single precision fl= oating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RS= QRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as= they perform 2 calculations per element. The DAZ and FTZ flags in the MXCS= R register need to be set when using these events.", @@ -17,6 +19,7 @@ }, { "BriefDescription": "Counts once for most SIMD 256-bit packed doub= le computational precision floating-point instructions retired. 
Counts twic= e for DPP and FM(N)ADD/SUB instructions retired.", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE", "PublicDescription": "Counts once for most SIMD 256-bit packed dou= ble computational precision floating-point instructions retired; some instr= uctions will count twice as noted below. Each count represents 4 computati= on operations, one for each element. Applies to packed double precision fl= oating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM= (N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calcul= ations per element. The DAZ and FTZ flags in the MXCSR register need to be = set when using these events.", @@ -25,6 +28,7 @@ }, { "BriefDescription": "Counts once for most SIMD 256-bit packed sing= le computational precision floating-point instructions retired. Counts twic= e for DPP and FM(N)ADD/SUB instructions retired.", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE", "PublicDescription": "Counts once for most SIMD 256-bit packed sin= gle computational precision floating-point instructions retired; some instr= uctions will count twice as noted below. Each count represents 8 computati= on operations, one for each element. Applies to packed single precision fl= oating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RS= QRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as= they perform 2 calculations per element. The DAZ and FTZ flags in the MXCS= R register need to be set when using these events.", @@ -33,6 +37,7 @@ }, { "BriefDescription": "Number of SSE/AVX computational 128-bit packe= d single and 256-bit packed double precision FP instructions retired; some = instructions will count twice as noted below. Each count represents 2 or/a= nd 4 computation operations, 1 for each element. Applies to SSE* and AVX* = packed single precision and packed double precision FP instructions: ADD SU= B HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DP= P and FM(N)ADD/SUB count twice as they perform 2 calculations per element."= , + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.4_FLOPS", "PublicDescription": "Number of SSE/AVX computational 128-bit pack= ed single precision and 256-bit packed double precision floating-point ins= tructions retired; some instructions will count twice as noted below. Each= count represents 2 or/and 4 computation operations, one for each element. = Applies to SSE* and AVX* packed single precision floating-point and packed= double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL= DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB ins= tructions count twice as they perform 2 calculations per element. The DAZ a= nd FTZ flags in the MXCSR register need to be set when using these events."= , @@ -41,6 +46,7 @@ }, { "BriefDescription": "Counts number of SSE/AVX computational 512-bi= t packed double precision floating-point instructions retired; some instruc= tions will count twice as noted below. Each count represents 8 computation= operations, one for each element. Applies to SSE* and AVX* packed double = precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14= RCP14 FM(N)ADD/SUB. 
FM(N)ADD/SUB instructions count twice as they perform = 2 calculations per element.", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE", "PublicDescription": "Number of SSE/AVX computational 512-bit pack= ed double precision floating-point instructions retired; some instructions = will count twice as noted below. Each count represents 8 computation opera= tions, one for each element. Applies to SSE* and AVX* packed double precis= ion floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14= FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calc= ulations per element. The DAZ and FTZ flags in the MXCSR register need to b= e set when using these events.", @@ -49,6 +55,7 @@ }, { "BriefDescription": "Counts number of SSE/AVX computational 512-bi= t packed single precision floating-point instructions retired; some instruc= tions will count twice as noted below. Each count represents 16 computatio= n operations, one for each element. Applies to SSE* and AVX* packed single= precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT1= 4 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform= 2 calculations per element.", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE", "PublicDescription": "Number of SSE/AVX computational 512-bit pack= ed single precision floating-point instructions retired; some instructions = will count twice as noted below. Each count represents 16 computation oper= ations, one for each element. Applies to SSE* and AVX* packed single preci= sion floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP1= 4 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 cal= culations per element. The DAZ and FTZ flags in the MXCSR register need to = be set when using these events.", @@ -57,6 +64,7 @@ }, { "BriefDescription": "Number of SSE/AVX computational 256-bit packe= d single precision and 512-bit packed double precision FP instructions ret= ired; some instructions will count twice as noted below. Each count repres= ents 8 computation operations, 1 for each element. Applies to SSE* and AVX= * packed single precision and double precision FP instructions: ADD SUB HAD= D HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB= . DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per elem= ent.", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.8_FLOPS", "PublicDescription": "Number of SSE/AVX computational 256-bit pack= ed single precision and 512-bit packed double precision floating-point ins= tructions retired; some instructions will count twice as noted below. Each= count represents 8 computation operations, one for each element. Applies = to SSE* and AVX* packed single precision and double precision floating-poin= t instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14= RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice= as they perform 2 calculations per element. The DAZ and FTZ flags in the M= XCSR register need to be set when using these events.", @@ -65,6 +73,7 @@ }, { "BriefDescription": "Counts once for most SIMD scalar computationa= l floating-point instructions retired. 
Counts twice for DPP and FM(N)ADD/SU= B instructions retired.", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.SCALAR", "PublicDescription": "Counts once for most SIMD scalar computation= al single precision and double precision floating-point instructions retire= d; some instructions will count twice as noted below. Each count represent= s 1 computational operation. Applies to SIMD scalar single precision floati= ng-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB.= FM(N)ADD/SUB instructions count twice as they perform 2 calculations per = element. The DAZ and FTZ flags in the MXCSR register need to be set when us= ing these events.", @@ -73,6 +82,7 @@ }, { "BriefDescription": "Counts once for most SIMD scalar computationa= l double precision floating-point instructions retired. Counts twice for DP= P and FM(N)ADD/SUB instructions retired.", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE", "PublicDescription": "Counts once for most SIMD scalar computation= al double precision floating-point instructions retired; some instructions = will count twice as noted below. Each count represents 1 computational ope= ration. Applies to SIMD scalar double precision floating-point instructions= : ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions cou= nt twice as they perform 2 calculations per element. The DAZ and FTZ flags = in the MXCSR register need to be set when using these events.", @@ -81,6 +91,7 @@ }, { "BriefDescription": "Counts once for most SIMD scalar computationa= l single precision floating-point instructions retired. Counts twice for DP= P and FM(N)ADD/SUB instructions retired.", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE", "PublicDescription": "Counts once for most SIMD scalar computation= al single precision floating-point instructions retired; some instructions = will count twice as noted below. Each count represents 1 computational ope= ration. Applies to SIMD scalar single precision floating-point instructions= : ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instru= ctions count twice as they perform 2 calculations per element. The DAZ and = FTZ flags in the MXCSR register need to be set when using these events.", @@ -89,6 +100,7 @@ }, { "BriefDescription": "Number of any Vector retired FP arithmetic in= structions", + "Counter": "0,1,2,3", "EventCode": "0xC7", "EventName": "FP_ARITH_INST_RETIRED.VECTOR", "SampleAfterValue": "2000003", @@ -96,6 +108,7 @@ }, { "BriefDescription": "Cycles with any input/output SSE or FP assist= ", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0xCA", "EventName": "FP_ASSIST.ANY", diff --git a/tools/perf/pmu-events/arch/x86/skylakex/frontend.json b/tools/= perf/pmu-events/arch/x86/skylakex/frontend.json index d6f543471b24..0e1dedce00f2 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/frontend.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/frontend.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Counts the total number when the front end is= resteered, mainly when the BPU cannot provide a correct prediction and thi= s is corrected by other branch handling mechanisms at the front end.", + "Counter": "0,1,2,3", "EventCode": "0xE6", "EventName": "BACLEARS.ANY", "PublicDescription": "Counts the number of times the front-end is = resteered when it finds a branch instruction in a fetch line. 
This occurs f= or the first time a branch instruction is fetched or when the branch is not= tracked by the BPU (Branch Prediction Unit) anymore.", @@ -9,6 +10,7 @@ }, { "BriefDescription": "Stalls caused by changing prefix length of th= e instruction. [This event is alias to ILD_STALL.LCP]", + "Counter": "0,1,2,3", "EventCode": "0x87", "EventName": "DECODE.LCP", "PublicDescription": "Counts cycles that the Instruction Length de= coder (ILD) stalls occurred due to dynamically changing prefix length of th= e decoded instruction (by operand size prefix instruction 0x66, address siz= e prefix instruction 0x67 or REX.W for Intel64). Count is proportional to t= he number of prefixes in a 16B-line. This may result in a three-cycle penal= ty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is= alias to ILD_STALL.LCP]", @@ -17,6 +19,7 @@ }, { "BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switches", + "Counter": "0,1,2,3", "EventCode": "0xAB", "EventName": "DSB2MITE_SWITCHES.COUNT", "PublicDescription": "This event counts the number of the Decode S= tream Buffer (DSB)-to-MITE switches including all misses because of missing= Decode Stream Buffer (DSB) cache and u-arch forced misses. Note: Invoking = MITE requires two or three cycles delay.", @@ -25,6 +28,7 @@ }, { "BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switch tru= e penalty cycles.", + "Counter": "0,1,2,3", "EventCode": "0xAB", "EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES", "PublicDescription": "Counts Decode Stream Buffer (DSB)-to-MITE sw= itch true penalty cycles. These cycles do not include uops routed through b= ecause of the switch itself, for example, when Instruction Decode Queue (ID= Q) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full= . SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) re= ceives Decode Stream Buffer (DSB) Sync-indication until receiving the first= MITE uop. MM is placed before Instruction Decode Queue (IDQ) to merge uops= being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Strea= m Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer = (DSB)-to-MITE switch occurs.Penalty: A Decode Stream Buffer (DSB) hit follo= wed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which= no uops are delivered to the IDQ. 
Most often, such switches from the Decod= e Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.", @@ -33,6 +37,7 @@ }, { "BriefDescription": "Retired Instructions who experienced DSB miss= .", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.ANY_DSB_MISS", "MSRIndex": "0x3F7", @@ -44,6 +49,7 @@ }, { "BriefDescription": "Retired Instructions who experienced a critic= al DSB miss.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.DSB_MISS", "MSRIndex": "0x3F7", @@ -55,6 +61,7 @@ }, { "BriefDescription": "Retired Instructions who experienced iTLB tru= e miss.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.ITLB_MISS", "MSRIndex": "0x3F7", @@ -66,6 +73,7 @@ }, { "BriefDescription": "Retired Instructions who experienced Instruct= ion L1 Cache true miss.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.L1I_MISS", "MSRIndex": "0x3F7", @@ -76,6 +84,7 @@ }, { "BriefDescription": "Retired Instructions who experienced Instruct= ion L2 Cache true miss.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.L2_MISS", "MSRIndex": "0x3F7", @@ -86,6 +95,7 @@ }, { "BriefDescription": "Retired instructions after front-end starvati= on of at least 1 cycle", + "Counter": "0,1,2,3", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_1", "MSRIndex": "0x3F7", @@ -97,6 +107,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 128 cycles= which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_128", "MSRIndex": "0x3F7", @@ -107,6 +118,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 16 cycles = which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_16", "MSRIndex": "0x3F7", @@ -118,6 +130,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 2 cycles w= hich was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_2", "MSRIndex": "0x3F7", @@ -128,6 +141,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 256 cycles= which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_256", "MSRIndex": "0x3F7", @@ -138,6 +152,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end had at least 1 bubble-slot for a period of 2= cycles which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1", "MSRIndex": "0x3F7", @@ -149,6 +164,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end had at least 2 bubble-slots for a period of = 2 cycles which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_2", "MSRIndex": "0x3F7", @@ -159,6 +175,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where 
the front-end had at least 3 bubble-slots for a period of = 2 cycles which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_3", "MSRIndex": "0x3F7", @@ -169,6 +186,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 32 cycles = which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_32", "MSRIndex": "0x3F7", @@ -180,6 +198,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 4 cycles w= hich was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_4", "MSRIndex": "0x3F7", @@ -190,6 +209,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 512 cycles= which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_512", "MSRIndex": "0x3F7", @@ -200,6 +220,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 64 cycles = which was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_64", "MSRIndex": "0x3F7", @@ -210,6 +231,7 @@ }, { "BriefDescription": "Retired instructions that are fetched after a= n interval where the front-end delivered no uops for a period of 8 cycles w= hich was not interrupted by a back-end stall.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_8", "MSRIndex": "0x3F7", @@ -221,6 +243,7 @@ }, { "BriefDescription": "Retired Instructions who experienced STLB (2n= d level TLB) true miss.", + "Counter": "0,1,2,3", "EventCode": "0xC6", "EventName": "FRONTEND_RETIRED.STLB_MISS", "MSRIndex": "0x3F7", @@ -232,6 +255,7 @@ }, { "BriefDescription": "Cycles where a code fetch is stalled due to L= 1 instruction cache miss.", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "ICACHE_16B.IFDATA_STALL", "PublicDescription": "Cycles where a code line fetch is stalled du= e to an L1 instruction cache miss. The legacy decode pipeline works at a 16= Byte granularity.", @@ -240,6 +264,7 @@ }, { "BriefDescription": "Instruction fetch tag lookups that hit in the= instruction cache (L1I). Counts at 64-byte cache-line granularity.", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "ICACHE_64B.IFTAG_HIT", "SampleAfterValue": "200003", @@ -247,6 +272,7 @@ }, { "BriefDescription": "Instruction fetch tag lookups that miss in th= e instruction cache (L1I). Counts at 64-byte cache-line granularity.", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "ICACHE_64B.IFTAG_MISS", "SampleAfterValue": "200003", @@ -254,6 +280,7 @@ }, { "BriefDescription": "Cycles where a code fetch is stalled due to L= 1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "ICACHE_64B.IFTAG_STALL", "SampleAfterValue": "200003", @@ -261,6 +288,7 @@ }, { "BriefDescription": "Cycles where a code fetch is stalled due to L= 1 instruction cache tag miss. 
[This event is alias to ICACHE_64B.IFTAG_STAL= L]", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "ICACHE_TAG.STALLS", "SampleAfterValue": "200003", @@ -268,6 +296,7 @@ }, { "BriefDescription": "Cycles Decode Stream Buffer (DSB) is deliveri= ng 4 or more Uops [This event is alias to IDQ.DSB_CYCLES_OK]", + "Counter": "0,1,2,3", "CounterMask": "4", "EventCode": "0x79", "EventName": "IDQ.ALL_DSB_CYCLES_4_UOPS", @@ -277,6 +306,7 @@ }, { "BriefDescription": "Cycles Decode Stream Buffer (DSB) is deliveri= ng any Uop [This event is alias to IDQ.DSB_CYCLES_ANY]", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.ALL_DSB_CYCLES_ANY_UOPS", @@ -286,6 +316,7 @@ }, { "BriefDescription": "Cycles MITE is delivering 4 Uops", + "Counter": "0,1,2,3", "CounterMask": "4", "EventCode": "0x79", "EventName": "IDQ.ALL_MITE_CYCLES_4_UOPS", @@ -295,6 +326,7 @@ }, { "BriefDescription": "Cycles MITE is delivering any Uop", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.ALL_MITE_CYCLES_ANY_UOPS", @@ -304,6 +336,7 @@ }, { "BriefDescription": "Cycles when uops are being delivered to Instr= uction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.DSB_CYCLES", @@ -313,6 +346,7 @@ }, { "BriefDescription": "Cycles Decode Stream Buffer (DSB) is deliveri= ng any Uop [This event is alias to IDQ.ALL_DSB_CYCLES_ANY_UOPS]", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.DSB_CYCLES_ANY", @@ -322,6 +356,7 @@ }, { "BriefDescription": "Cycles Decode Stream Buffer (DSB) is deliveri= ng 4 or more Uops [This event is alias to IDQ.ALL_DSB_CYCLES_4_UOPS]", + "Counter": "0,1,2,3", "CounterMask": "4", "EventCode": "0x79", "EventName": "IDQ.DSB_CYCLES_OK", @@ -331,6 +366,7 @@ }, { "BriefDescription": "Uops delivered to Instruction Decode Queue (I= DQ) from the Decode Stream Buffer (DSB) path", + "Counter": "0,1,2,3", "EventCode": "0x79", "EventName": "IDQ.DSB_UOPS", "PublicDescription": "Counts the number of uops delivered to Instr= uction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Countin= g includes uops that may 'bypass' the IDQ.", @@ -339,6 +375,7 @@ }, { "BriefDescription": "Cycles when uops are being delivered to Instr= uction Decode Queue (IDQ) from MITE path", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.MITE_CYCLES", @@ -348,6 +385,7 @@ }, { "BriefDescription": "Uops delivered to Instruction Decode Queue (I= DQ) from MITE path", + "Counter": "0,1,2,3", "EventCode": "0x79", "EventName": "IDQ.MITE_UOPS", "PublicDescription": "Counts the number of uops delivered to Instr= uction Decode Queue (IDQ) from the MITE path. Counting includes uops that m= ay 'bypass' the IDQ. 
This also means that uops are not being delivered from= the Decode Stream Buffer (DSB).", @@ -356,6 +394,7 @@ }, { "BriefDescription": "Cycles when uops are being delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.MS_CYCLES", @@ -365,6 +404,7 @@ }, { "BriefDescription": "Cycles when uops initiated by Decode Stream B= uffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Mic= rocode Sequencer (MS) is busy", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.MS_DSB_CYCLES", @@ -374,6 +414,7 @@ }, { "BriefDescription": "Uops initiated by MITE and delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy", + "Counter": "0,1,2,3", "EventCode": "0x79", "EventName": "IDQ.MS_MITE_UOPS", "PublicDescription": "Counts the number of uops initiated by MITE = and delivered to Instruction Decode Queue (IDQ) while the Microcode Sequenc= er (MS) is busy. Counting includes uops that may 'bypass' the IDQ.", @@ -382,6 +423,7 @@ }, { "BriefDescription": "Number of switches from DSB (Decode Stream Bu= ffer) or MITE (legacy decode pipeline) to the Microcode Sequencer", + "Counter": "0,1,2,3", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0x79", @@ -392,6 +434,7 @@ }, { "BriefDescription": "Uops delivered to Instruction Decode Queue (I= DQ) while Microcode Sequencer (MS) is busy", + "Counter": "0,1,2,3", "EventCode": "0x79", "EventName": "IDQ.MS_UOPS", "PublicDescription": "Counts the total number of uops delivered by= the Microcode Sequencer (MS). Any instruction over 4 uops will be delivere= d by the MS. Some instructions such as transcendentals may additionally gen= erate uops from the MS.", @@ -400,6 +443,7 @@ }, { "BriefDescription": "Uops not delivered to Resource Allocation Tab= le (RAT) per thread when backend of the machine is not stalled", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "IDQ_UOPS_NOT_DELIVERED.CORE", "PublicDescription": "Counts the number of uops not delivered to R= esource Allocation Table (RAT) per thread adding 4 x when Resource Allocat= ion Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers = x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). C= ounting does not cover cases when: a. IDQ-Resource Allocation Table (RAT) p= ipe serves the other thread. b. Resource Allocation Table (RAT) is stalled = for the thread (including uop drops and clear BE conditions). c. 
Instructi= on Decode Queue (IDQ) delivers four uops.", @@ -408,6 +452,7 @@ }, { "BriefDescription": "Cycles per thread when 4 or more uops are not= delivered to Resource Allocation Table (RAT) when backend of the machine i= s not stalled", + "Counter": "0,1,2,3", "CounterMask": "4", "EventCode": "0x9C", "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE", @@ -417,6 +462,7 @@ }, { "BriefDescription": "Counts cycles FE delivered 4 uops or Resource= Allocation Table (RAT) was stalling FE.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x9C", "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK", @@ -426,6 +472,7 @@ }, { "BriefDescription": "Cycles per thread when 3 or more uops are not= delivered to Resource Allocation Table (RAT) when backend of the machine i= s not stalled", + "Counter": "0,1,2,3", "CounterMask": "3", "EventCode": "0x9C", "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_1_UOP_DELIV.CORE", @@ -435,6 +482,7 @@ }, { "BriefDescription": "Cycles with less than 2 uops delivered by the= front end.", + "Counter": "0,1,2,3", "CounterMask": "2", "EventCode": "0x9C", "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_2_UOP_DELIV.CORE", @@ -444,6 +492,7 @@ }, { "BriefDescription": "Cycles with less than 3 uops delivered by the= front end.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x9C", "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_3_UOP_DELIV.CORE", diff --git a/tools/perf/pmu-events/arch/x86/skylakex/memory.json b/tools/pe= rf/pmu-events/arch/x86/skylakex/memory.json index dba3cd6b3690..9ee7a9d44fd2 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/memory.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/memory.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Cycles while L3 cache miss demand load is out= standing.", + "Counter": "0,1,2,3", "CounterMask": "2", "EventCode": "0xA3", "EventName": "CYCLE_ACTIVITY.CYCLES_L3_MISS", @@ -9,6 +10,7 @@ }, { "BriefDescription": "Execution stalls while L3 cache miss demand l= oad is outstanding.", + "Counter": "0,1,2,3", "CounterMask": "6", "EventCode": "0xA3", "EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS", @@ -17,6 +19,7 @@ }, { "BriefDescription": "Number of times an HLE execution aborted due = to any reasons (multiple categories may count as one).", + "Counter": "0,1,2,3", "EventCode": "0xC8", "EventName": "HLE_RETIRED.ABORTED", "PEBS": "1", @@ -26,6 +29,7 @@ }, { "BriefDescription": "Number of times an HLE execution aborted due = to unfriendly events (such as interrupts).", + "Counter": "0,1,2,3", "EventCode": "0xC8", "EventName": "HLE_RETIRED.ABORTED_EVENTS", "SampleAfterValue": "2000003", @@ -33,6 +37,7 @@ }, { "BriefDescription": "Number of times an HLE execution aborted due = to various memory events (e.g., read/write capacity and conflicts).", + "Counter": "0,1,2,3", "EventCode": "0xC8", "EventName": "HLE_RETIRED.ABORTED_MEM", "SampleAfterValue": "2000003", @@ -40,6 +45,7 @@ }, { "BriefDescription": "Number of times an HLE execution aborted due = to incompatible memory type", + "Counter": "0,1,2,3", "EventCode": "0xC8", "EventName": "HLE_RETIRED.ABORTED_MEMTYPE", "PublicDescription": "Number of times an HLE execution aborted due= to incompatible memory type.", @@ -48,6 +54,7 @@ }, { "BriefDescription": "Number of times an HLE execution aborted due = to hardware timer expiration.", + "Counter": "0,1,2,3", "EventCode": "0xC8", "EventName": "HLE_RETIRED.ABORTED_TIMER", "SampleAfterValue": "2000003", @@ -55,6 +62,7 @@ }, { "BriefDescription": "Number of times an HLE execution aborted due = to 
HLE-unfriendly instructions and certain unfriendly events (such as AD as= sists etc.).", + "Counter": "0,1,2,3", "EventCode": "0xC8", "EventName": "HLE_RETIRED.ABORTED_UNFRIENDLY", "SampleAfterValue": "2000003", @@ -62,6 +70,7 @@ }, { "BriefDescription": "Number of times an HLE execution successfully= committed", + "Counter": "0,1,2,3", "EventCode": "0xC8", "EventName": "HLE_RETIRED.COMMIT", "PublicDescription": "Number of times HLE commit succeeded.", @@ -70,6 +79,7 @@ }, { "BriefDescription": "Number of times an HLE execution started.", + "Counter": "0,1,2,3", "EventCode": "0xC8", "EventName": "HLE_RETIRED.START", "PublicDescription": "Number of times we entered an HLE region. Do= es not count nested transactions.", @@ -78,6 +88,7 @@ }, { "BriefDescription": "Counts the number of machine clears due to me= mory order conflicts.", + "Counter": "0,1,2,3", "Errata": "SKL089", "EventCode": "0xC3", "EventName": "MACHINE_CLEARS.MEMORY_ORDERING", @@ -87,6 +98,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the laten= cy from first dispatch to completion is greater than 128 cycles.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128", @@ -99,6 +111,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the laten= cy from first dispatch to completion is greater than 16 cycles.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16", @@ -111,6 +124,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the laten= cy from first dispatch to completion is greater than 256 cycles.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256", @@ -123,6 +137,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the laten= cy from first dispatch to completion is greater than 32 cycles.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32", @@ -135,6 +150,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the laten= cy from first dispatch to completion is greater than 4 cycles.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4", @@ -147,6 +163,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the laten= cy from first dispatch to completion is greater than 512 cycles.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512", @@ -159,6 +176,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the laten= cy from first dispatch to completion is greater than 64 cycles.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64", @@ -171,6 +189,7 @@ }, { "BriefDescription": "Counts randomly selected loads when the laten= cy from first dispatch to completion is greater than 8 cycles.", + "Counter": "0,1,2,3", "Data_LA": "1", "EventCode": "0xcd", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8", @@ -183,6 +202,7 @@ }, { "BriefDescription": "Demand Data Read requests who miss L3 cache", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD", "PublicDescription": "Demand Data Read requests who miss L3 cache.= ", @@ -191,6 +211,7 @@ }, { "BriefDescription": "Cycles with at least 1 Demand Data Read reque= sts who miss L3 cache in the superQ.", + "Counter": 
"0,1,2,3", "CounterMask": "1", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_L3_MISS_DEM= AND_DATA_RD", @@ -199,6 +220,7 @@ }, { "BriefDescription": "Counts number of Offcore outstanding Demand D= ata Read requests that miss L3 cache in the superQ every cycle.", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD"= , "SampleAfterValue": "2000003", @@ -206,6 +228,7 @@ }, { "BriefDescription": "Cycles with at least 6 Demand Data Read reque= sts that miss L3 cache in the superQ.", + "Counter": "0,1,2,3", "CounterMask": "6", "EventCode": "0x60", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD_= GE_6", @@ -214,6 +237,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = miss in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -223,6 +247,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = miss the L3 and the modified data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.REMOTE_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -232,6 +257,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = miss the L3 and clean or shared data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.REMOTE_HIT_FORW= ARD", "MSRIndex": "0x1a6,0x1a7", @@ -241,6 +267,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = miss the L3 and the data is returned from local or remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.SNOOP_MISS_OR_N= O_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -250,6 +277,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = miss the L3 and the data is returned from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOO= P_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -259,6 +287,7 @@ }, { "BriefDescription": "Counts all demand & prefetch data reads that = miss the L3 and the data is returned from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_DRAM.SNO= OP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -268,6 +297,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that miss in t= he L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -277,6 +307,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that miss the = L3 and the modified data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.REMOTE_HITM"= , "MSRIndex": "0x1a6,0x1a7", @@ -286,6 +317,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that miss the = L3 and clean or shared data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.REMOTE_HIT_F= ORWARD", "MSRIndex": "0x1a6,0x1a7", @@ -295,6 +327,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that miss the = L3 and the data is returned from local or remote dram.", + "Counter": 
"0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.SNOOP_MISS_O= R_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -304,6 +337,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that miss the = L3 and the data is returned from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.S= NOOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -313,6 +347,7 @@ }, { "BriefDescription": "Counts all prefetch data reads that miss the = L3 and the data is returned from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_DRAM.= SNOOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -322,6 +357,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that miss in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -331,6 +367,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that miss the L3 and the= modified data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.REMOTE_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -340,6 +377,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that miss the L3 and cle= an or shared data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.REMOTE_HIT_FORWA= RD", "MSRIndex": "0x1a6,0x1a7", @@ -349,6 +387,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that miss the L3 and the= data is returned from local or remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.SNOOP_MISS_OR_NO= _FWD", "MSRIndex": "0x1a6,0x1a7", @@ -358,6 +397,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that miss the L3 and the= data is returned from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP= _MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -367,6 +407,7 @@ }, { "BriefDescription": "Counts prefetch RFOs that miss the L3 and the= data is returned from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_DRAM.SNOO= P_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -376,6 +417,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that miss i= n the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -385,6 +427,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that miss t= he L3 and the modified data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.REMOTE_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -394,6 +437,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that miss t= he L3 and clean or shared data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.REMOTE_HIT_FORWARD"= , "MSRIndex": "0x1a6,0x1a7", @@ -403,6 +447,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that miss t= he L3 and the data is returned from local or remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": 
"OFFCORE_RESPONSE.ALL_RFO.L3_MISS.SNOOP_MISS_OR_NO_FW= D", "MSRIndex": "0x1a6,0x1a7", @@ -412,6 +457,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that miss t= he L3 and the data is returned from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MI= SS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -421,6 +467,7 @@ }, { "BriefDescription": "Counts all demand & prefetch RFOs that miss t= he L3 and the data is returned from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_DRAM.SNOOP_M= ISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -430,6 +477,7 @@ }, { "BriefDescription": "Counts all demand code reads that miss in the= L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -439,6 +487,7 @@ }, { "BriefDescription": "Counts all demand code reads that miss the L3= and the modified data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.REMOTE_HITM"= , "MSRIndex": "0x1a6,0x1a7", @@ -448,6 +497,7 @@ }, { "BriefDescription": "Counts all demand code reads that miss the L3= and clean or shared data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.REMOTE_HIT_F= ORWARD", "MSRIndex": "0x1a6,0x1a7", @@ -457,6 +507,7 @@ }, { "BriefDescription": "Counts all demand code reads that miss the L3= and the data is returned from local or remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.SNOOP_MISS_O= R_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -466,6 +517,7 @@ }, { "BriefDescription": "Counts all demand code reads that miss the L3= and the data is returned from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.S= NOOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -475,6 +527,7 @@ }, { "BriefDescription": "Counts all demand code reads that miss the L3= and the data is returned from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_DRAM.= SNOOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -484,6 +537,7 @@ }, { "BriefDescription": "Counts demand data reads that miss in the L3.= ", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -493,6 +547,7 @@ }, { "BriefDescription": "Counts demand data reads that miss the L3 and= the modified data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.REMOTE_HITM"= , "MSRIndex": "0x1a6,0x1a7", @@ -502,6 +557,7 @@ }, { "BriefDescription": "Counts demand data reads that miss the L3 and= clean or shared data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.REMOTE_HIT_F= ORWARD", "MSRIndex": "0x1a6,0x1a7", @@ -511,6 +567,7 @@ }, { "BriefDescription": "Counts demand data reads that miss the L3 and= the data is returned from local or remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": 
"OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.SNOOP_MISS_O= R_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -520,6 +577,7 @@ }, { "BriefDescription": "Counts demand data reads that miss the L3 and= the data is returned from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.S= NOOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -529,6 +587,7 @@ }, { "BriefDescription": "Counts demand data reads that miss the L3 and= the data is returned from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_DRAM.= SNOOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -538,6 +597,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that mis= s in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -547,6 +607,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that mis= s the L3 and the modified data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.REMOTE_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -556,6 +617,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that mis= s the L3 and clean or shared data is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.REMOTE_HIT_FORWA= RD", "MSRIndex": "0x1a6,0x1a7", @@ -565,6 +627,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that mis= s the L3 and the data is returned from local or remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.SNOOP_MISS_OR_NO= _FWD", "MSRIndex": "0x1a6,0x1a7", @@ -574,6 +637,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that mis= s the L3 and the data is returned from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP= _MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -583,6 +647,7 @@ }, { "BriefDescription": "Counts all demand data writes (RFOs) that mis= s the L3 and the data is returned from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_DRAM.SNOO= P_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -592,6 +657,7 @@ }, { "BriefDescription": "Counts L1 data cache hardware prefetch reques= ts and software prefetch requests that miss in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -601,6 +667,7 @@ }, { "BriefDescription": "Counts L1 data cache hardware prefetch reques= ts and software prefetch requests that miss the L3 and the modified data is= transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.REMOTE_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -610,6 +677,7 @@ }, { "BriefDescription": "Counts L1 data cache hardware prefetch reques= ts and software prefetch requests that miss the L3 and clean or shared data= is transferred from remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.REMOTE_HIT_FO= RWARD", "MSRIndex": "0x1a6,0x1a7", @@ -619,6 +687,7 @@ }, { "BriefDescription": "Counts L1 data 
cache hardware prefetch reques= ts and software prefetch requests that miss the L3 and the data is returned= from local or remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.SNOOP_MISS_OR= _NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -628,6 +697,7 @@ }, { "BriefDescription": "Counts L1 data cache hardware prefetch reques= ts and software prefetch requests that miss the L3 and the data is returned= from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SN= OOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -637,6 +707,7 @@ }, { "BriefDescription": "Counts L1 data cache hardware prefetch reques= ts and software prefetch requests that miss the L3 and the data is returned= from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_DRAM.S= NOOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -646,6 +717,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that miss in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -655,6 +727,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that miss the L3 and the modified data is transferred from remote cac= he.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.REMOTE_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -664,6 +737,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that miss the L3 and clean or shared data is transferred from remote = cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.REMOTE_HIT_FO= RWARD", "MSRIndex": "0x1a6,0x1a7", @@ -673,6 +747,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that miss the L3 and the data is returned from local or remote dram."= , + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.SNOOP_MISS_OR= _NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -682,6 +757,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that miss the L3 and the data is returned from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SN= OOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -691,6 +767,7 @@ }, { "BriefDescription": "Counts prefetch (that bring data to L2) data = reads that miss the L3 and the data is returned from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_DRAM.S= NOOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -700,6 +777,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to L2) R= FOs that miss in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -709,6 +787,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to L2) R= FOs that miss the L3 and the modified data is transferred from remote cache= .", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.REMOTE_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -718,6 +797,7 @@ }, { "BriefDescription": "Counts all prefetch 
(that bring data to L2) R= FOs that miss the L3 and clean or shared data is transferred from remote ca= che.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.REMOTE_HIT_FORWAR= D", "MSRIndex": "0x1a6,0x1a7", @@ -727,6 +807,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to L2) R= FOs that miss the L3 and the data is returned from local or remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.SNOOP_MISS_OR_NO_= FWD", "MSRIndex": "0x1a6,0x1a7", @@ -736,6 +817,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to L2) R= FOs that miss the L3 and the data is returned from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_= MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -745,6 +827,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to L2) R= FOs that miss the L3 and the data is returned from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_DRAM.SNOOP= _MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -754,6 +837,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that miss in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -763,6 +847,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that miss the L3 and the modified data is transferred from = remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.REMOTE_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -772,6 +857,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that miss the L3 and clean or shared data is transferred fr= om remote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.REMOTE_HIT_FO= RWARD", "MSRIndex": "0x1a6,0x1a7", @@ -781,6 +867,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that miss the L3 and the data is returned from local or rem= ote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.SNOOP_MISS_OR= _NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -790,6 +877,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that miss the L3 and the data is returned from local dram."= , + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SN= OOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -799,6 +887,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) data reads that miss the L3 and the data is returned from remote dram.= ", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_DRAM.S= NOOP_MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -808,6 +897,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) RFOs that miss in the L3.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.ANY_SNOOP", "MSRIndex": "0x1a6,0x1a7", @@ -817,6 +907,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) 
RFOs that miss the L3 and the modified data is transferred from remote= cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.REMOTE_HITM", "MSRIndex": "0x1a6,0x1a7", @@ -826,6 +917,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) RFOs that miss the L3 and clean or shared data is transferred from rem= ote cache.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.REMOTE_HIT_FORWAR= D", "MSRIndex": "0x1a6,0x1a7", @@ -835,6 +927,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) RFOs that miss the L3 and the data is returned from local or remote dr= am.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.SNOOP_MISS_OR_NO_= FWD", "MSRIndex": "0x1a6,0x1a7", @@ -844,6 +937,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) RFOs that miss the L3 and the data is returned from local dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_= MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -853,6 +947,7 @@ }, { "BriefDescription": "Counts all prefetch (that bring data to LLC o= nly) RFOs that miss the L3 and the data is returned from remote dram.", + "Counter": "0,1,2,3", "EventCode": "0xB7, 0xBB", "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_DRAM.SNOOP= _MISS_OR_NO_FWD", "MSRIndex": "0x1a6,0x1a7", @@ -862,6 +957,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted due = to any reasons (multiple categories may count as one).", + "Counter": "0,1,2,3", "EventCode": "0xC9", "EventName": "RTM_RETIRED.ABORTED", "PEBS": "2", @@ -871,6 +967,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted due = to none of the previous 4 categories (e.g. interrupt)", + "Counter": "0,1,2,3", "EventCode": "0xC9", "EventName": "RTM_RETIRED.ABORTED_EVENTS", "PublicDescription": "Number of times an RTM execution aborted due= to none of the previous 4 categories (e.g. interrupt).", @@ -879,6 +976,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted due = to various memory events (e.g. read/write capacity and conflicts)", + "Counter": "0,1,2,3", "EventCode": "0xC9", "EventName": "RTM_RETIRED.ABORTED_MEM", "PublicDescription": "Number of times an RTM execution aborted due= to various memory events (e.g. 
read/write capacity and conflicts).", @@ -887,6 +985,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted due = to incompatible memory type", + "Counter": "0,1,2,3", "EventCode": "0xC9", "EventName": "RTM_RETIRED.ABORTED_MEMTYPE", "PublicDescription": "Number of times an RTM execution aborted due= to incompatible memory type.", @@ -895,6 +994,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted due = to uncommon conditions.", + "Counter": "0,1,2,3", "EventCode": "0xC9", "EventName": "RTM_RETIRED.ABORTED_TIMER", "SampleAfterValue": "2000003", @@ -902,6 +1002,7 @@ }, { "BriefDescription": "Number of times an RTM execution aborted due = to HLE-unfriendly instructions", + "Counter": "0,1,2,3", "EventCode": "0xC9", "EventName": "RTM_RETIRED.ABORTED_UNFRIENDLY", "PublicDescription": "Number of times an RTM execution aborted due= to HLE-unfriendly instructions.", @@ -910,6 +1011,7 @@ }, { "BriefDescription": "Number of times an RTM execution successfully= committed", + "Counter": "0,1,2,3", "EventCode": "0xC9", "EventName": "RTM_RETIRED.COMMIT", "PublicDescription": "Number of times RTM commit succeeded.", @@ -918,6 +1020,7 @@ }, { "BriefDescription": "Number of times an RTM execution started.", + "Counter": "0,1,2,3", "EventCode": "0xC9", "EventName": "RTM_RETIRED.START", "PublicDescription": "Number of times we entered an RTM region. Do= es not count nested transactions.", @@ -926,6 +1029,7 @@ }, { "BriefDescription": "Counts the number of times a class of instruc= tions that may cause a transactional abort was executed. Since this is the = count of execution, it may not always cause a transactional abort.", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "TX_EXEC.MISC1", "SampleAfterValue": "2000003", @@ -933,6 +1037,7 @@ }, { "BriefDescription": "Counts the number of times a class of instruc= tions (e.g., vzeroupper) that may cause a transactional abort was executed = inside a transactional region", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "TX_EXEC.MISC2", "PublicDescription": "Unfriendly TSX abort triggered by a vzeroupp= er instruction.", @@ -941,6 +1046,7 @@ }, { "BriefDescription": "Counts the number of times an instruction exe= cution caused the transactional nest count supported to be exceeded", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "TX_EXEC.MISC3", "PublicDescription": "Unfriendly TSX abort triggered by a nest cou= nt that is too deep.", @@ -949,6 +1055,7 @@ }, { "BriefDescription": "Counts the number of times a XBEGIN instructi= on was executed inside an HLE transactional region.", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "TX_EXEC.MISC4", "PublicDescription": "RTM region detected inside HLE.", @@ -957,6 +1064,7 @@ }, { "BriefDescription": "Counts the number of times an HLE XACQUIRE in= struction was executed inside an RTM transactional region", + "Counter": "0,1,2,3", "EventCode": "0x5d", "EventName": "TX_EXEC.MISC5", "PublicDescription": "Counts the number of times an HLE XACQUIRE i= nstruction was executed inside an RTM transactional region.", @@ -965,6 +1073,7 @@ }, { "BriefDescription": "Number of times a transactional abort was sig= naled due to a data capacity limitation for transactional reads or writes."= , + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "TX_MEM.ABORT_CAPACITY", "SampleAfterValue": "2000003", @@ -972,6 +1081,7 @@ }, { "BriefDescription": "Number of times a transactional abort was sig= naled due to a data conflict on a transactionally accessed 
address", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "TX_MEM.ABORT_CONFLICT", "PublicDescription": "Number of times a TSX line had a cache confl= ict.", @@ -980,6 +1090,7 @@ }, { "BriefDescription": "Number of times an HLE transactional executio= n aborted due to XRELEASE lock not satisfying the address and value require= ments in the elision buffer", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH", "PublicDescription": "Number of times a TSX Abort was triggered du= e to release/commit but data and address mismatch.", @@ -988,6 +1099,7 @@ }, { "BriefDescription": "Number of times an HLE transactional executio= n aborted due to NoAllocatedElisionBuffer being non-zero.", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY", "PublicDescription": "Number of times a TSX Abort was triggered du= e to commit but Lock Buffer not empty.", @@ -996,6 +1108,7 @@ }, { "BriefDescription": "Number of times an HLE transactional executio= n aborted due to an unsupported read alignment from the elision buffer.", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMEN= T", "PublicDescription": "Number of times a TSX Abort was triggered du= e to attempting an unsupported alignment from Lock Buffer.", @@ -1004,6 +1117,7 @@ }, { "BriefDescription": "Number of times a HLE transactional region ab= orted due to a non XRELEASE prefixed instruction writing to an elided lock = in the elision buffer", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK", "PublicDescription": "Number of times a TSX Abort was triggered du= e to a non-release/commit store to lock.", @@ -1012,6 +1126,7 @@ }, { "BriefDescription": "Number of times HLE lock could not be elided = due to ElisionBufferAvailable being zero.", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "TX_MEM.HLE_ELISION_BUFFER_FULL", "PublicDescription": "Number of times we could not allocate Lock B= uffer.", diff --git a/tools/perf/pmu-events/arch/x86/skylakex/metricgroups.json b/to= ols/perf/pmu-events/arch/x86/skylakex/metricgroups.json index 904d299c95a3..cccfcab3425e 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/metricgroups.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/metricgroups.json @@ -5,7 +5,20 @@ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Met= rics spreadsheet", "BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Me= trics spreadsheet", "Branches": "Grouping from Top-down Microarchitecture Analysis Metrics= spreadsheet", + "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvBO": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spr= eadsheet", + "BvOB": "Grouping from 
Top-down Microarchitecture Analysis Metrics spreadsheet", + "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", + "CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", diff --git a/tools/perf/pmu-events/arch/x86/skylakex/other.json b/tools/perf/pmu-events/arch/x86/skylakex/other.json index 2511d722327a..44c820518e12 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/other.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/other.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the Non-AVX turbo schedule.", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "CORE_POWER.LVL0_TURBO_LICENSE", "PublicDescription": "Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes.", @@ -9,6 +10,7 @@ }, { "BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX2 turbo schedule.", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "CORE_POWER.LVL1_TURBO_LICENSE", "PublicDescription": "Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions.", @@ -17,6 +19,7 @@ }, { "BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX512 turbo schedule.", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "CORE_POWER.LVL2_TURBO_LICENSE", "PublicDescription": "Core cycles where the core was running with power-delivery for license level 2 (introduced in Skylake Server microarchitecture). 
This includes high current AVX 512-bit instructions.", @@ -25,6 +28,7 @@ }, { "BriefDescription": "Core cycles the core was throttled due to a p= ending power level request.", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "CORE_POWER.THROTTLE", "PublicDescription": "Core cycles the out-of-order engine was thro= ttled due to a pending power level request.", @@ -33,6 +37,7 @@ }, { "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IFWDFE", + "Counter": "0,1,2,3", "EventCode": "0xEF", "EventName": "CORE_SNOOP_RESPONSE.RSP_IFWDFE", "SampleAfterValue": "2000003", @@ -40,6 +45,7 @@ }, { "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IFWDM", + "Counter": "0,1,2,3", "EventCode": "0xEF", "EventName": "CORE_SNOOP_RESPONSE.RSP_IFWDM", "SampleAfterValue": "2000003", @@ -47,6 +53,7 @@ }, { "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IHITFSE", + "Counter": "0,1,2,3", "EventCode": "0xEF", "EventName": "CORE_SNOOP_RESPONSE.RSP_IHITFSE", "SampleAfterValue": "2000003", @@ -54,6 +61,7 @@ }, { "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IHITI", + "Counter": "0,1,2,3", "EventCode": "0xEF", "EventName": "CORE_SNOOP_RESPONSE.RSP_IHITI", "SampleAfterValue": "2000003", @@ -61,6 +69,7 @@ }, { "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_SFWDFE", + "Counter": "0,1,2,3", "EventCode": "0xEF", "EventName": "CORE_SNOOP_RESPONSE.RSP_SFWDFE", "SampleAfterValue": "2000003", @@ -68,6 +77,7 @@ }, { "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_SFWDM", + "Counter": "0,1,2,3", "EventCode": "0xEF", "EventName": "CORE_SNOOP_RESPONSE.RSP_SFWDM", "SampleAfterValue": "2000003", @@ -75,6 +85,7 @@ }, { "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_SHITFSE", + "Counter": "0,1,2,3", "EventCode": "0xEF", "EventName": "CORE_SNOOP_RESPONSE.RSP_SHITFSE", "SampleAfterValue": "2000003", @@ -82,6 +93,7 @@ }, { "BriefDescription": "Number of hardware interrupts received by the= processor.", + "Counter": "0,1,2,3", "EventCode": "0xCB", "EventName": "HW_INTERRUPTS.RECEIVED", "PublicDescription": "Counts the number of hardware interruptions = received by the processor.", @@ -90,6 +102,7 @@ }, { "BriefDescription": "Counts number of cache lines that are dropped= and not written back to L3 as they are deemed to be less likely to be reus= ed shortly", + "Counter": "0,1,2,3", "EventCode": "0xFE", "EventName": "IDI_MISC.WB_DOWNGRADE", "PublicDescription": "Counts number of cache lines that are droppe= d and not written back to L3 as they are deemed to be less likely to be reu= sed shortly.", @@ -98,6 +111,7 @@ }, { "BriefDescription": "Counts number of cache lines that are allocat= ed and written back to L3 with the intention that they are more likely to b= e reused shortly", + "Counter": "0,1,2,3", "EventCode": "0xFE", "EventName": "IDI_MISC.WB_UPGRADE", "PublicDescription": "Counts number of cache lines that are alloca= ted and written back to L3 with the intention that they are more likely to = be reused shortly.", @@ -106,6 +120,7 @@ }, { "BriefDescription": "MEMORY_DISAMBIGUATION.HISTORY_RESET", + "Counter": "0,1,2,3", "EventCode": "0x09", "EventName": "MEMORY_DISAMBIGUATION.HISTORY_RESET", "SampleAfterValue": "2000003", diff --git a/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json b/tools/= perf/pmu-events/arch/x86/skylakex/pipeline.json index c50ddf5b40dd..3dd296ab4d78 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Cycles when divide unit is busy executing div= ide or square root operations. 
Accounts for integer and floating-point oper= ations.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x14", "EventName": "ARITH.DIVIDER_ACTIVE", @@ -9,6 +10,7 @@ }, { "BriefDescription": "All (macro) branch instructions retired.", + "Counter": "0,1,2,3", "Errata": "SKL091", "EventCode": "0xC4", "EventName": "BR_INST_RETIRED.ALL_BRANCHES", @@ -17,6 +19,7 @@ }, { "BriefDescription": "All (macro) branch instructions retired.", + "Counter": "0,1,2,3", "Errata": "SKL091", "EventCode": "0xC4", "EventName": "BR_INST_RETIRED.ALL_BRANCHES_PEBS", @@ -27,6 +30,7 @@ }, { "BriefDescription": "Conditional branch instructions retired. [Thi= s event is alias to BR_INST_RETIRED.CONDITIONAL]", + "Counter": "0,1,2,3", "Errata": "SKL091", "EventCode": "0xC4", "EventName": "BR_INST_RETIRED.COND", @@ -36,6 +40,7 @@ }, { "BriefDescription": "Conditional branch instructions retired. [Thi= s event is alias to BR_INST_RETIRED.COND]", + "Counter": "0,1,2,3", "Errata": "SKL091", "EventCode": "0xC4", "EventName": "BR_INST_RETIRED.CONDITIONAL", @@ -46,6 +51,7 @@ }, { "BriefDescription": "Not taken branch instructions retired.", + "Counter": "0,1,2,3", "Errata": "SKL091", "EventCode": "0xc4", "EventName": "BR_INST_RETIRED.COND_NTAKEN", @@ -55,6 +61,7 @@ }, { "BriefDescription": "Far branch instructions retired.", + "Counter": "0,1,2,3", "Errata": "SKL091", "EventCode": "0xC4", "EventName": "BR_INST_RETIRED.FAR_BRANCH", @@ -65,6 +72,7 @@ }, { "BriefDescription": "Direct and indirect near call instructions re= tired.", + "Counter": "0,1,2,3", "Errata": "SKL091", "EventCode": "0xC4", "EventName": "BR_INST_RETIRED.NEAR_CALL", @@ -75,6 +83,7 @@ }, { "BriefDescription": "Return instructions retired.", + "Counter": "0,1,2,3", "Errata": "SKL091", "EventCode": "0xC4", "EventName": "BR_INST_RETIRED.NEAR_RETURN", @@ -85,6 +94,7 @@ }, { "BriefDescription": "Taken branch instructions retired.", + "Counter": "0,1,2,3", "Errata": "SKL091", "EventCode": "0xC4", "EventName": "BR_INST_RETIRED.NEAR_TAKEN", @@ -95,6 +105,7 @@ }, { "BriefDescription": "Not taken branch instructions retired.", + "Counter": "0,1,2,3", "Errata": "SKL091", "EventCode": "0xC4", "EventName": "BR_INST_RETIRED.NOT_TAKEN", @@ -104,6 +115,7 @@ }, { "BriefDescription": "Speculative and retired mispredicted macro co= nditional branches", + "Counter": "0,1,2,3", "EventCode": "0x89", "EventName": "BR_MISP_EXEC.ALL_BRANCHES", "PublicDescription": "This event counts both taken and not taken s= peculative and retired mispredicted branch instructions.", @@ -112,6 +124,7 @@ }, { "BriefDescription": "Speculative mispredicted indirect branches", + "Counter": "0,1,2,3", "EventCode": "0x89", "EventName": "BR_MISP_EXEC.INDIRECT", "PublicDescription": "Counts speculatively miss-predicted indirect= branches at execution time. Counts for indirect near CALL or JMP instructi= ons (RET excluded).", @@ -120,6 +133,7 @@ }, { "BriefDescription": "All mispredicted macro branch instructions re= tired.", + "Counter": "0,1,2,3", "EventCode": "0xC5", "EventName": "BR_MISP_RETIRED.ALL_BRANCHES", "PublicDescription": "Counts all the retired branch instructions t= hat were mispredicted by the processor. A branch misprediction occurs when = the processor incorrectly predicts the destination of the branch. 
When the= misprediction is discovered at execution, all the instructions executed in= the wrong (speculative) path must be discarded, and the processor must sta= rt fetching from the correct path.", @@ -127,6 +141,7 @@ }, { "BriefDescription": "Mispredicted macro branch instructions retire= d.", + "Counter": "0,1,2,3", "EventCode": "0xC5", "EventName": "BR_MISP_RETIRED.ALL_BRANCHES_PEBS", "PEBS": "2", @@ -136,6 +151,7 @@ }, { "BriefDescription": "Mispredicted conditional branch instructions = retired.", + "Counter": "0,1,2,3", "EventCode": "0xC5", "EventName": "BR_MISP_RETIRED.CONDITIONAL", "PEBS": "1", @@ -145,6 +161,7 @@ }, { "BriefDescription": "Mispredicted direct and indirect near call in= structions retired.", + "Counter": "0,1,2,3", "EventCode": "0xC5", "EventName": "BR_MISP_RETIRED.NEAR_CALL", "PEBS": "1", @@ -154,6 +171,7 @@ }, { "BriefDescription": "Number of near branch instructions retired th= at were mispredicted and taken.", + "Counter": "0,1,2,3", "EventCode": "0xC5", "EventName": "BR_MISP_RETIRED.NEAR_TAKEN", "PEBS": "1", @@ -162,6 +180,7 @@ }, { "BriefDescription": "This event counts the number of mispredicted = ret instructions retired. Non PEBS", + "Counter": "0,1,2,3", "EventCode": "0xC5", "EventName": "BR_MISP_RETIRED.RET", "PEBS": "1", @@ -171,6 +190,7 @@ }, { "BriefDescription": "Core crystal clock cycles when this thread is= unhalted and the other thread is halted.", + "Counter": "0,1,2,3", "EventCode": "0x3C", "EventName": "CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE", "SampleAfterValue": "25003", @@ -178,6 +198,7 @@ }, { "BriefDescription": "Core crystal clock cycles when the thread is = unhalted.", + "Counter": "0,1,2,3", "EventCode": "0x3C", "EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK", "SampleAfterValue": "25003", @@ -186,6 +207,7 @@ { "AnyThread": "1", "BriefDescription": "Core crystal clock cycles when at least one t= hread on the physical core is unhalted.", + "Counter": "0,1,2,3", "EventCode": "0x3C", "EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY", "SampleAfterValue": "25003", @@ -193,6 +215,7 @@ }, { "BriefDescription": "Core crystal clock cycles when this thread is= unhalted and the other thread is halted.", + "Counter": "0,1,2,3", "EventCode": "0x3C", "EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE", "SampleAfterValue": "25003", @@ -200,6 +223,7 @@ }, { "BriefDescription": "Reference cycles when the core is not in halt= state.", + "Counter": "Fixed counter 2", "EventName": "CPU_CLK_UNHALTED.REF_TSC", "PublicDescription": "Counts the number of reference cycles when t= he core is not in a halt state. The core enters the halt state when it is r= unning the HLT instruction or the MWAIT instruction. This event is not affe= cted by core frequency changes (for example, P states, TM2 transitions) but= has the same incrementing frequency as the time stamp counter. This event = can approximate elapsed time while the core was not in a halt state. This e= vent has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is c= ounted on a dedicated fixed counter, leaving the four (eight when Hyperthre= ading is disabled) programmable counters available for other events. Note: = On all current platforms this event stops counting during 'throttling (TM)'= states duty off periods the processor is 'halted'. The counter update is = done at a lower clock rate then the core clock the overflow status bit for = this counter may appear 'sticky'. After the counter has overflowed and sof= tware clears the overflow status bit and resets the counter to less than MA= X. 
The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.", "SampleAfterValue": "2000003", @@ -207,6 +231,7 @@ }, { "BriefDescription": "Core crystal clock cycles when the thread is unhalted.", + "Counter": "0,1,2,3", "EventCode": "0x3C", "EventName": "CPU_CLK_UNHALTED.REF_XCLK", "SampleAfterValue": "25003", @@ -215,6 +240,7 @@ { "AnyThread": "1", "BriefDescription": "Core crystal clock cycles when at least one thread on the physical core is unhalted.", + "Counter": "0,1,2,3", "EventCode": "0x3C", "EventName": "CPU_CLK_UNHALTED.REF_XCLK_ANY", "SampleAfterValue": "25003", @@ -222,6 +248,7 @@ }, { "BriefDescription": "Counts when there is a transition from ring 1, 2 or 3 to ring 0.", + "Counter": "0,1,2,3", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0x3C", @@ -231,6 +258,7 @@ }, { "BriefDescription": "Core cycles when the thread is not in halt state", + "Counter": "Fixed counter 1", "EventName": "CPU_CLK_UNHALTED.THREAD", "PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events.", "SampleAfterValue": "2000003", @@ -239,12 +267,14 @@ { "AnyThread": "1", "BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.", + "Counter": "Fixed counter 1", "EventName": "CPU_CLK_UNHALTED.THREAD_ANY", "SampleAfterValue": "2000003", "UMask": "0x2" }, { "BriefDescription": "Thread cycles when thread is not in halt state", + "Counter": "0,1,2,3", "EventCode": "0x3C", "EventName": "CPU_CLK_UNHALTED.THREAD_P", "PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. 
= For this reason, this event may have a changing ratio with regards to wall = clock time.", @@ -253,12 +283,14 @@ { "AnyThread": "1", "BriefDescription": "Core cycles when at least one thread on the p= hysical core is not in halt state.", + "Counter": "0,1,2,3", "EventCode": "0x3C", "EventName": "CPU_CLK_UNHALTED.THREAD_P_ANY", "SampleAfterValue": "2000003" }, { "BriefDescription": "Cycles while L1 cache miss demand load is out= standing.", + "Counter": "0,1,2,3", "CounterMask": "8", "EventCode": "0xA3", "EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS", @@ -267,6 +299,7 @@ }, { "BriefDescription": "Cycles while L2 cache miss demand load is out= standing.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0xA3", "EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS", @@ -275,6 +308,7 @@ }, { "BriefDescription": "Cycles while memory subsystem has an outstand= ing load.", + "Counter": "0,1,2,3", "CounterMask": "16", "EventCode": "0xA3", "EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY", @@ -283,6 +317,7 @@ }, { "BriefDescription": "Execution stalls while L1 cache miss demand l= oad is outstanding.", + "Counter": "0,1,2,3", "CounterMask": "12", "EventCode": "0xA3", "EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS", @@ -291,6 +326,7 @@ }, { "BriefDescription": "Execution stalls while L2 cache miss demand l= oad is outstanding.", + "Counter": "0,1,2,3", "CounterMask": "5", "EventCode": "0xA3", "EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS", @@ -299,6 +335,7 @@ }, { "BriefDescription": "Execution stalls while memory subsystem has a= n outstanding load.", + "Counter": "0,1,2,3", "CounterMask": "20", "EventCode": "0xA3", "EventName": "CYCLE_ACTIVITY.STALLS_MEM_ANY", @@ -307,6 +344,7 @@ }, { "BriefDescription": "Total execution stalls.", + "Counter": "0,1,2,3", "CounterMask": "4", "EventCode": "0xA3", "EventName": "CYCLE_ACTIVITY.STALLS_TOTAL", @@ -315,6 +353,7 @@ }, { "BriefDescription": "Cycles total of 1 uop is executed on all port= s and Reservation Station was not empty.", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "EXE_ACTIVITY.1_PORTS_UTIL", "PublicDescription": "Counts cycles during which a total of 1 uop = was executed on all ports and Reservation Station (RS) was not empty.", @@ -323,6 +362,7 @@ }, { "BriefDescription": "Cycles total of 2 uops are executed on all po= rts and Reservation Station was not empty.", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "EXE_ACTIVITY.2_PORTS_UTIL", "PublicDescription": "Counts cycles during which a total of 2 uops= were executed on all ports and Reservation Station (RS) was not empty.", @@ -331,6 +371,7 @@ }, { "BriefDescription": "Cycles total of 3 uops are executed on all po= rts and Reservation Station was not empty.", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "EXE_ACTIVITY.3_PORTS_UTIL", "PublicDescription": "Cycles total of 3 uops are executed on all p= orts and Reservation Station (RS) was not empty.", @@ -339,6 +380,7 @@ }, { "BriefDescription": "Cycles total of 4 uops are executed on all po= rts and Reservation Station was not empty.", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "EXE_ACTIVITY.4_PORTS_UTIL", "PublicDescription": "Cycles total of 4 uops are executed on all p= orts and Reservation Station (RS) was not empty.", @@ -347,6 +389,7 @@ }, { "BriefDescription": "Cycles where the Store Buffer was full and no= outstanding load.", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "EXE_ACTIVITY.BOUND_ON_STORES", "SampleAfterValue": "2000003", @@ -354,6 +397,7 @@ }, { "BriefDescription": 
"Cycles where no uops were executed, the Reser= vation Station was not empty, the Store Buffer was full and there was no ou= tstanding load.", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "EXE_ACTIVITY.EXE_BOUND_0_PORTS", "PublicDescription": "Counts cycles during which no uops were exec= uted on all ports and Reservation Station (RS) was not empty.", @@ -362,6 +406,7 @@ }, { "BriefDescription": "Stalls caused by changing prefix length of th= e instruction. [This event is alias to DECODE.LCP]", + "Counter": "0,1,2,3", "EventCode": "0x87", "EventName": "ILD_STALL.LCP", "PublicDescription": "Counts cycles that the Instruction Length de= coder (ILD) stalls occurred due to dynamically changing prefix length of th= e decoded instruction (by operand size prefix instruction 0x66, address siz= e prefix instruction 0x67 or REX.W for Intel64). Count is proportional to t= he number of prefixes in a 16B-line. This may result in a three-cycle penal= ty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is= alias to DECODE.LCP]", @@ -370,6 +415,7 @@ }, { "BriefDescription": "Instruction decoders utilized in a cycle", + "Counter": "0,1,2,3", "EventCode": "0x55", "EventName": "INST_DECODED.DECODERS", "PublicDescription": "Number of decoders utilized in a cycle when = the MITE (legacy decode pipeline) fetches instructions.", @@ -378,6 +424,7 @@ }, { "BriefDescription": "Instructions retired from execution.", + "Counter": "Fixed counter 0", "EventName": "INST_RETIRED.ANY", "PublicDescription": "Counts the number of instructions retired fr= om execution. For instructions that consist of multiple micro-ops, Counts t= he retirement of the last micro-op of the instruction. Counting continues d= uring hardware interrupts, traps, and inside interrupt handlers. Notes: INS= T_RETIRED.ANY is counted by a designated fixed counter, leaving the four (e= ight when Hyperthreading is disabled) programmable counters available for o= ther events. INST_RETIRED.ANY_P is counted by a programmable counter and it= is an architectural performance event. Counting: Faulting executions of GE= TSEC/VM entry/VM Exit/MWait will not count as retired instructions.", "SampleAfterValue": "2000003", @@ -385,6 +432,7 @@ }, { "BriefDescription": "Number of instructions retired. 
General Counter - architectural event", + "Counter": "0,1,2,3", "Errata": "SKL091, SKL044", "EventCode": "0xC0", "EventName": "INST_RETIRED.ANY_P", @@ -393,15 +441,17 @@ }, { "BriefDescription": "Number of all retired NOP instructions.", + "Counter": "0,1,2,3", "Errata": "SKL091, SKL044", "EventCode": "0xC0", "EventName": "INST_RETIRED.NOP", - "PEBS": "2", + "PEBS": "1", "SampleAfterValue": "2000003", "UMask": "0x2" }, { "BriefDescription": "Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution", + "Counter": "1", "Errata": "SKL091, SKL044", "EventCode": "0xC0", "EventName": "INST_RETIRED.PREC_DIST", @@ -412,6 +462,7 @@ }, { "BriefDescription": "Number of cycles using always true condition applied to PEBS instructions retired event.", + "Counter": "0,2,3", "CounterMask": "10", "Errata": "SKL091, SKL044", "EventCode": "0xC0", @@ -424,6 +475,7 @@ }, { "BriefDescription": "Clears speculative count", + "Counter": "0,1,2,3", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0x0D", @@ -434,6 +486,7 @@ }, { "BriefDescription": "Cycles the issue-stage is waiting for front-end to fetch from resteered path following branch misprediction or machine clear events.", + "Counter": "0,1,2,3", "EventCode": "0x0D", "EventName": "INT_MISC.CLEAR_RESTEER_CYCLES", "SampleAfterValue": "2000003", @@ -441,6 +494,7 @@ }, { "BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread (e.g. misprediction or memory nuke)", + "Counter": "0,1,2,3", "EventCode": "0x0D", "EventName": "INT_MISC.RECOVERY_CYCLES", "PublicDescription": "Core cycles the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event.", @@ -450,6 +504,7 @@ { "AnyThread": "1", "BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke).", + "Counter": "0,1,2,3", "EventCode": "0x0D", "EventName": "INT_MISC.RECOVERY_CYCLES_ANY", "SampleAfterValue": "2000003", @@ -457,6 +512,7 @@ }, { "BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "LD_BLOCKS.NO_SR", "PublicDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.", @@ -465,6 +521,7 @@ }, { "BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.", + "Counter": "0,1,2,3", "EventCode": "0x03", "EventName": "LD_BLOCKS.STORE_FORWARD", "PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.", @@ -473,6 +530,7 @@ }, { "BriefDescription": "False dependencies in MOB due to partial compare on address.", + "Counter": "0,1,2,3", "EventCode": "0x07", "EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS", "PublicDescription": "Counts false dependencies in MOB when the partial comparison upon loose net check and dependency was resolved by the Enhanced Loose net mechanism. This may not result in high performance penalties. 
Loose net checks can fail when loads and stores are 4k aliased.", @@ -481,6 +539,7 @@ }, { "BriefDescription": "Demand load dispatches that hit L1D fill buff= er (FB) allocated for software prefetch.", + "Counter": "0,1,2,3", "EventCode": "0x4C", "EventName": "LOAD_HIT_PRE.SW_PF", "PublicDescription": "Counts all not software-prefetch load dispat= ches that hit the fill buffer (FB) allocated for the software prefetch. It = can also be incremented by some lock instructions. So it should only be use= d with profiling so that the locks can be excluded by ASM (Assembly File) i= nspection of the nearby instructions.", @@ -489,6 +548,7 @@ }, { "BriefDescription": "Cycles 4 Uops delivered by the LSD, but didn'= t come from the decoder. [This event is alias to LSD.CYCLES_OK]", + "Counter": "0,1,2,3", "CounterMask": "4", "EventCode": "0xA8", "EventName": "LSD.CYCLES_4_UOPS", @@ -498,6 +558,7 @@ }, { "BriefDescription": "Cycles Uops delivered by the LSD, but didn't = come from the decoder.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0xA8", "EventName": "LSD.CYCLES_ACTIVE", @@ -507,6 +568,7 @@ }, { "BriefDescription": "Cycles 4 Uops delivered by the LSD, but didn'= t come from the decoder. [This event is alias to LSD.CYCLES_4_UOPS]", + "Counter": "0,1,2,3", "CounterMask": "4", "EventCode": "0xA8", "EventName": "LSD.CYCLES_OK", @@ -516,6 +578,7 @@ }, { "BriefDescription": "Number of Uops delivered by the LSD.", + "Counter": "0,1,2,3", "EventCode": "0xA8", "EventName": "LSD.UOPS", "PublicDescription": "Number of uops delivered to the back-end by = the LSD(Loop Stream Detector).", @@ -524,6 +587,7 @@ }, { "BriefDescription": "Number of machine clears (nukes) of any type.= ", + "Counter": "0,1,2,3", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0xC3", @@ -533,6 +597,7 @@ }, { "BriefDescription": "Self-modifying code (SMC) detected.", + "Counter": "0,1,2,3", "EventCode": "0xC3", "EventName": "MACHINE_CLEARS.SMC", "PublicDescription": "Counts self-modifying code (SMC) detected, w= hich causes a machine clear.", @@ -541,6 +606,7 @@ }, { "BriefDescription": "Number of times a microcode assist is invoked= by HW other than FP-assist. Examples include AD (page Access Dirty) and AV= X* related assists.", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "OTHER_ASSISTS.ANY", "SampleAfterValue": "100003", @@ -548,6 +614,7 @@ }, { "BriefDescription": "Cycles where the pipeline is stalled due to s= erializing operations.", + "Counter": "0,1,2,3", "EventCode": "0x59", "EventName": "PARTIAL_RAT_STALLS.SCOREBOARD", "PublicDescription": "This event counts cycles during which the mi= crocode scoreboard stalls happen.", @@ -556,6 +623,7 @@ }, { "BriefDescription": "Resource-related stall cycles", + "Counter": "0,1,2,3", "EventCode": "0xa2", "EventName": "RESOURCE_STALLS.ANY", "PublicDescription": "Counts resource-related stall cycles.", @@ -564,6 +632,7 @@ }, { "BriefDescription": "Cycles stalled due to no store buffers availa= ble. (not including draining form sync).", + "Counter": "0,1,2,3", "EventCode": "0xA2", "EventName": "RESOURCE_STALLS.SB", "PublicDescription": "Counts allocation stall cycles caused by the= store buffer (SB) being full. 
This counts cycles that the pipeline back-en= d blocked uop delivery from the front-end.", @@ -572,6 +641,7 @@ }, { "BriefDescription": "Increments whenever there is an update to the= LBR array.", + "Counter": "0,1,2,3", "EventCode": "0xCC", "EventName": "ROB_MISC_EVENTS.LBR_INSERTS", "PublicDescription": "Increments when an entry is added to the Las= t Branch Record (LBR) array (or removed from the array in case of RETURNs i= n call stack mode). The event requires LBR enable via IA32_DEBUGCTL MSR and= branch type selection via MSR_LBR_SELECT.", @@ -580,6 +650,7 @@ }, { "BriefDescription": "Number of retired PAUSE instructions (that do= not end up with a VMExit to the VMM; TSX aborted Instructions may be count= ed). This event is not supported on first SKL and KBL products.", + "Counter": "0,1,2,3", "EventCode": "0xCC", "EventName": "ROB_MISC_EVENTS.PAUSE_INST", "SampleAfterValue": "2000003", @@ -587,6 +658,7 @@ }, { "BriefDescription": "Cycles when Reservation Station (RS) is empty= for the thread", + "Counter": "0,1,2,3", "EventCode": "0x5E", "EventName": "RS_EVENTS.EMPTY_CYCLES", "PublicDescription": "Counts cycles during which the reservation s= tation (RS) is empty for the thread.; Note: In ST-mode, not active thread s= hould drive 0. This is usually caused by severely costly branch mispredicti= ons, or allocator/FE issues.", @@ -595,6 +667,7 @@ }, { "BriefDescription": "Counts end of periods where the Reservation S= tation (RS) was empty. Could be useful to precisely locate Frontend Latency= Bound issues.", + "Counter": "0,1,2,3", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0x5E", @@ -606,6 +679,7 @@ }, { "BriefDescription": "Cycles per thread when uops are executed in p= ort 0", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UOPS_DISPATCHED_PORT.PORT_0", "PublicDescription": "Counts, on the per-thread basis, cycles duri= ng which at least one uop is dispatched from the Reservation Station (RS) t= o port 0.", @@ -614,6 +688,7 @@ }, { "BriefDescription": "Cycles per thread when uops are executed in p= ort 1", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UOPS_DISPATCHED_PORT.PORT_1", "PublicDescription": "Counts, on the per-thread basis, cycles duri= ng which at least one uop is dispatched from the Reservation Station (RS) t= o port 1.", @@ -622,6 +697,7 @@ }, { "BriefDescription": "Cycles per thread when uops are executed in p= ort 2", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UOPS_DISPATCHED_PORT.PORT_2", "PublicDescription": "Counts, on the per-thread basis, cycles duri= ng which at least one uop is dispatched from the Reservation Station (RS) t= o port 2.", @@ -630,6 +706,7 @@ }, { "BriefDescription": "Cycles per thread when uops are executed in p= ort 3", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UOPS_DISPATCHED_PORT.PORT_3", "PublicDescription": "Counts, on the per-thread basis, cycles duri= ng which at least one uop is dispatched from the Reservation Station (RS) t= o port 3.", @@ -638,6 +715,7 @@ }, { "BriefDescription": "Cycles per thread when uops are executed in p= ort 4", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UOPS_DISPATCHED_PORT.PORT_4", "PublicDescription": "Counts, on the per-thread basis, cycles duri= ng which at least one uop is dispatched from the Reservation Station (RS) t= o port 4.", @@ -646,6 +724,7 @@ }, { "BriefDescription": "Cycles per thread when uops are executed in p= ort 5", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UOPS_DISPATCHED_PORT.PORT_5", 
"PublicDescription": "Counts, on the per-thread basis, cycles duri= ng which at least one uop is dispatched from the Reservation Station (RS) t= o port 5.", @@ -654,6 +733,7 @@ }, { "BriefDescription": "Cycles per thread when uops are executed in p= ort 6", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UOPS_DISPATCHED_PORT.PORT_6", "PublicDescription": "Counts, on the per-thread basis, cycles duri= ng which at least one uop is dispatched from the Reservation Station (RS) t= o port 6.", @@ -662,6 +742,7 @@ }, { "BriefDescription": "Cycles per thread when uops are executed in p= ort 7", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UOPS_DISPATCHED_PORT.PORT_7", "PublicDescription": "Counts, on the per-thread basis, cycles duri= ng which at least one uop is dispatched from the Reservation Station (RS) t= o port 7.", @@ -670,6 +751,7 @@ }, { "BriefDescription": "Number of uops executed on the core.", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.CORE", "PublicDescription": "Number of uops executed from any thread.", @@ -678,6 +760,7 @@ }, { "BriefDescription": "Cycles at least 1 micro-op is executed from a= ny thread on physical core.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1", @@ -686,6 +769,7 @@ }, { "BriefDescription": "Cycles at least 2 micro-op is executed from a= ny thread on physical core.", + "Counter": "0,1,2,3", "CounterMask": "2", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2", @@ -694,6 +778,7 @@ }, { "BriefDescription": "Cycles at least 3 micro-op is executed from a= ny thread on physical core.", + "Counter": "0,1,2,3", "CounterMask": "3", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3", @@ -702,6 +787,7 @@ }, { "BriefDescription": "Cycles at least 4 micro-op is executed from a= ny thread on physical core.", + "Counter": "0,1,2,3", "CounterMask": "4", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4", @@ -710,6 +796,7 @@ }, { "BriefDescription": "Cycles with no micro-ops executed from any th= read on physical core.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.CORE_CYCLES_NONE", @@ -719,6 +806,7 @@ }, { "BriefDescription": "Cycles where at least 1 uop was executed per-= thread", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC", @@ -728,6 +816,7 @@ }, { "BriefDescription": "Cycles where at least 2 uops were executed pe= r-thread", + "Counter": "0,1,2,3", "CounterMask": "2", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC", @@ -737,6 +826,7 @@ }, { "BriefDescription": "Cycles where at least 3 uops were executed pe= r-thread", + "Counter": "0,1,2,3", "CounterMask": "3", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC", @@ -746,6 +836,7 @@ }, { "BriefDescription": "Cycles where at least 4 uops were executed pe= r-thread", + "Counter": "0,1,2,3", "CounterMask": "4", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.CYCLES_GE_4_UOPS_EXEC", @@ -755,6 +846,7 @@ }, { "BriefDescription": "Counts number of cycles no uops were dispatch= ed to be executed on this thread.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.STALL_CYCLES", @@ -765,6 +857,7 @@ }, { "BriefDescription": "Counts the number of uops to be executed per-= thread each cycle.", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": 
"UOPS_EXECUTED.THREAD", "PublicDescription": "Number of uops to be executed per-thread eac= h cycle.", @@ -773,6 +866,7 @@ }, { "BriefDescription": "Counts the number of x87 uops dispatched.", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UOPS_EXECUTED.X87", "PublicDescription": "Counts the number of x87 uops executed.", @@ -781,6 +875,7 @@ }, { "BriefDescription": "Uops that Resource Allocation Table (RAT) iss= ues to Reservation Station (RS)", + "Counter": "0,1,2,3", "EventCode": "0x0E", "EventName": "UOPS_ISSUED.ANY", "PublicDescription": "Counts the number of uops that the Resource = Allocation Table (RAT) issues to the Reservation Station (RS).", @@ -789,6 +884,7 @@ }, { "BriefDescription": "Number of slow LEA uops being allocated. A uo= p is generally considered SlowLea if it has 3 sources (e.g. 2 sources + imm= ediate) regardless if as a result of LEA instruction or not.", + "Counter": "0,1,2,3", "EventCode": "0x0E", "EventName": "UOPS_ISSUED.SLOW_LEA", "SampleAfterValue": "2000003", @@ -796,6 +892,7 @@ }, { "BriefDescription": "Cycles when Resource Allocation Table (RAT) d= oes not issue Uops to Reservation Station (RS) for the thread", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0x0E", "EventName": "UOPS_ISSUED.STALL_CYCLES", @@ -806,6 +903,7 @@ }, { "BriefDescription": "Uops inserted at issue-stage in order to pres= erve upper bits of vector registers.", + "Counter": "0,1,2,3", "EventCode": "0x0E", "EventName": "UOPS_ISSUED.VECTOR_WIDTH_MISMATCH", "PublicDescription": "Counts the number of Blend Uops issued by th= e Resource Allocation Table (RAT) to the reservation station (RS) in order = to preserve upper bits of vector registers. Starting with the Skylake micro= architecture, these Blend uops are needed since every Intel SSE instruction= executed in Dirty Upper State needs to preserve bits 128-255 of the destin= ation register. For more information, refer to Mixing Intel AVX and Intel S= SE Code section of the Optimization Guide.", @@ -814,6 +912,7 @@ }, { "BriefDescription": "Number of macro-fused uops retired. (non prec= ise)", + "Counter": "0,1,2,3", "EventCode": "0xc2", "EventName": "UOPS_RETIRED.MACRO_FUSED", "PublicDescription": "Counts the number of macro-fused uops retire= d. 
(non precise)", @@ -822,6 +921,7 @@ }, { "BriefDescription": "Retirement slots used.", + "Counter": "0,1,2,3", "EventCode": "0xC2", "EventName": "UOPS_RETIRED.RETIRE_SLOTS", "PublicDescription": "Counts the retirement slots used.", @@ -830,6 +930,7 @@ }, { "BriefDescription": "Cycles without actually retired uops.", + "Counter": "0,1,2,3", "CounterMask": "1", "EventCode": "0xC2", "EventName": "UOPS_RETIRED.STALL_CYCLES", @@ -840,6 +941,7 @@ }, { "BriefDescription": "Cycles with less than 10 actually retired uops.", + "Counter": "0,1,2,3", "CounterMask": "16", "EventCode": "0xC2", "EventName": "UOPS_RETIRED.TOTAL_CYCLES", diff --git a/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json b/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json index 8126f952a30c..e5e86892d7bb 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json @@ -68,7 +68,7 @@ }, { "BriefDescription": "Percentage of time spent in the active CPU power state C0", - "MetricExpr": "tma_info_system_cpu_utilization", + "MetricExpr": "tma_info_system_cpus_utilized", "MetricName": "cpu_utilization", "ScaleUnit": "100%" }, @@ -163,7 +163,7 @@ }, { "BriefDescription": "Ratio of number of code read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions", - "MetricExpr": "cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x12CC0233@ / INST_RETIRED.ANY", + "MetricExpr": "cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x12cc0233@ / INST_RETIRED.ANY", "MetricName": "llc_code_read_mpi_demand_plus_prefetch", "ScaleUnit": "1per_instr" }, @@ -187,7 +187,7 @@ }, { "BriefDescription": "Ratio of number of data read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions", - "MetricExpr": "cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x12D40433@ / INST_RETIRED.ANY", + "MetricExpr": "cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x12d40433@ / INST_RETIRED.ANY", "MetricName": "llc_data_read_mpi_demand_plus_prefetch", "ScaleUnit": "1per_instr" }, @@ -310,7 +310,7 @@ { "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists", "MetricExpr": "34 * (FP_ASSIST.ANY + OTHER_ASSISTS.ANY) / tma_info_thread_slots", - "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group", + "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group", "MetricName": "tma_assists", "MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)", "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. 
Sample with: OTHER_ASSISTS.AN= Y", @@ -319,7 +319,7 @@ { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", "MetricExpr": "1 - tma_frontend_bound - (UOPS_ISSUED.ANY + 4 * (IN= T_MISC.RECOVERY_CYCLES_ANY / 2 if #SMT_on else INT_MISC.RECOVERY_CYCLES)) /= tma_info_thread_slots", - "MetricGroup": "TmaL1;TopdownL1;tma_L1_group", + "MetricGroup": "BvOB;TmaL1;TopdownL1;tma_L1_group", "MetricName": "tma_backend_bound", "MetricThreshold": "tma_backend_bound > 0.2", "MetricgroupNoGroup": "TopdownL1", @@ -340,7 +340,7 @@ "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL= _BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation", - "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group= ;tma_bad_speculation_group;tma_issueBM", + "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_= group;tma_bad_speculation_group;tma_issueBM", "MetricName": "tma_branch_mispredicts", "MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_specula= tion > 0.15", "MetricgroupNoGroup": "TopdownL2", @@ -378,7 +378,7 @@ "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(44 * tma_info_system_core_frequency * (MEM_LOAD_L3= _HIT_RETIRED.XSNP_HITM * (OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER= _CORE / (OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OFFCORE_R= ESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + 44 * tma_info_system_= core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED= .FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks", - "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;t= ma_issueSyncxn;tma_l3_bound_group", + "MetricGroup": "BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_gr= oup;tma_issueSyncxn;tma_l3_bound_group", "MetricName": "tma_contested_accesses", "MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound = > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. 
Re= lated metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma= _remote_cache", @@ -399,7 +399,7 @@ "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "44 * tma_info_system_core_frequency * (MEM_LOAD_L3_= HIT_RETIRED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - OFFCORE_RES= PONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OFFCORE_RESPONSE.DEMAND_DATA= _RD.L3_HIT.HITM_OTHER_CORE + OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_H= IT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / = 2) / tma_info_thread_clks", - "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSync= xn;tma_l3_bound_group", + "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issu= eSyncxn;tma_l3_bound_group", "MetricName": "tma_data_sharing", "MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05= & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma= _false_sharing, tma_machine_clears, tma_remote_cache", @@ -417,7 +417,7 @@ { "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", "MetricExpr": "ARITH.DIVIDER_ACTIVE / tma_info_thread_clks", - "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group", + "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group", "MetricName": "tma_divider", "MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tm= a_backend_bound > 0.2)", "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_ACTIVE", @@ -448,14 +448,14 @@ "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_= latency_group;tma_issueFB", "MetricName": "tma_dsb_switches", "MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency >= 0.1 & tma_frontend_bound > 0.15)", - "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DS= B_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_mis= ses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. 
The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp", "ScaleUnit": "100%" }, { "BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses", "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "min(9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks", - "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group", + "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group", "MetricName": "tma_dtlb_load", "MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization", @@ -464,7 +464,7 @@ { "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses", "MetricExpr": "(9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / tma_info_core_core_clks", - "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group", + "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group", "MetricName": "tma_dtlb_store", "MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. 
Related metrics: tma_dtlb_load, = tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchroniz= ation", @@ -474,7 +474,7 @@ "BriefDescription": "This metric roughly estimates how often CPU w= as handling synchronizations due to False Sharing", "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(110 * tma_info_system_core_frequency * (OFFCORE_RE= SPONSE.DEMAND_RFO.L3_MISS.REMOTE_HITM + OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.= REMOTE_HITM) + 47.5 * tma_info_system_core_frequency * (OFFCORE_RESPONSE.DE= MAND_RFO.L3_HIT.HITM_OTHER_CORE + OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.HITM_OT= HER_CORE)) / tma_info_thread_clks", - "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;t= ma_issueSyncxn;tma_store_bound_group", + "MetricGroup": "BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_gr= oup;tma_issueSyncxn;tma_store_bound_group", "MetricName": "tma_false_sharing", "MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > = 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric roughly estimates how often CPU = was handling synchronizations due to False Sharing. False Sharing is a mult= ithreading hiccup; where multiple Logical Processors contend on different d= ata-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_= RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related= metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma= _remote_cache", @@ -484,7 +484,7 @@ "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "tma_info_memory_load_miss_real_latency * cpu@L1D_PE= ND_MISS.FB_FULL\\,cmask\\=3D1@ / tma_info_thread_clks", - "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_is= sueSL;tma_issueSmSt;tma_l1_bound_group", + "MetricGroup": "BvMS;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;t= ma_issueSL;tma_issueSmSt;tma_l1_bound_group", "MetricName": "tma_fb_full", "MetricThreshold": "tma_fb_full > 0.3", "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or= external memory). Related metrics: tma_info_bottleneck_cache_memory_bandwi= dth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store= _latency, tma_streaming_stores", @@ -497,7 +497,7 @@ "MetricName": "tma_fetch_bandwidth", "MetricThreshold": "tma_fetch_bandwidth > 0.2", "MetricgroupNoGroup": "TopdownL2", - "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend. Sam= ple with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LA= TENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. 
Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp", + "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp", "ScaleUnit": "100%" }, { @@ -512,6 +512,7 @@ }, { "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or up to ([SNB+] four; [ADL+] five) uops", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "tma_heavy_operations - tma_microcode_sequencer", "MetricGroup": "TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0", "MetricName": "tma_few_uops_instructions", @@ -540,7 +541,7 @@ }, { "BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired", - "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ / UOPS_RETIRED.RETIRE_SLOTS", + "MetricExpr": "FP_ARITH_INST_RETIRED.SCALAR / UOPS_RETIRED.RETIRE_SLOTS", "MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P", "MetricName": "tma_fp_scalar", "MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)", @@ -587,7 +588,7 @@ { "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend", "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / tma_info_thread_slots", - "MetricGroup": "PGO;TmaL1;TopdownL1;tma_L1_group", + "MetricGroup": "BvFB;BvIO;PGO;TmaL1;TopdownL1;tma_L1_group", "MetricName": "tma_frontend_bound", "MetricThreshold": "tma_frontend_bound > 0.15", "MetricgroupNoGroup": "TopdownL1", @@ -597,7 +598,7 @@ { "BriefDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions", "MetricExpr": "tma_light_operations * UOPS_RETIRED.MACRO_FUSED / UOPS_RETIRED.RETIRE_SLOTS", - "MetricGroup": "Branches;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group", + "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group", "MetricName": "tma_fused_instructions", "MetricThreshold": "tma_fused_instructions > 0.1 & tma_light_operations > 0.6", "PublicDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. CMP+JCC or DEC+JCC are common examples of legacy fusions. 
{([MTL] Note new MOV+OP and Load+OP fusions appear under Ot= her_Light_Ops in MTL!)}", @@ -616,7 +617,7 @@ { "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses", "MetricExpr": "(ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDAT= A_STALL\\,cmask\\=3D1\\,edge@) / tma_info_thread_clks", - "MetricGroup": "BigFootprint;FetchLat;IcMiss;TopdownL3;tma_L3_grou= p;tma_fetch_latency_group", + "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3= _group;tma_fetch_latency_group", "MetricName": "tma_icache_misses", "MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency = > 0.1 & tma_frontend_bound > 0.15)", "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RE= TIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS", @@ -649,24 +650,6 @@ "MetricGroup": "BrMispredicts", "MetricName": "tma_info_bad_spec_spec_clears_ratio" }, - { - "BriefDescription": "Probability of Core Bound bottleneck hidden b= y SMT-profiling artifacts", - "MetricExpr": "(100 * (1 - tma_core_bound / (((EXE_ACTIVITY.EXE_BO= UND_0_PORTS + tma_core_bound * RS_EVENTS.EMPTY_CYCLES) / CPU_CLK_UNHALTED.T= HREAD * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / CPU= _CLK_UNHALTED.THREAD * CPU_CLK_UNHALTED.THREAD + (EXE_ACTIVITY.1_PORTS_UTIL= + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if = ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_= MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_POR= TS_UTIL) / CPU_CLK_UNHALTED.THREAD) if tma_core_bound < (((EXE_ACTIVITY.EXE= _BOUND_0_PORTS + tma_core_bound * RS_EVENTS.EMPTY_CYCLES) / CPU_CLK_UNHALTE= D.THREAD * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / = CPU_CLK_UNHALTED.THREAD * CPU_CLK_UNHALTED.THREAD + (EXE_ACTIVITY.1_PORTS_U= TIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD = if ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STAL= LS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_= PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) else 1) if tma_info_system_smt_2t_ut= ilization > 0.5 else 0)", - "MetricGroup": "Cor;SMT", - "MetricName": "tma_info_botlnk_core_bound_likely" - }, - { - "BriefDescription": "Total pipeline cost of DSB (uop cache) misses= - subset of the Instruction_Fetch_BW Bottleneck.", - "MetricExpr": "100 * (100 * (tma_fetch_latency * (DSB2MITE_SWITCHE= S.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / ((ICACHE_16B.IFDATA_STALL + 2= * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=3D0x1\\,edge\\=3D0x1@) / CPU_CLK_U= NHALTED.THREAD + ICACHE_TAG.STALLS / CPU_CLK_UNHALTED.THREAD + (INT_MISC.CL= EAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD + 9 * BACLEARS.ANY / CPU_CLK_U= NHALTED.THREAD) + min(2 * IDQ.MS_SWITCHES / CPU_CLK_UNHALTED.THREAD, 1) + D= ECODE.LCP / CPU_CLK_UNHALTED.THREAD + DSB2MITE_SWITCHES.PENALTY_CYCLES / CP= U_CLK_UNHALTED.THREAD) + tma_fetch_bandwidth * tma_mite / (tma_mite + tma_d= sb)))", - "MetricGroup": "DSBmiss;Fed", - "MetricName": "tma_info_botlnk_dsb_misses" - }, - { - "BriefDescription": "Total pipeline cost of Instruction Cache miss= es - subset of the Big_Code Bottleneck.", - "MetricExpr": "100 * (100 * (tma_fetch_latency * ((ICACHE_16B.IFDA= TA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=3D0x1\\,edge\\=3D0x1@)= / CPU_CLK_UNHALTED.THREAD) / ((ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16= 
B.IFDATA_STALL\\,cmask\\=3D0x1\\,edge\\=3D0x1@) / CPU_CLK_UNHALTED.THREAD += ICACHE_TAG.STALLS / CPU_CLK_UNHALTED.THREAD + (INT_MISC.CLEAR_RESTEER_CYCL= ES / CPU_CLK_UNHALTED.THREAD + 9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) = + min(2 * IDQ.MS_SWITCHES / CPU_CLK_UNHALTED.THREAD, 1) + DECODE.LCP / CPU_= CLK_UNHALTED.THREAD + DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.T= HREAD)))", - "MetricGroup": "Fed;FetchLat;IcMiss", - "MetricName": "tma_info_botlnk_ic_misses" - }, { "BriefDescription": "Probability of Core Bound bottleneck hidden b= y SMT-profiling artifacts", "MetricConstraint": "NO_GROUP_EVENTS", @@ -675,6 +658,14 @@ "MetricName": "tma_info_botlnk_l0_core_bound_likely", "MetricThreshold": "tma_info_botlnk_l0_core_bound_likely > 0.5" }, + { + "BriefDescription": "Total pipeline cost of DSB (uop cache) hits -= subset of the Instruction_Fetch_BW Bottleneck", + "MetricExpr": "100 * (tma_frontend_bound * (tma_fetch_bandwidth / = (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_mite= )))", + "MetricGroup": "DSB;FetchBW;tma_issueFB", + "MetricName": "tma_info_botlnk_l2_dsb_bandwidth", + "MetricThreshold": "tma_info_botlnk_l2_dsb_bandwidth > 10", + "PublicDescription": "Total pipeline cost of DSB (uop cache) hits = - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_s= witches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_front= end_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp" + }, { "BriefDescription": "Total pipeline cost of DSB (uop cache) misses= - subset of the Instruction_Fetch_BW Bottleneck", "MetricConstraint": "NO_GROUP_EVENTS", @@ -682,7 +673,7 @@ "MetricGroup": "DSBmiss;Fed;tma_issueFB", "MetricName": "tma_info_botlnk_l2_dsb_misses", "MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10", - "PublicDescription": "Total pipeline cost of DSB (uop cache) misse= s - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb= _switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_in= st_mix_iptb, tma_lcp" + "PublicDescription": "Total pipeline cost of DSB (uop cache) misse= s - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb= _switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_= frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp" }, { "BriefDescription": "Total pipeline cost of Instruction Cache miss= es - subset of the Big_Code Bottleneck", @@ -692,40 +683,34 @@ "MetricThreshold": "tma_info_botlnk_l2_ic_misses > 5", "PublicDescription": "Total pipeline cost of Instruction Cache mis= ses - subset of the Big_Code Bottleneck. 
Related metrics: " }, - { - "BriefDescription": "Total pipeline cost of \"useful operations\" = - the baseline operations not covered by Branching_Overhead nor Irregular_O= verhead.", - "MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES= + BR_INST_RETIRED.NEAR_CALL) / tma_info_thread_slots - tma_microcode_seque= ncer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists= / tma_microcode_sequencer) * tma_heavy_operations)", - "MetricGroup": "Ret", - "MetricName": "tma_info_bottleneck_base_non_br", - "MetricThreshold": "tma_info_bottleneck_base_non_br > 20" - }, { "BriefDescription": "Total pipeline cost of instruction fetch rela= ted bottlenecks by large code footprint programs (i-side cache; TLB and BTB= misses)", "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_ic= ache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switch= es + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)", - "MetricGroup": "BigFootprint;Fed;Frontend;IcMiss;MemoryTLB", + "MetricGroup": "BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB", "MetricName": "tma_info_bottleneck_big_code", "MetricThreshold": "tma_info_bottleneck_big_code > 20" }, { - "BriefDescription": "Total pipeline cost of branch related instruc= tions (used for program control-flow including function calls)", - "MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + BR_INST_RETI= RED.NEAR_CALL) / tma_info_thread_slots)", - "MetricGroup": "Ret", + "BriefDescription": "Total pipeline cost of instructions used for = program control-flow - a subset of the Retiring category in TMA", + "MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_= RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots)", + "MetricGroup": "BvBO;Ret", "MetricName": "tma_info_bottleneck_branching_overhead", - "MetricThreshold": "tma_info_bottleneck_branching_overhead > 5" + "MetricThreshold": "tma_info_bottleneck_branching_overhead > 5", + "PublicDescription": "Total pipeline cost of instructions used for= program control-flow - a subset of the Retiring category in TMA. Examples = include function calls; loops and alignments. 
(A lower bound)" }, { "BriefDescription": "Total pipeline cost of external Memory- or Ca= che-Bandwidth related bottlenecks", - "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_b= ound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_= l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma= _data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tm= a_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound += tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_= fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))", - "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW", + "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_b= ound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_= l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma= _data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tm= a_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound += tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_= fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_sto= re_fwd_blk)))", + "MetricGroup": "BvMB;Mem;MemoryBW;Offcore;tma_issueBW", "MetricName": "tma_info_bottleneck_cache_memory_bandwidth", "MetricThreshold": "tma_info_bottleneck_cache_memory_bandwidth > 2= 0", "PublicDescription": "Total pipeline cost of external Memory- or C= ache-Bandwidth related bottlenecks. 
Related metrics: tma_fb_full, tma_info_= system_dram_bw_use, tma_mem_bandwidth, tma_sq_full" }, { "BriefDescription": "Total pipeline cost of external Memory- or Ca= che-Latency related bottlenecks", - "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bou= nd * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3= _bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses = + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound = * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bou= nd + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bou= nd + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_= store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tm= a_store_latency)))", - "MetricGroup": "Mem;MemoryLat;Offcore;tma_issueLat", + "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bou= nd * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3= _bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses = + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound = * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bou= nd + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bou= nd + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_= store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tm= a_store_latency)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tm= a_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_hit_= latency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_laten= cy + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))", + "MetricGroup": "BvML;Mem;MemoryLat;Offcore;tma_issueLat", "MetricName": "tma_info_bottleneck_cache_memory_latency", "MetricThreshold": "tma_info_bottleneck_cache_memory_latency > 20"= , "PublicDescription": "Total pipeline cost of external Memory- or C= ache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_= mem_latency" @@ -733,23 +718,23 @@ { "BriefDescription": "Total pipeline cost when the execution is com= pute-bound - an estimation", "MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_divider = + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tm= a_ports_utilization / (tma_divider + tma_ports_utilization + tma_serializin= g_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_= utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))", - "MetricGroup": "Cor;tma_issueComp", + "MetricGroup": "BvCB;Cor;tma_issueComp", "MetricName": "tma_info_bottleneck_compute_bound_est", "MetricThreshold": "tma_info_bottleneck_compute_bound_est > 20", "PublicDescription": "Total pipeline cost when the execution is co= mpute-bound - an estimation. Covers Core Bound when High ILP as well as whe= n long-latency execution units are busy. 
Related metrics: " }, { - "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks", + "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks (when the front-end could not sustain operations = delivery to the back-end)", "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode= _sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_la= tency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches = + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - tma_mi= crocode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) *= (tma_assists / tma_microcode_sequencer) * tma_fetch_latency * (tma_ms_swit= ches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteer= s * (10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_misp= redicts)) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_b= ranches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + t= ma_itlb_misses + tma_lcp + tma_ms_switches)) - tma_info_bottleneck_big_code= ", - "MetricGroup": "Fed;FetchBW;Frontend", + "MetricGroup": "BvFB;Fed;FetchBW;Frontend", "MetricName": "tma_info_bottleneck_instruction_fetch_bw", "MetricThreshold": "tma_info_bottleneck_instruction_fetch_bw > 20" }, { "BriefDescription": "Total pipeline cost of irregular execution (e= .g", "MetricExpr": "100 * (tma_microcode_sequencer / (tma_few_uops_inst= ructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequence= r) * tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clea= rs_resteers + tma_mispredicts_resteers * (10 * tma_microcode_sequencer * tm= a_other_mispredicts / tma_branch_mispredicts)) / (tma_clears_resteers + tma= _mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma= _dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_swit= ches) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_m= ispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes = / tma_other_nukes + tma_core_bound * (tma_serializing_operation + tma_core_= bound * RS_EVENTS.EMPTY_CYCLES / tma_info_thread_clks * tma_ports_utilized_= 0) / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tm= a_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequence= r) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)", - "MetricGroup": "Bad;Cor;Ret;tma_issueMS", + "MetricGroup": "Bad;BvIO;Cor;Ret;tma_issueMS", "MetricName": "tma_info_bottleneck_irregular_overhead", "MetricThreshold": "tma_info_bottleneck_irregular_overhead > 10", "PublicDescription": "Total pipeline cost of irregular execution (= e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloa= ds, overhead in system services or virtualized environments). 
Related metri= cs: tma_microcode_sequencer, tma_ms_switches" @@ -757,8 +742,8 @@ { "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", "MetricConstraint": "NO_GROUP_EVENTS", - "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_m= emory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tm= a_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_= fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_= bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store /= (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency= )))", - "MetricGroup": "Mem;MemoryTLB;Offcore;tma_issueTLB", + "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_m= emory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tm= a_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_spl= it_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma= _dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)= ) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_store= s + tma_store_latency)))", + "MetricGroup": "BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB", "MetricName": "tma_info_bottleneck_memory_data_tlbs", "MetricThreshold": "tma_info_bottleneck_memory_data_tlbs > 20", "PublicDescription": "Total pipeline cost of Memory Address Transl= ation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load,= tma_dtlb_store, tma_info_bottleneck_memory_synchronization" @@ -766,7 +751,7 @@ { "BriefDescription": "Total pipeline cost of Memory Synchronization= related bottlenecks (data transfers and coherency updates across processor= s)", "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * = (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cach= e / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (t= ma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_boun= d) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses = + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / = (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bo= und) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_= stores + tma_store_latency - tma_store_latency)) + tma_machine_clears * (1 = - tma_other_nukes / tma_other_nukes))", - "MetricGroup": "Mem;Offcore;tma_issueTLB", + "MetricGroup": "BvMS;Mem;Offcore;tma_issueTLB", "MetricName": "tma_info_bottleneck_memory_synchronization", "MetricThreshold": "tma_info_bottleneck_memory_synchronization > 1= 0", "PublicDescription": "Total pipeline cost of Memory Synchronizatio= n related bottlenecks (data transfers and coherency updates across processo= rs). 
Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_me= mory_data_tlbs" @@ -775,18 +760,25 @@ "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other= _mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetc= h_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switc= hes + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", - "MetricGroup": "Bad;BadSpec;BrMispredicts;tma_issueBM", + "MetricGroup": "Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM", "MetricName": "tma_info_bottleneck_mispredictions", "MetricThreshold": "tma_info_bottleneck_mispredictions > 20", "PublicDescription": "Total pipeline cost of Branch Misprediction = related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_= spec_branch_misprediction_cost, tma_mispredicts_resteers" }, { - "BriefDescription": "Total pipeline cost of remaining bottlenecks = (apart from those listed in the Info.Bottlenecks metrics class)", - "MetricExpr": "100 - (tma_info_bottleneck_big_code + tma_info_bott= leneck_instruction_fetch_bw + tma_info_bottleneck_mispredictions + tma_info= _bottleneck_cache_memory_bandwidth + tma_info_bottleneck_cache_memory_laten= cy + tma_info_bottleneck_memory_data_tlbs + tma_info_bottleneck_memory_sync= hronization + tma_info_bottleneck_compute_bound_est + tma_info_bottleneck_i= rregular_overhead + tma_info_bottleneck_branching_overhead + tma_info_bottl= eneck_base_non_br)", - "MetricGroup": "Cor;Offcore", + "BriefDescription": "Total pipeline cost of remaining bottlenecks = in the back-end", + "MetricExpr": "100 - (tma_info_bottleneck_big_code + tma_info_bott= leneck_instruction_fetch_bw + tma_info_bottleneck_mispredictions + tma_info= _bottleneck_cache_memory_bandwidth + tma_info_bottleneck_cache_memory_laten= cy + tma_info_bottleneck_memory_data_tlbs + tma_info_bottleneck_memory_sync= hronization + tma_info_bottleneck_compute_bound_est + tma_info_bottleneck_i= rregular_overhead + tma_info_bottleneck_branching_overhead + tma_info_bottl= eneck_useful_work)", + "MetricGroup": "BvOB;Cor;Offcore", "MetricName": "tma_info_bottleneck_other_bottlenecks", "MetricThreshold": "tma_info_bottleneck_other_bottlenecks > 20", - "PublicDescription": "Total pipeline cost of remaining bottlenecks= (apart from those listed in the Info.Bottlenecks metrics class). Examples = include data-dependencies (Core Bound when Low ILP) and other unlisted memo= ry-related stalls." + "PublicDescription": "Total pipeline cost of remaining bottlenecks= in the back-end. Examples include data-dependencies (Core Bound when Low I= LP) and other unlisted memory-related stalls." 
+ }, + { + "BriefDescription": "Total pipeline cost of \"useful operations\" = - the portion of Retiring category not covered by Branching_Overhead nor Ir= regular_Overhead.", + "MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES= + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slot= s - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_se= quencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)"= , + "MetricGroup": "BvUW;Ret", + "MetricName": "tma_info_bottleneck_useful_work", + "MetricThreshold": "tma_info_bottleneck_useful_work > 20" }, { "BriefDescription": "Fraction of branches that are CALL or RET", @@ -840,7 +832,7 @@ }, { "BriefDescription": "Actual per-core usage of the Floating Point n= on-X87 execution units (regardless of precision or vector-width)", - "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\= =3D0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=3D0xfc@) = / (2 * tma_info_core_core_clks)", + "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + cpu@FP_ARITH_INST_R= ETIRED.128B_PACKED_DOUBLE\\,umask\\=3D0xfc@) / (2 * tma_info_core_core_clks= )", "MetricGroup": "Cor;Flops;HPC", "MetricName": "tma_info_core_fp_arith_utilization", "PublicDescription": "Actual per-core usage of the Floating Point = non-X87 execution units (regardless of precision or vector-width). Values >= 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; = [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less commo= n)." @@ -857,7 +849,7 @@ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB", "MetricName": "tma_info_frontend_dsb_coverage", "MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_inf= o_thread_ipc / 4 > 0.35", - "PublicDescription": "Fraction of Uops delivered by the DSB (aka D= ecoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_= bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp" + "PublicDescription": "Fraction of Uops delivered by the DSB (aka D= ecoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_= bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses,= tma_info_inst_mix_iptb, tma_lcp" }, { "BriefDescription": "Average number of cycles of a switch from the= DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details= .", @@ -918,7 +910,7 @@ { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", "MetricConstraint": "NO_GROUP_EVENTS", - "MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALA= R_SINGLE\\,umask\\=3D0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\= ,umask\\=3D0xfc@)", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + = cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=3D0xfc@)", "MetricGroup": "Flops;InsType", "MetricName": "tma_info_inst_mix_iparith", "MetricThreshold": "tma_info_inst_mix_iparith < 10", @@ -1008,18 +1000,12 @@ "MetricThreshold": "tma_info_inst_mix_ipswpf < 100" }, { - "BriefDescription": "Instruction per taken branch", + "BriefDescription": "Instructions per taken branch", "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN", "MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB", "MetricName": "tma_info_inst_mix_iptb", "MetricThreshold": "tma_info_inst_mix_iptb < 9", - "PublicDescription": "Instruction per taken branch. 
Related metric= s: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tm= a_info_frontend_dsb_coverage, tma_lcp" - }, - { - "BriefDescription": "STLB (2nd level TLB) code speculative misses = per kilo instruction (misses of any page-size that complete the page walk)"= , - "MetricExpr": "tma_info_memory_tlb_code_stlb_mpki", - "MetricGroup": "Fed;MemoryTLB", - "MetricName": "tma_info_memory_code_stlb_mpki" + "PublicDescription": "Instructions per taken branch. Related metri= cs: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth= , tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp" }, { "BriefDescription": "Average per-core data fill bandwidth to the L= 1 data cache [GB / sec]", @@ -1057,12 +1043,6 @@ "MetricGroup": "Mem;MemoryBW", "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t" }, - { - "BriefDescription": "Average Parallel L2 cache miss data reads", - "MetricExpr": "tma_info_memory_latency_data_l2_mlp", - "MetricGroup": "Memory_BW;Offcore", - "MetricName": "tma_info_memory_data_l2_mlp" - }, { "BriefDescription": "Fill Buffer (FB) hits per kilo instructions f= or retired demand loads (L1D misses that merge into ongoing miss-handling e= ntries)", "MetricExpr": "1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY", @@ -1070,17 +1050,11 @@ "MetricName": "tma_info_memory_fb_hpki" }, { - "BriefDescription": "", + "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time", "MetricGroup": "Mem;MemoryBW", "MetricName": "tma_info_memory_l1d_cache_fill_bw" }, - { - "BriefDescription": "Average per-core data fill bandwidth to the L= 1 data cache [GB / sec]", - "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / (duration_time * 1e3 /= 1e3)", - "MetricGroup": "Mem;MemoryBW", - "MetricName": "tma_info_memory_l1d_cache_fill_bw_2t" - }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY", @@ -1094,29 +1068,11 @@ "MetricName": "tma_info_memory_l1mpki_load" }, { - "BriefDescription": "", + "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time", "MetricGroup": "Mem;MemoryBW", "MetricName": "tma_info_memory_l2_cache_fill_bw" }, - { - "BriefDescription": "Average per-core data fill bandwidth to the L= 2 cache [GB / sec]", - "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / (duration_time * 1e3 /= 1e3)", - "MetricGroup": "Mem;MemoryBW", - "MetricName": "tma_info_memory_l2_cache_fill_bw_2t" - }, - { - "BriefDescription": "Rate of non silent evictions from the L2 cach= e per Kilo instruction", - "MetricExpr": "1e3 * L2_LINES_OUT.NON_SILENT / INST_RETIRED.ANY", - "MetricGroup": "L2Evicts;Mem;Server", - "MetricName": "tma_info_memory_l2_evictions_nonsilent_pki" - }, - { - "BriefDescription": "Rate of silent evictions from the L2 cache pe= r Kilo instruction where the evicted lines are dropped (no writeback to L3 = or memory)", - "MetricExpr": "1e3 * L2_LINES_OUT.SILENT / INST_RETIRED.ANY", - "MetricGroup": "L2Evicts;Mem;Server", - "MetricName": "tma_info_memory_l2_evictions_silent_pki" - }, { "BriefDescription": "L2 cache hits per kilo instruction for all re= quest types (including speculative)", "MetricExpr": "1e3 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_= RETIRED.ANY", @@ -1148,29 +1104,23 @@ "MetricName": "tma_info_memory_l2mpki_load" }, { - "BriefDescription": 
"", - "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration= _time", - "MetricGroup": "Mem;MemoryBW;Offcore", - "MetricName": "tma_info_memory_l3_cache_access_bw" + "BriefDescription": "Offcore requests (L2 cache miss) per kilo ins= truction for demand RFOs", + "MetricExpr": "1e3 * OFFCORE_REQUESTS.DEMAND_RFO / INST_RETIRED.AN= Y", + "MetricGroup": "CacheMisses;Offcore", + "MetricName": "tma_info_memory_l2mpki_rfo" }, { - "BriefDescription": "Average per-core data access bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / (duratio= n_time * 1e3 / 1e3)", + "BriefDescription": "Average per-thread data access bandwidth to t= he L3 cache [GB / sec]", + "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration= _time", "MetricGroup": "Mem;MemoryBW;Offcore", - "MetricName": "tma_info_memory_l3_cache_access_bw_2t" + "MetricName": "tma_info_memory_l3_cache_access_bw" }, { - "BriefDescription": "", + "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time", "MetricGroup": "Mem;MemoryBW", "MetricName": "tma_info_memory_l3_cache_fill_bw" }, - { - "BriefDescription": "Average per-core data fill bandwidth to the L= 3 cache [GB / sec]", - "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / (duration_time = * 1e3 / 1e3)", - "MetricGroup": "Mem;MemoryBW", - "MetricName": "tma_info_memory_l3_cache_fill_bw_2t" - }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY", @@ -1183,29 +1133,17 @@ "MetricGroup": "Memory_BW;Offcore", "MetricName": "tma_info_memory_latency_data_l2_mlp" }, - { - "BriefDescription": "Average Latency for L2 cache miss demand Load= s", - "MetricExpr": "tma_info_memory_load_l2_miss_latency", - "MetricGroup": "Memory_Lat;Offcore", - "MetricName": "tma_info_memory_latency_load_l2_miss_latency" - }, - { - "BriefDescription": "Average Parallel L2 cache miss demand Loads", - "MetricExpr": "tma_info_memory_load_l2_mlp", - "MetricGroup": "Memory_BW;Offcore", - "MetricName": "tma_info_memory_latency_load_l2_mlp" - }, { "BriefDescription": "Average Latency for L2 cache miss demand Load= s", "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCO= RE_REQUESTS.DEMAND_DATA_RD", "MetricGroup": "Memory_Lat;Offcore", - "MetricName": "tma_info_memory_load_l2_miss_latency" + "MetricName": "tma_info_memory_latency_load_l2_miss_latency" }, { "BriefDescription": "Average Parallel L2 cache miss demand Loads", "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCO= RE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD", "MetricGroup": "Memory_BW;Offcore", - "MetricName": "tma_info_memory_load_l2_mlp" + "MetricName": "tma_info_memory_latency_load_l2_mlp" }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", @@ -1213,15 +1151,9 @@ "MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricName": "tma_info_memory_load_miss_real_latency" }, - { - "BriefDescription": "STLB (2nd level TLB) data load speculative mi= sses per kilo instruction (misses of any page-size that complete the page w= alk)", - "MetricExpr": "tma_info_memory_tlb_load_stlb_mpki", - "MetricGroup": "Mem;MemoryTLB", - "MetricName": "tma_info_memory_load_stlb_mpki" - }, { "BriefDescription": "Un-cacheable retired load per kilo instructio= n", - "MetricExpr": "tma_info_memory_uc_load_pki", + "MetricExpr": 
"1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY", "MetricGroup": "Mem", "MetricName": "tma_info_memory_mix_uc_load_pki" }, @@ -1232,18 +1164,6 @@ "MetricName": "tma_info_memory_mlp", "PublicDescription": "Memory-Level-Parallelism (average number of = L1 miss demand load when there is at least one such miss. Per-Logical Proce= ssor)" }, - { - "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", - "MetricExpr": "tma_info_memory_tlb_page_walks_utilization", - "MetricGroup": "Mem;MemoryTLB", - "MetricName": "tma_info_memory_page_walks_utilization" - }, - { - "BriefDescription": "STLB (2nd level TLB) data store speculative m= isses per kilo instruction (misses of any page-size that complete the page = walk)", - "MetricExpr": "tma_info_memory_tlb_store_stlb_mpki", - "MetricGroup": "Mem;MemoryTLB", - "MetricName": "tma_info_memory_store_stlb_mpki" - }, { "BriefDescription": "STLB (2nd level TLB) code speculative misses = per kilo instruction (misses of any page-size that complete the page walk)"= , "MetricExpr": "1e3 * ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY= ", @@ -1271,17 +1191,23 @@ "MetricName": "tma_info_memory_tlb_store_stlb_mpki" }, { - "BriefDescription": "Un-cacheable retired load per kilo instructio= n", - "MetricExpr": "1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY", - "MetricGroup": "Mem", - "MetricName": "tma_info_memory_uc_load_pki" - }, - { - "BriefDescription": "", + "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per core", "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_G= E_1 / 2 if #SMT_on else cpu@UOPS_EXECUTED.THREAD\\,cmask\\=3D1@)", "MetricGroup": "Cor;Pipeline;PortsUtil;SMT", "MetricName": "tma_info_pipeline_execute" }, + { + "BriefDescription": "Average number of uops fetched from DSB per c= ycle", + "MetricExpr": "IDQ.DSB_UOPS / IDQ.DSB_CYCLES_ANY", + "MetricGroup": "Fed;FetchBW", + "MetricName": "tma_info_pipeline_fetch_dsb" + }, + { + "BriefDescription": "Average number of uops fetched from MITE per = cycle", + "MetricExpr": "IDQ.MITE_UOPS / IDQ.MITE_CYCLES", + "MetricGroup": "Fed;FetchBW", + "MetricName": "tma_info_pipeline_fetch_mite" + }, { "BriefDescription": "Instructions per a microcode Assist invocatio= n", "MetricExpr": "INST_RETIRED.ANY / (FP_ASSIST.ANY + OTHER_ASSISTS.A= NY)", @@ -1304,13 +1230,13 @@ }, { "BriefDescription": "Average CPU Utilization (percentage)", - "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC", + "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online", "MetricGroup": "HPC;Summary", "MetricName": "tma_info_system_cpu_utilization" }, { "BriefDescription": "Average number of utilized CPUs", - "MetricExpr": "#num_cpus_online * tma_info_system_cpu_utilization"= , + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC", "MetricGroup": "Summary", "MetricName": "tma_info_system_cpus_utilized" }, @@ -1470,7 +1396,7 @@ "MetricThreshold": "tma_info_thread_uoppi > 1.05" }, { - "BriefDescription": "Instruction per taken branch", + "BriefDescription": "Uops per taken branch", "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TA= KEN", "MetricGroup": "Branches;Fed;FetchBW", "MetricName": "tma_info_thread_uptb", @@ -1479,7 +1405,7 @@ { "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", "MetricExpr": "ICACHE_TAG.STALLS / tma_info_thread_clks", - "MetricGroup": 
"BigFootprint;FetchLat;MemoryTLB;TopdownL3;tma_L3_g= roup;tma_fetch_latency_group", + "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma= _L3_group;tma_fetch_latency_group", "MetricName": "tma_itlb_misses", "MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > = 0.1 & tma_frontend_bound > 0.15)", "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTE= ND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS", @@ -1494,11 +1420,20 @@ "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB= _HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_micr= ocode_sequencer, tma_ms_switches, tma_ports_utilized_1", "ScaleUnit": "100%" }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les with demand load accesses that hit the L1 cache", + "MetricExpr": "min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETI= RED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLE= S_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks", + "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound= _group", + "MetricName": "tma_l1_hit_latency", + "MetricThreshold": "tma_l1_hit_latency > 0.1 & (tma_l1_bound > 0.1= & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", + "PublicDescription": "This metric roughly estimates fraction of cy= cles with demand load accesses that hit the L1 cache. The short latency of = the L1 data cache may be exposed in pointer-chasing memory access patterns = as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT", + "ScaleUnit": "100%" + }, { "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_= HIT / MEM_LOAD_RETIRED.L1_MISS) / (MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_= RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + cpu@L1D_PEND_MISS.FB_FULL\\,cm= ask\\=3D1@) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_M= ISS) / tma_info_thread_clks)", - "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_gr= oup;tma_memory_bound_group", + "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_= L3_group;tma_memory_bound_group", "MetricName": "tma_l2_bound", "MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 = & tma_backend_bound > 0.2)", "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. 
Sample wi= th: MEM_LOAD_RETIRED.L2_HIT_PS", @@ -1516,7 +1451,7 @@ { "BriefDescription": "This metric estimates fraction of cycles with= demand load accesses that hit the L3 cache under unloaded scenarios (possi= bly L3 latency limited)", "MetricExpr": "17 * tma_info_system_core_frequency * (MEM_LOAD_RET= IRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2))= / tma_info_thread_clks", - "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_= l3_bound_group", + "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat= ;tma_l3_bound_group", "MetricName": "tma_l3_hit_latency", "MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.0= 5 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3= hits) will improve the latency; reduce contention with sibling physical co= res and increase performance. Note the value of this node may overlap with= its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tm= a_info_bottleneck_cache_memory_latency, tma_mem_latency", @@ -1528,7 +1463,7 @@ "MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_= group;tma_issueFB", "MetricName": "tma_lcp", "MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tm= a_frontend_bound > 0.15)", - "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_= bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, t= ma_info_inst_mix_iptb", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_= bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses,= tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb", "ScaleUnit": "100%" }, { @@ -1573,7 +1508,7 @@ "MetricGroup": "Server;TopdownL5;tma_L5_group;tma_mem_latency_grou= p", "MetricName": "tma_local_mem", "MetricThreshold": "tma_local_mem > 0.1 & (tma_mem_latency > 0.1 &= (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)= ))", - "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS= _RETIRED.LOCAL_DRAM_PS", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS= _RETIRED.LOCAL_DRAM", "ScaleUnit": "100%" }, { @@ -1582,14 +1517,14 @@ "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1= _bound_group", "MetricName": "tma_lock_latency", "MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 &= (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", - "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. 
Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOA= DS_PS. Related metrics: tma_store_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOA= DS. Related metrics: tma_store_latency", "ScaleUnit": "100%" }, { "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", - "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group= ;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn", + "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_= group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn", "MetricName": "tma_machine_clears", "MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation= > 0.15", "MetricgroupNoGroup": "TopdownL2", @@ -1599,7 +1534,7 @@ { "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory - DRAM ([SPR-HBM] and/or HBM)", "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@) / tma_info_thread_clks", - "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_b= ound_group;tma_issueBW", + "MetricGroup": "BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_d= ram_bound_group;tma_issueBW", "MetricName": "tma_mem_bandwidth", "MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.= 1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuris= tic assumes that a similar off-core traffic is generated by all IA cores. T= his metric does not aggregate non-data-read requests by this logical proces= sor; requests from other IA Logical Processors/Physical Cores/sockets; or o= ther non-IA devices like GPU; hence the maximum external memory bandwidth l= imits may or may not be approached when this metric is flagged (see Uncore = counters for that). Related metrics: tma_fb_full, tma_info_bottleneck_cache= _memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full", @@ -1608,7 +1543,7 @@ { "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory - DRA= M ([SPR-HBM] and/or HBM)", "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth", - "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_= bound_group;tma_issueLat", + "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_= dram_bound_group;tma_issueLat", "MetricName": "tma_mem_latency", "MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 = & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory - DR= AM ([SPR-HBM] and/or HBM). 
This metric does not aggregate requests from ot= her Logical Processors/Physical Cores/sockets (see Uncore counters for that= ). Related metrics: tma_info_bottleneck_cache_memory_latency, tma_l3_hit_la= tency", @@ -1635,6 +1570,7 @@ }, { "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * IDQ.M= S_UOPS / tma_info_thread_slots", "MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operatio= ns_group;tma_issueMC;tma_issueMS", "MetricName": "tma_microcode_sequencer", @@ -1645,7 +1581,7 @@ { "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Branch Misprediction= at execution stage", "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL= _BRANCHES + MACHINE_CLEARS.COUNT) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_inf= o_thread_clks", - "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_L4_group;tma_b= ranch_resteers_group;tma_issueBM", + "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;= tma_branch_resteers_group;tma_issueBM", "MetricName": "tma_mispredicts_resteers", "MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_= resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))", "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Branch Mispredictio= n at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related m= etrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost= , tma_info_bottleneck_mispredictions", @@ -1681,7 +1617,7 @@ { "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions that were not fused", "MetricExpr": "tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHE= S - UOPS_RETIRED.MACRO_FUSED) / UOPS_RETIRED.RETIRE_SLOTS", - "MetricGroup": "Branches;Pipeline;TopdownL3;tma_L3_group;tma_light= _operations_group", + "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_= light_operations_group", "MetricName": "tma_non_fused_branches", "MetricThreshold": "tma_non_fused_branches > 0.1 & tma_light_opera= tions > 0.6", "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring branch instructions that were not fused. Non-condit= ional branches like direct JMP or CALL would count here. Can be used to exa= mine fusible conditional jumps that were not fused.", @@ -1690,7 +1626,7 @@ { "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring NOP (no op) instructions", "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / UOPS_RETI= RED.RETIRE_SLOTS", - "MetricGroup": "Pipeline;TopdownL4;tma_L4_group;tma_other_light_op= s_group", + "MetricGroup": "BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_lig= ht_ops_group", "MetricName": "tma_nop_instructions", "MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_= ops > 0.3 & tma_light_operations > 0.6)", "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring NOP (no op) instructions. Compilers often use NOPs = for certain address alignments - e.g. start address of a function or loop b= ody. 
Sample with: INST_RETIRED.NOP", @@ -1708,7 +1644,7 @@ { "BriefDescription": "This metric estimates fraction of slots the C= PU was stalled due to other cases of misprediction (non-retired x86 branche= s or other types).", "MetricExpr": "max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.A= LL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001)", - "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mi= spredicts_group", + "MetricGroup": "BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_bran= ch_mispredicts_group", "MetricName": "tma_other_mispredicts", "MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mis= predicts > 0.1 & tma_bad_speculation > 0.15)", "ScaleUnit": "100%" @@ -1716,7 +1652,7 @@ { "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Nukes (Machine Clears) not related to memory ordering= .", "MetricExpr": "max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY= _ORDERING / MACHINE_CLEARS.COUNT), 0.0001)", - "MetricGroup": "Machine_Clears;TopdownL3;tma_L3_group;tma_machine_= clears_group", + "MetricGroup": "BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_mac= hine_clears_group", "MetricName": "tma_other_nukes", "MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears >= 0.1 & tma_bad_speculation > 0.15)", "ScaleUnit": "100%" @@ -1804,7 +1740,7 @@ }, { "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", - "MetricExpr": "(EXE_ACTIVITY.EXE_BOUND_0_PORTS + tma_core_bound * = RS_EVENTS.EMPTY_CYCLES) / tma_info_thread_clks * (CYCLE_ACTIVITY.STALLS_TOT= AL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / tma_info_thread_clks", + "MetricExpr": "EXE_ACTIVITY.EXE_BOUND_0_PORTS / tma_info_thread_cl= ks", "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utiliza= tion_group", "MetricName": "tma_ports_utilized_0", "MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utiliz= ation > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))", @@ -1832,7 +1768,7 @@ { "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise).", "MetricExpr": "(UOPS_EXECUTED.CORE_CYCLES_GE_3 / 2 if #SMT_on else= UOPS_EXECUTED.CORE_CYCLES_GE_3) / tma_info_core_core_clks", - "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utiliza= tion_group", + "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_ut= ilization_group", "MetricName": "tma_ports_utilized_3m", "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utili= zation > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))", "ScaleUnit": "100%" @@ -1859,7 +1795,7 @@ { "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. 
issued uops that eventually get retired", "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / tma_info_thread_slots", - "MetricGroup": "TmaL1;TopdownL1;tma_L1_group", + "MetricGroup": "BvUW;TmaL1;TopdownL1;tma_L1_group", "MetricName": "tma_retiring", "MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.= 1", "MetricgroupNoGroup": "TopdownL1", @@ -1869,7 +1805,7 @@ { "BriefDescription": "This metric represents fraction of cycles the= CPU issue-pipeline was stalled due to serializing operations", "MetricExpr": "PARTIAL_RAT_STALLS.SCOREBOARD / tma_info_thread_clk= s", - "MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_gr= oup;tma_issueSO", + "MetricGroup": "BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bou= nd_group;tma_issueSO", "MetricName": "tma_serializing_operation", "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bo= und > 0.1 & tma_backend_bound > 0.2)", "PublicDescription": "This metric represents fraction of cycles th= e CPU issue-pipeline was stalled due to serializing operations. Instruction= s like CPUID; WRMSR or LFENCE serialize the out-of-order execution which ma= y limit performance. Sample with: PARTIAL_RAT_STALLS.SCOREBOARD. Related me= trics: tma_ms_switches", @@ -1897,7 +1833,7 @@ { "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", "MetricExpr": "(OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 if #SMT_on els= e OFFCORE_REQUESTS_BUFFER.SQ_FULL) / tma_info_core_core_clks", - "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueB= W;tma_l3_bound_group", + "MetricGroup": "BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_i= ssueBW;tma_l3_bound_group", "MetricName": "tma_sq_full", "MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tm= a_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). Related metrics: tma_fb_full= , tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, = tma_mem_bandwidth", @@ -1925,7 +1861,7 @@ "BriefDescription": "This metric estimates fraction of cycles the = CPU spent handling L1D store misses", "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "(L2_RQSTS.RFO_HIT * 11 * (1 - MEM_INST_RETIRED.LOCK= _LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / = MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUEST= S_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks", - "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issue= RFO;tma_issueSL;tma_store_bound_group", + "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_= issueRFO;tma_issueSL;tma_store_bound_group", "MetricName": "tma_store_latency", "MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0= .2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "PublicDescription": "This metric estimates fraction of cycles the= CPU spent handling L1D store misses. Store accesses usually less impact ou= t-of-order core performance; however; holding resources for longer time can= lead into undesired implications (e.g. contention on L1D fill-buffer entri= es - see FB_Full). 
Related metrics: tma_fb_full, tma_lock_latency", @@ -1958,7 +1894,7 @@ { "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to new branch address clears", "MetricExpr": "9 * BACLEARS.ANY / tma_info_thread_clks", - "MetricGroup": "BigFootprint;FetchLat;TopdownL4;tma_L4_group;tma_b= ranch_resteers_group", + "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;= tma_branch_resteers_group", "MetricName": "tma_unknown_branches", "MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_rest= eers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))", "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to new branch address clears. These are fetched branc= hes the Branch Prediction Unit was unable to recognize (e.g. first time the= branch is fetched or hitting BTB capacity limit) hence called Unknown Bran= ches. Sample with: BACLEARS.ANY", diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-cache.json b/to= ols/perf/pmu-events/arch/x86/skylakex/uncore-cache.json index 543dfc1e5ad7..da46a3aeb58c 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-cache.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-cache.json @@ -1,8 +1,10 @@ [ { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -10,8 +12,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -19,8 +23,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ -28,8 +34,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -37,8 +45,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -46,8 +56,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 5", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x20", @@ -55,8 +67,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR0", + "Experimental": "1", 
"PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -64,8 +78,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 1", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -73,8 +89,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 2", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -82,8 +100,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 3", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -91,8 +111,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 4", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -100,8 +122,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 5", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -109,8 +133,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -118,8 +144,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -127,8 +155,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ -136,8 +166,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -145,8 +177,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits 
acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -154,8 +188,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 5", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x20", @@ -163,8 +199,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -172,8 +210,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 1", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -181,8 +221,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 2", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -190,8 +232,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 3", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -199,8 +243,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 4", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -208,8 +254,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 5", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -217,8 +265,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -226,8 +276,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -235,8 +287,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ 
-244,8 +298,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -253,8 +309,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -262,8 +320,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 5", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x20", @@ -271,8 +331,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -280,8 +342,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 1", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -289,8 +353,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 2", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -298,8 +364,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 3", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -307,8 +375,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 4", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -316,8 +386,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 5", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -325,8 +397,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -334,8 +408,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits 
Occupancy; For Transgre= ss 1", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -343,8 +419,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 2", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -352,8 +430,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 3", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -361,8 +441,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 4", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -370,8 +452,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 5", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -379,8 +463,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -388,8 +474,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -397,8 +485,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ -406,8 +496,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -415,8 +507,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -424,8 +518,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 5", + "Counter": 
"0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x20", @@ -433,8 +529,10 @@ }, { "BriefDescription": "CHA to iMC Bypass; Intermediate bypass Taken"= , + "Counter": "0,1,2,3", "EventCode": "0x57", "EventName": "UNC_CHA_BYPASS_CHA_IMC.INTERMEDIATE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when the CHA was = able to bypass HA pipe on the way to iMC. This is a latency optimization f= or situations when there is light loadings on the memory subsystem. This c= an be filtered by when the bypass was taken and when it was not.; Filter fo= r transactions that succeeded in taking the intermediate bypass.", "UMask": "0x2", @@ -442,8 +540,10 @@ }, { "BriefDescription": "CHA to iMC Bypass; Not Taken", + "Counter": "0,1,2,3", "EventCode": "0x57", "EventName": "UNC_CHA_BYPASS_CHA_IMC.NOT_TAKEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when the CHA was = able to bypass HA pipe on the way to iMC. This is a latency optimization f= or situations when there is light loadings on the memory subsystem. This c= an be filtered by when the bypass was taken and when it was not.; Filter fo= r transactions that could not take the bypass, and issues a read to memory.= Note that transactions that did not take the bypass but did not issue read= to memory will not be counted.", "UMask": "0x4", @@ -451,8 +551,10 @@ }, { "BriefDescription": "CHA to iMC Bypass; Taken", + "Counter": "0,1,2,3", "EventCode": "0x57", "EventName": "UNC_CHA_BYPASS_CHA_IMC.TAKEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when the CHA was = able to bypass HA pipe on the way to iMC. This is a latency optimization f= or situations when there is light loadings on the memory subsystem. 
This c= an be filtered by when the bypass was taken and when it was not.; Filter fo= r transactions that succeeded in taking the full bypass.", "UMask": "0x1", @@ -460,6 +562,7 @@ }, { "BriefDescription": "Clockticks of the uncore caching & home agent= (CHA)", + "Counter": "0,1,2,3", "EventName": "UNC_CHA_CLOCKTICKS", "PerPkg": "1", "PublicDescription": "Counts clockticks of the clock controlling t= he uncore caching and home agent (CHA).", @@ -467,55 +570,69 @@ }, { "BriefDescription": "CMS Clockticks", + "Counter": "0,1,2,3", "EventCode": "0xC0", "EventName": "UNC_CHA_CMS_CLOCKTICKS", + "Experimental": "1", "PerPkg": "1", "Unit": "CHA" }, { "BriefDescription": "Core PMA Events; C1 State", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_CHA_CORE_PMA.C1_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Core PMA Events; C1 Transition", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_CHA_CORE_PMA.C1_TRANSITION", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Core PMA Events; C6 State", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_CHA_CORE_PMA.C6_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Core PMA Events; C6 Transition", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_CHA_CORE_PMA.C6_TRANSITION", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Core PMA Events; GV", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_CHA_CORE_PMA.GV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Core Cross Snoops Issued; Any Cycle with Mult= iple Snoops", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.ANY_GTONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of transactions that trigg= er a configurable number of cross snoops. Cores are snooped if the transac= tion looks up the cache and determines that it is necessary based on the op= eration type and what CoreValid bits are set. For example, if 2 CV bits ar= e set on a data read, the cores must have the data in S state so it is not = necessary to snoop them. However, if only 1 CV bit is set the core my have= modified the data. If the transaction was an RFO, it would need to invali= date the lines. This event can be filtered based on who triggered the init= ial snoop(s).", "UMask": "0xe2", @@ -523,8 +640,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued; Any Single Snoop", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.ANY_ONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of transactions that trigg= er a configurable number of cross snoops. Cores are snooped if the transac= tion looks up the cache and determines that it is necessary based on the op= eration type and what CoreValid bits are set. For example, if 2 CV bits ar= e set on a data read, the cores must have the data in S state so it is not = necessary to snoop them. However, if only 1 CV bit is set the core my have= modified the data. If the transaction was an RFO, it would need to invali= date the lines. 
This event can be filtered based on who triggered the init= ial snoop(s).", "UMask": "0xe1", @@ -532,8 +651,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued; Any Snoop to Remote= Node", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.ANY_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of transactions that trigg= er a configurable number of cross snoops. Cores are snooped if the transac= tion looks up the cache and determines that it is necessary based on the op= eration type and what CoreValid bits are set. For example, if 2 CV bits ar= e set on a data read, the cores must have the data in S state so it is not = necessary to snoop them. However, if only 1 CV bit is set the core my have= modified the data. If the transaction was an RFO, it would need to invali= date the lines. This event can be filtered based on who triggered the init= ial snoop(s).", "UMask": "0xe4", @@ -541,6 +662,7 @@ }, { "BriefDescription": "Core Cross Snoops Issued; Multiple Core Reque= sts", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.CORE_GTONE", "PerPkg": "1", @@ -550,8 +672,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued; Single Core Request= s", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.CORE_ONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of transactions that trigg= er a configurable number of cross snoops. Cores are snooped if the transac= tion looks up the cache and determines that it is necessary based on the op= eration type and what CoreValid bits are set. For example, if 2 CV bits ar= e set on a data read, the cores must have the data in S state so it is not = necessary to snoop them. However, if only 1 CV bit is set the core my have= modified the data. If the transaction was an RFO, it would need to invali= date the lines. This event can be filtered based on who triggered the init= ial snoop(s).", "UMask": "0x41", @@ -559,8 +683,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued; Core Request to Rem= ote Node", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.CORE_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of transactions that trigg= er a configurable number of cross snoops. Cores are snooped if the transac= tion looks up the cache and determines that it is necessary based on the op= eration type and what CoreValid bits are set. For example, if 2 CV bits ar= e set on a data read, the cores must have the data in S state so it is not = necessary to snoop them. However, if only 1 CV bit is set the core my have= modified the data. If the transaction was an RFO, it would need to invali= date the lines. This event can be filtered based on who triggered the init= ial snoop(s).", "UMask": "0x44", @@ -568,6 +694,7 @@ }, { "BriefDescription": "Core Cross Snoops Issued; Multiple Eviction", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.EVICT_GTONE", "PerPkg": "1", @@ -577,8 +704,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued; Single Eviction", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.EVICT_ONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of transactions that trigg= er a configurable number of cross snoops. 
Cores are snooped if the transac= tion looks up the cache and determines that it is necessary based on the op= eration type and what CoreValid bits are set. For example, if 2 CV bits ar= e set on a data read, the cores must have the data in S state so it is not = necessary to snoop them. However, if only 1 CV bit is set the core my have= modified the data. If the transaction was an RFO, it would need to invali= date the lines. This event can be filtered based on who triggered the init= ial snoop(s).", "UMask": "0x81", @@ -586,8 +715,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued; Eviction to Remote = Node", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.EVICT_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of transactions that trigg= er a configurable number of cross snoops. Cores are snooped if the transac= tion looks up the cache and determines that it is necessary based on the op= eration type and what CoreValid bits are set. For example, if 2 CV bits ar= e set on a data read, the cores must have the data in S state so it is not = necessary to snoop them. However, if only 1 CV bit is set the core my have= modified the data. If the transaction was an RFO, it would need to invali= date the lines. This event can be filtered based on who triggered the init= ial snoop(s).", "UMask": "0x84", @@ -595,8 +726,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued; Multiple External S= noops", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.EXT_GTONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of transactions that trigg= er a configurable number of cross snoops. Cores are snooped if the transac= tion looks up the cache and determines that it is necessary based on the op= eration type and what CoreValid bits are set. For example, if 2 CV bits ar= e set on a data read, the cores must have the data in S state so it is not = necessary to snoop them. However, if only 1 CV bit is set the core my have= modified the data. If the transaction was an RFO, it would need to invali= date the lines. This event can be filtered based on who triggered the init= ial snoop(s).", "UMask": "0x22", @@ -604,8 +737,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued; Single External Sno= ops", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.EXT_ONE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of transactions that trigg= er a configurable number of cross snoops. Cores are snooped if the transac= tion looks up the cache and determines that it is necessary based on the op= eration type and what CoreValid bits are set. For example, if 2 CV bits ar= e set on a data read, the cores must have the data in S state so it is not = necessary to snoop them. However, if only 1 CV bit is set the core my have= modified the data. If the transaction was an RFO, it would need to invali= date the lines. This event can be filtered based on who triggered the init= ial snoop(s).", "UMask": "0x21", @@ -613,8 +748,10 @@ }, { "BriefDescription": "Core Cross Snoops Issued; External Snoop to R= emote Node", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_CHA_CORE_SNP.EXT_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of transactions that trigg= er a configurable number of cross snoops. 
Cores are snooped if the transac= tion looks up the cache and determines that it is necessary based on the op= eration type and what CoreValid bits are set. For example, if 2 CV bits ar= e set on a data read, the cores must have the data in S state so it is not = necessary to snoop them. However, if only 1 CV bit is set the core my have= modified the data. If the transaction was an RFO, it would need to invali= date the lines. This event can be filtered based on who triggered the init= ial snoop(s).", "UMask": "0x24", @@ -622,14 +759,17 @@ }, { "BriefDescription": "Counter 0 Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x1F", "EventName": "UNC_CHA_COUNTER0_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Since occupancy counts can only be captured = in the Cbo's 0 counter, this event allows a user to capture occupancy relat= ed information by filtering the Cb0 occupancy count captured in Counter 0. = The filtering available is found in the control register - threshold, inv= ert and edge detect. E.g. setting threshold to 1 can effectively monitor = how many cycles the monitored queue has an entry.", "Unit": "CHA" }, { "BriefDescription": "Multi-socket cacheline Directory state lookup= s; Snoop Not Needed", + "Counter": "0,1,2,3", "EventCode": "0x53", "EventName": "UNC_CHA_DIR_LOOKUP.NO_SNP", "PerPkg": "1", @@ -639,6 +779,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory state lookup= s; Snoop Needed", + "Counter": "0,1,2,3", "EventCode": "0x53", "EventName": "UNC_CHA_DIR_LOOKUP.SNP", "PerPkg": "1", @@ -648,6 +789,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory state update= s; Directory Updated memory write from the HA pipe", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_CHA_DIR_UPDATE.HA", "PerPkg": "1", @@ -657,6 +799,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory state update= s; Directory Updated memory write from TOR pipe", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_CHA_DIR_UPDATE.TOR", "PerPkg": "1", @@ -666,8 +809,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements;= Down", + "Counter": "0,1,2,3", "EventCode": "0xAE", "EventName": "UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_DN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of cycles IV was blocked in th= e TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x4", @@ -675,8 +820,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements;= Up", + "Counter": "0,1,2,3", "EventCode": "0xAE", "EventName": "UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of cycles IV was blocked in th= e TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x1", @@ -684,6 +831,7 @@ }, { "BriefDescription": "FaST wire asserted; Horizontal", + "Counter": "0,1,2,3", "EventCode": "0xA5", "EventName": "UNC_CHA_FAST_ASSERTED.HORZ", "PerPkg": "1", @@ -693,8 +841,10 @@ }, { "BriefDescription": "FaST wire asserted; Vertical", + "Counter": "0,1,2,3", "EventCode": "0xA5", "EventName": "UNC_CHA_FAST_ASSERTED.VERT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles either the local= or incoming distress signals are asserted. 
Incoming distress includes up,= dn and across.", "UMask": "0x1", @@ -702,6 +852,7 @@ }, { "BriefDescription": "Read request from a remote socket which hit i= n the HitMe Cache to a line In the E state", + "Counter": "0,1,2,3", "EventCode": "0x5F", "EventName": "UNC_CHA_HITME_HIT.EX_RDS", "PerPkg": "1", @@ -711,80 +862,100 @@ }, { "BriefDescription": "Counts Number of Hits in HitMe Cache; Shared = hit and op is RdInvOwn, RdInv, Inv*", + "Counter": "0,1,2,3", "EventCode": "0x5F", "EventName": "UNC_CHA_HITME_HIT.SHARED_OWNREQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Counts Number of Hits in HitMe Cache; op is W= bMtoE", + "Counter": "0,1,2,3", "EventCode": "0x5F", "EventName": "UNC_CHA_HITME_HIT.WBMTOE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Counts Number of Hits in HitMe Cache; op is W= bMtoI, WbPushMtoI, WbFlush, or WbMtoS", + "Counter": "0,1,2,3", "EventCode": "0x5F", "EventName": "UNC_CHA_HITME_HIT.WBMTOI_OR_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Counts Number of times HitMe Cache is accesse= d; op is RdCode, RdData, RdDataMigratory, RdCur, RdInvOwn, RdInv, Inv*", + "Counter": "0,1,2,3", "EventCode": "0x5E", "EventName": "UNC_CHA_HITME_LOOKUP.READ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Counts Number of times HitMe Cache is accesse= d; op is WbMtoE, WbMtoI, WbPushMtoI, WbFlush, or WbMtoS", + "Counter": "0,1,2,3", "EventCode": "0x5E", "EventName": "UNC_CHA_HITME_LOOKUP.WRITE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Counts Number of Misses in HitMe Cache; No SF= /LLC HitS/F and op is RdInvOwn", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_CHA_HITME_MISS.NOTSHARED_RDINVOWN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "Counts Number of Misses in HitMe Cache; op is= RdCode, RdData, RdDataMigratory, RdCur, RdInv, Inv*", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_CHA_HITME_MISS.READ_OR_INV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "Counts Number of Misses in HitMe Cache; SF/LL= C HitS/F and op is RdInvOwn", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_CHA_HITME_MISS.SHARED_RDINVOWN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Counts the number of Allocate/Update to HitMe= Cache; Deallocate HitME$ on Reads without RspFwdI*", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_CHA_HITME_UPDATE.DEALLOCATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Counts the number of Allocate/Update to HitMe= Cache; op is RspIFwd or RspIFwdWb for a local request", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_CHA_HITME_UPDATE.DEALLOCATE_RSPFWDI_LOC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Received RspFwdI* for a local request, but c= onverted HitME$ to SF entry", "UMask": "0x1", @@ -792,16 +963,20 @@ }, { "BriefDescription": "Counts the number of Allocate/Update to HitMe= Cache; Update HitMe Cache on RdInvOwn even if not RspFwdI*", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_CHA_HITME_UPDATE.RDINVOWN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Counts the 
number of Allocate/Update to HitMe= Cache; op is RspIFwd or RspIFwdWb for a remote request", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_CHA_HITME_UPDATE.RSPFWDI_REM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Updated HitME$ on RspFwdI* or local HitM/E r= eceived for a remote request", "UMask": "0x2", @@ -809,16 +984,20 @@ }, { "BriefDescription": "Counts the number of Allocate/Update to HitMe= Cache; Update HitMe Cache to SHARed", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_CHA_HITME_UPDATE.SHARED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Horizontal AD Ring In Use; Left and Even", + "Counter": "0,1,2,3", "EventCode": "0xA7", "EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x1", @@ -826,8 +1005,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Left and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA7", "EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x2", @@ -835,8 +1016,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Right and Even", + "Counter": "0,1,2,3", "EventCode": "0xA7", "EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x4", @@ -844,8 +1027,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Right and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA7", "EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x8", @@ -853,8 +1038,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Left and Even", + "Counter": "0,1,2,3", "EventCode": "0xA9", "EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x1", @@ -862,8 +1049,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Left and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA9", "EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x2", @@ -871,8 +1060,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Right and Even", + "Counter": "0,1,2,3", "EventCode": "0xA9", "EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. 
This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x4", @@ -880,8 +1071,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Right and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA9", "EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x8", @@ -889,8 +1082,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Left and Even", + "Counter": "0,1,2,3", "EventCode": "0xAB", "EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x1", @@ -898,8 +1093,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Left and Odd", + "Counter": "0,1,2,3", "EventCode": "0xAB", "EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x2", @@ -907,8 +1104,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Right and Even", + "Counter": "0,1,2,3", "EventCode": "0xAB", "EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x4", @@ -916,8 +1115,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Right and Odd", + "Counter": "0,1,2,3", "EventCode": "0xAB", "EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x8", @@ -925,8 +1126,10 @@ }, { "BriefDescription": "Horizontal IV Ring in Use; Left", + "Counter": "0,1,2,3", "EventCode": "0xAD", "EventName": "UNC_CHA_HORZ_RING_IV_IN_USE.LEFT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal IV ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. There is only 1 IV ring. Therefor= e, if one wants to monitor the Even ring, they should select both UP_EVEN a= nd DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN= _ODD.", "UMask": "0x1", @@ -934,8 +1137,10 @@ }, { "BriefDescription": "Horizontal IV Ring in Use; Right", + "Counter": "0,1,2,3", "EventCode": "0xAD", "EventName": "UNC_CHA_HORZ_RING_IV_IN_USE.RIGHT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal IV ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. There is only 1 IV ring. Therefor= e, if one wants to monitor the Even ring, they should select both UP_EVEN a= nd DN_EVEN. 
To monitor the Odd ring, they should select both UP_ODD and DN= _ODD.", "UMask": "0x4", @@ -943,6 +1148,7 @@ }, { "BriefDescription": "Normal priority reads issued to the memory co= ntroller from the CHA", + "Counter": "0,1,2,3", "EventCode": "0x59", "EventName": "UNC_CHA_IMC_READS_COUNT.NORMAL", "PerPkg": "1", @@ -952,8 +1158,10 @@ }, { "BriefDescription": "HA to iMC Reads Issued; ISOCH", + "Counter": "0,1,2,3", "EventCode": "0x59", "EventName": "UNC_CHA_IMC_READS_COUNT.PRIORITY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count of the number of reads issued to any o= f the memory controller channels. This can be filtered by the priority of = the reads.", "UMask": "0x2", @@ -961,6 +1169,7 @@ }, { "BriefDescription": "CHA to iMC Full Line Writes Issued; Full Line= Non-ISOCH", + "Counter": "0,1,2,3", "EventCode": "0x5B", "EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL", "PerPkg": "1", @@ -970,8 +1179,10 @@ }, { "BriefDescription": "Writes Issued to the iMC by the HA; Full Line= MIG", + "Counter": "0,1,2,3", "EventCode": "0x5B", "EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL_MIG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of writes issued fro= m the HA into the memory controller. This counts for all four channels. I= t can be filtered by full/partial and ISOCH/non-ISOCH.", "UMask": "0x10", @@ -979,8 +1190,10 @@ }, { "BriefDescription": "Writes Issued to the iMC by the HA; ISOCH Ful= l Line", + "Counter": "0,1,2,3", "EventCode": "0x5B", "EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL_PRIORITY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of writes issued fro= m the HA into the memory controller. This counts for all four channels. I= t can be filtered by full/partial and ISOCH/non-ISOCH.", "UMask": "0x4", @@ -988,8 +1201,10 @@ }, { "BriefDescription": "Writes Issued to the iMC by the HA; Partial N= on-ISOCH", + "Counter": "0,1,2,3", "EventCode": "0x5B", "EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of writes issued fro= m the HA into the memory controller. This counts for all four channels. I= t can be filtered by full/partial and ISOCH/non-ISOCH.", "UMask": "0x2", @@ -997,8 +1212,10 @@ }, { "BriefDescription": "Writes Issued to the iMC by the HA; Partial M= IG", + "Counter": "0,1,2,3", "EventCode": "0x5B", "EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL_MIG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of writes issued fro= m the HA into the memory controller. This counts for all four channels. I= t can be filtered by full/partial and ISOCH/non-ISOCH.; Filter for memory c= ontroller 5 only.", "UMask": "0x20", @@ -1006,8 +1223,10 @@ }, { "BriefDescription": "Writes Issued to the iMC by the HA; ISOCH Par= tial", + "Counter": "0,1,2,3", "EventCode": "0x5B", "EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL_PRIORITY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of writes issued fro= m the HA into the memory controller. This counts for all four channels. 
I= t can be filtered by full/partial and ISOCH/non-ISOCH.", "UMask": "0x8", @@ -1015,64 +1234,80 @@ }, { "BriefDescription": "Counts Number of times IODC entry allocation = is attempted; Number of IODC allocations", + "Counter": "0,1,2,3", "EventCode": "0x62", "EventName": "UNC_CHA_IODC_ALLOC.INVITOM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Counts Number of times IODC entry allocation = is attempted; Number of IODC allocations dropped due to IODC Full", + "Counter": "0,1,2,3", "EventCode": "0x62", "EventName": "UNC_CHA_IODC_ALLOC.IODCFULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Counts Number of times IODC entry allocation = is attempted; Number of IDOC allocation dropped due to OSB gate", + "Counter": "0,1,2,3", "EventCode": "0x62", "EventName": "UNC_CHA_IODC_ALLOC.OSBGATED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Counts number of IODC deallocations; IODC dea= llocated due to any reason", + "Counter": "0,1,2,3", "EventCode": "0x63", "EventName": "UNC_CHA_IODC_DEALLOC.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Counts number of IODC deallocations; IODC dea= llocated due to conflicting transaction", + "Counter": "0,1,2,3", "EventCode": "0x63", "EventName": "UNC_CHA_IODC_DEALLOC.SNPOUT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Counts number of IODC deallocations; IODC dea= llocated due to WbMtoE", + "Counter": "0,1,2,3", "EventCode": "0x63", "EventName": "UNC_CHA_IODC_DEALLOC.WBMTOE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Counts number of IODC deallocations; IODC dea= llocated due to WbMtoI", + "Counter": "0,1,2,3", "EventCode": "0x63", "EventName": "UNC_CHA_IODC_DEALLOC.WBMTOI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Counts number of IODC deallocations; IODC dea= llocated due to WbPushMtoI", + "Counter": "0,1,2,3", "EventCode": "0x63", "EventName": "UNC_CHA_IODC_DEALLOC.WBPUSHMTOI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Moved to Cbo section", "UMask": "0x4", @@ -1080,8 +1315,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Any Request", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.ANY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS set umask bit 0 and select a state or = states to match. Otherwise, the event will count nothing. CHAFilter0[24:= 21,17] bits correspond to [FMESI] state.; Filters for any transaction origi= nating from the IPQ or IRQ. This does not include lookups originating from= the ISMQ.", "UMask": "0x11", @@ -1089,8 +1326,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Data Read Req= uest", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. 
Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Read transactions", "UMask": "0x3", @@ -1098,8 +1337,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Local", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.", "UMask": "0x31", @@ -1107,8 +1348,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Remote", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.", "UMask": "0x91", @@ -1116,8 +1359,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; External Snoop Request", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_SNOOP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Filters for only snoop requests coming from the remote socket(s) through the IPQ.", "UMask": "0x9", @@ -1125,8 +1370,10 @@ }, { "BriefDescription": "Cache and Snoop Filter Lookups; Write Requests", + "Counter": "0,1,2,3", "EventCode": "0x34", "EventName": "UNC_CHA_LLC_LOOKUP.WRITE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Writeback transactions from L2 to the LLC This includes all write transactions -- both Cacheable and UC.", "UMask": "0x5", @@ -1134,35 +1381,43 @@ }, { "BriefDescription": "This event is deprecated.
Refer to new event = UNC_CHA_LLC_VICTIMS.TOTAL_E", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.E_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_VICTIMS.TOTAL_F", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.F_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Lines Victimized; Local - All Lines", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x2f", @@ -1170,8 +1425,10 @@ }, { "BriefDescription": "Lines Victimized; Local - Lines in E State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_E", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x22", @@ -1179,8 +1436,10 @@ }, { "BriefDescription": "Lines Victimized; Local - Lines in F State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x28", @@ -1188,8 +1447,10 @@ }, { "BriefDescription": "Lines Victimized; Local - Lines in M State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_M", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x21", @@ -1197,8 +1458,10 @@ }, { "BriefDescription": "Lines Victimized; Local - Lines in S State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_S", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x24", @@ -1206,26 +1469,32 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_VICTIMS.TOTAL_M", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.M_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_VICTIMS.REMOTE_ALL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "Lines Victimized; Remote - All Lines", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. 
This can be filtered by the state that the line was in.", "UMask": "0x8f", @@ -1233,8 +1502,10 @@ }, { "BriefDescription": "Lines Victimized; Remote - Lines in E State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_E", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x82", @@ -1242,8 +1513,10 @@ }, { "BriefDescription": "Lines Victimized; Remote - Lines in F State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_F", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x88", @@ -1251,8 +1524,10 @@ }, { "BriefDescription": "Lines Victimized; Remote - Lines in M State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_M", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x81", @@ -1260,8 +1535,10 @@ }, { "BriefDescription": "Lines Victimized; Remote - Lines in S State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_S", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of lines that were victimi= zed on a fill. This can be filtered by the state that the line was in.", "UMask": "0x84", @@ -1269,15 +1546,18 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_VICTIMS.TOTAL_S", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.S_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Lines Victimized; Lines in E state", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_E", "PerPkg": "1", @@ -1287,6 +1567,7 @@ }, { "BriefDescription": "Lines Victimized; Lines in F State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_F", "PerPkg": "1", @@ -1296,6 +1577,7 @@ }, { "BriefDescription": "Lines Victimized; Lines in M state", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_M", "PerPkg": "1", @@ -1305,6 +1587,7 @@ }, { "BriefDescription": "Lines Victimized; Lines in S State", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_S", "PerPkg": "1", @@ -1314,8 +1597,10 @@ }, { "BriefDescription": "Cbo Misc; CV0 Prefetch Miss", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_CHA_MISC.CV0_PREF_MISS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Miscellaneous events in the Cbo.", "UMask": "0x20", @@ -1323,8 +1608,10 @@ }, { "BriefDescription": "Cbo Misc; CV0 Prefetch Victim", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_CHA_MISC.CV0_PREF_VIC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Miscellaneous events in the Cbo.", "UMask": "0x10", @@ -1332,6 +1619,7 @@ }, { "BriefDescription": "Number of times that an RFO hit in S state.", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_CHA_MISC.RFO_HIT_S", "PerPkg": "1", @@ -1341,8 +1629,10 @@ }, { "BriefDescription": "Cbo Misc; Silent Snoop Eviction", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": 
"UNC_CHA_MISC.RSPI_WAS_FSE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Miscellaneous events in the Cbo.; Counts the= number of times when a Snoop hit in FSE states and triggered a silent evic= tion. This is useful because this information is lost in the PRE encodings= .", "UMask": "0x1", @@ -1350,8 +1640,10 @@ }, { "BriefDescription": "Cbo Misc; Write Combining Aliasing", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_CHA_MISC.WC_ALIASING", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Miscellaneous events in the Cbo.; Counts the= number of times that a USWC write (WCIL(F)) transaction hit in the LLC in = M state, triggering a WBMtoI followed by the USWC write. This occurs when = there is WC aliasing.", "UMask": "0x2", @@ -1359,16 +1651,20 @@ }, { "BriefDescription": "OSB Snoop Broadcast", + "Counter": "0,1,2,3", "EventCode": "0x55", "EventName": "UNC_CHA_OSB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count of OSB snoop broadcasts. Counts by 1 p= er request causing OSB snoops to be broadcast. Does not count all the snoop= s generated by OSB.", "Unit": "CHA" }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty; EDC0_SMI2", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.EDC0_SMI2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending reads from the CHA into the iMC. In order t= o send reads into the memory controller, the HA must first acquire a credit= for the iMC's AD Ingress queue.; Filter for memory controller 2 only.", "UMask": "0x4", @@ -1376,8 +1672,10 @@ }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty; EDC1_SMI3", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.EDC1_SMI3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending reads from the CHA into the iMC. In order t= o send reads into the memory controller, the HA must first acquire a credit= for the iMC's AD Ingress queue.; Filter for memory controller 3 only.", "UMask": "0x8", @@ -1385,8 +1683,10 @@ }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty; EDC2_SMI4", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.EDC2_SMI4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending reads from the CHA into the iMC. In order t= o send reads into the memory controller, the HA must first acquire a credit= for the iMC's AD Ingress queue.; Filter for memory controller 4 only.", "UMask": "0x10", @@ -1394,8 +1694,10 @@ }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty; EDC3_SMI5", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.EDC3_SMI5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending reads from the CHA into the iMC. 
In order t= o send reads into the memory controller, the HA must first acquire a credit= for the iMC's AD Ingress queue.; Filter for memory controller 5 only.", "UMask": "0x20", @@ -1403,8 +1705,10 @@ }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty; MC0_SMI0", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.MC0_SMI0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending reads from the CHA into the iMC. In order t= o send reads into the memory controller, the HA must first acquire a credit= for the iMC's AD Ingress queue.; Filter for memory controller 0 only.", "UMask": "0x1", @@ -1412,8 +1716,10 @@ }, { "BriefDescription": "CHA iMC CHNx READ Credits Empty; MC1_SMI1", + "Counter": "0,1,2,3", "EventCode": "0x58", "EventName": "UNC_CHA_READ_NO_CREDITS.MC1_SMI1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending reads from the CHA into the iMC. In order t= o send reads into the memory controller, the HA must first acquire a credit= for the iMC's AD Ingress queue.; Filter for memory controller 1 only.", "UMask": "0x2", @@ -1421,6 +1727,7 @@ }, { "BriefDescription": "Local requests for exclusive ownership of a c= ache line without receiving data", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.INVITOE_LOCAL", "PerPkg": "1", @@ -1430,6 +1737,7 @@ }, { "BriefDescription": "Local requests for exclusive ownership of a c= ache line without receiving data", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.INVITOE_REMOTE", "PerPkg": "1", @@ -1439,6 +1747,7 @@ }, { "BriefDescription": "Read requests", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.READS", "PerPkg": "1", @@ -1448,6 +1757,7 @@ }, { "BriefDescription": "Read requests from a unit on this socket", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.READS_LOCAL", "PerPkg": "1", @@ -1457,6 +1767,7 @@ }, { "BriefDescription": "Read requests from a remote socket", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.READS_REMOTE", "PerPkg": "1", @@ -1466,6 +1777,7 @@ }, { "BriefDescription": "Write requests", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.WRITES", "PerPkg": "1", @@ -1475,6 +1787,7 @@ }, { "BriefDescription": "Write Requests from a unit on this socket", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.WRITES_LOCAL", "PerPkg": "1", @@ -1484,6 +1797,7 @@ }, { "BriefDescription": "Read and Write Requests; Writes Remote", + "Counter": "0,1,2,3", "EventCode": "0x50", "EventName": "UNC_CHA_REQUESTS.WRITES_REMOTE", "PerPkg": "1", @@ -1493,8 +1807,10 @@ }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; AD", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_CHA_RING_BOUNCES_HORZ.AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x1", @@ -1502,8 +1818,10 @@ }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; AK", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_CHA_RING_BOUNCES_HORZ.AK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": 
"0x2", @@ -1511,8 +1829,10 @@ }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; BL", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_CHA_RING_BOUNCES_HORZ.BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x4", @@ -1520,8 +1840,10 @@ }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; IV", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_CHA_RING_BOUNCES_HORZ.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x8", @@ -1529,8 +1851,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = AD", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_CHA_RING_BOUNCES_VERT.AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x1", @@ -1538,8 +1862,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = Acknowledgements to core", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_CHA_RING_BOUNCES_VERT.AK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x2", @@ -1547,8 +1873,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = Data Responses to core", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_CHA_RING_BOUNCES_VERT.BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x4", @@ -1556,8 +1884,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = Snoops of processor's cache.", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_CHA_RING_BOUNCES_VERT.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x8", @@ -1565,87 +1895,109 @@ }, { "BriefDescription": "Sink Starvation on Horizontal Ring; AD", + "Counter": "0,1,2,3", "EventCode": "0xA3", "EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.AD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; AK", + "Counter": "0,1,2,3", "EventCode": "0xA3", "EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; Acknowled= gements to Agent 1", + "Counter": "0,1,2,3", "EventCode": "0xA3", "EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; BL", + "Counter": "0,1,2,3", "EventCode": "0xA3", "EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; IV", + "Counter": "0,1,2,3", "EventCode": "0xA3", "EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Sink Starvation on Vertical Ring; AD", + "Counter": "0,1,2,3", 
"EventCode": "0xA2", "EventName": "UNC_CHA_RING_SINK_STARVED_VERT.AD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Sink Starvation on Vertical Ring; Acknowledge= ments to core", + "Counter": "0,1,2,3", "EventCode": "0xA2", "EventName": "UNC_CHA_RING_SINK_STARVED_VERT.AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Sink Starvation on Vertical Ring; Data Respon= ses to core", + "Counter": "0,1,2,3", "EventCode": "0xA2", "EventName": "UNC_CHA_RING_SINK_STARVED_VERT.BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Sink Starvation on Vertical Ring; Snoops of p= rocessor's cache.", + "Counter": "0,1,2,3", "EventCode": "0xA2", "EventName": "UNC_CHA_RING_SINK_STARVED_VERT.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Source Throttle", + "Counter": "0,1,2,3", "EventCode": "0xA4", "EventName": "UNC_CHA_RING_SRC_THRTL", + "Experimental": "1", "PerPkg": "1", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Allocations; IPQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.IPQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of allocations per cycle into = the specified Ingress queue.", "UMask": "0x4", @@ -1653,6 +2005,7 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations; IRQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.IRQ", "PerPkg": "1", @@ -1662,8 +2015,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations; IRQ Rejected"= , + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.IRQ_REJ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of allocations per cycle into = the specified Ingress queue.", "UMask": "0x2", @@ -1671,8 +2026,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations; PRQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.PRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of allocations per cycle into = the specified Ingress queue.", "UMask": "0x10", @@ -1680,8 +2037,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations; PRQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.PRQ_REJ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of allocations per cycle into = the specified Ingress queue.", "UMask": "0x20", @@ -1689,8 +2048,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations; RRQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.RRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of allocations per cycle into = the specified Ingress queue.", "UMask": "0x40", @@ -1698,8 +2059,10 @@ }, { "BriefDescription": "Ingress (from CMS) Allocations; WBQ", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_CHA_RxC_INSERTS.WBQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of allocations per cycle into = the specified Ingress queue.", "UMask": "0x80", @@ -1707,238 +2070,297 @@ }, { "BriefDescription": "Ingress Probe Queue Rejects; AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; AD RSP on VN0", + 
"Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; Non UPI AK Reque= st", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; Non UPI IV Reque= st", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_CHA_RxC_IPQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; Allow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; ANY0", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; HA", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; Merging these tw= o together to make room for ANY_REJECT_*0", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; LLC Victim", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; PhyAddr Match", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; SF Victim", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Ingress Probe Queue Rejects; Victim", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_CHA_RxC_IPQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; AD = REQ on VN0", + 
"Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; AD = RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; Non= UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL = NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL = NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL = RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL = WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; Non= UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_CHA_RxC_IRQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; All= ow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; ANY= 0", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; HA"= , + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; Mer= ging these two together to make room for ANY_REJECT_*0", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; LLC= Victim", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; Phy= Addr Match", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH", "PerPkg": "1", @@ -1947,24 +2369,30 @@ }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; SF = Victim", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": 
"UNC_CHA_RxC_IRQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; Vic= tim", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_CHA_RxC_IRQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "ISMQ Rejects; AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x1", @@ -1972,8 +2400,10 @@ }, { "BriefDescription": "ISMQ Rejects; AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x2", @@ -1981,8 +2411,10 @@ }, { "BriefDescription": "ISMQ Rejects; Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x40", @@ -1990,8 +2422,10 @@ }, { "BriefDescription": "ISMQ Rejects; BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x10", @@ -1999,8 +2433,10 @@ }, { "BriefDescription": "ISMQ Rejects; BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x20", @@ -2008,8 +2444,10 @@ }, { "BriefDescription": "ISMQ Rejects; BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. 
Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x4", @@ -2017,8 +2455,10 @@ }, { "BriefDescription": "ISMQ Rejects; BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x8", @@ -2026,8 +2466,10 @@ }, { "BriefDescription": "ISMQ Rejects; Non UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_CHA_RxC_ISMQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x80", @@ -2035,8 +2477,10 @@ }, { "BriefDescription": "ISMQ Retries; AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2C", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x1", @@ -2044,8 +2488,10 @@ }, { "BriefDescription": "ISMQ Retries; AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2C", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x2", @@ -2053,8 +2499,10 @@ }, { "BriefDescription": "ISMQ Retries; Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x2C", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x40", @@ -2062,8 +2510,10 @@ }, { "BriefDescription": "ISMQ Retries; BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2C", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x10", @@ -2071,8 +2521,10 @@ }, { "BriefDescription": "ISMQ Retries; BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2C", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. 
Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x20", @@ -2080,8 +2532,10 @@ }, { "BriefDescription": "ISMQ Retries; BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2C", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x4", @@ -2089,8 +2543,10 @@ }, { "BriefDescription": "ISMQ Retries; BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2C", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x8", @@ -2098,8 +2554,10 @@ }, { "BriefDescription": "ISMQ Retries; Non UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x2C", "EventName": "UNC_CHA_RxC_ISMQ0_RETRY.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x80", @@ -2107,8 +2565,10 @@ }, { "BriefDescription": "ISMQ Rejects; ANY0", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_CHA_RxC_ISMQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x1", @@ -2116,8 +2576,10 @@ }, { "BriefDescription": "ISMQ Rejects; HA", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_CHA_RxC_ISMQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x2", @@ -2125,8 +2587,10 @@ }, { "BriefDescription": "ISMQ Retries; ANY0", + "Counter": "0,1,2,3", "EventCode": "0x2D", "EventName": "UNC_CHA_RxC_ISMQ1_RETRY.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x1", @@ -2134,8 +2598,10 @@ }, { "BriefDescription": "ISMQ Retries; HA", + "Counter": "0,1,2,3", "EventCode": "0x2D", "EventName": "UNC_CHA_RxC_ISMQ1_RETRY.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the ISMQ had to retry. 
Transaction pass through the ISMQ as responses fo= r requests that already exist in the Cbo. Some examples include: when data= is returned or when snoop responses come back from the cores.", "UMask": "0x2", @@ -2143,8 +2609,10 @@ }, { "BriefDescription": "Ingress (from CMS) Occupancy; IPQ", + "Counter": "0", "EventCode": "0x11", "EventName": "UNC_CHA_RxC_OCCUPANCY.IPQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of entries in the specified In= gress queue in each cycle.", "UMask": "0x4", @@ -2152,6 +2620,7 @@ }, { "BriefDescription": "Ingress (from CMS) Occupancy; IRQ", + "Counter": "0", "EventCode": "0x11", "EventName": "UNC_CHA_RxC_OCCUPANCY.IRQ", "PerPkg": "1", @@ -2161,8 +2630,10 @@ }, { "BriefDescription": "Ingress (from CMS) Occupancy; RRQ", + "Counter": "0", "EventCode": "0x11", "EventName": "UNC_CHA_RxC_OCCUPANCY.RRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of entries in the specified In= gress queue in each cycle.", "UMask": "0x40", @@ -2170,8 +2641,10 @@ }, { "BriefDescription": "Ingress (from CMS) Occupancy; WBQ", + "Counter": "0", "EventCode": "0x11", "EventName": "UNC_CHA_RxC_OCCUPANCY.WBQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of entries in the specified In= gress queue in each cycle.", "UMask": "0x80", @@ -2179,8 +2652,10 @@ }, { "BriefDescription": "Other Retries; AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x1", @@ -2188,8 +2663,10 @@ }, { "BriefDescription": "Other Retries; AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x2", @@ -2197,8 +2674,10 @@ }, { "BriefDescription": "Other Retries; Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x40", @@ -2206,8 +2685,10 @@ }, { "BriefDescription": "Other Retries; BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x10", @@ -2215,8 +2696,10 @@ }, { "BriefDescription": "Other Retries; BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x20", @@ -2224,8 +2707,10 @@ }, { "BriefDescription": "Other Retries; BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e 
already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x4", @@ -2233,8 +2718,10 @@ }, { "BriefDescription": "Other Retries; BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x8", @@ -2242,8 +2729,10 @@ }, { "BriefDescription": "Other Retries; Non UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_CHA_RxC_OTHER0_RETRY.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x80", @@ -2251,8 +2740,10 @@ }, { "BriefDescription": "Other Retries; Allow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x2F", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x40", @@ -2260,8 +2751,10 @@ }, { "BriefDescription": "Other Retries; ANY0", + "Counter": "0,1,2,3", "EventCode": "0x2F", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x1", @@ -2269,8 +2762,10 @@ }, { "BriefDescription": "Other Retries; HA", + "Counter": "0,1,2,3", "EventCode": "0x2F", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x2", @@ -2278,8 +2773,10 @@ }, { "BriefDescription": "Other Retries; Merging these two together to = make room for ANY_REJECT_*0", + "Counter": "0,1,2,3", "EventCode": "0x2F", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x20", @@ -2287,8 +2784,10 @@ }, { "BriefDescription": "Other Retries; LLC Victim", + "Counter": "0,1,2,3", "EventCode": "0x2F", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x4", @@ -2296,8 +2795,10 @@ }, { "BriefDescription": "Other Retries; PhyAddr Match", + "Counter": "0,1,2,3", "EventCode": "0x2F", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x80", @@ -2305,8 +2806,10 @@ }, { "BriefDescription": "Other Retries; SF Victim", + "Counter": "0,1,2,3", "EventCode": "0x2F", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x8", 
@@ -2314,8 +2817,10 @@ }, { "BriefDescription": "Other Retries; Victim", + "Counter": "0,1,2,3", "EventCode": "0x2F", "EventName": "UNC_CHA_RxC_OTHER1_RETRY.VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Retry Queue Inserts of Transactions that wer= e already in another Retry Q (sub-events encode the reason for the next rej= ect)", "UMask": "0x10", @@ -2323,136 +2828,170 @@ }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; AD = REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; AD = RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; Non= UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL = NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL = NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL = RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL = WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; Non= UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_CHA_RxC_PRQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; All= ow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; ANY= 0", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; HA"= , + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; LLC= OR SF Way", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; LLC= Victim", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.LLC_VICTIM", 
+ "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; Phy= Addr Match", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; SF = Victim", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "Ingress (from CMS) Request Queue Rejects; Vic= tim", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_CHA_RxC_PRQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "Request Queue Retries; AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x1", @@ -2460,8 +2999,10 @@ }, { "BriefDescription": "Request Queue Retries; AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x2", @@ -2469,8 +3010,10 @@ }, { "BriefDescription": "Request Queue Retries; Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x40", @@ -2478,8 +3021,10 @@ }, { "BriefDescription": "Request Queue Retries; BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x10", @@ -2487,8 +3032,10 @@ }, { "BriefDescription": "Request Queue Retries; BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x20", @@ -2496,8 +3043,10 @@ }, { "BriefDescription": "Request Queue Retries; BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x4", @@ -2505,8 +3054,10 @@ }, { "BriefDescription": "Request Queue Retries; BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x8", @@ -2514,8 +3065,10 @@ }, { "BriefDescription": "Request Queue Retries; Non UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x80", @@ -2523,8 +3076,10 @@ }, { "BriefDescription": "Request Queue 
Retries; Allow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x40", @@ -2532,8 +3087,10 @@ }, { "BriefDescription": "Request Queue Retries; ANY0", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x1", @@ -2541,8 +3098,10 @@ }, { "BriefDescription": "Request Queue Retries; HA", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x2", @@ -2550,8 +3109,10 @@ }, { "BriefDescription": "Request Queue Retries; Merging these two toge= ther to make room for ANY_REJECT_*0", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x20", @@ -2559,8 +3120,10 @@ }, { "BriefDescription": "Request Queue Retries; LLC Victim", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x4", @@ -2568,8 +3131,10 @@ }, { "BriefDescription": "Request Queue Retries; PhyAddr Match", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x80", @@ -2577,8 +3142,10 @@ }, { "BriefDescription": "Request Queue Retries; SF Victim", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x8", @@ -2586,8 +3153,10 @@ }, { "BriefDescription": "Request Queue Retries; Victim", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ = (everything except for ISMQ)", "UMask": "0x10", @@ -2595,8 +3164,10 @@ }, { "BriefDescription": "RRQ Rejects; AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x1", @@ -2604,8 +3175,10 @@ }, { "BriefDescription": "RRQ Rejects; AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x2", @@ -2613,8 +3186,10 @@ }, { "BriefDescription": "RRQ Rejects; Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", 
"PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x40", @@ -2622,8 +3197,10 @@ }, { "BriefDescription": "RRQ Rejects; BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x10", @@ -2631,8 +3208,10 @@ }, { "BriefDescription": "RRQ Rejects; BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x20", @@ -2640,8 +3219,10 @@ }, { "BriefDescription": "RRQ Rejects; BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x4", @@ -2649,8 +3230,10 @@ }, { "BriefDescription": "RRQ Rejects; BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x8", @@ -2658,8 +3241,10 @@ }, { "BriefDescription": "RRQ Rejects; Non UPI IV Request", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_CHA_RxC_RRQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x80", @@ -2667,8 +3252,10 @@ }, { "BriefDescription": "RRQ Rejects; Allow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x40", @@ -2676,8 +3263,10 @@ }, { "BriefDescription": "RRQ Rejects; ANY0", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x1", @@ -2685,8 +3274,10 @@ }, { "BriefDescription": "RRQ Rejects; HA", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x2", @@ -2694,8 +3285,10 @@ }, { "BriefDescription": "RRQ Rejects; Merging these two together to ma= ke room for ANY_REJECT_*0", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x20", @@ -2703,8 +3296,10 @@ }, { "BriefDescription": "RRQ Rejects; LLC Victim", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the 
RRQ (Remote Response Queue) had to retry.", "UMask": "0x4", @@ -2712,8 +3307,10 @@ }, { "BriefDescription": "RRQ Rejects; PhyAddr Match", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x80", @@ -2721,8 +3318,10 @@ }, { "BriefDescription": "RRQ Rejects; SF Victim", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x8", @@ -2730,8 +3329,10 @@ }, { "BriefDescription": "RRQ Rejects; Victim", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_CHA_RxC_RRQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the RRQ (Remote Response Queue) had to retry.", "UMask": "0x10", @@ -2739,8 +3340,10 @@ }, { "BriefDescription": "WBQ Rejects; AD REQ on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x1", @@ -2748,8 +3351,10 @@ }, { "BriefDescription": "WBQ Rejects; AD RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x2", @@ -2757,8 +3362,10 @@ }, { "BriefDescription": "WBQ Rejects; Non UPI AK Request", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.AK_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x40", @@ -2766,8 +3373,10 @@ }, { "BriefDescription": "WBQ Rejects; BL NCB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x10", @@ -2775,8 +3384,10 @@ }, { "BriefDescription": "WBQ Rejects; BL NCS on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x20", @@ -2784,8 +3395,10 @@ }, { "BriefDescription": "WBQ Rejects; BL RSP on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x4", @@ -2793,8 +3406,10 @@ }, { "BriefDescription": "WBQ Rejects; BL WB on VN0", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x8", @@ -2802,8 +3417,10 @@ }, { "BriefDescription": "WBQ Rejects; Non UPI IV Request", + 
"Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_CHA_RxC_WBQ0_REJECT.IV_NON_UPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x80", @@ -2811,8 +3428,10 @@ }, { "BriefDescription": "WBQ Rejects; Allow Snoop", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x40", @@ -2820,8 +3439,10 @@ }, { "BriefDescription": "WBQ Rejects; ANY0", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.ANY0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x1", @@ -2829,8 +3450,10 @@ }, { "BriefDescription": "WBQ Rejects; HA", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x2", @@ -2838,8 +3461,10 @@ }, { "BriefDescription": "WBQ Rejects; Merging these two together to ma= ke room for ANY_REJECT_*0", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x20", @@ -2847,8 +3472,10 @@ }, { "BriefDescription": "WBQ Rejects; LLC Victim", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x4", @@ -2856,8 +3483,10 @@ }, { "BriefDescription": "WBQ Rejects; PhyAddr Match", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x80", @@ -2865,8 +3494,10 @@ }, { "BriefDescription": "WBQ Rejects; SF Victim", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x8", @@ -2874,8 +3505,10 @@ }, { "BriefDescription": "WBQ Rejects; Victim", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_CHA_RxC_WBQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a transaction flowing throug= h the WBQ (Writeback Queue) had to retry.", "UMask": "0x10", @@ -2883,8 +3516,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_CHA_RxR_BUSY_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. 
In this case, because a mess= age from the other queue has higher priority", "UMask": "0x1", @@ -2892,8 +3527,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_CHA_RxR_BUSY_STARVED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x10", @@ -2901,8 +3538,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_CHA_RxR_BUSY_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x4", @@ -2910,8 +3549,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_CHA_RxR_BUSY_STARVED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x40", @@ -2919,8 +3560,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_CHA_RxR_BYPASS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x1", @@ -2928,8 +3571,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_CHA_RxR_BYPASS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x10", @@ -2937,8 +3582,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_CHA_RxR_BYPASS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x2", @@ -2946,8 +3593,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_CHA_RxR_BYPASS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x4", @@ -2955,8 +3604,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_CHA_RxR_BYPASS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x40", @@ -2964,8 +3615,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_CHA_RxR_BYPASS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x8", @@ -2973,8 +3626,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Bounce"= , + "Counter": 
"0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_CHA_RxR_CRD_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x1", @@ -2982,8 +3637,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_CHA_RxR_CRD_STARVED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x10", @@ -2991,8 +3648,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AK - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_CHA_RxR_CRD_STARVED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x2", @@ -3000,8 +3659,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_CHA_RxR_CRD_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x4", @@ -3009,8 +3670,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_CHA_RxR_CRD_STARVED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x40", @@ -3018,8 +3681,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; IFV - Credit= ", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_CHA_RxR_CRD_STARVED.IFV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x80", @@ -3027,8 +3692,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; IV - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_CHA_RxR_CRD_STARVED.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. 
In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x8", @@ -3036,8 +3703,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_CHA_RxR_INSERTS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x1", @@ -3045,8 +3714,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_CHA_RxR_INSERTS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x10", @@ -3054,8 +3725,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_CHA_RxR_INSERTS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x2", @@ -3063,8 +3736,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_CHA_RxR_INSERTS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x4", @@ -3072,8 +3747,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_CHA_RxR_INSERTS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x40", @@ -3081,8 +3758,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_CHA_RxR_INSERTS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x8", @@ -3090,8 +3769,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_CHA_RxR_OCCUPANCY.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x1", @@ -3099,8 +3780,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_CHA_RxR_OCCUPANCY.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x10", @@ -3108,8 +3791,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_CHA_RxR_OCCUPANCY.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x2", @@ -3117,8 +3802,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; BL - Bounce", 
+ "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_CHA_RxR_OCCUPANCY.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x4", @@ -3126,8 +3813,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_CHA_RxR_OCCUPANCY.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x40", @@ -3135,8 +3824,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_CHA_RxR_OCCUPANCY.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x8", @@ -3144,6 +3835,7 @@ }, { "BriefDescription": "Snoop filter capacity evictions for E-state e= ntries.", + "Counter": "0,1,2,3", "EventCode": "0x3D", "EventName": "UNC_CHA_SF_EVICTION.E_STATE", "PerPkg": "1", @@ -3153,6 +3845,7 @@ }, { "BriefDescription": "Snoop filter capacity evictions for M-state e= ntries.", + "Counter": "0,1,2,3", "EventCode": "0x3D", "EventName": "UNC_CHA_SF_EVICTION.M_STATE", "PerPkg": "1", @@ -3162,6 +3855,7 @@ }, { "BriefDescription": "Snoop filter capacity evictions for S-state e= ntries.", + "Counter": "0,1,2,3", "EventCode": "0x3D", "EventName": "UNC_CHA_SF_EVICTION.S_STATE", "PerPkg": "1", @@ -3171,8 +3865,10 @@ }, { "BriefDescription": "Snoops Sent; All", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of snoops issued by the HA= .", "UMask": "0x1", @@ -3180,8 +3876,10 @@ }, { "BriefDescription": "Snoops Sent; Broadcast snoop for Local Reques= ts", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.BCST_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of snoops issued by the HA= .; Counts the number of broadcast snoops issued by the HA. This filter incl= udes only requests coming from local sockets.", "UMask": "0x10", @@ -3189,8 +3887,10 @@ }, { "BriefDescription": "Snoops Sent; Broadcast snoops for Remote Requ= ests", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.BCST_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of snoops issued by the HA= .; Counts the number of broadcast snoops issued by the HA.This filter inclu= des only requests coming from remote sockets.", "UMask": "0x20", @@ -3198,8 +3898,10 @@ }, { "BriefDescription": "Snoops Sent; Directed snoops for Local Reques= ts", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.DIRECT_LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of snoops issued by the HA= .; Counts the number of directed snoops issued by the HA. 
This filter inclu= des only requests coming from local sockets.", "UMask": "0x40", @@ -3207,8 +3909,10 @@ }, { "BriefDescription": "Snoops Sent; Directed snoops for Remote Reque= sts", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.DIRECT_REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of snoops issued by the HA= .; Counts the number of directed snoops issued by the HA. This filter inclu= des only requests coming from remote sockets.", "UMask": "0x80", @@ -3216,8 +3920,10 @@ }, { "BriefDescription": "Snoops Sent; Broadcast or directed Snoops sen= t for Local Requests", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.LOCAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of snoops issued by the HA= .; Counts the number of broadcast or directed snoops issued by the HA per r= equest. This filter includes only requests coming from the local socket.", "UMask": "0x4", @@ -3225,8 +3931,10 @@ }, { "BriefDescription": "Snoops Sent; Broadcast or directed Snoops sen= t for Remote Requests", + "Counter": "0,1,2,3", "EventCode": "0x51", "EventName": "UNC_CHA_SNOOPS_SENT.REMOTE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of snoops issued by the HA= .; Counts the number of broadcast or directed snoops issued by the HA per r= equest. This filter includes only requests coming from the remote socket.", "UMask": "0x8", @@ -3234,6 +3942,7 @@ }, { "BriefDescription": "RspCnflct* Snoop Responses Received", + "Counter": "0,1,2,3", "EventCode": "0x5C", "EventName": "UNC_CHA_SNOOP_RESP.RSPCNFLCTS", "PerPkg": "1", @@ -3243,8 +3952,10 @@ }, { "BriefDescription": "Snoop Responses Received; RspFwd", + "Counter": "0,1,2,3", "EventCode": "0x5C", "EventName": "UNC_CHA_SNOOP_RESP.RSPFWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of RspI snoop respon= ses received. Whenever a snoops are issued, one or more snoop responses wi= ll be returned depending on the topology of the system. In systems larger= than 2s, when multiple snoops are returned this will count all the snoops = that are received. For example, if 3 snoops were issued and returned RspI,= RspS, and RspSFwd; then each of these sub-events would increment by 1.; Fi= lters for a snoop response of RspFwd to a CA request. This snoop response = is only possible for RdCur when a snoop HITM/E in a remote caching agent an= d it directly forwards data to a requestor without changing the requestor's= cache line state.", "UMask": "0x80", @@ -3252,6 +3963,7 @@ }, { "BriefDescription": "RspI Snoop Responses Received", + "Counter": "0,1,2,3", "EventCode": "0x5C", "EventName": "UNC_CHA_SNOOP_RESP.RSPI", "PerPkg": "1", @@ -3261,6 +3973,7 @@ }, { "BriefDescription": "RspIFwd Snoop Responses Received", + "Counter": "0,1,2,3", "EventCode": "0x5C", "EventName": "UNC_CHA_SNOOP_RESP.RSPIFWD", "PerPkg": "1", @@ -3270,8 +3983,10 @@ }, { "BriefDescription": "Snoop Responses Received : RspS", + "Counter": "0,1,2,3", "EventCode": "0x5C", "EventName": "UNC_CHA_SNOOP_RESP.RSPS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop Responses Received : RspS : Counts the= total number of RspI snoop responses received. Whenever a snoops are issu= ed, one or more snoop responses will be returned depending on the topology = of the system. In systems larger than 2s, when multiple snoops are return= ed this will count all the snoops that are received. 
For example, if 3 sno= ops were issued and returned RspI, RspS, and RspSFwd; then each of these su= b-events would increment by 1. : Filters for snoop responses of RspS. RspS= is returned when a remote cache has data but is not forwarding it. It is = a way to let the requesting socket know that it cannot allocate the data in= E state. No data is sent with S RspS.", "UMask": "0x2", @@ -3279,6 +3994,7 @@ }, { "BriefDescription": "RspSFwd Snoop Responses Received", + "Counter": "0,1,2,3", "EventCode": "0x5C", "EventName": "UNC_CHA_SNOOP_RESP.RSPSFWD", "PerPkg": "1", @@ -3288,6 +4004,7 @@ }, { "BriefDescription": "Rsp*Fwd*WB Snoop Responses Received", + "Counter": "0,1,2,3", "EventCode": "0x5C", "EventName": "UNC_CHA_SNOOP_RESP.RSP_FWD_WB", "PerPkg": "1", @@ -3297,6 +4014,7 @@ }, { "BriefDescription": "Rsp*WB Snoop Responses Received", + "Counter": "0,1,2,3", "EventCode": "0x5C", "EventName": "UNC_CHA_SNOOP_RESP.RSP_WBWB", "PerPkg": "1", @@ -3306,8 +4024,10 @@ }, { "BriefDescription": "Snoop Responses Received Local; RspCnflct", + "Counter": "0,1,2,3", "EventCode": "0x5D", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPCNFLCT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snoop responses received for a Loc= al request; Filters for snoops responses of RspConflict to local CA reques= ts. This is returned when a snoop finds an existing outstanding transactio= n in a remote caching agent when it CAMs that caching agent. This triggers= conflict resolution hardware. This covers both RspCnflct and RspCnflctWbI= .", "UMask": "0x40", @@ -3315,8 +4035,10 @@ }, { "BriefDescription": "Snoop Responses Received Local; RspFwd", + "Counter": "0,1,2,3", "EventCode": "0x5D", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPFWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snoop responses received for a Loc= al request; Filters for a snoop response of RspFwd to local CA requests. = This snoop response is only possible for RdCur when a snoop HITM/E in a rem= ote caching agent and it directly forwards data to a requestor without chan= ging the requestor's cache line state.", "UMask": "0x80", @@ -3324,8 +4046,10 @@ }, { "BriefDescription": "Snoop Responses Received Local; RspI", + "Counter": "0,1,2,3", "EventCode": "0x5D", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snoop responses received for a Loc= al request; Filters for snoops responses of RspI to local CA requests. Rs= pI is returned when the remote cache does not have the data, or when the re= mote cache silently evicts data (such as when an RFO hits non-modified data= ).", "UMask": "0x1", @@ -3333,8 +4057,10 @@ }, { "BriefDescription": "Snoop Responses Received Local; RspIFwd", + "Counter": "0,1,2,3", "EventCode": "0x5D", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPIFWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snoop responses received for a Loc= al request; Filters for snoop responses of RspIFwd to local CA requests. = This is returned when a remote caching agent forwards data and the requesti= ng agent is able to acquire the data in E or M states. This is commonly re= turned with RFO transactions. 
It can be either a HitM or a HitFE.", "UMask": "0x4", @@ -3342,8 +4068,10 @@ }, { "BriefDescription": "Snoop Responses Received Local; RspS", + "Counter": "0,1,2,3", "EventCode": "0x5D", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snoop responses received for a Loc= al request; Filters for snoop responses of RspS to local CA requests. Rsp= S is returned when a remote cache has data but is not forwarding it. It is= a way to let the requesting socket know that it cannot allocate the data i= n E state. No data is sent with S RspS.", "UMask": "0x2", @@ -3351,8 +4079,10 @@ }, { "BriefDescription": "Snoop Responses Received Local; RspSFwd", + "Counter": "0,1,2,3", "EventCode": "0x5D", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPSFWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snoop responses received for a Loc= al request; Filters for a snoop response of RspSFwd to local CA requests. = This is returned when a remote caching agent forwards data but holds on to= its current copy. This is common for data and code reads that hit in a re= mote socket in E or F state.", "UMask": "0x8", @@ -3360,8 +4090,10 @@ }, { "BriefDescription": "Snoop Responses Received Local; Rsp*FWD*WB", + "Counter": "0,1,2,3", "EventCode": "0x5D", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSP_FWD_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snoop responses received for a Loc= al request; Filters for a snoop response of Rsp*Fwd*WB to local CA request= s. This snoop response is only used in 4s systems. It is used when a snoo= p HITM's in a remote caching agent and it directly forwards data to a reque= stor, and simultaneously returns data to the home to be written back to mem= ory.", "UMask": "0x20", @@ -3369,8 +4101,10 @@ }, { "BriefDescription": "Snoop Responses Received Local; Rsp*WB", + "Counter": "0,1,2,3", "EventCode": "0x5D", "EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSP_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snoop responses received for a Loc= al request; Filters for a snoop response of RspIWB or RspSWB to local CA r= equests. This is returned when a non-RFO request hits in M state. Data an= d Code Reads can return either RspIWB or RspSWB depending on how the system= has been configured. 
InvItoE transactions will also return RspIWB because= they must acquire ownership.", "UMask": "0x10", @@ -3378,8 +4112,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 0", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x1", @@ -3387,8 +4123,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 1", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x2", @@ -3396,8 +4134,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 2", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -3405,8 +4145,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -3414,8 +4156,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x10", @@ -3423,8 +4167,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 5", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x20", @@ -3432,8 +4178,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 0", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x1", @@ -3441,8 +4189,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 1", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x2", @@ -3450,8 +4200,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 2", + "Counter": 
"0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -3459,8 +4211,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -3468,8 +4222,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x10", @@ -3477,8 +4233,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 5", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x20", @@ -3486,8 +4244,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 0", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x1", @@ -3495,8 +4255,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 1", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x2", @@ -3504,8 +4266,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 2", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -3513,8 +4277,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -3522,8 +4288,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR 
credit to become available, per transgress."= , "UMask": "0x10", @@ -3531,8 +4299,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 5", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x20", @@ -3540,8 +4310,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 0", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x1", @@ -3549,8 +4321,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 1", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x2", @@ -3558,8 +4332,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 2", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -3567,8 +4343,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -3576,8 +4354,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x10", @@ -3585,8 +4365,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 5", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x20", @@ -3594,8 +4376,10 @@ }, { "BriefDescription": "TOR Inserts; Hits from Local", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.ALL_HIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent.", "UMask": "0x15", @@ -3603,8 +4387,10 @@ }, { "BriefDescription": "TOR Inserts; All from Local iA and IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.ALL_IO_IA", + "Experimental": "1", "PerPkg": 
"1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent.; A= ll locally initiated requests", "UMask": "0x35", @@ -3612,8 +4398,10 @@ }, { "BriefDescription": "TOR Inserts; Misses from Local", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.ALL_MISS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent.", "UMask": "0x25", @@ -3621,8 +4409,10 @@ }, { "BriefDescription": "TOR Inserts; SF/LLC Evictions", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.EVICT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent.; T= OR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)= ", "UMask": "0x2", @@ -3630,8 +4420,10 @@ }, { "BriefDescription": "TOR Inserts; Hit (Not a Miss)", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.HIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent.; H= ITs (hit is defined to be not a miss [see below], as a result for any reque= st allocated into the TOR, one of either HIT or MISS must be true)", "UMask": "0x10", @@ -3639,6 +4431,7 @@ }, { "BriefDescription": "TOR Inserts; All from Local iA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA", "PerPkg": "1", @@ -3648,6 +4441,7 @@ }, { "BriefDescription": "TOR Inserts; Hits from Local iA", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT", "PerPkg": "1", @@ -3657,6 +4451,7 @@ }, { "BriefDescription": "TOR Inserts : CRds issued by iA Cores that Hi= t the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD", "Filter": "config1=3D0x40233", @@ -3667,6 +4462,7 @@ }, { "BriefDescription": "TOR Inserts : DRds issued by iA Cores that Hi= t the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD", "Filter": "config1=3D0x40433", @@ -3677,6 +4473,7 @@ }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefCRD", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefCRD", "Filter": "config1=3D0x4b233", @@ -3686,6 +4483,7 @@ }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefDRD", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefDRD", "Filter": "config1=3D0x4b433", @@ -3695,6 +4493,7 @@ }, { "BriefDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores t= hat hit the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefRFO", "Filter": "config1=3D0x4b033", @@ -3705,6 +4504,7 @@ }, { "BriefDescription": "TOR Inserts : RFOs issued by iA Cores that Hi= t the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO", "Filter": "config1=3D0x40033", @@ -3715,6 +4515,7 @@ }, { "BriefDescription": "TOR Inserts : All requests from iA Cores that= Missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS", "PerPkg": "1", @@ -3724,6 +4525,7 @@ }, { "BriefDescription": "TOR Inserts : CRds issued by 
iA Cores that Mi= ssed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD", "Filter": "config1=3D0x40233", @@ -3734,6 +4536,7 @@ }, { "BriefDescription": "TOR Inserts : DRds issued by iA Cores that Mi= ssed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD", "Filter": "config1=3D0x40433", @@ -3744,6 +4547,7 @@ }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefCRD", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefCRD", "Filter": "config1=3D0x4b233", @@ -3753,6 +4557,7 @@ }, { "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefDRD", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefDRD", "Filter": "config1=3D0x4b433", @@ -3762,6 +4567,7 @@ }, { "BriefDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores t= hat missed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefRFO", "Filter": "config1=3D0x4b033", @@ -3772,6 +4578,7 @@ }, { "BriefDescription": "TOR Inserts : RFOs issued by iA Cores that Mi= ssed the LLC", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO", "Filter": "config1=3D0x40033", @@ -3782,8 +4589,10 @@ }, { "BriefDescription": "TOR Inserts; All from Local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent.; A= ll locally generated IO traffic", "UMask": "0x34", @@ -3791,6 +4600,7 @@ }, { "BriefDescription": "TOR Inserts; Hits from Local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT", "PerPkg": "1", @@ -3800,6 +4610,7 @@ }, { "BriefDescription": "TOR Inserts; Misses from Local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS", "PerPkg": "1", @@ -3809,8 +4620,10 @@ }, { "BriefDescription": "TOR Inserts; ItoM misses from Local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM", + "Experimental": "1", "Filter": "config1=3D0x49033", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that are generated from local IO ItoM requests that mis= s the LLC. An ItoM request is used by IIO to request a data write without f= irst reading the data for ownership.", @@ -3819,8 +4632,10 @@ }, { "BriefDescription": "TOR Inserts; RdCur misses from Local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RDCUR", + "Experimental": "1", "Filter": "config1=3D0x43C33", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that are generated from local IO RdCur requests and mis= s the LLC. A RdCur request is used by IIO to read data without changing sta= te.", @@ -3829,8 +4644,10 @@ }, { "BriefDescription": "TOR Inserts; RFO misses from Local IO", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RFO", + "Experimental": "1", "Filter": "config1=3D0x40033", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that are generated from local IO RFO requests that miss= the LLC. 
A read for ownership (RFO) requests a cache line to be cached in = E state with the intent to modify.", @@ -3839,8 +4656,10 @@ }, { "BriefDescription": "TOR Inserts; IPQ", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IPQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent.", "UMask": "0x8", @@ -3848,26 +4667,32 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IPQ_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x18", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IPQ_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x28", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts; IRQ", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.IRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent.", "UMask": "0x1", @@ -3875,17 +4700,21 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.LOC_ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x37", "Unit": "CHA" }, { "BriefDescription": "TOR Inserts; Miss", + "Counter": "0,1,2,3", "EventCode": "0x35", "EventName": "UNC_CHA_TOR_INSERTS.MISS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent.; M= isses. (a miss is defined to be any transaction from the IRQ, PRQ, RRQ, IP= Q or (in the victim case) the ISMQ, that required the CHA to spawn a new UP= I/SMI3 request on the UPI fabric (including UPI snoops and/or any RD/WR to = a local memory controller, in the event that the CHA is the home node)). B= asically, if the LLC/SF/MLC complex were not able to service the request wi= thout involving another agent...it is a miss. 
If only IDI snoops were required, it is not a miss (that means the SF/MLC com",
         "UMask": "0x20",
@@ -3893,8 +4722,10 @@
     },
     {
         "BriefDescription": "TOR Inserts; PRQ",
+        "Counter": "0,1,2,3",
         "EventCode": "0x35",
         "EventName": "UNC_CHA_TOR_INSERTS.PRQ",
+        "Experimental": "1",
         "PerPkg": "1",
         "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
         "UMask": "0x4",
@@ -3902,6 +4733,7 @@
     },
     {
         "BriefDescription": "This event is deprecated.",
+        "Counter": "0,1,2,3",
         "Deprecated": "1",
         "EventCode": "0x35",
         "EventName": "UNC_CHA_TOR_INSERTS.REM_ALL",
@@ -3911,44 +4743,54 @@
     },
     {
         "BriefDescription": "This event is deprecated.",
+        "Counter": "0,1,2,3",
         "Deprecated": "1",
         "EventCode": "0x35",
         "EventName": "UNC_CHA_TOR_INSERTS.RRQ_HIT",
+        "Experimental": "1",
         "PerPkg": "1",
         "UMask": "0x50",
         "Unit": "CHA"
     },
     {
         "BriefDescription": "This event is deprecated.",
+        "Counter": "0,1,2,3",
         "Deprecated": "1",
         "EventCode": "0x35",
         "EventName": "UNC_CHA_TOR_INSERTS.RRQ_MISS",
+        "Experimental": "1",
         "PerPkg": "1",
         "UMask": "0x60",
         "Unit": "CHA"
     },
     {
         "BriefDescription": "This event is deprecated.",
+        "Counter": "0,1,2,3",
         "Deprecated": "1",
         "EventCode": "0x35",
         "EventName": "UNC_CHA_TOR_INSERTS.WBQ_HIT",
+        "Experimental": "1",
         "PerPkg": "1",
         "UMask": "0x90",
         "Unit": "CHA"
     },
     {
         "BriefDescription": "This event is deprecated.",
+        "Counter": "0,1,2,3",
         "Deprecated": "1",
         "EventCode": "0x35",
         "EventName": "UNC_CHA_TOR_INSERTS.WBQ_MISS",
+        "Experimental": "1",
         "PerPkg": "1",
         "UMask": "0xa0",
         "Unit": "CHA"
     },
     {
         "BriefDescription": "TOR Occupancy; All from Local",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.ALL_FROM_LOC",
+        "Experimental": "1",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); All remotely generated requests",
         "UMask": "0x37",
@@ -3956,8 +4798,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; Hits from Local",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.ALL_HIT",
+        "Experimental": "1",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
         "UMask": "0x17",
@@ -3965,8 +4809,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; Misses from Local",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.ALL_MISS",
+        "Experimental": "1",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
         "UMask": "0x27",
@@ -3974,8 +4820,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; SF/LLC Evictions",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.EVICT",
+        "Experimental": "1",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T; TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)",
         "UMask": "0x2",
@@ -3983,8 +4831,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; Hit (Not a Miss)",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.HIT",
+        "Experimental": "1",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T; HITs (hit is defined to be not a miss [see below], as a result for any request allocated into the TOR, one of either HIT or MISS must be true)",
         "UMask": "0x10",
@@ -3992,6 +4842,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy; All from Local iA",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA",
         "PerPkg": "1",
@@ -4001,6 +4852,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy; Hits from Local iA",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT",
         "PerPkg": "1",
@@ -4010,6 +4862,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy : CRds issued by iA Cores that Hit the LLC",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD",
         "Filter": "config1=0x40233",
@@ -4020,6 +4873,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy : DRds issued by iA Cores that Hit the LLC",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD",
         "Filter": "config1=0x40433",
@@ -4030,6 +4884,7 @@
     },
     {
         "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefCRD",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefCRD",
         "Filter": "config1=0x4b233",
@@ -4039,6 +4894,7 @@
     },
     {
         "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefDRD",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefDRD",
         "Filter": "config1=0x4b433",
@@ -4048,6 +4904,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that hit the LLC",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefRFO",
         "Filter": "config1=0x4b033",
@@ -4058,6 +4915,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy : RFOs issued by iA Cores that Hit the LLC",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO",
         "Filter": "config1=0x40033",
@@ -4068,6 +4926,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy; Misses from Local iA",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS",
         "PerPkg": "1",
@@ -4077,6 +4936,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy : CRds issued by iA Cores that Missed the LLC",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD",
         "Filter": "config1=0x40233",
@@ -4087,6 +4947,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy : DRds issued by iA Cores that Missed the LLC",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD",
         "Filter": "config1=0x40433",
@@ -4097,6 +4958,7 @@
     },
     {
         "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefCRD",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefCRD",
         "Filter": "config1=0x4b233",
@@ -4106,6 +4968,7 @@
     },
     {
         "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefDRD",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefDRD",
         "Filter": "config1=0x4b433",
@@ -4115,6 +4978,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that missed the LLC",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefRFO",
         "Filter": "config1=0x4b033",
@@ -4125,6 +4989,7 @@
     },
     {
         "BriefDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO",
         "Filter": "config1=0x40033",
@@ -4135,8 +5000,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; All from Local IO",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IO",
+        "Experimental": "1",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T; All locally generated IO traffic",
         "UMask": "0x34",
@@ -4144,8 +5011,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; Hits from Local IO",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT",
+        "Experimental": "1",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
         "UMask": "0x14",
@@ -4153,8 +5022,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; Misses from Local IO",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS",
+        "Experimental": "1",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
         "UMask": "0x24",
@@ -4162,8 +5033,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; ITOM Misses from Local IO",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM",
+        "Experimental": "1",
         "Filter": "config1=0x49033",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO ItoM requests that miss the LLC. An ItoM is used by IIO to request a data write without first reading the data for ownership.",
@@ -4172,8 +5045,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; RDCUR misses from Local IO",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RDCUR",
+        "Experimental": "1",
         "Filter": "config1=0x43C33",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO RdCur requests that miss the LLC. A RdCur request is used by IIO to read data without changing state.",
@@ -4182,8 +5057,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; RFO misses from Local IO",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RFO",
+        "Experimental": "1",
         "Filter": "config1=0x40033",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO RFO requests that miss the LLC. A read for ownership (RFO) requests data to be cached in E state with the intent to modify.",
@@ -4192,8 +5069,10 @@
     },
     {
         "BriefDescription": "TOR Occupancy; IPQ",
+        "Counter": "0",
         "EventCode": "0x36",
         "EventName": "UNC_CHA_TOR_OCCUPANCY.IPQ",
+        "Experimental": "1",
         "PerPkg": "1",
         "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. 
T", "UMask": "0x8", @@ -4201,26 +5080,32 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IPQ_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x18", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IPQ_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x28", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy; IRQ", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.IRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. T", "UMask": "0x1", @@ -4228,17 +5113,21 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.ALL_FROM_LOC", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x37", "Unit": "CHA" }, { "BriefDescription": "TOR Occupancy; Miss", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.MISS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. T; Misses. (a miss is defined to be any transaction from the= IRQ, PRQ, RRQ, IPQ or (in the victim case) the ISMQ, that required the CHA= to spawn a new UPI/SMI3 request on the UPI fabric (including UPI snoops an= d/or any RD/WR to a local memory controller, in the event that the CHA is t= he home node)). Basically, if the LLC/SF/MLC complex were not able to serv= ice the request without involving another agent...it is a miss. If only ID= I snoops were required, it is not a miss (that means the SF/MLC com", "UMask": "0x20", @@ -4246,8 +5135,10 @@ }, { "BriefDescription": "TOR Occupancy; PRQ", + "Counter": "0", "EventCode": "0x36", "EventName": "UNC_CHA_TOR_OCCUPANCY.PRQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. 
T", "UMask": "0x4", @@ -4255,8 +5146,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9D", "EventName": "UNC_CHA_TxR_HORZ_ADS_USED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -4264,8 +5157,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0x9D", "EventName": "UNC_CHA_TxR_HORZ_ADS_USED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -4273,8 +5168,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9D", "EventName": "UNC_CHA_TxR_HORZ_ADS_USED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -4282,8 +5179,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9D", "EventName": "UNC_CHA_TxR_HORZ_ADS_USED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -4291,8 +5190,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0x9D", "EventName": "UNC_CHA_TxR_HORZ_ADS_USED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -4300,8 +5201,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_CHA_TxR_HORZ_BYPASS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -4309,8 +5212,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_CHA_TxR_HORZ_BYPASS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -4318,8 +5223,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_CHA_TxR_HORZ_BYPASS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -4327,8 +5234,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_CHA_TxR_HORZ_BYPASS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -4336,8 +5245,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_CHA_TxR_HORZ_BYPASS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by 
ring type and CMS Agent.", "UMask": "0x40", @@ -4345,8 +5256,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_CHA_TxR_HORZ_BYPASS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x8", @@ -4354,8 +5267,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; A= D - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -4363,8 +5278,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; A= D - Credit", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -4372,8 +5289,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; A= K - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -4381,8 +5300,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; B= L - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -4390,8 +5311,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; B= L - Credit", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -4399,8 +5322,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; I= V - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -4408,8 +5333,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. 
The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -4417,8 +5344,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -4426,8 +5355,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -4435,8 +5366,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -4444,8 +5377,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -4453,8 +5388,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. 
The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -4462,8 +5399,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_CHA_TxR_HORZ_INSERTS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -4471,8 +5410,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_CHA_TxR_HORZ_INSERTS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -4480,8 +5421,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_CHA_TxR_HORZ_INSERTS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -4489,8 +5432,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_CHA_TxR_HORZ_INSERTS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -4498,8 +5443,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_CHA_TxR_HORZ_INSERTS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -4507,8 +5454,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_CHA_TxR_HORZ_INSERTS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -4516,8 +5465,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_CHA_TxR_HORZ_NACK.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x1", @@ -4525,8 +5476,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_CHA_TxR_HORZ_NACK.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x20", @@ -4534,8 +5487,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_CHA_TxR_HORZ_NACK.AK_BNC", + "Experimental": "1", "PerPkg": 
"1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x2", @@ -4543,8 +5498,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_CHA_TxR_HORZ_NACK.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x4", @@ -4552,8 +5509,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_CHA_TxR_HORZ_NACK.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x40", @@ -4561,8 +5520,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_CHA_TxR_HORZ_NACK.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x8", @@ -4570,8 +5531,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; AD - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -4579,8 +5542,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; AD - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -4588,8 +5553,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; AK - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -4597,8 +5564,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; BL - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -4606,8 +5575,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; BL - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -4615,8 +5586,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; IV - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common 
Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -4624,8 +5597,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; A= D - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9B", "EventName": "UNC_CHA_TxR_HORZ_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x1", @@ -4633,8 +5608,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; A= K - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9B", "EventName": "UNC_CHA_TxR_HORZ_STARVED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x2", @@ -4642,8 +5619,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; B= L - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9B", "EventName": "UNC_CHA_TxR_HORZ_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x4", @@ -4651,8 +5630,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; I= V - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9B", "EventName": "UNC_CHA_TxR_HORZ_STARVED.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x8", @@ -4660,8 +5641,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_CHA_TxR_VERT_ADS_USED.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -4669,8 +5652,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_CHA_TxR_VERT_ADS_USED.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -4678,8 +5663,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_CHA_TxR_VERT_ADS_USED.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -4687,8 +5674,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_CHA_TxR_VERT_ADS_USED.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x20", @@ -4696,8 +5685,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_CHA_TxR_VERT_ADS_USED.BL_AG0", + "Experimental": 
"1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -4705,8 +5696,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_CHA_TxR_VERT_ADS_USED.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -4714,8 +5707,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_CHA_TxR_VERT_BYPASS.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -4723,8 +5718,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_CHA_TxR_VERT_BYPASS.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -4732,8 +5729,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_CHA_TxR_VERT_BYPASS.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -4741,8 +5740,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_CHA_TxR_VERT_BYPASS.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x20", @@ -4750,8 +5751,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_CHA_TxR_VERT_BYPASS.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -4759,8 +5762,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_CHA_TxR_VERT_BYPASS.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -4768,8 +5773,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; IV", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_CHA_TxR_VERT_BYPASS.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x8", @@ -4777,8 +5784,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the A= D ring. 
Some example include outbound requests, snoop requests, and snoop = responses.", "UMask": "0x1", @@ -4786,8 +5795,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 1 destined for the A= D ring. This is commonly used for outbound requests.", "UMask": "0x10", @@ -4795,8 +5806,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the A= K ring. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -4804,8 +5817,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 1 destined for the A= K ring.", "UMask": "0x20", @@ -4813,8 +5828,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the B= L ring. This is commonly used to send data from the cache to various desti= nations.", "UMask": "0x4", @@ -4822,8 +5839,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 1 destined for the B= L ring. This is commonly used for transferring writeback data to the cache= .", "UMask": "0x40", @@ -4831,8 +5850,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; IV"= , + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the I= V ring. 
This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -4840,8 +5861,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = AD ring. Some example include outbound requests, snoop requests, and snoop= responses.", "UMask": "0x1", @@ -4849,8 +5872,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the = AD ring. This is commonly used for outbound requests.", "UMask": "0x10", @@ -4858,8 +5883,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = AK ring. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -4867,8 +5894,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the = AK ring.", "UMask": "0x20", @@ -4876,8 +5905,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = BL ring. This is commonly used to send data from the cache to various dest= inations.", "UMask": "0x4", @@ -4885,8 +5916,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the = BL ring. 
This is commonly used for transferring writeback data to the cach= e.", "UMask": "0x40", @@ -4894,8 +5927,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; IV", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = IV ring. This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -4903,8 +5938,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_CHA_TxR_VERT_INSERTS.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD r= ing. Some example include outbound requests, snoop requests, and snoop res= ponses.", "UMask": "0x1", @@ -4912,8 +5949,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_CHA_TxR_VERT_INSERTS.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD r= ing. This is commonly used for outbound requests.", "UMask": "0x10", @@ -4921,8 +5960,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_CHA_TxR_VERT_INSERTS.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK r= ing. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -4930,8 +5971,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_CHA_TxR_VERT_INSERTS.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK r= ing.", "UMask": "0x20", @@ -4939,8 +5982,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_CHA_TxR_VERT_INSERTS.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL r= ing. This is commonly used to send data from the cache to various destinat= ions.", "UMask": "0x4", @@ -4948,8 +5993,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_CHA_TxR_VERT_INSERTS.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. 
The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL r= ing. This is commonly used for transferring writeback data to the cache.", "UMask": "0x40", @@ -4957,8 +6004,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; IV", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_CHA_TxR_VERT_INSERTS.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV r= ing. This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -4966,8 +6015,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_CHA_TxR_VERT_NACK.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x1", @@ -4975,8 +6026,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_CHA_TxR_VERT_NACK.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x10", @@ -4984,8 +6037,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_CHA_TxR_VERT_NACK.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x2", @@ -4993,8 +6048,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_CHA_TxR_VERT_NACK.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x20", @@ -5002,8 +6059,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_CHA_TxR_VERT_NACK.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x4", @@ -5011,8 +6070,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_CHA_TxR_VERT_NACK.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x40", @@ -5020,8 +6081,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; IV", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_CHA_TxR_VERT_NACK.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x8", @@ -5029,8 +6092,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he AD ring. 
Some example include outbound requests, snoop requests, and sn= oop responses.", "UMask": "0x1", @@ -5038,8 +6103,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for t= he AD ring. This is commonly used for outbound requests.", "UMask": "0x10", @@ -5047,8 +6114,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he AK ring. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -5056,8 +6125,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for t= he AK ring.", "UMask": "0x20", @@ -5065,8 +6136,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he BL ring. This is commonly used to send data from the cache to various d= estinations.", "UMask": "0x4", @@ -5074,8 +6147,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for t= he BL ring. This is commonly used for transferring writeback data to the c= ache.", "UMask": "0x40", @@ -5083,8 +6158,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; IV", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he IV ring. This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -5092,8 +6169,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AD = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_CHA_TxR_VERT_STARVED.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. 
This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x1", @@ -5101,8 +6180,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AD = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_CHA_TxR_VERT_STARVED.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x10", @@ -5110,8 +6191,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AK = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_CHA_TxR_VERT_STARVED.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x2", @@ -5119,8 +6202,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AK = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_CHA_TxR_VERT_STARVED.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x20", @@ -5128,8 +6213,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; BL = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_CHA_TxR_VERT_STARVED.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x4", @@ -5137,8 +6224,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; BL = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_CHA_TxR_VERT_STARVED.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x40", @@ -5146,8 +6235,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; IV"= , + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_CHA_TxR_VERT_STARVED.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x8", @@ -5155,8 +6246,10 @@ }, { "BriefDescription": "UPI Ingress Credit Allocations; AD REQ Credit= s", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of UPI credits acquired fo= r either the AD or BL ring. In order to send snoops, snoop responses, requ= ests, data, etc to the UPI agent on the ring, it is necessary to first acqu= ire a credit for the UPI ingress buffer. This can be used with the Credit = Occupancy event in order to calculate average credit lifetime. This event = supports filtering to cover the VNA/VN0 credits and the different message c= lasses. 
Note that you must select the link that you would like to monitor = using the link select register, and you can only monitor 1 link at a time."= , "UMask": "0x4", @@ -5164,8 +6257,10 @@ }, { "BriefDescription": "UPI Ingress Credit Allocations; AD RSP VN0 Cr= edits", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of UPI credits acquired fo= r either the AD or BL ring. In order to send snoops, snoop responses, requ= ests, data, etc to the UPI agent on the ring, it is necessary to first acqu= ire a credit for the UPI ingress buffer. This can be used with the Credit = Occupancy event in order to calculate average credit lifetime. This event = supports filtering to cover the VNA/VN0 credits and the different message c= lasses. Note that you must select the link that you would like to monitor = using the link select register, and you can only monitor 1 link at a time."= , "UMask": "0x8", @@ -5173,8 +6268,10 @@ }, { "BriefDescription": "UPI Ingress Credit Allocations; BL NCB Credit= s", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of UPI credits acquired fo= r either the AD or BL ring. In order to send snoops, snoop responses, requ= ests, data, etc to the UPI agent on the ring, it is necessary to first acqu= ire a credit for the UPI ingress buffer. This can be used with the Credit = Occupancy event in order to calculate average credit lifetime. This event = supports filtering to cover the VNA/VN0 credits and the different message c= lasses. Note that you must select the link that you would like to monitor = using the link select register, and you can only monitor 1 link at a time."= , "UMask": "0x40", @@ -5182,8 +6279,10 @@ }, { "BriefDescription": "UPI Ingress Credit Allocations; BL NCS Credit= s", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of UPI credits acquired fo= r either the AD or BL ring. In order to send snoops, snoop responses, requ= ests, data, etc to the UPI agent on the ring, it is necessary to first acqu= ire a credit for the UPI ingress buffer. This can be used with the Credit = Occupancy event in order to calculate average credit lifetime. This event = supports filtering to cover the VNA/VN0 credits and the different message c= lasses. Note that you must select the link that you would like to monitor = using the link select register, and you can only monitor 1 link at a time."= , "UMask": "0x80", @@ -5191,8 +6290,10 @@ }, { "BriefDescription": "UPI Ingress Credit Allocations; BL RSP Credit= s", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of UPI credits acquired fo= r either the AD or BL ring. In order to send snoops, snoop responses, requ= ests, data, etc to the UPI agent on the ring, it is necessary to first acqu= ire a credit for the UPI ingress buffer. This can be used with the Credit = Occupancy event in order to calculate average credit lifetime. This event = supports filtering to cover the VNA/VN0 credits and the different message c= lasses. 
Note that you must select the link that you would like to monitor = using the link select register, and you can only monitor 1 link at a time."= , "UMask": "0x10", @@ -5200,8 +6301,10 @@ }, { "BriefDescription": "UPI Ingress Credit Allocations; BL DRS Credit= s", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of UPI credits acquired fo= r either the AD or BL ring. In order to send snoops, snoop responses, requ= ests, data, etc to the UPI agent on the ring, it is necessary to first acqu= ire a credit for the UPI ingress buffer. This can be used with the Credit = Occupancy event in order to calculate average credit lifetime. This event = supports filtering to cover the VNA/VN0 credits and the different message c= lasses. Note that you must select the link that you would like to monitor = using the link select register, and you can only monitor 1 link at a time."= , "UMask": "0x20", @@ -5209,8 +6312,10 @@ }, { "BriefDescription": "UPI Ingress Credit Allocations; VN0 Credits", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of UPI credits acquired fo= r either the AD or BL ring. In order to send snoops, snoop responses, requ= ests, data, etc to the UPI agent on the ring, it is necessary to first acqu= ire a credit for the UPI ingress buffer. This can be used with the Credit = Occupancy event in order to calculate average credit lifetime. This event = supports filtering to cover the VNA/VN0 credits and the different message c= lasses. Note that you must select the link that you would like to monitor = using the link select register, and you can only monitor 1 link at a time."= , "UMask": "0x2", @@ -5218,8 +6323,10 @@ }, { "BriefDescription": "UPI Ingress Credit Allocations; VNA Credits", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.VNA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of UPI credits acquired fo= r either the AD or BL ring. In order to send snoops, snoop responses, requ= ests, data, etc to the UPI agent on the ring, it is necessary to first acqu= ire a credit for the UPI ingress buffer. This can be used with the Credit = Occupancy event in order to calculate average credit lifetime. This event = supports filtering to cover the VNA/VN0 credits and the different message c= lasses. Note that you must select the link that you would like to monitor = using the link select register, and you can only monitor 1 link at a time."= , "UMask": "0x1", @@ -5227,8 +6334,10 @@ }, { "BriefDescription": "UPI Ingress Credits In Use Cycles; AD REQ VN0= Credits", + "Counter": "0", "EventCode": "0x3B", "EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of UPI credits availa= ble in each cycle for either the AD or BL ring. In order to send snoops, s= noop responses, requests, data, etc to the UPI agent on the ring, it is nec= essary to first acquire a credit for the UPI ingress buffer. This stat inc= rements by the number of credits that are available each cycle. This can b= e used in conjunction with the Credit Acquired event in order to calculate = average credit lifetime. This event supports filtering for the different t= ypes of credits that are available. 
Note that you must select the link tha= t you would like to monitor using the link select register, and you can onl= y monitor 1 link at a time.", "UMask": "0x4", @@ -5236,8 +6345,10 @@ }, { "BriefDescription": "UPI Ingress Credits In Use Cycles; AD RSP VN0= Credits", + "Counter": "0", "EventCode": "0x3B", "EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of UPI credits availa= ble in each cycle for either the AD or BL ring. In order to send snoops, s= noop responses, requests, data, etc to the UPI agent on the ring, it is nec= essary to first acquire a credit for the UPI ingress buffer. This stat inc= rements by the number of credits that are available each cycle. This can b= e used in conjunction with the Credit Acquired event in order to calculate = average credit lifetime. This event supports filtering for the different t= ypes of credits that are available. Note that you must select the link tha= t you would like to monitor using the link select register, and you can onl= y monitor 1 link at a time.", "UMask": "0x8", @@ -5245,8 +6356,10 @@ }, { "BriefDescription": "UPI Ingress Credits In Use Cycles; BL NCB VN0= Credits", + "Counter": "0", "EventCode": "0x3B", "EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of UPI credits availa= ble in each cycle for either the AD or BL ring. In order to send snoops, s= noop responses, requests, data, etc to the UPI agent on the ring, it is nec= essary to first acquire a credit for the UPI ingress buffer. This stat inc= rements by the number of credits that are available each cycle. This can b= e used in conjunction with the Credit Acquired event in order to calculate = average credit lifetime. This event supports filtering for the different t= ypes of credits that are available. Note that you must select the link tha= t you would like to monitor using the link select register, and you can onl= y monitor 1 link at a time.", "UMask": "0x40", @@ -5254,6 +6367,7 @@ }, { "BriefDescription": "UPI Ingress Credits In Use Cycles; BL NCS VN0= Credits", + "Counter": "0", "EventCode": "0x3B", "EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_BL_NCS", "PerPkg": "1", @@ -5263,8 +6377,10 @@ }, { "BriefDescription": "UPI Ingress Credits In Use Cycles; BL RSP VN0= Credits", + "Counter": "0", "EventCode": "0x3B", "EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of UPI credits availa= ble in each cycle for either the AD or BL ring. In order to send snoops, s= noop responses, requests, data, etc to the UPI agent on the ring, it is nec= essary to first acquire a credit for the UPI ingress buffer. This stat inc= rements by the number of credits that are available each cycle. This can b= e used in conjunction with the Credit Acquired event in order to calculate = average credit lifetime. This event supports filtering for the different t= ypes of credits that are available. 
Note that you must select the link tha= t you would like to monitor using the link select register, and you can onl= y monitor 1 link at a time.", "UMask": "0x10", @@ -5272,8 +6388,10 @@ }, { "BriefDescription": "UPI Ingress Credits In Use Cycles; BL DRS VN0= Credits", + "Counter": "0", "EventCode": "0x3B", "EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of UPI credits availa= ble in each cycle for either the AD or BL ring. In order to send snoops, s= noop responses, requests, data, etc to the UPI agent on the ring, it is nec= essary to first acquire a credit for the UPI ingress buffer. This stat inc= rements by the number of credits that are available each cycle. This can b= e used in conjunction with the Credit Acquired event in order to calculate = average credit lifetime. This event supports filtering for the different t= ypes of credits that are available. Note that you must select the link tha= t you would like to monitor using the link select register, and you can onl= y monitor 1 link at a time.", "UMask": "0x20", @@ -5281,8 +6399,10 @@ }, { "BriefDescription": "UPI Ingress Credits In Use Cycles; AD VNA Cre= dits", + "Counter": "0", "EventCode": "0x3B", "EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VNA_AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of UPI credits availa= ble in each cycle for either the AD or BL ring. In order to send snoops, s= noop responses, requests, data, etc to the UPI agent on the ring, it is nec= essary to first acquire a credit for the UPI ingress buffer. This stat inc= rements by the number of credits that are available each cycle. This can b= e used in conjunction with the Credit Acquired event in order to calculate = average credit lifetime. This event supports filtering for the different t= ypes of credits that are available. Note that you must select the link tha= t you would like to monitor using the link select register, and you can onl= y monitor 1 link at a time.", "UMask": "0x1", @@ -5290,8 +6410,10 @@ }, { "BriefDescription": "UPI Ingress Credits In Use Cycles; BL VNA Cre= dits", + "Counter": "0", "EventCode": "0x3B", "EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VNA_BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of UPI credits availa= ble in each cycle for either the AD or BL ring. In order to send snoops, s= noop responses, requests, data, etc to the UPI agent on the ring, it is nec= essary to first acquire a credit for the UPI ingress buffer. This stat inc= rements by the number of credits that are available each cycle. This can b= e used in conjunction with the Credit Acquired event in order to calculate = average credit lifetime. This event supports filtering for the different t= ypes of credits that are available. Note that you must select the link tha= t you would like to monitor using the link select register, and you can onl= y monitor 1 link at a time.", "UMask": "0x2", @@ -5299,8 +6421,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Down and Even", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "UNC_CHA_VERT_RING_AD_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. 
We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x4", @@ -5308,8 +6432,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Down and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "UNC_CHA_VERT_RING_AD_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x8", @@ -5317,8 +6443,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Up and Even", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "UNC_CHA_VERT_RING_AD_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x1", @@ -5326,8 +6454,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Up and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "UNC_CHA_VERT_RING_AD_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x2", @@ -5335,8 +6465,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Down and Even", + "Counter": "0,1,2,3", "EventCode": "0xA8", "EventName": "UNC_CHA_VERT_RING_AK_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x4", @@ -5344,8 +6476,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Down and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA8", "EventName": "UNC_CHA_VERT_RING_AK_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x8", @@ -5353,8 +6487,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Up and Even", + "Counter": "0,1,2,3", "EventCode": "0xA8", "EventName": "UNC_CHA_VERT_RING_AK_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x1", @@ -5362,8 +6498,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Up and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA8", "EventName": "UNC_CHA_VERT_RING_AK_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. 
This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x2", @@ -5371,8 +6509,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Down and Even", + "Counter": "0,1,2,3", "EventCode": "0xAA", "EventName": "UNC_CHA_VERT_RING_BL_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x4", @@ -5380,8 +6520,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Down and Odd", + "Counter": "0,1,2,3", "EventCode": "0xAA", "EventName": "UNC_CHA_VERT_RING_BL_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x8", @@ -5389,8 +6531,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Up and Even", + "Counter": "0,1,2,3", "EventCode": "0xAA", "EventName": "UNC_CHA_VERT_RING_BL_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x1", @@ -5398,8 +6542,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Up and Odd", + "Counter": "0,1,2,3", "EventCode": "0xAA", "EventName": "UNC_CHA_VERT_RING_BL_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x2", @@ -5407,8 +6553,10 @@ }, { "BriefDescription": "Vertical IV Ring in Use; Down", + "Counter": "0,1,2,3", "EventCode": "0xAC", "EventName": "UNC_CHA_VERT_RING_IV_IN_USE.DN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l IV ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. There is only 1 IV ring. Therefore,= if one wants to monitor the Even ring, they should select both UP_EVEN and= DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_O= DD.", "UMask": "0x4", @@ -5416,8 +6564,10 @@ }, { "BriefDescription": "Vertical IV Ring in Use; Up", + "Counter": "0,1,2,3", "EventCode": "0xAC", "EventName": "UNC_CHA_VERT_RING_IV_IN_USE.UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l IV ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. There is only 1 IV ring. Therefore,= if one wants to monitor the Even ring, they should select both UP_EVEN and= DN_EVEN. 
To monitor the Odd ring, they should select both UP_ODD and DN_O= DD.", "UMask": "0x1", @@ -5425,8 +6575,10 @@ }, { "BriefDescription": "WbPushMtoI; Pushed to LLC", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_CHA_WB_PUSH_MTOI.LLC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when the CHA was = received WbPushMtoI; Counts the number of times when the CHA was able to pu= sh WbPushMToI to LLC", "UMask": "0x1", @@ -5434,8 +6586,10 @@ }, { "BriefDescription": "WbPushMtoI; Pushed to Memory", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_CHA_WB_PUSH_MTOI.MEM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when the CHA was = received WbPushMtoI; Counts the number of times when the CHA was unable to = push WbPushMToI to LLC (hence pushed it to MEM)", "UMask": "0x2", @@ -5443,8 +6597,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty; EDC0_SMI2", + "Counter": "0,1,2,3", "EventCode": "0x5A", "EventName": "UNC_CHA_WRITE_NO_CREDITS.EDC0_SMI2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending WRITEs from the CHA into the iMC. In order = to send WRITEs into the memory controller, the HA must first acquire a cred= it for the iMC's BL Ingress queue.; Filter for memory controller 2 only.", "UMask": "0x4", @@ -5452,8 +6608,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty; EDC1_SMI3", + "Counter": "0,1,2,3", "EventCode": "0x5A", "EventName": "UNC_CHA_WRITE_NO_CREDITS.EDC1_SMI3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending WRITEs from the CHA into the iMC. In order = to send WRITEs into the memory controller, the HA must first acquire a cred= it for the iMC's BL Ingress queue.; Filter for memory controller 3 only.", "UMask": "0x8", @@ -5461,8 +6619,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty; EDC2_SMI4", + "Counter": "0,1,2,3", "EventCode": "0x5A", "EventName": "UNC_CHA_WRITE_NO_CREDITS.EDC2_SMI4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending WRITEs from the CHA into the iMC. In order = to send WRITEs into the memory controller, the HA must first acquire a cred= it for the iMC's BL Ingress queue.; Filter for memory controller 4 only.", "UMask": "0x10", @@ -5470,8 +6630,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty; EDC3_SMI5", + "Counter": "0,1,2,3", "EventCode": "0x5A", "EventName": "UNC_CHA_WRITE_NO_CREDITS.EDC3_SMI5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending WRITEs from the CHA into the iMC. In order = to send WRITEs into the memory controller, the HA must first acquire a cred= it for the iMC's BL Ingress queue.; Filter for memory controller 5 only.", "UMask": "0x20", @@ -5479,8 +6641,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty; MC0_SMI0", + "Counter": "0,1,2,3", "EventCode": "0x5A", "EventName": "UNC_CHA_WRITE_NO_CREDITS.MC0_SMI0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending WRITEs from the CHA into the iMC. 
In order = to send WRITEs into the memory controller, the HA must first acquire a cred= it for the iMC's BL Ingress queue.; Filter for memory controller 0 only.", "UMask": "0x1", @@ -5488,8 +6652,10 @@ }, { "BriefDescription": "CHA iMC CHNx WRITE Credits Empty; MC1_SMI1", + "Counter": "0,1,2,3", "EventCode": "0x5A", "EventName": "UNC_CHA_WRITE_NO_CREDITS.MC1_SMI1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times when there are no= credits available for sending WRITEs from the CHA into the iMC. In order = to send WRITEs into the memory controller, the HA must first acquire a cred= it for the iMC's BL Ingress queue.; Filter for memory controller 1 only.", "UMask": "0x2", @@ -5497,8 +6663,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Any RspIFwdFE", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.ANY_RSPI_FWDFE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Any Request - Response I to Fwd F/E", "UMask": "0xe4", @@ -5506,8 +6674,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.ANY_RSPI_FWDM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Any Request - Response I to Fwd M", "UMask": "0xf0", @@ -5515,8 +6685,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Any RspSFwdFE", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.ANY_RSPS_FWDFE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Any Request - Response S to Fwd F/E", "UMask": "0xe2", @@ -5524,8 +6696,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Any RspSFwdM", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.ANY_RSPS_FWDM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. 
And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Any Request - Response S to Fwd M", "UMask": "0xe8", @@ -5533,8 +6707,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Any RspHitFSE", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.ANY_RSP_HITFSE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Any Request - Response any to Hit F/S/E= ", "UMask": "0xe1", @@ -5542,8 +6718,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Core RspIFwdFE", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.CORE_RSPI_FWDFE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Core Request - Response I to Fwd F/E", "UMask": "0x44", @@ -5551,8 +6729,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Core RspIFwdM", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.CORE_RSPI_FWDM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Core Request - Response I to Fwd M", "UMask": "0x50", @@ -5560,8 +6740,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Core RspSFwdFE", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.CORE_RSPS_FWDFE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Core Request - Response S to Fwd F/E", "UMask": "0x42", @@ -5569,8 +6751,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Core RspSFwdM", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.CORE_RSPS_FWDM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. 
Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Core Request - Response S to Fwd M", "UMask": "0x48", @@ -5578,8 +6762,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Core RspHitFSE", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.CORE_RSP_HITFSE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Core Request - Response any to Hit F/S/= E", "UMask": "0x41", @@ -5587,8 +6773,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Evict RspIFwdFE", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.EVICT_RSPI_FWDFE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Eviction Request - Response I to Fwd F/= E", "UMask": "0x84", @@ -5596,8 +6784,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Evict RspIFwdM", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.EVICT_RSPI_FWDM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Eviction Request - Response I to Fwd M"= , "UMask": "0x90", @@ -5605,8 +6795,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Evict RspSFwdFE", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.EVICT_RSPS_FWDFE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. 
And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Eviction Request - Response S to Fwd F/= E", "UMask": "0x82", @@ -5614,8 +6806,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Evict RspSFwdM", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.EVICT_RSPS_FWDM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Eviction Request - Response S to Fwd M"= , "UMask": "0x88", @@ -5623,8 +6817,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; Evict RspHitFSE", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.EVICT_RSP_HITFSE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; Eviction Request - Response any to Hit = F/S/E", "UMask": "0x81", @@ -5632,8 +6828,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; External RspIFwdF= E", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.EXT_RSPI_FWDFE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; External Request - Response I to Fwd F/= E", "UMask": "0x24", @@ -5641,8 +6839,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; External RspIFwdM= ", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.EXT_RSPI_FWDM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. 
And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; External Request - Response I to Fwd M"= , "UMask": "0x30", @@ -5650,8 +6850,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; External RspSFwdF= E", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.EXT_RSPS_FWDFE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; External Request - Response S to Fwd F/= E", "UMask": "0x22", @@ -5659,8 +6861,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; External RspSFwdM= ", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.EXT_RSPS_FWDM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; External Request - Response S to Fwd M"= , "UMask": "0x28", @@ -5668,8 +6872,10 @@ }, { "BriefDescription": "Core Cross Snoop Responses; External RspHitFS= E", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_CHA_XSNP_RESP.EXT_RSP_HITFSE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of core cross snoops. Cor= es are snooped if the transaction looks up the cache and determines that it= is necessary based on the operation type. This event can be filtered based= on who triggered the initial snoop(s): from Evictions, Core or External = (i.e. from a remote node) Requests. And the event can be filtered based on= the responses: RspX_Fwd/HitY where Y is the state prior to the snoop resp= onse and X is the state following.; External Request - Response any to Hit = F/S/E", "UMask": "0x21", @@ -5677,6 +6883,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CLOCKTICKS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventName": "UNC_C_CLOCKTICKS", "PerPkg": "1", @@ -5684,6 +6891,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_FAST_ASSERTED.HORZ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA5", "EventName": "UNC_C_FAST_ASSERTED", @@ -5693,15 +6901,18 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_LOOKUP.ANY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x34", "EventName": "UNC_C_LLC_LOOKUP.ANY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_LOOKUP.DATA_READ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x34", "EventName": "UNC_C_LLC_LOOKUP.DATA_READ", @@ -5711,24 +6922,29 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_LLC_LOOKUP.LOCAL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x34", "EventName": "UNC_C_LLC_LOOKUP.LOCAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x31", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_LOOKUP.REMOTE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x34", "EventName": "UNC_C_LLC_LOOKUP.REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x91", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_LOOKUP.REMOTE_SNOOP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x34", "EventName": "UNC_C_LLC_LOOKUP.REMOTE_SNOOP", @@ -5738,15 +6954,18 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_LOOKUP.WRITE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x34", "EventName": "UNC_C_LLC_LOOKUP.WRITE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_VICTIMS.TOTAL_E", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_C_LLC_VICTIMS.E_STATE", @@ -5756,6 +6975,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_VICTIMS.TOTAL_F", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_C_LLC_VICTIMS.F_STATE", @@ -5765,15 +6985,18 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_VICTIMS.LOCAL_ALL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_C_LLC_VICTIMS.LOCAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2f", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_VICTIMS.TOTAL_M", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_C_LLC_VICTIMS.M_STATE", @@ -5783,15 +7006,18 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_VICTIMS.REMOTE_ALL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_C_LLC_VICTIMS.REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_LLC_VICTIMS.TOTAL_S", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x37", "EventName": "UNC_C_LLC_VICTIMS.S_STATE", @@ -5801,59 +7027,72 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_SRC_THRTL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA4", "EventName": "UNC_C_RING_SRC_THRTL", + "Experimental": "1", "PerPkg": "1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_INSERTS.EVICT", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.EVICT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_INSERTS.HIT", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TOR_INSERTS.IPQ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.IPQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.IPQ_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x18", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.IPQ_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x28", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_INSERTS.IA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.IRQ", @@ -5863,6 +7102,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_INSERTS.IA_HIT", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.IRQ_HIT", @@ -5872,6 +7112,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_INSERTS.IA_MISS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.IRQ_MISS", @@ -5881,51 +7122,62 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.LOC_ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x37", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_INSERTS.IA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.LOC_IA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x31", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_INSERTS.IO", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.LOC_IO", + "Experimental": "1", "PerPkg": "1", "UMask": "0x34", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_INSERTS.MISS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_INSERTS.PRQ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.PRQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_INSERTS.IO_HIT", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.PRQ_HIT", @@ -5935,6 +7187,7 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TOR_INSERTS.IO_MISS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.PRQ_MISS", @@ -5944,6 +7197,7 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.REM_ALL", @@ -5953,87 +7207,106 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.RRQ_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x50", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.RRQ_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x60", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.WBQ_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x90", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.WBQ_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa0", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.EVICT", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.EVICT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.HIT", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.IPQ", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.IPQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.IPQ_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x18", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.IPQ_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x28", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.IA", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.IRQ", @@ -6043,6 +7316,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.IA_HIT", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.IRQ_HIT", @@ -6052,6 +7326,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.IA_MISS", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.IRQ_MISS", @@ -6061,608 +7336,743 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.LOC_ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x37", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TOR_OCCUPANCY.IA", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.LOC_IA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x31", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.IO", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.LOC_IO", + "Experimental": "1", "PerPkg": "1", "UMask": "0x34", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.MISS", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.PRQ", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.PRQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.IO_HIT", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.PRQ_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TOR_OCCUPANCY.IO_MISS", + "Counter": "0", "Deprecated": "1", "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.PRQ_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x24", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x80", "EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x80", "EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x80", "EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x80", "EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x80", "EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x80", "EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x82", "EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x82", "EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x82", "EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x82", "EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x82", "EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x82", "EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x88", "EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x88", "EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x88", "EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x88", "EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x88", "EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x88", "EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8A", "EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8A", "EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8A", "EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8A", "EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8A", "EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8A", "EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x86", "EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x86", "EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x86", "EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x86", "EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x86", "EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x86", "EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8E", "EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8E", "EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8E", "EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8E", "EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8E", "EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8E", "EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8C", "EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8C", "EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8C", "EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8C", "EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8C", "EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x8C", "EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_BYPASS_CHA_IMC.INTERMEDIATE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x57", "EventName": "UNC_H_BYPASS_CHA_IMC.INTERMEDIATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_BYPASS_CHA_IMC.NOT_TAKEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x57", "EventName": "UNC_H_BYPASS_CHA_IMC.NOT_TAKEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_BYPASS_CHA_IMC.TAKEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x57", "EventName": "UNC_H_BYPASS_CHA_IMC.TAKEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CMS_CLOCKTICKS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_H_CLOCK", + "Experimental": "1", "PerPkg": "1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_PMA.C1_STATE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x17", "EventName": "UNC_H_CORE_PMA.C1_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_PMA.C1_TRANSITION", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x17", "EventName": "UNC_H_CORE_PMA.C1_TRANSITION", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_CORE_PMA.C6_STATE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x17", "EventName": "UNC_H_CORE_PMA.C6_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_PMA.C6_TRANSITION", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x17", "EventName": "UNC_H_CORE_PMA.C6_TRANSITION", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_PMA.GV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x17", "EventName": "UNC_H_CORE_PMA.GV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.ANY_GTONE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.ANY_GTONE", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.ANY_ONE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.ANY_ONE", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.ANY_REMOTE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.ANY_REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.CORE_GTONE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.CORE_GTONE", @@ -6672,24 +8082,29 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.CORE_ONE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.CORE_ONE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x41", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.CORE_REMOTE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.CORE_REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x44", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.EVICT_GTONE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.EVICT_GTONE", @@ -6699,59 +8114,72 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.EVICT_ONE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.EVICT_ONE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x81", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.EVICT_REMOTE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.EVICT_REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x84", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.EXT_GTONE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.EXT_GTONE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x22", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_CORE_SNP.EXT_ONE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.EXT_ONE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x21", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_CORE_SNP.EXT_REMOTE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x33", "EventName": "UNC_H_CORE_SNP.EXT_REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x24", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_COUNTER0_OCCUPANCY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x1F", "EventName": "UNC_H_COUNTER0_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_DIR_LOOKUP.NO_SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x53", "EventName": "UNC_H_DIR_LOOKUP.NO_SNP", @@ -6761,6 +8189,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_DIR_LOOKUP.SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x53", "EventName": "UNC_H_DIR_LOOKUP.SNP", @@ -6770,6 +8199,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_DIR_UPDATE.HA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x54", "EventName": "UNC_H_DIR_UPDATE.HA", @@ -6779,6 +8209,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_DIR_UPDATE.TOR", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x54", "EventName": "UNC_H_DIR_UPDATE.TOR", @@ -6788,24 +8219,29 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_DN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAE", "EventName": "UNC_H_EGRESS_ORDERING.IV_SNOOPGO_DN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_UP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAE", "EventName": "UNC_H_EGRESS_ORDERING.IV_SNOOPGO_UP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_HIT.EX_RDS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5F", "EventName": "UNC_H_HITME_HIT.EX_RDS", @@ -6815,411 +8251,502 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_HIT.SHARED_OWNREQ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5F", "EventName": "UNC_H_HITME_HIT.SHARED_OWNREQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_HIT.WBMTOE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5F", "EventName": "UNC_H_HITME_HIT.WBMTOE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_HIT.WBMTOI_OR_S", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5F", "EventName": "UNC_H_HITME_HIT.WBMTOI_OR_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_HITME_LOOKUP.READ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5E", "EventName": "UNC_H_HITME_LOOKUP.READ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_LOOKUP.WRITE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5E", "EventName": "UNC_H_HITME_LOOKUP.WRITE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_MISS.NOTSHARED_RDINVOWN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x60", "EventName": "UNC_H_HITME_MISS.NOTSHARED_RDINVOWN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_MISS.READ_OR_INV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x60", "EventName": "UNC_H_HITME_MISS.READ_OR_INV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_MISS.SHARED_RDINVOWN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x60", "EventName": "UNC_H_HITME_MISS.SHARED_RDINVOWN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_UPDATE.DEALLOCATE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x61", "EventName": "UNC_H_HITME_UPDATE.DEALLOCATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_UPDATE.DEALLOCATE_RSPFWDI_LOC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x61", "EventName": "UNC_H_HITME_UPDATE.DEALLOCATE_RSPFWDI_LOC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_UPDATE.RDINVOWN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x61", "EventName": "UNC_H_HITME_UPDATE.RDINVOWN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_UPDATE.RSPFWDI_REM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x61", "EventName": "UNC_H_HITME_UPDATE.RSPFWDI_REM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HITME_UPDATE.SHARED", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x61", "EventName": "UNC_H_HITME_UPDATE.SHARED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA7", "EventName": "UNC_H_HORZ_RING_AD_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA7", "EventName": "UNC_H_HORZ_RING_AD_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA7", "EventName": "UNC_H_HORZ_RING_AD_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA7", "EventName": "UNC_H_HORZ_RING_AD_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA9", "EventName": "UNC_H_HORZ_RING_AK_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA9", "EventName": "UNC_H_HORZ_RING_AK_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA9", "EventName": "UNC_H_HORZ_RING_AK_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA9", "EventName": "UNC_H_HORZ_RING_AK_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAB", "EventName": "UNC_H_HORZ_RING_BL_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAB", "EventName": "UNC_H_HORZ_RING_BL_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAB", "EventName": "UNC_H_HORZ_RING_BL_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAB", "EventName": "UNC_H_HORZ_RING_BL_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_IV_IN_USE.LEFT", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAD", "EventName": "UNC_H_HORZ_RING_IV_IN_USE.LEFT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_HORZ_RING_IV_IN_USE.RIGHT", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAD", "EventName": "UNC_H_HORZ_RING_IV_IN_USE.RIGHT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_IMC_READS_COUNT.NORMAL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x59", "EventName": "UNC_H_IMC_READS_COUNT.NORMAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IMC_READS_COUNT.PRIORITY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x59", "EventName": "UNC_H_IMC_READS_COUNT.PRIORITY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IMC_WRITES_COUNT.FULL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5B", "EventName": "UNC_H_IMC_WRITES_COUNT.FULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IMC_WRITES_COUNT.FULL_MIG", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5B", "EventName": "UNC_H_IMC_WRITES_COUNT.FULL_MIG", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IMC_WRITES_COUNT.FULL_PRIORITY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5B", "EventName": "UNC_H_IMC_WRITES_COUNT.FULL_PRIORITY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IMC_WRITES_COUNT.PARTIAL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5B", "EventName": "UNC_H_IMC_WRITES_COUNT.PARTIAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IMC_WRITES_COUNT.PARTIAL_MIG", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5B", "EventName": "UNC_H_IMC_WRITES_COUNT.PARTIAL_MIG", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IMC_WRITES_COUNT.PARTIAL_PRIORITY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5B", "EventName": "UNC_H_IMC_WRITES_COUNT.PARTIAL_PRIORITY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IODC_ALLOC.INVITOM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x62", "EventName": "UNC_H_IODC_ALLOC.INVITOM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IODC_ALLOC.IODCFULL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x62", "EventName": "UNC_H_IODC_ALLOC.IODCFULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IODC_ALLOC.OSBGATED", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x62", "EventName": "UNC_H_IODC_ALLOC.OSBGATED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IODC_DEALLOC.ALL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x63", "EventName": "UNC_H_IODC_DEALLOC.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_IODC_DEALLOC.SNPOUT", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x63", "EventName": "UNC_H_IODC_DEALLOC.SNPOUT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IODC_DEALLOC.WBMTOE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x63", "EventName": "UNC_H_IODC_DEALLOC.WBMTOE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IODC_DEALLOC.WBMTOI", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x63", "EventName": "UNC_H_IODC_DEALLOC.WBMTOI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_IODC_DEALLOC.WBPUSHMTOI", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x63", "EventName": "UNC_H_IODC_DEALLOC.WBPUSHMTOI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_MISC.CV0_PREF_MISS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x39", "EventName": "UNC_H_MISC.CV0_PREF_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_MISC.CV0_PREF_VIC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x39", "EventName": "UNC_H_MISC.CV0_PREF_VIC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_MISC.RFO_HIT_S", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x39", "EventName": "UNC_H_MISC.RFO_HIT_S", @@ -7229,86 +8756,105 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_MISC.RSPI_WAS_FSE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x39", "EventName": "UNC_H_MISC.RSPI_WAS_FSE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_MISC.WC_ALIASING", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x39", "EventName": "UNC_H_MISC.WC_ALIASING", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_OSB", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x55", "EventName": "UNC_H_OSB", + "Experimental": "1", "PerPkg": "1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_READ_NO_CREDITS.EDC0_SMI2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x58", "EventName": "UNC_H_READ_NO_CREDITS.EDC0_SMI2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_READ_NO_CREDITS.EDC1_SMI3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x58", "EventName": "UNC_H_READ_NO_CREDITS.EDC1_SMI3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_READ_NO_CREDITS.EDC2_SMI4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x58", "EventName": "UNC_H_READ_NO_CREDITS.EDC2_SMI4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_READ_NO_CREDITS.EDC3_SMI5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x58", "EventName": "UNC_H_READ_NO_CREDITS.EDC3_SMI5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_READ_NO_CREDITS.MC0_SMI0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x58", "EventName": "UNC_H_READ_NO_CREDITS.MC0_SMI0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_READ_NO_CREDITS.MC1_SMI1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x58", "EventName": "UNC_H_READ_NO_CREDITS.MC1_SMI1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_REQUESTS.INVITOE_LOCAL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x50", "EventName": "UNC_H_REQUESTS.INVITOE_LOCAL", @@ -7318,6 +8864,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_REQUESTS.INVITOE_REMOTE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x50", "EventName": "UNC_H_REQUESTS.INVITOE_REMOTE", @@ -7327,6 +8874,7 @@ }, { "BriefDescription": "read requests from home agent", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x50", "EventName": "UNC_H_REQUESTS.READS", @@ -7336,6 +8884,7 @@ }, { "BriefDescription": "read requests from local home agent", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x50", "EventName": "UNC_H_REQUESTS.READS_LOCAL", @@ -7345,15 +8894,18 @@ }, { "BriefDescription": "read requests from remote home agent", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x50", "EventName": "UNC_H_REQUESTS.READS_REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "write requests from home agent", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x50", "EventName": "UNC_H_REQUESTS.WRITES", @@ -7363,6 +8915,7 @@ }, { "BriefDescription": "write requests from local home agent", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x50", "EventName": "UNC_H_REQUESTS.WRITES_LOCAL", @@ -7372,177 +8925,216 @@ }, { "BriefDescription": "write requests from remote home agent", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x50", "EventName": "UNC_H_REQUESTS.WRITES_REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_BOUNCES_HORZ.AD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA1", "EventName": "UNC_H_RING_BOUNCES_HORZ.AD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_BOUNCES_HORZ.AK", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA1", "EventName": "UNC_H_RING_BOUNCES_HORZ.AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_BOUNCES_HORZ.BL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA1", "EventName": "UNC_H_RING_BOUNCES_HORZ.BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RING_BOUNCES_HORZ.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA1", "EventName": "UNC_H_RING_BOUNCES_HORZ.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_BOUNCES_VERT.AD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA0", "EventName": "UNC_H_RING_BOUNCES_VERT.AD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_BOUNCES_VERT.AK", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA0", "EventName": "UNC_H_RING_BOUNCES_VERT.AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_BOUNCES_VERT.BL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA0", "EventName": "UNC_H_RING_BOUNCES_VERT.BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_BOUNCES_VERT.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA0", "EventName": "UNC_H_RING_BOUNCES_VERT.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_SINK_STARVED_HORZ.AD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA3", "EventName": "UNC_H_RING_SINK_STARVED_HORZ.AD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_SINK_STARVED_HORZ.AK", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA3", "EventName": "UNC_H_RING_SINK_STARVED_HORZ.AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_SINK_STARVED_HORZ.AK_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA3", "EventName": "UNC_H_RING_SINK_STARVED_HORZ.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_SINK_STARVED_HORZ.BL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA3", "EventName": "UNC_H_RING_SINK_STARVED_HORZ.BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_SINK_STARVED_HORZ.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA3", "EventName": "UNC_H_RING_SINK_STARVED_HORZ.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_SINK_STARVED_VERT.AD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA2", "EventName": "UNC_H_RING_SINK_STARVED_VERT.AD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_SINK_STARVED_VERT.AK", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA2", "EventName": "UNC_H_RING_SINK_STARVED_VERT.AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RING_SINK_STARVED_VERT.BL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA2", "EventName": "UNC_H_RING_SINK_STARVED_VERT.BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RING_SINK_STARVED_VERT.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA2", "EventName": "UNC_H_RING_SINK_STARVED_VERT.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_INSERTS.IPQ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x13", "EventName": "UNC_H_RxC_INSERTS.IPQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_INSERTS.IRQ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x13", "EventName": "UNC_H_RxC_INSERTS.IRQ", @@ -7552,276 +9144,337 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_INSERTS.IRQ_REJ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x13", "EventName": "UNC_H_RxC_INSERTS.IRQ_REJ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_INSERTS.PRQ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x13", "EventName": "UNC_H_RxC_INSERTS.PRQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_INSERTS.PRQ_REJ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x13", "EventName": "UNC_H_RxC_INSERTS.PRQ_REJ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_INSERTS.RRQ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x13", "EventName": "UNC_H_RxC_INSERTS.RRQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_INSERTS.WBQ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x13", "EventName": "UNC_H_RxC_INSERTS.WBQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ0_REJECT.AD_REQ_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x22", "EventName": "UNC_H_RxC_IPQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ0_REJECT.AD_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x22", "EventName": "UNC_H_RxC_IPQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ0_REJECT.BL_NCB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x22", "EventName": "UNC_H_RxC_IPQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxC_IPQ0_REJECT.BL_NCS_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x22", "EventName": "UNC_H_RxC_IPQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ0_REJECT.BL_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x22", "EventName": "UNC_H_RxC_IPQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ0_REJECT.BL_WB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x22", "EventName": "UNC_H_RxC_IPQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ1_REJECT.ALLOW_SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x23", "EventName": "UNC_H_RxC_IPQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ1_REJECT.ANY0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x23", "EventName": "UNC_H_RxC_IPQ1_REJECT.ANY_IPQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ1_REJECT.HA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x23", "EventName": "UNC_H_RxC_IPQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ1_REJECT.LLC_OR_SF_WAY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x23", "EventName": "UNC_H_RxC_IPQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ1_REJECT.LLC_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x23", "EventName": "UNC_H_RxC_IPQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ1_REJECT.PA_MATCH", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x23", "EventName": "UNC_H_RxC_IPQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ1_REJECT.SF_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x23", "EventName": "UNC_H_RxC_IPQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IPQ1_REJECT.VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x23", "EventName": "UNC_H_RxC_IPQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ0_REJECT.AD_REQ_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x18", "EventName": "UNC_H_RxC_IRQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxC_IRQ0_REJECT.AD_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x18", "EventName": "UNC_H_RxC_IRQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ0_REJECT.BL_NCB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x18", "EventName": "UNC_H_RxC_IRQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ0_REJECT.BL_NCS_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x18", "EventName": "UNC_H_RxC_IRQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ0_REJECT.BL_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x18", "EventName": "UNC_H_RxC_IRQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ0_REJECT.BL_WB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x18", "EventName": "UNC_H_RxC_IRQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ1_REJECT.ALLOW_SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x19", "EventName": "UNC_H_RxC_IRQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ1_REJECT.ANY0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x19", "EventName": "UNC_H_RxC_IRQ1_REJECT.ANY_REJECT_IRQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ1_REJECT.HA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x19", "EventName": "UNC_H_RxC_IRQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ1_REJECT.LLC_OR_SF_WAY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x19", "EventName": "UNC_H_RxC_IRQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ1_REJECT.LLC_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x19", "EventName": "UNC_H_RxC_IRQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x19", "EventName": "UNC_H_RxC_IRQ1_REJECT.PA_MATCH", @@ -7831,177 +9484,216 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_IRQ1_REJECT.SF_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x19", "EventName": "UNC_H_RxC_IRQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxC_IRQ1_REJECT.VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x19", "EventName": "UNC_H_RxC_IRQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_REJECT.AD_REQ_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x24", "EventName": "UNC_H_RxC_ISMQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_REJECT.AD_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x24", "EventName": "UNC_H_RxC_ISMQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_REJECT.BL_NCB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x24", "EventName": "UNC_H_RxC_ISMQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_REJECT.BL_NCS_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x24", "EventName": "UNC_H_RxC_ISMQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_REJECT.BL_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x24", "EventName": "UNC_H_RxC_ISMQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_REJECT.BL_WB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x24", "EventName": "UNC_H_RxC_ISMQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_RETRY.AD_REQ_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2C", "EventName": "UNC_H_RxC_ISMQ0_RETRY.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_RETRY.AD_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2C", "EventName": "UNC_H_RxC_ISMQ0_RETRY.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_RETRY.BL_NCB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2C", "EventName": "UNC_H_RxC_ISMQ0_RETRY.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_RETRY.BL_NCS_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2C", "EventName": "UNC_H_RxC_ISMQ0_RETRY.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ0_RETRY.BL_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2C", "EventName": "UNC_H_RxC_ISMQ0_RETRY.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxC_ISMQ0_RETRY.BL_WB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2C", "EventName": "UNC_H_RxC_ISMQ0_RETRY.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ1_REJECT.ANY0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x25", "EventName": "UNC_H_RxC_ISMQ1_REJECT.ANY_ISMQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ1_REJECT.HA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x25", "EventName": "UNC_H_RxC_ISMQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ1_RETRY.ANY0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2D", "EventName": "UNC_H_RxC_ISMQ1_RETRY.ANY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_ISMQ1_RETRY.HA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2D", "EventName": "UNC_H_RxC_ISMQ1_RETRY.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OCCUPANCY.IPQ", + "Counter": "0", "Deprecated": "1", "EventCode": "0x11", "EventName": "UNC_H_RxC_OCCUPANCY.IPQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OCCUPANCY.IRQ", + "Counter": "0", "Deprecated": "1", "EventCode": "0x11", "EventName": "UNC_H_RxC_OCCUPANCY.IRQ", @@ -8011,1005 +9703,1228 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OCCUPANCY.RRQ", + "Counter": "0", "Deprecated": "1", "EventCode": "0x11", "EventName": "UNC_H_RxC_OCCUPANCY.RRQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OCCUPANCY.WBQ", + "Counter": "0", "Deprecated": "1", "EventCode": "0x11", "EventName": "UNC_H_RxC_OCCUPANCY.WBQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER0_RETRY.AD_REQ_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2E", "EventName": "UNC_H_RxC_OTHER0_RETRY.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER0_RETRY.AD_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2E", "EventName": "UNC_H_RxC_OTHER0_RETRY.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER0_RETRY.BL_NCB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2E", "EventName": "UNC_H_RxC_OTHER0_RETRY.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxC_OTHER0_RETRY.BL_NCS_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2E", "EventName": "UNC_H_RxC_OTHER0_RETRY.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER0_RETRY.BL_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2E", "EventName": "UNC_H_RxC_OTHER0_RETRY.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER0_RETRY.BL_WB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2E", "EventName": "UNC_H_RxC_OTHER0_RETRY.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER1_RETRY.ALLOW_SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2F", "EventName": "UNC_H_RxC_OTHER1_RETRY.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER1_RETRY.ANY0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2F", "EventName": "UNC_H_RxC_OTHER1_RETRY.ANY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER1_RETRY.HA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2F", "EventName": "UNC_H_RxC_OTHER1_RETRY.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER1_RETRY.LLC_OR_SF_WAY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2F", "EventName": "UNC_H_RxC_OTHER1_RETRY.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER1_RETRY.LLC_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2F", "EventName": "UNC_H_RxC_OTHER1_RETRY.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER1_RETRY.PA_MATCH", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2F", "EventName": "UNC_H_RxC_OTHER1_RETRY.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER1_RETRY.SF_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2F", "EventName": "UNC_H_RxC_OTHER1_RETRY.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_OTHER1_RETRY.VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2F", "EventName": "UNC_H_RxC_OTHER1_RETRY.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ0_REJECT.AD_REQ_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x20", "EventName": "UNC_H_RxC_PRQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxC_PRQ0_REJECT.AD_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x20", "EventName": "UNC_H_RxC_PRQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ0_REJECT.BL_NCB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x20", "EventName": "UNC_H_RxC_PRQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ0_REJECT.BL_NCS_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x20", "EventName": "UNC_H_RxC_PRQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ0_REJECT.BL_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x20", "EventName": "UNC_H_RxC_PRQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ0_REJECT.BL_WB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x20", "EventName": "UNC_H_RxC_PRQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ1_REJECT.ALLOW_SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x21", "EventName": "UNC_H_RxC_PRQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ1_REJECT.ANY0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x21", "EventName": "UNC_H_RxC_PRQ1_REJECT.ANY_PRQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ1_REJECT.HA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x21", "EventName": "UNC_H_RxC_PRQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ1_REJECT.LLC_OR_SF_WAY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x21", "EventName": "UNC_H_RxC_PRQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ1_REJECT.LLC_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x21", "EventName": "UNC_H_RxC_PRQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ1_REJECT.PA_MATCH", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x21", "EventName": "UNC_H_RxC_PRQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_PRQ1_REJECT.SF_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x21", "EventName": "UNC_H_RxC_PRQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxC_PRQ1_REJECT.VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x21", "EventName": "UNC_H_RxC_PRQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q0_RETRY.AD_REQ_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2A", "EventName": "UNC_H_RxC_REQ_Q0_RETRY.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q0_RETRY.AD_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2A", "EventName": "UNC_H_RxC_REQ_Q0_RETRY.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2A", "EventName": "UNC_H_RxC_REQ_Q0_RETRY.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCS_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2A", "EventName": "UNC_H_RxC_REQ_Q0_RETRY.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q0_RETRY.BL_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2A", "EventName": "UNC_H_RxC_REQ_Q0_RETRY.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q0_RETRY.BL_WB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2A", "EventName": "UNC_H_RxC_REQ_Q0_RETRY.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q1_RETRY.ALLOW_SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2B", "EventName": "UNC_H_RxC_REQ_Q1_RETRY.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q1_RETRY.ANY0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2B", "EventName": "UNC_H_RxC_REQ_Q1_RETRY.ANY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q1_RETRY.HA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2B", "EventName": "UNC_H_RxC_REQ_Q1_RETRY.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q1_RETRY.LLC_OR_SF_WAY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2B", "EventName": "UNC_H_RxC_REQ_Q1_RETRY.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q1_RETRY.LLC_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2B", "EventName": "UNC_H_RxC_REQ_Q1_RETRY.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxC_REQ_Q1_RETRY.PA_MATCH", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2B", "EventName": "UNC_H_RxC_REQ_Q1_RETRY.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q1_RETRY.SF_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2B", "EventName": "UNC_H_RxC_REQ_Q1_RETRY.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_REQ_Q1_RETRY.VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2B", "EventName": "UNC_H_RxC_REQ_Q1_RETRY.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ0_REJECT.AD_REQ_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x26", "EventName": "UNC_H_RxC_RRQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ0_REJECT.AD_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x26", "EventName": "UNC_H_RxC_RRQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ0_REJECT.BL_NCB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x26", "EventName": "UNC_H_RxC_RRQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ0_REJECT.BL_NCS_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x26", "EventName": "UNC_H_RxC_RRQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ0_REJECT.BL_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x26", "EventName": "UNC_H_RxC_RRQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ0_REJECT.BL_WB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x26", "EventName": "UNC_H_RxC_RRQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ1_REJECT.ALLOW_SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x27", "EventName": "UNC_H_RxC_RRQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ1_REJECT.ANY0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x27", "EventName": "UNC_H_RxC_RRQ1_REJECT.ANY_RRQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ1_REJECT.HA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x27", "EventName": "UNC_H_RxC_RRQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxC_RRQ1_REJECT.LLC_OR_SF_WAY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x27", "EventName": "UNC_H_RxC_RRQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ1_REJECT.LLC_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x27", "EventName": "UNC_H_RxC_RRQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ1_REJECT.PA_MATCH", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x27", "EventName": "UNC_H_RxC_RRQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ1_REJECT.SF_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x27", "EventName": "UNC_H_RxC_RRQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_RRQ1_REJECT.VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x27", "EventName": "UNC_H_RxC_RRQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ0_REJECT.AD_REQ_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x28", "EventName": "UNC_H_RxC_WBQ0_REJECT.AD_REQ_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ0_REJECT.AD_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x28", "EventName": "UNC_H_RxC_WBQ0_REJECT.AD_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ0_REJECT.BL_NCB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x28", "EventName": "UNC_H_RxC_WBQ0_REJECT.BL_NCB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ0_REJECT.BL_NCS_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x28", "EventName": "UNC_H_RxC_WBQ0_REJECT.BL_NCS_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ0_REJECT.BL_RSP_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x28", "EventName": "UNC_H_RxC_WBQ0_REJECT.BL_RSP_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ0_REJECT.BL_WB_VN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x28", "EventName": "UNC_H_RxC_WBQ0_REJECT.BL_WB_VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ1_REJECT.ALLOW_SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x29", "EventName": "UNC_H_RxC_WBQ1_REJECT.ALLOW_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxC_WBQ1_REJECT.ANY0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x29", "EventName": "UNC_H_RxC_WBQ1_REJECT.ANY_WBQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ1_REJECT.HA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x29", "EventName": "UNC_H_RxC_WBQ1_REJECT.HA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ1_REJECT.LLC_OR_SF_WAY", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x29", "EventName": "UNC_H_RxC_WBQ1_REJECT.LLC_OR_SF_WAY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ1_REJECT.LLC_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x29", "EventName": "UNC_H_RxC_WBQ1_REJECT.LLC_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ1_REJECT.PA_MATCH", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x29", "EventName": "UNC_H_RxC_WBQ1_REJECT.PA_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ1_REJECT.SF_VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x29", "EventName": "UNC_H_RxC_WBQ1_REJECT.SF_VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxC_WBQ1_REJECT.VICTIM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x29", "EventName": "UNC_H_RxC_WBQ1_REJECT.VICTIM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_BUSY_STARVED.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB4", "EventName": "UNC_H_RxR_BUSY_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_BUSY_STARVED.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB4", "EventName": "UNC_H_RxR_BUSY_STARVED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_BUSY_STARVED.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB4", "EventName": "UNC_H_RxR_BUSY_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_BUSY_STARVED.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB4", "EventName": "UNC_H_RxR_BUSY_STARVED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_BYPASS.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB2", "EventName": "UNC_H_RxR_BYPASS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxR_BYPASS.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB2", "EventName": "UNC_H_RxR_BYPASS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_BYPASS.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB2", "EventName": "UNC_H_RxR_BYPASS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_BYPASS.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB2", "EventName": "UNC_H_RxR_BYPASS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_BYPASS.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB2", "EventName": "UNC_H_RxR_BYPASS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_BYPASS.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB2", "EventName": "UNC_H_RxR_BYPASS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_CRD_STARVED.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB3", "EventName": "UNC_H_RxR_CRD_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_CRD_STARVED.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB3", "EventName": "UNC_H_RxR_CRD_STARVED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_CRD_STARVED.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB3", "EventName": "UNC_H_RxR_CRD_STARVED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_CRD_STARVED.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB3", "EventName": "UNC_H_RxR_CRD_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_CRD_STARVED.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB3", "EventName": "UNC_H_RxR_CRD_STARVED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_CRD_STARVED.IFV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB3", "EventName": "UNC_H_RxR_CRD_STARVED.IFV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_CRD_STARVED.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB3", "EventName": "UNC_H_RxR_CRD_STARVED.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_RxR_INSERTS.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB1", "EventName": "UNC_H_RxR_INSERTS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_INSERTS.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB1", "EventName": "UNC_H_RxR_INSERTS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_INSERTS.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB1", "EventName": "UNC_H_RxR_INSERTS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_INSERTS.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB1", "EventName": "UNC_H_RxR_INSERTS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_INSERTS.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB1", "EventName": "UNC_H_RxR_INSERTS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_INSERTS.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB1", "EventName": "UNC_H_RxR_INSERTS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_OCCUPANCY.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB0", "EventName": "UNC_H_RxR_OCCUPANCY.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_OCCUPANCY.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB0", "EventName": "UNC_H_RxR_OCCUPANCY.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_OCCUPANCY.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB0", "EventName": "UNC_H_RxR_OCCUPANCY.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_OCCUPANCY.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB0", "EventName": "UNC_H_RxR_OCCUPANCY.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_OCCUPANCY.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB0", "EventName": "UNC_H_RxR_OCCUPANCY.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_RxR_OCCUPANCY.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xB0", "EventName": "UNC_H_RxR_OCCUPANCY.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_SF_EVICTION.E_STATE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x3D", "EventName": "UNC_H_SF_EVICTION.E_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SF_EVICTION.M_STATE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x3D", "EventName": "UNC_H_SF_EVICTION.M_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SF_EVICTION.S_STATE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x3D", "EventName": "UNC_H_SF_EVICTION.S_STATE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOPS_SENT.ALL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x51", "EventName": "UNC_H_SNOOPS_SENT.", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOPS_SENT.BCST_LOCAL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x51", "EventName": "UNC_H_SNOOPS_SENT.BCST_LOC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOPS_SENT.BCST_REMOTE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x51", "EventName": "UNC_H_SNOOPS_SENT.BCST_REM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOPS_SENT.DIRECT_LOCAL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x51", "EventName": "UNC_H_SNOOPS_SENT.DIRECT_LOC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOPS_SENT.DIRECT_REMOTE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x51", "EventName": "UNC_H_SNOOPS_SENT.DIRECT_REM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOPS_SENT.LOCAL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x51", "EventName": "UNC_H_SNOOPS_SENT.LOCAL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOPS_SENT.REMOTE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x51", "EventName": "UNC_H_SNOOPS_SENT.REMOTE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP.RSPCNFLCTS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5C", "EventName": "UNC_H_SNOOP_RESP.RSPCNFLCT", @@ -9019,24 +10934,29 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP.RSPFWD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5C", "EventName": "UNC_H_SNOOP_RESP.RSPFWD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_SNOOP_RESP.RSPI", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5C", "EventName": "UNC_H_SNOOP_RESP.RSPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP.RSPIFWD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5C", "EventName": "UNC_H_SNOOP_RESP.RSPIFWD", @@ -9046,15 +10966,18 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP.RSPS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5C", "EventName": "UNC_H_SNOOP_RESP.RSPS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP.RSPSFWD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5C", "EventName": "UNC_H_SNOOP_RESP.RSPSFWD", @@ -9064,6 +10987,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP.RSP_FWD_WB", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5C", "EventName": "UNC_H_SNOOP_RESP.RSP_FWD_WB", @@ -9073,1575 +10997,1925 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP.RSP_WBWB", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5C", "EventName": "UNC_H_SNOOP_RESP.RSP_WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP_LOCAL.RSPCNFLCT", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5D", "EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPCNFLCT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP_LOCAL.RSPFWD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5D", "EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPFWD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP_LOCAL.RSPI", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5D", "EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP_LOCAL.RSPIFWD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5D", "EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPIFWD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP_LOCAL.RSPS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5D", "EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP_LOCAL.RSPSFWD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5D", "EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPSFWD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_SNOOP_RESP_LOCAL.RSP_FWD_WB", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5D", "EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSP_FWD_WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_SNOOP_RESP_LOCAL.RSP_WB", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5D", "EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSP_WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD0", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD0", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD0", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD0", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD0", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD0", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD2", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD2", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD2", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD2", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD2", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD2", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD4", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD4", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD4", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD4", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD4", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD4", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD6", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD6", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD6", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD6", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD6", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xD6", "EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_ADS_USED.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9D", "EventName": "UNC_H_TxR_HORZ_ADS_USED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_ADS_USED.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9D", "EventName": "UNC_H_TxR_HORZ_ADS_USED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_ADS_USED.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9D", "EventName": "UNC_H_TxR_HORZ_ADS_USED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_ADS_USED.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9D", "EventName": "UNC_H_TxR_HORZ_ADS_USED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_ADS_USED.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9D", "EventName": "UNC_H_TxR_HORZ_ADS_USED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_BYPASS.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9F", "EventName": "UNC_H_TxR_HORZ_BYPASS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_BYPASS.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9F", "EventName": "UNC_H_TxR_HORZ_BYPASS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_BYPASS.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9F", "EventName": "UNC_H_TxR_HORZ_BYPASS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_BYPASS.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9F", "EventName": "UNC_H_TxR_HORZ_BYPASS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TxR_HORZ_BYPASS.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9F", "EventName": "UNC_H_TxR_HORZ_BYPASS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_BYPASS.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9F", "EventName": "UNC_H_TxR_HORZ_BYPASS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x96", "EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x96", "EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_FULL.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x96", "EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x96", "EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x96", "EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_FULL.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x96", "EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_NE.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x97", "EventName": "UNC_H_TxR_HORZ_CYCLES_NE.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_NE.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x97", "EventName": "UNC_H_TxR_HORZ_CYCLES_NE.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_NE.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x97", "EventName": "UNC_H_TxR_HORZ_CYCLES_NE.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_NE.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x97", "EventName": "UNC_H_TxR_HORZ_CYCLES_NE.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_NE.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x97", "EventName": "UNC_H_TxR_HORZ_CYCLES_NE.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_CYCLES_NE.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x97", "EventName": "UNC_H_TxR_HORZ_CYCLES_NE.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_INSERTS.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x95", "EventName": "UNC_H_TxR_HORZ_INSERTS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_INSERTS.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x95", "EventName": "UNC_H_TxR_HORZ_INSERTS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_INSERTS.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x95", "EventName": "UNC_H_TxR_HORZ_INSERTS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_INSERTS.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x95", "EventName": "UNC_H_TxR_HORZ_INSERTS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_INSERTS.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x95", "EventName": "UNC_H_TxR_HORZ_INSERTS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_INSERTS.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x95", "EventName": "UNC_H_TxR_HORZ_INSERTS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_NACK.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x99", "EventName": "UNC_H_TxR_HORZ_NACK.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_NACK.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x99", "EventName": "UNC_H_TxR_HORZ_NACK.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_NACK.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x99", "EventName": "UNC_H_TxR_HORZ_NACK.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_NACK.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x99", "EventName": "UNC_H_TxR_HORZ_NACK.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TxR_HORZ_NACK.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x99", "EventName": "UNC_H_TxR_HORZ_NACK.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_NACK.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x99", "EventName": "UNC_H_TxR_HORZ_NACK.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_OCCUPANCY.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x94", "EventName": "UNC_H_TxR_HORZ_OCCUPANCY.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_OCCUPANCY.AD_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x94", "EventName": "UNC_H_TxR_HORZ_OCCUPANCY.AD_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_OCCUPANCY.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x94", "EventName": "UNC_H_TxR_HORZ_OCCUPANCY.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_OCCUPANCY.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x94", "EventName": "UNC_H_TxR_HORZ_OCCUPANCY.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_OCCUPANCY.BL_CRD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x94", "EventName": "UNC_H_TxR_HORZ_OCCUPANCY.BL_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_OCCUPANCY.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x94", "EventName": "UNC_H_TxR_HORZ_OCCUPANCY.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_STARVED.AD_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9B", "EventName": "UNC_H_TxR_HORZ_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_STARVED.AK_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9B", "EventName": "UNC_H_TxR_HORZ_STARVED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_STARVED.BL_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9B", "EventName": "UNC_H_TxR_HORZ_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_HORZ_STARVED.IV_BNC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9B", "EventName": "UNC_H_TxR_HORZ_STARVED.IV_BNC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TxR_VERT_ADS_USED.AD_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9C", "EventName": "UNC_H_TxR_VERT_ADS_USED.AD_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_ADS_USED.AD_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9C", "EventName": "UNC_H_TxR_VERT_ADS_USED.AD_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_ADS_USED.AK_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9C", "EventName": "UNC_H_TxR_VERT_ADS_USED.AK_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_ADS_USED.AK_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9C", "EventName": "UNC_H_TxR_VERT_ADS_USED.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_ADS_USED.BL_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9C", "EventName": "UNC_H_TxR_VERT_ADS_USED.BL_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_ADS_USED.BL_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9C", "EventName": "UNC_H_TxR_VERT_ADS_USED.BL_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_BYPASS.AD_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9E", "EventName": "UNC_H_TxR_VERT_BYPASS.AD_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_BYPASS.AD_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9E", "EventName": "UNC_H_TxR_VERT_BYPASS.AD_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_BYPASS.AK_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9E", "EventName": "UNC_H_TxR_VERT_BYPASS.AK_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_BYPASS.AK_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9E", "EventName": "UNC_H_TxR_VERT_BYPASS.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_BYPASS.BL_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9E", "EventName": "UNC_H_TxR_VERT_BYPASS.BL_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_BYPASS.BL_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9E", "EventName": "UNC_H_TxR_VERT_BYPASS.BL_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TxR_VERT_BYPASS.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9E", "EventName": "UNC_H_TxR_VERT_BYPASS.IV_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_FULL.AD_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x92", "EventName": "UNC_H_TxR_VERT_CYCLES_FULL.AD_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_FULL.AD_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x92", "EventName": "UNC_H_TxR_VERT_CYCLES_FULL.AD_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_FULL.AK_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x92", "EventName": "UNC_H_TxR_VERT_CYCLES_FULL.AK_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_FULL.AK_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x92", "EventName": "UNC_H_TxR_VERT_CYCLES_FULL.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_FULL.BL_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x92", "EventName": "UNC_H_TxR_VERT_CYCLES_FULL.BL_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_FULL.BL_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x92", "EventName": "UNC_H_TxR_VERT_CYCLES_FULL.BL_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_FULL.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x92", "EventName": "UNC_H_TxR_VERT_CYCLES_FULL.IV_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_NE.AD_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x93", "EventName": "UNC_H_TxR_VERT_CYCLES_NE.AD_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_NE.AD_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x93", "EventName": "UNC_H_TxR_VERT_CYCLES_NE.AD_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_NE.AK_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x93", "EventName": "UNC_H_TxR_VERT_CYCLES_NE.AK_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_NE.AK_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x93", "EventName": "UNC_H_TxR_VERT_CYCLES_NE.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TxR_VERT_CYCLES_NE.BL_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x93", "EventName": "UNC_H_TxR_VERT_CYCLES_NE.BL_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_NE.BL_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x93", "EventName": "UNC_H_TxR_VERT_CYCLES_NE.BL_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_CYCLES_NE.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x93", "EventName": "UNC_H_TxR_VERT_CYCLES_NE.IV_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_INSERTS.AD_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x91", "EventName": "UNC_H_TxR_VERT_INSERTS.AD_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_INSERTS.AD_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x91", "EventName": "UNC_H_TxR_VERT_INSERTS.AD_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_INSERTS.AK_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x91", "EventName": "UNC_H_TxR_VERT_INSERTS.AK_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_INSERTS.AK_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x91", "EventName": "UNC_H_TxR_VERT_INSERTS.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_INSERTS.BL_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x91", "EventName": "UNC_H_TxR_VERT_INSERTS.BL_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_INSERTS.BL_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x91", "EventName": "UNC_H_TxR_VERT_INSERTS.BL_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_INSERTS.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x91", "EventName": "UNC_H_TxR_VERT_INSERTS.IV_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_NACK.AD_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x98", "EventName": "UNC_H_TxR_VERT_NACK.AD_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_NACK.AD_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x98", "EventName": "UNC_H_TxR_VERT_NACK.AD_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TxR_VERT_NACK.AK_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x98", "EventName": "UNC_H_TxR_VERT_NACK.AK_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_NACK.AK_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x98", "EventName": "UNC_H_TxR_VERT_NACK.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_NACK.BL_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x98", "EventName": "UNC_H_TxR_VERT_NACK.BL_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_NACK.BL_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x98", "EventName": "UNC_H_TxR_VERT_NACK.BL_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_NACK.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x98", "EventName": "UNC_H_TxR_VERT_NACK.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_OCCUPANCY.AD_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x90", "EventName": "UNC_H_TxR_VERT_OCCUPANCY.AD_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_OCCUPANCY.AD_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x90", "EventName": "UNC_H_TxR_VERT_OCCUPANCY.AD_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_OCCUPANCY.AK_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x90", "EventName": "UNC_H_TxR_VERT_OCCUPANCY.AK_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_OCCUPANCY.AK_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x90", "EventName": "UNC_H_TxR_VERT_OCCUPANCY.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_OCCUPANCY.BL_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x90", "EventName": "UNC_H_TxR_VERT_OCCUPANCY.BL_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_OCCUPANCY.BL_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x90", "EventName": "UNC_H_TxR_VERT_OCCUPANCY.BL_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_OCCUPANCY.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x90", "EventName": "UNC_H_TxR_VERT_OCCUPANCY.IV_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_TxR_VERT_STARVED.AD_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9A", "EventName": "UNC_H_TxR_VERT_STARVED.AD_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_STARVED.AD_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9A", "EventName": "UNC_H_TxR_VERT_STARVED.AD_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_STARVED.AK_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9A", "EventName": "UNC_H_TxR_VERT_STARVED.AK_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_STARVED.AK_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9A", "EventName": "UNC_H_TxR_VERT_STARVED.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_STARVED.BL_AG0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9A", "EventName": "UNC_H_TxR_VERT_STARVED.BL_AG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_STARVED.BL_AG1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9A", "EventName": "UNC_H_TxR_VERT_STARVED.BL_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_TxR_VERT_STARVED.IV", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x9A", "EventName": "UNC_H_TxR_VERT_STARVED.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_AD_IN_USE.DN_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA6", "EventName": "UNC_H_VERT_RING_AD_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_AD_IN_USE.DN_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA6", "EventName": "UNC_H_VERT_RING_AD_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_AD_IN_USE.UP_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA6", "EventName": "UNC_H_VERT_RING_AD_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_AD_IN_USE.UP_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA6", "EventName": "UNC_H_VERT_RING_AD_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_AK_IN_USE.DN_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA8", "EventName": "UNC_H_VERT_RING_AK_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_VERT_RING_AK_IN_USE.DN_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA8", "EventName": "UNC_H_VERT_RING_AK_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_AK_IN_USE.UP_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA8", "EventName": "UNC_H_VERT_RING_AK_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_AK_IN_USE.UP_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xA8", "EventName": "UNC_H_VERT_RING_AK_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_BL_IN_USE.DN_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAA", "EventName": "UNC_H_VERT_RING_BL_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_BL_IN_USE.DN_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAA", "EventName": "UNC_H_VERT_RING_BL_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_BL_IN_USE.UP_EVEN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAA", "EventName": "UNC_H_VERT_RING_BL_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_BL_IN_USE.UP_ODD", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAA", "EventName": "UNC_H_VERT_RING_BL_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_IV_IN_USE.DN", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAC", "EventName": "UNC_H_VERT_RING_IV_IN_USE.DN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_VERT_RING_IV_IN_USE.UP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xAC", "EventName": "UNC_H_VERT_RING_IV_IN_USE.UP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_WB_PUSH_MTOI.LLC", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x56", "EventName": "UNC_H_WB_PUSH_MTOI.LLC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_WB_PUSH_MTOI.MEM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x56", "EventName": "UNC_H_WB_PUSH_MTOI.MEM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_WRITE_NO_CREDITS.EDC0_SMI2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5A", "EventName": "UNC_H_WRITE_NO_CREDITS.EDC0_SMI2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_WRITE_NO_CREDITS.EDC1_SMI3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5A", "EventName": "UNC_H_WRITE_NO_CREDITS.EDC1_SMI3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_WRITE_NO_CREDITS.EDC2_SMI4", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5A", "EventName": "UNC_H_WRITE_NO_CREDITS.EDC2_SMI4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_WRITE_NO_CREDITS.EDC3_SMI5", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5A", "EventName": "UNC_H_WRITE_NO_CREDITS.EDC3_SMI5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_WRITE_NO_CREDITS.MC0_SMI0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5A", "EventName": "UNC_H_WRITE_NO_CREDITS.MC0_SMI0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_WRITE_NO_CREDITS.MC1_SMI1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5A", "EventName": "UNC_H_WRITE_NO_CREDITS.MC1_SMI1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.ANY_RSPI_FWDFE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.ANY_RSPI_FWDFE", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe4", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.ANY_RSPI_FWDM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.ANY_RSPI_FWDM", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf0", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.ANY_RSPS_FWDFE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.ANY_RSPS_FWDFE", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe2", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.ANY_RSPS_FWDM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.ANY_RSPS_FWDM", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe8", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.ANY_RSP_HITFSE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.ANY_RSP_HITFSE", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe1", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.CORE_RSPI_FWDFE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.CORE_RSPI_FWDFE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x44", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.CORE_RSPI_FWDM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.CORE_RSPI_FWDM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x50", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_XSNP_RESP.CORE_RSPS_FWDFE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.CORE_RSPS_FWDFE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x42", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.CORE_RSPS_FWDM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.CORE_RSPS_FWDM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x48", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.CORE_RSP_HITFSE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.CORE_RSP_HITFSE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x41", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.EVICT_RSPI_FWDFE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.EVICT_RSPI_FWDFE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x84", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.EVICT_RSPI_FWDM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.EVICT_RSPI_FWDM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x90", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.EVICT_RSPS_FWDFE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.EVICT_RSPS_FWDFE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x82", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.EVICT_RSPS_FWDM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.EVICT_RSPS_FWDM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x88", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.EVICT_RSP_HITFSE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.EVICT_RSP_HITFSE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x81", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.EXT_RSPI_FWDFE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.EXT_RSPI_FWDFE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x24", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.EXT_RSPI_FWDM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.EXT_RSPI_FWDM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x30", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.EXT_RSPS_FWDFE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.EXT_RSPS_FWDFE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x22", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_CHA_XSNP_RESP.EXT_RSPS_FWDM", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.EXT_RSPS_FWDM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x28", "Unit": "CHA" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_CHA_XSNP_RESP.EXT_RSP_HITFSE", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x32", "EventName": "UNC_H_XSNP_RESP.EXT_RSP_HITFSE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x21", "Unit": "CHA" diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/skylakex/uncore-interconnect.json index f32d4d9d283a..216a00237cd1 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-interconnect.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-interconnect.json @@ -1,8 +1,10 @@ [ { "BriefDescription": "Total Write Cache Occupancy; Any Source", + "Counter": "0,1", "EventCode": "0xF", "EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.ANY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.; Tracks all requests from any source port.", "UMask": "0x1", @@ -10,8 +12,10 @@ }, { "BriefDescription": "Total Write Cache Occupancy; Snoops", + "Counter": "0,1", "EventCode": "0xF", "EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.IV_Q", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.", "UMask": "0x2", @@ -19,6 +23,7 @@ }, { "BriefDescription": "Total IRP occupancy of inbound read and write requests.", + "Counter": "0,1", "EventCode": "0xF", "EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.MEM", "PerPkg": "1", @@ -28,15 +33,19 @@ }, { "BriefDescription": "IRP Clocks", + "Counter": "0,1", "EventCode": "0x1", "EventName": "UNC_I_CLOCKTICKS", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "Coherent Ops; CLFlush", + "Counter": "0,1", "EventCode": "0x10", "EventName": "UNC_I_COHERENT_OPS.CLFLUSH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of coherency related operations serviced by the IRP", "UMask": "0x80", @@ -44,8 +53,10 @@ }, { "BriefDescription": "Coherent Ops; CRd", + "Counter": "0,1", "EventCode": "0x10", "EventName": "UNC_I_COHERENT_OPS.CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of coherency related operations serviced by the IRP", "UMask": "0x2", @@ -53,8 +64,10 @@ }, { "BriefDescription": "Coherent Ops; DRd", + "Counter": "0,1", "EventCode": "0x10", "EventName": "UNC_I_COHERENT_OPS.DRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of coherency related operations serviced by the IRP", "UMask": "0x4", @@ -62,8 +75,10 @@ }, { "BriefDescription": "Coherent Ops; PCIDCAHin5t", + "Counter": "0,1", "EventCode": "0x10", "EventName": "UNC_I_COHERENT_OPS.PCIDCAHINT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of coherency related operations serviced by the IRP", "UMask": "0x20", @@ -71,8 +86,10 @@ }, { "BriefDescription": "Coherent Ops; PCIRdCur", + "Counter": "0,1", "EventCode": "0x10", "EventName": "UNC_I_COHERENT_OPS.PCIRDCUR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of coherency related operations serviced by the IRP", "UMask": "0x1", @@ -80,6 +97,7 @@ }, { "BriefDescription": "PCIITOM request issued by the IRP unit to the mesh with the intention of writing a full cacheline.", + "Counter": "0,1", "EventCode": "0x10", "EventName": 
"UNC_I_COHERENT_OPS.PCITOM", "PerPkg": "1", @@ -89,6 +107,7 @@ }, { "BriefDescription": "RFO request issued by the IRP unit to the mes= h with the intention of writing a partial cacheline.", + "Counter": "0,1", "EventCode": "0x10", "EventName": "UNC_I_COHERENT_OPS.RFO", "PerPkg": "1", @@ -98,8 +117,10 @@ }, { "BriefDescription": "Coherent Ops; WbMtoI", + "Counter": "0,1", "EventCode": "0x10", "EventName": "UNC_I_COHERENT_OPS.WBMTOI", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of coherency related opera= tions serviced by the IRP", "UMask": "0x40", @@ -107,13 +128,16 @@ }, { "BriefDescription": "FAF RF full", + "Counter": "0,1", "EventCode": "0x17", "EventName": "UNC_I_FAF_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "Inbound read requests received by the IRP and= inserted into the FAF queue.", + "Counter": "0,1", "EventCode": "0x18", "EventName": "UNC_I_FAF_INSERTS", "PerPkg": "1", @@ -122,6 +146,7 @@ }, { "BriefDescription": "Occupancy of the IRP FAF queue.", + "Counter": "0,1", "EventCode": "0x19", "EventName": "UNC_I_FAF_OCCUPANCY", "PerPkg": "1", @@ -130,95 +155,119 @@ }, { "BriefDescription": "FAF allocation -- sent to ADQ", + "Counter": "0,1", "EventCode": "0x16", "EventName": "UNC_I_FAF_TRANSACTIONS", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "All Inserts Inbound (p2p + faf + cset)", + "Counter": "0,1", "EventCode": "0x1E", "EventName": "UNC_I_IRP_ALL.INBOUND_INSERTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "IRP" }, { "BriefDescription": "All Inserts Outbound (BL, AK, Snoops)", + "Counter": "0,1", "EventCode": "0x1E", "EventName": "UNC_I_IRP_ALL.OUTBOUND_INSERTS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "IRP" }, { "BriefDescription": "Misc Events - Set 0; Cache Inserts of Atomic = Transactions as Secondary", + "Counter": "0,1", "EventCode": "0x1C", "EventName": "UNC_I_MISC0.2ND_ATOMIC_INSERT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "IRP" }, { "BriefDescription": "Misc Events - Set 0; Cache Inserts of Read Tr= ansactions as Secondary", + "Counter": "0,1", "EventCode": "0x1C", "EventName": "UNC_I_MISC0.2ND_RD_INSERT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "IRP" }, { "BriefDescription": "Misc Events - Set 0; Cache Inserts of Write T= ransactions as Secondary", + "Counter": "0,1", "EventCode": "0x1C", "EventName": "UNC_I_MISC0.2ND_WR_INSERT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "IRP" }, { "BriefDescription": "Misc Events - Set 0; Fastpath Rejects", + "Counter": "0,1", "EventCode": "0x1C", "EventName": "UNC_I_MISC0.FAST_REJ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "IRP" }, { "BriefDescription": "Misc Events - Set 0; Fastpath Requests", + "Counter": "0,1", "EventCode": "0x1C", "EventName": "UNC_I_MISC0.FAST_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "IRP" }, { "BriefDescription": "Misc Events - Set 0; Fastpath Transfers From = Primary to Secondary", + "Counter": "0,1", "EventCode": "0x1C", "EventName": "UNC_I_MISC0.FAST_XFER", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "IRP" }, { "BriefDescription": "Misc Events - Set 0; Prefetch Ack Hints From = Primary to Secondary", + "Counter": "0,1", "EventCode": "0x1C", "EventName": "UNC_I_MISC0.PF_ACK_HINT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "IRP" }, { "BriefDescription": "Misc Events - Set 0", + "Counter": "0,1", "EventCode": 
"0x1C", "EventName": "UNC_I_MISC0.UNKNOWN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "IRP" }, { "BriefDescription": "Misc Events - Set 1; Lost Forward", + "Counter": "0,1", "EventCode": "0x1D", "EventName": "UNC_I_MISC1.LOST_FWD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop pulled away ownership before a write w= as committed", "UMask": "0x10", @@ -226,8 +275,10 @@ }, { "BriefDescription": "Misc Events - Set 1; Received Invalid", + "Counter": "0,1", "EventCode": "0x1D", "EventName": "UNC_I_MISC1.SEC_RCVD_INVLD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Secondary received a transfer that did not h= ave sufficient MESI state", "UMask": "0x20", @@ -235,8 +286,10 @@ }, { "BriefDescription": "Misc Events - Set 1; Received Valid", + "Counter": "0,1", "EventCode": "0x1D", "EventName": "UNC_I_MISC1.SEC_RCVD_VLD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Secondary received a transfer that did have = sufficient MESI state", "UMask": "0x40", @@ -244,8 +297,10 @@ }, { "BriefDescription": "Misc Events - Set 1; Slow Transfer of E Line"= , + "Counter": "0,1", "EventCode": "0x1D", "EventName": "UNC_I_MISC1.SLOW_E", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Secondary received a transfer that did have = sufficient MESI state", "UMask": "0x4", @@ -253,8 +308,10 @@ }, { "BriefDescription": "Misc Events - Set 1; Slow Transfer of I Line"= , + "Counter": "0,1", "EventCode": "0x1D", "EventName": "UNC_I_MISC1.SLOW_I", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop took cacheline ownership before write = from data was committed.", "UMask": "0x1", @@ -262,8 +319,10 @@ }, { "BriefDescription": "Misc Events - Set 1; Slow Transfer of M Line"= , + "Counter": "0,1", "EventCode": "0x1D", "EventName": "UNC_I_MISC1.SLOW_M", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Snoop took cacheline ownership before write = from data was committed.", "UMask": "0x8", @@ -271,8 +330,10 @@ }, { "BriefDescription": "Misc Events - Set 1; Slow Transfer of S Line"= , + "Counter": "0,1", "EventCode": "0x1D", "EventName": "UNC_I_MISC1.SLOW_S", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Secondary received a transfer that did not h= ave sufficient MESI state", "UMask": "0x2", @@ -280,88 +341,110 @@ }, { "BriefDescription": "P2P Requests", + "Counter": "0,1", "EventCode": "0x14", "EventName": "UNC_I_P2P_INSERTS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "P2P requests from the ITC", "Unit": "IRP" }, { "BriefDescription": "P2P Occupancy", + "Counter": "0,1", "EventCode": "0x15", "EventName": "UNC_I_P2P_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "P2P B & S Queue Occupancy", "Unit": "IRP" }, { "BriefDescription": "P2P Transactions; P2P completions", + "Counter": "0,1", "EventCode": "0x13", "EventName": "UNC_I_P2P_TRANSACTIONS.CMPL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "IRP" }, { "BriefDescription": "P2P Transactions; match if local only", + "Counter": "0,1", "EventCode": "0x13", "EventName": "UNC_I_P2P_TRANSACTIONS.LOC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "IRP" }, { "BriefDescription": "P2P Transactions; match if local and target m= atches", + "Counter": "0,1", "EventCode": "0x13", "EventName": "UNC_I_P2P_TRANSACTIONS.LOC_AND_TGT_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "IRP" }, { "BriefDescription": "P2P Transactions; P2P Message", + "Counter": "0,1", "EventCode": "0x13", 
"EventName": "UNC_I_P2P_TRANSACTIONS.MSG", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "IRP" }, { "BriefDescription": "P2P Transactions; P2P reads", + "Counter": "0,1", "EventCode": "0x13", "EventName": "UNC_I_P2P_TRANSACTIONS.RD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "IRP" }, { "BriefDescription": "P2P Transactions; Match if remote only", + "Counter": "0,1", "EventCode": "0x13", "EventName": "UNC_I_P2P_TRANSACTIONS.REM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "IRP" }, { "BriefDescription": "P2P Transactions; match if remote and target = matches", + "Counter": "0,1", "EventCode": "0x13", "EventName": "UNC_I_P2P_TRANSACTIONS.REM_AND_TGT_MATCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "IRP" }, { "BriefDescription": "P2P Transactions; P2P Writes", + "Counter": "0,1", "EventCode": "0x13", "EventName": "UNC_I_P2P_TRANSACTIONS.WR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "IRP" }, { "BriefDescription": "Responses to snoops of any type that hit M, E= , S or I line in the IIO", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.ALL_HIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Responses to snoops of any type (code, data,= invalidate) that hit M, E, S or I line in the IIO", "UMask": "0x7e", @@ -369,8 +452,10 @@ }, { "BriefDescription": "Responses to snoops of any type that hit E or= S line in the IIO cache", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_ES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Responses to snoops of any type (code, data,= invalidate) that hit E or S line in the IIO cache", "UMask": "0x74", @@ -378,8 +463,10 @@ }, { "BriefDescription": "Responses to snoops of any type that hit I li= ne in the IIO cache", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_I", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Responses to snoops of any type (code, data,= invalidate) that hit I line in the IIO cache", "UMask": "0x72", @@ -387,8 +474,10 @@ }, { "BriefDescription": "Responses to snoops of any type that hit M li= ne in the IIO cache", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_M", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Responses to snoops of any type (code, data,= invalidate) that hit M line in the IIO cache", "UMask": "0x78", @@ -396,8 +485,10 @@ }, { "BriefDescription": "Responses to snoops of any type that miss the= IIO cache", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.ALL_MISS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Responses to snoops of any type (code, data,= invalidate) that miss the IIO cache", "UMask": "0x71", @@ -405,64 +496,80 @@ }, { "BriefDescription": "Snoop Responses; Hit E or S", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.HIT_ES", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses; Hit I", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.HIT_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses; Hit M", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.HIT_M", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses; Miss", + "Counter": "0,1", "EventCode": "0x12", "EventName": 
"UNC_I_SNOOP_RESP.MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses; SnpCode", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.SNPCODE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses; SnpData", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.SNPDATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "IRP" }, { "BriefDescription": "Snoop Responses; SnpInv", + "Counter": "0,1", "EventCode": "0x12", "EventName": "UNC_I_SNOOP_RESP.SNPINV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "IRP" }, { "BriefDescription": "Inbound Transaction Count; Atomic", + "Counter": "0,1", "EventCode": "0x11", "EventName": "UNC_I_TRANSACTIONS.ATOMIC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of Inbound transactions fr= om the IRP to the Uncore. This can be filtered based on request type in ad= dition to the source queue. Note the special filtering equation. We do OR= -reduction on the request type. If the SOURCE bit is set, then we also do = AND qualification based on the source portID.; Tracks the number of atomic = transactions", "UMask": "0x10", @@ -470,8 +577,10 @@ }, { "BriefDescription": "Inbound Transaction Count; Other", + "Counter": "0,1", "EventCode": "0x11", "EventName": "UNC_I_TRANSACTIONS.OTHER", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of Inbound transactions fr= om the IRP to the Uncore. This can be filtered based on request type in ad= dition to the source queue. Note the special filtering equation. We do OR= -reduction on the request type. If the SOURCE bit is set, then we also do = AND qualification based on the source portID.; Tracks the number of 'other'= kinds of transactions.", "UMask": "0x20", @@ -479,8 +588,10 @@ }, { "BriefDescription": "Inbound Transaction Count; Read Prefetches", + "Counter": "0,1", "EventCode": "0x11", "EventName": "UNC_I_TRANSACTIONS.RD_PREF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of Inbound transactions fr= om the IRP to the Uncore. This can be filtered based on request type in ad= dition to the source queue. Note the special filtering equation. We do OR= -reduction on the request type. If the SOURCE bit is set, then we also do = AND qualification based on the source portID.; Tracks the number of read pr= efetches.", "UMask": "0x4", @@ -488,8 +599,10 @@ }, { "BriefDescription": "Inbound Transaction Count; Reads", + "Counter": "0,1", "EventCode": "0x11", "EventName": "UNC_I_TRANSACTIONS.READS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of Inbound transactions fr= om the IRP to the Uncore. This can be filtered based on request type in ad= dition to the source queue. Note the special filtering equation. We do OR= -reduction on the request type. If the SOURCE bit is set, then we also do = AND qualification based on the source portID.; Tracks only read requests (n= ot including read prefetches).", "UMask": "0x1", @@ -497,8 +610,10 @@ }, { "BriefDescription": "Inbound Transaction Count; Writes", + "Counter": "0,1", "EventCode": "0x11", "EventName": "UNC_I_TRANSACTIONS.WRITES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of Inbound transactions fr= om the IRP to the Uncore. This can be filtered based on request type in ad= dition to the source queue. 
Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks only write requests. Each write request should have a prefetch, so there is no need to explicitly track these requests. For writes that are tickled and have to retry, the counter will be incremented for each retry.", "UMask": "0x2", @@ -506,6 +621,7 @@ }, { "BriefDescription": "Inbound write (fast path) requests received by the IRP.", + "Counter": "0,1", "EventCode": "0x11", "EventName": "UNC_I_TRANSACTIONS.WR_PREF", "PerPkg": "1", @@ -515,118 +631,150 @@ }, { "BriefDescription": "AK Egress Allocations", + "Counter": "0,1", "EventCode": "0xB", "EventName": "UNC_I_TxC_AK_INSERTS", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL DRS Egress Cycles Full", + "Counter": "0,1", "EventCode": "0x5", "EventName": "UNC_I_TxC_BL_DRS_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL DRS Egress Inserts", + "Counter": "0,1", "EventCode": "0x2", "EventName": "UNC_I_TxC_BL_DRS_INSERTS", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL DRS Egress Occupancy", + "Counter": "0,1", "EventCode": "0x8", "EventName": "UNC_I_TxC_BL_DRS_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCB Egress Cycles Full", + "Counter": "0,1", "EventCode": "0x6", "EventName": "UNC_I_TxC_BL_NCB_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCB Egress Inserts", + "Counter": "0,1", "EventCode": "0x3", "EventName": "UNC_I_TxC_BL_NCB_INSERTS", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCB Egress Occupancy", + "Counter": "0,1", "EventCode": "0x9", "EventName": "UNC_I_TxC_BL_NCB_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCS Egress Cycles Full", + "Counter": "0,1", "EventCode": "0x7", "EventName": "UNC_I_TxC_BL_NCS_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCS Egress Inserts", + "Counter": "0,1", "EventCode": "0x4", "EventName": "UNC_I_TxC_BL_NCS_INSERTS", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "BL NCS Egress Occupancy", + "Counter": "0,1", "EventCode": "0xA", "EventName": "UNC_I_TxC_BL_NCS_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "IRP" }, { "BriefDescription": "No AD Egress Credit Stalls", + "Counter": "0,1", "EventCode": "0x1A", "EventName": "UNC_I_TxR2_AD_STALL_CREDIT_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number times when it is not possible to issue a request to the R2PCIe because there are no AD Egress Credits available.", "Unit": "IRP" }, { "BriefDescription": "No BL Egress Credit Stalls", + "Counter": "0,1", "EventCode": "0x1B", "EventName": "UNC_I_TxR2_BL_STALL_CREDIT_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number times when it is not possible to issue data to the R2PCIe because there are no BL Egress Credits available.", "Unit": "IRP" }, { "BriefDescription": "Outbound Read Requests", + "Counter": "0,1", "EventCode": "0xD", "EventName": "UNC_I_TxS_DATA_INSERTS_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of requests issued to the switch (towards the devices).", "Unit": "IRP" }, { "BriefDescription": "Outbound Read Requests", + 
"Counter": "0,1", "EventCode": "0xE", "EventName": "UNC_I_TxS_DATA_INSERTS_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of requests issued to the = switch (towards the devices).", "Unit": "IRP" }, { "BriefDescription": "Outbound Request Queue Occupancy", + "Counter": "0,1", "EventCode": "0xC", "EventName": "UNC_I_TxS_REQUEST_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of outstanding outbou= nd requests from the IRP to the switch (towards the devices). This can be = used in conjunction with the allocations event in order to calculate averag= e latency of outbound requests.", "Unit": "IRP" }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -634,8 +782,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -643,8 +793,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ -652,8 +804,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -661,8 +815,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -670,8 +826,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 5", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x20", @@ -679,8 +837,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -688,8 +848,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 1", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -697,8 +859,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 
2", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -706,8 +870,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 3", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -715,8 +881,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 4", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -724,8 +892,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 5", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -733,8 +903,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -742,8 +914,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -751,8 +925,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ -760,8 +936,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -769,8 +947,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -778,8 +958,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 5", + "Counter": "0,1,2,3", "EventCode": "0x88", "EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x20", @@ -787,8 +969,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": 
"UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -796,8 +980,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 1", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -805,8 +991,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 2", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -814,8 +1002,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 3", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -823,8 +1013,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 4", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -832,8 +1024,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 5", + "Counter": "0,1,2,3", "EventCode": "0x8A", "EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -841,8 +1035,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -850,8 +1046,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -859,8 +1057,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ -868,8 +1068,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -877,8 +1079,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR4", + "Experimental": "1", 
"PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -886,8 +1090,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 5", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x20", @@ -895,8 +1101,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -904,8 +1112,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 1", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -913,8 +1123,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 2", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -922,8 +1134,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 3", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -931,8 +1145,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 4", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -940,8 +1156,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 5", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -949,8 +1167,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -958,8 +1178,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 1", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -967,8 +1189,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 2", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 
1 BL credits in use in a given cycle, per transgress", "UMask": "0x4", @@ -976,8 +1200,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 3", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress", "UMask": "0x8", @@ -985,8 +1211,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 4", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress", "UMask": "0x10", @@ -994,8 +1222,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 5", + "Counter": "0,1,2,3", "EventCode": "0x8E", "EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress", "UMask": "0x20", @@ -1003,8 +1233,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 0", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.", "UMask": "0x1", @@ -1012,8 +1244,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 1", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.", "UMask": "0x2", @@ -1021,8 +1255,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 2", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.", "UMask": "0x4", @@ -1030,8 +1266,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 3", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.", "UMask": "0x8", @@ -1039,8 +1277,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 4", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.", "UMask": "0x10", @@ -1048,8 +1288,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 5", + "Counter": "0,1,2,3", "EventCode": "0x8C", "EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.", "UMask": "0x20", @@ -1057,6 +1299,7 @@ }, { "BriefDescription": "Traffic in which the M2M to iMC Bypass was not taken", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_M2M_BYPASS_M2M_Egress.NOT_TAKEN", "PerPkg": "1", @@ -1066,43 +1309,54 @@ }, { "BriefDescription": "M2M to iMC Bypass; Taken", + 
"Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_M2M_BYPASS_M2M_Egress.TAKEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "M2M to iMC Bypass; Not Taken", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_BYPASS_M2M_INGRESS.NOT_TAKEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "M2M to iMC Bypass; Taken", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M2M_BYPASS_M2M_INGRESS.TAKEN", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Cycles - at UCLK", + "Counter": "0,1,2,3", "EventName": "UNC_M2M_CLOCKTICKS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "CMS Clockticks", + "Counter": "0,1,2,3", "EventCode": "0xC0", "EventName": "UNC_M2M_CMS_CLOCKTICKS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Cycles when direct to core mode (which bypass= es the CHA) was disabled", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_DIRSTATE", "PerPkg": "1", @@ -1111,6 +1365,7 @@ }, { "BriefDescription": "Messages sent direct to core (bypassing the C= HA)", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M2M_DIRECT2CORE_TAKEN", "PerPkg": "1", @@ -1119,6 +1374,7 @@ }, { "BriefDescription": "Number of reads in which direct to core trans= action were overridden", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_M2M_DIRECT2CORE_TXN_OVERRIDE", "PerPkg": "1", @@ -1127,6 +1383,7 @@ }, { "BriefDescription": "Number of reads in which direct to Intel(R) U= PI transactions were overridden", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_CREDITS", "PerPkg": "1", @@ -1135,6 +1392,7 @@ }, { "BriefDescription": "Cycles when direct to Intel(R) UPI was disabl= ed", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE", "PerPkg": "1", @@ -1143,6 +1401,7 @@ }, { "BriefDescription": "Messages sent direct to the Intel(R) UPI", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_M2M_DIRECT2UPI_TAKEN", "PerPkg": "1", @@ -1151,6 +1410,7 @@ }, { "BriefDescription": "Number of reads that a message sent direct2 I= ntel(R) UPI was overridden", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_M2M_DIRECT2UPI_TXN_OVERRIDE", "PerPkg": "1", @@ -1159,70 +1419,87 @@ }, { "BriefDescription": "Directory Hit; On NonDirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2M" }, { "BriefDescription": "Directory Hit; On NonDirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "Directory Hit; On NonDirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2M" }, { "BriefDescription": "Directory Hit; On NonDirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2M" }, { "BriefDescription": "Directory Hit; On Dirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": 
"UNC_M2M_DIRECTORY_HIT.DIRTY_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "Directory Hit; On Dirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Directory Hit; On Dirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Directory Hit; On Dirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Multi-socket cacheline Directory lookups (any= state found)", + "Counter": "0,1,2,3", "EventCode": "0x2D", "EventName": "UNC_M2M_DIRECTORY_LOOKUP.ANY", "PerPkg": "1", @@ -1232,6 +1509,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory lookups (cac= heline found in A state)", + "Counter": "0,1,2,3", "EventCode": "0x2D", "EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_A", "PerPkg": "1", @@ -1241,6 +1519,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory lookup (cach= eline found in I state)", + "Counter": "0,1,2,3", "EventCode": "0x2D", "EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_I", "PerPkg": "1", @@ -1250,6 +1529,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory lookup (cach= eline found in S state)", + "Counter": "0,1,2,3", "EventCode": "0x2D", "EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_S", "PerPkg": "1", @@ -1259,70 +1539,87 @@ }, { "BriefDescription": "Directory Miss; On NonDirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M2M" }, { "BriefDescription": "Directory Miss; On NonDirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "Directory Miss; On NonDirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2M" }, { "BriefDescription": "Directory Miss; On NonDirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2M" }, { "BriefDescription": "Directory Miss; On Dirty Line in A State", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_A", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "Directory Miss; On Dirty Line in I State", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_I", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Directory Miss; On Dirty Line in L State", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_P", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Directory Miss; On Dirty Line in S State", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_S", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { 
"BriefDescription": "Multi-socket cacheline Directory update from = A to I", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_M2M_DIRECTORY_UPDATE.A2I", "PerPkg": "1", @@ -1332,6 +1629,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory update from = A to S", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_M2M_DIRECTORY_UPDATE.A2S", "PerPkg": "1", @@ -1341,6 +1639,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory update from/= to Any state", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_M2M_DIRECTORY_UPDATE.ANY", "PerPkg": "1", @@ -1350,6 +1649,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory update from = I to A", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_M2M_DIRECTORY_UPDATE.I2A", "PerPkg": "1", @@ -1359,6 +1659,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory update from = I to S", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_M2M_DIRECTORY_UPDATE.I2S", "PerPkg": "1", @@ -1368,6 +1669,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory update from = S to A", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_M2M_DIRECTORY_UPDATE.S2A", "PerPkg": "1", @@ -1377,6 +1679,7 @@ }, { "BriefDescription": "Multi-socket cacheline Directory update from = S to I", + "Counter": "0,1,2,3", "EventCode": "0x2E", "EventName": "UNC_M2M_DIRECTORY_UPDATE.S2I", "PerPkg": "1", @@ -1386,8 +1689,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements;= Down", + "Counter": "0,1,2,3", "EventCode": "0xAE", "EventName": "UNC_M2M_EGRESS_ORDERING.IV_SNOOPGO_DN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of cycles IV was blocked in th= e TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x4", @@ -1395,8 +1700,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements;= Up", + "Counter": "0,1,2,3", "EventCode": "0xAE", "EventName": "UNC_M2M_EGRESS_ORDERING.IV_SNOOPGO_UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of cycles IV was blocked in th= e TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x1", @@ -1404,8 +1711,10 @@ }, { "BriefDescription": "FaST wire asserted; Horizontal", + "Counter": "0,1,2,3", "EventCode": "0xA5", "EventName": "UNC_M2M_FAST_ASSERTED.HORZ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles either the local= or incoming distress signals are asserted. Incoming distress includes up,= dn and across.", "UMask": "0x2", @@ -1413,8 +1722,10 @@ }, { "BriefDescription": "FaST wire asserted; Vertical", + "Counter": "0,1,2,3", "EventCode": "0xA5", "EventName": "UNC_M2M_FAST_ASSERTED.VERT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles either the local= or incoming distress signals are asserted. Incoming distress includes up,= dn and across.", "UMask": "0x1", @@ -1422,8 +1733,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Left and Even", + "Counter": "0,1,2,3", "EventCode": "0xA7", "EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. 
On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x1", @@ -1431,8 +1744,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Left and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA7", "EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x2", @@ -1440,8 +1755,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Right and Even", + "Counter": "0,1,2,3", "EventCode": "0xA7", "EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x4", @@ -1449,8 +1766,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Right and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA7", "EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x8", @@ -1458,8 +1777,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Left and Even", + "Counter": "0,1,2,3", "EventCode": "0xA9", "EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x1", @@ -1467,8 +1788,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Left and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA9", "EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x2", @@ -1476,8 +1799,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Right and Even", + "Counter": "0,1,2,3", "EventCode": "0xA9", "EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x4", @@ -1485,8 +1810,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Right and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA9", "EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. 
This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x8", @@ -1494,8 +1821,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Left and Even", + "Counter": "0,1,2,3", "EventCode": "0xAB", "EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x1", @@ -1503,8 +1832,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Left and Odd", + "Counter": "0,1,2,3", "EventCode": "0xAB", "EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x2", @@ -1512,8 +1843,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Right and Even", + "Counter": "0,1,2,3", "EventCode": "0xAB", "EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x4", @@ -1521,8 +1854,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Right and Odd", + "Counter": "0,1,2,3", "EventCode": "0xAB", "EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x8", @@ -1530,8 +1865,10 @@ }, { "BriefDescription": "Horizontal IV Ring in Use; Left", + "Counter": "0,1,2,3", "EventCode": "0xAD", "EventName": "UNC_M2M_HORZ_RING_IV_IN_USE.LEFT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal IV ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. There is only 1 IV ring. Therefor= e, if one wants to monitor the Even ring, they should select both UP_EVEN a= nd DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN= _ODD.", "UMask": "0x1", @@ -1539,8 +1876,10 @@ }, { "BriefDescription": "Horizontal IV Ring in Use; Right", + "Counter": "0,1,2,3", "EventCode": "0xAD", "EventName": "UNC_M2M_HORZ_RING_IV_IN_USE.RIGHT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal IV ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. There is only 1 IV ring. Therefor= e, if one wants to monitor the Even ring, they should select both UP_EVEN a= nd DN_EVEN. 
To monitor the Odd ring, they should select both UP_ODD and DN= _ODD.", "UMask": "0x4", @@ -1548,6 +1887,7 @@ }, { "BriefDescription": "Reads to iMC issued", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_M2M_IMC_READS.ALL", "PerPkg": "1", @@ -1557,22 +1897,27 @@ }, { "BriefDescription": "M2M Reads Issued to iMC; All, regardless of p= riority.", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_M2M_IMC_READS.FROM_TRANSGRESS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "M2M Reads Issued to iMC; Critical Priority", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_M2M_IMC_READS.ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Reads to iMC issued at Normal Priority (Non-I= sochronous)", + "Counter": "0,1,2,3", "EventCode": "0x37", "EventName": "UNC_M2M_IMC_READS.NORMAL", "PerPkg": "1", @@ -1582,6 +1927,7 @@ }, { "BriefDescription": "Writes to iMC issued", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2M_IMC_WRITES.ALL", "PerPkg": "1", @@ -1591,30 +1937,37 @@ }, { "BriefDescription": "M2M Writes Issued to iMC; All, regardless of = priority.", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2M_IMC_WRITES.FROM_TRANSGRESS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2M" }, { "BriefDescription": "M2M Writes Issued to iMC; Full Line Non-ISOCH= ", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2M_IMC_WRITES.FULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "M2M Writes Issued to iMC; ISOCH Full Line", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2M_IMC_WRITES.FULL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "M2M Writes Issued to iMC; All, regardless of = priority.", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2M_IMC_WRITES.NI", "PerPkg": "1", @@ -1623,6 +1976,7 @@ }, { "BriefDescription": "Partial Non-Isochronous writes to the iMC", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2M_IMC_WRITES.PARTIAL", "PerPkg": "1", @@ -1632,44 +1986,55 @@ }, { "BriefDescription": "M2M Writes Issued to iMC; ISOCH Partial", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_M2M_IMC_WRITES.PARTIAL_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "Number Packet Header Matches; MC Match", + "Counter": "0,1,2,3", "EventCode": "0x4C", "EventName": "UNC_M2M_PKT_MATCH.MC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Number Packet Header Matches; Mesh Match", + "Counter": "0,1,2,3", "EventCode": "0x4C", "EventName": "UNC_M2M_PKT_MATCH.MESH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Prefetch CAM Cycles Full", + "Counter": "0,1,2,3", "EventCode": "0x53", "EventName": "UNC_M2M_PREFCAM_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Prefetch CAM Cycles Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x54", "EventName": "UNC_M2M_PREFCAM_CYCLES_NE", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Prefetch requests that got turn into a demand= request", + "Counter": "0,1,2,3", "EventCode": "0x56", "EventName": "UNC_M2M_PREFCAM_DEMAND_PROMOTIONS", "PerPkg": "1", @@ -1678,6 +2043,7 @@ }, { "BriefDescription": "Inserts 
into the Memory Controller Prefetch Q= ueue", + "Counter": "0,1,2,3", "EventCode": "0x57", "EventName": "UNC_M2M_PREFCAM_INSERTS", "PerPkg": "1", @@ -1686,15 +2052,19 @@ }, { "BriefDescription": "Prefetch CAM Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x55", "EventName": "UNC_M2M_PREFCAM_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; AD", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_M2M_RING_BOUNCES_HORZ.AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x1", @@ -1702,8 +2072,10 @@ }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; AK", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_M2M_RING_BOUNCES_HORZ.AK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x2", @@ -1711,8 +2083,10 @@ }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; BL", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_M2M_RING_BOUNCES_HORZ.BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x4", @@ -1720,8 +2094,10 @@ }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; IV", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_M2M_RING_BOUNCES_HORZ.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x8", @@ -1729,8 +2105,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = AD", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_M2M_RING_BOUNCES_VERT.AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x1", @@ -1738,8 +2116,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = Acknowledgements to core", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_M2M_RING_BOUNCES_VERT.AK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x2", @@ -1747,8 +2127,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = Data Responses to core", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_M2M_RING_BOUNCES_VERT.BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x4", @@ -1756,8 +2138,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = Snoops of processor's cache.", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_M2M_RING_BOUNCES_VERT.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x8", @@ -1765,174 +2149,217 @@ }, { "BriefDescription": "Sink Starvation on Horizontal Ring; AD", + "Counter": "0,1,2,3", "EventCode": "0xA3", "EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.AD", + "Experimental": "1", "PerPkg": "1", "UMask": 
"0x1", "Unit": "M2M" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; AK", + "Counter": "0,1,2,3", "EventCode": "0xA3", "EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; Acknowled= gements to Agent 1", + "Counter": "0,1,2,3", "EventCode": "0xA3", "EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2M" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; BL", + "Counter": "0,1,2,3", "EventCode": "0xA3", "EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; IV", + "Counter": "0,1,2,3", "EventCode": "0xA3", "EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "Sink Starvation on Vertical Ring; AD", + "Counter": "0,1,2,3", "EventCode": "0xA2", "EventName": "UNC_M2M_RING_SINK_STARVED_VERT.AD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Sink Starvation on Vertical Ring; Acknowledge= ments to core", + "Counter": "0,1,2,3", "EventCode": "0xA2", "EventName": "UNC_M2M_RING_SINK_STARVED_VERT.AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Sink Starvation on Vertical Ring; Data Respon= ses to core", + "Counter": "0,1,2,3", "EventCode": "0xA2", "EventName": "UNC_M2M_RING_SINK_STARVED_VERT.BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Sink Starvation on Vertical Ring; Snoops of p= rocessor's cache.", + "Counter": "0,1,2,3", "EventCode": "0xA2", "EventName": "UNC_M2M_RING_SINK_STARVED_VERT.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "Source Throttle", + "Counter": "0,1,2,3", "EventCode": "0xA4", "EventName": "UNC_M2M_RING_SRC_THRTL", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x44", "EventName": "UNC_M2M_RPQ_CYCLES_NO_SPEC_CREDITS.CHN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x44", "EventName": "UNC_M2M_RPQ_CYCLES_NO_SPEC_CREDITS.CHN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x44", "EventName": "UNC_M2M_RPQ_CYCLES_NO_SPEC_CREDITS.CHN2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Regular; Ch= annel 0", + "Counter": "0,1,2,3", "EventCode": "0x43", "EventName": "UNC_M2M_RPQ_CYCLES_REG_CREDITS.CHN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Regular; Ch= annel 1", + "Counter": "0,1,2,3", "EventCode": "0x43", "EventName": "UNC_M2M_RPQ_CYCLES_REG_CREDITS.CHN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Regular; Ch= annel 2", + "Counter": "0,1,2,3", "EventCode": "0x43", "EventName": "UNC_M2M_RPQ_CYCLES_REG_CREDITS.CHN2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Special; Ch= annel 0", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Special; Ch= annel 1", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Special; Ch= annel 2", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "AD Ingress (from CMS) Full", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M2M_RxC_AD_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "AD Ingress (from CMS) Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_M2M_RxC_AD_CYCLES_NE", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "AD Ingress (from CMS) Queue Inserts", + "Counter": "0,1,2,3", "EventCode": "0x1", "EventName": "UNC_M2M_RxC_AD_INSERTS", "PerPkg": "1", @@ -1941,6 +2368,7 @@ }, { "BriefDescription": "AD Ingress (from CMS) Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_M2M_RxC_AD_OCCUPANCY", "PerPkg": "1", @@ -1948,20 +2376,25 @@ }, { "BriefDescription": "BL Ingress (from CMS) Full", + "Counter": "0,1,2,3", "EventCode": "0x8", "EventName": "UNC_M2M_RxC_BL_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "BL Ingress (from CMS) Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x7", "EventName": "UNC_M2M_RxC_BL_CYCLES_NE", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "BL Ingress (from CMS) Allocations", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_M2M_RxC_BL_INSERTS", "PerPkg": "1", @@ -1969,6 +2402,7 @@ }, { "BriefDescription": "BL Ingress (from CMS) Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x6", "EventName": "UNC_M2M_RxC_BL_OCCUPANCY", "PerPkg": "1", @@ -1976,8 +2410,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M2M_RxR_BUSY_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. 
This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x1", @@ -1985,8 +2421,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M2M_RxR_BUSY_STARVED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x10", @@ -1994,8 +2432,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M2M_RxR_BUSY_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x4", @@ -2003,8 +2443,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M2M_RxR_BUSY_STARVED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x40", @@ -2012,8 +2454,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M2M_RxR_BYPASS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x1", @@ -2021,8 +2465,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M2M_RxR_BYPASS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x10", @@ -2030,8 +2476,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M2M_RxR_BYPASS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x2", @@ -2039,8 +2487,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M2M_RxR_BYPASS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x4", @@ -2048,8 +2498,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M2M_RxR_BYPASS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x40", @@ -2057,8 +2509,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M2M_RxR_BYPASS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": 
"0x8", @@ -2066,8 +2520,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M2M_RxR_CRD_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x1", @@ -2075,8 +2531,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M2M_RxR_CRD_STARVED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x10", @@ -2084,8 +2542,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AK - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M2M_RxR_CRD_STARVED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x2", @@ -2093,8 +2553,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M2M_RxR_CRD_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x4", @@ -2102,8 +2564,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M2M_RxR_CRD_STARVED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x40", @@ -2111,8 +2575,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; IFV - Credit= ", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M2M_RxR_CRD_STARVED.IFV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x80", @@ -2120,8 +2586,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; IV - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M2M_RxR_CRD_STARVED.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. 
This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x8", @@ -2129,8 +2597,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M2M_RxR_INSERTS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x1", @@ -2138,8 +2608,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M2M_RxR_INSERTS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x10", @@ -2147,8 +2619,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M2M_RxR_INSERTS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x2", @@ -2156,8 +2630,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M2M_RxR_INSERTS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x4", @@ -2165,8 +2641,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M2M_RxR_INSERTS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x40", @@ -2174,8 +2652,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M2M_RxR_INSERTS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x8", @@ -2183,8 +2663,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M2M_RxR_OCCUPANCY.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x1", @@ -2192,8 +2674,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M2M_RxR_OCCUPANCY.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x10", @@ -2201,8 +2685,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M2M_RxR_OCCUPANCY.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from 
the mesh", "UMask": "0x2", @@ -2210,8 +2696,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M2M_RxR_OCCUPANCY.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x4", @@ -2219,8 +2707,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M2M_RxR_OCCUPANCY.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x40", @@ -2228,8 +2718,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M2M_RxR_OCCUPANCY.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x8", @@ -2237,8 +2729,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 0", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x1", @@ -2246,8 +2740,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 1", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x2", @@ -2255,8 +2751,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 2", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -2264,8 +2762,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -2273,8 +2773,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x10", @@ -2282,8 +2784,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 5", + "Counter": "0,1,2,3", "EventCode": "0xD0", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress 
Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x20", @@ -2291,8 +2795,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 0", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x1", @@ -2300,8 +2806,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 1", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x2", @@ -2309,8 +2817,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 2", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -2318,8 +2828,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -2327,8 +2839,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x10", @@ -2336,8 +2850,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For= Transgress 5", + "Counter": "0,1,2,3", "EventCode": "0xD2", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x20", @@ -2345,8 +2861,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 0", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x1", @@ -2354,8 +2872,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 1", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x2", @@ -2363,8 +2883,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 2", + "Counter": 
"0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -2372,8 +2894,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -2381,8 +2905,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x10", @@ -2390,8 +2916,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 5", + "Counter": "0,1,2,3", "EventCode": "0xD4", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x20", @@ -2399,8 +2927,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 0", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x1", @@ -2408,8 +2938,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 1", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x2", @@ -2417,8 +2949,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 2", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -2426,8 +2960,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -2435,8 +2971,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR 
credit to become available, per transgress."= , "UMask": "0x10", @@ -2444,8 +2982,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 5", + "Counter": "0,1,2,3", "EventCode": "0xD6", "EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x20", @@ -2453,151 +2993,190 @@ }, { "BriefDescription": "Number AD Ingress Credits", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M2M_TGR_AD_CREDITS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Number BL Ingress Credits", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M2M_TGR_BL_CREDITS", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Tracker Cycles Full; Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x45", "EventName": "UNC_M2M_TRACKER_CYCLES_FULL.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Tracker Cycles Full; Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x45", "EventName": "UNC_M2M_TRACKER_CYCLES_FULL.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Tracker Cycles Full; Channel 2", + "Counter": "0,1,2,3", "EventCode": "0x45", "EventName": "UNC_M2M_TRACKER_CYCLES_FULL.CH2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Tracker Cycles Not Empty; Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2M_TRACKER_CYCLES_NE.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Tracker Cycles Not Empty; Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2M_TRACKER_CYCLES_NE.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Tracker Cycles Not Empty; Channel 2", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_M2M_TRACKER_CYCLES_NE.CH2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Tracker Inserts; Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x49", "EventName": "UNC_M2M_TRACKER_INSERTS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Tracker Inserts; Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x49", "EventName": "UNC_M2M_TRACKER_INSERTS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Tracker Inserts; Channel 2", + "Counter": "0,1,2,3", "EventCode": "0x49", "EventName": "UNC_M2M_TRACKER_INSERTS.CH2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Tracker Occupancy; Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Tracker Occupancy; Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Tracker Occupancy; Channel 2", + "Counter": "0,1,2,3", "EventCode": "0x47", "EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Data Pending Occupancy", + 
"Counter": "0,1,2,3", "EventCode": "0x48", "EventName": "UNC_M2M_TRACKER_PENDING_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "AD Egress (to CMS) Credit Acquired", + "Counter": "0,1,2,3", "EventCode": "0xD", "EventName": "UNC_M2M_TxC_AD_CREDITS_ACQUIRED", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "AD Egress (to CMS) Credits Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xE", "EventName": "UNC_M2M_TxC_AD_CREDIT_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "AD Egress (to CMS) Full", + "Counter": "0,1,2,3", "EventCode": "0xC", "EventName": "UNC_M2M_TxC_AD_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "AD Egress (to CMS) Not Empty", + "Counter": "0,1,2,3", "EventCode": "0xB", "EventName": "UNC_M2M_TxC_AD_CYCLES_NE", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "AD Egress (to CMS) Allocations", + "Counter": "0,1,2,3", "EventCode": "0x9", "EventName": "UNC_M2M_TxC_AD_INSERTS", "PerPkg": "1", @@ -2605,20 +3184,25 @@ }, { "BriefDescription": "Cycles with No AD Egress (to CMS) Credits", + "Counter": "0,1,2,3", "EventCode": "0xF", "EventName": "UNC_M2M_TxC_AD_NO_CREDIT_CYCLES", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "Cycles Stalled with No AD Egress (to CMS) Cre= dits", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M2M_TxC_AD_NO_CREDIT_STALLED", + "Experimental": "1", "PerPkg": "1", "Unit": "M2M" }, { "BriefDescription": "AD Egress (to CMS) Occupancy", + "Counter": "0,1,2,3", "EventCode": "0xA", "EventName": "UNC_M2M_TxC_AD_OCCUPANCY", "PerPkg": "1", @@ -2626,430 +3210,537 @@ }, { "BriefDescription": "Outbound Ring Transactions on AK; CRD Transac= tions to Cbo", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_M2M_TxC_AK.CRD_CBO", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Outbound Ring Transactions on AK; NDR Transac= tions", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_M2M_TxC_AK.NDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Credit Acquired; Common Me= sh Stop - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x1D", "EventName": "UNC_M2M_TxC_AK_CREDITS_ACQUIRED.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Credit Acquired; Common Me= sh Stop - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x1D", "EventName": "UNC_M2M_TxC_AK_CREDITS_ACQUIRED.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Credits Occupancy; Common = Mesh Stop - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x1E", "EventName": "UNC_M2M_TxC_AK_CREDIT_OCCUPANCY.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Credits Occupancy; Common = Mesh Stop - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x1E", "EventName": "UNC_M2M_TxC_AK_CREDIT_OCCUPANCY.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Full; All", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Full; Common Mesh Stop - N= 
ear Side", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Full; Common Mesh Stop - F= ar Side", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Full; Read Credit Request"= , + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.RDCRD0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Full; Read Credit Request"= , + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.RDCRD1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x88", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Full; Write Compare Reques= t", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCMP0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Full; Write Compare Reques= t", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCMP1", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa0", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Full; Write Credit Request= ", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCRD0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Full; Write Credit Request= ", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCRD1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x90", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Not Empty; All", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_M2M_TxC_AK_CYCLES_NE.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Not Empty; Common Mesh Sto= p - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_M2M_TxC_AK_CYCLES_NE.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Not Empty; Common Mesh Sto= p - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_M2M_TxC_AK_CYCLES_NE.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Not Empty; Read Credit Req= uest", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_M2M_TxC_AK_CYCLES_NE.RDCRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Not Empty; Write Compare R= equest", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_M2M_TxC_AK_CYCLES_NE.WRCMP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Not Empty; Write Credit Re= quest", + "Counter": "0,1,2,3", "EventCode": "0x13", "EventName": "UNC_M2M_TxC_AK_CYCLES_NE.WRCRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Allocations; All", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2M_TxC_AK_INSERTS.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to 
CMS) Allocations; Common Mesh S= top - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2M_TxC_AK_INSERTS.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Allocations; Common Mesh S= top - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2M_TxC_AK_INSERTS.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Allocations; Prefetch Read= Cam Hit", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2M_TxC_AK_INSERTS.PREF_RD_CAM_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Allocations; Read Credit R= equest", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2M_TxC_AK_INSERTS.RDCRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Allocations; Write Compare= Request", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2M_TxC_AK_INSERTS.WRCMP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Allocations; Write Credit = Request", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M2M_TxC_AK_INSERTS.WRCRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "Cycles with No AK Egress (to CMS) Credits; Co= mmon Mesh Stop - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x1F", "EventName": "UNC_M2M_TxC_AK_NO_CREDIT_CYCLES.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Cycles with No AK Egress (to CMS) Credits; Co= mmon Mesh Stop - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x1F", "EventName": "UNC_M2M_TxC_AK_NO_CREDIT_CYCLES.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Cycles Stalled with No AK Egress (to CMS) Cre= dits; Common Mesh Stop - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M2M_TxC_AK_NO_CREDIT_STALLED.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Cycles Stalled with No AK Egress (to CMS) Cre= dits; Common Mesh Stop - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M2M_TxC_AK_NO_CREDIT_STALLED.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Occupancy; All", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_M2M_TxC_AK_OCCUPANCY.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Occupancy; Common Mesh Sto= p - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_M2M_TxC_AK_OCCUPANCY.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Occupancy; Common Mesh Sto= p - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_M2M_TxC_AK_OCCUPANCY.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Occupancy; Read Credit Req= uest", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_M2M_TxC_AK_OCCUPANCY.RDCRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Occupancy; Write Compare R= 
equest", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_M2M_TxC_AK_OCCUPANCY.WRCMP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Occupancy; Write Credit Re= quest", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_M2M_TxC_AK_OCCUPANCY.WRCRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Sideband", + "Counter": "0,1,2,3", "EventCode": "0x6B", "EventName": "UNC_M2M_TxC_AK_SIDEBAND.RD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "AK Egress (to CMS) Sideband", + "Counter": "0,1,2,3", "EventCode": "0x6B", "EventName": "UNC_M2M_TxC_AK_SIDEBAND.WR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Outbound DRS Ring Transactions to Cache; Data= to Cache", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2M_TxC_BL.DRS_CACHE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Outbound DRS Ring Transactions to Cache; Data= to Core", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2M_TxC_BL.DRS_CORE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Outbound DRS Ring Transactions to Cache; Data= to QPI", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_M2M_TxC_BL.DRS_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Credit Acquired; Common Me= sh Stop - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2M_TxC_BL_CREDITS_ACQUIRED.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Credit Acquired; Common Me= sh Stop - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x19", "EventName": "UNC_M2M_TxC_BL_CREDITS_ACQUIRED.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Credits Occupancy; Common = Mesh Stop - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x1A", "EventName": "UNC_M2M_TxC_BL_CREDIT_OCCUPANCY.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Credits Occupancy; Common = Mesh Stop - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x1A", "EventName": "UNC_M2M_TxC_BL_CREDIT_OCCUPANCY.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Full; All", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2M_TxC_BL_CYCLES_FULL.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Full; Common Mesh Stop - N= ear Side", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2M_TxC_BL_CYCLES_FULL.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Full; Common Mesh Stop - F= ar Side", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_M2M_TxC_BL_CYCLES_FULL.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Not Empty; All", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M2M_TxC_BL_CYCLES_NE.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "M2M" }, { "BriefDescription": "BL Egress 
(to CMS) Not Empty; Common Mesh Sto= p - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M2M_TxC_BL_CYCLES_NE.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Not Empty; Common Mesh Sto= p - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x17", "EventName": "UNC_M2M_TxC_BL_CYCLES_NE.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Allocations; All", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_M2M_TxC_BL_INSERTS.ALL", "PerPkg": "1", @@ -3058,54 +3749,67 @@ }, { "BriefDescription": "BL Egress (to CMS) Allocations; Common Mesh S= top - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_M2M_TxC_BL_INSERTS.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Allocations; Common Mesh S= top - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_M2M_TxC_BL_INSERTS.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Cycles with No BL Egress (to CMS) Credits; Co= mmon Mesh Stop - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x1B", "EventName": "UNC_M2M_TxC_BL_NO_CREDIT_CYCLES.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Cycles with No BL Egress (to CMS) Credits; Co= mmon Mesh Stop - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x1B", "EventName": "UNC_M2M_TxC_BL_NO_CREDIT_CYCLES.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Cycles Stalled with No BL Egress (to CMS) Cre= dits; Common Mesh Stop - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x1C", "EventName": "UNC_M2M_TxC_BL_NO_CREDIT_STALLED.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Cycles Stalled with No BL Egress (to CMS) Cre= dits; Common Mesh Stop - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x1C", "EventName": "UNC_M2M_TxC_BL_NO_CREDIT_STALLED.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Occupancy; All", + "Counter": "0,1,2,3", "EventCode": "0x16", "EventName": "UNC_M2M_TxC_BL_OCCUPANCY.ALL", "PerPkg": "1", @@ -3114,24 +3818,30 @@ }, { "BriefDescription": "BL Egress (to CMS) Occupancy; Common Mesh Sto= p - Near Side", + "Counter": "0,1,2,3", "EventCode": "0x16", "EventName": "UNC_M2M_TxC_BL_OCCUPANCY.CMS0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "BL Egress (to CMS) Occupancy; Common Mesh Sto= p - Far Side", + "Counter": "0,1,2,3", "EventCode": "0x16", "EventName": "UNC_M2M_TxC_BL_OCCUPANCY.CMS1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "CMS Horizontal ADS Used; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9D", "EventName": "UNC_M2M_TxR_HORZ_ADS_USED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -3139,8 +3849,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0x9D", "EventName": "UNC_M2M_TxR_HORZ_ADS_USED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock 
Slot, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -3148,8 +3860,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9D", "EventName": "UNC_M2M_TxR_HORZ_ADS_USED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -3157,8 +3871,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9D", "EventName": "UNC_M2M_TxR_HORZ_ADS_USED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -3166,8 +3882,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0x9D", "EventName": "UNC_M2M_TxR_HORZ_ADS_USED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -3175,8 +3893,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_M2M_TxR_HORZ_BYPASS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -3184,8 +3904,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_M2M_TxR_HORZ_BYPASS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -3193,8 +3915,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_M2M_TxR_HORZ_BYPASS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -3202,8 +3926,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_M2M_TxR_HORZ_BYPASS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -3211,8 +3937,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_M2M_TxR_HORZ_BYPASS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -3220,8 +3948,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9F", "EventName": "UNC_M2M_TxR_HORZ_BYPASS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x8", @@ -3229,8 +3959,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; A= D - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the 
Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -3238,8 +3970,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; A= D - Credit", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -3247,8 +3981,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; A= K - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -3256,8 +3992,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; B= L - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -3265,8 +4003,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; B= L - Credit", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -3274,8 +4014,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; I= V - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x96", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -3283,8 +4025,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -3292,8 +4036,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. 
The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -3301,8 +4047,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -3310,8 +4058,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -3319,8 +4069,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -3328,8 +4080,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x97", "EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. 
The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -3337,8 +4091,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_M2M_TxR_HORZ_INSERTS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -3346,8 +4102,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_M2M_TxR_HORZ_INSERTS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -3355,8 +4113,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_M2M_TxR_HORZ_INSERTS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -3364,8 +4124,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_M2M_TxR_HORZ_INSERTS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -3373,8 +4135,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_M2M_TxR_HORZ_INSERTS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -3382,8 +4146,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x95", "EventName": "UNC_M2M_TxR_HORZ_INSERTS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -3391,8 +4157,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; AD - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_M2M_TxR_HORZ_NACK.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x1", @@ -3400,8 +4168,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; AD - Credit", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_M2M_TxR_HORZ_NACK.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x20", @@ -3409,8 +4179,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; AK - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_M2M_TxR_HORZ_NACK.AK_BNC", + "Experimental": "1", "PerPkg": 
"1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x2", @@ -3418,8 +4190,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; BL - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_M2M_TxR_HORZ_NACK.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x4", @@ -3427,8 +4201,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; BL - Credit", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_M2M_TxR_HORZ_NACK.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x40", @@ -3436,8 +4212,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; IV - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x99", "EventName": "UNC_M2M_TxR_HORZ_NACK.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x8", @@ -3445,8 +4223,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; AD - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -3454,8 +4234,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; AD - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -3463,8 +4245,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; AK - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -3472,8 +4256,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; BL - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -3481,8 +4267,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; BL - Credit"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -3490,8 +4278,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; IV - Bounce"= , + "Counter": "0,1,2,3", "EventCode": "0x94", "EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common 
Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -3499,8 +4289,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; A= D - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9B", "EventName": "UNC_M2M_TxR_HORZ_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x1", @@ -3508,8 +4300,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; A= K - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9B", "EventName": "UNC_M2M_TxR_HORZ_STARVED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x2", @@ -3517,8 +4311,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; B= L - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9B", "EventName": "UNC_M2M_TxR_HORZ_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x4", @@ -3526,8 +4322,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; I= V - Bounce", + "Counter": "0,1,2,3", "EventCode": "0x9B", "EventName": "UNC_M2M_TxR_HORZ_STARVED.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x8", @@ -3535,8 +4333,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_M2M_TxR_VERT_ADS_USED.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -3544,8 +4344,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_M2M_TxR_VERT_ADS_USED.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -3553,8 +4355,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_M2M_TxR_VERT_ADS_USED.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -3562,8 +4366,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_M2M_TxR_VERT_ADS_USED.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x20", @@ -3571,8 +4377,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_M2M_TxR_VERT_ADS_USED.BL_AG0", + "Experimental": 
"1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -3580,8 +4388,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9C", "EventName": "UNC_M2M_TxR_VERT_ADS_USED.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -3589,8 +4399,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_M2M_TxR_VERT_BYPASS.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -3598,8 +4410,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_M2M_TxR_VERT_BYPASS.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -3607,8 +4421,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_M2M_TxR_VERT_BYPASS.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -3616,8 +4432,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_M2M_TxR_VERT_BYPASS.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x20", @@ -3625,8 +4443,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_M2M_TxR_VERT_BYPASS.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -3634,8 +4454,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_M2M_TxR_VERT_BYPASS.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -3643,8 +4465,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; IV", + "Counter": "0,1,2,3", "EventCode": "0x9E", "EventName": "UNC_M2M_TxR_VERT_BYPASS.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x8", @@ -3652,8 +4476,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the A= D ring. 
Some example include outbound requests, snoop requests, and snoop = responses.", "UMask": "0x1", @@ -3661,8 +4487,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 1 destined for the A= D ring. This is commonly used for outbound requests.", "UMask": "0x10", @@ -3670,8 +4498,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the A= K ring. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -3679,8 +4509,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 1 destined for the A= K ring.", "UMask": "0x20", @@ -3688,8 +4520,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the B= L ring. This is commonly used to send data from the cache to various desti= nations.", "UMask": "0x4", @@ -3697,8 +4531,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 1 destined for the B= L ring. This is commonly used for transferring writeback data to the cache= .", "UMask": "0x40", @@ -3706,8 +4542,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; IV"= , + "Counter": "0,1,2,3", "EventCode": "0x92", "EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the I= V ring. 
This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -3715,8 +4553,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = AD ring. Some example include outbound requests, snoop requests, and snoop= responses.", "UMask": "0x1", @@ -3724,8 +4564,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the = AD ring. This is commonly used for outbound requests.", "UMask": "0x10", @@ -3733,8 +4575,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = AK ring. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -3742,8 +4586,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the = AK ring.", "UMask": "0x20", @@ -3751,8 +4597,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = BL ring. This is commonly used to send data from the cache to various dest= inations.", "UMask": "0x4", @@ -3760,8 +4608,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the = BL ring. 
This is commonly used for transferring writeback data to the cach= e.", "UMask": "0x40", @@ -3769,8 +4619,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; IV", + "Counter": "0,1,2,3", "EventCode": "0x93", "EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = IV ring. This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -3778,8 +4630,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_M2M_TxR_VERT_INSERTS.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD r= ing. Some example include outbound requests, snoop requests, and snoop res= ponses.", "UMask": "0x1", @@ -3787,8 +4641,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_M2M_TxR_VERT_INSERTS.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD r= ing. This is commonly used for outbound requests.", "UMask": "0x10", @@ -3796,8 +4652,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_M2M_TxR_VERT_INSERTS.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK r= ing. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -3805,8 +4663,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_M2M_TxR_VERT_INSERTS.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK r= ing.", "UMask": "0x20", @@ -3814,8 +4674,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_M2M_TxR_VERT_INSERTS.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL r= ing. This is commonly used to send data from the cache to various destinat= ions.", "UMask": "0x4", @@ -3823,8 +4685,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_M2M_TxR_VERT_INSERTS.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. 
The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL r= ing. This is commonly used for transferring writeback data to the cache.", "UMask": "0x40", @@ -3832,8 +4696,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; IV", + "Counter": "0,1,2,3", "EventCode": "0x91", "EventName": "UNC_M2M_TxR_VERT_INSERTS.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV r= ing. This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -3841,8 +4707,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_M2M_TxR_VERT_NACK.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x1", @@ -3850,8 +4718,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_M2M_TxR_VERT_NACK.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x10", @@ -3859,8 +4729,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_M2M_TxR_VERT_NACK.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x2", @@ -3868,8 +4740,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_M2M_TxR_VERT_NACK.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x20", @@ -3877,8 +4751,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_M2M_TxR_VERT_NACK.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x4", @@ -3886,8 +4762,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_M2M_TxR_VERT_NACK.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x40", @@ -3895,8 +4773,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; IV", + "Counter": "0,1,2,3", "EventCode": "0x98", "EventName": "UNC_M2M_TxR_VERT_NACK.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x8", @@ -3904,8 +4784,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he AD ring. 
Some example include outbound requests, snoop requests, and sn= oop responses.", "UMask": "0x1", @@ -3913,8 +4795,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for t= he AD ring. This is commonly used for outbound requests.", "UMask": "0x10", @@ -3922,8 +4806,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he AK ring. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -3931,8 +4817,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for t= he AK ring.", "UMask": "0x20", @@ -3940,8 +4828,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he BL ring. This is commonly used to send data from the cache to various d= estinations.", "UMask": "0x4", @@ -3949,8 +4839,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for t= he BL ring. This is commonly used for transferring writeback data to the c= ache.", "UMask": "0x40", @@ -3958,8 +4850,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; IV", + "Counter": "0,1,2,3", "EventCode": "0x90", "EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he IV ring. This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -3967,8 +4861,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AD = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_M2M_TxR_VERT_STARVED.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. 
This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x1", @@ -3976,8 +4872,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AD = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_M2M_TxR_VERT_STARVED.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x10", @@ -3985,8 +4883,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AK = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_M2M_TxR_VERT_STARVED.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x2", @@ -3994,8 +4894,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AK = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_M2M_TxR_VERT_STARVED.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x20", @@ -4003,8 +4905,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; BL = - Agent 0", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_M2M_TxR_VERT_STARVED.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x4", @@ -4012,8 +4916,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; BL = - Agent 1", + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_M2M_TxR_VERT_STARVED.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x40", @@ -4021,8 +4927,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; IV"= , + "Counter": "0,1,2,3", "EventCode": "0x9A", "EventName": "UNC_M2M_TxR_VERT_STARVED.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x8", @@ -4030,8 +4938,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Down and Even", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "UNC_M2M_VERT_RING_AD_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. 
The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x4", @@ -4039,8 +4949,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Down and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "UNC_M2M_VERT_RING_AD_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x8", @@ -4048,8 +4960,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Up and Even", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "UNC_M2M_VERT_RING_AD_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x1", @@ -4057,8 +4971,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Up and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA6", "EventName": "UNC_M2M_VERT_RING_AD_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x2", @@ -4066,8 +4982,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Down and Even", + "Counter": "0,1,2,3", "EventCode": "0xA8", "EventName": "UNC_M2M_VERT_RING_AK_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x4", @@ -4075,8 +4993,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Down and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA8", "EventName": "UNC_M2M_VERT_RING_AK_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x8", @@ -4084,8 +5004,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Up and Even", + "Counter": "0,1,2,3", "EventCode": "0xA8", "EventName": "UNC_M2M_VERT_RING_AK_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x1", @@ -4093,8 +5015,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Up and Odd", + "Counter": "0,1,2,3", "EventCode": "0xA8", "EventName": "UNC_M2M_VERT_RING_AK_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. 
This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x2", @@ -4102,8 +5026,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Down and Even", + "Counter": "0,1,2,3", "EventCode": "0xAA", "EventName": "UNC_M2M_VERT_RING_BL_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x4", @@ -4111,8 +5037,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Down and Odd", + "Counter": "0,1,2,3", "EventCode": "0xAA", "EventName": "UNC_M2M_VERT_RING_BL_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x8", @@ -4120,8 +5048,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Up and Even", + "Counter": "0,1,2,3", "EventCode": "0xAA", "EventName": "UNC_M2M_VERT_RING_BL_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x1", @@ -4129,8 +5059,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Up and Odd", + "Counter": "0,1,2,3", "EventCode": "0xAA", "EventName": "UNC_M2M_VERT_RING_BL_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x2", @@ -4138,8 +5070,10 @@ }, { "BriefDescription": "Vertical IV Ring in Use; Down", + "Counter": "0,1,2,3", "EventCode": "0xAC", "EventName": "UNC_M2M_VERT_RING_IV_IN_USE.DN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l IV ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. There is only 1 IV ring. Therefore,= if one wants to monitor the Even ring, they should select both UP_EVEN and= DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_O= DD.", "UMask": "0x4", @@ -4147,8 +5081,10 @@ }, { "BriefDescription": "Vertical IV Ring in Use; Up", + "Counter": "0,1,2,3", "EventCode": "0xAC", "EventName": "UNC_M2M_VERT_RING_IV_IN_USE.UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l IV ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. There is only 1 IV ring. Therefore,= if one wants to monitor the Even ring, they should select both UP_EVEN and= DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_O= DD.", "UMask": "0x1", @@ -4156,179 +5092,223 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4D", "EventName": "UNC_M2M_WPQ_CYCLES_NO_REG_CREDITS.CHN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4D", "EventName": "UNC_M2M_WPQ_CYCLES_NO_REG_CREDITS.CHN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "This event is deprecated. 
Refer to new event UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4D", "EventName": "UNC_M2M_WPQ_CYCLES_NO_REG_CREDITS.CHN2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular; Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x4D", "EventName": "UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular; Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x4D", "EventName": "UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular; Channel 2", + "Counter": "0,1,2,3", "EventCode": "0x4D", "EventName": "UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special; Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x4E", "EventName": "UNC_M2M_WPQ_CYCLES_SPEC_CREDITS.CHN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special; Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x4E", "EventName": "UNC_M2M_WPQ_CYCLES_SPEC_CREDITS.CHN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special; Channel 2", + "Counter": "0,1,2,3", "EventCode": "0x4E", "EventName": "UNC_M2M_WPQ_CYCLES_SPEC_CREDITS.CHN2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Full; Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_FULL.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Full; Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_FULL.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Full; Channel 2", + "Counter": "0,1,2,3", "EventCode": "0x4A", "EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_FULL.CH2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Not Empty; Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_NE.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Not Empty; Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_NE.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Cycles Not Empty; Channel 2", + "Counter": "0,1,2,3", "EventCode": "0x4B", "EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_NE.CH2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Inserts; Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_M2M_WRITE_TRACKER_INSERTS.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Inserts; Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_M2M_WRITE_TRACKER_INSERTS.CH1", + "Experimental": "1", "PerPkg": "1", "UMask":
"0x2", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Inserts; Channel 2", + "Counter": "0,1,2,3", "EventCode": "0x61", "EventName": "UNC_M2M_WRITE_TRACKER_INSERTS.CH2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Occupancy; Channel 0", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_M2M_WRITE_TRACKER_OCCUPANCY.CH0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Occupancy; Channel 1", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_M2M_WRITE_TRACKER_OCCUPANCY.CH1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M2M" }, { "BriefDescription": "Write Tracker Occupancy; Channel 2", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_M2M_WRITE_TRACKER_OCCUPANCY.CH2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2", "EventCode": "0x80", "EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -4336,8 +5316,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2", "EventCode": "0x80", "EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -4345,8 +5327,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2", "EventCode": "0x80", "EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ -4354,8 +5338,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2", "EventCode": "0x80", "EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -4363,8 +5349,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2", "EventCode": "0x80", "EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -4372,8 +5360,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgres= s 5", + "Counter": "0,1,2", "EventCode": "0x80", "EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x20", @@ -4381,8 +5371,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2", "EventCode": "0x82", "EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -4390,8 +5382,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 1", + "Counter": "0,1,2", "EventCode": "0x82", "EventName": 
"UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -4399,8 +5393,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 2", + "Counter": "0,1,2", "EventCode": "0x82", "EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -4408,8 +5404,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 3", + "Counter": "0,1,2", "EventCode": "0x82", "EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -4417,8 +5415,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 4", + "Counter": "0,1,2", "EventCode": "0x82", "EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -4426,8 +5426,10 @@ }, { "BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgre= ss 5", + "Counter": "0,1,2", "EventCode": "0x82", "EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 AD credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -4435,8 +5437,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2", "EventCode": "0x88", "EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -4444,8 +5448,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2", "EventCode": "0x88", "EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -4453,8 +5459,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2", "EventCode": "0x88", "EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ -4462,8 +5470,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2", "EventCode": "0x88", "EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -4471,8 +5481,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2", "EventCode": "0x88", "EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -4480,8 +5492,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgres= s 5", + "Counter": "0,1,2", "EventCode": "0x88", "EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR5", + 
"Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x20", @@ -4489,8 +5503,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2", "EventCode": "0x8A", "EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -4498,8 +5514,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 1", + "Counter": "0,1,2", "EventCode": "0x8A", "EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -4507,8 +5525,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 2", + "Counter": "0,1,2", "EventCode": "0x8A", "EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -4516,8 +5536,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 3", + "Counter": "0,1,2", "EventCode": "0x8A", "EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -4525,8 +5547,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 4", + "Counter": "0,1,2", "EventCode": "0x8A", "EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -4534,8 +5558,10 @@ }, { "BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgre= ss 5", + "Counter": "0,1,2", "EventCode": "0x8A", "EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 0 BL credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -4543,8 +5569,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2", "EventCode": "0x84", "EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -4552,8 +5580,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2", "EventCode": "0x84", "EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -4561,8 +5591,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2", "EventCode": "0x84", "EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ -4570,8 +5602,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2", "EventCode": "0x84", "EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", 
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -4579,8 +5613,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2", "EventCode": "0x84", "EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -4588,8 +5624,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgres= s 5", + "Counter": "0,1,2", "EventCode": "0x84", "EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits acquired in= a given cycle, per transgress.", "UMask": "0x20", @@ -4597,8 +5635,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 0", + "Counter": "0,1,2", "EventCode": "0x86", "EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -4606,8 +5646,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 1", + "Counter": "0,1,2", "EventCode": "0x86", "EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -4615,8 +5657,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 2", + "Counter": "0,1,2", "EventCode": "0x86", "EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -4624,8 +5668,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 3", + "Counter": "0,1,2", "EventCode": "0x86", "EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -4633,8 +5679,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 4", + "Counter": "0,1,2", "EventCode": "0x86", "EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -4642,8 +5690,10 @@ }, { "BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgre= ss 5", + "Counter": "0,1,2", "EventCode": "0x86", "EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 AD credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -4651,8 +5701,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 0", + "Counter": "0", "EventCode": "0x8E", "EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x1", @@ -4660,8 +5712,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 1", + "Counter": "0", "EventCode": "0x8E", "EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL 
credits in use in a= given cycle, per transgress", "UMask": "0x2", @@ -4669,8 +5723,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 2", + "Counter": "0", "EventCode": "0x8E", "EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x4", @@ -4678,8 +5734,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 3", + "Counter": "0", "EventCode": "0x8E", "EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x8", @@ -4687,8 +5745,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 4", + "Counter": "0", "EventCode": "0x8E", "EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x10", @@ -4696,8 +5756,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgre= ss 5", + "Counter": "0", "EventCode": "0x8E", "EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits in use in a= given cycle, per transgress", "UMask": "0x20", @@ -4705,8 +5767,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 0", + "Counter": "0,1,2", "EventCode": "0x8C", "EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x1", @@ -4714,8 +5778,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 1", + "Counter": "0,1,2", "EventCode": "0x8C", "EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x2", @@ -4723,8 +5789,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 2", + "Counter": "0,1,2", "EventCode": "0x8C", "EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x4", @@ -4732,8 +5800,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 3", + "Counter": "0,1,2", "EventCode": "0x8C", "EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x8", @@ -4741,8 +5811,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 4", + "Counter": "0,1,2", "EventCode": "0x8C", "EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per transgress.", "UMask": "0x10", @@ -4750,8 +5822,10 @@ }, { "BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgres= s 5", + "Counter": "0,1,2", "EventCode": "0x8C", "EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CMS Agent 1 BL credits acquired in= a given cycle, per 
transgress.", "UMask": "0x20", @@ -4759,8 +5833,10 @@ }, { "BriefDescription": "CBox AD Credits Empty; Requests", + "Counter": "0,1,2", "EventCode": "0x22", "EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to Cbox on the = AD Ring (covers higher CBoxes)", "UMask": "0x4", @@ -4768,8 +5844,10 @@ }, { "BriefDescription": "CBox AD Credits Empty; Snoops", + "Counter": "0,1,2", "EventCode": "0x22", "EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to Cbox on the = AD Ring (covers higher CBoxes)", "UMask": "0x8", @@ -4777,8 +5855,10 @@ }, { "BriefDescription": "CBox AD Credits Empty; VNA Messages", + "Counter": "0,1,2", "EventCode": "0x22", "EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.VNA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to Cbox on the = AD Ring (covers higher CBoxes)", "UMask": "0x1", @@ -4786,8 +5866,10 @@ }, { "BriefDescription": "CBox AD Credits Empty; Writebacks", + "Counter": "0,1,2", "EventCode": "0x22", "EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to Cbox on the = AD Ring (covers higher CBoxes)", "UMask": "0x2", @@ -4795,39 +5877,49 @@ }, { "BriefDescription": "Number of uclks in domain", + "Counter": "0,1,2", "EventCode": "0x1", "EventName": "UNC_M3UPI_CLOCKTICKS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of uclks in the M3 uclk do= main. This could be slightly different than the count in the Ubox because = of enable/freeze delays. However, because the M3 is close to the Ubox, the= y generally should not diverge by more than a handful of cycles.", "Unit": "M3UPI" }, { "BriefDescription": "CMS Clockticks", + "Counter": "0,1,2", "EventCode": "0xC0", "EventName": "UNC_M3UPI_CMS_CLOCKTICKS", + "Experimental": "1", "PerPkg": "1", "Unit": "M3UPI" }, { "BriefDescription": "D2C Sent", + "Counter": "0,1,2", "EventCode": "0x2B", "EventName": "UNC_M3UPI_D2C_SENT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases BL sends direct to core", "Unit": "M3UPI" }, { "BriefDescription": "D2U Sent", + "Counter": "0,1,2", "EventCode": "0x2A", "EventName": "UNC_M3UPI_D2U_SENT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cases where SMI3 sends D2U command", "Unit": "M3UPI" }, { "BriefDescription": "Egress Blocking due to Ordering requirements;= Down", + "Counter": "0,1,2", "EventCode": "0xAE", "EventName": "UNC_M3UPI_EGRESS_ORDERING.IV_SNOOPGO_DN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of cycles IV was blocked in th= e TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x4", @@ -4835,8 +5927,10 @@ }, { "BriefDescription": "Egress Blocking due to Ordering requirements;= Up", + "Counter": "0,1,2", "EventCode": "0xAE", "EventName": "UNC_M3UPI_EGRESS_ORDERING.IV_SNOOPGO_UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of cycles IV was blocked in th= e TGR Egress due to SNP/GO Ordering requirements", "UMask": "0x1", @@ -4844,8 +5938,10 @@ }, { "BriefDescription": "FaST wire asserted; Horizontal", + "Counter": "0,1,2", "EventCode": "0xA5", "EventName": "UNC_M3UPI_FAST_ASSERTED.HORZ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles either the local= or incoming distress signals are asserted. 
Incoming distress includes up,= dn and across.", "UMask": "0x2", @@ -4853,8 +5949,10 @@ }, { "BriefDescription": "FaST wire asserted; Vertical", + "Counter": "0,1,2", "EventCode": "0xA5", "EventName": "UNC_M3UPI_FAST_ASSERTED.VERT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles either the local= or incoming distress signals are asserted. Incoming distress includes up,= dn and across.", "UMask": "0x1", @@ -4862,8 +5960,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Left and Even", + "Counter": "0,1,2", "EventCode": "0xA7", "EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x1", @@ -4871,8 +5971,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Left and Odd", + "Counter": "0,1,2", "EventCode": "0xA7", "EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x2", @@ -4880,8 +5982,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Right and Even", + "Counter": "0,1,2", "EventCode": "0xA7", "EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x4", @@ -4889,8 +5993,10 @@ }, { "BriefDescription": "Horizontal AD Ring In Use; Right and Odd", + "Counter": "0,1,2", "EventCode": "0xA7", "EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AD ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. We really have two rings -- a cloc= kwise ring and a counter-clockwise ring. On the left side of the ring, the= UP direction is on the clockwise ring and DN is on the counter-clockwise r= ing. On the right side of the ring, this is reversed. The first half of t= he CBos are on the left side of the ring, and the 2nd half are on the right= side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD= is NOT the same ring as CBo 2 UP AD because they are on opposite sides of = the ring.", "UMask": "0x8", @@ -4898,8 +6004,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Left and Even", + "Counter": "0,1,2", "EventCode": "0xA9", "EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x1", @@ -4907,8 +6015,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Left and Odd", + "Counter": "0,1,2", "EventCode": "0xA9", "EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x2", @@ -4916,8 +6026,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Right and Even", + "Counter": "0,1,2", "EventCode": "0xA9", "EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. 
This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x4", @@ -4925,8 +6037,10 @@ }, { "BriefDescription": "Horizontal AK Ring In Use; Right and Odd", + "Counter": "0,1,2", "EventCode": "0xA9", "EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal AK ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clockw= ise ring and a counter-clockwise ring. On the left side of the ring, the U= P direction is on the clockwise ring and DN is on the counter-clockwise rin= g. On the right side of the ring, this is reversed. The first half of the= CBos are on the left side of the ring, and the 2nd half are on the right s= ide of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD i= s NOT the same ring as CBo 2 UP AD because they are on opposite sides of th= e ring.", "UMask": "0x8", @@ -4934,8 +6048,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Left and Even", + "Counter": "0,1,2", "EventCode": "0xAB", "EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.LEFT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x1", @@ -4943,8 +6059,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Left and Odd", + "Counter": "0,1,2", "EventCode": "0xAB", "EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.LEFT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x2", @@ -4952,8 +6070,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Right and Even", + "Counter": "0,1,2", "EventCode": "0xAB", "EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.RIGHT_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x4", @@ -4961,8 +6081,10 @@ }, { "BriefDescription": "Horizontal BL Ring in Use; Right and Odd", + "Counter": "0,1,2", "EventCode": "0xAB", "EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.RIGHT_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal BL ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop.We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x8", @@ -4970,8 +6092,10 @@ }, { "BriefDescription": "Horizontal IV Ring in Use; Left", + "Counter": "0,1,2", "EventCode": "0xAD", "EventName": "UNC_M3UPI_HORZ_RING_IV_IN_USE.LEFT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal IV ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. There is only 1 IV ring. Therefor= e, if one wants to monitor the Even ring, they should select both UP_EVEN a= nd DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN= _ODD.", "UMask": "0x1", @@ -4979,8 +6103,10 @@ }, { "BriefDescription": "Horizontal IV Ring in Use; Right", + "Counter": "0,1,2", "EventCode": "0xAD", "EventName": "UNC_M3UPI_HORZ_RING_IV_IN_USE.RIGHT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Horizon= tal IV ring is being used at this ring stop. This includes when packets ar= e passing by and when packets are being sunk, but does not include when pac= kets are being sent from the ring stop. There is only 1 IV ring. Therefor= e, if one wants to monitor the Even ring, they should select both UP_EVEN a= nd DN_EVEN. 
To monitor the Odd ring, they should select both UP_ODD and DN_ODD.", "UMask": "0x4", @@ -4988,8 +6114,10 @@ }, { "BriefDescription": "M2 BL Credits Empty; IIO0 and IIO1 share the same ring destination. (1 VN0 credit only)", + "Counter": "0,1,2", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO0_IIO1_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No vn0 and vna credits available to send to M2", "UMask": "0x1", @@ -4997,8 +6125,10 @@ }, { "BriefDescription": "M2 BL Credits Empty; IIO2", + "Counter": "0,1,2", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO2_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No vn0 and vna credits available to send to M2", "UMask": "0x2", @@ -5006,8 +6136,10 @@ }, { "BriefDescription": "M2 BL Credits Empty; IIO3", + "Counter": "0,1,2", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO3_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No vn0 and vna credits available to send to M2", "UMask": "0x4", @@ -5015,8 +6147,10 @@ }, { "BriefDescription": "M2 BL Credits Empty; IIO4", + "Counter": "0,1,2", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO4_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No vn0 and vna credits available to send to M2", "UMask": "0x8", @@ -5024,8 +6158,10 @@ }, { "BriefDescription": "M2 BL Credits Empty; IIO5", + "Counter": "0,1,2", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO5_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No vn0 and vna credits available to send to M2", "UMask": "0x10", @@ -5033,8 +6169,10 @@ }, { "BriefDescription": "M2 BL Credits Empty; All IIO targets for NCS are in single mask.
ORs them together", + "Counter": "0,1,2", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No vn0 and vna credits available to send to = M2", "UMask": "0x20", @@ -5042,8 +6180,10 @@ }, { "BriefDescription": "M2 BL Credits Empty; Selected M2p BL NCS cred= its", + "Counter": "0,1,2", "EventCode": "0x23", "EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.NCS_SEL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No vn0 and vna credits available to send to = M2", "UMask": "0x40", @@ -5051,8 +6191,10 @@ }, { "BriefDescription": "Multi Slot Flit Received; AD - Slot 0", + "Counter": "0,1,2", "EventCode": "0x3E", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi slot flit received - S0, S1 and/or S2 = populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x1", @@ -5060,8 +6202,10 @@ }, { "BriefDescription": "Multi Slot Flit Received; AD - Slot 1", + "Counter": "0,1,2", "EventCode": "0x3E", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi slot flit received - S0, S1 and/or S2 = populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x2", @@ -5069,8 +6213,10 @@ }, { "BriefDescription": "Multi Slot Flit Received; AD - Slot 2", + "Counter": "0,1,2", "EventCode": "0x3E", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi slot flit received - S0, S1 and/or S2 = populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x4", @@ -5078,8 +6224,10 @@ }, { "BriefDescription": "Multi Slot Flit Received; AK - Slot 0", + "Counter": "0,1,2", "EventCode": "0x3E", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AK_SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi slot flit received - S0, S1 and/or S2 = populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x10", @@ -5087,8 +6235,10 @@ }, { "BriefDescription": "Multi Slot Flit Received; AK - Slot 2", + "Counter": "0,1,2", "EventCode": "0x3E", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AK_SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi slot flit received - S0, S1 and/or S2 = populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x20", @@ -5096,8 +6246,10 @@ }, { "BriefDescription": "Multi Slot Flit Received; BL - Slot 0", + "Counter": "0,1,2", "EventCode": "0x3E", "EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.BL_SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Multi slot flit received - S0, S1 and/or S2 = populated (can use AK S0/S1 masks for AK allocations)", "UMask": "0x8", @@ -5105,8 +6257,10 @@ }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; AD", + "Counter": "0,1,2", "EventCode": "0xA1", "EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x1", @@ -5114,8 +6268,10 @@ }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; AK", + "Counter": "0,1,2", "EventCode": "0xA1", "EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.AK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x2", @@ -5123,8 +6279,10 @@ }, { "BriefDescription": "Messages that bounced on the 
Horizontal Ring.= ; BL", + "Counter": "0,1,2", "EventCode": "0xA1", "EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x4", @@ -5132,8 +6290,10 @@ }, { "BriefDescription": "Messages that bounced on the Horizontal Ring.= ; IV", + "Counter": "0,1,2", "EventCode": "0xA1", "EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Horizontal ring that were bounced, by ring type.", "UMask": "0x8", @@ -5141,8 +6301,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = AD", + "Counter": "0,1,2", "EventCode": "0xA0", "EventName": "UNC_M3UPI_RING_BOUNCES_VERT.AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x1", @@ -5150,8 +6312,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = Acknowledgements to core", + "Counter": "0,1,2", "EventCode": "0xA0", "EventName": "UNC_M3UPI_RING_BOUNCES_VERT.AK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x2", @@ -5159,8 +6323,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = Data Responses to core", + "Counter": "0,1,2", "EventCode": "0xA0", "EventName": "UNC_M3UPI_RING_BOUNCES_VERT.BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x4", @@ -5168,8 +6334,10 @@ }, { "BriefDescription": "Messages that bounced on the Vertical Ring.; = Snoops of processor's cache.", + "Counter": "0,1,2", "EventCode": "0xA0", "EventName": "UNC_M3UPI_RING_BOUNCES_VERT.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles incoming messages from the = Vertical ring that were bounced, by ring type.", "UMask": "0x8", @@ -5177,87 +6345,109 @@ }, { "BriefDescription": "Sink Starvation on Horizontal Ring; AD", + "Counter": "0,1,2", "EventCode": "0xA3", "EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.AD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; AK", + "Counter": "0,1,2", "EventCode": "0xA3", "EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M3UPI" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; Acknowled= gements to Agent 1", + "Counter": "0,1,2", "EventCode": "0xA3", "EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.AK_AG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M3UPI" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; BL", + "Counter": "0,1,2", "EventCode": "0xA3", "EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M3UPI" }, { "BriefDescription": "Sink Starvation on Horizontal Ring; IV", + "Counter": "0,1,2", "EventCode": "0xA3", "EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M3UPI" }, { "BriefDescription": "Sink Starvation on Vertical Ring; AD", + "Counter": "0,1,2", "EventCode": "0xA2", "EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.AD", + 
"Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "Sink Starvation on Vertical Ring; Acknowledge= ments to core", + "Counter": "0,1,2", "EventCode": "0xA2", "EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.AK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M3UPI" }, { "BriefDescription": "Sink Starvation on Vertical Ring; Data Respon= ses to core", + "Counter": "0,1,2", "EventCode": "0xA2", "EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.BL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M3UPI" }, { "BriefDescription": "Sink Starvation on Vertical Ring; Snoops of p= rocessor's cache.", + "Counter": "0,1,2", "EventCode": "0xA2", "EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.IV", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M3UPI" }, { "BriefDescription": "Source Throttle", + "Counter": "0,1,2", "EventCode": "0xA4", "EventName": "UNC_M3UPI_RING_SRC_THRTL", + "Experimental": "1", "PerPkg": "1", "Unit": "M3UPI" }, { "BriefDescription": "Lost Arb for VN0; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x4B", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message requested but lost arbitration; = Home (REQ) messages on AD. REQ is generally used to send requests, request= responses, and snoop responses.", "UMask": "0x1", @@ -5265,8 +6455,10 @@ }, { "BriefDescription": "Lost Arb for VN0; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x4B", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message requested but lost arbitration; = Response (RSP) messages on AD. RSP packets are used to transmit a variety = of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -5274,8 +6466,10 @@ }, { "BriefDescription": "Lost Arb for VN0; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x4B", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message requested but lost arbitration; = Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -5283,8 +6477,10 @@ }, { "BriefDescription": "Lost Arb for VN0; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x4B", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message requested but lost arbitration; = Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to tran= smit data without coherency. For example, non-coherent read data returns."= , "UMask": "0x20", @@ -5292,8 +6488,10 @@ }, { "BriefDescription": "Lost Arb for VN0; NCS on BL", + "Counter": "0,1,2", "EventCode": "0x4B", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message requested but lost arbitration; = Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -5301,8 +6499,10 @@ }, { "BriefDescription": "Lost Arb for VN0; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x4B", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message requested but lost arbitration; = Response (RSP) messages on BL. 
RSP packets are used to transmit a variety o= f protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -5310,8 +6510,10 @@ }, { "BriefDescription": "Lost Arb for VN0; WB on BL", + "Counter": "0,1,2", "EventCode": "0x4B", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message requested but lost arbitration; = Data Response (WB) messages on BL. WB is generally used to transmit data w= ith coherency. For example, remote reads and writes, or cache to cache tra= nsfers will transmit their data using WB.", "UMask": "0x10", @@ -5319,8 +6521,10 @@ }, { "BriefDescription": "Lost Arb for VN1; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x4C", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message requested but lost arbitration; = Home (REQ) messages on AD. REQ is generally used to send requests, request= responses, and snoop responses.", "UMask": "0x1", @@ -5328,8 +6532,10 @@ }, { "BriefDescription": "Lost Arb for VN1; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x4C", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message requested but lost arbitration; = Response (RSP) messages on AD. RSP packets are used to transmit a variety = of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -5337,8 +6543,10 @@ }, { "BriefDescription": "Lost Arb for VN1; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x4C", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message requested but lost arbitration; = Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -5346,8 +6554,10 @@ }, { "BriefDescription": "Lost Arb for VN1; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x4C", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message requested but lost arbitration; = Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to tran= smit data without coherency. For example, non-coherent read data returns."= , "UMask": "0x20", @@ -5355,8 +6565,10 @@ }, { "BriefDescription": "Lost Arb for VN1; NCS on BL", + "Counter": "0,1,2", "EventCode": "0x4C", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message requested but lost arbitration; = Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -5364,8 +6576,10 @@ }, { "BriefDescription": "Lost Arb for VN1; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x4C", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message requested but lost arbitration; = Response (RSP) messages on BL. RSP packets are used to transmit a variety o= f protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -5373,8 +6587,10 @@ }, { "BriefDescription": "Lost Arb for VN1; WB on BL", + "Counter": "0,1,2", "EventCode": "0x4C", "EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message requested but lost arbitration; = Data Response (WB) messages on BL. WB is generally used to transmit data w= ith coherency. 
For example, remote reads and writes, or cache to cache tra= nsfers will transmit their data using WB.", "UMask": "0x10", @@ -5382,8 +6598,10 @@ }, { "BriefDescription": "Arb Miscellaneous; AD, BL Parallel Win", + "Counter": "0,1,2", "EventCode": "0x4D", "EventName": "UNC_M3UPI_RxC_ARB_MISC.ADBL_PARALLEL_WIN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD and BL messages won arbitration concurren= tly / in parallel", "UMask": "0x40", @@ -5391,8 +6609,10 @@ }, { "BriefDescription": "Arb Miscellaneous; No Progress on Pending AD = VN0", + "Counter": "0,1,2", "EventCode": "0x4D", "EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_AD_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arbitration stage made no progress on pendin= g ad vn0 messages because slotting stage cannot accept new message", "UMask": "0x4", @@ -5400,8 +6620,10 @@ }, { "BriefDescription": "Arb Miscellaneous; No Progress on Pending AD = VN1", + "Counter": "0,1,2", "EventCode": "0x4D", "EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_AD_VN1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arbitration stage made no progress on pendin= g ad vn1 messages because slotting stage cannot accept new message", "UMask": "0x8", @@ -5409,8 +6631,10 @@ }, { "BriefDescription": "Arb Miscellaneous; No Progress on Pending BL = VN0", + "Counter": "0,1,2", "EventCode": "0x4D", "EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_BL_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arbitration stage made no progress on pendin= g bl vn0 messages because slotting stage cannot accept new message", "UMask": "0x10", @@ -5418,8 +6642,10 @@ }, { "BriefDescription": "Arb Miscellaneous; No Progress on Pending BL = VN1", + "Counter": "0,1,2", "EventCode": "0x4D", "EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_BL_VN1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Arbitration stage made no progress on pendin= g bl vn1 messages because slotting stage cannot accept new message", "UMask": "0x20", @@ -5427,8 +6653,10 @@ }, { "BriefDescription": "Arb Miscellaneous; Parallel Bias to VN0", + "Counter": "0,1,2", "EventCode": "0x4D", "EventName": "UNC_M3UPI_RxC_ARB_MISC.PAR_BIAS_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0/VN1 arbiter gave second, consecutive win= to vn0, delaying vn1 win, because vn0 offered parallel ad/bl", "UMask": "0x1", @@ -5436,8 +6664,10 @@ }, { "BriefDescription": "Arb Miscellaneous; Parallel Bias to VN1", + "Counter": "0,1,2", "EventCode": "0x4D", "EventName": "UNC_M3UPI_RxC_ARB_MISC.PAR_BIAS_VN1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0/VN1 arbiter gave second, consecutive win= to vn1, delaying vn0 win, because vn1 offered parallel ad/bl", "UMask": "0x2", @@ -5445,8 +6675,10 @@ }, { "BriefDescription": "Can't Arb for VN0; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message was not able to request arbitrat= ion while some other message won arbitration; Home (REQ) messages on AD. 
R= EQ is generally used to send requests, request responses, and snoop respons= es.", "UMask": "0x1", @@ -5454,8 +6686,10 @@ }, { "BriefDescription": "Can't Arb for VN0; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message was not able to request arbitrat= ion while some other message won arbitration; Response (RSP) messages on AD= . RSP packets are used to transmit a variety of protocol flits including g= rants and completions (CMP).", "UMask": "0x4", @@ -5463,8 +6697,10 @@ }, { "BriefDescription": "Can't Arb for VN0; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message was not able to request arbitrat= ion while some other message won arbitration; Snoops (SNP) messages on AD. = SNP is used for outgoing snoops.", "UMask": "0x2", @@ -5472,8 +6708,10 @@ }, { "BriefDescription": "Can't Arb for VN0; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message was not able to request arbitrat= ion while some other message won arbitration; Non-Coherent Broadcast (NCB) = messages on BL. NCB is generally used to transmit data without coherency. = For example, non-coherent read data returns.", "UMask": "0x20", @@ -5481,8 +6719,10 @@ }, { "BriefDescription": "Can't Arb for VN0; NCS on BL", + "Counter": "0,1,2", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message was not able to request arbitrat= ion while some other message won arbitration; Non-Coherent Standard (NCS) m= essages on BL.", "UMask": "0x40", @@ -5490,8 +6730,10 @@ }, { "BriefDescription": "Can't Arb for VN0; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message was not able to request arbitrat= ion while some other message won arbitration; Response (RSP) messages on BL= . RSP packets are used to transmit a variety of protocol flits including gr= ants and completions (CMP).", "UMask": "0x8", @@ -5499,8 +6741,10 @@ }, { "BriefDescription": "Can't Arb for VN0; WB on BL", + "Counter": "0,1,2", "EventCode": "0x49", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message was not able to request arbitrat= ion while some other message won arbitration; Data Response (WB) messages o= n BL. WB is generally used to transmit data with coherency. For example, = remote reads and writes, or cache to cache transfers will transmit their da= ta using WB.", "UMask": "0x10", @@ -5508,8 +6752,10 @@ }, { "BriefDescription": "Can't Arb for VN1; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x4A", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message was not able to request arbitrat= ion while some other message won arbitration; Home (REQ) messages on AD. 
R= EQ is generally used to send requests, request responses, and snoop respons= es.", "UMask": "0x1", @@ -5517,8 +6763,10 @@ }, { "BriefDescription": "Can't Arb for VN1; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x4A", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message was not able to request arbitrat= ion while some other message won arbitration; Response (RSP) messages on AD= . RSP packets are used to transmit a variety of protocol flits including g= rants and completions (CMP).", "UMask": "0x4", @@ -5526,8 +6774,10 @@ }, { "BriefDescription": "Can't Arb for VN1; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x4A", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message was not able to request arbitrat= ion while some other message won arbitration; Snoops (SNP) messages on AD. = SNP is used for outgoing snoops.", "UMask": "0x2", @@ -5535,8 +6785,10 @@ }, { "BriefDescription": "Can't Arb for VN1; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x4A", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message was not able to request arbitrat= ion while some other message won arbitration; Non-Coherent Broadcast (NCB) = messages on BL. NCB is generally used to transmit data without coherency. = For example, non-coherent read data returns.", "UMask": "0x20", @@ -5544,8 +6796,10 @@ }, { "BriefDescription": "Can't Arb for VN1; NCS on BL", + "Counter": "0,1,2", "EventCode": "0x4A", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message was not able to request arbitrat= ion while some other message won arbitration; Non-Coherent Standard (NCS) m= essages on BL.", "UMask": "0x40", @@ -5553,8 +6807,10 @@ }, { "BriefDescription": "Can't Arb for VN1; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x4A", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message was not able to request arbitrat= ion while some other message won arbitration; Response (RSP) messages on BL= . RSP packets are used to transmit a variety of protocol flits including gr= ants and completions (CMP).", "UMask": "0x8", @@ -5562,8 +6818,10 @@ }, { "BriefDescription": "Can't Arb for VN1; WB on BL", + "Counter": "0,1,2", "EventCode": "0x4A", "EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message was not able to request arbitrat= ion while some other message won arbitration; Data Response (WB) messages o= n BL. WB is generally used to transmit data with coherency. For example, = remote reads and writes, or cache to cache transfers will transmit their da= ta using WB.", "UMask": "0x10", @@ -5571,8 +6829,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Home (REQ) messages on AD. 
REQ i= s generally used to send requests, request responses, and snoop responses."= , "UMask": "0x1", @@ -5580,8 +6840,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Response (RSP) messages on AD. R= SP packets are used to transmit a variety of protocol flits including grant= s and completions (CMP).", "UMask": "0x4", @@ -5589,8 +6851,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Snoops (SNP) messages on AD. SNP= is used for outgoing snoops.", "UMask": "0x2", @@ -5598,8 +6862,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Non-Coherent Broadcast (NCB) mess= ages on BL. NCB is generally used to transmit data without coherency. For= example, non-coherent read data returns.", "UMask": "0x20", @@ -5607,8 +6873,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0; NCS on BL", + "Counter": "0,1,2", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Non-Coherent Standard (NCS) messa= ges on BL.", "UMask": "0x40", @@ -5616,8 +6884,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Response (RSP) messages on BL. RS= P packets are used to transmit a variety of protocol flits including grants= and completions (CMP).", "UMask": "0x8", @@ -5625,8 +6895,10 @@ }, { "BriefDescription": "No Credits to Arb for VN0; WB on BL", + "Counter": "0,1,2", "EventCode": "0x47", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Data Response (WB) messages on BL= . WB is generally used to transmit data with coherency. For example, remo= te reads and writes, or cache to cache transfers will transmit their data u= sing WB.", "UMask": "0x10", @@ -5634,8 +6906,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Home (REQ) messages on AD. 
REQ i= s generally used to send requests, request responses, and snoop responses."= , "UMask": "0x1", @@ -5643,8 +6917,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Response (RSP) messages on AD. R= SP packets are used to transmit a variety of protocol flits including grant= s and completions (CMP).", "UMask": "0x4", @@ -5652,8 +6928,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Snoops (SNP) messages on AD. SNP= is used for outgoing snoops.", "UMask": "0x2", @@ -5661,8 +6939,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Non-Coherent Broadcast (NCB) mess= ages on BL. NCB is generally used to transmit data without coherency. For= example, non-coherent read data returns.", "UMask": "0x20", @@ -5670,8 +6950,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1; NCS on BL", + "Counter": "0,1,2", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Non-Coherent Standard (NCS) messa= ges on BL.", "UMask": "0x40", @@ -5679,8 +6961,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Response (RSP) messages on BL. RS= P packets are used to transmit a variety of protocol flits including grants= and completions (CMP).", "UMask": "0x8", @@ -5688,8 +6972,10 @@ }, { "BriefDescription": "No Credits to Arb for VN1; WB on BL", + "Counter": "0,1,2", "EventCode": "0x48", "EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN1 message is blocked from requesting arbit= ration due to lack of remote UPI credits; Data Response (WB) messages on BL= . WB is generally used to transmit data with coherency. 
For example, remo= te reads and writes, or cache to cache transfers will transmit their data u= sing WB.", "UMask": "0x10", @@ -5697,8 +6983,10 @@ }, { "BriefDescription": "Ingress Queue Bypasses; AD to Slot 0 on BL Ar= b", + "Counter": "0,1,2", "EventCode": "0x40", "EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S0_BL_ARB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times message is bypassed around t= he Ingress Queue; AD is taking bypass to slot 0 of independent flit while b= l message is in arbitration", "UMask": "0x2", @@ -5706,8 +6994,10 @@ }, { "BriefDescription": "Ingress Queue Bypasses; AD to Slot 0 on Idle"= , + "Counter": "0,1,2", "EventCode": "0x40", "EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S0_IDLE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times message is bypassed around t= he Ingress Queue; AD is taking bypass to slot 0 of independent flit while p= ipeline is idle", "UMask": "0x1", @@ -5715,8 +7005,10 @@ }, { "BriefDescription": "Ingress Queue Bypasses; AD + BL to Slot 1", + "Counter": "0,1,2", "EventCode": "0x40", "EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S1_BL_SLOT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times message is bypassed around t= he Ingress Queue; AD is taking bypass to flit slot 1 while merging with bl = message in same flit", "UMask": "0x4", @@ -5724,8 +7016,10 @@ }, { "BriefDescription": "Ingress Queue Bypasses; AD + BL to Slot 2", + "Counter": "0,1,2", "EventCode": "0x40", "EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S2_BL_SLOT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times message is bypassed around t= he Ingress Queue; AD is taking bypass to flit slot 2 while merging with bl = message in same flit", "UMask": "0x8", @@ -5733,8 +7027,10 @@ }, { "BriefDescription": "VN0 message lost contest for flit; REQ on AD"= , + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_COLLISION_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN0 packets lost t= he contest for Flit Slot 0.; Home (REQ) messages on AD. REQ is generally u= sed to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -5742,8 +7038,10 @@ }, { "BriefDescription": "VN0 message lost contest for flit; RSP on AD"= , + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_COLLISION_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN0 packets lost t= he contest for Flit Slot 0.; Response (RSP) messages on AD. RSP packets ar= e used to transmit a variety of protocol flits including grants and complet= ions (CMP).", "UMask": "0x4", @@ -5751,8 +7049,10 @@ }, { "BriefDescription": "VN0 message lost contest for flit; SNP on AD"= , + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_COLLISION_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN0 packets lost t= he contest for Flit Slot 0.; Snoops (SNP) messages on AD. SNP is used for = outgoing snoops.", "UMask": "0x2", @@ -5760,8 +7060,10 @@ }, { "BriefDescription": "VN0 message lost contest for flit; NCB on BL"= , + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_COLLISION_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN0 packets lost t= he contest for Flit Slot 0.; Non-Coherent Broadcast (NCB) messages on BL. = NCB is generally used to transmit data without coherency. 
For example, non= -coherent read data returns.", "UMask": "0x20", @@ -5769,8 +7071,10 @@ }, { "BriefDescription": "VN0 message lost contest for flit; NCS on BL"= , + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_COLLISION_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN0 packets lost t= he contest for Flit Slot 0.; Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -5778,8 +7082,10 @@ }, { "BriefDescription": "VN0 message lost contest for flit; RSP on BL"= , + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_COLLISION_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN0 packets lost t= he contest for Flit Slot 0.; Response (RSP) messages on BL. RSP packets are= used to transmit a variety of protocol flits including grants and completi= ons (CMP).", "UMask": "0x8", @@ -5787,8 +7093,10 @@ }, { "BriefDescription": "VN0 message lost contest for flit; WB on BL", + "Counter": "0,1,2", "EventCode": "0x50", "EventName": "UNC_M3UPI_RxC_COLLISION_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN0 packets lost t= he contest for Flit Slot 0.; Data Response (WB) messages on BL. WB is gene= rally used to transmit data with coherency. For example, remote reads and = writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -5796,8 +7104,10 @@ }, { "BriefDescription": "VN1 message lost contest for flit; REQ on AD"= , + "Counter": "0,1,2", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_COLLISION_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN1 packets lost t= he contest for Flit Slot 0.; Home (REQ) messages on AD. REQ is generally u= sed to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -5805,8 +7115,10 @@ }, { "BriefDescription": "VN1 message lost contest for flit; RSP on AD"= , + "Counter": "0,1,2", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_COLLISION_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN1 packets lost t= he contest for Flit Slot 0.; Response (RSP) messages on AD. RSP packets ar= e used to transmit a variety of protocol flits including grants and complet= ions (CMP).", "UMask": "0x4", @@ -5814,8 +7126,10 @@ }, { "BriefDescription": "VN1 message lost contest for flit; SNP on AD"= , + "Counter": "0,1,2", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_COLLISION_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN1 packets lost t= he contest for Flit Slot 0.; Snoops (SNP) messages on AD. SNP is used for = outgoing snoops.", "UMask": "0x2", @@ -5823,8 +7137,10 @@ }, { "BriefDescription": "VN1 message lost contest for flit; NCB on BL"= , + "Counter": "0,1,2", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_COLLISION_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN1 packets lost t= he contest for Flit Slot 0.; Non-Coherent Broadcast (NCB) messages on BL. = NCB is generally used to transmit data without coherency. 
For example, non= -coherent read data returns.", "UMask": "0x20", @@ -5832,8 +7148,10 @@ }, { "BriefDescription": "VN1 message lost contest for flit; NCS on BL"= , + "Counter": "0,1,2", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_COLLISION_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN1 packets lost t= he contest for Flit Slot 0.; Non-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -5841,8 +7159,10 @@ }, { "BriefDescription": "VN1 message lost contest for flit; RSP on BL"= , + "Counter": "0,1,2", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_COLLISION_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN1 packets lost t= he contest for Flit Slot 0.; Response (RSP) messages on BL. RSP packets are= used to transmit a variety of protocol flits including grants and completi= ons (CMP).", "UMask": "0x8", @@ -5850,8 +7170,10 @@ }, { "BriefDescription": "VN1 message lost contest for flit; WB on BL", + "Counter": "0,1,2", "EventCode": "0x51", "EventName": "UNC_M3UPI_RxC_COLLISION_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress VN1 packets lost t= he contest for Flit Slot 0.; Data Response (WB) messages on BL. WB is gene= rally used to transmit data with coherency. For example, remote reads and = writes, or cache to cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -5859,8 +7181,10 @@ }, { "BriefDescription": "Miscellaneous Credit Events; Any In BGF FIFO"= , + "Counter": "0,1,2", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_MISC.ANY_BGF_FIFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Indication that at least one packet (flit) i= s in the bgf (fifo only)", "UMask": "0x1", @@ -5868,8 +7192,10 @@ }, { "BriefDescription": "Miscellaneous Credit Events; Any in BGF Path"= , + "Counter": "0,1,2", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_MISC.ANY_BGF_PATH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Indication that at least one packet (flit) i= s in the bgf path (i.e. pipe to fifo)", "UMask": "0x2", @@ -5877,8 +7203,10 @@ }, { "BriefDescription": "Miscellaneous Credit Events; No D2K For Arb", + "Counter": "0,1,2", "EventCode": "0x60", "EventName": "UNC_M3UPI_RxC_CRD_MISC.NO_D2K_FOR_ARB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "VN0 or VN1 BL RSP message was blocked from a= rbitration request due to lack of D2K CMP credits", "UMask": "0x4", @@ -5886,8 +7214,10 @@ }, { "BriefDescription": "Credit Occupancy; D2K Credits", + "Counter": "0,1,2", "EventCode": "0x61", "EventName": "UNC_M3UPI_RxC_CRD_OCC.D2K_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "D2K completion fifo credit occupancy (credit= s in use), accumulated across all cycles", "UMask": "0x10", @@ -5895,8 +7225,10 @@ }, { "BriefDescription": "Credit Occupancy; Packets in BGF FIFO", + "Counter": "0,1,2", "EventCode": "0x61", "EventName": "UNC_M3UPI_RxC_CRD_OCC.FLITS_IN_FIFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy of m3upi ingress -> upi link layer= bgf; packets (flits) in fifo", "UMask": "0x2", @@ -5904,8 +7236,10 @@ }, { "BriefDescription": "Credit Occupancy; Packets in BGF Path", + "Counter": "0,1,2", "EventCode": "0x61", "EventName": "UNC_M3UPI_RxC_CRD_OCC.FLITS_IN_PATH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy of m3upi ingress -> upi link layer= bgf; packets (flits) in path (i.e. 
pipe to fifo or fifo)", "UMask": "0x4", @@ -5913,8 +7247,10 @@ }, { "BriefDescription": "Credit Occupancy", + "Counter": "0,1,2", "EventCode": "0x61", "EventName": "UNC_M3UPI_RxC_CRD_OCC.P1P_FIFO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "count of bl messages in pump-1-pending state= , in completion fifo only", "UMask": "0x40", @@ -5922,8 +7258,10 @@ }, { "BriefDescription": "Credit Occupancy", + "Counter": "0,1,2", "EventCode": "0x61", "EventName": "UNC_M3UPI_RxC_CRD_OCC.P1P_TOTAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "count of bl messages in pump-1-pending state= , in marker table and in fifo", "UMask": "0x20", @@ -5931,8 +7269,10 @@ }, { "BriefDescription": "Credit Occupancy; Transmit Credits", + "Counter": "0,1,2", "EventCode": "0x61", "EventName": "UNC_M3UPI_RxC_CRD_OCC.TxQ_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Link layer transmit queue credit occupancy (= credits in use), accumulated across all cycles", "UMask": "0x8", @@ -5940,8 +7280,10 @@ }, { "BriefDescription": "Credit Occupancy; VNA In Use", + "Counter": "0,1,2", "EventCode": "0x61", "EventName": "UNC_M3UPI_RxC_CRD_OCC.VNA_IN_USE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Remote UPI VNA credit occupancy (number of c= redits in use), accumulated across all cycles", "UMask": "0x1", @@ -5949,8 +7291,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Emp= ty; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the UPI Ing= ress is not empty. This tracks one of the three rings that are used by the= UPI agent. This can be used in conjunction with the UPI Ingress Occupancy= Accumulator event in order to calculate average queue occupancy. Multiple= ingress buffers can be tracked at a given time using multiple counters.; H= ome (REQ) messages on AD. REQ is generally used to send requests, request = responses, and snoop responses.", "UMask": "0x1", @@ -5958,8 +7302,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Emp= ty; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the UPI Ing= ress is not empty. This tracks one of the three rings that are used by the= UPI agent. This can be used in conjunction with the UPI Ingress Occupancy= Accumulator event in order to calculate average queue occupancy. Multiple= ingress buffers can be tracked at a given time using multiple counters.; R= esponse (RSP) messages on AD. RSP packets are used to transmit a variety o= f protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -5967,8 +7313,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Emp= ty; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the UPI Ing= ress is not empty. This tracks one of the three rings that are used by the= UPI agent. This can be used in conjunction with the UPI Ingress Occupancy= Accumulator event in order to calculate average queue occupancy. Multiple= ingress buffers can be tracked at a given time using multiple counters.; S= noops (SNP) messages on AD. 
SNP is used for outgoing snoops.", "UMask": "0x2", @@ -5976,8 +7324,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Emp= ty; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the UPI Ing= ress is not empty. This tracks one of the three rings that are used by the= UPI agent. This can be used in conjunction with the UPI Ingress Occupancy= Accumulator event in order to calculate average queue occupancy. Multiple= ingress buffers can be tracked at a given time using multiple counters.; N= on-Coherent Broadcast (NCB) messages on BL. NCB is generally used to trans= mit data without coherency. For example, non-coherent read data returns.", "UMask": "0x20", @@ -5985,8 +7335,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Emp= ty; NCS on BL", + "Counter": "0,1,2", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the UPI Ing= ress is not empty. This tracks one of the three rings that are used by the= UPI agent. This can be used in conjunction with the UPI Ingress Occupancy= Accumulator event in order to calculate average queue occupancy. Multiple= ingress buffers can be tracked at a given time using multiple counters.; N= on-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -5994,8 +7346,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Emp= ty; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the UPI Ing= ress is not empty. This tracks one of the three rings that are used by the= UPI agent. This can be used in conjunction with the UPI Ingress Occupancy= Accumulator event in order to calculate average queue occupancy. Multiple= ingress buffers can be tracked at a given time using multiple counters.; R= esponse (RSP) messages on BL. RSP packets are used to transmit a variety of= protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -6003,8 +7357,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Emp= ty; WB on BL", + "Counter": "0,1,2", "EventCode": "0x43", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the UPI Ing= ress is not empty. This tracks one of the three rings that are used by the= UPI agent. This can be used in conjunction with the UPI Ingress Occupancy= Accumulator event in order to calculate average queue occupancy. Multiple= ingress buffers can be tracked at a given time using multiple counters.; D= ata Response (WB) messages on BL. WB is generally used to transmit data wi= th coherency. For example, remote reads and writes, or cache to cache tran= sfers will transmit their data using WB.", "UMask": "0x10", @@ -6012,8 +7368,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Emp= ty; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x44", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. 
This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; H= ome (REQ) messages on AD. REQ is generally used to send requests, request = responses, and snoop responses.", "UMask": "0x1", @@ -6021,8 +7379,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Emp= ty; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x44", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; R= esponse (RSP) messages on AD. RSP packets are used to transmit a variety o= f protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -6030,8 +7390,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Emp= ty; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x44", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; S= noops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -6039,8 +7401,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Emp= ty; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x44", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; N= on-Coherent Broadcast (NCB) messages on BL. NCB is generally used to trans= mit data without coherency. For example, non-coherent read data returns.", "UMask": "0x20", @@ -6048,8 +7412,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Emp= ty; NCS on BL", + "Counter": "0,1,2", "EventCode": "0x44", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. 
Multiple= ingress buffers can be tracked at a given time using multiple counters.; N= on-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -6057,8 +7423,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Emp= ty; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x44", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; R= esponse (RSP) messages on BL. RSP packets are used to transmit a variety of= protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -6066,8 +7434,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Emp= ty; WB on BL", + "Counter": "0,1,2", "EventCode": "0x44", "EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; D= ata Response (WB) messages on BL. WB is generally used to transmit data wi= th coherency. For example, remote reads and writes, or cache to cache tran= sfers will transmit their data using WB.", "UMask": "0x10", @@ -6075,8 +7445,10 @@ }, { "BriefDescription": "Data Flit Not Sent; All", + "Counter": "0,1,2", "EventCode": "0x57", "EventName": "UNC_M3UPI_RxC_FLITS_DATA_NOT_SENT.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Data flit is ready for transmission but coul= d not be sent", "UMask": "0x1", @@ -6084,8 +7456,10 @@ }, { "BriefDescription": "Data Flit Not Sent; No BGF Credits", + "Counter": "0,1,2", "EventCode": "0x57", "EventName": "UNC_M3UPI_RxC_FLITS_DATA_NOT_SENT.NO_BGF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Data flit is ready for transmission but coul= d not be sent", "UMask": "0x2", @@ -6093,8 +7467,10 @@ }, { "BriefDescription": "Data Flit Not Sent; No TxQ Credits", + "Counter": "0,1,2", "EventCode": "0x57", "EventName": "UNC_M3UPI_RxC_FLITS_DATA_NOT_SENT.NO_TXQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Data flit is ready for transmission but coul= d not be sent", "UMask": "0x4", @@ -6102,8 +7478,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence; Wait on Pum= p 0", + "Counter": "0,1,2", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P0_WAIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "generating bl data flit sequence; waiting fo= r data pump 0", "UMask": "0x1", @@ -6111,8 +7489,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence", + "Counter": "0,1,2", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_AT_LIMIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "pump-1-pending logic is at capacity (pending= table plus completion fifo at limit)", "UMask": "0x10", @@ -6120,8 +7500,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence", + "Counter": "0,1,2", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_BUSY", + 
"Experimental": "1", "PerPkg": "1", "PublicDescription": "pump-1-pending logic is tracking at least on= e message", "UMask": "0x8", @@ -6129,8 +7511,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence", + "Counter": "0,1,2", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_FIFO_FULL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "pump-1-pending completion fifo is full", "UMask": "0x40", @@ -6138,8 +7522,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence", + "Counter": "0,1,2", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_HOLD_P0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "pump-1-pending logic is at or near capacity,= such that pump-0-only bl messages are getting stalled in slotting stage", "UMask": "0x20", @@ -6147,8 +7533,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence", + "Counter": "0,1,2", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_TO_LIMBO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "a bl message finished but is in limbo and mo= ved to pump-1-pending logic", "UMask": "0x4", @@ -6156,8 +7544,10 @@ }, { "BriefDescription": "Generating BL Data Flit Sequence; Wait on Pum= p 1", + "Counter": "0,1,2", "EventCode": "0x59", "EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1_WAIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "generating bl data flit sequence; waiting fo= r data pump 1", "UMask": "0x2", @@ -6165,15 +7555,19 @@ }, { "BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC", + "Counter": "0,1,2", "EventCode": "0x5A", "EventName": "UNC_M3UPI_RxC_FLITS_MISC", + "Experimental": "1", "PerPkg": "1", "Unit": "M3UPI" }, { "BriefDescription": "Sent Header Flit; One Message", + "Counter": "0,1,2", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SENT.1_MSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "One message in flit; VNA or non-VNA flit", "UMask": "0x1", @@ -6181,8 +7575,10 @@ }, { "BriefDescription": "Sent Header Flit; One Message in non-VNA", + "Counter": "0,1,2", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SENT.1_MSG_VNX", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "One message in flit; non-VNA flit", "UMask": "0x8", @@ -6190,8 +7586,10 @@ }, { "BriefDescription": "Sent Header Flit; Two Messages", + "Counter": "0,1,2", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SENT.2_MSGS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Two messages in flit; VNA flit", "UMask": "0x2", @@ -6199,8 +7597,10 @@ }, { "BriefDescription": "Sent Header Flit; Three Messages", + "Counter": "0,1,2", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SENT.3_MSGS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Three messages in flit; VNA flit", "UMask": "0x4", @@ -6208,40 +7608,50 @@ }, { "BriefDescription": "Sent Header Flit", + "Counter": "0,1,2", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SENT.SLOTS_1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M3UPI" }, { "BriefDescription": "Sent Header Flit", + "Counter": "0,1,2", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SENT.SLOTS_2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M3UPI" }, { "BriefDescription": "Sent Header Flit", + "Counter": "0,1,2", "EventCode": "0x56", "EventName": "UNC_M3UPI_RxC_FLITS_SENT.SLOTS_3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M3UPI" }, { "BriefDescription": "Slotting BL Message Into Header Flit; All", + 
"Counter": "0,1,2", "EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.ALL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "Slotting BL Message Into Header Flit; Needs D= ata Flit", + "Counter": "0,1,2", "EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.NEED_DATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL message requires data flit sequence", "UMask": "0x2", @@ -6249,8 +7659,10 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit; Wait on= Pump 0", + "Counter": "0,1,2", "EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P0_WAIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Waiting for header pump 0", "UMask": "0x4", @@ -6258,8 +7670,10 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit; Don't N= eed Pump 1", + "Counter": "0,1,2", "EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Header pump 1 is not required for flit", "UMask": "0x10", @@ -6267,8 +7681,10 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit; Don't N= eed Pump 1 - Bubble", + "Counter": "0,1,2", "EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ_BUT_BUBBLE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Header pump 1 is not required for flit but f= lit transmission delayed", "UMask": "0x20", @@ -6276,8 +7692,10 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit; Don't N= eed Pump 1 - Not Avail", + "Counter": "0,1,2", "EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ_NOT_AVAIL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Header pump 1 is not required for flit and n= ot available", "UMask": "0x40", @@ -6285,8 +7703,10 @@ }, { "BriefDescription": "Slotting BL Message Into Header Flit; Wait on= Pump 1", + "Counter": "0,1,2", "EventCode": "0x58", "EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_WAIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Waiting for header pump 1", "UMask": "0x8", @@ -6294,8 +7714,10 @@ }, { "BriefDescription": "Flit Gen - Header 1; Accumulate", + "Counter": "0,1,2", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Events related to Header Flit Generation - S= et 1; Header flit slotting control state machine is in any accumulate state= ; multi-message flit may be assembled over multiple cycles", "UMask": "0x1", @@ -6303,8 +7725,10 @@ }, { "BriefDescription": "Flit Gen - Header 1; Accumulate Ready", + "Counter": "0,1,2", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM_READ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Events related to Header Flit Generation - S= et 1; header flit slotting control state machine is in accum_ready state; f= lit is ready to send but transmission is blocked; more messages may be slot= ted into flit", "UMask": "0x2", @@ -6312,8 +7736,10 @@ }, { "BriefDescription": "Flit Gen - Header 1; Accumulate Wasted", + "Counter": "0,1,2", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM_WASTED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Events related to Header Flit Generation - S= et 1; Flit is being assembled over multiple cycles, but no additional messa= ge is being slotted into flit in current cycle; accumulate cycle is wasted"= , "UMask": "0x4", @@ -6321,8 +7747,10 @@ }, { 
"BriefDescription": "Flit Gen - Header 1; Run-Ahead - Blocked", + "Counter": "0,1,2", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_BLOCKED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Events related to Header Flit Generation - S= et 1; Header flit slotting entered run-ahead state; new header flit is star= ted while transmission of prior, fully assembled flit is blocked", "UMask": "0x8", @@ -6330,8 +7758,10 @@ }, { "BriefDescription": "Flit Gen - Header 1; Run-Ahead - Message", + "Counter": "0,1,2", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Events related to Header Flit Generation - S= et 1; Header flit slotting is in run-ahead to start new flit, and message i= s actually slotted into new flit", "UMask": "0x10", @@ -6339,8 +7769,10 @@ }, { "BriefDescription": "Flit Gen - Header 1; Parallel Ok", + "Counter": "0,1,2", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.PAR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Events related to Header Flit Generation - S= et 1; New header flit construction may proceed in parallel with data flit s= equence", "UMask": "0x20", @@ -6348,8 +7780,10 @@ }, { "BriefDescription": "Flit Gen - Header 1; Parallel Flit Finished", + "Counter": "0,1,2", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.PAR_FLIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Events related to Header Flit Generation - S= et 1; Header flit finished assembly in parallel with data flit sequence", "UMask": "0x80", @@ -6357,8 +7791,10 @@ }, { "BriefDescription": "Flit Gen - Header 1; Parallel Message", + "Counter": "0,1,2", "EventCode": "0x53", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.PAR_MSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Events related to Header Flit Generation - S= et 1; Message is slotted into header flit in parallel with data flit sequen= ce", "UMask": "0x40", @@ -6366,8 +7802,10 @@ }, { "BriefDescription": "Flit Gen - Header 2; Rate-matching Stall", + "Counter": "0,1,2", "EventCode": "0x54", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.RMSTALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Events related to Header Flit Generation - S= et 2; Rate-matching stall injected", "UMask": "0x1", @@ -6375,8 +7813,10 @@ }, { "BriefDescription": "Flit Gen - Header 2; Rate-matching Stall - No= Message", + "Counter": "0,1,2", "EventCode": "0x54", "EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.RMSTALL_NOMSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Events related to Header Flit Generation - S= et 2; Rate matching stall injected, but no additional message slotted durin= g stall cycle", "UMask": "0x2", @@ -6384,8 +7824,10 @@ }, { "BriefDescription": "Header Not Sent; All", + "Counter": "0,1,2", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "header flit is ready for transmission but co= uld not be sent", "UMask": "0x1", @@ -6393,8 +7835,10 @@ }, { "BriefDescription": "Header Not Sent; No BGF Credits", + "Counter": "0,1,2", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.NO_BGF_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "header flit is ready for transmission but co= uld not be sent; No BGF credits available", "UMask": "0x2", @@ -6402,8 +7846,10 @@ }, { "BriefDescription": "Header Not Sent; No BGF Credits + No Extra Me= ssage Slotted", + 
"Counter": "0,1,2", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.NO_BGF_NO_MSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "header flit is ready for transmission but co= uld not be sent; No BGF credits available; no additional message slotted in= to flit", "UMask": "0x8", @@ -6411,8 +7857,10 @@ }, { "BriefDescription": "Header Not Sent; No TxQ Credits", + "Counter": "0,1,2", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.NO_TXQ_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "header flit is ready for transmission but co= uld not be sent; No TxQ credits available", "UMask": "0x4", @@ -6420,8 +7868,10 @@ }, { "BriefDescription": "Header Not Sent; No TxQ Credits + No Extra Me= ssage Slotted", + "Counter": "0,1,2", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.NO_TXQ_NO_MSG", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "header flit is ready for transmission but co= uld not be sent; No TxQ credits available; no additional message slotted in= to flit", "UMask": "0x10", @@ -6429,8 +7879,10 @@ }, { "BriefDescription": "Header Not Sent; Sent - One Slot Taken", + "Counter": "0,1,2", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.ONE_TAKEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "header flit is ready for transmission but co= uld not be sent; sending header flit with only one slot taken (two slots fr= ee)", "UMask": "0x20", @@ -6438,8 +7890,10 @@ }, { "BriefDescription": "Header Not Sent; Sent - Three Slots Taken", + "Counter": "0,1,2", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.THREE_TAKEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "header flit is ready for transmission but co= uld not be sent; sending header flit with three slots taken (no slots free)= ", "UMask": "0x80", @@ -6447,8 +7901,10 @@ }, { "BriefDescription": "Header Not Sent; Sent - Two Slots Taken", + "Counter": "0,1,2", "EventCode": "0x55", "EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.TWO_TAKEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "header flit is ready for transmission but co= uld not be sent; sending header flit with only two slots taken (one slots f= ree)", "UMask": "0x40", @@ -6456,8 +7912,10 @@ }, { "BriefDescription": "Message Held; Can't Slot AD", + "Counter": "0,1,2", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_HELD.CANT_SLOT_AD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "some AD message could not be slotted (logica= l OR of all AD events under INGR_SLOT_CANT_MC_VN{0,1})", "UMask": "0x40", @@ -6465,8 +7923,10 @@ }, { "BriefDescription": "Message Held; Can't Slot BL", + "Counter": "0,1,2", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_HELD.CANT_SLOT_BL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "some BL message could not be slotted (logica= l OR of all BL events under INGR_SLOT_CANT_MC_VN{0,1})", "UMask": "0x80", @@ -6474,8 +7934,10 @@ }, { "BriefDescription": "Message Held; Parallel AD Lost", + "Counter": "0,1,2", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_AD_LOST", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "some AD message lost contest for slot 0 (log= ical OR of all AD events under INGR_SLOT_LOST_MC_VN{0,1})", "UMask": "0x10", @@ -6483,8 +7945,10 @@ }, { "BriefDescription": "Message Held; Parallel Attempt", + "Counter": "0,1,2", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_ATTEMPT", + "Experimental": "1", "PerPkg": "1", 
"PublicDescription": "ad and bl messages attempted to slot into th= e same flit in parallel", "UMask": "0x4", @@ -6492,8 +7956,10 @@ }, { "BriefDescription": "Message Held; Parallel BL Lost", + "Counter": "0,1,2", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_BL_LOST", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "some BL message lost contest for slot 0 (log= ical OR of all BL events under INGR_SLOT_LOST_MC_VN{0,1})", "UMask": "0x20", @@ -6501,8 +7967,10 @@ }, { "BriefDescription": "Message Held; Parallel Success", + "Counter": "0,1,2", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_SUCCESS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "ad and bl messages were actually slotted int= o the same flit in parallel", "UMask": "0x8", @@ -6510,8 +7978,10 @@ }, { "BriefDescription": "Message Held; VN0", + "Counter": "0,1,2", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_HELD.VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "vn0 message(s) that couldn't be slotted into= last vn0 flit are held in slotting stage while processing vn1 flit", "UMask": "0x1", @@ -6519,8 +7989,10 @@ }, { "BriefDescription": "Message Held; VN1", + "Counter": "0,1,2", "EventCode": "0x52", "EventName": "UNC_M3UPI_RxC_HELD.VN1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "vn1 message(s) that couldn't be slotted into= last vn1 flit are held in slotting stage while processing vn0 flit", "UMask": "0x2", @@ -6528,8 +8000,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; REQ o= n AD", + "Counter": "0,1,2", "EventCode": "0x41", "EventName": "UNC_M3UPI_RxC_INSERTS_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I Ingress. This tracks one of the three rings that are used by the UPI age= nt. This can be used in conjunction with the UPI Ingress Occupancy Accumul= ator event in order to calculate average queue latency. Multiple ingress b= uffers can be tracked at a given time using multiple counters.; Home (REQ) = messages on AD. REQ is generally used to send requests, request responses,= and snoop responses.", "UMask": "0x1", @@ -6537,8 +8011,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; RSP o= n AD", + "Counter": "0,1,2", "EventCode": "0x41", "EventName": "UNC_M3UPI_RxC_INSERTS_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I Ingress. This tracks one of the three rings that are used by the UPI age= nt. This can be used in conjunction with the UPI Ingress Occupancy Accumul= ator event in order to calculate average queue latency. Multiple ingress b= uffers can be tracked at a given time using multiple counters.; Response (R= SP) messages on AD. RSP packets are used to transmit a variety of protocol= flits including grants and completions (CMP).", "UMask": "0x4", @@ -6546,8 +8022,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; SNP o= n AD", + "Counter": "0,1,2", "EventCode": "0x41", "EventName": "UNC_M3UPI_RxC_INSERTS_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I Ingress. This tracks one of the three rings that are used by the UPI age= nt. This can be used in conjunction with the UPI Ingress Occupancy Accumul= ator event in order to calculate average queue latency. 
Multiple ingress b= uffers can be tracked at a given time using multiple counters.; Snoops (SNP= ) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -6555,8 +8033,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; NCB o= n BL", + "Counter": "0,1,2", "EventCode": "0x41", "EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I Ingress. This tracks one of the three rings that are used by the UPI age= nt. This can be used in conjunction with the UPI Ingress Occupancy Accumul= ator event in order to calculate average queue latency. Multiple ingress b= uffers can be tracked at a given time using multiple counters.; Non-Coheren= t Broadcast (NCB) messages on BL. NCB is generally used to transmit data w= ithout coherency. For example, non-coherent read data returns.", "UMask": "0x20", @@ -6564,8 +8044,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; NCS o= n BL", + "Counter": "0,1,2", "EventCode": "0x41", "EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I Ingress. This tracks one of the three rings that are used by the UPI age= nt. This can be used in conjunction with the UPI Ingress Occupancy Accumul= ator event in order to calculate average queue latency. Multiple ingress b= uffers can be tracked at a given time using multiple counters.; Non-Coheren= t Standard (NCS) messages on BL.", "UMask": "0x40", @@ -6573,8 +8055,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; RSP o= n BL", + "Counter": "0,1,2", "EventCode": "0x41", "EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I Ingress. This tracks one of the three rings that are used by the UPI age= nt. This can be used in conjunction with the UPI Ingress Occupancy Accumul= ator event in order to calculate average queue latency. Multiple ingress b= uffers can be tracked at a given time using multiple counters.; Response (R= SP) messages on BL. RSP packets are used to transmit a variety of protocol = flits including grants and completions (CMP).", "UMask": "0x8", @@ -6582,8 +8066,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; WB on= BL", + "Counter": "0,1,2", "EventCode": "0x41", "EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I Ingress. This tracks one of the three rings that are used by the UPI age= nt. This can be used in conjunction with the UPI Ingress Occupancy Accumul= ator event in order to calculate average queue latency. Multiple ingress b= uffers can be tracked at a given time using multiple counters.; Data Respon= se (WB) messages on BL. WB is generally used to transmit data with coheren= cy. For example, remote reads and writes, or cache to cache transfers will= transmit their data using WB.", "UMask": "0x10", @@ -6591,8 +8077,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; REQ o= n AD", + "Counter": "0,1,2", "EventCode": "0x42", "EventName": "UNC_M3UPI_RxC_INSERTS_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. 
This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; H= ome (REQ) messages on AD. REQ is generally used to send requests, request = responses, and snoop responses.", "UMask": "0x1", @@ -6600,8 +8088,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; RSP o= n AD", + "Counter": "0,1,2", "EventCode": "0x42", "EventName": "UNC_M3UPI_RxC_INSERTS_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; R= esponse (RSP) messages on AD. RSP packets are used to transmit a variety o= f protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -6609,8 +8099,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; SNP o= n AD", + "Counter": "0,1,2", "EventCode": "0x42", "EventName": "UNC_M3UPI_RxC_INSERTS_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; S= noops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -6618,8 +8110,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; NCB o= n BL", + "Counter": "0,1,2", "EventCode": "0x42", "EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; N= on-Coherent Broadcast (NCB) messages on BL. NCB is generally used to trans= mit data without coherency. For example, non-coherent read data returns.", "UMask": "0x20", @@ -6627,8 +8121,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; NCS o= n BL", + "Counter": "0,1,2", "EventCode": "0x42", "EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; N= on-Coherent Standard (NCS) messages on BL.", "UMask": "0x40", @@ -6636,8 +8132,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; RSP o= n BL", + "Counter": "0,1,2", "EventCode": "0x42", "EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. 
This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; R= esponse (RSP) messages on BL. RSP packets are used to transmit a variety of= protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -6645,8 +8143,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; WB on= BL", + "Counter": "0,1,2", "EventCode": "0x42", "EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the UP= I VN1 Ingress. This tracks one of the three rings that are used by the UP= I agent. This can be used in conjunction with the UPI VN1 Ingress Occupan= cy Accumulator event in order to calculate average queue latency. Multiple= ingress buffers can be tracked at a given time using multiple counters.; D= ata Response (WB) messages on BL. WB is generally used to transmit data wi= th coherency. For example, remote reads and writes, or cache to cache tran= sfers will transmit their data using WB.", "UMask": "0x10", @@ -6654,8 +8154,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; REQ= on AD", + "Counter": "0,1,2", "EventCode": "0x45", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Home (REQ) messages on AD. REQ is g= enerally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -6663,8 +8165,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; RSP= on AD", + "Counter": "0,1,2", "EventCode": "0x45", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Response (RSP) messages on AD. RSP = packets are used to transmit a variety of protocol flits including grants a= nd completions (CMP).", "UMask": "0x4", @@ -6672,8 +8176,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; SNP= on AD", + "Counter": "0,1,2", "EventCode": "0x45", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Snoops (SNP) messages on AD. 
SNP is= used for outgoing snoops.", "UMask": "0x2", @@ -6681,8 +8187,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; NCB= on BL", + "Counter": "0,1,2", "EventCode": "0x45", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Non-Coherent Broadcast (NCB) message= s on BL. NCB is generally used to transmit data without coherency. For ex= ample, non-coherent read data returns.", "UMask": "0x20", @@ -6690,8 +8198,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; NCS= on BL", + "Counter": "0,1,2", "EventCode": "0x45", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Non-Coherent Standard (NCS) messages= on BL.", "UMask": "0x40", @@ -6699,8 +8209,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; RSP= on BL", + "Counter": "0,1,2", "EventCode": "0x45", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Response (RSP) messages on BL. RSP p= ackets are used to transmit a variety of protocol flits including grants an= d completions (CMP).", "UMask": "0x8", @@ -6708,8 +8220,10 @@ }, { "BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; WB = on BL", + "Counter": "0,1,2", "EventCode": "0x45", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Data Response (WB) messages on BL. = WB is generally used to transmit data with coherency. For example, remote = reads and writes, or cache to cache transfers will transmit their data usin= g WB.", "UMask": "0x10", @@ -6717,8 +8231,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; REQ= on AD", + "Counter": "0,1,2", "EventCode": "0x46", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Home (REQ) messages on AD. 
REQ is g= enerally used to send requests, request responses, and snoop responses.", "UMask": "0x1", @@ -6726,8 +8242,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; RSP= on AD", + "Counter": "0,1,2", "EventCode": "0x46", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Response (RSP) messages on AD. RSP = packets are used to transmit a variety of protocol flits including grants a= nd completions (CMP).", "UMask": "0x4", @@ -6735,8 +8253,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; SNP= on AD", + "Counter": "0,1,2", "EventCode": "0x46", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Snoops (SNP) messages on AD. SNP is= used for outgoing snoops.", "UMask": "0x2", @@ -6744,8 +8264,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; NCB= on BL", + "Counter": "0,1,2", "EventCode": "0x46", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Non-Coherent Broadcast (NCB) message= s on BL. NCB is generally used to transmit data without coherency. For ex= ample, non-coherent read data returns.", "UMask": "0x20", @@ -6753,8 +8275,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; NCS= on BL", + "Counter": "0,1,2", "EventCode": "0x46", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Non-Coherent Standard (NCS) messages= on BL.", "UMask": "0x40", @@ -6762,8 +8286,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; RSP= on BL", + "Counter": "0,1,2", "EventCode": "0x46", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Response (RSP) messages on BL. 
RSP p= ackets are used to transmit a variety of protocol flits including grants an= d completions (CMP).", "UMask": "0x8", @@ -6771,8 +8297,10 @@ }, { "BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; WB = on BL", + "Counter": "0,1,2", "EventCode": "0x46", "EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the occupancy of a given UPI VN1= Ingress queue in each cycle. This tracks one of the three ring Ingress b= uffers. This can be used with the UPI VN1 Ingress Not Empty event to calc= ulate average occupancy or the UPI VN1 Ingress Allocations event in order = to calculate average queuing latency.; Data Response (WB) messages on BL. = WB is generally used to transmit data with coherency. For example, remote = reads and writes, or cache to cache transfers will transmit their data usin= g WB.", "UMask": "0x10", @@ -6780,8 +8308,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x4E", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Home (REQ) messages on AD. REQ i= s generally used to send requests, request responses, and snoop responses."= , "UMask": "0x1", @@ -6789,8 +8319,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x4E", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Response (RSP) messages on AD. R= SP packets are used to transmit a variety of protocol flits including grant= s and completions (CMP).", "UMask": "0x4", @@ -6798,8 +8330,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x4E", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Snoops (SNP) messages on AD. SNP= is used for outgoing snoops.", "UMask": "0x2", @@ -6807,8 +8341,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x4E", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Non-Coherent Broadcast (NCB) mess= ages on BL. NCB is generally used to transmit data without coherency. 
For= example, non-coherent read data returns.", "UMask": "0x20", @@ -6816,8 +8352,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit; NCS on BL", + "Counter": "0,1,2", "EventCode": "0x4E", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Non-Coherent Standard (NCS) messa= ges on BL.", "UMask": "0x40", @@ -6825,8 +8363,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x4E", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Response (RSP) messages on BL. RS= P packets are used to transmit a variety of protocol flits including grants= and completions (CMP).", "UMask": "0x8", @@ -6834,8 +8374,10 @@ }, { "BriefDescription": "VN0 message can't slot into flit; WB on BL", + "Counter": "0,1,2", "EventCode": "0x4E", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Data Response (WB) messages on BL= . WB is generally used to transmit data with coherency. For example, remo= te reads and writes, or cache to cache transfers will transmit their data u= sing WB.", "UMask": "0x10", @@ -6843,8 +8385,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x4F", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Home (REQ) messages on AD. REQ i= s generally used to send requests, request responses, and snoop responses."= , "UMask": "0x1", @@ -6852,8 +8396,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x4F", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Response (RSP) messages on AD. R= SP packets are used to transmit a variety of protocol flits including grant= s and completions (CMP).", "UMask": "0x4", @@ -6861,8 +8407,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x4F", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Snoops (SNP) messages on AD. 
SNP= is used for outgoing snoops.", "UMask": "0x2", @@ -6870,8 +8418,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x4F", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Non-Coherent Broadcast (NCB) mess= ages on BL. NCB is generally used to transmit data without coherency. For= example, non-coherent read data returns.", "UMask": "0x20", @@ -6879,8 +8429,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit; NCS on BL", + "Counter": "0,1,2", "EventCode": "0x4F", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Non-Coherent Standard (NCS) messa= ges on BL.", "UMask": "0x40", @@ -6888,8 +8440,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x4F", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Response (RSP) messages on BL. RS= P packets are used to transmit a variety of protocol flits including grants= and completions (CMP).", "UMask": "0x8", @@ -6897,8 +8451,10 @@ }, { "BriefDescription": "VN1 message can't slot into flit; WB on BL", + "Counter": "0,1,2", "EventCode": "0x4F", "EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Count cases where Ingress has packets to sen= d but did not have time to pack into flit before sending to Agent so slot w= as left NULL which could have been used.; Data Response (WB) messages on BL= . WB is generally used to transmit data with coherency. 
For example, remo= te reads and writes, or cache to cache transfers will transmit their data u= sing WB.", "UMask": "0x10", @@ -6906,32 +8462,40 @@ }, { "BriefDescription": "SMI3 Prefetch Messages; Lost Arbitration", + "Counter": "0,1,2", "EventCode": "0x62", "EventName": "UNC_M3UPI_RxC_SMI3_PFTCH.ARB_LOST", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M3UPI" }, { "BriefDescription": "SMI3 Prefetch Messages; Arrived", + "Counter": "0,1,2", "EventCode": "0x62", "EventName": "UNC_M3UPI_RxC_SMI3_PFTCH.ARRIVED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "SMI3 Prefetch Messages; Dropped - Old", + "Counter": "0,1,2", "EventCode": "0x62", "EventName": "UNC_M3UPI_RxC_SMI3_PFTCH.DROP_OLD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M3UPI" }, { "BriefDescription": "SMI3 Prefetch Messages; Dropped - Wrap", + "Counter": "0,1,2", "EventCode": "0x62", "EventName": "UNC_M3UPI_RxC_SMI3_PFTCH.DROP_WRAP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Dropped because it was overwritten by new me= ssage while prefetch queue was full", "UMask": "0x10", @@ -6939,16 +8503,20 @@ }, { "BriefDescription": "SMI3 Prefetch Messages; Slotted", + "Counter": "0,1,2", "EventCode": "0x62", "EventName": "UNC_M3UPI_RxC_SMI3_PFTCH.SLOTTED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M3UPI" }, { "BriefDescription": "Remote VNA Credits; Any In Use", + "Counter": "0,1,2", "EventCode": "0x5B", "EventName": "UNC_M3UPI_RxC_VNA_CRD.ANY_IN_USE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "At least one remote vna credit is in use", "UMask": "0x20", @@ -6956,8 +8524,10 @@ }, { "BriefDescription": "Remote VNA Credits; Corrected", + "Counter": "0,1,2", "EventCode": "0x5B", "EventName": "UNC_M3UPI_RxC_VNA_CRD.CORRECTED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of remote vna credits corrected (loca= l return) per cycle", "UMask": "0x2", @@ -6965,8 +8535,10 @@ }, { "BriefDescription": "Remote VNA Credits; Level < 1", + "Counter": "0,1,2", "EventCode": "0x5B", "EventName": "UNC_M3UPI_RxC_VNA_CRD.LT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Remote vna credit level is less than 1 (i.e.= no vna credits available)", "UMask": "0x4", @@ -6974,8 +8546,10 @@ }, { "BriefDescription": "Remote VNA Credits; Level < 4", + "Counter": "0,1,2", "EventCode": "0x5B", "EventName": "UNC_M3UPI_RxC_VNA_CRD.LT4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Remote vna credit level is less than 4; bl (= or ad requiring 4 vna) cannot arb on vna", "UMask": "0x8", @@ -6983,8 +8557,10 @@ }, { "BriefDescription": "Remote VNA Credits; Level < 5", + "Counter": "0,1,2", "EventCode": "0x5B", "EventName": "UNC_M3UPI_RxC_VNA_CRD.LT5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Remote vna credit level is less than 5; para= llel ad/bl arb on vna not possible", "UMask": "0x10", @@ -6992,8 +8568,10 @@ }, { "BriefDescription": "Remote VNA Credits; Used", + "Counter": "0,1,2", "EventCode": "0x5B", "EventName": "UNC_M3UPI_RxC_VNA_CRD.USED", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of remote vna credits consumed per cy= cle", "UMask": "0x1", @@ -7001,8 +8579,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Bounce"= , + "Counter": "0,1,2", "EventCode": "0xB4", "EventName": "UNC_M3UPI_RxR_BUSY_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. 
This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x1", @@ -7010,8 +8590,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Credit"= , + "Counter": "0,1,2", "EventCode": "0xB4", "EventName": "UNC_M3UPI_RxR_BUSY_STARVED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x10", @@ -7019,8 +8601,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Bounce"= , + "Counter": "0,1,2", "EventCode": "0xB4", "EventName": "UNC_M3UPI_RxR_BUSY_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x4", @@ -7028,8 +8612,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Credit"= , + "Counter": "0,1,2", "EventCode": "0xB4", "EventName": "UNC_M3UPI_RxR_BUSY_STARVED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, because a mess= age from the other queue has higher priority", "UMask": "0x40", @@ -7037,8 +8623,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; AD - Bounce", + "Counter": "0,1,2", "EventCode": "0xB2", "EventName": "UNC_M3UPI_RxR_BYPASS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x1", @@ -7046,8 +8634,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; AD - Credit", + "Counter": "0,1,2", "EventCode": "0xB2", "EventName": "UNC_M3UPI_RxR_BYPASS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x10", @@ -7055,8 +8645,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; AK - Bounce", + "Counter": "0,1,2", "EventCode": "0xB2", "EventName": "UNC_M3UPI_RxR_BYPASS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x2", @@ -7064,8 +8656,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; BL - Bounce", + "Counter": "0,1,2", "EventCode": "0xB2", "EventName": "UNC_M3UPI_RxR_BYPASS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x4", @@ -7073,8 +8667,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; BL - Credit", + "Counter": "0,1,2", "EventCode": "0xB2", "EventName": "UNC_M3UPI_RxR_BYPASS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": "0x40", @@ -7082,8 +8678,10 @@ }, { "BriefDescription": "Transgress Ingress Bypass; IV - Bounce", + "Counter": "0,1,2", "EventCode": "0xB2", "EventName": "UNC_M3UPI_RxR_BYPASS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the CMS Ingress"= , "UMask": 
"0x8", @@ -7091,8 +8689,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Bounce"= , + "Counter": "0,1,2", "EventCode": "0xB3", "EventName": "UNC_M3UPI_RxR_CRD_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x1", @@ -7100,8 +8700,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AD - Credit"= , + "Counter": "0,1,2", "EventCode": "0xB3", "EventName": "UNC_M3UPI_RxR_CRD_STARVED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x10", @@ -7109,8 +8711,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; AK - Bounce"= , + "Counter": "0,1,2", "EventCode": "0xB3", "EventName": "UNC_M3UPI_RxR_CRD_STARVED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x2", @@ -7118,8 +8722,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Bounce"= , + "Counter": "0,1,2", "EventCode": "0xB3", "EventName": "UNC_M3UPI_RxR_CRD_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x4", @@ -7127,8 +8733,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; BL - Credit"= , + "Counter": "0,1,2", "EventCode": "0xB3", "EventName": "UNC_M3UPI_RxR_CRD_STARVED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x40", @@ -7136,8 +8744,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; IFV - Credit= ", + "Counter": "0,1,2", "EventCode": "0xB3", "EventName": "UNC_M3UPI_RxR_CRD_STARVED.IFV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x80", @@ -7145,8 +8755,10 @@ }, { "BriefDescription": "Transgress Injection Starvation; IV - Bounce"= , + "Counter": "0,1,2", "EventCode": "0xB3", "EventName": "UNC_M3UPI_RxR_CRD_STARVED.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cycles under injection starvation mod= e. 
This starvation is triggered when the CMS Ingress cannot send a transac= tion onto the mesh for a long period of time. In this case, the Ingress is= unable to forward to the Egress due to a lack of credit.", "UMask": "0x8", @@ -7154,8 +8766,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; AD - Bounce", + "Counter": "0,1,2", "EventCode": "0xB1", "EventName": "UNC_M3UPI_RxR_INSERTS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x1", @@ -7163,8 +8777,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; AD - Credit", + "Counter": "0,1,2", "EventCode": "0xB1", "EventName": "UNC_M3UPI_RxR_INSERTS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x10", @@ -7172,8 +8788,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; AK - Bounce", + "Counter": "0,1,2", "EventCode": "0xB1", "EventName": "UNC_M3UPI_RxR_INSERTS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x2", @@ -7181,8 +8799,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; BL - Bounce", + "Counter": "0,1,2", "EventCode": "0xB1", "EventName": "UNC_M3UPI_RxR_INSERTS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x4", @@ -7190,8 +8810,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; BL - Credit", + "Counter": "0,1,2", "EventCode": "0xB1", "EventName": "UNC_M3UPI_RxR_INSERTS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x40", @@ -7199,8 +8821,10 @@ }, { "BriefDescription": "Transgress Ingress Allocations; IV - Bounce", + "Counter": "0,1,2", "EventCode": "0xB1", "EventName": "UNC_M3UPI_RxR_INSERTS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the CMS Ingress = The Ingress is used to queue up requests received from the mesh", "UMask": "0x8", @@ -7208,8 +8832,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; AD - Bounce", + "Counter": "0,1,2", "EventCode": "0xB0", "EventName": "UNC_M3UPI_RxR_OCCUPANCY.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x1", @@ -7217,8 +8843,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; AD - Credit", + "Counter": "0,1,2", "EventCode": "0xB0", "EventName": "UNC_M3UPI_RxR_OCCUPANCY.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x10", @@ -7226,8 +8854,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; AK - Bounce", + "Counter": "0,1,2", "EventCode": "0xB0", "EventName": "UNC_M3UPI_RxR_OCCUPANCY.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from 
the mesh", "UMask": "0x2", @@ -7235,8 +8865,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; BL - Bounce", + "Counter": "0,1,2", "EventCode": "0xB0", "EventName": "UNC_M3UPI_RxR_OCCUPANCY.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x4", @@ -7244,8 +8876,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; BL - Credit", + "Counter": "0,1,2", "EventCode": "0xB0", "EventName": "UNC_M3UPI_RxR_OCCUPANCY.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x40", @@ -7253,8 +8887,10 @@ }, { "BriefDescription": "Transgress Ingress Occupancy; IV - Bounce", + "Counter": "0,1,2", "EventCode": "0xB0", "EventName": "UNC_M3UPI_RxR_OCCUPANCY.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Ingress buffers in t= he CMS The Ingress is used to queue up requests received from the mesh", "UMask": "0x8", @@ -7262,8 +8898,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 0", + "Counter": "0,1,2", "EventCode": "0xD0", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x1", @@ -7271,8 +8909,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 1", + "Counter": "0,1,2", "EventCode": "0xD0", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x2", @@ -7280,8 +8920,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 2", + "Counter": "0,1,2", "EventCode": "0xD0", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -7289,8 +8931,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2", "EventCode": "0xD0", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -7298,8 +8942,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2", "EventCode": "0xD0", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x10", @@ -7307,8 +8953,10 @@ }, { "BriefDescription": "Stall on No AD Agent0 Transgress Credits; For= Transgress 5", + "Counter": "0,1,2", "EventCode": "0xD0", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 0 Egress 
Buffer is stalled waiting for a TGR credit to become available, per transgress.", "UMask": "0x20", @@ -7316,8 +8964,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 0", + "Counter": "0,1,2", "EventCode": "0xD2", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.", "UMask": "0x1", @@ -7325,8 +8975,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 1", + "Counter": "0,1,2", "EventCode": "0xD2", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.", "UMask": "0x2", @@ -7334,8 +8986,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 2", + "Counter": "0,1,2", "EventCode": "0xD2", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.", "UMask": "0x4", @@ -7343,8 +8997,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 3", + "Counter": "0,1,2", "EventCode": "0xD2", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.", "UMask": "0x8", @@ -7352,8 +9008,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 4", + "Counter": "0,1,2", "EventCode": "0xD2", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.", "UMask": "0x10", @@ -7361,8 +9019,10 @@ }, { "BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 5", + "Counter": "0,1,2", "EventCode": "0xD2", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.", "UMask": "0x20", @@ -7370,8 +9030,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 0", + "Counter": "0,1,2", "EventCode": "0xD4", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.", "UMask": "0x1", @@ -7379,8 +9041,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 1", + "Counter": "0,1,2", "EventCode": "0xD4", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.", "UMask": "0x2", @@ -7388,8 +9052,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 2", + "Counter":
"0,1,2", "EventCode": "0xD4", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -7397,8 +9063,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2", "EventCode": "0xD4", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -7406,8 +9074,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2", "EventCode": "0xD4", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x10", @@ -7415,8 +9085,10 @@ }, { "BriefDescription": "Stall on No BL Agent0 Transgress Credits; For= Transgress 5", + "Counter": "0,1,2", "EventCode": "0xD4", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 0 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x20", @@ -7424,8 +9096,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 0", + "Counter": "0,1,2", "EventCode": "0xD6", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x1", @@ -7433,8 +9107,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 1", + "Counter": "0,1,2", "EventCode": "0xD6", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x2", @@ -7442,8 +9118,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 2", + "Counter": "0,1,2", "EventCode": "0xD6", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x4", @@ -7451,8 +9129,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 3", + "Counter": "0,1,2", "EventCode": "0xD6", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR credit to become available, per transgress."= , "UMask": "0x8", @@ -7460,8 +9140,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For= Transgress 4", + "Counter": "0,1,2", "EventCode": "0xD6", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffe= r is stalled waiting for a TGR 
credit to become available, per transgress.", "UMask": "0x10", @@ -7469,8 +9151,10 @@ }, { "BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 5", + "Counter": "0,1,2", "EventCode": "0xD6", "EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.", "UMask": "0x20", @@ -7478,8 +9162,10 @@ }, { "BriefDescription": "Failed ARB for AD; VN0 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD arb but no win; arb request asserted but not won", "UMask": "0x1", @@ -7487,8 +9173,10 @@ }, { "BriefDescription": "Failed ARB for AD; VN0 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD arb but no win; arb request asserted but not won", "UMask": "0x4", @@ -7496,8 +9184,10 @@ }, { "BriefDescription": "Failed ARB for AD; VN0 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD arb but no win; arb request asserted but not won", "UMask": "0x2", @@ -7505,8 +9195,10 @@ }, { "BriefDescription": "Failed ARB for AD; VN0 WB Messages", + "Counter": "0,1,2", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD arb but no win; arb request asserted but not won", "UMask": "0x8", @@ -7514,8 +9206,10 @@ }, { "BriefDescription": "Failed ARB for AD; VN1 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD arb but no win; arb request asserted but not won", "UMask": "0x10", @@ -7523,8 +9217,10 @@ }, { "BriefDescription": "Failed ARB for AD; VN1 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD arb but no win; arb request asserted but not won", "UMask": "0x40", @@ -7532,8 +9228,10 @@ }, { "BriefDescription": "Failed ARB for AD; VN1 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD arb but no win; arb request asserted but not won", "UMask": "0x20", @@ -7541,8 +9239,10 @@ }, { "BriefDescription": "Failed ARB for AD; VN1 WB Messages", + "Counter": "0,1,2", "EventCode": "0x30", "EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD arb but no win; arb request asserted but not won", "UMask": "0x80", @@ -7550,8 +9250,10 @@ }, { "BriefDescription": "AD FlowQ Bypass", + "Counter": "0,1,2", "EventCode": "0x2C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)", "UMask": "0x1", @@ -7559,8 +9261,10 @@ }, { "BriefDescription": "AD FlowQ Bypass", + "Counter": "0,1,2", "EventCode": "0x2C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT1", + "Experimental": "1",
"PerPkg": "1", "PublicDescription": "Counts cases when the AD flowQ is bypassed (= S0, S1 and S2 indicate which slot was bypassed with S0 having the highest p= riority and S2 the least)", "UMask": "0x2", @@ -7568,8 +9272,10 @@ }, { "BriefDescription": "AD FlowQ Bypass", + "Counter": "0,1,2", "EventCode": "0x2C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cases when the AD flowQ is bypassed (= S0, S1 and S2 indicate which slot was bypassed with S0 having the highest p= riority and S2 the least)", "UMask": "0x4", @@ -7577,8 +9283,10 @@ }, { "BriefDescription": "AD FlowQ Bypass", + "Counter": "0,1,2", "EventCode": "0x2C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.BL_EARLY_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts cases when the AD flowQ is bypassed (= S0, S1 and S2 indicate which slot was bypassed with S0 having the highest p= riority and S2 the least)", "UMask": "0x8", @@ -7586,8 +9294,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty; VN0 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Egress queue is Not = Empty", "UMask": "0x1", @@ -7595,8 +9305,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty; VN0 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Egress queue is Not = Empty", "UMask": "0x4", @@ -7604,8 +9316,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty; VN0 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Egress queue is Not = Empty", "UMask": "0x2", @@ -7613,8 +9327,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty; VN0 WB Messages", + "Counter": "0,1,2", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Egress queue is Not = Empty", "UMask": "0x8", @@ -7622,8 +9338,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty; VN1 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Egress queue is Not = Empty", "UMask": "0x10", @@ -7631,8 +9349,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty; VN1 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Egress queue is Not = Empty", "UMask": "0x40", @@ -7640,8 +9360,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty; VN1 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Egress queue is Not = Empty", "UMask": "0x20", @@ -7649,8 +9371,10 @@ }, { "BriefDescription": "AD Flow Q Not Empty; VN1 WB Messages", + "Counter": "0,1,2", "EventCode": "0x27", "EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the AD Egress queue is Not = Empty", "UMask": "0x80", @@ -7658,8 +9382,10 @@ }, { 
"BriefDescription": "AD Flow Q Inserts; VN0 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x2D", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x1", @@ -7667,8 +9393,10 @@ }, { "BriefDescription": "AD Flow Q Inserts; VN0 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x2D", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x4", @@ -7676,8 +9404,10 @@ }, { "BriefDescription": "AD Flow Q Inserts; VN0 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x2D", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x2", @@ -7685,8 +9415,10 @@ }, { "BriefDescription": "AD Flow Q Inserts; VN0 WB Messages", + "Counter": "0,1,2", "EventCode": "0x2D", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x8", @@ -7694,8 +9426,10 @@ }, { "BriefDescription": "AD Flow Q Inserts; VN1 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x2D", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x10", @@ -7703,8 +9437,10 @@ }, { "BriefDescription": "AD Flow Q Inserts; VN1 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x2D", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. 
It is not possible to filter b= ased on direction or polarity.", "UMask": "0x40", @@ -7712,8 +9448,10 @@ }, { "BriefDescription": "AD Flow Q Inserts; VN1 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x2D", "EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x20", @@ -7721,64 +9459,80 @@ }, { "BriefDescription": "AD Flow Q Occupancy; VN0 REQ Messages", + "Counter": "0", "EventCode": "0x1C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy; VN0 RSP Messages", + "Counter": "0", "EventCode": "0x1C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy; VN0 SNP Messages", + "Counter": "0", "EventCode": "0x1C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy; VN0 WB Messages", + "Counter": "0", "EventCode": "0x1C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy; VN1 REQ Messages", + "Counter": "0", "EventCode": "0x1C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy; VN1 RSP Messages", + "Counter": "0", "EventCode": "0x1C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M3UPI" }, { "BriefDescription": "AD Flow Q Occupancy; VN1 SNP Messages", + "Counter": "0", "EventCode": "0x1C", "EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M3UPI" }, { "BriefDescription": "Number of Snoop Targets; CHA on VN0", + "Counter": "0", "EventCode": "0x3C", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN0_CHA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snpfanout targets and non-idle cyc= les can be used to calculate average snpfanout latency; Number of VN0 Snpf = to CHA", "UMask": "0x4", @@ -7786,8 +9540,10 @@ }, { "BriefDescription": "Number of Snoop Targets; Non Idle cycles on V= N0", + "Counter": "0", "EventCode": "0x3C", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN0_NON_IDLE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snpfanout targets and non-idle cyc= les can be used to calculate average snpfanout latency; Number of non-idle = cycles in issuing Vn0 Snpf", "UMask": "0x40", @@ -7795,8 +9551,10 @@ }, { "BriefDescription": "Number of Snoop Targets; Peer UPI0 on VN0", + "Counter": "0", "EventCode": "0x3C", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN0_PEER_UPI0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snpfanout targets and non-idle cyc= les can be used to calculate average snpfanout latency; Number of VN0 Snpf = to peer UPI0", "UMask": "0x1", @@ -7804,8 +9562,10 @@ }, { "BriefDescription": "Number of Snoop 
Targets; Peer UPI1 on VN0", + "Counter": "0", "EventCode": "0x3C", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN0_PEER_UPI1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snpfanout targets and non-idle cyc= les can be used to calculate average snpfanout latency; Number of VN0 Snpf = to peer UPI1", "UMask": "0x2", @@ -7813,8 +9573,10 @@ }, { "BriefDescription": "Number of Snoop Targets; CHA on VN1", + "Counter": "0", "EventCode": "0x3C", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN1_CHA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snpfanout targets and non-idle cyc= les can be used to calculate average snpfanout latency; Number of VN1 Snpf = to CHA", "UMask": "0x20", @@ -7822,8 +9584,10 @@ }, { "BriefDescription": "Number of Snoop Targets; Non Idle cycles on V= N1", + "Counter": "0", "EventCode": "0x3C", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN1_NON_IDLE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snpfanout targets and non-idle cyc= les can be used to calculate average snpfanout latency; Number of non-idle = cycles in issuing Vn1 Snpf", "UMask": "0x80", @@ -7831,8 +9595,10 @@ }, { "BriefDescription": "Number of Snoop Targets; Peer UPI0 on VN1", + "Counter": "0", "EventCode": "0x3C", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN1_PEER_UPI0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snpfanout targets and non-idle cyc= les can be used to calculate average snpfanout latency; Number of VN1 Snpf = to peer UPI0", "UMask": "0x8", @@ -7840,8 +9606,10 @@ }, { "BriefDescription": "Number of Snoop Targets; Peer UPI1 on VN1", + "Counter": "0", "EventCode": "0x3C", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN1_PEER_UPI1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of snpfanout targets and non-idle cyc= les can be used to calculate average snpfanout latency; Number of VN1 Snpf = to peer UPI1", "UMask": "0x10", @@ -7849,8 +9617,10 @@ }, { "BriefDescription": "Snoop Arbitration; FlowQ Won", + "Counter": "0,1,2", "EventCode": "0x3D", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP2_VN1.VN0_SNPFP_NONSNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Outcome of SnpF pending arbitration; FlowQ t= xn issued when SnpF pending on Vn0", "UMask": "0x1", @@ -7858,8 +9628,10 @@ }, { "BriefDescription": "Snoop Arbitration; FlowQ SnpF Won", + "Counter": "0,1,2", "EventCode": "0x3D", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP2_VN1.VN0_SNPFP_VN2SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Outcome of SnpF pending arbitration; FlowQ V= n0 SnpF issued when SnpF pending on Vn1", "UMask": "0x4", @@ -7867,8 +9639,10 @@ }, { "BriefDescription": "Snoop Arbitration; FlowQ Won", + "Counter": "0,1,2", "EventCode": "0x3D", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP2_VN1.VN1_SNPFP_NONSNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Outcome of SnpF pending arbitration; FlowQ t= xn issued when SnpF pending on Vn1", "UMask": "0x2", @@ -7876,8 +9650,10 @@ }, { "BriefDescription": "Snoop Arbitration; FlowQ SnpF Won", + "Counter": "0,1,2", "EventCode": "0x3D", "EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP2_VN1.VN1_SNPFP_VN0SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Outcome of SnpF pending arbitration; FlowQ V= n1 SnpF issued when SnpF pending on Vn0", "UMask": "0x8", @@ -7885,8 +9661,10 @@ }, { "BriefDescription": "Speculative ARB for AD - Credit Available; = VN0 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x34", "EventName": 
"UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request with prior cycle = credit check complete and credit avail", "UMask": "0x1", @@ -7894,8 +9672,10 @@ }, { "BriefDescription": "Speculative ARB for AD - Credit Available; = VN0 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x34", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request with prior cycle = credit check complete and credit avail", "UMask": "0x2", @@ -7903,8 +9683,10 @@ }, { "BriefDescription": "Speculative ARB for AD - Credit Available; = VN0 WB Messages", + "Counter": "0,1,2", "EventCode": "0x34", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request with prior cycle = credit check complete and credit avail", "UMask": "0x8", @@ -7912,8 +9694,10 @@ }, { "BriefDescription": "Speculative ARB for AD - Credit Available; = VN1 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x34", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request with prior cycle = credit check complete and credit avail", "UMask": "0x10", @@ -7921,8 +9705,10 @@ }, { "BriefDescription": "Speculative ARB for AD - Credit Available; = VN1 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x34", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request with prior cycle = credit check complete and credit avail", "UMask": "0x20", @@ -7930,8 +9716,10 @@ }, { "BriefDescription": "Speculative ARB for AD - Credit Available; = VN1 WB Messages", + "Counter": "0,1,2", "EventCode": "0x34", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request with prior cycle = credit check complete and credit avail", "UMask": "0x80", @@ -7939,8 +9727,10 @@ }, { "BriefDescription": "Speculative ARB for AD - New Message; VN0 RE= Q Messages", + "Counter": "0,1,2", "EventCode": "0x33", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x1", @@ -7948,8 +9738,10 @@ }, { "BriefDescription": "Speculative ARB for AD - New Message; VN0 SN= P Messages", + "Counter": "0,1,2", "EventCode": "0x33", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x2", @@ -7957,8 +9749,10 @@ }, { "BriefDescription": "Speculative ARB for AD - New Message; VN0 WB= Messages", + "Counter": "0,1,2", "EventCode": "0x33", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x8", @@ -7966,8 +9760,10 @@ }, { "BriefDescription": "Speculative ARB for AD - New Message; VN1 RE= Q Messages", + "Counter": "0,1,2", "EventCode": "0x33", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request due to new messag= e arriving on a 
specific channel (MC/VN)", "UMask": "0x10", @@ -7975,8 +9771,10 @@ }, { "BriefDescription": "Speculative ARB for AD - New Message; VN1 SN= P Messages", + "Counter": "0,1,2", "EventCode": "0x33", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x20", @@ -7984,8 +9782,10 @@ }, { "BriefDescription": "Speculative ARB for AD - New Message; VN1 WB= Messages", + "Counter": "0,1,2", "EventCode": "0x33", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x80", @@ -7993,8 +9793,10 @@ }, { "BriefDescription": "Speculative ARB for AD - No Credit; VN0 REQ = Messages", + "Counter": "0,1,2", "EventCode": "0x32", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x1", @@ -8002,8 +9804,10 @@ }, { "BriefDescription": "Speculative ARB for AD - No Credit; VN0 RSP = Messages", + "Counter": "0,1,2", "EventCode": "0x32", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x4", @@ -8011,8 +9815,10 @@ }, { "BriefDescription": "Speculative ARB for AD - No Credit; VN0 SNP = Messages", + "Counter": "0,1,2", "EventCode": "0x32", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x2", @@ -8020,8 +9826,10 @@ }, { "BriefDescription": "Speculative ARB for AD - No Credit; VN0 WB M= essages", + "Counter": "0,1,2", "EventCode": "0x32", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x8", @@ -8029,8 +9837,10 @@ }, { "BriefDescription": "Speculative ARB for AD - No Credit; VN1 REQ = Messages", + "Counter": "0,1,2", "EventCode": "0x32", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x10", @@ -8038,8 +9848,10 @@ }, { "BriefDescription": "Speculative ARB for AD - No Credit; VN1 RSP = Messages", + "Counter": "0,1,2", "EventCode": "0x32", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x40", @@ -8047,8 +9859,10 @@ }, { "BriefDescription": "Speculative ARB for AD - No Credit; VN1 SNP = Messages", + "Counter": "0,1,2", "EventCode": "0x32", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN1_SNP", + "Experimental": "1", 
"PerPkg": "1", "PublicDescription": "AD speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x20", @@ -8056,8 +9870,10 @@ }, { "BriefDescription": "Speculative ARB for AD - No Credit; VN1 WB M= essages", + "Counter": "0,1,2", "EventCode": "0x32", "EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "AD speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x80", @@ -8065,22 +9881,28 @@ }, { "BriefDescription": "AK Flow Q Inserts", + "Counter": "0,1,2", "EventCode": "0x2F", "EventName": "UNC_M3UPI_TxC_AK_FLQ_INSERTS", + "Experimental": "1", "PerPkg": "1", "Unit": "M3UPI" }, { "BriefDescription": "AK Flow Q Occupancy", + "Counter": "0", "EventCode": "0x1E", "EventName": "UNC_M3UPI_TxC_AK_FLQ_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "M3UPI" }, { "BriefDescription": "Failed ARB for BL; VN0 NCB Messages", + "Counter": "0,1,2", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL arb but no win; arb request asserted but = not won", "UMask": "0x4", @@ -8088,8 +9910,10 @@ }, { "BriefDescription": "Failed ARB for BL; VN0 NCS Messages", + "Counter": "0,1,2", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL arb but no win; arb request asserted but = not won", "UMask": "0x8", @@ -8097,8 +9921,10 @@ }, { "BriefDescription": "Failed ARB for BL; VN0 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL arb but no win; arb request asserted but = not won", "UMask": "0x1", @@ -8106,8 +9932,10 @@ }, { "BriefDescription": "Failed ARB for BL; VN0 WB Messages", + "Counter": "0,1,2", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL arb but no win; arb request asserted but = not won", "UMask": "0x2", @@ -8115,8 +9943,10 @@ }, { "BriefDescription": "Failed ARB for BL; VN1 NCS Messages", + "Counter": "0,1,2", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL arb but no win; arb request asserted but = not won", "UMask": "0x40", @@ -8124,8 +9954,10 @@ }, { "BriefDescription": "Failed ARB for BL; VN1 NCB Messages", + "Counter": "0,1,2", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL arb but no win; arb request asserted but = not won", "UMask": "0x80", @@ -8133,8 +9965,10 @@ }, { "BriefDescription": "Failed ARB for BL; VN1 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL arb but no win; arb request asserted but = not won", "UMask": "0x10", @@ -8142,8 +9976,10 @@ }, { "BriefDescription": "Failed ARB for BL; VN1 WB Messages", + "Counter": "0,1,2", "EventCode": "0x35", "EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL arb but no win; arb request asserted but = not won", "UMask": "0x20", @@ -8151,8 +9987,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty; VN0 REQ 
Messages", + "Counter": "0,1,2", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Egress queue is Not = Empty", "UMask": "0x1", @@ -8160,8 +9998,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty; VN0 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Egress queue is Not = Empty", "UMask": "0x4", @@ -8169,8 +10009,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty; VN0 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Egress queue is Not = Empty", "UMask": "0x2", @@ -8178,8 +10020,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty; VN0 WB Messages", + "Counter": "0,1,2", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Egress queue is Not = Empty", "UMask": "0x8", @@ -8187,8 +10031,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty; VN1 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Egress queue is Not = Empty", "UMask": "0x10", @@ -8196,8 +10042,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty; VN1 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Egress queue is Not = Empty", "UMask": "0x40", @@ -8205,8 +10053,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty; VN1 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Egress queue is Not = Empty", "UMask": "0x20", @@ -8214,8 +10064,10 @@ }, { "BriefDescription": "BL Flow Q Not Empty; VN1 WB Messages", + "Counter": "0,1,2", "EventCode": "0x28", "EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the BL Egress queue is Not = Empty", "UMask": "0x80", @@ -8223,8 +10075,10 @@ }, { "BriefDescription": "BL Flow Q Inserts; VN0 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x2E", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x1", @@ -8232,8 +10086,10 @@ }, { "BriefDescription": "BL Flow Q Inserts; VN0 WB Messages", + "Counter": "0,1,2", "EventCode": "0x2E", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. 
It is not possible to filter b= ased on direction or polarity.", "UMask": "0x2", @@ -8241,8 +10097,10 @@ }, { "BriefDescription": "BL Flow Q Inserts; VN0 NCS Messages", + "Counter": "0,1,2", "EventCode": "0x2E", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x8", @@ -8250,8 +10108,10 @@ }, { "BriefDescription": "BL Flow Q Inserts; VN0 NCB Messages", + "Counter": "0,1,2", "EventCode": "0x2E", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x4", @@ -8259,8 +10119,10 @@ }, { "BriefDescription": "BL Flow Q Inserts; VN1 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x2E", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x10", @@ -8268,8 +10130,10 @@ }, { "BriefDescription": "BL Flow Q Inserts; VN1 WB Messages", + "Counter": "0,1,2", "EventCode": "0x2E", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x20", @@ -8277,8 +10141,10 @@ }, { "BriefDescription": "BL Flow Q Inserts; VN1_NCB Messages", + "Counter": "0,1,2", "EventCode": "0x2E", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. It is not possible to filter b= ased on direction or polarity.", "UMask": "0x80", @@ -8286,8 +10152,10 @@ }, { "BriefDescription": "BL Flow Q Inserts; VN1_NCS Messages", + "Counter": "0,1,2", "EventCode": "0x2E", "EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of allocations into the QP= I FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accum= ulator event in order to calculate average queue latency. Only a single Fl= owQ queue can be tracked at any given time. 
It is not possible to filter b= ased on direction or polarity.", "UMask": "0x40", @@ -8295,72 +10163,90 @@ }, { "BriefDescription": "BL Flow Q Occupancy; VN0 NCB Messages", + "Counter": "0", "EventCode": "0x1D", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy; VN0 NCS Messages", + "Counter": "0", "EventCode": "0x1D", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy; VN0 RSP Messages", + "Counter": "0", "EventCode": "0x1D", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy; VN0 WB Messages", + "Counter": "0", "EventCode": "0x1D", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy; VN1_NCS Messages", + "Counter": "0", "EventCode": "0x1D", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy; VN1_NCB Messages", + "Counter": "0", "EventCode": "0x1D", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy; VN1 RSP Messages", + "Counter": "0", "EventCode": "0x1D", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "M3UPI" }, { "BriefDescription": "BL Flow Q Occupancy; VN1 WB Messages", + "Counter": "0", "EventCode": "0x1D", "EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "M3UPI" }, { "BriefDescription": "Speculative ARB for BL - New Message; VN0 WB= Messages", + "Counter": "0,1,2", "EventCode": "0x38", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN0_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x2", @@ -8368,8 +10254,10 @@ }, { "BriefDescription": "Speculative ARB for BL - New Message; VN0 NC= S Messages", + "Counter": "0,1,2", "EventCode": "0x38", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN0_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x8", @@ -8377,8 +10265,10 @@ }, { "BriefDescription": "Speculative ARB for BL - New Message; VN0 WB= Messages", + "Counter": "0,1,2", "EventCode": "0x38", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x1", @@ -8386,8 +10276,10 @@ }, { "BriefDescription": "Speculative ARB for BL - New Message; VN1 WB= Messages", + "Counter": "0,1,2", "EventCode": "0x38", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN1_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x20", @@ -8395,8 +10287,10 @@ }, { "BriefDescription": "Speculative ARB for BL - New Message; VN1 NC= B Messages", + "Counter": "0,1,2", 
"EventCode": "0x38", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN1_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x80", @@ -8404,8 +10298,10 @@ }, { "BriefDescription": "Speculative ARB for BL - New Message; VN1 RS= P Messages", + "Counter": "0,1,2", "EventCode": "0x38", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request due to new messag= e arriving on a specific channel (MC/VN)", "UMask": "0x10", @@ -8413,8 +10309,10 @@ }, { "BriefDescription": "Speculative ARB for AD Failed - No Credit; VN= 0 NCB Messages", + "Counter": "0,1,2", "EventCode": "0x37", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN0_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x4", @@ -8422,8 +10320,10 @@ }, { "BriefDescription": "Speculative ARB for AD Failed - No Credit; VN= 0 NCS Messages", + "Counter": "0,1,2", "EventCode": "0x37", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN0_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x8", @@ -8431,8 +10331,10 @@ }, { "BriefDescription": "Speculative ARB for AD Failed - No Credit; VN= 0 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x37", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x1", @@ -8440,8 +10342,10 @@ }, { "BriefDescription": "Speculative ARB for AD Failed - No Credit; VN= 0 WB Messages", + "Counter": "0,1,2", "EventCode": "0x37", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x2", @@ -8449,8 +10353,10 @@ }, { "BriefDescription": "Speculative ARB for AD Failed - No Credit; VN= 1 NCS Messages", + "Counter": "0,1,2", "EventCode": "0x37", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN1_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x40", @@ -8458,8 +10364,10 @@ }, { "BriefDescription": "Speculative ARB for AD Failed - No Credit; VN= 1 NCB Messages", + "Counter": "0,1,2", "EventCode": "0x37", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN1_NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x80", @@ -8467,8 +10375,10 @@ }, { "BriefDescription": "Speculative ARB for AD Failed - No Credit; VN= 1 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x37", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request asserted due to n= o other channel being active (have a 
valid entry but don't have credits to = send)", "UMask": "0x10", @@ -8476,8 +10386,10 @@ }, { "BriefDescription": "Speculative ARB for AD Failed - No Credit; VN= 1 WB Messages", + "Counter": "0,1,2", "EventCode": "0x37", "EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "BL speculative arb request asserted due to n= o other channel being active (have a valid entry but don't have credits to = send)", "UMask": "0x20", @@ -8485,8 +10397,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; AD - Bounce", + "Counter": "0,1,2", "EventCode": "0x9D", "EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -8494,8 +10408,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; AD - Credit", + "Counter": "0,1,2", "EventCode": "0x9D", "EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -8503,8 +10419,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; AK - Bounce", + "Counter": "0,1,2", "EventCode": "0x9D", "EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -8512,8 +10430,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; BL - Bounce", + "Counter": "0,1,2", "EventCode": "0x9D", "EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -8521,8 +10441,10 @@ }, { "BriefDescription": "CMS Horizontal ADS Used; BL - Credit", + "Counter": "0,1,2", "EventCode": "0x9D", "EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Horizontal Anti-= Deadlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -8530,8 +10452,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; AD - Bounce", + "Counter": "0,1,2", "EventCode": "0x9F", "EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -8539,8 +10463,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; AD - Credit", + "Counter": "0,1,2", "EventCode": "0x9F", "EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -8548,8 +10474,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; AK - Bounce", + "Counter": "0,1,2", "EventCode": "0x9F", "EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -8557,8 +10485,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; BL - Bounce", + "Counter": "0,1,2", "EventCode": "0x9F", "EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.BL_BNC", 
+ "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -8566,8 +10496,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; BL - Credit", + "Counter": "0,1,2", "EventCode": "0x9F", "EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -8575,8 +10507,10 @@ }, { "BriefDescription": "CMS Horizontal Bypass Used; IV - Bounce", + "Counter": "0,1,2", "EventCode": "0x9F", "EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Horizontal E= gress, broken down by ring type and CMS Agent.", "UMask": "0x8", @@ -8584,8 +10518,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; A= D - Bounce", + "Counter": "0,1,2", "EventCode": "0x96", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -8593,8 +10529,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; A= D - Credit", + "Counter": "0,1,2", "EventCode": "0x96", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -8602,8 +10540,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; A= K - Bounce", + "Counter": "0,1,2", "EventCode": "0x96", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -8611,8 +10551,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; B= L - Bounce", + "Counter": "0,1,2", "EventCode": "0x96", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -8620,8 +10562,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; B= L - Credit", + "Counter": "0,1,2", "EventCode": "0x96", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -8629,8 +10573,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; I= V - Bounce", + "Counter": "0,1,2", "EventCode": "0x96", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Full. 
The egress is used to queue up requests destined for t= he Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -8638,8 +10584,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; AD - Bounce", + "Counter": "0,1,2", "EventCode": "0x97", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -8647,8 +10595,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; AD - Credit", + "Counter": "0,1,2", "EventCode": "0x97", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -8656,8 +10606,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; AK - Bounce", + "Counter": "0,1,2", "EventCode": "0x97", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -8665,8 +10617,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; BL - Bounce", + "Counter": "0,1,2", "EventCode": "0x97", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -8674,8 +10628,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; BL - Credit", + "Counter": "0,1,2", "EventCode": "0x97", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -8683,8 +10639,10 @@ }, { "BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Emp= ty; IV - Bounce", + "Counter": "0,1,2", "EventCode": "0x97", "EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles the Transgress buffers in the Common = Mesh Stop are Not-Empty. 
The egress is used to queue up requests destined = for the Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -8692,8 +10650,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; AD - Bounce", + "Counter": "0,1,2", "EventCode": "0x95", "EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -8701,8 +10661,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; AD - Credit", + "Counter": "0,1,2", "EventCode": "0x95", "EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -8710,8 +10672,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; AK - Bounce", + "Counter": "0,1,2", "EventCode": "0x95", "EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -8719,8 +10683,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; BL - Bounce", + "Counter": "0,1,2", "EventCode": "0x95", "EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -8728,8 +10694,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; BL - Credit", + "Counter": "0,1,2", "EventCode": "0x95", "EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -8737,8 +10705,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Inserts; IV - Bounce", + "Counter": "0,1,2", "EventCode": "0x95", "EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Transgress bu= ffers in the Common Mesh Stop The egress is used to queue up requests dest= ined for the Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -8746,8 +10716,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; AD - Bounce", + "Counter": "0,1,2", "EventCode": "0x99", "EventName": "UNC_M3UPI_TxR_HORZ_NACK.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x1", @@ -8755,8 +10727,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; AD - Credit", + "Counter": "0,1,2", "EventCode": "0x99", "EventName": "UNC_M3UPI_TxR_HORZ_NACK.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x20", @@ -8764,8 +10738,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; AK - Bounce", + "Counter": "0,1,2", "EventCode": "0x99", "EventName": "UNC_M3UPI_TxR_HORZ_NACK.AK_BNC", + "Experimental": "1", 
"PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x2", @@ -8773,8 +10749,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; BL - Bounce", + "Counter": "0,1,2", "EventCode": "0x99", "EventName": "UNC_M3UPI_TxR_HORZ_NACK.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x4", @@ -8782,8 +10760,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; BL - Credit", + "Counter": "0,1,2", "EventCode": "0x99", "EventName": "UNC_M3UPI_TxR_HORZ_NACK.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x40", @@ -8791,8 +10771,10 @@ }, { "BriefDescription": "CMS Horizontal Egress NACKs; IV - Bounce", + "Counter": "0,1,2", "EventCode": "0x99", "EventName": "UNC_M3UPI_TxR_HORZ_NACK.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Horizontal Ring", "UMask": "0x8", @@ -8800,8 +10782,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; AD - Bounce"= , + "Counter": "0,1,2", "EventCode": "0x94", "EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x1", @@ -8809,8 +10793,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; AD - Credit"= , + "Counter": "0,1,2", "EventCode": "0x94", "EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AD_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x10", @@ -8818,8 +10804,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; AK - Bounce"= , + "Counter": "0,1,2", "EventCode": "0x94", "EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x2", @@ -8827,8 +10815,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; BL - Bounce"= , + "Counter": "0,1,2", "EventCode": "0x94", "EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x4", @@ -8836,8 +10826,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; BL - Credit"= , + "Counter": "0,1,2", "EventCode": "0x94", "EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.BL_CRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x40", @@ -8845,8 +10837,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Occupancy; IV - Bounce"= , + "Counter": "0,1,2", "EventCode": "0x94", "EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Transgress buffers 
i= n the Common Mesh Stop The egress is used to queue up requests destined fo= r the Horizontal Ring on the Mesh.", "UMask": "0x8", @@ -8854,8 +10848,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; A= D - Bounce", + "Counter": "0,1,2", "EventCode": "0x9B", "EventName": "UNC_M3UPI_TxR_HORZ_STARVED.AD_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x1", @@ -8863,8 +10859,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; A= K - Bounce", + "Counter": "0,1,2", "EventCode": "0x9B", "EventName": "UNC_M3UPI_TxR_HORZ_STARVED.AK_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x2", @@ -8872,8 +10870,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; B= L - Bounce", + "Counter": "0,1,2", "EventCode": "0x9B", "EventName": "UNC_M3UPI_TxR_HORZ_STARVED.BL_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x4", @@ -8881,8 +10881,10 @@ }, { "BriefDescription": "CMS Horizontal Egress Injection Starvation; I= V - Bounce", + "Counter": "0,1,2", "EventCode": "0x9B", "EventName": "UNC_M3UPI_TxR_HORZ_STARVED.IV_BNC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. 
This starvatio= n is triggered when the CMS Transgress buffer cannot send a transaction ont= o the Horizontal ring for a long period of time.", "UMask": "0x8", @@ -8890,8 +10892,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 0", + "Counter": "0,1,2", "EventCode": "0x9C", "EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -8899,8 +10903,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 1", + "Counter": "0,1,2", "EventCode": "0x9C", "EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -8908,8 +10914,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 0", + "Counter": "0,1,2", "EventCode": "0x9C", "EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -8917,8 +10925,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 1", + "Counter": "0,1,2", "EventCode": "0x9C", "EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x20", @@ -8926,8 +10936,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 0", + "Counter": "0,1,2", "EventCode": "0x9C", "EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -8935,8 +10947,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 1", + "Counter": "0,1,2", "EventCode": "0x9C", "EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets using the Vertical Anti-De= adlock Slot, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -8944,8 +10958,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 0", + "Counter": "0,1,2", "EventCode": "0x9E", "EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x1", @@ -8953,8 +10969,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AD - Agent 1", + "Counter": "0,1,2", "EventCode": "0x9E", "EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x10", @@ -8962,8 +10980,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 0", + "Counter": "0,1,2", "EventCode": "0x9E", "EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x2", @@ -8971,8 +10991,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; AK - Agent 1", + "Counter": "0,1,2", "EventCode": "0x9E", "EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AK_AG1", + 
"Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x20", @@ -8980,8 +11002,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 0", + "Counter": "0,1,2", "EventCode": "0x9E", "EventName": "UNC_M3UPI_TxR_VERT_BYPASS.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x4", @@ -8989,8 +11013,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; BL - Agent 1", + "Counter": "0,1,2", "EventCode": "0x9E", "EventName": "UNC_M3UPI_TxR_VERT_BYPASS.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x40", @@ -8998,8 +11024,10 @@ }, { "BriefDescription": "CMS Vertical ADS Used; IV", + "Counter": "0,1,2", "EventCode": "0x9E", "EventName": "UNC_M3UPI_TxR_VERT_BYPASS.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of packets bypassing the Vertical Egr= ess, broken down by ring type and CMS Agent.", "UMask": "0x8", @@ -9007,8 +11035,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD = - Agent 0", + "Counter": "0,1,2", "EventCode": "0x92", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the A= D ring. Some example include outbound requests, snoop requests, and snoop = responses.", "UMask": "0x1", @@ -9016,8 +11046,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD = - Agent 1", + "Counter": "0,1,2", "EventCode": "0x92", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 1 destined for the A= D ring. This is commonly used for outbound requests.", "UMask": "0x10", @@ -9025,8 +11057,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK = - Agent 0", + "Counter": "0,1,2", "EventCode": "0x92", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the A= K ring. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -9034,8 +11068,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK = - Agent 1", + "Counter": "0,1,2", "EventCode": "0x92", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. 
The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 1 destined for the A= K ring.", "UMask": "0x20", @@ -9043,8 +11079,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL = - Agent 0", + "Counter": "0,1,2", "EventCode": "0x92", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the B= L ring. This is commonly used to send data from the cache to various desti= nations.", "UMask": "0x4", @@ -9052,8 +11090,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL = - Agent 1", + "Counter": "0,1,2", "EventCode": "0x92", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 1 destined for the B= L ring. This is commonly used for transferring writeback data to the cache= .", "UMask": "0x40", @@ -9061,8 +11101,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; IV"= , + "Counter": "0,1,2", "EventCode": "0x92", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Full. The Egress is used to queue up requests destined for the Ve= rtical Ring on the Mesh.; Ring transactions from Agent 0 destined for the I= V ring. This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -9070,8 +11112,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AD - Agent 0", + "Counter": "0,1,2", "EventCode": "0x93", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = AD ring. Some example include outbound requests, snoop requests, and snoop= responses.", "UMask": "0x1", @@ -9079,8 +11123,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AD - Agent 1", + "Counter": "0,1,2", "EventCode": "0x93", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the = AD ring. This is commonly used for outbound requests.", "UMask": "0x10", @@ -9088,8 +11134,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AK - Agent 0", + "Counter": "0,1,2", "EventCode": "0x93", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = AK ring. 
This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -9097,8 +11145,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; AK - Agent 1", + "Counter": "0,1,2", "EventCode": "0x93", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the = AK ring.", "UMask": "0x20", @@ -9106,8 +11156,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; BL - Agent 0", + "Counter": "0,1,2", "EventCode": "0x93", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = BL ring. This is commonly used to send data from the cache to various dest= inations.", "UMask": "0x4", @@ -9115,8 +11167,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; BL - Agent 1", + "Counter": "0,1,2", "EventCode": "0x93", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the = BL ring. This is commonly used for transferring writeback data to the cach= e.", "UMask": "0x40", @@ -9124,8 +11178,10 @@ }, { "BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty= ; IV", + "Counter": "0,1,2", "EventCode": "0x93", "EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles the Common Mesh Stop Egress= was Not Empty. The Egress is used to queue up requests destined for the V= ertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the = IV ring. This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -9133,8 +11189,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AD - Agent 0", + "Counter": "0,1,2", "EventCode": "0x91", "EventName": "UNC_M3UPI_TxR_VERT_INSERTS.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD r= ing. Some example include outbound requests, snoop requests, and snoop res= ponses.", "UMask": "0x1", @@ -9142,8 +11200,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AD - Agent 1", + "Counter": "0,1,2", "EventCode": "0x91", "EventName": "UNC_M3UPI_TxR_VERT_INSERTS.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD r= ing. 
This is commonly used for outbound requests.", "UMask": "0x10", @@ -9151,8 +11211,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AK - Agent 0", + "Counter": "0,1,2", "EventCode": "0x91", "EventName": "UNC_M3UPI_TxR_VERT_INSERTS.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK r= ing. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -9160,8 +11222,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; AK - Agent 1", + "Counter": "0,1,2", "EventCode": "0x91", "EventName": "UNC_M3UPI_TxR_VERT_INSERTS.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK r= ing.", "UMask": "0x20", @@ -9169,8 +11233,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; BL - Agent 0", + "Counter": "0,1,2", "EventCode": "0x91", "EventName": "UNC_M3UPI_TxR_VERT_INSERTS.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL r= ing. This is commonly used to send data from the cache to various destinat= ions.", "UMask": "0x4", @@ -9178,8 +11244,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; BL - Agent 1", + "Counter": "0,1,2", "EventCode": "0x91", "EventName": "UNC_M3UPI_TxR_VERT_INSERTS.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL r= ing. This is commonly used for transferring writeback data to the cache.", "UMask": "0x40", @@ -9187,8 +11255,10 @@ }, { "BriefDescription": "CMS Vert Egress Allocations; IV", + "Counter": "0,1,2", "EventCode": "0x91", "EventName": "UNC_M3UPI_TxR_VERT_INSERTS.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the Common Mesh S= top Egress. The Egress is used to queue up requests destined for the Verti= cal Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV r= ing. 
This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -9196,8 +11266,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 0", + "Counter": "0,1,2", "EventCode": "0x98", "EventName": "UNC_M3UPI_TxR_VERT_NACK.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x1", @@ -9205,8 +11277,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 1", + "Counter": "0,1,2", "EventCode": "0x98", "EventName": "UNC_M3UPI_TxR_VERT_NACK.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x10", @@ -9214,8 +11288,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 0", + "Counter": "0,1,2", "EventCode": "0x98", "EventName": "UNC_M3UPI_TxR_VERT_NACK.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x2", @@ -9223,8 +11299,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 1", + "Counter": "0,1,2", "EventCode": "0x98", "EventName": "UNC_M3UPI_TxR_VERT_NACK.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x20", @@ -9232,8 +11310,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 0", + "Counter": "0,1,2", "EventCode": "0x98", "EventName": "UNC_M3UPI_TxR_VERT_NACK.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x4", @@ -9241,8 +11321,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 1", + "Counter": "0,1,2", "EventCode": "0x98", "EventName": "UNC_M3UPI_TxR_VERT_NACK.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x40", @@ -9250,8 +11332,10 @@ }, { "BriefDescription": "CMS Vertical Egress NACKs; IV", + "Counter": "0,1,2", "EventCode": "0x98", "EventName": "UNC_M3UPI_TxR_VERT_NACK.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts number of Egress packets NACK'ed on t= o the Vertical Ring", "UMask": "0x8", @@ -9259,8 +11343,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 0", + "Counter": "0,1,2", "EventCode": "0x90", "EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he AD ring. Some example include outbound requests, snoop requests, and sn= oop responses.", "UMask": "0x1", @@ -9268,8 +11354,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 1", + "Counter": "0,1,2", "EventCode": "0x90", "EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for t= he AD ring. 
This is commonly used for outbound requests.", "UMask": "0x10", @@ -9277,8 +11365,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 0", + "Counter": "0,1,2", "EventCode": "0x90", "EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he AK ring. This is commonly used for credit returns and GO responses.", "UMask": "0x2", @@ -9286,8 +11376,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 1", + "Counter": "0,1,2", "EventCode": "0x90", "EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for t= he AK ring.", "UMask": "0x20", @@ -9295,8 +11387,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 0", + "Counter": "0,1,2", "EventCode": "0x90", "EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he BL ring. This is commonly used to send data from the cache to various d= estinations.", "UMask": "0x4", @@ -9304,8 +11398,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 1", + "Counter": "0,1,2", "EventCode": "0x90", "EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for t= he BL ring. This is commonly used for transferring writeback data to the c= ache.", "UMask": "0x40", @@ -9313,8 +11409,10 @@ }, { "BriefDescription": "CMS Vert Egress Occupancy; IV", + "Counter": "0,1,2", "EventCode": "0x90", "EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Occupancy event for the Egress buffers in th= e Common Mesh Stop The egress is used to queue up requests destined for th= e Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for t= he IV ring. This is commonly used for snoops to the cores.", "UMask": "0x8", @@ -9322,8 +11420,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AD = - Agent 0", + "Counter": "0,1,2", "EventCode": "0x9A", "EventName": "UNC_M3UPI_TxR_VERT_STARVED.AD_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x1", @@ -9331,8 +11431,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AD = - Agent 1", + "Counter": "0,1,2", "EventCode": "0x9A", "EventName": "UNC_M3UPI_TxR_VERT_STARVED.AD_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. 
This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x10", @@ -9340,8 +11442,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AK = - Agent 0", + "Counter": "0,1,2", "EventCode": "0x9A", "EventName": "UNC_M3UPI_TxR_VERT_STARVED.AK_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x2", @@ -9349,8 +11453,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; AK = - Agent 1", + "Counter": "0,1,2", "EventCode": "0x9A", "EventName": "UNC_M3UPI_TxR_VERT_STARVED.AK_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x20", @@ -9358,8 +11464,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; BL = - Agent 0", + "Counter": "0,1,2", "EventCode": "0x9A", "EventName": "UNC_M3UPI_TxR_VERT_STARVED.BL_AG0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x4", @@ -9367,8 +11475,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; BL = - Agent 1", + "Counter": "0,1,2", "EventCode": "0x9A", "EventName": "UNC_M3UPI_TxR_VERT_STARVED.BL_AG1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x40", @@ -9376,8 +11486,10 @@ }, { "BriefDescription": "CMS Vertical Egress Injection Starvation; IV"= , + "Counter": "0,1,2", "EventCode": "0x9A", "EventName": "UNC_M3UPI_TxR_VERT_STARVED.IV", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts injection starvation. 
This starvatio= n is triggered when the CMS Egress cannot send a transaction onto the Verti= cal ring for a long period of time.", "UMask": "0x8", @@ -9385,8 +11497,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty; VN0 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPIs on the = AD Ring", "UMask": "0x2", @@ -9394,8 +11508,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty; VN0 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPIs on the = AD Ring", "UMask": "0x8", @@ -9403,8 +11519,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty; VN0 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPIs on the = AD Ring", "UMask": "0x4", @@ -9412,8 +11530,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty; VN1 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPIs on the = AD Ring", "UMask": "0x10", @@ -9421,8 +11541,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty; VN1 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPIs on the = AD Ring", "UMask": "0x40", @@ -9430,8 +11552,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty; VN1 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPIs on the = AD Ring", "UMask": "0x20", @@ -9439,8 +11563,10 @@ }, { "BriefDescription": "UPI0 AD Credits Empty; VNA", + "Counter": "0,1,2", "EventCode": "0x20", "EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VNA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPIs on the = AD Ring", "UMask": "0x1", @@ -9448,8 +11574,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty; VN0 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_NCS_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPI on the B= L Ring (diff between non-SMI and SMI mode)", "UMask": "0x4", @@ -9457,8 +11585,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty; VN0 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPI on the B= L Ring (diff between non-SMI and SMI mode)", "UMask": "0x2", @@ -9466,8 +11596,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty; VN0 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPI on the B= L Ring (diff between non-SMI and SMI mode)", "UMask": "0x8", @@ -9475,8 +11607,10 @@ }, { 
"BriefDescription": "UPI0 BL Credits Empty; VN1 RSP Messages", + "Counter": "0,1,2", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_NCS_NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPI on the B= L Ring (diff between non-SMI and SMI mode)", "UMask": "0x20", @@ -9484,8 +11618,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty; VN1 REQ Messages", + "Counter": "0,1,2", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPI on the B= L Ring (diff between non-SMI and SMI mode)", "UMask": "0x10", @@ -9493,8 +11629,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty; VN1 SNP Messages", + "Counter": "0,1,2", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPI on the B= L Ring (diff between non-SMI and SMI mode)", "UMask": "0x40", @@ -9502,8 +11640,10 @@ }, { "BriefDescription": "UPI0 BL Credits Empty; VNA", + "Counter": "0,1,2", "EventCode": "0x21", "EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VNA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "No credits available to send to UPI on the B= L Ring (diff between non-SMI and SMI mode)", "UMask": "0x1", @@ -9511,6 +11651,7 @@ }, { "BriefDescription": "Prefetches generated by the flow control queu= e of the M3UPI unit.", + "Counter": "0,1,2", "EventCode": "0x29", "EventName": "UNC_M3UPI_UPI_PREFETCH_SPAWN", "PerPkg": "1", @@ -9519,8 +11660,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Down and Even", + "Counter": "0,1,2", "EventCode": "0xA6", "EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x4", @@ -9528,8 +11671,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Down and Odd", + "Counter": "0,1,2", "EventCode": "0xA6", "EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x8", @@ -9537,8 +11682,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Up and Even", + "Counter": "0,1,2", "EventCode": "0xA6", "EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x1", @@ -9546,8 +11693,10 @@ }, { "BriefDescription": "Vertical AD Ring In Use; Up and Odd", + "Counter": "0,1,2", "EventCode": "0xA6", "EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AD ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. We really have two rings -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x2", @@ -9555,8 +11704,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Down and Even", + "Counter": "0,1,2", "EventCode": "0xA8", "EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x4", @@ -9564,8 +11715,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Down and Odd", + "Counter": "0,1,2", "EventCode": "0xA8", "EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. 
This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x8", @@ -9573,8 +11726,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Up and Even", + "Counter": "0,1,2", "EventCode": "0xA8", "EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x1", @@ -9582,8 +11737,10 @@ }, { "BriefDescription": "Vertical AK Ring In Use; Up and Odd", + "Counter": "0,1,2", "EventCode": "0xA8", "EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l AK ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings in -- a clock= wise ring and a counter-clockwise ring. On the left side of the ring, the = UP direction is on the clockwise ring and DN is on the counter-clockwise ri= ng. On the right side of the ring, this is reversed. The first half of th= e CBos are on the left side of the ring, and the 2nd half are on the right = side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD = is NOT the same ring as CBo 2 UP AD because they are on opposite sides of t= he ring.", "UMask": "0x2", @@ -9591,8 +11748,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Down and Even", + "Counter": "0,1,2", "EventCode": "0xAA", "EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.DN_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. 
In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x4", @@ -9600,8 +11759,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Down and Odd", + "Counter": "0,1,2", "EventCode": "0xAA", "EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.DN_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x8", @@ -9609,8 +11770,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Up and Even", + "Counter": "0,1,2", "EventCode": "0xAA", "EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.UP_EVEN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x1", @@ -9618,8 +11781,10 @@ }, { "BriefDescription": "Vertical BL Ring in Use; Up and Odd", + "Counter": "0,1,2", "EventCode": "0xAA", "EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.UP_ODD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l BL ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop.We really have two rings -- a clockwi= se ring and a counter-clockwise ring. On the left side of the ring, the UP= direction is on the clockwise ring and DN is on the counter-clockwise ring= . On the right side of the ring, this is reversed. The first half of the = CBos are on the left side of the ring, and the 2nd half are on the right si= de of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is= NOT the same ring as CBo 2 UP AD because they are on opposite sides of the= ring.", "UMask": "0x2", @@ -9627,8 +11792,10 @@ }, { "BriefDescription": "Vertical IV Ring in Use; Down", + "Counter": "0,1,2", "EventCode": "0xAC", "EventName": "UNC_M3UPI_VERT_RING_IV_IN_USE.DN", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l IV ring is being used at this ring stop. 
This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. There is only 1 IV ring. Therefore,= if one wants to monitor the Even ring, they should select both UP_EVEN and= DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_O= DD.", "UMask": "0x4", @@ -9636,8 +11803,10 @@ }, { "BriefDescription": "Vertical IV Ring in Use; Up", + "Counter": "0,1,2", "EventCode": "0xAC", "EventName": "UNC_M3UPI_VERT_RING_IV_IN_USE.UP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Vertica= l IV ring is being used at this ring stop. This includes when packets are = passing by and when packets are being sunk, but does not include when packe= ts are being sent from the ring stop. There is only 1 IV ring. Therefore,= if one wants to monitor the Even ring, they should select both UP_EVEN and= DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_O= DD.", "UMask": "0x1", @@ -9645,8 +11814,10 @@ }, { "BriefDescription": "VN0 Credit Used; WB on BL", + "Counter": "0,1,2", "EventCode": "0x5C", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN0 credit was used on the= DRS message channel. In order for a request to be transferred across UPI,= it must be guaranteed to have a flit buffer on the remote socket to sink i= nto. There are two credit pools, VNA and VN0. VNA is a shared pool used t= o achieve high performance. The VN0 pool has reserved entries for each mes= sage class and is used to prevent deadlock. Requests first attempt to acqu= ire a VNA credit, and then fall back to VN0 if they fail. This counts the = number of times a VN0 credit was used. Note that a single VN0 credit holds= access to potentially multiple flit buffers. For example, a transfer that= uses VNA could use 9 flit buffers and in that case uses 9 credits. A tran= sfer on VN0 will only count a single credit even though it may use multiple= buffers.; Data Response (WB) messages on BL. WB is generally used to tran= smit data with coherency. For example, remote reads and writes, or cache t= o cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -9654,8 +11825,10 @@ }, { "BriefDescription": "VN0 Credit Used; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x5C", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN0 credit was used on the= DRS message channel. In order for a request to be transferred across UPI,= it must be guaranteed to have a flit buffer on the remote socket to sink i= nto. There are two credit pools, VNA and VN0. VNA is a shared pool used t= o achieve high performance. The VN0 pool has reserved entries for each mes= sage class and is used to prevent deadlock. Requests first attempt to acqu= ire a VNA credit, and then fall back to VN0 if they fail. This counts the = number of times a VN0 credit was used. Note that a single VN0 credit holds= access to potentially multiple flit buffers. For example, a transfer that= uses VNA could use 9 flit buffers and in that case uses 9 credits. A tran= sfer on VN0 will only count a single credit even though it may use multiple= buffers.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally u= sed to transmit data without coherency. 
For example, non-coherent read dat= a returns.", "UMask": "0x20", @@ -9663,8 +11836,10 @@ }, { "BriefDescription": "VN0 Credit Used; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x5C", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN0 credit was used on the= DRS message channel. In order for a request to be transferred across UPI,= it must be guaranteed to have a flit buffer on the remote socket to sink i= nto. There are two credit pools, VNA and VN0. VNA is a shared pool used t= o achieve high performance. The VN0 pool has reserved entries for each mes= sage class and is used to prevent deadlock. Requests first attempt to acqu= ire a VNA credit, and then fall back to VN0 if they fail. This counts the = number of times a VN0 credit was used. Note that a single VN0 credit holds= access to potentially multiple flit buffers. For example, a transfer that= uses VNA could use 9 flit buffers and in that case uses 9 credits. A tran= sfer on VN0 will only count a single credit even though it may use multiple= buffers.; Home (REQ) messages on AD. REQ is generally used to send reques= ts, request responses, and snoop responses.", "UMask": "0x1", @@ -9672,8 +11847,10 @@ }, { "BriefDescription": "VN0 Credit Used; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x5C", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN0 credit was used on the= DRS message channel. In order for a request to be transferred across UPI,= it must be guaranteed to have a flit buffer on the remote socket to sink i= nto. There are two credit pools, VNA and VN0. VNA is a shared pool used t= o achieve high performance. The VN0 pool has reserved entries for each mes= sage class and is used to prevent deadlock. Requests first attempt to acqu= ire a VNA credit, and then fall back to VN0 if they fail. This counts the = number of times a VN0 credit was used. Note that a single VN0 credit holds= access to potentially multiple flit buffers. For example, a transfer that= uses VNA could use 9 flit buffers and in that case uses 9 credits. A tran= sfer on VN0 will only count a single credit even though it may use multiple= buffers.; Response (RSP) messages on AD. RSP packets are used to transmit= a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -9681,8 +11858,10 @@ }, { "BriefDescription": "VN0 Credit Used; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x5C", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN0 credit was used on the= DRS message channel. In order for a request to be transferred across UPI,= it must be guaranteed to have a flit buffer on the remote socket to sink i= nto. There are two credit pools, VNA and VN0. VNA is a shared pool used t= o achieve high performance. The VN0 pool has reserved entries for each mes= sage class and is used to prevent deadlock. Requests first attempt to acqu= ire a VNA credit, and then fall back to VN0 if they fail. This counts the = number of times a VN0 credit was used. Note that a single VN0 credit holds= access to potentially multiple flit buffers. For example, a transfer that= uses VNA could use 9 flit buffers and in that case uses 9 credits. A tran= sfer on VN0 will only count a single credit even though it may use multiple= buffers.; Snoops (SNP) messages on AD. 
SNP is used for outgoing snoops.", "UMask": "0x2", @@ -9690,8 +11869,10 @@ }, { "BriefDescription": "VN0 Credit Used; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x5C", "EventName": "UNC_M3UPI_VN0_CREDITS_USED.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN0 credit was used on the= DRS message channel. In order for a request to be transferred across UPI,= it must be guaranteed to have a flit buffer on the remote socket to sink i= nto. There are two credit pools, VNA and VN0. VNA is a shared pool used t= o achieve high performance. The VN0 pool has reserved entries for each mes= sage class and is used to prevent deadlock. Requests first attempt to acqu= ire a VNA credit, and then fall back to VN0 if they fail. This counts the = number of times a VN0 credit was used. Note that a single VN0 credit holds= access to potentially multiple flit buffers. For example, a transfer that= uses VNA could use 9 flit buffers and in that case uses 9 credits. A tran= sfer on VN0 will only count a single credit even though it may use multiple= buffers.; Response (RSP) messages on BL. RSP packets are used to transmit = a variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -9699,8 +11880,10 @@ }, { "BriefDescription": "VN0 No Credits; WB on BL", + "Counter": "0,1,2", "EventCode": "0x5E", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN0 Credits; = Data Response (WB) messages on BL. WB is generally used to transmit data w= ith coherency. For example, remote reads and writes, or cache to cache tra= nsfers will transmit their data using WB.", "UMask": "0x10", @@ -9708,8 +11891,10 @@ }, { "BriefDescription": "VN0 No Credits; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x5E", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN0 Credits; = Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to tran= smit data without coherency. For example, non-coherent read data returns."= , "UMask": "0x20", @@ -9717,8 +11902,10 @@ }, { "BriefDescription": "VN0 No Credits; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x5E", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN0 Credits; = Home (REQ) messages on AD. REQ is generally used to send requests, request= responses, and snoop responses.", "UMask": "0x1", @@ -9726,8 +11913,10 @@ }, { "BriefDescription": "VN0 No Credits; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x5E", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN0 Credits; = Response (RSP) messages on AD. RSP packets are used to transmit a variety = of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -9735,8 +11924,10 @@ }, { "BriefDescription": "VN0 No Credits; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x5E", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN0 Credits; = Snoops (SNP) messages on AD. 
SNP is used for outgoing snoops.", "UMask": "0x2", @@ -9744,8 +11935,10 @@ }, { "BriefDescription": "VN0 No Credits; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x5E", "EventName": "UNC_M3UPI_VN0_NO_CREDITS.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN0 Credits; = Response (RSP) messages on BL. RSP packets are used to transmit a variety o= f protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -9753,8 +11946,10 @@ }, { "BriefDescription": "VN1 Credit Used; WB on BL", + "Counter": "0,1,2", "EventCode": "0x5D", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN1 credit was used on the= WB message channel. In order for a request to be transferred across QPI, = it must be guaranteed to have a flit buffer on the remote socket to sink in= to. There are two credit pools, VNA and VN1. VNA is a shared pool used to= achieve high performance. The VN1 pool has reserved entries for each mess= age class and is used to prevent deadlock. Requests first attempt to acqui= re a VNA credit, and then fall back to VN1 if they fail. This counts the n= umber of times a VN1 credit was used. Note that a single VN1 credit holds = access to potentially multiple flit buffers. For example, a transfer that = uses VNA could use 9 flit buffers and in that case uses 9 credits. A trans= fer on VN1 will only count a single credit even though it may use multiple = buffers.; Data Response (WB) messages on BL. WB is generally used to trans= mit data with coherency. For example, remote reads and writes, or cache to= cache transfers will transmit their data using WB.", "UMask": "0x10", @@ -9762,8 +11957,10 @@ }, { "BriefDescription": "VN1 Credit Used; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x5D", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN1 credit was used on the= WB message channel. In order for a request to be transferred across QPI, = it must be guaranteed to have a flit buffer on the remote socket to sink in= to. There are two credit pools, VNA and VN1. VNA is a shared pool used to= achieve high performance. The VN1 pool has reserved entries for each mess= age class and is used to prevent deadlock. Requests first attempt to acqui= re a VNA credit, and then fall back to VN1 if they fail. This counts the n= umber of times a VN1 credit was used. Note that a single VN1 credit holds = access to potentially multiple flit buffers. For example, a transfer that = uses VNA could use 9 flit buffers and in that case uses 9 credits. A trans= fer on VN1 will only count a single credit even though it may use multiple = buffers.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally us= ed to transmit data without coherency. For example, non-coherent read data= returns.", "UMask": "0x20", @@ -9771,8 +11968,10 @@ }, { "BriefDescription": "VN1 Credit Used; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x5D", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN1 credit was used on the= WB message channel. In order for a request to be transferred across QPI, = it must be guaranteed to have a flit buffer on the remote socket to sink in= to. There are two credit pools, VNA and VN1. VNA is a shared pool used to= achieve high performance. 
The VN1 pool has reserved entries for each mess= age class and is used to prevent deadlock. Requests first attempt to acqui= re a VNA credit, and then fall back to VN1 if they fail. This counts the n= umber of times a VN1 credit was used. Note that a single VN1 credit holds = access to potentially multiple flit buffers. For example, a transfer that = uses VNA could use 9 flit buffers and in that case uses 9 credits. A trans= fer on VN1 will only count a single credit even though it may use multiple = buffers.; Home (REQ) messages on AD. REQ is generally used to send request= s, request responses, and snoop responses.", "UMask": "0x1", @@ -9780,8 +11979,10 @@ }, { "BriefDescription": "VN1 Credit Used; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x5D", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN1 credit was used on the= WB message channel. In order for a request to be transferred across QPI, = it must be guaranteed to have a flit buffer on the remote socket to sink in= to. There are two credit pools, VNA and VN1. VNA is a shared pool used to= achieve high performance. The VN1 pool has reserved entries for each mess= age class and is used to prevent deadlock. Requests first attempt to acqui= re a VNA credit, and then fall back to VN1 if they fail. This counts the n= umber of times a VN1 credit was used. Note that a single VN1 credit holds = access to potentially multiple flit buffers. For example, a transfer that = uses VNA could use 9 flit buffers and in that case uses 9 credits. A trans= fer on VN1 will only count a single credit even though it may use multiple = buffers.; Response (RSP) messages on AD. RSP packets are used to transmit = a variety of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -9789,8 +11990,10 @@ }, { "BriefDescription": "VN1 Credit Used; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x5D", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN1 credit was used on the= WB message channel. In order for a request to be transferred across QPI, = it must be guaranteed to have a flit buffer on the remote socket to sink in= to. There are two credit pools, VNA and VN1. VNA is a shared pool used to= achieve high performance. The VN1 pool has reserved entries for each mess= age class and is used to prevent deadlock. Requests first attempt to acqui= re a VNA credit, and then fall back to VN1 if they fail. This counts the n= umber of times a VN1 credit was used. Note that a single VN1 credit holds = access to potentially multiple flit buffers. For example, a transfer that = uses VNA could use 9 flit buffers and in that case uses 9 credits. A trans= fer on VN1 will only count a single credit even though it may use multiple = buffers.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -9798,8 +12001,10 @@ }, { "BriefDescription": "VN1 Credit Used; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x5D", "EventName": "UNC_M3UPI_VN1_CREDITS_USED.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times a VN1 credit was used on the= WB message channel. In order for a request to be transferred across QPI, = it must be guaranteed to have a flit buffer on the remote socket to sink in= to. There are two credit pools, VNA and VN1. VNA is a shared pool used to= achieve high performance. 
The VN1 pool has reserved entries for each mess= age class and is used to prevent deadlock. Requests first attempt to acqui= re a VNA credit, and then fall back to VN1 if they fail. This counts the n= umber of times a VN1 credit was used. Note that a single VN1 credit holds = access to potentially multiple flit buffers. For example, a transfer that = uses VNA could use 9 flit buffers and in that case uses 9 credits. A trans= fer on VN1 will only count a single credit even though it may use multiple = buffers.; Response (RSP) messages on BL. RSP packets are used to transmit a= variety of protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -9807,8 +12012,10 @@ }, { "BriefDescription": "VN1 No Credits; WB on BL", + "Counter": "0,1,2", "EventCode": "0x5F", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN1 Credits; = Data Response (WB) messages on BL. WB is generally used to transmit data w= ith coherency. For example, remote reads and writes, or cache to cache tra= nsfers will transmit their data using WB.", "UMask": "0x10", @@ -9816,8 +12023,10 @@ }, { "BriefDescription": "VN1 No Credits; NCB on BL", + "Counter": "0,1,2", "EventCode": "0x5F", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN1 Credits; = Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to tran= smit data without coherency. For example, non-coherent read data returns."= , "UMask": "0x20", @@ -9825,8 +12034,10 @@ }, { "BriefDescription": "VN1 No Credits; REQ on AD", + "Counter": "0,1,2", "EventCode": "0x5F", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN1 Credits; = Home (REQ) messages on AD. REQ is generally used to send requests, request= responses, and snoop responses.", "UMask": "0x1", @@ -9834,8 +12045,10 @@ }, { "BriefDescription": "VN1 No Credits; RSP on AD", + "Counter": "0,1,2", "EventCode": "0x5F", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.RSP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN1 Credits; = Response (RSP) messages on AD. RSP packets are used to transmit a variety = of protocol flits including grants and completions (CMP).", "UMask": "0x4", @@ -9843,8 +12056,10 @@ }, { "BriefDescription": "VN1 No Credits; SNP on AD", + "Counter": "0,1,2", "EventCode": "0x5F", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN1 Credits; = Snoops (SNP) messages on AD. SNP is used for outgoing snoops.", "UMask": "0x2", @@ -9852,8 +12067,10 @@ }, { "BriefDescription": "VN1 No Credits; RSP on BL", + "Counter": "0,1,2", "EventCode": "0x5F", "EventName": "UNC_M3UPI_VN1_NO_CREDITS.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of Cycles there were no VN1 Credits; = Response (RSP) messages on BL. RSP packets are used to transmit a variety o= f protocol flits including grants and completions (CMP).", "UMask": "0x8", @@ -9861,15 +12078,18 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_M2M_TxC_BL.DRS_UPI", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x40", "EventName": "UNC_NoUnit_TxC_BL.DRS_UPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "M2M" }, { "BriefDescription": "Clocks of the Intel(R) Ultra Path Interconnec= t (UPI)", + "Counter": "0,1,2,3", "EventCode": "0x1", "EventName": "UNC_UPI_CLOCKTICKS", "PerPkg": "1", @@ -9878,6 +12098,7 @@ }, { "BriefDescription": "Data Response packets that go direct to core"= , + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2C", "PerPkg": "1", @@ -9887,6 +12108,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_DIRECT_ATTEMPTS.D2U", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x12", "EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2K", @@ -9896,6 +12118,7 @@ }, { "BriefDescription": "Data Response packets that go direct to Intel= (R) UPI", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2U", "PerPkg": "1", @@ -9905,70 +12128,87 @@ }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ1", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ2", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ1", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ2", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ3", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.BL_VNA_EQ0", + "Counter": "0,1,2,3", "EventCode": "0x18", "EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.BL_VNA_EQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "Cycles Intel(R) UPI is in L1 power mode (shut= down)", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_UPI_L1_POWER_CYCLES", "PerPkg": "1", @@ -9977,164 +12217,205 @@ }, { "BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.BGF_CRD", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_UPI_M3_BYP_BLOCKED.BGF_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AD_VNA_LE2", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AD_VNA_LE2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", 
"Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AK_VNA_LE3", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AK_VNA_LE3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_BL_VNA_EQ0", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_BL_VNA_EQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.GV_BLOCK", + "Counter": "0,1,2,3", "EventCode": "0x14", "EventName": "UNC_UPI_M3_BYP_BLOCKED.GV_BLOCK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_CRD_RETURN_BLOCKED", + "Counter": "0,1,2,3", "EventCode": "0x16", "EventName": "UNC_UPI_M3_CRD_RETURN_BLOCKED", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.BGF_CRD", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.BGF_CRD", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_BTW_2_THR= ESH", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_BTW_2_THRESH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_LE2", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_LE2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AK_VNA_LE3", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AK_VNA_LE3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_BTW_0_THR= ESH", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_BTW_0_THRESH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_EQ0", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_EQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.GV_BLOCK", + "Counter": "0,1,2,3", "EventCode": "0x15", "EventName": "UNC_UPI_M3_RXQ_BLOCKED.GV_BLOCK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "UPI" }, { "BriefDescription": "Cycles where phy is not in L0, L0c, L0p, L1", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_UPI_PHY_INIT_CYCLES", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "L1 Req Nack", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_UPI_POWER_L1_NACK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times a link sends/rece= ives a LinkReqNAck. When the UPI links would like to change power state, t= he Tx side initiates a request to the Rx side requesting to change states. = This requests can either be accepted or denied. If the Rx side replies wi= th an Ack, the power mode will change. If it replies with NAck, no change = will take place. This can be filtered based on Rx and Tx. An Rx LinkReqNA= ck refers to receiving an NAck (meaning this agent's Tx originally requeste= d the power change). 
A Tx LinkReqNAck refers to sending this command (mean= ing the peer agent's Tx originally requested the power change and this agen= t accepted it).", "Unit": "UPI" }, { "BriefDescription": "L1 Req (same as L1 Ack).", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_UPI_POWER_L1_REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times a link sends/rece= ives a LinkReqAck. When the UPI links would like to change power state, th= e Tx side initiates a request to the Rx side requesting to change states. = This requests can either be accepted or denied. If the Rx side replies wit= h an Ack, the power mode will change. If it replies with NAck, no change w= ill take place. This can be filtered based on Rx and Tx. An Rx LinkReqAck= refers to receiving an Ack (meaning this agent's Tx originally requested t= he power change). A Tx LinkReqAck refers to sending this command (meaning = the peer agent's Tx originally requested the power change and this agent ac= cepted it).", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.ACK", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.ACK", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VN0", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VN0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VN1", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VN1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VNA", + "Counter": "0,1,2,3", "EventCode": "0x46", "EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VNA", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "Cycles the Rx of the Intel(R) UPI is in L0p p= ower mode", + "Counter": "0,1,2,3", "EventCode": "0x25", "EventName": "UNC_UPI_RxL0P_POWER_CYCLES", "PerPkg": "1", @@ -10143,16 +12424,20 @@ }, { "BriefDescription": "Cycles in L0. Receive side.", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_UPI_RxL0_POWER_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of UPI qfclk cycles spent in L0 power= mode in the Link Layer. L0 is the default mode which provides the highest= performance with the most power. Use edge detect to count the number of i= nstances that the link entered L0. Link power states are per link and per = direction, so for example the Tx direction could be in one state while Rx w= as in another. 
The phy layer sometimes leaves L0 for training, which will= not be captured by this event.", "Unit": "UPI" }, { "BriefDescription": "Matches on Receive path of a UPI Port; Non-Co= herent Bypass", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - NCB", "UMask": "0xe", @@ -10160,8 +12445,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Non-Co= herent Bypass", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - NCB", "UMask": "0x10e", @@ -10169,8 +12456,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Non-Co= herent Standard", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - NCS", "UMask": "0xf", @@ -10178,8 +12467,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Non-Co= herent Standard", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - NCS", "UMask": "0x10f", @@ -10187,8 +12478,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Reques= t", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQ Message Class", "UMask": "0x8", @@ -10196,8 +12489,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Reques= t Opcode", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.REQ_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match REQ Opcodes - Specified in Umask[7:4]"= , "UMask": "0x108", @@ -10205,24 +12500,30 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Respon= se - Conflict", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSPCNFLT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1aa", "Unit": "UPI" }, { "BriefDescription": "Matches on Receive path of a UPI Port; Respon= se - Invalid", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12a", "Unit": "UPI" }, { "BriefDescription": "Matches on Receive path of a UPI Port; Respon= se - Data", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class -WB", "UMask": "0xc", @@ -10230,8 +12531,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Respon= se - Data", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATA_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class -WB", "UMask": "0x10c", @@ -10239,8 +12542,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Respon= se - No Data", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_NODATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - RSP", "UMask": "0xa", @@ -10248,8 +12553,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Respon= se - No Data", + "Counter": "0,1,2,3", 
"EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_NODATA_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - RSP", "UMask": "0x10a", @@ -10257,8 +12564,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Snoop"= , + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "SNP Message Class", "UMask": "0x9", @@ -10266,8 +12575,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Snoop = Opcode", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.SNP_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match SNP Opcodes - Specified in Umask[7:4]"= , "UMask": "0x109", @@ -10275,8 +12586,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Writeb= ack", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class -WB", "UMask": "0xd", @@ -10284,8 +12597,10 @@ }, { "BriefDescription": "Matches on Receive path of a UPI Port; Writeb= ack", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.WB_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class -WB", "UMask": "0x10d", @@ -10293,6 +12608,7 @@ }, { "BriefDescription": "FLITs received which bypassed the Slot0 Recei= ve Buffer", + "Counter": "0,1,2,3", "EventCode": "0x31", "EventName": "UNC_UPI_RxL_BYPASSED.SLOT0", "PerPkg": "1", @@ -10302,6 +12618,7 @@ }, { "BriefDescription": "FLITs received which bypassed the Slot0 Recei= ve Buffer", + "Counter": "0,1,2,3", "EventCode": "0x31", "EventName": "UNC_UPI_RxL_BYPASSED.SLOT1", "PerPkg": "1", @@ -10311,6 +12628,7 @@ }, { "BriefDescription": "FLITs received which bypassed the Slot0 Recei= ve Buffer", + "Counter": "0,1,2,3", "EventCode": "0x31", "EventName": "UNC_UPI_RxL_BYPASSED.SLOT2", "PerPkg": "1", @@ -10320,46 +12638,57 @@ }, { "BriefDescription": "CRC Errors Detected", + "Counter": "0,1,2,3", "EventCode": "0xB", "EventName": "UNC_UPI_RxL_CRC_ERRORS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of CRC errors detected in the UPI Age= nt. Each UPI flit incorporates 8 bits of CRC for error detection. This co= unts the number of flits where the CRC was able to detect an error. After = an error has been detected, the UPI agent will send a request to the transm= itting socket to resend the flit (as well as any flits that came after it).= ", "Unit": "UPI" }, { "BriefDescription": "LLR Requests Sent", + "Counter": "0,1,2,3", "EventCode": "0x8", "EventName": "UNC_UPI_RxL_CRC_LLR_REQ_TRANSMIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of LLR Requests were transmitted. Th= is should generally be <=3D the number of CRC errors detected. If multiple= errors are detected before the Rx side receives a LLC_REQ_ACK from the Tx = side, there is no need to send more LLR_REQ_NACKs.", "Unit": "UPI" }, { "BriefDescription": "VN0 Credit Consumed", + "Counter": "0,1,2,3", "EventCode": "0x39", "EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VN0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times that an RxQ VN0 c= redit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). 
Thi= s includes packets that went through the RxQ and those that were bypasssed.= ", "Unit": "UPI" }, { "BriefDescription": "VN1 Credit Consumed", + "Counter": "0,1,2,3", "EventCode": "0x3A", "EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VN1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times that an RxQ VN1 c= redit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). Thi= s includes packets that went through the RxQ and those that were bypasssed.= ", "Unit": "UPI" }, { "BriefDescription": "VNA Credit Consumed", + "Counter": "0,1,2,3", "EventCode": "0x38", "EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VNA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times that an RxQ VNA c= redit was consumed (i.e. message uses a VNA credit for the Rx Buffer). Thi= s includes packets that went through the RxQ and those that were bypasssed.= ", "Unit": "UPI" }, { "BriefDescription": "Valid data FLITs received from any slot", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.ALL_DATA", "PerPkg": "1", @@ -10369,6 +12698,7 @@ }, { "BriefDescription": "Null FLITs received from any slot", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.ALL_NULL", "PerPkg": "1", @@ -10378,8 +12708,10 @@ }, { "BriefDescription": "Valid Flits Received; Data", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.DATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Count Data Flits (which consume all slots), but how much to count= is based on Slot0-2 mask, so count can be 0-3 depending on which slots are= enabled for counting..", "UMask": "0x8", @@ -10387,8 +12719,10 @@ }, { "BriefDescription": "Valid Flits Received; Idle", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.IDLE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).", "UMask": "0x47", @@ -10396,8 +12730,10 @@ }, { "BriefDescription": "Valid Flits Received; LLCRD Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.LLCRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Enables counting of LLCRD (with non-zero payload). This only appl= ies to slot 2 since LLCRD is only allowed in slot 2", "UMask": "0x10", @@ -10405,8 +12741,10 @@ }, { "BriefDescription": "Valid Flits Received; LLCTRL", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.LLCTRL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Equivalent to an idle packet. Enables counting of slot 0 LLCTRL = messages.", "UMask": "0x40", @@ -10414,6 +12752,7 @@ }, { "BriefDescription": "Protocol header and credit FLITs received fro= m any slot", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.NON_DATA", "PerPkg": "1", @@ -10423,6 +12762,7 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_UPI_RxL_FLITS.ALL_NULL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.NULL", @@ -10432,8 +12772,10 @@ }, { "BriefDescription": "Valid Flits Received; Protocol Header", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.PROTHDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Enables count of protocol headers in slot 0,1,2 (depending on slo= t uMask bits)", "UMask": "0x80", @@ -10441,17 +12783,21 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_RxL_FLITS.PROTHDR", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.PROT_HDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "UPI" }, { "BriefDescription": "Valid Flits Received; Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Count Slot 0 - Other mask bits determine types of headers to coun= t.", "UMask": "0x1", @@ -10459,8 +12805,10 @@ }, { "BriefDescription": "Valid Flits Received; Slot 1", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Count Slot 1 - Other mask bits determine types of headers to coun= t.", "UMask": "0x2", @@ -10468,8 +12816,10 @@ }, { "BriefDescription": "Valid Flits Received; Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_UPI_RxL_FLITS.SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Count Slot 2 - Other mask bits determine types of headers to coun= t.", "UMask": "0x4", @@ -10477,62 +12827,76 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_RxL_BASIC_HDR_MATCH.NCB", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_HDR_MATCH.NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_RxL_BASIC_HDR_MATCH.NCS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_HDR_MATCH.NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_RxL_BASIC_HDR_MATCH.REQ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_HDR_MATCH.REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_HDR_MATCH.RSP", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_RxL_BASIC_HDR_MATCH.SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_HDR_MATCH.SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_UPI_RxL_BASIC_HDR_MATCH.WB", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x5", "EventName": "UNC_UPI_RxL_HDR_MATCH.WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "UPI" }, { "BriefDescription": "RxQ Flit Buffer Allocations; Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x30", "EventName": "UNC_UPI_RxL_INSERTS.SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the UPI Rx Flit B= uffer. Generally, when data is transmitted across UPI, it will bypass the = RxQ and pass directly to the ring interface. If things back up getting tra= nsmitted onto the ring, however, it may need to allocate into this buffer, = thus increasing the latency. This event can be used in conjunction with th= e Flit Buffer Occupancy event in order to calculate the average flit buffer= lifetime.", "UMask": "0x1", @@ -10540,8 +12904,10 @@ }, { "BriefDescription": "RxQ Flit Buffer Allocations; Slot 1", + "Counter": "0,1,2,3", "EventCode": "0x30", "EventName": "UNC_UPI_RxL_INSERTS.SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the UPI Rx Flit B= uffer. Generally, when data is transmitted across UPI, it will bypass the = RxQ and pass directly to the ring interface. If things back up getting tra= nsmitted onto the ring, however, it may need to allocate into this buffer, = thus increasing the latency. This event can be used in conjunction with th= e Flit Buffer Occupancy event in order to calculate the average flit buffer= lifetime.", "UMask": "0x2", @@ -10549,8 +12915,10 @@ }, { "BriefDescription": "RxQ Flit Buffer Allocations; Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x30", "EventName": "UNC_UPI_RxL_INSERTS.SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the UPI Rx Flit B= uffer. Generally, when data is transmitted across UPI, it will bypass the = RxQ and pass directly to the ring interface. If things back up getting tra= nsmitted onto the ring, however, it may need to allocate into this buffer, = thus increasing the latency. This event can be used in conjunction with th= e Flit Buffer Occupancy event in order to calculate the average flit buffer= lifetime.", "UMask": "0x4", @@ -10558,8 +12926,10 @@ }, { "BriefDescription": "RxQ Occupancy - All Packets; Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of elements in the UP= I RxQ in each cycle. Generally, when data is transmitted across UPI, it wi= ll bypass the RxQ and pass directly to the ring interface. If things back = up getting transmitted onto the ring, however, it may need to allocate into= this buffer, thus increasing the latency. This event can be used in conju= nction with the Flit Buffer Not Empty event to calculate average occupancy,= or with the Flit Buffer Allocations event to track average lifetime.", "UMask": "0x1", @@ -10567,8 +12937,10 @@ }, { "BriefDescription": "RxQ Occupancy - All Packets; Slot 1", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of elements in the UP= I RxQ in each cycle. Generally, when data is transmitted across UPI, it wi= ll bypass the RxQ and pass directly to the ring interface. 
If things back = up getting transmitted onto the ring, however, it may need to allocate into= this buffer, thus increasing the latency. This event can be used in conju= nction with the Flit Buffer Not Empty event to calculate average occupancy,= or with the Flit Buffer Allocations event to track average lifetime.", "UMask": "0x2", @@ -10576,8 +12948,10 @@ }, { "BriefDescription": "RxQ Occupancy - All Packets; Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x32", "EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of elements in the UP= I RxQ in each cycle. Generally, when data is transmitted across UPI, it wi= ll bypass the RxQ and pass directly to the ring interface. If things back = up getting transmitted onto the ring, however, it may need to allocate into= this buffer, thus increasing the latency. This event can be used in conju= nction with the Flit Buffer Not Empty event to calculate average occupancy,= or with the Flit Buffer Allocations event to track average lifetime.", "UMask": "0x4", @@ -10585,118 +12959,147 @@ }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1", + "Counter": "0,1,2,3", "EventCode": "0x33", "EventName": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTL", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.DFX", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.DFX", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RETRY", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RETRY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASS", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UPI" }, { 
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CRED", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CRED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.SPARE", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.SPARE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.TXQ", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.TXQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "UPI" }, { "BriefDescription": "Cycles in which the Tx of the Intel(R) Ultra = Path Interconnect (UPI) is in L0p power mode", + "Counter": "0,1,2,3", "EventCode": "0x27", "EventName": "UNC_UPI_TxL0P_POWER_CYCLES", "PerPkg": "1", @@ -10705,30 +13108,38 @@ }, { "BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER", + "Counter": "0,1,2,3", "EventCode": "0x28", "EventName": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT", + "Counter": "0,1,2,3", "EventCode": "0x29", "EventName": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "Cycles in L0. Transmit side.", + "Counter": "0,1,2,3", "EventCode": "0x26", "EventName": "UNC_UPI_TxL0_POWER_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of UPI qfclk cycles spent in L0 power= mode in the Link Layer. L0 is the default mode which provides the highest= performance with the most power. Use edge detect to count the number of i= nstances that the link entered L0. Link power states are per link and per = direction, so for example the Tx direction could be in one state while Rx w= as in another. 
The phy layer sometimes leaves L0 for training, which will= not be captured by this event.", "Unit": "UPI" }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Non-C= oherent Bypass", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - NCB", "UMask": "0xe", @@ -10736,8 +13147,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Non-C= oherent Bypass", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - NCB", "UMask": "0x10e", @@ -10745,8 +13158,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Non-C= oherent Standard", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - NCS", "UMask": "0xf", @@ -10754,8 +13169,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Non-C= oherent Standard", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - NCS", "UMask": "0x10f", @@ -10763,8 +13180,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Reque= st", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.REQ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "REQ Message Class", "UMask": "0x8", @@ -10772,8 +13191,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Reque= st Opcode", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.REQ_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match REQ Opcodes - Specified in Umask[7:4]"= , "UMask": "0x108", @@ -10781,24 +13202,30 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Respo= nse - Conflict", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSPCNFLT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1aa", "Unit": "UPI" }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Respo= nse - Invalid", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSPI", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12a", "Unit": "UPI" }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Respo= nse - Data", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class -WB", "UMask": "0xc", @@ -10806,8 +13233,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Respo= nse - Data", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATA_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class -WB", "UMask": "0x10c", @@ -10815,8 +13244,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Respo= nse - No Data", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATA", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - RSP", "UMask": "0xa", @@ -10824,8 +13255,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Respo= nse - No Data", + "Counter": 
"0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATA_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class - RSP", "UMask": "0x10a", @@ -10833,8 +13266,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Snoop= ", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.SNP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "SNP Message Class", "UMask": "0x9", @@ -10842,8 +13277,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Snoop= Opcode", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.SNP_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match SNP Opcodes - Specified in Umask[7:4]"= , "UMask": "0x109", @@ -10851,8 +13288,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Write= back", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.WB", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class -WB", "UMask": "0xd", @@ -10860,8 +13299,10 @@ }, { "BriefDescription": "Matches on Transmit path of a UPI Port; Write= back", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.WB_OPC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Match Message Class -WB", "UMask": "0x10d", @@ -10869,6 +13310,7 @@ }, { "BriefDescription": "FLITs that bypassed the TxL Buffer", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_UPI_TxL_BYPASSED", "PerPkg": "1", @@ -10877,6 +13319,7 @@ }, { "BriefDescription": "Valid data FLITs transmitted via any slot", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.ALL_DATA", "PerPkg": "1", @@ -10886,6 +13329,7 @@ }, { "BriefDescription": "Null FLITs transmitted from any slot", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.ALL_NULL", "PerPkg": "1", @@ -10895,6 +13339,7 @@ }, { "BriefDescription": "Valid Flits Sent; Data", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.DATA", "PerPkg": "1", @@ -10904,6 +13349,7 @@ }, { "BriefDescription": "Idle FLITs transmitted", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.IDLE", "PerPkg": "1", @@ -10913,8 +13359,10 @@ }, { "BriefDescription": "Valid Flits Sent; LLCRD Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.LLCRD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Enables counting of LLCRD (with non-zero payload). This only appl= ies to slot 2 since LLCRD is only allowed in slot 2", "UMask": "0x10", @@ -10922,8 +13370,10 @@ }, { "BriefDescription": "Valid Flits Sent; LLCTRL", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.LLCTRL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Equivalent to an idle packet. Enables counting of slot 0 LLCTRL = messages.", "UMask": "0x40", @@ -10931,6 +13381,7 @@ }, { "BriefDescription": "Protocol header and credit FLITs transmitted = across any slot", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.NON_DATA", "PerPkg": "1", @@ -10940,6 +13391,7 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_UPI_TxL_FLITS.ALL_NULL", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.NULL", @@ -10949,8 +13401,10 @@ }, { "BriefDescription": "Valid Flits Sent; Protocol Header", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.PROTHDR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Enables count of protocol headers in slot 0,1,2 (depending on slo= t uMask bits)", "UMask": "0x80", @@ -10958,17 +13412,21 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_TxL_FLITS.PROTHDR", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.PROT_HDR", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "UPI" }, { "BriefDescription": "Valid Flits Sent; Slot 0", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.SLOT0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Count Slot 0 - Other mask bits determine types of headers to coun= t.", "UMask": "0x1", @@ -10976,8 +13434,10 @@ }, { "BriefDescription": "Valid Flits Sent; Slot 1", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.SLOT1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Count Slot 1 - Other mask bits determine types of headers to coun= t.", "UMask": "0x2", @@ -10985,8 +13445,10 @@ }, { "BriefDescription": "Valid Flits Sent; Slot 2", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_UPI_TxL_FLITS.SLOT2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Shows legal flit time (hides impact of L0p a= nd L0c).; Count Slot 2 - Other mask bits determine types of headers to coun= t.", "UMask": "0x4", @@ -10994,157 +13456,195 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.DATA_HDR", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.DUAL_SLOT_HDR", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.LOC", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_TxL_BASIC_HDR_MATCH.NCB", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.NCB", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_UPI_TxL_BASIC_HDR_MATCH.NCS", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.NCS", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.NON_DATA_HDR", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.REM", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_TxL_BASIC_HDR_MATCH.REQ", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.REQ", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.RSP_DATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATA", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.RSP_NODATA", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.SGL_SLOT_HDR", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_TxL_BASIC_HDR_MATCH.SNP", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.SNP", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "UPI" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_UPI_TxL_BASIC_HDR_MATCH.WB", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x4", "EventName": "UNC_UPI_TxL_HDR_MATCH.WB", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "UPI" }, { "BriefDescription": "Tx Flit Buffer Allocations", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_UPI_TxL_INSERTS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of allocations into the UPI Tx Flit B= uffer. Generally, when data is transmitted across UPI, it will bypass the = TxQ and pass directly to the link. However, the TxQ will be used with L0p = and when LLR occurs, increasing latency to transfer out to the link. This = event can be used in conjunction with the Flit Buffer Occupancy event in or= der to calculate the average flit buffer lifetime.", "Unit": "UPI" }, { "BriefDescription": "Tx Flit Buffer Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_UPI_TxL_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Accumulates the number of flits in the TxQ. = Generally, when data is transmitted across UPI, it will bypass the TxQ and= pass directly to the link. However, the TxQ will be used with L0p and whe= n LLR occurs, increasing latency to transfer out to the link. 
This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQ.", "Unit": "UPI" }, { "BriefDescription": "UNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01", + "Counter": "0,1,2,3", "EventCode": "0x45", "EventName": "UNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01", + "Experimental": "1", "PerPkg": "1", "Unit": "UPI" }, { "BriefDescription": "VNA Credits Pending Return - Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x44", "EventName": "UNC_UPI_VNA_CREDIT_RETURN_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of VNA credits in the Rx side that are waitng to be returned back across the link.", "Unit": "UPI" }, { "BriefDescription": "Clockticks in the UBOX using a dedicated 48-bit Fixed Counter", + "Counter": "FIXED", "EventCode": "0xff", "EventName": "UNC_U_CLOCKTICKS", + "Experimental": "1", "PerPkg": "1", "Unit": "UBOX" }, { "BriefDescription": "Message Received", + "Counter": "0,1", "EventCode": "0x42", "EventName": "UNC_U_EVENT_MSG.DOORBELL_RCVD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Virtual Logical Wire (legacy) message were received from Uncore.", "UMask": "0x8", @@ -11152,8 +13652,10 @@ }, { "BriefDescription": "Message Received", + "Counter": "0,1", "EventCode": "0x42", "EventName": "UNC_U_EVENT_MSG.INT_PRIO", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Virtual Logical Wire (legacy) message were received from Uncore.", "UMask": "0x10", @@ -11161,8 +13663,10 @@ }, { "BriefDescription": "Message Received; IPI", + "Counter": "0,1", "EventCode": "0x42", "EventName": "UNC_U_EVENT_MSG.IPI_RCVD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Virtual Logical Wire (legacy) message were received from Uncore.; Inter Processor Interrupts", "UMask": "0x4", @@ -11170,8 +13674,10 @@ }, { "BriefDescription": "Message Received; MSI", + "Counter": "0,1", "EventCode": "0x42", "EventName": "UNC_U_EVENT_MSG.MSI_RCVD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Virtual Logical Wire (legacy) message were received from Uncore.; Message Signaled Interrupts - interrupts sent by devices (including PCIe via IOxAPIC) (Socket Mode only)", "UMask": "0x2", @@ -11179,8 +13685,10 @@ }, { "BriefDescription": "Message Received; VLW", + "Counter": "0,1", "EventCode": "0x42", "EventName": "UNC_U_EVENT_MSG.VLW_RCVD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Virtual Logical Wire (legacy) message were received from Uncore.", "UMask": "0x1", @@ -11188,16 +13696,20 @@ }, { "BriefDescription": "IDI Lock/SplitLock Cycles", + "Counter": "0,1", "EventCode": "0x44", "EventName": "UNC_U_LOCK_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of times an IDI Lock/SplitLock sequence was started", "Unit": "UBOX" }, { "BriefDescription": "Cycles PHOLD Assert to Ack; Assert to ACK", + "Counter": "0,1", "EventCode": "0x45", "EventName": "UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "PHOLD cycles.", "UMask": "0x1", @@ -11205,38 +13717,47 @@ }, { "BriefDescription": "UNC_U_RACU_DRNG.PFTCH_BUF_EMPTY", + "Counter": "0,1", "EventCode": "0x4C", "EventName": "UNC_U_RACU_DRNG.PFTCH_BUF_EMPTY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "UBOX" }, { "BriefDescription": "UNC_U_RACU_DRNG.RDRAND", + "Counter": "0,1", "EventCode": "0x4C", "EventName": "UNC_U_RACU_DRNG.RDRAND", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "UBOX" }, { 
"BriefDescription": "UNC_U_RACU_DRNG.RDSEED", + "Counter": "0,1", "EventCode": "0x4C", "EventName": "UNC_U_RACU_DRNG.RDSEED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "UBOX" }, { "BriefDescription": "RACU Request", + "Counter": "0,1", "EventCode": "0x46", "EventName": "UNC_U_RACU_REQUESTS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number outstanding register requests within = message channel tracker", "Unit": "UBOX" }, { "BriefDescription": "UPI interconnect send bandwidth for payload. = Derived from unc_upi_txl_flits.all_data", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UPI_DATA_BANDWIDTH_TX", "PerPkg": "1", diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-io.json b/tools= /perf/pmu-events/arch/x86/skylakex/uncore-io.json index 743c91f3d2f0..bce46dd4f395 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-io.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-io.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "PCI Express bandwidth reading at IIO. Derived= from unc_iio_data_req_of_cpu.mem_read.part0", + "Counter": "0,1", "EventCode": "0x83", "EventName": "LLC_MISSES.PCIE_READ", "FCMask": "0x07", @@ -16,6 +17,7 @@ }, { "BriefDescription": "PCI Express bandwidth writing at IIO. Derived= from unc_iio_data_req_of_cpu.mem_write.part0", + "Counter": "0,1", "EventCode": "0x83", "EventName": "LLC_MISSES.PCIE_WRITE", "FCMask": "0x07", @@ -31,6 +33,7 @@ }, { "BriefDescription": "Clockticks of the IIO Traffic Controller", + "Counter": "0,1,2,3", "EventCode": "0x1", "EventName": "UNC_IIO_CLOCKTICKS", "PerPkg": "1", @@ -39,6 +42,7 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 0-3", + "Counter": "0,1,2,3", "EventCode": "0xC2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL_PARTS", "FCMask": "0x4", @@ -49,6 +53,7 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 0", + "Counter": "0,1,2,3", "EventCode": "0xC2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART0", "FCMask": "0x4", @@ -59,6 +64,7 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 1", + "Counter": "0,1,2,3", "EventCode": "0xC2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART1", "FCMask": "0x4", @@ -69,6 +75,7 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 2", + "Counter": "0,1,2,3", "EventCode": "0xC2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART2", "FCMask": "0x4", @@ -79,6 +86,7 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts of completions= with data: Part 3", + "Counter": "0,1,2,3", "EventCode": "0xC2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART3", "FCMask": "0x4", @@ -89,8 +97,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts; Port 0", + "Counter": "0,1,2,3", "EventCode": "0xC2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.PORT0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x01", @@ -99,8 +109,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts; Port 1", + "Counter": "0,1,2,3", "EventCode": "0xC2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.PORT1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x02", @@ -109,8 +121,10 @@ }, { "BriefDescription": "PCIe Completion Buffer Inserts; Port 2", + "Counter": "0,1,2,3", "EventCode": "0xC2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.PORT2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x04", @@ -119,8 +133,10 @@ }, { "BriefDescription": "PCIe 
Completion Buffer Inserts; Port 3", + "Counter": "0,1,2,3", "EventCode": "0xC2", "EventName": "UNC_IIO_COMP_BUF_INSERTS.PORT3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x08", @@ -129,6 +145,7 @@ }, { "BriefDescription": "PCIe Completion Buffer occupancy of completio= ns with data: Part 0-3", + "Counter": "2,3", "EventCode": "0xD5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS", "FCMask": "0x04", @@ -138,6 +155,7 @@ }, { "BriefDescription": "PCIe Completion Buffer occupancy of completio= ns with data: Part 0", + "Counter": "2,3", "EventCode": "0xD5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART0", "FCMask": "0x04", @@ -147,6 +165,7 @@ }, { "BriefDescription": "PCIe Completion Buffer occupancy of completio= ns with data: Part 1", + "Counter": "2,3", "EventCode": "0xD5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART1", "FCMask": "0x04", @@ -156,6 +175,7 @@ }, { "BriefDescription": "PCIe Completion Buffer occupancy of completio= ns with data: Part 2", + "Counter": "2,3", "EventCode": "0xD5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART2", "FCMask": "0x04", @@ -165,6 +185,7 @@ }, { "BriefDescription": "PCIe Completion Buffer occupancy of completio= ns with data: Part 3", + "Counter": "2,3", "EventCode": "0xD5", "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART3", "FCMask": "0x04", @@ -174,8 +195,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -185,8 +208,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -196,8 +221,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -207,8 +234,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -218,8 +247,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -229,8 +260,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -240,8 +273,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -251,8 +286,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": 
"UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -262,8 +299,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -273,8 +312,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -284,8 +325,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -295,8 +338,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's PCICFG space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -306,8 +351,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -317,8 +364,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -328,8 +377,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -339,8 +390,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -350,8 +403,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -361,8 +416,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -372,8 +429,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -383,8 +442,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -394,8 +455,10 @@ }, { 
"BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -405,8 +468,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -416,8 +481,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -427,8 +494,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's IO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -438,6 +507,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by the CPU to I= IO Part0", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0", "FCMask": "0x07", @@ -449,6 +519,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by the CPU to I= IO Part1", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1", "FCMask": "0x07", @@ -460,6 +531,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by the CPU to I= IO Part2", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2", "FCMask": "0x07", @@ -471,6 +543,7 @@ }, { "BriefDescription": "Read request for 4 bytes made by the CPU to I= IO Part3", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3", "FCMask": "0x07", @@ -482,8 +555,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's MMIO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -493,8 +568,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core reading from = Card's MMIO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -504,6 +581,7 @@ }, { "BriefDescription": "Write request of 4 bytes made to IIO Part0 by= the CPU", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0", "FCMask": "0x07", @@ -515,6 +593,7 @@ }, { "BriefDescription": "Write request of 4 bytes made to IIO Part1 by= the CPU", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1", "FCMask": "0x07", @@ -526,6 +605,7 @@ }, { "BriefDescription": "Write request of 4 bytes made to IIO Part2 by= the CPU", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2", "FCMask": "0x07", @@ -537,6 +617,7 @@ }, { "BriefDescription": "Write request of 4 bytes made to IIO Part3 by= the CPU", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3", "FCMask": "0x07", @@ -548,8 +629,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's MMIO space", + "Counter": "2,3", 
"EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -559,8 +642,10 @@ }, { "BriefDescription": "Data requested by the CPU; Core writing to Ca= rd's MMIO space", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -570,6 +655,7 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by= a different IIO unit to IIO Part0", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART0", "FCMask": "0x07", @@ -581,6 +667,7 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by= a different IIO unit to IIO Part1", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART1", "FCMask": "0x07", @@ -592,6 +679,7 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by= a different IIO unit to IIO Part2", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART2", "FCMask": "0x07", @@ -603,6 +691,7 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by= a different IIO unit to IIO Part3", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART3", "FCMask": "0x07", @@ -614,8 +703,10 @@ }, { "BriefDescription": "Data requested by the CPU; Another card (diff= erent IIO stack) reading from this card.", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -625,8 +716,10 @@ }, { "BriefDescription": "Data requested by the CPU; Another card (diff= erent IIO stack) reading from this card.", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -636,6 +729,7 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made to= IIO Part0 by a different IIO unit", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART0", "FCMask": "0x07", @@ -647,6 +741,7 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made to= IIO Part1 by a different IIO unit", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART1", "FCMask": "0x07", @@ -658,6 +753,7 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made to= IIO Part2 by a different IIO unit", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART2", "FCMask": "0x07", @@ -669,6 +765,7 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made to= IIO Part3 by a different IIO unit", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART3", "FCMask": "0x07", @@ -680,8 +777,10 @@ }, { "BriefDescription": "Data requested by the CPU; Another card (diff= erent IIO stack) writing to this card.", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -691,8 +790,10 @@ }, { "BriefDescription": "Data requested by the CPU; Another card (diff= erent IIO stack) writing to this card.", + "Counter": "2,3", "EventCode": "0xC0", "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.VTD1", + 
"Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -702,8 +803,10 @@ }, { "BriefDescription": "Data requested of the CPU; Atomic requests ta= rgeting DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -713,8 +816,10 @@ }, { "BriefDescription": "Data requested of the CPU; Atomic requests ta= rgeting DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -724,8 +829,10 @@ }, { "BriefDescription": "Data requested of the CPU; Atomic requests ta= rgeting DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -735,8 +842,10 @@ }, { "BriefDescription": "Data requested of the CPU; Atomic requests ta= rgeting DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -746,8 +855,10 @@ }, { "BriefDescription": "Data requested of the CPU; Atomic requests ta= rgeting DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -757,8 +868,10 @@ }, { "BriefDescription": "Data requested of the CPU; Atomic requests ta= rgeting DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -768,8 +881,10 @@ }, { "BriefDescription": "Data requested of the CPU; Completion of atom= ic requests targeting DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -779,8 +894,10 @@ }, { "BriefDescription": "Data requested of the CPU; Completion of atom= ic requests targeting DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -790,8 +907,10 @@ }, { "BriefDescription": "Data requested of the CPU; Completion of atom= ic requests targeting DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -801,8 +920,10 @@ }, { "BriefDescription": "Data requested of the CPU; Completion of atom= ic requests targeting DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -812,6 +933,7 @@ }, { "BriefDescription": "PCI Express bandwidth reading at IIO, part 0"= , + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0", "FCMask": "0x07", @@ -823,6 +945,7 @@ }, { "BriefDescription": "PCI Express bandwidth reading at IIO, part 1"= , + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1", "FCMask": "0x07", @@ -834,6 +957,7 @@ }, { "BriefDescription": "PCI Express bandwidth reading at IIO, part 2"= , + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2", "FCMask": 
"0x07", @@ -845,6 +969,7 @@ }, { "BriefDescription": "PCI Express bandwidth reading at IIO, part 3"= , + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3", "FCMask": "0x07", @@ -856,8 +981,10 @@ }, { "BriefDescription": "Data requested of the CPU; Card reading from = DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -867,8 +994,10 @@ }, { "BriefDescription": "Data requested of the CPU; Card reading from = DRAM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -878,6 +1007,7 @@ }, { "BriefDescription": "PCI Express bandwidth writing at IIO, part 0"= , + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0", "FCMask": "0x07", @@ -889,6 +1019,7 @@ }, { "BriefDescription": "PCI Express bandwidth writing at IIO, part 1"= , + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1", "FCMask": "0x07", @@ -900,6 +1031,7 @@ }, { "BriefDescription": "PCI Express bandwidth writing at IIO, part 2"= , + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2", "FCMask": "0x07", @@ -911,6 +1043,7 @@ }, { "BriefDescription": "PCI Express bandwidth writing at IIO, part 3"= , + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3", "FCMask": "0x07", @@ -922,8 +1055,10 @@ }, { "BriefDescription": "Data requested of the CPU; Card writing to DR= AM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -933,8 +1068,10 @@ }, { "BriefDescription": "Data requested of the CPU; Card writing to DR= AM", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -944,8 +1081,10 @@ }, { "BriefDescription": "Data requested of the CPU; Messages", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -955,8 +1094,10 @@ }, { "BriefDescription": "Data requested of the CPU; Messages", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -966,8 +1107,10 @@ }, { "BriefDescription": "Data requested of the CPU; Messages", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -977,8 +1120,10 @@ }, { "BriefDescription": "Data requested of the CPU; Messages", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -988,8 +1133,10 @@ }, { "BriefDescription": "Data requested of the CPU; Messages", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -999,8 +1146,10 @@ }, { "BriefDescription": "Data requested of the CPU; Messages", + "Counter": "0,1", "EventCode": "0x83", "EventName": 
"UNC_IIO_DATA_REQ_OF_CPU.MSG.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -1010,6 +1159,7 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by= IIO Part0 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART0", "FCMask": "0x07", @@ -1021,6 +1171,7 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by= IIO Part1 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART1", "FCMask": "0x07", @@ -1032,6 +1183,7 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by= IIO Part2 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART2", "FCMask": "0x07", @@ -1043,6 +1195,7 @@ }, { "BriefDescription": "Peer to peer read request for 4 bytes made by= IIO Part3 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART3", "FCMask": "0x07", @@ -1054,8 +1207,10 @@ }, { "BriefDescription": "Data requested of the CPU; Card reading from = another Card (same or different stack)", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -1065,8 +1220,10 @@ }, { "BriefDescription": "Data requested of the CPU; Card reading from = another Card (same or different stack)", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -1076,6 +1233,7 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made by= IIO Part0 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0", "FCMask": "0x07", @@ -1087,6 +1245,7 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made by= IIO Part0 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1", "FCMask": "0x07", @@ -1098,6 +1257,7 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made by= IIO Part0 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2", "FCMask": "0x07", @@ -1109,6 +1269,7 @@ }, { "BriefDescription": "Peer to peer write request of 4 bytes made by= IIO Part0 to an IIO target", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3", "FCMask": "0x07", @@ -1120,8 +1281,10 @@ }, { "BriefDescription": "Data requested of the CPU; Card writing to an= other Card (same or different stack)", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -1131,8 +1294,10 @@ }, { "BriefDescription": "Data requested of the CPU; Card writing to an= other Card (same or different stack)", + "Counter": "0,1", "EventCode": "0x83", "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -1142,29 +1307,37 @@ }, { "BriefDescription": "Num Link Correctable Errors", + "Counter": "0,1,2,3", "EventCode": "0xF", "EventName": "UNC_IIO_LINK_NUM_CORR_ERR", + "Experimental": "1", "PerPkg": "1", "Unit": "IIO" }, { "BriefDescription": "Num Link Retries", + "Counter": "0,1,2,3", 
"EventCode": "0xE", "EventName": "UNC_IIO_LINK_NUM_RETRIES", + "Experimental": "1", "PerPkg": "1", "Unit": "IIO" }, { "BriefDescription": "Number packets that passed the Mask/Match Fil= ter", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_IIO_MASK_MATCH", + "Experimental": "1", "PerPkg": "1", "Unit": "IIO" }, { "BriefDescription": "AND Mask/match for debug bus; Non-PCIE bus", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_IIO_MASK_MATCH_AND.BUS0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if all bits specified by mask match= ", "UMask": "0x1", @@ -1172,8 +1345,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus; Non-PCIE bus an= d PCIE bus", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_IIO_MASK_MATCH_AND.BUS0_BUS1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if all bits specified by mask match= ", "UMask": "0x8", @@ -1181,8 +1356,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus; Non-PCIE bus an= d !(PCIE bus)", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_IIO_MASK_MATCH_AND.BUS0_NOT_BUS1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if all bits specified by mask match= ", "UMask": "0x4", @@ -1190,8 +1367,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus; PCIE bus", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_IIO_MASK_MATCH_AND.BUS1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if all bits specified by mask match= ", "UMask": "0x2", @@ -1199,8 +1378,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus; !(Non-PCIE bus)= and PCIE bus", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_IIO_MASK_MATCH_AND.NOT_BUS0_BUS1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if all bits specified by mask match= ", "UMask": "0x10", @@ -1208,8 +1389,10 @@ }, { "BriefDescription": "AND Mask/match for debug bus", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_IIO_MASK_MATCH_AND.NOT_BUS0_NOT_BUS1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if all bits specified by mask match= ", "UMask": "0x20", @@ -1217,8 +1400,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus; Non-PCIE bus", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_IIO_MASK_MATCH_OR.BUS0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if any bits specified by mask match= ", "UMask": "0x1", @@ -1226,8 +1411,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus; Non-PCIE bus and= PCIE bus", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_IIO_MASK_MATCH_OR.BUS0_BUS1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if any bits specified by mask match= ", "UMask": "0x8", @@ -1235,8 +1422,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus; Non-PCIE bus and= !(PCIE bus)", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_IIO_MASK_MATCH_OR.BUS0_NOT_BUS1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if any bits specified by mask match= ", "UMask": "0x4", @@ -1244,8 +1433,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus; PCIE bus", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_IIO_MASK_MATCH_OR.BUS1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if any bits specified by mask match= ", "UMask": "0x2", @@ -1253,8 +1444,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus; 
!(Non-PCIE bus) = and PCIE bus", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_IIO_MASK_MATCH_OR.NOT_BUS0_BUS1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if any bits specified by mask match= ", "UMask": "0x10", @@ -1262,8 +1455,10 @@ }, { "BriefDescription": "OR Mask/match for debug bus; !(Non-PCIE bus) = and !(PCIE bus)", + "Counter": "0,1,2,3", "EventCode": "0x3", "EventName": "UNC_IIO_MASK_MATCH_OR.NOT_BUS0_NOT_BUS1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Asserted if any bits specified by mask match= ", "UMask": "0x20", @@ -1271,15 +1466,19 @@ }, { "BriefDescription": "Counting disabled", + "Counter": "0,1,2,3", "EventName": "UNC_IIO_NOTHING", + "Experimental": "1", "PerPkg": "1", "Unit": "IIO" }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -1288,9 +1487,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -1299,9 +1500,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART2", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -1310,9 +1513,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART3", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -1321,9 +1526,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.VTD0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -1332,9 +1539,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.VTD1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -1343,9 +1552,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMICCMP.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -1354,9 +1565,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMICCMP.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -1365,9 +1578,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART2", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMICCMP.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -1376,9 +1591,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART3", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMICCMP.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -1387,6 +1604,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART0", @@ -1398,6 +1616,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART1", @@ -1409,6 +1628,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART2", @@ -1420,6 +1640,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART3", @@ -1431,9 +1652,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.VTD0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -1442,9 +1665,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.VTD1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -1453,6 +1678,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART0", @@ -1464,6 +1690,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART1", @@ -1475,6 +1702,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART2", @@ -1486,6 +1714,7 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART3", @@ -1497,9 +1726,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.VTD0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -1508,9 +1739,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.VTD1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -1519,9 +1752,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MSG.PART0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -1530,9 +1765,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MSG.PART1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -1541,9 +1778,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MSG.PART2", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -1552,9 +1791,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MSG.PART3", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -1563,9 +1804,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MSG.VTD0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -1574,9 +1817,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.MSG.VTD1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -1585,9 +1830,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -1596,9 +1843,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -1607,9 +1856,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART2", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -1618,9 +1869,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART3", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -1629,9 +1882,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.VTD0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -1640,9 +1895,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.VTD1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -1651,9 +1908,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -1662,9 +1921,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -1673,9 +1934,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -1684,9 +1947,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -1695,9 +1960,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.VTD0", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -1706,9 +1973,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.VTD1", + "Counter": "0,1", "Deprecated": "1", "EventCode": "0x83", "EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -1717,9 +1986,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -1728,9 +1999,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -1739,9 +2012,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART2", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -1750,9 +2025,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART3", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -1761,9 +2038,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.VTD0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -1772,9 +2051,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.VTD1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -1783,9 +2064,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -1794,9 +2077,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -1805,9 +2090,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART2", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -1816,9 +2103,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART3", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -1827,9 +2116,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.VTD0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -1838,9 +2129,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.VTD1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -1849,9 +2142,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -1860,9 +2155,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -1871,9 +2168,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART2", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -1882,9 +2181,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART3", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -1893,9 +2194,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_READ.VTD0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -1904,9 +2207,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_READ.VTD1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -1915,9 +2220,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -1926,9 +2233,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -1937,9 +2246,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART2", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -1948,9 +2259,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART3", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -1959,9 +2272,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.VTD0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -1970,9 +2285,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.VTD1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -1981,9 +2298,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -1992,9 +2311,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2003,9 +2324,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2014,9 +2337,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2025,9 +2350,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.VTD0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2036,9 +2363,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.VTD1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2047,9 +2376,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2058,9 +2389,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2069,9 +2402,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2080,9 +2415,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2091,9 +2428,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.VTD0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2102,9 +2441,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.VTD1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2113,9 +2454,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2124,9 +2467,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2135,9 +2480,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART2", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2146,9 +2493,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART3", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2157,9 +2506,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.VTD0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2168,9 +2519,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.VTD1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2179,9 +2532,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2190,9 +2545,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2201,9 +2558,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART2", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2212,9 +2571,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART3", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2223,9 +2584,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.VTD0", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2234,9 +2597,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.VTD1", + "Counter": "2,3", "Deprecated": "1", "EventCode": "0xC0", "EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2245,17 +2610,21 @@ }, { "BriefDescription": "Symbol Times on Link", + "Counter": "0,1,2,3", "EventCode": "0x82", "EventName": "UNC_IIO_SYMBOL_TIMES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Gen1 - increment once every 4nS, Gen2 - incr= ement once every 2nS, Gen3 - increment once every 1nS", "Unit": "IIO" }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.ATOMIC.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2264,9 +2633,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.ATOMIC.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2275,9 +2646,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.ATOMIC.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2286,9 +2659,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.ATOMIC.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2297,9 +2672,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.ATOMIC.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2308,9 +2685,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.ATOMIC.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2319,9 +2698,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.ATOMICCMP.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2330,9 +2711,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.ATOMICCMP.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2341,9 +2724,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.ATOMICCMP.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2352,9 +2737,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.ATOMICCMP.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2363,9 +2750,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2374,9 +2763,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", 
"EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2385,9 +2776,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2396,9 +2789,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2407,9 +2802,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2418,9 +2815,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2429,9 +2828,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_WRITE.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2440,9 +2841,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2451,9 +2854,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2462,9 +2867,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2473,9 +2880,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.VTD0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2484,9 +2893,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.VTD1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MEM_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2495,9 +2906,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MSG.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2506,9 +2919,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MSG.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2517,9 +2932,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MSG.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2528,9 +2945,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MSG.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2539,9 +2958,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MSG.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2550,9 +2971,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.MSG.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2561,9 +2984,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2572,9 +2997,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2583,9 +3010,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2594,9 +3023,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2605,9 +3036,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2616,9 +3049,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2627,9 +3062,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_WRITE.PART0", + 
"Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2638,9 +3075,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2649,9 +3088,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2660,9 +3101,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2671,9 +3114,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2682,9 +3127,11 @@ }, { "BriefDescription": "This event is deprecated.", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_IN.PEER_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2693,9 +3140,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2704,9 +3153,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2715,9 +3166,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2726,9 +3179,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2737,9 +3192,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.VTD0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2748,9 +3205,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.VTD1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2759,9 +3218,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_WRITE.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2770,9 +3231,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2781,9 +3244,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2792,9 +3257,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2803,9 +3270,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.VTD0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.CFG_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2814,9 +3283,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2825,9 +3296,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2836,9 +3309,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2847,9 +3322,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2858,9 +3335,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_READ.VTD0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2869,9 +3348,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_READ.VTD1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2880,9 +3361,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_WRITE.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2891,9 +3374,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2902,9 +3387,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2913,9 +3400,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2924,9 +3413,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.VTD0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -2935,9 +3426,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.VTD1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.IO_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -2946,9 +3439,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -2957,9 +3452,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -2968,9 +3465,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -2979,9 +3478,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -2990,9 +3491,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.VTD0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -3001,9 +3504,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.VTD1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -3012,9 +3517,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -3023,9 +3530,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -3034,9 +3543,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -3045,9 +3556,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -3056,9 +3569,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.VTD0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -3067,9 +3582,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.VTD1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -3078,9 +3595,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_READ.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -3089,9 +3608,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_READ.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -3100,9 +3621,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_READ.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -3111,9 +3634,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_READ.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -3122,9 +3647,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.VTD0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_READ.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -3133,9 +3660,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.VTD1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_READ.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -3144,9 +3673,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.PART0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x1", @@ -3155,9 +3686,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.PART1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x2", @@ -3166,9 +3699,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART2", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.PART2", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x4", @@ -3177,9 +3712,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART3", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.PART3", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x8", @@ -3188,9 +3725,11 @@ }, { "BriefDescription": "This event is deprecated. Refer to new event = UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.VTD0", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x10", @@ -3199,9 +3738,11 @@ }, { "BriefDescription": "This event is deprecated. 
Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.VTD1", + "Counter": "0,1,2,3", "Deprecated": "1", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x7", "PerPkg": "1", "PortMask": "0x20", @@ -3210,8 +3751,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -3221,8 +3764,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -3232,8 +3777,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -3243,8 +3790,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -3254,8 +3803,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3265,8 +3816,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3276,8 +3829,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -3287,8 +3842,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -3298,8 +3855,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -3309,8 +3868,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -3320,8 +3881,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.VTD0", +
"Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3331,8 +3894,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e writing to Card's PCICFG space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3342,8 +3907,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e reading from Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -3353,8 +3920,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e reading from Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -3364,8 +3933,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e reading from Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -3375,8 +3946,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e reading from Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -3386,8 +3959,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e reading from Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3397,8 +3972,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e reading from Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3408,8 +3985,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e writing to Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -3419,8 +3998,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e writing to Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -3430,8 +4011,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e writing to Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -3441,8 +4024,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e writing to Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -3452,8 +4037,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e writing to Card's IO space", + "Counter": "0,1,2,3", 
"EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3463,8 +4050,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e writing to Card's IO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3474,6 +4063,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by the CPU to IIO Part0", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0", "FCMask": "0x07", @@ -3485,6 +4075,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by the CPU to IIO Part1", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1", "FCMask": "0x07", @@ -3496,6 +4087,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by the CPU to IIO Part2", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2", "FCMask": "0x07", @@ -3507,6 +4099,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by the CPU to IIO Part3", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3", "FCMask": "0x07", @@ -3518,8 +4111,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e reading from Card's MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3529,8 +4124,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e reading from Card's MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3540,6 +4137,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made to IIO Part0 by the CPU", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0", "FCMask": "0x07", @@ -3551,6 +4149,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made to IIO Part1 by the CPU", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1", "FCMask": "0x07", @@ -3562,6 +4161,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made to IIO Part2 by the CPU", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2", "FCMask": "0x07", @@ -3573,6 +4173,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made to IIO Part3 by the CPU", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3", "FCMask": "0x07", @@ -3584,8 +4185,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e writing to Card's MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3595,8 +4198,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Cor= e writing to Card's MMIO space", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": 
"UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3606,6 +4211,7 @@ }, { "BriefDescription": "Peer to peer read request for up to a 64 byte= transaction is made by a different IIO unit to IIO Part0", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART0", "FCMask": "0x07", @@ -3617,6 +4223,7 @@ }, { "BriefDescription": "Peer to peer read request for up to a 64 byte= transaction is made by a different IIO unit to IIO Part1", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART1", "FCMask": "0x07", @@ -3628,6 +4235,7 @@ }, { "BriefDescription": "Peer to peer read request for up to a 64 byte= transaction is made by a different IIO unit to IIO Part2", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART2", "FCMask": "0x07", @@ -3639,6 +4247,7 @@ }, { "BriefDescription": "Peer to peer read request for up to a 64 byte= transaction is made by a different IIO unit to IIO Part3", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART3", "FCMask": "0x07", @@ -3650,8 +4259,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Ano= ther card (different IIO stack) reading from this card.", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3661,8 +4272,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Ano= ther card (different IIO stack) reading from this card.", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3672,6 +4285,7 @@ }, { "BriefDescription": "Peer to peer write request of up to a 64 byte= transaction is made to IIO Part0 by a different IIO unit", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART0", "FCMask": "0x07", @@ -3683,6 +4297,7 @@ }, { "BriefDescription": "Peer to peer write request of up to a 64 byte= transaction is made to IIO Part1 by a different IIO unit", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART1", "FCMask": "0x07", @@ -3694,6 +4309,7 @@ }, { "BriefDescription": "Peer to peer write request of up to a 64 byte= transaction is made to IIO Part2 by a different IIO unit", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART2", "FCMask": "0x07", @@ -3705,6 +4321,7 @@ }, { "BriefDescription": "Peer to peer write request of up to a 64 byte= transaction is made to IIO Part3 by a different IIO unit", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART3", "FCMask": "0x07", @@ -3716,8 +4333,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Ano= ther card (different IIO stack) writing to this card.", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3727,8 +4346,10 @@ }, { "BriefDescription": "Number Transactions requested by the CPU; Ano= ther card (different IIO stack) writing to this card.", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.VTD1", + "Experimental": 
"1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3738,8 +4359,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Ato= mic requests targeting DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -3749,8 +4372,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Ato= mic requests targeting DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -3760,8 +4385,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Ato= mic requests targeting DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -3771,8 +4398,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Ato= mic requests targeting DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -3782,8 +4411,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Ato= mic requests targeting DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3793,8 +4424,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Ato= mic requests targeting DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3804,8 +4437,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Com= pletion of atomic requests targeting DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMICCMP.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -3815,8 +4450,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Com= pletion of atomic requests targeting DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMICCMP.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -3826,8 +4463,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Com= pletion of atomic requests targeting DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMICCMP.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -3837,8 +4476,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Com= pletion of atomic requests targeting DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMICCMP.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -3848,6 +4489,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by IIO Part0 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART0", "FCMask": "0x07", @@ -3859,6 +4501,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by IIO Part1 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": 
"UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART1", "FCMask": "0x07", @@ -3870,6 +4513,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by IIO Part2 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART2", "FCMask": "0x07", @@ -3881,6 +4525,7 @@ }, { "BriefDescription": "Read request for up to a 64 byte transaction = is made by IIO Part3 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART3", "FCMask": "0x07", @@ -3892,8 +4537,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Car= d reading from DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3903,8 +4550,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Car= d reading from DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3914,6 +4563,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made by IIO Part0 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0", "FCMask": "0x07", @@ -3925,6 +4575,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made by IIO Part1 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1", "FCMask": "0x07", @@ -3936,6 +4587,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made by IIO Part2 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2", "FCMask": "0x07", @@ -3947,6 +4599,7 @@ }, { "BriefDescription": "Write request of up to a 64 byte transaction = is made by IIO Part3 to Memory", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3", "FCMask": "0x07", @@ -3958,8 +4611,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Car= d writing to DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -3969,8 +4624,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Car= d writing to DRAM", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -3980,8 +4637,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Mes= sages", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", @@ -3991,8 +4650,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Mes= sages", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", @@ -4002,8 +4663,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Mes= sages", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART2", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", @@ -4013,8 +4676,10 @@ }, { 
"BriefDescription": "Number Transactions requested of the CPU; Mes= sages", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART3", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", @@ -4024,8 +4689,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Mes= sages", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -4035,8 +4702,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Mes= sages", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -4046,6 +4715,7 @@ }, { "BriefDescription": "Peer to peer read request of up to a 64 byte = transaction is made by IIO Part0 to an IIO target", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART0", "FCMask": "0x07", @@ -4057,6 +4727,7 @@ }, { "BriefDescription": "Peer to peer read request of up to a 64 byte = transaction is made by IIO Part1 to an IIO target", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART1", "FCMask": "0x07", @@ -4068,6 +4739,7 @@ }, { "BriefDescription": "Peer to peer read request of up to a 64 byte = transaction is made by IIO Part2 to an IIO target", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART2", "FCMask": "0x07", @@ -4079,6 +4751,7 @@ }, { "BriefDescription": "Peer to peer read request of up to a 64 byte = transaction is made by IIO Part3 to an IIO target", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART3", "FCMask": "0x07", @@ -4090,8 +4763,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Car= d reading from another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -4101,8 +4776,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Car= d reading from another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -4112,6 +4789,7 @@ }, { "BriefDescription": "Peer to peer write request of up to a 64 byte= transaction is made by IIO Part0 to an IIO target", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART0", "FCMask": "0x07", @@ -4123,6 +4801,7 @@ }, { "BriefDescription": "Peer to peer write request of up to a 64 byte= transaction is made by IIO Part1 to an IIO target", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART1", "FCMask": "0x07", @@ -4134,6 +4813,7 @@ }, { "BriefDescription": "Peer to peer write request of up to a 64 byte= transaction is made by IIO Part2 to an IIO target", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART2", "FCMask": "0x07", @@ -4145,6 +4825,7 @@ }, { "BriefDescription": "Peer to peer write request of up to a 64 byte= transaction is made by IIO Part3 to an IIO target", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART3", 
"FCMask": "0x07", @@ -4156,8 +4837,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Car= d writing to another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.VTD0", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x10", @@ -4167,8 +4850,10 @@ }, { "BriefDescription": "Number Transactions requested of the CPU; Car= d writing to another Card (same or different stack)", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.VTD1", + "Experimental": "1", "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x20", @@ -4178,72 +4863,90 @@ }, { "BriefDescription": "VTd Access; context cache miss", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_VTD_ACCESS.CTXT_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "IIO" }, { "BriefDescription": "VTd Access; L1 miss", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_VTD_ACCESS.L1_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "IIO" }, { "BriefDescription": "VTd Access; L2 miss", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_VTD_ACCESS.L2_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "IIO" }, { "BriefDescription": "VTd Access; L3 miss", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_VTD_ACCESS.L3_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "IIO" }, { "BriefDescription": "VTd Access; Vtd hit", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_VTD_ACCESS.L4_PAGE_HIT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "IIO" }, { "BriefDescription": "VTd Access; TLB miss", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_VTD_ACCESS.TLB1_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "IIO" }, { "BriefDescription": "VTd Access; TLB is full", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_VTD_ACCESS.TLB_FULL", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "IIO" }, { "BriefDescription": "VTd Access; TLB miss", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_IIO_VTD_ACCESS.TLB_MISS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "IIO" }, { "BriefDescription": "VTd Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x40", "EventName": "UNC_IIO_VTD_OCCUPANCY", + "Experimental": "1", "PerPkg": "1", "Unit": "IIO" } diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-memory.json b/t= ools/perf/pmu-events/arch/x86/skylakex/uncore-memory.json index 7a40aa0f1018..96cdb52f2778 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-memory.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-memory.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "read requests to memory controller. Derived f= rom unc_m_cas_count.rd", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "LLC_MISSES.MEM_READ", "PerPkg": "1", @@ -11,6 +12,7 @@ }, { "BriefDescription": "write requests to memory controller. 
Derived = from unc_m_cas_count.wr", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "LLC_MISSES.MEM_WRITE", "PerPkg": "1", @@ -21,8 +23,10 @@ }, { "BriefDescription": "DRAM Activate Count; Activate due to Bypass", + "Counter": "0,1,2,3", "EventCode": "0x1", "EventName": "UNC_M_ACT_COUNT.BYP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of DRAM Activate commands = sent on this channel. Activate commands are issued to open up a page on th= e DRAM devices so that it can be read or written to with a CAS. One can ca= lculate the number of Page Misses by subtracting the number of Page Miss pr= echarges from the number of Activates.", "UMask": "0x8", @@ -30,8 +34,10 @@ }, { "BriefDescription": "DRAM Activate Count; Activate due to Read", + "Counter": "0,1,2,3", "EventCode": "0x1", "EventName": "UNC_M_ACT_COUNT.RD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of DRAM Activate commands = sent on this channel. Activate commands are issued to open up a page on th= e DRAM devices so that it can be read or written to with a CAS. One can ca= lculate the number of Page Misses by subtracting the number of Page Miss pr= echarges from the number of Activates.", "UMask": "0x1", @@ -39,6 +45,7 @@ }, { "BriefDescription": "DRAM Page Activate commands sent due to a wri= te request", + "Counter": "0,1,2,3", "EventCode": "0x1", "EventName": "UNC_M_ACT_COUNT.WR", "PerPkg": "1", @@ -48,30 +55,37 @@ }, { "BriefDescription": "ACT command issued by 2 cycle bypass", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_M_BYP_CMDS.ACT", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "CAS command issued by 2 cycle bypass", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_M_BYP_CMDS.CAS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "PRE command issued by 2 cycle bypass", + "Counter": "0,1,2,3", "EventCode": "0xA1", "EventName": "UNC_M_BYP_CMDS.PRE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "All DRAM CAS Commands issued", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M_CAS_COUNT.ALL", "PerPkg": "1", @@ -81,6 +95,7 @@ }, { "BriefDescription": "All DRAM Read CAS Commands issued (including = underfills)", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M_CAS_COUNT.RD", "PerPkg": "1", @@ -90,14 +105,17 @@ }, { "BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; R= ead CAS issued in Read ISOCH Mode", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M_CAS_COUNT.RD_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x40", "Unit": "iMC" }, { "BriefDescription": "All DRAM Read CAS Commands issued (does not i= nclude underfills)", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M_CAS_COUNT.RD_REG", "PerPkg": "1", @@ -107,14 +125,17 @@ }, { "BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; R= ead CAS issued in RMM", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M_CAS_COUNT.RD_RMM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x20", "Unit": "iMC" }, { "BriefDescription": "DRAM Underfill Read CAS Commands issued", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M_CAS_COUNT.RD_UNDERFILL", "PerPkg": "1", @@ -124,14 +145,17 @@ }, { "BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; R= ead CAS issued in WMM", + "Counter": "0,1,2,3", "EventCode": "0x4", 
"EventName": "UNC_M_CAS_COUNT.RD_WMM", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "All DRAM Write CAS commands issued", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M_CAS_COUNT.WR", "PerPkg": "1", @@ -141,16 +165,20 @@ }, { "BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; R= ead CAS issued in Write ISOCH Mode", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M_CAS_COUNT.WR_ISOCH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x80", "Unit": "iMC" }, { "BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; D= RAM WR_CAS (w/ and w/out auto-pre) in Read Major Mode", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M_CAS_COUNT.WR_RMM", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of Opportunistic DRA= M Write CAS commands issued on this channel while in Read-Major-Mode.", "UMask": "0x8", @@ -158,6 +186,7 @@ }, { "BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; D= RAM WR_CAS (w/ and w/out auto-pre) in Write Major Mode", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_M_CAS_COUNT.WR_WMM", "PerPkg": "1", @@ -167,6 +196,7 @@ }, { "BriefDescription": "Memory controller clock ticks", + "Counter": "0,1,2,3", "EventName": "UNC_M_CLOCKTICKS", "PerPkg": "1", "PublicDescription": "Counts clockticks of the fixed frequency clo= ck of the memory controller using one of the programmable counters.", @@ -174,23 +204,29 @@ }, { "BriefDescription": "Clockticks in the Memory Controller using a d= edicated 48-bit Fixed Counter", + "Counter": "FIXED", "EventCode": "0xff", "EventName": "UNC_M_CLOCKTICKS_F", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "DRAM Precharge All Commands", + "Counter": "0,1,2,3", "EventCode": "0x6", "EventName": "UNC_M_DRAM_PRE_ALL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times that the precharg= e all command was sent.", "Unit": "iMC" }, { "BriefDescription": "Number of DRAM Refreshes Issued", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_M_DRAM_REFRESH.HIGH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of refreshes issued.", "UMask": "0x4", @@ -198,8 +234,10 @@ }, { "BriefDescription": "Number of DRAM Refreshes Issued", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_M_DRAM_REFRESH.PANIC", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of refreshes issued.", "UMask": "0x2", @@ -207,16 +245,20 @@ }, { "BriefDescription": "ECC Correctable Errors", + "Counter": "0,1,2,3", "EventCode": "0x9", "EventName": "UNC_M_ECC_CORRECTABLE_ERRORS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of ECC errors detected and= corrected by the iMC on this channel. This counter is only useful with EC= C DRAM devices. This count will increment one time for each correction reg= ardless of the number of bits corrected. The iMC can correct up to 4 bit e= rrors in independent channel mode and 8 bit errors in lockstep mode.", "Unit": "iMC" }, { "BriefDescription": "Cycles in a Major Mode; Isoch Major Mode", + "Counter": "0,1,2,3", "EventCode": "0x7", "EventName": "UNC_M_MAJOR_MODES.ISOCH", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of cycles spent in a= major mode (selected by a filter) on the given channel. 
Major modea are = channel-wide, and not a per-rank (or dimm or bank) mode.; We group these tw= o modes together so that we can use four counters to track each of the majo= r modes at one time. These major modes are used whenever there is an ISOCH= txn in the memory controller. In these mode, only ISOCH transactions are = processed.", "UMask": "0x8", @@ -224,8 +266,10 @@ }, { "BriefDescription": "Cycles in a Major Mode; Partial Major Mode", + "Counter": "0,1,2,3", "EventCode": "0x7", "EventName": "UNC_M_MAJOR_MODES.PARTIAL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of cycles spent in a= major mode (selected by a filter) on the given channel. Major modea are = channel-wide, and not a per-rank (or dimm or bank) mode.; This major mode i= s used to drain starved underfill reads. Regular reads and writes are bloc= ked and only underfill reads will be processed.", "UMask": "0x4", @@ -233,8 +277,10 @@ }, { "BriefDescription": "Cycles in a Major Mode; Read Major Mode", + "Counter": "0,1,2,3", "EventCode": "0x7", "EventName": "UNC_M_MAJOR_MODES.READ", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of cycles spent in a= major mode (selected by a filter) on the given channel. Major modea are = channel-wide, and not a per-rank (or dimm or bank) mode.; Read Major Mode i= s the default mode for the iMC, as reads are generally more critical to for= ward progress than writes.", "UMask": "0x1", @@ -242,8 +288,10 @@ }, { "BriefDescription": "Cycles in a Major Mode; Write Major Mode", + "Counter": "0,1,2,3", "EventCode": "0x7", "EventName": "UNC_M_MAJOR_MODES.WRITE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the total number of cycles spent in a= major mode (selected by a filter) on the given channel. Major modea are = channel-wide, and not a per-rank (or dimm or bank) mode.; This mode is trig= gered when the WPQ hits high occupancy and causes writes to be higher prior= ity than reads. This can cause blips in the available read bandwidth in th= e system and temporarily increase read latencies in order to achieve better= bus utilizations and higher bandwidth.", "UMask": "0x2", @@ -251,14 +299,17 @@ }, { "BriefDescription": "Channel DLLOFF Cycles", + "Counter": "0,1,2,3", "EventCode": "0x84", "EventName": "UNC_M_POWER_CHANNEL_DLLOFF", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles when all the ranks in the c= hannel are in CKE Slow (DLLOFF) mode.", "Unit": "iMC" }, { "BriefDescription": "Cycles where DRAM ranks are in power down (CK= E) mode", + "Counter": "0,1,2,3", "EventCode": "0x85", "EventName": "UNC_M_POWER_CHANNEL_PPD", "MetricExpr": "(UNC_M_POWER_CHANNEL_PPD / UNC_M_CLOCKTICKS) * 100"= , @@ -269,8 +320,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "UNC_M_POWER_CKE_CYCLES.RANK0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles spent in CKE ON mode. The = filter allows you to select a rank to monitor. If multiple ranks are in CK= E ON mode at one time, the counter will ONLY increment by one rather than d= oing accumulation. Multiple counters will need to be used to track multipl= e ranks simultaneously. There is no distinction between the different CKE = modes (APD, PPDS, PPDF). This can be determined based on the system progra= mming. These events should commonly be used with Invert to get the number = of cycles in power saving mode. Edge Detect is also useful here. 
Make sur= e that you do NOT use Invert with Edge Detect (this just confuses the syste= m and is not necessary).", "UMask": "0x1", @@ -278,8 +331,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "UNC_M_POWER_CKE_CYCLES.RANK1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles spent in CKE ON mode. The = filter allows you to select a rank to monitor. If multiple ranks are in CK= E ON mode at one time, the counter will ONLY increment by one rather than d= oing accumulation. Multiple counters will need to be used to track multipl= e ranks simultaneously. There is no distinction between the different CKE = modes (APD, PPDS, PPDF). This can be determined based on the system progra= mming. These events should commonly be used with Invert to get the number = of cycles in power saving mode. Edge Detect is also useful here. Make sur= e that you do NOT use Invert with Edge Detect (this just confuses the syste= m and is not necessary).", "UMask": "0x2", @@ -287,8 +342,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "UNC_M_POWER_CKE_CYCLES.RANK2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles spent in CKE ON mode. The = filter allows you to select a rank to monitor. If multiple ranks are in CK= E ON mode at one time, the counter will ONLY increment by one rather than d= oing accumulation. Multiple counters will need to be used to track multipl= e ranks simultaneously. There is no distinction between the different CKE = modes (APD, PPDS, PPDF). This can be determined based on the system progra= mming. These events should commonly be used with Invert to get the number = of cycles in power saving mode. Edge Detect is also useful here. Make sur= e that you do NOT use Invert with Edge Detect (this just confuses the syste= m and is not necessary).", "UMask": "0x4", @@ -296,8 +353,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "UNC_M_POWER_CKE_CYCLES.RANK3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles spent in CKE ON mode. The = filter allows you to select a rank to monitor. If multiple ranks are in CK= E ON mode at one time, the counter will ONLY increment by one rather than d= oing accumulation. Multiple counters will need to be used to track multipl= e ranks simultaneously. There is no distinction between the different CKE = modes (APD, PPDS, PPDF). This can be determined based on the system progra= mming. These events should commonly be used with Invert to get the number = of cycles in power saving mode. Edge Detect is also useful here. Make sur= e that you do NOT use Invert with Edge Detect (this just confuses the syste= m and is not necessary).", "UMask": "0x8", @@ -305,8 +364,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "UNC_M_POWER_CKE_CYCLES.RANK4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles spent in CKE ON mode. The = filter allows you to select a rank to monitor. If multiple ranks are in CK= E ON mode at one time, the counter will ONLY increment by one rather than d= oing accumulation. Multiple counters will need to be used to track multipl= e ranks simultaneously. There is no distinction between the different CKE = modes (APD, PPDS, PPDF). 
This can be determined based on the system progra= mming. These events should commonly be used with Invert to get the number = of cycles in power saving mode. Edge Detect is also useful here. Make sur= e that you do NOT use Invert with Edge Detect (this just confuses the syste= m and is not necessary).", "UMask": "0x10", @@ -314,8 +375,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "UNC_M_POWER_CKE_CYCLES.RANK5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles spent in CKE ON mode. The = filter allows you to select a rank to monitor. If multiple ranks are in CK= E ON mode at one time, the counter will ONLY increment by one rather than d= oing accumulation. Multiple counters will need to be used to track multipl= e ranks simultaneously. There is no distinction between the different CKE = modes (APD, PPDS, PPDF). This can be determined based on the system progra= mming. These events should commonly be used with Invert to get the number = of cycles in power saving mode. Edge Detect is also useful here. Make sur= e that you do NOT use Invert with Edge Detect (this just confuses the syste= m and is not necessary).", "UMask": "0x20", @@ -323,8 +386,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "UNC_M_POWER_CKE_CYCLES.RANK6", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles spent in CKE ON mode. The = filter allows you to select a rank to monitor. If multiple ranks are in CK= E ON mode at one time, the counter will ONLY increment by one rather than d= oing accumulation. Multiple counters will need to be used to track multipl= e ranks simultaneously. There is no distinction between the different CKE = modes (APD, PPDS, PPDF). This can be determined based on the system progra= mming. These events should commonly be used with Invert to get the number = of cycles in power saving mode. Edge Detect is also useful here. Make sur= e that you do NOT use Invert with Edge Detect (this just confuses the syste= m and is not necessary).", "UMask": "0x40", @@ -332,8 +397,10 @@ }, { "BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x83", "EventName": "UNC_M_POWER_CKE_CYCLES.RANK7", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles spent in CKE ON mode. The = filter allows you to select a rank to monitor. If multiple ranks are in CK= E ON mode at one time, the counter will ONLY increment by one rather than d= oing accumulation. Multiple counters will need to be used to track multipl= e ranks simultaneously. There is no distinction between the different CKE = modes (APD, PPDS, PPDF). This can be determined based on the system progra= mming. These events should commonly be used with Invert to get the number = of cycles in power saving mode. Edge Detect is also useful here. Make sur= e that you do NOT use Invert with Edge Detect (this just confuses the syste= m and is not necessary).", "UMask": "0x80", @@ -341,21 +408,26 @@ }, { "BriefDescription": "Critical Throttle Cycles", + "Counter": "0,1,2,3", "EventCode": "0x86", "EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the iMC is = in critical thermal throttling. When this happens, all traffic is blocked.= This should be rare unless something bad is going on in the platform. 
Th= ere is no filtering by rank for this event.", "Unit": "iMC" }, { "BriefDescription": "UNC_M_POWER_PCU_THROTTLING", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_M_POWER_PCU_THROTTLING", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "Cycles Memory is in self refresh power mode", + "Counter": "0,1,2,3", "EventCode": "0x43", "EventName": "UNC_M_POWER_SELF_REFRESH", "MetricExpr": "(UNC_M_POWER_SELF_REFRESH / UNC_M_CLOCKTICKS) * 100= ", @@ -366,8 +438,10 @@ }, { "BriefDescription": "Throttle Cycles for Rank 0; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles while the iMC is= being throttled by either thermal constraints or by the PCU throttling. I= t is not possible to distinguish between the two. This can be filtered by = rank. If multiple ranks are selected and are being throttled at the same t= ime, the counter will only increment by 1.; Thermal throttling is performed= per DIMM. We support 3 DIMMs per channel. This ID allows us to filter by= ID.", "UMask": "0x1", @@ -375,8 +449,10 @@ }, { "BriefDescription": "Throttle Cycles for Rank 0; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK1", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles while the iMC is= being throttled by either thermal constraints or by the PCU throttling. I= t is not possible to distinguish between the two. This can be filtered by = rank. If multiple ranks are selected and are being throttled at the same t= ime, the counter will only increment by 1.", "UMask": "0x2", @@ -384,8 +460,10 @@ }, { "BriefDescription": "Throttle Cycles for Rank 0; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK2", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles while the iMC is= being throttled by either thermal constraints or by the PCU throttling. I= t is not possible to distinguish between the two. This can be filtered by = rank. If multiple ranks are selected and are being throttled at the same t= ime, the counter will only increment by 1.", "UMask": "0x4", @@ -393,8 +471,10 @@ }, { "BriefDescription": "Throttle Cycles for Rank 0; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles while the iMC is= being throttled by either thermal constraints or by the PCU throttling. I= t is not possible to distinguish between the two. This can be filtered by = rank. If multiple ranks are selected and are being throttled at the same t= ime, the counter will only increment by 1.", "UMask": "0x8", @@ -402,8 +482,10 @@ }, { "BriefDescription": "Throttle Cycles for Rank 0; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK4", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles while the iMC is= being throttled by either thermal constraints or by the PCU throttling. I= t is not possible to distinguish between the two. This can be filtered by = rank. 
If multiple ranks are selected and are being throttled at the same t= ime, the counter will only increment by 1.", "UMask": "0x10", @@ -411,8 +493,10 @@ }, { "BriefDescription": "Throttle Cycles for Rank 0; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK5", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles while the iMC is= being throttled by either thermal constraints or by the PCU throttling. I= t is not possible to distinguish between the two. This can be filtered by = rank. If multiple ranks are selected and are being throttled at the same t= ime, the counter will only increment by 1.", "UMask": "0x20", @@ -420,8 +504,10 @@ }, { "BriefDescription": "Throttle Cycles for Rank 0; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK6", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles while the iMC is= being throttled by either thermal constraints or by the PCU throttling. I= t is not possible to distinguish between the two. This can be filtered by = rank. If multiple ranks are selected and are being throttled at the same t= ime, the counter will only increment by 1.", "UMask": "0x40", @@ -429,8 +515,10 @@ }, { "BriefDescription": "Throttle Cycles for Rank 0; DIMM ID", + "Counter": "0,1,2,3", "EventCode": "0x41", "EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK7", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles while the iMC is= being throttled by either thermal constraints or by the PCU throttling. I= t is not possible to distinguish between the two. This can be filtered by = rank. If multiple ranks are selected and are being throttled at the same t= ime, the counter will only increment by 1.", "UMask": "0x80", @@ -438,8 +526,10 @@ }, { "BriefDescription": "Read Preemption Count; Read over Read Preempt= ion", + "Counter": "0,1,2,3", "EventCode": "0x8", "EventName": "UNC_M_PREEMPTION.RD_PREEMPT_RD", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times a read in the iMC= preempts another read or write. Generally reads to an open page are issue= d ahead of requests to closed pages. This improves the page hit rate of th= e system. However, high priority requests can cause pages of active reques= ts to be closed in order to get them out. This will reduce the latency of = the high-priority request at the expense of lower bandwidth and increased o= verall average latency.; Filter for when a read preempts another read.", "UMask": "0x1", @@ -447,8 +537,10 @@ }, { "BriefDescription": "Read Preemption Count; Read over Write Preemp= tion", + "Counter": "0,1,2,3", "EventCode": "0x8", "EventName": "UNC_M_PREEMPTION.RD_PREEMPT_WR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times a read in the iMC= preempts another read or write. Generally reads to an open page are issue= d ahead of requests to closed pages. This improves the page hit rate of th= e system. However, high priority requests can cause pages of active reques= ts to be closed in order to get them out. 
This will reduce the latency of = the high-priority request at the expense of lower bandwidth and increased o= verall average latency.; Filter for when a read preempts a write.", "UMask": "0x2", @@ -456,8 +548,10 @@ }, { "BriefDescription": "DRAM Precharge commands.; Precharge due to by= pass", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_M_PRE_COUNT.BYP", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of DRAM Precharge commands= sent on this channel.", "UMask": "0x10", @@ -465,8 +559,10 @@ }, { "BriefDescription": "DRAM Precharge commands.; Precharge due to ti= mer expiration", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_M_PRE_COUNT.PAGE_CLOSE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of DRAM Precharge commands= sent on this channel.; Counts the number of DRAM Precharge commands sent o= n this channel as a result of the page close counter expiring. This does n= ot include implicit precharge commands sent in auto-precharge mode.", "UMask": "0x2", @@ -474,6 +570,7 @@ }, { "BriefDescription": "Pre-charges due to page misses", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_M_PRE_COUNT.PAGE_MISS", "PerPkg": "1", @@ -483,6 +580,7 @@ }, { "BriefDescription": "Pre-charge for reads", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_M_PRE_COUNT.RD", "PerPkg": "1", @@ -492,8 +590,10 @@ }, { "BriefDescription": "Pre-charge for writes", + "Counter": "0,1,2,3", "EventCode": "0x2", "EventName": "UNC_M_PRE_COUNT.WR", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of DRAM Precharge commands= sent on this channel.", "UMask": "0x8", @@ -501,1390 +601,1739 @@ }, { "BriefDescription": "Read CAS issued with HIGH priority", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_M_RD_CAS_PRIO.HIGH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "Read CAS issued with LOW priority", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_M_RD_CAS_PRIO.LOW", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "Read CAS issued with MEDIUM priority", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_M_RD_CAS_PRIO.MED", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "Read CAS issued with PANIC NON ISOCH priority= (starved)", + "Counter": "0,1,2,3", "EventCode": "0xA0", "EventName": "UNC_M_RD_CAS_PRIO.PANIC", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 11", + "Counter": "0,1,2,3", 
"EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xB0", "EventName": "UNC_M_RD_CAS_RANK0.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 0; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xB0", 
"EventName": "UNC_M_RD_CAS_RANK0.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { 
"BriefDescription": "RD_CAS Access to Rank 1; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 1; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xB1", "EventName": "UNC_M_RD_CAS_RANK1.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access 
to Rank 2; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 2; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xB2", "EventName": "UNC_M_RD_CAS_RANK2.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 13", + "Counter": 
"0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 3; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xB3", "EventName": "UNC_M_RD_CAS_RANK3.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 0", + "Counter": "0,1,2,3", "EventCode": 
"0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": 
"iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 4; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xB4", "EventName": "UNC_M_RD_CAS_RANK4.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; 
Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 5; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xB5", "EventName": "UNC_M_RD_CAS_RANK5.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 15", + "Counter": "0,1,2,3", 
"EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 6; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xB6", "EventName": "UNC_M_RD_CAS_RANK6.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": 
"UNC_M_RD_CAS_RANK7.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANKG2", + "Experimental": "1", 
"PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "RD_CAS Access to Rank 7; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xB7", "EventName": "UNC_M_RD_CAS_RANK7.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "Read Pending Queue Full Cycles", + "Counter": "0,1,2,3", "EventCode": "0x12", "EventName": "UNC_M_RPQ_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the Read Pe= nding Queue is full. When the RPQ is full, the HA will not be able to issu= e any additional read requests into the iMC. This count should be similar = count in the HA which tracks the number of cycles that the HA has no RPQ cr= edits, just somewhat smaller to account for the credit return overhead. We= generally do not expect to see RPQ become full except for potentially duri= ng Write Major Mode or while running with slow DRAM. This event only track= s non-ISOC queue entries.", "Unit": "iMC" }, { "BriefDescription": "Read Pending Queue Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x11", "EventName": "UNC_M_RPQ_CYCLES_NE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Read Pe= nding Queue is not empty. This can then be used to calculate the average o= ccupancy (in conjunction with the Read Pending Queue Occupancy count). The= RPQ is used to schedule reads out to the memory controller and to track th= e requests. Requests allocate into the RPQ soon after they enter the memor= y controller, and need credits for an entry in this buffer before being sen= t from the HA to the iMC. They deallocate after the CAS command has been i= ssued to memory. This filter is to be used in conjunction with the occupan= cy filter so that one can correctly track the average occupancies for sched= ulable entries and scheduled requests.", "Unit": "iMC" }, { "BriefDescription": "Read Pending Queue Allocations", + "Counter": "0,1,2,3", "EventCode": "0x10", "EventName": "UNC_M_RPQ_INSERTS", "PerPkg": "1", @@ -1893,6 +2342,7 @@ }, { "BriefDescription": "Read Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_M_RPQ_OCCUPANCY", "PerPkg": "1", @@ -1901,46 +2351,57 @@ }, { "BriefDescription": "Transition from WMM to RMM because of low thr= eshold; Transition from WMM to RMM because of starve counter", + "Counter": "0,1,2,3", "EventCode": "0xC0", "EventName": "UNC_M_WMM_TO_RMM.LOW_THRESH", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "Transition from WMM to RMM because of low thr= eshold", + "Counter": "0,1,2,3", "EventCode": "0xC0", "EventName": "UNC_M_WMM_TO_RMM.STARVE", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "Transition from WMM to RMM because of low thr= eshold", + "Counter": "0,1,2,3", "EventCode": "0xC0", "EventName": "UNC_M_WMM_TO_RMM.VMSE_RETRY", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "Write Pending Queue Full Cycles", + "Counter": "0,1,2,3", "EventCode": "0x22", "EventName": "UNC_M_WPQ_CYCLES_FULL", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the Write P= ending Queue is full. When the WPQ is full, the HA will not be able to iss= ue any additional write requests into the iMC. 
This count should be simila= r count in the CHA which tracks the number of cycles that the CHA has no WP= Q credits, just somewhat smaller to account for the credit return overhead.= ", "Unit": "iMC" }, { "BriefDescription": "Write Pending Queue Not Empty", + "Counter": "0,1,2,3", "EventCode": "0x21", "EventName": "UNC_M_WPQ_CYCLES_NE", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the Write P= ending Queue is not empty. This can then be used to calculate the average = queue occupancy (in conjunction with the WPQ Occupancy Accumulation count).= The WPQ is used to schedule write out to the memory controller and to tra= ck the writes. Requests allocate into the WPQ soon after they enter the me= mory controller, and need credits for an entry in this buffer before being = sent from the CHA to the iMC. They deallocate after being issued to DRAM. = Write requests themselves are able to complete (from the perspective of th= e rest of the system) as soon they have posted to the iMC. This is not to = be confused with actually performing the write to DRAM. Therefore, the ave= rage latency for this queue is actually not useful for deconstruction inter= mediate write latencies.", "Unit": "iMC" }, { "BriefDescription": "Write Pending Queue Allocations", + "Counter": "0,1,2,3", "EventCode": "0x20", "EventName": "UNC_M_WPQ_INSERTS", "PerPkg": "1", @@ -1949,6 +2410,7 @@ }, { "BriefDescription": "Write Pending Queue Occupancy", + "Counter": "0,1,2,3", "EventCode": "0x81", "EventName": "UNC_M_WPQ_OCCUPANCY", "PerPkg": "1", @@ -1957,1359 +2419,1701 @@ }, { "BriefDescription": "Write Pending Queue CAM Match", + "Counter": "0,1,2,3", "EventCode": "0x23", "EventName": "UNC_M_WPQ_READ_HIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times a request hits in= the WPQ (write-pending queue). The iMC allows writes and reads to pass up= other writes to different addresses. Before a read or a write is issued, = it will first CAM the WPQ to see if there is a write pending to that addres= s. When reads hit, they are able to directly pull their data from the WPQ = instead of going to memory. Writes that hit will overwrite the existing da= ta. Partial writes that hit will not need to do underfill reads and will s= imply update their relevant sections.", "Unit": "iMC" }, { "BriefDescription": "Write Pending Queue CAM Match", + "Counter": "0,1,2,3", "EventCode": "0x24", "EventName": "UNC_M_WPQ_WRITE_HIT", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of times a request hits in= the WPQ (write-pending queue). The iMC allows writes and reads to pass up= other writes to different addresses. Before a read or a write is issued, = it will first CAM the WPQ to see if there is a write pending to that addres= s. When reads hit, they are able to directly pull their data from the WPQ = instead of going to memory. Writes that hit will overwrite the existing da= ta. 
Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.", "Unit": "iMC" }, { "BriefDescription": "Not getting the requested Major Mode", + "Counter": "0,1,2,3", "EventCode": "0xC1", "EventName": "UNC_M_WRONG_MM", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, {
"BriefDescription": "WR_CAS Access to Rank 0; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 0; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xB8", "EventName": "UNC_M_WR_CAS_RANK0.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access 
to Rank 1; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 1; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xB9", "EventName": "UNC_M_WR_CAS_RANK1.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 12", + "Counter": "0,1,2,3", 
"EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 2; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xBA", "EventName": "UNC_M_WR_CAS_RANK2.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xBB", 
"EventName": "UNC_M_WR_CAS_RANK3.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { 
"BriefDescription": "WR_CAS Access to Rank 3; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 3; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xBB", "EventName": "UNC_M_WR_CAS_RANK3.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access 
to Rank 4; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 4; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xBC", "EventName": "UNC_M_WR_CAS_RANK4.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 14", + "Counter": 
"0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 5; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xBD", "EventName": "UNC_M_WR_CAS_RANK5.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": 
"UNC_M_WR_CAS_RANK6.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 7", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": 
"0x12", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 6; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xBE", "EventName": "UNC_M_WR_CAS_RANK6.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; All Banks", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.ALLBANKS", + "Experimental": "1", "PerPkg": "1", "UMask": "0x10", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 0", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK0", + "Experimental": "1", "PerPkg": "1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 1", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x1", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 10", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK10", + "Experimental": "1", "PerPkg": "1", "UMask": "0xa", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 11", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK11", + "Experimental": "1", "PerPkg": "1", "UMask": "0xb", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 12", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK12", + "Experimental": "1", "PerPkg": "1", "UMask": "0xc", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 13", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK13", + "Experimental": "1", "PerPkg": "1", "UMask": "0xd", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 14", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK14", + "Experimental": "1", "PerPkg": "1", "UMask": "0xe", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 15", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK15", + "Experimental": "1", "PerPkg": "1", "UMask": "0xf", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 2", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x2", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 3", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x3", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 4", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK4", + "Experimental": "1", "PerPkg": "1", "UMask": "0x4", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 5", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK5", + "Experimental": "1", "PerPkg": "1", "UMask": "0x5", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 6", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK6", + "Experimental": "1", "PerPkg": "1", "UMask": "0x6", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 
7", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK7", + "Experimental": "1", "PerPkg": "1", "UMask": "0x7", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 8", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK8", + "Experimental": "1", "PerPkg": "1", "UMask": "0x8", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank 9", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANK9", + "Experimental": "1", "PerPkg": "1", "UMask": "0x9", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank Group 0 (Banks = 0-3)", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANKG0", + "Experimental": "1", "PerPkg": "1", "UMask": "0x11", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank Group 1 (Banks = 4-7)", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANKG1", + "Experimental": "1", "PerPkg": "1", "UMask": "0x12", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank Group 2 (Banks = 8-11)", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANKG2", + "Experimental": "1", "PerPkg": "1", "UMask": "0x13", "Unit": "iMC" }, { "BriefDescription": "WR_CAS Access to Rank 7; Bank Group 3 (Banks = 12-15)", + "Counter": "0,1,2,3", "EventCode": "0xBF", "EventName": "UNC_M_WR_CAS_RANK7.BANKG3", + "Experimental": "1", "PerPkg": "1", "UMask": "0x14", "Unit": "iMC" diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-power.json b/to= ols/perf/pmu-events/arch/x86/skylakex/uncore-power.json index ceef46046488..809b86dde933 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-power.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-power.json @@ -1,147 +1,185 @@ [ { "BriefDescription": "pclk Cycles", + "Counter": "0,1,2,3", "EventName": "UNC_P_CLOCKTICKS", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "The PCU runs off a fixed 1 GHz clock. This = event counts the number of pclk cycles measured while the counter was enabl= ed. 
The pclk, like the Memory Controller's dclk, counts at a constant rate= making it a good measure of actual wall time.", "Unit": "PCU" }, { "BriefDescription": "UNC_P_CORE_TRANSITION_CYCLES", + "Counter": "0,1,2,3", "EventCode": "0x60", "EventName": "UNC_P_CORE_TRANSITION_CYCLES", + "Experimental": "1", "PerPkg": "1", "Unit": "PCU" }, { "BriefDescription": "UNC_P_DEMOTIONS", + "Counter": "0,1,2,3", "EventCode": "0x30", "EventName": "UNC_P_DEMOTIONS", + "Experimental": "1", "PerPkg": "1", "Unit": "PCU" }, { "BriefDescription": "Phase Shed 0 Cycles", + "Counter": "0,1,2,3", "EventCode": "0x75", "EventName": "UNC_P_FIVR_PS_PS0_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles spent in phase-shedding power state 0= ", "Unit": "PCU" }, { "BriefDescription": "Phase Shed 1 Cycles", + "Counter": "0,1,2,3", "EventCode": "0x76", "EventName": "UNC_P_FIVR_PS_PS1_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles spent in phase-shedding power state 1= ", "Unit": "PCU" }, { "BriefDescription": "Phase Shed 2 Cycles", + "Counter": "0,1,2,3", "EventCode": "0x77", "EventName": "UNC_P_FIVR_PS_PS2_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles spent in phase-shedding power state 2= ", "Unit": "PCU" }, { "BriefDescription": "Phase Shed 3 Cycles", + "Counter": "0,1,2,3", "EventCode": "0x78", "EventName": "UNC_P_FIVR_PS_PS3_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Cycles spent in phase-shedding power state 3= ", "Unit": "PCU" }, { "BriefDescription": "Thermal Strongest Upper Limit Cycles", + "Counter": "0,1,2,3", "EventCode": "0x4", "EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when thermal con= ditions are the upper limit on frequency. This is related to the THERMAL_T= HROTTLE CYCLES_ABOVE_TEMP event, which always counts cycles when we are abo= ve the thermal temperature. This event (STRONGEST_UPPER_LIMIT) is sampled = at the output of the algorithm that determines the actual frequency, while = THERMAL_THROTTLE looks at the input.", "Unit": "PCU" }, { "BriefDescription": "Power Strongest Upper Limit Cycles", + "Counter": "0,1,2,3", "EventCode": "0x5", "EventName": "UNC_P_FREQ_MAX_POWER_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when power is th= e upper limit on frequency.", "Unit": "PCU" }, { "BriefDescription": "IO P Limit Strongest Lower Limit Cycles", + "Counter": "0,1,2,3", "EventCode": "0x73", "EventName": "UNC_P_FREQ_MIN_IO_P_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when IO P Limit = is preventing us from dropping the frequency lower. This algorithm monitor= s the needs to the IO subsystem on both local and remote sockets and will m= aintain a frequency high enough to maintain good IO BW. This is necessary = for when all the IA cores on a socket are idle but a user still would like = to maintain high IO Bandwidth.", "Unit": "PCU" }, { "BriefDescription": "Cycles spent changing Frequency", + "Counter": "0,1,2,3", "EventCode": "0x74", "EventName": "UNC_P_FREQ_TRANS_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the system = is changing frequency. This can not be filtered by thread ID. 
One can als= o use it with the occupancy counter that monitors number of threads in C0 t= o estimate the performance impact that frequency transitions had on the sys= tem.", "Unit": "PCU" }, { "BriefDescription": "UNC_P_MCP_PROCHOT_CYCLES", + "Counter": "0,1,2,3", "EventCode": "0x6", "EventName": "UNC_P_MCP_PROCHOT_CYCLES", + "Experimental": "1", "PerPkg": "1", "Unit": "PCU" }, { "BriefDescription": "Memory Phase Shedding Cycles", + "Counter": "0,1,2,3", "EventCode": "0x2F", "EventName": "UNC_P_MEMORY_PHASE_SHEDDING_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that the PCU has= triggered memory phase shedding. This is a mode that can be run in the iM= C physicals that saves power at the expense of additional latency.", "Unit": "PCU" }, { "BriefDescription": "Package C State Residency - C0", + "Counter": "0,1,2,3", "EventCode": "0x2A", "EventName": "UNC_P_PKG_RESIDENCY_C0_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the package= was in C0. This event can be used in conjunction with edge detect to coun= t C0 entrances (or exits using invert). Residency events do not include tr= ansition times.", "Unit": "PCU" }, { "BriefDescription": "Package C State Residency - C2E", + "Counter": "0,1,2,3", "EventCode": "0x2B", "EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the package= was in C2E. This event can be used in conjunction with edge detect to cou= nt C2E entrances (or exits using invert). Residency events do not include = transition times.", "Unit": "PCU" }, { "BriefDescription": "Package C State Residency - C3", + "Counter": "0,1,2,3", "EventCode": "0x2C", "EventName": "UNC_P_PKG_RESIDENCY_C3_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the package= was in C3. This event can be used in conjunction with edge detect to coun= t C3 entrances (or exits using invert). Residency events do not include tr= ansition times.", "Unit": "PCU" }, { "BriefDescription": "Package C State Residency - C6", + "Counter": "0,1,2,3", "EventCode": "0x2D", "EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles when the package= was in C6. This event can be used in conjunction with edge detect to coun= t C6 entrances (or exits using invert). Residency events do not include tr= ansition times.", "Unit": "PCU" }, { "BriefDescription": "UNC_P_PMAX_THROTTLED_CYCLES", + "Counter": "0,1,2,3", "EventCode": "0x7", "EventName": "UNC_P_PMAX_THROTTLED_CYCLES", + "Experimental": "1", "PerPkg": "1", "Unit": "PCU" }, { "BriefDescription": "Number of cores in C-State; C0 and C1", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C0", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "This is an occupancy event that tracks the n= umber of cores that are in the chosen C-State. 
It can be used by itself to= get the average number of cores in that C-state with thresholding to gener= ate histograms, or with other PCU events and occupancy triggering to captur= e other details.", "UMask": "0x40", @@ -149,8 +187,10 @@ }, { "BriefDescription": "Number of cores in C-State; C3", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C3", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "This is an occupancy event that tracks the n= umber of cores that are in the chosen C-State. It can be used by itself to= get the average number of cores in that C-state with thresholding to gener= ate histograms, or with other PCU events and occupancy triggering to captur= e other details.", "UMask": "0x80", @@ -158,8 +198,10 @@ }, { "BriefDescription": "Number of cores in C-State; C6 and C7", + "Counter": "0,1,2,3", "EventCode": "0x80", "EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C6", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "This is an occupancy event that tracks the n= umber of cores that are in the chosen C-State. It can be used by itself to= get the average number of cores in that C-state with thresholding to gener= ate histograms, or with other PCU events and occupancy triggering to captur= e other details.", "UMask": "0xc0", @@ -167,32 +209,40 @@ }, { "BriefDescription": "External Prochot", + "Counter": "0,1,2,3", "EventCode": "0xA", "EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that we are in e= xternal PROCHOT mode. This mode is triggered when a sensor off the die det= ermines that something off-die (like DRAM) is too hot and must throttle to = avoid damaging the chip.", "Unit": "PCU" }, { "BriefDescription": "Internal Prochot", + "Counter": "0,1,2,3", "EventCode": "0x9", "EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Counts the number of cycles that we are in I= nternal PROCHOT mode. This mode is triggered when a sensor on the die dete= rmines that we are too hot and must throttle to avoid damaging the chip.", "Unit": "PCU" }, { "BriefDescription": "Total Core C State Transition Cycles", + "Counter": "0,1,2,3", "EventCode": "0x72", "EventName": "UNC_P_TOTAL_TRANSITION_CYCLES", + "Experimental": "1", "PerPkg": "1", "PublicDescription": "Number of cycles spent performing core C sta= te transitions across all cores.", "Unit": "PCU" }, { "BriefDescription": "VR Hot", + "Counter": "0,1,2,3", "EventCode": "0x42", "EventName": "UNC_P_VR_HOT_CYCLES", + "Experimental": "1", "PerPkg": "1", "Unit": "PCU" } diff --git a/tools/perf/pmu-events/arch/x86/skylakex/virtual-memory.json b/= tools/perf/pmu-events/arch/x86/skylakex/virtual-memory.json index 73feadaf7674..ad33fff57c03 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/virtual-memory.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/virtual-memory.json @@ -1,6 +1,7 @@ [ { "BriefDescription": "Load misses in all DTLB levels that cause pag= e walks", + "Counter": "0,1,2,3", "EventCode": "0x08", "EventName": "DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK", "PublicDescription": "Counts demand data loads that caused a page = walk of any page size (4K/2M/4M/1G). 
diff --git a/tools/perf/pmu-events/arch/x86/skylakex/virtual-memory.json b/tools/perf/pmu-events/arch/x86/skylakex/virtual-memory.json
index 73feadaf7674..ad33fff57c03 100644
--- a/tools/perf/pmu-events/arch/x86/skylakex/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/skylakex/virtual-memory.json
@@ -1,6 +1,7 @@
 [
     {
         "BriefDescription": "Load misses in all DTLB levels that cause page walks",
+        "Counter": "0,1,2,3",
         "EventCode": "0x08",
         "EventName": "DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK",
         "PublicDescription": "Counts demand data loads that caused a page walk of any page size (4K/2M/4M/1G). This implies it missed in all TLB levels, but the walk need not have completed.",
@@ -9,6 +10,7 @@
     },
     {
         "BriefDescription": "Loads that miss the DTLB and hit the STLB.",
+        "Counter": "0,1,2,3",
         "EventCode": "0x08",
         "EventName": "DTLB_LOAD_MISSES.STLB_HIT",
         "PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).",
@@ -17,6 +19,7 @@
     },
     {
         "BriefDescription": "Cycles when at least one PMH is busy with a page walk for a load. EPT page walk duration are excluded in Skylake.",
+        "Counter": "0,1,2,3",
         "CounterMask": "1",
         "EventCode": "0x08",
         "EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE",
@@ -26,6 +29,7 @@
     },
     {
         "BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
+        "Counter": "0,1,2,3",
         "EventCode": "0x08",
         "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
         "PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -34,6 +38,7 @@
     },
     {
         "BriefDescription": "Page walk completed due to a demand data load to a 1G page",
+        "Counter": "0,1,2,3",
         "EventCode": "0x08",
         "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
         "PublicDescription": "Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -42,6 +47,7 @@
     },
     {
         "BriefDescription": "Page walk completed due to a demand data load to a 2M/4M page",
+        "Counter": "0,1,2,3",
         "EventCode": "0x08",
         "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
         "PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -50,6 +56,7 @@
     },
     {
         "BriefDescription": "Page walk completed due to a demand data load to a 4K page",
+        "Counter": "0,1,2,3",
         "EventCode": "0x08",
         "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
         "PublicDescription": "Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -58,6 +65,7 @@
     },
     {
         "BriefDescription": "Counts 1 per cycle for each PMH that is busy with a page walk for a load. EPT page walk duration are excluded in Skylake.",
+        "Counter": "0,1,2,3",
         "EventCode": "0x08",
         "EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
         "PublicDescription": "Counts 1 per cycle for each PMH that is busy with a page walk for a load. EPT page walk duration are excluded in Skylake microarchitecture.",
@@ -66,6 +74,7 @@
     },
     {
         "BriefDescription": "Store misses in all DTLB levels that cause page walks",
+        "Counter": "0,1,2,3",
         "EventCode": "0x49",
         "EventName": "DTLB_STORE_MISSES.MISS_CAUSES_A_WALK",
         "PublicDescription": "Counts demand data stores that caused a page walk of any page size (4K/2M/4M/1G). This implies it missed in all TLB levels, but the walk need not have completed.",
@@ -74,6 +83,7 @@
     },
     {
         "BriefDescription": "Stores that miss the DTLB and hit the STLB.",
+        "Counter": "0,1,2,3",
         "EventCode": "0x49",
         "EventName": "DTLB_STORE_MISSES.STLB_HIT",
         "PublicDescription": "Stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).",
@@ -82,6 +92,7 @@
     },
     {
         "BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store. EPT page walk duration are excluded in Skylake.",
+        "Counter": "0,1,2,3",
         "CounterMask": "1",
         "EventCode": "0x49",
         "EventName": "DTLB_STORE_MISSES.WALK_ACTIVE",
@@ -91,6 +102,7 @@
     },
     {
         "BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
+        "Counter": "0,1,2,3",
         "EventCode": "0x49",
         "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
         "PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -99,6 +111,7 @@
     },
     {
         "BriefDescription": "Page walk completed due to a demand data store to a 1G page",
+        "Counter": "0,1,2,3",
         "EventCode": "0x49",
         "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
         "PublicDescription": "Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -107,6 +120,7 @@
     },
     {
         "BriefDescription": "Page walk completed due to a demand data store to a 2M/4M page",
+        "Counter": "0,1,2,3",
         "EventCode": "0x49",
         "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
         "PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -115,6 +129,7 @@
     },
     {
         "BriefDescription": "Page walk completed due to a demand data store to a 4K page",
+        "Counter": "0,1,2,3",
         "EventCode": "0x49",
         "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
         "PublicDescription": "Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -123,6 +138,7 @@
     },
     {
         "BriefDescription": "Counts 1 per cycle for each PMH that is busy with a page walk for a store. EPT page walk duration are excluded in Skylake.",
+        "Counter": "0,1,2,3",
         "EventCode": "0x49",
         "EventName": "DTLB_STORE_MISSES.WALK_PENDING",
         "PublicDescription": "Counts 1 per cycle for each PMH that is busy with a page walk for a store. EPT page walk duration are excluded in Skylake microarchitecture.",
@@ -131,6 +147,7 @@
     },
     {
         "BriefDescription": "Counts 1 per cycle for each PMH that is busy with a EPT (Extended Page Table) walk for any request type.",
+        "Counter": "0,1,2,3",
         "EventCode": "0x4f",
         "EventName": "EPT.WALK_PENDING",
         "PublicDescription": "Counts cycles for each PMH (Page Miss Handler) that is busy with an EPT (Extended Page Table) walk for any request type.",
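
The WALK_PENDING / WALK_ACTIVE / WALK_COMPLETED descriptions above suggest a couple of derived metrics. A small illustrative sketch, with made-up counts standing in for real perf stat output; the formulas are one reading of the event descriptions, not anything defined in these files:

  # Illustrative only: rough page-walk metrics from the DTLB walk events.
  def walk_metrics(walk_pending, walk_completed, walk_active, cycles):
      # Average walk duration in cycles: busy-PMH cycles per completed walk.
      avg_walk_cycles = walk_pending / walk_completed if walk_completed else 0.0
      # Fraction of cycles with at least one walk in flight.
      walk_active_ratio = walk_active / cycles if cycles else 0.0
      return avg_walk_cycles, walk_active_ratio

  # Hypothetical example values:
  avg, ratio = walk_metrics(walk_pending=1_200_000, walk_completed=50_000,
                            walk_active=900_000, cycles=10_000_000)
  print(f"~{avg:.1f} cycles/walk, walk in flight {ratio:.1%} of cycles")
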
@@ -139,6 +156,7 @@
     },
     {
         "BriefDescription": "Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages.",
+        "Counter": "0,1,2,3",
         "EventCode": "0xAE",
         "EventName": "ITLB.ITLB_FLUSH",
         "PublicDescription": "Counts the number of flushes of the big or small ITLB pages. Counting include both TLB Flush (covering all sets) and TLB Set Clear (set-specific).",
@@ -147,6 +165,7 @@
     },
     {
         "BriefDescription": "Misses at all ITLB levels that cause page walks",
+        "Counter": "0,1,2,3",
         "EventCode": "0x85",
         "EventName": "ITLB_MISSES.MISS_CAUSES_A_WALK",
         "PublicDescription": "Counts page walks of any page size (4K/2M/4M/1G) caused by a code fetch. This implies it missed in the ITLB and further levels of TLB, but the walk need not have completed.",
@@ -155,6 +174,7 @@
     },
     {
         "BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB.",
+        "Counter": "0,1,2,3",
         "EventCode": "0x85",
         "EventName": "ITLB_MISSES.STLB_HIT",
         "SampleAfterValue": "100003",
@@ -162,6 +182,7 @@
     },
     {
         "BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request. EPT page walk duration are excluded in Skylake.",
+        "Counter": "0,1,2,3",
         "CounterMask": "1",
         "EventCode": "0x85",
         "EventName": "ITLB_MISSES.WALK_ACTIVE",
@@ -171,6 +192,7 @@
     },
     {
         "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
+        "Counter": "0,1,2,3",
         "EventCode": "0x85",
         "EventName": "ITLB_MISSES.WALK_COMPLETED",
         "PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -179,6 +201,7 @@
     },
     {
         "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (1G)",
+        "Counter": "0,1,2,3",
         "EventCode": "0x85",
         "EventName": "ITLB_MISSES.WALK_COMPLETED_1G",
         "PublicDescription": "Counts completed page walks (1G page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -187,6 +210,7 @@
     },
     {
         "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+        "Counter": "0,1,2,3",
         "EventCode": "0x85",
         "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
         "PublicDescription": "Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -195,6 +219,7 @@
     },
     {
         "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+        "Counter": "0,1,2,3",
         "EventCode": "0x85",
         "EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
         "PublicDescription": "Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -203,6 +228,7 @@
     },
     {
         "BriefDescription": "Counts 1 per cycle for each PMH that is busy with a page walk for an instruction fetch request. EPT page walk duration are excluded in Skylake.",
+        "Counter": "0,1,2,3",
         "EventCode": "0x85",
         "EventName": "ITLB_MISSES.WALK_PENDING",
         "PublicDescription": "Counts 1 per cycle for each PMH (Page Miss Handler) that is busy with a page walk for an instruction fetch request. EPT page walk duration are excluded in Skylake microarchitecture.",
@@ -211,6 +237,7 @@
     },
     {
         "BriefDescription": "DTLB flush attempts of the thread-specific entries",
+        "Counter": "0,1,2,3",
         "EventCode": "0xBD",
         "EventName": "TLB_FLUSH.DTLB_THREAD",
         "PublicDescription": "Counts the number of DTLB flush attempts of the thread-specific entries.",
@@ -219,6 +246,7 @@
     },
     {
         "BriefDescription": "STLB flush attempts",
+        "Counter": "0,1,2,3",
         "EventCode": "0xBD",
         "EventName": "TLB_FLUSH.STLB_ANY",
         "PublicDescription": "Counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, etc.).",
-- 
2.45.2.627.g7a2c4fd464-goog
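
Once the tables are built into perf, the events above stay addressable by their lowercased EventName. A hedged sketch of collecting two of the DTLB walk events from Python; it assumes a perf binary carrying these event tables and a Skylake Server machine:

  #!/usr/bin/env python3
  # Sketch: run "perf stat" with two DTLB walk events from this file and
  # print the counts. Assumes perf is installed and the event names resolve
  # on the running machine.
  import subprocess

  events = "dtlb_load_misses.walk_pending,dtlb_load_misses.walk_completed"
  cmd = ["perf", "stat", "-e", events, "--", "sleep", "1"]
  result = subprocess.run(cmd, capture_output=True, text=True)
  # perf stat writes its counter summary to stderr.
  print(result.stderr)
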