Replace 'the the' with 'the' in comments, documentation, and event descriptions.
Signed-off-by: Slark Xiao <[email protected]>
---
tools/perf/Documentation/perf-diff.txt | 2 +-
tools/perf/pmu-events/arch/x86/cascadelakex/uncore-other.json | 2 +-
tools/perf/pmu-events/arch/x86/silvermont/pipeline.json | 2 +-
tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json | 2 +-
tools/perf/util/cs-etm.c | 2 +-
5 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/tools/perf/Documentation/perf-diff.txt b/tools/perf/Documentation/perf-diff.txt
index be65bd55ab2a..b77957ac288b 100644
--- a/tools/perf/Documentation/perf-diff.txt
+++ b/tools/perf/Documentation/perf-diff.txt
@@ -285,7 +285,7 @@ If specified the 'Weighted diff' column is displayed with value 'd' computed as:

- period being the hist entry period value

- - WEIGHT-A/WEIGHT-B being user supplied weights in the the '-c' option
+ - WEIGHT-A/WEIGHT-B being user supplied weights in the '-c' option
behind ':' separator like '-c wdiff:1,2'.
- WEIGHT-A being the weight of the data file
- WEIGHT-B being the weight of the baseline data file
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-other.json b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-other.json
index aa460d0c4851..59ab88de1b37 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-other.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-other.json
@@ -1923,7 +1923,7 @@
"EventCode": "0x25",
"EventName": "UNC_UPI_RxL0P_POWER_CYCLES",
"PerPkg": "1",
- "PublicDescription": "Counts cycles when the the receive side (Rx) of the Intel Ultra Path Interconnect(UPI) is in L0p power mode. L0p is a mode where we disable 60% of the UPI lanes, decreasing our bandwidth in order to save power.",
+ "PublicDescription": "Counts cycles when the receive side (Rx) of the Intel Ultra Path Interconnect(UPI) is in L0p power mode. L0p is a mode where we disable 60% of the UPI lanes, decreasing our bandwidth in order to save power.",
"Unit": "UPI LL"
},
{
diff --git a/tools/perf/pmu-events/arch/x86/silvermont/pipeline.json b/tools/perf/pmu-events/arch/x86/silvermont/pipeline.json
index 03a4c7f26698..3278c7d1654d 100644
--- a/tools/perf/pmu-events/arch/x86/silvermont/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/silvermont/pipeline.json
@@ -257,7 +257,7 @@
"Counter": "0,1",
"EventCode": "0xCA",
"EventName": "NO_ALLOC_CYCLES.NOT_DELIVERED",
- "PublicDescription": "The NO_ALLOC_CYCLES.NOT_DELIVERED event is used to measure front-end inefficiencies, i.e. when front-end of the machine is not delivering micro-ops to the back-end and the back-end is not stalled. This event can be used to identify if the machine is truly front-end bound. When this event occurs, it is an indication that the front-end of the machine is operating at less than its theoretical peak performance. Background: We can think of the processor pipeline as being divided into 2 broader parts: Front-end and Back-end. Front-end is responsible for fetching the instruction, decoding into micro-ops (uops) in machine understandable format and putting them into a micro-op queue to be consumed by back end. The back-end then takes these micro-ops, allocates the required resources. When all resources are ready, micro-ops are executed. If the back-end is not ready to accept micro-ops from the front-end, then we do not want to count these as front-end bottlenecks. However, whenever we have bottlenecks in the back-end, we will have allocation unit stalls and eventually forcing the front-end to wait until the back-end is ready to receive more UOPS. This event counts the cycles only when back-end is requesting more uops and front-end is not able to provide them. Some examples of conditions that cause front-end efficiencies are: Icache misses, ITLB misses, and decoder restrictions that limit the the front-end bandwidth.",
+ "PublicDescription": "The NO_ALLOC_CYCLES.NOT_DELIVERED event is used to measure front-end inefficiencies, i.e. when front-end of the machine is not delivering micro-ops to the back-end and the back-end is not stalled. This event can be used to identify if the machine is truly front-end bound. When this event occurs, it is an indication that the front-end of the machine is operating at less than its theoretical peak performance. Background: We can think of the processor pipeline as being divided into 2 broader parts: Front-end and Back-end. Front-end is responsible for fetching the instruction, decoding into micro-ops (uops) in machine understandable format and putting them into a micro-op queue to be consumed by back end. The back-end then takes these micro-ops, allocates the required resources. When all resources are ready, micro-ops are executed. If the back-end is not ready to accept micro-ops from the front-end, then we do not want to count these as front-end bottlenecks. However, whenever we have bottlenecks in the back-end, we will have allocation unit stalls and eventually forcing the front-end to wait until the back-end is ready to receive more UOPS. This event counts the cycles only when back-end is requesting more uops and front-end is not able to provide them. Some examples of conditions that cause front-end efficiencies are: Icache misses, ITLB misses, and decoder restrictions that limit the front-end bandwidth.",
"SampleAfterValue": "200003",
"UMask": "0x50"
},
diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json b/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json
index aa0f67613c4a..0c96e6924d62 100644
--- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json
+++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json
@@ -1852,7 +1852,7 @@
"EventCode": "0x25",
"EventName": "UNC_UPI_RxL0P_POWER_CYCLES",
"PerPkg": "1",
- "PublicDescription": "Counts cycles when the the receive side (Rx) of the Intel Ultra Path Interconnect(UPI) is in L0p power mode. L0p is a mode where we disable 60% of the UPI lanes, decreasing our bandwidth in order to save power.",
+ "PublicDescription": "Counts cycles when the receive side (Rx) of the Intel Ultra Path Interconnect(UPI) is in L0p power mode. L0p is a mode where we disable 60% of the UPI lanes, decreasing our bandwidth in order to save power.",
"Unit": "UPI LL"
},
{
diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
index 8b95fb3c4d7b..16db965ac995 100644
--- a/tools/perf/util/cs-etm.c
+++ b/tools/perf/util/cs-etm.c
@@ -1451,7 +1451,7 @@ static int cs_etm__sample(struct cs_etm_queue *etmq,
* tidq->packet->instr_count represents the number of
* instructions in the current etm packet.
*
- * Period instructions (Pi) contains the the number of
+ * Period instructions (Pi) contains the number of
* instructions executed after the sample point(n) from the
* previous etm packet. This will always be less than
* etm->instructions_sample_period.
--
2.25.1
On Fri, Jul 22, 2022 at 3:38 AM Slark Xiao <[email protected]> wrote:
>
> Replace 'the the' with 'the' in comments, documentation, and event descriptions.
Thanks. These files are generated from the files here:
https://download.01.org/perfmon/
I've added Intel folks to the e-mail, so they can update those files.
Thanks,
Ian