From: Bharat Bhushan
Subject: [PATCH v2 3/4] perf/marvell: cn10k DDR perfmon event overflow handling
Date: Tue, 10 Aug 2021 15:13:06 +0530
Message-ID: <20210810094307.29595-4-bbhushan2@marvell.com>
In-Reply-To: <20210810094307.29595-1-bbhushan2@marvell.com>
References: <20210810094307.29595-1-bbhushan2@marvell.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The CN10K DSS hardware perfmon does not support an event overflow
interrupt, so a periodic timer is used instead. Each event counter is
48 bits wide and in the worst case increments at up to 5.6 GT/s, at
which rate it still takes many hours to overflow. The overflow polling
period is therefore set to 100 seconds by default, and can be changed
through a module parameter.

The two fixed event counters restart counting from zero on overflow, so
an overflow has occurred whenever the new count is less than the
previous count. The eight programmable event counters instead freeze at
their maximum value; since an individual counter cannot be restarted,
all eight counters must be restarted together.
Signed-off-by: Bharat Bhushan
---
v1->v2:
 - No change

 drivers/perf/marvell_cn10k_ddr_pmu.c | 111 +++++++++++++++++++++++++++
 1 file changed, 111 insertions(+)

diff --git a/drivers/perf/marvell_cn10k_ddr_pmu.c b/drivers/perf/marvell_cn10k_ddr_pmu.c
index 8f9e3d1fcd8d..80f1441961d0 100644
--- a/drivers/perf/marvell_cn10k_ddr_pmu.c
+++ b/drivers/perf/marvell_cn10k_ddr_pmu.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <linux/hrtimer.h>
 
 /* Performance Counters Operating Mode Control Registers */
 #define DDRC_PERF_CNT_OP_MODE_CTRL	0x8020
@@ -128,6 +129,7 @@ struct cn10k_ddr_pmu {
 	struct device *dev;
 	int active_events;
 	struct perf_event *events[DDRC_PERF_NUM_COUNTERS];
+	struct hrtimer hrtimer;
 };
 
 #define to_cn10k_ddr_pmu(p)	container_of(p, struct cn10k_ddr_pmu, pmu)
@@ -251,6 +253,18 @@ static const struct attribute_group *cn10k_attr_groups[] = {
 	NULL,
 };
 
+/* The default poll timeout of 100 sec is more than sufficient for a
+ * 48 bit counter incremented at most at 5.6 GT/s, which may take many
+ * hours to overflow.
+ */
+static unsigned long cn10k_ddr_pmu_poll_period_sec = 100;
+module_param_named(poll_period_sec, cn10k_ddr_pmu_poll_period_sec, ulong, 0644);
+
+static ktime_t cn10k_ddr_pmu_timer_period(void)
+{
+	return ms_to_ktime((u64)cn10k_ddr_pmu_poll_period_sec * MSEC_PER_SEC);
+}
+
 static uint64_t ddr_perf_get_event_bitmap(int eventid)
 {
 	uint64_t event_bitmap = 0;
@@ -439,6 +453,10 @@ static int cn10k_ddr_perf_event_add(struct perf_event *event, int flags)
 	pmu->active_events++;
 	hwc->idx = counter;
 
+	if (pmu->active_events == 1)
+		hrtimer_start(&pmu->hrtimer, cn10k_ddr_pmu_timer_period(),
+			      HRTIMER_MODE_REL_PINNED);
+
 	if (counter < DDRC_PERF_NUM_GEN_COUNTERS) {
 		/* Generic counters, configure event id */
 		reg_offset = DDRC_PERF_CFG(counter);
@@ -487,6 +505,10 @@ static void cn10k_ddr_perf_event_del(struct perf_event *event, int flags)
 	cn10k_ddr_perf_free_counter(pmu, counter);
 	pmu->active_events--;
 	hwc->idx = -1;
+
+	/* Cancel timer when no events to capture */
+	if (pmu->active_events == 0)
+		hrtimer_cancel(&pmu->hrtimer);
 }
 
 static void cn10k_ddr_perf_pmu_enable(struct pmu *pmu)
@@ -505,6 +527,92 @@ static void cn10k_ddr_perf_pmu_disable(struct pmu *pmu)
 		       DDRC_PERF_CNT_END_OP_CTRL);
 }
 
+static void cn10k_ddr_perf_event_update_all(struct cn10k_ddr_pmu *pmu)
+{
+	struct hw_perf_event *hwc;
+	int i;
+
+	for (i = 0; i < DDRC_PERF_NUM_GEN_COUNTERS; i++) {
+		if (pmu->events[i] == NULL)
+			continue;
+
+		cn10k_ddr_perf_event_update(pmu->events[i]);
+	}
+
+	/* Reset previous count as h/w counters are reset */
+	for (i = 0; i < DDRC_PERF_NUM_GEN_COUNTERS; i++) {
+		if (pmu->events[i] == NULL)
+			continue;
+
+		hwc = &pmu->events[i]->hw;
+		local64_set(&hwc->prev_count, 0);
+	}
+}
+
+static irqreturn_t cn10k_ddr_pmu_overflow_handler(struct cn10k_ddr_pmu *pmu)
+{
+	struct perf_event *event;
+	struct hw_perf_event *hwc;
+	uint64_t prev_count, new_count;
+	uint64_t value;
+	int i;
+
+	event = pmu->events[DDRC_PERF_READ_COUNTER_IDX];
+	if (event) {
+		hwc = &event->hw;
+		prev_count = local64_read(&hwc->prev_count);
+		new_count = cn10k_ddr_perf_read_counter(pmu, hwc->idx);
+
+		/* Overflow condition is when new count less than
+		 * previous count
+		 */
+		if (new_count < prev_count)
+			cn10k_ddr_perf_event_update(event);
+	}
+
+	event = pmu->events[DDRC_PERF_WRITE_COUNTER_IDX];
+	if (event) {
+		hwc = &event->hw;
+		prev_count = local64_read(&hwc->prev_count);
+		new_count = cn10k_ddr_perf_read_counter(pmu, hwc->idx);
+
+		/* Overflow condition is when new count less than
+		 * previous count
+		 */
+		if (new_count < prev_count)
+			cn10k_ddr_perf_event_update(event);
+	}
+
+	for (i = 0; i < DDRC_PERF_NUM_GEN_COUNTERS; i++) {
+		if (pmu->events[i] == NULL)
+			continue;
+
+		value = cn10k_ddr_perf_read_counter(pmu, i);
+		if (value == DDRC_PERF_CNT_MAX_VALUE) {
+			pr_info("Counter-(%d) reached max value\n", i);
+			cn10k_ddr_perf_event_update_all(pmu);
+			cn10k_ddr_perf_pmu_disable(&pmu->pmu);
+			cn10k_ddr_perf_pmu_enable(&pmu->pmu);
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
+static enum hrtimer_restart cn10k_ddr_pmu_timer_handler(struct hrtimer *hrtimer)
+{
+	struct cn10k_ddr_pmu *pmu = container_of(hrtimer, struct cn10k_ddr_pmu,
+						 hrtimer);
+	unsigned long flags;
+
+	local_irq_save(flags);
+	cn10k_ddr_pmu_overflow_handler(pmu);
+	local_irq_restore(flags);
+
+	hrtimer_forward_now(hrtimer, cn10k_ddr_pmu_timer_period());
+	return HRTIMER_RESTART;
+}
+
 static int cn10k_ddr_perf_probe(struct platform_device *pdev)
 {
 	struct cn10k_ddr_pmu *ddr_pmu;
@@ -555,6 +663,9 @@ static int cn10k_ddr_perf_probe(struct platform_device *pdev)
 	if (!name)
 		return -ENOMEM;
 
+	hrtimer_init(&ddr_pmu->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	ddr_pmu->hrtimer.function = cn10k_ddr_pmu_timer_handler;
+
 	ret = perf_pmu_register(&ddr_pmu->pmu, name, -1);
 	if (ret)
 		return ret;
-- 
2.17.1