Subject: [PATCH v5 3/3] perf record: adapt affinity to machines with #CPUs > 1K
From: Alexey Budankov
To: Arnaldo Carvalho de Melo
Cc: Jiri Olsa, Namhyung Kim, Alexander Shishkin, Peter Zijlstra, Ingo Molnar,
    Andi Kleen, linux-kernel
References:
Organization: Intel Corp.
Message-ID: <96d7e2ff-ce8b-c1e0-d52c-aa59ea96f0ea@linux.intel.com>
Date: Tue, 3 Dec 2019 14:45:27 +0300
In-Reply-To:

Use struct mmap_cpu_mask type for the tool's thread and mmap data buffers
to overcome the current 1024 CPU mask size limitation of the cpu_set_t type.

The glibc cpu_set_t type has an internal mask size limit of 1024 CPUs.
Moving to struct mmap_cpu_mask removes that limit; the tools bitmap API is
used to manipulate objects of struct mmap_cpu_mask type.

Reported-by: Andi Kleen
Signed-off-by: Alexey Budankov
---
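Note, not part of the patch: below is a minimal standalone sketch of the idea
behind the change -- a dynamically sized bitmap replaces the fixed 1024-bit
cpu_set_t, and the bitmap storage is cast to cpu_set_t * only at the
sched_setaffinity() call site, much like record__adjust_affinity() does after
this patch. The struct cpu_mask type and the cpu_mask_alloc()/cpu_mask_set()
helpers are made up for illustration and are not the perf tools API.

#define _GNU_SOURCE
#include <sched.h>      /* sched_setaffinity(), cpu_set_t */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>     /* sysconf() */

#define BITS_PER_LONG    (8 * sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

struct cpu_mask {
	unsigned long *bits;	/* one bit per possible CPU */
	size_t nbits;		/* number of possible CPUs  */
};

/* Allocate a zeroed mask wide enough for every CPU the system can have. */
static int cpu_mask_alloc(struct cpu_mask *mask)
{
	mask->nbits = sysconf(_SC_NPROCESSORS_CONF);
	mask->bits = calloc(BITS_TO_LONGS(mask->nbits), sizeof(unsigned long));
	return mask->bits ? 0 : -1;
}

static void cpu_mask_set(struct cpu_mask *mask, unsigned int cpu)
{
	mask->bits[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
}

int main(void)
{
	struct cpu_mask mask;

	if (cpu_mask_alloc(&mask)) {
		perror("calloc");
		return EXIT_FAILURE;
	}

	cpu_mask_set(&mask, 0);	/* pin the calling thread to CPU 0 */

	/*
	 * The kernel takes the mask size in bytes, so masks wider than the
	 * 1024-bit glibc cpu_set_t are accepted without special casing.
	 */
	if (sched_setaffinity(0, BITS_TO_LONGS(mask.nbits) * sizeof(unsigned long),
			      (cpu_set_t *)mask.bits)) {
		perror("sched_setaffinity");
		free(mask.bits);
		return EXIT_FAILURE;
	}

	printf("affinity set using a %zu-bit mask\n", mask.nbits);
	free(mask.bits);
	return EXIT_SUCCESS;
}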
 tools/perf/builtin-record.c | 28 ++++++++++++++++++++++------
 tools/perf/util/mmap.c      | 28 ++++++++++++++++++++++------
 tools/perf/util/mmap.h      |  2 +-
 3 files changed, 45 insertions(+), 13 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index fb19ef63cc35..7bc83755ef8c 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -62,6 +62,7 @@
 #include
 #include
 #include
+#include <linux/bitmap.h>
 
 struct switch_output {
 	bool		 enabled;
@@ -93,7 +94,7 @@ struct record {
 	bool			timestamp_boundary;
 	struct switch_output	switch_output;
 	unsigned long long	samples;
-	cpu_set_t		affinity_mask;
+	struct mmap_cpu_mask	affinity_mask;
 	unsigned long		output_max_size;	/* = 0: unlimited */
 };
 
@@ -961,10 +962,15 @@ static struct perf_event_header finished_round_event = {
 static void record__adjust_affinity(struct record *rec, struct mmap *map)
 {
 	if (rec->opts.affinity != PERF_AFFINITY_SYS &&
-	    !CPU_EQUAL(&rec->affinity_mask, &map->affinity_mask)) {
-		CPU_ZERO(&rec->affinity_mask);
-		CPU_OR(&rec->affinity_mask, &rec->affinity_mask, &map->affinity_mask);
-		sched_setaffinity(0, sizeof(rec->affinity_mask), &rec->affinity_mask);
+	    !bitmap_equal(rec->affinity_mask.bits, map->affinity_mask.bits,
+			  rec->affinity_mask.nbits)) {
+		bitmap_zero(rec->affinity_mask.bits, rec->affinity_mask.nbits);
+		bitmap_or(rec->affinity_mask.bits, rec->affinity_mask.bits,
+			  map->affinity_mask.bits, rec->affinity_mask.nbits);
+		sched_setaffinity(0, MMAP_CPU_MASK_BYTES(&rec->affinity_mask),
+				  (cpu_set_t *)rec->affinity_mask.bits);
+		if (verbose == 2)
+			mmap_cpu_mask__scnprintf(&rec->affinity_mask, "thread");
 	}
 }
 
@@ -2433,7 +2439,6 @@ int cmd_record(int argc, const char **argv)
 # undef REASON
 #endif
 
-	CPU_ZERO(&rec->affinity_mask);
 	rec->opts.affinity = PERF_AFFINITY_SYS;
 
 	rec->evlist = evlist__new();
@@ -2499,6 +2504,16 @@ int cmd_record(int argc, const char **argv)
 
 	symbol__init(NULL);
 
+	if (rec->opts.affinity != PERF_AFFINITY_SYS) {
+		rec->affinity_mask.nbits = cpu__max_cpu();
+		rec->affinity_mask.bits = bitmap_alloc(rec->affinity_mask.nbits);
+		if (!rec->affinity_mask.bits) {
+			pr_err("Failed to allocate thread mask for %ld cpus\n", rec->affinity_mask.nbits);
+			return -ENOMEM;
+		}
+		pr_debug2("thread mask[%ld]: empty\n", rec->affinity_mask.nbits);
+	}
+
 	err = record__auxtrace_init(rec);
 	if (err)
 		goto out;
@@ -2613,6 +2628,7 @@ int cmd_record(int argc, const char **argv)
 
 	err = __cmd_record(&record, argc, argv);
 out:
+	bitmap_free(rec->affinity_mask.bits);
 	evlist__delete(rec->evlist);
 	symbol__exit();
 	auxtrace_record__free(rec->itr);
diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index 43c12b4a3e17..832d2cb94b2c 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -219,6 +219,8 @@ static void perf_mmap__aio_munmap(struct mmap *map __maybe_unused)
 
 void mmap__munmap(struct mmap *map)
 {
+	bitmap_free(map->affinity_mask.bits);
+
 	perf_mmap__aio_munmap(map);
 	if (map->data != NULL) {
 		munmap(map->data, mmap__mmap_len(map));
@@ -227,7 +229,7 @@ void mmap__munmap(struct mmap *map)
 	auxtrace_mmap__munmap(&map->auxtrace_mmap);
 }
 
-static void build_node_mask(int node, cpu_set_t *mask)
+static void build_node_mask(int node, struct mmap_cpu_mask *mask)
 {
 	int c, cpu, nr_cpus;
 	const struct perf_cpu_map *cpu_map = NULL;
@@ -240,17 +242,23 @@ static void build_node_mask(int node, cpu_set_t *mask)
 	for (c = 0; c < nr_cpus; c++) {
 		cpu = cpu_map->map[c]; /* map c index to online cpu index */
 		if (cpu__get_node(cpu) == node)
-			CPU_SET(cpu, mask);
+			set_bit(cpu, mask->bits);
 	}
 }
 
-static void perf_mmap__setup_affinity_mask(struct mmap *map, struct mmap_params *mp)
+static int perf_mmap__setup_affinity_mask(struct mmap *map, struct mmap_params *mp)
 {
-	CPU_ZERO(&map->affinity_mask);
+	map->affinity_mask.nbits = cpu__max_cpu();
+	map->affinity_mask.bits = bitmap_alloc(map->affinity_mask.nbits);
+	if (!map->affinity_mask.bits)
+		return -1;
+
 	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1)
 		build_node_mask(cpu__get_node(map->core.cpu), &map->affinity_mask);
 	else if (mp->affinity == PERF_AFFINITY_CPU)
-		CPU_SET(map->core.cpu, &map->affinity_mask);
+		set_bit(map->core.cpu, map->affinity_mask.bits);
+
+	return 0;
 }
 
 int mmap__mmap(struct mmap *map, struct mmap_params *mp, int fd, int cpu)
@@ -261,7 +269,15 @@ int mmap__mmap(struct mmap *map, struct mmap_params *mp, int fd, int cpu)
 		return -1;
 	}
 
-	perf_mmap__setup_affinity_mask(map, mp);
+	if (mp->affinity != PERF_AFFINITY_SYS &&
+	    perf_mmap__setup_affinity_mask(map, mp)) {
+		pr_debug2("failed to alloc mmap affinity mask, error %d\n",
+			  errno);
+		return -1;
+	}
+
+	if (verbose == 2)
+		mmap_cpu_mask__scnprintf(&map->affinity_mask, "mmap");
 
 	map->core.flush = mp->flush;
 
diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
index ef51667fabcb..9d5f589f02ae 100644
--- a/tools/perf/util/mmap.h
+++ b/tools/perf/util/mmap.h
@@ -40,7 +40,7 @@ struct mmap {
 		int		 nr_cblocks;
 	} aio;
 #endif
-	cpu_set_t	affinity_mask;
+	struct mmap_cpu_mask	affinity_mask;
 	void		*data;
 	int		comp_level;
 };
-- 
2.20.1