From: Davidlohr Bueso
To: acme@kernel.org
Cc: james.yang@arm.com, kim.phillips@arm.com, dave@stgolabs.net,
	linux-kernel@vger.kernel.org, Kim Phillips, Davidlohr Bueso
Subject: [PATCH 2/3] perf bench futex: Add --affine-wakers option to wake-parallel
Date: Sun, 26 Nov 2017 20:21:00 -0800
Message-Id: <20171127042101.3659-3-dave@stgolabs.net>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20171127042101.3659-1-dave@stgolabs.net>
References: <20171127042101.3659-1-dave@stgolabs.net>

From: James Yang

The waker threads' processor affinity is not specified, so the result
has run-to-run variability as the scheduler decides on which CPUs they
are to run. So we add a -W/--affine-wakers flag to stripe the affinity
of the waker threads across the online CPUs instead of having the
scheduler place them.
Cc: Kim Phillips
Signed-off-by: James Yang
Signed-off-by: Davidlohr Bueso
---
 tools/perf/bench/futex-wake-parallel.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/tools/perf/bench/futex-wake-parallel.c b/tools/perf/bench/futex-wake-parallel.c
index 979e303e4797..c04e207ea37c 100644
--- a/tools/perf/bench/futex-wake-parallel.c
+++ b/tools/perf/bench/futex-wake-parallel.c
@@ -39,6 +39,7 @@ static u_int32_t futex = 0;
 
 static pthread_t *blocked_worker;
 static bool done = false, silent = false, fshared = false;
+static bool affine_wakers = false;
 static unsigned int nblocked_threads = 0, nwaking_threads = 0;
 static pthread_mutex_t thread_lock;
 static pthread_cond_t thread_parent, thread_worker;
@@ -51,6 +52,7 @@ static const struct option options[] = {
 	OPT_UINTEGER('w', "nwakers", &nwaking_threads, "Specify amount of waking threads"),
 	OPT_BOOLEAN( 's', "silent",  &silent,  "Silent mode: do not display data/details"),
 	OPT_BOOLEAN( 'S', "shared",  &fshared, "Use shared futexes instead of private ones"),
+	OPT_BOOLEAN( 'W', "affine-wakers", &affine_wakers, "Stripe affinity of waker threads across CPUs"),
 	OPT_END()
 };
 
@@ -78,7 +80,8 @@ static void *waking_workerfn(void *arg)
 	return NULL;
 }
 
-static void wakeup_threads(struct thread_data *td, pthread_attr_t thread_attr)
+static void wakeup_threads(struct thread_data *td, pthread_attr_t thread_attr,
+			   struct cpu_map *cpu)
 {
 	unsigned int i;
 
@@ -91,6 +94,17 @@ static void wakeup_threads(struct thread_data *td, pthread_attr_t thread_attr)
 	 * as it will affect the order to acquire the hb spinlock.
 	 * For now let the scheduler decide.
	 */
+		if (affine_wakers) {
+			cpu_set_t cpuset;
+			CPU_ZERO(&cpuset);
+			CPU_SET(cpu->map[(i + 1) % cpu->nr], &cpuset);
+
+			if (pthread_attr_setaffinity_np(&thread_attr,
+							sizeof(cpu_set_t),
+							&cpuset))
+				err(EXIT_FAILURE, "pthread_attr_setaffinity_np");
+		}
+
 		if (pthread_create(&td[i].worker, &thread_attr,
 				   waking_workerfn, (void *)&td[i]))
 			err(EXIT_FAILURE, "pthread_create");
@@ -276,7 +290,7 @@ int bench_futex_wake_parallel(int argc, const char **argv)
 		usleep(100000);
 
 	/* Ok, all threads are patiently blocked, start waking folks up */
-	wakeup_threads(waking_worker, thread_attr);
+	wakeup_threads(waking_worker, thread_attr, cpu);
 
 	for (i = 0; i < nblocked_threads; i++) {
 		ret = pthread_join(blocked_worker[i], NULL);
-- 
2.13.6