From: Doug Smythies
Date: Fri, 9 Apr 2021 07:26:38 -0700
Subject: Re: [RFC v3 0/2] CPU-Idle latency selftest framework
To: Pratik Sampat
Cc: rjw@rjwysocki.net, Daniel Lezcano, shuah@kernel.org, ego@linux.vnet.ibm.com, svaidy@linux.ibm.com, Linux PM list, Linux Kernel Mailing List, linux-kselftest@vger.kernel.org, pratik.r.sampat@gmail.com
In-Reply-To: <0a4b32e0-426e-4886-ae37-6d0bdafdea7f@linux.ibm.com>
References: <20210404083354.23060-1-psampat@linux.ibm.com> <0a4b32e0-426e-4886-ae37-6d0bdafdea7f@linux.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Apr 9, 2021 at 12:43 AM Pratik Sampat wrote:
> On 09/04/21 10:53 am, Doug Smythies wrote:
> > I tried V3 on an Intel i5-10600K processor with 6 cores and 12 CPUs.
> > The core-to-CPU mappings are:
> > core 0 has CPUs 0 and 6
> > core 1 has CPUs 1 and 7
> > core 2 has CPUs 2 and 8
> > core 3 has CPUs 3 and 9
> > core 4 has CPUs 4 and 10
> > core 5 has CPUs 5 and 11
> >
> > By default, it will test CPUs 0,2,4,6,8,10 on cores 0,2,4,0,2,4.
> > Wouldn't it make more sense to test each core once?
>
> Ideally it would be better to run on all the CPUs; however, on the larger
> systems that I'm testing on, with hundreds of cores and a high thread count,
> the execution time increases while not bringing any additional information
> to the table.
>
> That is why it made sense to run on only one of the threads of each core,
> making the experiment faster while preserving accuracy.
>
> To handle various thread topologies, it may be worthwhile to parse
> /sys/devices/system/cpu/cpuX/topology/thread_siblings_list for each core
> and use this information to run only once per physical core, rather than
> assuming the topology.
>
> What are your thoughts on a mechanism like this?

Yes, seems like a good solution.

... Doug
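The thread_siblings_list parsing that Pratik proposes could be sketched roughly as below. This is only an illustration of the idea, not the selftest framework's actual code; the helper names and the dedup-by-first-sibling approach are assumptions. The first CPU listed in each core's thread_siblings_list is taken as that core's representative, so every core is tested exactly once regardless of SMT layout.

```shell
#!/bin/sh
# Sketch: derive the set of CPUs to test (one per physical core) by parsing
# sysfs, instead of assuming the thread topology.

# Return the first CPU in a thread_siblings_list value. The file may use
# comma-separated notation ("0,6") or range notation ("0-1").
first_sibling() {
    echo "$1" | cut -d',' -f1 | cut -d'-' -f1
}

# Walk sysfs and emit each core's representative CPU exactly once.
representative_cpus() {
    for f in /sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list
    do
        [ -r "$f" ] || continue
        first_sibling "$(cat "$f")"
    done | sort -un
}
```

On the i5-10600K topology quoted above (sibling pairs 0,6 / 1,7 / ... / 5,11), this selection would yield CPUs 0 through 5, one per physical core, rather than revisiting cores 0, 2, and 4.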