From: Ivan Babrou
Date: Fri, 30 Jun 2023 16:22:28 -0700
Subject: Expensive memory.stat + cpu.stat reads
To: cgroups@vger.kernel.org
Cc: Linux MM, kernel-team, Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton, linux-kernel

Hello,

We're seeing CPU load issues with cgroup stats retrieval.
I made a public gist with all the details, including the repro code
(which unfortunately requires heavily loaded hardware) and some
flamegraphs:

* https://gist.github.com/bobrik/5ba58fb75a48620a1965026ad30a0a13

I'll repeat the gist of that gist here.

Our repro has the following output after a warm-up run:

completed: 5.17s [manual / mem-stat + cpu-stat]
completed: 5.59s [manual / cpu-stat + mem-stat]
completed: 0.52s [manual / mem-stat]
completed: 0.04s [manual / cpu-stat]

The first two lines do effectively the following:

for _ in $(seq 1 1000); do
    cat /sys/fs/cgroup/system.slice/memory.stat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null
done

The latter two are the same thing, but via two separate loops:

for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done
for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat > /dev/null; done

As you might've noticed from the output, splitting the loop into two
makes the code run 10x faster. This isn't great, because most
monitoring software likes to get all stats for one service before
reading the stats for the next one, which maps to the slow and
expensive way of doing this (see the sketch at the end of this mail).

We're running Linux v6.1 (the output is from v6.1.25) with no patches
that touch the cgroup or mm subsystems, so you can assume a vanilla
kernel.

From the flamegraph, it looks like the extra time is spent in rstat
flushing. I used the following flags on an AMD EPYC 7642 system (our
usual pick, cpu-clock, was blaming spinlock irqrestore, which was
questionable):

perf record -e cycles -g --call-graph fp -F 999 -- /tmp/repro

Naturally, there are two questions that arise:

* Is this expected (I guess not, but good to be sure)?
* What can we do to make this better?

I am happy to try out patches or to do some tracing to help
understand this better.
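
For reference, here is a rough sketch of the access pattern that
typical monitoring software uses (the path glob and cgroup layout
here are illustrative assumptions, not our actual collector):

# Hypothetical collector loop: read all stats for one cgroup before
# moving on to the next. This interleaves memory.stat and cpu.stat
# reads per cgroup, which is the slow pattern shown above.
for cg in /sys/fs/cgroup/system.slice/*.service; do
    cat "$cg/memory.stat" "$cg/cpu.stat" > /dev/null
done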
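
In case anyone wants to regenerate the flamegraphs from the perf.data
produced by the perf record command above, a pipeline along these
lines should work with Brendan Gregg's FlameGraph scripts (an
approximation, not necessarily the exact pipeline we used):

# assumes https://github.com/brendangregg/FlameGraph is cloned locally
perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > repro.svg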