From: Shakeel Butt
Date: Mon, 4 Oct 2021 10:25:12 -0700
Subject: Re: [PATCH] cgroup: rstat: optimize flush through speculative test
To: Tejun Heo
Cc: Johannes Weiner, Michal Koutný, Cgroups, LKML
List-ID: linux-kernel@vger.kernel.org

On Mon, Oct 4, 2021 at 10:00 AM Tejun Heo wrote:
>
> Hello, Shakeel.
>
> On Wed, Sep 29, 2021 at 04:59:36PM -0700, Shakeel Butt wrote:
> > Currently cgroup_rstat_updated() has a speculative already-on-list
> > test to check whether the given cgroup is already part of the rstat
> > update tree. This helps reduce contention on the rstat cpu lock. This
> > patch adds a similar speculative not-on-list test on the rstat flush
> > codepath.
> >
> > Recently, commit aa48e47e3906 ("memcg: infrastructure to flush memcg
> > stats") added a periodic rstat flush. On a large system that is not
> > very busy, most of the per-cpu rstat trees will be empty. So, the
> > speculative not-on-list test helps eliminate unnecessary work and
> > potentially reduces contention on the rstat cpu lock. Note that this
> > may introduce temporary inaccuracy, but with the frequent periodic
> > flush this is not an issue.
> >
> > To evaluate the impact of this patch, an 8 GiB tmpfs file was created
> > on a system with swap-on-zram, and the file was pushed to swap through
> > the memory.force_empty interface. Reading the whole file back triggers
> > the memcg stat flush in the refault code path. With this patch, we
> > observed a 38% reduction in the read time of the 8 GiB file.
>
> The patch looks fine to me but that's a lot of reduction in read time.
> Can you elaborate a bit on why this makes such a huge difference? Who's
> hitting on that lock so hard?
>

It was actually due to machine size. I ran a single-threaded workload
without any interference on a 112-CPU machine. So, most of the time the
flush was just acquiring and releasing the per-cpu rstat lock for empty
trees.

Shakeel
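For context, here is a simplified sketch of the two speculative tests
being discussed. This is a paraphrase of the idea, not the literal
patch: the helper and field names (cgroup_rstat_cpu(), updated_next,
cgroup_rstat_cpu_lock) mirror kernel/cgroup/rstat.c, but the locked
bodies are elided.

/* Update path: the existing speculative already-on-list test. */
void cgroup_rstat_updated(struct cgroup *cgrp, int cpu)
{
	raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
	unsigned long flags;

	/*
	 * Racy read without the lock: if @cgrp already appears to be on
	 * this CPU's updated tree, skip. A stale result only means one
	 * unnecessary lock acquisition, never lost data.
	 */
	if (data_race(cgroup_rstat_cpu(cgrp, cpu)->updated_next))
		return;

	raw_spin_lock_irqsave(cpu_lock, flags);
	/* ... link @cgrp and its ancestors into the updated tree ... */
	raw_spin_unlock_irqrestore(cpu_lock, flags);
}

/* Flush path: the proposed speculative not-on-list counterpart. */
static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock,
						       cpu);
		unsigned long flags;

		/*
		 * If this CPU's updated tree looks empty, skip it without
		 * touching the per-cpu lock. A racy miss only defers the
		 * stats to the next periodic flush, which is why the test
		 * trades a little temporary inaccuracy for less contention.
		 */
		if (!data_race(cgroup_rstat_cpu(cgrp, cpu)->updated_next))
			continue;

		raw_spin_lock_irqsave(cpu_lock, flags);
		/* ... pop the updated tree and flush each cgroup's stats ... */
		raw_spin_unlock_irqrestore(cpu_lock, flags);
	}
}

On a mostly idle 112-CPU machine, the loop above would otherwise bounce
112 per-cpu locks per flush just to find empty trees, which is the
contention Shakeel describes.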