From: Shakeel Butt
Date: Thu, 23 Mar 2023 09:45:20 -0700
Subject: Re: [RFC PATCH 1/7] cgroup: rstat: only disable interrupts for the percpu lock
To: Yosry Ahmed
Cc: Tejun Heo, Josef Bacik, Jens Axboe, Zefan Li, Johannes Weiner,
 Michal Hocko, Roman Gushchin, Muchun Song, Andrew Morton, Vasily Averin,
 cgroups@vger.kernel.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, bpf@vger.kernel.org
References: <20230323040037.2389095-1-yosryahmed@google.com>
 <20230323040037.2389095-2-yosryahmed@google.com>

On Thu, Mar 23, 2023 at 9:37 AM Yosry Ahmed wrote:
>
> On Thu, Mar 23, 2023 at 9:29 AM Shakeel Butt wrote:
> >
> > On Thu, Mar 23, 2023 at 9:18 AM Yosry Ahmed wrote:
> > >
> > > On Thu, Mar 23, 2023 at 9:10 AM Shakeel Butt wrote:
> > > >
> > > > On Thu, Mar 23, 2023 at 8:46 AM Shakeel Butt wrote:
> > > > >
> > > > > On Thu, Mar 23, 2023 at 8:43 AM Yosry Ahmed wrote:
> > > > > >
> > > > > > On Thu, Mar 23, 2023 at 8:40 AM Shakeel Butt wrote:
> > > > > > >
> > > > > > > On Thu, Mar 23, 2023 at 6:36 AM Yosry Ahmed wrote:
> > > > > > > >
> [...]
> > > > > > > > > > > 2. Are we really calling rstat flush in irq context?
> > > > > > > > > >
> > > > > > > > > > I think it is possible through the charge/uncharge path:
> > > > > > > > > > memcg_check_events()->mem_cgroup_threshold()->mem_cgroup_usage(). I
> > > > > > > > > > added the protection against flushing in an interrupt context for
> > > > > > > > > > future callers as well, as it may cause a deadlock if we don't disable
> > > > > > > > > > interrupts when acquiring cgroup_rstat_lock.
> > > > > > > > > >
> > > > > > > > > > > 3. The mem_cgroup_flush_stats() call in mem_cgroup_usage() is only
> > > > > > > > > > > done for root memcg. Why is mem_cgroup_threshold() interested in root
> > > > > > > > > > > memcg usage? Why not ignore root memcg in mem_cgroup_threshold() ?
> > > > > > > > > >
> > > > > > > > > > I am not sure, but the code looks like event notifications may be set
> > > > > > > > > > up on root memcg, which is why we need to check thresholds.
> > > > > > > > >
> > > > > > > > > This is something we should deprecate as root memcg's usage is ill defined.
> > > > > > > >
> > > > > > > > Right, but I think this would be orthogonal to this patch series.
> > > > > > >
> > > > > > > I don't think we can make cgroup_rstat_lock a non-irq-disabling lock
> > > > > > > without either breaking a link between mem_cgroup_threshold and
> > > > > > > cgroup_rstat_lock or make mem_cgroup_threshold work without disabling
> > > > > > > irqs.
> > > > > > >
> > > > > > > So, this patch can not be applied before either of those two tasks are
> > > > > > > done (and we may find more such scenarios).
> > > > > >
> > > > > > Could you elaborate why?
> > > > > >
> > > > > > My understanding is that with an in_task() check to make sure we only
> > > > > > acquire cgroup_rstat_lock from non-irq context it should be fine to
> > > > > > acquire cgroup_rstat_lock without disabling interrupts.
> > > > >
> > > > > From mem_cgroup_threshold() code path, cgroup_rstat_lock will be taken
> > > > > with irq disabled while other code paths will take cgroup_rstat_lock
> > > > > with irq enabled. This is a potential deadlock hazard unless
> > > > > cgroup_rstat_lock is always taken with irq disabled.
> > > >
> > > > Oh you are making sure it is not taken in the irq context through
> > > > should_skip_flush(). Hmm seems like a hack. Normally it is recommended
> > > > to actually remove all such users instead of silently
> > > > ignoring/bypassing the functionality.
> > >
> > > It is a workaround, we simply accept to read stale stats in irq
> > > context instead of the expensive flush operation.
> > > >
> > > > So, how about removing mem_cgroup_flush_stats() from
> > > > mem_cgroup_usage(). It will break the known chain which is taking
> > > > cgroup_rstat_lock with irq disabled and you can add
> > > > WARN_ON_ONCE(!in_task()).
> > > >
> > > This changes the behavior in a more obvious way because:
> > > 1. The memcg_check_events()->mem_cgroup_threshold()->mem_cgroup_usage()
> > > path is also exercised in a lot of paths outside irq context, this
> > > will change the behavior for any event thresholds on the root memcg.
> > > With proposed skipped flushing in irq context we only change the
> > > behavior in a small subset of cases.
> > >
> > > I think we can skip flushing in irq context for now, and separately
> > > deprecate threshold events for the root memcg. When that is done we
> > > can come back and remove should_skip_flush() and add a VM_BUG_ON or
> > > WARN_ON_ONCE instead. WDYT?
> > >
> > > 2. mem_cgroup_usage() is also used when reading usage from userspace.
> > > This should be an easy workaround though.
> > >
> > This is a cgroup v1 behavior and to me it is totally reasonable to get
> > the 2 second stale root's usage. Even if you want to skip flushing in
> > irq, do that in the memcg code and keep VM_BUG_ON/WARN_ON_ONCE in the
> > rstat core code. This way we will know if other subsystems are doing
> > the same or not.
>
> We can do that. Basically in mem_cgroup_usage() have:
>
> /* Some useful comment */
> if (in_task())
>         mem_cgroup_flush_stats();
>
> and in cgroup_rstat_flush() have:
> WARN_ON_ONCE(!in_task());
>
> I am assuming VM_BUG_ON is not used outside mm code.
>
> The only thing that worries me is that if there is another unlikely
> path somewhere that flushes stats in irq context we may run into a
> deadlock. I am a little bit nervous about not skipping flushing if
> !in_task() in cgroup_rstat_flush().

I think it is a good thing. We will find such scenarios and fix those
instead of hiding them forever or keeping the door open for new such
scenarios.
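
A minimal sketch of the deadlock hazard argued over above, for readers
skimming the archive. The two caller names are hypothetical and the
bodies are reduced to comments; only cgroup_rstat_lock and the
memcg_check_events() -> mem_cgroup_threshold() -> mem_cgroup_usage()
chain come from the thread itself.

/* Illustration only, not the RFC patch. */
static DEFINE_SPINLOCK(cgroup_rstat_lock);

/* What the RFC wants for task context: the lock held with IRQs enabled. */
static void flush_from_task_context(void)
{
	spin_lock(&cgroup_rstat_lock);
	/* ... walk the per-CPU updated trees and aggregate the stats ... */
	spin_unlock(&cgroup_rstat_lock);
}

/*
 * The hazard: a flush reached from IRQ context, e.g. an uncharge in an
 * interrupt handler that goes through memcg_check_events() ->
 * mem_cgroup_threshold() -> mem_cgroup_usage(). If that interrupt lands
 * on the CPU currently inside flush_from_task_context(), the handler
 * spins on a lock its own CPU already holds and never returns: a
 * self-deadlock.
 */
static void flush_from_irq_context(void)
{
	spin_lock(&cgroup_rstat_lock);
	/* ... */
	spin_unlock(&cgroup_rstat_lock);
}

So the lock must either always be taken with interrupts disabled, or
never be reachable from IRQ context at all; the RFC's
should_skip_flush()/in_task() check and the alternative below are two
ways of guaranteeing the latter.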
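
And a sketch of the alternative the two converge on at the end of the
thread: gate the flush on in_task() in the memcg caller, and assert task
context in the rstat core. This is only an approximation of the snippet
Yosry quotes above; the real mem_cgroup_usage() and cgroup_rstat_flush()
do more than is shown here, and the bodies are reduced to placeholders.

/* mm/memcontrol.c -- simplified */
static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
{
	unsigned long val = 0;

	if (mem_cgroup_is_root(memcg)) {
		/*
		 * Threshold events can be reached from the irq-context
		 * uncharge path; accept slightly stale stats there rather
		 * than flushing, so cgroup_rstat_lock is never taken from
		 * irq context.
		 */
		if (in_task())
			mem_cgroup_flush_stats();
		/* ... val = sum of the root memcg's relevant counters ... */
	} else {
		/* ... val = page_counter_read() of memory or memsw ... */
	}
	return val;
}

/* kernel/cgroup/rstat.c -- simplified */
void cgroup_rstat_flush(struct cgroup *cgrp)
{
	/* Catch any remaining irq-context flushers instead of hiding them. */
	WARN_ON_ONCE(!in_task());

	spin_lock_irq(&cgroup_rstat_lock);
	/* ... cgroup_rstat_flush_locked(cgrp, ...) ... */
	spin_unlock_irq(&cgroup_rstat_lock);
}

The split keeps the policy decision (stale-but-cheap reads for root
usage thresholds) in memcg code, while the rstat core loudly flags any
other subsystem that still flushes from irq context -- the property
Shakeel argues for in the last paragraph.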