From: Yosry Ahmed
Date: Thu, 23 Mar 2023 09:36:19 -0700
Subject: Re: [RFC PATCH 1/7] cgroup: rstat: only disable interrupts for the percpu lock
To: Shakeel Butt
Cc: Tejun Heo, Josef Bacik, Jens Axboe, Zefan Li, Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song, Andrew Morton, Vasily Averin, cgroups@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, bpf@vger.kernel.org
References: <20230323040037.2389095-1-yosryahmed@google.com> <20230323040037.2389095-2-yosryahmed@google.com>
On Thu, Mar 23, 2023 at 9:29 AM Shakeel Butt wrote:
>
> On Thu, Mar 23, 2023 at 9:18 AM Yosry Ahmed wrote:
> >
> > On Thu, Mar 23, 2023 at 9:10 AM Shakeel Butt wrote:
> > >
> > > On Thu, Mar 23, 2023 at 8:46 AM Shakeel Butt wrote:
> > > >
> > > > On Thu, Mar 23, 2023 at 8:43 AM Yosry Ahmed wrote:
> > > > >
> > > > > On Thu, Mar 23, 2023 at 8:40 AM Shakeel Butt wrote:
> > > > > >
> > > > > > On Thu, Mar 23, 2023 at 6:36 AM Yosry Ahmed wrote:
> > > > > > >
> > > > > > > [...]
> > > > > > > > >
> > > > > > > > > > 2. Are we really calling rstat flush in irq context?
> > > > > > > > >
> > > > > > > > > I think it is possible through the charge/uncharge path:
> > > > > > > > > memcg_check_events()->mem_cgroup_threshold()->mem_cgroup_usage(). I
> > > > > > > > > added the protection against flushing in an interrupt context for
> > > > > > > > > future callers as well, as it may cause a deadlock if we don't disable
> > > > > > > > > interrupts when acquiring cgroup_rstat_lock.
> > > > > > > > >
> > > > > > > > > > 3. The mem_cgroup_flush_stats() call in mem_cgroup_usage() is only
> > > > > > > > > > done for root memcg. Why is mem_cgroup_threshold() interested in root
> > > > > > > > > > memcg usage? Why not ignore root memcg in mem_cgroup_threshold()?
> > > > > > > > >
> > > > > > > > > I am not sure, but the code looks like event notifications may be set
> > > > > > > > > up on root memcg, which is why we need to check thresholds.
> > > > > > > >
> > > > > > > > This is something we should deprecate, as root memcg's usage is ill defined.
> > > > > > >
> > > > > > > Right, but I think this would be orthogonal to this patch series.
> > > > > >
> > > > > > I don't think we can make cgroup_rstat_lock a non-irq-disabling lock
> > > > > > without either breaking the link between mem_cgroup_threshold and
> > > > > > cgroup_rstat_lock or making mem_cgroup_threshold work without disabling
> > > > > > irqs.
> > > > > >
> > > > > > So, this patch cannot be applied before either of those two tasks is
> > > > > > done (and we may find more such scenarios).
> > > > >
> > > > > Could you elaborate why?
> > > > >
> > > > > My understanding is that with an in_task() check to make sure we only
> > > > > acquire cgroup_rstat_lock from non-irq context, it should be fine to
> > > > > acquire cgroup_rstat_lock without disabling interrupts.
> > > >
> > > > From the mem_cgroup_threshold() code path, cgroup_rstat_lock will be taken
> > > > with irq disabled, while other code paths will take cgroup_rstat_lock
> > > > with irq enabled. This is a potential deadlock hazard unless
> > > > cgroup_rstat_lock is always taken with irq disabled.
> > >
> > > Oh, you are making sure it is not taken in irq context through
> > > should_skip_flush(). Hmm, that seems like a hack. Normally it is recommended
> > > to actually remove all such users instead of silently
> > > ignoring/bypassing the functionality.
> >
> > It is a workaround; we simply accept reading stale stats in irq
> > context instead of paying for the expensive flush operation.
> >
> > > So, how about removing mem_cgroup_flush_stats() from
> > > mem_cgroup_usage()? It will break the known chain which is taking
> > > cgroup_rstat_lock with irq disabled, and you can add
> > > WARN_ON_ONCE(!in_task()).
> >
> > This changes the behavior in a more obvious way because:
> >
> > 1. The memcg_check_events()->mem_cgroup_threshold()->mem_cgroup_usage()
> > path is also exercised in a lot of paths outside irq context; this
> > will change the behavior for any event thresholds on the root memcg.
> > With the proposed skipping of flushing in irq context, we only change the
> > behavior in a small subset of cases.
> >
> > I think we can skip flushing in irq context for now, and separately
> > deprecate threshold events for the root memcg. When that is done, we
> > can come back, remove should_skip_flush(), and add a VM_BUG_ON or
> > WARN_ON_ONCE instead. WDYT?
> >
> > 2. mem_cgroup_usage() is also used when reading usage from userspace.
> > This should be an easy workaround, though.
>
> This is cgroup v1 behavior, and to me it is totally reasonable to get
> the 2-second-stale root usage. Even if you want to skip flushing in
> irq, do that in the memcg code and keep the VM_BUG_ON/WARN_ON_ONCE in the
> rstat core code. This way we will know whether other subsystems are doing
> the same or not.

We can do that. Basically, in mem_cgroup_usage() have:

    /* Some useful comment */
    if (in_task())
        mem_cgroup_flush_stats();

and in cgroup_rstat_flush() have:

    WARN_ON_ONCE(!in_task());

I am assuming VM_BUG_ON is not used outside mm code.

The only thing that worries me is that if there is another unlikely path
somewhere that flushes stats in irq context, we may run into a deadlock. I
am a little bit nervous about not skipping flushing if !in_task() in
cgroup_rstat_flush().