Date: Wed, 23 Nov 2022 17:30:56 -0500
From: Johannes Weiner
To: Yosry Ahmed
Cc: Mina Almasry, Huang Ying, Yang Shi, Tim Chen, weixugc@google.com,
    shakeelb@google.com, gthelen@google.com, fvdl@google.com, Michal Hocko,
    Roman Gushchin, Muchun Song, Andrew Morton, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH V1] mm: Disable demotion from proactive reclaim
References: <20221122203850.2765015-1-almasrymina@google.com>

On Wed, Nov 23, 2022 at 01:35:13PM -0800, Yosry Ahmed wrote:
> On Wed, Nov 23, 2022 at 1:21 PM Mina Almasry wrote:
> >
> > On Wed, Nov 23, 2022 at 10:00 AM Johannes Weiner wrote:
> > >
> > > Hello Mina,
> > >
> > > On Tue, Nov 22, 2022 at 12:38:45PM -0800, Mina Almasry wrote:
> > > > Since commit 3f1509c57b1b ("Revert "mm/vmscan: never demote for memcg
> > > > reclaim""), the proactive reclaim interface memory.reclaim does both
> > > > reclaim and demotion. This is likely fine for us for latency-critical
> > > > jobs, where we would want to disable proactive reclaim entirely, and is
> > > > also fine for latency-tolerant jobs, where we would like to both
> > > > proactively reclaim and demote.
> > > >
> > > > However, for some latency tiers in the middle we would like to demote but
> > > > not reclaim. This is because reclaim and demotion incur different latency
> > > > costs to the jobs in the cgroup. Demoted memory would still be addressable
> > > > by userspace, at a higher latency, but reclaimed memory would need to
> > > > incur a page fault.
> > > >
> > > > To address this, I propose having reclaim-only and demotion-only
> > > > mechanisms in the kernel. There are a couple of possible interfaces
> > > > I considered to carry this out:
> > > >
> > > > 1. Disable demotion in the memory.reclaim interface and add a new
> > > >    demotion interface (memory.demote).
> > > > 2. Extend memory.reclaim with a "demote=" flag to configure the demotion
> > > >    behavior in the kernel like so:
> > > >    - demote=0 would disable demotion from this call.
> > > >    - demote=1 would allow the kernel to demote if it desires.
> > > >    - demote=2 would only demote if possible but not attempt any
> > > >      other form of reclaim.
> > >
> > > Unfortunately, our proactive reclaim stack currently relies on
> > > memory.reclaim doing both. It may not stay like that, but I'm a bit
> > > wary of changing user-visible semantics post-facto.
> > >
> > > In patch 2, you're adding a node interface to memory.demote. Can you
> > > add this to memory.reclaim instead? This would allow you to control
> > > demotion and reclaim independently as you please: if you call it on a
> > > node with demotion targets, it will demote; if you call it on a node
> > > without one, it'll reclaim. And current users will remain unaffected.
> >
> > Hello Johannes, thanks for taking a look!
> >
> > I can certainly add the "nodes=" arg to memory.reclaim and you're
> > right, that would help in bridging the gap. However, if I understand
> > the underlying code correctly, with only the nodes= arg the kernel
> > will indeed attempt demotion first, but it will also merrily fall
> > back to reclaiming if it can't demote the full amount. I had hoped
> > to have the flexibility to protect latency-sensitive jobs from
> > reclaim entirely while attempting demotion.
> >
> > There are probably ways to get around that in userspace.
> > I presume userspace can check whether there is available memory on the
> > node's demotion targets, and if so, the kernel should demote-only. But
> > I feel that wouldn't be reliable, as the demotion logic may change
> > across kernel versions. Userspace may think the kernel would demote,
> > but demotion could instead fail due to whatever heuristic is introduced
> > in a newer kernel version.
> >
> > The above is just one angle of the issue. Another angle (which Yosry
> > would care most about, I think) is that at Google we call
> > memory.reclaim mainly when memory.current is too close to memory.max,
> > and we expect the memory usage of the cgroup to drop as a result of a
> > successful memory.reclaim call. I suspect once we take in commit
> > 3f1509c57b1b ("Revert "mm/vmscan: never demote for memcg reclaim""),
> > we would run into that regression, but I defer to Yosry here; he may
> > have a solution for that in mind already.
>
> We don't exactly rely on memory.current, but we do have a proactive
> reclaim policy that is separate from demotion today, and we do expect
> memory.reclaim to reclaim memory and not demote it. So it is important
> that we can control reclaim vs. demotion separately. Having
> memory.reclaim do demotions by default is not ideal for our current
> setup, so at least having a demote= argument to control it (no
> demotions, may demote, only demote) is needed.

With a nodemask you should be able to reclaim only, by specifying the
terminal memory tiers that do that and leaving out the higher tiers
that demote.

That said, it would actually be nice if reclaim policy didn't have to
differ from demotion policy in the longer term. Ultimately it comes
down to mapping age to memory tier, right? Such that hot pages are in
RAM, warm pages are in CXL, and cold pages are in storage.

If you apply equal pressure on all tiers, it's access frequency that
should determine which RAM pages to demote and which CXL pages to
reclaim. If RAM pages are hot and refuse demotion, and CXL pages are
cold in comparison, CXL should clear out. If RAM pages are warm, they
should get demoted to CXL but not reclaimed further from there (and
rotate instead).

Do we know what's preventing this from happening today? What's the
reason you want to control them independently?
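
[A minimal usage sketch of the interfaces discussed above, assuming a
cgroup at /sys/fs/cgroup/workload, a top-tier node 0 with a demotion
target, and a terminal node 2. The "nodes=" and "demote=" arguments are
the proposals under discussion in this thread, not existing kernel ABI,
and their syntax here is illustrative only.]

  # Existing interface: ask the kernel to reclaim ~1G from the cgroup.
  echo "1G" > /sys/fs/cgroup/workload/memory.reclaim

  # Nodemask variant suggested above: naming a node that has demotion
  # targets would demote; naming only terminal nodes would reclaim
  # without demoting.
  echo "1G nodes=0" > /sys/fs/cgroup/workload/memory.reclaim   # demote
  echo "1G nodes=2" > /sys/fs/cgroup/workload/memory.reclaim   # reclaim

  # Mina's proposed demote= flag:
  echo "1G demote=0" > /sys/fs/cgroup/workload/memory.reclaim  # never demote
  echo "1G demote=2" > /sys/fs/cgroup/workload/memory.reclaim  # demote only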