From: Yang Yang <yang.yang29@zte.com.cn>
To: hannes@cmpxchg.org
Cc: akpm@linux-foundation.org, iamjoonsoo.kim@lge.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, willy@infradead.org, yang.yang29@zte.com.cn
Subject: Re: [PATCH linux-next] mm: workingset: simplify the calculation of workingset size
Date: Fri, 17 Mar 2023 01:59:03 +0000
Message-Id: <20230317015903.16978-1-yang.yang29@zte.com.cn>
In-Reply-To: <20230316143007.GC116016@cmpxchg.org>
References: <20230316143007.GC116016@cmpxchg.org>

> On Thu, Mar 16, 2023 at 05:23:05PM +0800, yang.yang29@zte.com.cn wrote:
>> From: Yang Yang <yang.yang29@zte.com.cn>
>>
>> After we implemented workingset detection for the anonymous LRU [1],
>> the calculation of the workingset size is a little complex.
>> Actually there is no need to call mem_cgroup_get_nr_swap_pages() if the
>> refault page is an anonymous page, since we are doing swapping and so
>> should always give pressure to NR_ACTIVE_ANON.
>
> This is false.
>
> (mem_cgroup_)get_nr_swap_pages() returns the *free swap slots*. There
> might be swap, but if it's full, reclaim stops scanning anonymous
> pages altogether. That means that refaults of either type can no
> longer displace existing anonymous pages, only cache.

I see that in the patch "mm: vmscan: enforce inactive:active ratio at the
reclaim root", reclaim is done over the combined workingset of the
different workloads in different cgroups. So if the current cgroup has
reached its swap limit (mem_cgroup_get_nr_swap_pages(memcg) == 0), but
another cgroup still has free swap slots, should we allow the refaulting
page to be activated and put pressure on the other cgroup?
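For reference, the check under discussion can be modeled roughly as below. This is a simplified user-space sketch of the workingset-size logic in mm/workingset.c's workingset_refault(); the struct and its fields are stand-ins for the lruvec page-state counters, not the actual kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified model of an lruvec's LRU counters plus the free-swap-slot
 * count a mem_cgroup_get_nr_swap_pages() call would report. All names
 * here are illustrative, not the kernel's own types.
 */
struct lruvec_model {
	unsigned long active_file;
	unsigned long inactive_file;
	unsigned long active_anon;
	unsigned long inactive_anon;
	long nr_free_swap_slots;	/* mem_cgroup_get_nr_swap_pages() */
};

/*
 * A refault counts as part of the workingset when its refault distance
 * fits within the pages it could plausibly displace. Anonymous pages
 * are only counted when free swap slots remain: with swap full, reclaim
 * stops scanning anon, so a refault cannot displace anon pages - which
 * is the point Johannes makes above.
 */
static bool refault_is_workingset(const struct lruvec_model *lv,
				  unsigned long refault_distance,
				  bool refault_is_file)
{
	unsigned long workingset_size = lv->active_file;

	if (!refault_is_file)
		workingset_size += lv->inactive_file;

	if (lv->nr_free_swap_slots > 0) {
		workingset_size += lv->active_anon;
		if (refault_is_file)
			workingset_size += lv->inactive_anon;
	}

	return refault_distance <= workingset_size;
}
```

With nr_free_swap_slots == 0, both file and anon refaults compete only against the cache, which is why dropping the mem_cgroup_get_nr_swap_pages() call would over-count the workingset.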