Date: Mon, 24 Jan 2022 15:01:47 +0100
From: Michal Hocko
To: Yu Zhao
Cc: Andrew Morton, Linus Torvalds, Andi Kleen, Catalin Marinas, Dave Hansen, Hillf Danton
    , Jens Axboe, Jesse Barnes, Johannes Weiner, Jonathan Corbet,
    Matthew Wilcox, Mel Gorman, Michael Larabel, Rik van Riel,
    Vlastimil Babka, Will Deacon, Ying Huang,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    page-reclaim@google.com, x86@kernel.org, Konstantin Kharlamov
Subject: Re: [PATCH v6 6/9] mm: multigenerational lru: aging
References: <20220104202227.2903605-1-yuzhao@google.com>
 <20220104202227.2903605-7-yuzhao@google.com>

On Sun 23-01-22 14:28:30, Yu Zhao wrote:
> On Wed, Jan 19, 2022 at 10:42:47AM +0100, Michal Hocko wrote:
> > On Wed 19-01-22 00:04:10, Yu Zhao wrote:
> > > On Mon, Jan 10, 2022 at 11:54:42AM +0100, Michal Hocko wrote:
> > > > On Sun 09-01-22 21:47:57, Yu Zhao wrote:
> > > > > On Fri, Jan 07, 2022 at 03:44:50PM +0100, Michal Hocko wrote:
> > > > > > On Tue 04-01-22 13:22:25, Yu Zhao wrote:
> > > > > > [...]
> > > > > > > +static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_mm_walk *walk)
> > > > > > > +{
> > > > > > > +	static const struct mm_walk_ops mm_walk_ops = {
> > > > > > > +		.test_walk = should_skip_vma,
> > > > > > > +		.p4d_entry = walk_pud_range,
> > > > > > > +	};
> > > > > > > +
> > > > > > > +	int err;
> > > > > > > +#ifdef CONFIG_MEMCG
> > > > > > > +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> > > > > > > +#endif
> > > > > > > +
> > > > > > > +	walk->next_addr = FIRST_USER_ADDRESS;
> > > > > > > +
> > > > > > > +	do {
> > > > > > > +		unsigned long start = walk->next_addr;
> > > > > > > +		unsigned long end = mm->highest_vm_end;
> > > > > > > +
> > > > > > > +		err = -EBUSY;
> > > > > > > +
> > > > > > > +		rcu_read_lock();
> > > > > > > +#ifdef CONFIG_MEMCG
> > > > > > > +		if (memcg && atomic_read(&memcg->moving_account))
> > > > > > > +			goto contended;
> > > > > > > +#endif
> > > > > > > +		if (!mmap_read_trylock(mm))
> > > > > > > +			goto contended;
> > > > > >
> > > > > > Have you evaluated the behavior under mmap_sem contention? I mean, what
> > > > > > would be the effect of some mms being excluded from the walk? This path
> > > > > > is called from direct reclaim, and we do allocate with exclusive mmap_sem
> > > > > > IIRC, and the trylock can fail in the presence of a pending writer if I
> > > > > > am not mistaken, so even a read lock holder (e.g. an allocation from the
> > > > > > #PF) can bypass the walk.
> > > > >
> > > > > You are right. Here it must be a trylock; otherwise it can deadlock.
> > > >
> > > > Yeah, this is clear.
> > > >
> > > > > I think there might be a misunderstanding: the aging doesn't
> > > > > exclusively rely on page table walks to gather the accessed bit. It
> > > > > prefers page table walks but can also fall back to the rmap-based
> > > > > function, i.e., lru_gen_look_around(), which only gathers the accessed
> > > > > bit from at most 64 PTEs and therefore is less efficient.
> > > > > But it still retains about 80% of the performance gains.
> > > >
> > > > I have to say that I really have a hard time understanding the runtime
> > > > behavior depending on that interaction. How does the reclaim behave when
> > > > the virtual scan is enabled, partially enabled, and almost completely
> > > > disabled due to different constraints? I do not see any such evaluation
> > > > described in the changelogs, and I consider this rather important
> > > > information for judging the overall behavior.
> > >
> > > It doesn't have (partially) enabled/disabled states, nor does its
> > > behavior change with different reclaim constraints. Having either
> > > would make its design too complex to implement or benchmark.
> >
> > Let me clarify. By "partially enabled" I really meant behavior depending
> > on runtime conditions. Say mmap_sem cannot be locked for half of the
> > scanned tasks and/or the allocation for the mm walker fails due to lack
> > of memory. How is this going to affect reclaim efficiency?
>
> Understood. This is not only possible -- it's the default for our ARM
> hardware that doesn't support the accessed bit, i.e., CPUs that don't
> automatically set the accessed bit.
>
> In try_to_inc_max_seq(), we have:
>
> 	/*
> 	 * If the hardware doesn't automatically set the accessed bit, fall
> 	 * back to lru_gen_look_around(), which only clears the accessed bit
> 	 * in a handful of PTEs. Spreading the work out over a period of time
> 	 * usually is less efficient, but it avoids bursty page faults.
> 	 */
> 	if the accessed bit is not supported
> 		return
>
> 	if alloc_mm_walk() fails
> 		return
>
> 	walk_mm()
> 		if mmap_sem contended
> 			return
>
> 	scan page tables
>
> We have a microbenchmark that specifically measures this worst-case
> scenario by entirely disabling page table scanning. Its results showed
> that this still retains more than 90% of the optimal performance.
I'll > share this microbenchmark in another email when answering Barry's > questions regarding the accessed bit. > > Our profiling infra also indirectly confirms this: it collects data > from real users running on hardware with and without the accessed > bit. Users running on hardware without the accessed bit indeed suffer > a small performance degradation, compared with users running on > hardware with it. But they still benefit almost as much, compared with > users running on the same hardware but without MGLRU. This definitely a good information to have in the cover letter. > > How does a user/admin > > know that the memory reclaim is in a "degraded" mode because of the > > contention? > > As we previously discussed here: > https://lore.kernel.org/linux-mm/Ydu6fXg2FmrseQOn@google.com/ > there used to be a counter measuring the contention, and it was deemed > unnecessary and removed in v4. But I don't have a problem if we want > to revive it. Well, counter might be rather tricky but few trace points would make some sense to me. -- Michal Hocko SUSE Labs