Date: Tue, 26 Jan 2021 10:56:54 +0200
From: Mike Rapoport
To: Michal Hocko
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter, Dan Williams,
	Dave Hansen, David Hildenbrand, Elena Reshetova, "H. Peter Anvin",
	Ingo Molnar, James Bottomley, "Kirill A. Shutemov", Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
	Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-riscv@lists.infradead.org, x86@kernel.org,
	Hagen Paul Pfeifer, Palmer Dabbelt
Subject: Re: [PATCH v16 08/11] secretmem: add memcg accounting
Message-ID: <20210126085654.GO6332@kernel.org>
References: <20210121122723.3446-1-rppt@kernel.org>
	<20210121122723.3446-9-rppt@kernel.org>
	<20210125165451.GT827@dhcp22.suse.cz>
	<20210125213817.GM6332@kernel.org>
	<20210126073142.GY827@dhcp22.suse.cz>
In-Reply-To: <20210126073142.GY827@dhcp22.suse.cz>

On Tue, Jan 26, 2021 at 08:31:42AM +0100, Michal Hocko wrote:
> On Mon 25-01-21 23:38:17, Mike Rapoport wrote:
> > On Mon, Jan 25, 2021 at 05:54:51PM +0100, Michal Hocko wrote:
> > > On Thu 21-01-21 14:27:20, Mike Rapoport wrote:
> > > > From: Mike Rapoport
> > > > 
> > > > Account memory consumed by secretmem to memcg. The accounting is updated
> > > > when the memory is actually allocated and freed.
> > > 
> > > What does this mean?
> > 
> > That means that the accounting is updated when secretmem does cma_alloc()
> > and cma_release().
> > 
> > > What are the lifetime rules?
> > 
> > Hmm, what do you mean by lifetime rules?
> 
> OK, so let's start with reservation time (mmap time, right?) and then the
> instantiation time (faulting in memory). What if the calling process of
> the former has a different memcg context than the latter? E.g.
when you
> send your fd, or an inherited fd after fork moves to a different memcg.
> 
> What about the freeing path? E.g. when you punch a hole in the middle of
> a mapping?
> 
> Please make sure to document all this.

So, does something like this answer your question:

---
The memory cgroup is charged when secretmem allocates pages from CMA to
increase the large pages pool during ->fault() processing.
The pages are uncharged from the memory cgroup when they are released back
to CMA at the time the secretmem inode is evicted.
---

> > > [...]
> > > 
> > > > +static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
> > > > +{
> > > > +	int err;
> > > > +
> > > > +	err = memcg_kmem_charge_page(page, gfp, order);
> > > > +	if (err)
> > > > +		return err;
> > > > +
> > > > +	/*
> > > > +	 * secretmem caches are unreclaimable kernel allocations, so treat
> > > > +	 * them as unreclaimable slab memory for VM statistics purposes
> > > > +	 */
> > > > +	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> > > > +			      PAGE_SIZE << order);
> > > 
> > > A lot of memcg-accounted memory is not reclaimable. Why do you abuse
> > > the SLAB counter when this is not slab-owned memory? Why do you use the
> > > kmem accounting API when __GFP_ACCOUNT should give you the same without
> > > these details?
> > 
> > I cannot use __GFP_ACCOUNT because cma_alloc() does not use gfp.
> 
> Other people are working on changing this. But OK, I do see that this
> can be done later, although it looks rather awkward.
> 
> > Besides, kmem accounting with __GFP_ACCOUNT does not seem
> > to update stats, and there was an explicit request for statistics:
> > 
> > https://lore.kernel.org/lkml/CALo0P13aq3GsONnZrksZNU9RtfhMsZXGWhK1n=xYJWQizCd4Zw@mail.gmail.com/
> 
> Charging and stats are two different things. You can still take care of
> your stats without explicitly using the charging API. But this is a mere
> detail. It just hit my eyes.
> 
> > As for (ab)using NR_SLAB_UNRECLAIMABLE_B, as it was already discussed here:
> > 
> > https://lore.kernel.org/lkml/20201129172625.GD557259@kernel.org/
> 
> Those arguments should be a part of the changelog.
> 
> > I think that a dedicated stats counter would be too much at the moment and
> > NR_SLAB_UNRECLAIMABLE_B is the only explicit stat for unreclaimable memory.
> 
> Why do you think it would be too much? If secret memory becomes a
> prevalent memory user because it happens to back whole virtual
> machines, then hiding it in an existing counter would be less than
> useful.
> 
> Please note that this is all user-visible stuff that will become a PITA
> (if it is possible at all) to change later on. You should really have
> strong arguments in your justification here.

I think that adding a dedicated counter for a few 2M areas per container is
not worth the churn. When we get to the point that secretmem can be used
to back the entire guest memory, we can add a new counter, and that does
not seem like a PITA to me.

-- 
Sincerely yours,
Mike.