Date: Mon, 16 Mar 2020 10:31:54 +0200
From: Leon Romanovsky
To: Jaewon Kim
Cc: Vlastimil Babka, adobriyan@gmail.com, akpm@linux-foundation.org, labbott@redhat.com, sumit.semwal@linaro.org, minchan@kernel.org, ngupta@vflare.org, sergey.senozhatsky.work@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, jaewon31.kim@gmail.com, Linux API
Subject: Re: [RFC PATCH 0/3]
 meminfo: introduce extra meminfo
Message-ID: <20200316083154.GF8510@unreal>
References: <20200311034441.23243-1-jaewon31.kim@samsung.com>
 <20200313174827.GA67638@unreal>
 <5E6EFB6C.7050105@samsung.com>
In-Reply-To: <5E6EFB6C.7050105@samsung.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Mar 16, 2020 at 01:07:08PM +0900, Jaewon Kim wrote:
>
>
> On March 14, 2020, 02:48, Leon Romanovsky wrote:
> > On Fri, Mar 13, 2020 at 04:19:36PM +0100, Vlastimil Babka wrote:
> >> +CC linux-api, please include in future versions as well
> >>
> >> On 3/11/20 4:44 AM, Jaewon Kim wrote:
> >>> /proc/meminfo and show_free_areas do not show the full system-wide
> >>> memory usage. There seems to be a lot of hidden memory, especially on
> >>> embedded Android systems, because they usually have HW IPs which have
> >>> no internal memory of their own and use the common DRAM.
> >>>
> >>> On Android systems, most of that hidden memory appears to be vmalloc
> >>> pages, ION system heap memory, graphics memory, and memory for
> >>> DRAM-based compressed swap storage. Some of it may be shown in other
> >>> nodes, but it seems useful if /proc/meminfo shows all of this extra
> >>> memory information. show_mem also needs to print this info in an OOM
> >>> situation.
> >>>
> >>> Fortunately, vmalloc pages are already shown thanks to commit
> >>> 97105f0ab7b8 ("mm: vmalloc: show number of vmalloc pages in
> >>> /proc/meminfo"). Swap memory using zsmalloc can be seen through vmstat
> >>> thanks to commit 91537fee0013 ("mm: add NR_ZSMALLOC to vmstat"), but
> >>> not in /proc/meminfo.
> >>>
> >>> Memory usage of a specific driver can vary, so showing the usage
> >>> through the upstream meminfo.c is not easy. To print the extra memory
> >>> usage of a driver, introduce the following APIs. Each driver needs to
> >>> maintain its count in an atomic_long_t.
> >>>
> >>> int register_extra_meminfo(atomic_long_t *val, int shift,
> >>>                            const char *name);
> >>> int unregister_extra_meminfo(atomic_long_t *val);
> >>>
> >>> Currently the ION system heap allocator and zsmalloc pages are
> >>> registered. Additionally tested on a local graphics driver.
> >>>
> >>> e.g.) cat /proc/meminfo | tail -3
> >>> IonSystemHeap:    242620 kB
> >>> ZsPages:          203860 kB
> >>> GraphicDriver:    196576 kB
> >>>
> >>> e.g.) show_mem on OOM
> >>> <6>[  420.856428] Mem-Info:
> >>> <6>[  420.856433] IonSystemHeap:32813kB ZsPages:44114kB GraphicDriver:13091kB
> >>> <6>[  420.856450] active_anon:957205 inactive_anon:159383 isolated_anon:0
> >>
> >> I like the idea and the dynamic nature of this, so that drivers not
> >> present wouldn't add lots of useless zeroes to the output.
> >> It also simplifies the decision of "what is important enough to need
> >> its own meminfo entry".
> >>
> >> The suggestion of hunting for per-driver /sys files would only work if
> >> there were a common naming scheme for such files so one can find(1)
> >> them easily.
> >> It also doesn't work for the oom/failed-alloc warning output.
> >
> > Of course there is a need for a stable name for such output; this is
> > why the driver core, and not driver authors, should be responsible for
> > it.
> >
> > The use case I had in mind is slightly different from watching for OOM.
> >
> > I'm interested in optimizing the memory footprint of our drivers to
> > allow better scaling in SR-IOV mode, where one device creates many
> > separate copies of itself. Those copies can easily take gigabytes of
> > RAM due to the need to optimize for high-performance networking.
> > Sometimes it is the amount of memory, not the HW, that actually limits
> > the scale factor.
> >
> > So I would imagine this feature being used as an aid for driver
> > developers and not for runtime decisions.
> >
> > My 2 cents.
> >
> > Thanks
>
> Thank you for your comment.
> My idea, I think, may help each driver developer to see their memory
> usage. But I'd like to see the overall memory usage through one node.
> That is more than enough :).
>
> Let me know if you have more comments.
> I am planning to move my logic to be shown in a new node,
> /proc/meminfo_extra, in v2.

Can you please help me understand what that file will look like once
many drivers start to use this interface? Will I see multiple lines?
Something like:

driver1 ....
driver2 ....
driver3 ....
...
driver1000 ....

How can we extend it to support subsystems' core code?

Thanks

> Thank you
> Jaewon Kim