Date: Fri, 5 Feb 2021 13:28:01 -0800
From: Minchan Kim
To: John Hubbard
Cc: Andrew Morton, gregkh@linuxfoundation.org, surenb@google.com, joaodias@google.com, LKML, linux-mm
Subject: Re: [PATCH] mm: cma: support sysfs
References: <87d7ec1f-d892-0491-a2de-3d0feecca647@nvidia.com> <71c4ce84-8be7-49e2-90bd-348762b320b4@nvidia.com> <34110c61-9826-4cbe-8cd4-76f5e7612dbd@nvidia.com> <269689b7-3b6d-55dc-9044-fbf2984089ab@nvidia.com>
In-Reply-To: <269689b7-3b6d-55dc-9044-fbf2984089ab@nvidia.com>
On Fri, Feb 05, 2021 at 12:25:52PM -0800, John Hubbard wrote:
> On 2/5/21 8:15 AM, Minchan Kim wrote:
> ...
> > > Yes, approximately. I was wondering if this would suffice at least
> > > as a baseline:
> > >
> > >     cma_alloc_success   125
> > >     cma_alloc_failure    25
> >
> > IMO, regardless of my patch, it would be good to have such statistics,
> > since CMA was born to replace carved-out memory with dynamic
> > allocation, ideally for memory efficiency. A failure should therefore
> > be regarded as critical, so the admin can notice how the system is
> > being hurt.
>
> Right. So CMA failures are useful for the admin to see, understood.
>
> > Anyway, it's not enough for me, and it's orthogonal to my goal.
>
> OK. But...what *is* your goal, and why is this useless (that's what
> orthogonal really means here) for your goal?

As I mentioned, the goal is to monitor failures from each CMA area, since
each area has its own purpose. Let's take an example. A system has five
CMA areas, and each one is associated with a particular user scenario;
each user gets an exclusive CMA area to avoid fragmentation problems.

    CMA-1 depends on bluetooth
    CMA-2 depends on WIFI
    CMA-3 depends on sensor-A
    CMA-4 depends on sensor-B
    CMA-5 depends on sensor-C

With per-area counters we can catch which module was affected, but with
only a global failure count, we can't tell who was affected.

> Also, would you be willing to try out something simple first, such as
> providing an indication that cma is active and its overall success rate,
> like this:
>
> /proc/vmstat:
>
>     cma_alloc_success   125
>     cma_alloc_failure    25
>
> ...or is the only way to provide the more detailed items, complete with
> per-CMA details, in a non-debugfs location?
>
> > > ...and then, to see if more is needed, some questions:
> > >
> > > a) Do you know of an upper bound on how many cma areas there can be
> > > (I think Matthew also asked that)?
> >
> > There is no upper bound since it's configurable.
>
> OK, thanks, so that pretty much rules out putting per-cma details into
> anything other than a directory or something like it.
>
> > > b) Is tracking the cma area really as valuable as other
> > > possibilities? We can put "a few" to "several" items here, so really
> > > want to get your very favorite bits of information in. If, for
> > > example, there can be *lots* of cma areas, then maybe tracking
> >
> > At this moment, allocation/failure counts for each CMA area, since
> > each area has its own particular use case, which makes it easy for me
> > to track which module will be affected. I think per-CMA statistics are
> > very useful for a minimal code change, so I want to enable them by
> > default under CONFIG_CMA && CONFIG_SYSFS.
> >
> > > by a range of allocation sizes is better...
> >
> > I take your suggestion to be something like this, where [alloc_range]
> > could be an order, or a range by interval:
> >
> >     /sys/kernel/mm/cma/cma-A/[alloc_range]/success
> >     /sys/kernel/mm/cma/cma-A/[alloc_range]/fail
> >     ..
> >     ..
> >     /sys/kernel/mm/cma/cma-Z/[alloc_range]/success
> >     /sys/kernel/mm/cma/cma-Z/[alloc_range]/fail
>
> Actually, I meant, "ranges instead of cma areas", like this:
>
>     /sys/kernel/mm/cma/<alloc_range>/success
>     /sys/kernel/mm/cma/<alloc_range>/fail
>     ...
>
> The idea is that knowing the allocation sizes that succeeded and failed
> is maybe even more interesting and useful than knowing the cma area that
> contains them.

I understand your point, but it would make it hard to find who was
affected by a failure. That's why I suggested putting your idea behind an
additional config option, since a per-CMA metric with a simple
success/failure count is enough for my case.
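To make "per-CMA success/failure" concrete, here is a rough sketch of the
direction I have in mind. This is illustration only, not the actual patch:
every identifier below (cma_stat, cma_stat_register, the attribute names)
is made up for this example.

/*
 * Sketch: one kobject per CMA area under /sys/kernel/mm/cma/<area>/,
 * exposing two read-only counters, alloc_success and alloc_fail.
 */
#include <linux/atomic.h>
#include <linux/kobject.h>
#include <linux/slab.h>
#include <linux/sysfs.h>

struct cma_stat {
	struct kobject kobj;		/* embedded: one directory per area */
	atomic64_t alloc_success;	/* bumped when cma_alloc() succeeds */
	atomic64_t alloc_fail;		/* bumped when cma_alloc() fails */
};

#define to_cma_stat(k) container_of(k, struct cma_stat, kobj)

static ssize_t alloc_success_show(struct kobject *kobj,
				  struct kobj_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%llu\n",
			  (u64)atomic64_read(&to_cma_stat(kobj)->alloc_success));
}

static ssize_t alloc_fail_show(struct kobject *kobj,
			       struct kobj_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%llu\n",
			  (u64)atomic64_read(&to_cma_stat(kobj)->alloc_fail));
}

static struct kobj_attribute alloc_success_attr = __ATTR_RO(alloc_success);
static struct kobj_attribute alloc_fail_attr = __ATTR_RO(alloc_fail);

static struct attribute *cma_stat_attrs[] = {
	&alloc_success_attr.attr,
	&alloc_fail_attr.attr,
	NULL,
};
ATTRIBUTE_GROUPS(cma_stat);

static void cma_stat_release(struct kobject *kobj)
{
	kfree(to_cma_stat(kobj));
}

static struct kobj_type cma_stat_ktype = {
	.release	= cma_stat_release,
	.sysfs_ops	= &kobj_sysfs_ops,
	.default_groups	= cma_stat_groups,
};

/* Called once per area; @parent would be a "cma" kobject under mm_kobj. */
static struct cma_stat *cma_stat_register(const char *name,
					  struct kobject *parent)
{
	struct cma_stat *stat = kzalloc(sizeof(*stat), GFP_KERNEL);

	if (!stat)
		return NULL;
	if (kobject_init_and_add(&stat->kobj, &cma_stat_ktype, parent,
				 "%s", name)) {
		kobject_put(&stat->kobj);	/* ->release() frees stat */
		return NULL;
	}
	return stat;
}

With something like that, the admin just reads
/sys/kernel/mm/cma/cma-3/alloc_fail and immediately knows sensor-A is the
one being hurt.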
> > I agree it would also be useful, but I'd like to enable it under
> > CONFIG_CMA_SYSFS_ALLOC_RANGE as a separate patchset.
>
> I will stop harassing you very soon, just want to bottom out on
> understanding the real goals first. :)

I hope my example makes the goal clearer for you.
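P.S. Purely as an illustration of the alloc-range variant (again with
made-up names, not part of any patch), the bucketing behind
CONFIG_CMA_SYSFS_ALLOC_RANGE could look roughly like this:

/*
 * Sketch: the same success/fail counters, but bucketed by allocation
 * order so the size histogram of requests becomes visible, e.g. as
 * .../cma-A/order_3/{success,fail}.
 */
#include <linux/atomic.h>
#include <linux/mm.h>		/* MAX_ORDER, get_order(), PAGE_SHIFT */

struct cma_range_stat {
	atomic64_t success[MAX_ORDER];	/* one bucket per allocation order */
	atomic64_t fail[MAX_ORDER];
};

/* Record the outcome of a cma_alloc() request of @count pages. */
static void cma_range_account(struct cma_range_stat *stat,
			      unsigned long count, bool succeeded)
{
	unsigned int order = get_order(count << PAGE_SHIFT);

	if (order >= MAX_ORDER)
		order = MAX_ORDER - 1;	/* clamp oversized requests */

	if (succeeded)
		atomic64_inc(&stat->success[order]);
	else
		atomic64_inc(&stat->fail[order]);
}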