From: Muchun Song
Date: Tue, 13 Oct 2020 11:29:44 +0800
Subject: Re: [External] Re: [PATCH] mm: proc: add Sock to /proc/meminfo
To: Cong Wang
Cc: Greg KH, rafael@kernel.org, "Michael S. Tsirkin", Jason Wang, David Miller, Jakub Kicinski, Alexey Dobriyan, Andrew Morton, Eric Dumazet, Alexey Kuznetsov, Hideaki YOSHIFUJI, Steffen Klassert, Herbert Xu, Shakeel Butt, Will Deacon, Michal Hocko, Roman Gushchin, Neil Brown, Mike Rapoport, Sami Tolvanen, "Kirill A. Shutemov", Feng Tang, Paolo Abeni, Willem de Bruijn, Randy Dunlap, Florian Westphal, gustavoars@kernel.org, Pablo Neira Ayuso, Dexuan Cui, Jakub Sitnicki, Peter Zijlstra, Christian Brauner, "Eric W. Biederman", Thomas Gleixner, dave@stgolabs.net, Michel Lespinasse, Jann Horn, chenqiwu@xiaomi.com, christophe.leroy@c-s.fr, Minchan Kim, Martin KaFai Lau, Alexei Starovoitov, Daniel Borkmann, Miaohe Lin, Kees Cook, LKML, virtualization@lists.linux-foundation.org, Linux Kernel Network Developers, linux-fsdevel, linux-mm
References: <20201010103854.66746-1-songmuchun@bytedance.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 13, 2020 at 5:47 AM Cong Wang wrote:
>
> On Sun, Oct 11, 2020 at 9:22 PM Muchun Song wrote:
> >
> > On Mon, Oct 12, 2020 at 2:39 AM Cong Wang wrote:
> > >
> > > On Sat, Oct 10, 2020 at 3:39 AM Muchun Song wrote:
> > > >
> > > > The amount of memory allocated to socket buffers can become significant.
> > > > However, we do not display the amount of memory consumed by socket
> > > > buffers. In this case, knowing where the memory is consumed by the kernel
> > >
> > > We do it via `ss -m`. Is it not sufficient? And if not, why not add it there
> > > rather than /proc/meminfo?
> >
> > If the system has little free memory, we can see where the memory went via
> > /proc/meminfo. But if a lot of memory is consumed by socket buffers, we cannot
> > see that when Sock is not shown in /proc/meminfo. If an unaware user
> > doesn't think of the socket buffers, naturally they will not run `ss -m`. The
> > end result
>
> Interesting, we already have a few counters related to socket buffers;
> are you saying these are not accounted in /proc/meminfo either?

Yeah, these are not accounted for in /proc/meminfo.

> If yes, why are page frags so special here? If not, they are more
> important than page frags, so you probably want to deal with them
> first.
>
> > is that we still don't know where the memory is consumed. And we add
> > Sock to /proc/meminfo just like memcg does (the 'sock' item in the cgroup
> > v2 memory.stat).
> > So I think that adding it to /proc/meminfo is sufficient.
>
> It looks like the socket page frag is actually already accounted;
> for example, in tcp_sendmsg_locked():
>
>         copy = min_t(int, copy, pfrag->size - pfrag->offset);
>
>         if (!sk_wmem_schedule(sk, copy))
>                 goto wait_for_memory;
>

Yeah, it is already accounted for, but that does not represent real
memory usage. It is just the total amount of charged memory. For
example, if a task sends a 10-byte message, only one page is charged
to the memcg, but the system may allocate 8 pages. So the charge does
not truly reflect the memory allocated by the page frag allocation
path.

> > > > static inline void __skb_frag_unref(skb_frag_t *frag)
> > > > {
> > > > -       put_page(skb_frag_page(frag));
> > > > +       struct page *page = skb_frag_page(frag);
> > > > +
> > > > +       if (put_page_testzero(page)) {
> > > > +               dec_sock_node_page_state(page);
> > > > +               __put_page(page);
> > > > +       }
> > > > }
> > >
> > > You mix socket page frags with skb frags at least, and I am not sure this
> > > is exactly what you want, because skb page frags are frequently used
> > > by network drivers rather than sockets.
> > >
> > > Also, which one matches this dec_sock_node_page_state()? Clearly
> > > not skb_fill_page_desc() or __skb_frag_ref().
> >
> > Yeah, we call inc_sock_node_page_state() in skb_page_frag_refill().
>
> How is skb_page_frag_refill() possibly paired with __skb_frag_unref()?
>
> > So if someone gets the page returned by skb_page_frag_refill(), it must
> > put the page via __skb_frag_unref()/skb_frag_unref(). We use PG_private
> > to indicate that we need to decrement the node page state when the
> > refcount of the page reaches zero.
>
> skb_page_frag_refill() is called on frags not within an skb; for instance,
> sk_page_frag_refill() uses it for a per-socket or per-process page frag.
> But __skb_frag_unref() is specifically used for skb frags, which are
> supposed to be filled by skb_fill_page_desc() (the page is allocated
> by the driver).
>
> They are different things, and you are mixing them up, which looks
> clearly wrong or at least misleading.

Yeah, it looks a little strange. I just want to account for page frag
allocations, so I have to use PG_private to distinguish page frag
pages from other pages in __skb_frag_unref(). If the page was
allocated by skb_page_frag_refill(), we should decrease the
statistics.

Thanks.

> Thanks.

-- 
Yours,
Muchun