From: Arjun Roy
Date: Tue, 16 Mar 2021 23:05:11 -0700
Subject: Re: [mm, net-next v2] mm: net: memcg accounting for TCP rx zerocopy
To: Johannes Weiner
Cc: Arjun Roy, Andrew Morton, David Miller, netdev,
 Linux Kernel Mailing List, Cgroups, Linux MM, Shakeel Butt,
 Eric Dumazet, Soheil Hassas Yeganeh, Jakub Kicinski, Michal Hocko,
 Yang Shi, Roman Gushchin
References: <20210316041645.144249-1-arjunroy.kdev@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Mar 16, 2021 at 3:27 AM Johannes Weiner wrote:
>
> Hello,
>
> On Mon, Mar 15, 2021 at 09:16:45PM -0700, Arjun Roy wrote:
> > From: Arjun Roy
> >
> > TCP zerocopy receive is used by high-performance network applications
> > to further scale. For RX zerocopy, the memory containing the network
> > data filled by the network driver is directly mapped into the address
> > space of high-performance applications. To keep the TLB cost low,
> > these applications unmap the network memory in big batches. So, this
> > memory can remain mapped for a long time. This can cause a memory
> > isolation issue, as this memory becomes unaccounted once it is mapped
> > into the application address space. This patch adds memcg accounting
> > for such memory.
> >
> > Accounting the network memory comes with its own unique challenges.
> > High-performance NIC drivers use page pooling to reuse pages and
> > eliminate/reduce expensive setup steps like IOMMU mapping. These
> > drivers keep an extra reference on the pages, and thus we cannot
> > depend on the page reference for the uncharging. A page in the pool
> > may keep a memcg pinned for an arbitrarily long time, or may later
> > get used by another memcg.
>
> The page pool knows when a page is unmapped again and becomes
> available for recycling, right? Essentially the 'free' phase of that
> private allocator. That's where the uncharge should be done.
>

In general, no, it does not. The page pool - how it operates, and
whether it exists in the first place - is an optimization that a given
NIC driver can choose to make, and so there is no generic plumbing,
that I am aware of, that ties page unmap events to something a page
pool could subscribe to. All it can do is check, at a given point,
whether it can reuse a page or not, typically by checking the current
page refcount.
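For illustration, a minimal sketch of what that on-demand reuse test
typically looks like (the function name here is invented, but
page_ref_count() and page_is_pfmemalloc() are the actual kernel
primitives such drivers build on):

#include <linux/mm.h>

/*
 * Hypothetical sketch of a driver's "can this rx page be recycled?"
 * test. The driver holds one reference of its own, so any additional
 * reference means the page is still in use elsewhere - for example,
 * still mapped by a zerocopy receiver - and cannot be reused yet.
 */
static bool rx_page_is_reusable(struct page *page)
{
	return !page_is_pfmemalloc(page) && page_ref_count(page) == 1;
}

Some drivers refine this (e.g. tracking a reference-count bias instead
of comparing against 1), but the essential point stands: the check runs
only when the driver polls the page, not when the mapping goes away.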
A couple of examples of drivers with such a mechanism - mlx5:
(https://github.com/torvalds/linux/blob/master/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c#L248)

Or Intel fm10k:
(https://github.com/torvalds/linux/blob/master/drivers/net/ethernet/intel/fm10k/fm10k_main.c#L207)

Note that the map count is typically not checked - maybe because
page-flipping receive zerocopy did not exist as a consideration when
these drivers were written.

So, given that the page pool only checks on demand whether a page is
usable - since there is no specific plumbing invoked when a page
becomes usable again (i.e. is unmapped, in this case) - we opted to
hook into the point where the mapcount is decremented, inside the
unmap() path.

> For one, it's more aligned with the usual memcg charge lifetime rules.
>
> But also it doesn't add what is essentially a private driver callback
> to the generic file unmapping path.
>

I understand the concern, and share it - the specific thing we'd like
to avoid is having driver-specific code in the unmap path, not least
because any given driver could do its own thing. Rather, we consider
the mechanism we added to be generic to zerocopy network data
reception: it does the right thing no matter what the driver is doing.
In other words, it is transparent to the driver - all the driver has
to do is continue what it was doing before, using page->refcnt == 1 to
decide whether a page is available or still in use.

Consider this a broadly applicable networking feature adding a
callback to the unmap path, rather than a particular driver doing so.
And while it is just TCP at present, it fundamentally isn't limited to
TCP.

I do have a request for clarification: could you spell out the usual
memcg charge lifetime rules you are referring to here? Just to make
sure we're on the same page.
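To make the mechanism described above concrete, here is a rough sketch
of the idea (the helper names below are invented for illustration and
are not the actual patch code; page_mapcount() is real):

/*
 * Hypothetical sketch: uncharge a zerocopy rx page when its *last*
 * user-space mapping goes away, independent of the page refcount,
 * which the driver's pool may hold on to indefinitely.
 */
static void zerocopy_rx_page_unmapped(struct page *page)
{
	/* Called from the unmap path, after the mapcount is dropped. */
	if (page_mapcount(page) == 0 && page_is_sock_charged(page))
		mem_cgroup_uncharge_sock_page(page);  /* hypothetical */
}

The driver never sees this hook; from its point of view, the refcount
simply drops back toward 1 once the application unmaps the page.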
> Finally, this will eliminate the need for making up a new charge type
> (MEMCG_DATA_SOCK) and allow using the standard kmem charging API.
>
> > This patch decouples the uncharging of the page from the refcnt and
> > associates it with the map count, i.e. the page gets uncharged when
> > the last address space unmaps it. Now the question is, what if the
> > driver drops its reference while the page is still mapped? That is
> > fine, as the address space also holds a reference to the page, i.e.
> > the reference count cannot drop to zero before the map count.
> >
> > Signed-off-by: Arjun Roy
> > Co-developed-by: Shakeel Butt
> > Signed-off-by: Shakeel Butt
> > Signed-off-by: Eric Dumazet
> > Signed-off-by: Soheil Hassas Yeganeh
> > ---
> >
> > Changelog since v1:
> > - Pages accounted for in this manner are now tracked via MEMCG_SOCK.
> > - v1 allowed for a brief period of double-charging; now we have a
> >   brief period of under-charging to avoid undue memory pressure.
>
> I'm afraid we'll have to go back to v1.
>
> Let's address the issues raised with it:
>
> 1. The NR_FILE_MAPPED accounting. It is longstanding Linux behavior
>    that driver pages mapped into userspace are accounted as file
>    pages, because userspace is actually doing mmap() against a driver
>    file/fd (as opposed to an anon mmap). That is how they show up in
>    vmstat, in meminfo, and in the per-process stats. There is no
>    reason to make memcg deviate from this. If we don't like it, it
>    should be taken on by changing vm_insert_page() - not by tricking
>    rmap into thinking these aren't memcg pages and then fixing it up
>    with additional special-cased accounting callbacks.
>
>    v1 did this right: it charged the pages the way we handle all
>    other userspace pages - before rmap - and then let the generic VM
>    code do the accounting for us with the cgroup-aware vmstat
>    infrastructure.

To clarify, are you referring to the v1 approach for this patch from a
few weeks ago (i.e. charging the page before vm_insert_page())? This
patch changes when the charging happens and, as you note, makes it a
forced charge since we've already inserted the mappings into the
user's address space - but it isn't otherwise fundamentally different
from v1 in what it does. And unmap is the same.

> 2. The double charging. Could you elaborate how much we're talking
>    about in any given batch? Is this a problem worth worrying about?
>

The period of double counting in v1 of this patch ran from around the
time we do vm_insert_page() (the pages were accounted just prior to
being inserted) until the struct sk_buffs were disposed of - for one
skb, that's up to 45 pages. But note that is for one socket; there can
be quite a lot of open sockets, and depending on what happens with
scheduling, the period of double counting can be fairly long. The v1
patch series had some discussion on this:

https://www.spinics.net/lists/cgroups/msg27665.html

which is why we have post-charging in v2.

> The way I see it, any conflict here is caused by the pages being
> counted in the SOCK counter already, but not actually *tracked* on
> a per page basis. If it's worth addressing, we should look into
> fixing the root cause over there first if possible, before trying
> to work around it here.
>

When you say tracked on a per-page basis, I assume you mean the usual
mechanism where a page has a non-null memcg set, with uncharging
occurring when the refcount goes to 0. Networking currently
accounts/unaccounts bytes based on a page count (and the memcg set in
struct sock) rather than setting the memcg in the page itself -
because the recycling of pages means that, the next time around, a
page could be charged to another memcg. And the refcount never drops
to 1 due to the pooling (in the absence of eviction of busy pages
during packet reception). While a page sits in the driver's pool, a
non-null memcg does not work, since we do not know which socket (and
thus which memcg) the page is destined for - we do not know whose
packet arrives next. The page pooling does make all of this a bit
awkward, and the approach in this patch seems to me the cleanest
solution.

> The newly-added GFP_NOFAIL is especially worrisome. The pages
> should be charged before we make promises to userspace, not be
> force-charged when it's too late.
>
> We have sk context when charging the inserted pages. Can we
> uncharge MEMCG_SOCK after each batch of inserts? That's only 32
> pages worth of overcharging, so not more than the regular charge
> batch memcg is using.

Right now, sock uncharging happens as we clean up the skb, which can
hold up to 45 pages. So if we tried uncharging per skb, we'd have up
to 45 pages' worth of overcharging per socket, multiplied by however
many sockets are currently being overcharged in this manner.
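To put rough numbers on that (assuming 4 KiB pages - an assumption on
my part, since the page size is not stated above):

  45 pages/skb * 4 KiB = 180 KiB of transient overcharge per socket
  32 pages     * 4 KiB = 128 KiB for the regular memcg charge batch

so with N sockets in this state at once, roughly N * 180 KiB can be
outstanding at any instant.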
> An even better way would be to do charge stealing where we reuse
> the existing MEMCG_SOCK charges and don't have to get any new ones
> at all - just set up page->memcg and remove the charge from the sk.
>

That gets a bit complicated, since we have to be careful with forward
allocs in the socket layer - and the bookkeeping gets messy, since for
the struct sock we would now need to track how many bytes were
'stolen' and update all the code where socket accounting happens. It
ends up being a larger, messier code change that way.

Thanks,
-Arjun

> But yeah, it depends a bit if this is a practical concern.
>
> Thanks,
> Johannes