Date: Thu, 18 Mar 2021 11:07:23 -0400
From: Johannes Weiner
To: Michal Hocko
Cc: Matthew Wilcox, Hugh Dickins, Zhou Guanghui, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
    npiggin@gmail.com, ziy@nvidia.com, wangkefeng.wang@huawei.com,
    guohanjun@huawei.com, dingtianhong@huawei.com, chenweilong@huawei.com,
    rui.xiang@huawei.com
Subject: Re: [PATCH v2 2/2] mm/memcg: set memcg when split page
References: <20210304074053.65527-3-zhouguanghui1@huawei.com>
 <20210308210225.GF3479805@casper.infradead.org>
 <20210309123255.GI3479805@casper.infradead.org>

On Thu, Mar 18, 2021 at 03:05:00PM +0100, Michal Hocko wrote:
> On Thu 11-03-21 12:37:20, Hugh Dickins wrote:
> > On Thu, 11 Mar 2021, Michal Hocko wrote:
> > > On Thu 11-03-21 10:21:39, Johannes Weiner wrote:
> > > > On Thu, Mar 11, 2021 at 09:37:02AM +0100, Michal Hocko wrote:
> > > > > Johannes, Hugh,
> > > > >
> > > > > what do you think about this approach? If we want to stick with
> > > > > the split_page approach then we need to update the missing place
> > > > > Matthew has pointed out.
> > > >
> > > > I find the __free_pages() code quite tricky as well. But for that
> > > > reason I would actually prefer to initiate the splitting in there,
> > > > since that's the place where we actually split the page, rather than
> > > > spread the handling of this situation further out.
> > > >
> > > > The race condition shouldn't be hot, so I don't think we need to be as
> > > > efficient about setting page->memcg_data only on the higher-order
> > > > buddies as in Willy's scratch patch. We can call split_page_memcg(),
> > > > which IMO should actually help document what's happening to the page.
> > > >
> > > > I think that function could also benefit a bit more from step-by-step
> > > > documentation about what's going on. The kerneldoc is helpful, but I
> > > > don't think it does justice to how tricky this race condition is.
> > > >
> > > > Something like this?
> > > >
> > > > void __free_pages(struct page *page, unsigned int order)
> > > > {
> > > > 	/*
> > > > 	 * Drop the base reference from __alloc_pages and free. In
> > > > 	 * case there is an outstanding speculative reference, from
> > > > 	 * e.g. the page cache, it will put and free the page later.
> > > > 	 */
> > > > 	if (likely(put_page_testzero(page))) {
> > > > 		free_the_page(page, order);
> > > > 		return;
> > > > 	}
> > > >
> > > > 	/*
> > > > 	 * The speculative reference will put and free the page.
> > > > 	 *
> > > > 	 * However, if the speculation was into a higher-order page
> > > > 	 * that isn't marked compound, the other side will know
> > > > 	 * nothing about our buddy pages and only free the order-0
> > > > 	 * page at the start of our chunk! We must split off and free
> > > > 	 * the buddy pages here.
> > > > 	 *
> > > > 	 * The buddy pages aren't individually refcounted, so they
> > > > 	 * can't have any pending speculative references themselves.
> > > > 	 */
> > > > 	if (!PageHead(page) && order > 0) {
> > > > 		split_page_memcg(page, 1 << order);
> > > > 		while (order-- > 0)
> > > > 			free_the_page(page + (1 << order), order);
> > > > 	}
> > > > }
> > >
> > > Fine with me. Matthew was concerned about more places that do something
> > > similar, but I would say that if we find more such places we can
> > > reconsider, and for now stay with a reasonably clear model: only the
> > > head page carries the memcg information, and split_page_memcg() is
> > > necessary to break such a page into smaller pieces.
> >
> > I agree: I do like Johannes' suggestion best, now that we already
> > have split_page_memcg(). Not too worried about contrived use of
> > free_unref_page() here; and whether non-compound high-order pages
> > should be perpetuated is a different discussion.
>
> Matthew, are you planning to post a patch with the suggested changes,
> or should I do it?

I'll post a proper patch.