Subject: Re: [PATCH] mm: memory: move mem_cgroup_charge() into alloc_anon_folio()
From: Kefeng Wang
To: Ryan Roberts, Matthew Wilcox
CC: Andrew Morton, David Hildenbrand
Date: Wed, 17 Jan 2024 09:34:01 +0800
Message-ID: <49ee43cd-f356-4441-ba95-4ac81ef98bb2@huawei.com>
References: <20240116071302.2282230-1-wangkefeng.wang@huawei.com> <2c24afdf-5103-4c1b-a649-2eeed185f3fb@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2024/1/16 23:07, Ryan Roberts wrote:
> On 16/01/2024 14:51, Matthew Wilcox wrote:
>> On Tue, Jan 16, 2024 at 02:35:54PM +0000, Ryan Roberts wrote:
>>> On 16/01/2024 07:13, Kefeng Wang wrote:
>>>> In order to allocate as large a folio as possible, move the mem
>>>> charge into alloc_anon_folio() and try the next order if
>>>> mem_cgroup_charge() fails; also change GFP_KERNEL to gfp to be
>>>> consistent with PMD THP.
>>>
>>> I agree that changing gfp gives you consistency. But it's not entirely clear to
>>> me why THP should use one set of flags for this case, and single pages another.
>>> Why does this difference exist?
>>
>> I think it needs to be spelled out much better in the changelog. Here's
>> my attempt at explaining why we might want this change.
>>
>> mem_cgroup_charge() uses the GFP flags in a fairly sophisticated way.
>> In addition to checking gfpflags_allow_blocking(), it pays attention to
>> __GFP_NORETRY and __GFP_RETRY_MAYFAIL to ensure that processes within
>> this memcg do not exceed their quotas. Using the same GFP flags ensures
>> that we handle large anonymous folios correctly, including falling back
>> to smaller orders when there is plenty of memory available in the system
>> but this memcg is close to its limits.
>
> Thanks for the explanation. Please add to the commit log.

Thanks, that is much better, I will update. A similar change was made
for PMD THP; see commit 3b3636924dfe ("mm, memcg: sync allocation and
memcg charge gfp flags for THP").

>
> Essentially you are saying that previously, all mTHP allocations would cause
> reclaim from the memcg if the allocation caused the quota to be used up. But
> with this change, it might now avoid that reclaim and just OOM, if the flags are
> as such? So then we retry with the next lowest available size. Makes sense!
>

With the correct GFP flags we could get less reclaim and a faster
fallback to the next order, which is what I want too.

>>
>> ... I remain not-an-expert in memcg and anonymous memory and welcome
>> improvements to that text.
>
> Me too...
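
For reference, below is a rough, simplified sketch of the retry loop I
have in mind in alloc_anon_folio(). This is only an illustration, not
the exact patch: the helpers (thp_vma_allowable_orders(),
thp_vma_suitable_orders(), vma_thp_gfp_mask(), highest_order(),
next_order(), vma_alloc_folio(), vma_alloc_zeroed_movable_folio())
follow the existing mm/memory.c and huge_mm.h code around this point,
and the PTE range checks, page clearing and swaprate throttling are
trimmed.

static struct folio *alloc_anon_folio(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long orders, addr;
	struct folio *folio;
	gfp_t gfp;
	int order;

	/* Pick the candidate mTHP orders as the existing code does. */
	orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true,
					  true, BIT(PMD_ORDER) - 1);
	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
	if (!orders)
		goto fallback;

	gfp = vma_thp_gfp_mask(vma);
	order = highest_order(orders);
	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio) {
			/*
			 * Charge with the same gfp used for the allocation.
			 * If the memcg charge fails, drop this folio and
			 * retry with the next lower order instead of
			 * reclaiming hard inside the memcg.
			 */
			if (!mem_cgroup_charge(folio, vma->vm_mm, gfp))
				return folio;
			folio_put(folio);
		}
		order = next_order(&orders, order);
	}

fallback:
	/* Order-0 fallback (charging of this path not shown here). */
	return vma_alloc_zeroed_movable_folio(vma, vmf->address);
}

The point is that the charge uses the same gfp as the allocation, so a
charge failure just falls back to the next lower order instead of
forcing heavy memcg reclaim.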