From: Vasily Averin
Subject: [PATCH memcg] memcg: enable memory accounting in __alloc_pages_bulk
To: Michal Hocko
Cc: Johannes Weiner, Vladimir Davydov, Andrew Morton, cgroups@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel@openvz.org,
    Mel Gorman, Uladzislau Rezki, Vlastimil Babka
References: <6411d3f7-b3a4-23a8-14fc-fcf6b9c5b73a@virtuozzo.com>
Date: Fri, 8 Oct 2021 12:23:43 +0300
In-Reply-To: <6411d3f7-b3a4-23a8-14fc-fcf6b9c5b73a@virtuozzo.com>

Enable memory accounting for the bulk page allocator.

Fixes: 387ba26fb1cb ("mm/page_alloc: add a bulk page allocator")
Cc:
Signed-off-by: Vasily Averin
---
 mm/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 62 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b37435c274cf..602819a232e5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5172,6 +5172,55 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 	return true;
 }
 
+#ifdef CONFIG_MEMCG_KMEM
+static bool memcg_bulk_pre_charge_hook(struct obj_cgroup **objcgp, gfp_t gfp,
+				       unsigned int nr_pages)
+{
+	struct obj_cgroup *objcg = NULL;
+
+	if (!memcg_kmem_enabled() || !(gfp & __GFP_ACCOUNT))
+		return true;
+
+	objcg = get_obj_cgroup_from_current();
+
+	if (objcg && obj_cgroup_charge(objcg, gfp, nr_pages << PAGE_SHIFT)) {
+		obj_cgroup_put(objcg);
+		return false;
+	}
+	obj_cgroup_get_many(objcg, nr_pages);
+	*objcgp = objcg;
+	return true;
+}
+
+static void memcg_bulk_charge_hook(struct obj_cgroup *objcg,
+				   struct page *page)
+{
+	page->memcg_data = (unsigned long)objcg | MEMCG_DATA_KMEM;
+}
+
+static void memcg_bulk_post_charge_hook(struct obj_cgroup *objcg,
+					unsigned int nr_pages)
+{
+	obj_cgroup_uncharge(objcg, nr_pages << PAGE_SHIFT);
+	percpu_ref_put_many(&objcg->refcnt, nr_pages + 1);
+}
+#else
+static bool memcg_bulk_pre_charge_hook(struct obj_cgroup **objcgp, gfp_t gfp,
+				       unsigned int nr_pages)
+{
+	return true;
+}
+
+static void memcg_bulk_charge_hook(struct obj_cgroup *objcgp,
+				   struct page *page)
+{
+}
+
+static void memcg_bulk_post_charge_hook(struct obj_cgroup *objcg,
+					unsigned int nr_pages)
+{
+}
+#endif
+
 /*
  * __alloc_pages_bulk - Allocate a number of order-0 pages to a list or array
 * @gfp: GFP flags for the allocation
@@ -5207,6 +5256,8 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	gfp_t alloc_gfp;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
 	int nr_populated = 0, nr_account = 0;
+	unsigned int nr_pre_charge = 0;
+	struct obj_cgroup *objcg = NULL;
 
 	/*
 	 * Skip populated array elements to determine if any pages need
@@ -5275,6 +5326,10 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	if (unlikely(!zone))
 		goto failed;
 
+	nr_pre_charge = nr_pages - nr_populated;
+	if (!memcg_bulk_pre_charge_hook(&objcg, gfp, nr_pre_charge))
+		goto failed;
+
 	/* Attempt the batch allocation */
 	local_lock_irqsave(&pagesets.lock, flags);
 	pcp = this_cpu_ptr(zone->per_cpu_pageset);
@@ -5287,9 +5342,9 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 			nr_populated++;
 			continue;
 		}
-
 		page = __rmqueue_pcplist(zone, 0, ac.migratetype, alloc_flags,
 								pcp, pcp_list);
+
 		if (unlikely(!page)) {
 			/* Try and get at least one page */
 			if (!nr_populated)
@@ -5297,6 +5352,8 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 			break;
 		}
 		nr_account++;
+		if (objcg)
+			memcg_bulk_charge_hook(objcg, page);
 
 		prep_new_page(page, 0, gfp, 0);
 		if (page_list)
@@ -5310,13 +5367,16 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	__count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
 	zone_statistics(ac.preferred_zoneref->zone, zone, nr_account);
 
+	if (objcg)
+		memcg_bulk_post_charge_hook(objcg, nr_pre_charge - nr_account);
 out:
 	return nr_populated;
 
 failed_irq:
 	local_unlock_irqrestore(&pagesets.lock, flags);
-
+	if (objcg)
+		memcg_bulk_post_charge_hook(objcg, nr_pre_charge);
 failed:
 	page = __alloc_pages(gfp, 0, preferred_nid, nodemask);
 	if (page) {
-- 
2.31.1