Date: Wed, 26 Jul 2023 16:38:11 -0700
From: Roman Gushchin
To: Johannes Weiner
Cc: Andrew Morton, Vlastimil Babka, Mel Gorman, Rik van Riel, Joonsoo Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: page_alloc: consume available CMA space first
References: <20230726145304.1319046-1-hannes@cmpxchg.org>
In-Reply-To: <20230726145304.1319046-1-hannes@cmpxchg.org>

On Wed, Jul 26, 2023 at 10:53:04AM -0400, Johannes Weiner wrote:
> On a memcache setup with heavy anon usage and no swap, we routinely
> see premature OOM kills with multiple gigabytes of free space left:
>
>     Node 0 Normal free:4978632kB [...] free_cma:4893276kB
>
> This free space turns out to be CMA. We set CMA regions aside for
> potential hugetlb users on all of our machines, figuring that even if
> there aren't any, the memory is available to userspace allocations.
>
> When the OOMs trigger, it's from unmovable and reclaimable allocations
> that aren't allowed to dip into CMA. The non-CMA regions meanwhile are
> dominated by the anon pages.
>
> Because we have more options for CMA pages, change the policy to
> always fill up CMA first. This reduces the risk of premature OOMs.

I suspect it might cause regressions on small(er) devices, where a
relatively small CMA area (megabytes) is often reserved for use by
various device drivers that can't handle allocation failures well
(even interim allocation failures). Startup time can regress too:
migrating pages out of CMA takes time.
And given the velocity of kernel upgrades on such devices, we won't
learn about it for the next couple of years.

> Movable pages can be migrated out of CMA when necessary, but we don't
> have a mechanism to migrate them *into* CMA to make room for unmovable
> allocations. The only recourse we have for these pages is reclaim,
> which due to a lack of swap is unavailable in our case.

Idk, should we introduce such a mechanism? Or use some alternative
heuristic that would be a better compromise between those who need CMA
allocations to always succeed and those who use large CMA areas for
opportunistic huge page allocations?

Of course, we can add a boot flag/sysctl/per-cma-area flag, but I
doubt we really want this.

Thanks!