Date: Mon, 2 Nov 2020 14:44:49 +0000
From: Matthew Wilcox
To: Chris Goldsworthy
Cc: Andrew Morton, Minchan Kim, Nitin Gupta, Sergey Senozhatsky, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/2] Increasing CMA Utilization with a GFP Flag
Message-ID: <20201102144449.GM27442@casper.infradead.org>

On Mon, Nov 02, 2020 at 06:39:20AM -0800, Chris Goldsworthy wrote:
> The current approach to increasing CMA utilization, introduced in
> commit 16867664936e ("mm,page_alloc,cma: conditionally prefer cma
> pageblocks for movable allocations"), redirects MIGRATE_MOVABLE
> allocations to a CMA region when more than half of the free pages
> in a given zone are CMA pages.  The issue with this approach is
> that allocations of type MIGRATE_MOVABLE can still succumb to
> pinning.  To get around this, one approach is to redirect only
> those allocations to CMA areas that are known not to be victims
> of pinning.
>
> To this end, this series brings in __GFP_CMA, which we use to mark
> allocations that we know are safe to redirect to a CMA area.

This feels backwards to me.  What you're essentially saying is "Some
allocations marked with GFP_MOVABLE turn out not to be movable, so
we're going to add another GFP_REALLY_MOVABLE flag" instead of
tracking down which GFP_MOVABLE allocations aren't really movable.