Date: Tue, 28 Mar 2023 18:11:20 +0300
From: Mike Rapoport
To: Michal Hocko
Cc: linux-mm@kvack.org, Andrew Morton, Dave Hansen, Peter Zijlstra,
    Rick Edgecombe, Song Liu, Thomas Gleixner, Vlastimil Babka,
    linux-kernel@vger.kernel.org, x86@kernel.org
Subject: Re: [RFC PATCH 1/5] mm: intorduce __GFP_UNMAPPED and unmapped_alloc()
References: <20230308094106.227365-1-rppt@kernel.org>
 <20230308094106.227365-2-rppt@kernel.org>

On Tue, Mar 28, 2023 at 09:39:37AM +0200, Michal Hocko wrote:
> On Tue 28-03-23 09:25:35, Mike Rapoport wrote:
> > On Mon, Mar 27, 2023 at 03:43:27PM +0200, Michal Hocko wrote:
> > > On Sat 25-03-23 09:38:12, Mike Rapoport wrote:
> > > > On Fri, Mar 24, 2023 at 09:37:31AM +0100, Michal Hocko wrote:
> > > > > On Wed 08-03-23 11:41:02, Mike Rapoport wrote:
> > > > > > From: "Mike Rapoport (IBM)"
> > > > > >
> > > > > > When the set_memory or set_direct_map APIs are used to change
> > > > > > attributes or permissions for chunks of several pages, the large
> > > > > > PMD that maps these pages in the direct map must be split.
> > > > > > Fragmenting the direct map in such a manner causes TLB pressure
> > > > > > and, eventually, performance degradation.
> > > > > >
> > > > > > To avoid excessive direct map fragmentation, add the ability to
> > > > > > allocate "unmapped" pages with a __GFP_UNMAPPED flag that will
> > > > > > cause removal of the allocated pages from the direct map and use
> > > > > > a cache of the unmapped pages.
> > > > > >
> > > > > > This cache is replenished with higher order pages, with a
> > > > > > preference for PMD_SIZE pages when possible, so that there will
> > > > > > be fewer splits of large pages in the direct map.
> > > > > >
> > > > > > The cache is implemented as a buddy allocator, so it can serve
> > > > > > high order allocations of unmapped pages.
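
For the record, from a caller's point of view the proposal boils down to
something like the sketch below. grab_unmapped() and drop_unmapped() are
made-up names used purely for illustration; the only new name the series
actually adds here is the __GFP_UNMAPPED flag itself:

	#include <linux/gfp.h>

	/* Allocate 2^order pages that are removed from the direct map. */
	static struct page *grab_unmapped(unsigned int order)
	{
		struct page *page;

		/* __GFP_UNMAPPED is the flag proposed by this series */
		page = alloc_pages(GFP_KERNEL | __GFP_UNMAPPED, order);
		if (!page)
			return NULL;

		/*
		 * These pages are no longer present in the direct map, so
		 * the caller has to map them elsewhere (e.g. with vmap())
		 * before touching their contents.
		 */
		return page;
	}

	static void drop_unmapped(struct page *page, unsigned int order)
	{
		/*
		 * Per the description above, freed pages are expected to go
		 * back to the cache of unmapped pages rather than to the
		 * page allocator's normal free lists.
		 */
		__free_pages(page, order);
	}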
> > > > > Why do we need a dedicated gfp flag for all this when a dedicated
> > > > > allocator is used anyway? What prevents users from calling
> > > > > unmapped_pages_{alloc,free}?
> > > >
> > > > Using unmapped_pages_{alloc,free} adds complexity to the users,
> > > > which IMO outweighs the cost of a dedicated gfp flag.
> > >
> > > Aren't those users rare and very special anyway?
> > >
> > > > For modules we'd have to make x86::module_{alloc,free}() take care
> > > > of mapping and unmapping the allocated pages in the modules virtual
> > > > address range. This also might become relevant for other
> > > > architectures in the future, and then we'll have several complex
> > > > module_alloc()s.
> > >
> > > The module_alloc use is lacking any justification. More context would
> > > be more than useful. Also vmalloc support for the proposed
> > > __GFP_UNMAPPED likely needs more explanation as well.
> >
> > Right now module_alloc() boils down to vmalloc() with the virtual range
> > limited to the modules area. The allocated chunk contains both code and
> > data. When CONFIG_STRICT_MODULE_RWX is set, parts of the memory
> > allocated with module_alloc() are remapped with different permissions
> > both in the vmalloc address space and in the direct map. The change of
> > permissions for small ranges causes splits of large pages in the direct
> > map.
>
> OK, so you want to reduce that direct map fragmentation?

Yes.

> Is that a real problem?

A while ago Intel folks published a report [1] that showed better
performance with large pages in the direct map for the majority of
benchmarks.

> My impression is that modules are mostly a static thing. BPF
> might be a different thing though. I have a recollection that BPF guys
> were dealing with direct map fragmentation as well.

Modules are indeed static, but module_alloc() is used by anything that
allocates code pages, e.g. kprobes, ftrace and BPF. Besides, Thomas
mentioned that having code in 2M pages reduces iTLB pressure [2], but
that's not only about avoiding the splits in the direct map but also about
using large mappings in the modules address space.

BPF guys suggested an allocator for executable memory [3] mainly because
they've seen a performance improvement of 0.6% - 0.9% in their setups [4].

> > If we were to use unmapped_pages_alloc() in module_alloc(), we would
> > have to implement the part of vmalloc() that reserves the virtual
> > addresses and maps the allocated memory there in module_alloc().
>
> Another option would be to provide an allocator for the backing pages to
> vmalloc. But I do agree that a gfp flag is a less laborious way to
> achieve the same. So the primary question really is whether we really
> need vmalloc support for unmapped memory.

I'm not sure I follow here. module_alloc() is essentially an alias to
vmalloc(), so to reduce direct map fragmentation caused by code
allocations, the most sensible way IMO is to support unmapped memory in
vmalloc().

I also think vmalloc with unmapped pages can provide backing pages for the
execmem_alloc() Song proposed.
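
To make that concrete, x86's module_alloc() today is more or less the
sketch below (simplified from memory; the real implementation also handles
the KASLR load offset and the KASAN shadow):

	void *module_alloc(unsigned long size)
	{
		if (PAGE_ALIGN(size) > MODULES_LEN)
			return NULL;

		/* vmalloc, just with the VA range pinned to the modules area */
		return __vmalloc_node_range(size, MODULE_ALIGN,
					    MODULES_VADDR, MODULES_END,
					    GFP_KERNEL, PAGE_KERNEL,
					    VM_FLUSH_RESET_PERMS,
					    NUMA_NO_NODE,
					    __builtin_return_address(0));
	}

If vmalloc understood __GFP_UNMAPPED, the backing pages for this allocation
could come from the unmapped cache just by adding the flag to the gfp mask
passed here, instead of open-coding the VA reservation and page table setup
in module_alloc() itself.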
> --
> Michal Hocko
> SUSE Labs

[1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/
[2] https://lore.kernel.org/all/87mt86rbvy.ffs@tglx/
[3] https://lore.kernel.org/all/20221107223921.3451913-1-song@kernel.org/
[4] https://lore.kernel.org/bpf/20220707223546.4124919-1-song@kernel.org/

--
Sincerely yours,
Mike.