From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
	Andrew Morton, Matthew Wilcox, David Hildenbrand, Oscar Salvador,
	Pankaj Gupta, Wei Yang, Alexander Duyck, Mel Gorman, Michal Hocko,
	Dave Hansen, Vlastimil Babka, Mike Rapoport, Scott Cheloha,
	Michael Ellerman
Subject: [PATCH v2 3/5] mm/page_alloc: move pages to tail in move_to_free_list()
Date: Mon, 5 Oct 2020 14:15:32 +0200
Message-Id: <20201005121534.15649-4-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>

Whenever we move pages between freelists via move_to_free_list()/
move_freepages_block(), we don't actually touch the pages:

1. Page isolation doesn't actually touch the pages, it simply isolates
   pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
   When undoing isolation, we move the pages back to the target list.
2. Page stealing (steal_suitable_fallback()) moves free pages directly
   between lists without touching them.
3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() moves
   free pages directly between freelists without touching them.

We already place pages to the tail of the freelists when undoing
isolation via __putback_isolated_page(); let's do it in any case (e.g.,
if order <= pageblock_order) and document the behavior. To simplify,
let's move the pages to the tail for all move_to_free_list()/
move_freepages_block() users.

In 2., the target list is empty, so there should be no change. In 3., we
might observe a change, however, highatomic is more concerned about
allocations succeeding than cache hotness - if we ever realize this
change degrades a workload, we can special-case this instance and add a
proper comment.

This change results in all pages getting onlined via online_pages() to
be placed to the tail of the freelist.

Reviewed-by: Oscar Salvador
Acked-by: Pankaj Gupta
Reviewed-by: Wei Yang
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: Scott Cheloha
Cc: Michael Ellerman
Signed-off-by: David Hildenbrand
---
 mm/page_alloc.c     | 10 +++++++---
 mm/page_isolation.c |  5 +++++
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df5ff0cd6df1..b187e46cf640 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -901,13 +901,17 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 	area->nr_free++;
 }
 
-/* Used for pages which are on another list */
+/*
+ * Used for pages which are on another list. Move the pages to the tail
+ * of the list - so the moved pages won't immediately be considered for
+ * allocation again (e.g., optimization for memory onlining).
+ */
 static inline void move_to_free_list(struct page *page, struct zone *zone,
 				     unsigned int order, int migratetype)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_move(&page->lru, &area->free_list[migratetype]);
+	list_move_tail(&page->lru, &area->free_list[migratetype]);
 }
 
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
@@ -2340,7 +2344,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
 #endif
 
 /*
- * Move the free pages in a range to the free lists of the requested type.
+ * Move the free pages in a range to the freelist tail of the requested type.
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index abfe26ad59fd..83692b937784 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -106,6 +106,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * If we isolate freepage with more than pageblock_order, there
 	 * should be no freepage in the range, so we could avoid costly
 	 * pageblock scanning for freepage moving.
+	 *
+	 * We didn't actually touch any of the isolated pages, so place them
+	 * to the tail of the freelist. This is an optimization for memory
+	 * onlining - just onlined memory won't immediately be considered for
+	 * allocation.
 	 */
 	if (!isolated_page) {
 		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
-- 
2.26.2