Date: Wed, 11 Jan 2023 18:02:03 +0200
From: Mike Rapoport
To: Michal Hocko
Cc: Jonathan Corbet, Andrew Morton, Bagas Sanjaya, David Hildenbrand,
 Johannes Weiner, Lorenzo Stoakes, "Matthew Wilcox (Oracle)", Mel Gorman,
 Vlastimil Babka, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: Re: [PATCH v2 2/2] docs/mm: Physical Memory: add structure,
 introduction and nodes description
Message-ID: 
References: <20230110152358.2641910-1-rppt@kernel.org>
 <20230110152358.2641910-3-rppt@kernel.org>
In-Reply-To: 

On Wed, Jan 11, 2023 at 02:36:16PM +0100, Michal Hocko wrote:
> On Wed 11-01-23 14:24:43, Mike Rapoport wrote:
> > On Tue, Jan 10, 2023 at 05:54:10PM +0100, Michal Hocko wrote:
> > > On Tue 10-01-23 17:23:58, Mike Rapoport wrote:
> > > [...]
> > > > +* ``ZONE_DMA`` and ``ZONE_DMA32`` represent memory suitable for DMA by
> > > > +  peripheral devices that cannot access all of the addressable memory.
> > > 
> > > I think it would be better not to keep the historical DMA based meaning
> > > and teach that to future developers. You can say something like
> > > 
> > > ZONE_DMA and ZONE_DMA32 have historically been used for memory suitable
> > > for DMA. For many years there have been better, more robust interfaces to
> > > get memory with DMA specific requirements (Documentation/core-api/dma-api.rst).
> > But even today ZONE_DMA(32) means that the memory is suitable for DMA. This
> > is nicely encapsulated with the DMA APIs and there should be no new GFP_DMA
> > users, but still memory outside ZONE_DMA is not suitable for DMA.
> 
> Well, the thing is that ZONE_DMA means a different thing for different
> architectures. For x86 it is effectively about ISA attached HW - which
> means almost nothing these days. There is a plethora of other HW with
> different address range constraints for DMA transfers, so binding the zone
> with DMA is more likely to cause confusion than to help.

Ok, how about

* ``ZONE_DMA`` and ``ZONE_DMA32`` historically represented memory suitable for
  DMA by peripheral devices that cannot access all of the addressable memory.
  For many years there have been better and more robust interfaces to get
  memory with DMA specific requirements (:ref:`DMA API <dma_api>`), but
  ``ZONE_DMA`` and ``ZONE_DMA32`` still represent memory ranges that have
  restrictions on how they can be accessed.
  Depending on the architecture, either of these zone types or even they both
  can be disabled at build time using ``CONFIG_ZONE_DMA`` and
  ``CONFIG_ZONE_DMA32`` configuration options. Some 64-bit platforms may need
  both zones as they support peripherals with different DMA addressing
  limitations.

> > > > +  Depending on the architecture, either of these zone types or even they both
> > > > +  can be disabled at build time using ``CONFIG_ZONE_DMA`` and
> > > > +  ``CONFIG_ZONE_DMA32`` configuration options. Some 64-bit platforms may need
> > > > +  both zones as they support peripherals with different DMA addressing
> > > > +  limitations.
> > > > +
> > > > +* ``ZONE_NORMAL`` is for normal memory that can be accessed by the kernel all
> > > > +  the time. DMA operations can be performed on pages in this zone if the DMA
> > > > +  devices support transfers to all addressable memory. ``ZONE_NORMAL`` is
> > > > +  always enabled.
> > > > +
> > > > +* ``ZONE_HIGHMEM`` is the part of the physical memory that is not covered by a
> > > > +  permanent mapping in the kernel page tables. The memory in this zone is only
> > > > +  accessible to the kernel using temporary mappings. This zone is available
> > > > +  only on some 32-bit architectures and is enabled with ``CONFIG_HIGHMEM``.
> > > > +
> > > > +* ``ZONE_MOVABLE`` is for normal accessible memory, just like ``ZONE_NORMAL``.
> > > > +  The difference is that most pages in ``ZONE_MOVABLE`` are movable.
> > > 
> > > This is really confusing because those pages are not really movable. You
> > > cannot move a page itself. I guess you meant to say something like
> > > 
> > > The difference is that there are means to migrate memory via the
> > > migrate_pages interface. A typical example would be memory mapped to
> > > userspace, where one can relocate the underlying memory content and
> > > update page tables so that userspace doesn't notice the physical data
> > > placement has changed.
> > 
> > I agree that this sentence is a bit confusing, but there's a clarification
> > below. Also, I'd like to keep this at a high level without going into the
> > details about how exactly the pages can be migrated.
> 
> Yes, ZONE_MOVABLE is confusing as well. I do not think you have to
> elaborate more than just stating that the memory should be migratable.

> > > > +  That means
> > > > +  that while virtual addresses of these pages do not change, their content may
> > > > +  move between different physical pages. ``ZONE_MOVABLE`` is only enabled when
> > > > +  one of ``kernelcore``, ``movablecore`` and ``movable_node`` parameters is
> > > > +  present in the kernel command line. See :ref:`Page migration
> > > > +  ` for additional details.
> > > 
> > > This is not really true. The movable zone can also be enabled by memory
> > > hotplug.
> > > In fact it is one of the more common use cases for the zone,
> > > because memory hot remove largely depends on memory being migrated for
> > > offlining to succeed in most cases.
> > 
> > Right. How about this version of the ZONE_MOVABLE description:
> > 
> > * ``ZONE_MOVABLE`` is for normal accessible memory, just like ``ZONE_NORMAL``.
> >   The difference is that the contents of most pages in ``ZONE_MOVABLE`` are
> >   movable. That means that while virtual addresses of these pages do not
> >   change, their content may move between different physical pages. Often
> >   ``ZONE_MOVABLE`` is populated during memory hotplug, but it may also be
> >   populated on boot using one of the ``kernelcore``, ``movablecore`` and
> >   ``movable_node`` kernel command line parameters. See :ref:`Page migration
> >   ` and :ref:`Memory Hot(Un)Plug <admin_guide_memory_hotplug>`
> >   for additional details.
> 
> Yes, sounds much better!
> 
> [...]
> 
> > > > +   1G                              9G                           17G
> > > > +  +--------------------------------+  +--------------------------+
> > > > +  | node 0                         |  | node 1                   |
> > > > +  +--------------------------------+  +--------------------------+
> > > > +
> > > > +   1G       4G        4200M        9G             9320M         17G
> > > > +  +---------+----------+-----------+  +------------+-------------+
> > > > +  |  DMA32  |  NORMAL  |  MOVABLE  |  |   NORMAL   |   MOVABLE   |
> > > > +  +---------+----------+-----------+  +------------+-------------+
> > > 
> > > I think it is useful to note that nodes and zones can overlap in the
> > > physical address range. It is not uncommon to interleave two nodes, and
> > > it is also possible that memory holes are memory hotplugged into the
> > > MOVABLE zone arbitrarily in the physical address range.
> > 
> > Hmm, not sure I understand what you mean by "overlap".
> > For interleaved nodes you mean that node 0 may span, say, [0x0, 0x2000) and
> > [0x4000, 0x6000) and node 1 spans [0x2000, 0x4000) and [0x6000, 0x8000)?
> 
> Yes.
> That would be represented by
> 
>   NODE_DATA(0)->start_pfn = 0
>   NODE_DATA(0)->node_spanned_pages = 0x6000
>   NODE_DATA(1)->start_pfn = 0x2000
>   NODE_DATA(1)->node_spanned_pages = 0x6000
> 
> > And as for the MOVABLE zone, you mean that it can appear between ranges of
> > the NORMAL zone?
> 
> Yes, and also other zones as well, but that is less likely as those tend
> to be populated from early boot. But theoretically it can be placed
> in any physical range with page block granularity.

Hmm, these are not easy to explain, but I'll try to come up with something.
I'd prefer to have this as a followup patch, though.

> -- 
> Michal Hocko
> SUSE Labs

-- 
Sincerely yours,
Mike.