Date: Tue, 31 Oct 2023 12:22:16 -0400
From: Johannes Weiner
To: Michal Hocko
Cc: Gregory Price, linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org,
    linux-mm@kvack.org, ying.huang@intel.com, akpm@linux-foundation.org,
    aneesh.kumar@linux.ibm.com, weixugc@google.com, apopple@nvidia.com,
    tim.c.chen@intel.com, dave.hansen@intel.com, shy828301@gmail.com,
    gregkh@linuxfoundation.org, rafael@kernel.org, Gregory Price
Subject: Re: [RFC PATCH v3 0/4] Node Weights and Weighted Interleave
Message-ID: <20231031162216.GB3029315@cmpxchg.org>
References: <20231031003810.4532-1-gregory.price@memverge.com>
 <20231031152142.GA3029315@cmpxchg.org>

On Tue, Oct 31, 2023 at 04:56:27PM +0100, Michal Hocko wrote:
> On Tue 31-10-23 11:21:42, Johannes Weiner wrote:
> > On Tue, Oct 31, 2023 at 10:53:41AM +0100, Michal Hocko wrote:
> > > On Mon 30-10-23 20:38:06, Gregory Price wrote:
> > > > This patchset implements weighted interleave and adds a new sysfs
> > > > entry: /sys/devices/system/node/nodeN/accessM/il_weight.
> > > >
> > > > The il_weight of a node is used by mempolicy to implement weighted
> > > > interleave when `numactl --interleave=...` is invoked. By default
> > > > il_weight for a node is always 1, which preserves the default round
> > > > robin interleave behavior.
> > > >
> > > > Interleave weights may be set from 0-100, and denote the number of
> > > > pages that should be allocated from the node when interleaving
> > > > occurs.
> > > >
> > > > For example, if a node's interleave weight is set to 5, 5 pages
> > > > will be allocated from that node before the next node is scheduled
> > > > for allocations.
> > >
> > > I find this semantic rather weird TBH. First of all, why do you think
> > > it makes sense to have those weights global for all users? What if
> > > different applications have different views on how to spread their
> > > interleaved memory?
> > >
> > > I do get that you might have different tiers with largely different
> > > runtime characteristics, but why would you want to interleave them
> > > into a single mapping and have hard to predict runtime behavior?
> > >
> > > [...]
> > > > In this way it becomes possible to set an interleaving strategy
> > > > that fits the available bandwidth for the devices available on
> > > > the system. An example system:
> > > >
> > > > Node 0 - CPU+DRAM, 400GB/s BW (200 cross socket)
> > > > Node 1 - CPU+DRAM, 400GB/s BW (200 cross socket)
> > > > Node 2 - CXL Memory. 64GB/s BW, on Node 0 root complex
> > > > Node 3 - CXL Memory. 64GB/s BW, on Node 1 root complex
> > > >
> > > > In this setup, the effective weights for nodes 0-3 for a task
> > > > running on Node 0 may be [60, 20, 10, 10].
> > > >
> > > > This spreads memory out across devices which all have different
> > > > latency and bandwidth attributes, in a way that can maximize the
> > > > available resources.
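For illustration, the weight described above acts as a run length in the
interleave rotation: a node receives that many consecutive pages before
the rotation advances. A minimal userspace sketch of that selection logic
(simplified, not the actual mempolicy implementation; the weights are
just the [60, 20, 10, 10] example):

#include <stdio.h>

#define NR_NODES 4

/* Hypothetical weights taken from the example above. */
static const unsigned int node_weight[NR_NODES] = { 60, 20, 10, 10 };

int main(void)
{
	unsigned int node = 0, used = 0;

	for (unsigned long page = 0; page < 200; page++) {
		printf("page %3lu -> node %u\n", page, node);
		/* Hand out 'weight' consecutive pages, then move on. */
		if (++used >= node_weight[node]) {
			used = 0;
			node = (node + 1) % NR_NODES;
		}
	}
	return 0;
}

Over a long allocation stream this converges on a 60/20/10/10 split,
i.e. pages are distributed roughly in proportion to each node's usable
bandwidth in the example.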
> > >
> > > OK, so why is this any better than not using any memory policy and
> > > relying on demotion to push out cold memory down the tier hierarchy?
> > >
> > > What is the actual real life usecase, and what kind of benefits can
> > > you present?
> >
> > There are two things CXL gives you: additional capacity and additional
> > bus bandwidth.
> >
> > The promotion/demotion mechanism is good for the capacity usecase,
> > where you have a nice hot/cold gradient in the workingset and want
> > placement accordingly across faster and slower memory.
> >
> > The interleaving is useful when you have a flatter workingset
> > distribution and poorer access locality. In that case, the CPU caches
> > are less effective and the workload can be bus-bound. The workload
> > might fit entirely into DRAM, but concentrating it there is
> > suboptimal. Fanning it out in proportion to the relative performance
> > of each memory tier gives better results.
> >
> > We experimented with datacenter workloads on such machines last year
> > and found significant performance benefits:
> >
> > https://lore.kernel.org/linux-mm/YqD0%2FtzFwXvJ1gK6@cmpxchg.org/T/
>
> Thanks, this is a useful insight.
>
> > This hopefully also explains why it's a global setting. The usecase is
> > different from conventional NUMA interleaving, which is used as a
> > locality measure: spread shared data evenly between compute
> > nodes. This one isn't about locality - the CXL tier doesn't have local
> > compute. Instead, the optimal spread is based on hardware parameters,
> > which is a global property rather than a per-workload one.
>
> Well, I am not convinced about that TBH. Sure, it is probably a good fit
> for this specific CXL usecase, but it just doesn't fit into many others
> I can think of - e.g. proportional use of those tiers based on the
> workload - you get what you pay for.
>
> Is there any specific reason for not having a new interleave interface
> which defines weights for the nodemask? Is this because the policy
> itself is very dynamic, or is this more driven by simplicity of use?

A downside of *requiring* weights to be paired with the mempolicy is
that it's then the application that would have to figure out the
weights dynamically, instead of having a static host configuration. A
policy of "I want to be spread for optimal bus bandwidth" translates
between different hardware configurations, but the optimal weights will
vary depending on the type of machine a job runs on.

That doesn't mean there couldn't be usecases for having weights in the
policy as well in other scenarios, like you allude to above. It's just
that so far such usecases haven't really materialized or been spelled
out concretely. Maybe we just want both - a global default, and the
ability to override it locally.

Could you elaborate on the 'get what you pay for' usecase you
mentioned?
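For reference, the static host configuration mentioned above would live
in the sysfs knob from the cover letter, written once per machine by an
admin or init script. A sketch, assuming the RFC patches are applied;
the access class index (access0) and the weight values are placeholders:

#include <stdio.h>

/* Write a weight to the proposed per-node sysfs attribute. */
static int set_il_weight(int node, int access, unsigned int weight)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/node/node%d/access%d/il_weight",
		 node, access);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%u\n", weight);
	return fclose(f);
}

int main(void)
{
	/* Placeholder weights for the 2-socket + 2-CXL-node example. */
	static const unsigned int weights[] = { 60, 20, 10, 10 };

	for (int node = 0; node < 4; node++)
		if (set_il_weight(node, 0, weights[node]))
			fprintf(stderr, "failed to set weight for node %d\n",
				node);
	return 0;
}

A per-mempolicy override, if such a usecase firms up, could then start
from these host-wide defaults and adjust them for an individual
workload.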