Date: Thu, 18 Apr 2019 12:16:43 -0600
From: Keith Busch
To: Dave Hansen
Cc: Michal Hocko, Yang Shi, mgorman@techsingularity.net, riel@surriel.com,
    hannes@cmpxchg.org, akpm@linux-foundation.org, dan.j.williams@intel.com,
    fengguang.wu@intel.com, fan.du@intel.com, ying.huang@intel.com,
    ziy@nvidia.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [v2 RFC PATCH 0/9] Another Approach to Use PMEM as NUMA Node
On Wed, Apr 17, 2019 at 10:13:44AM -0700, Dave Hansen wrote:
> On 4/17/19 2:23 AM, Michal Hocko wrote:
> > Yes. This could be achieved by GFP_NOWAIT opportunistic allocation
> > for the migration target. That should prevent loops or artificial
> > node exhaustion quite naturally, AFAICS. Maybe we will need some
> > tricks to raise the watermark, but I am not convinced something
> > like that is really necessary.
>
> I don't think GFP_NOWAIT alone is good enough.
>
> Let's say we have a system full of clean page cache and only two
> nodes: 0 and 1. GFP_NOWAIT will eventually kick off kswapd on both
> nodes. Each kswapd will be migrating pages to the *other* node since
> each is in the other's fallback path.
>
> I think what you're saying is that, eventually, the kswapds will see
> allocation failures and stop migrating, providing hysteresis. This
> is probably true.
>
> But I'm more concerned about the window where the kswapds are
> throwing pages at each other, because they're effectively just
> wasting resources in that window. I guess we should figure out how
> large this window is and how fast (or if) the dampening occurs in
> practice.

I'm still refining tests to help answer this, and have some
preliminary data. My test rig has a CPU + memory Node 0, a memory-only
Node 1, and a fast swap device. The test runs an application that
strict-mbinds more memory than Node 0's capacity to Node 0 and forever
writes random cachelines from per-CPU threads.

I'm testing two memory pressure policies:

  1) Node 0 can migrate to Node 1, no cycles
  2) Node 0 and Node 1 migrate with each other (0 -> 1 -> 0 cycles)

After the initial ramp-up time, the second policy is ~7-10% slower
than no cycles. There doesn't appear to be a temporary window dealing
with bouncing pages: it's just a slower overall steady state. Looks
like when migration fails and falls back to swap, the newly freed
pages occasionally get sniped by the other node, keeping the pressure
up.
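
For concreteness, the test application is roughly the sketch below
(simplified: error handling is omitted and the buffer size is an
illustrative placeholder, not the value from my rig). Build with
-D_GNU_SOURCE and link with -lnuma -lpthread:

#include <numaif.h>       /* mbind(), MPOL_BIND */
#include <pthread.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define CACHELINE       64UL

static char *buf;
static size_t buf_len;

/* Simple 64-bit PRNG so the index covers the whole buffer. */
static inline uint64_t xorshift64(uint64_t *s)
{
        *s ^= *s << 13;
        *s ^= *s >> 7;
        *s ^= *s << 17;
        return *s;
}

static void *writer(void *arg)
{
        uint64_t seed = (uintptr_t)arg;    /* nonzero per-thread seed */

        /* Forever dirty random cachelines in the bound buffer. */
        for (;;) {
                size_t line = xorshift64(&seed) % (buf_len / CACHELINE);

                buf[line * CACHELINE]++;
        }
        return NULL;
}

int main(void)
{
        unsigned long nodemask = 1UL << 0;      /* Node 0 only */
        long i, ncpus = sysconf(_SC_NPROCESSORS_ONLN);

        /* Illustrative size: pick something larger than Node 0's memory. */
        buf_len = 300UL << 30;
        buf = mmap(NULL, buf_len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

        /* Strict bind: every page of the buffer must come from Node 0. */
        mbind(buf, buf_len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0);

        /* One writer thread pinned to each CPU. */
        for (i = 0; i < ncpus; i++) {
                pthread_t t;
                cpu_set_t cpus;

                CPU_ZERO(&cpus);
                CPU_SET(i, &cpus);
                pthread_create(&t, NULL, writer, (void *)(uintptr_t)(i + 1));
                pthread_setaffinity_np(t, sizeof(cpus), &cpus);
        }
        pause();
        return 0;
}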
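
And for reference, the way I read the GFP_NOWAIT opportunistic
allocation Michal describes above is a migration-target callback along
these lines (just a sketch of the idea, not code from the posted
series):

static struct page *alloc_migrate_target(struct page *page,
                                         unsigned long node)
{
        /*
         * GFP_NOWAIT keeps us out of direct reclaim on the target
         * node, so when the target is full the migration simply fails
         * instead of pushing the pressure onward, and __GFP_THISNODE
         * prevents falling back to other nodes' zonelists.
         */
        return __alloc_pages_node((int)node,
                                  GFP_NOWAIT | __GFP_NOWARN |
                                  __GFP_THISNODE, 0);
}

Something like that would be handed to migrate_pages() as the
new_page_t callback with the target node as the private argument; a
failed allocation just leaves the page where it is for normal reclaim
to deal with.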