Subject: Re: [v2 RFC PATCH 0/9] Another Approach to Use PMEM as NUMA Node
To: Keith Busch, Dave Hansen
Cc: Michal Hocko, mgorman@techsingularity.net, riel@surriel.com,
    hannes@cmpxchg.org, akpm@linux-foundation.org, dan.j.williams@intel.com,
    fengguang.wu@intel.com, fan.du@intel.com, ying.huang@intel.com,
    ziy@nvidia.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
From: Yang Shi <yang.shi@linux.alibaba.com>
Date: Thu, 18 Apr 2019 12:23:41 -0700

On 4/18/19 11:16 AM, Keith Busch wrote:
> On Wed, Apr 17, 2019 at 10:13:44AM -0700, Dave Hansen wrote:
>> On 4/17/19 2:23 AM, Michal Hocko wrote:
>>> Yes. This could be achieved by a GFP_NOWAIT opportunistic allocation
>>> for the migration target. That should prevent loops or artificial
>>> node exhaustion quite naturally, AFAICS. Maybe we will need some
>>> tricks to raise the watermark, but I am not convinced something like
>>> that is really necessary.
>>
>> I don't think GFP_NOWAIT alone is good enough.
>>
>> Let's say we have a system full of clean page cache and only two
>> nodes: 0 and 1. GFP_NOWAIT will eventually kick off kswapd on both
>> nodes. Each kswapd will be migrating pages to the *other* node since
>> each is in the other's fallback path.
>>
>> I think what you're saying is that, eventually, the kswapds will see
>> allocation failures and stop migrating, providing hysteresis. This is
>> probably true.
>>
>> But I'm more concerned about the window where the kswapds are
>> throwing pages at each other, because they're effectively just
>> wasting resources in that window. I guess we should figure out how
>> large this window is and how fast (or whether) the dampening occurs
>> in practice.
> I'm still refining tests to help answer this and have some preliminary
> data. My test rig has CPU + memory Node 0, memory-only Node 1, and a
> fast swap device. The test has an application that strict-mbinds more
> memory than node 0's capacity to node 0, and forever writes random
> cachelines from per-CPU threads.

Thanks for the test. A follow-up question: how big is each node? Is
node 1 bigger than node 0? Since PMEM typically has larger capacity,
I'm wondering whether the capacity difference may change the results.

> I'm testing two memory pressure policies:
>
>   Node 0 can migrate to Node 1, no cycles
>   Node 0 and Node 1 migrate with each other (0 -> 1 -> 0 cycles)
>
> After the initial ramp-up time, the second policy is ~7-10% slower
> than no cycles. There doesn't appear to be a temporary window dealing
> with bouncing pages: it's just a slower overall steady state. It looks
> like when migration fails and falls back to swap, the newly freed
> pages occasionally get sniped by the other node, keeping the pressure
> up.
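
For concreteness, Michal's GFP_NOWAIT suggestion upthread would amount
to a migration-target allocation callback along these lines. This is an
illustrative, untested sketch only, not an actual patch;
alloc_demote_page is a made-up name, though the shape matches the
new_page_t callback that migrate_pages() takes:

	/*
	 * Illustrative sketch: opportunistic allocation of a migration
	 * target on the fallback node.  GFP_NOWAIT does no direct
	 * reclaim, so if the target node is full the allocation simply
	 * fails and the page falls back to normal reclaim/swap -- but
	 * it may still wake the target's kswapd, which is exactly the
	 * ping-pong window Dave describes above.
	 */
	static struct page *alloc_demote_page(struct page *page,
					      unsigned long node)
	{
		gfp_t gfp = GFP_NOWAIT | __GFP_THISNODE | __GFP_NOWARN;

		return alloc_pages_node(node, gfp, 0);
	}

__GFP_THISNODE keeps the allocation from spilling onto yet another
node, so a failed attempt stays a cheap, local "no".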
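
For reference, the test Keith describes could look roughly like the
userspace sketch below. This is a hypothetical reconstruction, not his
actual rig; it assumes the mbind(2) wrapper from <numaif.h> (libnuma),
the 64 GB size is made up, and error handling and CPU-affinity pinning
are elided:

	/*
	 * Strict-mbind a buffer larger than node 0 to node 0, then
	 * dirty random cachelines forever from one thread per CPU.
	 */
	#define _GNU_SOURCE
	#include <numaif.h>	/* mbind(), MPOL_BIND, MPOL_MF_STRICT */
	#include <pthread.h>
	#include <stdint.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define CACHELINE	64

	static char *buf;
	static size_t len = 64UL << 30;	/* > node 0's capacity */

	static void *writer(void *arg)
	{
		unsigned int seed = (uintptr_t)arg;

		for (;;) {
			size_t line = rand_r(&seed) % (len / CACHELINE);
			buf[line * CACHELINE]++;  /* dirty one cacheline */
		}
		return NULL;
	}

	int main(void)
	{
		unsigned long nodemask = 1UL << 0;	/* node 0 only */
		long i, ncpus = sysconf(_SC_NPROCESSORS_ONLN);
		pthread_t t;

		buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		mbind(buf, len, MPOL_BIND, &nodemask, 2, MPOL_MF_STRICT);

		for (i = 0; i < ncpus; i++)
			pthread_create(&t, NULL, writer,
				       (void *)(uintptr_t)i);
		pause();
		return 0;
	}

MPOL_BIND with MPOL_MF_STRICT is what makes the over-commit bite: once
node 0 is full, every new write forces reclaim rather than a quiet
fallback to node 1.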
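
And the two pressure policies under test could be summarized as a
per-node demotion map, purely for illustration -- the names are made up
and no such table exists in the kernel as of this thread:

	#include <linux/nodemask.h>	/* MAX_NUMNODES */
	#include <linux/numa.h>		/* NUMA_NO_NODE */

	/* Hypothetical demotion targets; NUMA_NO_NODE ends the chain. */
	static int node_demotion[MAX_NUMNODES];

	static void set_policy_no_cycles(void)	/* 0 -> 1, stop */
	{
		node_demotion[0] = 1;
		node_demotion[1] = NUMA_NO_NODE;
	}

	static void set_policy_cycles(void)	/* 0 -> 1 -> 0 */
	{
		node_demotion[0] = 1;
		node_demotion[1] = 0;
	}

The ~7-10% steady-state gap Keith measures is then the cost of the
second map's cycle: node 1's reclaim can hand pages straight back to
the node that is already under pressure.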