Date: Wed, 27 Mar 2019 17:13:06 +0100
From: Michal Hocko
To: Dan Williams
Cc: Andrew Morton, Jérôme Glisse, Logan Gunthorpe, Toshi Kani, Jeff Moyer,
 Vlastimil Babka, stable, Linux MM, linux-nvdimm,
 Linux Kernel Mailing List
Subject: Re: [PATCH v5 00/10] mm: Sub-section memory hotplug support
Message-ID: <20190327161306.GM11927@dhcp22.suse.cz>
References: <155327387405.225273.9325594075351253804.stgit@dwillia2-desk3.amr.corp.intel.com>
 <20190322180532.GM32418@dhcp22.suse.cz>
 <20190325101945.GD9924@dhcp22.suse.cz>
 <20190326080408.GC28406@dhcp22.suse.cz>

On Tue 26-03-19 17:20:41, Dan Williams wrote:
> On Tue, Mar 26, 2019 at 1:04 AM Michal Hocko wrote:
> >
> > On Mon 25-03-19 13:03:47, Dan Williams wrote:
> > > On Mon, Mar 25, 2019 at 3:20 AM Michal Hocko wrote:
> > [...]
> > > > > User-defined memory namespaces have this problem, but 2MB is the
> > > > > default alignment and is sufficient for most uses.
> > > >
> > > > What prevents users from just using a larger alignment?
> > >
> > > Given that we are living with 64MB granularity on mainstream
> > > platforms for the foreseeable future, the reason users can't rely
> > > on a larger alignment to address the issue is that the physical
> > > alignment may change from one boot to the next.
> >
> > I would love to learn more about this inter-boot volatility. Could
> > you expand on that some more? I thought that the HW configuration
> > presented to the OS would be more or less stable unless the
> > underlying HW changes.
>
> Even if the configuration is static there can be hardware failures
> that prevent a DIMM, or a PCI device, from being included in the
> memory map. When that happens the BIOS needs to re-layout the map,
> and the result is not guaranteed to maintain the previous alignment.
>
> > > No, you can't just wish hardware / platform firmware won't do
> > > this, because there are not enough platform resources to give
> > > every hardware device a guaranteed alignment.
> >
> > Guarantee is one part, and I can see how nobody wants to give you
> > something that strong, but how often does that happen in real life?
>
> I expect a "rare" event to happen every day in a data-center fleet.
> Failure rates tend towards 100% daily occurrence at scale, and in
> this case the kernel has everything it needs to mitigate such an
> event.
>
> Setting aside the success rate of a software-alignment mitigation,
> the reason I am charging this hill again after a 2 year hiatus is the
> realization that this problem is more widespread than the original
> failing scenario. Back in 2017 the problem seemed limited to custom
> memmap= configurations and collisions between PMEM and System RAM.
> Now it is clear that the collisions can happen between PMEM regions
> and namespaces as well, and the problem spans platforms from multiple
> vendors. Here is the most recent collision problem:
> https://github.com/pmem/ndctl/issues/76, from a third-party platform.
>
> The fix for that issue uncovered a bug in the padding implementation,
> and a fix for that bug would result in even more hacks in the nvdimm
> code for what is a core kernel deficiency. Code review of those
> changes resulted in changing direction to go after the core
> deficiency.
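To make the alignment arithmetic in the quoted exchange concrete, below is a
minimal, self-contained sketch of how much of a namespace survives
section-granularity hotplug versus 2MB sub-sections. The 64MB section size
echoes the figure mentioned above (the exact value depends on the platform
and kernel config), and the address range is made up purely for illustration;
none of this is code from the patch set:

/*
 * Sketch: capacity lost when a PMEM range can only be mapped at
 * memory-section granularity.  Both the 64MB section size and the
 * example address range are assumptions for illustration.
 */
#include <stdio.h>

#define SZ_2M        (2ULL << 20)      /* default namespace alignment */
#define SECTION_SZ   (64ULL << 20)     /* assumed hotplug section size */

#define ALIGN_UP(x, a)    (((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a)  ((x) & ~((a) - 1))

int main(void)
{
	/* hypothetical namespace: 2MB-aligned, but not section-aligned */
	unsigned long long start = 0x142000000ULL;   /* 5 GiB + 32 MiB */
	unsigned long long size  = 16ULL << 30;       /* 16 GiB */
	unsigned long long end   = start + size;

	/* head and tail that do not fill a whole section are lost */
	unsigned long long sec_usable =
		ALIGN_DOWN(end, SECTION_SZ) - ALIGN_UP(start, SECTION_SZ);
	/* with 2MB sub-sections the whole range remains usable */
	unsigned long long sub_usable =
		ALIGN_DOWN(end, SZ_2M) - ALIGN_UP(start, SZ_2M);

	printf("section-granularity hotplug : %llu MiB usable\n",
	       sec_usable >> 20);
	printf("2MB sub-section hotplug     : %llu MiB usable\n",
	       sub_usable >> 20);
	return 0;
}

If the BIOS re-lays out the memory map on a later boot and the range shifts
by a few tens of megabytes, a different head and tail get truncated, which is
why picking a larger alignment once does not address the volatility described
in the thread.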
This kind of information, along with real-world examples, is exactly
what you should have added to the cover letter. The previous, very
vague claims were not really convincing or something that could be
considered a proper justification. Please do realize that people who
are not working with the affected HW are unlikely to have an idea how
serious or relevant those problems really are. People are asking for a
smaller memory hotplug granularity for other usecases (e.g. memory
ballooning into VMs) which are quite dubious, to be honest, and not
really worth all the code rework. If we are talking about something
that can be worked around elsewhere then that is preferred, because
the code base is not in excellent shape and putting more on top is
just going to cause more headaches.

I will try to find some time to review this more deeply (no promises
though, because time is hectic and this is not a simple feature). For
the future, please try harder to write up a proper justification and a
high-level design description which tells a bit about all the
important parts of the new scheme.
-- 
Michal Hocko
SUSE Labs