Date: Thu, 28 Mar 2019 15:16:31 +0100
From: Michal Hocko
To: David Hildenbrand
Cc: Dan Williams, Andrew Morton, Jérôme Glisse, Logan Gunthorpe, Toshi Kani, Jeff Moyer, Vlastimil Babka, stable, Linux MM, linux-nvdimm, Linux Kernel Mailing List
Subject: Re: [PATCH v5 00/10] mm: Sub-section memory hotplug support
Message-ID: <20190328141631.GB7155@dhcp22.suse.cz>
References: <155327387405.225273.9325594075351253804.stgit@dwillia2-desk3.amr.corp.intel.com> <20190322180532.GM32418@dhcp22.suse.cz> <20190325101945.GD9924@dhcp22.suse.cz> <20190326080408.GC28406@dhcp22.suse.cz> <20190327161306.GM11927@dhcp22.suse.cz> <9e769f3d-00f2-a8bb-2d8d-097735cb2a6d@redhat.com>
In-Reply-To: <9e769f3d-00f2-a8bb-2d8d-097735cb2a6d@redhat.com>

On Thu 28-03-19 14:38:15, David Hildenbrand wrote:
> On 27.03.19 17:13, Michal Hocko wrote:
[...]
> > People are asking for a smaller memory hotplug granularity for other
> > usecases (e.g. memory ballooning into VMs) which are quite dubious to
> > be honest and not really worth all the code rework. If we are talking
> > about something that can be worked around elsewhere then it is preferred
> > because the code base is not in an excellent shape and putting more on
> > top is just going to cause more headaches.
>
> At least for virtio-mem, it will be handled similar to xen-balloon and
> hyper-v balloon, where whole sections are added and some parts are kept
> "soft-offline". But there, one device "owns" the complete section, it
> does not overlap with other devices. One section only has one owner.

This is exactly what I meant by handling at a higher level.

--
Michal Hocko
SUSE Labs
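
To make the ownership rule above concrete, here is a minimal, stand-alone
userspace C sketch of the "one section, one owner" invariant David
describes. The names (claim_section, section_owner, NR_SECTIONS) and the
fixed section count are illustrative assumptions only; virtio-mem and the
balloon drivers do their own bookkeeping inside the kernel.

/*
 * Minimal sketch (plain userspace C, not kernel code) of the
 * "one section, one owner" rule discussed above. All names here
 * (claim_section, section_owner, NR_SECTIONS) are made up for
 * illustration.
 */
#include <stdio.h>
#include <string.h>

#define NR_SECTIONS 16			/* pretend: 16 memory sections */

/* Name of the device owning each section; NULL means unowned. */
static const char *section_owner[NR_SECTIONS];

/*
 * A device may claim a section only if it is unowned or already its
 * own; sections are never shared between devices.
 */
static int claim_section(unsigned int nr, const char *device)
{
	if (nr >= NR_SECTIONS)
		return -1;
	if (section_owner[nr] && strcmp(section_owner[nr], device))
		return -1;		/* some other device owns it */
	section_owner[nr] = device;	/* the whole section goes to one owner */
	return 0;
}

int main(void)
{
	claim_section(3, "virtio-mem0");	/* device adds a whole section */
	if (claim_section(3, "hv-balloon"))	/* another device cannot share it */
		printf("section 3 already owned by %s\n", section_owner[3]);
	return 0;
}

In this model a device either takes a whole section or leaves it alone;
keeping parts "soft-offline" happens within a section that already has a
single owner, which is why the overlap problem the sub-section series
addresses does not arise there.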