From: Alexander Duyck
Date: Tue, 19 Feb 2019 13:57:14 -0800
Subject: Re: [RFC][Patch v8 0/7] KVM: Guest Free Page Hinting
To: David Hildenbrand
Cc: Nitesh Narayan Lal, "Michael S. Tsirkin", kvm list, LKML,
 Paolo Bonzini, lcapitulino@redhat.com, pagupta@redhat.com,
 wei.w.wang@intel.com, Yang Zhang, Rik van Riel, dodgen@google.com,
 Konrad Rzeszutek Wilk, dhildenb@redhat.com, Andrea Arcangeli
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Feb 19, 2019 at 10:32 AM David Hildenbrand wrote:
>
> >>> This essentially just ends up being another trade-off of CPU versus
> >>> memory though.
> >>> Assuming we aren't using THP we are going to take a
> >>> penalty in terms of performance but could then free individual pages
> >>> less than HUGETLB_PAGE_ORDER, but the CPU utilization is going to be
> >>> much higher in general even without the hinting. I figure for x86 we
> >>> probably don't have too many options since if I am not mistaken
> >>> MAX_ORDER is just one or two more than HUGETLB_PAGE_ORDER.
> >>
> >> THP is an implementation detail in the hypervisor. Yes, it is the common
> >> case on x86. But it is e.g. not available on s390x yet. And we also want
> >> this mechanism to work on s390x (e.g. for nested virtualization setups
> >> as discussed).
> >>
> >> If we e.g. report any granularity after merging was done in the buddy,
> >> we could end up reporting everything from page size up to MAX_SIZE - 1,
> >> the hypervisor could ignore hints below a certain magic number, if it
> >> makes its life easier.
> >
> > For each architecture we can do a separate implementation of what to
> > hint on. We already do that for bare metal, so why would we have guests
> > do the same type of hinting in the virtualization case when there are
> > fundamental differences in page size and features in each
> > architecture?
> >
> > This is another reason why I think the hypercall approach is a better
> > idea, since each architecture is likely going to want to handle things
> > differently and it would be a pain to try and sort that all out in a
> > virtio driver.
>
> I can't follow. We are talking about something as simple as a minimum
> page granularity here that can easily be configured. Nothing that
> screams for different implementations. But I get your point, we could
> tune for different architectures.

I was thinking about the guest side of things. Basically, if we need to
define different page orders for different architectures then we start
needing to do architecture-specific includes.
Then if we throw in stuff like the fact that the first level of KVM can
make use of the host-style hints, that is another thing that will be
different between architectures. I'm just worried this stuff is going to
start adding up to a bunch of "#ifdef" cruft if we are trying to do this
as a virtio driver.

> >
> >>>
> >>> As far as fragmentation my thought is that we may want to look into
> >>> adding support to the guest for prioritizing defragmentation on pages
> >>> lower than THP size. Then that way we could maintain the higher
> >>> overall performance with or without the hinting since shuffling lower
> >>> order pages around between guests would start to get expensive pretty
> >>> quick.
> >>
> >> My take would be, design an interface/mechanism that allows any kind of
> >> granularity. You can then balance between CPU overhead and space shifting.
> >
> > The problem with using "any kind of granularity" is that in the case
> > of memory we are already having problems with 4K pages being deemed
> > too small of a granularity to be useful for anything and making
> > operations too expensive.
>
> No, sorry, s390x does it. And via batch reporting it could work. Not
> saying we should do page granularity, but "to be useful for anything" is
> just wrong.

Yeah, I was engaging in a bit of hyperbole. I have had a headache this
morning so I am a bit cranky.

So I am assuming the batching is the reason why you also have an
arch_alloc_page hook for s390, so that you can abort the hint if a page
is reallocated before the hint is processed? I just want to confirm that
my understanding of this is correct. If that is the case I would be much
happier with an asynchronous page hinting setup, as it doesn't deprive
the guest of memory while waiting on the hint.
The current logic in the patches from Nitesh has the pages unavailable
to the guest while waiting on the hint, and that has me somewhat
concerned: it is going to hurt cache locality, since it guarantees that
we cannot reuse the same page if we are doing a cycle of alloc and free
for the same page size.

> >
> > I'm open to using other page orders for other architectures. Nothing
> > says we have to stick with THP-sized pages for all architectures. I
> > have just been focused on x86, and this seems like the best fit for
> > the balance between CPU and freeing of memory for now on that
> > architecture.
> >
> >> I feel like repeating myself, but on s390x hinting is done on page
> >> granularity, and I have never heard somebody say "how can I turn it off,
> >> this is slowing down my system too much". All we know is that one
> >> hypercall per free is most probably not acceptable. We really have to
> >> play with the numbers.
> >
> > My thought was we could look at doing different implementations for
> > other architectures such as s390 and PowerPC. Odds are the
> > implementations would be similar but have slight differences where
> > appropriate, such as what order we should start hinting on, or whether
> > we bypass the hypercall/virtio-balloon for a host-native approach if
> > available.
> >
> >> I tend to like an asynchronous reporting approach as discussed in this
> >> thread; we would have to see if Nitesh could get it implemented.
> >
> > I agree it would be great if it could work. However, I have concerns
> > given that work on this patch set dates back to 2017, major issues
> > such as working around device assignment have yet to be addressed, and
> > it seems like most of the effort is being focused on things that in my
> > opinion are being over-engineered for little to no benefit.
>
> I can understand that you are trying to push your solution. I would do
> the same. Again, I don't like a pure synchronous approach that works on
> one-element-at-a-time. Period.
> Other people might have other opinions.
> This is mine - luckily I don't have anything to say here :)
>
> MST also voted for an asynchronous solution if we can make it work.
> Nitesh made significant improvements since 2017. Complicated stuff
> needs time. No need to rush. People have been talking about free page
> hinting since 2006. I talked to various people that experimented with
> bitmap-based solutions two years ago.

Now that I think I have a better understanding of how s390x is handling
this, I'm beginning to come around to the idea of an asynchronous setup.
The one thing that has been bugging me about the asynchronous approach
is that the pages are not available to the guest while waiting on the
hint to be completed. If we can do something like an arch_alloc_page
that aborts the hint and allows us to keep the page available while
waiting on it, that would be my preferred way of handling this.

> So much to that, if you think your solution is the way to go, please
> follow up on it. Nitesh seems to have decided to look into the
> asynchronous approach you also called "great if it could work". As long
> as we don't run into elementary blockers there, to me it all looks like
> we are making progress, which is good. If we find out asynchronous
> doesn't work, synchronous is the only alternative.

I plan to follow up in the next week or so.

> And just so you don't get me wrong: Thanks for looking and working on
> this. And thanks for sharing your opinions and insights! However, making
> a decision about going your way at this point does not seem reasonable
> to me. We have plenty of time.

I appreciate the feedback. Sorry if I seemed a bit short. As I
mentioned, I've had a headache most of the morning, which hasn't really
helped my mood.

Thanks.

- Alex