Date: Thu, 6 Feb 2020 04:11:10 -0500
From: "Michael S. Tsirkin"
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	virtualization@lists.linux-foundation.org, Tyler Sanderson,
	Wei Wang, Alexander Duyck, David Rientjes, Nadav Amit,
	Michal Hocko
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM
Message-ID: <20200206040958-mutt-send-email-mst@kernel.org>
References: <20200205163402.42627-1-david@redhat.com> <20200205163402.42627-4-david@redhat.com>
In-Reply-To: <20200205163402.42627-4-david@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Feb 05, 2020 at 05:34:02PM +0100, David Hildenbrand wrote:
> Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
> changed the behavior when deflation happens automatically. Instead of
> deflating when called by the OOM handler, the shrinker is used.
> 
> However, the balloon is not simply some slab cache that should be
> shrunk when under memory pressure. The shrinker does not have a concept of
> priorities, so this behavior cannot be configured.
> 
> There was a report that this results in undesired side effects when
> inflating the balloon to shrink the page cache. [1]
> 	"When inflating the balloon against page cache (i.e. no free memory
> 	 remains) vmscan.c will both shrink page cache, but also invoke the
> 	 shrinkers -- including the balloon's shrinker. So the balloon
> 	 driver allocates memory which requires reclaim, vmscan gets this
> 	 memory by shrinking the balloon, and then the driver adds the
> 	 memory back to the balloon. Basically a busy no-op."
> 
> The name "deflate on OOM" makes it pretty clear when deflation should
> happen - after other approaches to reclaim memory have failed, not while
> reclaiming. This allows minimizing the footprint of a guest - memory
> will only be taken out of the balloon when really needed.
> 
> In particular, a drop_slab() will result in the whole balloon getting
> deflated - undesired. While handling it via the OOM handler might not be
> perfect, it keeps existing behavior. If we want a different behavior, then
> we need a new feature bit and have to document it properly (although there
> should be a clear use case and the intended effects should be well
> described).
> 
> Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
> this has no such side effects. Always register the shrinker with
> VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
> pages that are still to be processed by the guest. The hypervisor takes
> care of identifying and resolving possible races between processing a
> hinting request and the guest reusing a page.
> 
> In contrast to the state before commit 71994620bb25 ("virtio_balloon:
> replace oom notifier with shrinker"), don't add a module parameter to
> configure the number of pages to deflate on OOM. It can be re-added if
> really needed. Also, note that leak_balloon() returns the number of 4k
> pages - convert it properly in virtio_balloon_oom_notify().
> 
> Note1: using the OOM handler is frowned upon, but it really is what we
> need for this feature.
> 
> Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
> could actually skip sending deflation requests to our hypervisor,
> making the OOM path *very* simple: basically freeing pages and
> updating the balloon - something to consider if the communication with
> the host ever becomes a problem on this call path.
> 
> [1] https://www.spinics.net/lists/linux-virtualization/msg40863.html
> 
> Reported-by: Tyler Sanderson
> Cc: Michael S. Tsirkin
> Cc: Wei Wang
> Cc: Alexander Duyck
> Cc: David Rientjes
> Cc: Nadav Amit
> Cc: Michal Hocko
> Signed-off-by: David Hildenbrand

So the revert looks OK from that POV, and with the commit log changes:

Acked-by: Michael S. Tsirkin

However, let's see what others think, and whether Wei can come up with a
fixup for the shrinker.
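As an aside, for anyone who has not used the OOM notifier chain in a while,
the mechanism the patch goes back to looks roughly like the sketch below.
This is a minimal, standalone illustration and not the driver code: all
demo_* names are made up, and the "reclaim" is a stub.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/oom.h>

/* Stub reclaim: would return how many guest pages were actually freed. */
static unsigned long demo_release_pages(unsigned long nr_pages)
{
	/* The real driver calls leak_balloon()/update_balloon_size() here. */
	return 0;
}

static int demo_oom_notify(struct notifier_block *nb, unsigned long dummy,
			   void *parm)
{
	unsigned long *freed = parm;	/* page count the OOM killer consults */

	*freed += demo_release_pages(256);
	return NOTIFY_OK;
}

static struct notifier_block demo_oom_nb = {
	.notifier_call	= demo_oom_notify,
	.priority	= 80,	/* same idea as VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY */
};

static int __init demo_init(void)
{
	return register_oom_notifier(&demo_oom_nb);
}

static void __exit demo_exit(void)
{
	unregister_oom_notifier(&demo_oom_nb);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The notifier only runs once the allocator has given up on reclaim, which is
exactly the "after other approaches failed" semantics the commit log argues
for, as opposed to a shrinker that is invoked on every reclaim pass.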
> ---
>  drivers/virtio/virtio_balloon.c | 107 +++++++++++++-------------------
>  1 file changed, 44 insertions(+), 63 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 7e5d84caeb94..e7b18f556c5e 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -14,6 +14,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -27,7 +28,9 @@
>   */
>  #define VIRTIO_BALLOON_PAGES_PER_PAGE (unsigned)(PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT)
>  #define VIRTIO_BALLOON_ARRAY_PFNS_MAX 256
> -#define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
> +/* Maximum number of (4k) pages to deflate on OOM notifications. */
> +#define VIRTIO_BALLOON_OOM_NR_PAGES 256
> +#define VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY 80
>  
>  #define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \
>  					     __GFP_NOMEMALLOC)
> @@ -112,8 +115,11 @@ struct virtio_balloon {
>  	/* Memory statistics */
>  	struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
>  
> -	/* To register a shrinker to shrink memory upon memory pressure */
> +	/* Shrinker to return free pages - VIRTIO_BALLOON_F_FREE_PAGE_HINT */
>  	struct shrinker shrinker;
> +
> +	/* OOM notifier to deflate on OOM - VIRTIO_BALLOON_F_DEFLATE_ON_OOM */
> +	struct notifier_block oom_nb;
>  };
>  
>  static struct virtio_device_id id_table[] = {
> @@ -786,50 +792,13 @@ static unsigned long shrink_free_pages(struct virtio_balloon *vb,
>  	return blocks_freed * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
>  }
>  
> -static unsigned long leak_balloon_pages(struct virtio_balloon *vb,
> -					unsigned long pages_to_free)
> -{
> -	return leak_balloon(vb, pages_to_free * VIRTIO_BALLOON_PAGES_PER_PAGE) /
> -	       VIRTIO_BALLOON_PAGES_PER_PAGE;
> -}
> -
> -static unsigned long shrink_balloon_pages(struct virtio_balloon *vb,
> -					  unsigned long pages_to_free)
> -{
> -	unsigned long pages_freed = 0;
> -
> -	/*
> -	 * One invocation of leak_balloon can deflate at most
> -	 * VIRTIO_BALLOON_ARRAY_PFNS_MAX balloon pages, so we call it
> -	 * multiple times to deflate pages till reaching pages_to_free.
> -	 */
> -	while (vb->num_pages && pages_freed < pages_to_free)
> -		pages_freed += leak_balloon_pages(vb,
> -						  pages_to_free - pages_freed);
> -
> -	update_balloon_size(vb);
> -
> -	return pages_freed;
> -}
> -
>  static unsigned long virtio_balloon_shrinker_scan(struct shrinker *shrinker,
>  						  struct shrink_control *sc)
>  {
> -	unsigned long pages_to_free, pages_freed = 0;
>  	struct virtio_balloon *vb = container_of(shrinker,
>  					struct virtio_balloon, shrinker);
>  
> -	pages_to_free = sc->nr_to_scan;
> -
> -	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> -		pages_freed = shrink_free_pages(vb, pages_to_free);
> -
> -	if (pages_freed >= pages_to_free)
> -		return pages_freed;
> -
> -	pages_freed += shrink_balloon_pages(vb, pages_to_free - pages_freed);
> -
> -	return pages_freed;
> +	return shrink_free_pages(vb, sc->nr_to_scan);
>  }
>  
>  static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
> @@ -837,26 +806,22 @@ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
>  {
>  	struct virtio_balloon *vb = container_of(shrinker,
>  					struct virtio_balloon, shrinker);
> -	unsigned long count;
> -
> -	count = vb->num_pages / VIRTIO_BALLOON_PAGES_PER_PAGE;
> -	count += vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
>  
> -	return count;
> +	return vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
>  }
>  
> -static void virtio_balloon_unregister_shrinker(struct virtio_balloon *vb)
> +static int virtio_balloon_oom_notify(struct notifier_block *nb,
> +				     unsigned long dummy, void *parm)
>  {
> -	unregister_shrinker(&vb->shrinker);
> -}
> +	struct virtio_balloon *vb = container_of(nb,
> +						 struct virtio_balloon, oom_nb);
> +	unsigned long *freed = parm;
>  
> -static int virtio_balloon_register_shrinker(struct virtio_balloon *vb)
> -{
> -	vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
> -	vb->shrinker.count_objects = virtio_balloon_shrinker_count;
> -	vb->shrinker.seeks = DEFAULT_SEEKS;
> +	*freed += leak_balloon(vb, VIRTIO_BALLOON_OOM_NR_PAGES) /
> +		  VIRTIO_BALLOON_PAGES_PER_PAGE;
> +	update_balloon_size(vb);
>  
> -	return register_shrinker(&vb->shrinker);
> +	return NOTIFY_OK;
>  }
>  
>  static int virtballoon_probe(struct virtio_device *vdev)
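One remark on virtio_balloon_oom_notify() above, since the commit log calls
out the unit conversion: leak_balloon() works in 4k balloon-page units, while
the OOM notifier's *freed counter is in guest PAGE_SIZE units. As a worked
example (a hypothetical 64k guest page size, not the common 4k case):
VIRTIO_BALLOON_PAGES_PER_PAGE = PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT
= 64k >> 12 = 16, so one notification deflates at most
VIRTIO_BALLOON_OOM_NR_PAGES = 256 balloon pages and reports at most
256 / 16 = 16 pages back through *freed. With 4k guest pages the divisor is
1 and the two units coincide.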
> @@ -933,22 +898,35 @@ static int virtballoon_probe(struct virtio_device *vdev)
>  			virtio_cwrite(vb->vdev, struct virtio_balloon_config,
>  				      poison_val, &poison_val);
>  		}
> -	}
> -	/*
> -	 * We continue to use VIRTIO_BALLOON_F_DEFLATE_ON_OOM to decide if a
> -	 * shrinker needs to be registered to relieve memory pressure.
> -	 */
> -	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
> -		err = virtio_balloon_register_shrinker(vb);
> +
> +		/*
> +		 * We're allowed to reuse any free pages, even if they are
> +		 * still to be processed by the host.
> +		 */
> +		vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
> +		vb->shrinker.count_objects = virtio_balloon_shrinker_count;
> +		vb->shrinker.seeks = DEFAULT_SEEKS;
> +		err = register_shrinker(&vb->shrinker);
>  		if (err)
>  			goto out_del_balloon_wq;
>  	}
> +	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
> +		vb->oom_nb.notifier_call = virtio_balloon_oom_notify;
> +		vb->oom_nb.priority = VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY;
> +		err = register_oom_notifier(&vb->oom_nb);
> +		if (err < 0)
> +			goto out_unregister_shrinker;
> +	}
> +
>  	virtio_device_ready(vdev);
>  
>  	if (towards_target(vb))
>  		virtballoon_changed(vdev);
>  	return 0;
>  
> +out_unregister_shrinker:
> +	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> +		unregister_shrinker(&vb->shrinker);
>  out_del_balloon_wq:
>  	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
>  		destroy_workqueue(vb->balloon_wq);
> @@ -987,8 +965,11 @@ static void virtballoon_remove(struct virtio_device *vdev)
>  {
>  	struct virtio_balloon *vb = vdev->priv;
>  
> -	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
> -		virtio_balloon_unregister_shrinker(vb);
> +	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
> +		unregister_oom_notifier(&vb->oom_nb);
> +	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> +		unregister_shrinker(&vb->shrinker);
> +
>  	spin_lock_irq(&vb->stop_update_lock);
>  	vb->stop_update = true;
>  	spin_unlock_irq(&vb->stop_update_lock);
> -- 
> 2.24.1
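For contrast, the shrinker side that stays registered for
VIRTIO_BALLOON_F_FREE_PAGE_HINT follows the usual count_objects/scan_objects
pattern. Again a standalone sketch, assuming a v5.x-era kernel with the plain
register_shrinker() API; demo_* names are made up and nothing is actually
reclaimed:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/shrinker.h>

static unsigned long demo_count(struct shrinker *s, struct shrink_control *sc)
{
	return 0;	/* 0 objects: vmscan will simply skip this shrinker */
}

static unsigned long demo_scan(struct shrinker *s, struct shrink_control *sc)
{
	/* Would free up to sc->nr_to_scan objects and return how many. */
	return SHRINK_STOP;
}

static struct shrinker demo_shrinker = {
	.count_objects	= demo_count,
	.scan_objects	= demo_scan,
	.seeks		= DEFAULT_SEEKS,
};

static int __init demo_init(void)
{
	return register_shrinker(&demo_shrinker);
}

static void __exit demo_exit(void)
{
	unregister_shrinker(&demo_shrinker);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Unlike the OOM notifier, this path gets invoked on every reclaim pass, which
is why it is harmless for returning hinted free pages but the wrong tool for
DEFLATE_ON_OOM.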