From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, virtio-dev@lists.oasis-open.org,
    virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
    Michal Hocko, Andrew Morton, "Michael S. Tsirkin", David Hildenbrand,
    Jason Wang, Oscar Salvador, Igor Mammedov, Dave Young, Dan Williams,
    Pavel Tatashin, Stefan Hajnoczi, Vlastimil Babka
Subject: [PATCH v1 07/11] virtio-mem: Allow to offline partially unplugged memory blocks
Date: Mon, 2 Mar 2020 14:49:37 +0100
Message-Id: <20200302134941.315212-8-david@redhat.com>
In-Reply-To: <20200302134941.315212-1-david@redhat.com>
References: <20200302134941.315212-1-david@redhat.com>

Dropping the reference count of PageOffline() pages allows the offlining
code to skip them. However, we also have to convert PG_reserved to
another flag - let's use PG_dirty - so has_unmovable_pages() will handle
them properly: PG_reserved pages get detected as unmovable right away.
We still need a flag to tell whether we are onlining pages for the first
time or whether we allocated them via alloc_contig_range().
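To illustrate the flag change described above, here is a minimal sketch of
marking a range of pages fake-offline (hypothetical helper name and
signature; the driver's actual implementation is
virtio_mem_set_fake_offline(), modified in the hunk further below):

#include <linux/mm.h>
#include <linux/page-flags.h>

/*
 * Sketch only, not driver code: mark nr_pages pages starting at pfn as
 * fake-offline. PG_dirty replaces PG_reserved as the "never onlined"
 * marker, so has_unmovable_pages() no longer reports the pages as
 * unmovable right away, while later code can still tell first-time
 * onlining apart from pages obtained via alloc_contig_range().
 */
static void sketch_set_fake_offline(unsigned long pfn, unsigned int nr_pages,
				    bool onlined)
{
	while (nr_pages--) {
		struct page *page = pfn_to_page(pfn++);

		__SetPageOffline(page);
		if (!onlined) {
			SetPageDirty(page);
			ClearPageReserved(page);
		}
	}
}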
Properly take care of the offlining code also modifying the stats, and of
the special handling required in case the driver gets unloaded.

Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Igor Mammedov
Cc: Dave Young
Cc: Andrew Morton
Cc: Dan Williams
Cc: Pavel Tatashin
Cc: Stefan Hajnoczi
Cc: Vlastimil Babka
Signed-off-by: David Hildenbrand
---
 drivers/virtio/virtio_mem.c | 64 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 63 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 5b26d57be551..2916f8b970fa 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -570,6 +570,53 @@ static void virtio_mem_notify_online(struct virtio_mem *vm, unsigned long mb_id,
 		virtio_mem_retry(vm);
 }
 
+static void virtio_mem_notify_going_offline(struct virtio_mem *vm,
+					    unsigned long mb_id)
+{
+	const unsigned long nr_pages = PFN_DOWN(vm->subblock_size);
+	unsigned long pfn;
+	int sb_id, i;
+
+	for (sb_id = 0; sb_id < vm->nb_sb_per_mb; sb_id++) {
+		if (virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1))
+			continue;
+		/*
+		 * Drop our reference to the pages so the memory can get
+		 * offlined and add the unplugged pages to the managed
+		 * page counters (so offlining code can correctly subtract
+		 * them again).
+		 */
+		pfn = PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) +
+			       sb_id * vm->subblock_size);
+		adjust_managed_page_count(pfn_to_page(pfn), nr_pages);
+		for (i = 0; i < nr_pages; i++)
+			page_ref_dec(pfn_to_page(pfn + i));
+	}
+}
+
+static void virtio_mem_notify_cancel_offline(struct virtio_mem *vm,
+					     unsigned long mb_id)
+{
+	const unsigned long nr_pages = PFN_DOWN(vm->subblock_size);
+	unsigned long pfn;
+	int sb_id, i;
+
+	for (sb_id = 0; sb_id < vm->nb_sb_per_mb; sb_id++) {
+		if (virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1))
+			continue;
+		/*
+		 * Get the reference we dropped when going offline and
+		 * subtract the unplugged pages from the managed page
+		 * counters.
+		 */
+		pfn = PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) +
+			       sb_id * vm->subblock_size);
+		adjust_managed_page_count(pfn_to_page(pfn), -nr_pages);
+		for (i = 0; i < nr_pages; i++)
+			page_ref_inc(pfn_to_page(pfn + i));
+	}
+}
+
 /*
  * This callback will either be called synchronously from add_memory() or
  * asynchronously (e.g., triggered via user space). We have to be careful
@@ -616,6 +663,7 @@ static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
 			break;
 		}
 		vm->hotplug_active = true;
+		virtio_mem_notify_going_offline(vm, mb_id);
 		break;
 	case MEM_GOING_ONLINE:
 		mutex_lock(&vm->hotplug_mutex);
@@ -640,6 +688,12 @@ static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
 		mutex_unlock(&vm->hotplug_mutex);
 		break;
 	case MEM_CANCEL_OFFLINE:
+		if (!vm->hotplug_active)
+			break;
+		virtio_mem_notify_cancel_offline(vm, mb_id);
+		vm->hotplug_active = false;
+		mutex_unlock(&vm->hotplug_mutex);
+		break;
 	case MEM_CANCEL_ONLINE:
 		if (!vm->hotplug_active)
 			break;
@@ -666,8 +720,11 @@ static void virtio_mem_set_fake_offline(unsigned long pfn,
 		struct page *page = pfn_to_page(pfn);
 
 		__SetPageOffline(page);
-		if (!onlined)
+		if (!onlined) {
 			SetPageDirty(page);
+			/* FIXME: remove after cleanups */
+			ClearPageReserved(page);
+		}
 	}
 }
 
@@ -1717,6 +1774,11 @@ static void virtio_mem_remove(struct virtio_device *vdev)
 		rc = virtio_mem_mb_remove(vm, mb_id);
 		BUG_ON(rc);
 	}
+	/*
+	 * After we unregistered our callbacks, user space can no longer
+	 * offline partially plugged online memory blocks. No need to worry
+	 * about them.
+	 */
 
 	/* unregister callbacks */
 	unregister_virtio_mem_device(vm);
-- 
2.24.1
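The two notifier helpers added by this patch are exact inverses of each
other. As a condensed sketch of the bookkeeping per unplugged subblock
(hypothetical helper names; the real code is
virtio_mem_notify_going_offline()/virtio_mem_notify_cancel_offline()
above):

#include <linux/mm.h>
#include <linux/page_ref.h>

/* Sketch only: nr_pages unplugged pages starting at pfn. */
static void sketch_going_offline(unsigned long pfn, unsigned long nr_pages)
{
	unsigned long i;

	/* Account the pages as managed so the offlining code can subtract them... */
	adjust_managed_page_count(pfn_to_page(pfn), nr_pages);
	/* ...and drop our reference so they are skipped while offlining. */
	for (i = 0; i < nr_pages; i++)
		page_ref_dec(pfn_to_page(pfn + i));
}

static void sketch_cancel_offline(unsigned long pfn, unsigned long nr_pages)
{
	unsigned long i;

	/* Exact inverse: undo the accounting and take our reference back. */
	adjust_managed_page_count(pfn_to_page(pfn), -nr_pages);
	for (i = 0; i < nr_pages; i++)
		page_ref_inc(pfn_to_page(pfn + i));
}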