Date: Sun, 18 Oct 2020 20:38:10 +0800
From: Wei Yang <richard.weiyang@linux.alibaba.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	virtualization@lists.linux-foundation.org, Andrew Morton,
	"Michael S. Tsirkin", Jason Wang, Pankaj Gupta
Tsirkin" , Jason Wang , Pankaj Gupta Subject: Re: [PATCH v1 13/29] virtio-mem: factor out handling of fake-offline pages in memory notifier Message-ID: <20201018123810.GA51316@L-31X9LVDL-1304.local> Reply-To: Wei Yang References: <20201012125323.17509-1-david@redhat.com> <20201012125323.17509-14-david@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20201012125323.17509-14-david@redhat.com> Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, Oct 12, 2020 at 02:53:07PM +0200, David Hildenbrand wrote: >Let's factor out the core pieces and place the implementation next to >virtio_mem_fake_offline(). We'll reuse this functionality soon. > >Cc: "Michael S. Tsirkin" >Cc: Jason Wang >Cc: Pankaj Gupta >Signed-off-by: David Hildenbrand Reviewed-by: Wei Yang >--- > drivers/virtio/virtio_mem.c | 73 +++++++++++++++++++++++++------------ > 1 file changed, 50 insertions(+), 23 deletions(-) > >diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c >index d132bc54ef57..a2124892e510 100644 >--- a/drivers/virtio/virtio_mem.c >+++ b/drivers/virtio/virtio_mem.c >@@ -168,6 +168,10 @@ static LIST_HEAD(virtio_mem_devices); > > static void virtio_mem_online_page_cb(struct page *page, unsigned int order); > static void virtio_mem_retry(struct virtio_mem *vm); >+static void virtio_mem_fake_offline_going_offline(unsigned long pfn, >+ unsigned long nr_pages); >+static void virtio_mem_fake_offline_cancel_offline(unsigned long pfn, >+ unsigned long nr_pages); > > /* > * Register a virtio-mem device so it will be considered for the online_page >@@ -604,27 +608,15 @@ static void virtio_mem_notify_going_offline(struct virtio_mem *vm, > unsigned long mb_id) > { > const unsigned long nr_pages = PFN_DOWN(vm->subblock_size); >- struct page *page; > unsigned long pfn; >- int sb_id, i; >+ int sb_id; > > for (sb_id = 0; sb_id < vm->nb_sb_per_mb; sb_id++) { > if (virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1)) > continue; >- /* >- * Drop our reference to the pages so the memory can get >- * offlined and add the unplugged pages to the managed >- * page counters (so offlining code can correctly subtract >- * them again). >- */ > pfn = PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) + > sb_id * vm->subblock_size); >- adjust_managed_page_count(pfn_to_page(pfn), nr_pages); >- for (i = 0; i < nr_pages; i++) { >- page = pfn_to_page(pfn + i); >- if (WARN_ON(!page_ref_dec_and_test(page))) >- dump_page(page, "unplugged page referenced"); >- } >+ virtio_mem_fake_offline_going_offline(pfn, nr_pages); > } > } > >@@ -633,21 +625,14 @@ static void virtio_mem_notify_cancel_offline(struct virtio_mem *vm, > { > const unsigned long nr_pages = PFN_DOWN(vm->subblock_size); > unsigned long pfn; >- int sb_id, i; >+ int sb_id; > > for (sb_id = 0; sb_id < vm->nb_sb_per_mb; sb_id++) { > if (virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1)) > continue; >- /* >- * Get the reference we dropped when going offline and >- * subtract the unplugged pages from the managed page >- * counters. 
>-		 */
> 		pfn = PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) +
> 			       sb_id * vm->subblock_size);
>-		adjust_managed_page_count(pfn_to_page(pfn), -nr_pages);
>-		for (i = 0; i < nr_pages; i++)
>-			page_ref_inc(pfn_to_page(pfn + i));
>+		virtio_mem_fake_offline_cancel_offline(pfn, nr_pages);
> 	}
> }
> 
>@@ -853,6 +838,48 @@ static int virtio_mem_fake_offline(unsigned long pfn, unsigned long nr_pages)
> 	return 0;
> }
> 
>+/*
>+ * Handle fake-offline pages when memory is going offline - such that the
>+ * pages can be skipped by mm-core when offlining.
>+ */
>+static void virtio_mem_fake_offline_going_offline(unsigned long pfn,
>+                                                  unsigned long nr_pages)
>+{
>+	struct page *page;
>+	unsigned long i;
>+
>+	/*
>+	 * Drop our reference to the pages so the memory can get offlined
>+	 * and add the unplugged pages to the managed page counters (so
>+	 * offlining code can correctly subtract them again).
>+	 */
>+	adjust_managed_page_count(pfn_to_page(pfn), nr_pages);
>+	/* Drop our reference to the pages so the memory can get offlined. */
>+	for (i = 0; i < nr_pages; i++) {
>+		page = pfn_to_page(pfn + i);
>+		if (WARN_ON(!page_ref_dec_and_test(page)))
>+			dump_page(page, "fake-offline page referenced");
>+	}
>+}
>+
>+/*
>+ * Handle fake-offline pages when memory offlining is canceled - to undo
>+ * what we did in virtio_mem_fake_offline_going_offline().
>+ */
>+static void virtio_mem_fake_offline_cancel_offline(unsigned long pfn,
>+                                                   unsigned long nr_pages)
>+{
>+	unsigned long i;
>+
>+	/*
>+	 * Get the reference we dropped when going offline and subtract the
>+	 * unplugged pages from the managed page counters.
>+	 */
>+	adjust_managed_page_count(pfn_to_page(pfn), -nr_pages);
>+	for (i = 0; i < nr_pages; i++)
>+		page_ref_inc(pfn_to_page(pfn + i));
>+}
>+
> static void virtio_mem_online_page_cb(struct page *page, unsigned int order)
> {
> 	const unsigned long addr = page_to_phys(page);
>-- 
>2.26.2

-- 
Wei Yang
Help you, Help me