Date: Fri, 16 Oct 2020 12:03:01 +0800
From: Wei Yang <richard.weiyang@linux.alibaba.com>
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
        virtualization@lists.linux-foundation.org, Andrew Morton,
        "Michael S. Tsirkin", Jason Wang, Pankaj Gupta
Subject: Re: [PATCH v1 09/29] virtio-mem: don't always trigger the workqueue when offlining memory
Message-ID: <20201016040301.GJ86495@L-31X9LVDL-1304.local>
References: <20201012125323.17509-1-david@redhat.com> <20201012125323.17509-10-david@redhat.com>
In-Reply-To: <20201012125323.17509-10-david@redhat.com>

On Mon, Oct 12, 2020 at 02:53:03PM +0200, David Hildenbrand wrote:
>Let's trigger from offlining code when we're not allowed to touch online
>memory.

This describes the change in virtio_mem_memory_notifier_cb()?

>
>Handle the other case (memmap possibly freeing up another memory block)
>when actually removing memory. When removing via virtio_mem_remove(),
>virtio_mem_retry() is a NOP and safe to use.
>
>While at it, move retry handling when offlining out of
>virtio_mem_notify_offline(), to share it with Device Block Mode (DBM)
>soon.

I may not understand the logic fully, so here is my understanding of the
current logic:

    virtio_mem_run_wq()
        virtio_mem_unplug_request()
            virtio_mem_mb_unplug_any_sb_offline()
                virtio_mem_mb_remove()                  --- 1
            virtio_mem_mb_unplug_any_sb_online()
                virtio_mem_mb_offline_and_remove()      --- 2

This patch triggers the wq at 1 and 2, and these two functions are only
reached via this code flow. Both functions actually remove some memory
from the system, so I am not sure where the extra unplug-able memory
comes from. I guess that memory comes from the memory block device and
the mem_section/memmap? While that memory is still marked as online,
right?

In case we can gather extra memory at 1 and form a whole memory block,
so that we can unplug an online memory block (by moving data to a new
place), that only affects the process at 2. This means there is no need
to trigger the wq at 1; we can leave it at 2.

>
>Cc: "Michael S. Tsirkin"
>Cc: Jason Wang
>Cc: Pankaj Gupta
>Signed-off-by: David Hildenbrand
>---
> drivers/virtio/virtio_mem.c | 40 ++++++++++++++++++++++++++-----------
> 1 file changed, 28 insertions(+), 12 deletions(-)
>
>diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
>index 5c93f8a65eba..8ea00f0b2ecd 100644
>--- a/drivers/virtio/virtio_mem.c
>+++ b/drivers/virtio/virtio_mem.c
>@@ -158,6 +158,7 @@ static DEFINE_MUTEX(virtio_mem_mutex);
> static LIST_HEAD(virtio_mem_devices);
>
> static void virtio_mem_online_page_cb(struct page *page, unsigned int order);
>+static void virtio_mem_retry(struct virtio_mem *vm);
>
> /*
>  * Register a virtio-mem device so it will be considered for the online_page
>@@ -435,9 +436,17 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
> static int virtio_mem_mb_remove(struct virtio_mem *vm, unsigned long mb_id)
> {
> 	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
>+	int rc;
>
> 	dev_dbg(&vm->vdev->dev, "removing memory block: %lu\n", mb_id);
>-	return remove_memory(vm->nid, addr, memory_block_size_bytes());
>+	rc = remove_memory(vm->nid, addr, memory_block_size_bytes());
>+	if (!rc)
>+		/*
>+		 * We might have freed up memory we can now unplug, retry
>+		 * immediately instead of waiting.
>+		 */
>+		virtio_mem_retry(vm);
>+	return rc;
> }
>
> /*
>@@ -452,11 +461,19 @@ static int virtio_mem_mb_offline_and_remove(struct virtio_mem *vm,
> 					    unsigned long mb_id)
> {
> 	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
>+	int rc;
>
> 	dev_dbg(&vm->vdev->dev, "offlining and removing memory block: %lu\n",
> 		mb_id);
>-	return offline_and_remove_memory(vm->nid, addr,
>-					 memory_block_size_bytes());
>+	rc = offline_and_remove_memory(vm->nid, addr,
>+				       memory_block_size_bytes());
>+	if (!rc)
>+		/*
>+		 * We might have freed up memory we can now unplug, retry
>+		 * immediately instead of waiting.
>+		 */
>+		virtio_mem_retry(vm);
>+	return rc;
> }
>
> /*
>@@ -534,15 +551,6 @@ static void virtio_mem_notify_offline(struct virtio_mem *vm,
> 		BUG();
> 		break;
> 	}
>-
>-	/*
>-	 * Trigger the workqueue, maybe we can now unplug memory. Also,
>-	 * when we offline and remove a memory block, this will re-trigger
>-	 * us immediately - which is often nice because the removal of
>-	 * the memory block (e.g., memmap) might have freed up memory
>-	 * on other memory blocks we manage.
>-	 */
>-	virtio_mem_retry(vm);
> }
>
> static void virtio_mem_notify_online(struct virtio_mem *vm, unsigned long mb_id)
>@@ -679,6 +687,14 @@ static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
> 		break;
> 	case MEM_OFFLINE:
> 		virtio_mem_notify_offline(vm, mb_id);
>+
>+		/*
>+		 * Trigger the workqueue. Now that we have some offline memory,
>+		 * maybe we can handle pending unplug requests.
>+		 */
>+		if (!unplug_online)
>+			virtio_mem_retry(vm);
>+
> 		vm->hotplug_active = false;
> 		mutex_unlock(&vm->hotplug_mutex);
> 		break;
>--
>2.26.2

-- 
Wei Yang
Help you, Help me