From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, virtio-dev@lists.oasis-open.org,
    virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
    Michal Hocko, Andrew Morton, "Michael S. Tsirkin", David Hildenbrand,
    Pankaj Gupta, Jason Wang, Oscar Salvador, Igor Mammedov, Dave Young,
    Dan Williams, Pavel Tatashin, Stefan Hajnoczi, Vlastimil Babka
Subject: [PATCH v3 03/15] virtio-mem: Paravirtualized memory hotunplug part 1
Date: Thu, 7 May 2020 12:31:07 +0200
Message-Id: <20200507103119.11219-4-david@redhat.com>
In-Reply-To: <20200507103119.11219-1-david@redhat.com>
References: <20200507103119.11219-1-david@redhat.com>

Unplugging subblocks of memory blocks that are offline is easy. All we
have to do is watch out for concurrent onlining activity.

Tested-by: Pankaj Gupta
Cc: "Michael S. Tsirkin"
Tsirkin" Cc: Jason Wang Cc: Oscar Salvador Cc: Michal Hocko Cc: Igor Mammedov Cc: Dave Young Cc: Andrew Morton Cc: Dan Williams Cc: Pavel Tatashin Cc: Stefan Hajnoczi Cc: Vlastimil Babka Signed-off-by: David Hildenbrand --- drivers/virtio/virtio_mem.c | 116 +++++++++++++++++++++++++++++++++++- 1 file changed, 114 insertions(+), 2 deletions(-) diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c index 270ddeaec059..a3ec795be8be 100644 --- a/drivers/virtio/virtio_mem.c +++ b/drivers/virtio/virtio_mem.c @@ -123,7 +123,7 @@ struct virtio_mem { * * When this lock is held the pointers can't change, ONLINE and * OFFLINE blocks can't change the state and no subblocks will get - * plugged. + * plugged/unplugged. */ struct mutex hotplug_mutex; bool hotplug_active; @@ -280,6 +280,12 @@ static int virtio_mem_mb_state_prepare_next_mb(struc= t virtio_mem *vm) _mb_id++) \ if (virtio_mem_mb_get_state(_vm, _mb_id) =3D=3D _state) =20 +#define virtio_mem_for_each_mb_state_rev(_vm, _mb_id, _state) \ + for (_mb_id =3D _vm->next_mb_id - 1; \ + _mb_id >=3D _vm->first_mb_id && _vm->nb_mb_state[_state]; \ + _mb_id--) \ + if (virtio_mem_mb_get_state(_vm, _mb_id) =3D=3D _state) + /* * Mark all selected subblocks plugged. * @@ -325,6 +331,19 @@ static bool virtio_mem_mb_test_sb_plugged(struct vir= tio_mem *vm, bit + count; } =20 +/* + * Test if all selected subblocks are unplugged. + */ +static bool virtio_mem_mb_test_sb_unplugged(struct virtio_mem *vm, + unsigned long mb_id, int sb_id, + int count) +{ + const int bit =3D (mb_id - vm->first_mb_id) * vm->nb_sb_per_mb + sb_id; + + /* TODO: Helper similar to bitmap_set() */ + return find_next_bit(vm->sb_bitmap, bit + count, bit) >=3D bit + count; +} + /* * Find the first plugged subblock. Returns vm->nb_sb_per_mb in case the= re is * none. @@ -513,6 +532,9 @@ static void virtio_mem_notify_offline(struct virtio_m= em *vm, BUG(); break; } + + /* trigger the workqueue, maybe we can now unplug memory. */ + virtio_mem_retry(vm); } =20 static void virtio_mem_notify_online(struct virtio_mem *vm, unsigned lon= g mb_id, @@ -1122,6 +1144,94 @@ static int virtio_mem_plug_request(struct virtio_m= em *vm, uint64_t diff) return rc; } =20 +/* + * Unplug the desired number of plugged subblocks of an offline memory b= lock. + * Will fail if any subblock cannot get unplugged (instead of skipping i= t). + * + * Will modify the state of the memory block. Might temporarily drop the + * hotplug_mutex. + * + * Note: Can fail after some subblocks were successfully unplugged. + */ +static int virtio_mem_mb_unplug_any_sb_offline(struct virtio_mem *vm, + unsigned long mb_id, + uint64_t *nb_sb) +{ + int rc; + + rc =3D virtio_mem_mb_unplug_any_sb(vm, mb_id, nb_sb); + + /* some subblocks might have been unplugged even on failure */ + if (!virtio_mem_mb_test_sb_plugged(vm, mb_id, 0, vm->nb_sb_per_mb)) + virtio_mem_mb_set_state(vm, mb_id, + VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL); + if (rc) + return rc; + + if (virtio_mem_mb_test_sb_unplugged(vm, mb_id, 0, vm->nb_sb_per_mb)) { + /* + * Remove the block from Linux - this should never fail. + * Hinder the block from getting onlined by marking it + * unplugged. Temporarily drop the mutex, so + * any pending GOING_ONLINE requests can be serviced/rejected. + */ + virtio_mem_mb_set_state(vm, mb_id, + VIRTIO_MEM_MB_STATE_UNUSED); + + mutex_unlock(&vm->hotplug_mutex); + rc =3D virtio_mem_mb_remove(vm, mb_id); + BUG_ON(rc); + mutex_lock(&vm->hotplug_mutex); + } + return 0; +} + +/* + * Try to unplug the requested amount of memory. 
+ */
+static int virtio_mem_unplug_request(struct virtio_mem *vm, uint64_t diff)
+{
+	uint64_t nb_sb = diff / vm->subblock_size;
+	unsigned long mb_id;
+	int rc;
+
+	if (!nb_sb)
+		return 0;
+
+	/*
+	 * We'll drop the mutex a couple of times when it is safe to do so.
+	 * This might result in some blocks switching the state (online/offline)
+	 * and we could miss them in this run - we will retry again later.
+	 */
+	mutex_lock(&vm->hotplug_mutex);
+
+	/* Try to unplug subblocks of partially plugged offline blocks. */
+	virtio_mem_for_each_mb_state_rev(vm, mb_id,
+					 VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL) {
+		rc = virtio_mem_mb_unplug_any_sb_offline(vm, mb_id,
+							 &nb_sb);
+		if (rc || !nb_sb)
+			goto out_unlock;
+		cond_resched();
+	}
+
+	/* Try to unplug subblocks of plugged offline blocks. */
+	virtio_mem_for_each_mb_state_rev(vm, mb_id,
+					 VIRTIO_MEM_MB_STATE_OFFLINE) {
+		rc = virtio_mem_mb_unplug_any_sb_offline(vm, mb_id,
+							 &nb_sb);
+		if (rc || !nb_sb)
+			goto out_unlock;
+		cond_resched();
+	}
+
+	mutex_unlock(&vm->hotplug_mutex);
+	return 0;
+out_unlock:
+	mutex_unlock(&vm->hotplug_mutex);
+	return rc;
+}
+
 /*
  * Try to unplug all blocks that couldn't be unplugged before, for example,
  * because the hypervisor was busy.
@@ -1204,8 +1314,10 @@ static void virtio_mem_run_wq(struct work_struct *work)
 		if (vm->requested_size > vm->plugged_size) {
 			diff = vm->requested_size - vm->plugged_size;
 			rc = virtio_mem_plug_request(vm, diff);
+		} else {
+			diff = vm->plugged_size - vm->requested_size;
+			rc = virtio_mem_unplug_request(vm, diff);
 		}
-		/* TODO: try to unplug memory */
 	}
 
 	switch (rc) {
-- 
2.25.3
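
For reference, the subblock bitmap used above stores one bit per subblock:
memory block mb_id occupies the nb_sb_per_mb consecutive bits starting at
(mb_id - first_mb_id) * nb_sb_per_mb, so virtio_mem_mb_test_sb_unplugged()
only has to verify that no bit in the selected range is set. Below is a
minimal userspace sketch of that indexing; it is not part of the patch, the
geometry constants are made up for illustration, and find_next_bit_sim() is
a trivial stand-in for the kernel's find_next_bit().

#include <stdbool.h>
#include <stdio.h>

/* Illustration only: made-up geometry, not values from the driver. */
#define FIRST_MB_ID	512
#define NB_SB_PER_MB	16
#define NB_MB		4
#define BITS_PER_LONG	(8 * sizeof(unsigned long))

static unsigned long sb_bitmap[(NB_MB * NB_SB_PER_MB + BITS_PER_LONG - 1) /
			       BITS_PER_LONG];

/* Trivial stand-in for the kernel's find_next_bit(addr, size, offset). */
static unsigned long find_next_bit_sim(const unsigned long *map,
				       unsigned long size, unsigned long off)
{
	for (; off < size; off++)
		if (map[off / BITS_PER_LONG] & (1UL << (off % BITS_PER_LONG)))
			return off;
	return size;
}

/* Plug (set) the bit for subblock sb_id of memory block mb_id. */
static void plug_sb(unsigned long mb_id, int sb_id)
{
	const unsigned long bit = (mb_id - FIRST_MB_ID) * NB_SB_PER_MB + sb_id;

	sb_bitmap[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
}

/* Same check as virtio_mem_mb_test_sb_unplugged(): no set bit in the range. */
static bool test_sb_unplugged(unsigned long mb_id, int sb_id, int count)
{
	const unsigned long bit = (mb_id - FIRST_MB_ID) * NB_SB_PER_MB + sb_id;

	return find_next_bit_sim(sb_bitmap, bit + count, bit) >= bit + count;
}

int main(void)
{
	plug_sb(513, 3);	/* one subblock of the second block is plugged */

	printf("block 512 fully unplugged: %d\n",
	       test_sb_unplugged(512, 0, NB_SB_PER_MB));	/* prints 1 */
	printf("block 513 fully unplugged: %d\n",
	       test_sb_unplugged(513, 0, NB_SB_PER_MB));	/* prints 0 */
	return 0;
}

Compiling and running the sketch prints 1 for the untouched block and 0 once
a single subblock of a block has been plugged, mirroring how the patch decides
whether a fully unplugged offline block can be removed and marked UNUSED.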