Date: Mon, 2 Dec 2019 16:16:40 -0500
From: Peter Xu
To: Sean Christopherson
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Paolo Bonzini,
David Alan Gilbert" , Vitaly Kuznetsov Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking Message-ID: <20191202211640.GF31681@xz-x1> References: <20191129213505.18472-1-peterx@redhat.com> <20191129213505.18472-5-peterx@redhat.com> <20191202201036.GJ4063@linux.intel.com> MIME-Version: 1.0 In-Reply-To: <20191202201036.GJ4063@linux.intel.com> User-Agent: Mutt/1.11.4 (2019-03-13) X-MC-Unique: m3K7gPocMgmo4s6bed3ZjQ-1 X-Mimecast-Spam-Score: 0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, Dec 02, 2019 at 12:10:36PM -0800, Sean Christopherson wrote: > On Fri, Nov 29, 2019 at 04:34:54PM -0500, Peter Xu wrote: > > This patch is heavily based on previous work from Lei Cao > > and Paolo Bonzini . [1] > >=20 > > KVM currently uses large bitmaps to track dirty memory. These bitmaps > > are copied to userspace when userspace queries KVM for its dirty page > > information. The use of bitmaps is mostly sufficient for live > > migration, as large parts of memory are be dirtied from one log-dirty > > pass to another. However, in a checkpointing system, the number of > > dirty pages is small and in fact it is often bounded---the VM is > > paused when it has dirtied a pre-defined number of pages. Traversing a > > large, sparsely populated bitmap to find set bits is time-consuming, > > as is copying the bitmap to user-space. > >=20 > > A similar issue will be there for live migration when the guest memory > > is huge while the page dirty procedure is trivial. In that case for > > each dirty sync we need to pull the whole dirty bitmap to userspace > > and analyse every bit even if it's mostly zeros. > >=20 > > The preferred data structure for above scenarios is a dense list of > > guest frame numbers (GFN). This patch series stores the dirty list in > > kernel memory that can be memory mapped into userspace to allow speedy > > harvesting. > >=20 > > We defined two new data structures: > >=20 > > struct kvm_dirty_ring; > > struct kvm_dirty_ring_indexes; > >=20 > > Firstly, kvm_dirty_ring is defined to represent a ring of dirty > > pages. When dirty tracking is enabled, we can push dirty gfn onto the > > ring. > >=20 > > Secondly, kvm_dirty_ring_indexes is defined to represent the > > user/kernel interface of each ring. Currently it contains two > > indexes: (1) avail_index represents where we should push our next > > PFN (written by kernel), while (2) fetch_index represents where the > > userspace should fetch the next dirty PFN (written by userspace). > >=20 > > One complete ring is composed by one kvm_dirty_ring plus its > > corresponding kvm_dirty_ring_indexes. > >=20 > > Currently, we have N+1 rings for each VM of N vcpus: > >=20 > > - for each vcpu, we have 1 per-vcpu dirty ring, > > - for each vm, we have 1 per-vm dirty ring >=20 > Why? I assume the purpose of per-vcpu rings is to avoid contention betwe= en > threads, but the motiviation needs to be explicitly stated. And why is a > per-vm fallback ring needed? Yes, as explained in previous reply, the problem is there could have guest memory writes without vcpu contexts. >=20 > If my assumption is correct, have other approaches been tried/profiled? > E.g. using cmpxchg to reserve N number of entries in a shared ring. Not yet, but I'd be fine to try anything if there's better alternatives. 
> IMO, adding kvm_get_running_vcpu() is a hack that is just asking for
> future abuse, and the vcpu/vm/as_id interactions in
> mark_page_dirty_in_ring() look extremely fragile.

I agree.  Another way is to push the heavier traffic to the per-vm
ring, but the downside could be that the per-vm ring fills up more
easily (I haven't tested, though).

> I also dislike having two different mechanisms for accessing the ring
> (lock for per-vm, something else for per-vcpu).

Actually I proposed to drop the per-vm ring (I even had a version that
implemented this... and I just changed it back to the per-vm ring later
on, see below).  For the case where there's no vcpu context, I thought
about:

  (1) using the vcpu0 ring, or

  (2) a better algorithm to pick a per-vcpu ring (say, the least full
      one; we can do many things here, e.g., we could easily maintain a
      structure to track fullness so the search is O(1), I think).

I discussed this with Paolo, but I think Paolo preferred the per-vm
ring because there's no good reason to choose vcpu0 as (1) suggests,
while choosing (2) would probably require locking even for the
per-vcpu rings, so it could be a bit slower.

Since this is still an RFC, I think we still have a chance to change
this, depending on how the discussion goes.

>
> > Please refer to the documentation update in this patch for more
> > details.
> >
> > Note that this patch implements the core logic of the dirty ring
> > buffer.  It's still disabled for all archs for now.  Also, we'll
> > address some of the other issues in follow-up patches before it's
> > first enabled on x86.
> >
> > [1] https://patchwork.kernel.org/patch/10471409/
> >
> > Signed-off-by: Lei Cao
> > Signed-off-by: Paolo Bonzini
> > Signed-off-by: Peter Xu
> > ---
>
> ...
>
> > diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> > new file mode 100644
> > index 000000000000..9264891f3c32
> > --- /dev/null
> > +++ b/virt/kvm/dirty_ring.c
> > @@ -0,0 +1,156 @@
> > +#include
> > +#include
> > +#include
> > +#include
> > +
> > +u32 kvm_dirty_ring_get_rsvd_entries(void)
> > +{
> > +        return KVM_DIRTY_RING_RSVD_ENTRIES + kvm_cpu_dirty_log_size();
> > +}
> > +
> > +int kvm_dirty_ring_alloc(struct kvm *kvm, struct kvm_dirty_ring *ring)
> > +{
> > +        u32 size = kvm->dirty_ring_size;
>
> Just pass in @size, that way you don't need @kvm.  And the callers
> will be less ugly, e.g. the initial allocation won't need to
> speculatively set kvm->dirty_ring_size.

Sure.

>
> > +
> > +        ring->dirty_gfns = vmalloc(size);
> > +        if (!ring->dirty_gfns)
> > +                return -ENOMEM;
> > +        memset(ring->dirty_gfns, 0, size);
> > +
> > +        ring->size = size / sizeof(struct kvm_dirty_gfn);
> > +        ring->soft_limit =
> > +            (kvm->dirty_ring_size / sizeof(struct kvm_dirty_gfn)) -
>
> And passing @size avoids issues like this where a local var is ignored.
>
> > +            kvm_dirty_ring_get_rsvd_entries();
> > +        ring->dirty_index = 0;
> > +        ring->reset_index = 0;
> > +        spin_lock_init(&ring->lock);
> > +
> > +        return 0;
> > +}
> > +
>
> ...
>
> > +void kvm_dirty_ring_free(struct kvm_dirty_ring *ring)
> > +{
> > +        if (ring->dirty_gfns) {
>
> Why condition freeing the dirty ring on kvm->dirty_ring_size, this
> obviously protects itself.  Not to mention vfree() also plays nice
> with a NULL input.

Ok, I can drop this check.
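Concretely, with @size passed in and the check dropped, the pair could
end up looking like this (just a sketch, using vzalloc() to fold in the
memset; untested):

    int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, u32 size)
    {
            /* Callers pass the ring size directly; no need to set
             * kvm->dirty_ring_size speculatively before allocating. */
            ring->dirty_gfns = vzalloc(size);
            if (!ring->dirty_gfns)
                    return -ENOMEM;

            ring->size = size / sizeof(struct kvm_dirty_gfn);
            ring->soft_limit = ring->size -
                               kvm_dirty_ring_get_rsvd_entries();
            ring->dirty_index = 0;
            ring->reset_index = 0;
            spin_lock_init(&ring->lock);

            return 0;
    }

    void kvm_dirty_ring_free(struct kvm_dirty_ring *ring)
    {
            /* vfree(NULL) is a no-op, so no conditional needed. */
            vfree(ring->dirty_gfns);
            ring->dirty_gfns = NULL;
    }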
>
> > +                vfree(ring->dirty_gfns);
> > +                ring->dirty_gfns = NULL;
> > +        }
> > +}
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 681452d288cd..8642c977629b 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -64,6 +64,8 @@
> >  #define CREATE_TRACE_POINTS
> >  #include
> >
> > +#include
> > +
> >  /* Worst case buffer size needed for holding an integer. */
> >  #define ITOA_MAX_LEN 12
> >
> > @@ -149,6 +151,10 @@ static void mark_page_dirty_in_slot(struct kvm *kvm,
> >                                      struct kvm_vcpu *vcpu,
> >                                      struct kvm_memory_slot *memslot,
> >                                      gfn_t gfn);
> > +static void mark_page_dirty_in_ring(struct kvm *kvm,
> > +                                    struct kvm_vcpu *vcpu,
> > +                                    struct kvm_memory_slot *slot,
> > +                                    gfn_t gfn);
> >
> >  __visible bool kvm_rebooting;
> >  EXPORT_SYMBOL_GPL(kvm_rebooting);
> > @@ -359,11 +365,22 @@ int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
> >          vcpu->preempted = false;
> >          vcpu->ready = false;
> >
> > +        if (kvm->dirty_ring_size) {
> > +                r = kvm_dirty_ring_alloc(vcpu->kvm, &vcpu->dirty_ring);
> > +                if (r) {
> > +                        kvm->dirty_ring_size = 0;
> > +                        goto fail_free_run;
>
> This looks wrong, kvm->dirty_ring_size is used to free allocations,
> i.e. previous allocations will leak if a vcpu allocation fails.

You are right.  That's overkill.

>
> > +                }
> > +        }
> > +
> >          r = kvm_arch_vcpu_init(vcpu);
> >          if (r < 0)
> > -                goto fail_free_run;
> > +                goto fail_free_ring;
> >          return 0;
> >
> > +fail_free_ring:
> > +        if (kvm->dirty_ring_size)
> > +                kvm_dirty_ring_free(&vcpu->dirty_ring);
> >  fail_free_run:
> >          free_page((unsigned long)vcpu->run);
> >  fail:
> > @@ -381,6 +398,8 @@ void kvm_vcpu_uninit(struct kvm_vcpu *vcpu)
> >          put_pid(rcu_dereference_protected(vcpu->pid, 1));
> >          kvm_arch_vcpu_uninit(vcpu);
> >          free_page((unsigned long)vcpu->run);
> > +        if (vcpu->kvm->dirty_ring_size)
> > +                kvm_dirty_ring_free(&vcpu->dirty_ring);
> >  }
> >  EXPORT_SYMBOL_GPL(kvm_vcpu_uninit);
> >
> > @@ -690,6 +709,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
> >          struct kvm *kvm = kvm_arch_alloc_vm();
> >          int r = -ENOMEM;
> >          int i;
> > +        struct page *page;
> >
> >          if (!kvm)
> >                  return ERR_PTR(-ENOMEM);
> > @@ -705,6 +725,14 @@ static struct kvm *kvm_create_vm(unsigned long type)
> >
> >          BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
> >
> > +        page = alloc_page(GFP_KERNEL | __GFP_ZERO);
> > +        if (!page) {
> > +                r = -ENOMEM;
> > +                goto out_err_alloc_page;
> > +        }
> > +        kvm->vm_run = page_address(page);
> > +        BUILD_BUG_ON(sizeof(struct kvm_vm_run) > PAGE_SIZE);
> > +
> >          if (init_srcu_struct(&kvm->srcu))
> >                  goto out_err_no_srcu;
> >          if (init_srcu_struct(&kvm->irq_srcu))
> > @@ -775,6 +803,9 @@ static struct kvm *kvm_create_vm(unsigned long type)
> >  out_err_no_irq_srcu:
> >          cleanup_srcu_struct(&kvm->srcu);
> >  out_err_no_srcu:
> > +        free_page((unsigned long)page);
> > +        kvm->vm_run = NULL;
>
> No need to nullify vm_run.

Ok.
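Going back to the vcpu ring leak above: for the record, the fixed path
would be roughly this (a sketch only, assuming the @size-based
kvm_dirty_ring_alloc() from earlier, and keeping kvm->dirty_ring_size
intact so the rings of already-created vcpus still get freed on their
own teardown path):

            if (kvm->dirty_ring_size) {
                    r = kvm_dirty_ring_alloc(&vcpu->dirty_ring,
                                             kvm->dirty_ring_size);
                    if (r)
                            /* Leave dirty_ring_size untouched. */
                            goto fail_free_run;
            }

            r = kvm_arch_vcpu_init(vcpu);
            if (r < 0)
                    goto fail_free_ring;
            return 0;

    fail_free_ring:
            /* Assuming the vcpu struct is zero-initialized, dirty_gfns
             * is NULL when no ring was allocated, and vfree(NULL) is a
             * no-op, so no dirty_ring_size check is needed here. */
            kvm_dirty_ring_free(&vcpu->dirty_ring);
    fail_free_run:
            free_page((unsigned long)vcpu->run);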
>
> > +out_err_alloc_page:
> >          kvm_arch_free_vm(kvm);
> >          mmdrop(current->mm);
> >          return ERR_PTR(r);
> > @@ -800,6 +831,15 @@ static void kvm_destroy_vm(struct kvm *kvm)
> >          int i;
> >          struct mm_struct *mm = kvm->mm;
> >
> > +        if (kvm->dirty_ring_size) {
> > +                kvm_dirty_ring_free(&kvm->vm_dirty_ring);
> > +        }
>
> Unnecessary parentheses.

True.  Thanks,

>
> > +
> > +        if (kvm->vm_run) {
> > +                free_page((unsigned long)kvm->vm_run);
> > +                kvm->vm_run = NULL;
> > +        }
> > +
> >          kvm_uevent_notify_change(KVM_EVENT_DESTROY_VM, kvm);
> >          kvm_destroy_vm_debugfs(kvm);
> >          kvm_arch_sync_events(kvm);
> > @@ -2301,7 +2341,7 @@ static void mark_page_dirty_in_slot(struct kvm *kvm,
> >  {
> >          if (memslot && memslot->dirty_bitmap) {
> >                  unsigned long rel_gfn = gfn - memslot->base_gfn;
> > -
> > +                mark_page_dirty_in_ring(kvm, vcpu, memslot, gfn);
> >                  set_bit_le(rel_gfn, memslot->dirty_bitmap);
> >          }
> >  }
> > @@ -2649,6 +2689,13 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
> >  }
> >  EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);
>

-- 
Peter Xu