Date: Thu, 17 Aug 2017 11:28:29 +0200
From: Cornelia Huck
To: Paolo Bonzini
Cc: Alexander Graf, Radim Krčmář, linux-mips@linux-mips.org,
 linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
 kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-s390@vger.kernel.org, Marc Zyngier, Christian Borntraeger,
 James Hogan, Christoffer Dall, Paul Mackerras, David Hildenbrand
Subject: Re: [PATCH RFC 0/2] KVM: use RCU to allow dynamic kvm->vcpus array
Message-ID: <20170817112829.7795820a.cohuck@redhat.com>
References: <20170816194037.9460-1-rkrcmar@redhat.com>
 <20170817093612.024cc4bc.cohuck@redhat.com>
Organization: Red Hat GmbH

On Thu, 17 Aug 2017 11:16:59 +0200
Paolo Bonzini wrote:

> On 17/08/2017 09:36, Cornelia Huck wrote:
> >> What if we just sent a "vcpu move" request to all vcpus with the new
> >> pointer after it moved? That way the vcpu thread itself would be
> >> responsible for the migration to the new memory region.
> >> Only if all vcpus successfully moved, keep rolling (and allow
> >> foreign get_vcpu again).
> >>
> >> That way we should be basically lock-less and scale well. For
> >> additional icing, feel free to increase the vcpu array x2 every time
> >> it grows to not run into the slow path too often.
> >
> > I'd prefer the rcu approach: This is a mechanism already understood
> > well, no need to come up with a new one that will likely have its own
> > share of problems.
>
> What Alex is proposing _is_ RCU, except with a homegrown
> synchronize_rcu. Using kvm->srcu seems to be the best of both worlds.

I'm worried a bit about the 'homegrown' part, though. I also may be
misunderstanding what Alex means with "vcpu move"...