Subject: Re: [PATCH RFC 2/2] KVM: RCU protected dynamic vcpus array
From: Paolo Bonzini <pbonzini@redhat.com>
To: Radim Krčmář , David Hildenbrand
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-mips@linux-mips.org, kvm-ppc@vger.kernel.org, linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Christoffer Dall , Marc Zyngier , Christian Borntraeger , Cornelia Huck , James Hogan , Paul Mackerras , Alexander Graf
Date: Thu, 17 Aug 2017 18:54:30 +0200
Message-ID: <1dea37fa-d51e-1381-c3f1-e067065560b7@redhat.com>
In-Reply-To: <20170817165044.GF2566@flask>
References: <20170816194037.9460-1-rkrcmar@redhat.com> <20170816194037.9460-3-rkrcmar@redhat.com> <7cb42373-355c-7cb3-2979-9529aef0641c@redhat.com> <20170817165044.GF2566@flask>

On 17/08/2017 18:50, Radim Krčmář wrote:
> 2017-08-17 13:14+0200, David Hildenbrand:
>>> 	atomic_set(&kvm->online_vcpus, 0);
>>> 	mutex_unlock(&kvm->lock);
>>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>>> index c8df733eed41..eb9fb5b493ac 100644
>>> --- a/include/linux/kvm_host.h
>>> +++ b/include/linux/kvm_host.h
>>> @@ -386,12 +386,17 @@ struct kvm_memslots {
>>> 	int used_slots;
>>> };
>>>
>>> +struct kvm_vcpus {
>>> +	u32 online;
>>> +	struct kvm_vcpu *array[];
>>
>> One option could be to simply chunk it:
>>
>> +struct kvm_vcpus {
>> +	struct kvm_vcpu vcpus[32];
>
> I'm thinking of 128/256.
>
>> +};
>> +
>> /*
>>  * Note:
>>  * memslots are not sorted by id anymore, please use id_to_memslot()
>> @@ -391,7 +395,7 @@ struct kvm {
>> 	struct mutex slots_lock;
>> 	struct mm_struct *mm; /* userspace tied to this vm */
>> 	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
>> -	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
>> +	struct kvm_vcpus vcpus[(KVM_MAX_VCPUS + 31) / 32];
>> 	/*
>> 	 * created_vcpus is protected by kvm->lock, and is incremented
>> @@ -483,12 +487,14 @@ static inline struct kvm_io_bus *kvm_get_bus(struct kvm *kvm, enum kvm_bus idx)
>>
>>
>> 1. make nobody access kvm->vcpus directly (factor out)
>> 2. allocate next chunk if necessary when creating a VCPU and store
>>    pointer using WRITE_ONCE
>> 3. use READ_ONCE to test for availability of the current chunk
>
> We can also use kvm->online_vcpus exactly like we did now.
>
>> kvm_for_each_vcpu just has to use READ_ONCE to access/test for the right
>> chunk. Pointers never get invalid. No RCU needed. Sleeping in the loop
>> is possible.
>
> I like this better than SRCU because it keeps the internal code mostly
> intact, even though it is a compromise solution with a tunable.
> (SRCU gives us more protection than we need.)
>
> I'd do this for v2.

Sounds good!

Paolo