From: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
To: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, maz@kernel.org, catalin.marinas@arm.com,
    james.morse@arm.com, julien.thierry.kdev@gmail.com,
    suzuki.poulose@arm.com, jean-philippe@linaro.org,
    Alexandru.Elisei@arm.com, Linuxarm
Subject: RE: [PATCH v2 2/3] kvm/arm: Introduce a new vmid allocator for KVM
Date: Thu, 22 Jul 2021 06:34:57 +0000
Message-ID: <8c0345ae808140f79c2adc4e0fd2effc@huawei.com>
References: <20210616155606.2806-1-shameerali.kolothum.thodi@huawei.com>
 <20210616155606.2806-3-shameerali.kolothum.thodi@huawei.com>
 <20210721160614.GC11003@willie-the-truck>
In-Reply-To: <20210721160614.GC11003@willie-the-truck>
X-Mailing-List: linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Will Deacon [mailto:will@kernel.org]
> Sent: 21 July 2021 17:06
> To: Shameerali Kolothum Thodi
> Cc: linux-arm-kernel@lists.infradead.org; kvmarm@lists.cs.columbia.edu;
> linux-kernel@vger.kernel.org; maz@kernel.org; catalin.marinas@arm.com;
> james.morse@arm.com; julien.thierry.kdev@gmail.com;
> suzuki.poulose@arm.com; jean-philippe@linaro.org;
> Alexandru.Elisei@arm.com; Linuxarm
> Subject: Re: [PATCH v2 2/3] kvm/arm: Introduce a new vmid allocator for KVM
>
> On Wed, Jun 16, 2021 at 04:56:05PM +0100, Shameer Kolothum wrote:
> > A new VMID allocator for arm64 KVM use. This is based on
> > arm64 asid allocator algorithm.
> >
> > Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> > ---
> >  arch/arm64/include/asm/kvm_host.h |   4 +
> >  arch/arm64/kvm/vmid.c             | 206 ++++++++++++++++++++++++++++++
> >  2 files changed, 210 insertions(+)
> >  create mode 100644 arch/arm64/kvm/vmid.c
>
> Generally, I prefer this to the alternative of creating a library. However,
> I'd probably remove all the duplicated comments in favour of a reference
> to the ASID allocator. That way, we can just comment any VMID-specific
> behaviour in here.

Agree.
I retained the comments mainly for myself as it's very difficult at times
to follow :)

> Some comments below...
>
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 7cd7d5c8c4bc..75a7e8071012 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -680,6 +680,10 @@ int kvm_arm_pvtime_get_attr(struct kvm_vcpu *vcpu,
> >  int kvm_arm_pvtime_has_attr(struct kvm_vcpu *vcpu,
> >  			    struct kvm_device_attr *attr);
> >
> > +int kvm_arm_vmid_alloc_init(void);
> > +void kvm_arm_vmid_alloc_free(void);
> > +void kvm_arm_update_vmid(atomic64_t *id);
> > +
> >  static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
> >  {
> >  	vcpu_arch->steal.base = GPA_INVALID;
> > diff --git a/arch/arm64/kvm/vmid.c b/arch/arm64/kvm/vmid.c
> > new file mode 100644
> > index 000000000000..687e18d33130
> > --- /dev/null
> > +++ b/arch/arm64/kvm/vmid.c
> > @@ -0,0 +1,206 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * VMID allocator.
> > + *
> > + * Based on arch/arm64/mm/context.c
> > + *
> > + * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved.
> > + * Copyright (C) 2012 ARM Ltd.
> > + */
> > +
> > +#include
> > +#include
> > +
> > +#include
> > +#include
> > +
> > +static u32 vmid_bits;
> > +static DEFINE_RAW_SPINLOCK(cpu_vmid_lock);
> > +
> > +static atomic64_t vmid_generation;
> > +static unsigned long *vmid_map;
> > +
> > +static DEFINE_PER_CPU(atomic64_t, active_vmids);
> > +static DEFINE_PER_CPU(u64, reserved_vmids);
> > +static cpumask_t tlb_flush_pending;
> > +
> > +#define VMID_MASK		(~GENMASK(vmid_bits - 1, 0))
> > +#define VMID_FIRST_VERSION	(1UL << vmid_bits)
> > +
> > +#define NUM_USER_VMIDS		VMID_FIRST_VERSION
> > +#define vmid2idx(vmid)		((vmid) & ~VMID_MASK)
> > +#define idx2vmid(idx)		vmid2idx(idx)
> > +
> > +#define vmid_gen_match(vmid) \
> > +	(!(((vmid) ^ atomic64_read(&vmid_generation)) >> vmid_bits))
> > +
> > +static void flush_context(void)
> > +{
> > +	int cpu;
> > +	u64 vmid;
> > +
> > +	bitmap_clear(vmid_map, 0, NUM_USER_VMIDS);
> > +
> > +	for_each_possible_cpu(cpu) {
> > +		vmid = atomic64_xchg_relaxed(&per_cpu(active_vmids, cpu), 0);
> > +		/*
> > +		 * If this CPU has already been through a
> > +		 * rollover, but hasn't run another task in
> > +		 * the meantime, we must preserve its reserved
> > +		 * VMID, as this is the only trace we have of
> > +		 * the process it is still running.
> > +		 */
> > +		if (vmid == 0)
> > +			vmid = per_cpu(reserved_vmids, cpu);
> > +		__set_bit(vmid2idx(vmid), vmid_map);
> > +		per_cpu(reserved_vmids, cpu) = vmid;
> > +	}
>
> Hmm, so here we're copying the active_vmids into the reserved_vmids on a
> rollover, but I wonder if that's overly pessismistic? For the ASID
> allocator, every CPU tends to have a current task so it makes sense, but
> I'm not sure it's necessarily the case that every CPU tends to have a
> vCPU as the current task. For example, imagine you have a nasty 128-CPU
> system with 8-bit VMIDs and each CPU has at some point run a vCPU. Then,
> on rollover, we'll immediately reserve half of the VMID space, even if
> those vCPUs don't even exist any more.
>
> Not sure if it's worth worrying about, but I wanted to mention it.

Ok. I see your suggestion in patch #3 to avoid this.

> > +void kvm_arm_update_vmid(atomic64_t *id)
> > +{
>
> Take the kvm_vmid here? That would make:
>
> > +	/* Check that our VMID belongs to the current generation. */
> > +	vmid = atomic64_read(id);
> > +	if (!vmid_gen_match(vmid)) {
> > +		vmid = new_vmid(id);
> > +		atomic64_set(id, vmid);
> > +	}
>
> A bit more readable, as you could pass the pointer directly to new_vmid
> for initialisation.

Ok.

Thanks,
Shameer