Date: Fri, 6 Jul 2018 11:45:41 +0200 (CEST)
From: Thomas Gleixner
To: Paolo Bonzini
cc: Pavel Tatashin, steven.sistare@oracle.com, daniel.m.jordan@oracle.com,
    linux@armlinux.org.uk, schwidefsky@de.ibm.com, heiko.carstens@de.ibm.com,
    john.stultz@linaro.org, sboyd@codeaurora.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, mingo@redhat.com, hpa@zytor.com,
    douly.fnst@cn.fujitsu.com, peterz@infradead.org, prarit@redhat.com,
    feng.tang@intel.com, pmladek@suse.com, gnomes@lxorguk.ukuu.org.uk,
    linux-s390@vger.kernel.org
Subject: Re: [PATCH v12 04/11] kvm/x86: remove kvm memblock dependency
In-Reply-To: <52117b6e-cbdc-8583-494b-5e8e5d6d4265@redhat.com>
References: <20180621212518.19914-1-pasha.tatashin@oracle.com>
    <20180621212518.19914-5-pasha.tatashin@oracle.com>
    <52117b6e-cbdc-8583-494b-5e8e5d6d4265@redhat.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 6 Jul 2018, Paolo Bonzini wrote:
> On 06/07/2018 11:24, Thomas Gleixner wrote:
> >> The reason for this is to avoid wasting a lot of BSS memory when KVM is
> >> not in use. Thomas is going to send his take on this!
> >
> > Got it working with per cpu variables, but there is a different subtle
> > issue with that.
> >
> > The pvclock data is mapped into the VDSO as well, i.e. as a full page.
> >
> > Right now with the linear array, which is forced to be at least page
> > sized, this only maps pvclock data or zeroed data (after the last CPU)
> > into the VDSO.
> >
> > With PER CPU variables this would map arbitrary other per cpu data which
> > happens to be in the same page into the VDSO. Not really what we want.
> >
> > That means utilizing PER CPU data requires allocating page sized pvclock
> > data space for each CPU to prevent leaking arbitrary stuff.
> >
> > As this data is allocated on demand, i.e. only if kvmclock is used, this
> > might be tolerable, but I'm not so sure.
>
> One possibility is to introduce another layer of indirection: in
> addition to the percpu pvclock data, add a percpu pointer to the pvclock
> data and initialize it to point to a page-aligned variable in BSS. CPU0
> (used by vDSO) doesn't touch the pointer and keeps using the BSS
> variable, APs instead redirect the pointer to the percpu data.

Yeah, thought about that, but the extra indirection is ugly.

Instead of using per cpu data, I can just allocate the memory _after_ the
allocators are up and running and use a single page sized static
__initdata for the early boot.

Thanks,

	tglx