Message-ID: <20180706161307.733337643@linutronix.de>
User-Agent: quilt/0.65
Date: Fri, 06 Jul 2018 18:13:07 +0200
From: Thomas Gleixner
To: LKML
Cc: Paolo Bonzini, Radim Krcmar, Peter Zijlstra, Juergen Gross, Pavel Tatashin, steven.sistare@oracle.com, daniel.m.jordan@oracle.com, x86@kernel.org, kvm@vger.kernel.org
Subject: [patch 0/7] x86/kvmclock: Remove memblock dependency and further cleanups
X-Mailing-List: linux-kernel@vger.kernel.org

To allow early utilization of kvmclock, the memblock dependency has to be removed. memblock is currently used to allocate the per-CPU data for kvmclock.

The first patch, posted by Pavel, replaces the memblock allocation with a static array sized 64 bytes * NR_CPUS. That patch allocates everything statically, which is a waste when kvmclock is not used.
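[Editor's note: the static-array approach from the first patch can be pictured with a minimal userspace sketch. All names here are hypothetical stand-ins for illustration; the real code lives in arch/x86/kernel/kvmclock.c.]

```c
#include <assert.h>
#include <stddef.h>

#define NR_CPUS 64	/* illustrative; the kernel value is a Kconfig option */

/* Each pvclock entry occupies 64 bytes in the real layout; modeled
 * here as an opaque 64-byte record. */
struct pvclock_entry {
	unsigned char data[64];
};

/* Static allocation: no memblock needed, so it is usable before the
 * memory allocators come up -- at the cost of NR_CPUS * 64 bytes of
 * .bss even when kvmclock is never used. */
static struct pvclock_entry hv_clock[NR_CPUS];

/* Look up the entry for a given CPU; with a flat static array this
 * is just an index into hv_clock. */
static struct pvclock_entry *pvclock_for_cpu(int cpu)
{
	return &hv_clock[cpu];
}
```

The trade-off the cover letter points out is visible directly: the array is sized for the configured maximum number of CPUs, regardless of whether kvmclock is in use.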
The rest of the series cleans up the code and converts it to per-CPU variables, but does not put the kvmclock data into the per-CPU area, as that has an issue with mapping the boot CPU data into the VDSO: it leaks arbitrary data unless the data is page sized. Instead, the per-CPU data consists of pointers to the actual data. For the boot CPU a page-sized array is statically allocated, which can be mapped into the VDSO. That array is used to initialize the first 64 CPU pointers. If there are more CPUs, the pvclock data is allocated during CPU bringup.

So this still has some overhead when kvmclock is not in use, but bringing it down to zero would be a massive trainwreck and require even more indirections.

Thanks,

	tglx

8<--------------

 arch/x86/include/asm/kvm_guest.h |    7 
 arch/x86/include/asm/kvm_para.h  |    1 
 arch/x86/kernel/kvm.c            |   14 -
 arch/x86/kernel/kvmclock.c       |  262 ++++++++++++++-----------------------
 arch/x86/kernel/setup.c          |    4 
 5 files changed, 105 insertions(+), 183 deletions(-)
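[Editor's note: the pointer-based scheme described above -- a static page-sized boot array covering the first PAGE_SIZE / 64 = 64 CPUs, with entries for later CPUs allocated at bringup -- can be sketched in userspace as follows. Names and the dynamic allocator are hypothetical; the kernel uses its own allocators and page-aligned placement so the boot page can be mapped into the VDSO.]

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE	4096
#define ENTRY_SIZE	64
#define BOOT_ENTRIES	(PAGE_SIZE / ENTRY_SIZE)	/* 64 entries */
#define MAX_CPUS	256				/* illustrative */

struct pvclock_entry {
	unsigned char data[ENTRY_SIZE];
};

/* One statically allocated page. Being exactly page sized means it can
 * be mapped into the VDSO without exposing unrelated neighboring data. */
static struct pvclock_entry boot_page[BOOT_ENTRIES];

/* Per-CPU pointers to the actual data, pointing either into the boot
 * page or at a late allocation. */
static struct pvclock_entry *hv_clock_per_cpu[MAX_CPUS];

/* Early init: the first 64 CPU pointers are seeded from the boot page,
 * before any dynamic allocator is available. */
static void pvclock_init_boot(void)
{
	int cpu;

	for (cpu = 0; cpu < BOOT_ENTRIES; cpu++)
		hv_clock_per_cpu[cpu] = &boot_page[cpu];
}

/* CPU bringup: only CPUs beyond the boot page need an allocation,
 * and by then the allocators are fully functional. */
static int pvclock_cpu_up(int cpu)
{
	if (hv_clock_per_cpu[cpu])
		return 0;
	hv_clock_per_cpu[cpu] = calloc(1, sizeof(struct pvclock_entry));
	return hv_clock_per_cpu[cpu] ? 0 : -1;
}
```

This mirrors the residual overhead the cover letter concedes: the boot page and the pointer array exist whether or not kvmclock is in use, but nothing beyond that is allocated unless a CPU past the first 64 actually comes up.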