Message-Id: <20200505134341.873785437@linutronix.de>
User-Agent: quilt/0.65
Date: Tue, 05 May 2020 15:41:30 +0200
From: Thomas Gleixner
To: LKML
McKenney" , Andy Lutomirski , Alexandre Chartre , Frederic Weisbecker , Paolo Bonzini , Sean Christopherson , Masami Hiramatsu , Petr Mladek , Steven Rostedt , Joel Fernandes , Boris Ostrovsky , Juergen Gross , Brian Gerst , Mathieu Desnoyers , Josh Poimboeuf , Will Deacon Subject: [patch V4 part 2 18/18] x86/kvm/svm: Move guest enter/exit into .noinstr.text References: <20200505134112.272268764@linutronix.de> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-transfer-encoding: 8-bit X-Linutronix-Spam-Score: -1.0 X-Linutronix-Spam-Level: - X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,SHORTCIRCUIT=-0.0001 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Move the functions which are inside the RCU off region into the non-instrumentable text section. Signed-off-by: Thomas Gleixner Cc: Paolo Bonzini Cc: Sean Christopherson --- arch/x86/kvm/svm/svm.c | 102 ++++++++++++++++++++++++--------------------- arch/x86/kvm/svm/vmenter.S | 2 2 files changed, 57 insertions(+), 47 deletions(-) --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3278,6 +3278,61 @@ static void svm_cancel_injection(struct void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs); +static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, + struct vcpu_svm *svm) +{ + /* + * VMENTER enables interrupts (host state), but the kernel state is + * interrupts disabled when this is invoked. Also tell RCU about + * it. This is the same logic as for exit_to_user_mode(). + * + * 1) Trace interrupts on state + * 2) Prepare lockdep with RCU on + * 3) Invoke context tracking if enabled to adjust RCU state + * 4) Tell lockdep that interrupts are enabled + * + * This has to be after x86_spec_ctrl_set_guest() because that can + * take locks (lockdep needs RCU) and calls into world and some + * more. + */ + instr_begin(); + trace_hardirqs_on_prepare(); + lockdep_hardirqs_on_prepare(CALLER_ADDR0); + instr_end(); + guest_enter_irqoff(); + lockdep_hardirqs_on(CALLER_ADDR0); + + __svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs); + +#ifdef CONFIG_X86_64 + native_wrmsrl(MSR_GS_BASE, svm->host.gs_base); +#else + loadsegment(fs, svm->host.fs); +#ifndef CONFIG_X86_32_LAZY_GS + loadsegment(gs, svm->host.gs); +#endif +#endif + + /* + * VMEXIT disables interrupts (host state, see the CLI in the ASM + * above), but tracing and lockdep have them in state 'on'. Same as + * enter_from_user_mode(). + * + * 1) Tell lockdep that interrupts are disabled + * 2) Invoke context tracking if enabled to reactivate RCU + * 3) Trace interrupts off state + * + * This needs to be done before the below as native_read_msr() + * contains a tracepoint and x86_spec_ctrl_restore_host() calls + * into world and some more. + */ + lockdep_hardirqs_off(CALLER_ADDR0); + guest_exit_irqoff(); + instr_begin(); + trace_hardirqs_off_prepare(); + instr_end(); +} + static void svm_vcpu_run(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); @@ -3330,52 +3385,7 @@ static void svm_vcpu_run(struct kvm_vcpu */ x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl); - /* - * VMENTER enables interrupts (host state), but the kernel state is - * interrupts disabled when this is invoked. Also tell RCU about - * it. This is the same logic as for exit_to_user_mode(). 
-	 *
-	 * 1) Trace interrupts on state
-	 * 2) Prepare lockdep with RCU on
-	 * 3) Invoke context tracking if enabled to adjust RCU state
-	 * 4) Tell lockdep that interrupts are enabled
-	 *
-	 * This has to be after x86_spec_ctrl_set_guest() because that can
-	 * take locks (lockdep needs RCU) and calls into world and some
-	 * more.
-	 */
-	trace_hardirqs_on_prepare();
-	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
-	guest_enter_irqoff();
-	lockdep_hardirqs_on(CALLER_ADDR0);
-
-	__svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs);
-
-#ifdef CONFIG_X86_64
-	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
-#else
-	loadsegment(fs, svm->host.fs);
-#ifndef CONFIG_X86_32_LAZY_GS
-	loadsegment(gs, svm->host.gs);
-#endif
-#endif
-
-	/*
-	 * VMEXIT disables interrupts (host state, see the CLI in the ASM
-	 * above), but tracing and lockdep have them in state 'on'. Same as
-	 * enter_from_user_mode().
-	 *
-	 * 1) Tell lockdep that interrupts are disabled
-	 * 2) Invoke context tracking if enabled to reactivate RCU
-	 * 3) Trace interrupts off state
-	 *
-	 * This needs to be done before the below as native_read_msr()
-	 * contains a tracepoint and x86_spec_ctrl_restore_host() calls
-	 * into world and some more.
-	 */
-	lockdep_hardirqs_off(CALLER_ADDR0);
-	guest_exit_irqoff();
-	trace_hardirqs_off_prepare();
+	svm_vcpu_enter_exit(vcpu, svm);
 
 	/*
 	 * We do not use IBRS in the kernel. If this vCPU has used the
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -27,7 +27,7 @@
 #define VCPU_R15	__VCPU_REGS_R15 * WORD_SIZE
 #endif
 
-	.text
+.section .noinstr.text, "ax"
 
 /**
  * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode
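
For reference, the shape which svm_vcpu_enter_exit() above follows can be
sketched as below. This is only an illustration, not code from the patch:
the function and helper names are placeholders, and it assumes the noinstr
annotation plus the instr_begin()/instr_end() markers used above.

/* The noinstr annotation places the function into .noinstr.text. */
static noinstr void example_transition(void)
{
	/*
	 * Instrumentable island: RCU is still watching here, so
	 * tracepoints and lockdep calls are fine.
	 */
	instr_begin();
	prepare_tracing_state();		/* placeholder */
	instr_end();

	/*
	 * Non-instrumentable region: nothing in here may call into
	 * instrumentable code, i.e. no tracepoints and nothing else
	 * which relies on RCU being watching.
	 */
	do_low_level_transition();		/* placeholder */

	/* Instrumentable again once the state has been restored. */
	instr_begin();
	restore_tracing_state();		/* placeholder */
	instr_end();
}

Keeping the instrumentable islands explicit and as small as possible is
what allows everything outside of them to be checked for stray calls into
instrumentable code.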