Message-Id: <20190919150809.860645841@linutronix.de>
Date: Thu, 19 Sep 2019 17:03:28 +0200
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Peter Zijlstra, Andy Lutomirski, Catalin Marinas,
    Will Deacon, Mark Rutland, Marc Zyngier, Paolo Bonzini,
    kvm@vger.kernel.org, linux-arch@vger.kernel.org
Subject: [RFC patch 14/15] workpending: Provide infrastructure for work before entering a guest
References: <20190919150314.054351477@linutronix.de>

Entering a guest is similar to exiting to user space. Pending work like
handling signals, rescheduling, task work etc. needs to be handled before
that.
Provide generic infrastructure to avoid duplication of the same handling
code all over the place.

Update the ARM64 struct kvm_vcpu_stat with a signal_exits member so the
generic code compiles.

Signed-off-by: Thomas Gleixner
---
Note: illustrative sketches of possible usage (not part of this patch) are
appended after the diff.

 arch/arm64/include/asm/kvm_host.h |    1 
 include/linux/entry-common.h      |   66 ++++++++++++++++++++++++++++++++++++++
 kernel/entry/common.c             |   44 +++++++++++++++++++++++++
 3 files changed, 111 insertions(+)

--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -409,6 +409,7 @@ struct kvm_vcpu_stat {
 	u64 wfi_exit_stat;
 	u64 mmio_exit_user;
 	u64 mmio_exit_kernel;
+	u64 signal_exits;
 	u64 exits;
 };
 
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -255,4 +255,70 @@ static inline void arch_syscall_exit_tra
 /* Common syscall exit function */
 void syscall_exit_to_usermode(struct pt_regs *regs, long syscall, long retval);
 
+#if IS_ENABLED(CONFIG_KVM)
+
+#include <linux/kvm_host.h>
+
+#ifndef ARCH_EXIT_TO_GUESTMODE_WORK
+# define ARCH_EXIT_TO_GUESTMODE_WORK	(0)
+#endif
+
+#define EXIT_TO_GUESTMODE_WORK						\
+	(_TIF_NEED_RESCHED | _TIF_SIGPENDING | _TIF_NOTIFY_RESUME |	\
+	 ARCH_EXIT_TO_GUESTMODE_WORK)
+
+int core_exit_to_guestmode_work(struct kvm *kvm, struct kvm_vcpu *vcpu,
+				unsigned long ti_work);
+
+/**
+ * arch_exit_to_guestmode - Architecture specific exit to guest mode function
+ * @kvm:	Pointer to the guest instance
+ * @vcpu:	Pointer to current's VCPU data
+ * @ti_work:	Cached TIF flags gathered in exit_to_guestmode()
+ *
+ * Invoked from core_exit_to_guestmode_work(). Can be replaced by
+ * architecture specific code.
+ */
+static inline int arch_exit_to_guestmode(struct kvm *kvm, struct kvm_vcpu *vcpu,
+					 unsigned long ti_work);
+
+#ifndef arch_exit_to_guestmode
+static inline int arch_exit_to_guestmode(struct kvm *kvm, struct kvm_vcpu *vcpu,
+					 unsigned long ti_work)
+{
+	return 0;
+}
+#endif
+
+/**
+ * exit_to_guestmode - Check and handle pending work which needs to be
+ *		       handled before returning to guest mode
+ * @kvm:	Pointer to the guest instance
+ * @vcpu:	Pointer to current's VCPU data
+ *
+ * Returns: 0 or an error code
+ */
+static inline int exit_to_guestmode(struct kvm *kvm, struct kvm_vcpu *vcpu)
+{
+	unsigned long ti_work = READ_ONCE(current_thread_info()->flags);
+
+	if (unlikely(ti_work & EXIT_TO_GUESTMODE_WORK))
+		return core_exit_to_guestmode_work(kvm, vcpu, ti_work);
+	return 0;
+}
+
+
+/**
+ * exit_to_guestmode_work_pending - Check if work is pending which needs to be
+ *				    handled before returning to guest mode
+ */
+static inline bool exit_to_guestmode_work_pending(void)
+{
+	unsigned long ti_work = READ_ONCE(current_thread_info()->flags);
+
+	return !!(ti_work & EXIT_TO_GUESTMODE_WORK);
+
+}
+#endif /* CONFIG_KVM */
+
 #endif
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -174,3 +174,47 @@ void syscall_exit_to_usermode(struct pt_
 	do_exit_to_usermode(regs);
 #endif
 }
+
+#if IS_ENABLED(CONFIG_KVM)
+int __weak arch_exit_to_guestmode_work(struct kvm *kvm, struct kvm_vcpu *vcpu,
+				       unsigned long ti_work)
+{
+	return 0;
+}
+
+int core_exit_to_guestmode_work(struct kvm *kvm, struct kvm_vcpu *vcpu,
+				unsigned long ti_work)
+{
+	/*
+	 * Before returning to guest mode handle all pending work
+	 */
+	if (ti_work & _TIF_SIGPENDING) {
+		vcpu->run->exit_reason = KVM_EXIT_INTR;
+		vcpu->stat.signal_exits++;
+		return -EINTR;
+	}
+
+	if (ti_work & _TIF_NEED_RESCHED) {
+		srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
+		schedule();
+		vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
+	}
+
+	if (ti_work & _TIF_PATCH_PENDING) {
+		srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
+		klp_update_patch_state(current);
+		vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
+	}
+
+	if (ti_work & _TIF_NOTIFY_RESUME) {
+		srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
+		clear_thread_flag(TIF_NOTIFY_RESUME);
+		tracehook_notify_resume(NULL);
+		vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
+	}
+
+	/* Any extra architecture specific work */
+	return arch_exit_to_guestmode_work(kvm, vcpu, ti_work);
+}
+EXPORT_SYMBOL_GPL(core_exit_to_guestmode_work);
+#endif
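
Illustrative sketch 1 (not part of the patch): how an architecture's vcpu
run loop might consume this infrastructure. vcpu_run_once() and
arch_enter_guest() are made-up names used purely for illustration; the
sketch assumes the caller holds kvm->srcu via vcpu->srcu_idx, since the
work handlers above drop and reacquire it. Only the call sites of
exit_to_guestmode() and exit_to_guestmode_work_pending() matter here.

#include <linux/context_tracking.h>
#include <linux/entry-common.h>
#include <linux/kvm_host.h>

/* Hypothetical arch function which performs the actual guest entry/exit */
void arch_enter_guest(struct kvm_vcpu *vcpu);

static int vcpu_run_once(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = vcpu->kvm;
	int ret;

	/*
	 * Handle pending TIF work (signal, reschedule, notify resume and
	 * arch specific bits) before dropping into guest mode. A non-zero
	 * return value, e.g. -EINTR for a pending signal, tells the run
	 * loop to return to user space.
	 */
	ret = exit_to_guestmode(kvm, vcpu);
	if (ret)
		return ret;

	local_irq_disable();

	/*
	 * Recheck with interrupts disabled. Work which was raised after
	 * the check above has to be handled before the guest runs, so
	 * back out and let the caller iterate.
	 */
	if (exit_to_guestmode_work_pending()) {
		local_irq_enable();
		return 1;		/* 1 == continue the run loop */
	}

	guest_enter_irqoff();
	arch_enter_guest(vcpu);
	guest_exit_irqoff();

	local_irq_enable();
	return 1;
}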
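
Illustrative sketch 2 (not part of the patch): how an architecture could
hook into the work handling. _TIF_ARCH_GUEST_WORK and
handle_arch_guest_work() are invented names; the point is merely that an
arch adds its private TIF bits via ARCH_EXIT_TO_GUESTMODE_WORK and provides
a strong definition which overrides the __weak arch_exit_to_guestmode_work()
stub in kernel/entry/common.c.

/* arch header, e.g. the arch's asm/entry-common.h: */
#define ARCH_EXIT_TO_GUESTMODE_WORK	(_TIF_ARCH_GUEST_WORK)

/* arch KVM code: */
int arch_exit_to_guestmode_work(struct kvm *kvm, struct kvm_vcpu *vcpu,
				unsigned long ti_work)
{
	if (ti_work & _TIF_ARCH_GUEST_WORK)
		handle_arch_guest_work(vcpu);

	return 0;
}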