From: Jim Mattson
Date: Thu, 20 Aug 2020 13:08:22 -0700
Subject: Re: [PATCH] KVM: VMX: fix crash cleanup when KVM wasn't used
To: Vitaly Kuznetsov
Cc: Paolo Bonzini, Sean Christopherson, Wanpeng Li, kvm list, LKML
In-Reply-To: <20200401081348.1345307-1-vkuznets@redhat.com>
References: <20200401081348.1345307-1-vkuznets@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Apr 1, 2020 at 1:13 AM Vitaly Kuznetsov wrote:
>
> If KVM wasn't used at all before we crash, the cleanup procedure fails with
>
> BUG: unable to handle page fault for address: ffffffffffffffc8
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 23215067 P4D 23215067 PUD 23217067 PMD 0
> Oops: 0000 [#8] SMP PTI
> CPU: 0 PID: 3542 Comm: bash Kdump: loaded Tainted: G D 5.6.0-rc2+ #823
> RIP: 0010:crash_vmclear_local_loaded_vmcss.cold+0x19/0x51 [kvm_intel]
>
> The root cause is that the loaded_vmcss_on_cpu list is not yet initialized:
> we initialize it in hardware_enable(), but that only happens when we start
> a VM.
>
> Previously, we used to have a bitmap with enabled CPUs and that was
> preventing [masking] the issue.
>
> Initialize the loaded_vmcss_on_cpu list earlier, right before we assign
> the crash_vmclear_loaded_vmcss pointer. The blocked_vcpu_on_cpu list and
> blocked_vcpu_on_cpu_lock are moved along with it for consistency.
>
> Fixes: 31603d4fc2bb ("KVM: VMX: Always VMCLEAR in-use VMCSes during crash with kexec support")
> Signed-off-by: Vitaly Kuznetsov
> ---
>  arch/x86/kvm/vmx/vmx.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3aba51d782e2..39a5dde12b79 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2257,10 +2257,6 @@ static int hardware_enable(void)
>  	    !hv_get_vp_assist_page(cpu))
>  		return -EFAULT;
>
> -	INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> -	INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
> -	spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
> -
>  	r = kvm_cpu_vmxon(phys_addr);
>  	if (r)
>  		return r;
> @@ -8006,7 +8002,7 @@ module_exit(vmx_exit);
>
>  static int __init vmx_init(void)
>  {
> -	int r;
> +	int r, cpu;
>
>  #if IS_ENABLED(CONFIG_HYPERV)
>  	/*
> @@ -8060,6 +8056,12 @@ static int __init vmx_init(void)
>  		return r;
>  	}
>
> +	for_each_possible_cpu(cpu) {
> +		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> +		INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
> +		spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
> +	}

Just above this chunk, we have:

        r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
        if (r) {
                vmx_exit();
        ...

If we take that early exit because vmx_setup_l1d_flush() fails, we won't
initialize loaded_vmcss_on_cpu. However, vmx_exit() calls kvm_exit(),
which calls on_each_cpu(hardware_disable_nolock, NULL, 1).
hardware_disable_nolock() then calls kvm_arch_hardware_disable(), which
calls kvm_x86_ops.hardware_disable() [the vmx.c hardware_disable()],
which calls vmclear_local_loaded_vmcss().
I believe that vmclear_local_loaded_vmcss() will then try to dereference
a NULL pointer, since per_cpu(loaded_vmcss_on_cpu, cpu) is uninitialized.

>  #ifdef CONFIG_KEXEC_CORE
>  	rcu_assign_pointer(crash_vmclear_loaded_vmcss,
>  			   crash_vmclear_local_loaded_vmcss);
> --
> 2.25.1
>