From: Jim Mattson
Date: Mon, 24 Aug 2020 11:57:35 -0700
Subject: Re: [PATCH] KVM: VMX: fix crash cleanup when KVM wasn't used
To: Sean Christopherson
Cc: Vitaly Kuznetsov, Paolo Bonzini, Wanpeng Li, kvm list, LKML
In-Reply-To: <20200822034046.GE4769@sjchrist-ice>
References: <20200401081348.1345307-1-vkuznets@redhat.com> <20200822034046.GE4769@sjchrist-ice>

On Fri, Aug 21, 2020 at 8:40 PM Sean Christopherson wrote:
>
> On Thu, Aug 20, 2020 at 01:08:22PM -0700, Jim Mattson wrote:
> > On Wed, Apr 1, 2020 at 1:13 AM Vitaly Kuznetsov wrote:
> > > ---
> > >  arch/x86/kvm/vmx/vmx.c | 12 +++++++-----
> > >  1 file changed, 7 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > > index 3aba51d782e2..39a5dde12b79 100644
> > > --- a/arch/x86/kvm/vmx/vmx.c
> > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > @@ -2257,10 +2257,6 @@ static int hardware_enable(void)
> > >  	    !hv_get_vp_assist_page(cpu))
> > >  		return -EFAULT;
> > >
> > > -	INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> > > -	INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
> > > -	spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
> > > -
> > >  	r = kvm_cpu_vmxon(phys_addr);
> > >  	if (r)
> > >  		return r;
> > > @@ -8006,7 +8002,7 @@ module_exit(vmx_exit);
> > >
> > >  static int __init vmx_init(void)
> > >  {
> > > -	int r;
> > > +	int r, cpu;
> > >
> > >  #if IS_ENABLED(CONFIG_HYPERV)
> > >  	/*
> > > @@ -8060,6 +8056,12 @@ static int __init vmx_init(void)
> > >  		return r;
> > >  	}
> > >
> > > +	for_each_possible_cpu(cpu) {
> > > +		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> > > +		INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
> > > +		spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
> > > +	}
> >
> > Just above this chunk, we have:
> >
> >         r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
> >         if (r) {
> >                 vmx_exit();
> >         ...
> >
> > If we take that early exit, because vmx_setup_l1d_flush() fails, we
> > won't initialize loaded_vmcss_on_cpu. However, vmx_exit() calls
> > kvm_exit(), which calls on_each_cpu(hardware_disable_nolock, NULL, 1).
> > hardware_disable_nolock() then calls kvm_arch_hardware_disable(),
> > which calls kvm_x86_ops.hardware_disable() [the vmx.c
> > hardware_disable()], which calls vmclear_local_loaded_vmcss().
> >
> > I believe that vmclear_local_loaded_vmcss() will then try to
> > dereference a NULL pointer, since per_cpu(loaded_vmcss_on_cpu, cpu) is
> > uninitialized.
>
> I agree the code is a mess (kvm_init() and kvm_exit() included), but I'm
> pretty sure hardware_disable_nolock() is guaranteed to be a nop as it's
> impossible for kvm_usage_count to be non-zero if vmx_init() hasn't
> finished.

Unless I'm missing something, there's no check for a non-zero
kvm_usage_count on this path. There is such a check in
hardware_disable_all_nolock(), but not in hardware_disable_nolock().