From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, David Matlack, Sean Christopherson, Paolo Bonzini
Subject: [PATCH 5.16 0001/1039] KVM: x86/mmu: Fix write-protection of PTs mapped by the TDP MMU
Date: Mon, 24 Jan 2022 19:29:50 +0100
Message-Id: <20220124184125.179384891@linuxfoundation.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220124184125.121143506@linuxfoundation.org>
References: <20220124184125.121143506@linuxfoundation.org>
User-Agent: quilt/0.66
X-stable: review
X-Patchwork-Hint: ignore
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: David Matlack

commit 7c8a4742c4abe205ec9daf416c9d42fd6b406e8e upstream.

When the TDP MMU is write-protecting GFNs for page table protection (as
opposed to for dirty logging, or due to the HVA not being writable), it
checks whether the SPTE is already write-protected and, if so, skips
modifying the SPTE and the TLB flush.

This behavior is incorrect because it fails to check whether the SPTE is
write-protected for page table protection, i.e. fails to check that
MMU-writable is '0'. If the SPTE was write-protected for dirty logging
but not for page table protection, the SPTE could locklessly be made
writable, and vCPUs could still be running with writable mappings cached
in their TLBs.

Fix this by skipping the SPTE update only if the SPTE is already
write-protected *and* MMU-writable is already clear.
Technically, checking only MMU-writable would suffice; a SPTE cannot be
writable without MMU-writable being set. But check both to be paranoid
and because it arguably yields more readable code.

Fixes: 46044f72c382 ("kvm: x86/mmu: Support write protection for nesting in tdp MMU")
Cc: stable@vger.kernel.org
Signed-off-by: David Matlack
Message-Id: <20220113233020.3986005-2-dmatlack@google.com>
Reviewed-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kvm/mmu/tdp_mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1442,12 +1442,12 @@ static bool write_protect_gfn(struct kvm
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
-		if (!is_writable_pte(iter.old_spte))
-			break;
-
 		new_spte = iter.old_spte &
 			~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
 
+		if (new_spte == iter.old_spte)
+			break;
+
 		tdp_mmu_set_spte(kvm, &iter, new_spte);
 		spte_set = true;
 	}
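
To make the effect of the new check concrete, here is a minimal stand-alone
sketch in ordinary user-space C. It is not kernel code: FAKE_PT_WRITABLE_MASK,
FAKE_MMU_WRITABLE_MASK, old_logic_skips_update() and new_logic_skips_update()
are hypothetical names with made-up bit positions, not the real x86 SPTE
layout or the kernel's helpers. It only illustrates why comparing the
recomputed SPTE against the old value is a stronger skip condition than
testing the writable bit alone: an SPTE write-protected for dirty logging has
the writable bit clear but MMU-writable still set, so the old check would
skip it while the new check does not.

/*
 * Stand-alone illustration (user-space C, hypothetical bit positions,
 * not the real x86 SPTE layout or the kernel's helpers).
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_PT_WRITABLE_MASK   (1ull << 1)   /* hypothetical bit */
#define FAKE_MMU_WRITABLE_MASK  (1ull << 2)   /* hypothetical bit */

/* Old logic: skip the update whenever the writable bit is already clear. */
static bool old_logic_skips_update(uint64_t old_spte)
{
	return !(old_spte & FAKE_PT_WRITABLE_MASK);
}

/* New logic: skip only if clearing both bits would change nothing. */
static bool new_logic_skips_update(uint64_t old_spte)
{
	uint64_t new_spte = old_spte &
		~(FAKE_PT_WRITABLE_MASK | FAKE_MMU_WRITABLE_MASK);

	return new_spte == old_spte;
}

int main(void)
{
	/*
	 * An SPTE write-protected for dirty logging: the writable bit is
	 * clear, but MMU-writable is still set, so the mapping could be
	 * made writable again locklessly.
	 */
	uint64_t spte = FAKE_MMU_WRITABLE_MASK;

	/* Old check skips the update and leaves MMU-writable set: the bug. */
	printf("old logic skips update: %d\n", old_logic_skips_update(spte));

	/* New check does not skip, so MMU-writable gets cleared. */
	printf("new logic skips update: %d\n", new_logic_skips_update(spte));

	return 0;
}

Built with any C compiler, the first printf reports 1 (the old check would
skip the update) and the second reports 0 (the new check proceeds and clears
MMU-writable), matching the scenario described in the commit message.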