From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jim Mattson, Andrew Honig, Sean Christopherson, Paolo Bonzini
Subject: [PATCH 5.5 142/176] KVM: Check for a bad hva before dropping into the ghc slow path
Date: Tue, 3 Mar 2020 18:43:26 +0100
Message-Id: <20200303174321.201234613@linuxfoundation.org>
In-Reply-To:
<20200303174304.593872177@linuxfoundation.org>
References: <20200303174304.593872177@linuxfoundation.org>

From: Sean Christopherson

commit fcfbc617547fc6d9552cb6c1c563b6a90ee98085 upstream.

When reading/writing using the guest/host cache, check for a bad hva
before checking for a NULL memslot, which triggers the slow path for
handling cross-page accesses. Because the memslot is nullified on error
by __kvm_gfn_to_hva_cache_init(), if the bad hva is encountered after
crossing into a new page, then the kvm_{read,write}_guest() slow path
could potentially write/access the first chunk prior to detecting the
bad hva.

Arguably, performing a partial access is semantically correct from an
architectural perspective, but that behavior is certainly not intended.
In the original implementation, the memslot was not explicitly
nullified, and therefore the partial-access behavior varied based on
whether the memslot itself was null or the hva was simply bad. The
current behavior was introduced as a seemingly unintentional side
effect in commit f1b9dd5eb86c ("kvm: Disallow wraparound in
kvm_gfn_to_hva_cache_init"), which justified the change with "since
some callers don't check the return code from this function, it seems
prudent to clear ghc->memslot in the event of an error".

Regardless of intent, the partial access is dependent on _not_ checking
the result of the cache initialization, which is arguably a bug in its
own right, at best simply weird.
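The ordering problem described above can be modeled with a small
stand-alone sketch. This is not the kernel code; the type, function,
and flag names below are hypothetical stand-ins for the real
gfn_to_hva_cache path in virt/kvm/kvm_main.c, reduced to show why the
bad-hva check must come before the NULL-memslot fallback:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model: on a failed cache init the memslot is nullified
 * AND the hva is marked bad. Checking the bad hva first guarantees the
 * cross-page slow path is never entered with a bad address, so no
 * partial chunk can be written before the error is detected. */

#define ERROR_HVA ((unsigned long)-1)

struct ghc_model {            /* stand-in for struct gfn_to_hva_cache */
	void *memslot;        /* NULL => take the cross-page slow path */
	unsigned long hva;    /* ERROR_HVA => cache init failed */
};

static bool slow_path_called; /* records a kvm_write_guest() fallback */

static int slow_write(void)
{
	slow_path_called = true;
	return 0;
}

/* Fixed ordering: reject the bad hva before the NULL-memslot check. */
static int write_cached(struct ghc_model *ghc)
{
	if (ghc->hva == ERROR_HVA)
		return -14;           /* -EFAULT */
	if (!ghc->memslot)
		return slow_write();  /* legitimate cross-page access */
	return 0;                     /* fast path: direct copy */
}
```

With the pre-fix ordering (memslot check first), a nullified memslot
from a failed init would route a bad hva into slow_write() and the
first chunk of a cross-page access could land before the fault is seen;
with the ordering above, the error case returns -EFAULT immediately and
only a genuine cross-page access reaches the slow path.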
Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
Cc: Jim Mattson
Cc: Andrew Honig
Signed-off-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman

---
 virt/kvm/kvm_main.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2287,12 +2287,12 @@ int kvm_write_guest_offset_cached(struct
 	if (slots->generation != ghc->generation)
 		__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len);
 
-	if (unlikely(!ghc->memslot))
-		return kvm_write_guest(kvm, gpa, data, len);
-
 	if (kvm_is_error_hva(ghc->hva))
 		return -EFAULT;
 
+	if (unlikely(!ghc->memslot))
+		return kvm_write_guest(kvm, gpa, data, len);
+
 	r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
 	if (r)
 		return -EFAULT;
@@ -2320,12 +2320,12 @@ int kvm_read_guest_cached(struct kvm *kv
 	if (slots->generation != ghc->generation)
 		__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len);
 
-	if (unlikely(!ghc->memslot))
-		return kvm_read_guest(kvm, ghc->gpa, data, len);
-
 	if (kvm_is_error_hva(ghc->hva))
 		return -EFAULT;
 
+	if (unlikely(!ghc->memslot))
+		return kvm_read_guest(kvm, ghc->gpa, data, len);
+
 	r = __copy_from_user(data, (void __user *)ghc->hva, len);
 	if (r)
 		return -EFAULT;