From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Boris Ostrovsky, Joao Martins, Paolo Bonzini
Subject: [PATCH 5.5 246/367] x86/kvm: Introduce kvm_(un)map_gfn()
Date: Mon, 10 Feb 2020 04:32:39 -0800
Message-Id: <20200210122446.896396023@linuxfoundation.org>
In-Reply-To: <20200210122423.695146547@linuxfoundation.org>
References:
 <20200210122423.695146547@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Boris Ostrovsky

commit 1eff70a9abd46f175defafd29bc17ad456f398a7 upstream.

kvm_vcpu_(un)map operates on gfns from any current address space.
In certain cases we want to make sure we are not mapping SMRAM
and for that we can use kvm_(un)map_gfn() that we are introducing
in this patch.

This is part of CVE-2019-3016.

Signed-off-by: Boris Ostrovsky
Reviewed-by: Joao Martins
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman

---
 include/linux/kvm_host.h |    2 ++
 virt/kvm/kvm_main.c      |   29 ++++++++++++++++++++++++-----
 2 files changed, 26 insertions(+), 5 deletions(-)

--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -775,8 +775,10 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_
 kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
+int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map);
 struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn);
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
+int kvm_unmap_gfn(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
 unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable);
 int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset,
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1821,12 +1821,13 @@ struct page *gfn_to_page(struct kvm *kvm
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
-static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn,
+static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn,
 			 struct kvm_host_map *map)
 {
 	kvm_pfn_t pfn;
 	void *hva = NULL;
 	struct page *page = KVM_UNMAPPED_PAGE;
+	struct kvm_memory_slot *slot = __gfn_to_memslot(slots, gfn);
 
 	if (!map)
 		return -EINVAL;
@@ -1855,14 +1856,20 @@ static int __kvm_map_gfn(struct kvm_memo
 	return 0;
 }
 
+int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
+{
+	return __kvm_map_gfn(kvm_memslots(vcpu->kvm), gfn, map);
+}
+EXPORT_SYMBOL_GPL(kvm_map_gfn);
+
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 {
-	return __kvm_map_gfn(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, map);
+	return __kvm_map_gfn(kvm_vcpu_memslots(vcpu), gfn, map);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_map);
 
-void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
-		    bool dirty)
+static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot,
+			struct kvm_host_map *map, bool dirty)
 {
 	if (!map)
 		return;
@@ -1878,7 +1885,7 @@ void kvm_vcpu_unmap(struct kvm_vcp
 #endif
 
 	if (dirty) {
-		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
+		mark_page_dirty_in_slot(memslot, map->gfn);
 		kvm_release_pfn_dirty(map->pfn);
 	} else {
 		kvm_release_pfn_clean(map->pfn);
@@ -1887,6 +1894,18 @@ void kvm_vcpu_unmap(struct kvm_vcp
 	map->hva = NULL;
 	map->page = NULL;
 }
+
+int kvm_unmap_gfn(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
+{
+	__kvm_unmap_gfn(gfn_to_memslot(vcpu->kvm, map->gfn), map, dirty);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_unmap_gfn);
+
+void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
+{
+	__kvm_unmap_gfn(kvm_vcpu_gfn_to_memslot(vcpu, map->gfn), map, dirty);
+}
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
 struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn)