From: Alexandre Chartre <alexandre.chartre@oracle.com>
To: pbonzini@redhat.com, rkrcmar@redhat.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, kvm@vger.kernel.org, x86@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: konrad.wilk@oracle.com, jan.setjeeilers@oracle.com, liran.alon@oracle.com, jwadams@google.com, alexandre.chartre@oracle.com
Subject: [RFC KVM 17/27] kvm/isolation: improve mapping copy when mapping is already present
Date: Mon, 13 May 2019 16:38:25 +0200
Message-Id: <1557758315-12667-18-git-send-email-alexandre.chartre@oracle.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1557758315-12667-1-git-send-email-alexandre.chartre@oracle.com>
References: <1557758315-12667-1-git-send-email-alexandre.chartre@oracle.com>

A mapping can already exist if a buffer was mapped in the KVM address
space and then freed without any request to unmap it from the KVM
address space. In that case, clear the existing mapping before mapping
the new buffer. Also, if the new mapping is a subset of a larger range
that is already mapped, remap the entire larger range.
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com>
---
 arch/x86/kvm/isolation.c | 67 +++++++++++++++++++++++++++++++++++++++++++---
 1 files changed, 63 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/isolation.c b/arch/x86/kvm/isolation.c
index e494a15..539e287 100644
--- a/arch/x86/kvm/isolation.c
+++ b/arch/x86/kvm/isolation.c
@@ -88,6 +88,9 @@ struct mm_struct kvm_mm = {
 DEFINE_STATIC_KEY_FALSE(kvm_isolation_enabled);
 EXPORT_SYMBOL(kvm_isolation_enabled);
 
+static void kvm_clear_mapping(void *ptr, size_t size,
+			      enum page_table_level level);
+
 /*
  * When set to true, KVM #VMExit handlers run in isolated address space
  * which maps only KVM required code and per-VM information instead of
@@ -721,6 +724,7 @@ static int kvm_copy_mapping(void *ptr, size_t size, enum page_table_level level)
 {
 	unsigned long addr = (unsigned long)ptr;
 	unsigned long end = addr + ((unsigned long)size);
+	unsigned long range_addr, range_end;
 	struct kvm_range_mapping *range_mapping;
 	bool subset;
 	int err;
@@ -728,22 +732,77 @@ static int kvm_copy_mapping(void *ptr, size_t size, enum page_table_level level)
 	BUG_ON(current->mm == &kvm_mm);
 	pr_debug("KERNMAP COPY addr=%px size=%lx level=%d\n", ptr, size, level);
 
-	range_mapping = kmalloc(sizeof(struct kvm_range_mapping), GFP_KERNEL);
-	if (!range_mapping)
-		return -ENOMEM;
+	mutex_lock(&kvm_range_mapping_lock);
+
+	/*
+	 * A mapping can already exist if the buffer was mapped and then
+	 * freed but there was no request to unmap it. We might also be
+	 * trying to map a subset of an already mapped buffer.
+	 */
+	range_mapping = kvm_get_range_mapping_locked(ptr, &subset);
+	if (range_mapping) {
+		if (subset) {
+			pr_debug("range %px/%lx/%d is a subset of %px/%lx/%d already mapped, remapping\n",
+				 ptr, size, level, range_mapping->ptr,
+				 range_mapping->size, range_mapping->level);
+			range_addr = (unsigned long)range_mapping->ptr;
+			range_end = range_addr +
+				((unsigned long)range_mapping->size);
+			err = kvm_copy_pgd_range(&kvm_mm, current->mm,
+						 range_addr, range_end,
+						 range_mapping->level);
+			if (end <= range_end) {
+				/*
+				 * We effectively have a subset, fully contained
+				 * in the superset. So we are done.
+				 */
+				mutex_unlock(&kvm_range_mapping_lock);
+				return err;
+			}
+			/*
+			 * The new range is larger than the existing mapped
+			 * range. So we need an extra mapping to map the end
+			 * of the range.
+			 */
+			addr = range_end;
+			range_mapping = NULL;
+			pr_debug("adding extra range %lx-%lx (%d)\n", addr,
+				 end, level);
+		} else {
+			pr_debug("range %px size=%lx level=%d already mapped, clearing\n",
+				 range_mapping->ptr, range_mapping->size,
+				 range_mapping->level);
+			kvm_clear_mapping(range_mapping->ptr,
+					  range_mapping->size,
+					  range_mapping->level);
+			list_del(&range_mapping->list);
+		}
+	}
+
+	if (!range_mapping) {
+		range_mapping = kmalloc(sizeof(struct kvm_range_mapping),
+					GFP_KERNEL);
+		if (!range_mapping) {
+			mutex_unlock(&kvm_range_mapping_lock);
+			return -ENOMEM;
+		}
+		INIT_LIST_HEAD(&range_mapping->list);
+	}
 
 	err = kvm_copy_pgd_range(&kvm_mm, current->mm, addr, end, level);
 	if (err) {
+		mutex_unlock(&kvm_range_mapping_lock);
 		kfree(range_mapping);
 		return err;
 	}
 
-	INIT_LIST_HEAD(&range_mapping->list);
 	range_mapping->ptr = ptr;
 	range_mapping->size = size;
 	range_mapping->level = level;
 	list_add(&range_mapping->list, &kvm_range_mapping_list);
+	mutex_unlock(&kvm_range_mapping_lock);
+
 	return 0;
 }
-- 
1.7.1