Date: Tue, 25 Jul 2023 22:00:54 +0000
In-Reply-To: <20230725220132.2310657-1-afranji@google.com>
Mime-Version: 1.0
References: <20230725220132.2310657-1-afranji@google.com>
X-Mailer: git-send-email 2.41.0.487.g6d72f3e995-goog
Message-ID: <20230725220132.2310657-2-afranji@google.com>
Subject: [PATCH v4 01/28] KVM: selftests: Add function to allow one-to-one GVA to GPA mappings
From: Ryan Afranji <afranji@google.com>
To: linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, isaku.yamahata@intel.com, sagis@google.com,
        erdemaktas@google.com, afranji@google.com, runanwang@google.com, shuah@kernel.org,
        drjones@redhat.com, maz@kernel.org, bgardon@google.com, jmattson@google.com,
        dmatlack@google.com, peterx@redhat.com, oupton@google.com, ricarkol@google.com,
        yang.zhong@intel.com, wei.w.wang@intel.com, xiaoyao.li@intel.com, pgonda@google.com,
        eesposit@redhat.com, borntraeger@de.ibm.com, eric.auger@redhat.com,
        wangyanan55@huawei.com, aaronlewis@google.com, vkuznets@redhat.com, pshier@google.com,
        axelrasmussen@google.com, zhenzhong.duan@intel.com, maciej.szmigiero@oracle.com,
        like.xu@linux.intel.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
        ackerleytng@google.com
Content-Type: text/plain; charset="UTF-8"

From: Ackerley Tng <ackerleytng@google.com>

One-to-one GVA to GPA mappings can be used in the guest to set up boot
sequences during which paging is enabled, hence requiring a transition
from using physical to virtual addresses in consecutive instructions.

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Change-Id: I5a15e241b3ce9014e17a794478bbfa65b9d8e0a1
Signed-off-by: Ryan Afranji <afranji@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h  |  3 +
 tools/testing/selftests/kvm/lib/kvm_util.c | 81 ++++++++++++++++++-
 2 files changed, 83 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index af26c5687d86..a07ce5f5244a 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -513,6 +513,9 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_mi
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
 			    enum kvm_mem_region_type type);
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
+			       uint32_t data_memslot);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
 				 enum kvm_mem_region_type type);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 518990ca408d..5bbcddcd6796 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1371,6 +1371,58 @@ vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
 	return vaddr_start;
 }
 
+/*
+ * VM Virtual Address Allocate Shared/Encrypted
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   sz - Size in bytes
+ *   vaddr_min - Minimum starting virtual address
+ *   paddr_min - Minimum starting physical address
+ *   data_memslot - memslot number to allocate in
+ *   encrypt - Whether the region should be handled as encrypted
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Starting guest virtual address
+ *
+ * Allocates at least sz bytes within the virtual address space of the vm
+ * given by vm. The allocated bytes are mapped to a virtual address >=
+ * the address given by vaddr_min. Note that each allocation uses a
+ * unique set of pages, with the minimum real allocation being at least
+ * a page.
+ */
+static vm_vaddr_t
+_vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
+		vm_paddr_t paddr_min, uint32_t data_memslot, bool encrypt)
+{
+	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
+
+	virt_pgd_alloc(vm);
+	vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages,
+					       paddr_min,
+					       data_memslot, encrypt);
+
+	/*
+	 * Find an unused range of virtual page addresses of at least
+	 * pages in length.
+	 */
+	vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm, sz, vaddr_min);
+
+	/* Map the virtual pages. */
+	for (vm_vaddr_t vaddr = vaddr_start; pages > 0;
+	     pages--, vaddr += vm->page_size, paddr += vm->page_size) {
+
+		virt_pg_map(vm, vaddr, paddr);
+
+		sparsebit_set(vm->vpages_mapped,
+			      vaddr >> vm->page_shift);
+	}
+
+	return vaddr_start;
+}
+
 /*
  * VM Virtual Address Allocate
  *
@@ -1392,7 +1444,34 @@ vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
  */
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
 {
-	return __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
+	return _vm_vaddr_alloc(vm, sz, vaddr_min,
+			       KVM_UTIL_MIN_PFN * vm->page_size, 0,
+			       vm->protected);
+}
+
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+{
+	return _vm_vaddr_alloc(vm, sz, vaddr_min,
+			       KVM_UTIL_MIN_PFN * vm->page_size, 0, false);
+}
+
+/**
+ * Allocate memory in @vm of size @sz in memslot with id @data_memslot,
+ * beginning with the desired address of @vaddr_min.
+ *
+ * If there isn't enough memory at @vaddr_min, find the next possible address
+ * that can meet the requested size in the given memslot.
+ *
+ * Return the address where the memory is allocated.
+ */
+vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
+			       uint32_t data_memslot)
+{
+	vm_vaddr_t gva = _vm_vaddr_alloc(vm, sz, vaddr_min, (vm_paddr_t) vaddr_min,
+					 data_memslot, vm->protected);
+	ASSERT_EQ(gva, addr_gva2gpa(vm, gva));
+
+	return gva;
 }
 
 /*
-- 
2.41.0.487.g6d72f3e995-goog
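
For illustration, a selftest that stages guest boot code could use the new
helper roughly as follows. This sketch is not taken from this series: the
wrapper name, the 0x400000 minimum address, and the use of memslot 0 are
assumptions.

/* Usage sketch (illustrative only; names and constants are assumptions). */
#include "test_util.h"
#include "kvm_util.h"

static vm_vaddr_t alloc_identity_mapped_area(struct kvm_vm *vm, size_t size)
{
	/*
	 * Ask for memory whose GVA equals its GPA, so the same address is
	 * usable both before paging is enabled (as a physical address) and
	 * after (as a virtual address).
	 */
	vm_vaddr_t gva = vm_vaddr_alloc_1to1(vm, size, 0x400000, 0);

	/* The helper already asserts this; repeated here for clarity. */
	TEST_ASSERT(gva == addr_gva2gpa(vm, gva),
		    "Expected a one-to-one GVA to GPA mapping");

	return gva;
}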