From: Vishal Annapurve
Date: Thu, 21 Jul 2022 13:24:04 -0700
Subject: Re: [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory
References: <20220511000811.384766-1-vannapurve@google.com>
	<20220511000811.384766-3-vannapurve@google.com>
To: Sean Christopherson
Cc: x86, kvm list, LKML, linux-kselftest@vger.kernel.org, Paolo Bonzini,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	dave.hansen@linux.intel.com, "H. Peter Anvin", shauh@kernel.org,
	yang.zhong@intel.com, drjones@redhat.com, Ricardo Koller, Aaron Lewis,
	wei.w.wang@intel.com, "Kirill A. Shutemov", Jonathan Corbet,
	Hugh Dickins, Jeff Layton,
	"J. Bruce Fields", Andrew Morton, Chao Peng, Yu Zhang, Jun Nakajima,
	Dave Hansen, Michael Roth, Quentin Perret, Steven Price, Andi Kleen,
	David Hildenbrand, Andy Lutomirski, Vlastimil Babka, Marc Orr,
	Erdem Aktas, Peter Gonda, "Nikunj A. Dadhania", Austin Diviness

On Wed, Jul 20, 2022 at 4:03 PM Sean Christopherson wrote:
>
> ...
>
> > + * which doesn't handle global offset table updates. Calling standard libc
> > + * functions would normally result in referring to the global offset table.
> > + * Adding O1 here seems to prohibit compiler from replacing the memory
> > + * operations with standard libc functions such as memset.
> > + */
>
> Eww. We should either fix kvm_vm_elf_load() or override the problematic libc
> variants. Playing games with per-function attributes is not maintainable.

I will try to spend more time on how kvm_vm_elf_load() can be modified to
handle GOT fixups in different scenarios, including statically and
dynamically linked selftest binaries; I currently recall only limited
information here.

Modifying kvm_vm_elf_load() to fix up GOT entries will be insufficient on
its own, though: the guest VM code (possibly the whole selftest binary)
will also need to be compiled with flags that let the memset/memcpy
implementations work with the specific guest VM configuration, e.g.
whether the AVX extension is enabled. The same concern is outlined in
https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/kvm/lib/x86_64/svm.c#L64.

Would it be OK to maintain the selftest binary compilation flags based on
the guest VM configuration?
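If we do end up overriding the libc variants, a freestanding
implementation could sidestep both problems at once. A rough, untested
sketch (the compiler flags named in the comment are an assumption on my
part, not something this series already does):

#include <stddef.h>

/*
 * Sketch only: freestanding overrides for the guest-usable memory ops,
 * meant to be built with flags such as -fno-builtin -mno-sse -mno-avx so
 * that the compiler neither substitutes libc calls (which would be
 * resolved through the GOT) nor emits vector instructions the guest may
 * not have enabled.
 */
void *memset(void *s, int c, size_t n)
{
	unsigned char *p = s;

	while (n--)
		*p++ = (unsigned char)c;

	return s;
}

int memcmp(const void *s1, const void *s2, size_t n)
{
	const unsigned char *p1 = s1, *p2 = s2;

	for (; n; n--, p1++, p2++) {
		if (*p1 != *p2)
			return *p1 - *p2;
	}

	return 0;
}

That would at least keep the per-function attribute games out of the
tests themselves.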
> > +static bool __attribute__((optimize("O1"))) do_mem_op(enum mem_op op,
> > +		void *mem, uint64_t pat, uint32_t size)
>
> Oof. Don't be so aggressive in shortening names, _especially_ when there's no
> established/universal abbreviation. It took me forever to figure out that
> "pat" is "pattern". And for x86, "pat" is especially confusing because it is
> already a very well-established name that just so happens to be relevant to
> memory types, just a different kind of a memory type...
>
> > +{
> > +	uint64_t *buf = (uint64_t *)mem;
> > +	uint32_t chunk_size = sizeof(pat);
> > +	uint64_t mem_addr = (uint64_t)mem;
> > +
> > +	if (((mem_addr % chunk_size) != 0) || ((size % chunk_size) != 0))
>
> All the patterns are a repeating byte, why restrict this to 8-byte chunks?
> Then this confusing assert-but-not-an-assert goes away.
>
> > +		return false;
> > +
> > +	for (uint32_t i = 0; i < (size / chunk_size); i++) {
> > +		if (op == SET_PAT)
> > +			buf[i] = pat;
> > +		if (op == VERIFY_PAT) {
> > +			if (buf[i] != pat)
> > +				return false;
>
> If overriding memset() and memcmp() doesn't work for whatever reason, add
> proper helpers instead of a do_stuff() wrapper.
>
> > +		}
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +/* Test to verify guest private accesses on private memory with following steps:
> > + * 1) Upon entry, guest signals VMM that it has started.
> > + * 2) VMM populates the shared memory with known pattern and continues guest
> > + *    execution.
> > + * 3) Guest writes a different pattern on the private memory and signals VMM
> > + *    that it has updated private memory.
> > + * 4) VMM verifies its shared memory contents to be same as the data populated
> > + *    in step 2 and continues guest execution.
> > + * 5) Guest verifies its private memory contents to be same as the data
> > + *    populated in step 3 and marks the end of the guest execution.
> > + */
> > +#define PMPAT_ID 0
> > +#define PMPAT_DESC "PrivateMemoryPrivateAccessTest"
> > +
> > +/* Guest code execution stages for private mem access test */
> > +#define PMPAT_GUEST_STARTED 0ULL
> > +#define PMPAT_GUEST_PRIV_MEM_UPDATED 1ULL
> > +
> > +static bool pmpat_handle_vm_stage(struct kvm_vm *vm,
> > +			void *test_info,
> > +			uint64_t stage)
>
> Align parameters, both in prototypes and in invocations. And don't wrap
> unnecessarily.
>
>	static bool pmpat_handle_vm_stage(struct kvm_vm *vm, void *test_info,
>					  uint64_t stage)
>
> Or even let that poke out (probably not in this case, but do keep in mind that
> the 80 char "limit" is a soft limit that can be broken if doing so yields more
> readable code).
>
>	static bool pmpat_handle_vm_stage(struct kvm_vm *vm, void *test_info, uint64_t stage)
>
> > +{
> > +	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
> > +
> > +	switch (stage) {
> > +	case PMPAT_GUEST_STARTED: {
> > +		/* Initialize the contents of shared memory */
> > +		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
> > +			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
> > +			"Shared memory update failure");
>
> Align indentation (here and many other places).
>
> > +		VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED);
> > +		break;
> > +	}
> > +	case PMPAT_GUEST_PRIV_MEM_UPDATED: {
> > +		/* verify host updated data is still intact */
> > +		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
> > +			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
> > +			"Shared memory view mismatch");
> > +		VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED);
> > +		break;
> > +	}
> > +	default:
> > +		printf("Unhandled VM stage %ld\n", stage);
> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +static void pmpat_guest_code(void)
> > +{
> > +	void *priv_mem = (void *)TEST_MEM_GPA;
> > +	int ret;
> > +
> > +	GUEST_SYNC(PMPAT_GUEST_STARTED);
> > +
> > +	/* Mark the GPA range to be treated as always accessed privately */
> > +	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
> > +		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
> > +		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
> > +	GUEST_ASSERT_1(ret == 0, ret);
>
> "!ret" instead of "ret == 0"
>
> > +
> > +	GUEST_ASSERT(do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2,
> > +		TEST_MEM_SIZE));
> > +	GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED);
> > +
> > +	GUEST_ASSERT(do_mem_op(VERIFY_PAT, priv_mem,
> > +		TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
> > +
> > +	GUEST_DONE();
> > +}
> > +
> > +static struct test_run_helper priv_memfd_testsuite[] = {
> > +	[PMPAT_ID] = {
> > +		.test_desc = PMPAT_DESC,
> > +		.vmst_handler = pmpat_handle_vm_stage,
> > +		.guest_fn = pmpat_guest_code,
> > +	},
> > +};
>
> ...
>
> > +/* Do private access to the guest's private memory */
> > +static void setup_and_execute_test(uint32_t test_id)
>
> This helper appears to be the bulk of the shared code between tests. This can
> and should be a helper to create a VM with private memory. Not sure what to
> call such a helper, maybe vm_create_with_private_memory()? A little verbose,
> but literal isn't always bad.
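As a rough, untested sketch (the name follows your suggestion; the exact
signature and out-parameters are illustrative only, built from the calls
already used below):

/*
 * Sketch only: consolidate creation of a VM whose test memslot is backed
 * by an inaccessible memfd, with an anonymous mapping as the shared view.
 */
static struct kvm_vm *vm_create_with_private_memory(uint32_t vcpu_id,
						    void *guest_code,
						    void **shared_mem,
						    int *priv_memfd)
{
	struct kvm_vm *vm = vm_create_default(vcpu_id, 0, guest_code);

	/* Anonymous mapping provides the shared view of the test memory. */
	*shared_mem = mmap(NULL, TEST_MEM_SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
	TEST_ASSERT(*shared_mem != MAP_FAILED, "Failed to mmap() host");

	/* An inaccessible memfd backs the private view. */
	*priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE);
	TEST_ASSERT(*priv_memfd != -1, "Failed to create priv_memfd");
	TEST_ASSERT(fallocate(*priv_memfd, 0, 0, TEST_MEM_SIZE) != -1,
		    "fallocate failed");

	priv_memory_region_add(vm, *shared_mem, TEST_MEM_SLOT, TEST_MEM_SIZE,
			       TEST_MEM_GPA, *priv_memfd, 0);
	virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA,
		 TEST_MEM_SIZE / vm_get_page_size(vm));

	return vm;
}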
> > > +{ > > + struct kvm_vm *vm; > > + int priv_memfd; > > + int ret; > > + void *shared_mem; > > + struct kvm_enable_cap cap; > > + > > + vm = vm_create_default(VCPU_ID, 0, > > + priv_memfd_testsuite[test_id].guest_fn); > > + > > + /* Allocate shared memory */ > > + shared_mem = mmap(NULL, TEST_MEM_SIZE, > > + PROT_READ | PROT_WRITE, > > + MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0); > > + TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host"); > > + > > + /* Allocate private memory */ > > + priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE); > > + TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd"); > > + ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE); > > + TEST_ASSERT(ret != -1, "fallocate failed"); > > + > > + priv_memory_region_add(vm, shared_mem, > > + TEST_MEM_SLOT, TEST_MEM_SIZE, > > + TEST_MEM_GPA, priv_memfd, 0); > > + > > + pr_info("Mapping test memory pages 0x%x page_size 0x%x\n", > > + TEST_MEM_SIZE/vm_get_page_size(vm), > > + vm_get_page_size(vm)); > > + virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA, > > + (TEST_MEM_SIZE/vm_get_page_size(vm))); > > + > > + /* Enable exit on KVM_HC_MAP_GPA_RANGE */ > > + pr_info("Enabling exit on map_gpa_range hypercall\n"); > > + ret = ioctl(vm_get_fd(vm), KVM_CHECK_EXTENSION, KVM_CAP_EXIT_HYPERCALL); > > + TEST_ASSERT(ret & (1 << KVM_HC_MAP_GPA_RANGE), > > + "VM exit on MAP_GPA_RANGE HC not supported"); > > Impressively bizarre indentation :-) > Thanks Sean for all the feedback here. I will address the comments in the next series. Regards, Vishal