Date: Mon, 21 Aug 2023 15:58:59 -0700
Mime-Version: 1.0
X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog
Message-ID: <20230821225859.883120-1-srutherford@google.com>
Subject: [PATCH v2] x86/sev: Make enc_dec_hypercall() accept a size instead of npages
From: Steve Rutherford
To: Borislav Petkov, Thomas Gleixner, thomas.lendacky@amd.com, pankaj.gupta@amd.com
Cc: Paolo Bonzini, Wanpeng Li, Vitaly Kuznetsov, Ingo Molnar, Dave Hansen,
        x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org,
        linux-kernel@vger.kernel.org, David.Kaplan@amd.com, jacobhxu@google.com,
        patelsvishal@google.com, bhillier@google.com, Steve Rutherford
Peter Anvin" , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David.Kaplan@amd.com, jacobhxu@google.com, patelsvishal@google.com, bhillier@google.com, Steve Rutherford Content-Type: text/plain; charset="UTF-8" X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org enc_dec_hypercall() accepted a page count instead of a size, which forced its callers to round up. As a result, non-page aligned vaddrs caused pages to be spuriously marked as decrypted via the encryption status hypercall, which in turn caused consistent corruption of pages during live migration. Live migration requires accurate encryption status information to avoid migrating pages from the wrong perspective. Fixes: 064ce6c550a0 ("mm: x86: Invoke hypercall when page encryption status is changed") Signed-off-by: Steve Rutherford --- arch/x86/include/asm/mem_encrypt.h | 6 +++--- arch/x86/kernel/kvm.c | 4 +--- arch/x86/mm/mem_encrypt_amd.c | 13 ++++++------- 3 files changed, 10 insertions(+), 13 deletions(-) diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 7f97a8a97e24..473b16d73b47 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -50,8 +50,8 @@ void __init sme_enable(struct boot_params *bp); int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size); int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size); -void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, - bool enc); +void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, + unsigned long size, bool enc); void __init mem_encrypt_free_decrypted_mem(void); @@ -85,7 +85,7 @@ early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; static inline int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; } static inline void __init -early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {} +early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc) {} static inline void mem_encrypt_free_decrypted_mem(void) { } diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index 6a36db4f79fd..b8ab9ee5896c 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -966,10 +966,8 @@ static void __init kvm_init_platform(void) * Ensure that _bss_decrypted section is marked as decrypted in the * shared pages list. */ - nr_pages = DIV_ROUND_UP(__end_bss_decrypted - __start_bss_decrypted, - PAGE_SIZE); early_set_mem_enc_dec_hypercall((unsigned long)__start_bss_decrypted, - nr_pages, 0); + __end_bss_decrypted - __start_bss_decrypted, 0); /* * If not booted using EFI, enable Live migration support. 
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index 54bbd5163e8d..6faea41e99b6 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -288,11 +288,10 @@ static bool amd_enc_cache_flush_required(void)
        return !cpu_feature_enabled(X86_FEATURE_SME_COHERENT);
 }
 
-static void enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
+static void enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc)
 {
 #ifdef CONFIG_PARAVIRT
-       unsigned long sz = npages << PAGE_SHIFT;
-       unsigned long vaddr_end = vaddr + sz;
+       unsigned long vaddr_end = vaddr + size;
 
        while (vaddr < vaddr_end) {
                int psize, pmask, level;
@@ -342,7 +341,7 @@ static bool amd_enc_status_change_finish(unsigned long vaddr, int npages, bool e
                snp_set_memory_private(vaddr, npages);
 
        if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
-               enc_dec_hypercall(vaddr, npages, enc);
+               enc_dec_hypercall(vaddr, npages << PAGE_SHIFT, enc);
 
        return true;
 }
@@ -466,7 +465,7 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
 
        ret = 0;
 
-       early_set_mem_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT, enc);
+       early_set_mem_enc_dec_hypercall(start, size, enc);
 out:
        __flush_tlb_all();
        return ret;
@@ -482,9 +481,9 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
        return early_set_memory_enc_dec(vaddr, size, true);
 }
 
-void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
+void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc)
 {
-       enc_dec_hypercall(vaddr, npages, enc);
+       enc_dec_hypercall(vaddr, size, enc);
 }
 
 void __init sme_early_init(void)
-- 
2.42.0.rc1.204.g551eb34607-goog
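
For reference, a minimal user-space sketch of the rounding problem the
commit message describes (not kernel code; the helper names below are
illustrative only): when vaddr is not page aligned, walking
vaddr + npages * PAGE_SIZE can reach a page that the byte range
[vaddr, vaddr + size) never touches, and that extra page is the one
that ends up spuriously reported to the hypervisor.

/*
 * Illustrative user-space sketch only -- not kernel code. It compares the
 * end of a page-status walk computed from a rounded-up page count (the old
 * interface) with the end computed directly from a byte size (this patch).
 */
#include <stdio.h>

#define PAGE_SHIFT 12UL
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Old scheme: the caller rounds the byte count up to whole pages first. */
static unsigned long walk_end_from_npages(unsigned long vaddr, unsigned long size)
{
        unsigned long npages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;

        return vaddr + (npages << PAGE_SHIFT);
}

/* New scheme: walk exactly the byte range [vaddr, vaddr + size). */
static unsigned long walk_end_from_size(unsigned long vaddr, unsigned long size)
{
        return vaddr + size;
}

int main(void)
{
        /* Non-page-aligned range [0x1c00, 0x2000): lives entirely in page 0x1000. */
        unsigned long vaddr = 0x1c00, size = 0x400;
        unsigned long old_end = walk_end_from_npages(vaddr, size);  /* 0x2c00 */
        unsigned long new_end = walk_end_from_size(vaddr, size);    /* 0x2000 */

        /* Last page a page-by-page walk would mark before reaching the end. */
        printf("npages-based walk marks up to page %#lx\n", (old_end - 1) & PAGE_MASK);
        printf("size-based walk marks up to page   %#lx\n", (new_end - 1) & PAGE_MASK);
        return 0;
}

For this range the npages-based walk reaches page 0x2000 even though the
bytes being converted all sit in page 0x1000, which is why the interface
now takes a size and lets enc_dec_hypercall() stop at vaddr + size.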