From: "Kirill A. Shutemov"
To: Borislav Petkov, Andy Lutomirski, Dave Hansen, Sean Christopherson,
	Andrew Morton, Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
	Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
	Paolo Bonzini, Ingo Molnar, Dario Faggioli, Mike Rapoport,
	David Hildenbrand, Mel Gorman, marcelo.cerri@canonical.com,
	tim.gardner@canonical.com, khalid.elmously@canonical.com,
	philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com,
	x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov", Dave Hansen
Shutemov" , Dave Hansen Subject: [PATCHv14 6/9] efi/unaccepted: Avoid load_unaligned_zeropad() stepping into unaccepted memory Date: Tue, 6 Jun 2023 17:26:34 +0300 Message-Id: <20230606142637.5171-7-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20230606142637.5171-1-kirill.shutemov@linux.intel.com> References: <20230606142637.5171-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-4.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org load_unaligned_zeropad() can lead to unwanted loads across page boundaries. The unwanted loads are typically harmless. But, they might be made to totally unrelated or even unmapped memory. load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now #VE) to recover from these unwanted loads. But, this approach does not work for unaccepted memory. For TDX, a load from unaccepted memory will not lead to a recoverable exception within the guest. The guest will exit to the VMM where the only recourse is to terminate the guest. There are two parts to fix this issue and comprehensively avoid access to unaccepted memory. Together these ensure that an extra "guard" page is accepted in addition to the memory that needs to be used. 1. Implicitly extend the range_contains_unaccepted_memory(start, end) checks up to end+unit_size if 'end' is aligned on a unit_size boundary. 2. Implicitly extend accept_memory(start, end) to end+unit_size if 'end' is aligned on a unit_size boundary. Side note: This leads to something strange. Pages which were accepted at boot, marked by the firmware as accepted and will never _need_ to be accepted might be on unaccepted_pages list This is a cue to ensure that the next page is accepted before 'page' can be used. This is an actual, real-world problem which was discovered during TDX testing. Signed-off-by: Kirill A. Shutemov Reviewed-by: Dave Hansen Reviewed-by: Ard Biesheuvel Reviewed-by: Tom Lendacky --- drivers/firmware/efi/unaccepted_memory.c | 35 ++++++++++++++++++++++++ 1 file changed, 35 insertions(+) diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c index 08a9a843550a..853f7dc3c21d 100644 --- a/drivers/firmware/efi/unaccepted_memory.c +++ b/drivers/firmware/efi/unaccepted_memory.c @@ -46,6 +46,34 @@ void accept_memory(phys_addr_t start, phys_addr_t end) start -= unaccepted->phys_base; end -= unaccepted->phys_base; + /* + * load_unaligned_zeropad() can lead to unwanted loads across page + * boundaries. The unwanted loads are typically harmless. But, they + * might be made to totally unrelated or even unmapped memory. + * load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now + * #VE) to recover from these unwanted loads. + * + * But, this approach does not work for unaccepted memory. For TDX, a + * load from unaccepted memory will not lead to a recoverable exception + * within the guest. The guest will exit to the VMM where the only + * recourse is to terminate the guest. + * + * There are two parts to fix this issue and comprehensively avoid + * access to unaccepted memory. 
+	 * "guard" page is accepted in addition to the memory that needs to be
+	 * used:
+	 *
+	 * 1. Implicitly extend the range_contains_unaccepted_memory(start, end)
+	 *    checks up to end+unit_size if 'end' is aligned on a unit_size
+	 *    boundary.
+	 *
+	 * 2. Implicitly extend accept_memory(start, end) to end+unit_size if
+	 *    'end' is aligned on a unit_size boundary. (immediately following
+	 *    this comment)
+	 */
+	if (!(end % unit_size))
+		end += unit_size;
+
 	/* Make sure not to overrun the bitmap */
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
@@ -93,6 +121,13 @@ bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end)
 	start -= unaccepted->phys_base;
 	end -= unaccepted->phys_base;
 
+	/*
+	 * Also consider the unaccepted state of the *next* page. See fix #1 in
+	 * the comment on load_unaligned_zeropad() in accept_memory().
+	 */
+	if (!(end % unit_size))
+		end += unit_size;
+
 	/* Make sure not to overrun the bitmap */
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
-- 
2.39.3
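
[Editor's illustration, not part of the patch: a minimal user-space sketch
of the guard-unit rule described above. UNIT_SIZE, NR_UNITS,
unaccepted_bitmap and range_contains_unaccepted() here are simplified,
hypothetical stand-ins for the driver's unit_size, bitmap and
range_contains_unaccepted_memory(); the only behaviour carried over is the
"if (!(end % unit_size)) end += unit_size;" extension that pulls the next
unit into the check.]

#include <stdbool.h>
#include <stdio.h>

#define UNIT_SIZE	4096UL	/* stand-in for unaccepted->unit_size */
#define NR_UNITS	8

/* Bit i set means "unit i is still unaccepted"; units 0-1 were accepted at boot. */
static bool unaccepted_bitmap[NR_UNITS] = {
	false, false, true, true, true, true, true, true,
};

static bool range_contains_unaccepted(unsigned long start, unsigned long end)
{
	/*
	 * Guard rule from the patch: if 'end' falls exactly on a unit
	 * boundary, also examine the next unit so that a
	 * load_unaligned_zeropad() running past 'end' cannot touch
	 * unaccepted memory.
	 */
	if (!(end % UNIT_SIZE))
		end += UNIT_SIZE;

	for (unsigned long i = start / UNIT_SIZE;
	     i < NR_UNITS && i * UNIT_SIZE < end; i++) {
		if (unaccepted_bitmap[i])
			return true;
	}
	return false;
}

int main(void)
{
	/*
	 * [0, 2 * UNIT_SIZE) was accepted at boot, but because its end is
	 * unit-aligned the check implicitly covers unit 2 as well, which is
	 * still unaccepted, so the caller must accept the guard unit before
	 * the range can be used safely.
	 */
	printf("guard check for [0, 2 units): %s\n",
	       range_contains_unaccepted(0, 2 * UNIT_SIZE) ?
	       "unaccepted" : "accepted");
	return 0;
}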