Received: by 2002:a05:6358:3188:b0:123:57c1:9b43 with SMTP id q8csp1078852rwd; Tue, 16 May 2023 11:26:29 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ57phDlZ88ivL80HkaKBH94AGFzfrX79dVoZ7/dRv7Ox+s11yObS6JCzxAGicB1QEvQsPFc X-Received: by 2002:a05:6a20:8f10:b0:102:345f:593b with SMTP id b16-20020a056a208f1000b00102345f593bmr31239412pzk.4.1684261588978; Tue, 16 May 2023 11:26:28 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684261588; cv=none; d=google.com; s=arc-20160816; b=C8oCwhzYBMDuapzTOsQoHtM6EF0ZeGaLwYvKz9mwMdSPkSeQvuau1mITCKncVmmiTQ RIH8tvcAeBUqjtN8MiRiEL658tOCKBrhUX88Jrf7hVdQ+G2SfU2FcbtwKYzoSk2STFAp 6lutV39MaDyMezECXarxfRdruVffJQi/LUtr8GwTKtigznZJiJA6q6Vpb3/esG2cAy7M 9SlsYFkOQjm979Qu1Cvq5HDHvFHntJLjp6g4paOkG2mETyPmAvK2vrzEpSg8C13AJRwe NEJbhYMQhKBzxAXPiKX3i6Gc3y3SZdjKsCtF21/BPDlRoQ2XGVD2M4VLwanhutTuDgq8 H52A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:subject:message-id:date:from:in-reply-to :references:mime-version:dkim-signature; bh=oaW7YC+MMtdke4Owm5YvonWDjRcP7YuhgQVU3EbcWa4=; b=tOzZVXeRaXFvaH/LqNelY/mZIpErF4ic/vpm6overI560KYBnVsjlsHrxEFKWqrjJg KNGvhqEr6+tYLneVyuqUVT5B18IWYe3gXNY+3xRB4U4SJov8N2k9ruAHzCDTemIqtvhI yC47mF7tCpk27y89G53q4DHiAzEwsPPjIgfe9/xjOC2+JU7/BVp3gbbDy1wOvdtcGEPs Li3/9BAYVf1Dt6qjpG1cGG/8h5VgAkkJS8S7dXTXBPSiE6xgjKke3bNIZP9tXe4TBpgv 5vB9NCrTJpX1YTuSpNsxbsaSYMWgPvCzwZMsNS45oRTO+fccI3BHTa8/TTaeOw1FyWzH jzKg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=eqrkQFx6; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Return-Path: Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id p37-20020a635b25000000b004fbd2a5db20si18200065pgb.538.2023.05.16.11.26.17; Tue, 16 May 2023 11:26:28 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=eqrkQFx6; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232126AbjEPSI4 (ORCPT + 99 others); Tue, 16 May 2023 14:08:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53272 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229464AbjEPSIy (ORCPT ); Tue, 16 May 2023 14:08:54 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4DDDAA7; Tue, 16 May 2023 11:08:53 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id D960C63B92; Tue, 16 May 2023 18:08:52 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4098AC433AC; Tue, 16 May 2023 18:08:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1684260532; bh=EuLTL0z0cItgFiTcg+BNi9m4MmZo8uZTaS650kq9lR4=; h=References:In-Reply-To:From:Date:Subject:To:Cc:From; b=eqrkQFx6rhRJujq4VWj4FzAyNGWYXTRwwDgyA5V3A+MHAv5k+HnqxBOF4OIy8cZkJ wlr/nGLB9U0awyzmtcWNdiQ3locGkyOpSlHrOTZrHPz7cdw7STS6cmzLETt17aRQUZ fkQu/Cq+SFiadTqPuOBZ5gStXcB2xqJYRpSp+bJKGfBGn7le4a7xeNDV1zbX5ztiGx i1ohR7qwbhExdzokVQagdx9ZsrS2RI8U+e8S3buraBXpfYStZQ09HaTMP8x2/bNuzr NG/f2QhWBJ8vXT5AilkZk2rYXxj5Eqo367bSvv+7iOHdMvmmK+W+kd5ieeRNFAJzFQ tQ3jSabpkTbMQ== Received: by mail-lj1-f182.google.com with SMTP id 38308e7fff4ca-2ac770a99e2so154913331fa.3; Tue, 16 May 2023 11:08:52 -0700 (PDT) X-Gm-Message-State: AC+VfDwLdyjqtT8PSkYRcIWAXcU1EZICFlgoMXj7Son0qzc2L/F9DwpZ Z+GdwTcmNpEkK6AVXWE5deDt4V+Lqu4G4Gfe8pc= X-Received: by 2002:a2e:9056:0:b0:2a8:bf35:3b7 with SMTP id n22-20020a2e9056000000b002a8bf3503b7mr7472306ljg.32.1684260530160; Tue, 16 May 2023 11:08:50 -0700 (PDT) MIME-Version: 1.0 References: <20230513220418.19357-1-kirill.shutemov@linux.intel.com> <20230513220418.19357-7-kirill.shutemov@linux.intel.com> In-Reply-To: <20230513220418.19357-7-kirill.shutemov@linux.intel.com> From: Ard Biesheuvel Date: Tue, 16 May 2023 20:08:37 +0200 X-Gmail-Original-Message-ID: Message-ID: Subject: Re: [PATCHv11 6/9] efi/unaccepted: Avoid load_unaligned_zeropad() stepping into unaccepted memory To: "Kirill A. Shutemov" Cc: Borislav Petkov , Andy Lutomirski , Dave Hansen , Sean Christopherson , Andrew Morton , Joerg Roedel , Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Mike Rapoport , David Hildenbrand , Mel Gorman , marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, Dave Hansen Content-Type: text/plain; charset="UTF-8" X-Spam-Status: No, score=-7.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_HI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Sun, 14 May 2023 at 00:04, Kirill A. Shutemov wrote: > > load_unaligned_zeropad() can lead to unwanted loads across page boundaries. > The unwanted loads are typically harmless. But, they might be made to > totally unrelated or even unmapped memory. load_unaligned_zeropad() > relies on exception fixup (#PF, #GP and now #VE) to recover from these > unwanted loads. > > But, this approach does not work for unaccepted memory. For TDX, a load > from unaccepted memory will not lead to a recoverable exception within > the guest. The guest will exit to the VMM where the only recourse is to > terminate the guest. > Does this mean that the kernel maps memory before accepting it? As otherwise, I would assume that such an access would page fault inside the guest before triggering an exception related to the unaccepted state. > There are two parts to fix this issue and comprehensively avoid access > to unaccepted memory. Together these ensure that an extra "guard" page > is accepted in addition to the memory that needs to be used. > > 1. Implicitly extend the range_contains_unaccepted_memory(start, end) > checks up to end+unit_size if 'end' is aligned on a unit_size > boundary. > 2. Implicitly extend accept_memory(start, end) to end+unit_size if 'end' > is aligned on a unit_size boundary. > > Side note: This leads to something strange. Pages which were accepted > at boot, marked by the firmware as accepted and will never > _need_ to be accepted might be on unaccepted_pages list > This is a cue to ensure that the next page is accepted > before 'page' can be used. > > This is an actual, real-world problem which was discovered during TDX > testing. > > Signed-off-by: Kirill A. Shutemov > Reviewed-by: Dave Hansen > --- > drivers/firmware/efi/unaccepted_memory.c | 35 ++++++++++++++++++++++++ > 1 file changed, 35 insertions(+) > > diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c > index bb91c41f76fb..3d1ca60916dd 100644 > --- a/drivers/firmware/efi/unaccepted_memory.c > +++ b/drivers/firmware/efi/unaccepted_memory.c > @@ -37,6 +37,34 @@ void accept_memory(phys_addr_t start, phys_addr_t end) > start -= unaccepted->phys_base; > end -= unaccepted->phys_base; > > + /* > + * load_unaligned_zeropad() can lead to unwanted loads across page > + * boundaries. The unwanted loads are typically harmless. But, they > + * might be made to totally unrelated or even unmapped memory. > + * load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now > + * #VE) to recover from these unwanted loads. > + * > + * But, this approach does not work for unaccepted memory. For TDX, a > + * load from unaccepted memory will not lead to a recoverable exception > + * within the guest. The guest will exit to the VMM where the only > + * recourse is to terminate the guest. > + * > + * There are two parts to fix this issue and comprehensively avoid > + * access to unaccepted memory. Together these ensure that an extra > + * "guard" page is accepted in addition to the memory that needs to be > + * used: > + * > + * 1. Implicitly extend the range_contains_unaccepted_memory(start, end) > + * checks up to end+unit_size if 'end' is aligned on a unit_size > + * boundary. > + * > + * 2. Implicitly extend accept_memory(start, end) to end+unit_size if > + * 'end' is aligned on a unit_size boundary. (immediately following > + * this comment) > + */ > + if (!(end % unit_size)) > + end += unit_size; > + > /* Make sure not to overrun the bitmap */ > if (end > unaccepted->size * unit_size * BITS_PER_BYTE) > end = unaccepted->size * unit_size * BITS_PER_BYTE; > @@ -84,6 +112,13 @@ bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end) > start -= unaccepted->phys_base; > end -= unaccepted->phys_base; > > + /* > + * Also consider the unaccepted state of the *next* page. See fix #1 in > + * the comment on load_unaligned_zeropad() in accept_memory(). > + */ > + if (!(end % unit_size)) > + end += unit_size; > + > /* Make sure not to overrun the bitmap */ > if (end > unaccepted->size * unit_size * BITS_PER_BYTE) > end = unaccepted->size * unit_size * BITS_PER_BYTE; > -- > 2.39.3 >