Date: Tue, 25 Jul 2023 02:19:27 +0300
From: "Kirill A. Shutemov"
To: "Michael Kelley (LINUX)"
Shutemov" , "dave.hansen@intel.com" , "tglx@linutronix.de" , "mingo@redhat.com" , "bp@alien8.de" , Dexuan Cui , "rick.p.edgecombe@intel.com" , "sathyanarayanan.kuppuswamy@linux.intel.com" , "seanjc@google.com" , "thomas.lendacky@amd.com" , "x86@kernel.org" , "linux-kernel@vger.kernel.org" Subject: Re: [PATCHv3 0/3] x86/tdx: Fix one more load_unaligned_zeropad() issue Message-ID: <20230724231927.pah3dt6gszwtsu45@box.shutemov.name> References: <20230606095622.1939-1-kirill.shutemov@linux.intel.com> <20230707140633.jzuucz52d7jdc763@box.shutemov.name> <20230709060904.w3czdz23453eyx2h@box.shutemov.name> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Spam-Status: No, score=-4.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Jul 13, 2023 at 02:43:39PM +0000, Michael Kelley (LINUX) wrote: > From: Kirill A. Shutemov Sent: Saturday, July 8, 2023 11:09 PM > > > > On Sat, Jul 08, 2023 at 11:53:08PM +0000, Michael Kelley (LINUX) wrote: > > > From: Kirill A. Shutemov Sent: Friday, July 7, 2023 7:07 AM > > > > > > > > On Thu, Jul 06, 2023 at 04:48:32PM +0000, Michael Kelley (LINUX) wrote: > > > > > From: Kirill A. Shutemov Sent: Tuesday, June 6, > > 2023 2:56 AM > > > > > > [snip] > > > > > > > > > > > It only addresses the problem that happens on transition, but > > > > load_unaligned_zeropad() is still a problem for the shared mappings in > > > > general, after transition is complete. Like if load_unaligned_zeropad() > > > > steps from private to shared mapping and shared mapping triggers #VE, > > > > kernel should be able to handle it. > > > > > > I'm showing my ignorance of TDX architectural details, but what's the > > > situation where shared mappings in general can trigger a #VE? How > > > do such situations get handled for references that aren't from > > > load_unaligned_zeropad()? > > > > > > > Shared mappings are under host/VMM control. It can just not map the page > > in shared-ept and trigger ept-violation #VE. > > I know you are out on vacation, but let me follow up now for further > discussion when you are back. > > Isn't the scenario you are describing a malfunctioning or malicious > host/VMM? Would what you are describing be done as part of normal > operation? Kernel code must have switched the page from private to > shared for some purpose. As soon as that code (which presumably > does not have any entry in the exception table) touches the page, it > would take the #VE and the enter the die path because there's no fixup. > So is there value in having load_unaligned_zeropad() handle the #VE and > succeed where a normal reference would fail? #VE on shared memory is legitimately used for MMIO. But MMIO region is usually separate from the real memory in physical address space. But we also have DMA. DMA pages allocated from common pool of memory and they can be next to dentry cache that kernel accesses with load_unaligned_zeropad(). DMA pages are shared, but they usually backed by memory and not cause #VE. However shared memory is under full control from VMM and VMM can remove page at any point which would make platform to deliver #VE to the guest. This is pathological scenario, but I think it still worth handling gracefully. 
> I'd still like to see the private <-> shared transition code mark the pages
> as invalid during the transition, and avoid the possibility of #VE and
> similar cases with SEV-SNP. Such an approach reduces (eliminates?)
> entanglement between CoCo-specific exceptions and
> load_unaligned_zeropad(). It also greatly simplifies TD Partition cases
> and SEV-SNP cases where a paravisor is used.

It doesn't eliminate the issue for TDX, as the scenario above is not
transient. It can happen after memory is converted to shared.

--
Kiryl Shutsemau / Kirill A. Shutemov
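[Editor's note: a rough sketch, under stated assumptions, of the
transition-marking idea Michael describes above: keep the range
non-present for the duration of the private <-> shared conversion, so any
stray access (including one from load_unaligned_zeropad()) takes an
ordinary #PF instead of a CoCo-specific #VE/#VC. set_memory_np() is an
existing x86 helper; set_memory_p() and notify_hv_of_conversion() below
are illustrative stand-ins, not the kernel's actual conversion path.]

/*
 * Sketch only -- not the real __set_memory_enc_pgtable() path.
 * set_memory_np() exists in arch/x86; set_memory_p() and
 * notify_hv_of_conversion() are stand-ins for illustration.
 */
static int convert_range(unsigned long vaddr, int numpages, bool to_shared)
{
	int ret;

	/* 1. Unmap: from here on, any stray touch is a plain #PF. */
	ret = set_memory_np(vaddr, numpages);
	if (ret)
		return ret;

	/* 2. Flip shared/private in the page tables and tell the hypervisor
	 *    (TDX: MapGPA; SEV-SNP: page state change + PVALIDATE). */
	ret = notify_hv_of_conversion(vaddr, numpages, to_shared);
	if (ret)
		return ret;

	/* 3. Map the range again only once the conversion is complete. */
	return set_memory_p(vaddr, numpages);
}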