Date: Wed, 29 Sep 2021 20:19:29 +0200
From: Borislav Petkov
To: Brijesh Singh
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-coco@lists.linux.dev, linux-mm@kvack.org, linux-crypto@vger.kernel.org,
	Thomas Gleixner, Ingo Molnar, Joerg Roedel, Tom Lendacky,
	"H. Peter Anvin", Ard Biesheuvel, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Andy Lutomirski, Dave Hansen,
	Sergio Lopez, Peter Gonda, Peter Zijlstra, Srinivas Pandruvada,
	David Rientjes, Dov Murik, Tobin Feldman-Fitzthum, Michael Roth,
	Vlastimil Babka, "Kirill A. Shutemov", Andi Kleen, tony.luck@intel.com,
	marcorr@google.com, sathyanarayanan.kuppuswamy@linux.intel.com
Subject: Re: [PATCH Part2 v5 08/45] x86/fault: Add support to handle the RMP fault for user address
References: <20210820155918.7518-1-brijesh.singh@amd.com> <20210820155918.7518-9-brijesh.singh@amd.com>
In-Reply-To: <20210820155918.7518-9-brijesh.singh@amd.com>
X-Mailing-List: linux-crypto@vger.kernel.org

On Fri, Aug 20, 2021 at 10:58:41AM -0500, Brijesh Singh wrote:
> +static int handle_user_rmp_page_fault(struct pt_regs *regs, unsigned long error_code,
> +				      unsigned long address)
> +{

#ifdef CONFIG_AMD_MEM_ENCRYPT

> +	int rmp_level, level;
> +	pte_t *pte;
> +	u64 pfn;
> +
> +	pte = lookup_address_in_mm(current->mm, address, &level);
> +
> +	/*
> +	 * This can happen if there was a race between an unmap event and
> +	 * the RMP fault delivery.
> +	 */
> +	if (!pte || !pte_present(*pte))
> +		return 1;
> +
> +	pfn = pte_pfn(*pte);
> +
> +	/* If it's a large page, then calculate the fault pfn */
> +	if (level > PG_LEVEL_4K) {
> +		unsigned long mask;
> +
> +		mask = pages_per_hpage(level) - pages_per_hpage(level - 1);

Just use two helper variables named properly instead of this oneliner:

	pages_level	 = page_level_size(level) / PAGE_SIZE;
	pages_prev_level = page_level_size(level - 1) / PAGE_SIZE;

> +		pfn |= (address >> PAGE_SHIFT) & mask;
> +	}
> +
> +	/*
> +	 * If it's a guest private page, then the fault cannot be resolved.
> +	 * Send a SIGBUS to terminate the process.
> +	 */
> +	if (snp_lookup_rmpentry(pfn, &rmp_level)) {
> +		do_sigbus(regs, error_code, address, VM_FAULT_SIGBUS);
> +		return 1;
> +	}
> +
> +	/*
> +	 * The backing page level is higher than the RMP page level; request
> +	 * to split the page.
> +	 */
> +	if (level > rmp_level)
> +		return 0;
> +
> +	return 1;

#else
	WARN_ON_ONCE(1);
	return -1;
#endif

and also handle that -1 negative value at the call site.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
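
For readers following the review, here is a minimal sketch of how the suggested helper variables could slot into the quoted hunk. It is illustrative only, not the actual patch: page_level_size(), PG_LEVEL_4K, PAGE_SIZE and PAGE_SHIFT are the existing x86 helpers, and everything else simply mirrors the quoted code.

	/*
	 * Illustrative sketch only, not the actual patch: the large-page
	 * pfn fixup from the quoted hunk, rewritten with the two helper
	 * variables suggested in the review.
	 */
	if (level > PG_LEVEL_4K) {
		unsigned long pages_level, pages_prev_level, mask;

		pages_level	 = page_level_size(level) / PAGE_SIZE;
		pages_prev_level = page_level_size(level - 1) / PAGE_SIZE;

		/*
		 * Index mask of the faulting 4K page within the huge
		 * mapping (e.g. 511 for a 2M mapping).
		 */
		mask = pages_level - pages_prev_level;
		pfn |= (address >> PAGE_SHIFT) & mask;
	}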
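
Similarly, a hedged sketch of what handling the suggested -1 return at the call site could look like. The surrounding context (an RMP-specific branch in the #PF handler, the X86_PF_RMP error-code bit and the FAULT_FLAG_PAGE_SPLIT flag) is assumed from the rest of the series and is not confirmed by this mail; treat those names as placeholders.

	/*
	 * Hypothetical call-site sketch, not the actual patch. X86_PF_RMP
	 * and FAULT_FLAG_PAGE_SPLIT are assumptions from the series context.
	 */
	if (error_code & X86_PF_RMP) {
		int ret = handle_user_rmp_page_fault(regs, error_code, address);

		if (ret < 0) {
			/* RMP #PF without CONFIG_AMD_MEM_ENCRYPT: should not happen. */
			do_sigbus(regs, error_code, address, VM_FAULT_SIGBUS);
			return;
		}

		if (ret)
			return;	/* handled, or the process was killed */

		/* ret == 0: backing page larger than the RMP entry, ask to split it. */
		flags |= FAULT_FLAG_PAGE_SPLIT;
	}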