From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
	dm-devel@redhat.com
Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com,
	luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de,
	bp@alien8.de, mingo@kernel.org, x86@kernel.org,
	herbert@gondor.apana.org.au, ardb@kernel.org,
	dan.j.williams@intel.com, bernie.keany@intel.com,
	charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com,
	nhuck@google.com, chang.seok.bae@intel.com, Ingo Molnar,
	"H. Peter Anvin", "Rafael J. Wysocki", Peter Zijlstra
Wysocki" , Peter Zijlstra Subject: [PATCH v7 04/12] x86/asm: Add a wrapper function for the LOADIWKEY instruction Date: Wed, 24 May 2023 09:57:09 -0700 Message-Id: <20230524165717.14062-5-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Key Locker introduces a CPU-internal wrapping key to encode a user key to a key handle. Then a key handle is referenced instead of the plain text key. LOADIWKEY loads a wrapping key in the software-inaccessible CPU state. It operates only in kernel mode. The kernel will use this to load a new key at boot time. Establish a wrapper to prepare for this use. Also, define struct iwkey to pass the key value to it. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: "Rafael J. Wysocki" Cc: Peter Zijlstra Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Massage the changelog -- clarify the reason and the changes a bit. Changes from v5: * Fix a typo: kernel_cpu_begin() -> kernel_fpu_begin() Changes from RFC v2: * Separate out the code as a new patch. * Improve the usability with the new struct as an argument. (Dan Williams) Note, Dan wondered if: WARN_ON(!irq_fpu_usable()); would be appropriate in the load_xmm_iwkey() function. --- arch/x86/include/asm/keylocker.h | 25 ++++++++++++++++++++++ arch/x86/include/asm/special_insns.h | 32 ++++++++++++++++++++++++++++ 2 files changed, 57 insertions(+) create mode 100644 arch/x86/include/asm/keylocker.h diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h new file mode 100644 index 000000000000..9b3bec452b31 --- /dev/null +++ b/arch/x86/include/asm/keylocker.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef _ASM_KEYLOCKER_H +#define _ASM_KEYLOCKER_H + +#ifndef __ASSEMBLY__ + +#include + +/** + * struct iwkey - A temporary wrapping key storage. + * @integrity_key: A 128-bit key to check that key handles have not + * been tampered with. + * @encryption_key: A 256-bit encryption key used in + * wrapping/unwrapping a clear text key. + * + * This storage should be flushed immediately after loaded. + */ +struct iwkey { + struct reg_128_bit integrity_key; + struct reg_128_bit encryption_key[2]; +}; + +#endif /*__ASSEMBLY__ */ +#endif /* _ASM_KEYLOCKER_H */ diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h index de48d1389936..dd2d8b40fce3 100644 --- a/arch/x86/include/asm/special_insns.h +++ b/arch/x86/include/asm/special_insns.h @@ -9,6 +9,7 @@ #include #include #include +#include /* * The compiler should not reorder volatile asm statements with respect to each @@ -283,6 +284,37 @@ static __always_inline void tile_release(void) asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0"); } +/** + * load_xmm_iwkey - Load a CPU-internal wrapping key + * @key: A struct iwkey pointer. + * + * Load @key to XMMs then do LOADIWKEY. After this, flush XMM + * registers. 
+ * XMM registers. The caller is responsible for kernel_fpu_begin().
+ */
+static inline void load_xmm_iwkey(struct iwkey *key)
+{
+	struct reg_128_bit zeros = { 0 };
+
+	asm volatile ("movdqu %0, %%xmm0; movdqu %1, %%xmm1; movdqu %2, %%xmm2;"
+		      :: "m"(key->integrity_key), "m"(key->encryption_key[0]),
+			 "m"(key->encryption_key[1]));
+
+	/*
+	 * LOADIWKEY %xmm1,%xmm2
+	 *
+	 * EAX and XMM0 are implicit operands. Load the key value from
+	 * XMM0-2 into a software-invisible CPU state. With zero in EAX,
+	 * the CPU does not apply hardware randomization and backup of
+	 * the key is allowed.
+	 *
+	 * This instruction is supported by binutils >= 2.36.
+	 */
+	asm volatile (".byte 0xf3,0x0f,0x38,0xdc,0xd1" :: "a"(0));
+
+	asm volatile ("movdqu %0, %%xmm0; movdqu %0, %%xmm1; movdqu %0, %%xmm2;"
+		      :: "m"(zeros));
+}
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_SPECIAL_INSNS_H */
-- 
2.17.1
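For illustration, here is a minimal usage sketch of the new wrapper. It is
not part of this patch: the caller name is hypothetical and
get_random_bytes() is only an example key source; struct iwkey,
load_xmm_iwkey(), kernel_fpu_begin()/kernel_fpu_end(), and
memzero_explicit() are the existing interfaces being exercised.

	#include <linux/random.h>
	#include <linux/string.h>
	#include <asm/fpu/api.h>
	#include <asm/special_insns.h>

	/* Hypothetical caller, for illustration only. */
	static void load_example_wrapping_key(void)
	{
		struct iwkey wrapping_key;

		/* Fill the temporary key storage (example source only). */
		get_random_bytes(&wrapping_key, sizeof(wrapping_key));

		kernel_fpu_begin();             /* make the XMM registers usable */
		load_xmm_iwkey(&wrapping_key);  /* loads, then clears XMM0-2 */
		kernel_fpu_end();

		/* Flush the storage immediately after it is loaded. */
		memzero_explicit(&wrapping_key, sizeof(wrapping_key));
	}

As the kernel-doc above notes, the caller brackets the call with
kernel_fpu_begin()/kernel_fpu_end() and flushes the temporary struct iwkey
right after the load.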