From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Ingo Molnar, x86@kernel.org, Thomas Gleixner, "H. Peter Anvin", Tom Lendacky
Cc: Dave Hansen, Kai Huang, Jacob Pan, linux-kernel@vger.kernel.org, linux-mm@kvack.org, "Kirill A. Shutemov"
Shutemov" Subject: [PATCHv3 10/17] x86/mm: Implement prep_encrypted_page() and arch_free_page() Date: Tue, 12 Jun 2018 17:39:08 +0300 Message-Id: <20180612143915.68065-11-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180612143915.68065-1-kirill.shutemov@linux.intel.com> References: <20180612143915.68065-1-kirill.shutemov@linux.intel.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The hardware/CPU does not enforce coherency between mappings of the same physical page with different KeyIDs or encryption keys. We are responsible for cache management. Flush cache on allocating encrypted page and on returning the page to the free pool. prep_encrypted_page() also takes care about zeroing the page. We have to do this after KeyID is set for the page. Signed-off-by: Kirill A. Shutemov --- arch/x86/include/asm/mktme.h | 6 ++++++ arch/x86/mm/mktme.c | 39 ++++++++++++++++++++++++++++++++++++ 2 files changed, 45 insertions(+) diff --git a/arch/x86/include/asm/mktme.h b/arch/x86/include/asm/mktme.h index 0fe0db424e48..ec7036abdb3f 100644 --- a/arch/x86/include/asm/mktme.h +++ b/arch/x86/include/asm/mktme.h @@ -11,6 +11,12 @@ extern phys_addr_t mktme_keyid_mask; extern int mktme_nr_keyids; extern int mktme_keyid_shift; +#define prep_encrypted_page prep_encrypted_page +void prep_encrypted_page(struct page *page, int order, int keyid, bool zero); + +#define HAVE_ARCH_FREE_PAGE +void arch_free_page(struct page *page, int order); + #define vma_is_encrypted vma_is_encrypted bool vma_is_encrypted(struct vm_area_struct *vma); diff --git a/arch/x86/mm/mktme.c b/arch/x86/mm/mktme.c index b02d5b9d4339..1821b87abb2f 100644 --- a/arch/x86/mm/mktme.c +++ b/arch/x86/mm/mktme.c @@ -1,4 +1,5 @@ #include +#include #include phys_addr_t mktme_keyid_mask; @@ -30,6 +31,44 @@ int vma_keyid(struct vm_area_struct *vma) return (prot & mktme_keyid_mask) >> mktme_keyid_shift; } +void prep_encrypted_page(struct page *page, int order, int keyid, bool zero) +{ + int i; + + /* + * The hardware/CPU does not enforce coherency between mappings of the + * same physical page with different KeyIDs or encrypt ion keys. + * We are responsible for cache management. + * + * We flush cache before allocating encrypted page + */ + clflush_cache_range(page_address(page), PAGE_SIZE << order); + + for (i = 0; i < (1 << order); i++) { + WARN_ON_ONCE(lookup_page_ext(page)->keyid); + lookup_page_ext(page)->keyid = keyid; + + /* Clear the page after the KeyID is set. */ + if (zero) + clear_highpage(page); + } +} + +void arch_free_page(struct page *page, int order) +{ + int i; + + if (!page_keyid(page)) + return; + + for (i = 0; i < (1 << order); i++) { + WARN_ON_ONCE(lookup_page_ext(page)->keyid > mktme_nr_keyids); + lookup_page_ext(page)->keyid = 0; + } + + clflush_cache_range(page_address(page), PAGE_SIZE << order); +} + static bool need_page_mktme(void) { /* Make sure keyid doesn't collide with extended page flags */ -- 2.17.1