From: Dave Hansen
To: Tom Lendacky, linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Rik van Riel, Radim Krčmář, Toshimitsu Kani, Arnd Bergmann, Jonathan Corbet, Matt Fleming, "Michael S. Tsirkin", Joerg Roedel, Konrad Rzeszutek Wilk, Paolo Bonzini, Larry Woodman, Brijesh Singh, Ingo Molnar, Borislav Petkov, Andy Lutomirski, "H. Peter Anvin", Andrey Ryabinin, Alexander Potapenko, Dave Young, Thomas Gleixner, Dmitry Vyukov
Subject: Re: [PATCH v5 09/32] x86/mm: Provide general kernel support for memory encryption
Date: Mon, 24 Apr 2017 08:57:17 -0700
Message-ID: <67926f62-a068-6114-92ee-39bc08488b32@intel.com>
In-Reply-To: <9fc79e28-ad64-1c2f-4c46-a4efcdd550b0@amd.com>
References: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net> <20170418211754.10190.25082.stgit@tlendack-t1.amdoffice.net> <0106e3fc-9780-e872-2274-fecf79c28923@intel.com> <9fc79e28-ad64-1c2f-4c46-a4efcdd550b0@amd.com>

On 04/24/2017 08:53 AM, Tom Lendacky wrote:
> On 4/21/2017 4:52 PM, Dave Hansen wrote:
>> On 04/18/2017 02:17 PM, Tom Lendacky wrote:
>>> @@ -55,7 +57,7 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
>>>  	__phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
>>>
>>>  #ifndef __va
>>> -#define __va(x)			((void *)((unsigned long)(x)+PAGE_OFFSET))
>>> +#define __va(x)			((void *)(__sme_clr(x) + PAGE_OFFSET))
>>>  #endif
>>
>> It seems wrong to be modifying __va().  It currently takes a physical
>> address, and this modifies it to take a physical address plus the SME
>> bits.
>
> This actually modifies it to be sure the encryption bit is not part of
> the physical address.

If SME bits make it this far, we have a bug elsewhere.  Right?
Probably best not to paper over it.
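
For context, a minimal sketch of what __sme_clr() does in this series,
assuming a global sme_me_mask that holds the encryption (C) bit and is
zero when SME is inactive; the real helpers live in the series'
asm/mem_encrypt.h, so treat this as an illustration rather than the
exact patch text:

	/*
	 * Sketch of the SME address-mask helpers; sme_me_mask is assumed
	 * to hold the encryption bit (0 when SME is not active).
	 */
	extern unsigned long sme_me_mask;

	#define __sme_set(x)	((x) | sme_me_mask)	/* mark a PA encrypted */
	#define __sme_clr(x)	((x) & ~sme_me_mask)	/* strip the encryption bit */

	/*
	 * With that, the modified __va() clears any encryption bit from
	 * the physical address before adding PAGE_OFFSET, so a PA tagged
	 * with the C-bit still translates to the right virtual address.
	 */
	#define __va(x)		((void *)(__sme_clr(x) + PAGE_OFFSET))

The objection above is that a "clean" physical address should never
carry the C-bit by the time it reaches __va(), so masking it here can
hide the real bug at the call site that failed to clear it.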