Date: Mon, 28 Aug 2017 12:51:19 +0200
From: Borislav Petkov
To: Brijesh Singh, Tom Lendacky
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, linux-efi@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, Thomas Gleixner,
	Ingo Molnar, "H. Peter Anvin", Andy Lutomirski, Tony Luck, Piotr Luc,
	Fenghua Yu, Lu Baolu, Reza Arbab, David Howells, Matt Fleming,
	"Kirill A. Shutemov", Laura Abbott, Ard Biesheuvel, Andrew Morton,
	Eric Biederman, Benjamin Herrenschmidt, Paul Mackerras,
	Konrad Rzeszutek Wilk, Jonathan Corbet, Dave Airlie, Kees Cook,
	Paolo Bonzini, Radim Krčmář, Arnd Bergmann, Tejun Heo,
	Christoph Lameter
Subject: Re: [RFC Part1 PATCH v3 15/17] x86: Add support for changing memory encryption attribute in early boot
Message-ID: <20170828105119.xs73tinknqcmrgvk@pd.tnic>
References: <20170724190757.11278-1-brijesh.singh@amd.com>
	<20170724190757.11278-16-brijesh.singh@amd.com>
In-Reply-To: <20170724190757.11278-16-brijesh.singh@amd.com>

On Mon, Jul 24, 2017 at 02:07:55PM -0500, Brijesh Singh wrote:
> Some KVM-specific custom MSRs shares the guest physical address with

s/shares/share/

> hypervisor.

"the hypervisor."

> When SEV is active, the shared physical address must be mapped
> with encryption attribute cleared so that both hypervsior and guest can
> access the data.
>
> Add APIs to change memory encryption attribute in early boot code.
>
> Signed-off-by: Brijesh Singh
> ---
>  arch/x86/include/asm/mem_encrypt.h |  17 ++++++
>  arch/x86/mm/mem_encrypt.c          | 117 +++++++++++++++++++++++++++++++++++++
>  2 files changed, 134 insertions(+)

...

> +static int __init early_set_memory_enc_dec(resource_size_t paddr,
> +					   unsigned long size, bool enc)
> +{
> +	unsigned long vaddr, vaddr_end, vaddr_next;
> +	unsigned long psize, pmask;
> +	int split_page_size_mask;
> +	pte_t *kpte;
> +	int level;
> +
> +	vaddr = (unsigned long)__va(paddr);
> +	vaddr_next = vaddr;
> +	vaddr_end = vaddr + size;
> +
> +	/*
> +	 * We are going to change the physical page attribute from C=1 to C=0
> +	 * or vice versa. Flush the caches to ensure that data is written into
> +	 * memory with correct C-bit before we change attribute.
> +	 */
> +	clflush_cache_range(__va(paddr), size);
> +
> +	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> +		kpte = lookup_address(vaddr, &level);
> +		if (!kpte || pte_none(*kpte))
> +			return 1;

Return before flushing TLBs?

Perhaps you mean

	ret = 1;
	goto out;

here, and out does

out:
	__flush_tlb_all();
	return ret;

?

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)