Date: Tue, 22 Aug 2017 18:52:48 +0200
From: Borislav Petkov
To: Brijesh Singh, Tom Lendacky
Cc: "H. Peter Anvin", Arnd Bergmann, David Laight,
	"linux-kernel@vger.kernel.org", "x86@kernel.org",
	"linux-efi@vger.kernel.org", "linuxppc-dev@lists.ozlabs.org",
	"kvm@vger.kernel.org", Fenghua Yu, Matt Fleming, David Howells,
	Paul Mackerras, Christoph Lameter, Jonathan Corbet,
	Radim Krčmář, Piotr Luc, Ingo Molnar, Dave Airlie, Kees Cook,
	Konrad Rzeszutek Wilk, Reza Arbab, Andy Lutomirski,
	Thomas Gleixner, Laura Abbott, Tony Luck,
	Ard.Biesheuvel@zytor.com
Subject: Re: [RFC Part1 PATCH v3 13/17] x86/io: Unroll string I/O when SEV is active
Message-ID: <20170822165248.rkbluikdgduu7ucy@pd.tnic>
References: <20170724190757.11278-1-brijesh.singh@amd.com>
	<20170724190757.11278-14-brijesh.singh@amd.com>
	<063D6719AE5E284EB5DD2968C1650D6DD003FB85@AcuExch.aculab.com>
	<201707261927.v6QJR228008075@mail.zytor.com>
	<589d65a4-eb09-bae9-e8b4-a2d78ca6b509@amd.com>
In-Reply-To: <589d65a4-eb09-bae9-e8b4-a2d78ca6b509@amd.com>
User-Agent: NeoMutt/20170113 (1.7.2)

On Wed, Jul 26, 2017 at 03:07:14PM -0500, Brijesh Singh wrote:
> Are you commenting on the amount of code duplication? If so, I can
> certainly improve it and use a macro similar to the one in the header
> file to generate the function bodies.

So the argument about having CONFIG_AMD_MEM_ENCRYPT disabled doesn't buy
us a whole lot, because distro kernels will all have it enabled anyway.

Optimally, we would patch those IO insns when SEV is enabled, but we
can't patch at arbitrary times - we do it just once, at pre-SMP time.

And from looking at the code, we do set sev_enabled very early, as part
of __startup_64() -> sme_enable(), so I guess we can have that path set
a synthetic X86_FEATURE_ bit and then patch REP IN/OUT* with a CALL,
similar to what we do with POPCNT in
arch/x86/include/asm/arch_hweight.h.

But there you need to pay attention to which registers the called
function clobbers - see

  f5967101e9de ("x86/hweight: Get rid of the special calling convention")

Yap, it does sound a bit more complex, but if done right, we would be
patching all call sites the same way we patch the hweight*() calls, and
there should be no change to kernel size...

As always, the devil is in the detail.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
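
Below is a rough sketch of what the patching scheme described above
could look like, modeled on the __arch_hweight32() ALTERNATIVE. The
synthetic feature bit (X86_FEATURE_SEV_UNROLL) and the out-of-line
helper (__sev_outsb) are made-up names for illustration - nothing from
the actual series - and only the byte OUTS direction is shown; INS and
the word/long variants would follow the same pattern.

	/*
	 * Early boot code - e.g. the sme_enable() path Boris mentions -
	 * would force the synthetic bit before alternatives are applied:
	 *
	 *	setup_force_cpu_cap(X86_FEATURE_SEV_UNROLL);
	 */

	/*
	 * Out-of-line helper, written in asm so that, like
	 * __sw_hweight32() after f5967101e9de, it preserves every
	 * register it touches and the call site needs no extra clobbers.
	 */
	asm(".pushsection .text\n"
	    ".globl __sev_outsb\n"
	    "__sev_outsb:\n"
	    "	pushq	%rax\n"		/* lodsb writes %al */
	    "	testq	%rcx, %rcx\n"	/* count == 0 is a nop, as with REP */
	    "	jz	2f\n"
	    "1:	lodsb\n"		/* %al = *%rsi++ (DF=0 per kernel ABI) */
	    "	outb	%al, %dx\n"	/* one byte at a time, no string insn */
	    "	loop	1b\n"		/* while (--%rcx) */
	    "2:	popq	%rax\n"
	    "	ret\n"
	    ".popsection\n");

	static inline void outsb(u16 port, const void *addr, unsigned long count)
	{
		/*
		 * Default is the plain string instruction; when the
		 * synthetic bit is set, alternatives patching rewrites
		 * it at boot into a CALL to the unrolled helper, so an
		 * SEV guest never executes REP OUTSB, which SEV does
		 * not support.
		 */
		asm volatile(ALTERNATIVE("rep outsb",
					 "call __sev_outsb",
					 X86_FEATURE_SEV_UNROLL)
			     : "+S" (addr), "+c" (count)
			     : "d" (port)
			     : "memory");
	}

Because the helper preserves every register it touches and leaves %rsi
advanced and %rcx zeroed exactly as REP OUTSB would, the two variants
are interchangeable at the call site, and after the NOP padding done by
apply_alternatives() the patched kernel is the same size as the
unpatched one - which is the point of the scheme above.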