From: Brian Gerst
Subject: Re: [RFC 06/22] kvm: Adapt assembly for PIE support
Date: Tue, 18 Jul 2017 22:49:15 -0400
References: <20170718223333.110371-1-thgarnie@google.com>
 <20170718223333.110371-7-thgarnie@google.com>
In-Reply-To: <20170718223333.110371-7-thgarnie@google.com>
To: Thomas Garnier
Cc: Herbert Xu, "David S. Miller", Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", Peter Zijlstra, Josh Poimboeuf, Arnd Bergmann,
 Matthias Kaehlcke, Boris Ostrovsky, Juergen Gross, Paolo Bonzini,
 Radim Krčmář, Joerg Roedel, Andy Lutomirski, Borislav Petkov,
 "Kirill A. Shutemov", Christian Borntraeger, "Rafael J. Wysocki",
 Len Brown, Pavel Machek, Tejun Heo, Chris
List-Id: linux-crypto.vger.kernel.org

On Tue, Jul 18, 2017 at 6:33 PM, Thomas Garnier wrote:
> Change the assembly code to use only relative references to symbols so
> the kernel can be PIE compatible. The new __ASM_GET_PTR_PRE macro is
> used to get the address of a symbol on both 32 and 64-bit with PIE
> support.
>
> Position Independent Executable (PIE) support will allow extending the
> KASLR randomization range below the -2G memory limit.
>
> Signed-off-by: Thomas Garnier
> ---
>  arch/x86/include/asm/kvm_host.h | 6 ++++--
>  arch/x86/kernel/kvm.c           | 6 ++++--
>  arch/x86/kvm/svm.c              | 4 ++--
>  3 files changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 87ac4fba6d8e..3041201a3aeb 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1352,9 +1352,11 @@ asmlinkage void kvm_spurious_fault(void);
>  	".pushsection .fixup, \"ax\" \n"                        \
>  	"667: \n\t"                                             \
>  	cleanup_insn "\n\t"                                     \
> -	"cmpb $0, kvm_rebooting \n\t"                           \
> +	"cmpb $0, kvm_rebooting" __ASM_SEL(,(%%rip)) " \n\t"    \
>  	"jne 668b \n\t"                                         \
> -	__ASM_SIZE(push) " $666b \n\t"                          \
> +	__ASM_SIZE(push) "%%" _ASM_AX " \n\t"                   \
> +	__ASM_GET_PTR_PRE(666b) "%%" _ASM_AX "\n\t"             \
> +	"xchg %%" _ASM_AX ", (%%" _ASM_SP ") \n\t"              \
>  	"call kvm_spurious_fault \n\t"                          \
>  	".popsection \n\t"                                      \
>  	_ASM_EXTABLE(666b, 667b)
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 71c17a5be983..53b8ad162589 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -618,8 +618,10 @@ asm(
>  ".global __raw_callee_save___kvm_vcpu_is_preempted;"
>  ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
>  "__raw_callee_save___kvm_vcpu_is_preempted:"
> -"movq __per_cpu_offset(,%rdi,8), %rax;"
> -"cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
> +"leaq __per_cpu_offset(%rip), %rax;"
> +"movq (%rax,%rdi,8), %rax;"
> +"addq " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rip), %rax;"

This doesn't look right.  It's accessing a per-cpu variable.  The
per-cpu section is an absolute, zero-based section and not subject to
relocation, so steal_time resolves to a small offset from the per-cpu
base, not an address that can be reached RIP-relative.

> +"cmpb $0, (%rax);"
>  "setne %al;"
>  "ret;"
>  ".popsection");

--
Brian Gerst
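
P.S. An untested sketch of what I mean, using only the symbols already
in the patch: keep the absolute reference for the zero-based per-cpu
symbol and make only the __per_cpu_offset array access RIP-relative:

"leaq __per_cpu_offset(%rip), %rax;"
"movq (%rax,%rdi,8), %rax;"
"cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
"setne %al;"
"ret;"

The cmpb line is unchanged from the current code: since steal_time is
never relocated, its absolute offset from the per-cpu base in %rax
stays valid in a PIE kernel.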