Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933729AbeAKVrl (ORCPT + 1 other);
	Thu, 11 Jan 2018 16:47:41 -0500
Received: from smtp-fw-9102.amazon.com ([207.171.184.29]:5414 "EHLO
	smtp-fw-9102.amazon.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S933307AbeAKVrK (ORCPT );
	Thu, 11 Jan 2018 16:47:10 -0500
X-IronPort-AV: E=Sophos;i="5.46,346,1511827200"; d="scan'208";a="585776808"
From: David Woodhouse
To: Andi Kleen
Cc: Paul Turner, LKML, Linus Torvalds, Greg Kroah-Hartman, Tim Chen,
	Dave Hansen, tglx@linutronix.de, Kees Cook, Rik van Riel,
	Peter Zijlstra, Andy Lutomirski, Jiri Kosina,
	gnomes@lxorguk.ukuu.org.uk, x86@kernel.org, thomas.lendacky@amd.com,
	Josh Poimboeuf
Subject: [PATCH v8 10/12] x86/retpoline/checksum32: Convert assembler indirect jumps
Date: Thu, 11 Jan 2018 21:46:32 +0000
Message-Id: <1515707194-20531-11-git-send-email-dwmw@amazon.co.uk>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1515707194-20531-1-git-send-email-dwmw@amazon.co.uk>
References: <1515707194-20531-1-git-send-email-dwmw@amazon.co.uk>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
Return-Path: <linux-kernel-owner@vger.kernel.org>

Convert all indirect jumps in 32bit checksum assembler code to use
non-speculative sequences when CONFIG_RETPOLINE is enabled.

Signed-off-by: David Woodhouse
Signed-off-by: Thomas Gleixner
Acked-by: Arjan van de Ven
Acked-by: Ingo Molnar
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: Rik van Riel
Cc: Andi Kleen
Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: Jiri Kosina
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Kees Cook
Cc: Tim Chen
Cc: Greg Kroah-Hartman
Cc: Paul Turner
Link: https://lkml.kernel.org/r/1515508997-6154-10-git-send-email-dwmw@amazon.co.uk
---
 arch/x86/lib/checksum_32.S | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/lib/checksum_32.S b/arch/x86/lib/checksum_32.S
index 4d34bb5..46e71a7 100644
--- a/arch/x86/lib/checksum_32.S
+++ b/arch/x86/lib/checksum_32.S
@@ -29,7 +29,8 @@
 #include <asm/errno.h>
 #include <asm/asm.h>
 #include <asm/export.h>
-
+#include <asm/nospec-branch.h>
+
 /*
  * computes a partial checksum, e.g. for TCP/UDP fragments
  */
@@ -156,7 +157,7 @@ ENTRY(csum_partial)
 	negl %ebx
 	lea 45f(%ebx,%ebx,2), %ebx
 	testl %esi, %esi
-	jmp *%ebx
+	JMP_NOSPEC %ebx
 
 	# Handle 2-byte-aligned regions
 20:	addw (%esi), %ax
@@ -439,7 +440,7 @@ ENTRY(csum_partial_copy_generic)
 	andl $-32,%edx
 	lea 3f(%ebx,%ebx), %ebx
 	testl %esi, %esi
-	jmp *%ebx
+	JMP_NOSPEC %ebx
 1:	addl $64,%esi
 	addl $64,%edi
 	SRC(movb -32(%edx),%bl)	;	SRC(movb (%edx),%bl)
-- 
2.7.4
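
A note on JMP_NOSPEC for readers following the series: the macro comes from
<asm/nospec-branch.h>, added earlier in this series, and degrades to a plain
indirect jmp when CONFIG_RETPOLINE is not set. The fragment below is only an
illustrative sketch of the retpoline idea behind it, not the macro's actual
expansion (the real version picks its sequence at runtime via the alternatives
mechanism, and the label names here are invented). It shows how an indirect
"jmp *%ebx", like the two converted above, can be routed through a call/ret
pair so speculation is steered by the return-stack predictor into a harmless
spin loop rather than by the poisonable indirect branch predictor:

	# Illustrative retpoline sketch only, not the kernel's JMP_NOSPEC
	call	.Ldo_rop	# pushes .Lspec_trap as the return address
.Lspec_trap:
	pause			# speculative execution is captured here...
	lfence
	jmp	.Lspec_trap	# ...and spins until the ret below retires
.Ldo_rop:
	mov	%ebx, (%esp)	# overwrite the return address with the target
	ret			# architecturally jumps to *%ebx

Both converted sites compute a jump-table target in %ebx (the lea just above
each jump) and then jump through the register, which is exactly the
indirect-branch pattern the rest of the series converts across arch/x86.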