From: Ard Biesheuvel
Date: Thu, 3 Aug 2017 18:20:32 +0100
Subject: Re: [PATCH] arm64: avoid overflow in VA_START and PAGE_OFFSET
To: Nick Desaulniers
Cc: zijun_hu, Andrew Morton, Greg Hackmann, Doug Anderson, srhines@google.com, pirama@google.com, Catalin Marinas, Will Deacon, Mark Rutland, Laura Abbott, Oleksandr Andrushchenko, Alexander Popov, Neeraj Upadhyay, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
In-Reply-To: <20170803171117.107992-1-ndesaulniers@google.com>
References: <20170803132035.gixd7m7oxtehpquu@yury-thinkpad> <20170803171117.107992-1-ndesaulniers@google.com>

On 3 August 2017 at 18:11, Nick Desaulniers wrote:
> The bitmask used to define these values produces overflow, as seen by
> this compiler warning:
>
>   arch/arm64/kernel/head.S:47:8: warning:
>     integer overflow in preprocessor expression
>   #elif (PAGE_OFFSET & 0x1fffff) != 0
>          ^~~~~~~~~~~
>   arch/arm64/include/asm/memory.h:52:46: note:
>     expanded from macro 'PAGE_OFFSET'
>   #define PAGE_OFFSET (UL(0xffffffffffffffff) << (VA_BITS - 1))
>                        ~~~~~~~~~~~~~~~~~~ ^
>
> It would be preferable to use GENMASK_ULL() instead, but it's not set
> up to be used from assembly (the UL() macro token pastes UL suffixes
> when not included in assembly sources).
>
> Suggested-by: Yury Norov
> Suggested-by: Matthias Kaehlcke
> Signed-off-by: Nick Desaulniers
> ---
>  arch/arm64/include/asm/memory.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 32f82723338a..dde717a31dee 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -64,8 +64,9 @@
>   * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
>   */
>  #define VA_BITS (CONFIG_ARM64_VA_BITS)
> -#define VA_START (UL(0xffffffffffffffff) << VA_BITS)
> -#define PAGE_OFFSET (UL(0xffffffffffffffff) << (VA_BITS - 1))
> +#define VA_START ((UL(0xffffffffffffffff) >> VA_BITS) << VA_BITS)
> +#define PAGE_OFFSET ((UL(0xffffffffffffffff) >> (VA_BITS - 1)) \
> +                     << (VA_BITS - 1))
>  #define KIMAGE_VADDR (MODULES_END)
>  #define MODULES_END (MODULES_VADDR + MODULES_VSIZE)
>  #define MODULES_VADDR (VA_START + KASAN_SHADOW_SIZE)
> --
> 2.14.0.rc1.383.gd1ce394fe2-goog
>

Would

#define VA_START (UL(0xffffffffffffffff) - (1 << VA_BITS) + 1)

also work?