From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	kernel-hardening@lists.openwall.com
Cc: kristen.c.accardi@intel.com, dave.hansen@intel.com,
	arjan.van.de.ven@intel.com, Rick Edgecombe
Subject: [PATCH 2/3] x86/modules: Increase randomization for modules
Date: Wed, 20 Jun 2018 15:09:29 -0700
Message-Id: <1529532570-21765-3-git-send-email-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1529532570-21765-1-git-send-email-rick.p.edgecombe@intel.com>
References:
	<1529532570-21765-1-git-send-email-rick.p.edgecombe@intel.com>

This changes the behavior of the KASLR logic for allocating memory for
the text sections of loadable modules. It randomizes the location of
each module text section with about 18 bits of entropy in typical use.
This is enabled on X86_64 only. For 32 bit, the behavior is unchanged.

The algorithm evenly splits the module space in two: a random area and
a backup area. For module text allocations, it first tries up to 10
randomly chosen starting pages inside the random area. If all of these
fail, it allocates in the backup area instead. The backup area's base
is offset in the same way the current algorithm offsets the module
area base, with 1024 possible locations.

Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/pgtable_64_types.h |  1 +
 arch/x86/kernel/module.c                | 80 ++++++++++++++++++++++++++++++---
 2 files changed, 76 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 054765a..a98708a 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -141,6 +141,7 @@ extern unsigned int ptrs_per_p4d;
 /* The module sections ends with the start of the fixmap */
 #define MODULES_END		_AC(0xffffffffff000000, UL)
 #define MODULES_LEN		(MODULES_END - MODULES_VADDR)
+#define MODULES_RAND_LEN	(MODULES_LEN/2)
 
 #define ESPFIX_PGD_ENTRY	_AC(-2, UL)
 #define ESPFIX_BASE_ADDR	(ESPFIX_PGD_ENTRY << P4D_SHIFT)
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index f58336a..833ea81 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -77,6 +77,71 @@ static unsigned long int get_module_load_offset(void)
 }
 #endif
 
+static unsigned long get_module_area_base(void)
+{
+	return MODULES_VADDR + get_module_load_offset();
+}
+
+#if defined(CONFIG_X86_64) && defined(CONFIG_RANDOMIZE_BASE)
+static unsigned long get_module_vmalloc_start(void)
+{
+	if (kaslr_enabled())
+		return MODULES_VADDR + MODULES_RAND_LEN
+				+ get_module_load_offset();
+	else
+		return get_module_area_base();
+}
+
+static void *try_module_alloc(unsigned long addr, unsigned long size)
+{
+	return __vmalloc_node_try_addr(addr, size, GFP_KERNEL,
+			PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+			__builtin_return_address(0));
+}
+
+/*
+ * Try to allocate in 10 random positions starting in the random part of the
+ * module space. If these fail, return NULL.
+ */
+static void *try_module_randomize_each(unsigned long size)
+{
+	void *p = NULL;
+	unsigned int i;
+	unsigned long offset;
+	unsigned long addr;
+	unsigned long end;
+	const unsigned long nr_mod_positions = MODULES_RAND_LEN / MODULE_ALIGN;
+
+	if (!kaslr_enabled())
+		return NULL;
+
+	for (i = 0; i < 10; i++) {
+		offset = (get_random_long() % nr_mod_positions) * MODULE_ALIGN;
+		addr = (unsigned long)MODULES_VADDR + offset;
+		end = addr + size;
+
+		if (end > addr && end < MODULES_END) {
+			p = try_module_alloc(addr, size);
+
+			if (p)
+				return p;
+		}
+	}
+	return NULL;
+}
+#else
+static unsigned long get_module_vmalloc_start(void)
+{
+	return get_module_area_base();
+}
+
+static void *try_module_randomize_each(unsigned long size)
+{
+	return NULL;
+}
+#endif
+
 void *module_alloc(unsigned long size)
 {
 	void *p;
@@ -84,11 +149,16 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 
-	p = __vmalloc_node_range(size, MODULE_ALIGN,
-				    MODULES_VADDR + get_module_load_offset(),
-				    MODULES_END, GFP_KERNEL,
-				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
-				    __builtin_return_address(0));
+	p = try_module_randomize_each(size);
+
+	if (!p)
+		p = __vmalloc_node_range(size, MODULE_ALIGN,
+				get_module_vmalloc_start(),
+				MODULES_END, GFP_KERNEL,
+				PAGE_KERNEL_EXEC, 0,
+				NUMA_NO_NODE,
+				__builtin_return_address(0));
+
 	if (p && (kasan_module_alloc(p, size) < 0)) {
 		vfree(p);
 		return NULL;
-- 
2.7.4