From: Igor Stoppa
To: Andy Lutomirski, Kees Cook, Matthew Wilcox
Cc: igor.stoppa@huawei.com, Nadav Amit, Peter Zijlstra, Dave Hansen,
	linux-integrity@vger.kernel.org, kernel-hardening@lists.openwall.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/6] __wr_after_init: write rare for static allocation
Date: Tue, 4 Dec 2018 14:18:01 +0200
Message-Id: <20181204121805.4621-3-igor.stoppa@huawei.com>
In-Reply-To: <20181204121805.4621-1-igor.stoppa@huawei.com>
References: <20181204121805.4621-1-igor.stoppa@huawei.com>

Implementation of write rare for statically allocated data, located in
a dedicated memory section through the use of the __wr_after_init
label.

The basic functions are:
- wr_memset(): write rare counterpart of memset()
- wr_memcpy(): write rare counterpart of memcpy()
- wr_assign(): write rare counterpart of the assignment ('=') operator
- wr_rcu_assign_pointer(): write rare counterpart of
  rcu_assign_pointer()

The implementation is based on code from Andy Lutomirski and Nadav Amit
for patching the text on x86 [here goes reference to commits, once
merged].

Write-protected data is modified through an alternate, writable mapping
of the same pages. The mapping is local to each core and is active only
for the duration of each write operation. Local interrupts are disabled
while the alternate mapping is active. In theory this could introduce
an unpredictable delay on a preemptible system; in practice, the amount
of data to be altered is likely to be far smaller than a page.
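As an illustration of the intended usage (a hypothetical sketch, not
part of this patch: the max_entries variable and update_max_entries()
are made up for the example), a statically allocated variable is tagged
with the __wr_after_init label and afterwards modified only through the
wr_*() API:

	/* Hypothetical usage example, for illustration only. */
	static int max_entries __wr_after_init = 16;

	void update_max_entries(void)
	{
		/*
		 * Once the section is write protected, a plain
		 * "max_entries = 32;" would fault; the update must go
		 * through the write rare assignment helper.
		 */
		wr_assign(max_entries, 32);

		/* Bulk updates use wr_memcpy()/wr_memset() instead. */
		wr_memset(&max_entries, 0, sizeof(max_entries));
	}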
Signed-off-by: Igor Stoppa
CC: Andy Lutomirski
CC: Nadav Amit
CC: Matthew Wilcox
CC: Peter Zijlstra
CC: Kees Cook
CC: Dave Hansen
CC: linux-integrity@vger.kernel.org
CC: kernel-hardening@lists.openwall.com
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
---
 include/linux/prmem.h | 133 ++++++++++++++++++++++++++++++++++++++++++
 init/main.c           |   2 +
 mm/Kconfig            |   4 ++
 mm/Makefile           |   1 +
 mm/prmem.c            | 124 +++++++++++++++++++++++++++++++++++++++
 5 files changed, 264 insertions(+)
 create mode 100644 include/linux/prmem.h
 create mode 100644 mm/prmem.c

diff --git a/include/linux/prmem.h b/include/linux/prmem.h
new file mode 100644
index 000000000000..b0131c1f5dc0
--- /dev/null
+++ b/include/linux/prmem.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * prmem.h: Header for memory protection library
+ *
+ * (C) Copyright 2018 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa
+ *
+ * Support for:
+ * - statically allocated write rare data
+ */
+
+#ifndef _LINUX_PRMEM_H
+#define _LINUX_PRMEM_H
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/**
+ * memtst() - test whether n bytes of memory all match the value c
+ * @p: beginning of the memory to test
+ * @c: byte to compare against
+ * @len: number of bytes to test
+ *
+ * Returns 0 on success, non-zero otherwise.
+ */
+static inline int memtst(void *p, int c, __kernel_size_t len)
+{
+	__kernel_size_t i;
+
+	for (i = 0; i < len; i++) {
+		u8 d = *(i + (u8 *)p) - (u8)c;
+
+		if (unlikely(d))
+			return d;
+	}
+	return 0;
+}
+
+
+#ifndef CONFIG_PRMEM
+
+static inline void *wr_memset(void *p, int c, __kernel_size_t len)
+{
+	return memset(p, c, len);
+}
+
+static inline void *wr_memcpy(void *p, const void *q, __kernel_size_t size)
+{
+	return memcpy(p, q, size);
+}
+
+#define wr_assign(var, val)	((var) = (val))
+
+#define wr_rcu_assign_pointer(p, v)	\
+	rcu_assign_pointer(p, v)
+
+#else
+
+enum wr_op_type {
+	WR_MEMCPY,
+	WR_MEMSET,
+	WR_RCU_ASSIGN_PTR,
+	WR_OPS_NUMBER,
+};
+
+void *__wr_op(unsigned long dst, unsigned long src, __kernel_size_t len,
+	      enum wr_op_type op);
+
+/**
+ * wr_memset() - sets n bytes of the destination to the c value
+ * @p: beginning of the memory to write to
+ * @c: byte to replicate
+ * @len: number of bytes to set
+ *
+ * Returns pointer to the destination.
+ */
+static inline void *wr_memset(void *p, int c, __kernel_size_t len)
+{
+	return __wr_op((unsigned long)p, (unsigned long)c, len, WR_MEMSET);
+}
+
+/**
+ * wr_memcpy() - copies n bytes from source to destination
+ * @p: beginning of the memory to write to
+ * @q: beginning of the memory to read from
+ * @size: number of bytes to copy
+ *
+ * Returns pointer to the destination.
+ */
+static inline void *wr_memcpy(void *p, const void *q, __kernel_size_t size)
+{
+	return __wr_op((unsigned long)p, (unsigned long)q, size, WR_MEMCPY);
+}
+
+/**
+ * wr_assign() - sets a write-rare variable to a specified value
+ * @var: the variable to set
+ * @val: the new value
+ *
+ * Returns: the variable
+ *
+ * Note: it might be possible to optimize this, to use wr_memset in some
+ * cases (maybe with NULL?).
+ */
+#define wr_assign(var, val) ({			\
+	typeof(var) tmp = (typeof(var))(val);	\
+						\
+	wr_memcpy(&var, &tmp, sizeof(var));	\
+	var;					\
+})
+
+/**
+ * wr_rcu_assign_pointer() - initialize a pointer in rcu mode
+ * @p: the rcu pointer
+ * @v: the new value
+ *
+ * Returns the value assigned to the rcu pointer.
+ *
+ * It is provided as a macro, to match rcu_assign_pointer()
+ */
+#define wr_rcu_assign_pointer(p, v) ({					\
+	__wr_op((unsigned long)&p, (unsigned long)v, sizeof(p),		\
+		WR_RCU_ASSIGN_PTR);					\
+	p;								\
+})
+#endif
+#endif
diff --git a/init/main.c b/init/main.c
index a461150adfb1..a36f2e54f937 100644
--- a/init/main.c
+++ b/init/main.c
@@ -498,6 +498,7 @@ void __init __weak thread_stack_cache_init(void)
 void __init __weak mem_encrypt_init(void) { }
 
 void __init __weak poking_init(void) { }
+void __init __weak wr_poking_init(void) { }
 
 bool initcall_debug;
 core_param(initcall_debug, initcall_debug, bool, 0644);
@@ -734,6 +735,7 @@ asmlinkage __visible void __init start_kernel(void)
 	delayacct_init();
 
 	poking_init();
+	wr_poking_init();
 	check_bugs();
 	acpi_subsystem_init();
diff --git a/mm/Kconfig b/mm/Kconfig
index d85e39da47ae..9b09339c027f 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -142,6 +142,10 @@ config ARCH_DISCARD_MEMBLOCK
 config MEMORY_ISOLATION
 	bool
 
+config PRMEM
+	def_bool n
+	depends on STRICT_KERNEL_RWX && X86_64
+
 #
 # Only be set on architectures that have completely implemented memory hotplug
 # feature. If you are not sure, don't touch it.
diff --git a/mm/Makefile b/mm/Makefile
index d210cc9d6f80..ef3867c16ce0 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -58,6 +58,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
+obj-$(CONFIG_PRMEM) += prmem.o
 obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
diff --git a/mm/prmem.c b/mm/prmem.c
new file mode 100644
index 000000000000..e8ab76701831
--- /dev/null
+++ b/mm/prmem.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * prmem.c: Memory Protection Library
+ *
+ * (C) Copyright 2017-2018 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static __ro_after_init bool wr_ready;
+static __ro_after_init struct mm_struct *wr_poking_mm;
+static __ro_after_init unsigned long wr_poking_base;
+
+/*
+ * The following two variables are statically allocated by the linker
+ * script at the boundaries of the memory region (rounded up to
+ * multiples of PAGE_SIZE) reserved for __wr_after_init.
+ */
+extern long __start_wr_after_init;
+extern long __end_wr_after_init;
+
+static inline bool is_wr_after_init(unsigned long ptr, __kernel_size_t size)
+{
+	unsigned long start = (unsigned long)&__start_wr_after_init;
+	unsigned long end = (unsigned long)&__end_wr_after_init;
+	unsigned long low = ptr;
+	unsigned long high = ptr + size;
+
+	return likely(start <= low && low <= high && high <= end);
+}
+
+
+void *__wr_op(unsigned long dst, unsigned long src, __kernel_size_t len,
+	      enum wr_op_type op)
+{
+	temporary_mm_state_t prev;
+	unsigned long flags;
+	unsigned long offset;
+	unsigned long wr_poking_addr;
+
+	/* Confirm that the writable mapping exists. */
+	BUG_ON(!wr_ready);
+
+	if (WARN_ONCE(op >= WR_OPS_NUMBER, "Invalid WR operation.") ||
+	    WARN_ONCE(!is_wr_after_init(dst, len), "Invalid WR range."))
+		return (void *)dst;
+
+	offset = dst - (unsigned long)&__start_wr_after_init;
+	wr_poking_addr = wr_poking_base + offset;
+	local_irq_save(flags);
+	prev = use_temporary_mm(wr_poking_mm);
+
+	kasan_disable_current();
+	if (op == WR_MEMCPY)
+		memcpy((void *)wr_poking_addr, (void *)src, len);
+	else if (op == WR_MEMSET)
+		memset((u8 *)wr_poking_addr, (u8)src, len);
+	else if (op == WR_RCU_ASSIGN_PTR)
+		/* generic version of rcu_assign_pointer */
+		smp_store_release((void **)wr_poking_addr,
+				  RCU_INITIALIZER((void **)src));
+	kasan_enable_current();
+
+	barrier();	/* XXX redundant? */
+
+	unuse_temporary_mm(prev);
+	/* XXX make the verification optional? */
+	if (op == WR_MEMCPY)
+		BUG_ON(memcmp((void *)dst, (void *)src, len));
+	else if (op == WR_MEMSET)
+		BUG_ON(memtst((void *)dst, (u8)src, len));
+	else if (op == WR_RCU_ASSIGN_PTR)
+		BUG_ON(*(unsigned long *)dst != src);
+	local_irq_restore(flags);
+	return (void *)dst;
+}
+
+struct mm_struct *copy_init_mm(void);
+void __init wr_poking_init(void)
+{
+	unsigned long start = (unsigned long)&__start_wr_after_init;
+	unsigned long end = (unsigned long)&__end_wr_after_init;
+	unsigned long i;
+	unsigned long wr_range;
+
+	wr_poking_mm = copy_init_mm();
+	BUG_ON(!wr_poking_mm);
+
+	/* XXX What if it's too large to fit in the task unmapped mem? */
+	wr_range = round_up(end - start, PAGE_SIZE);
+
+	/* Randomize the poking address base. */
+	wr_poking_base = TASK_UNMAPPED_BASE +
+		(kaslr_get_random_long("Write Rare Poking") & PAGE_MASK) %
+		(TASK_SIZE - (TASK_UNMAPPED_BASE + wr_range));
+
+	/* Create an alternate mapping for the entire wr_after_init range. */
+	for (i = start; i < end; i += PAGE_SIZE) {
+		struct page *page;
+		spinlock_t *ptl;
+		pte_t pte;
+		pte_t *ptep;
+		unsigned long wr_poking_addr;
+
+		BUG_ON(!(page = virt_to_page(i)));
+		wr_poking_addr = i - start + wr_poking_base;
+
+		/* The lock is not needed, but avoids open-coding. */
+		ptep = get_locked_pte(wr_poking_mm, wr_poking_addr, &ptl);
+		VM_BUG_ON(!ptep);
+
+		pte = mk_pte(page, PAGE_KERNEL);
+		set_pte_at(wr_poking_mm, wr_poking_addr, ptep, pte);
+		spin_unlock(ptl);
+	}
+	wr_ready = true;
+}
-- 
2.19.1