From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jeffrey Hugo, Timur Tabi,
    Jan Glauber, Kees Cook, Ingo Molnar, Will Deacon, Laura Abbott,
    Mark Rutland, Ard Biesheuvel, Catalin Marinas, Stephen Smalley,
    Thomas Gleixner, Peter Zijlstra, Andrew Morton, Linus Torvalds,
    Sasha Levin
Subject: [PATCH 4.16 228/279] init: fix false positives in W+X checking
Date: Mon, 18 Jun 2018 10:13:33 +0200
Message-Id: <20180618080618.255327438@linuxfoundation.org>
In-Reply-To: <20180618080608.851973560@linuxfoundation.org>
References: <20180618080608.851973560@linuxfoundation.org>
X-Mailer: git-send-email 2.17.1
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

4.16-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Jeffrey Hugo

[ Upstream commit ae646f0b9ca135b87bc73ff606ef996c3029780a ]

load_module() creates W+X mappings via __vmalloc_node_range() (from
layout_and_allocate()->move_module()->module_alloc()) by using
PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
"call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().

This is a problem because call_rcu_sched() queues work, which can be run
after debug_checkwx() is run, resulting in a race condition.  If hit, the
race results in a nasty splat about insecure W+X mappings, which results
in a poor user experience as these are not the mappings that
debug_checkwx() is intended to catch.

This issue is observed on multiple arm64 platforms, and has been
artificially triggered on an x86 platform.

Address the race by flushing the queued work before running the
arch-defined mark_rodata_ro() which then calls debug_checkwx().

Link: http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@codeaurora.org
Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
Signed-off-by: Jeffrey Hugo
Reported-by: Timur Tabi
Reported-by: Jan Glauber
Acked-by: Kees Cook
Acked-by: Ingo Molnar
Acked-by: Will Deacon
Acked-by: Laura Abbott
Cc: Mark Rutland
Cc: Ard Biesheuvel
Cc: Catalin Marinas
Cc: Stephen Smalley
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 init/main.c     |    7 +++++++
 kernel/module.c |    5 +++++
 2 files changed, 12 insertions(+)

--- a/init/main.c
+++ b/init/main.c
@@ -981,6 +981,13 @@ __setup("rodata=", set_debug_rodata);
 static void mark_readonly(void)
 {
 	if (rodata_enabled) {
+		/*
+		 * load_module() results in W+X mappings, which are cleaned up
+		 * with call_rcu_sched().  Let's make sure that queued work is
+		 * flushed so that we don't hit false positives looking for
+		 * insecure pages which are W+X.
+		 */
+		rcu_barrier_sched();
 		mark_rodata_ro();
 		rodata_test();
 	} else
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -3521,6 +3521,11 @@ static noinline int do_init_module(struc
 	 * walking this with preempt disabled.  In all the failure paths, we
 	 * call synchronize_sched(), but we don't want to slow down the success
 	 * path, so use actual RCU here.
+	 * Note that module_alloc() on most architectures creates W+X page
+	 * mappings which won't be cleaned up until do_free_init() runs.  Any
+	 * code such as mark_rodata_ro() which depends on those mappings to
+	 * be cleaned up needs to sync with the queued work - ie
+	 * rcu_barrier_sched()
 	 */
 	call_rcu_sched(&freeinit->rcu, do_free_init);
 	mutex_unlock(&module_mutex);
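
For readers unfamiliar with the ordering the patch enforces, the sketch
below models the same pattern in plain userspace C: a deferred cleanup
(standing in for the call_rcu_sched() callback that frees the module init
mappings) races with a one-shot check (standing in for debug_checkwx()),
and a barrier (pthread_join(), standing in for rcu_barrier_sched()) flushes
the queued work before the check runs.  This is an illustrative analogy
only, not kernel code; every name in it is made up for the example.

/* build: cc -pthread wx_race_sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Stands in for the transient W+X module init mapping. */
static bool wx_mapping_present = true;

/* Stands in for do_free_init(), queued via call_rcu_sched(). */
static void *deferred_cleanup(void *arg)
{
	(void)arg;
	usleep(1000);			/* the cleanup is queued and runs "later" */
	wx_mapping_present = false;
	return NULL;
}

/* Stands in for debug_checkwx(), called from mark_rodata_ro(). */
static void check_wx(void)
{
	if (wx_mapping_present)
		printf("false positive: W+X mapping still present\n");
	else
		printf("ok: no W+X mappings\n");
}

int main(void)
{
	pthread_t cleanup;

	pthread_create(&cleanup, NULL, deferred_cleanup, NULL);

	/*
	 * The fix: wait for the queued cleanup to finish before checking,
	 * just as rcu_barrier_sched() waits for the queued do_free_init()
	 * callbacks before mark_rodata_ro() runs.  Without this join, the
	 * check can observe the stale mapping and print the false positive.
	 */
	pthread_join(cleanup, NULL);

	check_wx();
	return 0;
}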