From: Mathieu Desnoyers
To: Ingo Molnar, Peter Zijlstra, Thomas Gleixner
Cc: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
    Andy Lutomirski, "Paul E. McKenney", Boqun Feng, Andrew Hunter,
    Maged Michael, Avi Kivity, Benjamin Herrenschmidt, Paul Mackerras,
    Michael Ellerman, Dave Watson, "H. Peter Anvin", Andrea Parri,
    Russell King, Greg Hackmann, Will Deacon, David Sehr,
    Linus Torvalds, x86@kernel.org, Mathieu Desnoyers,
    linux-arch@vger.kernel.org
Subject: [PATCH for 4.16 v3 08/11] membarrier: Provide core serializing command
Date: Mon, 29 Jan 2018 15:20:17 -0500
Message-Id: <20180129202020.8515-9-mathieu.desnoyers@efficios.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180129202020.8515-1-mathieu.desnoyers@efficios.com>
References: <20180129202020.8515-1-mathieu.desnoyers@efficios.com>

Provide a core serializing membarrier command to support memory reclaim
by JIT.

Each architecture needs to explicitly opt into that support by
documenting in their architecture code how they provide the core
serializing instructions required when returning from the membarrier
IPI, and after the scheduler has updated the curr->mm pointer (before
going back to user-space). They should then select
ARCH_HAS_MEMBARRIER_SYNC_CORE to enable support for that command on
their architecture.

Architectures selecting this feature need to either document that
they issue core serializing instructions when returning to user-space,
or implement their architecture-specific sync_core_before_usermode().

Signed-off-by: Mathieu Desnoyers
Acked-by: Peter Zijlstra (Intel)
CC: Andy Lutomirski
CC: Paul E. McKenney
CC: Boqun Feng
CC: Andrew Hunter
CC: Maged Michael
CC: Avi Kivity
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Dave Watson
CC: Thomas Gleixner
CC: Ingo Molnar
CC: "H. Peter Anvin"
CC: Andrea Parri
CC: Russell King
CC: Greg Hackmann
CC: Will Deacon
CC: David Sehr
CC: linux-arch@vger.kernel.org
---
Changes since v1:
- Include linux/sync_core.h

Changes since v2:
- Rework a comment and changelog based on Peter Zijlstra's feedback.
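
For illustration only (not part of the patch): a minimal user-space
sketch of how a JIT might drive the new command pair, assuming the
updated UAPI header from this series is installed. There is no glibc
wrapper for membarrier, so the raw syscall is used; everything except
the UAPI constants is made up for the example.

  /* Hypothetical usage sketch -- not part of this patch. */
  #include <linux/membarrier.h>	/* UAPI header updated by this patch */
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <stdio.h>

  static int sys_membarrier(int cmd, int flags)
  {
  	/* No glibc wrapper exists; invoke the raw syscall. */
  	return syscall(__NR_membarrier, cmd, flags);
  }

  int main(void)
  {
  	/*
  	 * Register intent once, e.g. at JIT initialization. -EINVAL
  	 * means the architecture does not support SYNC_CORE.
  	 */
  	if (sys_membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE, 0)) {
  		perror("membarrier register");
  		return 1;
  	}

  	/*
  	 * Later, before reusing memory that held JIT'd code: ensure all
  	 * running sibling threads execute a core serializing instruction
  	 * before the reclaimed pages are recycled.
  	 */
  	if (sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0)) {
  		perror("membarrier sync-core");
  		return 1;
  	}
  	return 0;
  }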
---
 include/linux/sched/mm.h        | 18 ++++++++++++++
 include/uapi/linux/membarrier.h | 32 ++++++++++++++++++++++++-
 init/Kconfig                    |  3 +++
 kernel/sched/core.c             | 18 ++++++++++----
 kernel/sched/membarrier.c       | 53 +++++++++++++++++++++++++++++++----------
 5 files changed, 106 insertions(+), 18 deletions(-)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index c7ad7a70cfef..a7840f0f8832 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -7,6 +7,7 @@
 #include <linux/sched.h>
 #include <linux/mm_types.h>
 #include <linux/gfp.h>
+#include <linux/sync_core.h>
 
 /*
  * Routines for handling mm_structs
@@ -223,12 +224,26 @@ enum {
 	MEMBARRIER_STATE_PRIVATE_EXPEDITED			= (1U << 1),
 	MEMBARRIER_STATE_GLOBAL_EXPEDITED_READY			= (1U << 2),
 	MEMBARRIER_STATE_GLOBAL_EXPEDITED			= (1U << 3),
+	MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY	= (1U << 4),
+	MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE		= (1U << 5),
+};
+
+enum {
+	MEMBARRIER_FLAG_SYNC_CORE	= (1U << 0),
 };
 
 #ifdef CONFIG_ARCH_HAS_MEMBARRIER_HOOKS
 #include <asm/membarrier.h>
 #endif
 
+static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
+{
+	if (likely(!(atomic_read(&mm->membarrier_state) &
+		     MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
+		return;
+	sync_core_before_usermode();
+}
+
 static inline void membarrier_execve(struct task_struct *t)
 {
 	atomic_set(&t->mm->membarrier_state, 0);
@@ -244,6 +259,9 @@ static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
 static inline void membarrier_execve(struct task_struct *t)
 {
 }
+static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
+{
+}
 #endif
 
 #endif /* _LINUX_SCHED_MM_H */
diff --git a/include/uapi/linux/membarrier.h b/include/uapi/linux/membarrier.h
index d252506e1b5e..5891d7614c8c 100644
--- a/include/uapi/linux/membarrier.h
+++ b/include/uapi/linux/membarrier.h
@@ -73,7 +73,7 @@
  *                          to and return from the system call
  *                          (non-running threads are de facto in such a
  *                          state). This only covers threads from the
- *                          same processes as the caller thread. This
+ *                          same process as the caller thread. This
  *                          command returns 0 on success. The
  *                          "expedited" commands complete faster than
  *                          the non-expedited ones, they never block,
@@ -86,6 +86,34 @@
  *                          Register the process intent to use
  *                          MEMBARRIER_CMD_PRIVATE_EXPEDITED. Always
  *                          returns 0.
+ * @MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE:
+ *                          In addition to providing the memory ordering
+ *                          guarantees described in
+ *                          MEMBARRIER_CMD_PRIVATE_EXPEDITED, ensure
+ *                          the caller thread, upon return from system
+ *                          call, that all its running thread siblings
+ *                          have executed a core serializing
+ *                          instruction. (architectures are required to
+ *                          guarantee that non-running threads issue
+ *                          core serializing instructions before they
+ *                          resume user-space execution). This only
+ *                          covers threads from the same process as the
+ *                          caller thread. This command returns 0 on
+ *                          success. The "expedited" commands complete
+ *                          faster than the non-expedited ones, they
+ *                          never block, but have the downside of
+ *                          causing extra overhead. If this command is
+ *                          not implemented by an architecture, -EINVAL
+ *                          is returned. A process needs to register its
+ *                          intent to use the private expedited sync
+ *                          core command prior to using it, otherwise
+ *                          this command returns -EPERM.
+ * @MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE:
+ *                          Register the process intent to use
+ *                          MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE.
+ *                          If this command is not implemented by an
+ *                          architecture, -EINVAL is returned.
+ *                          Returns 0 on success.
  * @MEMBARRIER_CMD_SHARED:
  *                          Alias to MEMBARRIER_CMD_GLOBAL. Provided for
  *                          header backward compatibility.
@@ -101,6 +129,8 @@ enum membarrier_cmd {
 	MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED	= (1 << 2),
 	MEMBARRIER_CMD_PRIVATE_EXPEDITED		= (1 << 3),
 	MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED	= (1 << 4),
+	MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE	= (1 << 5),
+	MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE	= (1 << 6),
 
 	/* Alias for header backward compatibility. */
 	MEMBARRIER_CMD_SHARED			= MEMBARRIER_CMD_GLOBAL,
diff --git a/init/Kconfig b/init/Kconfig
index 30208da2221f..30b65febeb23 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1415,6 +1415,9 @@ config USERFAULTFD
 config ARCH_HAS_MEMBARRIER_HOOKS
 	bool
 
+config ARCH_HAS_MEMBARRIER_SYNC_CORE
+	bool
+
 config EMBEDDED
 	bool "Embedded system"
 	option allnoconfig_y
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f38c4c7e256a..48c3c1b83354 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2658,13 +2658,21 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	fire_sched_in_preempt_notifiers(current);
 
 	/*
-	 * When transitioning from a kernel thread to a userspace
-	 * thread, mmdrop()'s implicit full barrier is required by the
-	 * membarrier system call, because the current active_mm can
-	 * become the current mm without going through switch_mm().
+	 * When switching through a kernel thread, the loop in
+	 * membarrier_{private,global}_expedited() may have observed that
+	 * kernel thread and not issued an IPI. It is therefore possible to
+	 * schedule between user->kernel->user threads without passing through
+	 * switch_mm(). Membarrier requires a barrier after storing to
+	 * rq->curr, before returning to userspace, so provide them here:
+	 *
+	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
+	 *   provided by mmdrop(),
+	 * - a sync_core for SYNC_CORE.
 	 */
-	if (mm)
+	if (mm) {
+		membarrier_mm_sync_core_before_usermode(mm);
 		mmdrop(mm);
+	}
 	if (unlikely(prev_state == TASK_DEAD)) {
 		if (prev->sched_class->task_dead)
 			prev->sched_class->task_dead(prev);
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index d2087d5f9837..5d0762633639 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -26,11 +26,20 @@
  * Bitmask made from a "or" of all commands within enum membarrier_cmd,
  * except MEMBARRIER_CMD_QUERY.
  */
+#ifdef CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE
+#define MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK		\
+	(MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE		\
+	| MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE)
+#else
+#define MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK	0
+#endif
+
 #define MEMBARRIER_CMD_BITMASK	\
 	(MEMBARRIER_CMD_GLOBAL | MEMBARRIER_CMD_GLOBAL_EXPEDITED \
 	| MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED \
 	| MEMBARRIER_CMD_PRIVATE_EXPEDITED \
-	| MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED)
+	| MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED \
+	| MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK)
 
 static void ipi_mb(void *info)
 {
@@ -104,15 +113,23 @@ static int membarrier_global_expedited(void)
 	return 0;
 }
 
-static int membarrier_private_expedited(void)
+static int membarrier_private_expedited(int flags)
 {
 	int cpu;
 	bool fallback = false;
 	cpumask_var_t tmpmask;
 
-	if (!(atomic_read(&current->mm->membarrier_state)
-			& MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY))
-		return -EPERM;
+	if (flags & MEMBARRIER_FLAG_SYNC_CORE) {
+		if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE))
+			return -EINVAL;
+		if (!(atomic_read(&current->mm->membarrier_state) &
+		      MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY))
+			return -EPERM;
+	} else {
+		if (!(atomic_read(&current->mm->membarrier_state) &
+		      MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY))
+			return -EPERM;
+	}
 
 	if (num_online_cpus() == 1)
 		return 0;
@@ -205,20 +222,29 @@ static int membarrier_register_global_expedited(void)
 	return 0;
 }
 
-static int membarrier_register_private_expedited(void)
+static int membarrier_register_private_expedited(int flags)
 {
 	struct task_struct *p = current;
 	struct mm_struct *mm = p->mm;
+	int state = MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY;
+
+	if (flags & MEMBARRIER_FLAG_SYNC_CORE) {
+		if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE))
+			return -EINVAL;
+		state = MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY;
+	}
 
 	/*
 	 * We need to consider threads belonging to different thread
 	 * groups, which use the same mm. (CLONE_VM but not
 	 * CLONE_THREAD).
 	 */
-	if (atomic_read(&mm->membarrier_state)
-			& MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY)
+	if (atomic_read(&mm->membarrier_state) & state)
 		return 0;
 	atomic_or(MEMBARRIER_STATE_PRIVATE_EXPEDITED, &mm->membarrier_state);
+	if (flags & MEMBARRIER_FLAG_SYNC_CORE)
+		atomic_or(MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE,
+			  &mm->membarrier_state);
 	if (!(atomic_read(&mm->mm_users) == 1 && get_nr_threads(p) == 1)) {
 		/*
 		 * Ensure all future scheduler executions will observe the
@@ -226,8 +252,7 @@ static int membarrier_register_private_expedited(void)
 		 */
 		synchronize_sched();
 	}
-	atomic_or(MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY,
-			&mm->membarrier_state);
+	atomic_or(state, &mm->membarrier_state);
 
 	return 0;
 }
@@ -283,9 +308,13 @@ SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
 	case MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED:
 		return membarrier_register_global_expedited();
 	case MEMBARRIER_CMD_PRIVATE_EXPEDITED:
-		return membarrier_private_expedited();
+		return membarrier_private_expedited(0);
 	case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED:
-		return membarrier_register_private_expedited();
+		return membarrier_register_private_expedited(0);
+	case MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE:
+		return membarrier_private_expedited(MEMBARRIER_FLAG_SYNC_CORE);
+	case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE:
+		return membarrier_register_private_expedited(MEMBARRIER_FLAG_SYNC_CORE);
 	default:
 		return -EINVAL;
 	}
-- 
2.11.0