From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andy Lutomirski, Thomas Gleixner, Mathieu Desnoyers
Subject: [PATCH 5.4 32/34] membarrier: Explicitly sync remote cores when SYNC_CORE is requested
Date: Sat, 19 Dec 2020 14:03:29 +0100
Message-Id: <20201219125342.979283107@linuxfoundation.org>
In-Reply-To: <20201219125341.384025953@linuxfoundation.org>
References: <20201219125341.384025953@linuxfoundation.org>

From: Andy Lutomirski

commit 758c9373d84168dc7d039cf85a0e920046b17b41 upstream.

membarrier() does not explicitly sync_core() remote CPUs; instead, it
relies on the assumption that an IPI will result in a core sync.  On x86,
this may be true in practice, but it's not architecturally reliable.
In particular, the SDM and APM do not appear to guarantee that interrupt
delivery is serializing.  While IRET does serialize, IPI return can
schedule, thereby switching to another task in the same mm that was
sleeping in a syscall.  The new task could then SYSRET back to usermode
without ever executing IRET.

Make this more robust by explicitly calling sync_core_before_usermode()
on remote cores.  (This also helps people who search the kernel tree for
instances of sync_core() and sync_core_before_usermode() -- one might be
surprised that the core membarrier code doesn't currently show up in
such a search.)

Fixes: 70216e18e519 ("membarrier: Provide core serializing command, *_SYNC_CORE")
Signed-off-by: Andy Lutomirski
Signed-off-by: Thomas Gleixner
Reviewed-by: Mathieu Desnoyers
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/776b448d5f7bd6b12690707f5ed67bcda7f1d427.1607058304.git.luto@kernel.org
Signed-off-by: Greg Kroah-Hartman
---
 kernel/sched/membarrier.c |   21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -30,6 +30,23 @@ static void ipi_mb(void *info)
 	smp_mb();	/* IPIs should be serializing but paranoid. */
 }
 
+static void ipi_sync_core(void *info)
+{
+	/*
+	 * The smp_mb() in membarrier after all the IPIs is supposed to
+	 * ensure that memory on remote CPUs that occur before the IPI
+	 * become visible to membarrier()'s caller -- see scenario B in
+	 * the big comment at the top of this file.
+	 *
+	 * A sync_core() would provide this guarantee, but
+	 * sync_core_before_usermode() might end up being deferred until
+	 * after membarrier()'s smp_mb().
+	 */
+	smp_mb();	/* IPIs should be serializing but paranoid. */
+
+	sync_core_before_usermode();
+}
+
 static void ipi_sync_rq_state(void *info)
 {
 	struct mm_struct *mm = (struct mm_struct *) info;
@@ -134,6 +151,7 @@ static int membarrier_private_expedited(
 	int cpu;
 	cpumask_var_t tmpmask;
 	struct mm_struct *mm = current->mm;
+	smp_call_func_t ipi_func = ipi_mb;
 
 	if (flags & MEMBARRIER_FLAG_SYNC_CORE) {
 		if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE))
@@ -141,6 +159,7 @@ static int membarrier_private_expedited(
 		if (!(atomic_read(&mm->membarrier_state) &
 		      MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY))
 			return -EPERM;
+		ipi_func = ipi_sync_core;
 	} else {
 		if (!(atomic_read(&mm->membarrier_state) &
 		      MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY))
@@ -181,7 +200,7 @@ static int membarrier_private_expedited(
 	rcu_read_unlock();
 
 	preempt_disable();
-	smp_call_function_many(tmpmask, ipi_mb, NULL, 1);
+	smp_call_function_many(tmpmask, ipi_func, NULL, 1);
 	preempt_enable();
 
 	free_cpumask_var(tmpmask);