From: Frederic Weisbecker <frederic@kernel.org>
To: Peter Zijlstra, Ingo Molnar
Cc: LKML, Frederic Weisbecker, "Paul E. McKenney", Thomas Gleixner
Subject: [RFC PATCH 4/4] irq_work: Weaken ordering in irq_work_run_list()
Date: Fri, 8 Nov 2019 17:08:58 +0100
Message-Id: <20191108160858.31665-5-frederic@kernel.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191108160858.31665-1-frederic@kernel.org>
References: <20191108160858.31665-1-frederic@kernel.org>

(This patch is an RFC because it makes the code less clear in favour of an ordering optimization, an ordering that has yet to pass under careful eyes. I'm not sure it's worth it.)

Clearing IRQ_WORK_PENDING with atomic_fetch_andnot() lets us know the old value of flags, which we can reuse in the later cmpxchg() to clear IRQ_WORK_BUSY if necessary.

However there is no need to read flags atomically there: its value can't be concurrently changed before we clear the IRQ_WORK_PENDING bit, because irq_work_claim() backs off on a work that is still pending. So we can safely read flags before the call to atomic_fetch_andnot(), which can then become atomic_andnot() followed by an smp_mb__after_atomic(). That preserves the ordering guarantee that we see the latest updates preceding an irq_work_claim() that doesn't raise a new IPI because it observes the work already queued.
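For reference, here is a minimal user-space sketch of the flag dance this patch ends up with. It uses C11 atomics as a stand-in for the kernel's atomic_t API, so the struct, the helper names and main() below are simplified re-definitions for illustration only, not the kernel code itself:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define IRQ_WORK_PENDING	(1 << 0)
#define IRQ_WORK_BUSY		(1 << 1)
#define IRQ_WORK_CLAIMED	(IRQ_WORK_PENDING | IRQ_WORK_BUSY)

struct irq_work {
	atomic_int flags;
	void (*func)(struct irq_work *);
};

/* Claimer side: one fully ordered RMW sets PENDING | BUSY. If PENDING
 * was already set, someone else's IPI will run the work, so the caller
 * must not raise a new one. */
static bool irq_work_claim(struct irq_work *work)
{
	int oflags = atomic_fetch_or(&work->flags, IRQ_WORK_CLAIMED);

	return !(oflags & IRQ_WORK_PENDING);	/* true: raise the IPI */
}

/* Runner side, as in the patch: flags can be read with a plain
 * (relaxed) load because nobody may change it while PENDING is set.
 * PENDING is then cleared with a non-value-returning RMW followed by
 * a full fence, modelling atomic_andnot() + smp_mb__after_atomic(). */
static void irq_work_run_one(struct irq_work *work)
{
	int flags = atomic_load_explicit(&work->flags, memory_order_relaxed);

	atomic_fetch_and_explicit(&work->flags, ~IRQ_WORK_PENDING,
				  memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);

	work->func(work);

	/* Drop BUSY and return to the free state, unless someone
	 * re-claimed the work while func() was running. */
	int expected = flags & ~IRQ_WORK_PENDING;

	atomic_compare_exchange_strong(&work->flags, &expected,
				       flags & ~IRQ_WORK_CLAIMED);
}

static void say_hello(struct irq_work *work)
{
	(void)work;
	puts("irq_work func ran");
}

int main(void)
{
	struct irq_work w = { .func = say_hello };

	atomic_init(&w.flags, 0);
	if (irq_work_claim(&w))
		irq_work_run_one(&w);	/* pretend the IPI fired */
	return 0;
}

The pairing the changelog relies on is between the claimer's atomic_fetch_or() and the runner's andnot-plus-barrier: a claimer that observes PENDING already set skips the IPI, and the full barrier after clearing PENDING keeps the guarantee that func() still sees everything that claimer published beforehand.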
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Paul E. McKenney
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
---
 kernel/irq_work.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 49c53f80a13a..b709ab05cbfd 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -34,8 +34,8 @@ static bool irq_work_claim(struct irq_work *work)
 	oflags = atomic_fetch_or(IRQ_WORK_CLAIMED, &work->flags);
 	/*
 	 * If the work is already pending, no need to raise the IPI.
-	 * The pairing atomic_fetch_andnot() in irq_work_run() makes sure
-	 * everything we did before is visible.
+	 * The pairing atomic_andnot() followed by a barrier in irq_work_run()
+	 * makes sure everything we did before is visible.
 	 */
 	if (oflags & IRQ_WORK_PENDING)
 		return false;
@@ -143,7 +143,7 @@ static void irq_work_run_list(struct llist_head *list)
 
 	llnode = llist_del_all(list);
 	llist_for_each_entry_safe(work, tmp, llnode, llnode) {
-		int flags;
+		int flags = atomic_read(&work->flags);
 		/*
 		 * Clear the PENDING bit, after this point the @work
 		 * can be re-used.
@@ -151,14 +151,16 @@ static void irq_work_run_list(struct llist_head *list)
 		 * to claim that work don't rely on us to handle their data
 		 * while we are in the middle of the func.
 		 */
-		flags = atomic_fetch_andnot(IRQ_WORK_PENDING, &work->flags);
+		atomic_andnot(IRQ_WORK_PENDING, &work->flags);
+		smp_mb__after_atomic();
 
 		work->func(work);
 		/*
 		 * Clear the BUSY bit and return to the free state if
 		 * no-one else claimed it meanwhile.
 		 */
-		(void)atomic_cmpxchg(&work->flags, flags, flags & ~IRQ_WORK_BUSY);
+		(void)atomic_cmpxchg(&work->flags, flags & ~IRQ_WORK_PENDING,
+				     flags & ~IRQ_WORK_CLAIMED);
 	}
 }
-- 
2.23.0