From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Jean-Philippe Brucker, Jérôme Glisse, Michal Hocko, Andrew Morton,
    Linus Torvalds, Sasha Levin, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 5.1 132/141] mm/mmu_notifier: use hlist_add_head_rcu()
Date: Fri, 19 Jul 2019 00:02:37 -0400
Message-Id: <20190719040246.15945-132-sashal@kernel.org>
In-Reply-To: <20190719040246.15945-1-sashal@kernel.org>
References: <20190719040246.15945-1-sashal@kernel.org>

From: Jean-Philippe Brucker

[ Upstream commit 543bdb2d825fe2400d6e951f1786d92139a16931 ]

Make mmu_notifier_register() safer by issuing a memory barrier before
registering a new notifier. This fixes a theoretical bug on weakly
ordered CPUs. For example, take this simplified use of notifiers by a
driver:

	my_struct->mn.ops = &my_ops;                            /* (1) */
	mmu_notifier_register(&my_struct->mn, mm)
		...
		hlist_add_head(&mn->hlist, &mm->mmu_notifiers); /* (2) */
		...

Once mmu_notifier_register() releases the mm locks, another thread can
invalidate a range:

	mmu_notifier_invalidate_range()
		...
		hlist_for_each_entry_rcu(mn, &mm->mmu_notifiers, hlist) {
			if (mn->ops->invalidate_range)

The read side relies on the data dependency between mn and ops to ensure
that the pointer is properly initialized. But the write side doesn't have
any dependency between (1) and (2), so they could be reordered and the
readers could dereference an invalid mn->ops. mmu_notifier_register()
does take all the mm locks before adding to the hlist, but those have
acquire semantics, which isn't sufficient.

By calling hlist_add_head_rcu() instead of hlist_add_head() we update the
hlist using a store-release, ensuring that readers see prior
initialization of my_struct. This situation is better illustrated by the
litmus test MP+onceassign+derefonce.
Link: http://lkml.kernel.org/r/20190502133532.24981-1-jean-philippe.brucker@arm.com
Fixes: cddb8a5c14aa ("mmu-notifiers: core")
Signed-off-by: Jean-Philippe Brucker
Cc: Jérôme Glisse
Cc: Michal Hocko
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/mmu_notifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 9c884abc7850..9f246c960e65 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -276,7 +276,7 @@ static int do_mmu_notifier_register(struct mmu_notifier *mn,
 	 * thanks to mm_take_all_locks().
 	 */
 	spin_lock(&mm->mmu_notifier_mm->lock);
-	hlist_add_head(&mn->hlist, &mm->mmu_notifier_mm->list);
+	hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list);
 	spin_unlock(&mm->mmu_notifier_mm->lock);
 
 	mm_drop_all_locks(mm);
-- 
2.20.1