From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Will Deacon,
	"Peter Zijlstra (Intel)", Linus Torvalds, Thomas Gleixner,
	Ingo Molnar, Sasha Levin
Subject: [PATCH 4.14 166/183] locking/qspinlock: Ensure node->count is updated before initialising node
Date: Wed, 25 Apr 2018 12:36:26 +0200
Message-Id: <20180425103249.190249869@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180425103242.532713678@linuxfoundation.org>
References: <20180425103242.532713678@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

4.14-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Will Deacon

[ Upstream commit 11dc13224c975efcec96647a4768a6f1bb7a19a8 ]

When queuing on the qspinlock, the count field for the current CPU's head
node is incremented. This needn't be atomic because locking in e.g. IRQ
context is balanced and so an IRQ will return with node->count as it
found it.

However, the compiler could in theory reorder the initialisation of
node[idx] before the increment of the head node->count, causing an
IRQ to overwrite the initialised node and potentially corrupt the lock
state.

Avoid the potential for this harmful compiler reordering by placing a
barrier() between the increment of the head node->count and the subsequent
node initialisation.

Signed-off-by: Will Deacon
Acked-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/1518528177-19169-3-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 kernel/locking/qspinlock.c |    8 ++++++++
 1 file changed, 8 insertions(+)

--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -379,6 +379,14 @@ queue:
 	tail = encode_tail(smp_processor_id(), idx);
 	node += idx;
+
+	/*
+	 * Ensure that we increment the head node->count before initialising
+	 * the actual node. If the compiler is kind enough to reorder these
+	 * stores, then an IRQ could overwrite our assignments.
+	 */
+	barrier();
+
 	node->locked = 0;
 	node->next = NULL;
 	pv_init_node(node);
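
For reference, the hazard and the fix can be illustrated outside the kernel. The sketch below is NOT kernel code: struct mcs_node is a simplified stand-in for the kernel's struct mcs_spinlock, queue_node() is a hypothetical helper condensing the slot-claiming steps of the queue path, and barrier() is expanded to its usual GCC definition (an empty asm with a "memory" clobber). It only shows the ordering pattern the patch enforces:

```c
#include <assert.h>
#include <stddef.h>

/* Compiler-only barrier, as the kernel defines it for GCC builds. */
#define barrier() __asm__ __volatile__("" ::: "memory")

/* Simplified stand-in for the kernel's struct mcs_spinlock. */
struct mcs_node {
	struct mcs_node *next;
	int locked;
	int count;	/* nesting depth; meaningful only in nodes[0] */
};

/* One slot per nesting context (task, softirq, hardirq, NMI). */
static struct mcs_node nodes[4];

static struct mcs_node *queue_node(void)
{
	struct mcs_node *node = &nodes[0];
	int idx = node->count++;	/* claim the next free slot */

	node += idx;

	/*
	 * Without this barrier the compiler may sink the count++ store
	 * below the stores that follow, so an IRQ arriving in between
	 * could pick the same idx and overwrite our initialised node.
	 */
	barrier();

	node->locked = 0;
	node->next = NULL;
	return node;
}
```

Note that a compiler barrier suffices here: the racing IRQ runs on the same CPU, so no SMP memory ordering is involved, only the order in which the compiler emits the stores.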