From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Waiman Long, Linus Torvalds, Borislav Petkov, Davidlohr Bueso, Peter Zijlstra, Thomas Gleixner, Tim Chen, Will Deacon, huang ying, Ingo Molnar, Sasha Levin
Subject: [PATCH 4.9 02/44] locking/rwsem: Prevent decrement of reader count before increment
Date: Mon, 20 May 2019 14:13:51 +0200
Message-Id: <20190520115231.178993431@linuxfoundation.org>
In-Reply-To: <20190520115230.720347034@linuxfoundation.org>
References: <20190520115230.720347034@linuxfoundation.org>

[ Upstream commit a9e9bcb45b1525ba7aea26ed9441e8632aeeda58 ]

During my rwsem testing, it was found that after a down_read(), the
reader count may occasionally become 0 or even negative. Consequently,
a writer may steal the lock at that time and execute in parallel with
the reader, breaking the mutual-exclusion guarantee of the write lock.
In other words, readers and a writer can become rwsem owners
simultaneously.

The current reader wakeup code does it in one pass: it clears
waiter->task and puts the waiters into the wake_q before fully
incrementing the reader count. Once waiter->task is cleared, the
corresponding reader may see it, finish its critical section and do an
unlock, decrementing the count before the count has been incremented.
This is not a problem if there is only one reader to wake up, as the
count has been pre-incremented by 1. It is a problem when more than one
reader is to be woken up, since the writer can then steal the lock.

The wakeup was actually done in two passes before the following v4.9
commit:

  70800c3c0cc5 ("locking/rwsem: Scan the wait_list for readers only once")

To fix this problem, the wakeup is now done in two passes again. In the
first pass, we collect the readers and count them. The reader count is
then fully incremented. In the second pass, waiter->task is cleared and
the waiters are put into the wake_q to be woken up later.
Signed-off-by: Waiman Long
Acked-by: Linus Torvalds
Cc: Borislav Petkov
Cc: Davidlohr Bueso
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Tim Chen
Cc: Will Deacon
Cc: huang ying
Fixes: 70800c3c0cc5 ("locking/rwsem: Scan the wait_list for readers only once")
Link: http://lkml.kernel.org/r/20190428212557.13482-2-longman@redhat.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
 kernel/locking/rwsem-xadd.c | 44 +++++++++++++++++++++++++------------
 1 file changed, 30 insertions(+), 14 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index be06c45cbe4f9..0cdbb636e3163 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -127,6 +127,7 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 {
 	struct rwsem_waiter *waiter, *tmp;
 	long oldcount, woken = 0, adjustment = 0;
+	struct list_head wlist;
 
 	/*
 	 * Take a peek at the queue head waiter such that we can determine
@@ -185,18 +186,42 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 	 * of the queue. We know that woken will be at least 1 as we accounted
 	 * for above. Note we increment the 'active part' of the count by the
 	 * number of readers before waking any processes up.
+	 *
+	 * We have to do wakeup in 2 passes to prevent the possibility that
+	 * the reader count may be decremented before it is incremented. It
+	 * is because the to-be-woken waiter may not have slept yet. So it
+	 * may see waiter->task got cleared, finish its critical section and
+	 * do an unlock before the reader count increment.
+	 *
+	 * 1) Collect the read-waiters in a separate list, count them and
+	 *    fully increment the reader count in rwsem.
+	 * 2) For each waiters in the new list, clear waiter->task and
+	 *    put them into wake_q to be woken up later.
 	 */
-	list_for_each_entry_safe(waiter, tmp, &sem->wait_list, list) {
-		struct task_struct *tsk;
-
+	list_for_each_entry(waiter, &sem->wait_list, list) {
 		if (waiter->type == RWSEM_WAITING_FOR_WRITE)
 			break;
 
 		woken++;
-		tsk = waiter->task;
+	}
+	list_cut_before(&wlist, &sem->wait_list, &waiter->list);
+
+	adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
+	if (list_empty(&sem->wait_list)) {
+		/* hit end of list above */
+		adjustment -= RWSEM_WAITING_BIAS;
+	}
+
+	if (adjustment)
+		atomic_long_add(adjustment, &sem->count);
+
+	/* 2nd pass */
+	list_for_each_entry_safe(waiter, tmp, &wlist, list) {
+		struct task_struct *tsk;
 
+		tsk = waiter->task;
 		get_task_struct(tsk);
-		list_del(&waiter->list);
+
 		/*
 		 * Ensure calling get_task_struct() before setting the reader
 		 * waiter to nil such that rwsem_down_read_failed() cannot
@@ -212,15 +237,6 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 		/* wake_q_add() already take the task ref */
 		put_task_struct(tsk);
 	}
-
-	adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
-	if (list_empty(&sem->wait_list)) {
-		/* hit end of list above */
-		adjustment -= RWSEM_WAITING_BIAS;
-	}
-
-	if (adjustment)
-		atomic_long_add(adjustment, &sem->count);
 }
 
 /*
-- 
2.20.1