Date: Sun, 17 Dec 2017 09:10:10 +0200
From: Mike Rapoport
To: Christoph Hellwig
Cc: Ingo Molnar, Peter Zijlstra, Andrew Morton, Al Viro, Andrea Arcangeli,
	Jason Baron, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Matthew Wilcox
Subject: Re: [PATCH 2/3] userfaultfd: use fault_wqh lock
References: <20171214152344.6880-1-hch@lst.de> <20171214152344.6880-3-hch@lst.de>
In-Reply-To: <20171214152344.6880-3-hch@lst.de>
Message-Id: <20171217071009.GA8631@rapoport-lnx>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-Mailing-List: linux-kernel@vger.kernel.org

Hi,

On Thu, Dec 14, 2017 at 04:23:43PM +0100, Christoph Hellwig wrote:
> From: Matthew Wilcox
> 
> The epoll code currently uses the unlocked waitqueue helpers for managing

The userfaultfd code

> fault_wqh, but instead of holding the waitqueue lock for this waitqueue
> around these calls, it uses the waitqueue lock of fault_pending_wqh, which is
> a different waitqueue instance. Given that the waitqueue is not exposed
> to the rest of the kernel this actually works ok at the moment, but
> prevents the epoll locking rules from being enforced using lockdep.

ditto

> Switch to the internally locked waitqueue helpers instead. This means
> that the lock inside fault_wqh now nests inside the fault_pending_wqh
> lock, but that's not a problem since it was entireyl unused before.

spelling: entirely

> Signed-off-by: Matthew Wilcox
> [hch: slight changelog updates]
> Signed-off-by: Christoph Hellwig

Reviewed-by: Mike Rapoport

> ---
>  fs/userfaultfd.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index ac9a4e65ca49..a39bc3237b68 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -879,7 +879,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
>  	 */
>  	spin_lock(&ctx->fault_pending_wqh.lock);
>  	__wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL, &range);
> -	__wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, &range);
> +	__wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, &range);
>  	spin_unlock(&ctx->fault_pending_wqh.lock);
>  
>  	/* Flush pending events that may still wait on event_wqh */
> @@ -1045,7 +1045,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
>  			 * anyway.
>  			 */
>  			list_del(&uwq->wq.entry);
> -			__add_wait_queue(&ctx->fault_wqh, &uwq->wq);
> +			add_wait_queue(&ctx->fault_wqh, &uwq->wq);
>  
>  			write_seqcount_end(&ctx->refile_seq);
>  
> @@ -1194,7 +1194,7 @@ static void __wake_userfault(struct userfaultfd_ctx *ctx,
>  		__wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL,
>  				     range);
>  	if (waitqueue_active(&ctx->fault_wqh))
> -		__wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, range);
> +		__wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, range);
>  	spin_unlock(&ctx->fault_pending_wqh.lock);
>  }
>  
> -- 
> 2.14.2
> 

-- 
Sincerely yours,
Mike.