Date: Mon, 23 Jul 2018 05:02:51 +0200
From: Dominique Martinet
To: Tomas Bortoli
Cc: ericvh@gmail.com, rminnich@sandia.gov, lucho@ionkov.net,
	jiangyiwen@huawei.com, davem@davemloft.net,
	v9fs-developer@lists.sourceforge.net, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, syzkaller@googlegroups.com
Subject: Re: [PATCH] net/9p/trans_fd.c: fix double list_del() and race in access
Message-ID: <20180723030251.GB24608@nautica>
References: <20180720132801.22749-1-tomasbortoli@gmail.com>
In-Reply-To: <20180720132801.22749-1-tomasbortoli@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

Tomas Bortoli wrote on Fri, Jul 20, 2018:
> This patch uses list_del_init() instead of list_del() to eliminate
> entries from "req_list". This is to prevent double list_del() calls
> on the same list entry from provoking a GPF.
> Furthermore, this patch fixes an access to "req_list" that was made
> without holding the corresponding lock.

Please see the comment about locking. As for list_del to list_del_init,
it feels a little wrong to me, but I don't have a better idea, so let's
go with that.

Do you know what happened to trigger this? One thread running
p9_conn_cancel() while another was in p9_fd_cancel()?

> Signed-off-by: Tomas Bortoli
> Reported-by: syzbot+735d926e9d1317c3310c@syzkaller.appspotmail.com
> ---
>
>  net/9p/trans_fd.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
> index a64b01c56e30..131bb1f059e6 100644
> --- a/net/9p/trans_fd.c
> +++ b/net/9p/trans_fd.c
> @@ -223,7 +223,9 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
>
>  	list_for_each_entry_safe(req, rtmp, &cancel_list, req_list) {
>  		p9_debug(P9_DEBUG_ERROR, "call back req %p\n", req);
> -		list_del(&req->req_list);
> +		spin_lock_irqsave(&m->client->lock, flags);
> +		list_del_init(&req->req_list);
> +		spin_unlock_irqrestore(&m->client->lock, flags);

Just locking around one item if you're afraid it might change won't be
enough - list_for_each_entry_safe is only "safe" against removing the
current element from the list yourself, not against other threads
messing with it, so you'd need to hold the lock around the whole loop
if that's what you're protecting against.

(Also, since I've taken the other patches that change the spin locks on
client->lock from spin_lock_irqsave to spin_lock, please use that
function for new locking of that variable - in general, basing your
patches off linux-next's master branch is a good idea.)
>  		if (!req->t_err)
>  			req->t_err = err;
>  		p9_client_cb(m->client, req, REQ_STATUS_ERROR);
> @@ -369,7 +371,7 @@ static void p9_read_work(struct work_struct *work)
>  		spin_lock(&m->client->lock);
>  		if (m->req->status != REQ_STATUS_ERROR)
>  			status = REQ_STATUS_RCVD;
> -		list_del(&m->req->req_list);
> +		list_del_init(&m->req->req_list);
>  		spin_unlock(&m->client->lock);
>  		p9_client_cb(m->client, m->req, status);
>  		m->rc.sdata = NULL;
> @@ -684,7 +686,7 @@ static int p9_fd_cancel(struct p9_client *client, struct p9_req_t *req)
>  	spin_lock(&client->lock);
>
>  	if (req->status == REQ_STATUS_UNSENT) {
> -		list_del(&req->req_list);
> +		list_del_init(&req->req_list);
>  		req->status = REQ_STATUS_FLSHD;
>  		ret = 0;
>  	}
> @@ -701,7 +703,7 @@ static int p9_fd_cancelled(struct p9_client *client, struct p9_req_t *req)
>  	 * remove it from the list.
>  	 */
>  	spin_lock(&client->lock);
> -	list_del(&req->req_list);
> +	list_del_init(&req->req_list);
>  	spin_unlock(&client->lock);
>
>  	return 0;