Subject: Re: [RESEND PATCH] mmc: core: fix race condition in mmc_wait_data_done
From: Shawn Lin <shawn.lin@rock-chips.com>
To: Ulf Hansson
Cc: Jialing Fu, linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Fri, 28 Aug 2015 11:25:43 +0800
Message-ID: <55DFD4B7.3070601@rock-chips.com>
In-Reply-To: <1440731589-22241-1-git-send-email-shawn.lin@rock-chips.com>

On 2015/8/28 11:13, Shawn Lin wrote:
> From: Jialing Fu
>
> The following panic was captured on kernel 3.14, but the issue still
> exists in the latest kernel.
> ---------------------------------------------------------------------
> [ 20.738217] c0 3136 (Compiler) Unable to handle kernel NULL pointer dereference
> at virtual address 00000578
> ......
> [ 20.738499] c0 3136 (Compiler) PC is at _raw_spin_lock_irqsave+0x24/0x60
> [ 20.738527] c0 3136 (Compiler) LR is at _raw_spin_lock_irqsave+0x20/0x60
> [ 20.740134] c0 3136 (Compiler) Call trace:
> [ 20.740165] c0 3136 (Compiler) [] _raw_spin_lock_irqsave+0x24/0x60
> [ 20.740200] c0 3136 (Compiler) [] __wake_up+0x1c/0x54
> [ 20.740230] c0 3136 (Compiler) [] mmc_wait_data_done+0x28/0x34
> [ 20.740262] c0 3136 (Compiler) [] mmc_request_done+0xa4/0x220
> [ 20.740314] c0 3136 (Compiler) [] sdhci_tasklet_finish+0xac/0x264
> [ 20.740352] c0 3136 (Compiler) [] tasklet_action+0xa0/0x158
> [ 20.740382] c0 3136 (Compiler) [] __do_softirq+0x10c/0x2e4
> [ 20.740411] c0 3136 (Compiler) [] irq_exit+0x8c/0xc0
> [ 20.740439] c0 3136 (Compiler) [] handle_IRQ+0x48/0xac
> [ 20.740469] c0 3136 (Compiler) [] gic_handle_irq+0x38/0x7c
> ----------------------------------------------------------------------
> On SMP, there is a race on "mrq" between the two paths below:
>
> path1: CPU0:
> static void mmc_wait_data_done(struct mmc_request *mrq)
> {
> 	mrq->host->context_info.is_done_rcv = true;
> 	//
> 	// If CPU0 has just finished "is_done_rcv = true" in path1, and at
> 	// this moment an IRQ or an ICache line miss occurs on CPU0,
> 	// what happens on CPU1 (path2)?
> 	//
> 	// If the mmcqd thread on CPU1 (path2) has not yet gone to sleep,
> 	// path2 gets the chance to break out of wait_event_interruptible
> 	// in mmc_wait_for_data_req_done and continue on to the next
> 	// mmc_request (mmc_blk_rw_rq_prep).
> 	//
> 	// Within mmc_blk_rw_rq_prep, mrq is cleared to 0.
> 	// If the line below still loads host through "mrq", as the
> 	// compiler emits it, the panic we traced happens.
> 	wake_up_interruptible(&mrq->host->context_info.wait);
> }
>
> path2: CPU1:
> static int mmc_wait_for_data_req_done(...
> {
> 	...
> 	while (1) {
> 		wait_event_interruptible(context_info->wait,
> 				(context_info->is_done_rcv ||
> 				 context_info->is_new_req));
>
> static void mmc_blk_rw_rq_prep(...
> {
> 	...
> 	memset(brq, 0, sizeof(struct mmc_blk_request));
>
> This issue occurs only very rarely; however, adding mdelay(1) to
> mmc_wait_data_done as below makes it easy to reproduce:
>
> static void mmc_wait_data_done(struct mmc_request *mrq)
> {
> 	mrq->host->context_info.is_done_rcv = true;
> +	mdelay(1);
> 	wake_up_interruptible(&mrq->host->context_info.wait);
> }
>

Hi, Ulf

We hit this bug on the Intel-C3230RK platform with very low probability,
but I can reproduce it easily if I add an mdelay(1) or a longer delay as
Jialing did. This patch looks useful to me. Should we push it forward? :)

> At runtime, an IRQ or an ICache line miss may happen at exactly the
> point where the mdelay(1) was inserted.
>
> This patch takes the mmc_context_info pointer at the beginning of the
> function, which avoids the race.
>
> Signed-off-by: Jialing Fu
> Tested-by: Shawn Lin
> ---
>
>  drivers/mmc/core/core.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
> index 664b617..0520064 100644
> --- a/drivers/mmc/core/core.c
> +++ b/drivers/mmc/core/core.c
> @@ -358,8 +358,10 @@ EXPORT_SYMBOL(mmc_start_bkops);
>   */
>  static void mmc_wait_data_done(struct mmc_request *mrq)
>  {
> -	mrq->host->context_info.is_done_rcv = true;
> -	wake_up_interruptible(&mrq->host->context_info.wait);
> +	struct mmc_context_info *context_info = &mrq->host->context_info;
> +
> +	context_info->is_done_rcv = true;
> +	wake_up_interruptible(&context_info->wait);
>  }
>
>  static void mmc_wait_done(struct mmc_request *mrq)
> --

Best Regards
Shawn Lin