Subject: Re: [RESEND PATCH] mmc: core: fix race condition in mmc_wait_data_done
To: Ulf Hansson
References: <1440731589-22241-1-git-send-email-shawn.lin@rock-chips.com>
 <55DFD4B7.3070601@rock-chips.com>
Cc: shawn.lin@rock-chips.com, Jialing Fu, linux-mmc,
 "linux-kernel@vger.kernel.org"
From: Shawn Lin
Message-ID: <55E02F7C.1020204@rock-chips.com>
Date: Fri, 28 Aug 2015 17:53:00 +0800

On 2015/8/28 16:55, Ulf Hansson wrote:
> On 28 August 2015 at 05:25, Shawn Lin wrote:
>> On 2015/8/28 11:13, Shawn Lin wrote:
>>>
>>> From: Jialing Fu
>>>
>>> The following panic was captured on kernel 3.14, but the issue still
>>> exists in the latest kernel.
>>> ---------------------------------------------------------------------
>>> [   20.738217] c0 3136 (Compiler) Unable to handle kernel NULL pointer
>>> dereference at virtual address 00000578
>>> ......
>>> [   20.738499] c0 3136 (Compiler) PC is at _raw_spin_lock_irqsave+0x24/0x60
>>> [   20.738527] c0 3136 (Compiler) LR is at _raw_spin_lock_irqsave+0x20/0x60
>>> [   20.740134] c0 3136 (Compiler) Call trace:
>>> [   20.740165] c0 3136 (Compiler) [] _raw_spin_lock_irqsave+0x24/0x60
>>> [   20.740200] c0 3136 (Compiler) [] __wake_up+0x1c/0x54
>>> [   20.740230] c0 3136 (Compiler) [] mmc_wait_data_done+0x28/0x34
>>> [   20.740262] c0 3136 (Compiler) [] mmc_request_done+0xa4/0x220
>>> [   20.740314] c0 3136 (Compiler) [] sdhci_tasklet_finish+0xac/0x264
>>> [   20.740352] c0 3136 (Compiler) [] tasklet_action+0xa0/0x158
>>> [   20.740382] c0 3136 (Compiler) [] __do_softirq+0x10c/0x2e4
>>> [   20.740411] c0 3136 (Compiler) [] irq_exit+0x8c/0xc0
>>> [   20.740439] c0 3136 (Compiler) [] handle_IRQ+0x48/0xac
>>> [   20.740469] c0 3136 (Compiler) [] gic_handle_irq+0x38/0x7c
>>> ----------------------------------------------------------------------
>>> On SMP, "mrq" is subject to a race between the two paths below:
>>>
>>> path1: CPU0:
>>> static void mmc_wait_data_done(struct mmc_request *mrq)
>>> {
>>>         mrq->host->context_info.is_done_rcv = true;
>>>         //
>>>         // If CPU0 has just finished "is_done_rcv = true" in path1,
>>>         // and at this moment an IRQ or an icache line miss occurs on
>>>         // CPU0, what happens on CPU1 (path2)?
>>>         //
>>>         // If the mmcqd thread on CPU1 (path2) hasn't yet gone to
>>>         // sleep, path2 has a chance to break out of
>>>         // wait_event_interruptible in mmc_wait_for_data_req_done and
>>>         // continue on to prepare the next mmc_request
>>>         // (mmc_blk_rw_rq_prep).
>>>         //
>>>         // Within mmc_blk_rw_rq_prep, mrq is cleared to 0.
>>>         // If the line below still loads "host" from "mrq" (as the
>>>         // compiler emitted it), the panic traced above happens.
>>>         wake_up_interruptible(&mrq->host->context_info.wait);
>>> }
>>>
>>> path2: CPU1:
>>> static int mmc_wait_for_data_req_done(...
>>> {
>>>         ...
>>>         while (1) {
>>>                 wait_event_interruptible(context_info->wait,
>>>                                 (context_info->is_done_rcv ||
>>>                                  context_info->is_new_req));
>>>
>>> static void mmc_blk_rw_rq_prep(...
>>> {
>>>         ...
>>>         memset(brq, 0, sizeof(struct mmc_blk_request));
>>>
>>> This issue occurs only very occasionally in practice; however, adding
>>> an mdelay(1) in mmc_wait_data_done, as below, reproduces it easily.
>>>
>>> static void mmc_wait_data_done(struct mmc_request *mrq)
>>> {
>>>         mrq->host->context_info.is_done_rcv = true;
>>> +       mdelay(1);
>>>         wake_up_interruptible(&mrq->host->context_info.wait);
>>> }
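For reference, the fix being discussed resolves the race by dereferencing
"mrq" only once, up front: the long-lived context_info (embedded in
struct mmc_host) is captured into a local pointer before is_done_rcv is
set, so the wake-up no longer touches "mrq" after the waiter may have
recycled it. A minimal sketch of that idea, reconstructed from the
description in this thread (the actual patch may differ in detail):

static void mmc_wait_data_done(struct mmc_request *mrq)
{
        struct mmc_context_info *context_info = &mrq->host->context_info;

        /*
         * As soon as is_done_rcv is observed as true, the waiter on
         * another CPU may proceed and reuse/clear mrq, so mrq must not
         * be dereferenced again after this store.
         */
        context_info->is_done_rcv = true;
        wake_up_interruptible(&context_info->wait);
}

Because context_info lives in struct mmc_host rather than in the request,
it remains valid even after mmcqd has moved on to prepare the next request.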
>>
>> Hi Ulf,
>>
>> We found this bug on the Intel-C3230RK platform, with very low
>> probability.
>>
>> However, I can easily reproduce the case if I add an mdelay(1) or a
>> longer delay, as Jialing did.
>>
>> This patch seems useful to me. Should we push it forward? :)
>
> It seems like a very good idea!
>
> Should we add a fixes tag to it?

That's cool, but how do we add a fixes tag?

[Fixes] mmc: core: fix race condition in mmc_wait_data_done ? :)

> [...]
>
> Kind regards
> Uffe

--
Best Regards
Shawn Lin
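On the question above: the "fixes tag" is the kernel's standard "Fixes:"
commit-message trailer (see Documentation/SubmittingPatches). It names the
commit that introduced the bug, not the fix itself, as an abbreviated SHA-1
of at least 12 characters plus that commit's subject line. The hash and
subject below are placeholders, not the real offending commit:

Fixes: 123456789abc ("subject line of the commit that introduced the bug")

A convenient way to generate the line, assuming <commit> is the offending
commit:

git log -1 --format='Fixes: %h ("%s")' <commit>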