Subject: Re: [PATCH v2 1/2] bus: mhi: host: Add spinlock to protect WP access when queueing TREs
From: Jeffrey Hugo
To: Qiang Yu
Date: Fri, 20 Oct 2023 09:07:35 -0600
Message-ID: <472817a7-78bb-25d9-b8c6-2d70f713b7fb@quicinc.com>
In-Reply-To: <15526b95-518c-445a-be64-6a15259405fb@quicinc.com>
References: <1694594861-12691-1-git-send-email-quic_qianyu@quicinc.com>
 <1694594861-12691-2-git-send-email-quic_qianyu@quicinc.com>
 <15526b95-518c-445a-be64-6a15259405fb@quicinc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 10/16/2023 2:46 AM, Qiang Yu wrote:
>
> On 9/29/2023 11:22 PM, Jeffrey Hugo wrote:
>> On 9/24/2023 9:10 PM, Qiang Yu wrote:
>>>
>>> On 9/22/2023 10:44 PM, Jeffrey Hugo wrote:
>>>> On 9/13/2023 2:47 AM, Qiang Yu wrote:
>>>>> From: Bhaumik Bhatt
>>>>>
>>>>> Protect WP accesses such that multiple threads queueing buffers for
>>>>> incoming data do not race and access the same WP twice. Ensure read and
>>>>> write locks for the channel are not taken in succession by dropping the
>>>>> read lock from parse_xfer_event() such that a callback given to client
>>>>> can potentially queue buffers and acquire the write lock in that
>>>>> process.
>>>>> Any queueing of buffers should be done without the channel read lock
>>>>> acquired, as it can result in multiple locks and a soft lockup.
>>>>>
>>>>> Signed-off-by: Bhaumik Bhatt
>>>>> Signed-off-by: Qiang Yu
>>>>> ---
>>>>>   drivers/bus/mhi/host/main.c | 11 ++++++++++-
>>>>>   1 file changed, 10 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
>>>>> index dcf627b..13c4b89 100644
>>>>> --- a/drivers/bus/mhi/host/main.c
>>>>> +++ b/drivers/bus/mhi/host/main.c
>>>>> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>>>>>               mhi_del_ring_element(mhi_cntrl, tre_ring);
>>>>>               local_rp = tre_ring->rp;
>>>>>
>>>>> +            read_unlock_bh(&mhi_chan->lock);
>>>>
>>>> This doesn't work due to the write_lock_irqsave(&mhi_chan->lock,
>>>> flags); on line 591.
>>>
>>> write_lock_irqsave(&mhi_chan->lock, flags) is used in the case of
>>> ev_code >= MHI_EV_CC_OOB. We only read_lock/read_unlock the mhi_chan
>>> while ev_code < MHI_EV_CC_OOB.
>>
>> Sorry.  OOB != EOB
>>
>>>>
>>>> I really don't like that we are unlocking the mhi_chan while still
>>>> using it.  It opens up a window where the mhi_chan state can be
>>>> updated between here and the client using the callback to queue a buf.
>>>>
>>>> Perhaps we need a new lock that just protects the wp, and needs to
>>>> be only grabbed while mhi_chan->lock is held?
>>>
>>> Since we have employed the mhi_chan lock to protect the channel, and
>>> what we are concerned about here is that a client may queue a buf to a
>>> disabled or stopped channel, can we check the channel state after
>>> getting mhi_chan->lock, as on line 595?
>>>
>>> We can add the check after getting the write lock in mhi_gen_tre() and
>>> after getting the read lock again here.
>>
>> I'm not sure that is sufficient.  After you unlock to notify the
>> client, MHI is going to manipulate the packet count and runtime_pm
>> without the lock (648-652).  It seems like that adds additional races
>> which won't be covered by the additional check you propose.
>
> I don't think read_lock_bh(&mhi_chan->lock) can protect runtime_pm and
> the packet count here. Even if we do not unlock, the MHI state and packet
> count can still be changed, because we did not take pm_lock here, which is
> used in all the MHI state transition functions.
>
> I also checked all the places where mhi_chan->lock is grabbed, and did not
> see the packet count or runtime_pm protected by write_lock(&mhi_chan->lock).
>
> If you really don't like the unlock operation, we can also take a new
> lock. But I think we only need to add the new lock in two places,
> mhi_gen_tre() and mhi_pm_m0_transition(), while mhi_chan->lock is held.

Mani, if I recall correctly, you were the architect of the locking.
Do you have an opinion?