Subject: Re: [PATCH -next] net: wwan: t7xx: use GFP_ATOMIC under spin lock in t7xx_cldma_gpd_set_next_ptr()
From: Yang Yingliang
To: Loic Poulain
Date: Thu, 19 May 2022 09:52:30 +0800
Message-ID: <53b384f6-5507-b767-f64a-574cfd511e13@huawei.com>
References: <20220518090738.2694556-1-yangyingliang@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi,

On 2022/5/18 17:13, Loic Poulain wrote:
> Hi Yang,
>
> On Wed, 18 May 2022 at 10:57, Yang Yingliang wrote:
>> Sometimes t7xx_cldma_alloc_and_map_skb() is called under a spin lock,
>> so add a parameter to it so that it can use the GFP_ATOMIC flag.
>>
>> Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
>> Reported-by: Hulk Robot
>> Signed-off-by: Yang Yingliang
>> ---
>>  drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 13 ++++++++-----
>>  1 file changed, 8 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
>> index 0c52801ed0de..1fa9bb763831 100644
>> --- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
>> +++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
>> @@ -91,9 +91,12 @@ static void t7xx_cldma_gpd_set_next_ptr(struct cldma_gpd *gpd, dma_addr_t next_p
>>  }
>>
>>  static int t7xx_cldma_alloc_and_map_skb(struct cldma_ctrl *md_ctrl, struct cldma_request *req,
>> -					size_t size)
>> +					size_t size, bool is_atomic)
>
> Would be simpler to directly pass the gfp_mask as a parameter.

Yes, I will send a v2 with this change later.

Thanks,
Yang

>
>
>>  {
>> -	req->skb = __dev_alloc_skb(size, GFP_KERNEL);
>> +	if (is_atomic)
>> +		req->skb = __dev_alloc_skb(size, GFP_ATOMIC);
>> +	else
>> +		req->skb = __dev_alloc_skb(size, GFP_KERNEL);
>>  	if (!req->skb)
>>  		return -ENOMEM;
>>
>> @@ -174,7 +177,7 @@ static int t7xx_cldma_gpd_rx_from_q(struct cldma_queue *queue, int budget, bool
>>  	spin_unlock_irqrestore(&queue->ring_lock, flags);
>>  	req = queue->rx_refill;
>>
>> -	ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size);
>> +	ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size, false);
>>  	if (ret)
>>  		return ret;
>>
>> @@ -402,7 +405,7 @@ static struct cldma_request *t7xx_alloc_rx_request(struct cldma_ctrl *md_ctrl, s
>>  	if (!req->gpd)
>>  		goto err_free_req;
>>
>> -	val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size);
>> +	val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size, false);
>>  	if (val)
>>  		goto err_free_pool;
>>
>> @@ -801,7 +804,7 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
>>  	if (req->skb)
>>  		continue;
>>
>> -	ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size);
>> +	ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size, true);
>>  	if (ret)
>>  		break;
>>
>> --
>> 2.25.1
>>
> .