2022-05-18 09:15:25

by Loic Poulain

Subject: Re: [PATCH -next] net: wwan: t7xx: use GFP_ATOMIC under spin lock in t7xx_cldma_gpd_set_next_ptr()

Hi Yang,

On Wed, 18 May 2022 at 10:57, Yang Yingliang <[email protected]> wrote:
>
> Sometimes t7xx_cldma_gpd_set_next_ptr() is called under a spin lock,
> so add a parameter to t7xx_cldma_gpd_set_next_ptr() to indicate
> whether it should use the GFP_ATOMIC flag.
>
> Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
> Reported-by: Hulk Robot <[email protected]>
> Signed-off-by: Yang Yingliang <[email protected]>
> ---
> drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> index 0c52801ed0de..1fa9bb763831 100644
> --- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> +++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> @@ -91,9 +91,12 @@ static void t7xx_cldma_gpd_set_next_ptr(struct cldma_gpd *gpd, dma_addr_t next_p
> }
>
> static int t7xx_cldma_alloc_and_map_skb(struct cldma_ctrl *md_ctrl, struct cldma_request *req,
> - size_t size)
> + size_t size, bool is_atomic)

Would be simpler to directly pass the gfp_mask as a parameter.


> {
> - req->skb = __dev_alloc_skb(size, GFP_KERNEL);
> + if (is_atomic)
> + req->skb = __dev_alloc_skb(size, GFP_ATOMIC);
> + else
> + req->skb = __dev_alloc_skb(size, GFP_KERNEL);
> if (!req->skb)
> return -ENOMEM;
>
> @@ -174,7 +177,7 @@ static int t7xx_cldma_gpd_rx_from_q(struct cldma_queue *queue, int budget, bool
> spin_unlock_irqrestore(&queue->ring_lock, flags);
> req = queue->rx_refill;
>
> - ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size);
> + ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size, false);
> if (ret)
> return ret;
>
> @@ -402,7 +405,7 @@ static struct cldma_request *t7xx_alloc_rx_request(struct cldma_ctrl *md_ctrl, s
> if (!req->gpd)
> goto err_free_req;
>
> - val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size);
> + val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size, false);
> if (val)
> goto err_free_pool;
>
> @@ -801,7 +804,7 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
> if (req->skb)
> continue;
>
> - ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size);
> + ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size, true);
> if (ret)
> break;
>
> --
> 2.25.1
>


2022-05-19 03:35:56

by Yang Yingliang

Subject: Re: [PATCH -next] net: wwan: t7xx: use GFP_ATOMIC under spin lock in t7xx_cldma_gpd_set_next_ptr()

Hi,

On 2022/5/18 17:13, Loic Poulain wrote:
> Hi Yang,
>
> On Wed, 18 May 2022 at 10:57, Yang Yingliang <[email protected]> wrote:
>> Sometimes t7xx_cldma_gpd_set_next_ptr() is called under a spin lock,
>> so add a parameter to t7xx_cldma_gpd_set_next_ptr() to indicate
>> whether it should use the GFP_ATOMIC flag.
>>
>> Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
>> Reported-by: Hulk Robot <[email protected]>
>> Signed-off-by: Yang Yingliang <[email protected]>
>> ---
>> drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 13 ++++++++-----
>> 1 file changed, 8 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
>> index 0c52801ed0de..1fa9bb763831 100644
>> --- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
>> +++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
>> @@ -91,9 +91,12 @@ static void t7xx_cldma_gpd_set_next_ptr(struct cldma_gpd *gpd, dma_addr_t next_p
>> }
>>
>> static int t7xx_cldma_alloc_and_map_skb(struct cldma_ctrl *md_ctrl, struct cldma_request *req,
>> - size_t size)
>> + size_t size, bool is_atomic)
> Would be simpler to directly pass the gfp_mask as a parameter.
Yes, I will send a v2 with this change later.

Thanks,
Yang
>
>
>> {
>> - req->skb = __dev_alloc_skb(size, GFP_KERNEL);
>> + if (is_atomic)
>> + req->skb = __dev_alloc_skb(size, GFP_ATOMIC);
>> + else
>> + req->skb = __dev_alloc_skb(size, GFP_KERNEL);
>> if (!req->skb)
>> return -ENOMEM;
>>
>> @@ -174,7 +177,7 @@ static int t7xx_cldma_gpd_rx_from_q(struct cldma_queue *queue, int budget, bool
>> spin_unlock_irqrestore(&queue->ring_lock, flags);
>> req = queue->rx_refill;
>>
>> - ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size);
>> + ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size, false);
>> if (ret)
>> return ret;
>>
>> @@ -402,7 +405,7 @@ static struct cldma_request *t7xx_alloc_rx_request(struct cldma_ctrl *md_ctrl, s
>> if (!req->gpd)
>> goto err_free_req;
>>
>> - val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size);
>> + val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size, false);
>> if (val)
>> goto err_free_pool;
>>
>> @@ -801,7 +804,7 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
>> if (req->skb)
>> continue;
>>
>> - ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size);
>> + ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size, true);
>> if (ret)
>> break;
>>
>> --
>> 2.25.1
>>