Subject: Re: [PATCH v4 2/4] dmaengine: Add STM32 DMAMUX driver
From: Pierre Yves MORDRET
Date: Thu, 21 Sep 2017 14:47:59 +0200
To: Peter Ujfalusi, Vinod Koul, Rob Herring, Mark Rutland, Maxime Coquelin, Alexandre Torgue, Russell King, Dan Williams, M'boumba Cedric Madianga, Fabrice GASNIER, Herbert Xu, Fabien DESSENNE, Amelie Delaunay
References: <1504785168-26572-1-git-send-email-pierre-yves.mordret@st.com> <1504785168-26572-3-git-send-email-pierre-yves.mordret@st.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 09/21/2017 01:25 PM, Peter Ujfalusi wrote:
>
> Great that you got it working w/o a custom API!
> I have one comment, which is actually valid for the ti-dma-crossbar driver
> as well...

Yes. That cleans up the SW architecture a little. But that custom API still allowed both DMAMUX and DMA to be used at the same time, since they shared the same channel ID allocator.
Ok, this is another story, to be addressed outside of this thread ;)

>
>> +static void *stm32_dmamux_route_allocate(struct of_phandle_args *dma_spec,
>> +					 struct of_dma *ofdma)
>> ...
>> +	spin_lock_irqsave(&dmamux->lock, flags);
>> +	mux->chan_id = find_first_zero_bit(dmamux->dma_inuse,
>> +					   dmamux->dma_requests);
>
> You pick the first available chan_id here under the lock.
>
>> +	spin_unlock_irqrestore(&dmamux->lock, flags);
>> +	if (mux->chan_id == dmamux->dma_requests) {
>> ...
>> +	/* Set dma request */
>> +	spin_lock_irqsave(&dmamux->lock, flags);
>> +	if (!IS_ERR(dmamux->clk)) {
>> ...
>> +	spin_unlock_irqrestore(&dmamux->lock, flags);
>> +
>> +	set_bit(mux->chan_id, dmamux->dma_inuse);
>
> But nothing stops other parallel threads from picking the same chan_id, since
> you have released the lock (released it, took it again to protect the set
> dma request, and released it again). IMHO the find_first_zero_bit() and
> the set_bit() should be done within the same lock to avoid race conditions.
>
> - Péter

Yep, good catch: that's correct. Even if the probability of it happening is rather low, it may happen. Will solve that.

Py
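For illustration, here is a minimal user-space sketch of the fix Peter suggests: the free-bit search and the bit set must happen under the same lock so two threads cannot claim the same channel ID. The names (`dma_inuse`, `DMA_REQUESTS`, `chan_id`) mirror the driver for readability, but this is not the actual kernel code; a pthread mutex stands in for the driver's spinlock, and a plain bit loop stands in for find_first_zero_bit()/set_bit().

```c
#include <pthread.h>
#include <stdio.h>

#define DMA_REQUESTS 16		/* number of DMA request lines (illustrative) */

static unsigned long dma_inuse;	/* bitmap of busy channels */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns a free chan_id, or -1 if all channels are busy.
 * The search and the "set_bit" are done under the SAME lock,
 * which closes the race discussed above. */
static int alloc_chan_id(void)
{
	int id = -1;
	int i;

	pthread_mutex_lock(&lock);
	for (i = 0; i < DMA_REQUESTS; i++) {
		if (!(dma_inuse & (1UL << i))) {
			dma_inuse |= 1UL << i;	/* claim it before unlocking */
			id = i;
			break;
		}
	}
	pthread_mutex_unlock(&lock);
	return id;
}

static void free_chan_id(int id)
{
	pthread_mutex_lock(&lock);
	dma_inuse &= ~(1UL << id);
	pthread_mutex_unlock(&lock);
}
```

In the driver itself the equivalent change is simply to call set_bit() on `mux->chan_id` before dropping the spinlock taken for find_first_zero_bit(), rather than after re-acquiring it later.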