Date: Thu, 24 Sep 2020 11:50:05 -0400
From: Alan Stern
To: Felipe Balbi
Cc: Wesley Cheng, gregkh@linuxfoundation.org, Thinh.Nguyen@synopsys.com,
        linux-kernel@vger.kernel.org, linux-usb@vger.kernel.org,
        jackp@codeaurora.org
Subject: Re: [PATCH v3] usb: dwc3: Stop active transfers before halting the controller
Message-ID: <20200924155005.GB1337044@rowland.harvard.edu>

On Thu, Sep 24, 2020 at 10:39:24AM +0300, Felipe Balbi wrote:
> >>> +        /*
> >>> +         * Synchronize and disable any further event handling while controller
> >>> +         * is being enabled/disabled.
> >>> +         */
> >>> +        disable_irq(dwc->irq_gadget);
> >>
> >> why isn't dwc3_gadget_disable_irq() enough?
> >>
> >>>          spin_lock_irqsave(&dwc->lock, flags);
> >>
> >> spin_lock_irqsave() will disable interrupts, why disable_irq() above?
> >>
> >
> > In the discussion I had with Thinh, the concern was that with the newly
> > added code to override the lpos here, if the interrupt routine
> > (dwc3_check_event_buf()) runs, then it will reference the lpos for
>
> that's running in hardirq context.  All interrupts are disabled while
> that runs, there's no risk of race, right?
>
> > copying the event buffer contents to the event cache, and potentially
> > process events.  There is no locking in place, so it could be possible
> > to have both run in parallel.
>
> Is this academic or have you actually found a situation where this
> could, indeed, happen?  The spin_lock_irqsave() should be enough to
> synchronize dwc3_gadget_pullup() and the interrupt handler.
>
> > Hence the reason: if there was already a pending IRQ triggered,
> > dwc3_gadget_disable_irq() won't ensure the IRQ is handled.  We can do
> > something like:
> >         if (!is_on)
> >                 dwc3_gadget_disable_irq()
> >         synchronize_irq()
> >         spin_lock_irqsave()
> >         if (!is_on) {
> >         ...
> >
> > But the logic to only apply this on the pullup-removal case is a little
> > messy.  Also, from my understanding, spin_lock_irqsave() will only
> > disable the local CPU's IRQs, not the interrupt line on the GIC, which
> > means other CPUs can handle it, unless we explicitly set the IRQ
> > affinity to CPUX.
>
> Yeah, the way I understand it, this can't really happen.  But I'm open
> to being educated.  Maybe Alan can explain if this is really a
> possibility?

It depends on the details of the hardware, but yes, it is possible in
general for an interrupt handler to run after you have turned off the
device's interrupt-request line.  For example:

        CPU A                           CPU B
        ---------------------------     ----------------------
                                        Gets an IRQ from the device
                                        Calls handler routine
        spin_lock_irq                     spin_lock_irq
        Turns off the IRQ line            ...spins...
        spin_unlock_irq
                                          Rest of handler runs
                                          spin_unlock_irq

That's why we have synchronize_irq().  The usual pattern is something
like this:

        spin_lock_irq(&priv->lock);
        priv->disconnected = true;
        my_disable_irq(priv);
        spin_unlock_irq(&priv->lock);

        synchronize_irq(priv->irq);

And of course this has to be done in a context that can sleep.

Does this answer your question?

Alan Stern
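For reference, here is the pattern above spelled out as a self-contained
sketch.  The struct, the handler, and the my_hw_mask_irqs() helper are
hypothetical stand-ins rather than the dwc3 code under review; only
spin_lock_irq()/spin_unlock_irq(), synchronize_irq(), and the irqreturn_t
handler signature are real kernel interfaces.

#include <linux/interrupt.h>
#include <linux/spinlock.h>

/* Hypothetical driver state; not the dwc3 structures. */
struct my_device {
        spinlock_t lock;
        int irq;
        bool disconnected;
};

/* Hypothetical helper: tell the hardware to stop raising interrupts. */
static void my_hw_mask_irqs(struct my_device *priv)
{
        /* e.g. clear an imaginary interrupt-enable register here */
}

static irqreturn_t my_irq_handler(int irq, void *data)
{
        struct my_device *priv = data;
        irqreturn_t ret = IRQ_NONE;

        spin_lock(&priv->lock);
        if (!priv->disconnected) {
                /* ... normal event processing under the lock ... */
                ret = IRQ_HANDLED;
        }
        spin_unlock(&priv->lock);

        return ret;
}

/* Teardown path; must run in a context that can sleep. */
static void my_shutdown(struct my_device *priv)
{
        spin_lock_irq(&priv->lock);
        priv->disconnected = true;      /* handler will refuse new work */
        my_hw_mask_irqs(priv);          /* turn off the IRQ line */
        spin_unlock_irq(&priv->lock);

        /*
         * A handler instance may already be running on another CPU,
         * spinning on priv->lock as in the race shown above; wait here
         * until it has finished before tearing anything else down.
         */
        synchronize_irq(priv->irq);
}

The ordering matters: synchronize_irq() can sleep and is called only
after the lock is dropped, since calling it while still holding
priv->lock could deadlock against a handler spinning on that same lock.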