Subject: Re: [RFC PATCH v11 6/9] media: tegra: Add Tegra210 Video input driver
From: Sowjanya Komatineni <skomatineni@nvidia.com>
To: Dmitry Osipenko
Message-ID: <35c139b7-8688-6044-5b04-db4f7604fdcf@nvidia.com>
Date: Thu, 30 Apr 2020 14:22:04 -0700
List: linux-kernel@vger.kernel.org

On 4/30/20 12:53 PM, Sowjanya Komatineni wrote:
>
> On 4/30/20 12:46 PM, Sowjanya Komatineni wrote:
>>
>> On 4/30/20 12:33 PM, Dmitry Osipenko wrote:
>>> On 30.04.2020 22:09, Sowjanya Komatineni wrote:
>>>> On 4/30/20 11:18 AM, Sowjanya Komatineni wrote:
>>>>> On 4/30/20 10:06 AM, Sowjanya Komatineni wrote:
>>>>>> On 4/30/20 9:29 AM, Sowjanya Komatineni wrote:
>>>>>>> On 4/30/20 9:04 AM, Sowjanya Komatineni wrote:
>>>>>>>> On 4/30/20 7:13 AM, Dmitry Osipenko wrote:
>>>>>>>>> On 30.04.2020 17:02, Dmitry Osipenko wrote:
>>>>>>>>>> On 30.04.2020 16:56, Dmitry Osipenko wrote:
>>>>>>>>>>> On 30.04.2020 01:00, Sowjanya Komatineni wrote:
>>>>>>>>>>>> +static int chan_capture_kthread_finish(void *data)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	struct tegra_vi_channel *chan = data;
>>>>>>>>>>>> +	struct tegra_channel_buffer *buf;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	set_freezable();
>>>>>>>>>>>> +
>>>>>>>>>>>> +	while (1) {
>>>>>>>>>>>> +		try_to_freeze();
>>>>>>>>>>> I guess it won't be great to freeze in the middle of a capture
>>>>>>>>>>> process, so:
>>>>>>>>>>>
>>>>>>>>>>>         if (list_empty(&chan->done))
>>>>>>>>>>>                 try_to_freeze();
>>>>>>>>>> And here there should be some locking protection in order to not
>>>>>>>>>> race with chan_capture_kthread_start, because kthread_finish
>>>>>>>>>> could freeze before kthread_start.
>>>>>>>>> Or maybe both start / finish threads should simply be allowed to
>>>>>>>>> freeze only when both capture and done lists are empty:
>>>>>>>>>
>>>>>>>>>         if (list_empty(&chan->capture) &&
>>>>>>>>>             list_empty(&chan->done))
>>>>>>>>>                 try_to_freeze();
>>>>>>>> It is good to freeze when not in the middle of a frame capture, but
>>>>>>>> why should we not allow freeze in between captures?
>>>>>>>>
>>>>>>>> Other drivers do allow freeze in between frame captures.
>>>>>>>>
>>>>>>>> I guess we can freeze before the dequeue for capture, and in the
>>>>>>>> finish thread we can freeze after the capture is done.
>>>>>>>> This also doesn't need the list_empty() check around the freeze to
>>>>>>>> allow freezing between frame captures.
>>>>>>>
>>>>>>> Also, if we add a check for both lists being empty, freezing is not
>>>>>>> allowed as long as streaming is going on, so in the case of
>>>>>>> continuous streaming a freeze will never happen.
>>>>>
>>>>> To allow freeze between frames (but not in the middle of a frame),
>>>>> for the capture_start thread we can probably do an unconditional
>>>>> try_to_freeze().
>>>
>>> Is it possible to use wait_event_freezable()?
>>>
>>> https://www.kernel.org/doc/Documentation/power/freezing-of-tasks.txt
>>>
>>> Will the wait_event_interruptible() be woken up when the system
>>> freezes?
>>
>> Based on the wait_event_freezable() implementation, it looks similar to
>> wait_event_interruptible() + try_to_freeze(), as it does
>> freezable_schedule() instead of the schedule() done by
>> wait_event_interruptible().
>>
>> So using it for capture_start may be OK, to allow freezing before the
>> start of a frame. But we can't use it for capture_finish, as it is the
>> same as wait_event_interruptible() followed by an unconditional
>> try_to_freeze().
>>
>>>>> For the capture_finish thread, at the end of capture done we can do
>>>>> try_to_freeze() only when the done list is empty.
>>>
>>> This doesn't prevent the situation where the done-list is empty and the
>>> "finish" thread freezes while, at the same time, the "start" thread
>>> issues a new capture and then freezes too:
>>>
>>> 1. "start" thread issues capture
>>>
>>> 2. "finish" thread wakes and waits for the capture to complete
>>>
>>> 3. "start" thread begins another capture, waits for FRAME_START
>>>
>>> 4. system freezing activates
>>>
>>> 5. "finish" thread completes the capture and freezes because done-list
>>> is empty
>>>
>>> 6.
>>> "start" thread gets FRAME_START, issues another capture and freezes
>>
>> This will not happen: as we allow double buffering, the done list will
>> not be empty until stream stop happens.
>>
>> There will always be 1 outstanding frame in the done list.
>
> Correction: there will always be 1 outstanding buffer, except during the
> beginning of the stream.
>
> Except for the beginning frames, the done list will not be empty for the
> rest of the streaming process.

Or probably we should track pending buffers between the 2 threads w.r.t.
single-shot issues and allow freeze only when there is no pending frame,
to cover any corner case between the done list and the capture list.

>>
>>>> My understanding is that buffer updates/release should not happen
>>>> after the frozen state. So we should let the frame capture of the
>>>> outstanding buffer finish before freezing in the capture_finish
>>>> thread.
>>>>
>>>> But for the capture_start thread we can unconditionally freeze before
>>>> dequeuing the next buffer for capture.
>>>>
>>>> With this, both threads can be in the frozen state and no buffer
>>>> updates/captures will happen after the frozen state.
>>>>
>>>> I think it's not required to finish streaming of all frames completely
>>>> to let the threads enter the frozen state, as streaming can be
>>>> continuous as well.
>>>
>>> Yes, only freezing in the middle of IO should be avoided.
>>>
>>> https://lwn.net/Articles/705269/
>>>
>>>>>> Hi Dmitry,
>>>>>>
>>>>>> Will update in v12 to not allow freeze in the middle of a frame
>>>>>> capture.
>>>>>>
>>>>>> Can you please confirm on the above, whether you agree to allow
>>>>>> freeze to happen in between frame captures?
>>>>>>
>>>>>> Also, as most feedback has been received from you by now, I would
>>>>>> appreciate it if you could provide everything else on this v11 now,
>>>>>> so we will not have any new changes after v12.
>>>
>>> I'll take another look tomorrow / during the weekend and let you know.