Subject: Re: [PATCH] io_uring: store back buffer in case of failure
From: Olivier Langlois
To: Pavel Begunkov, Jens Axboe, io-uring@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Fri, 18 Jun 2021 17:26:13 -0400
In-Reply-To: <5007a641-23cf-195d-87ee-de193e19dc20@gmail.com>
References: <60c83c12.1c69fb81.e3bea.0806SMTPIN_ADDED_MISSING@mx.google.com>
	<93256513-08d8-5b15-aa98-c1e83af60b54@gmail.com>
	<4f32f06306eac4dd7780ed28c06815e3d15b43ad.camel@trillion01.com>
	<6bf916b4-ba6f-c401-9e8b-341f9a7b88f7@kernel.dk>
	<5007a641-23cf-195d-87ee-de193e19dc20@gmail.com>
Organization: Trillion01 Inc
On Wed, 2021-06-16 at 16:37 +0100, Pavel Begunkov wrote:
> On 6/16/21 3:44 PM, Jens Axboe wrote:
> > On 6/16/21 8:01 AM, Pavel Begunkov wrote:
> > > On 6/16/21 2:42 PM, Olivier Langlois wrote:
> > > > On Tue, 2021-06-15 at 15:51 -0600, Jens Axboe wrote:
> > > > > Ditto for this one, don't see it in my email nor on the list.
> > > > >
> > > > I can resend you a private copy of this one but, as Pavel pointed
> > > > out, it contains fatal flaws.
> > > >
> > > > So unless someone can tell me that the idea is interesting and has
> > > > potential, and can give me a hint or 2 about how to address the
> > > > challenges to fix the current flaws, it is pretty much a show
> > > > stopper to me and I think that I am going to let it go...
> > >
> > > It'd need to go through some other context, e.g. task context.
> > > task_work_add() + custom handler would work, or buf-select
> > > synchronisation can be reworked, but both would rather be
> > > bulky and not great.
> >
> > Indeed - that'd solve both the passing around of locking state, which
> > I really don't like, and make it much simpler. Just use task work for
> > the re-insert, and you can grab the ring lock unconditionally from
> > there.
>
> Hmm, it might be much simpler than I thought if we allocate a separate
> struct callback_head, i.e. task_work, queue it with exactly
> task_work_add() but not io_req_task_work_add(), and continue with the
> request handler.
>

ok, thanks a lot for the excellent suggestions!

I think you have given me everything I need to take a shot at a second
version of this patch.
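
Here is a rough sketch of how I read the suggestion, just to make sure I
understand it before starting on v2. All the names below are mine:
io_requeue_kbuf() is only a placeholder for whatever helper ends up
putting the io_buffer back into its group, and struct io_buf_restore
does not exist today. It assumes the code lives in fs/io_uring.c where
struct io_ring_ctx and struct io_buffer are visible.

#include <linux/task_work.h>
#include <linux/slab.h>

struct io_buf_restore {
	struct callback_head	cb;	/* queued via task_work_add() */
	struct io_ring_ctx	*ctx;
	struct io_buffer	*kbuf;
	u16			bgid;	/* buffer group to put it back into */
};

static void io_buf_restore_task_work(struct callback_head *cb)
{
	struct io_buf_restore *res = container_of(cb, struct io_buf_restore, cb);
	struct io_ring_ctx *ctx = res->ctx;

	/*
	 * This runs in task context, so the ring lock can be taken
	 * unconditionally -- no locking state has to be passed around.
	 */
	mutex_lock(&ctx->uring_lock);
	io_requeue_kbuf(ctx, res->kbuf, res->bgid);	/* hypothetical helper */
	mutex_unlock(&ctx->uring_lock);
	kfree(res);
}

static int io_schedule_buf_restore(struct io_ring_ctx *ctx,
				   struct io_buffer *kbuf, u16 bgid)
{
	struct io_buf_restore *res = kmalloc(sizeof(*res), GFP_KERNEL);

	if (!res)
		return -ENOMEM;

	res->ctx = ctx;
	res->kbuf = kbuf;
	res->bgid = bgid;
	init_task_work(&res->cb, io_buf_restore_task_work);

	/*
	 * Plain task_work_add(), not io_req_task_work_add(), as
	 * suggested above; the request handler then continues as usual.
	 */
	return task_work_add(current, &res->cb, TWA_SIGNAL);
}

If that matches what you had in mind, the failure path would just call
io_schedule_buf_restore() and the re-insert happens later under
ctx->uring_lock in the task_work handler.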