Subject: Re: [PATCH v4 6/6] io_uring: add support for zone-append
To: Pavel Begunkov, Kanchan Joshi
Cc: Kanchan Joshi, viro@zeniv.linux.org.uk, bcrl@kvack.org, Matthew Wilcox,
    Christoph Hellwig, Damien Le Moal, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org,
    linux-block@vger.kernel.org, linux-api@vger.kernel.org, SelvaKumar S,
    Nitesh Shetty, Javier Gonzalez
References: <1595605762-17010-1-git-send-email-joshi.k@samsung.com>
            <1595605762-17010-7-git-send-email-joshi.k@samsung.com>
From: Jens Axboe
Message-ID: <80d27717-080a-1ced-50d5-a3a06cf06cd3@kernel.dk>
Date: Thu, 30 Jul 2020 10:13:57 -0600

On 7/30/20 10:08 AM, Pavel Begunkov wrote:
> On 27/07/2020 23:34, Jens Axboe wrote:
>> On 7/27/20 1:16 PM, Kanchan Joshi wrote:
>>> On Fri, Jul 24, 2020 at 10:00 PM Jens Axboe wrote:
>>>>
>>>> On 7/24/20 9:49 AM, Kanchan Joshi wrote:
>>>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>>>> index 7809ab2..6510cf5 100644
>>>>> --- a/fs/io_uring.c
>>>>> +++ b/fs/io_uring.c
>>>>> @@ -1284,8 +1301,15 @@ static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
>>>>>  	cqe = io_get_cqring(ctx);
>>>>>  	if (likely(cqe)) {
>>>>>  		WRITE_ONCE(cqe->user_data, req->user_data);
>>>>> -		WRITE_ONCE(cqe->res, res);
>>>>> -		WRITE_ONCE(cqe->flags, cflags);
>>>>> +		if (unlikely(req->flags & REQ_F_ZONE_APPEND)) {
>>>>> +			if (likely(res > 0))
>>>>> +				WRITE_ONCE(cqe->res64, req->rw.append_offset);
>>>>> +			else
>>>>> +				WRITE_ONCE(cqe->res64, res);
>>>>> +		} else {
>>>>> +			WRITE_ONCE(cqe->res, res);
>>>>> +			WRITE_ONCE(cqe->flags, cflags);
>>>>> +		}
>>>>
>>>> This would be nice to keep out of the fast path, if possible.
>>>
>>> I was thinking of keeping a function-pointer (in io_kiocb) during
>>> submission. That would have avoided this check......but argument count
>>> differs, so it did not add up.
>>
>> But that'd grow the io_kiocb just for this use case, which is arguably
>> even worse. Unless you can keep it in the per-request private data,
>> but there's no more room there for the regular read/write side.
>>
>>>>> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
>>>>> index 92c2269..2580d93 100644
>>>>> --- a/include/uapi/linux/io_uring.h
>>>>> +++ b/include/uapi/linux/io_uring.h
>>>>> @@ -156,8 +156,13 @@ enum {
>>>>>   */
>>>>>  struct io_uring_cqe {
>>>>>  	__u64	user_data;	/* sqe->data submission passed back */
>>>>> -	__s32	res;		/* result code for this event */
>>>>> -	__u32	flags;
>>>>> +	union {
>>>>> +		struct {
>>>>> +			__s32	res;	/* result code for this event */
>>>>> +			__u32	flags;
>>>>> +		};
>>>>> +		__s64	res64;	/* appending offset for zone append */
>>>>> +	};
>>>>>  };
>>>>
>>>> Is this a compatible change, both for now but also going forward? You
>>>> could randomly have IORING_CQE_F_BUFFER set, or any other future flags.
>>>
>>> Sorry, I didn't quite understand the concern. CQE_F_BUFFER is not
>>> used/set for write currently, so it looked compatible at this point.
>>
>> Not worried about that, since we won't ever use that for writes. But it
>> is a potential headache down the line for other flags, if they apply to
>> normal writes.
>>
>>> Yes, no room for future flags for this operation.
>>> Do you see any other way to enable this support in io-uring?
>>
>> Honestly I think the only viable option is as we discussed previously,
>> pass in a pointer to a 64-bit type where we can copy the additional
>> completion information to.
>
> TBH, I hate the idea of such overhead/latency at times when SSDs can
> serve writes in less than 10ms. Any chance you measured how long does it

10us? :-)

> take to drag through task_work?

A 64-bit value copy is really not a lot of overhead... But yes, we'd need
to push the completion through task_work at that point, as we can't do it
from the completion side. That's not a lot of overhead, and most notably,
it's overhead that only affects this particular type.

That's not a bad starting point, and something that can always be
optimized later if need be. But I seriously doubt it'd be anything to
worry about.

-- 
Jens Axboe
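
For reference, a minimal C sketch contrasting the two result layouts under
discussion: (a) mirrors the union from the v4 patch, where res64 overlays
res/flags and leaves no room for CQE flags on zone-append completions;
(b) sketches the indirect-pointer alternative, where the CQE keeps its
normal layout and the kernel copies the 64-bit appended offset to user
memory instead. The names in (b) (zone_append_result, addr_append) are
hypothetical illustrations, not from any posted patch.

/* Illustration only; the names in (b) are hypothetical. */
#include <stdint.h>

/* (a) Union layout from the v4 patch: a zone-append CQE reuses the
 *     res/flags bytes to carry a 64-bit result (appended offset or
 *     -errno), so no flags can be reported for that completion. */
struct cqe_union_layout {
	uint64_t user_data;
	union {
		struct {
			int32_t  res;	/* result code for this event */
			uint32_t flags;	/* e.g. IORING_CQE_F_BUFFER */
		};
		int64_t res64;		/* appended offset for zone append */
	};
};

/* (b) Indirect alternative: the submission carries a user pointer, and
 *     on completion the kernel copies the appended offset there (which
 *     forces the completion through task_work, since user memory cannot
 *     be written from the completion path). The CQE keeps res/flags. */
struct zone_append_result {		/* hypothetical user-side struct */
	uint64_t appended_offset;
};

struct zone_append_sqe_extra {		/* hypothetical, not the real SQE */
	uint64_t addr_append;		/* user pointer to struct zone_append_result */
};

The trade-off raised in the thread: (a) adds a branch to
__io_cqring_fill_event and gives up the flags field for this opcode, while
(b) keeps the CQE format intact at the cost of an extra 64-bit copy to
user memory and a task_work-based completion.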