Date: Fri, 31 May 2019 21:45:02 +0200
From: Roman Penyaev <rpenyaev@suse.de>
To: Jens Axboe
Cc: Azat Khuzhin, Andrew Morton, Al Viro, Linus Torvalds,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 00/13] epoll: support pollable epoll from userspace
References: <20190516085810.31077-1-rpenyaev@suse.de>
    <1d47ee76735f25ae5e91e691195f7aa5@suse.de>
Message-ID: <8b3bade3c5fffdd8f1ab24940258d4e1@suse.de>

On 2019-05-31 18:54, Jens Axboe wrote:
> On 5/31/19 10:02 AM, Roman Penyaev wrote:
>> On 2019-05-31 16:48, Jens Axboe wrote:
>>> On 5/16/19 2:57 AM, Roman Penyaev wrote:
>>>> Hi all,
>>>>
>>>> This is v3 which introduces pollable
>>>> epoll from userspace.
>>>>
>>>> v3:
>>>>   - Measurements made, presented below.
>>>>
>>>>   - Fix alignment of the epoll_uitem structure on all 64-bit archs
>>>>     except x86-64. epoll_uitem should always be 16 bytes; a proper
>>>>     BUILD_BUG_ON is added. (Linus)
>>>>
>>>>   - Check pollflags explicitly for 0 inside the work callback, and
>>>>     do nothing if it is 0.
>>>>
>>>> v2:
>>>>   - No reallocations: the max number of items (and thus the size of
>>>>     the user ring) is specified by the caller.
>>>>
>>>>   - Interface is simplified: -ENOSPC is returned on an attempt to
>>>>     add a new epoll item once the max number is reached, nothing
>>>>     more.
>>>>
>>>>   - Allocated pages are accounted via user->locked_vm and limited
>>>>     to the RLIMIT_MEMLOCK value.
>>>>
>>>>   - EPOLLONESHOT is handled.
>>>>
>>>> This series introduces pollable epoll from userspace, i.e. the user
>>>> creates an epfd with a new EPOLL_USERPOLL flag, mmaps the epoll
>>>> descriptor, gets the header and ring pointers and then consumes
>>>> ready events from the ring, avoiding the epoll_wait() call. When
>>>> the ring is empty, the user has to call epoll_wait() in order to
>>>> wait for new events. epoll_wait() returns -ESTALE if the user ring
>>>> still has events in it (a kind of indication that the user has to
>>>> consume events from the user ring first; I could not invent
>>>> anything better than returning -ESTALE).
>>>>
>>>> For the user header and user ring allocation I used vmalloc_user().
>>>> I found that it is much easier to reuse remap_vmalloc_range_partial()
>>>> instead of dealing with the page cache (like aio.c does). What is
>>>> also nice is that the virtual address is properly aligned on SHMLBA,
>>>> thus there should not be any d-cache aliasing problems on archs with
>>>> vivt or vipt caches.
>>>
>>> Why aren't we just adding support to io_uring for this instead? Then
>>> we don't need yet another entirely new ring that is just a little
>>> different from what we have.
>>>
>>> I haven't looked into the details of your implementation, just
>>> curious if there's anything that makes using io_uring a non-starter
>>> for this purpose?
>>
>> Afaict the main difference is that you do not need to recharge an fd
>> (submit a new poll request, in io_uring terms): once an fd has been
>> added to epoll with epoll_ctl(), we get events. When you have
>> thousands of fds, that should matter.
>>
>> Another interesting question is how difficult it would be to modify
>> the existing event loops in event libraries in order to support
>> recharging (EPOLLONESHOT, in epoll terms).
>>
>> Maybe Azat, who maintains libevent, can shed light on this (currently
>> I see that libevent does not support "EPOLLONESHOT" logic).
>
> In terms of existing io_uring poll support, which is what I'm guessing
> you're referring to, it is indeed just one-shot.

Yes, yes.

> But there's no reason why we can't have it persist until explicitly
> canceled with POLL_REMOVE.

It seems not so easy. The main problem is that with only a ring it is
impossible to figure out on the kernel side which event bits userspace
has already seen and which bits are new. So a new cqe would have to be
added to the completion ring on every wake_up_interruptible() call
(i.e. whenever an fd wants to report that something is ready). IMO that
can lead to many duplicate events (tens? hundreds? honestly no idea),
which userspace has to handle with subsequent read/write calls. That
can kill all the performance benefits of a uring.

In uepoll this is solved with another piece of shared memory, where
userspace atomically clears bits and the kernel side sets bits. A new
index is added to the ring only when the kernel observes that the bits
were previously cleared (i.e. userspace has not seen this new event
yet); if the bits are still set, a ring entry is already pending and no
duplicate is added. Can we extend the io_uring API to support this
behavior? It would also be great if we could make the event path
lockless.
With a big number of fds and frequent events this matters. Please take
a look; recently I did some measurements:

https://lkml.org/lkml/2018/12/12/305

--
Roman