Date: Thu, 06 Dec 2018 11:52:11 +0100
From: Roman Penyaev <rpenyaev@suse.de>
To: Eric Wong
Cc: Alexander Viro, "Paul E.
McKenney", Linus Torvalds, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Mathieu Desnoyers
Subject: Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention
In-Reply-To: <20181205234649.ssvmv4ulwevgdla4@dcvr>
References: <20181203110237.14787-1-rpenyaev@suse.de> <20181205234649.ssvmv4ulwevgdla4@dcvr>
Message-ID: <39192b9caf1114c95cd23e786a9c3e60@suse.de>

On 2018-12-06 00:46, Eric Wong wrote:
> Roman Penyaev wrote:
>> Hi all,
>>
>> The goal of this patch is to reduce contention of ep_poll_callback(),
>> which can be called concurrently from different CPUs in case of high
>> event rates and many fds per epoll. The problem can be reproduced
>> reliably by generating events (writes to a pipe or an eventfd) from
>> many threads while a consumer thread does the polling. In other
>> words, this patch increases the bandwidth of events which can be
>> delivered from sources to the poller by adding poll items to the
>> list in a lockless way.
>
> Hi Roman,
>
> I also tried to solve this problem many years ago with help of the
> well-tested-in-userspace wfcqueue from Mathieu's URCU.
>
> I was also looking to solve contention with parallel epoll_wait
> callers with this. AFAIK, it worked well; but needed the userspace
> tests from wfcqueue ported over to the kernel and more review.
>
> I didn't have enough computing power to show the real-world benefits
> or funding to continue:
>
> https://lore.kernel.org/lkml/?q=wfcqueue+d:..20130501

Hi Eric,

Nice work. That was a huge change, both by itself and through its
dependency on wfcqueue. I could not find any substantial discussion of
it; what was the reaction of the community?

> It might not be too much trouble for you to brush up the wait-free
> patches and test them against the rwlock implementation.
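(Aside: the "lockless way" of adding poll items that the cover letter
above refers to boils down to a single-CAS list push. Here is a
userspace sketch with C11 atomics, in the spirit of the kernel's
llist_add()/llist_del_all(); all names are illustrative, this is not
the patch's actual code.)

```c
/* Userspace sketch of a lockless single-linked list, in the spirit of
 * the kernel's llist. Illustrative only, not the patch's actual code. */
#include <stdatomic.h>
#include <stddef.h>

struct pitem {
    struct pitem *next;
};

static _Atomic(struct pitem *) ready_head = NULL;

/* Many producers may call this concurrently without any exclusive
 * lock: the CAS loop simply retries until the head swap succeeds. */
static void pitem_push(struct pitem *n)
{
    struct pitem *old = atomic_load_explicit(&ready_head,
                                             memory_order_relaxed);
    do {
        n->next = old;
    } while (!atomic_compare_exchange_weak_explicit(
                     &ready_head, &old, n,
                     memory_order_release, memory_order_relaxed));
}

/* A single consumer detaches the whole list at once, like
 * llist_del_all(): one atomic exchange, no lock. */
static struct pitem *pitem_pop_all(void)
{
    return atomic_exchange_explicit(&ready_head, NULL,
                                    memory_order_acquire);
}
```

Producers never block each other, and the consumer's "grab everything"
is a single xchg; that is the property which removes the producer-side
contention.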
Ha :) I may try to cherry-pick these patches; let's see how many
conflicts I have to resolve, since eventpoll.c has changed a lot since
then (6 years have passed, right?).

Reading your description, I would expect epoll_wait() calls to be
faster with your approach, because they do not contend with
ep_poll_callback(). I did not try to solve that; I address only the
contention between producers, which keeps my change small.

I also found your https://yhbt.net/eponeshotmt.c , where you count the
number of bare epoll_wait() calls. IMO that is not the right metric:
we need to count how many events are delivered, not how fast
epoll_wait() returns. But as I said, no doubt that getting rid of the
contention between consumer and producers will show even better
results.

--
Roman
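P.S. To make the "count delivered events" point concrete, here is a
rough userspace model of the workload (N producer threads signalling
eventfds, one consumer polling). All names and sizes are illustrative,
it is Linux-only, and it is not eponeshotmt itself:

```c
/* Model of the workload: count events delivered through epoll, not the
 * bare epoll_wait() call rate. Illustrative sketch; Linux-only; link
 * with -pthread. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

#define NWRITERS 4
#define NWRITES  256

static void *writer(void *arg)
{
    int fd = *(int *)arg;
    uint64_t one = 1;

    for (int i = 0; i < NWRITES; i++)
        /* each write fires the wakeup path (ep_poll_callback()) */
        if (write(fd, &one, sizeof(one)) != sizeof(one))
            break;
    return NULL;
}

/* Returns the total number of events delivered to the consumer. */
static uint64_t run_bench(void)
{
    int epfd = epoll_create1(0);
    int efds[NWRITERS];
    pthread_t tids[NWRITERS];
    uint64_t total = 0, calls = 0;

    for (int i = 0; i < NWRITERS; i++) {
        efds[i] = eventfd(0, EFD_NONBLOCK);
        struct epoll_event ev = { .events = EPOLLIN,
                                  .data.fd = efds[i] };
        epoll_ctl(epfd, EPOLL_CTL_ADD, efds[i], &ev);
    }
    for (int i = 0; i < NWRITERS; i++)
        pthread_create(&tids[i], NULL, writer, &efds[i]);

    /* Count delivered events: each eventfd read returns the number of
     * writes accumulated since the previous read, then resets it. */
    while (total < (uint64_t)NWRITERS * NWRITES) {
        struct epoll_event evs[NWRITERS];
        int n = epoll_wait(epfd, evs, NWRITERS, 100);

        calls++;
        for (int j = 0; j < n; j++) {
            uint64_t val;

            if (read(evs[j].data.fd, &val, sizeof(val)) == sizeof(val))
                total += val;
        }
    }
    for (int i = 0; i < NWRITERS; i++) {
        pthread_join(tids[i], NULL);
        close(efds[i]);
    }
    close(epfd);
    printf("%llu events over %llu epoll_wait() calls\n",
           (unsigned long long)total, (unsigned long long)calls);
    return total;
}
```

The interesting number is events per epoll_wait() call: under load one
call can return many accumulated events, so a lower call rate can still
mean higher delivered bandwidth.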