Date: Mon, 17 Dec 2018 10:01:19 -0800
From: Davidlohr Bueso
To: Roman Penyaev
Cc: Jason Baron, Al Viro, "Paul E. McKenney", Linus Torvalds,
    Andrew Morton, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/3] use rwlock in order to reduce ep_poll_callback() contention
Organization: SUSE Labs
In-Reply-To: <73608dd0e5839634966b3b8e03e4b3c9@suse.de>
References: <20181212110357.25656-1-rpenyaev@suse.de> <73608dd0e5839634966b3b8e03e4b3c9@suse.de>
Message-ID: <275da18a1d286eabf7c9f6588d66baf4@suse.de>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2018-12-17 03:49, Roman Penyaev wrote:
> On 2018-12-13 19:13, Davidlohr Bueso wrote:
>
> Yes, good idea.
> But frankly I do not want to bloat epoll-wait.c with
> my multi-writers-single-reader test case, because epoll-wait.c
> will soon become unmaintainable with all the possible loads and sets
> of different options.
>
> Can we have a single, small, separate source file for each epoll load?
> Easy to fix, easy to maintain, debug/hack.

Yes, completely agree; I was actually thinking along those lines.

>> I ran these patches on the 'wait' workload, which is an epoll_wait(2)
>> stresser. On a 40-core IvyBridge it shows good performance
>> improvements for increasing numbers of file descriptors each of the
>> 40 threads deals with:
>>
>> 64 fds:   +20%
>> 512 fds:  +30%
>> 1024 fds: +50%
>>
>> (Yes, these are pretty raw ops/sec measurements.) Unlike your
>> benchmark, though, there is only a single writer thread, so it is
>> less ideal for measuring optimizations when I/O becomes available.
>> Hence it would be nice to also have this.
>
> That's weird. One writer thread does not contend with anybody, only
> with consumers, so there should not be any big difference.

Yeah, so the irq optimization patch, which is known to boost numbers on
this microbenchmark, plays an important role. I just put them all
together when testing.

Thanks,
Davidlohr