Date: Mon, 17 Dec 2018 12:49:06 +0100
From: Roman Penyaev
To: Davidlohr Bueso
Cc: Jason Baron, Al Viro, "Paul E. McKenney", Linus Torvalds,
    Andrew Morton, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/3] use rwlock in order to reduce ep_poll_callback()
    contention
References: <20181212110357.25656-1-rpenyaev@suse.de>
Message-ID: <73608dd0e5839634966b3b8e03e4b3c9@suse.de>
X-Sender: rpenyaev@suse.de
X-Mailing-List: linux-kernel@vger.kernel.org

On 2018-12-13 19:13, Davidlohr Bueso wrote:
> On 2018-12-12 03:03, Roman Penyaev wrote:
>> The last patch targets the contention problem in ep_poll_callback(),
>> which can be reproduced well by generating events (writes to a pipe
>> or eventfd) from many threads while a consumer thread does the
>> polling.
>>
>> The following are some microbenchmark results based on the test [1],
>> which starts threads that generate N events each. The test ends when
>> all events have been successfully fetched by the poller thread:
>>
>> spinlock
>> ========
>>
>>   threads  events/ms  run-time ms
>>         8       6402        12495
>>        16       7045        22709
>>        32       7395        43268
>>
>> rwlock + xchg
>> =============
>>
>>   threads  events/ms  run-time ms
>>         8      10038         7969
>>        16      12178        13138
>>        32      13223        24199
>>
>> According to the results, the bandwidth of delivered events is
>> significantly increased, and thus the execution time is reduced.
>>
>> This series is based on linux-next/akpm and differs from the RFC in
>> that additional cleanup patches and explicit comments have been
>> added.
>>
>> [1] https://github.com/rouming/test-tools/blob/master/stress-epoll.c
>
> Care to "port" this to 'perf bench epoll' in linux-next? I've been
> trying to unify into perf bench the whole set of epoll performance
> test cases kernel developers can use when making changes, and it
> would be useful.

Yes, good idea.
But frankly I do not want to bloat epoll-wait.c with my
multi-writers-single-reader test case, because epoll-wait.c would soon
become unmaintainable with all the possible loads and sets of different
options. Can we have a single, small, separate source file for each
epoll load? That keeps each one easy to fix, maintain, and debug or
hack on.

> I ran these patches on the 'wait' workload, which is an epoll_wait(2)
> stresser. On a 40-core IvyBridge it shows good performance
> improvements for an increasing number of file descriptors each of the
> 40 threads deals with:
>
>    64 fds: +20%
>   512 fds: +30%
>  1024 fds: +50%
>
> (Yes, these are pretty raw measurements in ops/sec.) Unlike your
> benchmark, though, there is only a single writer thread, and it is
> therefore less ideal for measuring optimizations when I/O becomes
> available. Hence it would be nice to also have this.

That's weird. One writer thread does not contend with anybody, only
with consumers, so there should not be any big difference.

--
Roman