Date: Fri, 20 Jul 2018 17:22:54 -0700
From: Davidlohr Bueso
To: Andrew Morton
Cc: jbaron@akamai.com, viro@zeniv.linux.org.uk, linux-kernel@vger.kernel.org, Peter Zijlstra
Subject: Re: [PATCH -next 0/2] fs/epoll: loosen irq safety when possible
Message-ID: <20180721002254.kwsdw7xhlogx7fr4@linux-r8p5>
References: <20180720172956.2883-1-dave@stgolabs.net> <20180720124212.7260d76d83e2b8e5e3349ea5@linux-foundation.org> <20180720200559.27nc7j2rrxpy5p3n@linux-r8p5> <20180720134429.1ba61018934b084bb2e17bdb@linux-foundation.org>
In-Reply-To: <20180720134429.1ba61018934b084bb2e17bdb@linux-foundation.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 20 Jul 2018, Andrew Morton wrote:

>Did you try measuring it on bare
hardware?

I did, and wasn't expecting much difference; this was on a 2-socket,
40-core (HT) IvyBridge, on a few workloads. Unfortunately I don't have
a Xen environment, and for the Xen results I do have (the numbers in
patch 1) I don't have the actual workload, so I cannot compare them
directly.

1) An epoll_wait() (pipes io) microbenchmark was run across different
configurations (http://linux-scalability.org/epoll/epoll-test.c). It
shows around a 7-10% improvement in the overall total number of times
the epoll_wait() loop completes, using both regular and nested epolls.
Very raw numbers, but measurable nonetheless.

# threads	vanilla		dirty
     1		1677717		1805587
     2		1660510		1854064
     4		1610184		1805484
     8		1577696		1751222
    16		1568837		1725299
    32		1291532		1378463
    64		 752584		 787368

Note that the stddev is pretty small.

2) Another pipe test, which shows no real measurable improvement.
(http://www.xmailserver.org/linux-patches/pipetest.c)

>> >I'd have more confidence if we had some warning mechanism if we run
>> >spin_lock_irq() when IRQs are disabled, which is probably-a-bug. But
>> >afaict we don't have that. Probably for good reasons - I wonder what
>> >they are?
>
>Well ignored ;)
>
>We could open-code it locally. Add a couple of
>WARN_ON_ONCE(irqs_disabled())? That might need re-benchmarking with
>Xen but surely just reading the thing isn't too expensive?

I agree. I'll see what I can come up with, and will also ask the
customer to test in his setup. Bare metal would also need some new
numbers, I guess.

Thanks,
Davidlohr