Date: Fri, 30 Mar 2018 13:45:57 -0700
From: Davidlohr Bueso
To: "Eric W. Biederman"
Cc: manfred@colorfullife.com, Linux Containers,
 linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
 khlebnikov@yandex-team.ru, prakash.sangappa@oracle.com, luto@kernel.org,
 akpm@linux-foundation.org, oleg@redhat.com, serge.hallyn@ubuntu.com,
 esyr@redhat.com, jannh@google.com, linux-security-module@vger.kernel.org,
 Pavel Emelyanov, Nagarathnam Muthusamy
Subject: Re: [REVIEW][PATCH 11/11] ipc/sem: Fix semctl(..., GETPID, ...)
 between pid namespaces
Message-ID: <20180330204557.5cgyipyqawfte3ml@linux-n805>
References: <87vadmobdw.fsf_-_@xmission.com>
 <20180323191614.32489-11-ebiederm@xmission.com>
 <20180329005209.fnzr3hzvyr4oy3wi@linux-n805>
 <20180330190951.nfcdwuzp42bl2lfy@linux-n805>
 <87y3i91fxh.fsf@xmission.com>
In-Reply-To: <87y3i91fxh.fsf@xmission.com>
User-Agent: NeoMutt/20170421 (1.8.2)

On Fri, 30 Mar 2018, Eric W. Biederman wrote:

>Davidlohr Bueso writes:
>
>> I ran this on a 40-core (no ht) Westmere with two benchmarks. The first
>> is Manfred's sysvsem lockunlock[1] program, which uses _processes_ to,
>> well, lock and unlock the semaphore. The options are a little
>> unconventional: to keep the critical region small and the lock+unlock
>> frequency high, I added busy_in=busy_out=10. Similarly, to get the
>> worst-case scenario and have everyone update the same semaphore, a
>> single one is used. Here are the results (pretty low stddev from run
>> to run) for doing 100,000 lock+unlock operations.
>>
>> - 1 proc:
>>   * vanilla
>>     total execution time: 0.110638 seconds for 100000 loops
>>   * dirty
>>     total execution time: 0.120144 seconds for 100000 loops
>>
>> - 2 proc:
>>   * vanilla
>>     total execution time: 0.379756 seconds for 100000 loops
>>   * dirty
>>     total execution time: 0.477778 seconds for 100000 loops
>>
>> - 4 proc:
>>   * vanilla
>>     total execution time: 6.749710 seconds for 100000 loops
>>   * dirty
>>     total execution time: 4.651872 seconds for 100000 loops
>>
>> - 8 proc:
>>   * vanilla
>>     total execution time: 5.558404 seconds for 100000 loops
>>   * dirty
>>     total execution time: 7.143329 seconds for 100000 loops
>>
>> - 16 proc:
>>   * vanilla
>>     total execution time: 9.016398 seconds for 100000 loops
>>   * dirty
>>     total execution time: 9.412055 seconds for 100000 loops
>>
>> - 32 proc:
>>   * vanilla
>>     total execution time: 9.694451 seconds for 100000 loops
>>   * dirty
>>     total execution time: 9.990451 seconds for 100000 loops
>>
>> - 64 proc:
>>   * vanilla
>>     total execution time: 9.844984 seconds for 100032 loops
>>   * dirty
>>     total execution time: 10.016464 seconds for 100032 loops
>>
>> Lower task counts show pretty massive performance hits of ~9%, ~25%
>> and ~30% for single, two and four/eight processes. As more are added,
>> I guess the overhead tends to disappear because, for one, you have a
>> lot more locking contention going on.
>
>Can you check your notes on the 4 process case? As I read the 4 process
>case above it is a ~30% improvement. Either that is a typo or there is
>the potential for quite a bit of noise in the test case.

Yeah, sorry, that was a typo. Unlike the second benchmark, I didn't have
this one automated, but it's always the vanilla kernel that outperforms
the dirty one.

Thanks,
Davidlohr
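
For context, the lock/unlock pattern the benchmark stresses reduces to a
tight semop() loop over a single SysV semaphore. The following is a minimal
sketch, not Manfred's actual lockunlock program: the loop count mirrors the
100,000 iterations above, while the busy() spin counts are illustrative
stand-ins for the real program's busy_in/busy_out options.

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun {			/* semctl(2): the caller must define this */
	int val;
	struct semid_ds *buf;
	unsigned short *array;
};

static void busy(int n)		/* crude stand-in for busy_in/busy_out */
{
	for (volatile int i = 0; i < n; i++)
		;
}

int main(void)
{
	struct sembuf lock   = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };
	struct sembuf unlock = { .sem_num = 0, .sem_op = +1, .sem_flg = 0 };
	union semun arg = { .val = 1 };		/* start unlocked */
	int semid, i;

	semid = semget(IPC_PRIVATE, 1, 0600);	/* single semaphore: worst case */
	if (semid < 0 || semctl(semid, 0, SETVAL, arg) < 0) {
		perror("sem setup");
		return 1;
	}

	for (i = 0; i < 100000; i++) {		/* 100,000 lock+unlock */
		semop(semid, &lock, 1);		/* enter critical region */
		busy(10);			/* busy_in=10 */
		semop(semid, &unlock, 1);	/* leave critical region */
		busy(10);			/* busy_out=10 */
	}

	semctl(semid, 0, IPC_RMID);		/* tear down the set */
	return 0;
}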
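
As for the operation the patch under review fixes: semctl(semid, semnum,
GETPID) reports the pid of the last task that performed semop() on that
semaphore, and that is the value the series translates between pid
namespaces (previously a caller in another namespace could see the raw pid
from the semop caller's namespace). A single-namespace sketch of the
semantics, with hypothetical setup:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main(void)
{
	struct sembuf op = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };
	union semun arg = { .val = 1 };
	int semid = semget(IPC_PRIVATE, 1, 0600);
	pid_t child;

	semctl(semid, 0, SETVAL, arg);

	child = fork();
	if (child == 0) {		/* child: one lock+unlock, then exit */
		semop(semid, &op, 1);
		op.sem_op = +1;
		semop(semid, &op, 1);
		_exit(0);
	}
	waitpid(child, NULL, 0);

	/* GETPID: pid of the last task to semop() this semaphore (the child) */
	printf("child=%d GETPID=%d\n", (int)child, semctl(semid, 0, GETPID));

	semctl(semid, 0, IPC_RMID);
	return 0;
}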