Date: Thu, 3 Oct 2013 09:32:12 +0200
From: Ingo Molnar
To: Tim Chen
Cc: Ingo Molnar, Andrew Morton, Linus Torvalds, Andrea Arcangeli,
    Alex Shi, Andi Kleen, Michel Lespinasse, Davidlohr Bueso,
    Matthew R Wilcox, Dave Hansen, Peter Zijlstra, Rik van Riel,
    Peter Hurley, "Paul E. McKenney", Jason Low, Waiman Long,
    linux-kernel@vger.kernel.org, linux-mm
Subject: Re: [PATCH v8 0/9] rwsem performance optimizations
Message-ID: <20131003073212.GC5775@gmail.com>
References: <1380753493.11046.82.camel@schen9-DESK>
In-Reply-To: <1380753493.11046.82.camel@schen9-DESK>

* Tim Chen wrote:

> For version 8 of the patchset, we included the patch from Waiman to
> streamline wakeup operations and also optimize the MCS lock used in
> rwsem and mutex.

I'd feel a lot easier about this patch series if you also had
performance figures showing how mmap_sem is affected. These:

> Tim got the following improvement for exim mail server
> workload on 40 core system:
>
> Alex+Tim's patchset:        +4.8%
> Alex+Tim+Waiman's patchset: +5.3%

appear to be mostly related to the anon_vma->rwsem. But once that lock
is changed to an rwlock_t, this measurement falls away.

Peter Zijlstra suggested the following testcase:

===============================>
In fact, try something like this from userspace:

n-threads:

        pthread_mutex_lock(&mutex);
        foo = mmap();
        pthread_mutex_unlock(&mutex);

        /* work */

        pthread_mutex_lock(&mutex);
        munmap(foo);
        pthread_mutex_unlock(&mutex);

vs

n-threads:

        foo = mmap();
        /* work */
        munmap(foo);

I've had reports that the former was significantly faster than the
latter.
<===============================

This could be put into a standalone testcase (a sketch follows below),
or you could add it as a new subcommand of 'perf bench', which already
has some pthread code; see for example
tools/perf/bench/sched-messaging.c. Adding:

        perf bench mm threads

or so would be a natural thing to have.

Thanks,

        Ingo
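
A minimal standalone version of Peter's testcase might look like the
sketch below. The loop count, mapping size, the memset() "work" step
and the mmap-bench name and arguments are illustrative guesses, not
something specified in Peter's mail:

/*
 * mmap-bench.c: N threads doing mmap()/munmap() cycles, either
 * serialized by a userspace mutex (2nd arg == 1) or contending
 * directly on mmap_sem (2nd arg == 0).
 *
 * Build:  gcc -O2 -pthread mmap-bench.c -o mmap-bench
 * Run:    ./mmap-bench <nr-threads> <serialize: 0|1>
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define NR_LOOPS        10000
#define MAP_LEN         (128 * 1024)

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int serialize;

static void *worker(void *arg)
{
        int i;

        for (i = 0; i < NR_LOOPS; i++) {
                void *foo;

                if (serialize)
                        pthread_mutex_lock(&mutex);
                foo = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (serialize)
                        pthread_mutex_unlock(&mutex);
                if (foo == MAP_FAILED) {
                        perror("mmap");
                        exit(1);
                }

                /* work: touch every page so the mapping is populated */
                memset(foo, 1, MAP_LEN);

                if (serialize)
                        pthread_mutex_lock(&mutex);
                munmap(foo, MAP_LEN);
                if (serialize)
                        pthread_mutex_unlock(&mutex);
        }
        return NULL;
}

int main(int argc, char **argv)
{
        int i, nr_threads = argc > 1 ? atoi(argv[1]) : 4;
        pthread_t *threads;

        serialize = argc > 2 ? atoi(argv[2]) : 0;

        threads = calloc(nr_threads, sizeof(*threads));
        for (i = 0; i < nr_threads; i++)
                pthread_create(&threads[i], NULL, worker, NULL);
        for (i = 0; i < nr_threads; i++)
                pthread_join(threads[i], NULL);

        free(threads);
        return 0;
}

Comparing the wall-clock time of './mmap-bench 40 1' against
'./mmap-bench 40 0' before and after the series would show how the
mmap_sem side behaves, independently of the anon_vma->rwsem effects.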