Date: Sun, 1 Sep 2013 17:12:58 -0700
From: Linus Torvalds
To: Al Viro
Cc: Sedat Dilek, Waiman Long, Ingo Molnar, Benjamin Herrenschmidt,
    Jeff Layton, Miklos Szeredi, Thomas Gleixner, linux-fsdevel,
    Linux Kernel Mailing List, Peter Zijlstra, Steven Rostedt,
    Andi Kleen, "Chandramouleeswaran, Aswin", "Norton, Scott J"
Subject: Re: [PATCH v7 1/4] spinlock: A new lockref structure for lockless update of refcount
In-Reply-To: <20130901233005.GX13318@ZenIV.linux.org.uk>
References: <20130901212355.GU13318@ZenIV.linux.org.uk>
 <20130901233005.GX13318@ZenIV.linux.org.uk>

On Sun, Sep 1, 2013 at 4:30 PM, Al Viro wrote:
>
> Hrm... It excludes sharing between the locks, all right. AFAICS, that
> won't exclude sharing with plain per-cpu vars, will it?

Yes it will. DEFINE_PER_CPU_SHARED_ALIGNED not only aligns the data, it
also puts it in a separate section containing only other aligned data
entries.

So now the percpu address map around it looks like this:

  ...
  0000000000013a80 d call_single_queue
  0000000000013ac0 d cfd_data
  0000000000013b00 d files_lglock_lock
  0000000000013b40 d vfsmount_lock_lock
  0000000000013b80 d file_lock_lglock_lock
  0000000000013bc0 D softnet_data
  0000000000013d40 D __per_cpu_end
  ...

so there shouldn't be anything to share falsely with.

I'd like to say that the profile is bad, but it is *so* consistent, and
the profile data looks perfectly fine in every other way. I'm using
"-e cycles:pp", so it's using hardware profiling, and all the other
functions really do look correct.

It *is* one of the few locked accesses remaining, and it's clearly
getting called a lot (three calls per system call: two mntput's - one
for the root path, one for the result path - and one from
path_init -> rcu_walk_init), but up to 8% CPU time for basically that
one "lock xadd" instruction is damn odd.

I can't see how that could happen without seriously nasty cacheline
bouncing, but I can't see how *that* can happen when all the accesses
seem to be from the current CPU.

This is a new Haswell-based machine that I put together yesterday, and
I haven't used it for profiling before. So maybe it _is_ something odd
with the profiling after all, and atomic serializing instructions get
incorrect profile counts.

                 Linus
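
A minimal sketch of the per-cpu difference discussed above, in case it
helps: the variable names below are made up for illustration, and only
the two declaration macros themselves are the real kernel API.

  #include <linux/percpu.h>

  /*
   * Plain per-cpu variable: packed back to back with whatever other
   * per-cpu data the linker places next to it, so it can end up
   * sharing a cache line with an unrelated hot variable (false
   * sharing).
   */
  static DEFINE_PER_CPU(long, plain_counter);

  /*
   * Cacheline-aligned per-cpu variable: the macro marks the object
   * ____cacheline_aligned_in_smp and places it in a separate
   * "shared_aligned" per-cpu section together with only other aligned
   * entries, which is why the address map quoted above shows the
   * lglock locks sitting on their own 0x40-byte slots with nothing
   * packed in between.
   */
  static DEFINE_PER_CPU_SHARED_ALIGNED(long, aligned_counter);

Either flavour is then accessed the usual way, e.g. with
this_cpu_inc(aligned_counter) or per_cpu(aligned_counter, cpu).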