Subject: Re: [RFC 2/5] i3c: Add core I3C infrastructure
From: Boris Brezillon
To: Greg Kroah-Hartman
Cc: Wolfram Sang, linux-i2c@vger.kernel.org, Jonathan Corbet,
 linux-doc@vger.kernel.org, Arnd Bergmann, Przemyslaw Sroka,
 Arkadiusz Golec, Alan Douglas, Bartosz Folta, Damian Kos,
 Alicja Jurasik-Urbaniak, Jan Kotas, Cyprian Wronka,
 Alexandre Belloni, Thomas Petazzoni, Nishanth Menon, Rob Herring,
 Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala,
 devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Wed, 13 Dec 2017 17:20:43 +0100
Message-ID: <20171213172043.24e4d4bc@bbrezillon>
In-Reply-To: <20170802021327.GB23033@kroah.com>

Hi Greg,

On Tue, 1 Aug 2017 19:13:27 -0700
Greg Kroah-Hartman wrote:

> > > > Unless you see a good reason not to use a R/W lock, I'd like to
> > > > keep it this way, because master IPs are likely to implement
> > > > advanced queuing mechanisms (allowing one to queue new transfers
> > > > even if the master is already busy processing other requests),
> > > > and serializing
> > > > things at the framework level will just prevent us from using
> > > > this kind of optimization.
> > >
> > > Unless you can prove otherwise, using a rw lock is almost always
> > > worse than just a mutex.
> >
> > Is it still true when it's taken in non-exclusive mode most of the
> > time, and the time you spend in the critical section is
> > non-negligible?
> >
> > I won't pretend I know better than you do what is preferable; it's
> > just that the RW lock seemed appropriate to me for the situation I
> > tried to describe here.
>
> Again, measure it. If you can't measure it, then don't use it. Use a
> simple lock instead. Seriously, don't make it more complex until you
> really have to. It sounds like you didn't measure it at all, which
> isn't good, please do so.

I'm resurrecting this thread because I finally had the time to
implement message queuing in the Cadence I3C master driver. I ran a
test with 2 I3C devices on the bus, their drivers sending as many SDR
messages as they could in 10s. Here are the results:

         |   mutex   |   rwsem   |
 ---------------------------------
 dev1    |   19087   |   29532   |
 dev2    |   19341   |   29118   |
 =================================
 total   |   38428   |   58650   |
 msg/sec |   ~3843   |   ~5865   |

These results are not so surprising, since all normal transfers take
the lock in read mode, so there is no contention. I did not measure the
impact on performance when one maintenance operation takes the lock in
write mode while several normal transfers wait for it, but maintenance
operations are infrequent, and that is not where performance matters in
our use case.

I also ran the same test with only one device doing transfers on the
bus. This time the mutex wins, but the difference is small:

         |   mutex   |   rwsem   |
 ---------------------------------
 total   |   67116   |   66561   |
 msg/sec |   ~6712   |   ~6656   |

Let me know if you want more information on the test procedure.

Regards,

Boris