From: Vitaly Wool
Date: Sat, 22 Oct 2016 20:51:04 +0200
Subject: Re: [PATCH 2/3] z3fold: remove redundant locking
To: Dan Streetman
Cc: Linux-MM, linux-kernel, Andrew Morton

On Thu, Oct 20, 2016 at 10:15 PM, Dan Streetman wrote:
> On Wed, Oct 19, 2016 at 12:35 PM, Vitaly Wool wrote:
>> The per-pool z3fold spinlock should generally be taken only when
>> a non-atomic pool variable is modified. There's no need to take it
>> to map/unmap an object. This patch introduces a per-page lock that
>> will be used instead to protect per-page variables in the map/unmap
>> functions.
>
> I think the per-page lock must be held around almost all access to
> any page's zhdr data; previously that was protected by the pool lock.

Right, except for list operations.

At this point I think the per-page locks will have to be thought
through again. There is a nice performance gain from making the pool
spinlock an rwlock anyway, so I'll stick with the latest patchset,
fixing small things like the wrong unbuddied_nr increment in the
other patch.

Best regards,
   Vitaly
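
For illustration, here is a minimal sketch of the per-page locking
scheme the quoted patch description outlines. It is not the actual
patch: the field layout, handle_to_z3fold_header(), and chunk_offset()
are assumed placeholder names. The point is only that per-page state
sits behind a lock in the page's own header, so map() reads it without
ever touching pool->lock:

/*
 * Sketch only -- names are illustrative, not the real z3fold code.
 * Each z3fold page carries a header whose fields are protected by
 * a lock embedded in that header rather than by the pool lock.
 */
struct z3fold_header {
	spinlock_t page_lock;		/* protects the fields below */
	struct list_head buddy;		/* pool list linkage */
	unsigned short first_chunks;
	unsigned short middle_chunks;
	unsigned short last_chunks;
};

static void *z3fold_map(struct z3fold_pool *pool, unsigned long handle)
{
	/* hypothetical helper: handle -> per-page header */
	struct z3fold_header *zhdr = handle_to_z3fold_header(handle);
	void *addr;

	/*
	 * Per-page lock, not pool->lock: only this page's chunk
	 * counts are read here; no pool-wide state is touched.
	 */
	spin_lock(&zhdr->page_lock);
	addr = (void *)zhdr + chunk_offset(zhdr, handle);
	spin_unlock(&zhdr->page_lock);

	return addr;
}

Dan's objection above is that once zhdr fields stop being covered by
pool->lock, every path that touches them (not just map/unmap) must
take page_lock. The rwlock alternative Vitaly mentions sidesteps that:
readers such as map/unmap take read_lock(&pool->lock) concurrently,
while list-modifying paths serialize under write_lock().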