Date: Thu, 27 Aug 2020 13:30:40 +0100
From: Matthew Wilcox
To: Shaokun Zhang
Cc:
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yuqi Jin, Will Deacon, Mark Rutland
Subject: Re: [PATCH] fs: Optimized fget to improve performance
Message-ID: <20200827123040.GE14765@casper.infradead.org>
References: <1598523584-25601-1-git-send-email-zhangshaokun@hisilicon.com>
In-Reply-To: <1598523584-25601-1-git-send-email-zhangshaokun@hisilicon.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Aug 27, 2020 at 06:19:44PM +0800, Shaokun Zhang wrote:
> From: Yuqi Jin
>
> It is well known that the performance of atomic_add is better than that of
> atomic_cmpxchg.

I don't think that's well-known at all.

> +static inline bool get_file_unless_negative(atomic_long_t *v, long a)
> +{
> +	long c = atomic_long_read(v);
> +
> +	if (c <= 0)
> +		return 0;
> +
> +	return atomic_long_add_return(a, v) - 1;
> +}
> +
>  #define get_file_rcu_many(x, cnt)	\
> -	atomic_long_add_unless(&(x)->f_count, (cnt), 0)
> +	get_file_rcu_many((x), 1)
>  #define get_file_rcu(x)	get_file_rcu_many((x), 1)
>  #define file_count(x)	atomic_long_read(&(x)->f_count)

I think you should be proposing a patch to fix atomic_long_add_unless()
on arm64 instead.
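[Editor's note, for readers outside the thread: the `atomic_long_add_unless()` being discussed is conventionally built on a compare-and-swap retry loop, which refuses to add once it observes the forbidden value. Below is a userspace sketch of that pattern using C11 atomics; it is illustrative only, not the kernel's implementation, and the `add_unless` name and types are chosen here for the example. Note that it differs from the quoted patch's helper, which does a plain read followed by an unconditional `add_return`, leaving a window in which the count can drop to the forbidden value between the two operations.]

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Sketch of the usual add-unless pattern: add `a` to *v unless the
 * current value is `u`.  Returns true iff the add was performed.
 * On CAS failure, `c` is reloaded with the current value and we retry,
 * so the `c == u` check is re-evaluated against a fresh snapshot.
 */
static bool add_unless(atomic_long *v, long a, long u)
{
	long c = atomic_load(v);

	do {
		if (c == u)	/* forbidden value observed: refuse to add */
			return false;
	} while (!atomic_compare_exchange_weak(v, &c, c + a));

	return true;
}
```

Used as a file-count guard (the `u == 0` case in `get_file_rcu_many`), the loop never increments a count it has seen hit zero, whereas read-then-add can resurrect a reference whose count was concurrently dropped; any performance fix has to preserve that property.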