Date: Mon, 29 Oct 2018 10:55:16 +0000
From: Will Deacon
To: Ashish Mhetre
Cc: mark.rutland@arm.com, linux-arm-kernel@lists.infradead.org,
	linux-tegra@vger.kernel.org, avanbrunt@nvidia.com, Snikam@nvidia.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH V3] arm64: Don't flush tlb while clearing the accessed bit
Message-ID: <20181029105515.GD14127@arm.com>
References: <1540805158-618-1-git-send-email-amhetre@nvidia.com>
In-Reply-To: <1540805158-618-1-git-send-email-amhetre@nvidia.com>

On Mon, Oct 29, 2018 at 02:55:58PM +0530, Ashish Mhetre wrote:
> From: Alex Van Brunt <avanbrunt@nvidia.com>
> 
> The accessed bit is used to age a page, and the generic implementation
> flushes the TLB while clearing it.
> Flushing the TLB is overhead on ARM64 because access flag faults don't
> get translation table entries cached in the TLBs, so the flush is not
> necessary here. Clearing the accessed bit without flushing the TLB
> doesn't cause data corruption on ARM64.
> In our case, with this patch the speed of reading from a fast NVMe/SSD
> through PCIe improved by 10%-15% and writing improved by 20%-40%.
> So, as a performance optimisation, don't flush the TLB when clearing
> the accessed bit on ARM64.
> x86 made the same optimization even though their TLB invalidate is much
> faster as it doesn't broadcast to other CPUs.

Ok, but they may end up using IPIs, so let's avoid these vague
performance claims in the log unless they're backed up with numbers.

> Please refer to:
> 'commit b13b1d2d8692 ("x86/mm: In the PTE swapout page reclaim case clear
> the accessed bit instead of flushing the TLB")'
> 
> Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
> Signed-off-by: Ashish Mhetre <amhetre@nvidia.com>
> ---
>  arch/arm64/include/asm/pgtable.h | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 2ab2031..080d842 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -652,6 +652,26 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>  	return __ptep_test_and_clear_young(ptep);
>  }
> 
> +#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
> +static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
> +					 unsigned long address, pte_t *ptep)
> +{
> +	/*
> +	 * On ARM64 CPUs, clearing the accessed bit without a TLB flush
> +	 * doesn't cause data corruption. [ It could cause incorrect
> +	 * page aging and the (mistaken) reclaim of hot pages, but the
> +	 * chance of that should be relatively low. ]
> +	 *
> +	 * So as a performance optimization don't flush the TLB when
> +	 * clearing the accessed bit, it will eventually be flushed by
> +	 * a context switch or a VM operation anyway. [ In the rare
> +	 * event of it not getting flushed for a long time the delay
> +	 * shouldn't really matter because there's no real memory
> +	 * pressure for swapout to react to. ]

This is blindly copied from x86 and isn't true for us: we don't
invalidate the TLB on context switch. That means our window for keeping
stale entries around is potentially much bigger, so this might not be a
great idea.

If we roll a TLB invalidation routine without the trailing DSB, what
sort of performance does that get you?

Will
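
[ For reference, below is a rough sketch of the kind of "no trailing
  DSB" invalidation routine suggested above, modelled on the arm64
  flush_tlb_page() of this era (arch/arm64/include/asm/tlbflush.h).
  The flush_tlb_page_nosync() name is illustrative only, not an
  existing kernel interface at the time of this thread. ]

static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
					 unsigned long uaddr)
{
	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));

	/* Order the PTE update before the TLBI, as flush_tlb_page() does. */
	dsb(ishst);
	/* Last-level, inner-shareable invalidation by VA and ASID. */
	__tlbi(vale1is, addr);
	__tlbi_user(vale1is, addr);
	/*
	 * Deliberately no trailing dsb(ish): the invalidation is issued
	 * but not waited on, so a caller clearing the accessed bit does
	 * not stall until the broadcast completes.
	 */
}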