From: Ashish Mhetre
Subject: [PATCH V2] arm64: Don't flush tlb while clearing the accessed bit
Date: Mon, 29 Oct 2018 11:59:59 +0530
Message-ID: <1540794599-30922-1-git-send-email-amhetre@nvidia.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

From: Alex Van Brunt

The accessed bit is used to age a page, and the generic implementation
flushes the TLB while clearing it. On ARM64 that flush is unnecessary
overhead: access flag faults don't get translation table entries cached
into the TLB, and clearing the accessed bit without flushing the TLB
doesn't cause data corruption on ARM64. [It may cause incorrect page
aging, but the chances of that should be relatively low.]

In our case, with this patch the speed of reading from a fast NVMe SSD
through PCIe improved by 10%~15% and writing improved by 20%~40%. So,
as a performance optimisation, don't flush the TLB when clearing the
accessed bit on ARM64.

x86 made the same optimization even though its TLB invalidate is much
faster, as it doesn't broadcast to other CPUs.

Signed-off-by: Alex Van Brunt
Signed-off-by: Ashish Mhetre
---
v2: Added comments about why flushing is not needed while clearing the
accessed bit

 arch/arm64/include/asm/pgtable.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 2ab2031..33e1940 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -652,6 +652,22 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 	return __ptep_test_and_clear_young(ptep);
 }
 
+#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
+static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+					 unsigned long address, pte_t *ptep)
+{
+	/*
+	 * Flushing a TLB is overhead on ARM64 as access flag faults don't get
+	 * translation table entries cached into TLB's. Flushing TLB is not
+	 * necessary for this. Clearing the accessed bit without flushing TLB
+	 * doesn't cause data corruption on ARM64. [It may cause incorrect page
+	 * aging but chances of this should be comparatively low.]
+	 * So as a performance optimization don't flush the TLB when clearing
+	 * the accessed bit.
+	 */
+	return ptep_test_and_clear_young(vma, address, ptep);
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
 static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
-- 
2.7.4
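
For reference, the generic fallback that this override replaces keeps the
TLB flush. It looks roughly like the version in mm/pgtable-generic.c; the
sketch below is illustrative only and is not part of this patch:

int ptep_clear_flush_young(struct vm_area_struct *vma,
			   unsigned long address, pte_t *ptep)
{
	int young;

	/*
	 * Clear the accessed bit; if it was set, invalidate the stale
	 * TLB entry so the next access goes through the updated PTE.
	 * This flush_tlb_page() call is the cost the arm64 override
	 * above avoids.
	 */
	young = ptep_test_and_clear_young(vma, address, ptep);
	if (young)
		flush_tlb_page(vma, address);

	return young;
}

The patch drops only the flush_tlb_page() step; the accessed bit is still
cleared, so the only downside is the somewhat less accurate page aging
noted in the commit message.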