Subject: Re: [PATCH 0/5] Speed up mremap on large regions
From: Lokesh Gidra
Date: Wed, 30 Sep 2020 15:42:17 -0700
To: "Kirill A. Shutemov"
Cc: Kalesh Singh, Suren Baghdasaryan, Minchan Kim, Joel Fernandes,
    kernel-team@android.com, Catalin Marinas, Will Deacon, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
    Andrew Morton, Shuah Khan, "Aneesh Kumar K.V", Kees Cook,
    Peter Zijlstra, Sami Tolvanen, Masahiro Yamada, Arnd Bergmann,
    Frederic Weisbecker, Krzysztof Kozlowski, Hassan Naveed,
    Christian Brauner, Mark Rutland, Mike Rapoport, Gavin Shan, Zhenyu Ye,
    Jia He, John Hubbard, William Kucharski, Sandipan Das, Ralph Campbell,
    Mina Almasry, Ram Pai, Dave Hansen, Kamalesh Babulal,
    Masami Hiramatsu, Brian Geffon, SeongJae Park, linux-kernel,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kselftest@vger.kernel.org
In-Reply-To: <20200930223207.5xepuvu6wr6xw5bb@black.fi.intel.com>
References: <20200930222130.4175584-1-kaleshsingh@google.com>
            <20200930223207.5xepuvu6wr6xw5bb@black.fi.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov wrote:
>
> On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> > mremap time can be optimized by moving entries at the PMD/PUD level if
> > the source and destination addresses are PMD/PUD-aligned and
> > PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> > x86. Other architectures where this type of move is supported and known
> > to be safe can also opt in to these optimizations by enabling
> > HAVE_MOVE_PMD and HAVE_MOVE_PUD.
> >
> > Observed performance improvements for remapping a PUD-aligned 1GB-sized
> > region on x86 and arm64:
> >
> >   - HAVE_MOVE_PMD is already enabled on x86 : N/A
> >   - Enabling HAVE_MOVE_PUD on x86          : ~13x speed up
> >
> >   - Enabling HAVE_MOVE_PMD on arm64        : ~ 8x speed up
> >   - Enabling HAVE_MOVE_PUD on arm64        : ~19x speed up
> >
> > Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD give a total of
> > ~150x speed up on arm64.
>
> Is there a *real* workload that benefits from HAVE_MOVE_PUD?
>

We have a Java garbage collector under development which requires moving
physical pages of a multi-gigabyte heap using mremap. During this move, the
application threads have to be paused for correctness. It is critical to
keep this pause as short as possible to avoid jitter during user
interaction. This is where HAVE_MOVE_PUD will greatly help.

> --
>  Kirill A. Shutemov
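
For reference, below is a minimal userspace sketch of the kind of
measurement described in the cover letter: timing mremap() of a 1GB,
PUD-aligned, PUD-sized anonymous mapping. This is not the selftest that
ships with the series; the address hints, the MAP_FIXED_NOREPLACE flag
(Linux >= 4.17), and the 1GB PUD size (4K pages on x86_64/arm64) are
illustrative assumptions.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define GIB       (1UL << 30)              /* PUD size with 4K pages (assumption) */
#define SRC_HINT  ((void *)(4UL * GIB))    /* illustrative PUD-aligned source */
#define DST_HINT  ((void *)(16UL * GIB))   /* illustrative PUD-aligned destination */

int main(void)
{
	struct timespec t0, t1;

	/* Map and fault in 1GB of anonymous memory at a PUD-aligned address. */
	void *src = mmap(SRC_HINT, GIB, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);
	if (src == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(src, 0xab, GIB);

	/* Move the whole region to another PUD-aligned address and time it. */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	void *dst = mremap(src, GIB, GIB, MREMAP_MAYMOVE | MREMAP_FIXED, DST_HINT);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	if (dst == MAP_FAILED) {
		perror("mremap");
		return 1;
	}

	printf("mremap of 1GB took %ld ns\n",
	       (long)((t1.tv_sec - t0.tv_sec) * 1000000000L +
		      (t1.tv_nsec - t0.tv_nsec)));
	munmap(dst, GIB);
	return 0;
}

Because both addresses and the length are PUD-aligned and PUD-sized, a
kernel with HAVE_MOVE_PUD can move the mapping by updating one PUD entry
instead of walking the lower-level page tables, which is where the quoted
speedups come from.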