From: Nadav Amit
To: Andrew Morton
Cc: LKML, Linux-MM, Peter Xu, Nadav Amit, Andrea Arcangeli,
    Andrew Cooper, Andy Lutomirski, Dave Hansen, Peter Zijlstra,
    Thomas Gleixner, Will Deacon, Yu Zhao, Nick Piggin, x86@kernel.org
Subject: [PATCH 0/2] mm/mprotect: avoid unnecessary TLB flushes
Date: Sat, 25 Sep 2021 13:54:21 -0700
Message-Id: <20210925205423.168858-1-namit@vmware.com>

From: Nadav Amit

This patch set is based on a very small subset of an old RFC (see the
link below) and is intended to avoid TLB flushes when they are not
architecturally necessary. Specifically, memory unprotection through
userfaultfd (i.e., via the userfaultfd ioctl) triggers a TLB flush even
though no architectural data, other than a software flag, is updated.
This overhead shows up in my development workload profiles.

Instead of tailoring a solution for this specific scenario, it is
arguably better to use this opportunity to consolidate the interfaces
that are used for TLB batching: avoid the open-coded
[inc|dec]_tlb_flush_pending() calls and use the tlb_[gather|finish]_mmu()
interface instead.

Avoiding the TLB flushes is done very conservatively (unlike the RFC):

1. According to the x86 specifications, no flushes are necessary on
   permission promotion or on changes to software bits.
2. Linux already does not flush PTEs after the access bit is cleared.

I considered the feedback of Andy Lutomirski and Andrew Cooper on the
RFC regarding skipping TLB invalidations when RW is cleared for clean
PTEs. Although the bugs they pointed out can be easily addressed, I am
concerned because I could not find a specification that explicitly
confirms this optimization is valid.
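To make rule (1) above concrete, here is a minimal, self-contained
sketch that models the decision as a small C program: a flush can be
skipped only when the PTE change is confined to software-available bits
or is a pure permission promotion (e.g., setting RW). The bit masks and
the helper name pte_change_needs_flush() are illustrative assumptions
for this sketch, not the actual helper the patches add to
arch/x86/include/asm/tlbflush.h.

/*
 * Illustrative model only: decide whether an x86 PTE update requires a
 * TLB flush under the conservative policy described above. The helper
 * name and exact masks are assumptions, not the patch's interface.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define X86_PTE_PRESENT   (1ull << 0)
#define X86_PTE_RW        (1ull << 1)   /* writable */
#define X86_PTE_ACCESSED  (1ull << 5)
/* Bits 9-11 are software-available ("ignored") in the x86 PTE format. */
#define X86_PTE_SW_BITS   (0x7ull << 9)

static bool pte_change_needs_flush(uint64_t old_pte, uint64_t new_pte)
{
	uint64_t changed = old_pte ^ new_pte;

	/* A non-present PTE cannot have a cached TLB entry. */
	if (!(old_pte & X86_PTE_PRESENT))
		return false;

	/* Software bits are invisible to the hardware page walker. */
	changed &= ~X86_PTE_SW_BITS;

	/* Setting RW only grants access (permission promotion). */
	if ((changed & X86_PTE_RW) && (new_pte & X86_PTE_RW))
		changed &= ~X86_PTE_RW;

	/* Anything else (demotion, frame change, etc.): stay conservative. */
	return changed != 0;
}

int main(void)
{
	/* Present, accessed, with an arbitrary frame-address bit set. */
	uint64_t pte = X86_PTE_PRESENT | X86_PTE_ACCESSED | 0x1000;

	/* userfaultfd-style unprotect: only a software bit changes. */
	printf("sw bit only: flush=%d\n",
	       pte_change_needs_flush(pte, pte | (1ull << 9)));

	/* Permission promotion: read-only -> read-write. */
	printf("set RW:      flush=%d\n",
	       pte_change_needs_flush(pte, pte | X86_PTE_RW));

	/* Permission demotion: clearing RW still needs a flush. */
	printf("clear RW:    flush=%d\n",
	       pte_change_needs_flush(pte | X86_PTE_RW, pte));

	return 0;
}

The first two cases print flush=0, matching the patches' claim that such
changes leave nothing stale in the TLB; the demotion case stays
conservative and flushes.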
--

RFC -> v1:
 * Do not skip TLB flushes when clearing RW on clean PTEs
 * Do not defer huge PMD flush as it is already done inline

Link: https://lore.kernel.org/lkml/20210131001132.3368247-1-namit@vmware.com/

Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org

Nadav Amit (2):
  mm/mprotect: use mmu_gather
  mm/mprotect: do not flush on permission promotion

 arch/x86/include/asm/tlbflush.h | 40 ++++++++++++++++++++++++++
 include/asm-generic/tlb.h       |  4 +++
 mm/mprotect.c                   | 51 +++++++++++++++++++--------------
 3 files changed, 73 insertions(+), 22 deletions(-)

-- 
2.25.1