From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Will Deacon, "Aneesh Kumar K . V", Andrew Morton, Nick Piggin, Peter Zijlstra
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 0/4] riscv: support fast gup
Date: Wed, 20 Dec 2023 01:50:42 +0800
Message-Id: <20231219175046.2496-1-jszhang@kernel.org>

This series adds fast gup support to riscv.

The first patch fixes a bug in __p*d_free_tlb(): per the RISC-V privileged
spec, if a non-leaf PTE (i.e. a pmd, pud or p4d entry) is modified, an
sfence.vma is required. The second patch is a preparation patch.

The last two patches do the real work. In order to implement fast gup we
need to ensure that the page table walker is protected from page table
pages being freed from under it. The riscv situation is more complicated
than on other architectures: some riscv platforms may use IPIs to perform
TLB shootdown (for example, platforms which support AIA; on these,
riscv_ipi_for_rfence is usually true), while other riscv platforms rely on
the SBI to perform TLB shootdown (on these, riscv_ipi_for_rfence is
usually false). To keep software page table walkers safe in both cases we
switch to RCU-based table free (MMU_GATHER_RCU_TABLE_FREE).
See the comment below 'ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE' in
include/asm-generic/tlb.h for more details. This patch enables
MMU_GATHER_RCU_TABLE_FREE, then uses:

 * tlb_remove_page_ptdesc() on platforms which use IPIs to perform TLB
   shootdown;
 * tlb_remove_ptdesc() on platforms which use the SBI to perform TLB
   shootdown.

In both cases, disabling interrupts blocks the free and thus protects the
fast gup page walker. So after the 3rd patch everything is well prepared;
the last patch selects HAVE_FAST_GUP if MMU.

Jisheng Zhang (4):
  riscv: tlb: fix __p*d_free_tlb()
  riscv: tlb: convert __p*d_free_tlb() to inline functions
  riscv: enable MMU_GATHER_RCU_TABLE_FREE for SMP && MMU
  riscv: enable HAVE_FAST_GUP if MMU

 arch/riscv/Kconfig               |  2 ++
 arch/riscv/include/asm/pgalloc.h | 53 +++++++++++++++++++++++++++-----
 arch/riscv/include/asm/pgtable.h |  6 ++++
 arch/riscv/include/asm/tlb.h     | 18 +++++++++++
 4 files changed, 71 insertions(+), 8 deletions(-)

-- 
2.40.0
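For reviewers less familiar with the fast-gup/RCU-table-free interaction
described above, here is a simplified, non-compilable C pseudocode sketch
of the two sides of the race. The function names tlb_remove_table() and
lockless_pages_from_mm() are from mainline; the bodies are abbreviated and
illustrative only, not code from this series:

```c
/*
 * Walker side (cf. lockless_pages_from_mm() in mm/gup.c):
 * fast gup walks page tables without taking mmap_lock.
 */
local_irq_save(flags);          /* blocks TLB-shootdown IPIs and acts as
                                   an implicit RCU read-side section   */
walk_page_tables_locklessly();  /* may dereference pmd/pud/p4d pages   */
local_irq_restore(flags);

/*
 * Freeing side (cf. tlb_remove_table() with
 * CONFIG_MMU_GATHER_RCU_TABLE_FREE): the table page is not freed
 * synchronously.
 */
tlb_remove_table(tlb, table);   /* defers the free behind an RCU grace
                                   period (or an IPI broadcast), so a
                                   walker running with IRQs disabled
                                   cannot see the page freed under it */
```

This is why both the IPI-based (riscv_ipi_for_rfence == true) and the
SBI-based shootdown paths end up safe once the series switches riscv to
MMU_GATHER_RCU_TABLE_FREE: either the IPI cannot be delivered, or the RCU
callback cannot run, until the walker re-enables interrupts.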