Date: Tue, 22 Feb 2022 21:21:59 -0800
In-Reply-To: <20220223052223.1202152-1-junaids@google.com>
Message-Id: <20220223052223.1202152-24-junaids@google.com>
Mime-Version: 1.0
References: <20220223052223.1202152-1-junaids@google.com>
X-Mailer: git-send-email 2.35.1.473.g83b2b277ed-goog
Subject: [RFC PATCH 23/47] mm: asi: Add support for mapping all userspace memory into ASI
From: Junaid Shahid <junaids@google.com>
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, pbonzini@redhat.com, jmattson@google.com,
        pjt@google.com, oweisse@google.com, alexandre.chartre@oracle.com,
        rppt@linux.ibm.com, dave.hansen@linux.intel.com, peterz@infradead.org,
        tglx@linutronix.de, luto@kernel.org, linux-mm@kvack.org
Content-Type: text/plain; charset="UTF-8"

This adds a new ASI class flag, ASI_MAP_ALL_USERSPACE, which, if set,
causes all userspace addresses to be automatically mapped into that ASI
address space. This is achieved by lazily cloning the userspace PGD
entries during page faults encountered while running in that restricted
address space. When a userspace PGD entry is cleared (e.g. in munmap()),
we go through all restricted address spaces that have the
ASI_MAP_ALL_USERSPACE flag and clear the corresponding entry in those
address spaces as well.

Signed-off-by: Junaid Shahid <junaids@google.com>
---
 arch/x86/include/asm/asi.h |  2 +
 arch/x86/mm/asi.c          | 81 ++++++++++++++++++++++++++++++++++++++
 include/asm-generic/asi.h  |  7 ++++
 mm/memory.c                |  2 +
 4 files changed, 92 insertions(+)
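
As an illustration, a class that wants all of userspace available inside
its restricted address space would set the new flag when registering the
class. The sketch below is hypothetical and not part of this patch; it
assumes the asi_register_class()/struct asi_hooks interface introduced
earlier in this series, and the "example" class exists only for the sake
of the illustration.

/* Hypothetical sketch only, not part of the patch. */
static const struct asi_hooks example_hooks = {};

static int example_asi_index;

static int __init example_register_asi_class(void)
{
        /* Request lazy mapping of all userspace PGD entries. */
        example_asi_index =
                asi_register_class("example",
                                   ASI_MAP_STANDARD_NONSENSITIVE |
                                   ASI_MAP_ALL_USERSPACE,
                                   &example_hooks);
        return example_asi_index < 0 ? example_asi_index : 0;
}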

diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h
index 2dc465f78bcc..062ccac07fd9 100644
--- a/arch/x86/include/asm/asi.h
+++ b/arch/x86/include/asm/asi.h
@@ -68,6 +68,8 @@ void asi_unmap(struct asi *asi, void *addr, size_t len, bool flush_tlb);
 void asi_flush_tlb_range(struct asi *asi, void *addr, size_t len);
 void asi_sync_mapping(struct asi *asi, void *addr, size_t len);
 void asi_do_lazy_map(struct asi *asi, size_t addr);
+void asi_clear_user_pgd(struct mm_struct *mm, size_t addr);
+void asi_clear_user_p4d(struct mm_struct *mm, size_t addr);
 
 static inline void asi_init_thread_state(struct thread_struct *thread)
 {
diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index ac35323193a3..a3d96be76fa9 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -702,6 +702,41 @@ static bool is_addr_in_local_nonsensitive_range(size_t addr)
        addr < VMALLOC_GLOBAL_NONSENSITIVE_START;
 }
 
+static void asi_clone_user_pgd(struct asi *asi, size_t addr)
+{
+        pgd_t *src = pgd_offset_pgd(asi->mm->pgd, addr);
+        pgd_t *dst = pgd_offset_pgd(asi->pgd, addr);
+        pgdval_t old_src, curr_src;
+
+        if (pgd_val(*dst))
+                return;
+
+        VM_BUG_ON(!irqs_disabled());
+
+        /*
+         * This synchronizes against the PGD entry getting cleared by
+         * free_pgd_range(). That path has the following steps:
+         *   1. pgd_clear
+         *   2. asi_clear_user_pgd
+         *   3. Remote TLB Flush
+         *   4. Free page tables
+         *
+         * (3) will be blocked for the duration of this function because the
+         * IPI will remain pending until interrupts are re-enabled.
+         *
+         * The following loop ensures that if we read the PGD value before
+         * (1) and write it after (2), we will re-read the value and write
+         * the new updated value.
+         */
+        curr_src = pgd_val(*src);
+        do {
+                set_pgd(dst, __pgd(curr_src));
+                smp_mb();
+                old_src = curr_src;
+                curr_src = pgd_val(*src);
+        } while (old_src != curr_src);
+}
+
 void asi_do_lazy_map(struct asi *asi, size_t addr)
 {
        if (!static_cpu_has(X86_FEATURE_ASI) || !asi)
@@ -710,6 +745,9 @@ void asi_do_lazy_map(struct asi *asi, size_t addr)
        if ((asi->class->flags & ASI_MAP_STANDARD_NONSENSITIVE) &&
            is_addr_in_local_nonsensitive_range(addr))
                asi_clone_pgd(asi->pgd, asi->mm->asi[0].pgd, addr);
+        else if ((asi->class->flags & ASI_MAP_ALL_USERSPACE) &&
+                 addr < TASK_SIZE_MAX)
+                asi_clone_user_pgd(asi, addr);
 }
 
 /*
@@ -766,3 +804,46 @@ void __init asi_vmalloc_init(void)
        VM_BUG_ON(vmalloc_local_nonsensitive_end >= VMALLOC_END);
        VM_BUG_ON(vmalloc_global_nonsensitive_start <= VMALLOC_START);
 }
+
+static void __asi_clear_user_pgd(struct mm_struct *mm, size_t addr)
+{
+        uint i;
+
+        if (!static_cpu_has(X86_FEATURE_ASI) || !mm_asi_enabled(mm))
+                return;
+
+        /*
+         * This function is called right after pgd_clear/p4d_clear.
+         * We need to be sure that the preceding pXd_clear is visible before
+         * the ASI pgd clears below. Compare with asi_clone_user_pgd().
+         */
+        smp_mb__before_atomic();
+
+        /*
+         * We need to ensure that the ASI PGD tables do not get freed from
+         * under us. We could potentially use RCU to avoid that, but since
+         * this path is probably not going to be too performance sensitive,
+         * we just acquire the lock to block asi_destroy().
+         */
+        mutex_lock(&mm->asi_init_lock);
+
+        for (i = 1; i < ASI_MAX_NUM; i++)
+                if (mm->asi[i].class &&
+                    (mm->asi[i].class->flags & ASI_MAP_ALL_USERSPACE))
+                        set_pgd(pgd_offset_pgd(mm->asi[i].pgd, addr),
+                                native_make_pgd(0));
+
+        mutex_unlock(&mm->asi_init_lock);
+}
+
+void asi_clear_user_pgd(struct mm_struct *mm, size_t addr)
+{
+        if (pgtable_l5_enabled())
+                __asi_clear_user_pgd(mm, addr);
+}
+
+void asi_clear_user_p4d(struct mm_struct *mm, size_t addr)
+{
+        if (!pgtable_l5_enabled())
+                __asi_clear_user_pgd(mm, addr);
+}
diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h
index 7c50d8b64fa4..8513d0d7865a 100644
--- a/include/asm-generic/asi.h
+++ b/include/asm-generic/asi.h
@@ -6,6 +6,7 @@
 
 /* ASI class flags */
 #define ASI_MAP_STANDARD_NONSENSITIVE 1
+#define ASI_MAP_ALL_USERSPACE         2
 
 #ifndef CONFIG_ADDRESS_SPACE_ISOLATION
 
@@ -85,6 +86,12 @@ void asi_unmap(struct asi *asi, void *addr, size_t len, bool flush_tlb) { }
 static inline
 void asi_do_lazy_map(struct asi *asi, size_t addr) { }
 
+static inline
+void asi_clear_user_pgd(struct mm_struct *mm, size_t addr) { }
+
+static inline
+void asi_clear_user_p4d(struct mm_struct *mm, size_t addr) { }
+
 static inline
 void asi_flush_tlb_range(struct asi *asi, void *addr, size_t len) { }
 
diff --git a/mm/memory.c b/mm/memory.c
index 8f1de811a1dc..667ece86e051 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -296,6 +296,7 @@ static inline void free_pud_range(struct mmu_gather *tlb, p4d_t *p4d,
 
        pud = pud_offset(p4d, start);
        p4d_clear(p4d);
+        asi_clear_user_p4d(tlb->mm, start);
        pud_free_tlb(tlb, pud, start);
        mm_dec_nr_puds(tlb->mm);
 }
@@ -330,6 +331,7 @@ static inline void free_p4d_range(struct mmu_gather *tlb, pgd_t *pgd,
 
        p4d = p4d_offset(pgd, start);
        pgd_clear(pgd);
+        asi_clear_user_pgd(tlb->mm, start);
        p4d_free_tlb(tlb, p4d, start);
 }
 
-- 
2.35.1.473.g83b2b277ed-goog
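
For anyone modeling the asi_clone_user_pgd() retry loop outside the
kernel, the following stand-alone user-space sketch walks through the
problematic interleaving described in the comment above (the cloner reads
the source entry before pgd_clear() and writes the destination after
asi_clear_user_pgd()). It is an illustration only: plain uint64_t
variables stand in for PGD entries, and the memory barriers, disabled
interrupts, and pending-IPI reasoning of the real code are not modeled.

#include <stdint.h>
#include <stdio.h>

static uint64_t unrestricted_pgd;   /* models the source entry, mm->pgd[i]  */
static uint64_t restricted_pgd;     /* models the destination, asi->pgd[i]  */

/* Models the retry loop in asi_clone_user_pgd(). */
static void clone_user_pgd(uint64_t first_read)
{
        uint64_t old_src, curr_src = first_read;

        do {
                restricted_pgd = curr_src;      /* set_pgd(dst, ...)  */
                old_src = curr_src;
                curr_src = unrestricted_pgd;    /* re-read the source */
        } while (old_src != curr_src);
}

int main(void)
{
        unrestricted_pgd = 0x1234;              /* a live userspace PGD entry */

        /* The cloner reads the source entry... */
        uint64_t stale_read = unrestricted_pgd;

        /* ...then the freeing path runs steps (1) and (2). */
        unrestricted_pgd = 0;                   /* (1) pgd_clear()            */
        restricted_pgd = 0;                     /* (2) asi_clear_user_pgd()   */

        /*
         * Step (3), the remote TLB flush, is held off in the real code until
         * the fault handler re-enables interrupts, so the retry loop runs
         * before the tables are freed in step (4).
         */
        clone_user_pgd(stale_read);

        /* Without the re-read, this would print the stale 0x1234. */
        printf("restricted pgd entry: %#llx\n",
               (unsigned long long)restricted_pgd);
        return 0;
}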