From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra
Cc: x86@kernel.org, Kostya Serebryany, Andrey Ryabinin, Andrey Konovalov,
    Alexander Potapenko, Taras Madan, Dmitry Vyukov, "H. J. Lu", Andi Kleen,
    Rick Edgecombe, Bharata B Rao, Jacob Pan, Ashok Raj, Linus Torvalds,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Shutemov" Subject: [PATCHv13 10/16] x86/mm/iommu/sva: Make LAM and SVA mutually exclusive Date: Tue, 27 Dec 2022 06:08:23 +0300 Message-Id: <20221227030829.12508-11-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.38.2 In-Reply-To: <20221227030829.12508-1-kirill.shutemov@linux.intel.com> References: <20221227030829.12508-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-4.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org IOMMU and SVA-capable devices know nothing about LAM and only expect canonical addresses. An attempt to pass down tagged pointer will lead to address translation failure. By default do not allow to enable both LAM and use SVA in the same process. The new ARCH_FORCE_TAGGED_SVA arch_prctl() overrides the limitation. By using the arch_prctl() userspace takes responsibility to never pass tagged address to the device. Signed-off-by: Kirill A. Shutemov Reviewed-by: Ashok Raj Reviewed-by: Jacob Pan --- arch/x86/include/asm/mmu.h | 2 ++ arch/x86/include/asm/mmu_context.h | 6 ++++++ arch/x86/include/uapi/asm/prctl.h | 1 + arch/x86/kernel/process_64.c | 7 +++++++ drivers/iommu/iommu-sva.c | 4 ++++ include/linux/mmu_context.h | 7 +++++++ 6 files changed, 27 insertions(+) diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h index 54e4a3e9b5c5..90d20679e4d7 100644 --- a/arch/x86/include/asm/mmu.h +++ b/arch/x86/include/asm/mmu.h @@ -14,6 +14,8 @@ #define MM_CONTEXT_HAS_VSYSCALL 1 /* Do not allow changing LAM mode */ #define MM_CONTEXT_LOCK_LAM 2 +/* Allow LAM and SVA coexisting */ +#define MM_CONTEXT_FORCE_TAGGED_SVA 3 /* * x86 has arch-specific MMU state beyond what lives in mm_struct. 
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 7f9f9978c811..4bc95c35cbd3 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -114,6 +114,12 @@ static inline void mm_reset_untag_mask(struct mm_struct *mm)
 	mm->context.untag_mask = -1UL;
 }
 
+#define arch_pgtable_dma_compat arch_pgtable_dma_compat
+static inline bool arch_pgtable_dma_compat(struct mm_struct *mm)
+{
+	return !mm_lam_cr3_mask(mm) ||
+		test_bit(MM_CONTEXT_FORCE_TAGGED_SVA, &mm->context.flags);
+}
 #else
 
 static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h
index a31e27b95b19..eb290d89cb32 100644
--- a/arch/x86/include/uapi/asm/prctl.h
+++ b/arch/x86/include/uapi/asm/prctl.h
@@ -23,5 +23,6 @@
 #define ARCH_GET_UNTAG_MASK		0x4001
 #define ARCH_ENABLE_TAGGED_ADDR		0x4002
 #define ARCH_GET_MAX_TAG_BITS		0x4003
+#define ARCH_FORCE_TAGGED_SVA		0x4004
 
 #endif /* _ASM_X86_PRCTL_H */
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 09e7f3d3fb5c..add85615d5ae 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -756,6 +756,10 @@ static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
 	if (test_bit(MM_CONTEXT_LOCK_LAM, &mm->context.flags))
 		return -EBUSY;
 
+	if (mm_valid_pasid(mm) &&
+	    !test_bit(MM_CONTEXT_FORCE_TAGGED_SVA, &mm->context.flags))
+		return -EINVAL;
+
 	if (mmap_write_lock_killable(mm))
 		return -EINTR;
 
@@ -872,6 +876,9 @@ long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2)
 				(unsigned long __user *)arg2);
 	case ARCH_ENABLE_TAGGED_ADDR:
 		return prctl_enable_tagged_addr(task->mm, arg2);
+	case ARCH_FORCE_TAGGED_SVA:
+		set_bit(MM_CONTEXT_FORCE_TAGGED_SVA, &task->mm->context.flags);
+		return 0;
 	case ARCH_GET_MAX_TAG_BITS:
 		if (!cpu_feature_enabled(X86_FEATURE_LAM))
 			return put_user(0, (unsigned long __user *)arg2);
diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
index 4ee2929f0d7a..dd76a1a09cf7 100644
--- a/drivers/iommu/iommu-sva.c
+++ b/drivers/iommu/iommu-sva.c
@@ -2,6 +2,7 @@
 /*
  * Helpers for IOMMU drivers implementing SVA
  */
+#include <linux/mmu_context.h>
 #include <linux/mutex.h>
 #include <linux/sched/mm.h>
 #include <linux/iommu.h>
@@ -32,6 +33,9 @@ int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max)
 	    min == 0 || max < min)
 		return -EINVAL;
 
+	if (!arch_pgtable_dma_compat(mm))
+		return -EBUSY;
+
 	mutex_lock(&iommu_sva_lock);
 	/* Is a PASID already associated with this mm? */
 	if (mm_valid_pasid(mm)) {
diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h
index 14b9c1fa05c4..f2b7a3f04099 100644
--- a/include/linux/mmu_context.h
+++ b/include/linux/mmu_context.h
@@ -35,4 +35,11 @@ static inline unsigned long mm_untag_mask(struct mm_struct *mm)
 }
 #endif
 
+#ifndef arch_pgtable_dma_compat
+static inline bool arch_pgtable_dma_compat(struct mm_struct *mm)
+{
+	return true;
+}
+#endif
+
 #endif
-- 
2.38.2
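
[Editor's illustration, not part of the patch] A minimal userspace sketch of
the intended opt-in flow, assuming the uapi constants added above, a raw
syscall(SYS_arch_prctl, ...) invocation, and a LAM_U57-style request of 6 tag
bits as used elsewhere in this series. The opt-in must precede whichever of
the two steps (enabling LAM, binding SVA) happens second; the device-specific
SVA bind itself is done by the driver and is not shown here.

/*
 * Sketch only: opt a process into using LAM and SVA together.
 * ARCH_FORCE_TAGGED_SVA is the userspace promise that no tagged
 * address will ever be handed to the device.
 */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define ARCH_ENABLE_TAGGED_ADDR	0x4002
#define ARCH_FORCE_TAGGED_SVA	0x4004

int main(void)
{
	/* Allow LAM and SVA to coexist in this mm. */
	if (syscall(SYS_arch_prctl, ARCH_FORCE_TAGGED_SVA, 0) < 0)
		perror("ARCH_FORCE_TAGGED_SVA");

	/*
	 * Enable LAM with 6 tag bits (LAM_U57). Without the opt-in above,
	 * this would fail once a PASID is already bound to the mm, and
	 * iommu_sva_alloc_pasid() would likewise refuse a LAM-enabled mm.
	 */
	if (syscall(SYS_arch_prctl, ARCH_ENABLE_TAGGED_ADDR, 6) < 0)
		perror("ARCH_ENABLE_TAGGED_ADDR");

	/* ... bind the SVA-capable device to this process via its driver ... */
	return 0;
}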