From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Catalin Marinas,
    Mark Rutland, Pavel Tatashin, Will Deacon
Subject: [PATCH 4.9 222/222] arm64: uaccess: Ensure PAN is re-enabled after unhandled uaccess fault
Date: Fri, 22 Nov 2019 11:29:22 +0100
Message-Id: <20191122100918.689070544@linuxfoundation.org>
In-Reply-To: <20191122100830.874290814@linuxfoundation.org>
References: <20191122100830.874290814@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Pavel Tatashin

commit 94bb804e1e6f0a9a77acf20d7c70ea141c6c821e upstream.

A number of our uaccess routines ('__arch_clear_user()' and
'__arch_copy_{in,from,to}_user()') fail to re-enable PAN if they
encounter an unhandled fault whilst accessing userspace.

For CPUs implementing both hardware PAN and UAO, this bug has no effect
when both extensions are in use by the kernel.

For CPUs implementing hardware PAN but not UAO, this means that a
kernel using hardware PAN may execute portions of code with PAN
inadvertently disabled, opening us up to potential security
vulnerabilities that rely on userspace access from within the kernel
which would usually be prevented by this mechanism. In other words,
parts of the kernel run the same way as they would on a CPU without
PAN implemented/emulated at all.

For CPUs not implementing hardware PAN and instead relying on software
emulation via 'CONFIG_ARM64_SW_TTBR0_PAN=y', the impact is unfortunately
much worse. Calling 'schedule()' with software PAN disabled means that
the next task will execute in the kernel using the page-table and ASID
of the previous process even after 'switch_mm()', since the actual
hardware switch is deferred until return to userspace. At this point, or
if there is an intermediate call to 'uaccess_enable()', the page-table
and ASID of the new process are installed. Sadly, due to the changes
introduced by KPTI, this is not an atomic operation and there is a very
small window (two instructions) where the CPU is configured with the
page-table of the old task and the ASID of the new task; a speculative
access in this state is disastrous because it would corrupt the TLB
entries for the new task with mappings from the previous address space.

As Pavel explains:

  | I was able to reproduce memory corruption problem on Broadcom's SoC
  | ARMv8-A like this:
  |
  | Enable software perf-events with PERF_SAMPLE_CALLCHAIN so userland's
  | stack is accessed and copied.
  |
  | The test program performed the following on every CPU and forking
  | many processes:
  |
  | unsigned long *map = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE,
  |                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  | map[0] = getpid();
  | sched_yield();
  | if (map[0] != getpid()) {
  |         fprintf(stderr, "Corruption detected!");
  | }
  | munmap(map, PAGE_SIZE);
  |
  | From time to time I was getting map[0] to contain pid for a
  | different process.

Ensure that PAN is re-enabled when returning after an unhandled user
fault from our uaccess routines.
Cc: Catalin Marinas
Reviewed-by: Mark Rutland
Tested-by: Mark Rutland
Cc:
Fixes: 338d4f49d6f7 ("arm64: kernel: Add support for Privileged Access Never")
Signed-off-by: Pavel Tatashin
[will: rewrote commit message]
[will: backport for 4.9.y stable kernels]
Signed-off-by: Will Deacon
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/lib/clear_user.S     |    2 ++
 arch/arm64/lib/copy_from_user.S |    2 ++
 arch/arm64/lib/copy_in_user.S   |    2 ++
 arch/arm64/lib/copy_to_user.S   |    2 ++
 4 files changed, 8 insertions(+)

--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -62,5 +62,7 @@ ENDPROC(__arch_clear_user)
 	.section .fixup,"ax"
 	.align	2
 9:	mov	x0, x2			// return the original size
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
+	    CONFIG_ARM64_PAN)
 	ret
 	.previous
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -80,5 +80,7 @@ ENDPROC(__arch_copy_from_user)
 	.section .fixup,"ax"
 	.align	2
 9998:	sub	x0, end, dst			// bytes not copied
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
+	    CONFIG_ARM64_PAN)
 	ret
 	.previous
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -81,5 +81,7 @@ ENDPROC(__arch_copy_in_user)
 	.section .fixup,"ax"
 	.align	2
 9998:	sub	x0, end, dst			// bytes not copied
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
+	    CONFIG_ARM64_PAN)
 	ret
 	.previous
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -79,5 +79,7 @@ ENDPROC(__arch_copy_to_user)
 	.section .fixup,"ax"
 	.align	2
 9998:	sub	x0, end, dst			// bytes not copied
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
+	    CONFIG_ARM64_PAN)
 	ret
 	.previous
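
[Editor's note: for readers who want to exercise the window described in
the commit message, the reproducer Pavel quotes can be fleshed out
roughly as below. This is only a sketch based on his description, not
his original test program: the worker and iteration counts (NR_WORKERS,
NR_ITERATIONS) are arbitrary choices, and software perf sampling with
PERF_SAMPLE_CALLCHAIN still has to be running separately (e.g. a
call-graph perf record session) so that the kernel actually copies the
userspace stack while the workers run.]

/*
 * Sketch of the reproducer described in the commit message above.
 * Fork a number of workers; each one repeatedly maps a page, stores
 * its own pid, yields, and checks that the value is still its own pid.
 * On an affected kernel with software PAN, the stored value can
 * occasionally be the pid of a different process.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sched.h>
#include <sys/mman.h>
#include <sys/wait.h>

#define NR_WORKERS	16		/* arbitrary: one or more per CPU */
#define NR_ITERATIONS	100000		/* arbitrary: how long each worker loops */

static void worker(long page_size)
{
	for (int i = 0; i < NR_ITERATIONS; i++) {
		unsigned long *map = mmap(NULL, page_size,
					  PROT_READ | PROT_WRITE,
					  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
		if (map == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}

		map[0] = getpid();
		sched_yield();
		if (map[0] != (unsigned long)getpid())
			fprintf(stderr, "Corruption detected!\n");

		munmap(map, page_size);
	}
}

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);

	for (int i = 0; i < NR_WORKERS; i++) {
		pid_t pid = fork();

		if (pid == 0) {
			worker(page_size);
			return 0;
		}
	}

	/* Reap all workers before exiting. */
	while (wait(NULL) > 0)
		;
	return 0;
}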