Subject: [PATCH v3 4/9] x86, memcpy_mcsafe: add write-protection-fault handling
From: Dan Williams
To: linux-nvdimm@lists.01.org
Cc: x86@kernel.org, Ingo Molnar, Borislav Petkov, Tony Luck, Al Viro,
    Thomas Gleixner, Andy Lutomirski, Peter Zijlstra, Andrew Morton,
    Linus Torvalds, Mika Penttilä, hch@lst.de, linux-kernel@vger.kernel.org,
    tony.luck@intel.com, linux-fsdevel@vger.kernel.org
Date: Thu, 03 May 2018 17:06:26 -0700
Message-ID: <152539238635.31796.14056325365122961778.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <152539236455.31796.7516599166555186700.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <152539236455.31796.7516599166555186700.stgit@dwillia2-desk3.amr.corp.intel.com>

In preparation for using memcpy_mcsafe() to handle user copies, it needs
to be able to handle write-protection faults while writing user pages.
Add MMU-fault handlers alongside the machine-check exception handlers.

Note that the machine-check fault exception handling makes assumptions
about source buffer alignment and poison alignment. In the write fault
case, given the destination buffer is arbitrarily aligned, it needs a
separate / additional fault handling approach. The mcsafe_handle_tail()
helper is reused. The @limit argument is set to @len since there is no
safety concern about retriggering an MMU fault, and this simplifies the
assembly.

Cc: x86@kernel.org
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Tony Luck
Cc: Al Viro
Cc: Thomas Gleixner
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Andrew Morton
Cc: Linus Torvalds
Reported-by: Mika Penttilä
Co-developed-by: Tony Luck
Signed-off-by: Dan Williams
---
 arch/x86/include/asm/uaccess_64.h |    3 +++
 arch/x86/lib/memcpy_64.S          |   14 ++++++++++++++
 arch/x86/lib/usercopy_64.c        |   21 +++++++++++++++++++++
 3 files changed, 38 insertions(+)

diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 62546b3a398e..c63efc07891f 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -194,4 +194,7 @@ __copy_from_user_flushcache(void *dst, const void __user *src, unsigned size)
 unsigned long
 copy_user_handle_tail(char *to, char *from, unsigned len);
 
+unsigned long
+mcsafe_handle_tail(char *to, char *from, unsigned len);
+
 #endif /* _ASM_X86_UACCESS_64_H */

diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index f01a88391c98..c3b527a9f95d 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -265,9 +265,23 @@ EXPORT_SYMBOL_GPL(__memcpy_mcsafe)
 	mov %ecx, %eax
 	ret
 
+	/*
+	 * For write fault handling, given the destination is unaligned,
+	 * we handle faults on multi-byte writes with a byte-by-byte
+	 * copy up to the write-protected page.
+	 */
+.E_write_words:
+	shll $3, %ecx
+	addl %edx, %ecx
+	movl %ecx, %edx
+	jmp mcsafe_handle_tail
+
 	.previous
 
 	_ASM_EXTABLE_FAULT(.L_read_leading_bytes, .E_leading_bytes)
 	_ASM_EXTABLE_FAULT(.L_read_words, .E_read_words)
 	_ASM_EXTABLE_FAULT(.L_read_trailing_bytes, .E_trailing_bytes)
+	_ASM_EXTABLE(.L_write_leading_bytes, .E_leading_bytes)
+	_ASM_EXTABLE(.L_write_words, .E_write_words)
+	_ASM_EXTABLE(.L_write_trailing_bytes, .E_trailing_bytes)
 #endif

diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
index 75d3776123cc..7ebc9901dd05 100644
--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -75,6 +75,27 @@ copy_user_handle_tail(char *to, char *from, unsigned len)
 	return len;
 }
 
+/*
+ * Similar to copy_user_handle_tail, probe for the write fault point,
+ * but reuse __memcpy_mcsafe in case a new read error is encountered.
+ * clac() is handled in _copy_to_iter_mcsafe().
+ */
+__visible unsigned long
+mcsafe_handle_tail(char *to, char *from, unsigned len)
+{
+	for (; len; --len, to++, from++) {
+		/*
+		 * Call the assembly routine back directly since
+		 * memcpy_mcsafe() may silently fallback to memcpy.
+		 */
+		unsigned long rem = __memcpy_mcsafe(to, from, 1);
+
+		if (rem)
+			break;
+	}
+	return len;
+}
+
 #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
 /**
  * clean_cache_range - write back a cache range with CLWB
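
Not part of the patch, just for illustration: a rough userspace model of the
new write-fault path. write_fault_remaining() mirrors the arithmetic in
.E_write_words (bytes left = remaining words * 8 + trailing bytes), and
handle_tail() mirrors the byte-by-byte probe in mcsafe_handle_tail().
probe_write() and fault_addr are hypothetical stand-ins for the real
write-protection fault machinery.

/* Pretend any write at or beyond 'fault_addr' takes a protection fault. */
static int probe_write(char *dst, const char *src, const char *fault_addr)
{
	if (dst >= fault_addr)
		return -1;
	*dst = *src;
	return 0;
}

/* Model of mcsafe_handle_tail(): copy until the first faulting byte. */
static unsigned long handle_tail(char *to, const char *from,
				 unsigned long len, const char *fault_addr)
{
	for (; len; --len, to++, from++)
		if (probe_write(to, from, fault_addr))
			break;
	return len; /* bytes *not* copied, matching __memcpy_mcsafe() */
}

/* Model of .E_write_words: recompute the bytes still to be copied. */
static unsigned long write_fault_remaining(unsigned int words_left,
					   unsigned int trailing_bytes)
{
	return ((unsigned long)words_left << 3) + trailing_bytes;
}

The reason the real helper calls __memcpy_mcsafe() for each byte instead of
doing a plain store is that a new poison (read) error may be hit while
probing for the write fault point; the assembly routine handles both cases.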