Date: Tue, 11 Apr 2023 17:45:18 +0100
From: Catalin Marinas <catalin.marinas@arm.com>
To: Tong Tiangen <tongtiangen@huawei.com>
Cc: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Will Deacon, Alexander Viro, x86@kernel.org, "H. Peter Anvin",
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Kefeng Wang, Guohanjun, Xie XiuQi
Subject: Re: [PATCH -next v8 4/4] arm64: add cow to machine check safe
References: <20221219120008.3818828-1-tongtiangen@huawei.com>
 <20221219120008.3818828-5-tongtiangen@huawei.com>
In-Reply-To: <20221219120008.3818828-5-tongtiangen@huawei.com>

On Mon, Dec 19, 2022 at 12:00:08PM +0000, Tong Tiangen wrote:
> At present, recovering from poison consumed during copy-on-write is
> already supported [1]; arm64 should also support this mechanism.
>
> Add a new helper, copy_mc_page(), which provides a machine-check-safe
> page copy implementation. At present it is only used for CoW, but it
> can be extended to more scenarios in the future: as long as the
> consequences of a page copy failure are not fatal (e.g. only a user
> process is affected), this helper can be used.
>
> copy_mc_page() in copy_mc_page.S largely borrows from copy_page() in
> copy_page.S; the main difference is that copy_mc_page() adds an
> extable entry to every load/store insn to support machine check
> safety, and it is deliberately kept simple. If needed, optimizations
> can be folded in.
>
> Add a new extable type, EX_TYPE_COPY_MC_PAGE, used in copy_mc_page().
>
> [1] https://lore.kernel.org/lkml/20221031201029.102123-1-tony.luck@intel.com/
>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>

This series needs rebasing onto a newer kernel. Some random comments
below.

> diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
> new file mode 100644
> index 000000000000..03d657a182f6
> --- /dev/null
> +++ b/arch/arm64/lib/copy_mc_page.S
> @@ -0,0 +1,82 @@
[...]
> +SYM_FUNC_START(__pi_copy_mc_page)
> +alternative_if ARM64_HAS_NO_HW_PREFETCH
> +	// Prefetch three cache lines ahead.
> +	prfm	pldl1strm, [x1, #128]
> +	prfm	pldl1strm, [x1, #256]
> +	prfm	pldl1strm, [x1, #384]
> +alternative_else_nop_endif
> +
> +CPY_MC(9998f, ldp x2, x3, [x1])
> +CPY_MC(9998f, ldp x4, x5, [x1, #16])
> +CPY_MC(9998f, ldp x6, x7, [x1, #32])
> +CPY_MC(9998f, ldp x8, x9, [x1, #48])
> +CPY_MC(9998f, ldp x10, x11, [x1, #64])
> +CPY_MC(9998f, ldp x12, x13, [x1, #80])
> +CPY_MC(9998f, ldp x14, x15, [x1, #96])
> +CPY_MC(9998f, ldp x16, x17, [x1, #112])
[...]

[...]
> +9998:	ret

What I don't understand: is an error returned here, or the number of
bytes not copied? I can see its return value is never used in this
series. Also, do we need to distinguish between a fault on the source
and a fault on the destination?
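FWIW, a contract along the lines of x86's copy_mc_to_kernel(), which
returns the number of bytes left uncopied, would give callers
something to act on. Untested sketch of the C-level interface I have
in mind (copy_mc_page_checked() is a made-up name, and the "bytes
remaining" return value is not what this patch implements):

	/*
	 * Hypothetical contract, mirroring copy_mc_to_kernel(): returns
	 * 0 on success, or the number of bytes left uncopied when a
	 * load/store takes a synchronous error abort.
	 */
	size_t copy_mc_page(void *to, const void *from);

	static inline int copy_mc_page_checked(void *to, const void *from)
	{
		/* any residue means the destination page is incomplete */
		return copy_mc_page(to, from) ? -EFAULT : 0;
	}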
> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 5018ac03b6bf..bf4dd861c41c 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -80,6 +80,25 @@ SYM_FUNC_START(mte_copy_page_tags)
> 	ret
> SYM_FUNC_END(mte_copy_page_tags)
>
> +/*
> + * Copy the tags from the source page to the destination one, machine check safe
> + * x0 - address of the destination page
> + * x1 - address of the source page
> + */
> +SYM_FUNC_START(mte_copy_mc_page_tags)
> +	mov	x2, x0
> +	mov	x3, x1
> +	multitag_transfer_size x5, x6
> +1:
> +CPY_MC(2f, ldgm x4, [x3])
> +	stgm	x4, [x2]
> +	add	x2, x2, x5
> +	add	x3, x3, x5
> +	tst	x2, #(PAGE_SIZE - 1)
> +	b.ne	1b
> +2:	ret
> +SYM_FUNC_END(mte_copy_mc_page_tags)

While the data copy above handles errors on both the source and the
destination, here you skip the destination (only the ldgm is wrapped
in CPY_MC, not the stgm). Any reason?

> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index 8dd5a8fe64b4..005ee2a3cb4e 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
[...]
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +void copy_mc_highpage(struct page *to, struct page *from)
> +{
> +	void *kto = page_address(to);
> +	void *kfrom = page_address(from);
> +
> +	copy_mc_page(kto, kfrom);
> +	do_mte(to, from, kto, kfrom, true);
> +}
> +EXPORT_SYMBOL(copy_mc_highpage);
> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> +		unsigned long vaddr, struct vm_area_struct *vma)
> +{
> +	copy_mc_highpage(to, from);
> +	flush_dcache_page(to);
> +	return 0;
> +}

This one always returns 0. Does it actually catch any memory failures?
(See the sketch at the end of this email for how the error could be
propagated.)

> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
> +#endif

> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 28ec35e3d210..0fdab18f2f07 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -16,6 +16,13 @@ get_ex_fixup(const struct exception_table_entry *ex)
> 	return ((unsigned long)&ex->fixup + ex->fixup);
> }
>
> +static bool ex_handler_fixup(const struct exception_table_entry *ex,
> +			     struct pt_regs *regs)
> +{
> +	regs->pc = get_ex_fixup(ex);
> +	return true;
> +}

Should we prepare some error here, like -EFAULT? That's what
ex_handler_uaccess_err_zero() does.
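Something modelled on ex_handler_uaccess_err_zero() would do. Untested
sketch, with a made-up handler name, reusing the EX_DATA_REG_ERR field
from asm-extable.h (needs <linux/bitfield.h> for FIELD_GET()):

	static bool ex_handler_copy_mc_page(const struct exception_table_entry *ex,
					    struct pt_regs *regs)
	{
		/* report -EFAULT in the register recorded in the extable data */
		int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);

		pt_regs_write_reg(regs, reg_err, -EFAULT);
		regs->pc = get_ex_fixup(ex);
		return true;
	}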
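With the error landing in a register as above and copy_mc_page()
passing it on, copy_mc_user_highpage() could then report the failure
instead of unconditionally returning 0. Again untested, assuming the
failure-reporting copy_mc_page() from the earlier sketch and the
patch's do_mte() helper:

	int copy_mc_user_highpage(struct page *to, struct page *from,
			unsigned long vaddr, struct vm_area_struct *vma)
	{
		void *kto = page_address(to);
		void *kfrom = page_address(from);

		/* propagate a machine check on the copy to the caller */
		if (copy_mc_page(kto, kfrom))
			return -EFAULT;

		do_mte(to, from, kto, kfrom, true);
		flush_dcache_page(to);
		return 0;
	}

-- 
Catalin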