Date: Sun, 30 Jan 2022 11:13:26 +0200
From: Mike Rapoport
To: Michel Lespinasse
Cc: Linux-MM, linux-kernel@vger.kernel.org, Andrew Morton, kernel-team@fb.com,
	Laurent Dufour, Jerome Glisse, Peter Zijlstra, Michal Hocko,
	Vlastimil Babka, Davidlohr Bueso, Matthew Wilcox, Liam Howlett,
	Rik van Riel, Paul McKenney, Song Liu, Suren Baghdasaryan,
	Minchan Kim, Joel Fernandes, David Rientjes, Axel Rasmussen,
	Andy Lutomirski
Subject: Re: [PATCH v2 33/35] arm64/mm: attempt speculative mm faults first
References: <20220128131006.67712-1-michel@lespinasse.org>
	<20220128131006.67712-34-michel@lespinasse.org>
In-Reply-To: <20220128131006.67712-34-michel@lespinasse.org>

On Fri, Jan 28, 2022 at 05:10:04AM -0800, Michel Lespinasse wrote:
> Attempt speculative mm fault handling first, and fall back to the
> existing (non-speculative) code if that fails.
>
> This follows the lines of the x86 speculative fault handling code,
> but with some minor arch differences such as the way that the
> VM_FAULT_BADACCESS case is handled.
>
> Signed-off-by: Michel Lespinasse
> ---
>  arch/arm64/mm/fault.c | 62 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 62 insertions(+)
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 77341b160aca..2598795f4e70 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -25,6 +25,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>  #include
> @@ -524,6 +525,11 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
>  	unsigned long vm_flags;
>  	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
>  	unsigned long addr = untagged_addr(far);
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +	struct vm_area_struct *vma;
> +	struct vm_area_struct pvma;
> +	unsigned long seq;
> +#endif
>
>  	if (kprobe_page_fault(regs, esr))
>  		return 0;
> @@ -574,6 +580,59 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
>
>  	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +	/*
> +	 * No need to try speculative faults for kernel or
> +	 * single threaded user space.
> +	 */
> +	if (!(mm_flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
> +		goto no_spf;
> +
> +	count_vm_event(SPF_ATTEMPT);
> +	seq = mmap_seq_read_start(mm);
> +	if (seq & 1) {
> +		count_vm_spf_event(SPF_ABORT_ODD);
> +		goto spf_abort;
> +	}
> +	rcu_read_lock();
> +	vma = __find_vma(mm, addr);
> +	if (!vma || vma->vm_start > addr) {
> +		rcu_read_unlock();
> +		count_vm_spf_event(SPF_ABORT_UNMAPPED);
> +		goto spf_abort;
> +	}
> +	if (!vma_is_anonymous(vma)) {
> +		rcu_read_unlock();
> +		count_vm_spf_event(SPF_ABORT_NO_SPECULATE);
> +		goto spf_abort;
> +	}
> +	pvma = *vma;
> +	rcu_read_unlock();
> +	if (!mmap_seq_read_check(mm, seq, SPF_ABORT_VMA_COPY))
> +		goto spf_abort;
> +	vma = &pvma;
> +	if (!(vma->vm_flags & vm_flags)) {
> +		count_vm_spf_event(SPF_ABORT_ACCESS_ERROR);
> +		goto spf_abort;
> +	}
> +	fault = do_handle_mm_fault(vma, addr & PAGE_MASK,
> +				   mm_flags | FAULT_FLAG_SPECULATIVE, seq, regs);
> +
> +	/* Quick path to respond to signals */
> +	if (fault_signal_pending(fault, regs)) {
> +		if (!user_mode(regs))
> +			goto no_context;
> +		return 0;
> +	}
> +	if (!(fault & VM_FAULT_RETRY))
> +		goto done;
> +
> +spf_abort:
> +	count_vm_event(SPF_ABORT);
> +no_spf:
> +
> +#endif	/* CONFIG_SPECULATIVE_PAGE_FAULT */

The speculative page fault implementation here (and for PowerPC as well)
looks very similar to x86. Can we factor it out rather than copy it 3 (or
more) times?

> +
>  	/*
>  	 * As per x86, we may deadlock here. However, since the kernel only
>  	 * validly references user space from well defined areas of the code,
> @@ -612,6 +671,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
>  		goto retry;
>  	}
>  	mmap_read_unlock(mm);
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +done:
> +#endif
>
>  	/*
>  	 * Handle the "normal" (no error) case first.
> --
> 2.20.1
>

-- 
Sincerely yours,
Mike.
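
For illustration only: the factoring Mike asks about could hoist the
sequence-validated VMA lookup that x86, PowerPC and arm64 all open-code
into one generic helper. The sketch below reuses the
mmap_seq_read_start()/mmap_seq_read_check(), __find_vma(),
do_handle_mm_fault() and SPF_* accounting interfaces introduced earlier
in this patch series; the helper name try_speculative_fault() and its
exact signature are hypothetical, not something posted in the series.

static vm_fault_t try_speculative_fault(struct mm_struct *mm,
					unsigned long addr,
					unsigned long vm_flags,
					unsigned int mm_flags,
					struct pt_regs *regs)
{
	struct vm_area_struct *vma;
	struct vm_area_struct pvma;
	unsigned long seq;

	/* Kernel faults and single-threaded user space gain nothing here. */
	if (!(mm_flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
		return VM_FAULT_RETRY;

	count_vm_event(SPF_ATTEMPT);
	seq = mmap_seq_read_start(mm);
	/* An odd count means an mmap_lock writer is in flight: bail out. */
	if (seq & 1) {
		count_vm_spf_event(SPF_ABORT_ODD);
		goto abort;
	}

	rcu_read_lock();
	vma = __find_vma(mm, addr);
	if (!vma || vma->vm_start > addr) {
		rcu_read_unlock();
		count_vm_spf_event(SPF_ABORT_UNMAPPED);
		goto abort;
	}
	if (!vma_is_anonymous(vma)) {
		rcu_read_unlock();
		count_vm_spf_event(SPF_ABORT_NO_SPECULATE);
		goto abort;
	}
	pvma = *vma;			/* snapshot the VMA ... */
	rcu_read_unlock();
	/* ... and verify that no writer raced the copy. */
	if (!mmap_seq_read_check(mm, seq, SPF_ABORT_VMA_COPY))
		goto abort;

	if (!(pvma.vm_flags & vm_flags)) {
		/* Fall back so the slow path can report the access error. */
		count_vm_spf_event(SPF_ABORT_ACCESS_ERROR);
		goto abort;
	}

	return do_handle_mm_fault(&pvma, addr & PAGE_MASK,
				  mm_flags | FAULT_FLAG_SPECULATIVE, seq, regs);

abort:
	count_vm_event(SPF_ABORT);
	return VM_FAULT_RETRY;
}

Each arch's do_page_fault() would then attempt this helper first and
fall back to its existing mmap_lock path whenever VM_FAULT_RETRY comes
back, keeping the arch-specific pieces, such as arm64's
VM_FAULT_BADACCESS handling and the fault_signal_pending() fast path,
in the caller.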