From: Punit Agrawal
To: Ganesh Mahendran
Cc: ldufour@linux.vnet.ibm.com, catalin.marinas@arm.com, will.deacon@arm.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 2/2] arm64/mm: add speculative page fault
Date: Wed, 02 May 2018 15:46:02 +0100
References: <1525247672-2165-1-git-send-email-opensource.ganesh@gmail.com> <1525247672-2165-2-git-send-email-opensource.ganesh@gmail.com>
In-Reply-To: <1525247672-2165-2-git-send-email-opensource.ganesh@gmail.com> (Ganesh Mahendran's message of "Wed, 2 May 2018 15:54:32 +0800")
Message-ID: <871seunmj9.fsf@e105922-lin.cambridge.arm.com>
Hi Ganesh,

I was looking at evaluating speculative page fault handling on arm64
and noticed your patch. Some comments below -

Ganesh Mahendran writes:

> This patch enables the speculative page fault on the arm64
> architecture.
>
> I completed spf porting in 4.9. From the test result,
> we can see app launching time improved by about 10% on average.
> For apps which have more than 50 threads, the improvement can be
> 15% or even more.
>
> Signed-off-by: Ganesh Mahendran
> ---
> This patch is on top of Laurent's v10 spf
> ---
>  arch/arm64/mm/fault.c | 38 +++++++++++++++++++++++++++++++++++---
>  1 file changed, 35 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 4165485..e7992a3 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -322,11 +322,13 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
>
>  static int __do_page_fault(struct mm_struct *mm, unsigned long addr,
>  			   unsigned int mm_flags, unsigned long vm_flags,
> -			   struct task_struct *tsk)
> +			   struct task_struct *tsk, struct vm_area_struct *vma)
>  {
> -	struct vm_area_struct *vma;
>  	int fault;
>
> +	if (!vma || !can_reuse_spf_vma(vma, addr))
> +		vma = find_vma(mm, addr);
> +

It would be better to move this hunk to do_page_fault(). It'll help
localise the fact that handle_speculative_fault() is a stateful call
which needs a corresponding can_reuse_spf_vma() to properly update the
vma reference counting.

>  	vma = find_vma(mm, addr);

Remember to drop this call in the next version. As it stands, the call
to find_vma() needlessly gets duplicated.
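Something along these lines, i.e., with the reuse check hoisted into
the caller (completely untested sketch against Laurent's v10 - the
exact placement relative to taking the mmap_sem needs checking):

```c
	/*
	 * In do_page_fault(), after down_read(&mm->mmap_sem):
	 * decide here whether the vma from the speculative path can
	 * be reused, so __do_page_fault() only consumes a valid vma.
	 */
	if (!vma || !can_reuse_spf_vma(vma, addr))
		vma = find_vma(mm, addr);

	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, tsk, vma);
```

That way __do_page_fault() keeps a single lookup path and the
speculative-fault state handling stays in one function.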
>  	fault = VM_FAULT_BADMAP;
>  	if (unlikely(!vma))
> @@ -371,6 +373,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>  	int fault, major = 0;
>  	unsigned long vm_flags = VM_READ | VM_WRITE;
>  	unsigned int mm_flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
> +	struct vm_area_struct *vma;
>
>  	if (notify_page_fault(regs, esr))
>  		return 0;
> @@ -409,6 +412,25 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>
>  	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
>
> +	if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT)) {

You don't need the IS_ENABLED() check. The alternate implementation of
handle_speculative_fault() when CONFIG_SPECULATIVE_PAGE_FAULT is not
enabled takes care of this.

> +		fault = handle_speculative_fault(mm, addr, mm_flags, &vma);
> +		/*
> +		 * Page fault is done if VM_FAULT_RETRY is not returned.
> +		 * But if the memory protection keys are active, we don't know
> +		 * if the fault is due to key mistmatch or due to a
> +		 * classic protection check.
> +		 * To differentiate that, we will need the VMA we no
> +		 * more have, so let's retry with the mmap_sem held.
> +		 */

As there is no support for memory protection keys on arm64, most of
this comment can be dropped.

> +		if (fault != VM_FAULT_RETRY &&
> +		    fault != VM_FAULT_SIGSEGV) {

Not sure if you need the VM_FAULT_SIGSEGV here.

> +			perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, addr);
> +			goto done;
> +		}
> +	} else {
> +		vma = NULL;
> +	}
> +

If vma is initialised to NULL during declaration, the else part can be
dropped.

>  	/*
>  	 * As per x86, we may deadlock here.
>  	 * However, since the kernel only
>  	 * validly references user space from well defined areas of the code,
> @@ -431,7 +453,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>  #endif
>  	}
>
> -	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, tsk);
> +	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, tsk, vma);
>  	major |= fault & VM_FAULT_MAJOR;
>
>  	if (fault & VM_FAULT_RETRY) {
> @@ -454,11 +476,21 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>  		if (mm_flags & FAULT_FLAG_ALLOW_RETRY) {
>  			mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
>  			mm_flags |= FAULT_FLAG_TRIED;
> +
> +			/*
> +			 * Do not try to reuse this vma and fetch it
> +			 * again since we will release the mmap_sem.
> +			 */
> +			if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT))
> +				vma = NULL;

Please drop the IS_ENABLED() check.

Thanks,
Punit

> +
>  			goto retry;
>  		}
>  	}
>  	up_read(&mm->mmap_sem);
>
> +done:
> +
>  	/*
>  	 * Handle the "normal" (no error) case first.
>  	 */
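P.S. FWIW, putting the suggestions above together, the speculative path
in do_page_fault() could end up looking something like the below
(completely untested sketch against Laurent's v10; whether the
VM_FAULT_SIGSEGV case needs special handling is still an open
question, as noted above):

```c
	/* NULL at declaration, so no else branch is needed */
	struct vm_area_struct *vma = NULL;

	/*
	 * No IS_ENABLED() guard: the !CONFIG_SPECULATIVE_PAGE_FAULT
	 * stub of handle_speculative_fault() returns VM_FAULT_RETRY,
	 * so the code falls through to the classic path.
	 */
	fault = handle_speculative_fault(mm, addr, mm_flags, &vma);
	if (fault != VM_FAULT_RETRY) {
		perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, addr);
		goto done;
	}
```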