Subject: Re: [PATCH v5 06/27] arm64: Delay daif masking for user return
From: James Morse
To: Julien Thierry
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    daniel.thompson@linaro.org, joel@joelfernandes.org, marc.zyngier@arm.com,
    mark.rutland@arm.com, christoffer.dall@arm.com, catalin.marinas@arm.com,
    will.deacon@arm.com
Date: Wed, 12 Sep 2018 11:31:24 +0100
Message-ID: <59fa96d5-6bfa-c3aa-94d6-5941a7576bfa@arm.com>
In-Reply-To: <1535471497-38854-7-git-send-email-julien.thierry@arm.com>
References: <1535471497-38854-1-git-send-email-julien.thierry@arm.com>
            <1535471497-38854-7-git-send-email-julien.thierry@arm.com>
Hi Julien,

On 28/08/18 16:51, Julien Thierry wrote:
> Masking daif flags is done very early before returning to EL0.
>
> Only toggle the interrupt masking while in the vector entry and mask daif
> once in kernel_exit.

I had an earlier version that did this, but it showed up as a performance
problem. commit 8d66772e869e ("arm64: Mask all exceptions during
kernel_exit") described it as:

| Adding a naked 'disable_daif' to kernel_exit causes a performance problem
| for micro-benchmarks that do no real work, (e.g. calling getpid() in a
| loop). This is because the ret_to_user loop has already masked IRQs so
| that the TIF_WORK_MASK thread flags can't change underneath it, adding
| disable_daif is an additional self-synchronising operation.
|
| In the future, the RAS APEI code may need to modify the TIF_WORK_MASK
| flags from an SError, in which case the ret_to_user loop must mask SError
| while it examines the flags.

We may decide that the benchmark is silly, and we don't care about this.
(At the time it was easy enough to work around.)

We need regular IRQs masked when we read the TIF flags, and they need to
stay masked until we return to user-space. I assume you're changing this so
that pseudo-NMIs are unmasked for EL0 until kernel_exit.

I'd like to be able to change the TIF flags from the SError handlers for
RAS, which means masking SError for do_notify_resume too. (The RAS code
that does this doesn't exist today, so you can make this my problem to
work out later!)

I think we should have pseudo-NMIs masked if SError is masked too.

Is there a strong reason for having pseudo-NMIs unmasked during
do_notify_resume(), or is it just for having the maximum amount of code
exposed?
Thanks,

James

> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 09dbea22..85ce06ac 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -259,9 +259,9 @@ alternative_else_nop_endif
>  	.endm
>  
>  	.macro	kernel_exit, el
> -	.if	\el != 0
>  	disable_daif
>  
> +	.if	\el != 0
>  	/* Restore the task's original addr_limit. */
>  	ldr	x20, [sp, #S_ORIG_ADDR_LIMIT]
>  	str	x20, [tsk, #TSK_TI_ADDR_LIMIT]
> @@ -896,7 +896,7 @@ work_pending:
>   * "slow" syscall return path.
>   */
>  ret_to_user:
> -	disable_daif
> +	disable_irq			// disable interrupts
>  	ldr	x1, [tsk, #TSK_TI_FLAGS]
>  	and	x2, x1, #_TIF_WORK_MASK
>  	cbnz	x2, work_pending
>