Date: Fri, 5 Apr 2019 17:42:59 +0200
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: Julien Grall <julien.grall@arm.com>
Cc: Dave Martin <Dave.Martin@arm.com>, linux-arm-kernel@lists.infradead.org,
	linux-rt-users@vger.kernel.org, catalin.marinas@arm.com,
	will.deacon@arm.com, ard.biesheuvel@linaro.org,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] arm64/fpsimd: Don't disable softirq when touching FPSIMD/SVE state
Message-ID: <20190405154259.3g3fv72miizm64hc@linutronix.de>
References: <20190208165513.8435-1-julien.grall@arm.com>
	<20190404105233.GD3567@e103592.cambridge.arm.com>
	<20190405143942.xtpq5443qbc4gziy@linutronix.de>
List-ID: <linux-kernel.vger.kernel.org>

On 2019-04-05 16:17:50 [+0100], Julien Grall wrote:
> Hi,

Hi,

> > A per-CPU lock?
> > It has to be a raw_spinlock_t because a normal
> > spin_lock() / local_lock() would allow scheduling and might be taken as
> > part of the context switch or soon after.
>
> raw_spinlock_t would not work here without disabling preemption.
> Otherwise you may end up recursing on the lock and therefore deadlock.
> But then it raises the question of the usefulness of the lock here.
>
> However, I don't really understand why allowing scheduling would be a
> problem here. Is it a concern because we would waste cycles trying to
> restore/save a context that will be scratched as soon as we release the
> lock?

If you hold the lock within the kernel thread and every kernel thread
acquires it before doing any SIMD operations then you are good. It could
be a sleeping lock. But what happens if you hold the lock, are scheduled
out, and a user task is about to be scheduled? How do you force the
kernel thread out / make it give up the FPU registers?

That preempt_disable() + local_bh_disable() might not be the prettiest
thing, but how bad is it actually? Latency-wise you can't schedule().
From the RT point of view you need to enable preemption while going from
page to page because of the possible kmap() or kmalloc() (on badly
aligned src/dst) in the crypto page-walk code. If that is not good
enough latency-wise you could do kernel_fpu_resched() after a few
iterations.

Currently I'm trying to get kernel_fpu_begin()/end() cheap on x86 so it
doesn't always store/restore the FPU context. Then kernel_fpu_resched()
shouldn't be that bad.

> Cheers,

Sebastian