From: Ignat Korchagin
Date: Wed, 24 Jun 2020 09:24:07 +0100
Subject: Re: [dm-crypt] [RFC PATCH 1/1] Add DM_CRYPT_FORCE_INLINE flag to dm-crypt target
To: Eric Biggers
Cc: agk@redhat.com, Mike Snitzer, dm-devel@redhat.com, dm-crypt@saout.de, linux-kernel, kernel-team
In-Reply-To: <20200624050452.GB844@sol.localdomain>
References: <20200619164132.1648-1-ignat@cloudflare.com> <20200619164132.1648-2-ignat@cloudflare.com> <20200624050452.GB844@sol.localdomain>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jun 24, 2020 at 6:04 AM Eric Biggers wrote:
>
> On Fri, Jun 19, 2020 at 05:41:32PM +0100, Ignat Korchagin wrote:
> > Sometimes extra thread offloading imposed by dm-crypt hurts IO latency. This is
> > especially visible on busy systems with many processes/threads. Moreover, most
> > Crypto API implementations are async, that is they offload crypto operations on
> > their own, so this dm-crypt offloading is excessive.
>
> This really should say "some Crypto API implementations are async" instead of
> "most Crypto API implementations are async".

The most accurate would probably be: most hardware-accelerated Crypto API
implementations are async.

> Notably, the AES-NI implementation of AES-XTS is synchronous if you call it in a
> context where SIMD instructions are usable.
> It's only asynchronous when SIMD is not usable. (This seems to have been
> missed in your blog post.)

No, it was not. This is exactly why we made the xts-proxy Crypto API module as
a second patch. But it seems now it does not make a big difference whether the
Crypto API implementation in use is synchronous as well (based on the
benchmarks outlined in the cover letter to this patch). I think v2 of this
patch will not require a synchronous Crypto API. This is probably the right
thing to do, as the "inline" flag should control how dm-crypt itself handles
requests, not how the Crypto API handles requests. If a user wants to ensure a
particular synchronous Crypto API implementation, they can already reconfigure
dm-crypt and specify the implementation with a "capi:" prefix in the dm table
description.

> > This adds a new flag, which directs dm-crypt not to offload crypto operations
> > and to process everything inline. For cases where crypto operations cannot
> > happen inline (hard interrupt context, for example the read path of the NVMe
> > driver), we offload the work to a tasklet rather than a workqueue.
>
> This patch both removes some dm-crypt specific queueing, and changes decryption
> to use softIRQ context instead of a workqueue. It would be useful to know how
> much of a difference the workqueue => softIRQ change makes by itself. Such a
> change could be useful for fscrypt as well. (fscrypt uses a workqueue for
> decryption, but besides that doesn't use any other queueing.)
>
> > @@ -127,7 +128,7 @@ struct iv_elephant_private {
> >   * and encrypts / decrypts at the same time.
> >   */
> >  enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID,
> > -	DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD };
> > +	DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD, DM_CRYPT_FORCE_INLINE = (sizeof(unsigned long) * 8 - 1) };
>
> Assigning a specific enum value isn't necessary.
Yes, this is a leftover from our "internal" patch, which I wanted to make
"future proof" in case future iterations of dm-crypt add more flags, to avoid
flag collisions. Will remove in v2.

> > @@ -1458,13 +1459,18 @@ static void crypt_alloc_req_skcipher(struct crypt_config *cc,
> >
> >  	skcipher_request_set_tfm(ctx->r.req, cc->cipher_tfm.tfms[key_index]);
> >
> > -	/*
> > -	 * Use REQ_MAY_BACKLOG so a cipher driver internally backlogs
> > -	 * requests if driver request queue is full.
> > -	 */
> > -	skcipher_request_set_callback(ctx->r.req,
> > -	    CRYPTO_TFM_REQ_MAY_BACKLOG,
> > -	    kcryptd_async_done, dmreq_of_req(cc, ctx->r.req));
> > +	if (test_bit(DM_CRYPT_FORCE_INLINE, &cc->flags))
> > +		/* make sure we zero important fields of the request */
> > +		skcipher_request_set_callback(ctx->r.req,
> > +		    0, NULL, NULL);
> > +	else
> > +		/*
> > +		 * Use REQ_MAY_BACKLOG so a cipher driver internally backlogs
> > +		 * requests if driver request queue is full.
> > +		 */
> > +		skcipher_request_set_callback(ctx->r.req,
> > +		    CRYPTO_TFM_REQ_MAY_BACKLOG,
> > +		    kcryptd_async_done, dmreq_of_req(cc, ctx->r.req));
> >  }
>
> This looks wrong. Unless type=0 and mask=CRYPTO_ALG_ASYNC are passed to
> crypto_alloc_skcipher(), the skcipher implementation can still be asynchronous,
> in which case providing a callback is required.
>
> Do you intend that the "force_inline" option forces the use of a synchronous
> skcipher (alongside the other things it does)? Or should it still allow
> asynchronous ones?

As mentioned above, I don't think we should require synchronous crypto with
the "force_inline" flag anymore, although we may remove
CRYPTO_TFM_REQ_MAY_BACKLOG with the inline flag.

> We may not actually have a choice in that matter, since xts-aes-aesni has the
> CRYPTO_ALG_ASYNC bit set (as I mentioned) despite being synchronous in most
> cases; thus, the crypto API won't give you it if you ask for a synchronous
> cipher. So I think you still need to allow async skciphers?
> That means a callback is still always required.
>
> - Eric
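[For readers following the callback discussion above: the usual way kernel code
drives a possibly-asynchronous skcipher to completion synchronously is the
crypto_wait_req() pattern from <linux/crypto.h>. Below is a sketch against the
in-tree skcipher API; the function name encrypt_sector_sync and its argument
list are illustrative, not dm-crypt code, and error handling is trimmed.]

```c
#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>

/*
 * Encrypt one request synchronously even if the tfm's implementation is
 * async (e.g. xts-aes-aesni, which has CRYPTO_ALG_ASYNC set): register
 * crypto_req_done as the completion callback and let crypto_wait_req()
 * sleep until the operation completes or fails. This is why a NULL
 * callback is only safe for tfms allocated with mask=CRYPTO_ALG_ASYNC.
 */
static int encrypt_sector_sync(struct crypto_skcipher *tfm,
			       struct scatterlist *src,
			       struct scatterlist *dst,
			       unsigned int len, u8 *iv)
{
	DECLARE_CRYPTO_WAIT(wait);
	struct skcipher_request *req;
	int err;

	req = skcipher_request_alloc(tfm, GFP_NOIO);
	if (!req)
		return -ENOMEM;

	/*
	 * MAY_BACKLOG keeps drivers from failing with -EBUSY when their
	 * queue is full; crypto_req_done completes the wait for us.
	 */
	skcipher_request_set_callback(req,
			CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
			crypto_req_done, &wait);
	skcipher_request_set_crypt(req, src, dst, len, iv);

	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
	return err;
}
```

Note that crypto_wait_req() can sleep, so this pattern is not usable from the
tasklet (softirq) path the patch adds for inline decryption; it only
illustrates why async skciphers always need a completion callback.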