Date: Mon, 29 Jun 2020 11:22:50 -0700
From: Eric Biggers
To: Satya Tangirala
Cc: linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	Jaegeuk Kim
Subject: Re: [PATCH v2 2/4] fscrypt: add inline encryption support
Message-ID: <20200629182250.GD20492@sol.localdomain>
References: <20200629120405.701023-1-satyat@google.com>
 <20200629120405.701023-3-satyat@google.com>
In-Reply-To: <20200629120405.701023-3-satyat@google.com>

On Mon, Jun 29, 2020 at 12:04:03PM +0000, Satya Tangirala via Linux-f2fs-devel wrote:
> diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
> index 4fa18fff9c4e..1ea9369a7688 100644
> --- a/fs/crypto/bio.c
> +++ b/fs/crypto/bio.c
> @@ -41,6 +41,52 @@ void fscrypt_decrypt_bio(struct bio *bio)
>  }
>  EXPORT_SYMBOL(fscrypt_decrypt_bio);
>  
> +static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode,
> +					      pgoff_t lblk, sector_t pblk,
> +					      unsigned int len)
> +{
> +	const unsigned int blockbits = inode->i_blkbits;
> +	const unsigned int blocks_per_page = 1 << (PAGE_SHIFT - blockbits);
> +	struct bio *bio;
> +	int ret, err = 0;
> +	int num_pages = 0;
> +
> +	/* This always succeeds since __GFP_DIRECT_RECLAIM is set. */
> +	bio = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
> +
> +	while (len) {
> +		unsigned int blocks_this_page = min(len, blocks_per_page);
> +		unsigned int bytes_this_page = blocks_this_page << blockbits;
> +
> +		if (num_pages == 0) {
> +			fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS);
> +			bio_set_dev(bio, inode->i_sb->s_bdev);
> +			bio->bi_iter.bi_sector =
> +					pblk << (blockbits - SECTOR_SHIFT);
> +			bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
> +		}
> +		ret = bio_add_page(bio, ZERO_PAGE(0), bytes_this_page, 0);
> +		if (WARN_ON(ret != bytes_this_page)) {
> +			err = -EIO;
> +			goto out;
> +		}
> +		num_pages++;
> +		len -= blocks_this_page;
> +		lblk += blocks_this_page;
> +		pblk += blocks_this_page;
> +		if (num_pages == BIO_MAX_PAGES || !len) {
> +			err = submit_bio_wait(bio);
> +			if (err)
> +				goto out;
> +			bio_reset(bio);
> +			num_pages = 0;
> +		}
> +	}
> +out:
> +	bio_put(bio);
> +	return err;
> +}

I just realized we missed something.
With the new IV_INO_LBLK_32 IV generation strategy, logically contiguous
blocks don't necessarily have contiguous IVs.  So we need to check
fscrypt_mergeable_bio() here.  Also it *should* be checked once per block,
not once per page.

However, that means that ext4_mpage_readpages() and f2fs_mpage_readpages()
are wrong too, since they only check fscrypt_mergeable_bio() once per page.

Given that difficulty, and the fact that IV_INO_LBLK_32 only has limited
use cases on specific hardware, I suggest that for now we simply restrict
inline encryption with IV_INO_LBLK_32 to the blocksize == PAGE_SIZE case.
(Checking fscrypt_mergeable_bio() once per page is still needed.)

I.e., on top of this patch:

diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index 1ea9369a7688..b048a0e38516 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -74,7 +74,8 @@ static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode,
 		len -= blocks_this_page;
 		lblk += blocks_this_page;
 		pblk += blocks_this_page;
-		if (num_pages == BIO_MAX_PAGES || !len) {
+		if (num_pages == BIO_MAX_PAGES || !len ||
+		    !fscrypt_mergeable_bio(bio, inode, lblk)) {
 			err = submit_bio_wait(bio);
 			if (err)
 				goto out;

diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c
index ec514bc8ee86..097c5a565a21 100644
--- a/fs/crypto/inline_crypt.c
+++ b/fs/crypto/inline_crypt.c
@@ -84,6 +84,19 @@ int fscrypt_select_encryption_impl(struct fscrypt_info *ci)
 	if (!(sb->s_flags & SB_INLINECRYPT))
 		return 0;
 
+	/*
+	 * When a page contains multiple logically contiguous filesystem
+	 * blocks, some filesystem code only calls fscrypt_mergeable_bio()
+	 * for the first block in the page.  This is fine for most of
+	 * fscrypt's IV generation strategies, where contiguous blocks imply
+	 * contiguous IVs.  But it doesn't work with IV_INO_LBLK_32.  For
+	 * now, simply exclude IV_INO_LBLK_32 with blocksize != PAGE_SIZE
+	 * from inline encryption.
+	 */
+	if ((fscrypt_policy_flags(&ci->ci_policy) &
+	     FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32) &&
+	    sb->s_blocksize != PAGE_SIZE)
+		return 0;
+