Subject: Re: [PATCH] ubifs: wbuf: Don't leak kernel memory to flash
To: Richard Weinberger
References: <20201116210530.26230-1-richard@nod.at>
From: Zhihao Cheng
Date: Tue, 17 Nov 2020 09:25:08 +0800
In-Reply-To: <20201116210530.26230-1-richard@nod.at>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/11/17 5:05, Richard Weinberger wrote:
> Write buffers use a kmalloc()'ed buffer, they can leak
> up to seven bytes of kernel memory to flash if writes are not
> aligned.
> So use ubifs_pad() to fill these gaps with padding bytes.
> This was never a problem while scanning because the scanner logic
> manually aligns node lengths and skips over these gaps.
>
> Cc:
> Fixes: 1e51764a3c2ac05a2 ("UBIFS: add new flash file system")
> Signed-off-by: Richard Weinberger
> ---
>  fs/ubifs/io.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/fs/ubifs/io.c b/fs/ubifs/io.c
> index 7e4bfaf2871f..eae9cf5a57b0 100644
> --- a/fs/ubifs/io.c
> +++ b/fs/ubifs/io.c
> @@ -319,7 +319,7 @@ void ubifs_pad(const struct ubifs_info *c, void *buf, int pad)
>  {
>  	uint32_t crc;
>
> -	ubifs_assert(c, pad >= 0 && !(pad & 7));
> +	ubifs_assert(c, pad >= 0);
>
>  	if (pad >= UBIFS_PAD_NODE_SZ) {
>  		struct ubifs_ch *ch = buf;
> @@ -764,6 +764,10 @@ int ubifs_wbuf_write_nolock(struct ubifs_wbuf *wbuf, void *buf, int len)
>  	 * write-buffer.
>  	 */
>  	memcpy(wbuf->buf + wbuf->used, buf, len);
> +	if (aligned_len > len) {
> +		ubifs_assert(c, aligned_len - len < 8);
> +		ubifs_pad(c, wbuf->buf + wbuf->used + len, aligned_len - len);
> +	}
>
>  	if (aligned_len == wbuf->avail) {
>  		dbg_io("flush jhead %s wbuf to LEB %d:%d",
> @@ -856,13 +860,18 @@ int ubifs_wbuf_write_nolock(struct ubifs_wbuf *wbuf, void *buf, int len)
>  	}
>
>  	spin_lock(&wbuf->lock);
> -	if (aligned_len)
> +	if (aligned_len) {
>  		/*
>  		 * And now we have what's left and what does not take whole
>  		 * max. write unit, so write it to the write-buffer and we are
>  		 * done.
>  		 */
>  		memcpy(wbuf->buf, buf + written, len);
> +		if (aligned_len > len) {
> +			ubifs_assert(c, aligned_len - len < 8);
> +			ubifs_pad(c, wbuf->buf + len, aligned_len - len);
> +		}
> +	}
>
>  	if (c->leb_size - wbuf->offs >= c->max_write_size)
>  		wbuf->size = c->max_write_size;
>

Reviewed-by: Zhihao Cheng