Subject: Re: [PATCH 1/2] ubi: fix slab-out-of-bounds in ubi_eba_get_ldesc+0xfb/0x130
From: ZhaoLong Wang
To: Zhihao Cheng
Date: Thu, 4 May 2023 10:12:29 +0800
Message-ID: <307f61a1-b72c-c310-797c-013a8914ec1c@huawei.com>
References: <20230406071331.1247429-1-wangzhaolong1@huawei.com>
 <20230406071331.1247429-2-wangzhaolong1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Yes, that could happen. I was able to reproduce the problem despite the
low probability of triggering it. This race between wear_leveling_worker()
and ubi_resize_volume() can corrupt data in a UBIFS filesystem running on
the UBI volume. ubi->volumes_lock must be taken to protect the update of
eba_tbl in ubi_eba_copy_leb(). I will send a v2 patch to fix this issue.

With appreciation,
ZhaoLong Wang

> Hi,
>> From: Guo Xuenan
>>
>> When resizing a UBI volume through the ioctl interface,
>> ubi_resize_volume() resizes the EBA table first but does not update
>> vol->reserved_pebs in the same atomic context, which may lead to
>> concurrent access to the EBA table.
>>
>> For example, when a user shrinks UBI volume A by calling
>> ubi_resize_volume() while another thread is writing to volume B and
>> triggering wear-leveling, which may call ubi_write_fastmap(), KASAN may
>> report: slab-out-of-bounds in ubi_eba_get_ldesc+0xfb/0x130.
>>
> [...]
>> diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
>> index 2c867d16f89f..97294def01eb 100644
>> --- a/drivers/mtd/ubi/vmt.c
>> +++ b/drivers/mtd/ubi/vmt.c
>> @@ -408,6 +408,7 @@ int ubi_resize_volume(struct ubi_volume_desc *desc, int reserved_pebs)
>>       struct ubi_device *ubi = vol->ubi;
>>       struct ubi_vtbl_record vtbl_rec;
>>       struct ubi_eba_table *new_eba_tbl = NULL;
>> +    struct ubi_eba_table *old_eba_tbl = NULL;
>>       int vol_id = vol->vol_id;
>>
>>       if (ubi->ro_mode)
>> @@ -453,10 +454,13 @@ int ubi_resize_volume(struct ubi_volume_desc *desc, int reserved_pebs)
>>               err = -ENOSPC;
>>               goto out_free;
>>           }
>> +
>>           ubi->avail_pebs -= pebs;
>>           ubi->rsvd_pebs += pebs;
>>           ubi_eba_copy_table(vol, new_eba_tbl, vol->reserved_pebs);
>> -        ubi_eba_replace_table(vol, new_eba_tbl);
>> +        old_eba_tbl = vol->eba_tbl;
>> +        vol->eba_tbl = new_eba_tbl;
>> +        vol->reserved_pebs = reserved_pebs;
>>           spin_unlock(&ubi->volumes_lock);
>>       }
>>
>> @@ -471,7 +475,9 @@ int ubi_resize_volume(struct ubi_volume_desc *desc, int reserved_pebs)
>>           ubi->avail_pebs -= pebs;
>>           ubi_update_reserved(ubi);
>>           ubi_eba_copy_table(vol, new_eba_tbl, reserved_pebs);
>> -        ubi_eba_replace_table(vol, new_eba_tbl);
>> +        old_eba_tbl = vol->eba_tbl;
>> +        vol->eba_tbl = new_eba_tbl;
>> +        vol->reserved_pebs = reserved_pebs;
>>           spin_unlock(&ubi->volumes_lock);
>>       }
>>
>> @@ -493,7 +499,6 @@ int ubi_resize_volume(struct ubi_volume_desc *desc, int reserved_pebs)
>>       if (err)
>>           goto out_acc;
>>
>> -    vol->reserved_pebs = reserved_pebs;
>>       if (vol->vol_type == UBI_DYNAMIC_VOLUME) {
>>           vol->used_ebs = reserved_pebs;
>>           vol->last_eb_bytes = vol->usable_leb_size;
>> @@ -501,19 +506,24 @@ int ubi_resize_volume(struct ubi_volume_desc *desc, int reserved_pebs)
>>               (long long)vol->used_ebs * vol->usable_leb_size;
>>       }
>>
>> +    /* destroy old table */
>> +    ubi_eba_destroy_table(old_eba_tbl);
>>       ubi_volume_notify(ubi, vol, UBI_VOLUME_RESIZED);
>>       self_check_volumes(ubi);
>>       return err;
>>
>>   out_acc:
>> +    spin_lock(&ubi->volumes_lock);
>> +    vol->reserved_pebs = reserved_pebs - pebs;
>>       if (pebs > 0) {
>> -        spin_lock(&ubi->volumes_lock);
>>           ubi->rsvd_pebs -= pebs;
>>           ubi->avail_pebs += pebs;
>> -        spin_unlock(&ubi->volumes_lock);
>> +        ubi_eba_copy_table(vol, old_eba_tbl, vol->reserved_pebs);
>> +    } else {
>> +        ubi_eba_copy_table(vol, old_eba_tbl, reserved_pebs);
>>       }
>> -    return err;
>> -
>> +    vol->eba_tbl = old_eba_tbl;
>> +    spin_unlock(&ubi->volumes_lock);
>>   out_free:
>>       ubi_eba_destroy_table(new_eba_tbl);
>>       return err;
>>
>
> Besides that, it's better to protect the 'vol->eba_tbl->entries'
> assignment like this:
>
> diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
> index 403b79d6efd5..5ae0c1bc6f41 100644
> --- a/drivers/mtd/ubi/eba.c
> +++ b/drivers/mtd/ubi/eba.c
> @@ -1450,7 +1450,9 @@ int ubi_eba_copy_leb(struct ubi_device *ubi, int from, int to,
>         }
>
>         ubi_assert(vol->eba_tbl->entries[lnum].pnum == from);
> +       spin_lock(&ubi->volumes_lock);
>         vol->eba_tbl->entries[lnum].pnum = to;
> +       spin_unlock(&ubi->volumes_lock);
>
>  out_unlock_buf:
>         mutex_unlock(&ubi->buf_mutex);
>
> Otherwise, a race between wear_leveling_worker() and a volume shrink
> could happen:
>
>   ubi_resize_volume                      wear_leveling_worker
>     ubi_eba_copy_table(vol, new_eba_tbl, reserved_pebs);
>                                            vol->eba_tbl->entries[lnum].pnum = to;
>                                            // update lands in the old eba_tbl
>     vol->eba_tbl = new_eba_tbl
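
[Editor's note, for illustration only; this is not part of either patch.]
Below is a minimal user-space model of the locking pattern discussed above,
using pthreads and simplified stand-in structures. The names entries, pnum,
and volumes_lock mirror the quoted diffs, but everything else is a
hypothetical sketch, not the actual kernel code.

/*
 * Toy model of the race (user space, pthreads).  "resize" copies the EBA
 * table into a new, smaller one and swaps the pointer; "wear_leveling"
 * redirects one entry to a new PEB.  Both critical sections take the same
 * lock, mimicking the proposed fix.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct eba_entry { int pnum; };
struct eba_table { struct eba_entry *entries; int size; };

static pthread_mutex_t volumes_lock = PTHREAD_MUTEX_INITIALIZER;
static struct eba_table *eba_tbl;

/* Models the update in ubi_eba_copy_leb(): LEB 0 now lives in PEB 42. */
static void *wear_leveling(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&volumes_lock);
	eba_tbl->entries[0].pnum = 42;
	pthread_mutex_unlock(&volumes_lock);
	return NULL;
}

/* Models ubi_resize_volume(): copy the old table into a smaller one,
 * then swap the pointer, all under the same lock. */
static void *resize(void *arg)
{
	struct eba_table *new_tbl, *old_tbl;

	(void)arg;
	new_tbl = malloc(sizeof(*new_tbl));
	new_tbl->size = 1;
	new_tbl->entries = calloc(new_tbl->size, sizeof(*new_tbl->entries));

	pthread_mutex_lock(&volumes_lock);
	memcpy(new_tbl->entries, eba_tbl->entries,
	       new_tbl->size * sizeof(*new_tbl->entries));
	old_tbl = eba_tbl;
	eba_tbl = new_tbl;
	pthread_mutex_unlock(&volumes_lock);

	free(old_tbl->entries);
	free(old_tbl);
	return NULL;
}

int main(void)
{
	pthread_t wl, rs;

	eba_tbl = malloc(sizeof(*eba_tbl));
	eba_tbl->size = 4;
	eba_tbl->entries = calloc(eba_tbl->size, sizeof(*eba_tbl->entries));

	pthread_create(&wl, NULL, wear_leveling, NULL);
	pthread_create(&rs, NULL, resize, NULL);
	pthread_join(wl, NULL);
	pthread_join(rs, NULL);

	/* Always prints 42: the update is either copied into the new table
	 * or applied to it directly; it is never lost to the discarded one. */
	printf("LEB 0 -> PEB %d\n", eba_tbl->entries[0].pnum);

	free(eba_tbl->entries);
	free(eba_tbl);
	return 0;
}

Because both the entry update and the copy-and-swap run under the same lock,
the wear-leveling update either happens before the copy (and is carried into
the new table) or after the swap (and lands in the new table directly); it
can no longer be written into the old table after that table has already been
copied, which is exactly the lost update shown in the race diagram above.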