Date: Mon, 27 Sep 2021 15:25:08 +0800
Subject: Re: [PATCH v3 2/6] badblocks: add helper routines for badblock ranges handling
To: Coly Li, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-raid@vger.kernel.org, nvdimm@lists.linux.dev
Cc: antlists@youngman.org.uk, Dan Williams, Hannes Reinecke, Jens Axboe, NeilBrown, Richard Fan, Vishal L Verma
References: <20210913163643.10233-1-colyli@suse.de> <20210913163643.10233-3-colyli@suse.de>
From: Geliang Tang
In-Reply-To: <20210913163643.10233-3-colyli@suse.de>

On 9/14/21 00:36, Coly Li wrote:
> This patch adds several helper routines to improve badblock ranges
> handling. These helper routines will be used later in the improved
> version of badblocks_set()/badblocks_clear()/badblocks_check().
>
> - Helpers prev_by_hint() and prev_badblocks() are used to find the bad
>   range from the bad table which the searching range starts at or after.
>
> - The following helpers are to decide the relative layout between the
>   manipulating range and an existing bad block range from the bad table.
>   - can_merge_behind()
>     Return 'true' if the manipulating range can backward merge with the
>     bad block range.
>   - can_merge_front()
>     Return 'true' if the manipulating range can forward merge with the
>     bad block range.
>   - can_combine_front()
>     Return 'true' if two adjacent bad block ranges before the
>     manipulating range can be merged.
>   - overlap_front()
>     Return 'true' if the manipulating range exactly overlaps with the
>     bad block range in front of its range.
>   - overlap_behind()
>     Return 'true' if the manipulating range exactly overlaps with the
>     bad block range behind its range.
>   - can_front_overwrite()
>     Return 'true' if the manipulating range can forward overwrite the
>     bad block range in front of its range.
>
> - The following helpers are to add the manipulating range into the bad
>   block table. A different routine is called for each specific relative
>   layout between the manipulating range and the other bad block ranges
>   in the bad block table.
>   - behind_merge()
>     Merge the manipulating range with the bad block range behind its
>     range, and return the merged length in unit of sectors.
>   - front_merge()
>     Merge the manipulating range with the bad block range in front of
>     its range, and return the merged length in unit of sectors.
>   - front_combine()
>     Combine the two adjacent bad block ranges before the manipulating
>     range into a larger one.
>   - front_overwrite()
>     Overwrite partial or whole bad block range which is in front of the
>     manipulating range. The overwrite may split an existing bad block
>     range and generate more bad block ranges in the bad block table.
>   - insert_at()
>     Insert the manipulating range at a specific location in the bad
>     block table.
>
> All the above helpers are used in later patches to improve the bad block
> ranges handling for badblocks_set()/badblocks_clear()/badblocks_check().
>
> Signed-off-by: Coly Li
> Cc: Dan Williams
> Cc: Hannes Reinecke
> Cc: Jens Axboe
> Cc: NeilBrown
> Cc: Richard Fan
> Cc: Vishal L Verma
> ---
>  block/badblocks.c | 374 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 374 insertions(+)
>
> diff --git a/block/badblocks.c b/block/badblocks.c
> index d39056630d9c..efe316181e05 100644
> --- a/block/badblocks.c
> +++ b/block/badblocks.c
> @@ -16,6 +16,380 @@
>  #include
>  #include
>
> +/*
> + * Find the range starts at-or-before 's' from bad table. The search
> + * starts from index 'hint' and stops at index 'hint_end' from the bad
> + * table.
> + */
> +static int prev_by_hint(struct badblocks *bb, sector_t s, int hint)
> +{
> +	u64 *p = bb->page;
> +	int ret = -1;
> +	int hint_end = hint + 2;

How about declaring these variables following the "reverse Xmas tree" order?
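i.e. something like this (just a sketch reusing the initializers from the
patch, ordered longest line first, not compile-tested):

	int hint_end = hint + 2;
	u64 *p = bb->page;
	int ret = -1;

The same ordering comment applies to several of the other new helpers below.
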
> +
> +	while ((hint < hint_end) && ((hint + 1) <= bb->count) &&
> +	       (BB_OFFSET(p[hint]) <= s)) {
> +		if ((hint + 1) == bb->count || BB_OFFSET(p[hint + 1]) > s) {
> +			ret = hint;
> +			break;
> +		}
> +		hint++;
> +	}
> +
> +	return ret;
> +}
> +
> +/*
> + * Find the range starts at-or-before bad->start. If 'hint' is provided
> + * (hint >= 0) then search in the bad table from hint firstly. It is
> + * very probably the wanted bad range can be found from the hint index,
> + * then the unnecessary while-loop iteration can be avoided.
> + */
> +static int prev_badblocks(struct badblocks *bb, struct badblocks_context *bad,
> +			  int hint)
> +{
> +	u64 *p;
> +	int lo, hi;
> +	sector_t s = bad->start;
> +	int ret = -1;
> +
> +	if (!bb->count)
> +		goto out;
> +
> +	if (hint >= 0) {
> +		ret = prev_by_hint(bb, s, hint);
> +		if (ret >= 0)
> +			goto out;
> +	}
> +
> +	lo = 0;
> +	hi = bb->count;
> +	p = bb->page;
> +
> +	while (hi - lo > 1) {
> +		int mid = (lo + hi)/2;
> +		sector_t a = BB_OFFSET(p[mid]);
> +
> +		if (a <= s)
> +			lo = mid;
> +		else
> +			hi = mid;
> +	}
> +
> +	if (BB_OFFSET(p[lo]) <= s)
> +		ret = lo;
> +out:
> +	return ret;
> +}
> +
> +/*
> + * Return 'true' if the range indicated by 'bad' can be backward merged
> + * with the bad range (from the bad table) index by 'behind'.
> + */
> +static bool can_merge_behind(struct badblocks *bb, struct badblocks_context *bad,
> +			     int behind)
> +{
> +	u64 *p = bb->page;
> +	sector_t s = bad->start;
> +	sector_t sectors = bad->len;
> +	int ack = bad->ack;
> +
> +	if ((s <= BB_OFFSET(p[behind])) &&
> +	    ((s + sectors) >= BB_OFFSET(p[behind])) &&
> +	    ((BB_END(p[behind]) - s) <= BB_MAX_LEN) &&
> +	    BB_ACK(p[behind]) == ack)
> +		return true;
> +	return false;
> +}
> +
> +/*
> + * Do backward merge for range indicated by 'bad' and the bad range
> + * (from the bad table) indexed by 'behind'. The return value is merged
> + * sectors from bad->len.
> + */
> +static int behind_merge(struct badblocks *bb, struct badblocks_context *bad,
> +			int behind)
> +{
> +	u64 *p = bb->page;
> +	sector_t s = bad->start;
> +	sector_t sectors = bad->len;
> +	int ack = bad->ack;
> +	int merged = 0;
> +
> +	WARN_ON(s > BB_OFFSET(p[behind]));
> +	WARN_ON((s + sectors) < BB_OFFSET(p[behind]));
> +
> +	if (s < BB_OFFSET(p[behind])) {
> +		WARN_ON((BB_LEN(p[behind]) + merged) >= BB_MAX_LEN);
> +
> +		merged = min_t(sector_t, sectors, BB_OFFSET(p[behind]) - s);
> +		p[behind] = BB_MAKE(s, BB_LEN(p[behind]) + merged, ack);
> +	} else {
> +		merged = min_t(sector_t, sectors, BB_LEN(p[behind]));
> +	}
> +
> +	WARN_ON(merged == 0);
> +
> +	return merged;
> +}
> +
> +/*
> + * Return 'true' if the range indicated by 'bad' can be forward
> + * merged with the bad range (from the bad table) indexed by 'prev'.
> + */
> +static bool can_merge_front(struct badblocks *bb, int prev,
> +			    struct badblocks_context *bad)
> +{
> +	u64 *p = bb->page;
> +	sector_t s = bad->start;
> +	int ack = bad->ack;
> +
> +	if (BB_ACK(p[prev]) == ack &&
> +	    (s < BB_END(p[prev]) ||
> +	     (s == BB_END(p[prev]) && (BB_LEN(p[prev]) < BB_MAX_LEN))))
> +		return true;
> +	return false;
> +}
> +
> +/*
> + * Do forward merge for range indicated by 'bad' and the bad range
> + * (from bad table) indexed by 'prev'. The return value is sectors
> + * merged from bad->len.
> + */
> +static int front_merge(struct badblocks *bb, int prev, struct badblocks_context *bad)
> +{
> +	sector_t sectors = bad->len;
> +	sector_t s = bad->start;
> +	int ack = bad->ack;
> +	u64 *p = bb->page;
> +	int merged = 0;
> +
> +	WARN_ON(s > BB_END(p[prev]));
> +
> +	if (s < BB_END(p[prev])) {
> +		merged = min_t(sector_t, sectors, BB_END(p[prev]) - s);
> +	} else {
> +		merged = min_t(sector_t, sectors, BB_MAX_LEN - BB_LEN(p[prev]));
> +		if ((prev + 1) < bb->count &&
> +		    merged > (BB_OFFSET(p[prev + 1]) - BB_END(p[prev]))) {
> +			merged = BB_OFFSET(p[prev + 1]) - BB_END(p[prev]);
> +		}
> +
> +		p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
> +				  BB_LEN(p[prev]) + merged, ack);
> +	}
> +
> +	return merged;
> +}
> +
> +/*
> + * 'Combine' is a special case which can_merge_front() is not able to
> + * handle: If a bad range (indexed by 'prev' from bad table) exactly
> + * starts as bad->start, and the bad range ahead of 'prev' (indexed by
> + * 'prev - 1' from bad table) exactly ends at where 'prev' starts, and
> + * the sum of their lengths does not exceed BB_MAX_LEN limitation, then
> + * these two bad range (from bad table) can be combined.
> + *
> + * Return 'true' if bad ranges indexed by 'prev' and 'prev - 1' from bad
> + * table can be combined.
> + */
> +static bool can_combine_front(struct badblocks *bb, int prev,
> +			      struct badblocks_context *bad)
> +{
> +	u64 *p = bb->page;
> +
> +	if ((prev > 0) &&
> +	    (BB_OFFSET(p[prev]) == bad->start) &&
> +	    (BB_END(p[prev - 1]) == BB_OFFSET(p[prev])) &&
> +	    (BB_LEN(p[prev - 1]) + BB_LEN(p[prev]) <= BB_MAX_LEN) &&
> +	    (BB_ACK(p[prev - 1]) == BB_ACK(p[prev])))
> +		return true;
> +	return false;
> +}
> +
> +/*
> + * Combine the bad ranges indexed by 'prev' and 'prev - 1' (from bad
> + * table) into one larger bad range, and the new range is indexed by
> + * 'prev - 1'.
> + */
> +static void front_combine(struct badblocks *bb, int prev)
> +{
> +	u64 *p = bb->page;
> +
> +	p[prev - 1] = BB_MAKE(BB_OFFSET(p[prev - 1]),
> +			      BB_LEN(p[prev - 1]) + BB_LEN(p[prev]),
> +			      BB_ACK(p[prev]));
> +	if ((prev + 1) < bb->count)
> +		memmove(p + prev, p + prev + 1, (bb->count - prev - 1) * 8);
> +}
> +
> +/*
> + * Return 'true' if the range indicated by 'bad' is exactly forward
> + * overlapped with the bad range (from bad table) indexed by 'front'.
> + * Exactly forward overlap means the bad range (from bad table) indexed
> + * by 'prev' does not cover the whole range indicated by 'bad'.
> + */
> +static bool overlap_front(struct badblocks *bb, int front,
> +			  struct badblocks_context *bad)
> +{
> +	u64 *p = bb->page;
> +
> +	if (bad->start >= BB_OFFSET(p[front]) &&
> +	    bad->start < BB_END(p[front]))
> +		return true;
> +	return false;
> +}
> +
> +/*
> + * Return 'true' if the range indicated by 'bad' is exactly backward
> + * overlapped with the bad range (from bad table) indexed by 'behind'.
> + */
> +static bool overlap_behind(struct badblocks *bb, struct badblocks_context *bad,
> +			   int behind)
> +{
> +	u64 *p = bb->page;
> +
> +	if (bad->start < BB_OFFSET(p[behind]) &&
> +	    (bad->start + bad->len) > BB_OFFSET(p[behind]))
> +		return true;
> +	return false;
> +}
> +
> +/*
> + * Return 'true' if the range indicated by 'bad' can overwrite the bad
> + * range (from bad table) indexed by 'prev'.
> + *
> + * The range indicated by 'bad' can overwrite the bad range indexed by
> + * 'prev' when,
> + * 1) The whole range indicated by 'bad' can cover partial or whole bad
> + *    range (from bad table) indexed by 'prev'.
> + * 2) The ack value of 'bad' is larger or equal to the ack value of bad
> + *    range 'prev'.
> + *
> + * If the overwriting doesn't cover the whole bad range (from bad table)
> + * indexed by 'prev', new range might be split from existing bad range,
> + * 1) The overwrite covers head or tail part of existing bad range, 1
> + *    extra bad range will be split and added into the bad table.
> + * 2) The overwrite covers middle of existing bad range, 2 extra bad
> + *    ranges will be split (ahead and after the overwritten range) and
> + *    added into the bad table.
> + * The number of extra split ranges of the overwriting is stored in
> + * 'extra' and returned for the caller.
> + */
> +static bool can_front_overwrite(struct badblocks *bb, int prev,
> +				struct badblocks_context *bad, int *extra)
> +{
> +	u64 *p = bb->page;
> +	int len;
> +
> +	WARN_ON(!overlap_front(bb, prev, bad));
> +
> +	if (BB_ACK(p[prev]) >= bad->ack)
> +		return false;
> +
> +	if (BB_END(p[prev]) <= (bad->start + bad->len)) {
> +		len = BB_END(p[prev]) - bad->start;
> +		if (BB_OFFSET(p[prev]) == bad->start)
> +			*extra = 0;
> +		else
> +			*extra = 1;
> +
> +		bad->len = len;
> +	} else {
> +		if (BB_OFFSET(p[prev]) == bad->start)
> +			*extra = 1;
> +		else
> +			/*
> +			 * prev range will be split into two, beside the overwritten
> +			 * one, an extra slot needed from bad table.
> +			 */
> +			*extra = 2;
> +	}
> +
> +	if ((bb->count + (*extra)) >= MAX_BADBLOCKS)
> +		return false;
> +
> +	return true;
> +}
> +
> +/*
> + * Do the overwrite from the range indicated by 'bad' to the bad range
> + * (from bad table) indexed by 'prev'.
> + * The previously called can_front_overwrite() will provide how many
> + * extra bad range(s) might be split and added into the bad table. All
> + * the splitting cases in the bad table will be handled here.
> + */
> +static int front_overwrite(struct badblocks *bb, int prev,
> +			   struct badblocks_context *bad, int extra)
> +{
> +	u64 *p = bb->page;
> +	int n = extra;
> +	sector_t orig_end = BB_END(p[prev]);
> +	int orig_ack = BB_ACK(p[prev]);
> +
> +	switch (extra) {
> +	case 0:
> +		p[prev] = BB_MAKE(BB_OFFSET(p[prev]), BB_LEN(p[prev]),
> +				  bad->ack);
> +		break;
> +	case 1:
> +		if (BB_OFFSET(p[prev]) == bad->start) {
> +			p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
> +					  bad->len, bad->ack);
> +			memmove(p + prev + 2, p + prev + 1,
> +				(bb->count - prev - 1) * 8);
> +			p[prev + 1] = BB_MAKE(bad->start + bad->len,
> +					      orig_end - BB_END(p[prev]),
> +					      orig_ack);
> +		} else {
> +			p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
> +					  bad->start - BB_OFFSET(p[prev]),
> +					  BB_ACK(p[prev]));
> +			memmove(p + prev + 1 + n, p + prev + 1,
> +				(bb->count - prev - 1) * 8);
> +			p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack);
> +		}
> +		break;
> +	case 2:
> +		p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
> +				  bad->start - BB_OFFSET(p[prev]),
> +				  BB_ACK(p[prev]));
> +		memmove(p + prev + 1 + n, p + prev + 1,
> +			(bb->count - prev - 1) * 8);
> +		p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack);
> +		p[prev + 2] = BB_MAKE(BB_END(p[prev + 1]),
> +				      orig_end - BB_END(p[prev + 1]),
> +				      BB_ACK(p[prev]));
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return bad->len;
> +}
> +
> +/*
> + * Explicitly insert a range indicated by 'bad' to the bad table, where
> + * the location is indexed by 'at'.
> + */
> +static int insert_at(struct badblocks *bb, int at, struct badblocks_context *bad)
> +{
> +	u64 *p = bb->page;
> +	sector_t sectors = bad->len;
> +	sector_t s = bad->start;
> +	int ack = bad->ack;
> +	int len;
> +
> +	WARN_ON(badblocks_full(bb));
> +
> +	len = min_t(sector_t, sectors, BB_MAX_LEN);
> +	if (at < bb->count)
> +		memmove(p + at + 1, p + at, (bb->count - at) * 8);
> +	p[at] = BB_MAKE(s, len, ack);
> +
> +	return len;
> +}
> +
>  /**
>   * badblocks_check() - check a given range for bad sectors
>   * @bb:		the badblocks structure that holds all badblock information
>
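
One more question, just to confirm I read the extra == 2 case in
can_front_overwrite()/front_overwrite() correctly: with an existing entry at
'prev' covering sectors [100, 200) with ack == 0, and
bad = { .start = 120, .len = 40, .ack = 1 }, I'd expect the table to end up
as below (illustrative values only, assuming the usual
BB_MAKE(offset, len, ack) encoding):

	p[prev]     = BB_MAKE(100, 20, 0);	/* head of the old range, [100, 120), ack unchanged */
	p[prev + 1] = BB_MAKE(120, 40, 1);	/* the overwritten range, [120, 160), new ack */
	p[prev + 2] = BB_MAKE(160, 40, 0);	/* tail of the old range, [160, 200), ack unchanged */

Is that the intended behaviour here?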