2023-06-26 08:16:36

by Li Nan

Subject: [PATCH v4 0/4] block/badblocks: fix badblocks setting error

From: Li Nan <[email protected]>

This patch series fixes some simple bugs in setting badblocks and
optimizes struct badblocks. Coly Li has been working to refactor badblocks
in the patch series "badblocks improvement for multiple bad block ranges"
(https://lore.kernel.org/all/[email protected]), but that
work is substantial. Until it lands, this series fixes some easily
triggered issues and optimizes code that does not conflict with Coly's
changes.

Changes in v4:
- patch 1: drop the field-reordering part.
- patches 3/4: improve the commit logs.

Changes in v3:
- drop patches with significant changes.

Li Nan (4):
block/badblocks: change some members of badblocks to bool
block/badblocks: only set bb->changed/unacked_exist when badblocks
changes
block/badblocks: fix badblocks loss when badblocks combine
block/badblocks: fix the bug of reverse order

include/linux/badblocks.h | 9 +++++----
block/badblocks.c | 38 ++++++++++++++++++++++----------------
2 files changed, 27 insertions(+), 20 deletions(-)

--
2.39.2



2023-06-26 08:17:47

by Li Nan

Subject: [PATCH v4 1/4] block/badblocks: change some members of badblocks to bool

From: Li Nan <[email protected]>

"changed" and "unacked_exist" are used as booleans. Change their type to
bool. No functional change intended.

Signed-off-by: Li Nan <[email protected]>
---
include/linux/badblocks.h | 9 +++++----
block/badblocks.c | 14 +++++++-------
2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/include/linux/badblocks.h b/include/linux/badblocks.h
index 2426276b9bd3..2feb143a1856 100644
--- a/include/linux/badblocks.h
+++ b/include/linux/badblocks.h
@@ -27,15 +27,16 @@
struct badblocks {
struct device *dev; /* set by devm_init_badblocks */
int count; /* count of bad blocks */
- int unacked_exist; /* there probably are unacknowledged
- * bad blocks. This is only cleared
- * when a read discovers none
+ bool unacked_exist; /* there probably are unacknowledged
+ * bad blocks. This is only cleared
+ * when a read of unacknowledged bad
+ * blocks discovers none
*/
int shift; /* shift from sectors to block size
* a -ve shift means badblocks are
* disabled.*/
u64 *page; /* badblock list */
- int changed;
+ bool changed;
seqlock_t lock;
sector_t sector;
sector_t size; /* in sectors */
diff --git a/block/badblocks.c b/block/badblocks.c
index 3afb550c0f7b..f28e971a4b94 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -141,7 +141,7 @@ static void badblocks_update_acked(struct badblocks *bb)
}

if (!unacked)
- bb->unacked_exist = 0;
+ bb->unacked_exist = false;
}

/**
@@ -302,9 +302,9 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
}
}

- bb->changed = 1;
+ bb->changed = true;
if (!acknowledged)
- bb->unacked_exist = 1;
+ bb->unacked_exist = true;
else
badblocks_update_acked(bb);
write_sequnlock_irqrestore(&bb->lock, flags);
@@ -414,7 +414,7 @@ int badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
}

badblocks_update_acked(bb);
- bb->changed = 1;
+ bb->changed = true;
out:
write_sequnlock_irq(&bb->lock);
return rv;
@@ -435,7 +435,7 @@ void ack_all_badblocks(struct badblocks *bb)
return;
write_seqlock_irq(&bb->lock);

- if (bb->changed == 0 && bb->unacked_exist) {
+ if (!bb->changed && bb->unacked_exist) {
u64 *p = bb->page;
int i;

@@ -447,7 +447,7 @@ void ack_all_badblocks(struct badblocks *bb)
p[i] = BB_MAKE(start, len, 1);
}
}
- bb->unacked_exist = 0;
+ bb->unacked_exist = false;
}
write_sequnlock_irq(&bb->lock);
}
@@ -493,7 +493,7 @@ ssize_t badblocks_show(struct badblocks *bb, char *page, int unack)
length << bb->shift);
}
if (unack && len == 0)
- bb->unacked_exist = 0;
+ bb->unacked_exist = false;

if (read_seqretry(&bb->lock, seq))
goto retry;
--
2.39.2


2023-06-26 08:23:54

by Li Nan

Subject: [PATCH v4 3/4] block/badblocks: fix badblocks loss when badblocks combine

From: Li Nan <[email protected]>

Badblocks are lost if set as below:
$ echo 1 1 > bad_blocks
$ echo 3 1 > bad_blocks
$ echo 1 4 > bad_blocks
$ cat bad_blocks
1 3

After the fix, in the same scenario, it will be:
$ cat bad_blocks
1 4

In badblocks_set(), when setting new badblocks, first find the two
existing ranges adjacent to them, named 'lo' and 'hi'. Then try to merge
the new range with 'lo'. If the merge succeeds and 'lo' now intersects
'hi', try to combine 'lo' and 'hi'.

set 1 4
binary-search:
lo: 1 1 |____|
hi: 3 1 |____|

merge with lo:
lo: 1 4 |___________________|
hi: 3 1 |____|

combine lo and hi:
result: 1 3 |______________|
| -> hi's end
|____| -> lost

Currently, the end of the combined range is always hi's end. However, it
should be the larger of lo's end and hi's end. Fix it.
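The corrected arithmetic can be sketched outside the kernel. This is a
minimal illustration only: plain offset/length structs stand in for the
packed u64 entries the kernel builds with BB_MAKE()/BB_OFFSET()/BB_LEN(),
and the names below are illustrative, not kernel names.

```c
#include <assert.h>

/* Simplified stand-in for one badblocks table entry. */
struct bb_range {
	unsigned long long start;
	int len;
};

static unsigned long long bb_end(struct bb_range r)
{
	return r.start + r.len;
}

/* 's' is the end of 'lo' after the new range was merged into it. The
 * combined length must run from lo's start to the larger of the two
 * end points, not unconditionally to hi's end. */
static int combined_len(struct bb_range lo, struct bb_range hi)
{
	unsigned long long s = bb_end(lo);
	unsigned long long end = bb_end(hi) > s ? bb_end(hi) : s;

	return (int)(end - lo.start);
}
```

For the scenario in the commit log (lo = 1+4 after the merge, hi = 3+1),
this yields length 4, whereas the old formula lolen + hilen - (s - a)
gave 3 and lost the last sector.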

Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Li Nan <[email protected]>
---
block/badblocks.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index 7b1ad364e85c..c1745b76d8f1 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -267,16 +267,14 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
if (sectors == 0 && hi < bb->count) {
/* we might be able to combine lo and hi */
/* Note: 's' is at the end of 'lo' */
- sector_t a = BB_OFFSET(p[hi]);
- int lolen = BB_LEN(p[lo]);
- int hilen = BB_LEN(p[hi]);
- int newlen = lolen + hilen - (s - a);
+ sector_t a = BB_OFFSET(p[lo]);
+ int newlen = max(s, BB_OFFSET(p[hi]) + BB_LEN(p[hi])) - a;

- if (s >= a && newlen < BB_MAX_LEN) {
+ if (s >= BB_OFFSET(p[hi]) && newlen < BB_MAX_LEN) {
/* yes, we can combine them */
int ack = BB_ACK(p[lo]) && BB_ACK(p[hi]);

- p[lo] = BB_MAKE(BB_OFFSET(p[lo]), newlen, ack);
+ p[lo] = BB_MAKE(a, newlen, ack);
memmove(p + hi, p + hi + 1,
(bb->count - hi - 1) * 8);
bb->count--;
--
2.39.2


2023-06-26 08:28:29

by Li Nan

Subject: [PATCH v4 2/4] block/badblocks: only set bb->changed/unacked_exist when badblocks changes

From: Li Nan <[email protected]>

In badblocks_set(), bb->changed and bb->unacked_exist are set even if no
badblocks actually change. Only set them when the badblocks table changes.
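The gating this patch adds can be sketched as below. 'struct bb_flags'
and finish_set() are illustrative stand-ins, not kernel names, and the
badblocks_update_acked() call on the acknowledged path is omitted here.

```c
#include <stdbool.h>

/* Minimal stand-in for the two flags touched at the end of
 * badblocks_set(). */
struct bb_flags {
	bool changed;
	bool unacked_exist;
};

/* Only publish the changed/unacked state when a table entry was
 * actually modified; otherwise leave both flags untouched. */
static void finish_set(struct bb_flags *bb, bool changed, bool acknowledged)
{
	if (!changed)
		return;
	bb->changed = true;
	if (!acknowledged)
		bb->unacked_exist = true;
}
```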

Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Li Nan <[email protected]>
---
block/badblocks.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index f28e971a4b94..7b1ad364e85c 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -166,6 +166,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
int lo, hi;
int rv = 0;
unsigned long flags;
+ bool changed = false;

if (bb->shift < 0)
/* badblocks are disabled */
@@ -229,6 +230,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
s = a + BB_MAX_LEN;
}
sectors = e - s;
+ changed = true;
}
}
if (sectors && hi < bb->count) {
@@ -259,6 +261,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
sectors = e - s;
lo = hi;
hi++;
+ changed = true;
}
}
if (sectors == 0 && hi < bb->count) {
@@ -277,6 +280,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
memmove(p + hi, p + hi + 1,
(bb->count - hi - 1) * 8);
bb->count--;
+ changed = true;
}
}
while (sectors) {
@@ -299,14 +303,17 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
p[hi] = BB_MAKE(s, this_sectors, acknowledged);
sectors -= this_sectors;
s += this_sectors;
+ changed = true;
}
}

- bb->changed = true;
- if (!acknowledged)
- bb->unacked_exist = true;
- else
- badblocks_update_acked(bb);
+ if (changed) {
+ bb->changed = changed;
+ if (!acknowledged)
+ bb->unacked_exist = true;
+ else
+ badblocks_update_acked(bb);
+ }
write_sequnlock_irqrestore(&bb->lock, flags);

return rv;
--
2.39.2


2023-06-26 08:31:31

by Li Nan

Subject: [PATCH v4 4/4] block/badblocks: fix the bug of reverse order

From: Li Nan <[email protected]>

Badblocks are kept sorted in ascending order, but the order is reversed
if we set a large area at once, as below:
$ echo 0 2048 > bad_blocks
$ cat bad_blocks
1536 512
1024 512
512 512
0 512

Actually, it should be:
$ echo 0 2048 > bad_blocks
$ cat bad_blocks
0 512
512 512
1024 512
1536 512

Keeping 'hi' unchanged while setting consecutive badblocks is wrong: the
next range is greater than 'p[hi]' and should be stored at 'p[hi+1]'.
Increment 'hi' on each iteration.

(0 512) 0 512
|_________| |_________|
p[hi] p[hi]

(512 512) (0 512) fix 0 512 1024
|_________|_________| ===> |_________|_________|
p[hi] p[hi+1]

(1024 512)(512 512) (0 512) 0 512 1024 1536
|_________|_________|_________| |_________|_________|_________|
p[hi] p[hi+2]

...
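The fixed insertion loop can be sketched as below. Plain structs stand
in for the kernel's packed entries, and SKETCH_MAX_LEN mirrors
BB_MAX_LEN; the names are illustrative, not the kernel's.

```c
/* Largest length one entry can hold in this sketch (the kernel's
 * BB_MAX_LEN plays this role). */
#define SKETCH_MAX_LEN 512

struct bb_range {
	unsigned long long start;
	int len;
};

/* Split a large request into SKETCH_MAX_LEN-sized chunks, writing each
 * chunk at p[hi] and advancing 'hi' per chunk so the chunks land in
 * ascending order. Returns the index just past the last chunk. */
static int insert_chunks(struct bb_range *p, int hi,
			 unsigned long long s, int sectors)
{
	while (sectors) {
		int this_sectors = sectors;

		if (this_sectors > SKETCH_MAX_LEN)
			this_sectors = SKETCH_MAX_LEN;
		p[hi].start = s;
		p[hi].len = this_sectors;
		sectors -= this_sectors;
		s += this_sectors;
		hi++;	/* the fix: move to the next slot */
	}
	return hi;
}
```

For the commit-log example, inserting 0+2048 produces chunks starting at
0, 512, 1024, and 1536, in that order.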

Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Li Nan <[email protected]>
---
block/badblocks.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/block/badblocks.c b/block/badblocks.c
index c1745b76d8f1..b79d37a4bf0e 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -301,6 +301,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
p[hi] = BB_MAKE(s, this_sectors, acknowledged);
sectors -= this_sectors;
s += this_sectors;
+ hi++;
changed = true;
}
}
--
2.39.2


2023-08-16 12:02:43

by Li Nan

Subject: Re: [PATCH v4 0/4] block/badblocks: fix badblocks setting error

Friendly ping.

On 2023/6/26 16:09, [email protected] wrote:
> From: Li Nan <[email protected]>
>
> This patch series fixes some simple bugs of setting badblocks and
> optimizing struct badblocks. Coly Li has been trying to refactor badblocks
> in patch series "badblocks improvement for multiple bad block ranges"
> (https://lore.kernel.org/all/[email protected]). but the
> workload is significant. Before that, I will fix some easily triggered
> issues and optimize some code that does not conflict with Coly's changes.
>
> Changes in v4:
> - patch 1, remove the part of reorder fields
> - patch 3/4, improve commit log.
>
> Changes in v3:
> - delete patchs with significant changes.
>
> Li Nan (4):
> block/badblocks: change some members of badblocks to bool
> block/badblocks: only set bb->changed/unacked_exist when badblocks
> changes
> block/badblocks: fix badblocks loss when badblocks combine
> block/badblocks: fix the bug of reverse order
>
> include/linux/badblocks.h | 9 +++++----
> block/badblocks.c | 38 ++++++++++++++++++++++----------------
> 2 files changed, 27 insertions(+), 20 deletions(-)
>

--
Thanks,
Nan


2023-08-18 09:39:23

by Li Nan

Subject: Re: [PATCH v4 0/4] block/badblocks: fix badblocks setting error



On 2023/6/26 16:09, [email protected] wrote:
> From: Li Nan <[email protected]>
>
> This patch series fixes some simple bugs of setting badblocks and
> optimizing struct badblocks. Coly Li has been trying to refactor badblocks
> in patch series "badblocks improvement for multiple bad block ranges"
> (https://lore.kernel.org/all/[email protected]). but the

Coly has sent the latest version of his patch series.

This patch series can now be discarded.

> workload is significant. Before that, I will fix some easily triggered
> issues and optimize some code that does not conflict with Coly's changes.
>
> Changes in v4:
> - patch 1, remove the part of reorder fields
> - patch 3/4, improve commit log.
>
> Changes in v3:
> - delete patchs with significant changes.
>
> Li Nan (4):
> block/badblocks: change some members of badblocks to bool
> block/badblocks: only set bb->changed/unacked_exist when badblocks
> changes
> block/badblocks: fix badblocks loss when badblocks combine
> block/badblocks: fix the bug of reverse order
>
> include/linux/badblocks.h | 9 +++++----
> block/badblocks.c | 38 ++++++++++++++++++++++----------------
> 2 files changed, 27 insertions(+), 20 deletions(-)
>

--
Thanks,
Nan