2021-03-25 03:34:57

by Bhaskar Chowdhury

Subject: [PATCH V2] mm/slub.c: Trivial typo fixes

s/operatios/operations/
s/Mininum/Minimum/
s/mininum/minimum/ ...in two different places.

Signed-off-by: Bhaskar Chowdhury <[email protected]>
---
Changes from V1:
David's finding incorporated, i.e. operation->operations
mm/slub.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3021ce9bf1b3..75d103ad5d2e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3,7 +3,7 @@
* SLUB: A slab allocator that limits cache line use instead of queuing
* objects in per cpu and per node lists.
*
- * The allocator synchronizes using per slab locks or atomic operatios
+ * The allocator synchronizes using per slab locks or atomic operations
* and only uses a centralized lock to manage a pool of partial slabs.
*
* (C) 2007 SGI, Christoph Lameter
@@ -160,7 +160,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
#undef SLUB_DEBUG_CMPXCHG

/*
- * Mininum number of partial slabs. These will be left on the partial
+ * Minimum number of partial slabs. These will be left on the partial
* lists even if they are empty. kmem_cache_shrink may reclaim them.
*/
#define MIN_PARTIAL 5
@@ -832,7 +832,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
*
* A. Free pointer (if we cannot overwrite object on free)
* B. Tracking data for SLAB_STORE_USER
- * C. Padding to reach required alignment boundary or at mininum
+ * C. Padding to reach required alignment boundary or at minimum
* one word if debugging is on to be able to detect writes
* before the word boundary.
*
@@ -3421,7 +3421,7 @@ static unsigned int slub_min_objects;
*
* Higher order allocations also allow the placement of more objects in a
* slab and thereby reduce object handling overhead. If the user has
- * requested a higher mininum order then we start with that one instead of
+ * requested a higher minimum order then we start with that one instead of
* the smallest order which will fit the object.
*/
static inline unsigned int slab_order(unsigned int size,
--
2.30.1


2021-03-25 03:36:30

by Randy Dunlap

Subject: Re: [PATCH V2] mm/slub.c: Trivial typo fixes

On 3/24/21 9:49 PM, Bhaskar Chowdhury wrote:
> s/operatios/operations/
> s/Mininum/Minimum/
> s/mininum/minimum/ ...in two different places.
>
> Signed-off-by: Bhaskar Chowdhury <[email protected]>

Acked-by: Randy Dunlap <[email protected]>

> ---
> Changes from V1:
> David's finding incorporated, i.e. operation->operations
> mm/slub.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 3021ce9bf1b3..75d103ad5d2e 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3,7 +3,7 @@
> * SLUB: A slab allocator that limits cache line use instead of queuing
> * objects in per cpu and per node lists.
> *
> - * The allocator synchronizes using per slab locks or atomic operatios
> + * The allocator synchronizes using per slab locks or atomic operations
> * and only uses a centralized lock to manage a pool of partial slabs.
> *
> * (C) 2007 SGI, Christoph Lameter
> @@ -160,7 +160,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
> #undef SLUB_DEBUG_CMPXCHG
>
> /*
> - * Mininum number of partial slabs. These will be left on the partial
> + * Minimum number of partial slabs. These will be left on the partial
> * lists even if they are empty. kmem_cache_shrink may reclaim them.
> */
> #define MIN_PARTIAL 5
> @@ -832,7 +832,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
> *
> * A. Free pointer (if we cannot overwrite object on free)
> * B. Tracking data for SLAB_STORE_USER
> - * C. Padding to reach required alignment boundary or at mininum
> + * C. Padding to reach required alignment boundary or at minimum
> * one word if debugging is on to be able to detect writes
> * before the word boundary.
> *
> @@ -3421,7 +3421,7 @@ static unsigned int slub_min_objects;
> *
> * Higher order allocations also allow the placement of more objects in a
> * slab and thereby reduce object handling overhead. If the user has
> - * requested a higher mininum order then we start with that one instead of
> + * requested a higher minimum order then we start with that one instead of
> * the smallest order which will fit the object.
> */
> static inline unsigned int slab_order(unsigned int size,
> --


--
~Randy