From: Chris J Arges
To: linux-kernel@vger.kernel.org
Cc: Chris J Arges, Jonathan Corbet, linux-doc@vger.kernel.org
Subject: [PATCH 1/3] Documentation: minor spelling and grammar fixes
Date: Fri, 13 Feb 2015 16:19:35 -0600
Message-Id: <1423865980-10417-1-git-send-email-chris.j.arges@canonical.com>
X-Mailer: git-send-email 1.9.1

Signed-off-by: Chris J Arges
---
 Documentation/vm/slub.txt | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/Documentation/vm/slub.txt b/Documentation/vm/slub.txt
index b0c6d1b..e159c04 100644
--- a/Documentation/vm/slub.txt
+++ b/Documentation/vm/slub.txt
@@ -64,7 +64,7 @@ to the dentry cache with
 
 Debugging options may require the minimum possible slab order to increase as
 a result of storing the metadata (for example, caches with PAGE_SIZE object
-sizes). This has a higher liklihood of resulting in slab allocation errors
+sizes). This has a higher likelihood of resulting in slab allocation errors
 in low memory situations or if there's high fragmentation of memory. To
 switch off debugging for such caches by default, use
 
@@ -95,7 +95,7 @@ slabinfo -a displays which slabs were merged together.
 Slab validation
 ---------------
 
-SLUB can validate all object if the kernel was booted with slub_debug. In
+SLUB can validate all objects if the kernel was booted with slub_debug. In
 order to do so you must have the slabinfo tool. Then you can do
 
 slabinfo -v
@@ -125,7 +125,7 @@ In general slub will be able to perform this number of allocations on a slab
 without consulting centralized resources (list_lock) where contention may
 occur.
 
-slub_min_order specifies a minim order of slabs. A similar effect like
+slub_min_order specifies a minimum order of slabs. A similar effect like
 slub_min_objects.
 
 slub_max_order specified the order at which slub_min_objects should no
@@ -133,7 +133,7 @@ longer be checked. This is useful to avoid SLUB trying to
 generate super large order pages to fit slub_min_objects of a slab cache with
 large object sizes into one high order page. Setting command line
 parameter debug_guardpage_minorder=N (N > 0), forces setting
-slub_max_order to 0, what cause minimum possible order of slabs
+slub_max_order to 0, which causes the minimum possible order of slab
 allocation.
 
 SLUB Debug output
@@ -234,7 +234,7 @@ Padding
 3. A stackdump
 
 The stackdump describes the location where the error was detected. The cause
-of the corruption is may be more likely found by looking at the function that
+of the corruption may be more likely found by looking at the function that
 allocated or freed the object.
 
 4. Report on how the problem was dealt with in order to ensure the continued
@@ -246,7 +246,7 @@ FIX :
 
 In the above sample SLUB found that the Redzone of an active object has
 been overwritten. Here a string of 8 characters was written into a slab that
-has the length of 8 characters. However, a 8 character string needs a
+has the length of 8 characters. However, an 8 character string needs a
 terminating 0. That zero has overwritten the first byte of the Redzone field.
 After reporting the details of the issue encountered the FIX SLUB message
 tells us that SLUB has restored the Redzone to its proper value and then
-- 
1.9.1