From: Mike Rapoport
To: Jonathan Corbet
Cc: Andrew Morton, Alexander Viro, Matthew Wilcox, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, Mike Rapoport
Subject: [PATCH 1/7] docs/vm: hugetlbpage: minor improvements
Date: Wed, 18 Apr 2018 11:07:44 +0300
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1524038870-413-1-git-send-email-rppt@linux.vnet.ibm.com>
References: <1524038870-413-1-git-send-email-rppt@linux.vnet.ibm.com>
Message-Id: <1524038870-413-2-git-send-email-rppt@linux.vnet.ibm.com>

* fixed mistypes
* added internal cross-references for sections

Signed-off-by: Mike Rapoport
---
 Documentation/vm/hugetlbpage.rst | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/Documentation/vm/hugetlbpage.rst b/Documentation/vm/hugetlbpage.rst
index a5da14b..99ad5d9 100644
--- a/Documentation/vm/hugetlbpage.rst
+++ b/Documentation/vm/hugetlbpage.rst
@@ -87,7 +87,7 @@ memory pressure.
 Once a number of huge pages have been pre-allocated to the kernel huge page
 pool, a user with appropriate privilege can use either the mmap system call
 or shared memory system calls to use the huge pages.  See the discussion of
-Using Huge Pages, below.
+:ref:`Using Huge Pages <using_huge_pages>`, below.
 
 The administrator can allocate persistent huge pages on the kernel boot
 command line by specifying the "hugepages=N" parameter, where 'N' = the
@@ -115,8 +115,9 @@ over all the set of allowed nodes specified by the NUMA memory policy of the
 task that modifies ``nr_hugepages``. The default for the allowed nodes--when
 the task has default memory policy--is all on-line nodes with memory.  Allowed
 nodes with insufficient available, contiguous memory for a huge page will be
-silently skipped when allocating persistent huge pages.  See the discussion
-below of the interaction of task memory policy, cpusets and per node attributes
+silently skipped when allocating persistent huge pages.  See the
+:ref:`discussion below <mem_policy_and_hp_alloc>`
+of the interaction of task memory policy, cpusets and per node attributes
 with the allocation and freeing of persistent huge pages.
 
 The success or failure of huge page allocation depends on the amount of
@@ -158,7 +159,7 @@ normal page pool.
 Caveat: Shrinking the persistent huge page pool via ``nr_hugepages`` such that
 it becomes less than the number of huge pages in use will convert the balance
 of the in-use huge pages to surplus huge pages.  This will occur even if
-the number of surplus pages it would exceed the overcommit value.  As long as
+the number of surplus pages would exceed the overcommit value.  As long as
 this condition holds--that is, until ``nr_hugepages+nr_overcommit_hugepages``
 is increased sufficiently, or the surplus huge pages go out of use and are
 freed--no more surplus huge pages will be allowed to be allocated.
@@ -187,6 +188,7 @@ Inside each of these directories, the same set of files will exist::
 
 which function as described above for the default huge page-sized case.
 
+.. _mem_policy_and_hp_alloc:
 
 Interaction of Task Memory Policy with Huge Page Allocation/Freeing
 ===================================================================
@@ -282,6 +284,7 @@ Note that the number of overcommit and reserve pages remain global
 quantities, as we don't know until fault time, when the faulting task's
 mempolicy is applied, from which node the huge page allocation will be
 attempted.
 
+.. _using_huge_pages:
 
 Using Huge Pages
 ================
@@ -295,7 +298,7 @@ type hugetlbfs::
 	min_size=<value>,nr_inodes=<value> none /mnt/huge
 
 This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
-``/mnt/huge``.  Any files created on ``/mnt/huge`` uses huge pages.
+``/mnt/huge``.  Any file created on ``/mnt/huge`` uses huge pages.
 
 The ``uid`` and ``gid`` options sets the owner and group of the root of the
 file system.  By default the ``uid`` and ``gid`` of the current process
@@ -345,8 +348,8 @@ applications are going to use only shmat/shmget system calls or mmap with
 MAP_HUGETLB.  For an example of how to use mmap with MAP_HUGETLB see
 :ref:`map_hugetlb <map_hugetlb>` below.
 
-Users who wish to use hugetlb memory via shared memory segment should be a
-member of a supplementary group and system admin needs to configure that gid
+Users who wish to use hugetlb memory via shared memory segment should be
+members of a supplementary group and system admin needs to configure that gid
 into ``/proc/sys/vm/hugetlb_shm_group``.
 
 It is possible for same or different applications to use any combination of
 mmaps and shm* calls, though the mount of filesystem will be required for
 using mmap calls without MAP_HUGETLB.
-- 
2.7.4