From: Mike Rapoport <rppt@linux.vnet.ibm.com>
To: linux-mm@kvack.org
Cc: Jonathan Corbet, Matthew Wilcox, Michal Hocko, Vlastimil Babka,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    Mike Rapoport
Subject: [RFC PATCH] docs/core-api: add memory allocation guide
Date: Wed, 15 Aug 2018 09:34:47 +0300
Message-Id: <1534314887-9202-1-git-send-email-rppt@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.7.4

As Vlastimil mentioned in [1], it would be nice to have a guide about
memory allocation. I've drafted an initial version that tries to
summarize "best practices" for allocation functions and GFP usage.

[1] https://www.spinics.net/lists/netfilter-devel/msg55542.html

From 8027c0d4b750b8dbd687234feda63305d0d5a057 Mon Sep 17 00:00:00 2001
From: Mike Rapoport <rppt@linux.vnet.ibm.com>
Date: Wed, 15 Aug 2018 09:10:06 +0300
Subject: [RFC PATCH] docs/core-api: add memory allocation guide

Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
---
 Documentation/core-api/gfp_mask-from-fs-io.rst |   2 +
 Documentation/core-api/index.rst               |   1 +
 Documentation/core-api/memory-allocation.rst   | 117 +++++++++++++++++++++++++
 Documentation/core-api/mm-api.rst              |   2 +
 4 files changed, 122 insertions(+)
 create mode 100644 Documentation/core-api/memory-allocation.rst

diff --git a/Documentation/core-api/gfp_mask-from-fs-io.rst b/Documentation/core-api/gfp_mask-from-fs-io.rst
index e0df8f4..e7c32a8 100644
--- a/Documentation/core-api/gfp_mask-from-fs-io.rst
+++ b/Documentation/core-api/gfp_mask-from-fs-io.rst
@@ -1,3 +1,5 @@
+.. _gfp_mask_from_fs_io:
+
 =================================
 GFP masks used from FS/IO context
 =================================
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index cdc2020..8afc0da 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -27,6 +27,7 @@ Core utilities
    errseq
    printk-formats
    circular-buffers
+   memory-allocation
    mm-api
    gfp_mask-from-fs-io
    timekeeping
diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
new file mode 100644
index 0000000..b1f2ad5
--- /dev/null
+++ b/Documentation/core-api/memory-allocation.rst
@@ -0,0 +1,117 @@
+=======================
+Memory Allocation Guide
+=======================
+
+Linux provides a variety of APIs for memory allocation. You can
+allocate small chunks using the `kmalloc` or `kmem_cache_alloc`
+families, large virtually contiguous areas using `vmalloc` and its
+derivatives, or you can directly request pages from the page
+allocator with `__get_free_pages`. It is also possible to use more
+specialized allocators, for instance `cma_alloc` or `zs_malloc`.
+
+Most of the memory allocation APIs use GFP flags to express how that
+memory should be allocated. The GFP acronym stands for "get free
+pages", the underlying memory allocation function.
+
+The diversity of the allocation APIs, combined with the numerous GFP
+flags, makes the question "How should I allocate memory?" not that
+easy to answer, although very likely you should use
+
+::
+
+  kzalloc(<size>, GFP_KERNEL);
+
+Of course there are cases when other allocation APIs and different
+GFP flags must be used.
+
+Get Free Page flags
+===================
+
+The GFP flags control the allocator's behavior. They tell the
+allocator what memory zones can be used, how hard it should try to
+find free memory, whether the memory can be accessed by userspace,
+etc. :ref:`Documentation/core-api/mm-api.rst <mm-api-gfp-flags>`
+provides reference documentation for the GFP flags and their
+combinations; here we briefly outline their recommended usage (a
+short example sketch follows the list):
+
+  * Most of the time ``GFP_KERNEL`` is what you need. Memory for
+    kernel data structures, DMAable memory, the inode cache, all
+    these and many other allocation types can use ``GFP_KERNEL``.
+    Note that using ``GFP_KERNEL`` implies ``__GFP_RECLAIM``, which
+    means that direct reclaim may be triggered under memory pressure;
+    the calling context must be allowed to sleep.
+  * If the allocation is performed from an atomic context, e.g. an
+    interrupt handler, use ``GFP_ATOMIC``.
+  * Untrusted allocations triggered from userspace should be subject
+    to kmem accounting and must have the ``__GFP_ACCOUNT`` bit set.
+    There is a handy ``GFP_KERNEL_ACCOUNT`` shortcut for
+    ``GFP_KERNEL`` allocations that should be accounted.
+  * Userspace allocations should use one of the ``GFP_USER``,
+    ``GFP_HIGHUSER`` or ``GFP_HIGHUSER_MOVABLE`` flags. The longer
+    the flag name, the less restrictive it is.
+
+    ``GFP_HIGHUSER_MOVABLE`` does not require that the allocated
+    memory will be directly accessible by the kernel or the hardware
+    and implies that the data may move.
+
+    ``GFP_HIGHUSER`` means that the allocated memory is not movable,
+    but it is not required to be directly accessible by the kernel or
+    the hardware. An example may be a hardware allocation that maps
+    data directly into userspace but has no addressing limitations.
+
+    ``GFP_USER`` means that the allocated memory is not movable and
+    it must be directly accessible by the kernel or the hardware. It
+    is typically used for hardware buffers that are mapped to
+    userspace (e.g. graphics) and that the hardware still must DMA
+    to.
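+
+As an illustration, here is a minimal sketch of how the first two
+flags are typically chosen (the structure and function names below
+are made up for this example)::
+
+  #include <linux/slab.h>
+  #include <linux/interrupt.h>
+
+  struct foo_dev {
+          int irq;
+  };
+
+  /* Process context: sleeping is allowed, so GFP_KERNEL is fine. */
+  static struct foo_dev *foo_create(void)
+  {
+          return kzalloc(sizeof(struct foo_dev), GFP_KERNEL);
+  }
+
+  /* Interrupt context: sleeping is not allowed, so use GFP_ATOMIC. */
+  static irqreturn_t foo_irq(int irq, void *data)
+  {
+          void *buf = kmalloc(64, GFP_ATOMIC);
+
+          if (buf) {
+                  /* ... fill buf and hand it off, or free it here ... */
+                  kfree(buf);
+          }
+          return IRQ_HANDLED;
+  }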
+
+You may notice that quite a few allocations in the existing code
+specify ``GFP_NOIO`` or ``GFP_NOFS``. Historically, they were used to
+prevent recursion deadlocks caused by direct memory reclaim calling
+back into the FS or IO paths and blocking on already held
+resources. Since 4.12 the preferred way to address this issue is to
+use the new scope APIs described in
+:ref:`Documentation/core-api/gfp_mask-from-fs-io.rst <gfp_mask_from_fs_io>`.
+
+Other legacy GFP flags are ``GFP_DMA`` and ``GFP_DMA32``. They are
+used to ensure that the allocated memory is accessible by hardware
+with limited addressing capabilities. So unless you are writing a
+driver for a device with such restrictions, avoid using these flags.
+
+Selecting memory allocator
+==========================
+
+The most straightforward way to allocate memory is to use a function
+from the `kmalloc` family. And, to be on the safe side, it's best to
+use routines that set the memory to zero, like `kzalloc`. If you need
+to allocate memory for an array, there are `kmalloc_array` and
+`kcalloc` helpers.
+
+The maximum size of a chunk that can be allocated with `kmalloc` is
+limited. The actual limit depends on the hardware and the kernel
+configuration, but it is a good practice to use `kmalloc` for objects
+smaller than the page size.
+
+For large allocations you can use `vmalloc` and `vzalloc`, or
+directly request pages from the page allocator. The memory allocated
+by `vmalloc` and related functions is not physically contiguous.
+
+If you are not sure whether the allocation size is too large for
+`kmalloc`, it is possible to use `kvmalloc` and its derivatives. It
+will try to allocate memory with `kmalloc`, and if that fails the
+allocation will be retried with `vmalloc`. There are restrictions on
+which GFP flags can be used with `kvmalloc`; please see the
+:c:func:`kvmalloc_node` reference documentation. Note that `kvmalloc`
+may return memory that is not physically contiguous.
+
+If you need to allocate many identical objects you can use the slab
+cache allocator. The cache should be set up with `kmem_cache_create`
+before it can be used. Afterwards `kmem_cache_alloc` and its
+convenience wrappers can allocate memory from that cache.
+
+When the allocated memory is no longer needed it must be freed. You
+can use `kvfree` for the memory allocated with `kmalloc`, `vmalloc`
+and `kvmalloc`. Objects allocated from a slab cache should be freed
+with `kmem_cache_free`. And don't forget to destroy the cache with
+`kmem_cache_destroy`.
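+
+As a purely illustrative sketch (the `foo` names are hypothetical),
+the two common patterns look like this: a cache for many identical
+objects, and a possibly large table allocated with `kvmalloc`::
+
+  #include <linux/slab.h>
+  #include <linux/mm.h>
+  #include <linux/errno.h>
+
+  struct foo_item {
+          unsigned long id;
+  };
+
+  static struct kmem_cache *foo_cachep;
+
+  static int foo_init(void)
+  {
+          /* One cache for many identical struct foo_item objects. */
+          foo_cachep = kmem_cache_create("foo_item",
+                                         sizeof(struct foo_item),
+                                         0, 0, NULL);
+          return foo_cachep ? 0 : -ENOMEM;
+  }
+
+  static void foo_exit(void)
+  {
+          /* All objects must have been returned with kmem_cache_free(). */
+          kmem_cache_destroy(foo_cachep);
+  }
+
+  static struct foo_item *foo_item_alloc(void)
+  {
+          return kmem_cache_alloc(foo_cachep, GFP_KERNEL);
+  }
+
+  static unsigned long *foo_table_alloc(size_t nr_entries)
+  {
+          /*
+           * May be too large for kmalloc(), so kvmalloc_array() will
+           * fall back to vmalloc(); free the table with kvfree().
+           */
+          return kvmalloc_array(nr_entries, sizeof(unsigned long),
+                                GFP_KERNEL | __GFP_ZERO);
+  }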
diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst
index 46ae353..5ce1ec1 100644
--- a/Documentation/core-api/mm-api.rst
+++ b/Documentation/core-api/mm-api.rst
@@ -14,6 +14,8 @@ User Space Memory Access
 .. kernel-doc:: mm/util.c
    :functions: get_user_pages_fast
 
+.. _mm-api-gfp-flags:
+
 Memory Allocation Controls
 ==========================
-- 
2.7.4