From: Wei Yang
To: akpm@linux-foundation.org, tj@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Wei Yang
Subject: [PATCH] mm/sparse: refine usemap_size() a little
Date: Fri, 10 Mar 2017 12:37:13 +0800
Message-Id: <20170310043713.96871-1-richard.weiyang@gmail.com>

The current implementation calculates usemap_size() in two steps:

  * calculate the number of bytes needed to cover the bits
  * calculate the number of "unsigned long" needed to cover those bytes

It is clearer to:

  * calculate the number of "unsigned long" needed to cover the bits
  * multiply that by sizeof(unsigned long)

This patch refines usemap_size() a little to make it easier to understand.

Signed-off-by: Wei Yang
---
 mm/sparse.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index a0792526adfa..faa36ef9f9bd 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -249,10 +249,7 @@ static int __meminit sparse_init_one_section(struct mem_section *ms,
 
 unsigned long usemap_size(void)
 {
-	unsigned long size_bytes;
-	size_bytes = roundup(SECTION_BLOCKFLAGS_BITS, 8) / 8;
-	size_bytes = roundup(size_bytes, sizeof(unsigned long));
-	return size_bytes;
+	return BITS_TO_LONGS(SECTION_BLOCKFLAGS_BITS) * sizeof(unsigned long);
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-- 
2.11.0
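
A quick way to see that the new expression is equivalent to the old two-step
rounding is a standalone userspace check. The sketch below is not part of the
patch: BITS_PER_LONG, DIV_ROUND_UP, BITS_TO_LONGS and roundup are redefined
locally as simplified stand-ins for the kernel macros, and the loop bound is
arbitrary rather than tied to any real SECTION_BLOCKFLAGS_BITS value.

    /*
     * Standalone sketch: check that
     *   BITS_TO_LONGS(bits) * sizeof(unsigned long)
     * matches the old byte-based rounding for a range of bit counts.
     */
    #include <assert.h>
    #include <stdio.h>

    #define BITS_PER_LONG		(8 * sizeof(unsigned long))
    #define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
    #define BITS_TO_LONGS(nr)	DIV_ROUND_UP(nr, BITS_PER_LONG)
    #define roundup(x, y)		(DIV_ROUND_UP(x, y) * (y))

    int main(void)
    {
    	unsigned long bits;

    	for (bits = 1; bits <= 4096; bits++) {
    		/* old: bits -> bytes, then round bytes up to longs */
    		unsigned long old_size = roundup(bits, 8) / 8;
    		old_size = roundup(old_size, sizeof(unsigned long));

    		/* new: bits -> longs, then longs -> bytes */
    		unsigned long new_size =
    			BITS_TO_LONGS(bits) * sizeof(unsigned long);

    		assert(old_size == new_size);
    	}

    	printf("old and new usemap_size() formulas agree\n");
    	return 0;
    }

Both forms round the bit count up to a whole number of unsigned longs; the new
one just does it in a single step.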