Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1751895AbdHaPoU (ORCPT );
        Thu, 31 Aug 2017 11:44:20 -0400
Received: from mail.linuxfoundation.org ([140.211.169.12]:36322 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1751794AbdHaPoQ (ORCPT );
        Thu, 31 Aug 2017 11:44:16 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
        Danesh Petigara, Gregory Fong,
        Michal Nazarewicz, Andrew Morton,
        Linus Torvalds
Subject: [PATCH 3.18 13/24] mm: cma: fix CMA aligned offset calculation
Date: Thu, 31 Aug 2017 17:43:49 +0200
Message-Id: <20170831154105.849003215@linuxfoundation.org>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20170831154105.116844281@linuxfoundation.org>
References: <20170831154105.116844281@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2246
Lines: 64

3.18-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Danesh Petigara

commit 850fc430f47aad52092deaaeb32b99f97f0e6aca upstream.

The CMA aligned offset calculation is incorrect for non-zero order_per_bit
values.

For example, if cma->order_per_bit = 1, cma->base_pfn = 0x2f800000 and
align_order = 12, the function returns a value of 0x17c00 instead of 0x400.

This patch fixes the CMA aligned offset calculation.

The previous calculation was wrong and would return too-large values for
the offset, so that when cma_alloc looks for free pages in the bitmap with
the requested alignment > order_per_bit, it starts too far into the bitmap
and so CMA allocations will fail despite there actually being plenty of
free pages remaining.  It will also probably have the wrong alignment.
With this change, we will get the correct offset into the bitmap.

One affected user is powerpc KVM, which has kvm_cma->order_per_bit set to
KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, or 18 - 12 = 6.

[gregory.0xf0@gmail.com: changelog additions]
Signed-off-by: Danesh Petigara
Reviewed-by: Gregory Fong
Acked-by: Michal Nazarewicz
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 mm/cma.c |   12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

--- a/mm/cma.c
+++ b/mm/cma.c
@@ -64,15 +64,17 @@ static unsigned long cma_bitmap_aligned_
 	return (1UL << (align_order - cma->order_per_bit)) - 1;
 }
 
+/*
+ * Find a PFN aligned to the specified order and return an offset represented in
+ * order_per_bits.
+ */
 static unsigned long cma_bitmap_aligned_offset(struct cma *cma, int align_order)
 {
-	unsigned int alignment;
-
 	if (align_order <= cma->order_per_bit)
 		return 0;
-	alignment = 1UL << (align_order - cma->order_per_bit);
-	return ALIGN(cma->base_pfn, alignment) -
-		(cma->base_pfn >> cma->order_per_bit);
+
+	return (ALIGN(cma->base_pfn, (1UL << align_order))
+		- cma->base_pfn) >> cma->order_per_bit;
 }
 
 static unsigned long cma_bitmap_maxno(struct cma *cma)
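
For reference, the changelog's numbers are easy to reproduce in userspace.
The sketch below is illustrative only and is not part of the patch: it
reimplements the old and the new offset arithmetic side by side, and it
assumes base_pfn = 0x2f800 (a base address of 0x2f800000 with 4 KiB pages),
since those are the values that yield the quoted 0x17c00 and 0x400 results.

#include <stdio.h>

/* Same rounding the kernel's ALIGN() macro performs. */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

struct cma {
	unsigned long base_pfn;
	int order_per_bit;
};

/*
 * Old calculation: ALIGN(base_pfn, ...) is in PFN units, while
 * (base_pfn >> order_per_bit) is in bitmap-bit units, so the
 * subtraction mixes units and overstates the offset.
 */
static unsigned long aligned_offset_old(struct cma *cma, int align_order)
{
	unsigned long alignment;

	if (align_order <= cma->order_per_bit)
		return 0;
	alignment = 1UL << (align_order - cma->order_per_bit);
	return ALIGN(cma->base_pfn, alignment) -
		(cma->base_pfn >> cma->order_per_bit);
}

/* New calculation: align in PFN space first, then convert to bits. */
static unsigned long aligned_offset_new(struct cma *cma, int align_order)
{
	if (align_order <= cma->order_per_bit)
		return 0;
	return (ALIGN(cma->base_pfn, (1UL << align_order))
		- cma->base_pfn) >> cma->order_per_bit;
}

int main(void)
{
	struct cma cma = { .base_pfn = 0x2f800, .order_per_bit = 1 };

	printf("old: 0x%lx\n", aligned_offset_old(&cma, 12)); /* 0x17c00 */
	printf("new: 0x%lx\n", aligned_offset_new(&cma, 12)); /* 0x400 */
	return 0;
}

Built with any C compiler, this prints old: 0x17c00 and new: 0x400,
matching the changelog: the old formula starts 0x17c00 bits into a bitmap
that only has 0x400 usable aligned slots before that point, which is why
aligned allocations failed even with plenty of free pages.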