From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Christoph Lameter, Michal Hocko, Vlastimil Babka, Johannes Weiner, Jesper Dangaard Brouer, Joonsoo Kim, Linux-MM, Linux-Kernel, Mel Gorman
Subject: [PATCH 0/2] High-order per-cpu cache v6
Date: Fri, 2 Dec 2016 11:29:49 +0000
Message-Id: <20161202112951.23346-1-mgorman@techsingularity.net>
X-Mailing-List: linux-kernel@vger.kernel.org

Changelog since v5
o Changelog clarification in patch 1
o Additional comments in patch 2

Changelog since v4
o Avoid pcp->count getting out of sync if struct page gets corrupted

Changelog since v3
o Allow high-order atomic allocations to use reserves

Changelog since v2
o Correct initialisation to avoid -Woverflow warning

The following two patches implement a per-cpu cache for high-order
allocations, primarily aimed at SLUB. The first patch is a bug fix that is
technically unrelated but was discovered during review and so is batched
together. The second patch implements the high-order per-cpu cache.

 include/linux/mmzone.h |  20 +++++++-
 mm/page_alloc.c        | 129 ++++++++++++++++++++++++++++++++-----------------
 2 files changed, 103 insertions(+), 46 deletions(-)

--
2.10.2