Date: Sat, 23 Apr 2016 13:21:08 +0200
Subject: Re: [PATCH] mm/vmalloc: Keep a separate lazy-free list
From: Roman Peniaev
To: Andrew Morton
Cc: Chris Wilson, intel-gfx@lists.freedesktop.org, Joonas Lahtinen, Tvrtko Ursulin, Daniel Vetter, David Rientjes, Joonsoo Kim, Mel Gorman, Toshi Kani, Shawn Lin, linux-mm@kvack.org, linux-kernel@vger.kernel.org

On Fri, Apr 22, 2016 at 11:49 PM, Andrew Morton wrote:
> On Fri, 15 Apr 2016 12:14:31 +0100 Chris Wilson wrote:
>
>> > > purge_fragmented_blocks() manages per-cpu lists, so that looks safe
>> > > under its own rcu_read_lock.
>> > >
>> > > Yes, it looks feasible to remove the purge_lock if we can relax sync.
>> >
>> > What is still left is waiting on vmap_area_lock in the !sync mode,
>> > but that is probably not that bad.
>>
>> Ok, that's a bit beyond my comfort zone with a patch to change the free
>> list handling. I'll chicken out for the time being; atm I am more
>> concerned that i915.ko may call set_page_wb() frequently on individual
>> pages.
>
> Nick Piggin's vmap rewrite. 20x (or more) faster.
> https://lwn.net/Articles/285341/
>
> 10 years ago, never finished.
But that's exactly what we are changing now, making it 20.5x faster :)

--
Roman