Date: Wed, 15 Mar 2017 15:18:14 +0100
From: Michal Hocko
To: Aaron Lu
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Dave Hansen, Tim Chen, Andrew Morton, Ying Huang
Subject: Re: [PATCH v2 0/5] mm: support parallel free of memory
Message-ID: <20170315141813.GB32626@dhcp22.suse.cz>
References: <1489568404-7817-1-git-send-email-aaron.lu@intel.com>
In-Reply-To: <1489568404-7817-1-git-send-email-aaron.lu@intel.com>

On Wed 15-03-17 16:59:59, Aaron Lu wrote:
[...]
> The proposed parallel free does this: if the process has many pages to
> be freed, accumulate them in struct mmu_gather_batch(es) one after
> another until 256K pages have been accumulated. Then take the singly
> linked list starting at tlb->local.next off struct mmu_gather *tlb and
> free those pages in a worker thread. The main thread can return and
> continue zapping other pages (after freeing the pages pointed to by
> tlb->local.pages).

I haven't looked at the implementation yet, but two concerns arise from
this description. First, how are we going to tune the number of workers?
I assume there will be some upper bound (one of the patch subjects
mentions debugfs for tuning). Second, if we offload the page freeing to
a worker, the original context can consume many more CPU cycles than it
was configured to use via the cpu controller. How are we going to handle
that? Or is this considered acceptable?
-- 
Michal Hocko
SUSE Labs
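[Editor's note: the detach-and-offload scheme Aaron describes above can be sketched in user space as follows. This is a minimal illustration only, not the kernel's actual mmu_gather code: `struct batch`/`struct gather` stand in for mmu_gather_batch/mmu_gather, malloc/free stand in for page allocation, the worker is a pthread rather than a kernel worker, and the detach threshold is tiny compared to the real 256K-page cutoff.]

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

#define PAGES_PER_BATCH 4
#define DETACH_THRESHOLD 8   /* illustrative stand-in for the 256K-page cutoff */

static atomic_int pages_freed;   /* instrumentation for this sketch only */

struct batch {
    struct batch *next;          /* singly linked, like tlb->local.next */
    void *pages[PAGES_PER_BATCH];
    int nr;
};

struct gather {
    struct batch local;          /* first batch stays with the main thread */
    struct batch *cur;           /* batch currently being filled */
    int total;                   /* pages accumulated since the last detach */
    pthread_t worker;
    int worker_live;
};

static void gather_init(struct gather *g)
{
    g->local.next = NULL;
    g->local.nr = 0;
    g->cur = &g->local;
    g->total = 0;
    g->worker_live = 0;
}

/* Worker thread: free every page on the detached list, then the batches. */
static void *free_worker(void *arg)
{
    struct batch *b = arg;
    while (b) {
        struct batch *next = b->next;
        for (int i = 0; i < b->nr; i++) {
            free(b->pages[i]);
            atomic_fetch_add(&pages_freed, 1);
        }
        free(b);
        b = next;
    }
    return NULL;
}

/*
 * Detach everything after the local batch and hand it to a worker; the
 * main thread frees local's own pages itself and can keep zapping.
 * For simplicity only one worker is ever outstanding here.
 */
static void gather_flush(struct gather *g)
{
    struct batch *detached = g->local.next;
    g->local.next = NULL;

    if (detached) {
        if (g->worker_live)
            pthread_join(g->worker, NULL);
        pthread_create(&g->worker, NULL, free_worker, detached);
        g->worker_live = 1;
    }
    for (int i = 0; i < g->local.nr; i++) {
        free(g->local.pages[i]);
        atomic_fetch_add(&pages_freed, 1);
    }
    g->local.nr = 0;
    g->cur = &g->local;
    g->total = 0;
}

/* Accumulate one page; detach and offload once the threshold is hit. */
static void gather_page(struct gather *g, void *page)
{
    if (g->cur->nr == PAGES_PER_BATCH) {
        struct batch *b = calloc(1, sizeof(*b));
        g->cur->next = b;
        g->cur = b;
    }
    g->cur->pages[g->cur->nr++] = page;
    if (++g->total >= DETACH_THRESHOLD)
        gather_flush(g);
}

static void gather_finish(struct gather *g)
{
    gather_flush(g);
    if (g->worker_live)
        pthread_join(g->worker, NULL);
}
```

Note how the sketch makes the second concern above concrete: the free_worker thread, not the thread that called gather_page, burns the CPU time for the bulk of the freeing, so any per-context CPU accounting on the caller undercounts the real cost.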