Date: Fri, 31 Mar 2017 10:49:48 -0400
From: Johannes Weiner
To: "Huang, Ying"
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH -mm -v7 9/9] mm, THP, swap: Delay splitting THP during swap out
Message-ID: <20170331144948.GA6408@cmpxchg.org>
In-Reply-To: <871stftn72.fsf@yhuang-dev.intel.com>
References: <20170328053209.25876-1-ying.huang@intel.com> <20170328053209.25876-10-ying.huang@intel.com> <20170329171654.GD31821@cmpxchg.org> <871stftn72.fsf@yhuang-dev.intel.com>

On Thu, Mar 30, 2017 at 12:15:13PM +0800, Huang, Ying wrote:
> Johannes Weiner writes:
> > On Tue, Mar 28, 2017 at 01:32:09PM +0800, Huang, Ying wrote:
> >> @@ -198,6 +240,18 @@ int add_to_swap(struct page *page, struct list_head *list)
> >>  	VM_BUG_ON_PAGE(!PageLocked(page), page);
> >>  	VM_BUG_ON_PAGE(!PageUptodate(page), page);
> >>
> >> +	if (unlikely(PageTransHuge(page))) {
> >> +		err = add_to_swap_trans_huge(page, list);
> >> +		switch (err) {
> >> +		case 1:
> >> +			return 1;
> >> +		case 0:
> >> +			/* fallback to split firstly if return 0 */
> >> +			break;
> >> +		default:
> >> +			return 0;
> >> +		}
> >> +	}
> >>  	entry = get_swap_page();
> >>  	if (!entry.val)
> >>  		return 0;
> >
> > add_to_swap_trans_huge() is too close a copy of add_to_swap(), which
> > makes the code error prone for future modifications to the swap slot
> > allocation protocol.
> >
> > This should read:
> >
> > retry:
> > 	entry = get_swap_page(page);
> > 	if (!entry.val) {
> > 		if (PageTransHuge(page)) {
> > 			split_huge_page_to_list(page, list);
> > 			goto retry;
> > 		}
> > 		return 0;
> > 	}
>
> If the swap space is used up, that is, if get_swap_page() cannot
> allocate even one swap entry for a normal page, we will split the THP
> unnecessarily with this change, whereas the original code just skips
> the THP. There may be a performance regression here. A similar
> problem exists for mem_cgroup_try_charge_swap() too: if the mem
> cgroup exceeds its swap limit, the THP will be split unnecessarily
> with this change as well.

If we skip the page, we're swapping out another page that is hotter
than this one. Giving THP preservation priority over LRU order is an
issue best kept for a separate patch set; this one is supposed to be a
mechanical implementation of THP swapping. Let's nail down the basics
first.

Such a decision would need proof that splitting THPs on full swap
devices is a concern for real applications. I would assume that we're
pretty close to OOM anyway; it's much more likely that a single slot
frees up than a full cluster, at which point we'll be splitting THPs
anyway; etc. I have my doubts that this would be measurable.

But even if so, I don't think we'd have to duplicate the main code
flow to handle this corner case. You can extend get_swap_page() to
return an error code that tells add_to_swap() whether to split and
retry, or to fail and move on. That way the code stays future proof.