From: "Huang, Ying"
To: Matthew Wilcox
Shutemov" , Andrea Arcangeli , Michal Hocko , Johannes Weiner , "Shaohua Li" , Hugh Dickins , Minchan Kim , Rik van Riel , Dave Hansen , Naoya Horiguchi , Zi Yan , Daniel Jordan Subject: Re: [PATCH -mm -v4 08/21] mm, THP, swap: Support to read a huge swap cluster for swapin a THP Date: Mon, 02 Jul 2018 14:02:47 +0800 References: <20180622035151.6676-1-ying.huang@intel.com> <20180622035151.6676-9-ying.huang@intel.com> <20180629062126.GJ7646@bombadil.infradead.org> Message-ID: <87y3esvqab.fsf@yhuang-dev.intel.com> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux) MIME-Version: 1.0 Content-Type: text/plain; charset=ascii Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Matthew Wilcox writes: > On Fri, Jun 22, 2018 at 11:51:38AM +0800, Huang, Ying wrote: >> +++ b/mm/swap_state.c >> @@ -426,33 +447,37 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, >> /* >> * call radix_tree_preload() while we can wait. >> */ >> - err = radix_tree_maybe_preload(gfp_mask & GFP_KERNEL); >> + err = radix_tree_maybe_preload_order(gfp_mask & GFP_KERNEL, >> + compound_order(new_page)); >> if (err) >> break; > > There's no more preloading in the XArray world, so this can just be dropped. Sure. >> /* >> * Swap entry may have been freed since our caller observed it. >> */ >> + err = swapcache_prepare(hentry, huge_cluster); >> + if (err) { >> radix_tree_preload_end(); >> - break; >> + if (err == -EEXIST) { >> + /* >> + * We might race against get_swap_page() and >> + * stumble across a SWAP_HAS_CACHE swap_map >> + * entry whose page has not been brought into >> + * the swapcache yet. >> + */ >> + cond_resched(); >> + continue; >> + } else if (err == -ENOTDIR) { >> + /* huge swap cluster is split under us */ >> + continue; >> + } else /* swp entry is obsolete ? */ >> + break; > > I'm not entirely happy about -ENOTDIR being overloaded to mean this. > Maybe we can return a new enum rather than an errno? Can we use -ESTALE instead? The "huge swap cluster is split under us" means the swap entry is kind of "staled". > Also, I'm not sure that a true/false parameter is the right approach for > "is this a huge page". I think we'll have usecases for swap entries which > are both larger and smaller than PMD_SIZE. OK. I can change the interface to number of swap entries style to make it more flexible. > I was hoping to encode the swap entry size into the entry; we only need one > extra bit to do that (no matter the size of the entry). I detailed the > encoding scheme here: > > https://plus.google.com/117536210417097546339/posts/hvctn17WUZu > > (let me know if that doesn't work for you; I'm not very experienced with > this G+ thing) The encoding method looks good. To use it, we need to - Encode swap entry and size into swap_entry_size - Call function with swap_entry_size - Decode swap_entry_size to swap entry and size It appears that there is no real benefit? Best Regards, Huang, Ying