From: "Huang, Ying"
To: Matthew Wilcox
Cc: Mikhail Gavrilov, huang ying, Linux List Kernel Mailing
Subject: Re: kernel BUG at mm/swap_state.c:170!
Date: Fri, 26 Jul 2019 11:20:55 +0800

Matthew Wilcox writes:

> On Tue, Jul 23, 2019 at 01:08:42PM +0800, Huang, Ying wrote:
>> @@ -2489,6 +2491,14 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>>  	/* complete memcg works before add pages to LRU */
>>  	mem_cgroup_split_huge_fixup(head);
>>  
>> +	if (PageAnon(head) && PageSwapCache(head)) {
>> +		swp_entry_t entry = { .val = page_private(head) };
>> +
>> +		offset = swp_offset(entry);
>> +		swap_cache = swap_address_space(entry);
>> +		xa_lock(&swap_cache->i_pages);
>> +	}
>> +
>>  	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
>>  		__split_huge_page_tail(head, i, lruvec, list);
>>  		/* Some pages can be beyond i_size: drop them from page cache */
>> @@ -2501,6 +2511,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>>  		} else if (!PageAnon(page)) {
>>  			__xa_store(&head->mapping->i_pages, head[i].index,
>>  					head + i, 0);
>> +		} else if (swap_cache) {
>> +			__xa_store(&swap_cache->i_pages, offset + i,
>> +					head + i, 0);
>
> I tried something along these lines (though I think I messed up the
> offset calculation, which is why it wasn't working for me).  My other
> concern was with the case where SWAPFILE_CLUSTER is less than
> HPAGE_PMD_NR.  Don't we need to drop the lock and look up a new
> swap_cache if offset >= SWAPFILE_CLUSTER?

In swapfile.c, there is

#ifdef CONFIG_THP_SWAP
#define SWAPFILE_CLUSTER	HPAGE_PMD_NR
...
#else
#define SWAPFILE_CLUSTER	256
...
#endif

So a THP can be in the swap cache only when CONFIG_THP_SWAP is enabled,
and in that case SWAPFILE_CLUSTER always equals HPAGE_PMD_NR, so the
case you describe cannot happen.  Furthermore, there is one swap
address space for each 64M of swap space, and the swap slots backing a
THP are allocated as a single aligned cluster, so one THP always lies
entirely within a single swap address space.  In swap.h, there is

/* One swap address space for each 64M swap space */
#define SWAP_ADDRESS_SPACE_SHIFT	14
#define SWAP_ADDRESS_SPACE_PAGES	(1 << SWAP_ADDRESS_SPACE_SHIFT)

Best Regards,
Huang, Ying
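
P.S. To make the address-space arithmetic concrete, here is a minimal
userspace sketch (my own illustration, not kernel code; it assumes 4KB
base pages, so a 2MB THP has HPAGE_PMD_NR == 512 subpages):

/* swap_space_math.c - why one THP fits in one swap address space.
 * SWAP_ADDRESS_SPACE_* values are copied from the swap.h snippet
 * above; HPAGE_PMD_NR == 512 is an assumption (4KB pages, 2MB THP). */
#include <stdio.h>

#define SWAP_ADDRESS_SPACE_SHIFT	14
#define SWAP_ADDRESS_SPACE_PAGES	(1 << SWAP_ADDRESS_SPACE_SHIFT)

#define HPAGE_PMD_NR			512	/* assumed: 2MB / 4KB */
#define SWAPFILE_CLUSTER		HPAGE_PMD_NR	/* CONFIG_THP_SWAP */

int main(void)
{
	/* 16384 entries * 4KB = 64MB of swap per address space. */
	printf("entries per swap address space: %d (%d MB of swap)\n",
	       SWAP_ADDRESS_SPACE_PAGES,
	       SWAP_ADDRESS_SPACE_PAGES * 4 / 1024);

	/* THP swap slots are allocated as one cluster-aligned unit,
	 * and 16384 is an exact multiple of 512, so a THP's entries
	 * never straddle an address-space boundary. */
	printf("clusters per address space: %d (remainder %d)\n",
	       SWAP_ADDRESS_SPACE_PAGES / SWAPFILE_CLUSTER,
	       SWAP_ADDRESS_SPACE_PAGES % SWAPFILE_CLUSTER);
	return 0;
}

Since the remainder is 0, offset + i in the patch above always stays
inside the xarray that was locked, which is why taking xa_lock() once
on swap_cache->i_pages is sufficient for the whole split.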