Date: Fri, 19 Apr 2024 02:21:13 +0800
From: kernel test robot
To: Kairui Song, linux-mm@kvack.org
Cc: oe-kbuild-all@lists.linux.dev, Andrew Morton,
	Linux Memory Management List, "Huang, Ying", Matthew Wilcox,
	Chris Li, Barry Song, Ryan Roberts, Neil Brown, Minchan Kim,
	Hugh Dickins, David Hildenbrand, Yosry Ahmed,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Kairui Song
Subject: Re: [PATCH 8/8] mm/swap: reduce swap cache search space
Message-ID: <202404190258.wljFnvCL-lkp@intel.com>
References: <20240417160842.76665-9-ryncsn@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240417160842.76665-9-ryncsn@gmail.com>

Hi Kairui,

kernel test robot noticed the following build errors:

[auto build test ERROR on ceph-client/testing]
[also build test ERROR on ceph-client/for-linus trondmy-nfs/linux-next konis-nilfs2/upstream jaegeuk-f2fs/dev-test jaegeuk-f2fs/dev cifs/for-next linus/master v6.9-rc4]
[cannot apply to akpm-mm/mm-everything next-20240418]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Kairui-Song/NFS-remove-nfs_page_lengthg-and-usage-of-page_index/20240418-001343
base:   https://github.com/ceph/ceph-client.git testing
patch link:    https://lore.kernel.org/r/20240417160842.76665-9-ryncsn%40gmail.com
patch subject: [PATCH 8/8] mm/swap: reduce swap cache search space
config: i386-buildonly-randconfig-002-20240419 (https://download.01.org/0day-ci/archive/20240419/202404190258.wljFnvCL-lkp@intel.com/config)
compiler: gcc-9 (Ubuntu 9.5.0-4ubuntu2) 9.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240419/202404190258.wljFnvCL-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot
| Closes: https://lore.kernel.org/oe-kbuild-all/202404190258.wljFnvCL-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/huge_memory.c: In function '__split_huge_page':
>> mm/huge_memory.c:2906:12: error: implicit declaration of function 'swap_cache_index' [-Werror=implicit-function-declaration]
    2906 |   offset = swap_cache_index(folio->swap);
         |            ^~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors


vim +/swap_cache_index +2906 mm/huge_memory.c

  2888	
  2889	static void __split_huge_page(struct page *page, struct list_head *list,
  2890			pgoff_t end, unsigned int new_order)
  2891	{
  2892		struct folio *folio = page_folio(page);
  2893		struct page *head = &folio->page;
  2894		struct lruvec *lruvec;
  2895		struct address_space *swap_cache = NULL;
  2896		unsigned long offset = 0;
  2897		int i, nr_dropped = 0;
  2898		unsigned int new_nr = 1 << new_order;
  2899		int order = folio_order(folio);
  2900		unsigned int nr = 1 << order;
  2901	
  2902		/* complete memcg works before add pages to LRU */
  2903		split_page_memcg(head, order, new_order);
  2904	
  2905		if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
> 2906			offset = swap_cache_index(folio->swap);
  2907			swap_cache = swap_address_space(folio->swap);
  2908			xa_lock(&swap_cache->i_pages);
  2909		}
  2910	
  2911		/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
  2912		lruvec = folio_lruvec_lock(folio);
  2913	
  2914		ClearPageHasHWPoisoned(head);
  2915	
  2916		for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
  2917			__split_huge_page_tail(folio, i, lruvec, list, new_order);
  2918			/* Some pages can be beyond EOF: drop them from page cache */
  2919			if (head[i].index >= end) {
  2920				struct folio *tail = page_folio(head + i);
  2921	
  2922				if (shmem_mapping(folio->mapping))
  2923					nr_dropped++;
  2924				else if (folio_test_clear_dirty(tail))
  2925					folio_account_cleaned(tail,
  2926						inode_to_wb(folio->mapping->host));
  2927				__filemap_remove_folio(tail, NULL);
  2928				folio_put(tail);
  2929			} else if (!PageAnon(page)) {
  2930				__xa_store(&folio->mapping->i_pages, head[i].index,
  2931						head + i, 0);
  2932			} else if (swap_cache) {
  2933				__xa_store(&swap_cache->i_pages, offset + i,
  2934						head + i, 0);
  2935			}
  2936		}
  2937	
  2938		if (!new_order)
  2939			ClearPageCompound(head);
  2940		else {
  2941			struct folio *new_folio = (struct folio *)head;
  2942	
  2943			folio_set_order(new_folio, new_order);
  2944		}
  2945		unlock_page_lruvec(lruvec);
  2946		/* Caller disabled irqs, so they are still disabled here */
  2947	
  2948		split_page_owner(head, order, new_order);
  2949	
  2950		/* See comment in __split_huge_page_tail() */
  2951		if (folio_test_anon(folio)) {
  2952			/* Additional pin to swap cache */
  2953			if (folio_test_swapcache(folio)) {
  2954				folio_ref_add(folio, 1 + new_nr);
  2955				xa_unlock(&swap_cache->i_pages);
  2956			} else {
  2957				folio_ref_inc(folio);
  2958			}
  2959		} else {
  2960			/* Additional pin to page cache */
  2961			folio_ref_add(folio, 1 + new_nr);
  2962			xa_unlock(&folio->mapping->i_pages);
  2963		}
  2964		local_irq_enable();
  2965	
  2966		if (nr_dropped)
  2967			shmem_uncharge(folio->mapping->host, nr_dropped);
  2968		remap_page(folio, nr);
  2969	
  2970		if (folio_test_swapcache(folio))
  2971			split_swap_cluster(folio->swap);
  2972	
  2973		/*
  2974		 * set page to its compound_head when split to non order-0 pages, so
  2975		 * we can skip unlocking it below, since PG_locked is transferred to
  2976		 * the compound_head of the page and the caller will unlock it.
  2977		 */
  2978		if (new_order)
  2979			page = compound_head(page);
  2980	
  2981		for (i = 0; i < nr; i += new_nr) {
  2982			struct page *subpage = head + i;
  2983			struct folio *new_folio = page_folio(subpage);
  2984			if (subpage == page)
  2985				continue;
  2986			folio_unlock(new_folio);
  2987	
  2988			/*
  2989			 * Subpages may be freed if there wasn't any mapping
  2990			 * like if add_to_swap() is running on a lru page that
  2991			 * had its mapping zapped. And freeing these pages
  2992			 * requires taking the lru_lock so we do the put_page
  2993			 * of the tail pages after the split is complete.
  2994			 */
  2995			free_page_and_swap_cache(subpage);
  2996		}
  2997	}
  2998	

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki