Date: Fri, 8 May 2020 12:01:22 -0400
From: Johannes Weiner
To: Joonsoo Kim
Shutemov" , Roman Gushchin , linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com Subject: Re: [PATCH 05/18] mm: memcontrol: convert page cache to a new mem_cgroup_charge() API Message-ID: <20200508160122.GB181181@cmpxchg.org> References: <20200420221126.341272-1-hannes@cmpxchg.org> <20200420221126.341272-6-hannes@cmpxchg.org> <20200422064041.GE6780@js1304-desktop> <20200422120946.GA358439@cmpxchg.org> <20200423052450.GA12538@js1304-desktop> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20200423052450.GA12538@js1304-desktop> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Apr 23, 2020 at 02:25:06PM +0900, Joonsoo Kim wrote: > On Wed, Apr 22, 2020 at 08:09:46AM -0400, Johannes Weiner wrote: > > On Wed, Apr 22, 2020 at 03:40:41PM +0900, Joonsoo Kim wrote: > > > On Mon, Apr 20, 2020 at 06:11:13PM -0400, Johannes Weiner wrote: > > > > @@ -1664,29 +1678,22 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, > > > > goto failed; > > > > } > > > > > > > > - error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg); > > > > - if (!error) { > > > > - error = shmem_add_to_page_cache(page, mapping, index, > > > > - swp_to_radix_entry(swap), gfp); > > > > - /* > > > > - * We already confirmed swap under page lock, and make > > > > - * no memory allocation here, so usually no possibility > > > > - * of error; but free_swap_and_cache() only trylocks a > > > > - * page, so it is just possible that the entry has been > > > > - * truncated or holepunched since swap was confirmed. > > > > - * shmem_undo_range() will have done some of the > > > > - * unaccounting, now delete_from_swap_cache() will do > > > > - * the rest. > > > > - */ > > > > - if (error) { > > > > - mem_cgroup_cancel_charge(page, memcg); > > > > - delete_from_swap_cache(page); > > > > - } > > > > - } > > > > - if (error) > > > > + error = shmem_add_to_page_cache(page, mapping, index, > > > > + swp_to_radix_entry(swap), gfp, > > > > + charge_mm); > > > > + /* > > > > + * We already confirmed swap under page lock, and make no > > > > + * memory allocation here, so usually no possibility of error; > > > > + * but free_swap_and_cache() only trylocks a page, so it is > > > > + * just possible that the entry has been truncated or > > > > + * holepunched since swap was confirmed. shmem_undo_range() > > > > + * will have done some of the unaccounting, now > > > > + * delete_from_swap_cache() will do the rest. > > > > + */ > > > > + if (error) { > > > > + delete_from_swap_cache(page); > > > > goto failed; > > > > > > -EEXIST (from swap cache) and -ENOMEM (from memcg) should be handled > > > differently. delete_from_swap_cache() is for -EEXIST case. > > > > Good catch, I accidentally changed things here. > > > > I was just going to change it back, but now I'm trying to understand > > how it actually works. > > > > Who is removing the page from swap cache if shmem_undo_range() races > > but we fail to charge the page? > > > > Here is how this race is supposed to be handled: The page is in the > > swapcache, we have it locked and confirmed that the entry in i_pages > > is indeed a swap entry. We charge the page, then we try to replace the > > swap entry in i_pages with the actual page. 
If we determine, under > > tree lock now, that shmem_undo_range has raced with us, unaccounted > > the swap space, but must have failed to get the page lock, we remove > > the page from swap cache on our side, to free up swap slot and page. > > > > But what if shmem_undo_range() raced with us, deleted the swap entry > > from i_pages while we had the page locked, but then we simply failed > > to charge? We unlock the page and return -EEXIST (shmem_confirm_swap > > at the exit). The page with its userdata is now in swapcache, but no > > corresponding swap entry in i_pages. shmem_getpage_gfp() sees the > > -EEXIST, retries, finds nothing in i_pages and allocates a new, empty > > page. > > > > Aren't we leaking the swap slot and the page? > > Yes, you're right! It seems that it's possible to leak the swap slot > and the page. Race could happen for all the places after lock_page() > and shmem_confirm_swap() are done. And, I think that it's not possible > to fix the problem in shmem_swapin_page() side since we can't know the > timing that trylock_page() is called. Maybe, solution would be, > instead of using free_swap_and_cache() in shmem_undo_range() that > calls trylock_page(), to use another function that calls lock_page(). I looked at this some more, as well as compared it to non-shmem swapping. My conclusion is - and Hugh may correct me on this - that the deletion looks mandatory but is actually an optimization. Page reclaim will ultimately pick these pages up. When non-shmem pages are swapped in by readahead (locked until IO completes) and their page tables are simultaneously unmapped, the zap_pte_range() code calls free_swap_and_cache() and the locked pages are stranded in the swap cache with no page table references. We rely on page reclaim to pick them up later on. The same appears to be true for shmem. If the references to the swap page are zapped while we're trying to swap in, we can strand the page in the swap cache. But it's not up to swapin to detect this reliably, it just frees the page more quickly than having to wait for reclaim. That being said, my patch introduces potentially undesirable behavior (although AFAICS no correctness problem): We should only delete the page from swapcache when we actually raced with undo_range - which we see from the swap entry having been purged from the page cache tree. If we delete the page from swapcache just because we failed to charge it, the next fault has to read the still-valid page again from the swap device. I'm going to include this: diff --git a/mm/shmem.c b/mm/shmem.c index e80167927dce..236642775f89 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -640,7 +640,7 @@ static int shmem_add_to_page_cache(struct page *page, xas_lock_irq(&xas); entry = xas_find_conflict(&xas); if (entry != expected) - xas_set_err(&xas, -EEXIST); + xas_set_err(&xas, expected ? -ENOENT : -EEXIST); xas_create_range(&xas); if (xas_error(&xas)) goto unlock; @@ -1683,17 +1683,18 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, error = shmem_add_to_page_cache(page, mapping, index, swp_to_radix_entry(swap), gfp, charge_mm); - /* - * We already confirmed swap under page lock, and make no - * memory allocation here, so usually no possibility of error; - * but free_swap_and_cache() only trylocks a page, so it is - * just possible that the entry has been truncated or - * holepunched since swap was confirmed. shmem_undo_range() - * will have done some of the unaccounting, now - * delete_from_swap_cache() will do the rest. 
- */ if (error) { - delete_from_swap_cache(page); + /* + * We already confirmed swap under page lock, but + * free_swap_and_cache() only trylocks a page, so it + * is just possible that the entry has been truncated + * or holepunched since swap was confirmed. + * shmem_undo_range() will have done some of the + * unaccounting, now delete_from_swap_cache() will do + * the rest. + */ + if (error == -ENOENT) + delete_from_swap_cache(page); goto failed; }
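
For illustration only, a tiny userspace model of the error convention
the patch above introduces. None of this is kernel code; the function
names and types below are hypothetical stand-ins for
shmem_add_to_page_cache()'s conflict check and shmem_swapin_page()'s
error path, just to show why -ENOENT (the confirmed swap entry
vanished, i.e. we raced with truncation) must be handled differently
from -EEXIST or -ENOMEM (the page is still valid and should stay in
the swap cache):

#include <errno.h>
#include <stdio.h>

/*
 * Stand-in for the conflict check: "expected" is the swap entry we
 * confirmed under page lock (NULL for a plain insert), "found" is
 * what is actually in the tree at insertion time.
 */
static int add_conflict_err(void *found, void *expected)
{
	if (found == expected)
		return 0;		/* no conflict, insert proceeds */
	/*
	 * expected != NULL: we confirmed a swap entry earlier but it
	 * is gone now - shmem_undo_range() purged it under us. Only
	 * this case should drop the page from the swap cache.
	 */
	return expected ? -ENOENT : -EEXIST;
}

/* Stand-in for the swapin error path after the patch. */
static void swapin_error_path(int error)
{
	if (!error)
		printf("inserted, swapin succeeded\n");
	else if (error == -ENOENT)
		printf("raced with undo_range: delete_from_swap_cache()\n");
	else
		printf("error %d: keep still-valid page in swap cache\n",
		       error);
}

int main(void)
{
	int swap_entry, other_page;

	/* Swap entry still in place: insertion wins the race. */
	swapin_error_path(add_conflict_err(&swap_entry, &swap_entry));
	/* Entry purged while we held the page lock: -ENOENT. */
	swapin_error_path(add_conflict_err(NULL, &swap_entry));
	/* Plain insert collides with an existing entry: -EEXIST. */
	swapin_error_path(add_conflict_err(&other_page, NULL));
	return 0;
}

The design point is simply that the return value now encodes which
conflict happened, so the caller no longer deletes a still-valid page
from the swap cache merely because the charge failed.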