Date: Tue, 26 May 2020 11:45:28 -0400
From: Johannes Weiner
To: Hugh Dickins
Cc: Andrew Morton, Alex Shi, Joonsoo Kim, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH mmotm] mm/swap: fix livelock in __read_swap_cache_async()
Message-ID: <20200526154528.GA850116@cmpxchg.org>

On Thu, May 21, 2020 at 10:56:20PM -0700, Hugh Dickins wrote:
> I've only seen this livelock on one machine (repeatably, but not to
> order), and not fully analyzed it - two processes seen looping around
> getting -EEXIST from swapcache_prepare(), I guess a third (at lower
> priority? but wanting the same cpu as one of the loopers? preemption
> or cond_resched() not enough to let it back in?) set SWAP_HAS_CACHE,
> then went off into direct reclaim, scheduled away, and somehow could
> not get back to add the page to swap cache and let them all complete.
>
> Restore the page allocation in __read_swap_cache_async() to before
> the swapcache_prepare() call: "mm: memcontrol: charge swapin pages
> on instantiation" moved it outside the loop, which indeed looks much
> nicer, but exposed this weakness. We used to allocate new_page once
> and then keep it across all iterations of the loop: but I think that
> just optimizes for a rare case, and complicates the flow, so go with
> the new simpler structure, with allocate+free each time around (which
> is more considerate use of the memory too).
>
> Fix the comment on the looping case, which has long been inaccurate:
> it's not a racing get_swap_page() that's the problem here.
>
> Fix the add_to_swap_cache() and mem_cgroup_charge() error recovery:
> not swap_free(), but put_swap_page() to undo SWAP_HAS_CACHE, as was
> done before; but delete_from_swap_cache() already includes it.
>
> And one more nit: I don't think it makes any difference in practice,
> but remove the "& GFP_KERNEL" mask from the mem_cgroup_charge() call:
> add_to_swap_cache() needs that, to convert gfp_mask from user and page
> cache allocation (e.g. highmem) to radix node allocation (lowmem), but
> we don't need or usually apply that mask when charging mem_cgroup.
>
> Signed-off-by: Hugh Dickins
> ---

Acked-by: Johannes Weiner

> Mostly fixing mm-memcontrol-charge-swapin-pages-on-instantiation.patch
> but now I see that mm-memcontrol-delete-unused-lrucare-handling.patch
> made a further change here (took an arg off the mem_cgroup_charge call):
> as is, this patch is diffed to go on top of both of them, and better
> that I get it out now for Johannes to look at; but could be rediffed
> for folding into blah-instantiation.patch later.

IMO it's worth having as a separate change. Joonsoo was concerned about
the ordering but I didn't see it. Having this sequence of changes on
record would be good for later reference.
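
For reference, a condensed C sketch of the loop shape the changelog above
describes: allocate the page before swapcache_prepare(), free it and retry
on -EEXIST, and undo SWAP_HAS_CACHE with put_swap_page() on a later failure.
This is an illustration only, not the actual diff: the function name
swapin_page_sketch() is made up, and page locking, LRU handling, statistics,
readahead and the swap_readpage() step are all omitted.

/*
 * Condensed, illustrative sketch of the flow described in the changelog.
 * Not the real __read_swap_cache_async(): locking, LRU handling and the
 * actual readpage step are omitted, and signatures are abbreviated.
 */
static struct page *swapin_page_sketch(swp_entry_t entry, gfp_t gfp_mask,
				       struct vm_area_struct *vma,
				       unsigned long addr)
{
	struct page *page;
	int err;

	for (;;) {
		/* Re-check the swap cache on every iteration. */
		page = find_get_page(swap_address_space(entry),
				     swp_offset(entry));
		if (page)
			return page;

		/*
		 * Allocate the page *before* swapcache_prepare(): once
		 * SWAP_HAS_CACHE is set, racers get -EEXIST and spin until
		 * the page shows up in the swap cache, so we must not wander
		 * into reclaim while holding that bit.
		 */
		page = alloc_page_vma(gfp_mask, vma, addr);
		if (!page)
			return NULL;

		err = swapcache_prepare(entry);
		if (!err)
			break;			/* we now own SWAP_HAS_CACHE */

		put_page(page);			/* allocate+free each time around */
		if (err != -EEXIST)
			return NULL;		/* entry went away under us */

		/* Another task owns SWAP_HAS_CACHE; let it finish. */
		cond_resched();
	}

	/* add_to_swap_cache() needs lowmem-capable flags for its nodes. */
	if (add_to_swap_cache(page, entry, gfp_mask & GFP_KERNEL)) {
		put_swap_page(page, entry);	/* undo SWAP_HAS_CACHE */
		goto out_free;
	}

	/* No "& GFP_KERNEL" here; delete_from_swap_cache() drops the slot. */
	if (mem_cgroup_charge(page, NULL, gfp_mask)) {
		delete_from_swap_cache(page);
		goto out_free;
	}

	return page;

out_free:
	put_page(page);
	return NULL;
}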