Date: Wed, 27 May 2020 14:44:31 -0700 (PDT)
From: Hugh Dickins
To: Johannes Weiner
Cc: Hugh Dickins, Andrew Morton, Alex Shi, Joonsoo Kim,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH mmotm] mm/swap: fix livelock in __read_swap_cache_async()
In-Reply-To: <20200526154528.GA850116@cmpxchg.org>
References: <20200526154528.GA850116@cmpxchg.org>

On Tue, 26 May 2020, Johannes Weiner wrote:
> On Thu, May 21, 2020 at 10:56:20PM -0700, Hugh Dickins wrote:
> > I've only seen this livelock on one machine (repeatably, but not to
> > order), and not fully analyzed it - two processes seen looping around
> > getting -EEXIST from swapcache_prepare(), I guess a third (at lower
> > priority? but wanting the same cpu as one of the loopers? preemption
> > or cond_resched() not enough to let it back in?) set SWAP_HAS_CACHE,
> > then went off into direct reclaim, scheduled away, and somehow could
> > not get back to add the page to swap cache and let them all complete.
> >
> > Restore the page allocation in __read_swap_cache_async() to before
> > the swapcache_prepare() call: "mm: memcontrol: charge swapin pages
> > on instantiation" moved it outside the loop, which indeed looks much
> > nicer, but exposed this weakness. We used to allocate new_page once
> > and then keep it across all iterations of the loop: but I think that
> > just optimizes for a rare case, and complicates the flow, so go with
> > the new simpler structure, with allocate+free each time around (which
> > is more considerate use of the memory too).
> >
> > Fix the comment on the looping case, which has long been inaccurate:
> > it's not a racing get_swap_page() that's the problem here.
> >
> > Fix the add_to_swap_cache() and mem_cgroup_charge() error recovery:
> > not swap_free(), but put_swap_page() to undo SWAP_HAS_CACHE, as was
> > done before; but delete_from_swap_cache() already includes it.
> >
> > And one more nit: I don't think it makes any difference in practice,
> > but remove the "& GFP_KERNEL" mask from the mem_cgroup_charge() call:
> > add_to_swap_cache() needs that, to convert gfp_mask from user and page
> > cache allocation (e.g. highmem) to radix node allocation (lowmem), but
> > we don't need or usually apply that mask when charging mem_cgroup.
> >
> > Signed-off-by: Hugh Dickins
> > ---
>
> Acked-by: Johannes Weiner
>
> > Mostly fixing mm-memcontrol-charge-swapin-pages-on-instantiation.patch
> > but now I see that mm-memcontrol-delete-unused-lrucare-handling.patch
> > made a further change here (took an arg off the mem_cgroup_charge call):
> > as is, this patch is diffed to go on top of both of them, and better
> > that I get it out now for Johannes to look at; but could be rediffed for
> > folding into blah-instantiation.patch later.
>
> IMO it's worth having as a separate change. Joonsoo was concerned
> about the ordering but I didn't see it. Having this sequence of
> changes on record would be good for later reference.

Yes, there would be some value in that: whichever way Andrew prefers.
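
[Aside: a rough sketch of the loop shape described above - just an
illustration of the ordering, not the actual diff. It reuses the function
names from the description (alloc_page_vma(), swapcache_prepare(),
add_to_swap_cache(), mem_cgroup_charge(), put_swap_page()); the swap-cache
lookup, readahead checks, LRU handling and the new_page_allocated
out-argument are all elided.]

/*
 * Sketch only: the ordering of __read_swap_cache_async() after this fix.
 */
static struct page *read_swap_cache_async_sketch(swp_entry_t entry,
		gfp_t gfp_mask, struct vm_area_struct *vma, unsigned long addr)
{
	struct page *page;
	int err;

	for (;;) {
		/* (elided: return the page if it is already in swap cache) */

		/*
		 * Allocate the page before marking the swap_map entry
		 * SWAP_HAS_CACHE, so a racer getting -EEXIST below never
		 * waits on someone who is still off in the allocator.
		 */
		page = alloc_page_vma(gfp_mask, vma, addr);
		if (!page)
			return NULL;

		err = swapcache_prepare(entry);
		if (!err)
			break;

		/* Allocate+free each time around: give the page back. */
		put_page(page);
		if (err != -EEXIST)
			return NULL;

		/*
		 * Another __read_swap_cache_async() set SWAP_HAS_CACHE but
		 * has not yet added its page to swap cache: loop to retry.
		 */
		cond_resched();
	}

	__SetPageLocked(page);
	__SetPageSwapBacked(page);

	/* add_to_swap_cache() wants & GFP_KERNEL for its node allocation */
	if (add_to_swap_cache(page, entry, gfp_mask & GFP_KERNEL)) {
		put_swap_page(page, entry);	/* undo SWAP_HAS_CACHE */
		goto fail_unlock;
	}

	/* mem_cgroup_charge() takes the unmasked gfp_mask */
	if (mem_cgroup_charge(page, NULL, gfp_mask)) {
		delete_from_swap_cache(page);	/* includes put_swap_page() */
		goto fail_unlock;
	}

	/* (elided: add to LRU; caller initiates the read into the page) */
	return page;

fail_unlock:
	unlock_page(page);
	put_page(page);
	return NULL;
}
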
Now that the Acks are safely in (thanks guys), I will concede that that
SWAP_HAS_CACHE occasional busywait loop is not ideal - but with this
patch, no worse than it was before.

Later on I hope to come back and do something better there: it's not
immediately clear why swapcache_prepare() is important, and it would be
nicer if add_to_swap_cache() were the thing to fail with -EEXIST (because
then there's a page in the cache that others can lock to wait on if
required); but there are memories I need to dredge up before going that
way, and it may turn out to be a delusion.

Hugh