From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Hugh Dickins, "Xu, Pengfei", Christoph Hellwig, Stefan Roesch, Tejun Heo, Xin Hao, Zi Yan, Yang Shi, Baolin Wang, Matthew Wilcox, Mike Kravetz
Subject: [PATCH 0/3] migrate_pages: fix deadlock in batched synchronous migration
Date: Fri, 24 Feb 2023 22:11:42 +0800
Message-Id: <20230224141145.96814-1-ying.huang@intel.com>

Two deadlock bugs were reported for the migrate_pages() batching series.
Thanks Hugh and Pengfei.  Analysis shows that if we have locked some other
folios besides the one we are migrating, it is not safe in general to wait
synchronously, for example, to wait for writeback to complete or to wait
to lock a buffer head.

So patch 1/3 fixes the deadlock in a simple way: batching support for
synchronous migration is disabled.  The change is straightforward and easy
to understand.  Patch 3/3 then re-introduces batching for synchronous
migration by optimistically trying to migrate the folios asynchronously in
a batch first, then falling back to migrating the folios that failed
synchronously, one by one.  Testing shows that this effectively restores
the TLB flushing batching performance for synchronous migration.

Best Regards,
Huang, Ying
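For readers of the archive, below is a minimal, self-contained sketch (plain
C, not actual kernel code) of the fallback strategy patch 3/3 describes:
attempt the whole batch asynchronously first, then retry each folio that
failed in synchronous mode, one at a time.  The struct and helper names
(folio, migrate_one_async, migrate_one_sync, migrate_batch) are hypothetical
stand-ins for illustration only; the real implementation lives in
mm/migrate.c and is considerably more involved.

```c
/*
 * Sketch of the "async batch first, sync one-by-one fallback" strategy.
 * All names here are hypothetical stand-ins, not the mm/migrate.c API.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct folio { int id; };

/* Hypothetical async migration: fails when the folio would have to wait. */
static bool migrate_one_async(struct folio *f)
{
	/* Pretend even-numbered folios still need writeback to finish. */
	return (f->id % 2) != 0;
}

/*
 * Hypothetical sync migration: may sleep waiting for writeback or a
 * buffer head lock.  Safe here because no other folio in the batch is
 * still locked when it runs.
 */
static bool migrate_one_sync(struct folio *f)
{
	(void)f;
	return true;
}

/* Pass 1: optimistic async batch.  Pass 2: sync fallback per folio. */
static size_t migrate_batch(struct folio *folios, size_t n)
{
	struct folio *failed[64];
	size_t nr_failed = 0, migrated = 0;

	for (size_t i = 0; i < n; i++) {
		if (migrate_one_async(&folios[i]))
			migrated++;
		else if (nr_failed < 64)
			failed[nr_failed++] = &folios[i];
	}

	for (size_t i = 0; i < nr_failed; i++) {
		if (migrate_one_sync(failed[i]))
			migrated++;
	}

	return migrated;
}

int main(void)
{
	struct folio batch[8];

	for (int i = 0; i < 8; i++)
		batch[i].id = i;

	printf("migrated %zu of 8 folios\n", migrate_batch(batch, 8));
	return 0;
}
```

The point of the two-pass structure is that waiting (for writeback, buffer
head locks, etc.) only ever happens when a single folio is being migrated,
so the deadlock that batched synchronous migration could trigger is avoided
while most of the TLB flushing batching benefit is kept.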