From: "Huang, Ying"
To: Nadav Amit
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Nadav Amit,
    Mel Gorman, Andrea Arcangeli, Andrew Morton, Andy Lutomirski,
    Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon,
    Yu Zhao, x86@kernel.org
Subject: Re: [RFC 20/20] mm/rmap: avoid potential races
References: <20210131001132.3368247-1-namit@vmware.com>
    <20210131001132.3368247-21-namit@vmware.com>
Date: Mon, 23 Aug 2021 16:05:24 +0800
In-Reply-To: <20210131001132.3368247-21-namit@vmware.com> (Nadav Amit's
    message of "Sat, 30 Jan 2021 16:11:32 -0800")
Message-ID: <87zgt8y4aj.fsf@yhuang6-desk2.ccr.corp.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi, Nadav,

Nadav Amit writes:

> From: Nadav Amit
>
> flush_tlb_batched_pending() appears to have a theoretical race:
> tlb_flush_batched is being cleared after the TLB flush, and if in
> between another core calls set_tlb_ubc_flush_pending() and sets the
> pending TLB flush indication, this indication might be lost. Holding
> the page-table lock when SPLIT_LOCK is set cannot eliminate this race.

Recently, while reading the corresponding code, I found exactly the
same race. Do you still think the race is possible, at least in
theory? If so, why hasn't your fix been merged?

> The current batched TLB invalidation scheme therefore does not seem
> viable or easily repairable.

I have an idea to fix this without too much code. If necessary, I will
send it out.

Best Regards,
Huang, Ying