Subject: Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush
To: Peter Zijlstra, Nadav Amit
Cc: Will Deacon, jstancek@redhat.com, Andrew Morton, stable@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    aneesh.kumar@linux.vnet.ibm.com, npiggin@gmail.com, minchan@kernel.org,
    Mel Gorman
References: <1557264889-109594-1-git-send-email-yang.shi@linux.alibaba.com>
    <20190509083726.GA2209@brain-police>
    <20190509103813.GP2589@hirez.programming.kicks-ass.net>
    <20190509182435.GA2623@hirez.programming.kicks-ass.net>
From: Yang Shi
Message-ID: <84720bb8-bf3d-8c10-d675-0670f13b2efc@linux.alibaba.com>
Date: Thu, 9 May 2019 12:10:02 -0700
In-Reply-To: <20190509182435.GA2623@hirez.programming.kicks-ass.net>
List-ID: linux-kernel@vger.kernel.org

On 5/9/19 11:24 AM, Peter Zijlstra wrote:
> On Thu, May 09, 2019 at 05:36:29PM +0000, Nadav Amit wrote:
>>> On May 9, 2019, at 3:38 AM, Peter Zijlstra wrote:
>>>
>>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
>>> index 99740e1dd273..fe768f8d612e 100644
>>> --- a/mm/mmu_gather.c
>>> +++ b/mm/mmu_gather.c
>>> @@ -244,15 +244,20 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
>>>  		unsigned long start, unsigned long end)
>>>  {
>>>  	/*
>>> -	 * If there are parallel threads are doing PTE changes on same range
>>> -	 * under non-exclusive lock(e.g., mmap_sem read-side) but defer TLB
>>> -	 * flush by batching, a thread has stable TLB entry can fail to flush
>>> -	 * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
>>> -	 * forcefully if we detect parallel PTE batching threads.
>>> +	 * Sensible comment goes here..
>>>  	 */
>>> -	if (mm_tlb_flush_nested(tlb->mm)) {
>>> -		__tlb_reset_range(tlb);
>>> -		__tlb_adjust_range(tlb, start, end - start);
>>> +	if (mm_tlb_flush_nested(tlb->mm) && !tlb->fullmm) {
>>> +		/*
>>> +		 * Since we can't tell what we actually should have
>>> +		 * flushed, flush everything in the given range.
>>> +		 */
>>> +		tlb->start = start;
>>> +		tlb->end = end;
>>> +		tlb->freed_tables = 1;
>>> +		tlb->cleared_ptes = 1;
>>> +		tlb->cleared_pmds = 1;
>>> +		tlb->cleared_puds = 1;
>>> +		tlb->cleared_p4ds = 1;
>>>  	}
>>>
>>>  	tlb_flush_mmu(tlb);
>>
>> As a simple optimization, I think it is possible to hold multiple nesting
>> counters in the mm, similar to tlb_flush_pending, for freed_tables,
>> cleared_ptes, etc.
>>
>> The first time you set tlb->freed_tables, you also atomically increase
>> mm->tlb_flush_freed_tables. Then, in tlb_flush_mmu(), you just use
>> mm->tlb_flush_freed_tables instead of tlb->freed_tables.
>
> That sounds fraught with races and expensive; I would much prefer to not
> go there for this arguably rare case.
>
> Consider such fun cases as where CPU-0 sees and clears a PTE, CPU-1
> races and doesn't see that PTE. Therefore CPU-0 sets and counts
> cleared_ptes. Then if CPU-1 flushes while CPU-0 is still in mmu_gather,
> it will see the cleared_ptes count increased and flush that granularity,
> OTOH if CPU-1 flushes after CPU-0 completes, it will not and potentially
> miss an invalidate it should have had.
>
> This whole concurrent mmu_gather stuff is horrible.
>
> /me ponders more....
>
> So I think the fundamental race here is this:
>
>   CPU-0                         CPU-1
>
>   tlb_gather_mmu(.start=1,      tlb_gather_mmu(.start=2,
>                  .end=3);                      .end=4);
>
>   ptep_get_and_clear_full(2)
>   tlb_remove_tlb_entry(2);
>   __tlb_remove_page();
>                                 if (pte_present(2)) // nope
>
>                                 tlb_finish_mmu();
>
>                                 // continue without TLBI(2)
>                                 // whoopsie
>
>   tlb_finish_mmu();
>     tlb_flush() -> TLBI(2)

I'm not quite sure if this is the case Jan really met. But, according to
his test, once the correct tlb->freed_tables and tlb->cleared_* are set,
his test works well.

> And we can fix that by having tlb_finish_mmu() sync up. Never let a
> concurrent tlb_finish_mmu() complete until all concurrent mmu_gathers
> have completed.

Not sure if this will scale well.

> This should not be too hard to make happen.