Date: Wed, 23 Dec 2020 11:24:16 -0500
From: Peter Xu
To: Yu Zhao
Cc: Linus Torvalds, Andrea Arcangeli, Andy Lutomirski, Nadav Amit,
    linux-mm, lkml, Pavel Emelyanov, Mike Kravetz, Mike Rapoport,
    stable, Minchan Kim, Will Deacon, Peter Zijlstra
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect
Message-ID: <20201223162416.GD6404@xz-x1>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Dec 23, 2020 at 03:06:30AM -0700, Yu Zhao wrote:
> On Wed, Dec 23, 2020 at 01:44:42AM -0800, Linus Torvalds wrote:
> > On Tue, Dec 22, 2020 at 4:01 PM Linus Torvalds wrote:
> > >
> > > The more I look at the mprotect code, the less I like it.
> > > We seem to be much better about the TLB flushes in other places
> > > (looking at mremap, for example). The mprotect code seems to be
> > > very laissez-faire about the TLB flushing.
> >
> > No, this doesn't help.
> >
> > > Does adding a TLB flush to before that
> > >
> > >         pte_unmap_unlock(pte - 1, ptl);
> > >
> > > fix things for you?
> >
> > It really doesn't fix it. Exactly because - as pointed out earlier -
> > the actual page *copy* happens outside the pte lock.
>
> I appreciate all the pointers. It seems to me it does.
>
> > So what can happen is:
> >
> >  - CPU 1 holds the page table lock, while doing the write protect. It
> >    has cleared the writable bit, but hasn't flushed the TLBs yet
> >
> >  - CPU 2 did *not* have the TLB entry, sees the new read-only state,
> >    takes a COW page fault, and reads the PTE from memory (into
> >    vmf->orig_pte)
>
> In handle_pte_fault(), we lock the page table and check pte_write(),
> so we either see a RW pte before CPU 1 runs or a RO one with no stale
> TLB entries after CPU 1 runs, assuming CPU 1 flushes the TLB while
> holding the same page table lock (not mmap_lock).

I don't think this contradicts Linus's example - there, CPU 2 does not
have the TLB entry cached so it sees RO, while CPU 3 does have it cached
and can therefore still modify the page. So IMHO there's no problem here.

But I do think in step 2 here we overlooked the _PAGE_UFFD_WP bit. Note
that if it's uffd-wp wr-protection it's always applied along with
removing the write bit in change_pte_range():

        if (uffd_wp) {
                ptent = pte_wrprotect(ptent);
                ptent = pte_mkuffd_wp(ptent);
        }

So instead of being handled as a COW page, do_wp_page() will always trap
to userfaultfd-wp first, hence no chance to race with COW. COW could
only trigger after another uffd-wp-resolve ioctl removes the
_PAGE_UFFD_WP bit, but with Andrea's theory the unprotect will only
happen after all wr-protects complete, which guarantees that by the time
we reach the COW path the TLB has been flushed anyway.
Then no one should be modifying the page anymore even without the
pgtable lock in the COW path.

So IIUC what Linus proposed on "flushing tlb within pgtable lock" seems
to work, but it may cause more TLB flushes than Andrea's proposal,
especially when the protection range is large (it's common to have a
huge protection range for e.g. VM live snapshotting, where we'll
protect all guest rw ram).

My understanding of the current issue is that we can either take
Andrea's proposal (although I think the group rwsem may not be much
better than a per-mm rwsem, which I don't know... at least no worse
than that?), or we can go the other way (as Andrea also mentioned) so
that when wr-protecting:

  - for a <=2M range (pmd or less), we take the read rwsem, but flush
    the TLB within the pgtable lock

  - for a >2M range, we take the write rwsem directly but flush the TLB
    once

Thanks,

> >  - CPU 2 correctly decides it needs to be a COW, and copies the page
> >    contents
> >
> >  - CPU 3 *does* have a stale TLB (because TLB invalidation hasn't
> >    happened yet), and writes to that page in user space
> >
> >  - CPU 1 now does the TLB invalidate, and releases the page table
> >    lock
> >
> >  - CPU 2 gets the page table lock, sees that its PTE matches
> >    vmf->orig_pte, and switches it to be that writable copy of the
> >    page.
> >
> > where the copy happened before CPU 3 had stopped writing to the page.
> >
> > So the pte lock doesn't actually matter, unless we actually do the
> > page copy inside of it (on CPU 2), in addition to doing the TLB flush
> > inside of it (on CPU 1).
> >
> > mprotect() is actually safe for two independent reasons: (a) it takes
> > the mmap_sem for writing (so mprotect can't race with the COW logic
> > at all), and (b) it changes the vma permissions, so turning something
> > read-only actually disables COW anyway, since it won't be a COW, it
> > will be a SIGSEGV.
> >
> > So mprotect() is irrelevant, other than the fact that it shares some
> > code with that "turn it read-only in the page tables".
> >
> > fork() is a much closer operation, in that it actually triggers that
> > COW behavior, but fork() takes the mmap_sem for writing, so it avoids
> > this too.
> >
> > So it's really just userfaultfd and that kind of ilk that is relevant
> > here, I think. But that "you need to flush the TLB before releasing
> > the page table lock" was not true (well, it's true in other
> > circumstances - just not *here*), and is not part of the solution.
> >
> > Or rather, if it's part of the solution here, it would have to be
> > matched with that "page copy needs to be done under the page table
> > lock too".
> >
> >              Linus
> >

-- 
Peter Xu