From: Andy Lutomirski
Date: Sat, 19 Dec 2020 18:01:39 -0800
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect
To: Nadav Amit, Dave Hansen
Cc: Andrea Arcangeli, linux-mm, Peter Xu, lkml, Pavel Emelyanov,
    Mike Kravetz, Mike Rapoport, stable, Minchan Kim, Andy Lutomirski,
    Yu Zhao, Will Deacon, Peter Zijlstra
References: <20201219043006.2206347-1-namit@vmware.com>

On Sat, Dec 19, 2020 at 1:34 PM Nadav Amit wrote:
>
> [ cc'ing some more people who have experience with similar problems ]
>
> > On Dec 19, 2020, at 11:15 AM, Andrea Arcangeli wrote:
> >
> > Hello,
> >
> > On Fri, Dec 18, 2020 at 08:30:06PM -0800, Nadav Amit wrote:
> >> Analyzing this problem indicates that there is a real bug since
> >> mmap_lock is only taken for read in mwriteprotect_range(). This might
> >
> > Never having to take the mmap_sem for writing, and in turn never
> > blocking, in order to modify the pagetables is quite an important
> > feature in uffd that justifies uffd instead of mprotect. It's not the
> > most important reason to use uffd, but it'd be nice if that guarantee
> > would remain also for the UFFDIO_WRITEPROTECT API, not only for the
> > other pgtable manipulations.
> >
> >> Consider the following scenario with 3 CPUs (cpu2 is not shown):
> >>
> >> cpu0                                cpu1
> >> ----                                ----
> >> userfaultfd_writeprotect()
> >> [ write-protecting ]
> >> mwriteprotect_range()
> >> mmap_read_lock()
> >> change_protection()
> >> change_protection_range()
> >> ...
> >> change_pte_range()
> >> [ defer TLB flushes]
> >>                                     userfaultfd_writeprotect()
> >>                                     mmap_read_lock()
> >>                                     change_protection()
> >>                                     [ write-unprotect ]
> >>                                     ...
> >>                                     [ unprotect PTE logically ]
> >>                                     ...
> >>                                     [ page-fault]
> >>                                     ...
> >>                                     wp_page_copy()
> >>                                     [ set new writable page in PTE]
> >
> > Can't we check mm_tlb_flush_pending(vma->vm_mm) if MM_CP_UFFD_WP_ALL
> > is set and do an explicit (potentially spurious) tlb flush before
> > write-unprotect?
>
> There is a concrete scenario that I actually encountered, and then
> there is a general problem.
>
> In general, the kernel code assumes that PTEs read from the page-tables
> are coherent across all the TLBs, excluding permission promotion (i.e.,
> the PTE may have higher permissions in the page-tables than those that
> are cached in the TLBs).
>
> We therefore need to: (a) protect change_protection_range() from the
> changes of others who might defer TLB flushes without taking mmap_sem
> for write (e.g., try_to_unmap_one()); and (b) protect others (e.g.,
> page-fault handlers) from concurrent changes of change_protection().
>
> We have already encountered several similar bugs, and debugging such
> issues is time consuming and their impact is substantial (memory
> corruption, security). So I think we should stick only to general
> solutions.
>
> So perhaps the approach of your proposed solution is feasible, but it
> would have to be applied all over the place: we will need to add a
> check for mm_tlb_flush_pending() and conditionally flush the TLB in
> every case in which PTEs are read and there might be an assumption
> that the access-permissions reflect what the TLBs hold. This includes
> page-fault handlers, but also the NUMA migration code in
> change_protection(), the softdirty cleanup in clear_refs_write(), and
> maybe others.

I missed the beginning of this thread, but it looks to me like
userfaultfd changes PTEs with no locking except mmap_read_lock().  It
also calls inc_tlb_flush_pending(), which is very explicitly documented
as requiring the pagetable lock.  Those docs must be wrong, because
mprotect() uses the mmap_sem write lock, which is just fine, but ISTM
some kind of mutual exclusion with proper acquire/release ordering is
indeed needed.  So the userfaultfd code seems bogus.

I think userfaultfd either needs to take a real lock (probably doesn't
matter which) or the core rules about PTEs need to be rewritten.
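
For reference, my reading of the documented deferred-flush protocol is
roughly the following. This is only a simplified sketch, not actual
kernel code: mm, ptl, ptep, vma, addr, start and end stand in for the
caller's context, and the wrprotect is just a placeholder PTE change.

        /*
         * Flusher side: advertise the pending flush while making the
         * PTE change under the page-table lock, and clear it only
         * after the TLB has actually been flushed, so that
         * mm_tlb_flush_pending() readers can rely on the ordering.
         */
        spin_lock(ptl);
        inc_tlb_flush_pending(mm);              /* docs: under the PTL */
        set_pte_at(mm, addr, ptep, pte_wrprotect(*ptep));
        spin_unlock(ptl);

        /* ...possibly much later, after batching more PTEs... */
        flush_tlb_range(vma, start, end);       /* the deferred flush */
        dec_tlb_flush_pending(mm);

The userfaultfd path does the inc/dec, but nothing on the read side is
ordered against it, which is the hole being discussed above.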
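
And the reader-side check Andrea suggested would then have to appear at
every place that trusts the permissions in a PTE it just read. Something
like the fragment below; the placement and the flush granularity are
hypothetical, and are exactly what is under discussion.

        /*
         * Hypothetical check before acting on a PTE whose permissions
         * we are about to trust (e.g. before write-unprotect, or
         * before wp_page_copy() copies from the old page): if someone
         * else still has a deferred flush in flight, do a potentially
         * spurious flush first.
         */
        if (mm_tlb_flush_pending(vma->vm_mm))
                flush_tlb_page(vma, address);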