Date: Thu, 20 Jan 2022 15:45:51 +0000
From: Matthew Wilcox
To: David Hildenbrand
Cc:
"zhangliang (AG)", Andrew Morton, Linux-MM, Linux Kernel Mailing List, wangzhigang17@huawei.com, Linus Torvalds
Subject: Re: [PATCH] mm: reuse the unshared swapcache page in do_wp_page
Message-ID:
References: <9cd7eee2-91fd-ddb8-e47d-e8585e5baa05@redhat.com> <747ff31c-6c9e-df6c-f14d-c43aa1c77b4a@redhat.com> <759f9bc8-0b10-7f0f-28a6-f292bed9053f@redhat.com>
In-Reply-To: <759f9bc8-0b10-7f0f-28a6-f292bed9053f@redhat.com>
List-ID: linux-kernel@vger.kernel.org

On Thu, Jan 20, 2022 at 04:39:55PM +0100, David Hildenbrand wrote:
> On 20.01.22 16:36, Matthew Wilcox wrote:
> > On Thu, Jan 20, 2022 at 04:26:22PM +0100, David Hildenbrand wrote:
> >> On 20.01.22 15:39, Matthew Wilcox wrote:
> >>> On Thu, Jan 20, 2022 at 03:15:37PM +0100, David Hildenbrand wrote:
> >>>> On 17.01.22 14:31, zhangliang (AG) wrote:
> >>>>> Sure, I will do that :)
> >>>>
> >>>> I'm polishing up / testing the patches and might send something out for discussion shortly.
> >>>> Just a note that on my branch was a version with a wrong condition that should have been fixed now.
> >>>>
> >>>> I am still thinking about PTE mapped THP. For these, we'll always
> >>>> have page_count() > 1, essentially corresponding to the number of still-mapped sub-pages.
> >>>>
> >>>> So if we end up with a R/O mapped part of a THP, we'll always have to COW and cannot reuse ever,
> >>>> although it's really just a single process mapping the THP via PTEs.
> >>>>
> >>>> One approach would be to scan the currently locked page table for entries mapping
> >>>> this same page. If page_count() corresponds to that value, we know that only we are
> >>>> mapping the THP and there are no additional references. That would be a special case
> >>>> if we find an anon THP in do_wp_page(). Hm.
> >>>
> >>> You're starting to optimise for some pretty weird cases at that point.
> >> So your claim is that read-only, PTE mapped pages are weird? How do you
> >> come to that conclusion?
> >
> > Because normally anon THP pages are PMD mapped. That's rather
> > the point of anon THPs.
>
> For example unless we are talking about *drumroll* COW handling.
>
> >> If we adjust the THP reuse logic to split on additional references
> >> (page_count() == 1) -- similarly as suggested by Linus to fix the CVE --
> >> we're going to end up with exactly that more frequently.
> >
> > I don't understand. Are we talking past each other? As I understand
> > the situation we're talking about here, a process has created a THP,
> > done something to cause it to be partially mapped (or mapped in a
> > misaligned way) in its own address space, then forked, and we're
> > trying to figure out if it's safe to reuse it? I say that situation is
> > rare enough that it's OK to always allocate an order-0 page and
> > copy into it.
>
> Yes, we are talking past each other and no, I am talking about fully
> mapped THP, just mapped via PTEs.
>
> Please refer to our THP COW logic: do_huge_pmd_wp_page()

You're going to have to be a bit more explicit. That's clearly handling
the case where there's a PMD mapping. If there is _also_ a PTE mapping,
then obviously the page is mapped more than once and can't be reused!

> >>> Anon THP is always going to start out aligned (and can be moved by
> >>> mremap()). Arguably it should be broken up if it's moved so it can be
> >>> reformed into aligned THPs by khugepaged.
> >>
> >> Can you elaborate, I'm missing the point where something gets moved. I
> >> don't care about mremap() at all here.
> >>
> >> 1. You have a read-only, PTE mapped THP
> >> 2. Write fault on the THP
> >> 3. We PTE-map the THP because we run into a false positive in our COW
> >>    logic to handle COW on PTE
> >> 4. Write fault on the PTE
> >> 5. We always have to COW each and every sub-page and can never reuse,
> >>    because page_count() > 1
> >>
> >> That's essentially what reuse_swap_page() tried to handle before.
> >> Eventually optimizing for this is certainly the next step, but I'd like
> >> to document which effect the removal of reuse_swap_page() will have on THP.
> >
> > I'm talking about step 0. How do we get a read-only, PTE-mapped THP?
> > Through mremap() or perhaps through an mprotect()/mmap()/munmap() that
> > failed to split the THP.
>
> do_huge_pmd_wp_page()

I feel you could be a little more verbose about what you think is going
on here. Are you talking about the fallback: path where we call
__split_huge_pmd()?