Date: Tue, 14 Feb 2023 18:39:21 +0100
From: David Hildenbrand
Organization: Red Hat
To: Yang Shi
Cc: Chih-En Lin, Pasha Tatashin, Andrew Morton, Qi Zheng,
 "Matthew Wilcox (Oracle)", Christophe Leroy, John Hubbard, Nadav Amit,
 Barry Song, Steven Rostedt, Masami Hiramatsu, Peter Zijlstra, Ingo Molnar,
 Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa,
 Namhyung Kim, Peter Xu, Vlastimil Babka, Zach O'Keefe, Yun Zhou,
 Hugh Dickins, Suren Baghdasaryan, Yu Zhao, Juergen Gross, Tong Tiangen,
 Liu Shixin, Anshuman Khandual, Li kunyu, Minchan Kim, Miaohe Lin,
 Gautam Menghani, Catalin Marinas, Mark Brown, Will Deacon,
 Vincenzo Frascino, Thomas Gleixner, "Eric W. Biederman", Andy Lutomirski,
 Sebastian Andrzej Siewior, "Liam R. Howlett", Fenghua Yu, Andrei Vagin,
 Barret Rhoden, Michal Hocko, "Jason A. Donenfeld", Alexey Gladkov,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org,
 linux-perf-users@vger.kernel.org, Dinglan Peng, Pedro Fonseca, Jim Huang,
 Huichun Feng
References: <20230207035139.272707-1-shiyn.lin@gmail.com>
 <62c44d12-933d-ee66-ef50-467cd8d30a58@redhat.com>
Subject: Re: [PATCH v4 00/14] Introduce Copy-On-Write to Page Table
X-Mailing-List: linux-kernel@vger.kernel.org

On 14.02.23 18:23, Yang Shi wrote:
> On Tue, Feb 14, 2023 at 1:58 AM David Hildenbrand wrote:
>>
>> On 10.02.23 18:20, Chih-En Lin wrote:
>>> On Fri, Feb 10, 2023 at 11:21:16AM -0500, Pasha Tatashin wrote:
>>>>>>> Currently, copy-on-write is only used for the mapped memory; the
>>>>>>> child process still needs to copy the entire page table from the
>>>>>>> parent process during forking. The parent process might take a lot
>>>>>>> of time and memory to copy the page table when the parent has a big
>>>>>>> page table allocated. For example, the memory usage of a process
>>>>>>> after forking with 1 GB mapped memory is as follows:
>>>>>>
>>>>>> For some reason, I was not able to reproduce performance improvements
>>>>>> with a simple fork() performance measurement program.
>>>>>> The results that I saw are the following:
>>>>>>
>>>>>> Base:
>>>>>> Fork latency per gigabyte: 0.004416 seconds
>>>>>> Fork latency per gigabyte: 0.004382 seconds
>>>>>> Fork latency per gigabyte: 0.004442 seconds
>>>>>> COW kernel:
>>>>>> Fork latency per gigabyte: 0.004524 seconds
>>>>>> Fork latency per gigabyte: 0.004764 seconds
>>>>>> Fork latency per gigabyte: 0.004547 seconds
>>>>>>
>>>>>> AMD EPYC 7B12 64-Core Processor
>>>>>> Base:
>>>>>> Fork latency per gigabyte: 0.003923 seconds
>>>>>> Fork latency per gigabyte: 0.003909 seconds
>>>>>> Fork latency per gigabyte: 0.003955 seconds
>>>>>> COW kernel:
>>>>>> Fork latency per gigabyte: 0.004221 seconds
>>>>>> Fork latency per gigabyte: 0.003882 seconds
>>>>>> Fork latency per gigabyte: 0.003854 seconds
>>>>>>
>>>>>> Given that the page table for the child is not copied, I was
>>>>>> expecting the performance to be better with the COW kernel, and also
>>>>>> not to depend on the size of the parent.
>>>>>
>>>>> Yes, the child won't duplicate the page table, but fork will still
>>>>> traverse all the page table entries to do the accounting.
>>>>> And, since this patch extends COW to the PTE table level, it's no
>>>>> longer mapped-page (page table entry) grained, so we have to
>>>>> guarantee that all the mapped pages are available to do COW mapping
>>>>> in such a page table.
>>>>> This kind of checking also costs some time.
>>>>> As a result, because of the accounting and the checking, the COW PTE
>>>>> fork still depends on the size of the parent, so the improvement
>>>>> might not be significant.
>>>>
>>>> The current version of the series does not provide any performance
>>>> improvements for fork(). I would recommend removing claims from the
>>>> cover letter about better fork() performance, as this may be
>>>> misleading for those looking for a way to speed up forking. In my
>>>
>>> From v3 to v4, I changed the implementation of the COW fork() part to
>>> do the accounting and checking. At the time, I also removed most of the
>>> descriptions about better fork() performance. Maybe it's not enough and
>>> is still somewhat misleading. I will fix this in the next version.
>>> Thanks.
>>>
>>>> case, I was looking to speed up Redis OSS, which relies on fork() to
>>>> create consistent snapshots for driving replicas/backups. The O(N)
>>>> per-page operation causes fork() to be slow, so I was hoping that this
>>>> series, which does not duplicate the VA during fork(), would make the
>>>> operation much quicker.
>>>
>>> Indeed, at first, I tried to avoid the O(N) per-page operation by
>>> deferring the accounting and the swap stuff to the page fault. But,
>>> as I mentioned, it's not suitable for the mainline.
>>>
>>> Honestly, for improving fork(), I have an idea to skip the per-page
>>> operation without breaking the logic. However, this would introduce a
>>> complicated mechanism and may add overhead for other features. It
>>> might not be worth it. It's hard to strike a balance between an
>>> over-complicated mechanism with (probably) better performance and data
>>> consistency with the page status. So, I would focus on the safe and
>>> stable approach first.
>>
>> Yes, it is most probably possible, but complexity, robustness and
>> maintainability have to be considered as well.
>>
>> Thanks for implementing this approach (only deduplication without other
>> optimizations) and evaluating it accordingly.
>> It's certainly "cleaner", such that we only have to mess with unsharing
>> and not with other accounting/pinning/mapcount thingies. But it also
>> highlights how intrusive even this basic deduplication approach already
>> is -- and that most benefits of the original approach require even more
>> complexity on top.
>>
>> I am not quite sure if the benefit is worth the price (I am not the one
>> to decide and I would like to hear other opinions).
>>
>> My quick thoughts after skimming over the core parts of this series:
>>
>> (1) Forgetting to break COW on a PTE in some pgtable walker feels quite
>>     likely (meaning that it might be fairly error-prone), and forgetting
>>     to break COW on a PTE table means accidentally modifying the shared
>>     table.
>> (2) break_cow_pte() can fail, which means that we can fail some
>>     operations (possibly silently halfway through) now. For example,
>>     looking at your change_pte_range() change, I suspect it's wrong.
>> (3) handle_cow_pte_fault() looks quite complicated and needs quite some
>>     double-checking: we temporarily clear the PMD, to reset it
>>     afterwards. I am not sure that is correct. For example, what stops
>>     another page fault from stumbling over that pmd_none() and
>>     allocating an empty page table? Maybe there are some locking details
>>     missing, or they are very subtle such that we better document them.
>>     I recall that THP played quite some tricks to make such cases
>>     work ...
>>
>>>
>>>>> Actually, in RFC v1 and v2, we proposed a version that skipped that
>>>>> work, and we got a significant improvement. You can see the numbers
>>>>> in the RFC v2 cover letter [1]:
>>>>> "In short, with 512 MB mapped memory, COW PTE decreases latency by 93%
>>>>> for normal fork"
>>>>
>>>> I suspect the 93% improvement (when the mapcount was not updated) was
>>>> only for VAs with 4K pages. With 2M mappings this series did not
>>>> provide any benefit, is this correct?
>>>
>>> Yes. In this case, the COW PTE performance is similar to the normal
>>> fork().
>>
>>
>> The thing with THP is that, during fork(), we always allocate a backup
>> PTE table, to be able to PTE-map the THP whenever we have to. Otherwise
>> we'd have to eventually fail some operations we don't want to fail --
>> similar to the case where break_cow_pte() could fail now due to -ENOMEM
>> although we really don't want to fail (e.g., change_pte_range()).
>>
>> I always considered that wasteful, because in many scenarios, we'll
>> never ever split a THP and possibly waste memory.
>
> When you say "split THP", do you mean splitting the compound page into
> base pages? IIUC the backup PTE table page is used to guarantee that the
> PMD split (just converting a PMD-mapped THP to PTE-mapped without
> splitting the compound page) succeeds. You may have already noticed that
> there is no return value for PMD split.

Yes, as I raised in my other reply.

>
> The PMD split may be called quite often, for example, by MADV_DONTNEED,
> mbind, mlock, and even in the memory reclamation context (THP swap).

Yes, but with a single MADV_DONTNEED call you cannot PTE-map more than
2 THPs (all other overlapped THPs will get zapped). Same with most other
operations.

There are corner cases, though. I recall that s390x/kvm wants to break
all THPs in a given VMA range. But that operation could safely fail if
we can't do that.

Certainly needs some investigation; that's most probably why it hasn't
been done yet.
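A minimal userspace sketch of that MADV_DONTNEED point, assuming x86-64
with 2 MiB THPs and THP enabled; this is illustrative only and is not code
from this series or from the thread. A discard range with unaligned ends
only partially covers the first and last THP, so at most those two PMD
mappings have to be PTE-mapped (PMD split); every THP fully inside the
range is simply zapped without a split.

/*
 * Illustrative sketch (not from the thread): a single MADV_DONTNEED with
 * unaligned ends forces a PMD split of at most the two boundary THPs.
 * Assumes x86-64, 2 MiB THPs, THP set to "madvise" or "always".
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define THP_SIZE (2UL * 1024 * 1024)
#define NR_THPS  8UL
#define LEN      (NR_THPS * THP_SIZE)

int main(void)
{
	char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	madvise(p, LEN, MADV_HUGEPAGE);	/* ask for THPs */
	memset(p, 1, LEN);		/* populate, hopefully PMD-mapped */

	/*
	 * Discard from the middle of THP #1 to the middle of THP #6:
	 * THP #1 and #6 are only partially covered, so only their PMD
	 * mappings need a PMD split; THPs #2..#5 are fully covered and
	 * get zapped without any split.
	 */
	if (madvise(p + THP_SIZE + THP_SIZE / 2, 5 * THP_SIZE,
		    MADV_DONTNEED))
		perror("madvise(MADV_DONTNEED)");

	/* /proc/self/smaps (AnonHugePages) shows what remains PMD-mapped. */
	return 0;
}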
>
>>
>> Optimizing that for THP (e.g., don't always allocate a backup PTE
>> table, have some global allocation backup pool for splits + refill
>> when close-to-empty) might provide similar fork() improvements, both
>> in speed and memory consumption when it comes to anonymous memory.
>
> It might work. But it may be much more complicated than what you
> thought when handling multiple parallel PMD splits.

I consider the whole PTE-table linking to THPs complicated enough to
eventually replace it by something differently complicated that wastes
less memory ;)

-- 
Thanks,

David / dhildenb
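The fork() measurement program Pasha refers to earlier in the thread is
not included here; a minimal sketch of that kind of "fork latency per
gigabyte" measurement (map and touch anonymous memory in the parent, then
time fork() while the child exits immediately) might look like the
following. The gigabyte count argument and the immediate child exit are
illustrative assumptions, not details taken from the thread.

/*
 * Minimal sketch of a "fork latency per gigabyte" microbenchmark of the
 * kind quoted above; not the program actually used in the thread.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define GB (1024UL * 1024 * 1024)

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
	unsigned long gigs = (argc > 1) ? strtoul(argv[1], NULL, 0) : 1;
	size_t len = gigs * GB;
	double t0, t1;
	pid_t pid;

	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 1, len);	/* populate page tables in the parent */

	t0 = now_sec();
	pid = fork();		/* the operation being measured */
	if (pid == 0)
		_exit(0);	/* child: exit immediately */
	t1 = now_sec();

	if (pid < 0) {
		perror("fork");
		return 1;
	}
	waitpid(pid, NULL, 0);

	printf("Fork latency per gigabyte: %f seconds\n", (t1 - t0) / gigs);
	return 0;
}

Running the same binary on a base kernel and on a patched kernel, as in
the numbers quoted above, gives directly comparable per-gigabyte figures.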