Message-ID: <28968568-f920-47ac-b6fd-87528ffd8f77@redhat.com>
Date: Wed, 20 Dec 2023 10:54:56 +0100
From: David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()
To: Ryan Roberts, Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier,
 Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin,
 Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
 Andrew Morton, Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland,
 Kefeng Wang, John Hubbard, Zi Yan, Barry Song <21cnbao@gmail.com>,
 Alistair Popple, Yang Shi
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20231218105100.172635-1-ryan.roberts@arm.com>
 <20231218105100.172635-3-ryan.roberts@arm.com>
 <0bef5423-6eea-446b-8854-980e9c23a948@redhat.com>
 <7c0236ad-01f3-437f-8b04-125d69e90dc0@redhat.com>
 <9a58b1a2-2c13-4fa0-8ffa-2b3d9655f1b6@arm.com>
Organization: Red Hat
In-Reply-To: <9a58b1a2-2c13-4fa0-8ffa-2b3d9655f1b6@arm.com>

On 20.12.23 10:51, Ryan Roberts wrote:
> On 20/12/2023 09:17, David Hildenbrand wrote:
>> On 19.12.23 18:42, Ryan Roberts wrote:
>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>>>>> batch is determined by the architecture with the new helper,
>>>>>>> pte_batch_remaining(), and maps a physically contiguous block of
>>>>>>> memory, all belonging to the same folio. A pte batch is then
>>>>>>> write-protected in one go in the parent using the new helper,
>>>>>>> ptep_set_wrprotects(), and is set in one go in the child using the
>>>>>>> new helper, set_ptes_full().
>>>>>>>
>>>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>>>> maintenance operations that the arm64 backend has to perform during
>>>>>>> fork, as it is about to add transparent support for the "contiguous
>>>>>>> bit" in its ptes. By write-protecting the parent using the new
>>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>>>> when all ptes in the range are being write-protected.
>>>>>>> Similarly, by using set_ptes_full() rather than set_pte_at() to set
>>>>>>> up ptes in the child, the backend does not need to fold a contiguous
>>>>>>> range once they are all populated - they can be initially populated
>>>>>>> as a contiguous range in the first place.
>>>>>>>
>>>>>>> This code is very performance sensitive, and a significant amount of
>>>>>>> effort has been put into not regressing performance for the order-0
>>>>>>> folio case. By default, pte_batch_remaining() is a compile-time
>>>>>>> constant 1, which enables the compiler to simplify the extra loops
>>>>>>> that are added for batching and produce code that is equivalent (and
>>>>>>> equally performant) to the previous implementation.
>>>>>>>
>>>>>>> This change addresses the core-mm refactoring only; a separate change
>>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>>
>>>>>>> To ensure arm64 is performant once implemented, this change is very
>>>>>>> careful to only call ptep_get() once per pte batch.
>>>>>>>
>>>>>>> The following microbenchmark results demonstrate that there is no
>>>>>>> significant performance change after this patch. Fork is called in a
>>>>>>> tight loop in a process with 1G of populated memory and the time for
>>>>>>> the function to execute is measured. 100 iterations per run, 8 runs
>>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>>>>> were performed for the case where the 1G of memory is comprised of
>>>>>>> order-0 folios and the case where it is comprised of pte-mapped
>>>>>>> order-9 folios. Negative is faster, positive is slower, compared to
>>>>>>> the baseline upon which the series is based:
>>>>>>>
>>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>> | fork          |-------------------|-------------------|
>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>>
>>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>> | fork          |-------------------|-------------------|
>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>>
>>>>>>> Tested-by: John Hubbard
>>>>>>> Reviewed-by: Alistair Popple
>>>>>>> Signed-off-by: Ryan Roberts
>>>>>>> ---
>>>>>>>  include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>>>  mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
>>>>>>>  2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>
>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>> --- a/include/linux/pgtable.h
>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>  #define arch_flush_lazy_mmu_mode()    do {} while (0)
>>>>>>>  #endif
>>>>>>>
>>>>>>> +#ifndef pte_batch_remaining
>>>>>>> +/**
>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>> + * @addr: Address of the first page.
>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>> + *
>>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous
>>>>>>> + * batch of ptes. In such cases, this function returns the remaining
>>>>>>> + * number of pages to the end of the current batch, as defined by
>>>>>>> + * addr. This can be useful when iterating over ptes.
>>>>>>> + *
>>>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>>>> + */
>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
>>>>>>> +                        unsigned long end)
>>>>>>> +{
>>>>>>> +    return 1;
>>>>>>> +}
>>>>>>> +#endif
>>>>>>
>>>>>> It's a shame we now lose the optimization for all other architectures.
>>>>>>
>>>>>> Was there no way to have some basic batching mechanism that doesn't
>>>>>> require arch specifics?
>>>>>
>>>>> I tried a bunch of things but ultimately the way I've done it was the
>>>>> only way to reduce the order-0 fork regression to 0.
>>>>>
>>>>> My original v3 posting was costing 5% extra and even my first attempt
>>>>> at an arch-specific version that didn't resolve to a compile-time
>>>>> constant 1 still cost an extra 3%.
>>>>>
>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>
>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>> * Check that PFN is consecutive
>>>>>> * Check that all PFNs belong to the same folio
>>>>>
>>>>> I haven't tried this exact approach, but I'd be surprised if I can get
>>>>> the regression under 4% with this. Further along the series I spent a
>>>>> lot of time having to fiddle with the arm64 implementation; every
>>>>> conditional and every memory read (even when in cache) was a problem.
>>>>> There is just so little in the inner loop that every instruction
>>>>> matters. (At least on Ampere Altra and Apple M2.)
>>>>>
>>>>> Of course if you're willing to pay that 4-5% for order-0 then the
>>>>> benefit to order-9 is around 10% in my measurements. Personally though,
>>>>> I'd prefer to play safe and ensure the common order-0 case doesn't
>>>>> regress, as you previously suggested.
>>>>
>>>> I just hacked something up, on top of my beloved rmap cleanup/batching
>>>> series. I implemented very generic and simple batching for large folios
>>>> (all PTE bits except the PFN have to match).
>>>>
>>>> Some very quick testing (don't trust each last %) on Intel(R) Xeon(R)
>>>> Silver 4210R CPU.
>>>>
>>>> order-0: 0.014210 -> 0.013969
>>>>
>>>> -> Around 1.7 % faster
>>>>
>>>> order-9: 0.014373 -> 0.009149
>>>>
>>>> -> Around 36.3 % faster
>>>
>>> Well I guess that shows me :)
>>>
>>> I'll do a review and run the tests on my HW to see if it concurs.
>>
>> I pushed a simple compile fixup (we need pte_next_pfn()).
>
> I've just been trying to compile and noticed this. Will take a look at
> your update.
>
> But upon review, I've noticed the part that I think makes this difficult
> for arm64 with the contpte optimization; you are calling ptep_get() for
> every pte in the batch. While this is functionally correct, once arm64 has
> the contpte changes, its ptep_get() has to read every pte in the contpte
> block in order to gather the access and dirty bits. So if your batching
> function ends up walking a 16-entry contpte block, that will cause 16 x 16
> reads, which kills performance. That's why I added the arch-specific
> pte_batch_remaining() function; this allows the core-mm to skip to the end
> of the contpte block and avoid ptep_get() for the 15 tail ptes. So we end
> up with 16 READ_ONCE()s instead of 256.
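To make the arithmetic above concrete, a minimal sketch of such an arch
override could look like the following. It assumes a contpte layout with
CONT_PTES entries per physically contiguous block and a pte_cont() helper
that tests the contiguous bit; this is illustrative only, not the actual
arm64 implementation from the series:

/*
 * Illustrative sketch only. Assumes CONT_PTES ptes per contiguous block,
 * with CONT_PTE_SIZE/CONT_PTE_MASK derived from it, and a pte_cont()
 * helper that tests the contiguous bit.
 */
static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
					       unsigned long end)
{
	unsigned long next;

	/* Not part of a contiguous block: batch size stays 1. */
	if (!pte_cont(pte))
		return 1;

	/* Skip to the end of this contpte block, capped at the caller's end. */
	next = min((addr & CONT_PTE_MASK) + CONT_PTE_SIZE, end);

	return (next - addr) >> PAGE_SHIFT;
}

With this shape, core-mm calls ptep_get() only on the head pte of each
16-entry block, so one block costs a single ptep_get() (16 READ_ONCE()s)
rather than 16 of them (256 reads).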
> I considered making a ptep_get_noyoungdirty() variant, which would avoid
> the bit gathering. But we have a similar problem in zap_pte_range() and
> that function needs the dirty bit to update the folio. So it doesn't work
> there. (See patch 3 in my series.)
>
> I guess you are going to say that we should combine both approaches, so
> that your batching loop can skip forward an arch-provided number of ptes?
> That would certainly work, but feels like an orthogonal change to what I'm
> trying to achieve :). Anyway, I'll spend some time playing with it today.

You can override the function or add special-casing internally, yes.

Right now, your patch is called "mm: Batch-copy PTE ranges during fork()"
and it doesn't do any of that besides preparing for some arm64 work.

-- 
Cheers,

David / dhildenb
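For reference, the very generic batching David describes above (all PTE bits
except the PFN have to match, PFNs consecutive, all within one folio) can be
sketched roughly as follows. The helper name is hypothetical and this is not
the code he actually pushed; note that it still calls ptep_get() once per
pte, which is exactly the per-entry cost Ryan's arch-provided
pte_batch_remaining() is designed to skip:

/*
 * Hypothetical sketch of generic pte batch detection. The caller must
 * ensure [ptep, ptep + max_nr) stays within one large folio, which covers
 * the "all PFNs belong to the same folio" check.
 */
static inline int pte_batch_sketch(pte_t *ptep, pte_t pte, int max_nr)
{
	pte_t expected = pte_next_pfn(pte);	/* same bits, next PFN */
	int nr = 1;

	while (nr < max_nr) {
		pte_t next = ptep_get(ptep + nr);

		/* All pte bits except the PFN have to match. */
		if (!pte_same(next, expected))
			break;

		expected = pte_next_pfn(expected);
		nr++;
	}

	return nr;
}

Combining the two approaches, as floated at the end of the thread, would
mean letting such a loop jump forward in arch-provided strides rather than
advancing one pte at a time.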