Subject: Re: [PATCH v2 12/14] arm64/mm: Wire up PTE_CONT for user mappings
Date: Tue, 21 Nov 2023 15:14:55 +0000
From: Ryan Roberts
To: Alistair Popple
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier,
 Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
 Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Anshuman Khandual,
 Matthew Wilcox, Yu Zhao, Mark Rutland, David Hildenbrand,
 Kefeng Wang, John Hubbard, Zi Yan,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20231115163018.1303287-1-ryan.roberts@arm.com>
 <20231115163018.1303287-13-ryan.roberts@arm.com>
 <87v89vmjus.fsf@nvdebian.thelocal>
In-Reply-To: <87v89vmjus.fsf@nvdebian.thelocal>

On 21/11/2023 11:22, Alistair Popple wrote:
>
> Ryan Roberts writes:
>
> [...]
>
>> +static void contpte_fold(struct mm_struct *mm, unsigned long addr,
>> +			pte_t *ptep, pte_t pte, bool fold)
>> +{
>> +	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
>> +	unsigned long start_addr;
>> +	pte_t *start_ptep;
>> +	int i;
>> +
>> +	start_ptep = ptep = contpte_align_down(ptep);
>> +	start_addr = addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
>> +	pte = pfn_pte(ALIGN_DOWN(pte_pfn(pte), CONT_PTES), pte_pgprot(pte));
>> +	pte = fold ? pte_mkcont(pte) : pte_mknoncont(pte);
>> +
>> +	for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE) {
>> +		pte_t ptent = __ptep_get_and_clear(mm, addr, ptep);
>> +
>> +		if (pte_dirty(ptent))
>> +			pte = pte_mkdirty(pte);
>> +
>> +		if (pte_young(ptent))
>> +			pte = pte_mkyoung(pte);
>> +	}
>> +
>> +	__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
>> +
>> +	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
>> +}
>> +
>> +void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
>> +			pte_t *ptep, pte_t pte)
>> +{
>> +	/*
>> +	 * We have already checked that the virtual and physical addresses
>> +	 * are correctly aligned for a contpte mapping in contpte_try_fold()
>> +	 * so the remaining checks are to ensure that the contpte range is
>> +	 * fully covered by a single folio, and to ensure that all the ptes
>> +	 * are valid with contiguous PFNs and matching prots. We ignore the
>> +	 * state of the access and dirty bits for the purpose of deciding
>> +	 * if it's a contiguous range; the folding process will generate a
>> +	 * single contpte entry which has a single access and dirty bit.
>> +	 * Those 2 bits are the logical OR of their respective bits in the
>> +	 * constituent pte entries. In order to ensure the contpte range is
>> +	 * covered by a single folio, we must recover the folio from the
>> +	 * pfn, but special mappings don't have a folio backing them.
>> +	 * Fortunately contpte_try_fold() already checked that the pte is
>> +	 * not special - we never try to fold special mappings. Note we
>> +	 * can't use vm_normal_page() for this since we don't have the vma.
>> +	 */
>> +
>> +	struct page *page = pte_page(pte);
>> +	struct folio *folio = page_folio(page);
>> +	unsigned long folio_saddr = addr - (page - &folio->page) * PAGE_SIZE;
>> +	unsigned long folio_eaddr = folio_saddr + folio_nr_pages(folio) * PAGE_SIZE;
>> +	unsigned long cont_saddr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
>> +	unsigned long cont_eaddr = cont_saddr + CONT_PTE_SIZE;
>> +	unsigned long pfn;
>> +	pgprot_t prot;
>> +	pte_t subpte;
>> +	pte_t *orig_ptep;
>> +	int i;
>> +
>> +	if (folio_saddr > cont_saddr || folio_eaddr < cont_eaddr)
>> +		return;
>> +
>> +	pfn = pte_pfn(pte) - ((addr - cont_saddr) >> PAGE_SHIFT);
>> +	prot = pte_pgprot(pte_mkold(pte_mkclean(pte)));
>> +	orig_ptep = ptep;
>> +	ptep = contpte_align_down(ptep);
>> +
>> +	for (i = 0; i < CONT_PTES; i++, ptep++, pfn++) {
>> +		subpte = __ptep_get(ptep);
>> +		subpte = pte_mkold(pte_mkclean(subpte));
>> +
>> +		if (!pte_valid(subpte) ||
>> +		    pte_pfn(subpte) != pfn ||
>> +		    pgprot_val(pte_pgprot(subpte)) != pgprot_val(prot))
>> +			return;
>> +	}
>> +
>> +	contpte_fold(mm, addr, orig_ptep, pte, true);
>> +}
>> +EXPORT_SYMBOL(__contpte_try_fold);
>> +
>> +void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
>> +			pte_t *ptep, pte_t pte)
>> +{
>> +	/*
>> +	 * We have already checked that the ptes are contiguous in
>> +	 * contpte_try_unfold(), so we can unfold unconditionally here.
>> +	 */
>> +
>> +	contpte_fold(mm, addr, ptep, pte, false);
>
> I'm still working my way through the series but

Thanks for taking the time to review!

> calling a fold during an
> unfold stood out as it seemed wrong. Obviously further reading revealed
> the boolean flag that changes the function's meaning but I think it
> would be better to refactor that.

Yes that sounds reasonable.

>
> We could easily rename contpte_fold() to eg. set_cont_ptes() and factor
> the pte calculation loop into a separate helper
> (eg. calculate_contpte_dirty_young() or some hopefully better name)
> called further up the stack. That has an added benefit of providing a
> spot to add the nice comment for young/dirty rules you provided in the
> patch description ;-)
>
> In other words we'd have something like:
>
> void __contpte_try_unfold() {
>	pte = calculate_contpte_dirty_young(mm, addr, ptep, pte);
>	pte = pte_mknoncont(pte);
>	set_cont_ptes(mm, addr, ptep, pte);
> }

My concern with this approach is that calculate_contpte_dirty_young()
has side effects; it has to clear each PTE as it loops through, to
prevent a race between our reading access/dirty and another thread
causing access/dirty to be set. So it's not just a "calculation"; it's
the teardown portion of the process too. I guess it's a taste thing, so
I'm happy for it to be argued the other way, but I would prefer to keep
it all together in one function.

How about renaming contpte_fold() to contpte_convert() or
contpte_repaint() (other suggestions welcome), and extracting the
pte_mkcont()/pte_mknoncont() part (so we can remove the bool param):

void __contpte_try_unfold() {
	pte = pte_mknoncont(pte);
	contpte_convert(mm, addr, ptep, pte);
}

Thanks,
Ryan

>
> Which IMHO is more immediately understandable.
>
> - Alistair
>
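
For concreteness, here is roughly what the rename Ryan proposes would
look like: contpte_convert() keeps the contpte_fold() body quoted above
unchanged, except that the bool parameter and the
pte_mkcont()/pte_mknoncont() selection are dropped, with each caller
applying the contiguous bit itself. This is a sketch assembled from the
code in this thread, not code that appears in the patch itself:

static void contpte_convert(struct mm_struct *mm, unsigned long addr,
			    pte_t *ptep, pte_t pte)
{
	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
	unsigned long start_addr;
	pte_t *start_ptep;
	int i;

	start_ptep = ptep = contpte_align_down(ptep);
	start_addr = addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
	pte = pfn_pte(ALIGN_DOWN(pte_pfn(pte), CONT_PTES), pte_pgprot(pte));

	/*
	 * Clear each old entry (the side-effecting teardown Ryan refers
	 * to above) while accumulating access/dirty into the new entry.
	 */
	for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE) {
		pte_t ptent = __ptep_get_and_clear(mm, addr, ptep);

		if (pte_dirty(ptent))
			pte = pte_mkdirty(pte);

		if (pte_young(ptent))
			pte = pte_mkyoung(pte);
	}

	__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);

	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
}

void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
			  pte_t *ptep, pte_t pte)
{
	/* Callers now pick the cont bit; the body stays in one place. */
	pte = pte_mknoncont(pte);
	contpte_convert(mm, addr, ptep, pte);
}

The fold side would end symmetrically, with __contpte_try_fold()
calling contpte_convert(mm, addr, orig_ptep, pte_mkcont(pte)) in place
of the contpte_fold(..., true) call in the quoted patch.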