From: Barry Song <21cnbao@gmail.com>
Date: Fri, 17 Nov 2023 08:15:48 +0800
Subject: Re: [RFC V3 PATCH] arm64: mm: swap: save and restore mte tags for large folios
To: David Hildenbrand
Cc: steven.price@arm.com, akpm@linux-foundation.org, ryan.roberts@arm.com,
    catalin.marinas@arm.com, will@kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, mhocko@suse.com, shy828301@gmail.com,
    v-songbaohua@oppo.com, wangkefeng.wang@huawei.com, willy@infradead.org,
    xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com

On Fri, Nov 17, 2023 at 7:47 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Thu, Nov 16, 2023 at 5:36 PM David Hildenbrand wrote:
> >
> > On 15.11.23 21:49, Barry Song wrote:
> > > On Wed, Nov 15, 2023 at 11:16 PM David Hildenbrand wrote:
> > >>
> > >> On 14.11.23 02:43, Barry Song wrote:
> > >>> This patch makes MTE tag saving and restoring support large
> > >>> folios, so we don't need to split them into base pages for
> > >>> swapping out on ARM64 SoCs with MTE.
> > >>>
> > >>> arch_prepare_to_swap() should take a folio rather than a page as
> > >>> its parameter because we support THP swap-out as a whole.
> > >>>
> > >>> Meanwhile, arch_swap_restore() should use a page rather than a
> > >>> folio as its parameter, as swap-in always works at the granularity
> > >>> of base pages right now.
> > >>
> > >> ... but then we always have order-0 folios and can pass a folio, or
> > >> what am I missing?
> > >
> > > Hi David,
> > > you missed the discussion here:
> > >
> > > https://lore.kernel.org/lkml/CAGsJ_4yXjex8txgEGt7+WMKp4uDQTn-fR06ijv4Ac68MkhjMDw@mail.gmail.com/
> > > https://lore.kernel.org/lkml/CAGsJ_4xmBAcApyK8NgVQeX_Znp5e8D4fbbhGguOkNzmh1Veocg@mail.gmail.com/
> >
> > Okay, so you want to handle the refault-from-swapcache case where you
> > get a large folio.
> >
> > I was misled by your "folio as swap-in always works at the granularity
> > of base pages right now" comment.
> >
> > What you actually wanted to say is "While we always swap in small
> > folios, we might refault large folios from the swapcache, and we only
> > want to restore the tags for the page of the large folio we are
> > faulting on."
> >
> > But, I do wonder if we can't simply restore the tags for the whole
> > thing at once and make the interface page-free?
> >
> > Let me elaborate:
> >
> > IIRC, if we have a large folio in the swapcache, the swap
> > entries/offsets are contiguous. If you know you are faulting on
> > page[1] of the folio with a given swap offset, you can calculate the
> > swap offset for page[0] simply by subtracting from the offset.
> >
> > See page_swap_entry() on how we perform this calculation.
> >
> > So you can simply pass the large folio and the swap entry corresponding
> > to the first page of the large folio, and restore all tags at once.
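(To make the offset arithmetic above concrete: a rough, untested sketch.
folio_start_swap_entry() is a hypothetical name used only for
illustration; swp_type(), swp_offset() and swp_entry() come from
<linux/swapops.h> and folio_page_idx() from <linux/mm.h>. It is
essentially page_swap_entry() run in reverse.)

	static inline swp_entry_t folio_start_swap_entry(struct folio *folio,
							 struct page *page,
							 swp_entry_t entry)
	{
		/*
		 * Swap entries of a large folio in the swapcache are
		 * contiguous, so the entry of page[0] is the faulting
		 * page's entry minus that page's index within the folio.
		 */
		unsigned long offset = swp_offset(entry) -
				       folio_page_idx(folio, page);

		return swp_entry(swp_type(entry), offset);
	}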
> >
> > So the interface would be
> >
> > arch_prepare_to_swap(struct folio *folio);
> > void arch_swap_restore(struct folio *folio, swp_entry_t start_entry);
> >
> > I'm sorry if that was also already discussed.
>
> This has been discussed. Steven, Ryan and I all don't think this is a
> good option. In the case of a large folio with 16 base pages, since
> do_swap_page() can only map one base page per page fault, we would have
> to restore 16 (tags restored in each page fault) * 16 (the number of
> page faults) = 256 pages' tags for this large folio.
>
> And the still worse thing is that the page fault on the Nth PTE of the
> large folio might free the swap entry, as that swap slot has already
> been swapped in:
>
> do_swap_page()
> {
>         /*
>          * Remove the swap entry and conditionally try to free up the
>          * swapcache.  We're already holding a reference on the page
>          * but haven't mapped it yet.
>          */
>         swap_free(entry);
> }
>
> So in the page faults other than the Nth one, I mean 0 ~ N-1 and
> N+1 ~ 15, you might access a freed tag.

And David, one more piece of information: to keep the parameter of
arch_swap_restore() unchanged as a folio, I actually tried an ugly
approach in RFC v2:

+void arch_swap_restore(swp_entry_t entry, struct folio *folio)
+{
+	if (system_supports_mte()) {
+		/*
+		 * We don't support large folio swap-in as a whole yet, but
+		 * we can hit a large folio which is still in the swapcache
+		 * after the related processes' PTEs have been unmapped but
+		 * before the swapcache folio is dropped.  In this case, we
+		 * need to find the exact page which "entry" is mapping to.
+		 * If we are not hitting the swapcache, this folio won't be
+		 * large.
+		 */
+		struct page *page = folio_file_page(folio, swp_offset(entry));
+
+		mte_restore_tags(entry, page);
+	}
+}

And obviously everybody in the discussion hated it :-)

I feel the only way to keep the API unchanged while using a folio is to
support restoring PTEs for the whole large folio all together and to
support large folio swap-in as well. This is on my to-do list; I will
send a patchset based on Ryan's large anon folios series after a while.

Until that is really done, it seems using a page rather than a folio is
the better choice.

> >
> > BUT, IIRC in the context of
> >
> > commit cfeed8ffe55b37fa10286aaaa1369da00cb88440
> > Author: David Hildenbrand
> > Date:   Mon Aug 21 18:08:46 2023 +0200
> >
> >     mm/swap: stop using page->private on tail pages for THP_SWAP
> >
> >     Patch series "mm/swap: stop using page->private on tail pages
> >     for THP_SWAP + cleanups".
> >
> >     This series stops using page->private on tail pages for THP_SWAP,
> >     replaces folio->private by folio->swap for swapcache folios, and
> >     starts using "new_folio" for tail pages that we are splitting to
> >     remove the usage of page->private for swapcache handling
> >     completely.
> >
> > As long as the folio is in the swapcache, we even do have the proper
> > swp_entry_t start_entry available as folio_swap_entry(folio).
> >
> > But now I am confused when we actually would have to pass
> > "swp_entry_t start_entry".  We shouldn't if the folio is in the
> > swapcache ...
>
> Nope, hitting the swapcache doesn't necessarily mean the tags have been
> restored.  When A forks B, C, D, E and F, then A, B, C, D, E and F
> share the swap slot, and we have two chances to hit the swapcache:
> 1. swap-out: unmap has been done but the folios haven't been dropped
>    yet;
> 2. swap-in: the sharing processes allocate folios and add them to the
>    swapcache.
>
> For 2, if A gets the fault earlier than B, A will allocate the folio
> and add it to the swapcache, and then B will hit the swapcache.  But if
> B's CPU is faster than A's, B might still be the one mapping the PTE
> earlier than A, even though A is the one which added the page to the
> swapcache.  We have to make sure the MTE tags are there by the time the
> mapping is done.
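To make the intended call site concrete, here is a rough, untested
sketch of how the page-granularity hook would be driven from the
swap-in path. The surrounding do_swap_page() context (vmf, folio,
locking, error handling) is elided and assumed, and arch_swap_restore()
is shown with the page-based signature this patch proposes:

	/*
	 * Inside do_swap_page(), once the swapcache folio is locked and
	 * before the PTE is installed (sketch only):
	 */
	swp_entry_t entry = pte_to_swp_entry(vmf->orig_pte);
	/* Pick the exact base page of a possibly-large swapcache folio. */
	struct page *page = folio_file_page(folio, swp_offset(entry));

	/* Restore this base page's MTE tags before it becomes mapped. */
	arch_swap_restore(entry, page);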
>
> > --
> > Cheers,
> >
> > David / dhildenb

Thanks
Barry