Message-ID: <6ceb7a49-0039-4faf-a069-7d47d74790f6@arm.com>
Date: Tue, 19 Dec 2023 08:40:56 +0000
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 13/39] mm/rmap: factor out adding folio mappings
 into __folio_add_rmap()
Content-Language: en-GB
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, "Matthew Wilcox (Oracle)",
 Hugh Dickins, Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
References: <20231211155652.131054-1-david@redhat.com>
 <20231211155652.131054-14-david@redhat.com>
From: Ryan Roberts
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18/12/2023 17:06, David Hildenbrand wrote:
> On 18.12.23 17:07, Ryan Roberts wrote:
>> On 11/12/2023 15:56, David Hildenbrand wrote:
>>> Let's factor it out to prepare for reuse as we convert
>>> page_add_anon_rmap() to folio_add_anon_rmap_[pte|ptes|pmd]().
>>>
>>> Make the compiler always special-case on the granularity by using
>>> __always_inline.
>>>
>>> Reviewed-by: Yin Fengwei
>>> Signed-off-by: David Hildenbrand
>>> ---
>>>  mm/rmap.c | 81 ++++++++++++++++++++++++++++++-------------------------
>>>  1 file changed, 45 insertions(+), 36 deletions(-)
>>>
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 2ff2f11275e5..c5761986a411 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1157,6 +1157,49 @@ int folio_total_mapcount(struct folio *folio)
>>>  	return mapcount;
>>>  }
>>>
>>> +static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
>>> +		struct page *page, int nr_pages, enum rmap_mode mode,
>>> +		unsigned int *nr_pmdmapped)
>>> +{
>>> +	atomic_t *mapped = &folio->_nr_pages_mapped;
>>> +	int first, nr = 0;
>>> +
>>> +	__folio_rmap_sanity_checks(folio, page, nr_pages, mode);
>>> +
>>> +	/* Is page being mapped by PTE? Is this its first map to be added? */
>>
>> I suspect this comment is left over from the old version? It sounds a
>> bit odd in its new context.
>
> In this patch, I'm just moving the code, so it would have to be dropped
> in a previous patch.
>
> I'm happy to drop all these comments in previous patches.

Well, it doesn't really mean much to me in this new context, so I would
reword it if there is still something you need to convey to the reader;
otherwise just remove it.

>
>>
>>> +	switch (mode) {
>>> +	case RMAP_MODE_PTE:
>>> +		do {
>>> +			first = atomic_inc_and_test(&page->_mapcount);
>>> +			if (first && folio_test_large(folio)) {
>>> +				first = atomic_inc_return_relaxed(mapped);
>>> +				first = (first < COMPOUND_MAPPED);
>>> +			}
>>> +
>>> +			if (first)
>>> +				nr++;
>>> +		} while (page++, --nr_pages > 0);
>>> +		break;
>>> +	case RMAP_MODE_PMD:
>>> +		first = atomic_inc_and_test(&folio->_entire_mapcount);
>>> +		if (first) {
>>> +			nr = atomic_add_return_relaxed(COMPOUND_MAPPED, mapped);
>>> +			if (likely(nr < COMPOUND_MAPPED + COMPOUND_MAPPED)) {
>>> +				*nr_pmdmapped = folio_nr_pages(folio);
>>> +				nr = *nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
>>> +				/* Raced ahead of a remove and another add? */
>>> +				if (unlikely(nr < 0))
>>> +					nr = 0;
>>> +			} else {
>>> +				/* Raced ahead of a remove of COMPOUND_MAPPED */
>>> +				nr = 0;
>>> +			}
>>> +		}
>>> +		break;
>>> +	}
>>> +	return nr;
>>> +}
>>> +
>>>  /**
>>>   * folio_move_anon_rmap - move a folio to our anon_vma
>>>   * @folio:	The folio to move to our anon_vma
>>> @@ -1380,45 +1423,11 @@ static __always_inline void __folio_add_file_rmap(struct folio *folio,
>>>  		struct page *page, int nr_pages, struct vm_area_struct *vma,
>>>  		enum rmap_mode mode)
>>>  {
>>> -	atomic_t *mapped = &folio->_nr_pages_mapped;
>>> -	unsigned int nr_pmdmapped = 0, first;
>>> -	int nr = 0;
>>> +	unsigned int nr, nr_pmdmapped = 0;
>>
>> You're still being inconsistent with signed/unsigned here. Is there a
>> reason these can't be signed like nr_pages in the interface?
>
> I can turn them into signed values.
>
> Personally, I think it's misleading to use "signed" for values for which
> a negative value has absolutely no meaning. But sure, we can be
> consistent, at least in rmap code.

Well, it's an easy way to detect overflow. But I know what you mean; there
are lots of other APIs that accept signed/unsigned 32/64-bit values, and
it's a mess. It would be a tiny step in the right direction if a series
could at least be consistent with itself, though, IMHO. :)