2023-06-13 20:32:35

by Vishal Moola

Subject: [PATCH 0/5] Replace is_longterm_pinnable_page()

This patchset introduces some more helper functions for the folio
conversions, and converts all callers of is_longterm_pinnable_page() to
use folios.
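
For reference, the two mmzone helpers introduced in patches 1 and 2
(folio_is_zone_movable() and folio_migratetype()) are thin folio
analogues of the existing page-based versions. A minimal sketch,
assuming they simply wrap folio_zonenum() and get_pfnblock_flags_mask()
the way their page counterparts do:

static inline bool folio_is_zone_movable(const struct folio *folio)
{
	return folio_zonenum(folio) == ZONE_MOVABLE;
}

#define folio_migratetype(folio)				\
	get_pfnblock_flags_mask(&folio->page, folio_pfn(folio),	\
				MIGRATETYPE_MASK)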

Vishal Moola (Oracle) (5):
mmzone: Introduce folio_is_zone_movable()
mmzone: Introduce folio_migratetype()
mm/gup_test.c: Convert verify_dma_pinned() to use folios
mm/gup.c: Reorganize try_get_folio()
mm: Remove is_longterm_pinnable_page() and Reimplement
folio_is_longterm_pinnable()

 include/linux/mm.h     | 22 +++++------
 include/linux/mmzone.h |  8 ++++
 mm/gup.c               | 88 ++++++++++++++++++++++--------------------
 mm/gup_test.c          | 13 ++++---
 4 files changed, 70 insertions(+), 61 deletions(-)

--
2.40.1



2023-06-13 20:33:15

by Vishal Moola

Subject: [PATCH 4/5] mm/gup.c: Reorganize try_get_folio()

try_grab_folio() takes in a page, then chooses which folio operations
to perform based on the flags (either FOLL_GET or FOLL_PIN). We can
rewrite this function to be more purpose-oriented.

After calling try_get_folio(), if FOLL_GET is set we can return the
result immediately. If neither FOLL_GET nor FOLL_PIN is set, it's a
caller bug, so we warn and return NULL. Otherwise FOLL_PIN is set, so
we proceed to pin the folio and return it.

This change assists with folio conversions, and makes the function more
readable.
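
In outline, the reorganized function reads as follows (a condensed
sketch of the diff below, with the pinning arithmetic elided):

struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
{
	struct folio *folio;

	if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
		return NULL;

	folio = try_get_folio(page, refs);

	if (flags & FOLL_GET)		/* plain get: nothing more to do */
		return folio;

	if (unlikely(!(flags & FOLL_PIN))) {
		WARN_ON_ONCE(1);	/* neither FOLL_GET nor FOLL_PIN: caller bug */
		return NULL;
	}

	/* ... FOLL_PIN: check pinnability and take the pin references ... */
	return folio;
}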

Signed-off-by: Vishal Moola (Oracle) <[email protected]>
---
mm/gup.c | 88 +++++++++++++++++++++++++++++---------------------------
1 file changed, 46 insertions(+), 42 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index bbe416236593..adbd81f888f5 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -123,58 +123,62 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
  */
 struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 {
+	struct folio *folio;
 	if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
 		return NULL;
 
+	folio = try_get_folio(page, refs);
+
 	if (flags & FOLL_GET)
-		return try_get_folio(page, refs);
-	else if (flags & FOLL_PIN) {
-		struct folio *folio;
+		return folio;
 
-		/*
-		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
-		 * right zone, so fail and let the caller fall back to the slow
-		 * path.
-		 */
-		if (unlikely((flags & FOLL_LONGTERM) &&
-			     !is_longterm_pinnable_page(page)))
-			return NULL;
+	if (unlikely(!(flags & FOLL_PIN))) {
+		WARN_ON_ONCE(1);
+		return NULL;
+	}
 
-		/*
-		 * CAUTION: Don't use compound_head() on the page before this
-		 * point, the result won't be stable.
-		 */
-		folio = try_get_folio(page, refs);
-		if (!folio)
-			return NULL;
+	/*
+	 * CAUTION: Don't use compound_head() on the page before this
+	 * point, the result won't be stable.
+	 */
+	if (!folio)
+		return NULL;
 
-		/*
-		 * When pinning a large folio, use an exact count to track it.
-		 *
-		 * However, be sure to *also* increment the normal folio
-		 * refcount field at least once, so that the folio really
-		 * is pinned.  That's why the refcount from the earlier
-		 * try_get_folio() is left intact.
-		 */
-		if (folio_test_large(folio))
-			atomic_add(refs, &folio->_pincount);
-		else
-			folio_ref_add(folio,
-					refs * (GUP_PIN_COUNTING_BIAS - 1));
-		/*
-		 * Adjust the pincount before re-checking the PTE for changes.
-		 * This is essentially a smp_mb() and is paired with a memory
-		 * barrier in page_try_share_anon_rmap().
-		 */
-		smp_mb__after_atomic();
+	/*
+	 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
+	 * right zone, so fail and let the caller fall back to the slow
+	 * path.
+	 */
+	if (unlikely((flags & FOLL_LONGTERM) &&
+		     !folio_is_longterm_pinnable(folio))) {
+		if (!put_devmap_managed_page_refs(&folio->page, refs))
+			folio_put_refs(folio, refs);
+		return NULL;
+	}
 
-		node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, refs);
+	/*
+	 * When pinning a large folio, use an exact count to track it.
+	 *
+	 * However, be sure to *also* increment the normal folio
+	 * refcount field at least once, so that the folio really
+	 * is pinned.  That's why the refcount from the earlier
+	 * try_get_folio() is left intact.
+	 */
+	if (folio_test_large(folio))
+		atomic_add(refs, &folio->_pincount);
+	else
+		folio_ref_add(folio,
+				refs * (GUP_PIN_COUNTING_BIAS - 1));
+	/*
+	 * Adjust the pincount before re-checking the PTE for changes.
+	 * This is essentially a smp_mb() and is paired with a memory
+	 * barrier in page_try_share_anon_rmap().
+	 */
+	smp_mb__after_atomic();
 
-		return folio;
-	}
+	node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, refs);
 
-	WARN_ON_ONCE(1);
-	return NULL;
+	return folio;
 }
 
 static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
--
2.40.1


2023-06-13 20:59:37

by Matthew Wilcox

Subject: Re: [PATCH 4/5] mm/gup.c: Reorganize try_get_folio()

On Tue, Jun 13, 2023 at 01:18:26PM -0700, Vishal Moola (Oracle) wrote:
>  struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
>  {
> +	struct folio *folio;

checkpatch will whinge about there not being a blank line here, and in
this case, I think it's correct.

>  	if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
>  		return NULL;
> 
> +	folio = try_get_folio(page, refs);
> +
>  	if (flags & FOLL_GET)
> -		return try_get_folio(page, refs);
> -	else if (flags & FOLL_PIN) {
> -		struct folio *folio;
> +		return folio;
> 
> -		/*
> -		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
> -		 * right zone, so fail and let the caller fall back to the slow
> -		 * path.
> -		 */
> -		if (unlikely((flags & FOLL_LONGTERM) &&
> -			     !is_longterm_pinnable_page(page)))
> -			return NULL;
> +	if (unlikely(!(flags & FOLL_PIN))) {
> +		WARN_ON_ONCE(1);
> +		return NULL;

Don't we need to folio_put_refs() in this case? Or rather, I think the

	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) == 0)) {

test should be first.
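
Something like this at the top of the function, say (an untested
sketch; doing the check before try_get_folio() means there are no
references to drop on the failure path):

	struct folio *folio;

	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) == 0))
		return NULL;

	if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
		return NULL;

	folio = try_get_folio(page, refs);
	if (flags & FOLL_GET)
		return folio;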

> +	/*
> +	 * CAUTION: Don't use compound_head() on the page before this
> +	 * point, the result won't be stable.
> +	 */

I think we can lose the comment at this point?

> +	if (!folio)
> +		return NULL;