From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
To: vitaly.wool@konsulko.com, minchan@kernel.org, senozhatsky@chromium.org,
	yosryahmed@google.com, linux-mm@kvack.org
Cc: ddstreet@ieee.org, sjenning@redhat.com, nphamcs@gmail.com,
	hannes@cmpxchg.org, akpm@linux-foundation.org,
	linux-kernel@vger.kernel.org, kernel-team@meta.com,
	Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Subject: [RFC PATCH v2 3/7] mm: zswap: remove page reclaim logic from z3fold
Date: Tue, 6 Jun 2023 16:56:07 +0200
Message-Id: <20230606145611.704392-4-cerasuolodomenico@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230606145611.704392-1-cerasuolodomenico@gmail.com>
References: <20230606145611.704392-1-cerasuolodomenico@gmail.com>

With the recent enhancement to zswap enabling direct page writeback, the
shrink code in z3fold has become obsolete. As a result, this commit
removes the page reclaim logic from z3fold entirely.

Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
---
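Reviewer note, not part of the patch proper: with .shrink left NULL, a
legacy reclaim request arriving through the zpool API is expected to fail
fast, and zswap instead uses the direct writeback path mentioned in the
commit message, which does not go through this callback at all. Below is a
minimal, userspace-compilable C sketch of that dispatch; the names
(zpool_driver_sketch, zpool_shrink_sketch) are hypothetical, and the NULL
check is only assumed to mirror what the zpool core does, it is not quoted
from mm/zpool.c.

  #include <errno.h>

  /* Hypothetical stand-in for the driver ops table; only the shrink
   * callback matters for this illustration.
   */
  struct zpool_driver_sketch {
          /* NULL means the backend cannot reclaim pages on request */
          int (*shrink)(void *pool, unsigned int pages,
                        unsigned int *reclaimed);
  };

  /* Assumed shape of the zpool-level dispatch: refuse immediately when
   * the driver advertises no shrink callback, as z3fold does after this
   * patch, leaving reclaim to the caller.
   */
  static int zpool_shrink_sketch(const struct zpool_driver_sketch *drv,
                                 void *pool, unsigned int pages,
                                 unsigned int *reclaimed)
  {
          if (!drv->shrink)
                  return -EINVAL; /* caller must reclaim by other means */
          return drv->shrink(pool, pages, reclaimed);
  }
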
 mm/z3fold.c | 246 +---------------------------------------------------
 1 file changed, 3 insertions(+), 243 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 0cef845d397b..4af8741553ac 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -125,13 +125,11 @@ struct z3fold_header {
 /**
  * struct z3fold_pool - stores metadata for each z3fold pool
  * @name: pool name
- * @lock: protects pool unbuddied/lru lists
+ * @lock: protects pool unbuddied lists
  * @stale_lock: protects pool stale page list
  * @unbuddied: per-cpu array of lists tracking z3fold pages that contain 2-
  *             buddies; the list each z3fold page is added to depends on
  *             the size of its free region.
- * @lru: list tracking the z3fold pages in LRU order by most recently
- *             added buddy.
  * @stale: list of pages marked for freeing
  * @pages_nr: number of z3fold pages in the pool.
  * @c_handle: cache for z3fold_buddy_slots allocation
@@ -149,12 +147,9 @@ struct z3fold_pool {
         spinlock_t lock;
         spinlock_t stale_lock;
         struct list_head *unbuddied;
-        struct list_head lru;
         struct list_head stale;
         atomic64_t pages_nr;
         struct kmem_cache *c_handle;
-        struct zpool *zpool;
-        const struct zpool_ops *zpool_ops;
         struct workqueue_struct *compact_wq;
         struct workqueue_struct *release_wq;
         struct work_struct work;
@@ -329,7 +324,6 @@ static struct z3fold_header *init_z3fold_page(struct page *page, bool headless,
         struct z3fold_header *zhdr = page_address(page);
         struct z3fold_buddy_slots *slots;
 
-        INIT_LIST_HEAD(&page->lru);
         clear_bit(PAGE_HEADLESS, &page->private);
         clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);
         clear_bit(NEEDS_COMPACTING, &page->private);
@@ -451,8 +445,6 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
         set_bit(PAGE_STALE, &page->private);
         clear_bit(NEEDS_COMPACTING, &page->private);
         spin_lock(&pool->lock);
-        if (!list_empty(&page->lru))
-                list_del_init(&page->lru);
         spin_unlock(&pool->lock);
 
         if (locked)
@@ -930,7 +922,6 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp)
                 for_each_unbuddied_list(i, 0)
                         INIT_LIST_HEAD(&unbuddied[i]);
         }
-        INIT_LIST_HEAD(&pool->lru);
         INIT_LIST_HEAD(&pool->stale);
         atomic64_set(&pool->pages_nr, 0);
         pool->name = name;
@@ -1073,12 +1064,6 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
 
 headless:
         spin_lock(&pool->lock);
-        /* Add/move z3fold page to beginning of LRU */
-        if (!list_empty(&page->lru))
-                list_del(&page->lru);
-
-        list_add(&page->lru, &pool->lru);
-
         *handle = encode_handle(zhdr, bud);
         spin_unlock(&pool->lock);
         if (bud != HEADLESS)
@@ -1115,9 +1100,6 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
          * immediately so we don't care about its value any more.
          */
         if (!page_claimed) {
-                spin_lock(&pool->lock);
-                list_del(&page->lru);
-                spin_unlock(&pool->lock);
                 put_z3fold_header(zhdr);
                 free_z3fold_page(page, true);
                 atomic64_dec(&pool->pages_nr);
@@ -1172,194 +1154,6 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
         put_z3fold_header(zhdr);
 }
 
-/**
- * z3fold_reclaim_page() - evicts allocations from a pool page and frees it
- * @pool: pool from which a page will attempt to be evicted
- * @retries: number of pages on the LRU list for which eviction will
- *             be attempted before failing
- *
- * z3fold reclaim is different from normal system reclaim in that it is done
- * from the bottom, up. This is because only the bottom layer, z3fold, has
- * information on how the allocations are organized within each z3fold page.
- * This has the potential to create interesting locking situations between
- * z3fold and the user, however.
- *
- * To avoid these, this is how z3fold_reclaim_page() should be called:
- *
- * The user detects a page should be reclaimed and calls z3fold_reclaim_page().
- * z3fold_reclaim_page() will remove a z3fold page from the pool LRU list and
- * call the user-defined eviction handler with the pool and handle as
- * arguments.
- *
- * If the handle can not be evicted, the eviction handler should return
- * non-zero. z3fold_reclaim_page() will add the z3fold page back to the
- * appropriate list and try the next z3fold page on the LRU up to
- * a user defined number of retries.
- *
- * If the handle is successfully evicted, the eviction handler should
- * return 0 _and_ should have called z3fold_free() on the handle. z3fold_free()
- * contains logic to delay freeing the page if the page is under reclaim,
- * as indicated by the setting of the PG_reclaim flag on the underlying page.
- *
- * If all buddies in the z3fold page are successfully evicted, then the
- * z3fold page can be freed.
- *
- * Returns: 0 if page is successfully freed, otherwise -EINVAL if there are
- * no pages to evict or an eviction handler is not registered, -EAGAIN if
- * the retry limit was hit.
- */
-static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
-{
-        int i, ret = -1;
-        struct z3fold_header *zhdr = NULL;
-        struct page *page = NULL;
-        struct list_head *pos;
-        unsigned long first_handle = 0, middle_handle = 0, last_handle = 0;
-        struct z3fold_buddy_slots slots __attribute__((aligned(SLOTS_ALIGN)));
-
-        rwlock_init(&slots.lock);
-        slots.pool = (unsigned long)pool | (1 << HANDLES_NOFREE);
-
-        spin_lock(&pool->lock);
-        for (i = 0; i < retries; i++) {
-                if (list_empty(&pool->lru)) {
-                        spin_unlock(&pool->lock);
-                        return -EINVAL;
-                }
-                list_for_each_prev(pos, &pool->lru) {
-                        page = list_entry(pos, struct page, lru);
-
-                        zhdr = page_address(page);
-                        if (test_bit(PAGE_HEADLESS, &page->private)) {
-                                /*
-                                 * For non-headless pages, we wait to do this
-                                 * until we have the page lock to avoid racing
-                                 * with __z3fold_alloc(). Headless pages don't
-                                 * have a lock (and __z3fold_alloc() will never
-                                 * see them), but we still need to test and set
-                                 * PAGE_CLAIMED to avoid racing with
-                                 * z3fold_free(), so just do it now before
-                                 * leaving the loop.
-                                 */
-                                if (test_and_set_bit(PAGE_CLAIMED, &page->private))
-                                        continue;
-
-                                break;
-                        }
-
-                        if (!z3fold_page_trylock(zhdr)) {
-                                zhdr = NULL;
-                                continue; /* can't evict at this point */
-                        }
-
-                        /* test_and_set_bit is of course atomic, but we still
-                         * need to do it under page lock, otherwise checking
-                         * that bit in __z3fold_alloc wouldn't make sense
-                         */
-                        if (zhdr->foreign_handles ||
-                            test_and_set_bit(PAGE_CLAIMED, &page->private)) {
-                                z3fold_page_unlock(zhdr);
-                                zhdr = NULL;
-                                continue; /* can't evict such page */
-                        }
-                        list_del_init(&zhdr->buddy);
-                        zhdr->cpu = -1;
-                        /* See comment in __z3fold_alloc. */
-                        kref_get(&zhdr->refcount);
-                        break;
-                }
-
-                if (!zhdr)
-                        break;
-
-                list_del_init(&page->lru);
-                spin_unlock(&pool->lock);
-
-                if (!test_bit(PAGE_HEADLESS, &page->private)) {
-                        /*
-                         * We need encode the handles before unlocking, and
-                         * use our local slots structure because z3fold_free
-                         * can zero out zhdr->slots and we can't do much
-                         * about that
-                         */
-                        first_handle = 0;
-                        last_handle = 0;
-                        middle_handle = 0;
-                        memset(slots.slot, 0, sizeof(slots.slot));
-                        if (zhdr->first_chunks)
-                                first_handle = __encode_handle(zhdr, &slots,
-                                                                FIRST);
-                        if (zhdr->middle_chunks)
-                                middle_handle = __encode_handle(zhdr, &slots,
-                                                                MIDDLE);
-                        if (zhdr->last_chunks)
-                                last_handle = __encode_handle(zhdr, &slots,
-                                                                LAST);
-                        /*
-                         * it's safe to unlock here because we hold a
-                         * reference to this page
-                         */
-                        z3fold_page_unlock(zhdr);
-                } else {
-                        first_handle = encode_handle(zhdr, HEADLESS);
-                        last_handle = middle_handle = 0;
-                }
-                /* Issue the eviction callback(s) */
-                if (middle_handle) {
-                        ret = pool->zpool_ops->evict(pool->zpool, middle_handle);
-                        if (ret)
-                                goto next;
-                }
-                if (first_handle) {
-                        ret = pool->zpool_ops->evict(pool->zpool, first_handle);
-                        if (ret)
-                                goto next;
-                }
-                if (last_handle) {
-                        ret = pool->zpool_ops->evict(pool->zpool, last_handle);
-                        if (ret)
-                                goto next;
-                }
-next:
-                if (test_bit(PAGE_HEADLESS, &page->private)) {
-                        if (ret == 0) {
-                                free_z3fold_page(page, true);
-                                atomic64_dec(&pool->pages_nr);
-                                return 0;
-                        }
-                        spin_lock(&pool->lock);
-                        list_add(&page->lru, &pool->lru);
-                        spin_unlock(&pool->lock);
-                        clear_bit(PAGE_CLAIMED, &page->private);
-                } else {
-                        struct z3fold_buddy_slots *slots = zhdr->slots;
-                        z3fold_page_lock(zhdr);
-                        if (kref_put(&zhdr->refcount,
-                                        release_z3fold_page_locked)) {
-                                kmem_cache_free(pool->c_handle, slots);
-                                return 0;
-                        }
-                        /*
-                         * if we are here, the page is still not completely
-                         * free. Take the global pool lock then to be able
-                         * to add it back to the lru list
-                         */
-                        spin_lock(&pool->lock);
-                        list_add(&page->lru, &pool->lru);
-                        spin_unlock(&pool->lock);
-                        if (list_empty(&zhdr->buddy))
-                                add_to_unbuddied(pool, zhdr);
-                        clear_bit(PAGE_CLAIMED, &page->private);
-                        z3fold_page_unlock(zhdr);
-                }
-
-                /* We started off locked to we need to lock the pool back */
-                spin_lock(&pool->lock);
-        }
-        spin_unlock(&pool->lock);
-        return -EAGAIN;
-}
-
 /**
  * z3fold_map() - maps the allocation associated with the given handle
  * @pool: pool in which the allocation resides
@@ -1470,8 +1264,6 @@ static bool z3fold_page_isolate(struct page *page, isolate_mode_t mode)
         spin_lock(&pool->lock);
         if (!list_empty(&zhdr->buddy))
                 list_del_init(&zhdr->buddy);
-        if (!list_empty(&page->lru))
-                list_del_init(&page->lru);
         spin_unlock(&pool->lock);
 
         kref_get(&zhdr->refcount);
@@ -1531,9 +1323,6 @@ static int z3fold_page_migrate(struct page *newpage, struct page *page,
         encode_handle(new_zhdr, MIDDLE);
         set_bit(NEEDS_COMPACTING, &newpage->private);
         new_zhdr->cpu = smp_processor_id();
-        spin_lock(&pool->lock);
-        list_add(&newpage->lru, &pool->lru);
-        spin_unlock(&pool->lock);
         __SetPageMovable(newpage, &z3fold_mops);
         z3fold_page_unlock(new_zhdr);
 
@@ -1559,9 +1348,6 @@ static void z3fold_page_putback(struct page *page)
         INIT_LIST_HEAD(&page->lru);
         if (kref_put(&zhdr->refcount, release_z3fold_page_locked))
                 return;
-        spin_lock(&pool->lock);
-        list_add(&page->lru, &pool->lru);
-        spin_unlock(&pool->lock);
         if (list_empty(&zhdr->buddy))
                 add_to_unbuddied(pool, zhdr);
         clear_bit(PAGE_CLAIMED, &page->private);
@@ -1582,14 +1368,7 @@ static void *z3fold_zpool_create(const char *name, gfp_t gfp,
                         const struct zpool_ops *zpool_ops,
                         struct zpool *zpool)
 {
-        struct z3fold_pool *pool;
-
-        pool = z3fold_create_pool(name, gfp);
-        if (pool) {
-                pool->zpool = zpool;
-                pool->zpool_ops = zpool_ops;
-        }
-        return pool;
+        return z3fold_create_pool(name, gfp);
 }
 
 static void z3fold_zpool_destroy(void *pool)
@@ -1607,25 +1386,6 @@ static void z3fold_zpool_free(void *pool, unsigned long handle)
         z3fold_free(pool, handle);
 }
 
-static int z3fold_zpool_shrink(void *pool, unsigned int pages,
-                        unsigned int *reclaimed)
-{
-        unsigned int total = 0;
-        int ret = -EINVAL;
-
-        while (total < pages) {
-                ret = z3fold_reclaim_page(pool, 8);
-                if (ret < 0)
-                        break;
-                total++;
-        }
-
-        if (reclaimed)
-                *reclaimed = total;
-
-        return ret;
-}
-
 static void *z3fold_zpool_map(void *pool, unsigned long handle,
                         enum zpool_mapmode mm)
 {
@@ -1649,7 +1409,7 @@ static struct zpool_driver z3fold_zpool_driver = {
         .destroy =      z3fold_zpool_destroy,
         .malloc =       z3fold_zpool_malloc,
         .free =         z3fold_zpool_free,
-        .shrink =       z3fold_zpool_shrink,
+        .shrink =       NULL,
         .map =          z3fold_zpool_map,
         .unmap =        z3fold_zpool_unmap,
         .total_size =   z3fold_zpool_total_size,
-- 
2.34.1