From: Jeff Layton <jlayton@kernel.org>
To: ceph-devel@vger.kernel.org, idryomov@gmail.com, dhowells@redhat.com
Cc: willy@infradead.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [PATCH 4/6] ceph: convert readpage to fscache read helper
Date: Tue, 26 Jan 2021 08:41:01 -0500
Message-Id: <20210126134103.240031-5-jlayton@kernel.org>
In-Reply-To: <20210126134103.240031-1-jlayton@kernel.org>
References: <20210126134103.240031-1-jlayton@kernel.org>

Have the ceph Kconfig select NETFS_SUPPORT. Add a new netfs ops
structure and the operations for it. Convert ceph_readpage to use
the new netfs_readpage helper.
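
Roughly, the wiring this adds looks like the following (condensed from
the diff below; the inline-data special case, error paths and the OSD
read machinery are omitted, and the behaviour of the helper itself
comes from the netfs library this sits on top of):

  const struct netfs_read_request_ops ceph_readpage_netfs_ops = {
          .init_rreq              = ceph_init_rreq,
          .is_cache_enabled       = ceph_is_cache_enabled,       /* is an fscache cookie enabled? */
          .begin_cache_operation  = ceph_begin_cache_operation,  /* hand the rreq to fscache */
          .issue_op               = ceph_netfs_issue_op,         /* send an OSD read for one subrequest */
          .clamp_length           = ceph_netfs_clamp_length,     /* trim subrequests at object boundaries */
  };

  static int ceph_readpage(struct file *file, struct page *page)
  {
          /* the inline-data case is omitted here; non-inline reads go
           * through the netfs helper, which also unlocks the page */
          return netfs_readpage(file, page, &ceph_readpage_netfs_ops, NULL);
  }

Each subrequest is completed from finish_netfs_read() via
netfs_subreq_terminated(), so readpage itself no longer has to unlock
the page or wait for the read.
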
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 fs/ceph/Kconfig |   1 +
 fs/ceph/addr.c  | 149 ++++++++++++++++++++++++++++++++++++++++++++----
 fs/ceph/cache.h |  36 ++++++++++++
 3 files changed, 176 insertions(+), 10 deletions(-)

diff --git a/fs/ceph/Kconfig b/fs/ceph/Kconfig
index 471e40156065..94df854147d3 100644
--- a/fs/ceph/Kconfig
+++ b/fs/ceph/Kconfig
@@ -6,6 +6,7 @@ config CEPH_FS
 	select LIBCRC32C
 	select CRYPTO_AES
 	select CRYPTO
+	select NETFS_SUPPORT
 	default n
 	help
 	  Choose Y or M here to include support for mounting the
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index fbfa49db06fd..9ae6c0fddb80 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -12,6 +12,7 @@
 #include <linux/signal.h>
 #include <linux/iversion.h>
 #include <linux/ktime.h>
+#include <linux/netfs.h>
 
 #include "super.h"
 #include "mds_client.h"
@@ -183,6 +184,144 @@ static int ceph_releasepage(struct page *page, gfp_t gfp_flags)
 	return !PagePrivate(page);
 }
 
+static bool ceph_netfs_clamp_length(struct netfs_read_subrequest *subreq)
+{
+	struct inode *inode = subreq->rreq->mapping->host;
+	struct ceph_inode_info *ci = ceph_inode(inode);
+	u64 objno, objoff;
+	u32 xlen;
+
+	/* Truncate the extent at the end of the current object */
+	ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len,
+				      &objno, &objoff, &xlen);
+	subreq->len = xlen;
+	return true;
+}
+
+static void finish_netfs_read(struct ceph_osd_request *req)
+{
+	struct ceph_fs_client *fsc = ceph_inode_to_client(req->r_inode);
+	struct ceph_osd_data *osd_data = osd_req_op_extent_osd_data(req, 0);
+	struct netfs_read_subrequest *subreq = req->r_priv;
+	int num_pages;
+	int err = req->r_result;
+
+	ceph_update_read_latency(&fsc->mdsc->metric, req->r_start_latency,
+				 req->r_end_latency, err);
+
+	dout("%s: result %d subreq->len=%zu i_size=%lld\n", __func__, req->r_result,
+	     subreq->len, i_size_read(req->r_inode));
+
+	/* no object means success but no data */
+	if (err == -ENOENT)
+		err = 0;
+	else if (err == -EBLOCKLISTED)
+		fsc->blocklisted = true;
+
+	if (err >= 0 && err < subreq->len)
+		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+
+	netfs_subreq_terminated(subreq, err);
+
+	num_pages = calc_pages_for(osd_data->alignment, osd_data->length);
+	ceph_put_page_vector(osd_data->pages, num_pages, false);
+	iput(req->r_inode);
+}
+
+static void ceph_netfs_issue_op(struct netfs_read_subrequest *subreq)
+{
+	struct netfs_read_request *rreq = subreq->rreq;
+	struct inode *inode = rreq->mapping->host;
+	struct ceph_inode_info *ci = ceph_inode(inode);
+	struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
+	struct ceph_osd_request *req = NULL;
+	struct ceph_vino vino = ceph_vino(inode);
+	struct iov_iter iter;
+	struct page **pages;
+	size_t page_off;
+	int err = 0;
+	u64 len = subreq->len;
+
+	req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout, vino, subreq->start, &len,
+			0, 1, CEPH_OSD_OP_READ,
+			CEPH_OSD_FLAG_READ | fsc->client->osdc.client->options->read_from_replica,
+			NULL, ci->i_truncate_seq, ci->i_truncate_size, false);
+	if (IS_ERR(req)) {
+		err = PTR_ERR(req);
+		goto out;
+	}
+
+	dout("%s: pos=%llu orig_len=%zu len=%llu\n", __func__, subreq->start, subreq->len, len);
+	iov_iter_xarray(&iter, READ, &rreq->mapping->i_pages, subreq->start, len);
+	err = iov_iter_get_pages_alloc(&iter, &pages, len, &page_off);
+	if (err < 0) {
+		dout("%s: iov_iter_get_pages_alloc returned %d\n", __func__, err);
+		goto out;
+	}
+
+	/* should always give us a page-aligned read */
+	WARN_ON_ONCE(page_off);
+	len = err;
+
+	osd_req_op_extent_osd_data_pages(req, 0, pages, len, 0, false, false);
+	req->r_callback = finish_netfs_read;
+	req->r_priv = subreq;
+	req->r_inode = inode;
+	ihold(inode);
+
+	err = ceph_osdc_start_request(req->r_osdc, req, false);
+	if (err)
+		iput(inode);
+out:
+	if (req)
+		ceph_osdc_put_request(req);
+	if (err)
+		netfs_subreq_terminated(subreq, err);
+	dout("%s: result %d\n", __func__, err);
+}
+
+static void ceph_init_rreq(struct netfs_read_request *rreq, struct file *file)
+{
+}
+
+const struct netfs_read_request_ops ceph_readpage_netfs_ops = {
+	.init_rreq		= ceph_init_rreq,
+	.is_cache_enabled	= ceph_is_cache_enabled,
+	.begin_cache_operation	= ceph_begin_cache_operation,
+	.issue_op		= ceph_netfs_issue_op,
+	.clamp_length		= ceph_netfs_clamp_length,
+};
+
+/* read a single page, without unlocking it. */
+static int ceph_readpage(struct file *file, struct page *page)
+{
+	struct inode *inode = file_inode(file);
+	struct ceph_inode_info *ci = ceph_inode(inode);
+	struct ceph_vino vino = ceph_vino(inode);
+	u64 off = page_offset(page);
+	u64 len = PAGE_SIZE;
+
+	if (ci->i_inline_version != CEPH_INLINE_NONE) {
+		/*
+		 * Uptodate inline data should have been added
+		 * into page cache while getting Fcr caps.
+		 */
+		if (off == 0) {
+			unlock_page(page);
+			return -EINVAL;
+		}
+		zero_user_segment(page, 0, PAGE_SIZE);
+		SetPageUptodate(page);
+		unlock_page(page);
+		return 0;
+	}
+
+	dout("readpage ino %llx.%llx file %p off %llu len %llu page %p index %lu\n",
+	     vino.ino, vino.snap, file, off, len, page, page->index);
+
+	return netfs_readpage(file, page, &ceph_readpage_netfs_ops, NULL);
+}
+
 /* read a single page, without unlocking it. */
 static int ceph_do_readpage(struct file *filp, struct page *page)
 {
@@ -253,16 +392,6 @@ static int ceph_do_readpage(struct file *filp, struct page *page)
 	return err < 0 ? err : 0;
 }
 
-static int ceph_readpage(struct file *filp, struct page *page)
-{
-	int r = ceph_do_readpage(filp, page);
-	if (r != -EINPROGRESS)
-		unlock_page(page);
-	else
-		r = 0;
-	return r;
-}
-
 /*
  * Finish an async read(ahead) op.
 */
diff --git a/fs/ceph/cache.h b/fs/ceph/cache.h
index 10c21317b62f..1409d6149281 100644
--- a/fs/ceph/cache.h
+++ b/fs/ceph/cache.h
@@ -9,6 +9,8 @@
 #ifndef _CEPH_CACHE_H
 #define _CEPH_CACHE_H
 
+#include <linux/netfs.h>
+
 #ifdef CONFIG_CEPH_FSCACHE
 
 extern struct fscache_netfs ceph_cache_netfs;
@@ -35,11 +37,31 @@ static inline void ceph_fscache_inode_init(struct ceph_inode_info *ci)
 	ci->fscache = NULL;
 }
 
+static inline struct fscache_cookie *ceph_fscache_cookie(struct ceph_inode_info *ci)
+{
+	return ci->fscache;
+}
+
 static inline void ceph_fscache_invalidate(struct inode *inode)
 {
 	fscache_invalidate(ceph_inode(inode)->fscache);
 }
 
+static inline bool ceph_is_cache_enabled(struct inode *inode)
+{
+	struct fscache_cookie *cookie = ceph_fscache_cookie(ceph_inode(inode));
+
+	if (!cookie)
+		return false;
+	return fscache_cookie_enabled(cookie);
+}
+
+static inline int ceph_begin_cache_operation(struct netfs_read_request *rreq)
+{
+	struct fscache_cookie *cookie = ceph_fscache_cookie(ceph_inode(rreq->inode));
+
+	return fscache_begin_read_operation(rreq, cookie);
+}
 #else
 
 static inline int ceph_fscache_register(void)
@@ -65,6 +87,11 @@ static inline void ceph_fscache_inode_init(struct ceph_inode_info *ci)
 {
 }
 
+static inline struct fscache_cookie *ceph_fscache_cookie(struct ceph_inode_info *ci)
+{
+	return NULL;
+}
+
 static inline void ceph_fscache_register_inode_cookie(struct inode *inode)
 {
 }
@@ -82,6 +109,15 @@ static inline void ceph_fscache_invalidate(struct inode *inode)
 {
 }
 
+static inline bool ceph_is_cache_enabled(struct inode *inode)
+{
+	return false;
+}
+
+static inline int ceph_begin_cache_operation(struct netfs_read_request *rreq)
+{
+	return -ENOBUFS;
+}
 #endif
 
 #endif /* _CEPH_CACHE_H */
-- 
2.29.2