Subject: [PATCH v2 11/67] fscache: Implement cache-level access helpers
From: David Howells
To: linux-cachefs@redhat.com
Cc: dhowells@redhat.com, Trond Myklebust, Anna Schumaker, Steve French,
    Dominique Martinet, Jeff Layton, Matthew Wilcox, Alexander Viro,
    Omar Sandoval, JeffleXu, Linus Torvalds, linux-afs@lists.infradead.org,
    linux-nfs@vger.kernel.org, linux-cifs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs-developer@lists.sourceforge.net,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 09 Dec 2021 16:55:33 +0000
Message-ID: <163906893368.143852.14164004598465617981.stgit@warthog.procyon.org.uk>
In-Reply-To: <163906878733.143852.5604115678965006622.stgit@warthog.procyon.org.uk>
References: <163906878733.143852.5604115678965006622.stgit@warthog.procyon.org.uk>
User-Agent: StGit/0.23
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Add a pair of functions to pin/unpin a cache that we want to do a
high-level access to (such as creating or removing a volume):

	bool fscache_begin_cache_access(struct fscache_cache *cache, enum fscache_access_trace why);
	void fscache_end_cache_access(struct fscache_cache *cache, enum fscache_access_trace why);

The way the access gate works/will work is:

 (1) If the cache tests as not live (state is not FSCACHE_CACHE_IS_ACTIVE),
     then we return false to indicate access was not permitted.

 (2) If the cache tests as live, then we increment the n_accesses count and
     then recheck the liveness, ending the access if it ceased to be live.

 (3) When we end the access, we decrement n_accesses and wake up any
     waiters if it reaches 0.

 (4) Whilst the cache is caching, n_accesses is kept artificially
     incremented to prevent wakeups from happening.

 (5) When the cache is taken offline, the state is changed to prevent new
     accesses, n_accesses is decremented and we wait for n_accesses to
     become 0.

Note that some of this is implemented in a later patch; an illustrative
sketch of both sides of the gate is appended after the diff.

Signed-off-by: David Howells
cc: linux-cachefs@redhat.com
Link: https://lore.kernel.org/r/163819593239.215744.7537428720603638088.stgit@warthog.procyon.org.uk/ # v1
---

 fs/fscache/cache.c             |   62 ++++++++++++++++++++++++++++++++++++++++
 fs/fscache/internal.h          |    2 +
 fs/fscache/main.c              |    2 +
 include/trace/events/fscache.h |   41 ++++++++++++++++++++++++++
 4 files changed, 107 insertions(+)

diff --git a/fs/fscache/cache.c b/fs/fscache/cache.c
index 8db77bb9f8e2..e867cff53a70 100644
--- a/fs/fscache/cache.c
+++ b/fs/fscache/cache.c
@@ -216,6 +216,68 @@ void fscache_relinquish_cache(struct fscache_cache *cache)
 }
 EXPORT_SYMBOL(fscache_relinquish_cache);
 
+/**
+ * fscache_begin_cache_access - Pin a cache so it can be accessed
+ * @cache: The cache-level cookie
+ * @why: An indication of the circumstances of the access for tracing
+ *
+ * Attempt to pin the cache to prevent it from going away whilst we're
+ * accessing it and returns true if successful.  This works as follows:
+ *
+ * (1) If the cache tests as not live (state is not FSCACHE_CACHE_IS_ACTIVE),
+ *     then we return false to indicate access was not permitted.
+ *
+ * (2) If the cache tests as live, then we increment the n_accesses count and
+ *     then recheck the liveness, ending the access if it ceased to be live.
+ *
+ * (3) When we end the access, we decrement n_accesses and wake up any
+ *     waiters if it reaches 0.
+ *
+ * (4) Whilst the cache is caching, n_accesses is kept artificially
+ *     incremented to prevent wakeups from happening.
+ *
+ * (5) When the cache is taken offline, the state is changed to prevent new
+ *     accesses, n_accesses is decremented and we wait for n_accesses to
+ *     become 0.
+ */
+bool fscache_begin_cache_access(struct fscache_cache *cache, enum fscache_access_trace why)
+{
+	int n_accesses;
+
+	if (!fscache_cache_is_live(cache))
+		return false;
+
+	n_accesses = atomic_inc_return(&cache->n_accesses);
+	smp_mb__after_atomic(); /* Reread live flag after n_accesses */
+	trace_fscache_access_cache(cache->debug_id, refcount_read(&cache->ref),
+				   n_accesses, why);
+	if (!fscache_cache_is_live(cache)) {
+		fscache_end_cache_access(cache, fscache_access_unlive);
+		return false;
+	}
+	return true;
+}
+
+/**
+ * fscache_end_cache_access - Unpin a cache at the end of an access.
+ * @cache: The cache-level cookie
+ * @why: An indication of the circumstances of the access for tracing
+ *
+ * Unpin a cache after we've accessed it.  The @why indicator is merely
+ * provided for tracing purposes.
+ */
+void fscache_end_cache_access(struct fscache_cache *cache, enum fscache_access_trace why)
+{
+	int n_accesses;
+
+	smp_mb__before_atomic();
+	n_accesses = atomic_dec_return(&cache->n_accesses);
+	trace_fscache_access_cache(cache->debug_id, refcount_read(&cache->ref),
+				   n_accesses, why);
+	if (n_accesses == 0)
+		wake_up_var(&cache->n_accesses);
+}
+
 #ifdef CONFIG_PROC_FS
 static const char fscache_cache_states[NR__FSCACHE_CACHE_STATE] = "-PAEW";
diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h
index c9bd4bcd4485..ca035d6be7f9 100644
--- a/fs/fscache/internal.h
+++ b/fs/fscache/internal.h
@@ -23,6 +23,8 @@
 #ifdef CONFIG_PROC_FS
 extern const struct seq_operations fscache_caches_seq_ops;
 #endif
+bool fscache_begin_cache_access(struct fscache_cache *cache, enum fscache_access_trace why);
+void fscache_end_cache_access(struct fscache_cache *cache, enum fscache_access_trace why);
 
 struct fscache_cache *fscache_lookup_cache(const char *name, bool is_cache);
 void fscache_put_cache(struct fscache_cache *cache, enum fscache_cache_trace where);
diff --git a/fs/fscache/main.c b/fs/fscache/main.c
index fad9c1933987..e1f14b29cff4 100644
--- a/fs/fscache/main.c
+++ b/fs/fscache/main.c
@@ -21,6 +21,8 @@ module_param_named(debug, fscache_debug, uint,
 MODULE_PARM_DESC(fscache_debug,
		 "FS-Cache debugging mask");
 
+EXPORT_TRACEPOINT_SYMBOL(fscache_access_cache);
+
 struct workqueue_struct *fscache_wq;
 EXPORT_SYMBOL(fscache_wq);
 
diff --git a/include/trace/events/fscache.h b/include/trace/events/fscache.h
index 9286e1c4b2ac..734966bc49e1 100644
--- a/include/trace/events/fscache.h
+++ b/include/trace/events/fscache.h
@@ -62,6 +62,12 @@ enum fscache_cookie_trace {
	fscache_cookie_see_work,
 };
 
+enum fscache_access_trace {
+	fscache_access_cache_pin,
+	fscache_access_cache_unpin,
+	fscache_access_unlive,
+};
+
 #endif
 
 /*
@@ -107,6 +113,11 @@ enum fscache_cookie_trace {
	EM(fscache_cookie_see_withdraw,	"- x-wth")		\
	E_(fscache_cookie_see_work,	"- work ")
 
+#define fscache_access_traces		\
+	EM(fscache_access_cache_pin,	"PIN   cache  ")	\
+	EM(fscache_access_cache_unpin,	"UNPIN cache  ")	\
+	E_(fscache_access_unlive,	"END   unlive ")
+
 /*
  * Export enum symbols via userspace.
  */
@@ -118,6 +129,7 @@ enum fscache_cookie_trace {
 fscache_cache_traces;
 fscache_volume_traces;
 fscache_cookie_traces;
+fscache_access_traces;
 
 /*
  * Now redefine the EM() and E_() macros to map the enums to the strings that
@@ -204,6 +216,35 @@ TRACE_EVENT(fscache_cookie,
		  __entry->ref)
	);
 
+TRACE_EVENT(fscache_access_cache,
+	    TP_PROTO(unsigned int cache_debug_id,
+		     int ref,
+		     int n_accesses,
+		     enum fscache_access_trace why),
+
+	    TP_ARGS(cache_debug_id, ref, n_accesses, why),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int,		cache		)
+		    __field(int,			ref		)
+		    __field(int,			n_accesses	)
+		    __field(enum fscache_access_trace,	why		)
+			     ),
+
+	    TP_fast_assign(
+		    __entry->cache	= cache_debug_id;
+		    __entry->ref	= ref;
+		    __entry->n_accesses	= n_accesses;
+		    __entry->why	= why;
+			   ),
+
+	    TP_printk("C=%08x %s r=%d a=%d",
+		      __entry->cache,
+		      __print_symbolic(__entry->why, fscache_access_traces),
+		      __entry->ref,
+		      __entry->n_accesses)
+	    );
+
 TRACE_EVENT(fscache_acquire,
	     TP_PROTO(struct fscache_cookie *cookie),
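
As promised above, here is a minimal, illustrative sketch (not part of the
patch) of how a caller might bracket a high-level operation with the two
helpers.  example_create_volume() and do_volume_work() are hypothetical
names used only for illustration:

	/* Sketch: pin the cache around a hypothetical volume operation. */
	static int example_create_volume(struct fscache_cache *cache)
	{
		int ret;

		/* Refuse the operation if the cache isn't live; on success
		 * the cache stays pinned until fscache_end_cache_access().
		 */
		if (!fscache_begin_cache_access(cache, fscache_access_cache_pin))
			return -ENOBUFS;

		ret = do_volume_work(cache);	/* hypothetical helper */

		fscache_end_cache_access(cache, fscache_access_cache_unpin);
		return ret;
	}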
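And a rough sketch of the offline side described in step (5), which the
commit message says is implemented in a later patch; fscache_set_cache_state()
and FSCACHE_CACHE_IS_WITHDRAWN are assumed names, not defined here:

	/* Sketch: drain accesses when taking the cache offline. */
	static void example_withdraw_cache(struct fscache_cache *cache)
	{
		/* Mark the cache not live so that fscache_begin_cache_access()
		 * starts returning false (assumed state/helper names).
		 */
		fscache_set_cache_state(cache, FSCACHE_CACHE_IS_WITHDRAWN);

		/* Drop the artificial access held whilst the cache was live
		 * (step (4)), permitting a wakeup on the transition to zero.
		 */
		fscache_end_cache_access(cache, fscache_access_cache_unpin);

		/* Wait for all in-flight accesses to end; each one calls
		 * wake_up_var() when n_accesses hits zero.
		 */
		wait_var_event(&cache->n_accesses,
			       atomic_read(&cache->n_accesses) == 0);
	}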