Date: Sun, 6 May 2018 18:42:51 -0700
From: Greg Kroah-Hartman
To: NeilBrown
Cc: James Simmons, Oleg Drokin, Andreas Dilger,
    Linux Kernel Mailing List, Lustre Development List
Subject: Re: [PATCH 02/10] staging: lustre: make struct lu_site_bkt_data private
Message-ID: <20180507014251.GB29421@kroah.com>
References: <152514658325.17843.11455067361317157487.stgit@noble>
 <152514675889.17843.8198702967686860985.stgit@noble>
 <87muxgfgx3.fsf@notabene.neil.brown.name>
In-Reply-To: <87muxgfgx3.fsf@notabene.neil.brown.name>
User-Agent: Mutt/1.9.5 (2018-04-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 04, 2018 at 09:39:04AM +1000, NeilBrown wrote:
> On Wed, May 02 2018, James Simmons wrote:
>
> >> This data structure only needs to be public so that
> >> various modules can access a wait queue to wait for object
> >> destruction.
> >> If we provide a function to get the wait queue, rather than the
> >> whole bucket, the structure can be made private.
> >>
> >> Signed-off-by: NeilBrown
> >> ---
> >>  drivers/staging/lustre/lustre/include/lu_object.h  | 36 +-------------
> >>  drivers/staging/lustre/lustre/llite/lcommon_cl.c   |  8 ++-
> >>  drivers/staging/lustre/lustre/lov/lov_object.c     |  8 ++-
> >>  drivers/staging/lustre/lustre/obdclass/lu_object.c | 50 +++++++++++++++++---
> >>  4 files changed, 54 insertions(+), 48 deletions(-)
> >>
> >> diff --git a/drivers/staging/lustre/lustre/include/lu_object.h b/drivers/staging/lustre/lustre/include/lu_object.h
> >> index c3b0ed518819..f29bbca5af65 100644
> >> --- a/drivers/staging/lustre/lustre/include/lu_object.h
> >> +++ b/drivers/staging/lustre/lustre/include/lu_object.h
> >> @@ -549,31 +549,7 @@ struct lu_object_header {
> >>  };
> >>
> >>  struct fld;
> >> -
> >> -struct lu_site_bkt_data {
> >> -	/**
> >> -	 * number of object in this bucket on the lsb_lru list.
> >> -	 */
> >> -	long			lsb_lru_len;
> >> -	/**
> >> -	 * LRU list, updated on each access to object. Protected by
> >> -	 * bucket lock of lu_site::ls_obj_hash.
> >> -	 *
> >> -	 * "Cold" end of LRU is lu_site::ls_lru.next. Accessed object are
> >> -	 * moved to the lu_site::ls_lru.prev (this is due to the non-existence
> >> -	 * of list_for_each_entry_safe_reverse()).
> >> -	 */
> >> -	struct list_head	lsb_lru;
> >> -	/**
> >> -	 * Wait-queue signaled when an object in this site is ultimately
> >> -	 * destroyed (lu_object_free()). It is used by lu_object_find() to
> >> -	 * wait before re-trying when object in the process of destruction is
> >> -	 * found in the hash table.
> >> -	 *
> >> -	 * \see htable_lookup().
> >> -	 */
> >> -	wait_queue_head_t	lsb_marche_funebre;
> >> -};
> >> +struct lu_site_bkt_data;
> >>
> >>  enum {
> >>  	LU_SS_CREATED	= 0,
> >> @@ -642,14 +618,8 @@ struct lu_site {
> >>  	struct percpu_counter	 ls_lru_len_counter;
> >>  };
> >>
> >> -static inline struct lu_site_bkt_data *
> >> -lu_site_bkt_from_fid(struct lu_site *site, struct lu_fid *fid)
> >> -{
> >> -	struct cfs_hash_bd bd;
> >> -
> >> -	cfs_hash_bd_get(site->ls_obj_hash, fid, &bd);
> >> -	return cfs_hash_bd_extra_get(site->ls_obj_hash, &bd);
> >> -}
> >> +wait_queue_head_t *
> >> +lu_site_wq_from_fid(struct lu_site *site, struct lu_fid *fid);
> >>
> >>  static inline struct seq_server_site *lu_site2seq(const struct lu_site *s)
> >>  {
> >> diff --git a/drivers/staging/lustre/lustre/llite/lcommon_cl.c b/drivers/staging/lustre/lustre/llite/lcommon_cl.c
> >> index df5c0c0ae703..d5b42fb1d601 100644
> >> --- a/drivers/staging/lustre/lustre/llite/lcommon_cl.c
> >> +++ b/drivers/staging/lustre/lustre/llite/lcommon_cl.c
> >> @@ -211,12 +211,12 @@ static void cl_object_put_last(struct lu_env *env, struct cl_object *obj)
> >>
> >>  	if (unlikely(atomic_read(&header->loh_ref) != 1)) {
> >>  		struct lu_site *site = obj->co_lu.lo_dev->ld_site;
> >> -		struct lu_site_bkt_data *bkt;
> >> +		wait_queue_head_t *wq;
> >>
> >> -		bkt = lu_site_bkt_from_fid(site, &header->loh_fid);
> >> +		wq = lu_site_wq_from_fid(site, &header->loh_fid);
> >>
> >>  		init_waitqueue_entry(&waiter, current);
> >> -		add_wait_queue(&bkt->lsb_marche_funebre, &waiter);
> >> +		add_wait_queue(wq, &waiter);
> >>
> >>  		while (1) {
> >>  			set_current_state(TASK_UNINTERRUPTIBLE);
> >> @@ -226,7 +226,7 @@ static void cl_object_put_last(struct lu_env *env, struct cl_object *obj)
> >>  		}
> >>
> >>  		set_current_state(TASK_RUNNING);
> >> -		remove_wait_queue(&bkt->lsb_marche_funebre, &waiter);
> >> +		remove_wait_queue(wq, &waiter);
> >>  	}
> >>
> >>  	cl_object_put(env, obj);
> >> diff --git a/drivers/staging/lustre/lustre/lov/lov_object.c b/drivers/staging/lustre/lustre/lov/lov_object.c
> >> index f7c69680cb7d..adc90f310fd7 100644
> >> --- a/drivers/staging/lustre/lustre/lov/lov_object.c
> >> +++ b/drivers/staging/lustre/lustre/lov/lov_object.c
> >> @@ -370,7 +370,7 @@ static void lov_subobject_kill(const struct lu_env *env, struct lov_object *lov,
> >>  	struct cl_object *sub;
> >>  	struct lov_layout_raid0 *r0;
> >>  	struct lu_site *site;
> >> -	struct lu_site_bkt_data *bkt;
> >> +	wait_queue_head_t *wq;
> >>  	wait_queue_entry_t *waiter;
> >>
> >>  	r0 = &lov->u.raid0;
> >> @@ -378,7 +378,7 @@ static void lov_subobject_kill(const struct lu_env *env, struct lov_object *lov,
> >>
> >>  	sub = lovsub2cl(los);
> >>  	site = sub->co_lu.lo_dev->ld_site;
> >> -	bkt = lu_site_bkt_from_fid(site, &sub->co_lu.lo_header->loh_fid);
> >> +	wq = lu_site_wq_from_fid(site, &sub->co_lu.lo_header->loh_fid);
> >>
> >>  	cl_object_kill(env, sub);
> >>  	/* release a reference to the sub-object and ... */
> >> @@ -391,7 +391,7 @@ static void lov_subobject_kill(const struct lu_env *env, struct lov_object *lov,
> >>  	if (r0->lo_sub[idx] == los) {
> >>  		waiter = &lov_env_info(env)->lti_waiter;
> >>  		init_waitqueue_entry(waiter, current);
> >> -		add_wait_queue(&bkt->lsb_marche_funebre, waiter);
> >> +		add_wait_queue(wq, waiter);
> >>  		set_current_state(TASK_UNINTERRUPTIBLE);
> >>  		while (1) {
> >>  			/* this wait-queue is signaled at the end of
> >> @@ -408,7 +408,7 @@ static void lov_subobject_kill(const struct lu_env *env, struct lov_object *lov,
> >>  				break;
> >>  			}
> >>  		}
> >> -		remove_wait_queue(&bkt->lsb_marche_funebre, waiter);
> >> +		remove_wait_queue(wq, waiter);
> >>  	}
> >>  	LASSERT(!r0->lo_sub[idx]);
> >>  }
> >> diff --git a/drivers/staging/lustre/lustre/obdclass/lu_object.c b/drivers/staging/lustre/lustre/obdclass/lu_object.c
> >> index 3de7dc0497c4..2a8a25d6edb5 100644
> >> --- a/drivers/staging/lustre/lustre/obdclass/lu_object.c
> >> +++ b/drivers/staging/lustre/lustre/obdclass/lu_object.c
> >> @@ -56,6 +56,31 @@
> >>  #include
> >>  #include
> >>
> >> +struct lu_site_bkt_data {
> >> +	/**
> >> +	 * number of object in this bucket on the lsb_lru list.
> >> +	 */
> >> +	long			lsb_lru_len;
> >> +	/**
> >> +	 * LRU list, updated on each access to object. Protected by
> >> +	 * bucket lock of lu_site::ls_obj_hash.
> >> +	 *
> >> +	 * "Cold" end of LRU is lu_site::ls_lru.next. Accessed object are
> >> +	 * moved to the lu_site::ls_lru.prev (this is due to the non-existence
> >> +	 * of list_for_each_entry_safe_reverse()).
> >> +	 */
> >> +	struct list_head	lsb_lru;
> >> +	/**
> >> +	 * Wait-queue signaled when an object in this site is ultimately
> >> +	 * destroyed (lu_object_free()). It is used by lu_object_find() to
> >> +	 * wait before re-trying when object in the process of destruction is
> >> +	 * found in the hash table.
> >> +	 *
> >> +	 * \see htable_lookup().
> >> +	 */
> >> +	wait_queue_head_t	lsb_marche_funebre;
> >> +};
> >> +
> >>  enum {
> >>  	LU_CACHE_PERCENT_MAX	 = 50,
> >>  	LU_CACHE_PERCENT_DEFAULT = 20
> >> @@ -88,6 +113,17 @@ MODULE_PARM_DESC(lu_cache_nr, "Maximum number of objects in lu_object cache");
> >>  static void lu_object_free(const struct lu_env *env, struct lu_object *o);
> >>  static __u32 ls_stats_read(struct lprocfs_stats *stats, int idx);
> >>
> >> +wait_queue_head_t *
> >> +lu_site_wq_from_fid(struct lu_site *site, struct lu_fid *fid)
> >> +{
> >> +	struct cfs_hash_bd bd;
> >> +	struct lu_site_bkt_data *bkt;
> >> +
> >> +	cfs_hash_bd_get(site->ls_obj_hash, fid, &bd);
> >> +	bkt = cfs_hash_bd_extra_get(site->ls_obj_hash, &bd);
> >> +	return &bkt->lsb_marche_funebre;
> >> +}
> >> +
> >
> > Nak. You forgot the EXPORT_SYMBOL for this. Once I added the export symbol
> > it seems to work so far in my testing.
>
> Thanks!  I'll try to remember the occasional build-test with modules.

'make allmodconfig' is your friend :)

I'm dropping this series; I'm guessing you will send a v2 for the whole set.

thanks,

greg k-h
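[Editorial note: the change under review is the standard C opaque-struct idiom: move the struct definition out of the shared header, leave only a forward declaration, and export an accessor for the one member callers actually need. A minimal userspace sketch of that idiom follows; `struct bkt_data`, `bkt_wq()`, and the `int` standing in for `wait_queue_head_t` are all illustrative names, not the Lustre API.]

```c
#include <assert.h>

/*
 * Sketch of the opaque-struct idiom from the patch, in plain userspace C.
 * All names are illustrative stand-ins: "int wq" plays the role of
 * wait_queue_head_t, bkt_wq() the role of lu_site_wq_from_fid().
 */

/* What the public header would expose: a forward declaration only,
 * so callers cannot touch the layout directly. */
struct bkt_data;
int *bkt_wq(struct bkt_data *bkt);

/* What the private .c file would contain: the full definition. */
struct bkt_data {
	long lru_len;	/* stand-in for lsb_lru_len */
	int wq;		/* stand-in for lsb_marche_funebre */
};

/* Accessor that hands out only the member callers need.  In the kernel,
 * a non-static function called from other modules additionally needs an
 * EXPORT_SYMBOL() next to its definition -- the omission James caught,
 * which 'make allmodconfig' builds would have flagged. */
int *bkt_wq(struct bkt_data *bkt)
{
	return &bkt->wq;
}
```

Because only the forward declaration is public, later changes to the struct layout rebuild one translation unit instead of every includer, which is the point of the patch.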