From: NeilBrown
To: Oleg Drokin, Greg Kroah-Hartman, James Simmons, Andreas Dilger
Cc: Linux Kernel Mailing List, Lustre Development List
Date: Fri, 02 Mar 2018 10:31:25 +1100
Subject: [PATCH 10/17] staging: lustre: ptlrpc: use delayed_work in sec_gc
Message-ID: <151994708538.7628.11965951418635189732.stgit@noble>
In-Reply-To: <151994679573.7628.1024109499321778846.stgit@noble>
References: <151994679573.7628.1024109499321778846.stgit@noble>
User-Agent: StGit/0.17.1-dirty

The garbage collection for security contexts currently has a dedicated
kthread which wakes up every
30 minutes to discard old garbage.  Replace this with a simple
delayed_work item on the system work queue.

Signed-off-by: NeilBrown
---
 drivers/staging/lustre/lustre/ptlrpc/sec_gc.c | 90 ++++++++-----------------
 1 file changed, 28 insertions(+), 62 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_gc.c b/drivers/staging/lustre/lustre/ptlrpc/sec_gc.c
index 48f1a72afd77..2c8bad7b7877 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_gc.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_gc.c
@@ -55,7 +55,6 @@ static spinlock_t sec_gc_list_lock;
 static LIST_HEAD(sec_gc_ctx_list);
 static spinlock_t sec_gc_ctx_list_lock;
 
-static struct ptlrpc_thread sec_gc_thread;
 static atomic_t sec_gc_wait_del = ATOMIC_INIT(0);
 
 void sptlrpc_gc_add_sec(struct ptlrpc_sec *sec)
@@ -139,86 +138,53 @@ static void sec_do_gc(struct ptlrpc_sec *sec)
 	sec->ps_gc_next = ktime_get_real_seconds() + sec->ps_gc_interval;
 }
 
-static int sec_gc_main(void *arg)
-{
-	struct ptlrpc_thread *thread = arg;
-
-	unshare_fs_struct();
+static void sec_gc_main(struct work_struct *ws);
+static DECLARE_DELAYED_WORK(sec_gc_work, sec_gc_main);
 
-	/* Record that the thread is running */
-	thread_set_flags(thread, SVC_RUNNING);
-	wake_up(&thread->t_ctl_waitq);
-
-	while (1) {
-		struct ptlrpc_sec *sec;
+static void sec_gc_main(struct work_struct *ws)
+{
+	struct ptlrpc_sec *sec;
 
-		sec_process_ctx_list();
+	sec_process_ctx_list();
 again:
-		/* go through sec list do gc.
-		 * FIXME here we iterate through the whole list each time which
-		 * is not optimal. we perhaps want to use balanced binary tree
-		 * to trace each sec as order of expiry time.
-		 * another issue here is we wakeup as fixed interval instead of
-		 * according to each sec's expiry time
+	/* go through sec list do gc.
+	 * FIXME here we iterate through the whole list each time which
+	 * is not optimal. we perhaps want to use balanced binary tree
+	 * to trace each sec as order of expiry time.
+	 * another issue here is we wakeup as fixed interval instead of
+	 * according to each sec's expiry time
+	 */
+	mutex_lock(&sec_gc_mutex);
+	list_for_each_entry(sec, &sec_gc_list, ps_gc_list) {
+		/* if someone is waiting to be deleted, let it
+		 * proceed as soon as possible.
 		 */
-		mutex_lock(&sec_gc_mutex);
-		list_for_each_entry(sec, &sec_gc_list, ps_gc_list) {
-			/* if someone is waiting to be deleted, let it
-			 * proceed as soon as possible.
-			 */
-			if (atomic_read(&sec_gc_wait_del)) {
-				CDEBUG(D_SEC, "deletion pending, start over\n");
-				mutex_unlock(&sec_gc_mutex);
-				goto again;
-			}
-
-			sec_do_gc(sec);
+		if (atomic_read(&sec_gc_wait_del)) {
+			CDEBUG(D_SEC, "deletion pending, start over\n");
+			mutex_unlock(&sec_gc_mutex);
+			goto again;
 		}
-		mutex_unlock(&sec_gc_mutex);
-
-		/* check ctx list again before sleep */
-		sec_process_ctx_list();
-		wait_event_idle_timeout(thread->t_ctl_waitq,
-					thread_is_stopping(thread),
-					SEC_GC_INTERVAL * HZ);
-		if (thread_test_and_clear_flags(thread, SVC_STOPPING))
-			break;
+		sec_do_gc(sec);
 	}
+	mutex_unlock(&sec_gc_mutex);
 
-	thread_set_flags(thread, SVC_STOPPED);
-	wake_up(&thread->t_ctl_waitq);
-	return 0;
+	/* check ctx list again before sleep */
+	sec_process_ctx_list();
+	schedule_delayed_work(&sec_gc_work, SEC_GC_INTERVAL * HZ);
 }
 
 int sptlrpc_gc_init(void)
 {
-	struct task_struct *task;
-
 	mutex_init(&sec_gc_mutex);
 	spin_lock_init(&sec_gc_list_lock);
 	spin_lock_init(&sec_gc_ctx_list_lock);
 
-	/* initialize thread control */
-	memset(&sec_gc_thread, 0, sizeof(sec_gc_thread));
-	init_waitqueue_head(&sec_gc_thread.t_ctl_waitq);
-
-	task = kthread_run(sec_gc_main, &sec_gc_thread, "sptlrpc_gc");
-	if (IS_ERR(task)) {
-		CERROR("can't start gc thread: %ld\n", PTR_ERR(task));
-		return PTR_ERR(task);
-	}
-
-	wait_event_idle(sec_gc_thread.t_ctl_waitq,
-			thread_is_running(&sec_gc_thread));
+	schedule_delayed_work(&sec_gc_work, 0);
 	return 0;
 }
 
 void sptlrpc_gc_fini(void)
 {
-	thread_set_flags(&sec_gc_thread, SVC_STOPPING);
-	wake_up(&sec_gc_thread.t_ctl_waitq);
-
-	wait_event_idle(sec_gc_thread.t_ctl_waitq,
-			thread_is_stopped(&sec_gc_thread));
+	cancel_delayed_work_sync(&sec_gc_work);
 }
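
For reference, here is a minimal, self-contained sketch of the
self-rearming delayed_work pattern the patch adopts, written as a
trivial standalone module.  The demo_gc_* names and the interval value
are made up for this note and are not part of the Lustre code:

	/* demo_gc.c - illustrative module, not part of this patch */
	#include <linux/module.h>
	#include <linux/workqueue.h>
	#include <linux/jiffies.h>

	#define DEMO_GC_INTERVAL 30	/* seconds between passes (made-up) */

	static void demo_gc_main(struct work_struct *ws);
	/* Statically initialized: no init-time setup, no dedicated thread. */
	static DECLARE_DELAYED_WORK(demo_gc_work, demo_gc_main);

	static void demo_gc_main(struct work_struct *ws)
	{
		pr_info("demo_gc: one collection pass\n");

		/* Re-arm ourselves: this replaces the kthread's
		 * sleep/wake loop.
		 */
		schedule_delayed_work(&demo_gc_work, DEMO_GC_INTERVAL * HZ);
	}

	static int __init demo_gc_init(void)
	{
		/* A delay of 0 runs the first pass as soon as a system
		 * workqueue worker is free.
		 */
		schedule_delayed_work(&demo_gc_work, 0);
		return 0;
	}

	static void __exit demo_gc_exit(void)
	{
		/* Cancels any pending timer and waits for a running
		 * instance; safe even though the work re-queues itself.
		 */
		cancel_delayed_work_sync(&demo_gc_work);
	}

	module_init(demo_gc_init);
	module_exit(demo_gc_exit);
	MODULE_LICENSE("GPL");

The teardown leans on cancel_delayed_work_sync() being documented as
safe for self-requeueing work: it cancels a pending timer, waits for a
running instance to finish, and blocks that instance from re-arming,
which is what lets sptlrpc_gc_fini() shrink to a single call.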