From: Patrick Farrell <paf@cray.com>
To: NeilBrown, Oleg Drokin, Andreas Dilger, James Simmons, Greg Kroah-Hartman
CC: lkml, lustre
Subject: Re: [lustre-devel] [PATCH 08/16] staging: lustre: open code polling loop instead of using l_wait_event()
Date: Mon, 18 Dec 2017 20:55:10 +0000
In-Reply-To: <151358148008.5099.7316878897181140635.stgit@noble>
The lov_check_and_wait_active() wait is usually (always?) going to be asynchronous from userspace and probably shouldn't contribute to load.  So I guess that means schedule_timeout_idle().
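For illustration only, a minimal sketch (my assumption: the loop shape from the patch below stays the same and only the sleep primitive changes) of what that would look like in lov_check_and_wait_active():

	/*
	 * Hypothetical variant: schedule_timeout_idle() sleeps in TASK_IDLE
	 * (uninterruptible but flagged NOLOAD), so this polling task would
	 * not be counted toward the load average while it waits.
	 */
	while (cnt < obd_timeout && !lov_check_set(lov, ost_idx)) {
		schedule_timeout_idle(HZ);
		cnt++;
	}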
On 12/18/17, 1:18 AM, "lustre-devel on behalf of NeilBrown" wrote:

>Two places that LWI_TIMEOUT_INTERVAL() is used, the outcome is a
>simple polling loop that polls every second for some event (with a
>limit).
>
>So write a simple loop to make this more apparent.
>
>Signed-off-by: NeilBrown
>---
> drivers/staging/lustre/lustre/llite/llite_lib.c | 11 +++++------
> drivers/staging/lustre/lustre/lov/lov_request.c | 12 +++++-------
> 2 files changed, 10 insertions(+), 13 deletions(-)
>
>diff --git a/drivers/staging/lustre/lustre/llite/llite_lib.c b/drivers/staging/lustre/lustre/llite/llite_lib.c
>index 33dc15e9aebb..f6642fa30428 100644
>--- a/drivers/staging/lustre/lustre/llite/llite_lib.c
>+++ b/drivers/staging/lustre/lustre/llite/llite_lib.c
>@@ -1984,8 +1984,7 @@ void ll_umount_begin(struct super_block *sb)
> 	struct ll_sb_info *sbi = ll_s2sbi(sb);
> 	struct obd_device *obd;
> 	struct obd_ioctl_data *ioc_data;
>-	wait_queue_head_t waitq;
>-	struct l_wait_info lwi;
>+	int cnt = 0;
> 
> 	CDEBUG(D_VFSTRACE, "VFS Op: superblock %p count %d active %d\n", sb,
> 	       sb->s_count, atomic_read(&sb->s_active));
>@@ -2021,10 +2020,10 @@ void ll_umount_begin(struct super_block *sb)
> 	 * and then continue. For now, we just periodically checking for vfs
> 	 * to decrement mnt_cnt and hope to finish it within 10sec.
> 	 */
>-	init_waitqueue_head(&waitq);
>-	lwi = LWI_TIMEOUT_INTERVAL(10 * HZ,
>-				   HZ, NULL, NULL);
>-	l_wait_event(waitq, may_umount(sbi->ll_mnt.mnt), &lwi);
>+	while (cnt < 10 && !may_umount(sbi->ll_mnt.mnt)) {
>+		schedule_timeout_uninterruptible(HZ);
>+		cnt++;
>+	}
> 
> 	schedule();
> }
>diff --git a/drivers/staging/lustre/lustre/lov/lov_request.c b/drivers/staging/lustre/lustre/lov/lov_request.c
>index fb3b7a7fa32a..c1e58fcc30b3 100644
>--- a/drivers/staging/lustre/lustre/lov/lov_request.c
>+++ b/drivers/staging/lustre/lustre/lov/lov_request.c
>@@ -99,8 +99,7 @@ static int lov_check_set(struct lov_obd *lov, int idx)
>  */
> static int lov_check_and_wait_active(struct lov_obd *lov, int ost_idx)
> {
>-	wait_queue_head_t waitq;
>-	struct l_wait_info lwi;
>+	int cnt = 0;
> 	struct lov_tgt_desc *tgt;
> 	int rc = 0;
> 
>@@ -125,11 +124,10 @@ static int lov_check_and_wait_active(struct lov_obd *lov, int ost_idx)
> 
> 	mutex_unlock(&lov->lov_lock);
> 
>-	init_waitqueue_head(&waitq);
>-	lwi = LWI_TIMEOUT_INTERVAL(obd_timeout * HZ,
>-				   HZ, NULL, NULL);
>-
>-	rc = l_wait_event(waitq, lov_check_set(lov, ost_idx), &lwi);
>+	while (cnt < obd_timeout && !lov_check_set(lov, ost_idx)) {
>+		schedule_timeout_uninterruptible(HZ);
>+		cnt++;
>+	}
> 	if (tgt->ltd_active)
> 		return 1;
>
>
>_______________________________________________
>lustre-devel mailing list
>lustre-devel@lists.lustre.org
>http://lists.lustre.org/listinfo.cgi/lustre-devel-lustre.org