Date: Fri, 8 Jan 2021 08:54:44 +1100
From: Dave Chinner
To: Brian Foster
Cc: Donald Buczek, linux-xfs@vger.kernel.org,
	Linux Kernel Mailing List, it+linux-xfs@molgen.mpg.de
Subject: Re: [PATCH] xfs: Wake CIL push waiters more reliably
Message-ID: <20210107215444.GG331610@dread.disaster.area>
References: <1705b481-16db-391e-48a8-a932d1f137e7@molgen.mpg.de>
	<20201229235627.33289-1-buczek@molgen.mpg.de>
	<20201230221611.GC164134@dread.disaster.area>
	<20210104162353.GA254939@bfoster>
In-Reply-To: <20210104162353.GA254939@bfoster>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jan 04, 2021 at 11:23:53AM -0500, Brian Foster wrote:
> On Thu, Dec 31, 2020 at 09:16:11AM +1100, Dave Chinner wrote:
> > On Wed, Dec 30, 2020 at 12:56:27AM +0100, Donald Buczek wrote:
> > > If the value goes below the limit while some threads are
> > > already waiting but before the push worker gets to it, these
> > > threads are not woken.
> > >
> > > Always wake all CIL push waiters. Test with waitqueue_active() as an
> > > optimization. This is possible, because we hold the xc_push_lock
> > > spinlock, which prevents additions to the waitqueue.
> > >
> > > Signed-off-by: Donald Buczek
> > > ---
> > >  fs/xfs/xfs_log_cil.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
> > > index b0ef071b3cb5..d620de8e217c 100644
> > > --- a/fs/xfs/xfs_log_cil.c
> > > +++ b/fs/xfs/xfs_log_cil.c
> > > @@ -670,7 +670,7 @@ xlog_cil_push_work(
> > >  	/*
> > >  	 * Wake up any background push waiters now this context is being pushed.
> > >  	 */
> > > -	if (ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log))
> > > +	if (waitqueue_active(&cil->xc_push_wait))
> > >  		wake_up_all(&cil->xc_push_wait);
> >
> > That just smells wrong to me. It *might* be correct, but this
> > condition should pair with the sleep condition, as space used by a
> > CIL context should never actually decrease....
> >
>
> ... but I'm a little confused by this assertion. The shadow buffer
> allocation code refers to the possibility of shadow buffers falling out
> that are smaller than currently allocated buffers. Further, the
> _insert_format_items() code appears to explicitly optimize for this
> possibility by reusing the active buffer, subtracting the old size/count
> values from the diff variables and then reformatting the latest
> (presumably smaller) item to the lv.
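(The reuse optimisation being described boils down to something like
this toy model - illustrative names only, not the actual
fs/xfs/xfs_log_cil.c code - where a smaller reformatted item
contributes a negative delta:)

```c
#include <assert.h>

/*
 * Toy model of log vector reuse on relog (illustrative names, not
 * the actual fs/xfs code). When the reformatted item fits in its
 * existing shadow buffer, the buffer is reused and the running
 * length/iovec deltas account for the old vs the new size.
 */
struct toy_lv {
	int	lv_bytes;	/* currently formatted size */
	int	lv_niovecs;	/* currently formatted iovec count */
};

static void
toy_format_item(
	struct toy_lv	*lv,
	int		new_bytes,
	int		new_niovecs,
	int		*diff_len,
	int		*diff_iovecs)
{
	/*
	 * Subtract the old size/count, add back the new: a smaller
	 * reformatted item yields a negative contribution.
	 */
	*diff_len += new_bytes - lv->lv_bytes;
	*diff_iovecs += new_niovecs - lv->lv_niovecs;
	lv->lv_bytes = new_bytes;
	lv->lv_niovecs = new_niovecs;
}
```

(Run against a shrinking item - say 256 bytes reformatted down to 128 -
*diff_len comes out negative, which is exactly the case in question
when it gets aggregated into the context's space usage.)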
Individual items might shrink, but the overall transaction should grow.
Think of an extent to btree conversion of an inode fork. The data in
the inode fork decreases from a list of extents to a btree root block
pointer, so the inode item shrinks. But then we add a new btree root
block that contains all the extents + the btree block header, and it
gets rounded up to the 128 byte buffer logging chunk size.

IOWs, while the inode item has decreased in size, the overall space
consumed by the transaction has gone up and so the CIL ctx used_space
should increase. Hence we can't just look at individual log items and
whether they have decreased in size - we have to look at all the items
in the transaction to understand how the space used in that
transaction has changed. i.e. it's the aggregation of all items in the
transaction that matter here, not so much the individual items.

> Of course this could just be implementation detail. I haven't dug into
> the details in the remainder of this thread and I don't have specific
> examples off the top of my head, but perhaps based on the ability of
> various structures to change formats and the ability of log vectors to
> shrink in size, shouldn't we expect the possibility of a CIL context to
> shrink in size as well? Just from poking around the CIL it seems like
> the surrounding code supports it (xlog_cil_insert_items() checks len > 0
> for recalculating split res as well)...

Yes, there may be situations where it decreases. It may be this is
fine, but the assumption *I've made* in lots of the CIL push code is
that ctx->used_space rarely, if ever, will go backwards.

e.g. we run the first transaction into the CIL, it steals the space
needed for the CIL checkpoint headers for the transaction. Then if the
space returned by the item formatting is negative (because it is in
the AIL and being relogged), the CIL checkpoint now doesn't have the
space reserved it needs to run a checkpoint.
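(Sketched as a toy model - made-up names and numbers, not the real
xlog_cil_insert_items() logic - the under-reservation looks like this:)

```c
#include <assert.h>

/*
 * Toy model of CIL context space accounting (made-up names and
 * numbers, not the real fs/xfs code). The first transaction into an
 * empty context steals space for the checkpoint headers; a later
 * relog with a negative aggregate delta can pull used_space back
 * below that stolen header reservation.
 */
#define TOY_CTX_HEADER_RES	512

struct toy_ctx {
	int	used_space;
	int	stole_header_res;
};

static void
toy_insert_items(
	struct toy_ctx	*ctx,
	int		aggregate_delta)
{
	if (!ctx->stole_header_res) {
		/* first transaction steals checkpoint header space */
		ctx->used_space += TOY_CTX_HEADER_RES;
		ctx->stole_header_res = 1;
	}
	ctx->used_space += aggregate_delta;
}
```

(If the second insertion's aggregate delta is negative and large
enough, used_space drops below the header reservation the checkpoint
has already committed to writing.)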
That transaction is a sync transaction, so it forces the log, and now
we push the CIL without sufficient reservation to write out the log
headers and the items we just formatted....

So, yeah, shrinking transaction space usage definitely violates some
of the assumptions the code makes about how relogging works. It's
entirely possible the assumptions I've made are not entirely correct
in some corner cases - those particular cases are what we need to
ferret out here, and then decide if they are correct or not and deal
with it from there...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com