Subject: Re: [PATCH v2] block: fix trace completion for chained bio
To: NeilBrown, axboe@kernel.dk, neilb@suse.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, s3t@synology.com, bingjingc@synology.com, cccheng@synology.com
References: <1614741726-28074-1-git-send-email-edwardh@synology.com> <87zgyudgss.fsf@notabene.neil.brown.name>
From: Edward Hsieh
Date: Fri, 23 Apr 2021 16:04:49 +0800
In-Reply-To: <87zgyudgss.fsf@notabene.neil.brown.name>

On 3/23/2021 5:22 AM, NeilBrown wrote:
> On Wed, Mar 03 2021, edwardh wrote:
>
>> From: Edward Hsieh
>>
>> For chained bios, trace_block_bio_complete() in bio_endio() is currently
>> called only by the parent bio, once all chained bios have completed.
>> However, the sector and size of the parent bio are modified in bio_split(),
>> so the size and sector of the complete events might not match the queue
>> events in blktrace.
>>
>> The original fix for bio completion tracing ("block: trace completion of
>> all bios.") wants multiple complete events to correspond to one queue
>> event, but missed this case.
>>
>> An md/raid5 read with a bio that crosses chunks can reproduce this issue.
>>
>> To fix, move the trace completion into the loop so it is called for every
>> chained bio.
>
> Thanks.  I think this is correct as far as tracing goes.
> However the code still looks a bit odd.
>
> The comment for the handling of bio_chain_endio suggests that the *only*
> purpose for that is to avoid deep recursion.  That suggests it should be
> at the end of the function.
> As it is, blk_throtl_bio_endio() and bio_uninit() are only called on the
> last bio in a chain.
> That seems wrong.
>
> I'd be more comfortable if the patch moved the bio_chain_endio()
> handling to the end, after all of that.
> So the function would end:
>
>     if (bio->bi_end_io == bio_chain_endio) {
>             bio = __bio_chain_endio(bio);
>             goto again;
>     } else if (bio->bi_end_io)
>             bio->bi_end_io(bio);
>
> Jens: can you see any reason why those functions must only be called on
> the last bio in the chain?
>
> Thanks,
> NeilBrown

Hi Neil and Jens,

From the commit message, bio_uninit() is there for bios allocated in special
ways (e.g., on the stack) that will not be released by bio_free(). For a
chained bio, __bio_chain_endio() calls bio_put() and releases those resources,
so it seems we don't need to call bio_uninit() for chained bios.

blk_throtl_bio_endio() updates the latency for the throttle group, and I think
that latency should only be updated once the whole bio has finished.

To stay consistent with the "tail call optimization" mentioned in the comment,
I'd suggest wrapping the whole statement in an else. What do you think?

    if (bio->bi_end_io == bio_chain_endio) {
            bio = __bio_chain_endio(bio);
            goto again;
    } else {
            blk_throtl_bio_endio(bio);
            /* release cgroup info */
            bio_uninit(bio);
            if (bio->bi_end_io)
                    bio->bi_end_io(bio);
    }

Thanks,
Edward Hsieh
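
P.S. For concreteness, here is a rough, untested sketch of how the tail of
bio_endio() could look with both changes combined, i.e. the v2 patch tracing
every bio in the chain plus the else-wrapping above. The integrity/rq-qos
handling is omitted, and the queue lookup passed to trace_block_bio_complete()
is only my approximation of the current helpers, so please treat this as a
sketch rather than a patch:

    again:
            if (!bio_remaining_done(bio))
                    return;

            /* integrity and rq-qos handling unchanged, omitted here */

            /*
             * Trace completion for this bio before handing off to the parent,
             * so the traced sector/size still match this bio's queue event.
             */
            if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
                    /* queue lookup approximated; follow whatever the tree uses */
                    trace_block_bio_complete(bio->bi_bdev->bd_disk->queue, bio);
                    bio_clear_flag(bio, BIO_TRACE_COMPLETION);
            }

            if (bio->bi_end_io == bio_chain_endio) {
                    bio = __bio_chain_endio(bio);
                    goto again;
            } else {
                    blk_throtl_bio_endio(bio);
                    /* release cgroup info */
                    bio_uninit(bio);
                    if (bio->bi_end_io)
                            bio->bi_end_io(bio);
            }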