Date: Mon, 24 Aug 2020 11:48:41 -0400
From: Brian Foster
To: Christoph Hellwig
Cc: Dave Chinner, Ritesh Harjani, Anju T Sudhakar, darrick.wong@oracle.com,
    linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, willy@infradead.org
Subject: Re: [PATCH] iomap: Fix the write_count in iomap_add_to_ioend().
Message-ID: <20200824154841.GB295033@bfoster>
In-Reply-To: <20200824150417.GA12258@infradead.org>
References: <20200819102841.481461-1-anju@linux.vnet.ibm.com>
 <20200820231140.GE7941@dread.disaster.area>
 <20200821044533.BBFD1A405F@d06av23.portsmouth.uk.ibm.com>
 <20200821215358.GG7941@dread.disaster.area>
 <20200822131312.GA17997@infradead.org>
 <20200824142823.GA295033@bfoster>
 <20200824150417.GA12258@infradead.org>

On Mon, Aug 24, 2020 at 04:04:17PM +0100, Christoph Hellwig wrote:
> On Mon, Aug 24, 2020 at 10:28:23AM -0400, Brian Foster wrote:
> > Do I understand the current code (__bio_try_merge_page() ->
> > page_is_mergeable()) correctly in that we're checking for physical
> > page contiguity and not necessarily requiring a new bio_vec per
> > physical page?
> >
> Yes.
>

Ok.
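For reference, the contiguity test I have in mind boils down to
something like the following. This is only a simplified paraphrase of
the check in page_is_mergeable() (the helper name below is made up for
illustration, and the real code also handles the Xen merge restriction
and same-page tracking), not the exact upstream code:

	/*
	 * Simplified paraphrase of the page_is_mergeable() physical
	 * contiguity check in block/bio.c: merging into the last bvec
	 * only requires the new page to start where the previous bvec
	 * ends physically, not to be the same page.
	 */
	static bool bvec_page_is_contiguous(const struct bio_vec *bv,
			struct page *page, unsigned int off)
	{
		phys_addr_t vec_end_addr = page_to_phys(bv->bv_page) +
					   bv->bv_offset + bv->bv_len - 1;
		phys_addr_t page_addr = page_to_phys(page);

		return vec_end_addr + 1 == page_addr + off;
	}

So as I understand it, on a multipage-bvec kernel a physically
contiguous run of pages can be absorbed into a single bio_vec, whereas
without that support only additions to the same page are merged.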
I also realize now that this occurs on a kernel without commit
07173c3ec276 ("block: enable multipage bvecs"). That is probably a
contributing factor, but it's not clear to me whether it's feasible to
backport whatever supporting infrastructure is required for that
mechanism to work (I suspect not).

> > With regard to Dave's earlier point around seeing excessively sized
> > bio chains.. If I set up a large memory box with high dirty mem
> > ratios and do contiguous buffered overwrites over a 32GB range
> > followed by fsync, I can see upwards of 1GB per bio and thus chains
> > on the order of 32+ bios for the entire write. If I play games with
> > how the buffered overwrite is submitted (i.e., in reverse) however,
> > then I can occasionally reproduce a ~32GB chain of ~32k bios, which
> > I think is what leads to problems in I/O completion on some systems.
> > Granted, I don't reproduce soft lockup issues on my system with that
> > behavior, so perhaps there's more to that particular issue.
> >
> > Regardless, it seems reasonable to me to at least have a
> > conservative limit on the length of an ioend bio chain. Would
> > anybody object to iomap_ioend growing a chain counter and perhaps
> > forcing into a new ioend if we chain something like more than 1k
> > bios at once?
>
> So what exactly is the problem of processing a long chain in the
> workqueue vs multiple small chains?  Maybe we need a cond_resched()
> here and there, but I don't see how we'd substantially change
> behavior.
>

The immediate problem is a watchdog lockup detection in bio completion:

  NMI watchdog: Watchdog detected hard LOCKUP on cpu 25

This effectively lands at the following segment of iomap_finish_ioend():

	...
	/* walk each page on bio, ending page IO on them */
	bio_for_each_segment_all(bv, bio, iter_all)
		iomap_finish_page_writeback(inode, bv->bv_page, error);

I suppose we could add a cond_resched(), but is that safe directly
inside of a ->bi_end_io() handler? Another option could be to dump
large chains into the completion workqueue, but we may still need to
track the length to do that. Thoughts?

Brian
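P.S. To make the chain cap proposal a little more concrete, the sort of
thing I have in mind is sketched below. This is untested; the
io_bio_count field and the IOMAP_MAX_IOEND_BIOS value are invented here
for illustration and don't exist upstream, and a real patch would hook
them into the existing iomap_add_to_ioend() / iomap_can_add_to_ioend()
paths properly:

	/*
	 * Untested sketch: cap the number of bios chained to a single
	 * ioend so per-ioend completion work stays bounded.
	 * io_bio_count and IOMAP_MAX_IOEND_BIOS are made up here.
	 */
	#define IOMAP_MAX_IOEND_BIOS	1024

	struct iomap_ioend {
		/* ... existing fields ... */
		unsigned int	io_bio_count;	/* bios chained so far */
	};

	/* where iomap_add_to_ioend() chains a new bio: */
		wpc->ioend->io_bio_count++;

	/* and in iomap_can_add_to_ioend(), next to the offset/sector checks: */
		if (wpc->ioend->io_bio_count >= IOMAP_MAX_IOEND_BIOS)
			return false;	/* force a new ioend */

With something like that, the pathological ~32k-bio chain above would
be split into roughly 32 ioends, each of which walks a bounded number
of pages at completion time.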