From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Kent Overstreet
Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Theodore Ts'o, "Darrick J. Wong", Coly Li,
	Filipe Manana, Randy Dunlap, Christoph Hellwig
Subject: [PATCH V7 07/24] block: simplify bio_check_pages_dirty
Date: Wed, 27 Jun 2018 20:45:31 +0800
Message-Id: <20180627124548.3456-8-ming.lei@redhat.com>
In-Reply-To: <20180627124548.3456-1-ming.lei@redhat.com>
References: <20180627124548.3456-1-ming.lei@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Christoph Hellwig

bio_check_pages_dirty currently violates the invariant that bv_page of
a bio_vec inside bi_vcnt shouldn't be zero, and that is going to become
really annoying with multipage biovecs.  Fortunately there isn't all
that good a reason for it: once we decide to defer freeing the bio to a
workqueue, holding onto a few additional pages isn't really an issue
anymore.  So just check if there is a clean page that needs dirtying in
the first pass, and do a second pass to free them if there was none,
while the cache is still hot.

Also use the chance to micro-optimize bio_dirty_fn a bit by not saving
irq state: we know we are called from a workqueue.
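The two-pass idea above can be sketched outside the kernel. The following is a hypothetical userspace mock (mock_bio, mock_put_page, and the dirty flag are illustrative stand-ins for struct bio, put_page(), and PageDirty(); this is not the kernel code itself): scan once for any clean page, and only if none is found release everything in a second pass while the pages are still cache-hot.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for struct page / struct bio in userspace. */
struct page { bool dirty; };

struct mock_bio {
	struct page *pages;
	int nr_pages;
	int freed;      /* counts mock_put_page() calls */
	bool deferred;  /* would be queued for the workqueue instead */
};

static void mock_put_page(struct mock_bio *bio)
{
	bio->freed++;
}

/*
 * First pass: look for any clean page.  If one is found, defer the whole
 * bio without touching any page (no bv_page is cleared, preserving the
 * invariant from the commit message).  Otherwise do a second pass and
 * release every page immediately, while the cache is still hot.
 */
static void mock_check_pages_dirty(struct mock_bio *bio)
{
	for (int i = 0; i < bio->nr_pages; i++) {
		if (!bio->pages[i].dirty) {
			bio->deferred = true;  /* kernel: goto defer */
			return;
		}
	}
	for (int i = 0; i < bio->nr_pages; i++)
		mock_put_page(bio);
}
```

An all-dirty bio is released inline; a bio with even one clean page is deferred untouched, mirroring the `goto defer` path in the patch.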
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig
---
 block/bio.c | 56 +++++++++++++++++++++-----------------------------------
 1 file changed, 21 insertions(+), 35 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 43698bcff737..77f991688810 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1570,19 +1570,15 @@ static void bio_release_pages(struct bio *bio)
 	struct bio_vec *bvec;
 	int i;
 
-	bio_for_each_segment_all(bvec, bio, i) {
-		struct page *page = bvec->bv_page;
-
-		if (page)
-			put_page(page);
-	}
+	bio_for_each_segment_all(bvec, bio, i)
+		put_page(bvec->bv_page);
 }
 
 /*
  * bio_check_pages_dirty() will check that all the BIO's pages are still dirty.
  * If they are, then fine.  If, however, some pages are clean then they must
  * have been written out during the direct-IO read.  So we take another ref on
- * the BIO and the offending pages and re-dirty the pages in process context.
+ * the BIO and re-dirty the pages in process context.
  *
  * It is expected that bio_check_pages_dirty() will wholly own the BIO from
  * here on.  It will run one put_page() against each page and will run one
@@ -1600,52 +1596,42 @@ static struct bio *bio_dirty_list;
  */
 static void bio_dirty_fn(struct work_struct *work)
 {
-	unsigned long flags;
-	struct bio *bio;
+	struct bio *bio, *next;
 
-	spin_lock_irqsave(&bio_dirty_lock, flags);
-	bio = bio_dirty_list;
+	spin_lock_irq(&bio_dirty_lock);
+	next = bio_dirty_list;
 	bio_dirty_list = NULL;
-	spin_unlock_irqrestore(&bio_dirty_lock, flags);
+	spin_unlock_irq(&bio_dirty_lock);
 
-	while (bio) {
-		struct bio *next = bio->bi_private;
+	while ((bio = next) != NULL) {
+		next = bio->bi_private;
 
 		bio_set_pages_dirty(bio);
 		bio_release_pages(bio);
 		bio_put(bio);
-		bio = next;
 	}
 }
 
 void bio_check_pages_dirty(struct bio *bio)
 {
 	struct bio_vec *bvec;
-	int nr_clean_pages = 0;
+	unsigned long flags;
 	int i;
 
 	bio_for_each_segment_all(bvec, bio, i) {
-		struct page *page = bvec->bv_page;
-
-		if (PageDirty(page) || PageCompound(page)) {
-			put_page(page);
-			bvec->bv_page = NULL;
-		} else {
-			nr_clean_pages++;
-		}
+		if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page))
+			goto defer;
 	}
 
-	if (nr_clean_pages) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&bio_dirty_lock, flags);
-		bio->bi_private = bio_dirty_list;
-		bio_dirty_list = bio;
-		spin_unlock_irqrestore(&bio_dirty_lock, flags);
-		schedule_work(&bio_dirty_work);
-	} else {
-		bio_put(bio);
-	}
+	bio_release_pages(bio);
+	bio_put(bio);
+	return;
+defer:
+	spin_lock_irqsave(&bio_dirty_lock, flags);
+	bio->bi_private = bio_dirty_list;
+	bio_dirty_list = bio;
+	spin_unlock_irqrestore(&bio_dirty_lock, flags);
+	schedule_work(&bio_dirty_work);
 }
 EXPORT_SYMBOL_GPL(bio_check_pages_dirty);
-- 
2.9.5
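The reworked bio_dirty_fn() detaches the whole list under the lock and then drains it with the `while ((bio = next) != NULL)` idiom, which reads the link before the node is consumed and drops the trailing `bio = next` of the old loop. A minimal userspace sketch of that drain pattern (hypothetical node type chained through a `private` pointer like bi_private; locking omitted):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical node chained through a private pointer, like bi_private. */
struct node {
	struct node *private;
	int visited;
};

/*
 * Detach the entire list head first (in the kernel this happens under
 * bio_dirty_lock), then walk it with the (n = next) idiom: the link is
 * read before the node is processed, so the node could be freed or
 * reused inside the loop body without losing the rest of the list.
 */
static int drain(struct node **list)
{
	struct node *n, *next = *list;
	int count = 0;

	*list = NULL;
	while ((n = next) != NULL) {
		next = n->private;  /* read the link before consuming n */
		n->visited = 1;
		count++;
	}
	return count;
}
```

Detaching the head up front is also what lets the kernel version take the lock exactly once per batch instead of once per bio.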