From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Hua Rui, Michael Lyle, Jens Axboe, Sasha Levin
Subject: [PATCH 4.14 057/138] bcache: ret IOERR when read meets metadata error
Date: Wed, 11 Apr 2018 00:24:07 +0200
Message-Id: <20180410212908.693693913@linuxfoundation.org>
In-Reply-To: <20180410212902.121524696@linuxfoundation.org>
References: <20180410212902.121524696@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

4.14-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Rui Hua

[ Upstream commit b221fc130c49c50f4c2250d22e873420765a9fa2 ]

A read request might encounter an error while searching the btree, but the
error was not handled in cache_lookup(), so this kind of metadata failure
never reached cached_dev_read_error(); the upper layer ultimately received
bi_status == 0.

In this patch we detect the metadata error from the return value of
bch_btree_map_keys(). Two paths can give rise to the error:

1. Because the btree is not entirely cached in memory, reading a btree
   node from the cache device may fail (see bch_btree_node_get()); the
   likely errnos are -EIO and -ENOMEM.

2. When a read miss happens, bch_btree_insert_check_key() is called to
   insert a "replace_key" into the btree (see cached_dev_cache_miss();
   this is preparatory work before inserting the missed data into the
   cache device). A failure can also happen here; the likely errno is
   -ENOMEM.

bch_btree_map_keys() returns MAP_DONE in the normal scenario, but returns
either -EIO or -ENOMEM in the two cases above. When that happens, we
should NOT recover data from the backing device (when the cache device is
dirty), because we do not know whether all the bkeys covered by the read
request are clean.

After such a failure, s->iop.status still has its initial value (0)
before we submit s->bio.bio. We set it to BLK_STS_IOERR so the request
goes into cached_dev_read_error(), and the error is finally either passed
to the upper layer or recovered by rereading from the backing device.
[edit by mlyle: patch formatting, word-wrap, comment spelling, commit log format]

Signed-off-by: Hua Rui
Reviewed-by: Michael Lyle
Signed-off-by: Michael Lyle
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 drivers/md/bcache/request.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -568,6 +568,7 @@ static void cache_lookup(struct closure *cl)
 {
 	struct search *s = container_of(cl, struct search, iop.cl);
 	struct bio *bio = &s->bio.bio;
+	struct cached_dev *dc;
 	int ret;
 
 	bch_btree_op_init(&s->op, -1);
@@ -580,6 +581,27 @@ static void cache_lookup(struct closure *cl)
 		return;
 	}
 
+	/*
+	 * We might meet err when searching the btree, If that happens, we will
+	 * get negative ret, in this scenario we should not recover data from
+	 * backing device (when cache device is dirty) because we don't know
+	 * whether bkeys the read request covered are all clean.
+	 *
+	 * And after that happened, s->iop.status is still its initial value
+	 * before we submit s->bio.bio
+	 */
+	if (ret < 0) {
+		BUG_ON(ret == -EINTR);
+		if (s->d && s->d->c &&
+		    !UUID_FLASH_ONLY(&s->d->c->uuids[s->d->id])) {
+			dc = container_of(s->d, struct cached_dev, disk);
+			if (dc && atomic_read(&dc->has_dirty))
+				s->recoverable = false;
+		}
+		if (!s->iop.status)
+			s->iop.status = BLK_STS_IOERR;
+	}
+
 	closure_return(cl);
 }