From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dennis Yang, Mike Snitzer
Subject: [PATCH 3.18 85/85] dm thin: handle running out of data space vs concurrent discard
Date: Sun, 1 Jul 2018 18:02:43 +0200
Message-Id: <20180701153125.743534187@linuxfoundation.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180701153122.365061142@linuxfoundation.org>
References: <20180701153122.365061142@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

3.18-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Mike Snitzer

commit a685557fbbc3122ed11e8ad3fa63a11ebc5de8c3 upstream.

Discards issued to a DM thin device can complete to userspace (via
fstrim) _before_ the metadata changes associated with the discards are
reflected in the thinp superblock (e.g. free blocks).  As such, if a
user constructs a test that loops repeatedly over these steps, block
allocation can fail due to discards not having completed yet:

1) fill thin device via filesystem file
2) remove file
3) fstrim

From the initial report, here:
https://www.redhat.com/archives/dm-devel/2018-April/msg00022.html

"The root cause of this issue is that dm-thin will first remove the
mapping and increase the corresponding blocks' reference count to
prevent them from being reused before the DISCARD bios get processed
by the underlying layers.  However, increasing the blocks' reference
count could also increase nr_allocated_this_transaction in struct
sm_disk, which makes smd->old_ll.nr_allocated +
smd->nr_allocated_this_transaction bigger than smd->old_ll.nr_blocks.
In this case, alloc_data_block() will never commit metadata to reset
the begin pointer of struct sm_disk, because sm_disk_get_nr_free()
always returns an underflowed value."

While there is room for improvement in the space-map accounting that
thinp makes use of, the reality is that this test is inherently racy
and will result in the previous iteration's fstrim discard(s)
completing concurrently with block allocation (via dd) in the next
iteration of the loop.  No amount of space-map accounting improvement
will allow users to use a block before a discard of that block has
completed.

So the best we can really do is allow DM thinp to gracefully handle
such aggressive use of all the pool's data by degrading the pool into
out-of-data-space (OODS) mode.  We _should_ get that behaviour already
(if the space-map accounting didn't falsely cause alloc_data_block()
to believe free space was available), but short of that we handle the
current reality that dm_pool_alloc_data_block() can return -ENOSPC.

Reported-by: Dennis Yang
Cc: stable@vger.kernel.org
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman

---
 drivers/md/dm-thin.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -992,6 +992,8 @@ static void schedule_external_copy(struc
 
 static void set_pool_mode(struct pool *pool, enum pool_mode new_mode);
 
+static void requeue_bios(struct pool *pool);
+
 static void check_for_space(struct pool *pool)
 {
 	int r;
@@ -1004,8 +1006,10 @@ static void check_for_space(struct pool
 	if (r)
 		return;
 
-	if (nr_free)
+	if (nr_free) {
 		set_pool_mode(pool, PM_WRITE);
+		requeue_bios(pool);
+	}
 }
 
 /*
@@ -1082,7 +1086,10 @@ static int alloc_data_block(struct thin_
 
 	r = dm_pool_alloc_data_block(pool->pmd, result);
 	if (r) {
-		metadata_operation_failed(pool, "dm_pool_alloc_data_block", r);
+		if (r == -ENOSPC)
+			set_pool_mode(pool, PM_OUT_OF_DATA_SPACE);
+		else
+			metadata_operation_failed(pool, "dm_pool_alloc_data_block", r);
 		return r;
 	}
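
[Editor's note: to make the space-map arithmetic quoted in the report
concrete, here is a minimal userspace C sketch.  It is NOT the kernel's
persistent-data code: struct sm_disk_sketch, the field old_nr_allocated,
the helper sketch_get_nr_free(), and the example numbers are all
hypothetical stand-ins named after the fields the report mentions.  It
only illustrates how the unsigned subtraction behind an
sm_disk_get_nr_free()-style helper can underflow once discard-held
references push the per-transaction allocation count past the pool's
block count.]

/* Hypothetical illustration only -- not drivers/md code. */
#include <stdio.h>
#include <stdint.h>

struct sm_disk_sketch {
	uint64_t nr_blocks;             /* cf. smd->old_ll.nr_blocks */
	uint64_t old_nr_allocated;      /* cf. smd->old_ll.nr_allocated */
	uint64_t nr_allocated_this_transaction; /* inflated by refs held
						   for in-flight discards */
};

static uint64_t sketch_get_nr_free(const struct sm_disk_sketch *smd)
{
	/*
	 * Unsigned subtraction: if the allocated counts sum past
	 * nr_blocks, the result wraps to a huge value instead of 0,
	 * so the caller believes free space exists.
	 */
	return smd->nr_blocks -
	       (smd->old_nr_allocated + smd->nr_allocated_this_transaction);
}

int main(void)
{
	struct sm_disk_sketch smd = {
		.nr_blocks = 1000,
		.old_nr_allocated = 995,
		/* 10 blocks freed by fstrim but still ref-counted until
		 * the underlying discards complete: 995 + 10 > 1000. */
		.nr_allocated_this_transaction = 10,
	};

	/*
	 * Prints 18446744073709551611 (2^64 - 5), not 0 -- the
	 * "underflowed value" from the report.  An allocator trusting
	 * this never commits metadata, hence the patch's fallback of
	 * entering out-of-data-space mode when it sees -ENOSPC.
	 */
	printf("nr_free = %llu\n",
	       (unsigned long long)sketch_get_nr_free(&smd));
	return 0;
}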