From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dennis Yang, Mike Snitzer
Subject: [PATCH 4.9 100/101] dm thin: handle running out of data space vs concurrent discard
Date: Sun, 1 Jul 2018 18:22:26 +0200
Message-Id: <20180701160801.145614656@linuxfoundation.org>
In-Reply-To: <20180701160757.138608453@linuxfoundation.org>
References: <20180701160757.138608453@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
List-ID: <linux-kernel.vger.kernel.org>

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Mike Snitzer

commit a685557fbbc3122ed11e8ad3fa63a11ebc5de8c3 upstream.
Discards issued to a DM thin device can complete to userspace (via fstrim)
_before_ the metadata changes associated with the discards are reflected in
the thinp superblock (e.g. free blocks).  As such, if a user constructs a
test that loops repeatedly over these steps, block allocation can fail due
to discards not having completed yet:

1) fill thin device via filesystem file
2) remove file
3) fstrim

From the initial report, here:
https://www.redhat.com/archives/dm-devel/2018-April/msg00022.html

"The root cause of this issue is that dm-thin will first remove the mapping
and increase the corresponding blocks' reference count to prevent them from
being reused before the DISCARD bios get processed by the underlying layers.
However, increasing blocks' reference count could also increase
nr_allocated_this_transaction in struct sm_disk, which makes
smd->old_ll.nr_allocated + smd->nr_allocated_this_transaction bigger than
smd->old_ll.nr_blocks.  In this case, alloc_data_block() will never commit
metadata to reset the begin pointer of struct sm_disk, because
sm_disk_get_nr_free() always returns an underflowed value."

While there is room for improvement in the space-map accounting that thinp
makes use of, the reality is that this test is inherently racy: the previous
iteration's fstrim discard(s) will complete concurrently with block
allocation, via dd, in the next iteration of the loop.  No amount of space
map accounting improvement can allow users to use a block before a discard
of that block has completed.

So the best we can really do is allow DM thinp to gracefully handle such
aggressive use of all the pool's data by degrading the pool into
out-of-data-space (OODS) mode.  We _should_ get that behaviour already (if
the space map accounting didn't falsely cause alloc_data_block() to believe
free space was available), but short of that we handle the current reality
that dm_pool_alloc_data_block() can return -ENOSPC.
Reported-by: Dennis Yang
Cc: stable@vger.kernel.org
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman

---
 drivers/md/dm-thin.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -1384,6 +1384,8 @@ static void schedule_external_copy(struc
 
 static void set_pool_mode(struct pool *pool, enum pool_mode new_mode);
 
+static void requeue_bios(struct pool *pool);
+
 static void check_for_space(struct pool *pool)
 {
 	int r;
@@ -1396,8 +1398,10 @@ static void check_for_space(struct pool
 	if (r)
 		return;
 
-	if (nr_free)
+	if (nr_free) {
 		set_pool_mode(pool, PM_WRITE);
+		requeue_bios(pool);
+	}
 }
 
 /*
@@ -1474,7 +1478,10 @@ static int alloc_data_block(struct thin_
 
 	r = dm_pool_alloc_data_block(pool->pmd, result);
 	if (r) {
-		metadata_operation_failed(pool, "dm_pool_alloc_data_block", r);
+		if (r == -ENOSPC)
+			set_pool_mode(pool, PM_OUT_OF_DATA_SPACE);
+		else
+			metadata_operation_failed(pool, "dm_pool_alloc_data_block", r);
 		return r;
 	}