From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dennis Yang, Mike Snitzer
Subject: [PATCH 4.4 103/105] dm thin: handle running out of data space vs concurrent discard
Date: Sun, 1 Jul 2018 18:02:53 +0200
Message-Id: <20180701153156.862304755@linuxfoundation.org>
In-Reply-To: <20180701153149.382300170@linuxfoundation.org>
References: <20180701153149.382300170@linuxfoundation.org>

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Mike Snitzer

commit a685557fbbc3122ed11e8ad3fa63a11ebc5de8c3 upstream.
Discards issued to a DM thin device can complete to userspace (via
fstrim) _before_ the metadata changes associated with the discards are
reflected in the thinp superblock (e.g. free blocks).  As such, if a
user constructs a test that loops repeatedly over these steps, block
allocation can fail due to discards not having completed yet:

1) fill thin device via filesystem file
2) remove file
3) fstrim

From the initial report, here:
https://www.redhat.com/archives/dm-devel/2018-April/msg00022.html

"The root cause of this issue is that dm-thin will first remove the
mapping and increase the corresponding blocks' reference count to
prevent them from being reused before the DISCARD bios get processed by
the underlying layers.  However, increasing blocks' reference count
could also increase nr_allocated_this_transaction in struct sm_disk,
which makes smd->old_ll.nr_allocated + smd->nr_allocated_this_transaction
bigger than smd->old_ll.nr_blocks.  In this case, alloc_data_block()
will never commit metadata to reset the begin pointer of struct sm_disk,
because sm_disk_get_nr_free() always returns an underflow value."

While there is room for improvement in the space-map accounting that
thinp makes use of, the reality is that this test is inherently racy and
will pit the previous iteration's fstrim discard(s) against concurrent
block allocation, via dd, in the next iteration of the loop.  No amount
of space-map accounting improvement will allow users to use a block
before a discard of that block has completed.

So the best we can really do is allow DM thinp to gracefully handle such
aggressive use of all the pool's data by degrading the pool into
out-of-data-space (OODS) mode.  We _should_ get that behaviour already
(if the space-map accounting didn't falsely cause alloc_data_block() to
believe free space was available), but short of that we handle the
current reality that dm_pool_alloc_data_block() can return -ENOSPC.
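The three-step loop above can be sketched as a small shell script.  This
is only an illustrative sketch, not part of the original report: the
mountpoint (MNT), the scratch file name, and the DRY_RUN guard (which
defaults to printing the commands rather than running them against a
real thin device) are all assumptions.

```shell
#!/bin/sh
# Reproduction sketch for the fill/remove/fstrim race.
# MNT, FILE and the DRY_RUN default are assumptions for illustration.
MNT=${MNT:-/mnt/thinvol}   # mountpoint of a filesystem on the thin device
FILE=$MNT/fill             # scratch file used to exhaust the pool
DRY_RUN=${DRY_RUN:-1}      # 1 = only print the commands (safe default)

run() {
	if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

repro() {
	i=0
	while [ $i -lt 3 ]; do
		run dd if=/dev/zero of="$FILE" bs=1M oflag=direct  # 1) fill thin device
		run rm -f "$FILE"                                  # 2) remove file
		run fstrim "$MNT"                                  # 3) discard freed blocks
		i=$((i + 1))
	done
}

repro
```

With the patch applied, running this for real (DRY_RUN=0, as root, on a
nearly full thin pool) should degrade the pool into out-of-data-space
mode instead of failing block allocation with a metadata error.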
Reported-by: Dennis Yang
Cc: stable@vger.kernel.org
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman
---
 drivers/md/dm-thin.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -1299,6 +1299,8 @@ static void schedule_external_copy(struc
 
 static void set_pool_mode(struct pool *pool, enum pool_mode new_mode);
 
+static void requeue_bios(struct pool *pool);
+
 static void check_for_space(struct pool *pool)
 {
 	int r;
@@ -1311,8 +1313,10 @@ static void check_for_space(struct pool
 	if (r)
 		return;
 
-	if (nr_free)
+	if (nr_free) {
 		set_pool_mode(pool, PM_WRITE);
+		requeue_bios(pool);
+	}
 }
 
 /*
@@ -1389,7 +1393,10 @@ static int alloc_data_block(struct thin_
 
 	r = dm_pool_alloc_data_block(pool->pmd, result);
 	if (r) {
-		metadata_operation_failed(pool, "dm_pool_alloc_data_block", r);
+		if (r == -ENOSPC)
+			set_pool_mode(pool, PM_OUT_OF_DATA_SPACE);
+		else
+			metadata_operation_failed(pool, "dm_pool_alloc_data_block", r);
 		return r;
 	}