From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Bodo Stroesser,
    Xiaoguang Wang, "Martin K. Petersen", Sasha Levin
Subject: [PATCH 5.15 091/667] scsi: target: tcmu: Fix possible data corruption
Date: Tue, 7 Jun 2022 18:55:56 +0200
Message-Id: <20220607164937.550033493@linuxfoundation.org>
In-Reply-To: <20220607164934.766888869@linuxfoundation.org>
References: <20220607164934.766888869@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Xiaoguang Wang

[ Upstream commit bb9b9eb0ae2e9d3f6036f0ad907c3a83dcd43485 ]

When tcmu_vma_fault() gets a page successfully, find_free_blocks() may
run before the faulting context completes the page fault procedure, and
call unmap_mapping_range() to unmap the page.
Assume find_free_blocks() completes first and the interrupted page
fault procedure then resumes and finishes: a truncated page has now
been mapped to userspace. Note that tcmu_vma_fault() has taken a
refcount on the page, so no other subsystem can reuse it until the
userspace address is unmapped later.

If another command subsequently runs and needs to extend dbi_thresh,
it may reuse the slot in data_bitmap that corresponds to the previous
page. Then, although we allocate a new page for this slot in the data
area, no page fault will happen because a valid mapping still exists,
and the new request's data will be lost.

Filesystem implementations face the same issue, but they usually lock
the page when vm_operations_struct->fault gets it and unlock it after
finish_fault() completes. For truncate, filesystems lock pages in
truncate_inode_pages() to protect against racing page faults.

To fix this possible data corruption, apply a similar method: have
tcmu_blocks_release() lock and unlock each page that is about to be
freed, and make tcmu_vma_fault() lock the found page under cmdr_lock.
Since tcmu_vma_fault() takes an extra page refcount,
tcmu_blocks_release() will not free pages that are still inside a page
fault procedure, which makes it safe to call tcmu_blocks_release()
before unmap_mapping_range().

With these changes, tcmu_blocks_release() waits for all page faults to
complete before unmap_mapping_range() is called, and
unmap_mapping_range() then ensures that stale mappings are removed.

Link: https://lore.kernel.org/r/20220421023735.9018-1-xiaoguang.wang@linux.alibaba.com
Reviewed-by: Bodo Stroesser
Signed-off-by: Xiaoguang Wang
Signed-off-by: Martin K. Petersen
Signed-off-by: Sasha Levin
---
 drivers/target/target_core_user.c | 40 ++++++++++++++++++++++++++++---
 1 file changed, 37 insertions(+), 3 deletions(-)
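Before the diff itself, a minimal sketch of the fault-drain idiom the
fix relies on (illustration only, not part of the patch;
release_data_page() is a hypothetical helper): because the fault
handler now returns VM_FAULT_LOCKED, a faulting page stays locked
until the MM core finishes installing the PTE, so a bare lock/unlock
pair acts as a barrier that waits out any in-flight fault on that page.

    #include <linux/mm.h>          /* struct page, get/put refcounting */
    #include <linux/pagemap.h>     /* lock_page(), unlock_page() */

    /* Hypothetical helper: drop a data-area page only after any
     * concurrent page fault on it has fully completed. */
    static void release_data_page(struct page *page)
    {
            /*
             * A concurrent fault handler that returns VM_FAULT_LOCKED
             * holds the page lock until the MM core has installed the
             * PTE. Acquiring the lock here therefore blocks until no
             * fault is still in flight for this page ...
             */
            lock_page(page);
            /* ... and releasing it immediately is enough: we only
             * needed the barrier, not the lock itself. */
            unlock_page(page);

            /*
             * Drop the data area's reference. If a fault already
             * mapped the page, the extra refcount it took via
             * get_page() keeps the page alive until
             * unmap_mapping_range() tears the mapping down.
             */
            __free_page(page);
    }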
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 0ca5ec14d3db..0173f44b015a 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -20,6 +20,7 @@
 #include <linux/configfs.h>
 #include <linux/mutex.h>
 #include <linux/workqueue.h>
+#include <linux/pagemap.h>
 #include <net/genetlink.h>
 #include <scsi/scsi_common.h>
 #include <scsi/scsi_proto.h>
@@ -1667,6 +1668,26 @@ static u32 tcmu_blocks_release(struct tcmu_dev *udev, unsigned long first,
 	xas_lock(&xas);
 	xas_for_each(&xas, page, (last + 1) * udev->data_pages_per_blk - 1) {
 		xas_store(&xas, NULL);
+		/*
+		 * While reaching here there may be page faults occurring on
+		 * the to-be-released pages. A race condition may occur if
+		 * unmap_mapping_range() is called before page faults on these
+		 * pages have completed; a valid but stale map is created.
+		 *
+		 * If another command subsequently runs and needs to extend
+		 * dbi_thresh, it may reuse the slot corresponding to the
+		 * previous page in data_bitmap. Though we will allocate a new
+		 * page for the slot in data_area, no page fault will happen
+		 * because we have a valid map. Therefore the command's data
+		 * will be lost.
+		 *
+		 * We lock and unlock pages that are to be released to ensure
+		 * all page faults have completed. This way
+		 * unmap_mapping_range() can ensure stale maps are cleanly
+		 * removed.
+		 */
+		lock_page(page);
+		unlock_page(page);
 		__free_page(page);
 		pages_freed++;
 	}
@@ -1822,6 +1843,7 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
 	page = xa_load(&udev->data_pages, dpi);
 	if (likely(page)) {
 		get_page(page);
+		lock_page(page);
 		mutex_unlock(&udev->cmdr_lock);
 		return page;
 	}
@@ -1863,6 +1885,7 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
 	struct page *page;
 	unsigned long offset;
 	void *addr;
+	vm_fault_t ret = 0;
 
 	int mi = tcmu_find_mem_index(vmf->vma);
 	if (mi < 0)
@@ -1887,10 +1910,11 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
 		page = tcmu_try_get_data_page(udev, dpi);
 		if (!page)
 			return VM_FAULT_SIGBUS;
+		ret = VM_FAULT_LOCKED;
 	}
 
 	vmf->page = page;
-	return 0;
+	return ret;
 }
 
 static const struct vm_operations_struct tcmu_vm_ops = {
@@ -3153,12 +3177,22 @@ static void find_free_blocks(void)
 			udev->dbi_max = block;
 		}
 
+		/*
+		 * Release the block pages.
+		 *
+		 * Also note that since tcmu_vma_fault() gets an extra page
+		 * refcount, tcmu_blocks_release() won't free pages if pages
+		 * are mapped. This means it is safe to call
+		 * tcmu_blocks_release() before unmap_mapping_range() which
+		 * drops the refcount of any pages it unmaps and thus releases
+		 * them.
+		 */
+		pages_freed = tcmu_blocks_release(udev, start, end - 1);
+
 		/* Here will truncate the data area from off */
 		off = udev->data_off + (loff_t)start * udev->data_blk_size;
 		unmap_mapping_range(udev->inode->i_mapping, off, 0, 1);
 
-		/* Release the block pages */
-		pages_freed = tcmu_blocks_release(udev, start, end - 1);
 		mutex_unlock(&udev->cmdr_lock);
 
 		total_pages_freed += pages_freed;
-- 
2.35.1
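
Postscript: the handler side of the same contract, reduced to a toy
driver whose whole VMA is backed by a single page kept in
vm_private_data. This is a sketch under that assumption; the demo_*
names are made up and this is not the tcmu code. It shows why the
lock/unlock barrier above works: VM_FAULT_LOCKED hands the core MM a
locked page, and the core unlocks it only after finish_fault() has
installed the PTE.

    #include <linux/mm.h>
    #include <linux/pagemap.h>

    /* Toy fault handler: the backing page is stashed in vm_private_data. */
    static vm_fault_t demo_vma_fault(struct vm_fault *vmf)
    {
            struct page *page = vmf->vma->vm_private_data;

            if (!page)
                    return VM_FAULT_SIGBUS;

            get_page(page);   /* the new mapping takes its own reference */
            lock_page(page);  /* held until the MM core installs the PTE */
            vmf->page = page;

            /*
             * VM_FAULT_LOCKED tells the core we returned the page locked;
             * it unlocks only after finish_fault(), closing the window in
             * which a racing truncate could observe a half-built mapping.
             */
            return VM_FAULT_LOCKED;
    }

    static const struct vm_operations_struct demo_vm_ops = {
            .fault = demo_vma_fault,
    };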