2022-05-30 18:29:29

by Sasha Levin

Subject: [PATCH AUTOSEL 5.18 074/159] scsi: target: tcmu: Fix possible data corruption

From: Xiaoguang Wang <[email protected]>

[ Upstream commit bb9b9eb0ae2e9d3f6036f0ad907c3a83dcd43485 ]

When tcmu_vma_fault() successfully gets a page, find_free_blocks() may run
and call unmap_mapping_range() to unmap that page before the current
context completes the page fault procedure. If find_free_blocks() finishes
first and the interrupted page fault procedure then resumes and completes,
a truncated page ends up mapped to userspace. Note, however, that
tcmu_vma_fault() has taken a refcount on the page, so no other subsystem
can reuse the page unless the userspace address is unmapped later.

If another command subsequently runs and needs to extend dbi_thresh, it
may reuse the slot in data_bitmap that corresponds to the previous page.
Then, although we allocate a new page for this slot in data_area, no page
fault will happen because userspace still holds a valid mapping, and the
real request's data will be lost.
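
Concretely, the problematic interleaving looks like this (an illustrative
timeline reconstructed from the description above):

  page fault path                        find_free_blocks()
  ---------------                        ------------------
  tcmu_vma_fault():
    xa_load() + get_page()
    returns; PTE not yet installed
                                         unmap_mapping_range()
                                           (no PTE installed yet,
                                            nothing to unmap)
                                         tcmu_blocks_release()
                                           (page kept alive by the
                                            fault's extra refcount)
  fault completes, PTE installed
    -> userspace now maps a page that
       is no longer part of data_area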

Filesystem implementations also run into this issue, but they usually lock
the page when vm_operations_struct->fault gets a page and unlock it after
finish_fault() completes. For truncate, filesystems lock pages in
truncate_inode_pages() to protect against racing with page faults.
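
Schematically, that protocol looks as follows. This is an illustrative
sketch of the generic pattern, not code from any particular filesystem;
example_lookup_page() is a hypothetical helper standing in for the
filesystem's page lookup:

  /* Fault side: return the page referenced and locked. */
  static vm_fault_t example_fault(struct vm_fault *vmf)
  {
  	struct page *page = example_lookup_page(vmf);

  	if (!page)
  		return VM_FAULT_SIGBUS;
  	get_page(page);
  	lock_page(page);	/* stays locked across PTE installation */
  	vmf->page = page;
  	return VM_FAULT_LOCKED;	/* core MM unlocks after finish_fault() */
  }

  /* Truncate side: the page lock acts as a barrier. */
  lock_page(page);	/* waits for any in-flight fault to finish */
  unlock_page(page);
  /* now safe to remove the mapping and free the page */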

To fix this possible data corruption scenario we can apply a method
similar to the filesystems': make tcmu_blocks_release() lock and unlock
each page that is to be freed, and make tcmu_vma_fault() also lock the
found page under cmdr_lock. At the same time, since tcmu_vma_fault() takes
an extra page refcount, tcmu_blocks_release() won't free pages that are
still in the page fault procedure, which means it is safe to call
tcmu_blocks_release() before unmap_mapping_range().

With these changes, tcmu_blocks_release() waits for all page faults on the
pages being released to complete, and the subsequent unmap_mapping_range()
call then ensures that any stale mappings are removed.

Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Bodo Stroesser <[email protected]>
Signed-off-by: Xiaoguang Wang <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/target/target_core_user.c | 40 ++++++++++++++++++++++++++++---
1 file changed, 37 insertions(+), 3 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index fd7267baa707..b1fd06edea59 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -20,6 +20,7 @@
#include <linux/configfs.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>
+#include <linux/pagemap.h>
#include <net/genetlink.h>
#include <scsi/scsi_common.h>
#include <scsi/scsi_proto.h>
@@ -1667,6 +1668,26 @@ static u32 tcmu_blocks_release(struct tcmu_dev *udev, unsigned long first,
xas_lock(&xas);
xas_for_each(&xas, page, (last + 1) * udev->data_pages_per_blk - 1) {
xas_store(&xas, NULL);
+ /*
+ * While reaching here there may be page faults occurring on
+ * the to-be-released pages. A race condition may occur if
+ * unmap_mapping_range() is called before page faults on these
+ * pages have completed; a valid but stale map is created.
+ *
+ * If another command subsequently runs and needs to extend
+ * dbi_thresh, it may reuse the slot corresponding to the
+ * previous page in data_bitmap. Though we will allocate a new
+ * page for the slot in data_area, no page fault will happen
+ * because we have a valid map. Therefore the command's data
+ * will be lost.
+ *
+ * We lock and unlock pages that are to be released to ensure
+ * all page faults have completed. This way
+ * unmap_mapping_range() can ensure stale maps are cleanly
+ * removed.
+ */
+ lock_page(page);
+ unlock_page(page);
__free_page(page);
pages_freed++;
}
@@ -1822,6 +1843,7 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
page = xa_load(&udev->data_pages, dpi);
if (likely(page)) {
get_page(page);
+ lock_page(page);
mutex_unlock(&udev->cmdr_lock);
return page;
}
@@ -1863,6 +1885,7 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
struct page *page;
unsigned long offset;
void *addr;
+ vm_fault_t ret = 0;

int mi = tcmu_find_mem_index(vmf->vma);
if (mi < 0)
@@ -1887,10 +1910,11 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
page = tcmu_try_get_data_page(udev, dpi);
if (!page)
return VM_FAULT_SIGBUS;
+ ret = VM_FAULT_LOCKED;
}

vmf->page = page;
- return 0;
+ return ret;
}

static const struct vm_operations_struct tcmu_vm_ops = {
@@ -3205,12 +3229,22 @@ static void find_free_blocks(void)
udev->dbi_max = block;
}

+ /*
+ * Release the block pages.
+ *
+ * Also note that since tcmu_vma_fault() gets an extra page
+ * refcount, tcmu_blocks_release() won't free pages if pages
+ * are mapped. This means it is safe to call
+ * tcmu_blocks_release() before unmap_mapping_range() which
+ * drops the refcount of any pages it unmaps and thus releases
+ * them.
+ */
+ pages_freed = tcmu_blocks_release(udev, start, end - 1);
+
/* Here will truncate the data area from off */
off = udev->data_off + (loff_t)start * udev->data_blk_size;
unmap_mapping_range(udev->inode->i_mapping, off, 0, 1);

- /* Release the block pages */
- pages_freed = tcmu_blocks_release(udev, start, end - 1);
mutex_unlock(&udev->cmdr_lock);

total_pages_freed += pages_freed;
--
2.35.1



2022-06-01 04:26:09

by Bodo Stroesser

Subject: Re: [PATCH AUTOSEL 5.18 074/159] scsi: target: tcmu: Fix possible data corruption

Sasha,

the below patch introduces a new bug, which is fixed by commit
325d5c5fb216 ("scsi: target: tcmu: Avoid holding XArray lock when calling lock_page")
Please consider adding this further fix.
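
For context: the new bug is that lock_page() may sleep, but the patch
below calls it while holding the XArray spinlock taken via xas_lock(). A
minimal sketch of how the follow-up commit sidesteps that (assuming the
xa_for_each_range()/xa_erase() form it uses; see 325d5c5fb216 itself for
the exact change):

  /*
   * xa_for_each_range() and xa_erase() take and drop the XArray lock
   * internally, per entry, so lock_page() below runs without a
   * spinlock held and may safely sleep.
   */
  xa_for_each_range(&udev->data_pages, dpi, page,
  		    first * udev->data_pages_per_blk,
  		    (last + 1) * udev->data_pages_per_blk - 1) {
  	xa_erase(&udev->data_pages, dpi);
  	lock_page(page);	/* wait for in-flight page faults */
  	unlock_page(page);
  	__free_page(page);
  	pages_freed++;
  }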

For my understanding: commit 325d5c5fb216 contains a "Fixes:"
tag. So I'd expect it to be added automatically.
Is there still something missing in the commit?

Thank you,
Bodo


On 30.05.22 15:22, Sasha Levin wrote:
> [ full patch quoted above; trimmed ]

2022-06-06 04:54:29

by Sasha Levin

Subject: Re: [PATCH AUTOSEL 5.18 074/159] scsi: target: tcmu: Fix possible data corruption

On Mon, May 30, 2022 at 05:47:12PM +0200, Bodo Stroesser wrote:
>Sasha,
>
>the below patch introduces a new bug, which is fixed by commit
> 325d5c5fb216 ("scsi: target: tcmu: Avoid holding XArray lock when calling lock_page")
>Please consider adding this further fix.
>
>For my understanding: commit 325d5c5fb216 contains a "Fixes:"
>tag. So I'd expect it to be added automatically.
>Is there still something missing in the commit?

I'll make sure it's added along with this one, thanks!

--
Thanks,
Sasha