2014-08-21 16:07:18

by Christoph Hellwig


This series fixes a wide range of issues with the pnfs block layout driver.

Before this we were seeing frequent silent data corruption, soft lockups,
and kernel crashes when running both user applications and test cases like
xfstests.

After this rewrite of the I/O path we've sorted out all issues under normal
operations, although error handling in the block layout driver and its
interaction with the core nfs and pnfs code still needs further work.

It was pointed out that the requirement for blocksizes smaller than or equal
to the page size conflicts with existing servers. From the state of the code
I'm very sure it was never used for real against those servers, but as
soon as someone signs up to test and fix support for these servers I'm
happy to lend a helping hand, although rock-solid support is probably
impossible.

This work was sponsored by NetApp, Inc.

Changes since V1:
- added two more layout stateid handling fixes
- changed the layoutget path so that the layout driver is responsible for
freeing the spliced-in payload.



2014-08-21 16:07:24

by Christoph Hellwig

Subject: [PATCH 03/19] pnfs: force a layout commit when encountering busy segments during recall

Expedite layout recall processing by forcing a layout commit when
we see busy segments. Without it the layout recall might have to wait
until the VM decides to start writeback for the file, which can introduce
long delays.
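
Condensed from the diff below, the recall path now looks like this
(a sketch; the layout lookup and error handling are as in the full patch):

    spin_lock(&ino->i_lock);
    if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags)) {
        rv = NFS4ERR_DELAY;
    } else if (pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
                                                &args->cbl_range)) {
        /* busy segments: delay the recall and push the data out below */
        need_commit = true;
        rv = NFS4ERR_DELAY;
    }
    pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
    spin_unlock(&ino->i_lock);

    if (need_commit)
        pnfs_layoutcommit_inode(ino, false);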

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/callback_proc.c | 16 +++++++++++-----
fs/nfs/pnfs.c | 3 +++
2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
index 41db525..bf017b0 100644
--- a/fs/nfs/callback_proc.c
+++ b/fs/nfs/callback_proc.c
@@ -164,6 +164,7 @@ static u32 initiate_file_draining(struct nfs_client *clp,
struct inode *ino;
struct pnfs_layout_hdr *lo;
u32 rv = NFS4ERR_NOMATCHING_LAYOUT;
+ bool need_commit = false;
LIST_HEAD(free_me_list);

lo = get_layout_by_fh(clp, &args->cbl_fh, &args->cbl_stateid);
@@ -172,16 +173,21 @@ static u32 initiate_file_draining(struct nfs_client *clp,

ino = lo->plh_inode;
spin_lock(&ino->i_lock);
- if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
- pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
- &args->cbl_range))
+ if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags)) {
rv = NFS4ERR_DELAY;
- else
- rv = NFS4ERR_NOMATCHING_LAYOUT;
+ } else if (pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
+ &args->cbl_range)) {
+ need_commit = true;
+ rv = NFS4ERR_DELAY;
+ }
+
pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
spin_unlock(&ino->i_lock);
pnfs_free_lseg_list(&free_me_list);
pnfs_put_layout_hdr(lo);
+
+ if (need_commit)
+ pnfs_layoutcommit_inode(ino, false);
iput(ino);
out:
return rv;
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 6e0fa71..242e73f 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -604,6 +604,9 @@ pnfs_layout_free_bulk_destroy_list(struct list_head *layout_list,
spin_unlock(&inode->i_lock);
pnfs_free_lseg_list(&lseg_list);
pnfs_put_layout_hdr(lo);
+
+ if (ret)
+ pnfs_layoutcommit_inode(inode, false);
iput(inode);
}
return ret;
--
1.9.1


2014-08-26 09:06:50

by Boaz Harrosh

Subject: Re: [PATCH 16/19] pnfs/blocklayout: rewrite extent tracking

On 08/25/2014 05:43 PM, Christoph Hellwig wrote:
> On Mon, Aug 25, 2014 at 10:36:09AM -0400, Anna Schumaker wrote:
>> Whoa, I like the amount of code you manage to remove in this patch! However, there is a lot going on here. Any way this patch could be split into incremental changes? (it's okay to say "no"!)
>
> The answer really is no - it's a wholesale replacement of three tracking
> structures with an integrated one that is managed very differently. The only
> thing I had split out earlier were various trivial bug fixes like sector
> truncations in code that gets ripped out entirely later, so I decided to
> fold those again, which didn't increase the patch size by more than 10 lines.
>

Right, same here. Anna, it is best to just go read the code and not
dare look at the patch itself. The patch will just complicate what is
simpler at the code level.

I have seen this patch, and it is on my todo list to review the new code
wholesale after this patch, to learn the nice structures Christoph
has introduced here.

Thanks
Boaz


2014-08-26 14:53:01

by Boaz Harrosh

Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On 08/26/2014 05:37 PM, Boaz Harrosh wrote:
> On 08/26/2014 05:26 PM, Trond Myklebust wrote:
>> On Tue, Aug 26, 2014 at 10:10 AM, Boaz Harrosh <[email protected]> wrote:
>>> +
>>> + /* kick out any segs held by need to commit */
>>> + pnfs_layoutcommit_inode(ino, true);
>>
>> Making this call synchronous could deadlock the entire back channel.
>> Is there any reason why it can't just be made asynchronous?
>>
>
> We were just talking about that.
>
> So the logic here is that we want to save round trips and make this
> as efficient as possible with no extra round trips for the server
> recall. A single RECALL => LAYOUT_COMMIT => LAYOUT_COMMIT_REPLY =>
> RECALL_REPLY_NO_MATCHING.
>
> Please explain the deadlock you foresee. The worst case is that the
> misbehaving server will time out, and after a long time the
> RECALL_REPLY will return with an error. But why do you say deadlock -
> how can this deadlock?
>
> Otherwise Christoph's version of this patch does it the asynchronous
> way, which will always cause another poll of the RECALL and more delays
> for every RECALL operation, which I was trying to avoid.
>

Also, for any client there is only a single server on its backchannel -

do you mean that ANY recall from any server can only come in on a single
thread? I did not know that; I thought it was from a pool of RPC threads.

So are you saying that waiting on this server operation will cause
any other server's recalls to some other mounts to stall until the
operation is complete for this server?
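
(For reference, the two variants under discussion differ only in the
sync flag passed to pnfs_layoutcommit_inode, as used in the patches in
this thread:

    pnfs_layoutcommit_inode(ino, true);  /* synchronous: wait for the reply */
    pnfs_layoutcommit_inode(ino, false); /* asynchronous: fire and return */
)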

Thanks
Boaz


2014-08-21 16:07:32

by Christoph Hellwig

Subject: [PATCH 07/19] pnfs: add flag to force read-modify-write in ->write_begin

Like all block based filesystems, the pNFS block layout driver can't read
or write at a byte granularity and thus has to perform read-modify-write
cycles on writes smaller than this granularity.

Add a flag so that the core NFS code always reads a whole page when
starting a smaller write, so that we can do it in the place where the VFS
expects it instead of doing it in a very deadlock-prone way in the
writeback handler.

Note that in theory we could do smaller than page size reads here for disks
that have a smaller sector size and are served by a server with a smaller
pnfs block size. But so far that doesn't seem like a worthwhile
optimization.
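
The corresponding check in nfs_want_read_modify_write(), condensed from
the diff below: with PNFS_READ_WHOLE_PAGE set, any write to a page that
is not already uptodate triggers a read-in of the full page first:

    if (server->pnfs_curr_ld &&
        (server->pnfs_curr_ld->flags & PNFS_READ_WHOLE_PAGE)) {
        if (!PageUptodate(page))
            return 1;
        return 0;
    }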

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/file.c | 9 +++++++++
fs/nfs/pnfs.h | 1 +
2 files changed, 10 insertions(+)

diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 524dd80..58566e9 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -36,6 +36,7 @@
#include "internal.h"
#include "iostat.h"
#include "fscache.h"
+#include "pnfs.h"

#include "nfstrace.h"

@@ -323,10 +324,18 @@ nfs_file_fsync(struct file *file, loff_t start, loff_t end, int datasync)
static int nfs_want_read_modify_write(struct file *file, struct page *page,
loff_t pos, unsigned len)
{
+ struct nfs_server *server = NFS_SERVER(file->f_mapping->host);
unsigned int pglen = nfs_page_length(page);
unsigned int offset = pos & (PAGE_CACHE_SIZE - 1);
unsigned int end = offset + len;

+ if (server->pnfs_curr_ld &&
+ (server->pnfs_curr_ld->flags & PNFS_READ_WHOLE_PAGE)) {
+ if (!PageUptodate(page))
+ return 1;
+ return 0;
+ }
+
if ((file->f_mode & FMODE_READ) && /* open for read? */
!PageUptodate(page) && /* Uptodate? */
!PagePrivate(page) && /* i/o request already? */
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index 16cd14f..302b279 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -72,6 +72,7 @@ enum layoutdriver_policy_flags {
/* Should the pNFS client commit and return the layout upon a setattr */
PNFS_LAYOUTRET_ON_SETATTR = 1 << 0,
PNFS_LAYOUTRET_ON_ERROR = 1 << 1,
+ PNFS_READ_WHOLE_PAGE = 1 << 2,
};

struct nfs4_deviceid_node;
--
1.9.1


2014-08-21 16:07:47

by Christoph Hellwig

Subject: [PATCH 14/19] pnfs/blocklayout: remove read-modify-write handling in bl_write_pagelist

Use the new PNFS_READ_WHOLE_PAGE flag to offload read-modify-write
handling to core nfs code, and remove a huge chunk of deadlock-prone
mess from the block layout writeback path.
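
With the read-modify-write code gone, the write loop condenses to
whole-page I/O (condensed from the diff below): the offset is aligned
down and every page is submitted in full:

    /* we always write out the whole page */
    offset = offset & (loff_t)PAGE_CACHE_MASK;
    isect = offset >> SECTOR_SHIFT;
    ...
    bio = do_add_page_to_bio(bio, header->page_array.npages - i,
                             WRITE, isect, pages[i], be,
                             bl_end_io_write, par,
                             0, PAGE_CACHE_SIZE);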

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 500 +++++----------------------------------
1 file changed, 64 insertions(+), 436 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 87a633d..6484b9f 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -35,7 +35,6 @@
#include <linux/mount.h>
#include <linux/namei.h>
#include <linux/bio.h> /* struct bio */
-#include <linux/buffer_head.h> /* various write calls */
#include <linux/prefetch.h>
#include <linux/pagevec.h>

@@ -188,16 +187,6 @@ retry:
return bio;
}

-static struct bio *bl_add_page_to_bio(struct bio *bio, int npg, int rw,
- sector_t isect, struct page *page,
- struct pnfs_block_extent *be,
- void (*end_io)(struct bio *, int err),
- struct parallel_io *par)
-{
- return do_add_page_to_bio(bio, npg, rw, isect, page, be,
- end_io, par, 0, PAGE_CACHE_SIZE);
-}
-
/* This is basically copied from mpage_end_io_read */
static void bl_end_io_read(struct bio *bio, int err)
{
@@ -293,8 +282,8 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
}
}

+ pg_offset = f_offset & ~PAGE_CACHE_MASK;
if (is_dio) {
- pg_offset = f_offset & ~PAGE_CACHE_MASK;
if (pg_offset + bytes_left > PAGE_CACHE_SIZE)
pg_len = PAGE_CACHE_SIZE - pg_offset;
else
@@ -305,7 +294,7 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
isect += (pg_offset >> SECTOR_SHIFT);
extent_length -= (pg_offset >> SECTOR_SHIFT);
} else {
- pg_offset = 0;
+ BUG_ON(pg_offset != 0);
pg_len = PAGE_CACHE_SIZE;
}

@@ -383,29 +372,6 @@ static void mark_extents_written(struct pnfs_block_layout *bl,
}
}

-static void bl_end_io_write_zero(struct bio *bio, int err)
-{
- struct parallel_io *par = bio->bi_private;
- struct bio_vec *bvec;
- int i;
-
- bio_for_each_segment_all(bvec, bio, i) {
- /* This is the zeroing page we added */
- end_page_writeback(bvec->bv_page);
- page_cache_release(bvec->bv_page);
- }
-
- if (unlikely(err)) {
- struct nfs_pgio_header *header = par->data;
-
- if (!header->pnfs_error)
- header->pnfs_error = -EIO;
- pnfs_set_lo_fail(header->lseg);
- }
- bio_put(bio);
- put_parallel(par);
-}
-
static void bl_end_io_write(struct bio *bio, int err)
{
struct parallel_io *par = bio->bi_private;
@@ -455,256 +421,22 @@ static void bl_end_par_io_write(void *data, int num_se)
schedule_work(&hdr->task.u.tk_work);
}

-/* FIXME STUB - mark intersection of layout and page as bad, so is not
- * used again.
- */
-static void mark_bad_read(void)
-{
- return;
-}
-
-/*
- * map_block: map a requested I/0 block (isect) into an offset in the LVM
- * block_device
- */
-static void
-map_block(struct buffer_head *bh, sector_t isect, struct pnfs_block_extent *be)
-{
- dprintk("%s enter be=%p\n", __func__, be);
-
- set_buffer_mapped(bh);
- bh->b_bdev = be->be_mdev;
- bh->b_blocknr = (isect - be->be_f_offset + be->be_v_offset) >>
- (be->be_mdev->bd_inode->i_blkbits - SECTOR_SHIFT);
-
- dprintk("%s isect %llu, bh->b_blocknr %ld, using bsize %Zd\n",
- __func__, (unsigned long long)isect, (long)bh->b_blocknr,
- bh->b_size);
- return;
-}
-
-static void
-bl_read_single_end_io(struct bio *bio, int error)
-{
- struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
- struct page *page = bvec->bv_page;
-
- /* Only one page in bvec */
- unlock_page(page);
-}
-
-static int
-bl_do_readpage_sync(struct page *page, struct pnfs_block_extent *be,
- unsigned int offset, unsigned int len)
-{
- struct bio *bio;
- struct page *shadow_page;
- sector_t isect;
- char *kaddr, *kshadow_addr;
- int ret = 0;
-
- dprintk("%s: offset %u len %u\n", __func__, offset, len);
-
- shadow_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
- if (shadow_page == NULL)
- return -ENOMEM;
-
- bio = bio_alloc(GFP_NOIO, 1);
- if (bio == NULL)
- return -ENOMEM;
-
- isect = (page->index << PAGE_CACHE_SECTOR_SHIFT) +
- (offset / SECTOR_SIZE);
-
- bio->bi_iter.bi_sector = isect - be->be_f_offset + be->be_v_offset;
- bio->bi_bdev = be->be_mdev;
- bio->bi_end_io = bl_read_single_end_io;
-
- lock_page(shadow_page);
- if (bio_add_page(bio, shadow_page,
- SECTOR_SIZE, round_down(offset, SECTOR_SIZE)) == 0) {
- unlock_page(shadow_page);
- bio_put(bio);
- return -EIO;
- }
-
- submit_bio(READ, bio);
- wait_on_page_locked(shadow_page);
- if (unlikely(!test_bit(BIO_UPTODATE, &bio->bi_flags))) {
- ret = -EIO;
- } else {
- kaddr = kmap_atomic(page);
- kshadow_addr = kmap_atomic(shadow_page);
- memcpy(kaddr + offset, kshadow_addr + offset, len);
- kunmap_atomic(kshadow_addr);
- kunmap_atomic(kaddr);
- }
- __free_page(shadow_page);
- bio_put(bio);
-
- return ret;
-}
-
-static int
-bl_read_partial_page_sync(struct page *page, struct pnfs_block_extent *be,
- unsigned int dirty_offset, unsigned int dirty_len,
- bool full_page)
-{
- int ret = 0;
- unsigned int start, end;
-
- if (full_page) {
- start = 0;
- end = PAGE_CACHE_SIZE;
- } else {
- start = round_down(dirty_offset, SECTOR_SIZE);
- end = round_up(dirty_offset + dirty_len, SECTOR_SIZE);
- }
-
- dprintk("%s: offset %u len %d\n", __func__, dirty_offset, dirty_len);
- if (!be) {
- zero_user_segments(page, start, dirty_offset,
- dirty_offset + dirty_len, end);
- if (start == 0 && end == PAGE_CACHE_SIZE &&
- trylock_page(page)) {
- SetPageUptodate(page);
- unlock_page(page);
- }
- return ret;
- }
-
- if (start != dirty_offset)
- ret = bl_do_readpage_sync(page, be, start, dirty_offset - start);
-
- if (!ret && (dirty_offset + dirty_len < end))
- ret = bl_do_readpage_sync(page, be, dirty_offset + dirty_len,
- end - dirty_offset - dirty_len);
-
- return ret;
-}
-
-/* Given an unmapped page, zero it or read in page for COW, page is locked
- * by caller.
- */
-static int
-init_page_for_write(struct page *page, struct pnfs_block_extent *cow_read)
-{
- struct buffer_head *bh = NULL;
- int ret = 0;
- sector_t isect;
-
- dprintk("%s enter, %p\n", __func__, page);
- BUG_ON(PageUptodate(page));
- if (!cow_read) {
- zero_user_segment(page, 0, PAGE_SIZE);
- SetPageUptodate(page);
- goto cleanup;
- }
-
- bh = alloc_page_buffers(page, PAGE_CACHE_SIZE, 0);
- if (!bh) {
- ret = -ENOMEM;
- goto cleanup;
- }
-
- isect = (sector_t) page->index << PAGE_CACHE_SECTOR_SHIFT;
- map_block(bh, isect, cow_read);
- if (!bh_uptodate_or_lock(bh))
- ret = bh_submit_read(bh);
- if (ret)
- goto cleanup;
- SetPageUptodate(page);
-
-cleanup:
- if (bh)
- free_buffer_head(bh);
- if (ret) {
- /* Need to mark layout with bad read...should now
- * just use nfs4 for reads and writes.
- */
- mark_bad_read();
- }
- return ret;
-}
-
-/* Find or create a zeroing page marked being writeback.
- * Return ERR_PTR on error, NULL to indicate skip this page and page itself
- * to indicate write out.
- */
-static struct page *
-bl_find_get_zeroing_page(struct inode *inode, pgoff_t index,
- struct pnfs_block_extent *cow_read)
-{
- struct page *page;
- int locked = 0;
- page = find_get_page(inode->i_mapping, index);
- if (page)
- goto check_page;
-
- page = find_or_create_page(inode->i_mapping, index, GFP_NOFS);
- if (unlikely(!page)) {
- dprintk("%s oom\n", __func__);
- return ERR_PTR(-ENOMEM);
- }
- locked = 1;
-
-check_page:
- /* PageDirty: Other will write this out
- * PageWriteback: Other is writing this out
- * PageUptodate: It was read before
- */
- if (PageDirty(page) || PageWriteback(page)) {
- print_page(page);
- if (locked)
- unlock_page(page);
- page_cache_release(page);
- return NULL;
- }
-
- if (!locked) {
- lock_page(page);
- locked = 1;
- goto check_page;
- }
- if (!PageUptodate(page)) {
- /* New page, readin or zero it */
- init_page_for_write(page, cow_read);
- }
- set_page_writeback(page);
- unlock_page(page);
-
- return page;
-}
-
static enum pnfs_try_status
bl_write_pagelist(struct nfs_pgio_header *header, int sync)
{
- int i, ret, npg_zero, pg_index, last = 0;
+ int i, ret;
struct bio *bio = NULL;
- struct pnfs_block_extent *be = NULL, *cow_read = NULL;
- sector_t isect, last_isect = 0, extent_length = 0;
+ struct pnfs_block_extent *be = NULL;
+ sector_t isect, extent_length = 0;
struct parallel_io *par = NULL;
loff_t offset = header->args.offset;
size_t count = header->args.count;
- unsigned int pg_offset, pg_len, saved_len;
struct page **pages = header->args.pages;
- struct page *page;
- pgoff_t index;
- u64 temp;
- int npg_per_block =
- NFS_SERVER(header->inode)->pnfs_blksize >> PAGE_CACHE_SHIFT;
+ int pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
struct blk_plug plug;

dprintk("%s enter, %Zu@%lld\n", __func__, count, offset);

- blk_start_plug(&plug);
-
- if (header->dreq != NULL &&
- (!IS_ALIGNED(offset, NFS_SERVER(header->inode)->pnfs_blksize) ||
- !IS_ALIGNED(count, NFS_SERVER(header->inode)->pnfs_blksize))) {
- dprintk("pnfsblock nonblock aligned DIO writes. Resend MDS\n");
- goto out_mds;
- }
/* At this point, header->page_array is a (sequential) list of nfs_pages.
* We want to write each, and if there is an error set pnfs_error
* to have it redone using nfs.
@@ -715,97 +447,20 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
par->pnfs_callback = bl_end_par_io_write;
/* At this point, have to be more careful with error handling */

- isect = (sector_t) ((offset & (long)PAGE_CACHE_MASK) >> SECTOR_SHIFT);
- be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg), isect, &cow_read);
- if (!be || !is_writable(be, isect)) {
- dprintk("%s no matching extents!\n", __func__);
- goto out_mds;
- }
-
- /* First page inside INVALID extent */
- if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
- if (likely(!bl_push_one_short_extent(be->be_inval)))
- par->bse_count++;
- else
- goto out_mds;
- temp = offset >> PAGE_CACHE_SHIFT;
- npg_zero = do_div(temp, npg_per_block);
- isect = (sector_t) (((offset - npg_zero * PAGE_CACHE_SIZE) &
- (long)PAGE_CACHE_MASK) >> SECTOR_SHIFT);
- extent_length = be->be_length - (isect - be->be_f_offset);
-
-fill_invalid_ext:
- dprintk("%s need to zero %d pages\n", __func__, npg_zero);
- for (;npg_zero > 0; npg_zero--) {
- if (bl_is_sector_init(be->be_inval, isect)) {
- dprintk("isect %llu already init\n",
- (unsigned long long)isect);
- goto next_page;
- }
- /* page ref released in bl_end_io_write_zero */
- index = isect >> PAGE_CACHE_SECTOR_SHIFT;
- dprintk("%s zero %dth page: index %lu isect %llu\n",
- __func__, npg_zero, index,
- (unsigned long long)isect);
- page = bl_find_get_zeroing_page(header->inode, index,
- cow_read);
- if (unlikely(IS_ERR(page))) {
- header->pnfs_error = PTR_ERR(page);
- goto out;
- } else if (page == NULL)
- goto next_page;
-
- ret = bl_mark_sectors_init(be->be_inval, isect,
- PAGE_CACHE_SECTORS);
- if (unlikely(ret)) {
- dprintk("%s bl_mark_sectors_init fail %d\n",
- __func__, ret);
- end_page_writeback(page);
- page_cache_release(page);
- header->pnfs_error = ret;
- goto out;
- }
- if (likely(!bl_push_one_short_extent(be->be_inval)))
- par->bse_count++;
- else {
- end_page_writeback(page);
- page_cache_release(page);
- header->pnfs_error = -ENOMEM;
- goto out;
- }
- /* FIXME: This should be done in bi_end_io */
- mark_extents_written(BLK_LSEG2EXT(header->lseg),
- page->index << PAGE_CACHE_SHIFT,
- PAGE_CACHE_SIZE);
-
- bio = bl_add_page_to_bio(bio, npg_zero, WRITE,
- isect, page, be,
- bl_end_io_write_zero, par);
- if (IS_ERR(bio)) {
- header->pnfs_error = PTR_ERR(bio);
- bio = NULL;
- goto out;
- }
-next_page:
- isect += PAGE_CACHE_SECTORS;
- extent_length -= PAGE_CACHE_SECTORS;
- }
- if (last)
- goto write_done;
- }
- bio = bl_submit_bio(WRITE, bio);
+ blk_start_plug(&plug);

- /* Middle pages */
- pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
+ /* we always write out the whole page */
+ offset = offset & (loff_t)PAGE_CACHE_MASK;
+ isect = offset >> SECTOR_SHIFT;
+
for (i = pg_index; i < header->page_array.npages; i++) {
if (extent_length <= 0) {
/* We've used up the previous extent */
bl_put_extent(be);
- bl_put_extent(cow_read);
bio = bl_submit_bio(WRITE, bio);
/* Get the next one */
be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg),
- isect, &cow_read);
+ isect, NULL);
if (!be || !is_writable(be, isect)) {
header->pnfs_error = -EINVAL;
goto out;
@@ -823,25 +478,10 @@ next_page:
(isect - be->be_f_offset);
}

- dprintk("%s offset %lld count %Zu\n", __func__, offset, count);
- pg_offset = offset & ~PAGE_CACHE_MASK;
- if (pg_offset + count > PAGE_CACHE_SIZE)
- pg_len = PAGE_CACHE_SIZE - pg_offset;
- else
- pg_len = count;
+ BUG_ON(offset & ~PAGE_CACHE_MASK);

- saved_len = pg_len;
if (be->be_state == PNFS_BLOCK_INVALID_DATA &&
!bl_is_sector_init(be->be_inval, isect)) {
- ret = bl_read_partial_page_sync(pages[i], cow_read,
- pg_offset, pg_len, true);
- if (ret) {
- dprintk("%s bl_read_partial_page_sync fail %d\n",
- __func__, ret);
- header->pnfs_error = ret;
- goto out;
- }
-
ret = bl_mark_sectors_init(be->be_inval, isect,
PAGE_CACHE_SECTORS);
if (unlikely(ret)) {
@@ -850,66 +490,31 @@ next_page:
header->pnfs_error = ret;
goto out;
}
-
- /* Expand to full page write */
- pg_offset = 0;
- pg_len = PAGE_CACHE_SIZE;
- } else if ((pg_offset & (SECTOR_SIZE - 1)) ||
- (pg_len & (SECTOR_SIZE - 1))){
- /* ahh, nasty case. We have to do sync full sector
- * read-modify-write cycles.
- */
- unsigned int saved_offset = pg_offset;
- ret = bl_read_partial_page_sync(pages[i], be, pg_offset,
- pg_len, false);
- pg_offset = round_down(pg_offset, SECTOR_SIZE);
- pg_len = round_up(saved_offset + pg_len, SECTOR_SIZE)
- - pg_offset;
}

-
bio = do_add_page_to_bio(bio, header->page_array.npages - i,
- WRITE,
- isect, pages[i], be,
+ WRITE, isect, pages[i], be,
bl_end_io_write, par,
- pg_offset, pg_len);
+ 0, PAGE_CACHE_SIZE);
if (IS_ERR(bio)) {
header->pnfs_error = PTR_ERR(bio);
bio = NULL;
goto out;
}
- offset += saved_len;
- count -= saved_len;
+ offset += PAGE_CACHE_SIZE;
+ count -= PAGE_CACHE_SIZE;
isect += PAGE_CACHE_SECTORS;
- last_isect = isect;
extent_length -= PAGE_CACHE_SECTORS;
}

- /* Last page inside INVALID extent */
- if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
- bio = bl_submit_bio(WRITE, bio);
- temp = last_isect >> PAGE_CACHE_SECTOR_SHIFT;
- npg_zero = npg_per_block - do_div(temp, npg_per_block);
- if (npg_zero < npg_per_block) {
- last = 1;
- goto fill_invalid_ext;
- }
- }
-
-write_done:
header->res.count = header->args.count;
out:
bl_put_extent(be);
- bl_put_extent(cow_read);
bl_submit_bio(WRITE, bio);
blk_finish_plug(&plug);
put_parallel(par);
return PNFS_ATTEMPTED;
out_mds:
- blk_finish_plug(&plug);
- bl_put_extent(be);
- bl_put_extent(cow_read);
- kfree(par);
return PNFS_NOT_ATTEMPTED;
}

@@ -1188,20 +793,45 @@ bl_clear_layoutdriver(struct nfs_server *server)
}

static bool
-is_aligned_req(struct nfs_page *req, unsigned int alignment)
+is_aligned_req(struct nfs_pageio_descriptor *pgio,
+ struct nfs_page *req, unsigned int alignment)
{
- return IS_ALIGNED(req->wb_offset, alignment) &&
- IS_ALIGNED(req->wb_bytes, alignment);
+ /*
+ * Always accept buffered writes, higher layers take care of the
+ * right alignment.
+ */
+ if (pgio->pg_dreq == NULL)
+ return true;
+
+ if (!IS_ALIGNED(req->wb_offset, alignment))
+ return false;
+
+ if (IS_ALIGNED(req->wb_bytes, alignment))
+ return true;
+
+ if (req_offset(req) + req->wb_bytes == i_size_read(pgio->pg_inode)) {
+ /*
+ * If the write goes up to the inode size, just write
+ * the full page. Data past the inode size is
+ * guaranteed to be zeroed by the higher level client
+ * code, and this behaviour is mandated by RFC 5663
+ * section 2.3.2.
+ */
+ return true;
+ }
+
+ return false;
}

static void
bl_pg_init_read(struct nfs_pageio_descriptor *pgio, struct nfs_page *req)
{
- if (pgio->pg_dreq != NULL &&
- !is_aligned_req(req, SECTOR_SIZE))
+ if (!is_aligned_req(pgio, req, SECTOR_SIZE)) {
nfs_pageio_reset_read_mds(pgio);
- else
- pnfs_generic_pg_init_read(pgio, req);
+ return;
+ }
+
+ pnfs_generic_pg_init_read(pgio, req);
}

/*
@@ -1212,10 +842,8 @@ static size_t
bl_pg_test_read(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
struct nfs_page *req)
{
- if (pgio->pg_dreq != NULL &&
- !is_aligned_req(req, SECTOR_SIZE))
+ if (!is_aligned_req(pgio, req, SECTOR_SIZE))
return 0;
-
return pnfs_generic_pg_test(pgio, prev, req);
}

@@ -1245,19 +873,20 @@ static u64 pnfs_num_cont_bytes(struct inode *inode, pgoff_t idx)
static void
bl_pg_init_write(struct nfs_pageio_descriptor *pgio, struct nfs_page *req)
{
- if (pgio->pg_dreq != NULL &&
- !is_aligned_req(req, PAGE_CACHE_SIZE)) {
+ u64 wb_size;
+
+ if (!is_aligned_req(pgio, req, PAGE_SIZE)) {
nfs_pageio_reset_write_mds(pgio);
- } else {
- u64 wb_size;
- if (pgio->pg_dreq == NULL)
- wb_size = pnfs_num_cont_bytes(pgio->pg_inode,
- req->wb_index);
- else
- wb_size = nfs_dreq_bytes_left(pgio->pg_dreq);
-
- pnfs_generic_pg_init_write(pgio, req, wb_size);
+ return;
}
+
+ if (pgio->pg_dreq == NULL)
+ wb_size = pnfs_num_cont_bytes(pgio->pg_inode,
+ req->wb_index);
+ else
+ wb_size = nfs_dreq_bytes_left(pgio->pg_dreq);
+
+ pnfs_generic_pg_init_write(pgio, req, wb_size);
}

/*
@@ -1268,10 +897,8 @@ static size_t
bl_pg_test_write(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
struct nfs_page *req)
{
- if (pgio->pg_dreq != NULL &&
- !is_aligned_req(req, PAGE_CACHE_SIZE))
+ if (!is_aligned_req(pgio, req, PAGE_SIZE))
return 0;
-
return pnfs_generic_pg_test(pgio, prev, req);
}

@@ -1291,6 +918,7 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
.id = LAYOUT_BLOCK_VOLUME,
.name = "LAYOUT_BLOCK_VOLUME",
.owner = THIS_MODULE,
+ .flags = PNFS_READ_WHOLE_PAGE,
.read_pagelist = bl_read_pagelist,
.write_pagelist = bl_write_pagelist,
.alloc_layout_hdr = bl_alloc_layout_hdr,
--
1.9.1


2014-08-21 16:07:40

by Christoph Hellwig

Subject: [PATCH 11/19] pnfs/blocklayout: improve GETDEVICEINFO error reporting

Tell userspace what stage of GETDEVICEINFO failed so that there is a chance
to debug it, especially with the userspace daemon clusterf***k in the block
layout driver.
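
The two failure points now log distinct, unconditional messages
(condensed from the diff below) instead of dprintks that are usually
not enabled:

    printk(KERN_WARNING "%s failed to decode device: %d\n",
           __func__, reply->status);
    ...
    printk(KERN_WARNING "%s failed to open device %d:%d (%ld)\n",
           __func__, reply->major, reply->minor, PTR_ERR(bd));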

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/blocklayout/blocklayoutdev.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayoutdev.c b/fs/nfs/blocklayout/blocklayoutdev.c
index 04303b5..63f77925 100644
--- a/fs/nfs/blocklayout/blocklayoutdev.c
+++ b/fs/nfs/blocklayout/blocklayoutdev.c
@@ -150,7 +150,7 @@ nfs4_blk_decode_device(struct nfs_server *server,
remove_wait_queue(&nn->bl_wq, &wq);

if (reply->status != BL_DEVICE_REQUEST_PROC) {
- dprintk("%s failed to open device: %d\n",
+ printk(KERN_WARNING "%s failed to decode device: %d\n",
__func__, reply->status);
rv = ERR_PTR(-EINVAL);
goto out;
@@ -159,7 +159,8 @@ nfs4_blk_decode_device(struct nfs_server *server,
bd = blkdev_get_by_dev(MKDEV(reply->major, reply->minor),
FMODE_READ, NULL);
if (IS_ERR(bd)) {
- dprintk("%s failed to open device : %ld\n", __func__,
+ printk(KERN_WARNING "%s failed to open device %d:%d (%ld)\n",
+ __func__, reply->major, reply->minor,
PTR_ERR(bd));
rv = ERR_CAST(bd);
goto out;
--
1.9.1


2014-08-21 16:07:26

by Christoph Hellwig

Subject: [PATCH 04/19] pnfs: don't check sequence on new stateids in layoutget

When layoutget returns an entirely new layout stateid it should not
check the generation counter, as the new stateid will start with a new
counter entirely unrelated to the old one.

The current behavior causes constant layoutget failures against a block
server which allocates a new stateid after a recall that removed all
outstanding layouts.
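
Condensed from the diff below, layoutget reply processing now separates
the two cases:

    if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
        /* existing stateid: the sequence number must still be valid */
        if (pnfs_layout_stateid_blocked(lo, &res->stateid))
            goto out_forget_reply;
        pnfs_set_layout_stateid(lo, &res->stateid, false);
    } else {
        /* entirely new stateid: invalidate old segments, take it as-is */
        pnfs_mark_matching_lsegs_invalid(lo, &free_me, NULL);
        nfs4_stateid_copy(&lo->plh_stateid, &res->stateid);
        lo->plh_barrier = be32_to_cpu(res->stateid.seqid);
    }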

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/pnfs.c | 36 +++++++++++++++++++-----------------
1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 242e73f..32a14a6 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -685,17 +685,6 @@ static bool pnfs_seqid_is_newer(u32 s1, u32 s2)
return (s32)(s1 - s2) > 0;
}

-static void
-pnfs_verify_layout_stateid(struct pnfs_layout_hdr *lo,
- const nfs4_stateid *new,
- struct list_head *free_me_list)
-{
- if (nfs4_stateid_match_other(&lo->plh_stateid, new))
- return;
- /* Layout is new! Kill existing layout segments */
- pnfs_mark_matching_lsegs_invalid(lo, free_me_list, NULL);
-}
-
/* update lo->plh_stateid with new if is more recent */
void
pnfs_set_layout_stateid(struct pnfs_layout_hdr *lo, const nfs4_stateid *new,
@@ -1370,16 +1359,29 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
goto out_forget_reply;
}

- if (pnfs_layoutgets_blocked(lo, 1) ||
- pnfs_layout_stateid_blocked(lo, &res->stateid)) {
+ if (pnfs_layoutgets_blocked(lo, 1)) {
dprintk("%s forget reply due to state\n", __func__);
goto out_forget_reply;
}

- /* Check that the new stateid matches the old stateid */
- pnfs_verify_layout_stateid(lo, &res->stateid, &free_me);
- /* Done processing layoutget. Set the layout stateid */
- pnfs_set_layout_stateid(lo, &res->stateid, false);
+ if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
+ /* existing state ID, make sure the sequence number matches. */
+ if (pnfs_layout_stateid_blocked(lo, &res->stateid)) {
+ dprintk("%s forget reply due to sequence\n", __func__);
+ goto out_forget_reply;
+ }
+ pnfs_set_layout_stateid(lo, &res->stateid, false);
+ } else {
+ /*
+ * We got an entirely new state ID. Mark all segments for the
+ * inode invalid, and don't bother validating the stateid
+ * sequence number.
+ */
+ pnfs_mark_matching_lsegs_invalid(lo, &free_me, NULL);
+
+ nfs4_stateid_copy(&lo->plh_stateid, &res->stateid);
+ lo->plh_barrier = be32_to_cpu(res->stateid.seqid);
+ }

pnfs_get_lseg(lseg);
pnfs_layout_insert_lseg(lo, lseg);
--
1.9.1


2014-08-25 14:09:54

by Christoph Hellwig

Subject: Re: [PATCH 08/19] pnfs: add return_range method

On Mon, Aug 25, 2014 at 09:50:34AM -0400, Anna Schumaker wrote:
> > + } else {
> > + if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
> > + NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo,
> > + &args->cbl_range);
> > + }
> > }
> Is there a reason you're nesting the else-if here?

To capture the intent - the first two clauses find excuses why we can't
return quite yet, while this if is for an optional feature in the actual
return path. If I were writing the function from scratch instead of
updating it, I'd actually do something like:

    ...

    rv = NFS4ERR_DELAY;
    if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags))
        goto out_set_stateid;

    if (pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
                                         &args->cbl_range)) {
        need_commit = true;
        goto out_set_stateid;
    }

    rv = NFS4ERR_NOMATCHING_LAYOUT;
    if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
        NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo,
                                                    &args->cbl_range);
    }

out_set_stateid:
    ...


2014-08-21 16:07:56

by Christoph Hellwig

Subject: [PATCH 16/19] pnfs/blocklayout: rewrite extent tracking

Currently the block layout driver tracks extents in three separate
data structures:

- the two list of pnfs_block_extent structures returned by the server
- the list of sectors that were in invalid state but have been written to
- a list of pnfs_block_short_extent structures for LAYOUTCOMMIT

All of these share the property that they are not only highly inefficient
data structures, but also that operations on them are even more inefficient
than necessary.

In addition there are various implementation defects like:

- using an int to track sectors, causing corruption for large offsets
- incorrect normalization of page or block granularity ranges
- insufficient error handling
- incorrect synchronization as extents can be modified while they are in
use

This patch replaces all three with a single unified rbtree structure
tracking all extents, as well as their in-memory state, although we still
need two instances, for read-only and read-write extents, due to the arcane
client-side COW feature in the block layout spec.

To fix the problem of extents possibly being modified while in use, we make
sure to return a copy of the extent for use in the write path - the
extent can only be invalidated by a layout recall or return, which has
to wait until the I/O operations have finished due to refcounts on the
layout segment.

The new extent tree works similarly to the schemes used by block based
filesystems like XFS or ext4.
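
With the new tree, both I/O paths resolve extents through a single
lookup that fills in a caller-owned copy (a sketch based on the read
path in the diff below; the write path passes rw = true and sets
-EINVAL instead):

    struct pnfs_block_extent be;

    if (!ext_tree_lookup(bl, isect, &be, false)) {
        header->pnfs_error = -EIO;
        goto out;
    }
    extent_length = be.be_length - (isect - be.be_f_offset);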

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/blocklayout/Makefile | 3 +-
fs/nfs/blocklayout/blocklayout.c | 258 +++-------
fs/nfs/blocklayout/blocklayout.h | 112 +----
fs/nfs/blocklayout/blocklayoutdev.c | 35 +-
fs/nfs/blocklayout/extent_tree.c | 545 ++++++++++++++++++++++
fs/nfs/blocklayout/extents.c | 908 ------------------------------------
6 files changed, 649 insertions(+), 1212 deletions(-)
create mode 100644 fs/nfs/blocklayout/extent_tree.c
delete mode 100644 fs/nfs/blocklayout/extents.c

diff --git a/fs/nfs/blocklayout/Makefile b/fs/nfs/blocklayout/Makefile
index d581550..3fa5ec7 100644
--- a/fs/nfs/blocklayout/Makefile
+++ b/fs/nfs/blocklayout/Makefile
@@ -2,4 +2,5 @@
# Makefile for the pNFS block layout driver kernel module
#
obj-$(CONFIG_PNFS_BLOCK) += blocklayoutdriver.o
-blocklayoutdriver-objs := blocklayout.o extents.o blocklayoutdev.o blocklayoutdm.o
+blocklayoutdriver-objs := blocklayout.o blocklayoutdev.o blocklayoutdm.o \
+ extent_tree.o
diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 92be984..42b6f9c 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -49,26 +49,16 @@ MODULE_LICENSE("GPL");
MODULE_AUTHOR("Andy Adamson <[email protected]>");
MODULE_DESCRIPTION("The NFSv4.1 pNFS Block layout driver");

-/* Given the be associated with isect, determine if page data needs to be
- * initialized.
- */
-static int is_hole(struct pnfs_block_extent *be, sector_t isect)
-{
- if (be->be_state == PNFS_BLOCK_NONE_DATA)
- return 1;
- else if (be->be_state != PNFS_BLOCK_INVALID_DATA)
- return 0;
- else
- return !bl_is_sector_init(be->be_inval, isect);
-}
-
-/* Given the be associated with isect, determine if page data can be
- * written to disk.
- */
-static int is_writable(struct pnfs_block_extent *be, sector_t isect)
+static bool is_hole(struct pnfs_block_extent *be)
{
- return (be->be_state == PNFS_BLOCK_READWRITE_DATA ||
- be->be_state == PNFS_BLOCK_INVALID_DATA);
+ switch (be->be_state) {
+ case PNFS_BLOCK_NONE_DATA:
+ return true;
+ case PNFS_BLOCK_INVALID_DATA:
+ return be->be_tag ? false : true;
+ default:
+ return false;
+ }
}

/* The data we are handed might be spread across several bios. We need
@@ -76,9 +66,8 @@ static int is_writable(struct pnfs_block_extent *be, sector_t isect)
*/
struct parallel_io {
struct kref refcnt;
- void (*pnfs_callback) (void *data, int num_se);
+ void (*pnfs_callback) (void *data);
void *data;
- int bse_count;
};

static inline struct parallel_io *alloc_parallel(void *data)
@@ -89,7 +78,6 @@ static inline struct parallel_io *alloc_parallel(void *data)
if (rv) {
rv->data = data;
kref_init(&rv->refcnt);
- rv->bse_count = 0;
}
return rv;
}
@@ -104,7 +92,7 @@ static void destroy_parallel(struct kref *kref)
struct parallel_io *p = container_of(kref, struct parallel_io, refcnt);

dprintk("%s enter\n", __func__);
- p->pnfs_callback(p->data, p->bse_count);
+ p->pnfs_callback(p->data);
kfree(p);
}

@@ -200,7 +188,7 @@ static void bl_read_cleanup(struct work_struct *work)
}

static void
-bl_end_par_io_read(void *data, int unused)
+bl_end_par_io_read(void *data)
{
struct nfs_pgio_header *hdr = data;

@@ -210,56 +198,46 @@ bl_end_par_io_read(void *data, int unused)
}

static enum pnfs_try_status
-bl_read_pagelist(struct nfs_pgio_header *hdr)
+bl_read_pagelist(struct nfs_pgio_header *header)
{
- struct nfs_pgio_header *header = hdr;
- int i, hole;
+ struct pnfs_block_layout *bl = BLK_LSEG2EXT(header->lseg);
struct bio *bio = NULL;
- struct pnfs_block_extent *be = NULL, *cow_read = NULL;
+ struct pnfs_block_extent be;
sector_t isect, extent_length = 0;
struct parallel_io *par;
- loff_t f_offset = hdr->args.offset;
- size_t bytes_left = hdr->args.count;
+ loff_t f_offset = header->args.offset;
+ size_t bytes_left = header->args.count;
unsigned int pg_offset, pg_len;
- struct page **pages = hdr->args.pages;
- int pg_index = hdr->args.pgbase >> PAGE_CACHE_SHIFT;
+ struct page **pages = header->args.pages;
+ int pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
const bool is_dio = (header->dreq != NULL);
struct blk_plug plug;
+ int i;

dprintk("%s enter nr_pages %u offset %lld count %u\n", __func__,
- hdr->page_array.npages, f_offset,
- (unsigned int)hdr->args.count);
+ header->page_array.npages, f_offset,
+ (unsigned int)header->args.count);

- par = alloc_parallel(hdr);
+ par = alloc_parallel(header);
if (!par)
- goto use_mds;
+ return PNFS_NOT_ATTEMPTED;
par->pnfs_callback = bl_end_par_io_read;
- /* At this point, we can no longer jump to use_mds */

blk_start_plug(&plug);

isect = (sector_t) (f_offset >> SECTOR_SHIFT);
/* Code assumes extents are page-aligned */
- for (i = pg_index; i < hdr->page_array.npages; i++) {
+ for (i = pg_index; i < header->page_array.npages; i++) {
if (extent_length <= 0) {
/* We've used up the previous extent */
- bl_put_extent(be);
- bl_put_extent(cow_read);
bio = bl_submit_bio(READ, bio);
+
/* Get the next one */
- be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg),
- isect, &cow_read);
- if (!be) {
+ if (!ext_tree_lookup(bl, isect, &be, false)) {
header->pnfs_error = -EIO;
goto out;
}
- extent_length = be->be_length -
- (isect - be->be_f_offset);
- if (cow_read) {
- sector_t cow_length = cow_read->be_length -
- (isect - cow_read->be_f_offset);
- extent_length = min(extent_length, cow_length);
- }
+ extent_length = be.be_length - (isect - be.be_f_offset);
}

pg_offset = f_offset & ~PAGE_CACHE_MASK;
@@ -278,20 +256,16 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
pg_len = PAGE_CACHE_SIZE;
}

- hole = is_hole(be, isect);
- if (hole && !cow_read) {
+ if (is_hole(&be)) {
bio = bl_submit_bio(READ, bio);
/* Fill hole w/ zeroes w/o accessing device */
dprintk("%s Zeroing page for hole\n", __func__);
zero_user_segment(pages[i], pg_offset, pg_len);
} else {
- struct pnfs_block_extent *be_read;
-
- be_read = (hole && cow_read) ? cow_read : be;
bio = do_add_page_to_bio(bio,
- hdr->page_array.npages - i,
+ header->page_array.npages - i,
READ,
- isect, pages[i], be_read,
+ isect, pages[i], &be,
bl_end_io_read, par,
pg_offset, pg_len);
if (IS_ERR(bio)) {
@@ -304,50 +278,16 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
extent_length -= (pg_len >> SECTOR_SHIFT);
}
if ((isect << SECTOR_SHIFT) >= header->inode->i_size) {
- hdr->res.eof = 1;
- hdr->res.count = header->inode->i_size - hdr->args.offset;
+ header->res.eof = 1;
+ header->res.count = header->inode->i_size - header->args.offset;
} else {
- hdr->res.count = (isect << SECTOR_SHIFT) - hdr->args.offset;
+ header->res.count = (isect << SECTOR_SHIFT) - header->args.offset;
}
out:
- bl_put_extent(be);
- bl_put_extent(cow_read);
bl_submit_bio(READ, bio);
blk_finish_plug(&plug);
put_parallel(par);
return PNFS_ATTEMPTED;
-
- use_mds:
- dprintk("Giving up and using normal NFS\n");
- return PNFS_NOT_ATTEMPTED;
-}
-
-static void mark_extents_written(struct pnfs_block_layout *bl,
- __u64 offset, __u32 count)
-{
- sector_t isect, end;
- struct pnfs_block_extent *be;
- struct pnfs_block_short_extent *se;
-
- dprintk("%s(%llu, %u)\n", __func__, offset, count);
- if (count == 0)
- return;
- isect = (offset & (long)(PAGE_CACHE_MASK)) >> SECTOR_SHIFT;
- end = (offset + count + PAGE_CACHE_SIZE - 1) & (long)(PAGE_CACHE_MASK);
- end >>= SECTOR_SHIFT;
- while (isect < end) {
- sector_t len;
- be = bl_find_get_extent(bl, isect, NULL);
- BUG_ON(!be); /* FIXME */
- len = min(end, be->be_f_offset + be->be_length) - isect;
- if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
- se = bl_pop_one_short_extent(be->be_inval);
- BUG_ON(!se);
- bl_mark_for_commit(be, isect, len, se);
- }
- isect += len;
- bl_put_extent(be);
- }
}

static void bl_end_io_write(struct bio *bio, int err)
@@ -370,29 +310,30 @@ static void bl_end_io_write(struct bio *bio, int err)
*/
static void bl_write_cleanup(struct work_struct *work)
{
- struct rpc_task *task;
- struct nfs_pgio_header *hdr;
+ struct rpc_task *task = container_of(work, struct rpc_task, u.tk_work);
+ struct nfs_pgio_header *hdr =
+ container_of(task, struct nfs_pgio_header, task);
+
dprintk("%s enter\n", __func__);
- task = container_of(work, struct rpc_task, u.tk_work);
- hdr = container_of(task, struct nfs_pgio_header, task);
+
if (likely(!hdr->pnfs_error)) {
- /* Marks for LAYOUTCOMMIT */
- mark_extents_written(BLK_LSEG2EXT(hdr->lseg),
- hdr->args.offset, hdr->args.count);
+ struct pnfs_block_layout *bl = BLK_LSEG2EXT(hdr->lseg);
+ u64 start = hdr->args.offset & (loff_t)PAGE_CACHE_MASK;
+ u64 end = (hdr->args.offset + hdr->args.count +
+ PAGE_CACHE_SIZE - 1) & (loff_t)PAGE_CACHE_MASK;
+
+ ext_tree_mark_written(bl, start >> SECTOR_SHIFT,
+ (end - start) >> SECTOR_SHIFT);
}
+
pnfs_ld_write_done(hdr);
}

/* Called when last of bios associated with a bl_write_pagelist call finishes */
-static void bl_end_par_io_write(void *data, int num_se)
+static void bl_end_par_io_write(void *data)
{
struct nfs_pgio_header *hdr = data;

- if (unlikely(hdr->pnfs_error)) {
- bl_free_short_extents(&BLK_LSEG2EXT(hdr->lseg)->bl_inval,
- num_se);
- }
-
hdr->task.tk_status = hdr->pnfs_error;
hdr->verf.committed = NFS_FILE_SYNC;
INIT_WORK(&hdr->task.u.tk_work, bl_write_cleanup);
@@ -402,9 +343,9 @@ static void bl_end_par_io_write(void *data, int num_se)
static enum pnfs_try_status
bl_write_pagelist(struct nfs_pgio_header *header, int sync)
{
- int i, ret;
+ struct pnfs_block_layout *bl = BLK_LSEG2EXT(header->lseg);
struct bio *bio = NULL;
- struct pnfs_block_extent *be = NULL;
+ struct pnfs_block_extent be;
sector_t isect, extent_length = 0;
struct parallel_io *par = NULL;
loff_t offset = header->args.offset;
@@ -412,6 +353,7 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
struct page **pages = header->args.pages;
int pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
struct blk_plug plug;
+ int i;

dprintk("%s enter, %Zu@%lld\n", __func__, count, offset);

@@ -421,9 +363,8 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
*/
par = alloc_parallel(header);
if (!par)
- goto out_mds;
+ return PNFS_NOT_ATTEMPTED;
par->pnfs_callback = bl_end_par_io_write;
- /* At this point, have to be more careful with error handling */

blk_start_plug(&plug);

@@ -434,44 +375,18 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
for (i = pg_index; i < header->page_array.npages; i++) {
if (extent_length <= 0) {
/* We've used up the previous extent */
- bl_put_extent(be);
bio = bl_submit_bio(WRITE, bio);
/* Get the next one */
- be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg),
- isect, NULL);
- if (!be || !is_writable(be, isect)) {
+ if (!ext_tree_lookup(bl, isect, &be, true)) {
header->pnfs_error = -EINVAL;
goto out;
}
- if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
- if (likely(!bl_push_one_short_extent(
- be->be_inval)))
- par->bse_count++;
- else {
- header->pnfs_error = -ENOMEM;
- goto out;
- }
- }
- extent_length = be->be_length -
- (isect - be->be_f_offset);
- }

- BUG_ON(offset & ~PAGE_CACHE_MASK);
-
- if (be->be_state == PNFS_BLOCK_INVALID_DATA &&
- !bl_is_sector_init(be->be_inval, isect)) {
- ret = bl_mark_sectors_init(be->be_inval, isect,
- PAGE_CACHE_SECTORS);
- if (unlikely(ret)) {
- dprintk("%s bl_mark_sectors_init fail %d\n",
- __func__, ret);
- header->pnfs_error = ret;
- goto out;
- }
+ extent_length = be.be_length - (isect - be.be_f_offset);
}

bio = do_add_page_to_bio(bio, header->page_array.npages - i,
- WRITE, isect, pages[i], be,
+ WRITE, isect, pages[i], &be,
bl_end_io_write, par,
0, PAGE_CACHE_SIZE);
if (IS_ERR(bio)) {
@@ -487,60 +402,22 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)

header->res.count = header->args.count;
out:
- bl_put_extent(be);
bl_submit_bio(WRITE, bio);
blk_finish_plug(&plug);
put_parallel(par);
return PNFS_ATTEMPTED;
-out_mds:
- return PNFS_NOT_ATTEMPTED;
-}
-
-/* FIXME - range ignored */
-static void
-release_extents(struct pnfs_block_layout *bl, struct pnfs_layout_range *range)
-{
- int i;
- struct pnfs_block_extent *be;
-
- spin_lock(&bl->bl_ext_lock);
- for (i = 0; i < EXTENT_LISTS; i++) {
- while (!list_empty(&bl->bl_extents[i])) {
- be = list_first_entry(&bl->bl_extents[i],
- struct pnfs_block_extent,
- be_node);
- list_del(&be->be_node);
- bl_put_extent(be);
- }
- }
- spin_unlock(&bl->bl_ext_lock);
-}
-
-static void
-release_inval_marks(struct pnfs_inval_markings *marks)
-{
- struct pnfs_inval_tracking *pos, *temp;
- struct pnfs_block_short_extent *se, *stemp;
-
- list_for_each_entry_safe(pos, temp, &marks->im_tree.mtt_stub, it_link) {
- list_del(&pos->it_link);
- kfree(pos);
- }
-
- list_for_each_entry_safe(se, stemp, &marks->im_extents, bse_node) {
- list_del(&se->bse_node);
- kfree(se);
- }
- return;
}

static void bl_free_layout_hdr(struct pnfs_layout_hdr *lo)
{
struct pnfs_block_layout *bl = BLK_LO2EXT(lo);
+ int err;

dprintk("%s enter\n", __func__);
- release_extents(bl, NULL);
- release_inval_marks(&bl->bl_inval);
+
+ err = ext_tree_remove(bl, true, 0, LLONG_MAX);
+ WARN_ON(err);
+
kfree(bl);
}

@@ -553,14 +430,11 @@ static struct pnfs_layout_hdr *bl_alloc_layout_hdr(struct inode *inode,
bl = kzalloc(sizeof(*bl), gfp_flags);
if (!bl)
return NULL;
+
+ bl->bl_ext_rw = RB_ROOT;
+ bl->bl_ext_ro = RB_ROOT;
spin_lock_init(&bl->bl_ext_lock);
- INIT_LIST_HEAD(&bl->bl_extents[0]);
- INIT_LIST_HEAD(&bl->bl_extents[1]);
- INIT_LIST_HEAD(&bl->bl_commit);
- INIT_LIST_HEAD(&bl->bl_committing);
- bl->bl_count = 0;
- bl->bl_blocksize = NFS_SERVER(inode)->pnfs_blksize >> SECTOR_SHIFT;
- BL_INIT_INVAL_MARKS(&bl->bl_inval, bl->bl_blocksize);
+
return &bl->bl_layout;
}

@@ -600,7 +474,7 @@ bl_encode_layoutcommit(struct pnfs_layout_hdr *lo, struct xdr_stream *xdr,
const struct nfs4_layoutcommit_args *arg)
{
dprintk("%s enter\n", __func__);
- encode_pnfs_block_layoutupdate(BLK_LO2EXT(lo), xdr, arg);
+ ext_tree_encode_commit(BLK_LO2EXT(lo), xdr);
}

static void
@@ -609,7 +483,7 @@ bl_cleanup_layoutcommit(struct nfs4_layoutcommit_data *lcdata)
struct pnfs_layout_hdr *lo = NFS_I(lcdata->args.inode)->layout;

dprintk("%s enter\n", __func__);
- clean_pnfs_block_layoutupdate(BLK_LO2EXT(lo), &lcdata->args, lcdata->res.status);
+ ext_tree_mark_committed(BLK_LO2EXT(lo), lcdata->res.status);
}

static void free_blk_mountid(struct block_mount_id *mid)
diff --git a/fs/nfs/blocklayout/blocklayout.h b/fs/nfs/blocklayout/blocklayout.h
index 9838fb0..b4f66d8 100644
--- a/fs/nfs/blocklayout/blocklayout.h
+++ b/fs/nfs/blocklayout/blocklayout.h
@@ -63,82 +63,28 @@ enum exstate4 {
PNFS_BLOCK_NONE_DATA = 3 /* unmapped, it's a hole */
};

-#define MY_MAX_TAGS (15) /* tag bitnums used must be less than this */
-
-struct my_tree {
- sector_t mtt_step_size; /* Internal sector alignment */
- struct list_head mtt_stub; /* Should be a radix tree */
-};
-
-struct pnfs_inval_markings {
- spinlock_t im_lock;
- struct my_tree im_tree; /* Sectors that need LAYOUTCOMMIT */
- sector_t im_block_size; /* Server blocksize in sectors */
- struct list_head im_extents; /* Short extents for INVAL->RW conversion */
-};
-
-struct pnfs_inval_tracking {
- struct list_head it_link;
- int it_sector;
- int it_tags;
-};
-
/* sector_t fields are all in 512-byte sectors */
struct pnfs_block_extent {
- struct kref be_refcnt;
- struct list_head be_node; /* link into lseg list */
- struct nfs4_deviceid be_devid; /* FIXME: could use device cache instead */
+ union {
+ struct rb_node be_node;
+ struct list_head be_list;
+ };
+ struct nfs4_deviceid be_devid; /* FIXME: could use device cache instead */
struct block_device *be_mdev;
sector_t be_f_offset; /* the starting offset in the file */
sector_t be_length; /* the size of the extent */
sector_t be_v_offset; /* the starting offset in the volume */
enum exstate4 be_state; /* the state of this extent */
- struct pnfs_inval_markings *be_inval; /* tracks INVAL->RW transition */
-};
-
-/* Shortened extent used by LAYOUTCOMMIT */
-struct pnfs_block_short_extent {
- struct list_head bse_node;
- struct nfs4_deviceid bse_devid;
- struct block_device *bse_mdev;
- sector_t bse_f_offset; /* the starting offset in the file */
- sector_t bse_length; /* the size of the extent */
+#define EXTENT_WRITTEN 1
+#define EXTENT_COMMITTING 2
+ unsigned int be_tag;
};

-static inline void
-BL_INIT_INVAL_MARKS(struct pnfs_inval_markings *marks, sector_t blocksize)
-{
- spin_lock_init(&marks->im_lock);
- INIT_LIST_HEAD(&marks->im_tree.mtt_stub);
- INIT_LIST_HEAD(&marks->im_extents);
- marks->im_block_size = blocksize;
- marks->im_tree.mtt_step_size = min((sector_t)PAGE_CACHE_SECTORS,
- blocksize);
-}
-
-enum extentclass4 {
- RW_EXTENT = 0, /* READWRTE and INVAL */
- RO_EXTENT = 1, /* READ and NONE */
- EXTENT_LISTS = 2,
-};
-
-static inline int bl_choose_list(enum exstate4 state)
-{
- if (state == PNFS_BLOCK_READ_DATA || state == PNFS_BLOCK_NONE_DATA)
- return RO_EXTENT;
- else
- return RW_EXTENT;
-}
-
struct pnfs_block_layout {
- struct pnfs_layout_hdr bl_layout;
- struct pnfs_inval_markings bl_inval; /* tracks INVAL->RW transition */
+ struct pnfs_layout_hdr bl_layout;
+ struct rb_root bl_ext_rw;
+ struct rb_root bl_ext_ro;
spinlock_t bl_ext_lock; /* Protects list manipulation */
- struct list_head bl_extents[EXTENT_LISTS]; /* R and RW extents */
- struct list_head bl_commit; /* Needs layout commit */
- struct list_head bl_committing; /* Layout committing */
- unsigned int bl_count; /* entries in bl_commit */
- sector_t bl_blocksize; /* Server blocksize in sectors */
};

#define BLK_ID(lo) ((struct block_mount_id *)(NFS_SERVER(lo->plh_inode)->pnfs_ld_data))
@@ -183,29 +129,17 @@ int nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
/* blocklayoutdm.c */
void bl_free_block_dev(struct pnfs_block_dev *bdev);

-/* extents.c */
-struct pnfs_block_extent *
-bl_find_get_extent(struct pnfs_block_layout *bl, sector_t isect,
- struct pnfs_block_extent **cow_read);
-int bl_mark_sectors_init(struct pnfs_inval_markings *marks,
- sector_t offset, sector_t length);
-void bl_put_extent(struct pnfs_block_extent *be);
-struct pnfs_block_extent *bl_alloc_extent(void);
-int bl_is_sector_init(struct pnfs_inval_markings *marks, sector_t isect);
-int encode_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
- struct xdr_stream *xdr,
- const struct nfs4_layoutcommit_args *arg);
-void clean_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
- const struct nfs4_layoutcommit_args *arg,
- int status);
-int bl_add_merge_extent(struct pnfs_block_layout *bl,
- struct pnfs_block_extent *new);
-int bl_mark_for_commit(struct pnfs_block_extent *be,
- sector_t offset, sector_t length,
- struct pnfs_block_short_extent *new);
-int bl_push_one_short_extent(struct pnfs_inval_markings *marks);
-struct pnfs_block_short_extent *
-bl_pop_one_short_extent(struct pnfs_inval_markings *marks);
-void bl_free_short_extents(struct pnfs_inval_markings *marks, int num_to_free);
+/* extent_tree.c */
+int ext_tree_insert(struct pnfs_block_layout *bl,
+ struct pnfs_block_extent *new);
+int ext_tree_remove(struct pnfs_block_layout *bl, bool rw, sector_t start,
+ sector_t end);
+int ext_tree_mark_written(struct pnfs_block_layout *bl, sector_t start,
+ sector_t len);
+bool ext_tree_lookup(struct pnfs_block_layout *bl, sector_t isect,
+ struct pnfs_block_extent *ret, bool rw);
+int ext_tree_encode_commit(struct pnfs_block_layout *bl,
+ struct xdr_stream *xdr);
+void ext_tree_mark_committed(struct pnfs_block_layout *bl, int status);

#endif /* FS_NFS_NFS4BLOCKLAYOUT_H */
diff --git a/fs/nfs/blocklayout/blocklayoutdev.c b/fs/nfs/blocklayout/blocklayoutdev.c
index 63f77925..cd71b5e 100644
--- a/fs/nfs/blocklayout/blocklayoutdev.c
+++ b/fs/nfs/blocklayout/blocklayoutdev.c
@@ -309,7 +309,7 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
* recovery easier.
*/
for (i = 0; i < count; i++) {
- be = bl_alloc_extent();
+ be = kzalloc(sizeof(struct pnfs_block_extent), GFP_NOFS);
if (!be) {
status = -ENOMEM;
goto out_err;
@@ -330,13 +330,11 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
if (decode_sector_number(&p, &be->be_v_offset) < 0)
goto out_err;
be->be_state = be32_to_cpup(p++);
- if (be->be_state == PNFS_BLOCK_INVALID_DATA)
- be->be_inval = &bl->bl_inval;
if (verify_extent(be, &lv)) {
dprintk("%s verify failed\n", __func__);
goto out_err;
}
- list_add_tail(&be->be_node, &extents);
+ list_add_tail(&be->be_list, &extents);
}
if (lgr->range.offset + lgr->range.length !=
lv.start << SECTOR_SHIFT) {
@@ -352,21 +350,13 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
/* Extents decoded properly, now try to merge them in to
* existing layout extents.
*/
- spin_lock(&bl->bl_ext_lock);
- list_for_each_entry_safe(be, save, &extents, be_node) {
- list_del(&be->be_node);
- status = bl_add_merge_extent(bl, be);
- if (status) {
- spin_unlock(&bl->bl_ext_lock);
- /* This is a fairly catastrophic error, as the
- * entire layout extent lists are now corrupted.
- * We should have some way to distinguish this.
- */
- be = NULL;
- goto out_err;
- }
+ list_for_each_entry_safe(be, save, &extents, be_list) {
+ list_del(&be->be_list);
+
+ status = ext_tree_insert(bl, be);
+ if (status)
+ goto out_free_list;
}
- spin_unlock(&bl->bl_ext_lock);
status = 0;
out:
__free_page(scratch);
@@ -374,12 +364,13 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
return status;

out_err:
- bl_put_extent(be);
+ kfree(be);
+ out_free_list:
while (!list_empty(&extents)) {
be = list_first_entry(&extents, struct pnfs_block_extent,
- be_node);
- list_del(&be->be_node);
- bl_put_extent(be);
+ be_list);
+ list_del(&be->be_list);
+ kfree(be);
}
goto out;
}
diff --git a/fs/nfs/blocklayout/extent_tree.c b/fs/nfs/blocklayout/extent_tree.c
new file mode 100644
index 0000000..c7dacfa
--- /dev/null
+++ b/fs/nfs/blocklayout/extent_tree.c
@@ -0,0 +1,545 @@
+/*
+ * Copyright (c) 2014 Christoph Hellwig.
+ */
+
+#include "blocklayout.h"
+
+static inline struct pnfs_block_extent *
+ext_node(struct rb_node *node)
+{
+ return rb_entry(node, struct pnfs_block_extent, be_node);
+}
+
+static struct pnfs_block_extent *
+ext_tree_first(struct rb_root *root)
+{
+ struct rb_node *node = rb_first(root);
+ return node ? ext_node(node) : NULL;
+}
+
+static struct pnfs_block_extent *
+ext_tree_prev(struct pnfs_block_extent *be)
+{
+ struct rb_node *node = rb_prev(&be->be_node);
+ return node ? ext_node(node) : NULL;
+}
+
+static struct pnfs_block_extent *
+ext_tree_next(struct pnfs_block_extent *be)
+{
+ struct rb_node *node = rb_next(&be->be_node);
+ return node ? ext_node(node) : NULL;
+}
+
+static inline sector_t
+ext_f_end(struct pnfs_block_extent *be)
+{
+ return be->be_f_offset + be->be_length;
+}
+
+static struct pnfs_block_extent *
+__ext_tree_search(struct rb_root *root, sector_t start)
+{
+ struct rb_node *node = root->rb_node;
+ struct pnfs_block_extent *be = NULL;
+
+ while (node) {
+ be = ext_node(node);
+ if (start < be->be_f_offset)
+ node = node->rb_left;
+ else if (start >= ext_f_end(be))
+ node = node->rb_right;
+ else
+ return be;
+ }
+
+ if (be) {
+ if (start < be->be_f_offset)
+ return be;
+
+ if (start >= ext_f_end(be))
+ return ext_tree_next(be);
+ }
+
+ return NULL;
+}
+
+static bool
+ext_can_merge(struct pnfs_block_extent *be1, struct pnfs_block_extent *be2)
+{
+ if (be1->be_state != be2->be_state)
+ return false;
+ if (be1->be_mdev != be2->be_mdev)
+ return false;
+
+ if (be1->be_f_offset + be1->be_length != be2->be_f_offset)
+ return false;
+
+ if (be1->be_state != PNFS_BLOCK_NONE_DATA &&
+ (be1->be_v_offset + be1->be_length != be2->be_v_offset))
+ return false;
+
+ if (be1->be_state == PNFS_BLOCK_INVALID_DATA &&
+ be1->be_tag != be2->be_tag)
+ return false;
+
+ return true;
+}
+
+static struct pnfs_block_extent *
+ext_try_to_merge_left(struct rb_root *root, struct pnfs_block_extent *be)
+{
+ struct pnfs_block_extent *left = ext_tree_prev(be);
+
+ if (left && ext_can_merge(left, be)) {
+ left->be_length += be->be_length;
+ rb_erase(&be->be_node, root);
+ kfree(be);
+ return left;
+ }
+
+ return be;
+}
+
+static struct pnfs_block_extent *
+ext_try_to_merge_right(struct rb_root *root, struct pnfs_block_extent *be)
+{
+ struct pnfs_block_extent *right = ext_tree_next(be);
+
+ if (right && ext_can_merge(be, right)) {
+ be->be_length += right->be_length;
+ rb_erase(&right->be_node, root);
+ kfree(right);
+ }
+
+ return be;
+}
+
+static void
+__ext_tree_insert(struct rb_root *root,
+ struct pnfs_block_extent *new, bool merge_ok)
+{
+ struct rb_node **p = &root->rb_node, *parent = NULL;
+ struct pnfs_block_extent *be;
+
+ while (*p) {
+ parent = *p;
+ be = ext_node(parent);
+
+ if (new->be_f_offset < be->be_f_offset) {
+ if (merge_ok && ext_can_merge(new, be)) {
+ be->be_f_offset = new->be_f_offset;
+ if (be->be_state != PNFS_BLOCK_NONE_DATA)
+ be->be_v_offset = new->be_v_offset;
+ be->be_length += new->be_length;
+ be = ext_try_to_merge_left(root, be);
+ kfree(new);
+ return;
+ }
+ p = &(*p)->rb_left;
+ } else if (new->be_f_offset >= ext_f_end(be)) {
+ if (merge_ok && ext_can_merge(be, new)) {
+ be->be_length += new->be_length;
+ be = ext_try_to_merge_right(root, be);
+ kfree(new);
+ return;
+ }
+ p = &(*p)->rb_right;
+ } else {
+ BUG();
+ }
+ }
+
+ rb_link_node(&new->be_node, parent, p);
+ rb_insert_color(&new->be_node, root);
+}
+
+static int
+__ext_tree_remove(struct rb_root *root, sector_t start, sector_t end)
+{
+ struct pnfs_block_extent *be;
+ sector_t len1 = 0, len2 = 0;
+ sector_t orig_f_offset;
+ sector_t orig_v_offset;
+ sector_t orig_len;
+
+ be = __ext_tree_search(root, start);
+ if (!be)
+ return 0;
+ if (be->be_f_offset >= end)
+ return 0;
+
+ orig_f_offset = be->be_f_offset;
+ orig_v_offset = be->be_v_offset;
+ orig_len = be->be_length;
+
+ if (start > be->be_f_offset)
+ len1 = start - be->be_f_offset;
+ if (ext_f_end(be) > end)
+ len2 = ext_f_end(be) - end;
+
+ if (len2 > 0) {
+ if (len1 > 0) {
+ struct pnfs_block_extent *new;
+
+ new = kzalloc(sizeof(*new), GFP_ATOMIC);
+ if (!new)
+ return -ENOMEM;
+
+ be->be_length = len1;
+
+ new->be_f_offset = end;
+ if (be->be_state != PNFS_BLOCK_NONE_DATA) {
+ new->be_v_offset =
+ orig_v_offset + orig_len - len2;
+ }
+ new->be_length = len2;
+ new->be_state = be->be_state;
+ new->be_tag = be->be_tag;
+ new->be_mdev = be->be_mdev;
+ memcpy(&new->be_devid, &be->be_devid,
+ sizeof(struct nfs4_deviceid));
+
+ __ext_tree_insert(root, new, true);
+ } else {
+ be->be_f_offset = end;
+ if (be->be_state != PNFS_BLOCK_NONE_DATA) {
+ be->be_v_offset =
+ orig_v_offset + orig_len - len2;
+ }
+ be->be_length = len2;
+ }
+ } else {
+ if (len1 > 0) {
+ be->be_length = len1;
+ be = ext_tree_next(be);
+ }
+
+ while (be && ext_f_end(be) <= end) {
+ struct pnfs_block_extent *next = ext_tree_next(be);
+
+ rb_erase(&be->be_node, root);
+ kfree(be);
+ be = next;
+ }
+
+ if (be && be->be_f_offset < end) {
+ len1 = ext_f_end(be) - end;
+ be->be_f_offset = end;
+ if (be->be_state != PNFS_BLOCK_NONE_DATA)
+ be->be_v_offset += be->be_length - len1;
+ be->be_length = len1;
+ }
+ }
+
+ return 0;
+}
+
+int
+ext_tree_insert(struct pnfs_block_layout *bl, struct pnfs_block_extent *new)
+{
+ struct pnfs_block_extent *be;
+ struct rb_root *root;
+ int err = 0;
+
+ switch (new->be_state) {
+ case PNFS_BLOCK_READWRITE_DATA:
+ case PNFS_BLOCK_INVALID_DATA:
+ root = &bl->bl_ext_rw;
+ break;
+ case PNFS_BLOCK_READ_DATA:
+ case PNFS_BLOCK_NONE_DATA:
+ root = &bl->bl_ext_ro;
+ break;
+ default:
+ dprintk("invalid extent type\n");
+ return -EINVAL;
+ }
+
+ spin_lock(&bl->bl_ext_lock);
+retry:
+ be = __ext_tree_search(root, new->be_f_offset);
+ if (!be || be->be_f_offset >= ext_f_end(new)) {
+ __ext_tree_insert(root, new, true);
+ } else if (new->be_f_offset >= be->be_f_offset) {
+ if (ext_f_end(new) <= ext_f_end(be)) {
+ kfree(new);
+ } else {
+ sector_t new_len = ext_f_end(new) - ext_f_end(be);
+ sector_t diff = new->be_length - new_len;
+
+ new->be_f_offset += diff;
+ new->be_v_offset += diff;
+ new->be_length = new_len;
+ goto retry;
+ }
+ } else if (ext_f_end(new) <= ext_f_end(be)) {
+ new->be_length = be->be_f_offset - new->be_f_offset;
+ __ext_tree_insert(root, new, true);
+ } else {
+ struct pnfs_block_extent *split;
+ sector_t new_len = ext_f_end(new) - ext_f_end(be);
+ sector_t diff = new->be_length - new_len;
+
+ split = kmemdup(new, sizeof(*new), GFP_ATOMIC);
+ if (!split) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ split->be_length = be->be_f_offset - split->be_f_offset;
+ __ext_tree_insert(root, split, true);
+
+ new->be_f_offset += diff;
+ new->be_v_offset += diff;
+ new->be_length = new_len;
+ goto retry;
+ }
+out:
+ spin_unlock(&bl->bl_ext_lock);
+ return err;
+}
+
+static bool
+__ext_tree_lookup(struct rb_root *root, sector_t isect,
+ struct pnfs_block_extent *ret)
+{
+ struct rb_node *node;
+ struct pnfs_block_extent *be;
+
+ node = root->rb_node;
+ while (node) {
+ be = ext_node(node);
+ if (isect < be->be_f_offset)
+ node = node->rb_left;
+ else if (isect >= ext_f_end(be))
+ node = node->rb_right;
+ else {
+ *ret = *be;
+ return true;
+ }
+ }
+
+ return false;
+}
+
+bool
+ext_tree_lookup(struct pnfs_block_layout *bl, sector_t isect,
+ struct pnfs_block_extent *ret, bool rw)
+{
+ bool found = false;
+
+ spin_lock(&bl->bl_ext_lock);
+ if (!rw)
+ found = __ext_tree_lookup(&bl->bl_ext_ro, isect, ret);
+ if (!found)
+ found = __ext_tree_lookup(&bl->bl_ext_rw, isect, ret);
+ spin_unlock(&bl->bl_ext_lock);
+
+ return found;
+}
+
+int ext_tree_remove(struct pnfs_block_layout *bl, bool rw,
+ sector_t start, sector_t end)
+{
+ int err, err2;
+
+ spin_lock(&bl->bl_ext_lock);
+ err = __ext_tree_remove(&bl->bl_ext_ro, start, end);
+ if (rw) {
+ err2 = __ext_tree_remove(&bl->bl_ext_rw, start, end);
+ if (!err)
+ err = err2;
+ }
+ spin_unlock(&bl->bl_ext_lock);
+
+ return err;
+}
+
+static int
+ext_tree_split(struct rb_root *root, struct pnfs_block_extent *be,
+ sector_t split)
+{
+ struct pnfs_block_extent *new;
+ sector_t orig_len = be->be_length;
+
+ dprintk("%s: need split for 0x%lx:0x%lx at 0x%lx\n",
+ __func__, be->be_f_offset, ext_f_end(be), split);
+
+ new = kzalloc(sizeof(*new), GFP_ATOMIC);
+ if (!new)
+ return -ENOMEM;
+
+ be->be_length = split - be->be_f_offset;
+
+ new->be_f_offset = split;
+ if (be->be_state != PNFS_BLOCK_NONE_DATA)
+ new->be_v_offset = be->be_v_offset + be->be_length;
+ new->be_length = orig_len - be->be_length;
+ new->be_state = be->be_state;
+ new->be_tag = be->be_tag;
+
+ new->be_mdev = be->be_mdev;
+ memcpy(&new->be_devid, &be->be_devid, sizeof(struct nfs4_deviceid));
+
+ dprintk("%s: got 0x%lx:0x%lx!\n",
+ __func__, be->be_f_offset, ext_f_end(be));
+ dprintk("%s: got 0x%lx:0x%lx!\n",
+ __func__, new->be_f_offset, ext_f_end(new));
+
+ __ext_tree_insert(root, new, false);
+ return 0;
+}
+
+int
+ext_tree_mark_written(struct pnfs_block_layout *bl, sector_t start,
+ sector_t len)
+{
+ struct rb_root *root = &bl->bl_ext_rw;
+ sector_t end = start + len;
+ struct pnfs_block_extent *be;
+ int err = 0;
+
+ spin_lock(&bl->bl_ext_lock);
+ /*
+ * First remove all COW extents or holes from written to range.
+ */
+ err = __ext_tree_remove(&bl->bl_ext_ro, start, end);
+ if (err)
+ goto out;
+
+ /*
+ * Then mark all invalid extents in the range as written to.
+ */
+ for (be = __ext_tree_search(root, start); be; be = ext_tree_next(be)) {
+ if (be->be_f_offset >= end)
+ break;
+
+ if (be->be_state != PNFS_BLOCK_INVALID_DATA || be->be_tag)
+ continue;
+
+ if (be->be_f_offset < start) {
+ struct pnfs_block_extent *left = ext_tree_prev(be);
+
+ if (left && ext_can_merge(left, be)) {
+ sector_t diff = start - be->be_f_offset;
+
+ left->be_length += diff;
+
+ be->be_f_offset += diff;
+ be->be_v_offset += diff;
+ be->be_length -= diff;
+ } else {
+ err = ext_tree_split(root, be, start);
+ if (err)
+ goto out;
+ }
+ }
+
+ if (ext_f_end(be) > end) {
+ struct pnfs_block_extent *right = ext_tree_next(be);
+
+ if (right && ext_can_merge(be, right)) {
+ sector_t diff = end - be->be_f_offset;
+
+ be->be_length -= diff;
+
+ right->be_f_offset -= diff;
+ right->be_v_offset -= diff;
+ right->be_length += diff;
+ } else {
+ err = ext_tree_split(root, be, end);
+ if (err)
+ goto out;
+ }
+ }
+
+ if (be->be_f_offset >= start && ext_f_end(be) <= end) {
+ be->be_tag = EXTENT_WRITTEN;
+ be = ext_try_to_merge_left(root, be);
+ be = ext_try_to_merge_right(root, be);
+ }
+ }
+out:
+ spin_unlock(&bl->bl_ext_lock);
+ return err;
+}
+
+int
+ext_tree_encode_commit(struct pnfs_block_layout *bl, struct xdr_stream *xdr)
+{
+ struct pnfs_block_extent *be;
+ unsigned int count = 0;
+ __be32 *p, *xdr_start;
+ int ret = 0;
+
+ dprintk("%s enter\n", __func__);
+
+ xdr_start = xdr_reserve_space(xdr, 8);
+ if (!xdr_start)
+ return -ENOSPC;
+
+ spin_lock(&bl->bl_ext_lock);
+ for (be = ext_tree_first(&bl->bl_ext_rw); be; be = ext_tree_next(be)) {
+ if (be->be_state != PNFS_BLOCK_INVALID_DATA ||
+ be->be_tag != EXTENT_WRITTEN)
+ continue;
+
+ p = xdr_reserve_space(xdr, 7 * sizeof(__be32) +
+ NFS4_DEVICEID4_SIZE);
+ if (!p) {
+ printk("%s: out of space for extent list\n", __func__);
+ ret = -ENOSPC;
+ break;
+ }
+
+ p = xdr_encode_opaque_fixed(p, be->be_devid.data,
+ NFS4_DEVICEID4_SIZE);
+ p = xdr_encode_hyper(p, be->be_f_offset << SECTOR_SHIFT);
+ p = xdr_encode_hyper(p, be->be_length << SECTOR_SHIFT);
+ p = xdr_encode_hyper(p, 0LL);
+ *p++ = cpu_to_be32(PNFS_BLOCK_READWRITE_DATA);
+
+ be->be_tag = EXTENT_COMMITTING;
+ count++;
+ }
+ spin_unlock(&bl->bl_ext_lock);
+
+ xdr_start[0] = cpu_to_be32((xdr->p - xdr_start - 1) * 4);
+ xdr_start[1] = cpu_to_be32(count);
+
+ dprintk("%s found %i ranges\n", __func__, count);
+ return ret;
+}
+
+void
+ext_tree_mark_committed(struct pnfs_block_layout *bl, int status)
+{
+ struct rb_root *root = &bl->bl_ext_rw;
+ struct pnfs_block_extent *be;
+
+ dprintk("%s status %d\n", __func__, status);
+
+ spin_lock(&bl->bl_ext_lock);
+ for (be = ext_tree_first(root); be; be = ext_tree_next(be)) {
+ if (be->be_state != PNFS_BLOCK_INVALID_DATA ||
+ be->be_tag != EXTENT_COMMITTING)
+ continue;
+
+ if (status) {
+ /*
+ * Mark as written and try again.
+ *
+ * XXX: some real error handling here wouldn't hurt..
+ */
+ be->be_tag = EXTENT_WRITTEN;
+ } else {
+ be->be_state = PNFS_BLOCK_READWRITE_DATA;
+ be->be_tag = 0;
+ }
+
+ be = ext_try_to_merge_left(root, be);
+ be = ext_try_to_merge_right(root, be);
+ }
+ spin_unlock(&bl->bl_ext_lock);
+}
diff --git a/fs/nfs/blocklayout/extents.c b/fs/nfs/blocklayout/extents.c
deleted file mode 100644
index 4d01614..0000000
--- a/fs/nfs/blocklayout/extents.c
+++ /dev/null
@@ -1,908 +0,0 @@
-/*
- * linux/fs/nfs/blocklayout/blocklayout.h
- *
- * Module for the NFSv4.1 pNFS block layout driver.
- *
- * Copyright (c) 2006 The Regents of the University of Michigan.
- * All rights reserved.
- *
- * Andy Adamson <[email protected]>
- * Fred Isaman <[email protected]>
- *
- * permission is granted to use, copy, create derivative works and
- * redistribute this software and such derivative works for any purpose,
- * so long as the name of the university of michigan is not used in
- * any advertising or publicity pertaining to the use or distribution
- * of this software without specific, written prior authorization. if
- * the above copyright notice or any other identification of the
- * university of michigan is included in any copy of any portion of
- * this software, then the disclaimer below must also be included.
- *
- * this software is provided as is, without representation from the
- * university of michigan as to its fitness for any purpose, and without
- * warranty by the university of michigan of any kind, either express
- * or implied, including without limitation the implied warranties of
- * merchantability and fitness for a particular purpose. the regents
- * of the university of michigan shall not be liable for any damages,
- * including special, indirect, incidental, or consequential damages,
- * with respect to any claim arising out or in connection with the use
- * of the software, even if it has been or is hereafter advised of the
- * possibility of such damages.
- */
-
-#include "blocklayout.h"
-#define NFSDBG_FACILITY NFSDBG_PNFS_LD
-
-/* Bit numbers */
-#define EXTENT_INITIALIZED 0
-#define EXTENT_WRITTEN 1
-#define EXTENT_IN_COMMIT 2
-#define INTERNAL_EXISTS MY_MAX_TAGS
-#define INTERNAL_MASK ((1 << INTERNAL_EXISTS) - 1)
-
-/* Returns largest t<=s s.t. t%base==0 */
-static inline sector_t normalize(sector_t s, int base)
-{
- sector_t tmp = s; /* Since do_div modifies its argument */
- return s - sector_div(tmp, base);
-}
-
-static inline sector_t normalize_up(sector_t s, int base)
-{
- return normalize(s + base - 1, base);
-}
-
-/* Complete stub using list while determine API wanted */
-
-/* Returns tags, or negative */
-static int32_t _find_entry(struct my_tree *tree, u64 s)
-{
- struct pnfs_inval_tracking *pos;
-
- dprintk("%s(%llu) enter\n", __func__, s);
- list_for_each_entry_reverse(pos, &tree->mtt_stub, it_link) {
- if (pos->it_sector > s)
- continue;
- else if (pos->it_sector == s)
- return pos->it_tags & INTERNAL_MASK;
- else
- break;
- }
- return -ENOENT;
-}
-
-static inline
-int _has_tag(struct my_tree *tree, u64 s, int32_t tag)
-{
- int32_t tags;
-
- dprintk("%s(%llu, %i) enter\n", __func__, s, tag);
- s = normalize(s, tree->mtt_step_size);
- tags = _find_entry(tree, s);
- if ((tags < 0) || !(tags & (1 << tag)))
- return 0;
- else
- return 1;
-}
-
-/* Creates entry with tag, or if entry already exists, unions tag to it.
- * If storage is not NULL, newly created entry will use it.
- * Returns number of entries added, or negative on error.
- */
-static int _add_entry(struct my_tree *tree, u64 s, int32_t tag,
- struct pnfs_inval_tracking *storage)
-{
- int found = 0;
- struct pnfs_inval_tracking *pos;
-
- dprintk("%s(%llu, %i, %p) enter\n", __func__, s, tag, storage);
- list_for_each_entry_reverse(pos, &tree->mtt_stub, it_link) {
- if (pos->it_sector > s)
- continue;
- else if (pos->it_sector == s) {
- found = 1;
- break;
- } else
- break;
- }
- if (found) {
- pos->it_tags |= (1 << tag);
- return 0;
- } else {
- struct pnfs_inval_tracking *new;
- new = storage;
- new->it_sector = s;
- new->it_tags = (1 << tag);
- list_add(&new->it_link, &pos->it_link);
- return 1;
- }
-}
-
-/* XXXX Really want option to not create */
-/* Over range, unions tag with existing entries, else creates entry with tag */
-static int _set_range(struct my_tree *tree, int32_t tag, u64 s, u64 length)
-{
- u64 i;
-
- dprintk("%s(%i, %llu, %llu) enter\n", __func__, tag, s, length);
- for (i = normalize(s, tree->mtt_step_size); i < s + length;
- i += tree->mtt_step_size)
- if (_add_entry(tree, i, tag, NULL))
- return -ENOMEM;
- return 0;
-}
-
-/* Ensure that future operations on given range of tree will not malloc */
-static int _preload_range(struct pnfs_inval_markings *marks,
- u64 offset, u64 length)
-{
- u64 start, end, s;
- int count, i, used = 0, status = -ENOMEM;
- struct pnfs_inval_tracking **storage;
- struct my_tree *tree = &marks->im_tree;
-
- dprintk("%s(%llu, %llu) enter\n", __func__, offset, length);
- start = normalize(offset, tree->mtt_step_size);
- end = normalize_up(offset + length, tree->mtt_step_size);
- count = (int)(end - start) / (int)tree->mtt_step_size;
-
- /* Pre-malloc what memory we might need */
- storage = kcalloc(count, sizeof(*storage), GFP_NOFS);
- if (!storage)
- return -ENOMEM;
- for (i = 0; i < count; i++) {
- storage[i] = kmalloc(sizeof(struct pnfs_inval_tracking),
- GFP_NOFS);
- if (!storage[i])
- goto out_cleanup;
- }
-
- spin_lock_bh(&marks->im_lock);
- for (s = start; s < end; s += tree->mtt_step_size)
- used += _add_entry(tree, s, INTERNAL_EXISTS, storage[used]);
- spin_unlock_bh(&marks->im_lock);
-
- status = 0;
-
- out_cleanup:
- for (i = used; i < count; i++) {
- if (!storage[i])
- break;
- kfree(storage[i]);
- }
- kfree(storage);
- return status;
-}
-
-/* We are relying on page lock to serialize this */
-int bl_is_sector_init(struct pnfs_inval_markings *marks, sector_t isect)
-{
- int rv;
-
- spin_lock_bh(&marks->im_lock);
- rv = _has_tag(&marks->im_tree, isect, EXTENT_INITIALIZED);
- spin_unlock_bh(&marks->im_lock);
- return rv;
-}
-
-/* Assume start, end already sector aligned */
-static int
-_range_has_tag(struct my_tree *tree, u64 start, u64 end, int32_t tag)
-{
- struct pnfs_inval_tracking *pos;
- u64 expect = 0;
-
- dprintk("%s(%llu, %llu, %i) enter\n", __func__, start, end, tag);
- list_for_each_entry_reverse(pos, &tree->mtt_stub, it_link) {
- if (pos->it_sector >= end)
- continue;
- if (!expect) {
- if ((pos->it_sector == end - tree->mtt_step_size) &&
- (pos->it_tags & (1 << tag))) {
- expect = pos->it_sector - tree->mtt_step_size;
- if (pos->it_sector < tree->mtt_step_size || expect < start)
- return 1;
- continue;
- } else {
- return 0;
- }
- }
- if (pos->it_sector != expect || !(pos->it_tags & (1 << tag)))
- return 0;
- expect -= tree->mtt_step_size;
- if (expect < start)
- return 1;
- }
- return 0;
-}
-
-static int is_range_written(struct pnfs_inval_markings *marks,
- sector_t start, sector_t end)
-{
- int rv;
-
- spin_lock_bh(&marks->im_lock);
- rv = _range_has_tag(&marks->im_tree, start, end, EXTENT_WRITTEN);
- spin_unlock_bh(&marks->im_lock);
- return rv;
-}
-
-/* Marks sectors in [offest, offset_length) as having been initialized.
- * All lengths are step-aligned, where step is min(pagesize, blocksize).
- * Currently assumes offset is page-aligned
- */
-int bl_mark_sectors_init(struct pnfs_inval_markings *marks,
- sector_t offset, sector_t length)
-{
- sector_t start, end;
-
- dprintk("%s(offset=%llu,len=%llu) enter\n",
- __func__, (u64)offset, (u64)length);
-
- start = normalize(offset, marks->im_block_size);
- end = normalize_up(offset + length, marks->im_block_size);
- if (_preload_range(marks, start, end - start))
- goto outerr;
-
- spin_lock_bh(&marks->im_lock);
- if (_set_range(&marks->im_tree, EXTENT_INITIALIZED, offset, length))
- goto out_unlock;
- spin_unlock_bh(&marks->im_lock);
-
- return 0;
-
-out_unlock:
- spin_unlock_bh(&marks->im_lock);
-outerr:
- return -ENOMEM;
-}
-
-/* Marks sectors in [offest, offset+length) as having been written to disk.
- * All lengths should be block aligned.
- */
-static int mark_written_sectors(struct pnfs_inval_markings *marks,
- sector_t offset, sector_t length)
-{
- int status;
-
- dprintk("%s(offset=%llu,len=%llu) enter\n", __func__,
- (u64)offset, (u64)length);
- spin_lock_bh(&marks->im_lock);
- status = _set_range(&marks->im_tree, EXTENT_WRITTEN, offset, length);
- spin_unlock_bh(&marks->im_lock);
- return status;
-}
-
-static void print_short_extent(struct pnfs_block_short_extent *be)
-{
- dprintk("PRINT SHORT EXTENT extent %p\n", be);
- if (be) {
- dprintk(" be_f_offset %llu\n", (u64)be->bse_f_offset);
- dprintk(" be_length %llu\n", (u64)be->bse_length);
- }
-}
-
-static void print_clist(struct list_head *list, unsigned int count)
-{
- struct pnfs_block_short_extent *be;
- unsigned int i = 0;
-
- ifdebug(FACILITY) {
- printk(KERN_DEBUG "****************\n");
- printk(KERN_DEBUG "Extent list looks like:\n");
- list_for_each_entry(be, list, bse_node) {
- i++;
- print_short_extent(be);
- }
- if (i != count)
- printk(KERN_DEBUG "\n\nExpected %u entries\n\n\n", count);
- printk(KERN_DEBUG "****************\n");
- }
-}
-
-/* Note: In theory, we should do more checking that devid's match between
- * old and new, but if they don't, the lists are too corrupt to salvage anyway.
- */
-/* Note this is very similar to bl_add_merge_extent */
-static void add_to_commitlist(struct pnfs_block_layout *bl,
- struct pnfs_block_short_extent *new)
-{
- struct list_head *clist = &bl->bl_commit;
- struct pnfs_block_short_extent *old, *save;
- sector_t end = new->bse_f_offset + new->bse_length;
-
- dprintk("%s enter\n", __func__);
- print_short_extent(new);
- print_clist(clist, bl->bl_count);
- bl->bl_count++;
- /* Scan for proper place to insert, extending new to the left
- * as much as possible.
- */
- list_for_each_entry_safe(old, save, clist, bse_node) {
- if (new->bse_f_offset < old->bse_f_offset)
- break;
- if (end <= old->bse_f_offset + old->bse_length) {
- /* Range is already in list */
- bl->bl_count--;
- kfree(new);
- return;
- } else if (new->bse_f_offset <=
- old->bse_f_offset + old->bse_length) {
- /* new overlaps or abuts existing be */
- if (new->bse_mdev == old->bse_mdev) {
- /* extend new to fully replace old */
- new->bse_length += new->bse_f_offset -
- old->bse_f_offset;
- new->bse_f_offset = old->bse_f_offset;
- list_del(&old->bse_node);
- bl->bl_count--;
- kfree(old);
- }
- }
- }
- /* Note that if we never hit the above break, old will not point to a
- * valid extent. However, in that case &old->bse_node==list.
- */
- list_add_tail(&new->bse_node, &old->bse_node);
- /* Scan forward for overlaps. If we find any, extend new and
- * remove the overlapped extent.
- */
- old = list_prepare_entry(new, clist, bse_node);
- list_for_each_entry_safe_continue(old, save, clist, bse_node) {
- if (end < old->bse_f_offset)
- break;
- /* new overlaps or abuts old */
- if (new->bse_mdev == old->bse_mdev) {
- if (end < old->bse_f_offset + old->bse_length) {
- /* extend new to fully cover old */
- end = old->bse_f_offset + old->bse_length;
- new->bse_length = end - new->bse_f_offset;
- }
- list_del(&old->bse_node);
- bl->bl_count--;
- kfree(old);
- }
- }
- dprintk("%s: after merging\n", __func__);
- print_clist(clist, bl->bl_count);
-}
-
-/* Note the range described by offset, length is guaranteed to be contained
- * within be.
- * new will be freed, either by this function or add_to_commitlist if they
- * decide not to use it, or after LAYOUTCOMMIT uses it in the commitlist.
- */
-int bl_mark_for_commit(struct pnfs_block_extent *be,
- sector_t offset, sector_t length,
- struct pnfs_block_short_extent *new)
-{
- sector_t new_end, end = offset + length;
- struct pnfs_block_layout *bl = container_of(be->be_inval,
- struct pnfs_block_layout,
- bl_inval);
-
- mark_written_sectors(be->be_inval, offset, length);
- /* We want to add the range to commit list, but it must be
- * block-normalized, and verified that the normalized range has
- * been entirely written to disk.
- */
- new->bse_f_offset = offset;
- offset = normalize(offset, bl->bl_blocksize);
- if (offset < new->bse_f_offset) {
- if (is_range_written(be->be_inval, offset, new->bse_f_offset))
- new->bse_f_offset = offset;
- else
- new->bse_f_offset = offset + bl->bl_blocksize;
- }
- new_end = normalize_up(end, bl->bl_blocksize);
- if (end < new_end) {
- if (is_range_written(be->be_inval, end, new_end))
- end = new_end;
- else
- end = new_end - bl->bl_blocksize;
- }
- if (end <= new->bse_f_offset) {
- kfree(new);
- return 0;
- }
- new->bse_length = end - new->bse_f_offset;
- new->bse_devid = be->be_devid;
- new->bse_mdev = be->be_mdev;
-
- spin_lock(&bl->bl_ext_lock);
- add_to_commitlist(bl, new);
- spin_unlock(&bl->bl_ext_lock);
- return 0;
-}
-
-static void print_bl_extent(struct pnfs_block_extent *be)
-{
- dprintk("PRINT EXTENT extent %p\n", be);
- if (be) {
- dprintk(" be_f_offset %llu\n", (u64)be->be_f_offset);
- dprintk(" be_length %llu\n", (u64)be->be_length);
- dprintk(" be_v_offset %llu\n", (u64)be->be_v_offset);
- dprintk(" be_state %d\n", be->be_state);
- }
-}
-
-static void
-destroy_extent(struct kref *kref)
-{
- struct pnfs_block_extent *be;
-
- be = container_of(kref, struct pnfs_block_extent, be_refcnt);
- dprintk("%s be=%p\n", __func__, be);
- kfree(be);
-}
-
-void
-bl_put_extent(struct pnfs_block_extent *be)
-{
- if (be) {
- dprintk("%s enter %p (%i)\n", __func__, be,
- atomic_read(&be->be_refcnt.refcount));
- kref_put(&be->be_refcnt, destroy_extent);
- }
-}
-
-struct pnfs_block_extent *bl_alloc_extent(void)
-{
- struct pnfs_block_extent *be;
-
- be = kmalloc(sizeof(struct pnfs_block_extent), GFP_NOFS);
- if (!be)
- return NULL;
- INIT_LIST_HEAD(&be->be_node);
- kref_init(&be->be_refcnt);
- be->be_inval = NULL;
- return be;
-}
-
-static void print_elist(struct list_head *list)
-{
- struct pnfs_block_extent *be;
- dprintk("****************\n");
- dprintk("Extent list looks like:\n");
- list_for_each_entry(be, list, be_node) {
- print_bl_extent(be);
- }
- dprintk("****************\n");
-}
-
-static inline int
-extents_consistent(struct pnfs_block_extent *old, struct pnfs_block_extent *new)
-{
- /* Note this assumes new->be_f_offset >= old->be_f_offset */
- return (new->be_state == old->be_state) &&
- ((new->be_state == PNFS_BLOCK_NONE_DATA) ||
- ((new->be_v_offset - old->be_v_offset ==
- new->be_f_offset - old->be_f_offset) &&
- new->be_mdev == old->be_mdev));
-}
-
-/* Adds new to appropriate list in bl, modifying new and removing existing
- * extents as appropriate to deal with overlaps.
- *
- * See bl_find_get_extent for list constraints.
- *
- * Refcount on new is already set. If end up not using it, or error out,
- * need to put the reference.
- *
- * bl->bl_ext_lock is held by caller.
- */
-int
-bl_add_merge_extent(struct pnfs_block_layout *bl,
- struct pnfs_block_extent *new)
-{
- struct pnfs_block_extent *be, *tmp;
- sector_t end = new->be_f_offset + new->be_length;
- struct list_head *list;
-
- dprintk("%s enter with be=%p\n", __func__, new);
- print_bl_extent(new);
- list = &bl->bl_extents[bl_choose_list(new->be_state)];
- print_elist(list);
-
- /* Scan for proper place to insert, extending new to the left
- * as much as possible.
- */
- list_for_each_entry_safe_reverse(be, tmp, list, be_node) {
- if (new->be_f_offset >= be->be_f_offset + be->be_length)
- break;
- if (new->be_f_offset >= be->be_f_offset) {
- if (end <= be->be_f_offset + be->be_length) {
- /* new is a subset of existing be*/
- if (extents_consistent(be, new)) {
- dprintk("%s: new is subset, ignoring\n",
- __func__);
- bl_put_extent(new);
- return 0;
- } else {
- goto out_err;
- }
- } else {
- /* |<-- be -->|
- * |<-- new -->| */
- if (extents_consistent(be, new)) {
- /* extend new to fully replace be */
- new->be_length += new->be_f_offset -
- be->be_f_offset;
- new->be_f_offset = be->be_f_offset;
- new->be_v_offset = be->be_v_offset;
- dprintk("%s: removing %p\n", __func__, be);
- list_del(&be->be_node);
- bl_put_extent(be);
- } else {
- goto out_err;
- }
- }
- } else if (end >= be->be_f_offset + be->be_length) {
- /* new extent overlap existing be */
- if (extents_consistent(be, new)) {
- /* extend new to fully replace be */
- dprintk("%s: removing %p\n", __func__, be);
- list_del(&be->be_node);
- bl_put_extent(be);
- } else {
- goto out_err;
- }
- } else if (end > be->be_f_offset) {
- /* |<-- be -->|
- *|<-- new -->| */
- if (extents_consistent(new, be)) {
- /* extend new to fully replace be */
- new->be_length += be->be_f_offset + be->be_length -
- new->be_f_offset - new->be_length;
- dprintk("%s: removing %p\n", __func__, be);
- list_del(&be->be_node);
- bl_put_extent(be);
- } else {
- goto out_err;
- }
- }
- }
- /* Note that if we never hit the above break, be will not point to a
- * valid extent. However, in that case &be->be_node==list.
- */
- list_add(&new->be_node, &be->be_node);
- dprintk("%s: inserting new\n", __func__);
- print_elist(list);
- /* FIXME - The per-list consistency checks have all been done,
- * should now check cross-list consistency.
- */
- return 0;
-
- out_err:
- bl_put_extent(new);
- return -EIO;
-}
-
-/* Returns extent, or NULL. If a second READ extent exists, it is returned
- * in cow_read, if given.
- *
- * The extents are kept in two seperate ordered lists, one for READ and NONE,
- * one for READWRITE and INVALID. Within each list, we assume:
- * 1. Extents are ordered by file offset.
- * 2. For any given isect, there is at most one extents that matches.
- */
-struct pnfs_block_extent *
-bl_find_get_extent(struct pnfs_block_layout *bl, sector_t isect,
- struct pnfs_block_extent **cow_read)
-{
- struct pnfs_block_extent *be, *cow, *ret;
- int i;
-
- dprintk("%s enter with isect %llu\n", __func__, (u64)isect);
- cow = ret = NULL;
- spin_lock(&bl->bl_ext_lock);
- for (i = 0; i < EXTENT_LISTS; i++) {
- list_for_each_entry_reverse(be, &bl->bl_extents[i], be_node) {
- if (isect >= be->be_f_offset + be->be_length)
- break;
- if (isect >= be->be_f_offset) {
- /* We have found an extent */
- dprintk("%s Get %p (%i)\n", __func__, be,
- atomic_read(&be->be_refcnt.refcount));
- kref_get(&be->be_refcnt);
- if (!ret)
- ret = be;
- else if (be->be_state != PNFS_BLOCK_READ_DATA)
- bl_put_extent(be);
- else
- cow = be;
- break;
- }
- }
- if (ret &&
- (!cow_read || ret->be_state != PNFS_BLOCK_INVALID_DATA))
- break;
- }
- spin_unlock(&bl->bl_ext_lock);
- if (cow_read)
- *cow_read = cow;
- print_bl_extent(ret);
- return ret;
-}
-
-/* Similar to bl_find_get_extent, but called with lock held, and ignores cow */
-static struct pnfs_block_extent *
-bl_find_get_extent_locked(struct pnfs_block_layout *bl, sector_t isect)
-{
- struct pnfs_block_extent *be, *ret = NULL;
- int i;
-
- dprintk("%s enter with isect %llu\n", __func__, (u64)isect);
- for (i = 0; i < EXTENT_LISTS; i++) {
- if (ret)
- break;
- list_for_each_entry_reverse(be, &bl->bl_extents[i], be_node) {
- if (isect >= be->be_f_offset + be->be_length)
- break;
- if (isect >= be->be_f_offset) {
- /* We have found an extent */
- dprintk("%s Get %p (%i)\n", __func__, be,
- atomic_read(&be->be_refcnt.refcount));
- kref_get(&be->be_refcnt);
- ret = be;
- break;
- }
- }
- }
- print_bl_extent(ret);
- return ret;
-}
-
-int
-encode_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
- struct xdr_stream *xdr,
- const struct nfs4_layoutcommit_args *arg)
-{
- struct pnfs_block_short_extent *lce, *save;
- unsigned int count = 0;
- __be32 *p, *xdr_start;
-
- dprintk("%s enter\n", __func__);
- /* BUG - creation of bl_commit is buggy - need to wait for
- * entire block to be marked WRITTEN before it can be added.
- */
- spin_lock(&bl->bl_ext_lock);
- /* Want to adjust for possible truncate */
- /* We now want to adjust argument range */
-
- /* XDR encode the ranges found */
- xdr_start = xdr_reserve_space(xdr, 8);
- if (!xdr_start)
- goto out;
- list_for_each_entry_safe(lce, save, &bl->bl_commit, bse_node) {
- p = xdr_reserve_space(xdr, 7 * 4 + sizeof(lce->bse_devid.data));
- if (!p)
- break;
- p = xdr_encode_opaque_fixed(p, lce->bse_devid.data, NFS4_DEVICEID4_SIZE);
- p = xdr_encode_hyper(p, lce->bse_f_offset << SECTOR_SHIFT);
- p = xdr_encode_hyper(p, lce->bse_length << SECTOR_SHIFT);
- p = xdr_encode_hyper(p, 0LL);
- *p++ = cpu_to_be32(PNFS_BLOCK_READWRITE_DATA);
- list_move_tail(&lce->bse_node, &bl->bl_committing);
- bl->bl_count--;
- count++;
- }
- xdr_start[0] = cpu_to_be32((xdr->p - xdr_start - 1) * 4);
- xdr_start[1] = cpu_to_be32(count);
-out:
- spin_unlock(&bl->bl_ext_lock);
- dprintk("%s found %i ranges\n", __func__, count);
- return 0;
-}
-
-/* Helper function to set_to_rw that initialize a new extent */
-static void
-_prep_new_extent(struct pnfs_block_extent *new,
- struct pnfs_block_extent *orig,
- sector_t offset, sector_t length, int state)
-{
- kref_init(&new->be_refcnt);
- /* don't need to INIT_LIST_HEAD(&new->be_node) */
- memcpy(&new->be_devid, &orig->be_devid, sizeof(struct nfs4_deviceid));
- new->be_mdev = orig->be_mdev;
- new->be_f_offset = offset;
- new->be_length = length;
- new->be_v_offset = orig->be_v_offset - orig->be_f_offset + offset;
- new->be_state = state;
- new->be_inval = orig->be_inval;
-}
-
-/* Tries to merge be with extent in front of it in list.
- * Frees storage if not used.
- */
-static struct pnfs_block_extent *
-_front_merge(struct pnfs_block_extent *be, struct list_head *head,
- struct pnfs_block_extent *storage)
-{
- struct pnfs_block_extent *prev;
-
- if (!storage)
- goto no_merge;
- if (&be->be_node == head || be->be_node.prev == head)
- goto no_merge;
- prev = list_entry(be->be_node.prev, struct pnfs_block_extent, be_node);
- if ((prev->be_f_offset + prev->be_length != be->be_f_offset) ||
- !extents_consistent(prev, be))
- goto no_merge;
- _prep_new_extent(storage, prev, prev->be_f_offset,
- prev->be_length + be->be_length, prev->be_state);
- list_replace(&prev->be_node, &storage->be_node);
- bl_put_extent(prev);
- list_del(&be->be_node);
- bl_put_extent(be);
- return storage;
-
- no_merge:
- kfree(storage);
- return be;
-}
-
-static u64
-set_to_rw(struct pnfs_block_layout *bl, u64 offset, u64 length)
-{
- u64 rv = offset + length;
- struct pnfs_block_extent *be, *e1, *e2, *e3, *new, *old;
- struct pnfs_block_extent *children[3];
- struct pnfs_block_extent *merge1 = NULL, *merge2 = NULL;
- int i = 0, j;
-
- dprintk("%s(%llu, %llu)\n", __func__, offset, length);
- /* Create storage for up to three new extents e1, e2, e3 */
- e1 = kmalloc(sizeof(*e1), GFP_ATOMIC);
- e2 = kmalloc(sizeof(*e2), GFP_ATOMIC);
- e3 = kmalloc(sizeof(*e3), GFP_ATOMIC);
- /* BUG - we are ignoring any failure */
- if (!e1 || !e2 || !e3)
- goto out_nosplit;
-
- spin_lock(&bl->bl_ext_lock);
- be = bl_find_get_extent_locked(bl, offset);
- rv = be->be_f_offset + be->be_length;
- if (be->be_state != PNFS_BLOCK_INVALID_DATA) {
- spin_unlock(&bl->bl_ext_lock);
- goto out_nosplit;
- }
- /* Add e* to children, bumping e*'s krefs */
- if (be->be_f_offset != offset) {
- _prep_new_extent(e1, be, be->be_f_offset,
- offset - be->be_f_offset,
- PNFS_BLOCK_INVALID_DATA);
- children[i++] = e1;
- print_bl_extent(e1);
- } else
- merge1 = e1;
- _prep_new_extent(e2, be, offset,
- min(length, be->be_f_offset + be->be_length - offset),
- PNFS_BLOCK_READWRITE_DATA);
- children[i++] = e2;
- print_bl_extent(e2);
- if (offset + length < be->be_f_offset + be->be_length) {
- _prep_new_extent(e3, be, e2->be_f_offset + e2->be_length,
- be->be_f_offset + be->be_length -
- offset - length,
- PNFS_BLOCK_INVALID_DATA);
- children[i++] = e3;
- print_bl_extent(e3);
- } else
- merge2 = e3;
-
- /* Remove be from list, and insert the e* */
- /* We don't get refs on e*, since this list is the base reference
- * set when init'ed.
- */
- if (i < 3)
- children[i] = NULL;
- new = children[0];
- list_replace(&be->be_node, &new->be_node);
- bl_put_extent(be);
- new = _front_merge(new, &bl->bl_extents[RW_EXTENT], merge1);
- for (j = 1; j < i; j++) {
- old = new;
- new = children[j];
- list_add(&new->be_node, &old->be_node);
- }
- if (merge2) {
- /* This is a HACK, should just create a _back_merge function */
- new = list_entry(new->be_node.next,
- struct pnfs_block_extent, be_node);
- new = _front_merge(new, &bl->bl_extents[RW_EXTENT], merge2);
- }
- spin_unlock(&bl->bl_ext_lock);
-
- /* Since we removed the base reference above, be is now scheduled for
- * destruction.
- */
- bl_put_extent(be);
- dprintk("%s returns %llu after split\n", __func__, rv);
- return rv;
-
- out_nosplit:
- kfree(e1);
- kfree(e2);
- kfree(e3);
- dprintk("%s returns %llu without splitting\n", __func__, rv);
- return rv;
-}
-
-void
-clean_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
- const struct nfs4_layoutcommit_args *arg,
- int status)
-{
- struct pnfs_block_short_extent *lce, *save;
-
- dprintk("%s status %d\n", __func__, status);
- list_for_each_entry_safe(lce, save, &bl->bl_committing, bse_node) {
- if (likely(!status)) {
- u64 offset = lce->bse_f_offset;
- u64 end = offset + lce->bse_length;
-
- do {
- offset = set_to_rw(bl, offset, end - offset);
- } while (offset < end);
- list_del(&lce->bse_node);
-
- kfree(lce);
- } else {
- list_del(&lce->bse_node);
- spin_lock(&bl->bl_ext_lock);
- add_to_commitlist(bl, lce);
- spin_unlock(&bl->bl_ext_lock);
- }
- }
-}
-
-int bl_push_one_short_extent(struct pnfs_inval_markings *marks)
-{
- struct pnfs_block_short_extent *new;
-
- new = kmalloc(sizeof(*new), GFP_NOFS);
- if (unlikely(!new))
- return -ENOMEM;
-
- spin_lock_bh(&marks->im_lock);
- list_add(&new->bse_node, &marks->im_extents);
- spin_unlock_bh(&marks->im_lock);
-
- return 0;
-}
-
-struct pnfs_block_short_extent *
-bl_pop_one_short_extent(struct pnfs_inval_markings *marks)
-{
- struct pnfs_block_short_extent *rv = NULL;
-
- spin_lock_bh(&marks->im_lock);
- if (!list_empty(&marks->im_extents)) {
- rv = list_entry((&marks->im_extents)->next,
- struct pnfs_block_short_extent, bse_node);
- list_del_init(&rv->bse_node);
- }
- spin_unlock_bh(&marks->im_lock);
-
- return rv;
-}
-
-void bl_free_short_extents(struct pnfs_inval_markings *marks, int num_to_free)
-{
- struct pnfs_block_short_extent *se = NULL, *tmp;
-
- if (num_to_free <= 0)
- return;
-
- spin_lock(&marks->im_lock);
- list_for_each_entry_safe(se, tmp, &marks->im_extents, bse_node) {
- list_del(&se->bse_node);
- kfree(se);
- if (--num_to_free == 0)
- break;
- }
- spin_unlock(&marks->im_lock);
-
- BUG_ON(num_to_free > 0);
-}
--
1.9.1
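
For readers working through the new interface: ext_tree_insert() and
ext_tree_lookup() replace the old list walks. A minimal sketch of the
calling convention (not part of the patch; the function name and numeric
values below are made up, and it assumes an already-initialized
struct pnfs_block_layout *bl):

	#include "blocklayout.h"

	static int example_insert_and_lookup(struct pnfs_block_layout *bl)
	{
		struct pnfs_block_extent *new;
		struct pnfs_block_extent found;

		new = kzalloc(sizeof(*new), GFP_NOFS);
		if (!new)
			return -ENOMEM;

		/* all offsets and lengths are in 512-byte sectors */
		new->be_f_offset = 0;		/* offset in the file */
		new->be_v_offset = 1024;	/* offset on the volume */
		new->be_length = 8;
		new->be_state = PNFS_BLOCK_INVALID_DATA;

		/*
		 * On success the tree owns "new": it is either linked in as
		 * a node or merged into a neighbour and freed.  On failure
		 * it was not linked, so the caller frees it.
		 */
		if (ext_tree_insert(bl, new)) {
			kfree(new);
			return -EIO;
		}

		/*
		 * Lookups copy the extent into "found", so the result stays
		 * usable after bl_ext_lock is dropped -- this is what lets
		 * the write path use an extent without pinning the tree.
		 */
		if (!ext_tree_lookup(bl, 4, &found, true))
			return -ENOENT;

		return 0;
	}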


2014-08-21 16:07:30

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 06/19] pnfs: avoid using stale stateids after layoutreturn

After we issue a layoutreturn operation the server may free the layout
stateid, which will cause a bad stateid error when the client uses it again.

We currently try to avoid this case by choosing the open stateid if no
lsegs are present for this inode. But various places can hold references
on lsegs and thus cause the list not to be empty shortly after a layout
return. Add an explicit flag to mark the current layout stateid invalid
and force usage of the open stateid after we did a full-file layoutreturn.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/pnfs.c | 7 ++++++-
fs/nfs/pnfs.h | 1 +
2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 32a14a6..bce7f1b 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -741,7 +741,8 @@ pnfs_choose_layoutget_stateid(nfs4_stateid *dst, struct pnfs_layout_hdr *lo,
status = -EAGAIN;
} else if (!nfs4_valid_open_stateid(open_state)) {
status = -EBADF;
- } else if (list_empty(&lo->plh_segs)) {
+ } else if (list_empty(&lo->plh_segs) ||
+ test_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags)) {
int seq;

do {
@@ -863,6 +864,8 @@ _pnfs_return_layout(struct inode *ino)
dprintk("NFS: %s no layout segments to return\n", __func__);
goto out;
}
+
+ set_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags);
lo->plh_block_lgets++;
spin_unlock(&ino->i_lock);
pnfs_free_lseg_list(&tmp_list);
@@ -1383,6 +1386,8 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
lo->plh_barrier = be32_to_cpu(res->stateid.seqid);
}

+ clear_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags);
+
pnfs_get_lseg(lseg);
pnfs_layout_insert_lseg(lo, lseg);

diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index aca3dff..16cd14f 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -65,6 +65,7 @@ enum {
NFS_LAYOUT_BULK_RECALL, /* bulk recall affecting layout */
NFS_LAYOUT_ROC, /* some lseg had roc bit set */
NFS_LAYOUT_RETURN, /* Return this layout ASAP */
+ NFS_LAYOUT_INVALID_STID, /* layout stateid is invalid */
};

enum layoutdriver_policy_flags {
--
1.9.1
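
The flag's lifecycle, condensed from the three hunks above (a paraphrase
for readability, not a literal quote of the resulting functions):

	/* _pnfs_return_layout(): a full-file return invalidates the stateid */
	set_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags);

	/* pnfs_choose_layoutget_stateid(): fall back to the open stateid */
	if (list_empty(&lo->plh_segs) ||
	    test_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags))
		/* copy the open stateid instead of lo->plh_stateid */;

	/* pnfs_layout_process(): a successful LAYOUTGET revalidates it */
	clear_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags);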


2014-08-26 14:26:28

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On Tue, Aug 26, 2014 at 10:10 AM, Boaz Harrosh <[email protected]> wrote:
> From: Boaz Harrosh <[email protected]>
>
> This fixes a deadlock in the pnfs recall processing.
>
> pnfs_layoutcommit_inode() is called through update_inode(), which
> the VFS invokes because the pnfs write I/O path marks the inode
> dirty.
>
> But the VFS will not schedule another update_inode() if it is
> already inside an update_inode() or an sb writeback.
>
> As part of writeback the pnfs code might get stuck in LAYOUT_GET
> with the server returning ERR_RECALL_CONFLICT, because some
> operation has caused the server to RECALL all layouts, including
> those from our client.
>
> So the RECALL is received, but our client keeps returning ERR_DELAY
> because its write segments need a LAYOUT_COMMIT; but the
> pnfs_layoutcommit_inode() will never come, because it is scheduled
> behind the LAYOUT_GET, which is stuck waiting for the recall to
> finish.
>
> Hence the deadlock: the client is stuck polling LAYOUT_GET and
> receiving ERR_RECALL_CONFLICT; the server is stuck polling RECALL
> and receiving ERR_DELAY.
>
> With pnfs-objects the above condition can easily happen when
> a file grows beyond a group of devices. The pnfs-objects server
> will RECALL all layouts because the file-objects map will
> change and all old layouts will have stale attributes; therefore
> the RECALL is initiated as part of a LAYOUT_GET, and this can
> be triggered from within a single client operation.
>
> A simple solution is to kick off a pnfs_layoutcommit_inode()
> from within the recall, to free any need-to-commit segments
> and let the client return success on the RECALL, so streaming
> can continue.
>
> This patch is based on 3.17-rc1. It is completely UNTESTED.
> I have tested a version of this patch at around the 3.12 kernel,
> at which point the deadlock was resolved, but I hit some race
> conditions in pnfs state management further on, so the actual
> overall processing was not fixed. But hopefully those were fixed
> by Trond and Christoph, and it should work better now.
>
> Signed-off-by: Boaz Harrosh <[email protected]>
> ---
> fs/nfs/callback_proc.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
> index 41db525..8660f96 100644
> --- a/fs/nfs/callback_proc.c
> +++ b/fs/nfs/callback_proc.c
> @@ -171,6 +171,14 @@ static u32 initiate_file_draining(struct nfs_client *clp,
> goto out;
>
> ino = lo->plh_inode;
> +
> + spin_lock(&ino->i_lock);
> + pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
> + spin_unlock(&ino->i_lock);
> +
> + /* kick out any segs held by need to commit */
> + pnfs_layoutcommit_inode(ino, true);

Making this call synchronous could deadlock the entire back channel.
Is there any reason why it can't just be made asynchronous?

> +
> spin_lock(&ino->i_lock);
> if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
> pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
> @@ -178,7 +186,6 @@ static u32 initiate_file_draining(struct nfs_client *clp,
> rv = NFS4ERR_DELAY;
> else
> rv = NFS4ERR_NOMATCHING_LAYOUT;
> - pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
> spin_unlock(&ino->i_lock);
> pnfs_free_lseg_list(&free_me_list);
> pnfs_put_layout_hdr(lo);
> --
> 1.9.3
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html



--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-08-27 08:50:44

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On 08/26/2014 10:46 PM, Trond Myklebust wrote:
<>
>
> So to return to the original question: could we please change the
> layoutcommit in your patch so that it is asynchronous?
>

There is nothing to change. The original message at the head of
this thread is Christoph's patch with the asynchronous layoutcommit.

As a reply I posted what I had in my tree, with a sync operation,
because I wanted the recall to take 500 microseconds instead of
300-1000 milliseconds.

When the STD specifically supports my model, I do not see why not.

Don't you guys have recalls on the hot path?

Sigh, another nail in the coffin of what was supposed to be a damn
good STD.

Thinking about it, from day one you had a personal agenda against
RECALLS. For some reason, which you never told me, you decided to
crash and burn any proper use of LO_RECALLS. With that dreaded
*synchronous*, polling-driven, forgetful-model recall.
(One day we'll sit over a beer and you will tell me why? I really
want to know.)

Now that there is the last ever chance to make a half-decent
citizen out of what was left of recalls, you are all on your
back feet against it, with religious mantras, when you have
never sat and actually stared at the prints and experienced
recalls for yourself. Your words.

Christoph, I have a deja-vu: it's principles against good science
again. Let's revert to your patch.

Reviewed-by: Boaz Harrosh <[email protected]>


2014-08-21 16:07:58

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 17/19] pnfs/blocklayout: implement the return_range method

This allows removing extents from the extent tree, especially on truncate
operations, and thus fixes reads from truncated and re-extended files that
previously returned stale data.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 42b6f9c..a7524c4 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -470,6 +470,35 @@ static struct pnfs_layout_segment *bl_alloc_lseg(struct pnfs_layout_hdr *lo,
}

static void
+bl_return_range(struct pnfs_layout_hdr *lo,
+ struct pnfs_layout_range *range)
+{
+ struct pnfs_block_layout *bl = BLK_LO2EXT(lo);
+ sector_t offset = range->offset >> SECTOR_SHIFT, end;
+ int err;
+
+ if (range->offset % 8) {
+ dprintk("%s: offset %lld not block size aligned\n",
+ __func__, range->offset);
+ return;
+ }
+
+ if (range->length != NFS4_MAX_UINT64) {
+ if (range->length % 8) {
+ dprintk("%s: length %lld not block size aligned\n",
+ __func__, range->length);
+ return;
+ }
+
+ end = offset + (range->length >> SECTOR_SHIFT);
+ } else {
+ end = round_down(NFS4_MAX_UINT64, PAGE_SIZE);
+ }
+
+ err = ext_tree_remove(bl, range->iomode & IOMODE_RW, offset, end);
+}
+
+static void
bl_encode_layoutcommit(struct pnfs_layout_hdr *lo, struct xdr_stream *xdr,
const struct nfs4_layoutcommit_args *arg)
{
@@ -777,6 +806,7 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
.free_layout_hdr = bl_free_layout_hdr,
.alloc_lseg = bl_alloc_lseg,
.free_lseg = bl_free_lseg,
+ .return_range = bl_return_range,
.encode_layoutcommit = bl_encode_layoutcommit,
.cleanup_layoutcommit = bl_cleanup_layoutcommit,
.set_layoutdriver = bl_set_layoutdriver,
--
1.9.1
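
One detail that is easy to misread above: range->offset and range->length
arrive in bytes, while ext_tree_remove() works in 512-byte sectors, hence
the >> SECTOR_SHIFT (9) conversions. A quick worked example with made-up
numbers (not from the patch):

	/* the client truncated the file and returns a 4 MiB tail at 1 MiB: */
	u64 offset_bytes = 1048576;			/* range->offset */
	u64 length_bytes = 4194304;			/* range->length */

	sector_t offset = offset_bytes >> SECTOR_SHIFT;	/* 1048576/512 = 2048 */
	sector_t end = offset + (length_bytes >> SECTOR_SHIFT);	/* 2048 + 8192 */

	/* removes [2048, 10240): always from the RO tree, and from the
	   RW tree as well because the rw argument is true */
	ext_tree_remove(bl, true, offset, end);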


2014-08-25 14:43:43

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 16/19] pnfs/blocklayout: rewrite extent tracking

On Mon, Aug 25, 2014 at 10:36:09AM -0400, Anna Schumaker wrote:
> Whoa, I like the amount of code you manage to remove in this patch! However, there is a lot going on here. Any way this patch could be split into incremental changes? (it's okay to say "no"!)

The answer really is no - it's a wholesale replacement of three tracking
structures with an integrated one that is managed very differently. The only
things I had split out earlier were various trivial bug fixes, like sector
truncations, in code that gets ripped out entirely later, so I decided to
fold those back in, which didn't increase the patch size by more than 10 lines.


2014-08-26 17:54:23

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On Tue, Aug 26, 2014 at 1:06 PM, Boaz Harrosh <[email protected]> wrote:
> On 08/26/2014 07:59 PM, Trond Myklebust wrote:
>> On Tue, Aug 26, 2014 at 12:56 PM, Boaz Harrosh <[email protected]> wrote:
>>> On 08/26/2014 06:36 PM, Trond Myklebust wrote:
>>>> On Tue, Aug 26, 2014 at 11:24 AM, Matt W. Benjamin <[email protected]> wrote:
>>>>> IIUC, the problem is the forechannel slot count, since the call you want to make synchronously is on the forechannel?
>>>
>>>
>>> Matt no top post on a Linux mailing list ;-)
>>>
>>>> Yep. layoutcommit will be sent on the fore channel, which is why it
>>>> can deadlock with the initial layoutget (or whatever operation that
>>>> triggered the layout recall).
>>>
>>> Trond you said below:
>>>> The above can deadlock if there are no session slots available to send
>>>> the layoutcommit, in which case the recall won't complete, and the
>>>> layoutget won't get a reply (which would free up the slot).
>>>
>>> Why would the layoutget not-get-a-reply ?
>>> This is how it goes with Both ganesha server and knfsd last I tested.
>>>
>>> [1]
>>> The LAYOUT_GET cause LAYOUT_RECALL case: (including the lo_commit)
>>>
>>> client Server comments
>>> ~~~~~~ ~~~~~~ ~~~~~~~~
>>> LAYOUT_GET ==>
>>> <== LAYOUT_GET_REPLAY(ERR_RECALL_CONFLICT)
>>> <--------- fore-channel is free
>>> <== RECALL
>>> LAYOUT_COMMIT ==>
>>> <== LAYOUT_COMMIT_REPLAY
>>> <--------- fore-channel is free
>>
>> Beep! No free slots, so this hangs.
>>
>
> Beep! does not do a very good of a job to explain. Sorry
>
> What do you mean? which slot? which channel? Just above your text it says
> "fore-channel is free" so are you saying it is not free? why not.
> Please use more then one line of text to explain. It might be clear to
> you but not to me.

The deadlock occurs _if_ the above layout commit is unable to get a
slot. You can't guarantee that it will, because the slot table is a
finite resource and it can be exhausted if you allow fore channel
calls to trigger synchronous recalls on the back channel that again
trigger synchronous calls on the fore channel. You're basically saying
that the client needs to guarantee that it can allocate 2 slots before
it is allowed to send a layoutget just in case the server needs to
recall a layout.

If, OTOH, the layoutcommit is asynchronous, then there is no
serialisation and the back channel thread can happily reply to the
layout recall even if there are no free slots in the fore channel.
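
To make the slot accounting concrete, here is a toy model (ordinary
userspace C, illustrative only; every name in it is made up). With a
single fore-channel slot, the in-flight LAYOUTGET and a synchronous
recall handler end up in a circular wait:

	#include <pthread.h>
	#include <semaphore.h>
	#include <stdio.h>

	static sem_t fore_slots;	/* the session's fore-channel slots */
	static sem_t recall_issued;	/* server recalls after LAYOUTGET */
	static sem_t recall_done;	/* LAYOUTGET reply withheld until then */

	static void *layoutget(void *arg)
	{
		(void)arg;
		sem_wait(&fore_slots);		/* slot held for the whole RPC */
		printf("LAYOUTGET sent, waiting for reply\n");
		sem_post(&recall_issued);	/* server now recalls the layout */
		sem_wait(&recall_done);		/* ...and replies only afterwards */
		sem_post(&fore_slots);
		return NULL;
	}

	static void *cb_layoutrecall(void *arg)
	{
		(void)arg;
		sem_wait(&recall_issued);
		printf("recall: sending synchronous LAYOUTCOMMIT\n");
		sem_wait(&fore_slots);		/* blocks forever: slot is held
						   by layoutget(), which waits
						   for the recall to finish */
		sem_post(&fore_slots);
		sem_post(&recall_done);		/* never reached */
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		sem_init(&fore_slots, 0, 1);	/* one slot suffices to show it */
		sem_init(&recall_issued, 0, 0);
		sem_init(&recall_done, 0, 0);
		pthread_create(&a, NULL, layoutget, NULL);
		pthread_create(&b, NULL, cb_layoutrecall, NULL);
		pthread_join(a, NULL);		/* hangs: circular wait */
		pthread_join(b, NULL);
		return 0;
	}

An asynchronous layoutcommit breaks the cycle: cb_layoutrecall() would
fire off the commit and post recall_done without waiting for a slot.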

>>> RECALL_REPLY(NO_MATCHING) =>
>>> <--------- back-channel is free
>
> Thanks
> Boaz
>



--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-08-21 16:07:50

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 15/19] pnfs/blocklayout: don't set pages uptodate

The core nfs code handles setting pages uptodate on reads, so there is no
need to mess with the page flags ourselves. Also remove a debug function
that dumps page flags.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 24 +-----------------------
1 file changed, 1 insertion(+), 23 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 6484b9f..92be984 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -49,20 +49,6 @@ MODULE_LICENSE("GPL");
MODULE_AUTHOR("Andy Adamson <[email protected]>");
MODULE_DESCRIPTION("The NFSv4.1 pNFS Block layout driver");

-static void print_page(struct page *page)
-{
- dprintk("PRINTPAGE page %p\n", page);
- dprintk(" PagePrivate %d\n", PagePrivate(page));
- dprintk(" PageUptodate %d\n", PageUptodate(page));
- dprintk(" PageError %d\n", PageError(page));
- dprintk(" PageDirty %d\n", PageDirty(page));
- dprintk(" PageReferenced %d\n", PageReferenced(page));
- dprintk(" PageLocked %d\n", PageLocked(page));
- dprintk(" PageWriteback %d\n", PageWriteback(page));
- dprintk(" PageMappedToDisk %d\n", PageMappedToDisk(page));
- dprintk("\n");
-}
-
/* Given the be associated with isect, determine if page data needs to be
* initialized.
*/
@@ -187,16 +173,9 @@ retry:
return bio;
}

-/* This is basically copied from mpage_end_io_read */
static void bl_end_io_read(struct bio *bio, int err)
{
struct parallel_io *par = bio->bi_private;
- struct bio_vec *bvec;
- int i;
-
- if (!err)
- bio_for_each_segment_all(bvec, bio, i)
- SetPageUptodate(bvec->bv_page);

if (err) {
struct nfs_pgio_header *header = par->data;
@@ -205,6 +184,7 @@ static void bl_end_io_read(struct bio *bio, int err)
header->pnfs_error = -EIO;
pnfs_set_lo_fail(header->lseg);
}
+
bio_put(bio);
put_parallel(par);
}
@@ -304,8 +284,6 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
/* Fill hole w/ zeroes w/o accessing device */
dprintk("%s Zeroing page for hole\n", __func__);
zero_user_segment(pages[i], pg_offset, pg_len);
- print_page(pages[i]);
- SetPageUptodate(pages[i]);
} else {
struct pnfs_block_extent *be_read;

--
1.9.1


2014-08-21 16:07:28

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 05/19] pnfs: retry after a bad stateid error from layoutget

Currently we fall through to nfs4_async_handle_error when we get
a bad stateid error back from layoutget. nfs4_async_handle_error
with a NULL state argument will never retry the operation but returns
the error to the higher layer, causing an avoidable fallback to MDS I/O.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/nfs4proc.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 75ae8d2..4ca5880 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -7571,11 +7571,16 @@ static void nfs4_layoutget_done(struct rpc_task *task, void *calldata)
} else {
LIST_HEAD(head);

+ /*
+ * Mark the bad layout state as invalid, then retry
+ * with the current stateid.
+ */
pnfs_mark_matching_lsegs_invalid(lo, &head, NULL);
spin_unlock(&inode->i_lock);
- /* Mark the bad layout state as invalid, then
- * retry using the open stateid. */
pnfs_free_lseg_list(&head);
+
+ task->tk_status = 0;
+ rpc_restart_call_prepare(task);
}
}
if (nfs4_async_handle_error(task, server, state) == -EAGAIN)
--
1.9.1
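
The retry mechanics, condensed (a paraphrase of the hunk above, not a
literal quote of the resulting code):

	pnfs_mark_matching_lsegs_invalid(lo, &head, NULL);  /* drop bad segs */
	spin_unlock(&inode->i_lock);
	pnfs_free_lseg_list(&head);

	task->tk_status = 0;		/* forget the stateid error */
	rpc_restart_call_prepare(task);	/* re-runs ->rpc_call_prepare, which
					   chooses a stateid again -- and with
					   the layout segments now gone it
					   picks the current (open) stateid */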


2014-08-26 15:24:39

by Matt W. Benjamin

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

IIUC, the problem is the forechannel slot count, since the call you want to make synchronously is on the forechannel?

Matt

----- "Boaz Harrosh" <[email protected]> wrote:

> On 08/26/2014 05:55 PM, Trond Myklebust wrote:
> > On Tue, Aug 26, 2014 at 10:37 AM, Boaz Harrosh <[email protected]>
> wrote:
> >
> > The above can deadlock if there are no session slots available to
> send
> > the layoutcommit, in which case the recall won't complete, and the
> > layoutget won't get a reply (which would free up the slot).
> >
>
> What? the back-channel and the fore-channel do not use the same
> slots. these are two different slots, No?
>
> Matt, Adam you need to chip in here.
>
> If it is as you say, then yes it must be as Christoph wrote it.
>
> And the Ganesha server must be fixed because it has a slot system per
> channel.
>
> Thanks
> Boaz

--
Matt Benjamin
The Linux Box
206 South Fifth Ave. Suite 150
Ann Arbor, MI 48104

http://linuxbox.com

tel. 734-761-4689
fax. 734-769-8938
cel. 734-216-5309

2014-08-21 16:07:20

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 01/19] nfs: cap request size to fit a kmalloced page array

pNFS servers may return arbitrarily large layouts. Trim back the I/O size
to one that we can at least allocate the page array for.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/pagelist.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
index ba49192..74a4ff9 100644
--- a/fs/nfs/pagelist.c
+++ b/fs/nfs/pagelist.c
@@ -458,6 +458,14 @@ size_t nfs_generic_pg_test(struct nfs_pageio_descriptor *desc,
return 0;
}

+ /*
+ * Limit the request size so that we can still allocate a page array
+ * for it without upsetting the slab allocator.
+ */
+ if (((desc->pg_count + req->wb_bytes) >> PAGE_SHIFT) *
+ sizeof(struct page) > PAGE_SIZE)
+ return 0;
+
return min(desc->pg_bsize - desc->pg_count, (size_t)req->wb_bytes);
}
EXPORT_SYMBOL_GPL(nfs_generic_pg_test);
--
1.9.1
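
Back-of-the-envelope numbers for the new cap (assuming 4 KiB pages; the
structure sizes below are typical on x86_64, not guaranteed):

	/* as written, the test multiplies by sizeof(struct page): */
	max_pages = PAGE_SIZE / sizeof(struct page);	/* 4096 / 64 = 64 */
	max_io = max_pages * PAGE_SIZE;			/* 64 * 4096 = 256 KiB */

	/* the kmalloced array actually holds struct page pointers, so the
	   in-principle limit would be: */
	max_pages = PAGE_SIZE / sizeof(struct page *);	/* 4096 / 8 = 512 */
	max_io = max_pages * PAGE_SIZE;			/* = 2 MiB */

Either way the request is trimmed well below the arbitrarily large layouts
a pNFS server may hand out, which is the point of the patch.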


2014-08-25 14:36:13

by Anna Schumaker

[permalink] [raw]
Subject: Re: [PATCH 16/19] pnfs/blocklayout: rewrite extent tracking

On 08/21/2014 12:09 PM, Christoph Hellwig wrote:
> Currently the block layout driver tracks extents in three separate
> data structures:
>
> - the two list of pnfs_block_extent structures returned by the server
> - the list of sectors that were in invalid state but have been written to
> - a list of pnfs_block_short_extent structures for LAYOUTCOMMIT
>
> All of these share the property that they are not only highly inefficient
> data structures, but also that operations on them are even more inefficient
> than necessary.
>
> In addition there are various implementation defects like:
>
> - using an int to track sectors, causing corruption for large offsets
> - incorrect normalization of page or block granularity ranges
> - insufficient error handling
> - incorrect synchronization as extents can be modified while they are in
> use
>
> This patch replaces all three data structures with a single unified rbtree
> structure tracking all extents, as well as their in-memory state, although
> we still need two instances for read-only and read-write extents due to the
> arcane client-side COW feature in the block layouts spec.
>
> To fix the problem of extents possibly being modified while in use we make
> sure to return a copy of the extent for use in the write path - the
> extent can only be invalidated by a layout recall or return, which has
> to wait until the I/O operations have finished due to refcounts on the
> layout segment.
>
> The new extent tree works similarly to the schemes used by block-based
> filesystems like XFS or ext4.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> fs/nfs/blocklayout/Makefile | 3 +-
> fs/nfs/blocklayout/blocklayout.c | 258 +++-------
> fs/nfs/blocklayout/blocklayout.h | 112 +----
> fs/nfs/blocklayout/blocklayoutdev.c | 35 +-
> fs/nfs/blocklayout/extent_tree.c | 545 ++++++++++++++++++++++
> fs/nfs/blocklayout/extents.c | 908 ------------------------------------
> 6 files changed, 649 insertions(+), 1212 deletions(-)
> create mode 100644 fs/nfs/blocklayout/extent_tree.c
> delete mode 100644 fs/nfs/blocklayout/extents.c
Whoa, I like the amount of code you manage to remove in this patch! However, there is a lot going on here. Any way this patch could be split into incremental changes? (it's okay to say "no"!)

Anna
>
> diff --git a/fs/nfs/blocklayout/Makefile b/fs/nfs/blocklayout/Makefile
> index d581550..3fa5ec7 100644
> --- a/fs/nfs/blocklayout/Makefile
> +++ b/fs/nfs/blocklayout/Makefile
> @@ -2,4 +2,5 @@
> # Makefile for the pNFS block layout driver kernel module
> #
> obj-$(CONFIG_PNFS_BLOCK) += blocklayoutdriver.o
> -blocklayoutdriver-objs := blocklayout.o extents.o blocklayoutdev.o blocklayoutdm.o
> +blocklayoutdriver-objs := blocklayout.o blocklayoutdev.o blocklayoutdm.o \
> + extent_tree.o
> diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
> index 92be984..42b6f9c 100644
> --- a/fs/nfs/blocklayout/blocklayout.c
> +++ b/fs/nfs/blocklayout/blocklayout.c
> @@ -49,26 +49,16 @@ MODULE_LICENSE("GPL");
> MODULE_AUTHOR("Andy Adamson <[email protected]>");
> MODULE_DESCRIPTION("The NFSv4.1 pNFS Block layout driver");
>
> -/* Given the be associated with isect, determine if page data needs to be
> - * initialized.
> - */
> -static int is_hole(struct pnfs_block_extent *be, sector_t isect)
> -{
> - if (be->be_state == PNFS_BLOCK_NONE_DATA)
> - return 1;
> - else if (be->be_state != PNFS_BLOCK_INVALID_DATA)
> - return 0;
> - else
> - return !bl_is_sector_init(be->be_inval, isect);
> -}
> -
> -/* Given the be associated with isect, determine if page data can be
> - * written to disk.
> - */
> -static int is_writable(struct pnfs_block_extent *be, sector_t isect)
> +static bool is_hole(struct pnfs_block_extent *be)
> {
> - return (be->be_state == PNFS_BLOCK_READWRITE_DATA ||
> - be->be_state == PNFS_BLOCK_INVALID_DATA);
> + switch (be->be_state) {
> + case PNFS_BLOCK_NONE_DATA:
> + return true;
> + case PNFS_BLOCK_INVALID_DATA:
> + return be->be_tag ? false : true;
> + default:
> + return false;
> + }
> }
>
> /* The data we are handed might be spread across several bios. We need
> @@ -76,9 +66,8 @@ static int is_writable(struct pnfs_block_extent *be, sector_t isect)
> */
> struct parallel_io {
> struct kref refcnt;
> - void (*pnfs_callback) (void *data, int num_se);
> + void (*pnfs_callback) (void *data);
> void *data;
> - int bse_count;
> };
>
> static inline struct parallel_io *alloc_parallel(void *data)
> @@ -89,7 +78,6 @@ static inline struct parallel_io *alloc_parallel(void *data)
> if (rv) {
> rv->data = data;
> kref_init(&rv->refcnt);
> - rv->bse_count = 0;
> }
> return rv;
> }
> @@ -104,7 +92,7 @@ static void destroy_parallel(struct kref *kref)
> struct parallel_io *p = container_of(kref, struct parallel_io, refcnt);
>
> dprintk("%s enter\n", __func__);
> - p->pnfs_callback(p->data, p->bse_count);
> + p->pnfs_callback(p->data);
> kfree(p);
> }
>
> @@ -200,7 +188,7 @@ static void bl_read_cleanup(struct work_struct *work)
> }
>
> static void
> -bl_end_par_io_read(void *data, int unused)
> +bl_end_par_io_read(void *data)
> {
> struct nfs_pgio_header *hdr = data;
>
> @@ -210,56 +198,46 @@ bl_end_par_io_read(void *data, int unused)
> }
>
> static enum pnfs_try_status
> -bl_read_pagelist(struct nfs_pgio_header *hdr)
> +bl_read_pagelist(struct nfs_pgio_header *header)
> {
> - struct nfs_pgio_header *header = hdr;
> - int i, hole;
> + struct pnfs_block_layout *bl = BLK_LSEG2EXT(header->lseg);
> struct bio *bio = NULL;
> - struct pnfs_block_extent *be = NULL, *cow_read = NULL;
> + struct pnfs_block_extent be;
> sector_t isect, extent_length = 0;
> struct parallel_io *par;
> - loff_t f_offset = hdr->args.offset;
> - size_t bytes_left = hdr->args.count;
> + loff_t f_offset = header->args.offset;
> + size_t bytes_left = header->args.count;
> unsigned int pg_offset, pg_len;
> - struct page **pages = hdr->args.pages;
> - int pg_index = hdr->args.pgbase >> PAGE_CACHE_SHIFT;
> + struct page **pages = header->args.pages;
> + int pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
> const bool is_dio = (header->dreq != NULL);
> struct blk_plug plug;
> + int i;
>
> dprintk("%s enter nr_pages %u offset %lld count %u\n", __func__,
> - hdr->page_array.npages, f_offset,
> - (unsigned int)hdr->args.count);
> + header->page_array.npages, f_offset,
> + (unsigned int)header->args.count);
>
> - par = alloc_parallel(hdr);
> + par = alloc_parallel(header);
> if (!par)
> - goto use_mds;
> + return PNFS_NOT_ATTEMPTED;
> par->pnfs_callback = bl_end_par_io_read;
> - /* At this point, we can no longer jump to use_mds */
>
> blk_start_plug(&plug);
>
> isect = (sector_t) (f_offset >> SECTOR_SHIFT);
> /* Code assumes extents are page-aligned */
> - for (i = pg_index; i < hdr->page_array.npages; i++) {
> + for (i = pg_index; i < header->page_array.npages; i++) {
> if (extent_length <= 0) {
> /* We've used up the previous extent */
> - bl_put_extent(be);
> - bl_put_extent(cow_read);
> bio = bl_submit_bio(READ, bio);
> +
> /* Get the next one */
> - be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg),
> - isect, &cow_read);
> - if (!be) {
> + if (!ext_tree_lookup(bl, isect, &be, false)) {
> header->pnfs_error = -EIO;
> goto out;
> }
> - extent_length = be->be_length -
> - (isect - be->be_f_offset);
> - if (cow_read) {
> - sector_t cow_length = cow_read->be_length -
> - (isect - cow_read->be_f_offset);
> - extent_length = min(extent_length, cow_length);
> - }
> + extent_length = be.be_length - (isect - be.be_f_offset);
> }
>
> pg_offset = f_offset & ~PAGE_CACHE_MASK;
> @@ -278,20 +256,16 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
> pg_len = PAGE_CACHE_SIZE;
> }
>
> - hole = is_hole(be, isect);
> - if (hole && !cow_read) {
> + if (is_hole(&be)) {
> bio = bl_submit_bio(READ, bio);
> /* Fill hole w/ zeroes w/o accessing device */
> dprintk("%s Zeroing page for hole\n", __func__);
> zero_user_segment(pages[i], pg_offset, pg_len);
> } else {
> - struct pnfs_block_extent *be_read;
> -
> - be_read = (hole && cow_read) ? cow_read : be;
> bio = do_add_page_to_bio(bio,
> - hdr->page_array.npages - i,
> + header->page_array.npages - i,
> READ,
> - isect, pages[i], be_read,
> + isect, pages[i], &be,
> bl_end_io_read, par,
> pg_offset, pg_len);
> if (IS_ERR(bio)) {
> @@ -304,50 +278,16 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
> extent_length -= (pg_len >> SECTOR_SHIFT);
> }
> if ((isect << SECTOR_SHIFT) >= header->inode->i_size) {
> - hdr->res.eof = 1;
> - hdr->res.count = header->inode->i_size - hdr->args.offset;
> + header->res.eof = 1;
> + header->res.count = header->inode->i_size - header->args.offset;
> } else {
> - hdr->res.count = (isect << SECTOR_SHIFT) - hdr->args.offset;
> + header->res.count = (isect << SECTOR_SHIFT) - header->args.offset;
> }
> out:
> - bl_put_extent(be);
> - bl_put_extent(cow_read);
> bl_submit_bio(READ, bio);
> blk_finish_plug(&plug);
> put_parallel(par);
> return PNFS_ATTEMPTED;
> -
> - use_mds:
> - dprintk("Giving up and using normal NFS\n");
> - return PNFS_NOT_ATTEMPTED;
> -}
> -
> -static void mark_extents_written(struct pnfs_block_layout *bl,
> - __u64 offset, __u32 count)
> -{
> - sector_t isect, end;
> - struct pnfs_block_extent *be;
> - struct pnfs_block_short_extent *se;
> -
> - dprintk("%s(%llu, %u)\n", __func__, offset, count);
> - if (count == 0)
> - return;
> - isect = (offset & (long)(PAGE_CACHE_MASK)) >> SECTOR_SHIFT;
> - end = (offset + count + PAGE_CACHE_SIZE - 1) & (long)(PAGE_CACHE_MASK);
> - end >>= SECTOR_SHIFT;
> - while (isect < end) {
> - sector_t len;
> - be = bl_find_get_extent(bl, isect, NULL);
> - BUG_ON(!be); /* FIXME */
> - len = min(end, be->be_f_offset + be->be_length) - isect;
> - if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
> - se = bl_pop_one_short_extent(be->be_inval);
> - BUG_ON(!se);
> - bl_mark_for_commit(be, isect, len, se);
> - }
> - isect += len;
> - bl_put_extent(be);
> - }
> }
>
> static void bl_end_io_write(struct bio *bio, int err)
> @@ -370,29 +310,30 @@ static void bl_end_io_write(struct bio *bio, int err)
> */
> static void bl_write_cleanup(struct work_struct *work)
> {
> - struct rpc_task *task;
> - struct nfs_pgio_header *hdr;
> + struct rpc_task *task = container_of(work, struct rpc_task, u.tk_work);
> + struct nfs_pgio_header *hdr =
> + container_of(task, struct nfs_pgio_header, task);
> +
> dprintk("%s enter\n", __func__);
> - task = container_of(work, struct rpc_task, u.tk_work);
> - hdr = container_of(task, struct nfs_pgio_header, task);
> +
> if (likely(!hdr->pnfs_error)) {
> - /* Marks for LAYOUTCOMMIT */
> - mark_extents_written(BLK_LSEG2EXT(hdr->lseg),
> - hdr->args.offset, hdr->args.count);
> + struct pnfs_block_layout *bl = BLK_LSEG2EXT(hdr->lseg);
> + u64 start = hdr->args.offset & (loff_t)PAGE_CACHE_MASK;
> + u64 end = (hdr->args.offset + hdr->args.count +
> + PAGE_CACHE_SIZE - 1) & (loff_t)PAGE_CACHE_MASK;
> +
> + ext_tree_mark_written(bl, start >> SECTOR_SHIFT,
> + (end - start) >> SECTOR_SHIFT);
> }
> +
> pnfs_ld_write_done(hdr);
> }
>
> /* Called when last of bios associated with a bl_write_pagelist call finishes */
> -static void bl_end_par_io_write(void *data, int num_se)
> +static void bl_end_par_io_write(void *data)
> {
> struct nfs_pgio_header *hdr = data;
>
> - if (unlikely(hdr->pnfs_error)) {
> - bl_free_short_extents(&BLK_LSEG2EXT(hdr->lseg)->bl_inval,
> - num_se);
> - }
> -
> hdr->task.tk_status = hdr->pnfs_error;
> hdr->verf.committed = NFS_FILE_SYNC;
> INIT_WORK(&hdr->task.u.tk_work, bl_write_cleanup);
> @@ -402,9 +343,9 @@ static void bl_end_par_io_write(void *data, int num_se)
> static enum pnfs_try_status
> bl_write_pagelist(struct nfs_pgio_header *header, int sync)
> {
> - int i, ret;
> + struct pnfs_block_layout *bl = BLK_LSEG2EXT(header->lseg);
> struct bio *bio = NULL;
> - struct pnfs_block_extent *be = NULL;
> + struct pnfs_block_extent be;
> sector_t isect, extent_length = 0;
> struct parallel_io *par = NULL;
> loff_t offset = header->args.offset;
> @@ -412,6 +353,7 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
> struct page **pages = header->args.pages;
> int pg_index = pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
> struct blk_plug plug;
> + int i;
>
> dprintk("%s enter, %Zu@%lld\n", __func__, count, offset);
>
> @@ -421,9 +363,8 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
> */
> par = alloc_parallel(header);
> if (!par)
> - goto out_mds;
> + return PNFS_NOT_ATTEMPTED;
> par->pnfs_callback = bl_end_par_io_write;
> - /* At this point, have to be more careful with error handling */
>
> blk_start_plug(&plug);
>
> @@ -434,44 +375,18 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
> for (i = pg_index; i < header->page_array.npages; i++) {
> if (extent_length <= 0) {
> /* We've used up the previous extent */
> - bl_put_extent(be);
> bio = bl_submit_bio(WRITE, bio);
> /* Get the next one */
> - be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg),
> - isect, NULL);
> - if (!be || !is_writable(be, isect)) {
> + if (!ext_tree_lookup(bl, isect, &be, true)) {
> header->pnfs_error = -EINVAL;
> goto out;
> }
> - if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
> - if (likely(!bl_push_one_short_extent(
> - be->be_inval)))
> - par->bse_count++;
> - else {
> - header->pnfs_error = -ENOMEM;
> - goto out;
> - }
> - }
> - extent_length = be->be_length -
> - (isect - be->be_f_offset);
> - }
>
> - BUG_ON(offset & ~PAGE_CACHE_MASK);
> -
> - if (be->be_state == PNFS_BLOCK_INVALID_DATA &&
> - !bl_is_sector_init(be->be_inval, isect)) {
> - ret = bl_mark_sectors_init(be->be_inval, isect,
> - PAGE_CACHE_SECTORS);
> - if (unlikely(ret)) {
> - dprintk("%s bl_mark_sectors_init fail %d\n",
> - __func__, ret);
> - header->pnfs_error = ret;
> - goto out;
> - }
> + extent_length = be.be_length - (isect - be.be_f_offset);
> }
>
> bio = do_add_page_to_bio(bio, header->page_array.npages - i,
> - WRITE, isect, pages[i], be,
> + WRITE, isect, pages[i], &be,
> bl_end_io_write, par,
> 0, PAGE_CACHE_SIZE);
> if (IS_ERR(bio)) {
> @@ -487,60 +402,22 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
>
> header->res.count = header->args.count;
> out:
> - bl_put_extent(be);
> bl_submit_bio(WRITE, bio);
> blk_finish_plug(&plug);
> put_parallel(par);
> return PNFS_ATTEMPTED;
> -out_mds:
> - return PNFS_NOT_ATTEMPTED;
> -}
> -
> -/* FIXME - range ignored */
> -static void
> -release_extents(struct pnfs_block_layout *bl, struct pnfs_layout_range *range)
> -{
> - int i;
> - struct pnfs_block_extent *be;
> -
> - spin_lock(&bl->bl_ext_lock);
> - for (i = 0; i < EXTENT_LISTS; i++) {
> - while (!list_empty(&bl->bl_extents[i])) {
> - be = list_first_entry(&bl->bl_extents[i],
> - struct pnfs_block_extent,
> - be_node);
> - list_del(&be->be_node);
> - bl_put_extent(be);
> - }
> - }
> - spin_unlock(&bl->bl_ext_lock);
> -}
> -
> -static void
> -release_inval_marks(struct pnfs_inval_markings *marks)
> -{
> - struct pnfs_inval_tracking *pos, *temp;
> - struct pnfs_block_short_extent *se, *stemp;
> -
> - list_for_each_entry_safe(pos, temp, &marks->im_tree.mtt_stub, it_link) {
> - list_del(&pos->it_link);
> - kfree(pos);
> - }
> -
> - list_for_each_entry_safe(se, stemp, &marks->im_extents, bse_node) {
> - list_del(&se->bse_node);
> - kfree(se);
> - }
> - return;
> }
>
> static void bl_free_layout_hdr(struct pnfs_layout_hdr *lo)
> {
> struct pnfs_block_layout *bl = BLK_LO2EXT(lo);
> + int err;
>
> dprintk("%s enter\n", __func__);
> - release_extents(bl, NULL);
> - release_inval_marks(&bl->bl_inval);
> +
> + err = ext_tree_remove(bl, true, 0, LLONG_MAX);
> + WARN_ON(err);
> +
> kfree(bl);
> }
>
> @@ -553,14 +430,11 @@ static struct pnfs_layout_hdr *bl_alloc_layout_hdr(struct inode *inode,
> bl = kzalloc(sizeof(*bl), gfp_flags);
> if (!bl)
> return NULL;
> +
> + bl->bl_ext_rw = RB_ROOT;
> + bl->bl_ext_ro = RB_ROOT;
> spin_lock_init(&bl->bl_ext_lock);
> - INIT_LIST_HEAD(&bl->bl_extents[0]);
> - INIT_LIST_HEAD(&bl->bl_extents[1]);
> - INIT_LIST_HEAD(&bl->bl_commit);
> - INIT_LIST_HEAD(&bl->bl_committing);
> - bl->bl_count = 0;
> - bl->bl_blocksize = NFS_SERVER(inode)->pnfs_blksize >> SECTOR_SHIFT;
> - BL_INIT_INVAL_MARKS(&bl->bl_inval, bl->bl_blocksize);
> +
> return &bl->bl_layout;
> }
>
> @@ -600,7 +474,7 @@ bl_encode_layoutcommit(struct pnfs_layout_hdr *lo, struct xdr_stream *xdr,
> const struct nfs4_layoutcommit_args *arg)
> {
> dprintk("%s enter\n", __func__);
> - encode_pnfs_block_layoutupdate(BLK_LO2EXT(lo), xdr, arg);
> + ext_tree_encode_commit(BLK_LO2EXT(lo), xdr);
> }
>
> static void
> @@ -609,7 +483,7 @@ bl_cleanup_layoutcommit(struct nfs4_layoutcommit_data *lcdata)
> struct pnfs_layout_hdr *lo = NFS_I(lcdata->args.inode)->layout;
>
> dprintk("%s enter\n", __func__);
> - clean_pnfs_block_layoutupdate(BLK_LO2EXT(lo), &lcdata->args, lcdata->res.status);
> + ext_tree_mark_committed(BLK_LO2EXT(lo), lcdata->res.status);
> }
>
> static void free_blk_mountid(struct block_mount_id *mid)
> diff --git a/fs/nfs/blocklayout/blocklayout.h b/fs/nfs/blocklayout/blocklayout.h
> index 9838fb0..b4f66d8 100644
> --- a/fs/nfs/blocklayout/blocklayout.h
> +++ b/fs/nfs/blocklayout/blocklayout.h
> @@ -63,82 +63,28 @@ enum exstate4 {
> PNFS_BLOCK_NONE_DATA = 3 /* unmapped, it's a hole */
> };
>
> -#define MY_MAX_TAGS (15) /* tag bitnums used must be less than this */
> -
> -struct my_tree {
> - sector_t mtt_step_size; /* Internal sector alignment */
> - struct list_head mtt_stub; /* Should be a radix tree */
> -};
> -
> -struct pnfs_inval_markings {
> - spinlock_t im_lock;
> - struct my_tree im_tree; /* Sectors that need LAYOUTCOMMIT */
> - sector_t im_block_size; /* Server blocksize in sectors */
> - struct list_head im_extents; /* Short extents for INVAL->RW conversion */
> -};
> -
> -struct pnfs_inval_tracking {
> - struct list_head it_link;
> - int it_sector;
> - int it_tags;
> -};
> -
> /* sector_t fields are all in 512-byte sectors */
> struct pnfs_block_extent {
> - struct kref be_refcnt;
> - struct list_head be_node; /* link into lseg list */
> - struct nfs4_deviceid be_devid; /* FIXME: could use device cache instead */
> + union {
> + struct rb_node be_node;
> + struct list_head be_list;
> + };
> + struct nfs4_deviceid be_devid; /* FIXME: could use device cache instead */
> struct block_device *be_mdev;
> sector_t be_f_offset; /* the starting offset in the file */
> sector_t be_length; /* the size of the extent */
> sector_t be_v_offset; /* the starting offset in the volume */
> enum exstate4 be_state; /* the state of this extent */
> - struct pnfs_inval_markings *be_inval; /* tracks INVAL->RW transition */
> -};
> -
> -/* Shortened extent used by LAYOUTCOMMIT */
> -struct pnfs_block_short_extent {
> - struct list_head bse_node;
> - struct nfs4_deviceid bse_devid;
> - struct block_device *bse_mdev;
> - sector_t bse_f_offset; /* the starting offset in the file */
> - sector_t bse_length; /* the size of the extent */
> +#define EXTENT_WRITTEN 1
> +#define EXTENT_COMMITTING 2
> + unsigned int be_tag;
> };
>
> -static inline void
> -BL_INIT_INVAL_MARKS(struct pnfs_inval_markings *marks, sector_t blocksize)
> -{
> - spin_lock_init(&marks->im_lock);
> - INIT_LIST_HEAD(&marks->im_tree.mtt_stub);
> - INIT_LIST_HEAD(&marks->im_extents);
> - marks->im_block_size = blocksize;
> - marks->im_tree.mtt_step_size = min((sector_t)PAGE_CACHE_SECTORS,
> - blocksize);
> -}
> -
> -enum extentclass4 {
> - RW_EXTENT = 0, /* READWRTE and INVAL */
> - RO_EXTENT = 1, /* READ and NONE */
> - EXTENT_LISTS = 2,
> -};
> -
> -static inline int bl_choose_list(enum exstate4 state)
> -{
> - if (state == PNFS_BLOCK_READ_DATA || state == PNFS_BLOCK_NONE_DATA)
> - return RO_EXTENT;
> - else
> - return RW_EXTENT;
> -}
> -
> struct pnfs_block_layout {
> - struct pnfs_layout_hdr bl_layout;
> - struct pnfs_inval_markings bl_inval; /* tracks INVAL->RW transition */
> + struct pnfs_layout_hdr bl_layout;
> + struct rb_root bl_ext_rw;
> + struct rb_root bl_ext_ro;
> spinlock_t bl_ext_lock; /* Protects list manipulation */
> - struct list_head bl_extents[EXTENT_LISTS]; /* R and RW extents */
> - struct list_head bl_commit; /* Needs layout commit */
> - struct list_head bl_committing; /* Layout committing */
> - unsigned int bl_count; /* entries in bl_commit */
> - sector_t bl_blocksize; /* Server blocksize in sectors */
> };
>
> #define BLK_ID(lo) ((struct block_mount_id *)(NFS_SERVER(lo->plh_inode)->pnfs_ld_data))
> @@ -183,29 +129,17 @@ int nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
> /* blocklayoutdm.c */
> void bl_free_block_dev(struct pnfs_block_dev *bdev);
>
> -/* extents.c */
> -struct pnfs_block_extent *
> -bl_find_get_extent(struct pnfs_block_layout *bl, sector_t isect,
> - struct pnfs_block_extent **cow_read);
> -int bl_mark_sectors_init(struct pnfs_inval_markings *marks,
> - sector_t offset, sector_t length);
> -void bl_put_extent(struct pnfs_block_extent *be);
> -struct pnfs_block_extent *bl_alloc_extent(void);
> -int bl_is_sector_init(struct pnfs_inval_markings *marks, sector_t isect);
> -int encode_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
> - struct xdr_stream *xdr,
> - const struct nfs4_layoutcommit_args *arg);
> -void clean_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
> - const struct nfs4_layoutcommit_args *arg,
> - int status);
> -int bl_add_merge_extent(struct pnfs_block_layout *bl,
> - struct pnfs_block_extent *new);
> -int bl_mark_for_commit(struct pnfs_block_extent *be,
> - sector_t offset, sector_t length,
> - struct pnfs_block_short_extent *new);
> -int bl_push_one_short_extent(struct pnfs_inval_markings *marks);
> -struct pnfs_block_short_extent *
> -bl_pop_one_short_extent(struct pnfs_inval_markings *marks);
> -void bl_free_short_extents(struct pnfs_inval_markings *marks, int num_to_free);
> +/* extent_tree.c */
> +int ext_tree_insert(struct pnfs_block_layout *bl,
> + struct pnfs_block_extent *new);
> +int ext_tree_remove(struct pnfs_block_layout *bl, bool rw, sector_t start,
> + sector_t end);
> +int ext_tree_mark_written(struct pnfs_block_layout *bl, sector_t start,
> + sector_t len);
> +bool ext_tree_lookup(struct pnfs_block_layout *bl, sector_t isect,
> + struct pnfs_block_extent *ret, bool rw);
> +int ext_tree_encode_commit(struct pnfs_block_layout *bl,
> + struct xdr_stream *xdr);
> +void ext_tree_mark_committed(struct pnfs_block_layout *bl, int status);
>
> #endif /* FS_NFS_NFS4BLOCKLAYOUT_H */
> diff --git a/fs/nfs/blocklayout/blocklayoutdev.c b/fs/nfs/blocklayout/blocklayoutdev.c
> index 63f77925..cd71b5e 100644
> --- a/fs/nfs/blocklayout/blocklayoutdev.c
> +++ b/fs/nfs/blocklayout/blocklayoutdev.c
> @@ -309,7 +309,7 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
> * recovery easier.
> */
> for (i = 0; i < count; i++) {
> - be = bl_alloc_extent();
> + be = kzalloc(sizeof(struct pnfs_block_extent), GFP_NOFS);
> if (!be) {
> status = -ENOMEM;
> goto out_err;
> @@ -330,13 +330,11 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
> if (decode_sector_number(&p, &be->be_v_offset) < 0)
> goto out_err;
> be->be_state = be32_to_cpup(p++);
> - if (be->be_state == PNFS_BLOCK_INVALID_DATA)
> - be->be_inval = &bl->bl_inval;
> if (verify_extent(be, &lv)) {
> dprintk("%s verify failed\n", __func__);
> goto out_err;
> }
> - list_add_tail(&be->be_node, &extents);
> + list_add_tail(&be->be_list, &extents);
> }
> if (lgr->range.offset + lgr->range.length !=
> lv.start << SECTOR_SHIFT) {
> @@ -352,21 +350,13 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
> /* Extents decoded properly, now try to merge them in to
> * existing layout extents.
> */
> - spin_lock(&bl->bl_ext_lock);
> - list_for_each_entry_safe(be, save, &extents, be_node) {
> - list_del(&be->be_node);
> - status = bl_add_merge_extent(bl, be);
> - if (status) {
> - spin_unlock(&bl->bl_ext_lock);
> - /* This is a fairly catastrophic error, as the
> - * entire layout extent lists are now corrupted.
> - * We should have some way to distinguish this.
> - */
> - be = NULL;
> - goto out_err;
> - }
> + list_for_each_entry_safe(be, save, &extents, be_list) {
> + list_del(&be->be_list);
> +
> + status = ext_tree_insert(bl, be);
> + if (status)
> + goto out_free_list;
> }
> - spin_unlock(&bl->bl_ext_lock);
> status = 0;
> out:
> __free_page(scratch);
> @@ -374,12 +364,13 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
> return status;
>
> out_err:
> - bl_put_extent(be);
> + kfree(be);
> + out_free_list:
> while (!list_empty(&extents)) {
> be = list_first_entry(&extents, struct pnfs_block_extent,
> - be_node);
> - list_del(&be->be_node);
> - bl_put_extent(be);
> + be_list);
> + list_del(&be->be_list);
> + kfree(be);
> }
> goto out;
> }
> diff --git a/fs/nfs/blocklayout/extent_tree.c b/fs/nfs/blocklayout/extent_tree.c
> new file mode 100644
> index 0000000..c7dacfa
> --- /dev/null
> +++ b/fs/nfs/blocklayout/extent_tree.c
> @@ -0,0 +1,545 @@
> +/*
> + * Copyright (c) 2014 Christoph Hellwig.
> + */
> +
> +#include "blocklayout.h"
> +
> +static inline struct pnfs_block_extent *
> +ext_node(struct rb_node *node)
> +{
> + return rb_entry(node, struct pnfs_block_extent, be_node);
> +}
> +
> +static struct pnfs_block_extent *
> +ext_tree_first(struct rb_root *root)
> +{
> + struct rb_node *node = rb_first(root);
> + return node ? ext_node(node) : NULL;
> +}
> +
> +static struct pnfs_block_extent *
> +ext_tree_prev(struct pnfs_block_extent *be)
> +{
> + struct rb_node *node = rb_prev(&be->be_node);
> + return node ? ext_node(node) : NULL;
> +}
> +
> +static struct pnfs_block_extent *
> +ext_tree_next(struct pnfs_block_extent *be)
> +{
> + struct rb_node *node = rb_next(&be->be_node);
> + return node ? ext_node(node) : NULL;
> +}
> +
> +static inline sector_t
> +ext_f_end(struct pnfs_block_extent *be)
> +{
> + return be->be_f_offset + be->be_length;
> +}
> +
> +static struct pnfs_block_extent *
> +__ext_tree_search(struct rb_root *root, sector_t start)
> +{
> + struct rb_node *node = root->rb_node;
> + struct pnfs_block_extent *be = NULL;
> +
> + while (node) {
> + be = ext_node(node);
> + if (start < be->be_f_offset)
> + node = node->rb_left;
> + else if (start >= ext_f_end(be))
> + node = node->rb_right;
> + else
> + return be;
> + }
> +
> + if (be) {
> + if (start < be->be_f_offset)
> + return be;
> +
> + if (start >= ext_f_end(be))
> + return ext_tree_next(be);
> + }
> +
> + return NULL;
> +}
> +
> +static bool
> +ext_can_merge(struct pnfs_block_extent *be1, struct pnfs_block_extent *be2)
> +{
> + if (be1->be_state != be2->be_state)
> + return false;
> + if (be1->be_mdev != be2->be_mdev)
> + return false;
> +
> + if (be1->be_f_offset + be1->be_length != be2->be_f_offset)
> + return false;
> +
> + if (be1->be_state != PNFS_BLOCK_NONE_DATA &&
> + (be1->be_v_offset + be1->be_length != be2->be_v_offset))
> + return false;
> +
> + if (be1->be_state == PNFS_BLOCK_INVALID_DATA &&
> + be1->be_tag != be2->be_tag)
> + return false;
> +
> + return true;
> +}
> +
> +static struct pnfs_block_extent *
> +ext_try_to_merge_left(struct rb_root *root, struct pnfs_block_extent *be)
> +{
> + struct pnfs_block_extent *left = ext_tree_prev(be);
> +
> + if (left && ext_can_merge(left, be)) {
> + left->be_length += be->be_length;
> + rb_erase(&be->be_node, root);
> + kfree(be);
> + return left;
> + }
> +
> + return be;
> +}
> +
> +static struct pnfs_block_extent *
> +ext_try_to_merge_right(struct rb_root *root, struct pnfs_block_extent *be)
> +{
> + struct pnfs_block_extent *right = ext_tree_next(be);
> +
> + if (right && ext_can_merge(be, right)) {
> + be->be_length += right->be_length;
> + rb_erase(&right->be_node, root);
> + kfree(right);
> + }
> +
> + return be;
> +}
> +
> +static void
> +__ext_tree_insert(struct rb_root *root,
> + struct pnfs_block_extent *new, bool merge_ok)
> +{
> + struct rb_node **p = &root->rb_node, *parent = NULL;
> + struct pnfs_block_extent *be;
> +
> + while (*p) {
> + parent = *p;
> + be = ext_node(parent);
> +
> + if (new->be_f_offset < be->be_f_offset) {
> + if (merge_ok && ext_can_merge(new, be)) {
> + be->be_f_offset = new->be_f_offset;
> + if (be->be_state != PNFS_BLOCK_NONE_DATA)
> + be->be_v_offset = new->be_v_offset;
> + be->be_length += new->be_length;
> + be = ext_try_to_merge_left(root, be);
> + kfree(new);
> + return;
> + }
> + p = &(*p)->rb_left;
> + } else if (new->be_f_offset >= ext_f_end(be)) {
> + if (merge_ok && ext_can_merge(be, new)) {
> + be->be_length += new->be_length;
> + be = ext_try_to_merge_right(root, be);
> + kfree(new);
> + return;
> + }
> + p = &(*p)->rb_right;
> + } else {
> + BUG();
> + }
> + }
> +
> + rb_link_node(&new->be_node, parent, p);
> + rb_insert_color(&new->be_node, root);
> +}
> +
> +static int
> +__ext_tree_remove(struct rb_root *root, sector_t start, sector_t end)
> +{
> + struct pnfs_block_extent *be;
> + sector_t len1 = 0, len2 = 0;
> + sector_t orig_f_offset;
> + sector_t orig_v_offset;
> + sector_t orig_len;
> +
> + be = __ext_tree_search(root, start);
> + if (!be)
> + return 0;
> + if (be->be_f_offset >= end)
> + return 0;
> +
> + orig_f_offset = be->be_f_offset;
> + orig_v_offset = be->be_v_offset;
> + orig_len = be->be_length;
> +
> + if (start > be->be_f_offset)
> + len1 = start - be->be_f_offset;
> + if (ext_f_end(be) > end)
> + len2 = ext_f_end(be) - end;
> +
> + if (len2 > 0) {
> + if (len1 > 0) {
> + struct pnfs_block_extent *new;
> +
> + new = kzalloc(sizeof(*new), GFP_ATOMIC);
> + if (!new)
> + return -ENOMEM;
> +
> + be->be_length = len1;
> +
> + new->be_f_offset = end;
> + if (be->be_state != PNFS_BLOCK_NONE_DATA) {
> + new->be_v_offset =
> + orig_v_offset + orig_len - len2;
> + }
> + new->be_length = len2;
> + new->be_state = be->be_state;
> + new->be_tag = be->be_tag;
> + new->be_mdev = be->be_mdev;
> + memcpy(&new->be_devid, &be->be_devid,
> + sizeof(struct nfs4_deviceid));
> +
> + __ext_tree_insert(root, new, true);
> + } else {
> + be->be_f_offset = end;
> + if (be->be_state != PNFS_BLOCK_NONE_DATA) {
> + be->be_v_offset =
> + orig_v_offset + orig_len - len2;
> + }
> + be->be_length = len2;
> + }
> + } else {
> + if (len1 > 0) {
> + be->be_length = len1;
> + be = ext_tree_next(be);
> + }
> +
> + while (be && ext_f_end(be) <= end) {
> + struct pnfs_block_extent *next = ext_tree_next(be);
> +
> + rb_erase(&be->be_node, root);
> + kfree(be);
> + be = next;
> + }
> +
> + if (be && be->be_f_offset < end) {
> + len1 = ext_f_end(be) - end;
> + be->be_f_offset = end;
> + if (be->be_state != PNFS_BLOCK_NONE_DATA)
> + be->be_v_offset += be->be_length - len1;
> + be->be_length = len1;
> + }
> + }
> +
> + return 0;
> +}
> +
> +int
> +ext_tree_insert(struct pnfs_block_layout *bl, struct pnfs_block_extent *new)
> +{
> + struct pnfs_block_extent *be;
> + struct rb_root *root;
> + int err = 0;
> +
> + switch (new->be_state) {
> + case PNFS_BLOCK_READWRITE_DATA:
> + case PNFS_BLOCK_INVALID_DATA:
> + root = &bl->bl_ext_rw;
> + break;
> + case PNFS_BLOCK_READ_DATA:
> + case PNFS_BLOCK_NONE_DATA:
> + root = &bl->bl_ext_ro;
> + break;
> + default:
> + dprintk("invalid extent type\n");
> + return -EINVAL;
> + }
> +
> + spin_lock(&bl->bl_ext_lock);
> +retry:
> + be = __ext_tree_search(root, new->be_f_offset);
> + if (!be || be->be_f_offset >= ext_f_end(new)) {
> + __ext_tree_insert(root, new, true);
> + } else if (new->be_f_offset >= be->be_f_offset) {
> + if (ext_f_end(new) <= ext_f_end(be)) {
> + kfree(new);
> + } else {
> + sector_t new_len = ext_f_end(new) - ext_f_end(be);
> + sector_t diff = new->be_length - new_len;
> +
> + new->be_f_offset += diff;
> + new->be_v_offset += diff;
> + new->be_length = new_len;
> + goto retry;
> + }
> + } else if (ext_f_end(new) <= ext_f_end(be)) {
> + new->be_length = be->be_f_offset - new->be_f_offset;
> + __ext_tree_insert(root, new, true);
> + } else {
> + struct pnfs_block_extent *split;
> + sector_t new_len = ext_f_end(new) - ext_f_end(be);
> + sector_t diff = new->be_length - new_len;
> +
> + split = kmemdup(new, sizeof(*new), GFP_ATOMIC);
> + if (!split) {
> + err = -EINVAL;
> + goto out;
> + }
> +
> + split->be_length = be->be_f_offset - split->be_f_offset;
> + __ext_tree_insert(root, split, true);
> +
> + new->be_f_offset += diff;
> + new->be_v_offset += diff;
> + new->be_length = new_len;
> + goto retry;
> + }
> +out:
> + spin_unlock(&bl->bl_ext_lock);
> + return err;
> +}
> +
> +static bool
> +__ext_tree_lookup(struct rb_root *root, sector_t isect,
> + struct pnfs_block_extent *ret)
> +{
> + struct rb_node *node;
> + struct pnfs_block_extent *be;
> +
> + node = root->rb_node;
> + while (node) {
> + be = ext_node(node);
> + if (isect < be->be_f_offset)
> + node = node->rb_left;
> + else if (isect >= ext_f_end(be))
> + node = node->rb_right;
> + else {
> + *ret = *be;
> + return true;
> + }
> + }
> +
> + return false;
> +}
> +
> +bool
> +ext_tree_lookup(struct pnfs_block_layout *bl, sector_t isect,
> + struct pnfs_block_extent *ret, bool rw)
> +{
> + bool found = false;
> +
> + spin_lock(&bl->bl_ext_lock);
> + if (!rw)
> + found = __ext_tree_lookup(&bl->bl_ext_ro, isect, ret);
> + if (!found)
> + found = __ext_tree_lookup(&bl->bl_ext_rw, isect, ret);
> + spin_unlock(&bl->bl_ext_lock);
> +
> + return found;
> +}
> +
> +int ext_tree_remove(struct pnfs_block_layout *bl, bool rw,
> + sector_t start, sector_t end)
> +{
> + int err, err2;
> +
> + spin_lock(&bl->bl_ext_lock);
> + err = __ext_tree_remove(&bl->bl_ext_ro, start, end);
> + if (rw) {
> + err2 = __ext_tree_remove(&bl->bl_ext_rw, start, end);
> + if (!err)
> + err = err2;
> + }
> + spin_unlock(&bl->bl_ext_lock);
> +
> + return err;
> +}
> +
> +static int
> +ext_tree_split(struct rb_root *root, struct pnfs_block_extent *be,
> + sector_t split)
> +{
> + struct pnfs_block_extent *new;
> + sector_t orig_len = be->be_length;
> +
> + dprintk("%s: need split for 0x%lx:0x%lx at 0x%lx\n",
> + __func__, be->be_f_offset, ext_f_end(be), split);
> +
> + new = kzalloc(sizeof(*new), GFP_ATOMIC);
> + if (!new)
> + return -ENOMEM;
> +
> + be->be_length = split - be->be_f_offset;
> +
> + new->be_f_offset = split;
> + if (be->be_state != PNFS_BLOCK_NONE_DATA)
> + new->be_v_offset = be->be_v_offset + be->be_length;
> + new->be_length = orig_len - be->be_length;
> + new->be_state = be->be_state;
> + new->be_tag = be->be_tag;
> +
> + new->be_mdev = be->be_mdev;
> + memcpy(&new->be_devid, &be->be_devid, sizeof(struct nfs4_deviceid));
> +
> + dprintk("%s: got 0x%lx:0x%lx!\n",
> + __func__, be->be_f_offset, ext_f_end(be));
> + dprintk("%s: got 0x%lx:0x%lx!\n",
> + __func__, new->be_f_offset, ext_f_end(new));
> +
> + __ext_tree_insert(root, new, false);
> + return 0;
> +}
> +
> +int
> +ext_tree_mark_written(struct pnfs_block_layout *bl, sector_t start,
> + sector_t len)
> +{
> + struct rb_root *root = &bl->bl_ext_rw;
> + sector_t end = start + len;
> + struct pnfs_block_extent *be;
> + int err = 0;
> +
> + spin_lock(&bl->bl_ext_lock);
> + /*
> + * First remove all COW extents or holes from written to range.
> + */
> + err = __ext_tree_remove(&bl->bl_ext_ro, start, end);
> + if (err)
> + goto out;
> +
> + /*
> + * Then mark all invalid extents in the range as written to.
> + */
> + for (be = __ext_tree_search(root, start); be; be = ext_tree_next(be)) {
> + if (be->be_f_offset >= end)
> + break;
> +
> + if (be->be_state != PNFS_BLOCK_INVALID_DATA || be->be_tag)
> + continue;
> +
> + if (be->be_f_offset < start) {
> + struct pnfs_block_extent *left = ext_tree_prev(be);
> +
> + if (left && ext_can_merge(left, be)) {
> + sector_t diff = start - be->be_f_offset;
> +
> + left->be_length += diff;
> +
> + be->be_f_offset += diff;
> + be->be_v_offset += diff;
> + be->be_length -= diff;
> + } else {
> + err = ext_tree_split(root, be, start);
> + if (err)
> + goto out;
> + }
> + }
> +
> + if (ext_f_end(be) > end) {
> + struct pnfs_block_extent *right = ext_tree_next(be);
> +
> + if (right && ext_can_merge(be, right)) {
> + sector_t diff = end - be->be_f_offset;
> +
> + be->be_length -= diff;
> +
> + right->be_f_offset -= diff;
> + right->be_v_offset -= diff;
> + right->be_length += diff;
> + } else {
> + err = ext_tree_split(root, be, end);
> + if (err)
> + goto out;
> + }
> + }
> +
> + if (be->be_f_offset >= start && ext_f_end(be) <= end) {
> + be->be_tag = EXTENT_WRITTEN;
> + be = ext_try_to_merge_left(root, be);
> + be = ext_try_to_merge_right(root, be);
> + }
> + }
> +out:
> + spin_unlock(&bl->bl_ext_lock);
> + return err;
> +}
> +
> +int
> +ext_tree_encode_commit(struct pnfs_block_layout *bl, struct xdr_stream *xdr)
> +{
> + struct pnfs_block_extent *be;
> + unsigned int count = 0;
> + __be32 *p, *xdr_start;
> + int ret = 0;
> +
> + dprintk("%s enter\n", __func__);
> +
> + xdr_start = xdr_reserve_space(xdr, 8);
> + if (!xdr_start)
> + return -ENOSPC;
> +
> + spin_lock(&bl->bl_ext_lock);
> + for (be = ext_tree_first(&bl->bl_ext_rw); be; be = ext_tree_next(be)) {
> + if (be->be_state != PNFS_BLOCK_INVALID_DATA ||
> + be->be_tag != EXTENT_WRITTEN)
> + continue;
> +
> + p = xdr_reserve_space(xdr, 7 * sizeof(__be32) +
> + NFS4_DEVICEID4_SIZE);
> + if (!p) {
> + printk("%s: out of space for extent list\n", __func__);
> + ret = -ENOSPC;
> + break;
> + }
> +
> + p = xdr_encode_opaque_fixed(p, be->be_devid.data,
> + NFS4_DEVICEID4_SIZE);
> + p = xdr_encode_hyper(p, be->be_f_offset << SECTOR_SHIFT);
> + p = xdr_encode_hyper(p, be->be_length << SECTOR_SHIFT);
> + p = xdr_encode_hyper(p, 0LL);
> + *p++ = cpu_to_be32(PNFS_BLOCK_READWRITE_DATA);
> +
> + be->be_tag = EXTENT_COMMITTING;
> + count++;
> + }
> + spin_unlock(&bl->bl_ext_lock);
> +
> + xdr_start[0] = cpu_to_be32((xdr->p - xdr_start - 1) * 4);
> + xdr_start[1] = cpu_to_be32(count);
> +
> + dprintk("%s found %i ranges\n", __func__, count);
> + return ret;
> +}
> +
> +void
> +ext_tree_mark_committed(struct pnfs_block_layout *bl, int status)
> +{
> + struct rb_root *root = &bl->bl_ext_rw;
> + struct pnfs_block_extent *be;
> +
> + dprintk("%s status %d\n", __func__, status);
> +
> + spin_lock(&bl->bl_ext_lock);
> + for (be = ext_tree_first(root); be; be = ext_tree_next(be)) {
> + if (be->be_state != PNFS_BLOCK_INVALID_DATA ||
> + be->be_tag != EXTENT_COMMITTING)
> + continue;
> +
> + if (status) {
> + /*
> + * Mark as written and try again.
> + *
> + * XXX: some real error handling here wouldn't hurt..
> + */
> + be->be_tag = EXTENT_WRITTEN;
> + } else {
> + be->be_state = PNFS_BLOCK_READWRITE_DATA;
> + be->be_tag = 0;
> + }
> +
> + be = ext_try_to_merge_left(root, be);
> + be = ext_try_to_merge_right(root, be);
> + }
> + spin_unlock(&bl->bl_ext_lock);
> +}
> diff --git a/fs/nfs/blocklayout/extents.c b/fs/nfs/blocklayout/extents.c
> deleted file mode 100644
> index 4d01614..0000000
> --- a/fs/nfs/blocklayout/extents.c
> +++ /dev/null
> @@ -1,908 +0,0 @@
> -/*
> - * linux/fs/nfs/blocklayout/blocklayout.h
> - *
> - * Module for the NFSv4.1 pNFS block layout driver.
> - *
> - * Copyright (c) 2006 The Regents of the University of Michigan.
> - * All rights reserved.
> - *
> - * Andy Adamson <[email protected]>
> - * Fred Isaman <[email protected]>
> - *
> - * permission is granted to use, copy, create derivative works and
> - * redistribute this software and such derivative works for any purpose,
> - * so long as the name of the university of michigan is not used in
> - * any advertising or publicity pertaining to the use or distribution
> - * of this software without specific, written prior authorization. if
> - * the above copyright notice or any other identification of the
> - * university of michigan is included in any copy of any portion of
> - * this software, then the disclaimer below must also be included.
> - *
> - * this software is provided as is, without representation from the
> - * university of michigan as to its fitness for any purpose, and without
> - * warranty by the university of michigan of any kind, either express
> - * or implied, including without limitation the implied warranties of
> - * merchantability and fitness for a particular purpose. the regents
> - * of the university of michigan shall not be liable for any damages,
> - * including special, indirect, incidental, or consequential damages,
> - * with respect to any claim arising out or in connection with the use
> - * of the software, even if it has been or is hereafter advised of the
> - * possibility of such damages.
> - */
> -
> -#include "blocklayout.h"
> -#define NFSDBG_FACILITY NFSDBG_PNFS_LD
> -
> -/* Bit numbers */
> -#define EXTENT_INITIALIZED 0
> -#define EXTENT_WRITTEN 1
> -#define EXTENT_IN_COMMIT 2
> -#define INTERNAL_EXISTS MY_MAX_TAGS
> -#define INTERNAL_MASK ((1 << INTERNAL_EXISTS) - 1)
> -
> -/* Returns largest t<=s s.t. t%base==0 */
> -static inline sector_t normalize(sector_t s, int base)
> -{
> - sector_t tmp = s; /* Since do_div modifies its argument */
> - return s - sector_div(tmp, base);
> -}
> -
> -static inline sector_t normalize_up(sector_t s, int base)
> -{
> - return normalize(s + base - 1, base);
> -}
> -
> -/* Complete stub using list while determine API wanted */
> -
> -/* Returns tags, or negative */
> -static int32_t _find_entry(struct my_tree *tree, u64 s)
> -{
> - struct pnfs_inval_tracking *pos;
> -
> - dprintk("%s(%llu) enter\n", __func__, s);
> - list_for_each_entry_reverse(pos, &tree->mtt_stub, it_link) {
> - if (pos->it_sector > s)
> - continue;
> - else if (pos->it_sector == s)
> - return pos->it_tags & INTERNAL_MASK;
> - else
> - break;
> - }
> - return -ENOENT;
> -}
> -
> -static inline
> -int _has_tag(struct my_tree *tree, u64 s, int32_t tag)
> -{
> - int32_t tags;
> -
> - dprintk("%s(%llu, %i) enter\n", __func__, s, tag);
> - s = normalize(s, tree->mtt_step_size);
> - tags = _find_entry(tree, s);
> - if ((tags < 0) || !(tags & (1 << tag)))
> - return 0;
> - else
> - return 1;
> -}
> -
> -/* Creates entry with tag, or if entry already exists, unions tag to it.
> - * If storage is not NULL, newly created entry will use it.
> - * Returns number of entries added, or negative on error.
> - */
> -static int _add_entry(struct my_tree *tree, u64 s, int32_t tag,
> - struct pnfs_inval_tracking *storage)
> -{
> - int found = 0;
> - struct pnfs_inval_tracking *pos;
> -
> - dprintk("%s(%llu, %i, %p) enter\n", __func__, s, tag, storage);
> - list_for_each_entry_reverse(pos, &tree->mtt_stub, it_link) {
> - if (pos->it_sector > s)
> - continue;
> - else if (pos->it_sector == s) {
> - found = 1;
> - break;
> - } else
> - break;
> - }
> - if (found) {
> - pos->it_tags |= (1 << tag);
> - return 0;
> - } else {
> - struct pnfs_inval_tracking *new;
> - new = storage;
> - new->it_sector = s;
> - new->it_tags = (1 << tag);
> - list_add(&new->it_link, &pos->it_link);
> - return 1;
> - }
> -}
> -
> -/* XXXX Really want option to not create */
> -/* Over range, unions tag with existing entries, else creates entry with tag */
> -static int _set_range(struct my_tree *tree, int32_t tag, u64 s, u64 length)
> -{
> - u64 i;
> -
> - dprintk("%s(%i, %llu, %llu) enter\n", __func__, tag, s, length);
> - for (i = normalize(s, tree->mtt_step_size); i < s + length;
> - i += tree->mtt_step_size)
> - if (_add_entry(tree, i, tag, NULL))
> - return -ENOMEM;
> - return 0;
> -}
> -
> -/* Ensure that future operations on given range of tree will not malloc */
> -static int _preload_range(struct pnfs_inval_markings *marks,
> - u64 offset, u64 length)
> -{
> - u64 start, end, s;
> - int count, i, used = 0, status = -ENOMEM;
> - struct pnfs_inval_tracking **storage;
> - struct my_tree *tree = &marks->im_tree;
> -
> - dprintk("%s(%llu, %llu) enter\n", __func__, offset, length);
> - start = normalize(offset, tree->mtt_step_size);
> - end = normalize_up(offset + length, tree->mtt_step_size);
> - count = (int)(end - start) / (int)tree->mtt_step_size;
> -
> - /* Pre-malloc what memory we might need */
> - storage = kcalloc(count, sizeof(*storage), GFP_NOFS);
> - if (!storage)
> - return -ENOMEM;
> - for (i = 0; i < count; i++) {
> - storage[i] = kmalloc(sizeof(struct pnfs_inval_tracking),
> - GFP_NOFS);
> - if (!storage[i])
> - goto out_cleanup;
> - }
> -
> - spin_lock_bh(&marks->im_lock);
> - for (s = start; s < end; s += tree->mtt_step_size)
> - used += _add_entry(tree, s, INTERNAL_EXISTS, storage[used]);
> - spin_unlock_bh(&marks->im_lock);
> -
> - status = 0;
> -
> - out_cleanup:
> - for (i = used; i < count; i++) {
> - if (!storage[i])
> - break;
> - kfree(storage[i]);
> - }
> - kfree(storage);
> - return status;
> -}
> -
> -/* We are relying on page lock to serialize this */
> -int bl_is_sector_init(struct pnfs_inval_markings *marks, sector_t isect)
> -{
> - int rv;
> -
> - spin_lock_bh(&marks->im_lock);
> - rv = _has_tag(&marks->im_tree, isect, EXTENT_INITIALIZED);
> - spin_unlock_bh(&marks->im_lock);
> - return rv;
> -}
> -
> -/* Assume start, end already sector aligned */
> -static int
> -_range_has_tag(struct my_tree *tree, u64 start, u64 end, int32_t tag)
> -{
> - struct pnfs_inval_tracking *pos;
> - u64 expect = 0;
> -
> - dprintk("%s(%llu, %llu, %i) enter\n", __func__, start, end, tag);
> - list_for_each_entry_reverse(pos, &tree->mtt_stub, it_link) {
> - if (pos->it_sector >= end)
> - continue;
> - if (!expect) {
> - if ((pos->it_sector == end - tree->mtt_step_size) &&
> - (pos->it_tags & (1 << tag))) {
> - expect = pos->it_sector - tree->mtt_step_size;
> - if (pos->it_sector < tree->mtt_step_size || expect < start)
> - return 1;
> - continue;
> - } else {
> - return 0;
> - }
> - }
> - if (pos->it_sector != expect || !(pos->it_tags & (1 << tag)))
> - return 0;
> - expect -= tree->mtt_step_size;
> - if (expect < start)
> - return 1;
> - }
> - return 0;
> -}
> -
> -static int is_range_written(struct pnfs_inval_markings *marks,
> - sector_t start, sector_t end)
> -{
> - int rv;
> -
> - spin_lock_bh(&marks->im_lock);
> - rv = _range_has_tag(&marks->im_tree, start, end, EXTENT_WRITTEN);
> - spin_unlock_bh(&marks->im_lock);
> - return rv;
> -}
> -
> -/* Marks sectors in [offest, offset_length) as having been initialized.
> - * All lengths are step-aligned, where step is min(pagesize, blocksize).
> - * Currently assumes offset is page-aligned
> - */
> -int bl_mark_sectors_init(struct pnfs_inval_markings *marks,
> - sector_t offset, sector_t length)
> -{
> - sector_t start, end;
> -
> - dprintk("%s(offset=%llu,len=%llu) enter\n",
> - __func__, (u64)offset, (u64)length);
> -
> - start = normalize(offset, marks->im_block_size);
> - end = normalize_up(offset + length, marks->im_block_size);
> - if (_preload_range(marks, start, end - start))
> - goto outerr;
> -
> - spin_lock_bh(&marks->im_lock);
> - if (_set_range(&marks->im_tree, EXTENT_INITIALIZED, offset, length))
> - goto out_unlock;
> - spin_unlock_bh(&marks->im_lock);
> -
> - return 0;
> -
> -out_unlock:
> - spin_unlock_bh(&marks->im_lock);
> -outerr:
> - return -ENOMEM;
> -}
> -
> -/* Marks sectors in [offest, offset+length) as having been written to disk.
> - * All lengths should be block aligned.
> - */
> -static int mark_written_sectors(struct pnfs_inval_markings *marks,
> - sector_t offset, sector_t length)
> -{
> - int status;
> -
> - dprintk("%s(offset=%llu,len=%llu) enter\n", __func__,
> - (u64)offset, (u64)length);
> - spin_lock_bh(&marks->im_lock);
> - status = _set_range(&marks->im_tree, EXTENT_WRITTEN, offset, length);
> - spin_unlock_bh(&marks->im_lock);
> - return status;
> -}
> -
> -static void print_short_extent(struct pnfs_block_short_extent *be)
> -{
> - dprintk("PRINT SHORT EXTENT extent %p\n", be);
> - if (be) {
> - dprintk(" be_f_offset %llu\n", (u64)be->bse_f_offset);
> - dprintk(" be_length %llu\n", (u64)be->bse_length);
> - }
> -}
> -
> -static void print_clist(struct list_head *list, unsigned int count)
> -{
> - struct pnfs_block_short_extent *be;
> - unsigned int i = 0;
> -
> - ifdebug(FACILITY) {
> - printk(KERN_DEBUG "****************\n");
> - printk(KERN_DEBUG "Extent list looks like:\n");
> - list_for_each_entry(be, list, bse_node) {
> - i++;
> - print_short_extent(be);
> - }
> - if (i != count)
> - printk(KERN_DEBUG "\n\nExpected %u entries\n\n\n", count);
> - printk(KERN_DEBUG "****************\n");
> - }
> -}
> -
> -/* Note: In theory, we should do more checking that devid's match between
> - * old and new, but if they don't, the lists are too corrupt to salvage anyway.
> - */
> -/* Note this is very similar to bl_add_merge_extent */
> -static void add_to_commitlist(struct pnfs_block_layout *bl,
> - struct pnfs_block_short_extent *new)
> -{
> - struct list_head *clist = &bl->bl_commit;
> - struct pnfs_block_short_extent *old, *save;
> - sector_t end = new->bse_f_offset + new->bse_length;
> -
> - dprintk("%s enter\n", __func__);
> - print_short_extent(new);
> - print_clist(clist, bl->bl_count);
> - bl->bl_count++;
> - /* Scan for proper place to insert, extending new to the left
> - * as much as possible.
> - */
> - list_for_each_entry_safe(old, save, clist, bse_node) {
> - if (new->bse_f_offset < old->bse_f_offset)
> - break;
> - if (end <= old->bse_f_offset + old->bse_length) {
> - /* Range is already in list */
> - bl->bl_count--;
> - kfree(new);
> - return;
> - } else if (new->bse_f_offset <=
> - old->bse_f_offset + old->bse_length) {
> - /* new overlaps or abuts existing be */
> - if (new->bse_mdev == old->bse_mdev) {
> - /* extend new to fully replace old */
> - new->bse_length += new->bse_f_offset -
> - old->bse_f_offset;
> - new->bse_f_offset = old->bse_f_offset;
> - list_del(&old->bse_node);
> - bl->bl_count--;
> - kfree(old);
> - }
> - }
> - }
> - /* Note that if we never hit the above break, old will not point to a
> - * valid extent. However, in that case &old->bse_node==list.
> - */
> - list_add_tail(&new->bse_node, &old->bse_node);
> - /* Scan forward for overlaps. If we find any, extend new and
> - * remove the overlapped extent.
> - */
> - old = list_prepare_entry(new, clist, bse_node);
> - list_for_each_entry_safe_continue(old, save, clist, bse_node) {
> - if (end < old->bse_f_offset)
> - break;
> - /* new overlaps or abuts old */
> - if (new->bse_mdev == old->bse_mdev) {
> - if (end < old->bse_f_offset + old->bse_length) {
> - /* extend new to fully cover old */
> - end = old->bse_f_offset + old->bse_length;
> - new->bse_length = end - new->bse_f_offset;
> - }
> - list_del(&old->bse_node);
> - bl->bl_count--;
> - kfree(old);
> - }
> - }
> - dprintk("%s: after merging\n", __func__);
> - print_clist(clist, bl->bl_count);
> -}
> -
> -/* Note the range described by offset, length is guaranteed to be contained
> - * within be.
> - * new will be freed, either by this function or add_to_commitlist if they
> - * decide not to use it, or after LAYOUTCOMMIT uses it in the commitlist.
> - */
> -int bl_mark_for_commit(struct pnfs_block_extent *be,
> - sector_t offset, sector_t length,
> - struct pnfs_block_short_extent *new)
> -{
> - sector_t new_end, end = offset + length;
> - struct pnfs_block_layout *bl = container_of(be->be_inval,
> - struct pnfs_block_layout,
> - bl_inval);
> -
> - mark_written_sectors(be->be_inval, offset, length);
> - /* We want to add the range to commit list, but it must be
> - * block-normalized, and verified that the normalized range has
> - * been entirely written to disk.
> - */
> - new->bse_f_offset = offset;
> - offset = normalize(offset, bl->bl_blocksize);
> - if (offset < new->bse_f_offset) {
> - if (is_range_written(be->be_inval, offset, new->bse_f_offset))
> - new->bse_f_offset = offset;
> - else
> - new->bse_f_offset = offset + bl->bl_blocksize;
> - }
> - new_end = normalize_up(end, bl->bl_blocksize);
> - if (end < new_end) {
> - if (is_range_written(be->be_inval, end, new_end))
> - end = new_end;
> - else
> - end = new_end - bl->bl_blocksize;
> - }
> - if (end <= new->bse_f_offset) {
> - kfree(new);
> - return 0;
> - }
> - new->bse_length = end - new->bse_f_offset;
> - new->bse_devid = be->be_devid;
> - new->bse_mdev = be->be_mdev;
> -
> - spin_lock(&bl->bl_ext_lock);
> - add_to_commitlist(bl, new);
> - spin_unlock(&bl->bl_ext_lock);
> - return 0;
> -}
> -
> -static void print_bl_extent(struct pnfs_block_extent *be)
> -{
> - dprintk("PRINT EXTENT extent %p\n", be);
> - if (be) {
> - dprintk(" be_f_offset %llu\n", (u64)be->be_f_offset);
> - dprintk(" be_length %llu\n", (u64)be->be_length);
> - dprintk(" be_v_offset %llu\n", (u64)be->be_v_offset);
> - dprintk(" be_state %d\n", be->be_state);
> - }
> -}
> -
> -static void
> -destroy_extent(struct kref *kref)
> -{
> - struct pnfs_block_extent *be;
> -
> - be = container_of(kref, struct pnfs_block_extent, be_refcnt);
> - dprintk("%s be=%p\n", __func__, be);
> - kfree(be);
> -}
> -
> -void
> -bl_put_extent(struct pnfs_block_extent *be)
> -{
> - if (be) {
> - dprintk("%s enter %p (%i)\n", __func__, be,
> - atomic_read(&be->be_refcnt.refcount));
> - kref_put(&be->be_refcnt, destroy_extent);
> - }
> -}
> -
> -struct pnfs_block_extent *bl_alloc_extent(void)
> -{
> - struct pnfs_block_extent *be;
> -
> - be = kmalloc(sizeof(struct pnfs_block_extent), GFP_NOFS);
> - if (!be)
> - return NULL;
> - INIT_LIST_HEAD(&be->be_node);
> - kref_init(&be->be_refcnt);
> - be->be_inval = NULL;
> - return be;
> -}
> -
> -static void print_elist(struct list_head *list)
> -{
> - struct pnfs_block_extent *be;
> - dprintk("****************\n");
> - dprintk("Extent list looks like:\n");
> - list_for_each_entry(be, list, be_node) {
> - print_bl_extent(be);
> - }
> - dprintk("****************\n");
> -}
> -
> -static inline int
> -extents_consistent(struct pnfs_block_extent *old, struct pnfs_block_extent *new)
> -{
> - /* Note this assumes new->be_f_offset >= old->be_f_offset */
> - return (new->be_state == old->be_state) &&
> - ((new->be_state == PNFS_BLOCK_NONE_DATA) ||
> - ((new->be_v_offset - old->be_v_offset ==
> - new->be_f_offset - old->be_f_offset) &&
> - new->be_mdev == old->be_mdev));
> -}
> -
> -/* Adds new to appropriate list in bl, modifying new and removing existing
> - * extents as appropriate to deal with overlaps.
> - *
> - * See bl_find_get_extent for list constraints.
> - *
> - * Refcount on new is already set. If end up not using it, or error out,
> - * need to put the reference.
> - *
> - * bl->bl_ext_lock is held by caller.
> - */
> -int
> -bl_add_merge_extent(struct pnfs_block_layout *bl,
> - struct pnfs_block_extent *new)
> -{
> - struct pnfs_block_extent *be, *tmp;
> - sector_t end = new->be_f_offset + new->be_length;
> - struct list_head *list;
> -
> - dprintk("%s enter with be=%p\n", __func__, new);
> - print_bl_extent(new);
> - list = &bl->bl_extents[bl_choose_list(new->be_state)];
> - print_elist(list);
> -
> - /* Scan for proper place to insert, extending new to the left
> - * as much as possible.
> - */
> - list_for_each_entry_safe_reverse(be, tmp, list, be_node) {
> - if (new->be_f_offset >= be->be_f_offset + be->be_length)
> - break;
> - if (new->be_f_offset >= be->be_f_offset) {
> - if (end <= be->be_f_offset + be->be_length) {
> - /* new is a subset of existing be*/
> - if (extents_consistent(be, new)) {
> - dprintk("%s: new is subset, ignoring\n",
> - __func__);
> - bl_put_extent(new);
> - return 0;
> - } else {
> - goto out_err;
> - }
> - } else {
> - /* |<-- be -->|
> - * |<-- new -->| */
> - if (extents_consistent(be, new)) {
> - /* extend new to fully replace be */
> - new->be_length += new->be_f_offset -
> - be->be_f_offset;
> - new->be_f_offset = be->be_f_offset;
> - new->be_v_offset = be->be_v_offset;
> - dprintk("%s: removing %p\n", __func__, be);
> - list_del(&be->be_node);
> - bl_put_extent(be);
> - } else {
> - goto out_err;
> - }
> - }
> - } else if (end >= be->be_f_offset + be->be_length) {
> - /* new extent overlap existing be */
> - if (extents_consistent(be, new)) {
> - /* extend new to fully replace be */
> - dprintk("%s: removing %p\n", __func__, be);
> - list_del(&be->be_node);
> - bl_put_extent(be);
> - } else {
> - goto out_err;
> - }
> - } else if (end > be->be_f_offset) {
> - /* |<-- be -->|
> - *|<-- new -->| */
> - if (extents_consistent(new, be)) {
> - /* extend new to fully replace be */
> - new->be_length += be->be_f_offset + be->be_length -
> - new->be_f_offset - new->be_length;
> - dprintk("%s: removing %p\n", __func__, be);
> - list_del(&be->be_node);
> - bl_put_extent(be);
> - } else {
> - goto out_err;
> - }
> - }
> - }
> - /* Note that if we never hit the above break, be will not point to a
> - * valid extent. However, in that case &be->be_node==list.
> - */
> - list_add(&new->be_node, &be->be_node);
> - dprintk("%s: inserting new\n", __func__);
> - print_elist(list);
> - /* FIXME - The per-list consistency checks have all been done,
> - * should now check cross-list consistency.
> - */
> - return 0;
> -
> - out_err:
> - bl_put_extent(new);
> - return -EIO;
> -}
> -
> -/* Returns extent, or NULL. If a second READ extent exists, it is returned
> - * in cow_read, if given.
> - *
> - * The extents are kept in two seperate ordered lists, one for READ and NONE,
> - * one for READWRITE and INVALID. Within each list, we assume:
> - * 1. Extents are ordered by file offset.
> - * 2. For any given isect, there is at most one extents that matches.
> - */
> -struct pnfs_block_extent *
> -bl_find_get_extent(struct pnfs_block_layout *bl, sector_t isect,
> - struct pnfs_block_extent **cow_read)
> -{
> - struct pnfs_block_extent *be, *cow, *ret;
> - int i;
> -
> - dprintk("%s enter with isect %llu\n", __func__, (u64)isect);
> - cow = ret = NULL;
> - spin_lock(&bl->bl_ext_lock);
> - for (i = 0; i < EXTENT_LISTS; i++) {
> - list_for_each_entry_reverse(be, &bl->bl_extents[i], be_node) {
> - if (isect >= be->be_f_offset + be->be_length)
> - break;
> - if (isect >= be->be_f_offset) {
> - /* We have found an extent */
> - dprintk("%s Get %p (%i)\n", __func__, be,
> - atomic_read(&be->be_refcnt.refcount));
> - kref_get(&be->be_refcnt);
> - if (!ret)
> - ret = be;
> - else if (be->be_state != PNFS_BLOCK_READ_DATA)
> - bl_put_extent(be);
> - else
> - cow = be;
> - break;
> - }
> - }
> - if (ret &&
> - (!cow_read || ret->be_state != PNFS_BLOCK_INVALID_DATA))
> - break;
> - }
> - spin_unlock(&bl->bl_ext_lock);
> - if (cow_read)
> - *cow_read = cow;
> - print_bl_extent(ret);
> - return ret;
> -}
> -
> -/* Similar to bl_find_get_extent, but called with lock held, and ignores cow */
> -static struct pnfs_block_extent *
> -bl_find_get_extent_locked(struct pnfs_block_layout *bl, sector_t isect)
> -{
> - struct pnfs_block_extent *be, *ret = NULL;
> - int i;
> -
> - dprintk("%s enter with isect %llu\n", __func__, (u64)isect);
> - for (i = 0; i < EXTENT_LISTS; i++) {
> - if (ret)
> - break;
> - list_for_each_entry_reverse(be, &bl->bl_extents[i], be_node) {
> - if (isect >= be->be_f_offset + be->be_length)
> - break;
> - if (isect >= be->be_f_offset) {
> - /* We have found an extent */
> - dprintk("%s Get %p (%i)\n", __func__, be,
> - atomic_read(&be->be_refcnt.refcount));
> - kref_get(&be->be_refcnt);
> - ret = be;
> - break;
> - }
> - }
> - }
> - print_bl_extent(ret);
> - return ret;
> -}
> -
> -int
> -encode_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
> - struct xdr_stream *xdr,
> - const struct nfs4_layoutcommit_args *arg)
> -{
> - struct pnfs_block_short_extent *lce, *save;
> - unsigned int count = 0;
> - __be32 *p, *xdr_start;
> -
> - dprintk("%s enter\n", __func__);
> - /* BUG - creation of bl_commit is buggy - need to wait for
> - * entire block to be marked WRITTEN before it can be added.
> - */
> - spin_lock(&bl->bl_ext_lock);
> - /* Want to adjust for possible truncate */
> - /* We now want to adjust argument range */
> -
> - /* XDR encode the ranges found */
> - xdr_start = xdr_reserve_space(xdr, 8);
> - if (!xdr_start)
> - goto out;
> - list_for_each_entry_safe(lce, save, &bl->bl_commit, bse_node) {
> - p = xdr_reserve_space(xdr, 7 * 4 + sizeof(lce->bse_devid.data));
> - if (!p)
> - break;
> - p = xdr_encode_opaque_fixed(p, lce->bse_devid.data, NFS4_DEVICEID4_SIZE);
> - p = xdr_encode_hyper(p, lce->bse_f_offset << SECTOR_SHIFT);
> - p = xdr_encode_hyper(p, lce->bse_length << SECTOR_SHIFT);
> - p = xdr_encode_hyper(p, 0LL);
> - *p++ = cpu_to_be32(PNFS_BLOCK_READWRITE_DATA);
> - list_move_tail(&lce->bse_node, &bl->bl_committing);
> - bl->bl_count--;
> - count++;
> - }
> - xdr_start[0] = cpu_to_be32((xdr->p - xdr_start - 1) * 4);
> - xdr_start[1] = cpu_to_be32(count);
> -out:
> - spin_unlock(&bl->bl_ext_lock);
> - dprintk("%s found %i ranges\n", __func__, count);
> - return 0;
> -}
> -
> -/* Helper function to set_to_rw that initialize a new extent */
> -static void
> -_prep_new_extent(struct pnfs_block_extent *new,
> - struct pnfs_block_extent *orig,
> - sector_t offset, sector_t length, int state)
> -{
> - kref_init(&new->be_refcnt);
> - /* don't need to INIT_LIST_HEAD(&new->be_node) */
> - memcpy(&new->be_devid, &orig->be_devid, sizeof(struct nfs4_deviceid));
> - new->be_mdev = orig->be_mdev;
> - new->be_f_offset = offset;
> - new->be_length = length;
> - new->be_v_offset = orig->be_v_offset - orig->be_f_offset + offset;
> - new->be_state = state;
> - new->be_inval = orig->be_inval;
> -}
> -
> -/* Tries to merge be with extent in front of it in list.
> - * Frees storage if not used.
> - */
> -static struct pnfs_block_extent *
> -_front_merge(struct pnfs_block_extent *be, struct list_head *head,
> - struct pnfs_block_extent *storage)
> -{
> - struct pnfs_block_extent *prev;
> -
> - if (!storage)
> - goto no_merge;
> - if (&be->be_node == head || be->be_node.prev == head)
> - goto no_merge;
> - prev = list_entry(be->be_node.prev, struct pnfs_block_extent, be_node);
> - if ((prev->be_f_offset + prev->be_length != be->be_f_offset) ||
> - !extents_consistent(prev, be))
> - goto no_merge;
> - _prep_new_extent(storage, prev, prev->be_f_offset,
> - prev->be_length + be->be_length, prev->be_state);
> - list_replace(&prev->be_node, &storage->be_node);
> - bl_put_extent(prev);
> - list_del(&be->be_node);
> - bl_put_extent(be);
> - return storage;
> -
> - no_merge:
> - kfree(storage);
> - return be;
> -}
> -
> -static u64
> -set_to_rw(struct pnfs_block_layout *bl, u64 offset, u64 length)
> -{
> - u64 rv = offset + length;
> - struct pnfs_block_extent *be, *e1, *e2, *e3, *new, *old;
> - struct pnfs_block_extent *children[3];
> - struct pnfs_block_extent *merge1 = NULL, *merge2 = NULL;
> - int i = 0, j;
> -
> - dprintk("%s(%llu, %llu)\n", __func__, offset, length);
> - /* Create storage for up to three new extents e1, e2, e3 */
> - e1 = kmalloc(sizeof(*e1), GFP_ATOMIC);
> - e2 = kmalloc(sizeof(*e2), GFP_ATOMIC);
> - e3 = kmalloc(sizeof(*e3), GFP_ATOMIC);
> - /* BUG - we are ignoring any failure */
> - if (!e1 || !e2 || !e3)
> - goto out_nosplit;
> -
> - spin_lock(&bl->bl_ext_lock);
> - be = bl_find_get_extent_locked(bl, offset);
> - rv = be->be_f_offset + be->be_length;
> - if (be->be_state != PNFS_BLOCK_INVALID_DATA) {
> - spin_unlock(&bl->bl_ext_lock);
> - goto out_nosplit;
> - }
> - /* Add e* to children, bumping e*'s krefs */
> - if (be->be_f_offset != offset) {
> - _prep_new_extent(e1, be, be->be_f_offset,
> - offset - be->be_f_offset,
> - PNFS_BLOCK_INVALID_DATA);
> - children[i++] = e1;
> - print_bl_extent(e1);
> - } else
> - merge1 = e1;
> - _prep_new_extent(e2, be, offset,
> - min(length, be->be_f_offset + be->be_length - offset),
> - PNFS_BLOCK_READWRITE_DATA);
> - children[i++] = e2;
> - print_bl_extent(e2);
> - if (offset + length < be->be_f_offset + be->be_length) {
> - _prep_new_extent(e3, be, e2->be_f_offset + e2->be_length,
> - be->be_f_offset + be->be_length -
> - offset - length,
> - PNFS_BLOCK_INVALID_DATA);
> - children[i++] = e3;
> - print_bl_extent(e3);
> - } else
> - merge2 = e3;
> -
> - /* Remove be from list, and insert the e* */
> - /* We don't get refs on e*, since this list is the base reference
> - * set when init'ed.
> - */
> - if (i < 3)
> - children[i] = NULL;
> - new = children[0];
> - list_replace(&be->be_node, &new->be_node);
> - bl_put_extent(be);
> - new = _front_merge(new, &bl->bl_extents[RW_EXTENT], merge1);
> - for (j = 1; j < i; j++) {
> - old = new;
> - new = children[j];
> - list_add(&new->be_node, &old->be_node);
> - }
> - if (merge2) {
> - /* This is a HACK, should just create a _back_merge function */
> - new = list_entry(new->be_node.next,
> - struct pnfs_block_extent, be_node);
> - new = _front_merge(new, &bl->bl_extents[RW_EXTENT], merge2);
> - }
> - spin_unlock(&bl->bl_ext_lock);
> -
> - /* Since we removed the base reference above, be is now scheduled for
> - * destruction.
> - */
> - bl_put_extent(be);
> - dprintk("%s returns %llu after split\n", __func__, rv);
> - return rv;
> -
> - out_nosplit:
> - kfree(e1);
> - kfree(e2);
> - kfree(e3);
> - dprintk("%s returns %llu without splitting\n", __func__, rv);
> - return rv;
> -}
> -
> -void
> -clean_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
> - const struct nfs4_layoutcommit_args *arg,
> - int status)
> -{
> - struct pnfs_block_short_extent *lce, *save;
> -
> - dprintk("%s status %d\n", __func__, status);
> - list_for_each_entry_safe(lce, save, &bl->bl_committing, bse_node) {
> - if (likely(!status)) {
> - u64 offset = lce->bse_f_offset;
> - u64 end = offset + lce->bse_length;
> -
> - do {
> - offset = set_to_rw(bl, offset, end - offset);
> - } while (offset < end);
> - list_del(&lce->bse_node);
> -
> - kfree(lce);
> - } else {
> - list_del(&lce->bse_node);
> - spin_lock(&bl->bl_ext_lock);
> - add_to_commitlist(bl, lce);
> - spin_unlock(&bl->bl_ext_lock);
> - }
> - }
> -}
> -
> -int bl_push_one_short_extent(struct pnfs_inval_markings *marks)
> -{
> - struct pnfs_block_short_extent *new;
> -
> - new = kmalloc(sizeof(*new), GFP_NOFS);
> - if (unlikely(!new))
> - return -ENOMEM;
> -
> - spin_lock_bh(&marks->im_lock);
> - list_add(&new->bse_node, &marks->im_extents);
> - spin_unlock_bh(&marks->im_lock);
> -
> - return 0;
> -}
> -
> -struct pnfs_block_short_extent *
> -bl_pop_one_short_extent(struct pnfs_inval_markings *marks)
> -{
> - struct pnfs_block_short_extent *rv = NULL;
> -
> - spin_lock_bh(&marks->im_lock);
> - if (!list_empty(&marks->im_extents)) {
> - rv = list_entry((&marks->im_extents)->next,
> - struct pnfs_block_short_extent, bse_node);
> - list_del_init(&rv->bse_node);
> - }
> - spin_unlock_bh(&marks->im_lock);
> -
> - return rv;
> -}
> -
> -void bl_free_short_extents(struct pnfs_inval_markings *marks, int num_to_free)
> -{
> - struct pnfs_block_short_extent *se = NULL, *tmp;
> -
> - if (num_to_free <= 0)
> - return;
> -
> - spin_lock(&marks->im_lock);
> - list_for_each_entry_safe(se, tmp, &marks->im_extents, bse_node) {
> - list_del(&se->bse_node);
> - kfree(se);
> - if (--num_to_free == 0)
> - break;
> - }
> - spin_unlock(&marks->im_lock);
> -
> - BUG_ON(num_to_free > 0);
> -}


2014-08-26 15:02:30

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On 08/26/2014 05:55 PM, Trond Myklebust wrote:
> On Tue, Aug 26, 2014 at 10:37 AM, Boaz Harrosh <[email protected]> wrote:
>
> The above can deadlock if there are no session slots available to send
> the layoutcommit, in which case the recall won't complete, and the
> layoutget won't get a reply (which would free up the slot).
>

What? The back-channel and the fore-channel do not use the same
slots; these are two different slot tables, no?

Matt, Adam, you need to chip in here.

If it is as you say, then yes it must be as Christoph wrote it.

And the Ganesha server must be fixed because it has a slot system per
channel.

Thanks
Boaz


2014-08-26 14:10:17

by Boaz Harrosh

[permalink] [raw]
Subject: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

From: Boaz Harrosh <[email protected]>

This fixes a deadlock in the pnfs recall processing.

pnfs_layoutcommit_inode() is called through update_inode(), which
the VFS invokes because the pnfs write IO path marks the inode
dirty.

But the VFS will not schedule another update_inode() if it is
already inside an update_inode() or an sb writeback.

As part of writeback the pnfs code might get stuck in LAYOUT_GET,
with the server returning ERR_RECALL_CONFLICT because some
operation has caused the server to RECALL all layouts, including
those held by our client.

So the RECALL is received, but our client keeps returning ERR_DELAY
because its write segments need a LAYOUT_COMMIT, and the
pnfs_layoutcommit_inode will never come because it is scheduled
behind the LAYOUT_GET, which is stuck waiting for the recall to
finish.

Hence the deadlock: the client is stuck polling LAYOUT_GET and
receiving ERR_RECALL_CONFLICT, while the server is stuck polling
RECALL and receiving ERR_DELAY.

With pnfs-objects the above condition can easily happen when
a file grows beyond a group of devices. The pnfs-objects server
will RECALL all layouts because the file-objects map will
change and all old layouts will have stale attributes, therefore
the RECALL is initiated as part of a LAYOUT_GET, and this can
be triggered from within a single client operation.

A simple solution is to kick off a pnfs_layoutcommit_inode()
from within the recall, to free any need-to-commit segments
and let the client return success on the RECALL, so streaming
can continue.

This patch is based on 3.17-rc1. It is completely UNTESTED.
I have tested a version of this patch around the 3.12 kernel,
at which point the deadlock was resolved, but I hit some race
conditions in pnfs state management further on, so the overall
processing was still not fixed. But hopefully those were fixed
by Trond and Christoph, and it should work better now.

Signed-off-by: Boaz Harrosh <[email protected]>
---
fs/nfs/callback_proc.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
index 41db525..8660f96 100644
--- a/fs/nfs/callback_proc.c
+++ b/fs/nfs/callback_proc.c
@@ -171,6 +171,14 @@ static u32 initiate_file_draining(struct nfs_client *clp,
goto out;

ino = lo->plh_inode;
+
+ spin_lock(&ino->i_lock);
+ pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
+ spin_unlock(&ino->i_lock);
+
+ /* kick out any segs held by need to commit */
+ pnfs_layoutcommit_inode(ino, true);
+
spin_lock(&ino->i_lock);
if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
@@ -178,7 +186,6 @@ static u32 initiate_file_draining(struct nfs_client *clp,
rv = NFS4ERR_DELAY;
else
rv = NFS4ERR_NOMATCHING_LAYOUT;
- pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
spin_unlock(&ino->i_lock);
pnfs_free_lseg_list(&free_me_list);
pnfs_put_layout_hdr(lo);
--
1.9.3



2014-08-21 16:08:00

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 18/19] pnfs/blocklayout: return layouts on setattr

This speeds up truncate-heavy workloads like fsx by multiple orders of
magnitude.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index a7524c4..d5a2b87 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -799,7 +799,8 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
.id = LAYOUT_BLOCK_VOLUME,
.name = "LAYOUT_BLOCK_VOLUME",
.owner = THIS_MODULE,
- .flags = PNFS_READ_WHOLE_PAGE,
+ .flags = PNFS_LAYOUTRET_ON_SETATTR |
+ PNFS_READ_WHOLE_PAGE,
.read_pagelist = bl_read_pagelist,
.write_pagelist = bl_write_pagelist,
.alloc_layout_hdr = bl_alloc_layout_hdr,
--
1.9.1


2014-08-27 08:22:44

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On 08/26/2014 09:41 PM, Trond Myklebust wrote:
<>
> Whether your particular server allows it or not is irrelevant.
> We're not coding the client to a particular implementation.

God, no. This is not up to any particular server implementation;
it is stated clearly in the draft text. The server is specifically
forbidden to wait in lo_get and device_info/device_list, and in
fact on any resource that is recallable.

This is in the STD exactly for the reasons you stated.

> None of
> the other callbacks do synchronous RPC calls, and that's very
> intentional.
>

Because none of the other callbacks have the protection of the STD
that forbids the server from synchronous operations, so the client
has to.

Thanks
Boaz


2014-08-21 16:07:34

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 08/19] pnfs: add return_range method

If a layout driver keeps per-inode state outside of the layout segments it
needs to be notified of any layout returns or recalls on an inode, and not
just about the freeing of layout segments. Add a method to accomplish this,
which will allow the block layout driver to handle the case of truncated
and re-expanded files properly.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/callback_proc.c | 6 +++++-
fs/nfs/pnfs.c | 10 ++++++++++
fs/nfs/pnfs.h | 3 +++
3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
index bf017b0..86541e0 100644
--- a/fs/nfs/callback_proc.c
+++ b/fs/nfs/callback_proc.c
@@ -179,8 +179,12 @@ static u32 initiate_file_draining(struct nfs_client *clp,
&args->cbl_range)) {
need_commit = true;
rv = NFS4ERR_DELAY;
+ } else {
+ if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
+ NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo,
+ &args->cbl_range);
+ }
}
-
pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
spin_unlock(&ino->i_lock);
pnfs_free_lseg_list(&free_me_list);
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index bce7f1b..e481d1c 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -857,6 +857,16 @@ _pnfs_return_layout(struct inode *ino)
empty = list_empty(&lo->plh_segs);
pnfs_clear_layoutcommit(ino, &tmp_list);
pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL);
+
+ if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
+ struct pnfs_layout_range range = {
+ .iomode = IOMODE_ANY,
+ .offset = 0,
+ .length = NFS4_MAX_UINT64,
+ };
+ NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo, &range);
+ }
+
/* Don't send a LAYOUTRETURN if list was initially empty */
if (empty) {
spin_unlock(&ino->i_lock);
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index 302b279..044c071 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -94,6 +94,9 @@ struct pnfs_layoutdriver_type {
struct pnfs_layout_segment * (*alloc_lseg) (struct pnfs_layout_hdr *layoutid, struct nfs4_layoutget_res *lgr, gfp_t gfp_flags);
void (*free_lseg) (struct pnfs_layout_segment *lseg);

+ void (*return_range) (struct pnfs_layout_hdr *lo,
+ struct pnfs_layout_range *range);
+
/* test for nfs page cache coalescing */
const struct nfs_pageio_ops *pg_read_ops;
const struct nfs_pageio_ops *pg_write_ops;
--
1.9.1


2014-08-21 16:07:36

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 09/19] pnfs: allow splicing pre-encoded pages into the layoutcommit args

Currently there is no XDR buffer space allocated for the per-layout driver
layoutcommit payload, which leads to server buffer overflows in the
blocklayout driver even under simple workloads. As we can't do per-layout
sizes for XDR operations, we'll have to splice a previously encoded list
of pages into the XDR stream, similar to how we handle ACL buffers.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/nfs4xdr.c | 18 +++++++++++++-----
fs/nfs/pnfs.c | 15 +++++++++++++++
fs/nfs/pnfs.h | 4 ++--
include/linux/nfs_xdr.h | 3 +++
4 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
index e13b59d..f2cd957 100644
--- a/fs/nfs/nfs4xdr.c
+++ b/fs/nfs/nfs4xdr.c
@@ -395,7 +395,10 @@ static int nfs4_stat_to_errno(int);
2 /* last byte written */ + \
1 /* nt_timechanged (false) */ + \
1 /* layoutupdate4 layout type */ + \
- 1 /* NULL filelayout layoutupdate4 payload */)
+ 1 /* layoutupdate4 opaqueue len */)
+ /* the actual content of layoutupdate4 should
+ be allocated by drivers and spliced in
+ using xdr_write_pages */
#define decode_layoutcommit_maxsz (op_decode_hdr_maxsz + 3)
#define encode_layoutreturn_maxsz (8 + op_encode_hdr_maxsz + \
encode_stateid_maxsz + \
@@ -1990,7 +1993,7 @@ encode_layoutget(struct xdr_stream *xdr,
static int
encode_layoutcommit(struct xdr_stream *xdr,
struct inode *inode,
- const struct nfs4_layoutcommit_args *args,
+ struct nfs4_layoutcommit_args *args,
struct compound_hdr *hdr)
{
__be32 *p;
@@ -2011,11 +2014,16 @@ encode_layoutcommit(struct xdr_stream *xdr,
*p++ = cpu_to_be32(0); /* Never send time_modify_changed */
*p++ = cpu_to_be32(NFS_SERVER(args->inode)->pnfs_curr_ld->id);/* type */

- if (NFS_SERVER(inode)->pnfs_curr_ld->encode_layoutcommit)
+ if (NFS_SERVER(inode)->pnfs_curr_ld->encode_layoutcommit) {
NFS_SERVER(inode)->pnfs_curr_ld->encode_layoutcommit(
NFS_I(inode)->layout, xdr, args);
- else
- encode_uint32(xdr, 0); /* no layout-type payload */
+ } else {
+ encode_uint32(xdr, args->layoutupdate_len);
+ if (args->layoutupdate_pages) {
+ xdr_write_pages(xdr, args->layoutupdate_pages, 0,
+ args->layoutupdate_len);
+ }
+ }

return 0;
}
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index e481d1c..a823edf 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -1838,6 +1838,7 @@ void pnfs_cleanup_layoutcommit(struct nfs4_layoutcommit_data *data)
int
pnfs_layoutcommit_inode(struct inode *inode, bool sync)
{
+ struct pnfs_layoutdriver_type *ld = NFS_SERVER(inode)->pnfs_curr_ld;
struct nfs4_layoutcommit_data *data;
struct nfs_inode *nfsi = NFS_I(inode);
loff_t end_pos;
@@ -1888,6 +1889,20 @@ pnfs_layoutcommit_inode(struct inode *inode, bool sync)
data->args.lastbytewritten = end_pos - 1;
data->res.server = NFS_SERVER(inode);

+ if (ld->prepare_layoutcommit) {
+ status = ld->prepare_layoutcommit(&data->args);
+ if (status) {
+ spin_lock(&inode->i_lock);
+ if (end_pos < nfsi->layout->plh_lwb)
+ nfsi->layout->plh_lwb = end_pos;
+ spin_unlock(&inode->i_lock);
+ put_rpccred(data->cred);
+ set_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags);
+ goto clear_layoutcommitting;
+ }
+ }
+
+
status = nfs4_proc_layoutcommit(data, sync);
out:
if (status)
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index 044c071..e11a161 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -132,8 +132,8 @@ struct pnfs_layoutdriver_type {
const struct nfs4_layoutreturn_args *args);

void (*cleanup_layoutcommit) (struct nfs4_layoutcommit_data *data);
-
- void (*encode_layoutcommit) (struct pnfs_layout_hdr *layoutid,
+ int (*prepare_layoutcommit) (struct nfs4_layoutcommit_args *args);
+ void (*encode_layoutcommit) (struct pnfs_layout_hdr *lo,
struct xdr_stream *xdr,
const struct nfs4_layoutcommit_args *args);
};
diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
index 0040629..d79b9c6 100644
--- a/include/linux/nfs_xdr.h
+++ b/include/linux/nfs_xdr.h
@@ -279,6 +279,9 @@ struct nfs4_layoutcommit_args {
__u64 lastbytewritten;
struct inode *inode;
const u32 *bitmask;
+ size_t layoutupdate_len;
+ struct page *layoutupdate_page;
+ struct page **layoutupdate_pages;
};

struct nfs4_layoutcommit_res {
--
1.9.1


2014-08-26 16:59:48

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On Tue, Aug 26, 2014 at 12:56 PM, Boaz Harrosh <[email protected]> wrote:
> On 08/26/2014 06:36 PM, Trond Myklebust wrote:
>> On Tue, Aug 26, 2014 at 11:24 AM, Matt W. Benjamin <[email protected]> wrote:
>>> IIUC, the problem is the forechannel slot count, since the call you want to make synchronously is on the forechannel?
>
>
> Matt, no top-posting on a Linux mailing list ;-)
>
>> Yep. layoutcommit will be sent on the fore channel, which is why it
>> can deadlock with the initial layoutget (or whatever operation that
>> triggered the layout recall).
>
> Trond, you said below:
>> The above can deadlock if there are no session slots available to send
>> the layoutcommit, in which case the recall won't complete, and the
>> layoutget won't get a reply (which would free up the slot).
>
> Why would the layoutget not get a reply?
> This is how it goes with both the Ganesha server and knfsd, last I tested.
>
> [1]
> The case where LAYOUT_GET causes a LAYOUT_RECALL (including the lo_commit):
>
> client Server comments
> ~~~~~~ ~~~~~~ ~~~~~~~~
> LAYOUT_GET ==>
> <== LAYOUT_GET_REPLY(ERR_RECALL_CONFLICT)
> <--------- fore-channel is free
> <== RECALL
> LAYOUT_COMMIT ==>
> <== LAYOUT_COMMIT_REPLY
> <--------- fore-channel is free

Beep! No free slots, so this hangs.

> RECALL_REPLY(NO_MATCHING) =>
> <--------- back-channel is free
>
> Note that in this case the server is to send the RECALL only after
> the error reply to LAYOUT_GET; specifically, it is not allowed to get stuck
> inside LAYOUT_GET and wait for the RECALL. (mandated by the STD)
>
> [2]
> A LAYOUT_GET sent while a RECALL is already on the wire:
> client Server comments
> ~~~~~~ ~~~~~~ ~~~~~~~~
> <== RECALL
> LAYOUT_GET ==>
> <== LAYOUT_GET_REPLY(ERR_RECALL_CONFLICT)
> <--------- fore-channel is free
> LAYOUT_COMMIT ==>
> <== LAYOUT_COMMIT_REPLY
> <--------- fore-channel is free
> RECALL_REPLY(NO_MATCHING) =>
> <--------- back-channel is free
>
>
> [3]
> Or the worst case that lo_commit needs to wait for the channel Similar
> to [2] above:
>
> client Server comments
> ~~~~~~ ~~~~~~ ~~~~~~~~
> <== RECALL
> LAYOUT_GET ==>
> initiate_lo_commit ==> slot is taken, needs to wait
>
> <== LAYOUT_GET_REPLY(ERR_RECALL_CONFLICT)
> <--------- fore-channel is free
> LAYOUT_COMMIT ==> slot is now free, lo_commit goes through
> <== LAYOUT_COMMIT_REPLY
> <--------- fore-channel is free
> RECALL_REPLY(NO_MATCHING) =>
> <--------- back-channel is free
>
> So the most important thing is that the server must not get stuck in lo_get,
> and since there is a slot for each channel, the lo_commit can be sent
> from within the recall.
>
> What am I missing?
>
> Thanks
> Boaz
>



--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-08-26 18:41:40

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On Tue, Aug 26, 2014 at 2:19 PM, Boaz Harrosh <[email protected]> wrote:
> On 08/26/2014 08:54 PM, Trond Myklebust wrote:
>> On Tue, Aug 26, 2014 at 1:06 PM, Boaz Harrosh <[email protected]> wrote:
>
>>
>> The deadlock occurs _if_ the above layout commit is unable to get a
>> slot. You can't guarantee that it will, because the slot table is a
>> finite resource and it can be exhausted
>
>> Yes, all I have ever seen is 1 slot in any of the clients/servers I've
>> looked at, so I always assume 1 slot.
>
>> if you allow fore channel
>> calls to trigger synchronous recalls on the back channel
>
>> Beep! But this is exactly what I'm trying to say. The STD specifically
>> forbids that. The server is not allowed to wait here; it must return
>> immediately, with an error that frees the slot, and then later issue the
>> RECALL.
>
>> This is what I said exactly three times in my mail, and what I have
>> depicted in my flow:
>> Server async operation (mandated by the STD)
>> Client back-channel can be sync with the fore channel (not mentioned by the STD)
>
>> that again trigger synchronous calls on the fore channel.
>
>
>> You're basically saying
>> that the client needs to guarantee that it can allocate 2 slots before
>> it is allowed to send a layoutget just in case the server needs to
>> recall a layout.
>>
>
>> No, I am not saying that; please count. Since the server is not allowed
>> a sync operation, one slot is enough and the client can do a sync lo_commit
>> while in the recall.
>
>> If, OTOH, the layoutcommit is asynchronous, then there is no
>> serialisation and the back channel thread can happily reply to the
>> layout recall even if there are no free slots in the fore channel.
>>
>
> Sure that will work as well, but not optimally, and for no good reason.
>
>> Please go back to my flow with the 3 cases. See how the server never waits
>> for anything and will always immediately reply to the layout_get.
>> Since the server is not allowed a sync operation and is mandated by the
>> RFC text not to wait, the client is allowed to do sync operations,
>> because it is enough that only one side be async.
>
>> BTW: If what you are saying is true then there is a bug in the slot code,
>> because this patch does work, and everything flows past this situation.
>> I have a reproducer test that fails 100% of the time without this patch
>> and only fails much later at some other place, but not at this deadlock,
>> with this patch applied.
>
> Cheers
> Boaz
>

Whether your particular server allows it or not is irrelevant.
We're not coding the client to a particular implementation. None of
the other callbacks do synchronous RPC calls, and that's very
intentional.

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-08-25 14:21:01

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 08/19] pnfs: add return_range method

On Mon, Aug 25, 2014 at 10:17:24AM -0400, Anna Schumaker wrote:
> > To catch the intent - the first two clauses find excuses why we can't return
> > quite yet, while this if is for an optional feature in the actual return
> > path. If I wasn't updating but newly writing the function I'd actually
> > do something like:
>
> I'm a fan of nice looking code, and I like what you have below better. Can you arrange things to end up in this state? Or maybe send a cleanup patch after?

I'll send a cleanup later, that is unless I need to respin the series in this
area anyway for Boaz's different layoutcommit-on-recall proposal.
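
The snippet being referenced did not make it into this archive; purely as
an illustration, the restructured initiate_file_draining() under discussion
might look something like the following (the label name is invented and the
details may differ from what was actually posted):

	if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags)) {
		rv = NFS4ERR_DELAY;
		goto out_stateid;	/* can't return yet: bulk recall */
	}
	if (pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
					     &args->cbl_range)) {
		need_commit = true;	/* can't return yet: busy lsegs */
		rv = NFS4ERR_DELAY;
		goto out_stateid;
	}
	/* actual return path: optional per-driver cleanup */
	if (NFS_SERVER(ino)->pnfs_curr_ld->return_range)
		NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo,
							    &args->cbl_range);
out_stateid:
	pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);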


2014-08-21 16:07:22

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 02/19] pnfs: do not pass uninitialized lsegs to ->free_lseg

Ensure the lsegs are initialized early so that we don't pass an uninitialized
one back to ->free_lseg during error processing.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/pnfs.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index a3851de..6e0fa71 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -1358,6 +1358,9 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
goto out;
}

+ init_lseg(lo, lseg);
+ lseg->pls_range = res->range;
+
spin_lock(&ino->i_lock);
if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags)) {
dprintk("%s forget reply due to recall\n", __func__);
@@ -1375,8 +1378,6 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
/* Done processing layoutget. Set the layout stateid */
pnfs_set_layout_stateid(lo, &res->stateid, false);

- init_lseg(lo, lseg);
- lseg->pls_range = res->range;
pnfs_get_lseg(lseg);
pnfs_layout_insert_lseg(lo, lseg);

--
1.9.1


2014-08-24 17:49:20

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH 03/19] pnfs: force a layout commit when encountering busy segments during recall

On 08/21/2014 07:09 PM, Christoph Hellwig wrote:
> Expedite layout recall processing by forcing a layout commit when
> we see busy segments. Without it the layout recall might have to wait
> until the VM decided to start writeback for the file, which can introduce
> long delays.
>
> Signed-off-by: Christoph Hellwig <[email protected]>

Good god. Hi Christoph,

I've been sitting on client RECALL bugs for over a year NOW. I have your
scenario, but with an actual DEAD-LOCK instead of an annoying delay.

You have the same deadlock, only it is harder for you to hit; with the
objects layout it is very easy to reproduce. (The files layout would have
the same bug if it supported segments.)

The scenario is as follows:

* Client is doing a LAYOUT_GET and is returned RECALL_CONFLICT

Comment: If your server is serious about its recalls, then all the
while a recall is in progress it will return RECALL_CONFLICT on any
segment in conflict with the RECALL.
In the objects layout this is easy to hit, because the LAYOUT_GET itself
may cause the RECALL to be issued: if the objects map grows
due to the current LAYOUT_GET, then all clients are RECALLed, including
the one issuing the call.
But this can also happen when one client causes an operation that
sends a RECALL to our client while our client is in the middle of
issuing a LAYOUT_GET.

So our client is stuck in LAYOUT_GET until the RECALL from itself is
satisfied.

* The RECALL is received but LAYOUTs are busy because they need
a LAYOUTCOMMIT. ERR_DELAY is returned.

Note the server will busy-loop on RECALLs until success (NO_MATCHING_LAYOUT).

* Ha ha. LAYOUTCOMMIT will never be called, because our client is stuck inside
LAYOUTGET, and we only call LAYOUTCOMMIT from update_inode(), but LAYOUTGET
is already inside an update_inode() and the VFS will not call update_inode()
concurrently; it will always wait for one to finish in order to notice the
inode-dirty flag and issue a new one.

So now we are dead-locked: LAYOUT_GET will wait for the server to finish the
RECALL, and will poll for the LAYOUT.
The server is stuck polling RECALL, waiting for the client to do a LO_COMMIT,
but that will never happen, because the client is waiting for the LAYOUT_GET
to return.

* The way to try and solve this is, like you did below, by pushing an immediate
LAYOUTCOMMIT from the recall thread and thus releasing the segments.

I had a slightly different solution, though

> ---
> fs/nfs/callback_proc.c | 16 +++++++++++-----
> fs/nfs/pnfs.c | 3 +++
> 2 files changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
> index 41db525..bf017b0 100644
> --- a/fs/nfs/callback_proc.c
> +++ b/fs/nfs/callback_proc.c
> @@ -164,6 +164,7 @@ static u32 initiate_file_draining(struct nfs_client *clp,
> struct inode *ino;
> struct pnfs_layout_hdr *lo;
> u32 rv = NFS4ERR_NOMATCHING_LAYOUT;
> + bool need_commit = false;
> LIST_HEAD(free_me_list);
>
> lo = get_layout_by_fh(clp, &args->cbl_fh, &args->cbl_stateid);
> @@ -172,16 +173,21 @@ static u32 initiate_file_draining(struct nfs_client *clp,
>
> ino = lo->plh_inode;
> spin_lock(&ino->i_lock);
> - if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
> - pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
> - &args->cbl_range))
> + if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags)) {
> rv = NFS4ERR_DELAY;
> - else
> - rv = NFS4ERR_NOMATCHING_LAYOUT;
> + } else if (pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
> + &args->cbl_range)) {
> + need_commit = true;
> + rv = NFS4ERR_DELAY;
> + }
> +
> pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
> spin_unlock(&ino->i_lock);
> pnfs_free_lseg_list(&free_me_list);
> pnfs_put_layout_hdr(lo);
> +
> + if (need_commit)
> + pnfs_layoutcommit_inode(ino, false);
> iput(ino);
> out:
> return rv;

I did it like below:

diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
index 41db525..59f76bf 100644
--- a/fs/nfs/callback_proc.c
+++ b/fs/nfs/callback_proc.c
@@ -171,6 +171,14 @@ static u32 initiate_file_draining(struct nfs_client *clp,
goto out;

ino = lo->plh_inode;
+
+ spin_lock(&ino->i_lock);
+ pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
+ spin_unlock(&ino->i_lock);
+
+ /* kick out any segs held by need to commit */
+ pnfs_layoutcommit_inode(ino, true);
+
spin_lock(&ino->i_lock);
if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
@@ -178,7 +186,7 @@ static u32 initiate_file_draining(struct nfs_client *clp,
rv = NFS4ERR_DELAY;
else
rv = NFS4ERR_NOMATCHING_LAYOUT;
- pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
spin_unlock(&ino->i_lock);
pnfs_free_lseg_list(&free_me_list);
pnfs_put_layout_hdr(lo);


Comments:

1. I do the pnfs_layoutcommit_inode() regardless of busy segments, because
if it has nothing to do it returns right away (see the sketch below).
Segments may be busy because of need-to-commit, but also because they are
used by in-flight IO, so busy segments are not an exact indication.
In any case we can always do pnfs_layoutcommit_inode() to kick a LAYOUTCOMMIT;
it will never do any harm.

2. This has a performance advantage: any segments held by LAYOUTCOMMIT will
now be freed, and the RECALL will return success instead of forcing the
server into one or more RECALL rounds with ERR_DELAY.
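
A minimal sketch of the early-out mentioned in point 1 (simplified; treat
the exact guard bit as an assumption, the real function in fs/nfs/pnfs.c
does more):

	int pnfs_layoutcommit_inode(struct inode *inode, bool sync)
	{
		struct nfs_inode *nfsi = NFS_I(inode);

		/* nothing was marked for LAYOUTCOMMIT -> cheap no-op */
		if (!test_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags))
			return 0;

		/* ... otherwise build and send the LAYOUTCOMMIT ... */
	}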

It is allowed by the protocol to issue a LAYOUTCOMMIT while in recall, because
the RECALL is governed by the back-channel seq_id and the LAYOUTCOMMIT by the
fore-channel seq_id, and they need not wait for each other to finish.
(Unlike, for example, LAYOUT_GET and LAYOUT_COMMIT, which are serialized by
the seq_id.)


> diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
> index 6e0fa71..242e73f 100644
> --- a/fs/nfs/pnfs.c
> +++ b/fs/nfs/pnfs.c
> @@ -604,6 +604,9 @@ pnfs_layout_free_bulk_destroy_list(struct list_head *layout_list,
> spin_unlock(&inode->i_lock);
> pnfs_free_lseg_list(&lseg_list);
> pnfs_put_layout_hdr(lo);
> +
> + if (ret)
> + pnfs_layoutcommit_inode(inode, false);
> iput(inode);
> }
> return ret;
>

With my patch I could go further, but I hit some of the other stuff you have
fixes for, with the stateids and other protocol issues.

Also with my patch I hit races in state management, because my patch waits
for the LAYOUT_COMMIT to execute synchronously from the RECALL thread; your
patch with the asynchronous LAYOUT_COMMIT has a lower chance of hitting them.
But I think Trond might have fixed those races, as I tested this code about
6 months ago.

If you are up to it, you might want to test my synchronous way and see if you
like things better. I'm testing your code as well to see how it looks.

BTW: It looks like hch-pnfs/getdeviceinfo has some of the pnfs fixes, but
hch-pnfs/blocklayout-for-3.18 has newer fixes without the getdeviceinfo
stuff. I'm testing with the older getdeviceinfo branch.

[hch-pnfs == git://git.infradead.org/users/hch/pnfs.git]

[Testing is not so easy because I need to merge in my pnfs-server as well as
this here, and I needed to do some forward porting, as the newest code was
stuck about 6 months back. That was easy; now I need to go figure out which
Ganesha to use.

The kernel pnfs server is out of the question because it is stuck on 3.12 and
will not merge very well with this here. But I'm stupid: I can just run a
3.12-based server, and this here as the client. Yes, I'll go do that tomorrow,
and see who gets stuck sooner, Ganesha or Kpnfsd.
]


Thanks for working on this
Boaz


2014-08-24 19:18:42

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 03/19] pnfs: force a layout commit when encountering busy segments during recall

On Sun, Aug 24, 2014 at 08:49:16PM +0300, Boaz Harrosh wrote:
> I've been sitting on client RECALL bugs for over a year NOW. I have your
> scenario, but with an actual DEAD-LOCK instead of an annoying delay.

A sufficiently long delay is indistinguishable from a deadlock :)

> * Client is doing a LAYOUT_GET and is returned RECALL_CONFLICT
>
> Comment: If your server is serious about its recalls, then all the
> while a recall is in progress it will return RECALL_CONFLICT on any
> segment in conflict with the RECALL.

It does.

> In the objects layout this is easy to hit, because the LAYOUT_GET itself
> may cause the RECALL to be issued: if the objects map grows
> due to the current LAYOUT_GET, then all clients are RECALLed, including
> the one issuing the call.

RFC5663 also requires recalls from layoutget in certain cases. The language
is rather vague though, and I chose to interpret it such that the client
is responsible for coherency management of its outstanding layouts, and thus
I will only recall layouts from other clientids. Without that, utter madness
would ensue with the forgetful client model that Linux uses.

> But this can also happen when one client causes an operation that
> sends a RECALL to our client while our client is in the middle of
> issuing a LAYOUT_GET.

This is something I could hit as well. Might be worth writing a reproducer
(I've been trying to play a bit with pynfs, but it still confuses the heck
out of me).

> 1. I do the pnfs_layoutcommit_inode() regardless of busy segments, because
> if it has nothing to do it returns right away. Segments may be busy
> because of need-to-commit, but also because they are used by in-flight IO,
> so busy segments are not an exact indication.
> In any case we can always do pnfs_layoutcommit_inode() to kick a LAYOUTCOMMIT;
> it will never do any harm.

Sounds fine to me.

> 2. This has a performance advantage: any segments held by LAYOUTCOMMIT will
> now be freed, and the RECALL will return success instead of forcing the
> server into one or more RECALL rounds with ERR_DELAY.

Sounds good to me as well.

> Also with my patch I hit races in state management, because my patch waits
> for the LAYOUT_COMMIT to execute synchronously from the RECALL thread; your
> patch with the asynchronous LAYOUT_COMMIT has a lower chance of hitting them.
> But I think Trond might have fixed those races, as I tested this code about
> 6 months ago.

I've been running into various stateid handling problems, some of which
could be considered races. Look at the other patches in this series - two of
those only appeared in the second iteration, as they were only causing
MDS fallbacks, but no actual data corruption.

> If you are up to it, you might want to test my synchronous way and see if you
> like things better. I'm testing your code as well to see how it looks.

Can you send me a full patch? Either against mainline or my tree is fine.

> BTW: It looks like hch-pnfs/getdeviceinfo has some of the pnfs fixes, but
> hch-pnfs/blocklayout-for-3.18 has newer fixes without the getdeviceinfo
> stuff. I'm testing with the older getdeviceinfo branch.

The getdeviceinfo branch as of now is missing two stateid handling fixes. It was
based on blocklayout-for-3.18 when I pushed it out, but I have since updated
blocklayout-for-3.18. I will push out a rebased getdeviceinfo branch later
today.


2014-08-26 19:46:52

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On Tue, Aug 26, 2014 at 2:41 PM, Trond Myklebust
<[email protected]> wrote:
> On Tue, Aug 26, 2014 at 2:19 PM, Boaz Harrosh <[email protected]> wrote:
>> On 08/26/2014 08:54 PM, Trond Myklebust wrote:
>>> On Tue, Aug 26, 2014 at 1:06 PM, Boaz Harrosh <[email protected]> wrote:
>>
>>>
>>> The deadlock occurs _if_ the above layout commit is unable to get a
>>> slot. You can't guarantee that it will, because the slot table is a
>>> finite resource and it can be exhausted
>>
>> Yes, all I have ever seen is 1 slot in any of the clients/servers I've
>> looked at, so I always assume 1 slot.
>>
>>> if you allow fore channel
>>> calls to trigger synchronous recalls on the back channel
>>
>> Beep! But this is exactly what I'm trying to say. The STD specifically
>> forbids that. The server is not allowed to wait here; it must return
>> immediately, with an error that frees the slot, and then later issue the
>> RECALL.
>>
>> This is what I said exactly three times in my mail, and what I have
>> depicted in my flow:
>> Server async operation (mandated by the STD)
>> Client back-channel can be sync with the fore channel (not mentioned by the STD)
>>
>>> that again trigger synchronous calls on the fore channel.
>>
>>
>>> You're basically saying
>>> that the client needs to guarantee that it can allocate 2 slots before
>>> it is allowed to send a layoutget just in case the server needs to
>>> recall a layout.
>>>
>>
>> No, I am not saying that; please count. Since the server is not allowed
>> a sync operation, one slot is enough and the client can do a sync lo_commit
>> while in the recall.
>>
>>> If, OTOH, the layoutcommit is asynchronous, then there is no
>>> serialisation and the back channel thread can happily reply to the
>>> layout recall even if there are no free slots in the fore channel.
>>>
>>
>> Sure that will work as well, but not optimally, and for no good reason.
>>
>> Please go back to my flow with the 3 cases. See how the server never waits
>> for anything and will always immediately reply to the layout_get.
>> Since the server is not allowed a sync operation and is mandated by the
>> RFC text not to wait, the client is allowed to do sync operations,
>> because it is enough that only one side be async.
>>
>> BTW: If what you are saying is true then there is a bug in the slot code,
>> because this patch does work, and everything flows past this situation.
>> I have a reproducer test that fails 100% of the time without this patch
>> and only fails much later at some other place, but not at this deadlock,
>> with this patch applied.
>>
>> Cheers
>> Boaz
>>
>
> Whether your particular server allows it or not is irrelevant.
> We're not coding the client to a particular implementation. None of
> the other callbacks do synchronous RPC calls, and that's very
> intentional.
>

So to return to the original question: could we please change the
layoutcommit in your patch so that it is asynchronous?

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-08-25 13:50:37

by Anna Schumaker

[permalink] [raw]
Subject: Re: [PATCH 08/19] pnfs: add return_range method

On 08/21/2014 12:09 PM, Christoph Hellwig wrote:
> If a layout driver keeps per-inode state outside of the layout segments it
> needs to be notified of any layout returns or recalls on an inode, and not
> just about the freeing of layout segments. Add a method to accomplish this,
> which will allow the block layout driver to handle the case of truncated
> and re-expanded files properly.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> fs/nfs/callback_proc.c | 6 +++++-
> fs/nfs/pnfs.c | 10 ++++++++++
> fs/nfs/pnfs.h | 3 +++
> 3 files changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
> index bf017b0..86541e0 100644
> --- a/fs/nfs/callback_proc.c
> +++ b/fs/nfs/callback_proc.c
> @@ -179,8 +179,12 @@ static u32 initiate_file_draining(struct nfs_client *clp,
> &args->cbl_range)) {
> need_commit = true;
> rv = NFS4ERR_DELAY;
> + } else {
> + if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
> + NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo,
> + &args->cbl_range);
> + }
> }
Is there a reason you're nesting the else-if here?

Anna
> -
> pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
> spin_unlock(&ino->i_lock);
> pnfs_free_lseg_list(&free_me_list);
> diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
> index bce7f1b..e481d1c 100644
> --- a/fs/nfs/pnfs.c
> +++ b/fs/nfs/pnfs.c
> @@ -857,6 +857,16 @@ _pnfs_return_layout(struct inode *ino)
> empty = list_empty(&lo->plh_segs);
> pnfs_clear_layoutcommit(ino, &tmp_list);
> pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL);
> +
> + if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
> + struct pnfs_layout_range range = {
> + .iomode = IOMODE_ANY,
> + .offset = 0,
> + .length = NFS4_MAX_UINT64,
> + };
> + NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo, &range);
> + }
> +
> /* Don't send a LAYOUTRETURN if list was initially empty */
> if (empty) {
> spin_unlock(&ino->i_lock);
> diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
> index 302b279..044c071 100644
> --- a/fs/nfs/pnfs.h
> +++ b/fs/nfs/pnfs.h
> @@ -94,6 +94,9 @@ struct pnfs_layoutdriver_type {
> struct pnfs_layout_segment * (*alloc_lseg) (struct pnfs_layout_hdr *layoutid, struct nfs4_layoutget_res *lgr, gfp_t gfp_flags);
> void (*free_lseg) (struct pnfs_layout_segment *lseg);
>
> + void (*return_range) (struct pnfs_layout_hdr *lo,
> + struct pnfs_layout_range *range);
> +
> /* test for nfs page cache coalescing */
> const struct nfs_pageio_ops *pg_read_ops;
> const struct nfs_pageio_ops *pg_write_ops;


2014-08-26 18:19:44

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On 08/26/2014 08:54 PM, Trond Myklebust wrote:
> On Tue, Aug 26, 2014 at 1:06 PM, Boaz Harrosh <[email protected]> wrote:

>
> The deadlock occurs _if_ the above layout commit is unable to get a
> slot. You can't guarantee that it will, because the slot table is a
> finite resource and it can be exhausted

Yes, all I have ever seen is 1 slot in any of the clients/servers I've
looked at, so I always assume 1 slot.

> if you allow fore channel
> calls to trigger synchronous recalls on the back channel

Beep! But this is exactly what I'm trying to say. The STD specifically
forbids that. The server is not allowed to wait here; it must return
immediately, with an error that frees the slot, and then later issue the
RECALL.

This is what I said exactly three times in my mail, and what I have
depicted in my flow:
Server async operation (mandated by the STD)
Client back-channel can be sync with the fore channel (not mentioned by the STD)

> that again trigger synchronous calls on the fore channel.


> You're basically saying
> that the client needs to guarantee that it can allocate 2 slots before
> it is allowed to send a layoutget just in case the server needs to
> recall a layout.
>

No, I am not saying that; please count. Since the server is not allowed
a sync operation, one slot is enough and the client can do a sync lo_commit
while in the recall.

> If, OTOH, the layoutcommit is asynchronous, then there is no
> serialisation and the back channel thread can happily reply to the
> layout recall even if there are no free slots in the fore channel.
>

Sure that will work as well, but not optimally, and for no good reason.

Please go back to my flow with the 3 cases. See how the server never waits
for anything and will always immediately reply to the layout_get.
Since the server is not allowed a sync operation and is mandated by the
RFC text not to wait, the client is allowed to do sync operations,
because it is enough that only one side be async.

BTW: If what you are saying is true then there is a bug in the slot code,
because this patch does work, and everything flows past this situation.
I have a reproducer test that fails 100% of the time without this patch
and only fails much later at some other place, but not at this deadlock,
with this patch applied.

Cheers
Boaz


2014-08-26 14:55:14

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On Tue, Aug 26, 2014 at 10:37 AM, Boaz Harrosh <[email protected]> wrote:
> On 08/26/2014 05:26 PM, Trond Myklebust wrote:
>> On Tue, Aug 26, 2014 at 10:10 AM, Boaz Harrosh <[email protected]> wrote:
>>> +
>>> + /* kick out any segs held by need to commit */
>>> + pnfs_layoutcommit_inode(ino, true);
>>
>> Making this call synchronous could deadlock the entire back channel.
>> Is there any reason why it can't just be made asynchonous?
>>
>
> We were just talking about that.
>
> So the logic here is that we want to save round trips and make this
> as efficient as possible, with no extra round trips for the server
> recall. A single RECALL => LAYOUT_COMMIT => LAYOUT_COMMIT_REPLY =>
> RECALL_REPLY(NO_MATCHING).
>
> Please explain the deadlock you foresee. The worst is that the
> misbehaving server will time out and, after a long time, the
> RECALL_REPLY will return with an error. But why do you say deadlock?
> How can this deadlock?
>
> Otherwise, Christoph's version of this patch does it the asynchronous
> way, which will always cause another poll of the RECALL and more delays
> for every RECALL operation, which I was trying to avoid.
>
> Thanks
> Boaz
>

The above can deadlock if there are no session slots available to send
the layoutcommit, in which case the recall won't complete, and the
layoutget won't get a reply (which would free up the slot).
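
In other words, the feared dependency cycle is (illustration only):

	layoutget (fore channel) -- holds --> last session slot
	       ^                                    |
	       | reply withheld until               | needed to send
	       | the recall completes               v
	recall completion <-- waits on -- synchronous layoutcommit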

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-08-21 16:07:42

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 12/19] pnfs/blocklayout: plug block queues

Make sure the block queue is plugged when performing pNFS blocklayout I/O.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 6c1a421..5427ae7 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -255,6 +255,7 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
struct page **pages = hdr->args.pages;
int pg_index = hdr->args.pgbase >> PAGE_CACHE_SHIFT;
const bool is_dio = (header->dreq != NULL);
+ struct blk_plug plug;

dprintk("%s enter nr_pages %u offset %lld count %u\n", __func__,
hdr->page_array.npages, f_offset,
@@ -266,6 +267,8 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
par->pnfs_callback = bl_end_par_io_read;
/* At this point, we can no longer jump to use_mds */

+ blk_start_plug(&plug);
+
isect = (sector_t) (f_offset >> SECTOR_SHIFT);
/* Code assumes extents are page-aligned */
for (i = pg_index; i < hdr->page_array.npages; i++) {
@@ -342,6 +345,7 @@ out:
bl_put_extent(be);
bl_put_extent(cow_read);
bl_submit_bio(READ, bio);
+ blk_finish_plug(&plug);
put_parallel(par);
return PNFS_ATTEMPTED;

@@ -688,9 +692,12 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
u64 temp;
int npg_per_block =
NFS_SERVER(header->inode)->pnfs_blksize >> PAGE_CACHE_SHIFT;
+ struct blk_plug plug;

dprintk("%s enter, %Zu@%lld\n", __func__, count, offset);

+ blk_start_plug(&plug);
+
if (header->dreq != NULL &&
(!IS_ALIGNED(offset, NFS_SERVER(header->inode)->pnfs_blksize) ||
!IS_ALIGNED(count, NFS_SERVER(header->inode)->pnfs_blksize))) {
@@ -894,9 +901,11 @@ out:
bl_put_extent(be);
bl_put_extent(cow_read);
bl_submit_bio(WRITE, bio);
+ blk_finish_plug(&plug);
put_parallel(par);
return PNFS_ATTEMPTED;
out_mds:
+ blk_finish_plug(&plug);
bl_put_extent(be);
bl_put_extent(cow_read);
kfree(par);
--
1.9.1


2014-08-21 16:07:44

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 13/19] pnfs/blocklayout: correctly decrement extent length

When we do non-page-sized reads we can underflow the extent_length variable
and read incorrect data. Fix the extent_length calculation and change to
defensive <= checks for the extent length in the read and write paths.
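
To illustrate the underflow, a small standalone sketch with assumed
numbers (512-byte sectors and 4K pages, so a full page spans 8 sectors):

	#include <stdio.h>
	#include <stdint.h>

	/* Illustration only: a 1K direct-I/O read at the tail of an
	 * extent that has just 2 sectors left.
	 */
	int main(void)
	{
		uint64_t old_len = 2, new_len = 2; /* sectors left in extent */
		unsigned int pg_len = 1024;        /* bytes actually read */

		/* old code: always charge a whole page -> wraps the
		 * unsigned counter, so the "extent used up" test never
		 * fires again
		 */
		old_len -= 8;

		/* fixed code: charge only what was consumed -> hits zero
		 * and the next loop iteration looks up a fresh extent
		 */
		new_len -= pg_len >> 9;

		printf("old: %llu, new: %llu\n",
		       (unsigned long long)old_len,
		       (unsigned long long)new_len);
		return 0;
	}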

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 5427ae7..87a633d 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -272,7 +272,7 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
isect = (sector_t) (f_offset >> SECTOR_SHIFT);
/* Code assumes extents are page-aligned */
for (i = pg_index; i < hdr->page_array.npages; i++) {
- if (!extent_length) {
+ if (extent_length <= 0) {
/* We've used up the previous extent */
bl_put_extent(be);
bl_put_extent(cow_read);
@@ -303,6 +303,7 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
f_offset += pg_len;
bytes_left -= pg_len;
isect += (pg_offset >> SECTOR_SHIFT);
+ extent_length -= (pg_offset >> SECTOR_SHIFT);
} else {
pg_offset = 0;
pg_len = PAGE_CACHE_SIZE;
@@ -333,7 +334,7 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
}
}
isect += (pg_len >> SECTOR_SHIFT);
- extent_length -= PAGE_CACHE_SECTORS;
+ extent_length -= (pg_len >> SECTOR_SHIFT);
}
if ((isect << SECTOR_SHIFT) >= header->inode->i_size) {
hdr->res.eof = 1;
@@ -797,7 +798,7 @@ next_page:
/* Middle pages */
pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
for (i = pg_index; i < header->page_array.npages; i++) {
- if (!extent_length) {
+ if (extent_length <= 0) {
/* We've used up the previous extent */
bl_put_extent(be);
bl_put_extent(cow_read);
--
1.9.1


2014-08-26 16:57:02

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On 08/26/2014 06:36 PM, Trond Myklebust wrote:
> On Tue, Aug 26, 2014 at 11:24 AM, Matt W. Benjamin <[email protected]> wrote:
>> IIUC, the problem is the forechannel slot count, since the call you want to make synchronously is on the forechannel?


Matt, no top-posting on a Linux mailing list ;-)

> Yep. layoutcommit will be sent on the fore channel, which is why it
> can deadlock with the initial layoutget (or whatever operation that
> triggered the layout recall).

Trond, you said below:
> The above can deadlock if there are no session slots available to send
> the layoutcommit, in which case the recall won't complete, and the
> layoutget won't get a reply (which would free up the slot).

Why would the layoutget not get a reply?
This is how it goes with both the Ganesha server and knfsd, last I tested.

[1]
The case where LAYOUT_GET causes a LAYOUT_RECALL (including the lo_commit):

client Server comments
~~~~~~ ~~~~~~ ~~~~~~~~
LAYOUT_GET ==>
<== LAYOUT_GET_REPLY(ERR_RECALL_CONFLICT)
<--------- fore-channel is free
<== RECALL
LAYOUT_COMMIT ==>
<== LAYOUT_COMMIT_REPLY
<--------- fore-channel is free
RECALL_REPLY(NO_MATCHING) =>
<--------- back-channel is free

Note that in this case the server is to send the RECALL only after
the error reply to LAYOUT_GET; specifically, it is not allowed to get stuck
inside LAYOUT_GET and wait for the RECALL. (mandated by the STD)

[2]
A LAYOUT_GET sent while a RECALL is already on the wire:
client Server comments
~~~~~~ ~~~~~~ ~~~~~~~~
<== RECALL
LAYOUT_GET ==>
<== LAYOUT_GET_REPLY(ERR_RECALL_CONFLICT)
<--------- fore-channel is free
LAYOUT_COMMIT ==>
<== LAYOUT_COMMIT_REPLY
<--------- fore-channel is free
RECALL_REPLY(NO_MATCHING) =>
<--------- back-channel is free


[3]
Or the worst case, where lo_commit needs to wait for the channel, similar
to [2] above:

client Server comments
~~~~~~ ~~~~~~ ~~~~~~~~
<== RECALL
LAYOUT_GET ==>
initiate_lo_commit ==> slot is taken, needs to wait

<== LAYOUT_GET_REPLY(ERR_RECALL_CONFLICT)
<--------- fore-channel is free
LAYOUT_COMMIT ==> slot is now free, lo_commit goes through
<== LAYOUT_COMMIT_REPLY
<--------- fore-channel is free
RECALL_REPLY(NO_MATCHING) =>
<--------- back-channel is free

So the most important thing is that the server must not get stuck in lo_get,
and since there is a slot for each channel, the lo_commit can be sent
from within the recall.

What am I missing?

Thanks
Boaz


2014-08-21 16:08:03

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 19/19] pnfs/blocklayout: allocate separate pages for the layoutcommit payload

Instead of overflowing the XDR send buffer with our extent list, allocate
pages and pre-encode the layoutupdate payload into them. We optimistically
allocate a single page using alloc_page and only switch to vmalloc when we
have more extents outstanding. Currently there is only a single test case
(xfstests generic/113) that can reproduce extent lists large enough for
this to occur.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 15 ++----
fs/nfs/blocklayout/blocklayout.h | 8 ++--
fs/nfs/blocklayout/extent_tree.c | 100 +++++++++++++++++++++++++++++++--------
3 files changed, 90 insertions(+), 33 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index d5a2b87..fdc065c 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -498,21 +498,16 @@ bl_return_range(struct pnfs_layout_hdr *lo,
err = ext_tree_remove(bl, range->iomode & IOMODE_RW, offset, end);
}

-static void
-bl_encode_layoutcommit(struct pnfs_layout_hdr *lo, struct xdr_stream *xdr,
- const struct nfs4_layoutcommit_args *arg)
+static int
+bl_prepare_layoutcommit(struct nfs4_layoutcommit_args *arg)
{
- dprintk("%s enter\n", __func__);
- ext_tree_encode_commit(BLK_LO2EXT(lo), xdr);
+ return ext_tree_prepare_commit(arg);
}

static void
bl_cleanup_layoutcommit(struct nfs4_layoutcommit_data *lcdata)
{
- struct pnfs_layout_hdr *lo = NFS_I(lcdata->args.inode)->layout;
-
- dprintk("%s enter\n", __func__);
- ext_tree_mark_committed(BLK_LO2EXT(lo), lcdata->res.status);
+ ext_tree_mark_committed(&lcdata->args, lcdata->res.status);
}

static void free_blk_mountid(struct block_mount_id *mid)
@@ -808,7 +803,7 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
.alloc_lseg = bl_alloc_lseg,
.free_lseg = bl_free_lseg,
.return_range = bl_return_range,
- .encode_layoutcommit = bl_encode_layoutcommit,
+ .prepare_layoutcommit = bl_prepare_layoutcommit,
.cleanup_layoutcommit = bl_cleanup_layoutcommit,
.set_layoutdriver = bl_set_layoutdriver,
.clear_layoutdriver = bl_clear_layoutdriver,
diff --git a/fs/nfs/blocklayout/blocklayout.h b/fs/nfs/blocklayout/blocklayout.h
index b4f66d8..6f3a550 100644
--- a/fs/nfs/blocklayout/blocklayout.h
+++ b/fs/nfs/blocklayout/blocklayout.h
@@ -80,6 +80,9 @@ struct pnfs_block_extent {
unsigned int be_tag;
};

+/* on the wire size of the extent */
+#define BL_EXTENT_SIZE (7 * sizeof(__be32) + NFS4_DEVICEID4_SIZE)
+
struct pnfs_block_layout {
struct pnfs_layout_hdr bl_layout;
struct rb_root bl_ext_rw;
@@ -138,8 +141,7 @@ int ext_tree_mark_written(struct pnfs_block_layout *bl, sector_t start,
sector_t len);
bool ext_tree_lookup(struct pnfs_block_layout *bl, sector_t isect,
struct pnfs_block_extent *ret, bool rw);
-int ext_tree_encode_commit(struct pnfs_block_layout *bl,
- struct xdr_stream *xdr);
-void ext_tree_mark_committed(struct pnfs_block_layout *bl, int status);
+int ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg);
+void ext_tree_mark_committed(struct nfs4_layoutcommit_args *arg, int status);

#endif /* FS_NFS_NFS4BLOCKLAYOUT_H */
diff --git a/fs/nfs/blocklayout/extent_tree.c b/fs/nfs/blocklayout/extent_tree.c
index c7dacfa..acdfcb9 100644
--- a/fs/nfs/blocklayout/extent_tree.c
+++ b/fs/nfs/blocklayout/extent_tree.c
@@ -465,19 +465,25 @@ out:
return err;
}

-int
-ext_tree_encode_commit(struct pnfs_block_layout *bl, struct xdr_stream *xdr)
+static void ext_tree_free_commitdata(struct nfs4_layoutcommit_args *arg,
+ size_t buffer_size)
{
- struct pnfs_block_extent *be;
- unsigned int count = 0;
- __be32 *p, *xdr_start;
- int ret = 0;
+ if (arg->layoutupdate_pages != &arg->layoutupdate_page) {
+ int nr_pages = DIV_ROUND_UP(buffer_size, PAGE_SIZE), i;

- dprintk("%s enter\n", __func__);
+ for (i = 0; i < nr_pages; i++)
+ put_page(arg->layoutupdate_pages[i]);
+ kfree(arg->layoutupdate_pages);
+ } else {
+ put_page(arg->layoutupdate_page);
+ }
+}

- xdr_start = xdr_reserve_space(xdr, 8);
- if (!xdr_start)
- return -ENOSPC;
+static int ext_tree_encode_commit(struct pnfs_block_layout *bl, __be32 *p,
+ size_t buffer_size, size_t *count)
+{
+ struct pnfs_block_extent *be;
+ int ret = 0;

spin_lock(&bl->bl_ext_lock);
for (be = ext_tree_first(&bl->bl_ext_rw); be; be = ext_tree_next(be)) {
@@ -485,12 +491,11 @@ ext_tree_encode_commit(struct pnfs_block_layout *bl, struct xdr_stream *xdr)
be->be_tag != EXTENT_WRITTEN)
continue;

- p = xdr_reserve_space(xdr, 7 * sizeof(__be32) +
- NFS4_DEVICEID4_SIZE);
- if (!p) {
- printk("%s: out of space for extent list\n", __func__);
+ (*count)++;
+ if (*count * BL_EXTENT_SIZE > buffer_size) {
+ /* keep counting.. */
ret = -ENOSPC;
- break;
+ continue;
}

p = xdr_encode_opaque_fixed(p, be->be_devid.data,
@@ -501,25 +506,80 @@ ext_tree_encode_commit(struct pnfs_block_layout *bl, struct xdr_stream *xdr)
*p++ = cpu_to_be32(PNFS_BLOCK_READWRITE_DATA);

be->be_tag = EXTENT_COMMITTING;
- count++;
}
spin_unlock(&bl->bl_ext_lock);

- xdr_start[0] = cpu_to_be32((xdr->p - xdr_start - 1) * 4);
- xdr_start[1] = cpu_to_be32(count);
+ return ret;
+}
+
+int
+ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+{
+ struct pnfs_block_layout *bl = BLK_LO2EXT(NFS_I(arg->inode)->layout);
+ size_t count = 0, buffer_size = PAGE_SIZE;
+ __be32 *start_p;
+ int ret;
+
+ dprintk("%s enter\n", __func__);
+
+ arg->layoutupdate_page = alloc_page(GFP_NOFS);
+ if (!arg->layoutupdate_page)
+ return -ENOMEM;
+ start_p = page_address(arg->layoutupdate_page);
+ arg->layoutupdate_pages = &arg->layoutupdate_page;
+
+retry:
+ ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size, &count);
+ if (unlikely(ret)) {
+ ext_tree_free_commitdata(arg, buffer_size);
+
+ buffer_size = sizeof(__be32) + BL_EXTENT_SIZE * count;
+ count = 0;
+
+ arg->layoutupdate_pages =
+ kcalloc(DIV_ROUND_UP(buffer_size, PAGE_SIZE),
+ sizeof(struct page *), GFP_NOFS);
+ if (!arg->layoutupdate_pages)
+ return -ENOMEM;
+
+ start_p = __vmalloc(buffer_size, GFP_NOFS, PAGE_KERNEL);
+ if (!start_p) {
+ kfree(arg->layoutupdate_pages);
+ return -ENOMEM;
+ }
+
+ goto retry;
+ }
+
+ *start_p = cpu_to_be32(count);
+ arg->layoutupdate_len = sizeof(__be32) + BL_EXTENT_SIZE * count;
+
+ if (unlikely(arg->layoutupdate_pages != &arg->layoutupdate_page)) {
+ __be32 *p = start_p;
+ int i = 0;
+
+ for (p = start_p;
+ p < start_p + arg->layoutupdate_len;
+ p += PAGE_SIZE) {
+ arg->layoutupdate_pages[i++] = vmalloc_to_page(p);
+ }
+ }

dprintk("%s found %i ranges\n", __func__, count);
- return ret;
+ return 0;
}

void
-ext_tree_mark_committed(struct pnfs_block_layout *bl, int status)
+ext_tree_mark_committed(struct nfs4_layoutcommit_args *arg, int status)
{
+ struct pnfs_block_layout *bl = BLK_LO2EXT(NFS_I(arg->inode)->layout);
struct rb_root *root = &bl->bl_ext_rw;
struct pnfs_block_extent *be;

dprintk("%s status %d\n", __func__, status);

+ ext_tree_free_commitdata(arg, arg->layoutupdate_len);
+
spin_lock(&bl->bl_ext_lock);
for (be = ext_tree_first(root); be; be = ext_tree_next(be)) {
if (be->be_state != PNFS_BLOCK_INVALID_DATA ||
--
1.9.1
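
The retry logic above deserves a note: ext_tree_encode_commit() keeps
counting extents even after the buffer is exhausted, so the second pass
can size the buffer exactly. A condensed sketch of the pattern, with
encode() and free_buf() as placeholder names rather than functions from
the patch:

	size_t buffer_size = PAGE_SIZE, count = 0;
	struct page *page = alloc_page(GFP_NOFS);
	__be32 *start_p;
	int ret;

	if (!page)
		return -ENOMEM;
	start_p = page_address(page);
retry:
	ret = encode(start_p, buffer_size, &count);	/* placeholder encoder */
	if (ret == -ENOSPC) {
		/* the encoder kept counting, so count is now exact */
		free_buf(start_p, buffer_size);		/* placeholder free */
		buffer_size = sizeof(__be32) + BL_EXTENT_SIZE * count;
		count = 0;
		start_p = __vmalloc(buffer_size, GFP_NOFS, PAGE_KERNEL);
		if (!start_p)
			return -ENOMEM;
		goto retry;
	}

A third pass is only possible if more extents were dirtied between the
two attempts, since the recomputed size is exact.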


2014-08-26 14:37:40

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On 08/26/2014 05:26 PM, Trond Myklebust wrote:
> On Tue, Aug 26, 2014 at 10:10 AM, Boaz Harrosh <[email protected]> wrote:
>> +
>> + /* kick out any segs held by need to commit */
>> + pnfs_layoutcommit_inode(ino, true);
>
> Making this call synchronous could deadlock the entire back channel.
> Is there any reason why it can't just be made asynchonous?
>

We were just talking about that.

So the logic here is that we want to save round trips and make this
as efficient as possible, with no extra round trips for the server
recall: a single RECALL => LAYOUT_COMMIT => LAYOUT_COMMIT_REPLY =>
RECALL_REPLY(NO_MATCHING).

Please explain the deadlock you foresee. The worst case is that a
misbehaving server will time out, and after a long delay the
RECALL_REPLY will return with an error. But why do you say deadlock?
How can this deadlock?

Otherwise, Christoph's version of this patch does it the asynchronous
way, which will always cause another poll of the RECALL and more delays
for every RECALL operation, which is what I was trying to avoid.

Thanks
Boaz


2014-08-21 16:07:38

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 10/19] pnfs/blocklayout: reject pnfs blocksize larger than page size

The Linux VM subsystem can't properly support block sizes larger than the
page size for block based filesystems. While this can be hacked around
to some extent for simple filesystems, the read-modify-write cycles
required for pnfs block invalid extents are extremely deadlock prone
when operating on multiple pages. Reject this case early on instead
of pretending to support it (badly).

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index cbb1797..6c1a421 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -1115,6 +1115,12 @@ bl_set_layoutdriver(struct nfs_server *server, const struct nfs_fh *fh)
dprintk("%s Server did not return blksize\n", __func__);
return -EINVAL;
}
+ if (server->pnfs_blksize > PAGE_SIZE) {
+ printk(KERN_ERR "%s: pNFS blksize %d not supported.\n",
+ __func__, server->pnfs_blksize);
+ return -EINVAL;
+ }
+
b_mt_id = kzalloc(sizeof(struct block_mount_id), GFP_NOFS);
if (!b_mt_id) {
status = -ENOMEM;
--
1.9.1


2014-08-26 17:06:38

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On 08/26/2014 07:59 PM, Trond Myklebust wrote:
> On Tue, Aug 26, 2014 at 12:56 PM, Boaz Harrosh <[email protected]> wrote:
>> On 08/26/2014 06:36 PM, Trond Myklebust wrote:
>>> On Tue, Aug 26, 2014 at 11:24 AM, Matt W. Benjamin <[email protected]> wrote:
>>>> IIUC, the problem is the forechannel slot count, since the call you want to make synchronously is on the forechannel?
>>
>>
>> Matt, no top-posting on a Linux mailing list ;-)
>>
>>> Yep. layoutcommit will be sent on the fore channel, which is why it
>>> can deadlock with the initial layoutget (or whatever operation that
>>> triggered the layout recall).
>>
>> Trond you said below:
>>> The above can deadlock if there are no session slots available to send
>>> the layoutcommit, in which case the recall won't complete, and the
>>> layoutget won't get a reply (which would free up the slot).
>>
>> Why would the layoutget not get a reply?
>> This is how it goes with both the Ganesha server and knfsd, last I tested.
>>
>> [1]
>> The LAYOUT_GET cause LAYOUT_RECALL case: (including the lo_commit)
>>
>> client Server comments
>> ~~~~~~ ~~~~~~ ~~~~~~~~
>> LAYOUT_GET ==>
>>                               <== LAYOUT_GET_REPLY(ERR_RECALL_CONFLICT)
>> <--------- fore-channel is free
>> <== RECALL
>> LAYOUT_COMMIT ==>
>>                               <== LAYOUT_COMMIT_REPLY
>> <--------- fore-channel is free
>
> Beep! No free slots, so this hangs.
>

Beep! does not do a very good job of explaining. Sorry.

What do you mean? Which slot? Which channel? Just above your text it says
"fore-channel is free", so are you saying it is not free? Why not?
Please use more than one line of text to explain. It might be clear to
you, but it is not to me.

>> RECALL_REPLY(NO_MATCHING) =>
>> <--------- back-channel is free

Thanks
Boaz


2014-08-26 15:36:18

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On Tue, Aug 26, 2014 at 11:24 AM, Matt W. Benjamin <[email protected]> wrote:
> IIUC, the problem is the forechannel slot count, since the call you want to make synchronously is on the forechannel?

Yep. layoutcommit will be sent on the fore channel, which is why it
can deadlock with the initial layoutget (or whatever operation
triggered the layout recall).


> Matt
>
> ----- "Boaz Harrosh" <[email protected]> wrote:
>
>> On 08/26/2014 05:55 PM, Trond Myklebust wrote:
>> > On Tue, Aug 26, 2014 at 10:37 AM, Boaz Harrosh <[email protected]>
>> wrote:
>> >
>> > The above can deadlock if there are no session slots available to
>> send
>> > the layoutcommit, in which case the recall won't complete, and the
>> > layoutget won't get a reply (which would free up the slot).
>> >
>>
>> What? the back-channel and the fore-channel do not use the same
>> slots. these are two different slots, No?
>>
>> Matt, Adam you need to chip in here.
>>
>> If it is as you say, then yes it must be as Christoph wrote it.
>>
>> And the Ganesha server must be fixed because it has a slot system per
>> channel.
>>
>> Thanks
>> Boaz
>
> --
> Matt Benjamin
> The Linux Box
> 206 South Fifth Ave. Suite 150
> Ann Arbor, MI 48104
>
> http://linuxbox.com
>
> tel. 734-761-4689
> fax. 734-769-8938
> cel. 734-216-5309



--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-08-25 14:17:30

by Anna Schumaker

[permalink] [raw]
Subject: Re: [PATCH 08/19] pnfs: add return_range method

On 08/25/2014 10:09 AM, Christoph Hellwig wrote:
> On Mon, Aug 25, 2014 at 09:50:34AM -0400, Anna Schumaker wrote:
>>> + } else {
>>> + if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
>>> + NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo,
>>> + &args->cbl_range);
>>> + }
>>> }
>> Is there a reason you're nesting the else-if here?
>
> To capture the intent: the first two clauses find excuses why we can't return
> quite yet, while this if is for an optional feature in the actual return
> path. If I wasn't updating but newly writing the function I'd actually
> do something like:

I'm a fan of nice looking code, and I like what you have below better. Can you arrange things to end up in this state? Or maybe send a cleanup patch after?

Anna

>
> ...
>
> rv = NFS4ERR_DELAY;
> if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags))
> goto out_set_stateid;
>
> if (pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
> &args->cbl_range)) {
> need_commit = true;
> goto out_set_stateid;
> }
>
> rv = NFS4ERR_NOMATCHING_LAYOUT;
> if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
> NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo,
> &args->cbl_range);
> }
>
> out_set_stateid:
> ...
>


2014-08-26 18:34:09

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH] pnfs: Kick a pnfs_layoutcommit_inode on recall

On 08/26/2014 09:19 PM, Boaz Harrosh wrote:
<>
> Beep! But this is exactly what I'm trying to say. The STD specifically
> forbids that. The server is not allowed to wait here; it must return
> immediately, with an error that frees the slot, and then later issue the
> RECALL.
>
> This is what I said exactly three times in my mail, and what I have
> depicted in my flow:
> Server: async operation (mandated by the STD)
> Client: back-channel can be sync with the fore channel (not mentioned by the STD)
>


BTW, both Ganesha and kpnfsd behave the same: for a recall issued while in
layout_get, the layout_get reply will be sent first and then the recall
will be sent on the wire.

If the recall was sent before the receipt of the layout_get, then an
error/success is returned immediately, without ever waiting for the
recall_reply to return.

Thanks
Boaz


2014-08-21 16:13:39

by Christoph Hellwig

[permalink] [raw]
Subject: pnfs block layout driver fixes V2

For some reason the subject of the mail got lost, fixed it.

On Thu, Aug 21, 2014 at 11:09:16AM -0500, Christoph Hellwig wrote:
> This series fixes a wide range of issues with the pnfs block layout driver.
>
> Before this we were seeing frequent silent data corruption, softlockups and
> kernel crashes when running both user applications and test cases like xfstests.
>
> After this rewrite of the I/O path we've sorted out all issues under normal
> operations, although error handling in the block layout driver and its
> interaction with the core nfs and pnfs code still needs further work.
>
> It was pointed out that the requirement for blocksizes smaller or equal to
> the page size conflicts with existing servers. From the state of the code
> I'm very sure it was never used for real against those servers, but as
> soon as someone signs up to testing and fix support for these servers I'm
> happy to land a helping hand, although rock solid support probably is
> imposible.
>
> This work was sponsored by NetApp, Inc.
>
> Changes since V1:
> - added two more layout stateid handling fixes
> - change the layoutget path so that the layout driver is responsible for
> freeing the spliced in payload.
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
---end quoted text---

2014-09-09 04:12:01

by Trond Myklebust

[permalink] [raw]
Subject: Re: pnfs block layout driver fixes V2

On Thu, Aug 21, 2014 at 9:13 AM, Christoph Hellwig <[email protected]> wrote:
> For some reason the subject of the mail got lost, fixed it.
>
> On Thu, Aug 21, 2014 at 11:09:16AM -0500, Christoph Hellwig wrote:
>> This series fixes a wide range of issues with the pnfs block layout driver.
>>
>> Before this we were seeing frequent silent data corruption, softlockups and
>> kernel crashes when running both user applications and test cases like xfstests.
>>
>> After this rewrite of the I/O path we've sorted out all issues under normal
>> operations, although error handling in the block layout driver and its
>> interaction with the core nfs and pnfs code still needs further work.
>>
>> It was pointed out that the requirement for blocksizes smaller or equal to
>> the page size conflicts with existing servers. From the state of the code
>> I'm very sure it was never used for real against those servers, but as
>> soon as someone signs up to testing and fix support for these servers I'm
>> happy to land a helping hand, although rock solid support probably is
>> imposible.
>>
>> This work was sponsored by NetApp, Inc.
>>
>> Changes since V1:
>> - added two more layout stateid handling fixes
>> - change the layoutget path so that the layout driver is responsible for
>> freeing the spliced in payload.
>>

Applying all patches except the ones for which I explicitly sent you
an email with questions and/or a notification of a dependency on a
questioned patch.


--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 03:50:14

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 07/19] pnfs: add flag to force read-modify-write in ->write_begin

On Thu, Aug 21, 2014 at 9:09 AM, Christoph Hellwig <[email protected]> wrote:
> Like all block based filesystems, the pNFS block layout driver can't read
> or write at a byte granularity and thus has to perform read-modify-write
> cycles on writes smaller than this granularity.
>
> Add a flag so that the core NFS code always reads a whole page when
> starting a smaller write, so that we can do it in the place where the VFS
> expects it instead of doing in very deadlock prone way in the writeback
> handler.
>
> Note that in theory we could do less than page size reads here for disks
> that have a smaller sector size and are served by a server with a smaller
> pnfs block size. But so far that doesn't seem like a worthwhile
> optimization.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> fs/nfs/file.c | 9 +++++++++
> fs/nfs/pnfs.h | 1 +
> 2 files changed, 10 insertions(+)
>
> diff --git a/fs/nfs/file.c b/fs/nfs/file.c
> index 524dd80..58566e9 100644
> --- a/fs/nfs/file.c
> +++ b/fs/nfs/file.c
> @@ -36,6 +36,7 @@
> #include "internal.h"
> #include "iostat.h"
> #include "fscache.h"
> +#include "pnfs.h"
>
> #include "nfstrace.h"
>
> @@ -323,10 +324,18 @@ nfs_file_fsync(struct file *file, loff_t start, loff_t end, int datasync)
> static int nfs_want_read_modify_write(struct file *file, struct page *page,
> loff_t pos, unsigned len)
> {
> + struct nfs_server *server = NFS_SERVER(file->f_mapping->host);
> unsigned int pglen = nfs_page_length(page);
> unsigned int offset = pos & (PAGE_CACHE_SIZE - 1);
> unsigned int end = offset + len;
>
> + if (server->pnfs_curr_ld &&
> + (server->pnfs_curr_ld->flags & PNFS_READ_WHOLE_PAGE)) {
> + if (!PageUptodate(page))
> + return 1;
> + return 0;
> + }
> +
> if ((file->f_mode & FMODE_READ) && /* open for read? */
> !PageUptodate(page) && /* Uptodate? */
> !PagePrivate(page) && /* i/o request already? */
> diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
> index 16cd14f..302b279 100644
> --- a/fs/nfs/pnfs.h
> +++ b/fs/nfs/pnfs.h
> @@ -72,6 +72,7 @@ enum layoutdriver_policy_flags {
> /* Should the pNFS client commit and return the layout upon a setattr */
> PNFS_LAYOUTRET_ON_SETATTR = 1 << 0,
> PNFS_LAYOUTRET_ON_ERROR = 1 << 1,
> + PNFS_READ_WHOLE_PAGE = 1 << 2,
> };
>
> struct nfs4_deviceid_node;
>

This doesn't look as if it will compile without IS_ENABLED(CONFIG_NFS_V4).

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 05:53:10

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 07/19] pnfs: add flag to force read-modify-write in ->write_begin

On Mon, Sep 08, 2014 at 08:50:13PM -0700, Trond Myklebust wrote:
> This doesn't look as if it will compile without IS_ENABLED(CONFIG_NFS_V4).

Indeed it doesn't. I've added an inline wrapper similar to the other
PNFS layoutdriver checks.
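
Such a wrapper might look like the sketch below; the helper name and
the exact CONFIG guard are assumptions modeled on the existing pnfs
layoutdriver checks in fs/nfs/pnfs.h:

#if IS_ENABLED(CONFIG_NFS_V4)
/* True if the current layout driver wants whole-page reads for RMW. */
static inline bool pnfs_ld_read_whole_page(struct inode *inode)
{
	struct nfs_server *server = NFS_SERVER(inode);

	return server->pnfs_curr_ld &&
		(server->pnfs_curr_ld->flags & PNFS_READ_WHOLE_PAGE);
}
#else
static inline bool pnfs_ld_read_whole_page(struct inode *inode)
{
	return false;
}
#endif

nfs_want_read_modify_write() can then simply test
pnfs_ld_read_whole_page(file->f_mapping->host) without dereferencing
pnfs_curr_ld directly.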


2014-09-09 04:43:19

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 14/19] pnfs/blocklayout: remove read-modify-write handling in bl_write_pagelist

On Thu, Aug 21, 2014 at 9:09 AM, Christoph Hellwig <[email protected]> wrote:
> Use the new PNFS_READ_WHOLE_PAGE flag to offload read-modify-write
> handling to core nfs code, and remove a huge chunk of deadlock prone
> mess from the block layout writeback path.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---

This also depends on "[PATCH 07/19] pnfs: add flag to force
read-modify-write in ->write_begin". Holding pending resolution.

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 05:49:32

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 03/19] pnfs: force a layout commit when encountering busy segments during recall

On Mon, Sep 08, 2014 at 05:37:42PM -0700, Trond Myklebust wrote:
> Why wouldn't it make more sense to call pnfs_layoutcommit_inode()
> unconditionally before the call to pnfs_mark_matching_lsegs_invalid()?

It would minimally reduce the latency, but otherwise not be very different.
The downside is that we'll now need two i_lock round trips per recall.

But if this is your preference I can easily fix it up.

2014-09-09 04:03:32

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 17/19] pnfs/blocklayout: implement the return_range method

On Thu, Aug 21, 2014 at 9:09 AM, Christoph Hellwig <[email protected]> wrote:
> This allows removing extents from the extent tree, especially on truncate
> operations, and thus fixes reads from truncated and re-extended files that
> previously returned stale data.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> fs/nfs/blocklayout/blocklayout.c | 30 ++++++++++++++++++++++++++++++
> 1 file changed, 30 insertions(+)
>
> diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
> index 42b6f9c..a7524c4 100644
> --- a/fs/nfs/blocklayout/blocklayout.c
> +++ b/fs/nfs/blocklayout/blocklayout.c
> @@ -470,6 +470,35 @@ static struct pnfs_layout_segment *bl_alloc_lseg(struct pnfs_layout_hdr *lo,
> }
>
> static void
> +bl_return_range(struct pnfs_layout_hdr *lo,
> + struct pnfs_layout_range *range)
> +{
> + struct pnfs_block_layout *bl = BLK_LO2EXT(lo);
> + sector_t offset = range->offset >> SECTOR_SHIFT, end;
> + int err;
> +
> + if (range->offset % 8) {
> + dprintk("%s: offset %lld not block size aligned\n",
> + __func__, range->offset);
> + return;
> + }
> +
> + if (range->length != NFS4_MAX_UINT64) {
> + if (range->length % 8) {
> + dprintk("%s: length %lld not block size aligned\n",
> + __func__, range->length);
> + return;
> + }
> +
> + end = offset + (range->length >> SECTOR_SHIFT);
> + } else {
> + end = round_down(NFS4_MAX_UINT64, PAGE_SIZE);
> + }
> +
> + err = ext_tree_remove(bl, range->iomode & IOMODE_RW, offset, end);
> +}
> +
> +static void
> bl_encode_layoutcommit(struct pnfs_layout_hdr *lo, struct xdr_stream *xdr,
> const struct nfs4_layoutcommit_args *arg)
> {
> @@ -777,6 +806,7 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
> .free_layout_hdr = bl_free_layout_hdr,
> .alloc_lseg = bl_alloc_lseg,
> .free_lseg = bl_free_lseg,
> + .return_range = bl_return_range,
> .encode_layoutcommit = bl_encode_layoutcommit,
> .cleanup_layoutcommit = bl_cleanup_layoutcommit,
> .set_layoutdriver = bl_set_layoutdriver,
> --
> 1.9.1
>

Again, holding until we resolve [3/19] and apply the patch that
introduces ->return_range()

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 00:37:44

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 03/19] pnfs: force a layout commit when encountering busy segments during recall

On Thu, Aug 21, 2014 at 9:09 AM, Christoph Hellwig <[email protected]> wrote:
> Expedite layout recall processing by forcing a layout commit when
> we see busy segments. Without it the layout recall might have to wait
> until the VM decided to start writeback for the file, which can introduce
> long delays.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> fs/nfs/callback_proc.c | 16 +++++++++++-----
> fs/nfs/pnfs.c | 3 +++
> 2 files changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
> index 41db525..bf017b0 100644
> --- a/fs/nfs/callback_proc.c
> +++ b/fs/nfs/callback_proc.c
> @@ -164,6 +164,7 @@ static u32 initiate_file_draining(struct nfs_client *clp,
> struct inode *ino;
> struct pnfs_layout_hdr *lo;
> u32 rv = NFS4ERR_NOMATCHING_LAYOUT;
> + bool need_commit = false;
> LIST_HEAD(free_me_list);
>
> lo = get_layout_by_fh(clp, &args->cbl_fh, &args->cbl_stateid);
> @@ -172,16 +173,21 @@ static u32 initiate_file_draining(struct nfs_client *clp,
>
> ino = lo->plh_inode;
> spin_lock(&ino->i_lock);
> - if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
> - pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
> - &args->cbl_range))
> + if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags)) {
> rv = NFS4ERR_DELAY;
> - else
> - rv = NFS4ERR_NOMATCHING_LAYOUT;
> + } else if (pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
> + &args->cbl_range)) {
> + need_commit = true;
> + rv = NFS4ERR_DELAY;
> + }
> +
> pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
> spin_unlock(&ino->i_lock);
> pnfs_free_lseg_list(&free_me_list);
> pnfs_put_layout_hdr(lo);
> +
> + if (need_commit)
> + pnfs_layoutcommit_inode(ino, false);

Why wouldn't it make more sense to call pnfs_layoutcommit_inode()
unconditionally before the call to pnfs_mark_matching_lsegs_invalid()?

> iput(ino);
> out:
> return rv;
> diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
> index 6e0fa71..242e73f 100644
> --- a/fs/nfs/pnfs.c
> +++ b/fs/nfs/pnfs.c
> @@ -604,6 +604,9 @@ pnfs_layout_free_bulk_destroy_list(struct list_head *layout_list,
> spin_unlock(&inode->i_lock);
> pnfs_free_lseg_list(&lseg_list);
> pnfs_put_layout_hdr(lo);
> +
> + if (ret)
> + pnfs_layoutcommit_inode(inode, false);

Ditto. The test for 'ret' here is particularly confusing given that it
can get set in a previous iteration of the loop.

> iput(inode);
> }
> return ret;
> --
> 1.9.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html



--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 03:57:33

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 08/19] pnfs: add return_range method

On Thu, Aug 21, 2014 at 9:09 AM, Christoph Hellwig <[email protected]> wrote:
> If a layout driver keeps per-inode state outside of the layout segments it
> needs to be notified of any layout returns or recalls on an inode, and not
> just about the freeing of layout segments. Add a method to acomplish this,
> which will allow the block layout driver to handle the case of truncated
> and re-expanded files properly.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> fs/nfs/callback_proc.c | 6 +++++-
> fs/nfs/pnfs.c | 10 ++++++++++
> fs/nfs/pnfs.h | 3 +++
> 3 files changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
> index bf017b0..86541e0 100644
> --- a/fs/nfs/callback_proc.c
> +++ b/fs/nfs/callback_proc.c
> @@ -179,8 +179,12 @@ static u32 initiate_file_draining(struct nfs_client *clp,
> &args->cbl_range)) {
> need_commit = true;
> rv = NFS4ERR_DELAY;
> + } else {
> + if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
> + NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo,
> + &args->cbl_range);
> + }
> }
> -
> pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
> spin_unlock(&ino->i_lock);
> pnfs_free_lseg_list(&free_me_list);
> diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
> index bce7f1b..e481d1c 100644
> --- a/fs/nfs/pnfs.c
> +++ b/fs/nfs/pnfs.c
> @@ -857,6 +857,16 @@ _pnfs_return_layout(struct inode *ino)
> empty = list_empty(&lo->plh_segs);
> pnfs_clear_layoutcommit(ino, &tmp_list);
> pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL);
> +
> + if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
> + struct pnfs_layout_range range = {
> + .iomode = IOMODE_ANY,
> + .offset = 0,
> + .length = NFS4_MAX_UINT64,
> + };
> + NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo, &range);
> + }
> +
> /* Don't send a LAYOUTRETURN if list was initially empty */
> if (empty) {
> spin_unlock(&ino->i_lock);
> diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
> index 302b279..044c071 100644
> --- a/fs/nfs/pnfs.h
> +++ b/fs/nfs/pnfs.h
> @@ -94,6 +94,9 @@ struct pnfs_layoutdriver_type {
> struct pnfs_layout_segment * (*alloc_lseg) (struct pnfs_layout_hdr *layoutid, struct nfs4_layoutget_res *lgr, gfp_t gfp_flags);
> void (*free_lseg) (struct pnfs_layout_segment *lseg);
>
> + void (*return_range) (struct pnfs_layout_hdr *lo,
> + struct pnfs_layout_range *range);
> +
> /* test for nfs page cache coalescing */
> const struct nfs_pageio_ops *pg_read_ops;
> const struct nfs_pageio_ops *pg_write_ops;

Holding off applying this patch for now until we figure out the right
behaviour for "[PATCH 03/19] pnfs: force a layout commit when
encountering busy segments during recall". There is a small dependency
in initiate_file_draining().

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 14:40:48

by Trond Myklebust

[permalink] [raw]
Subject: Re: pnfs block layout driver fixes V2

On Mon, Sep 8, 2014 at 10:54 PM, Christoph Hellwig <[email protected]> wrote:
> On Mon, Sep 08, 2014 at 09:12:01PM -0700, Trond Myklebust wrote:
>> Applying all patches except the ones for which I explicitly sent you
>> an email with questions and/or a notification of a dependency on a
>> questioned patch.
>
> Might be easier to just resend the whole series as there's quite a few
> dependencies. But if you push out a tree with the applied patches I can
> just rebase against that.
>

Done. Please see
git://git.linux-nfs.org/projects/trondmy/linux-nfs.git with branch
linux-next.

Cheers
Trond
--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 04:50:32

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 16/19] pnfs/blocklayout: rewrite extent tracking

On Thu, Aug 21, 2014 at 9:09 AM, Christoph Hellwig <[email protected]> wrote:
> Currently the block layout driver tracks extents in three separate
> data structures:
>
> - the two lists of pnfs_block_extent structures returned by the server
> - the list of sectors that were in invalid state but have been written to
> - a list of pnfs_block_short_extent structures for LAYOUTCOMMIT
>
> All of these share the property that they are not only highly inefficient
> data structures, but also that operations on them are even more inefficient
> than necessary.
>
> In addition there are various implementation defects like:
>
> - using an int to track sectors, causing corruption for large offsets
> - incorrect normalization of page or block granularity ranges
> - insufficient error handling
> - incorrect synchronization as extents can be modified while they are in
> use
>
> This patch replaces all three data structures with a single unified rbtree
> tracking all extents, as well as their in-memory state, although we still
> need two instances, for read-only and read-write extents, due to the arcane
> client side COW feature in the block layouts spec.
>
> To fix the problem of extents possibly being modified while in use, we make
> sure to return a copy of the extent for use in the write path: the
> extent can only be invalidated by a layout recall or return, which has
> to wait until the I/O operations have finished due to refcounts on the
> layout segment.
>
> The new extent tree works similarly to the schemes used by block based
> filesystems like XFS or ext4.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> fs/nfs/blocklayout/Makefile | 3 +-
> fs/nfs/blocklayout/blocklayout.c | 258 +++-------
> fs/nfs/blocklayout/blocklayout.h | 112 +----
> fs/nfs/blocklayout/blocklayoutdev.c | 35 +-
> fs/nfs/blocklayout/extent_tree.c | 545 ++++++++++++++++++++++
> fs/nfs/blocklayout/extents.c | 908 ------------------------------------
> 6 files changed, 649 insertions(+), 1212 deletions(-)
> create mode 100644 fs/nfs/blocklayout/extent_tree.c
> delete mode 100644 fs/nfs/blocklayout/extents.c
>

Holding due to dependencies on unapplied patches in the series.


--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 04:09:05

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 18/19] pnfs/blocklayout: return layouts on setattr

On Thu, Aug 21, 2014 at 9:09 AM, Christoph Hellwig <[email protected]> wrote:
> This speeds up truncate-heavy workloads like fsx by multiple orders of
> magnitude.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> fs/nfs/blocklayout/blocklayout.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
> index a7524c4..d5a2b87 100644
> --- a/fs/nfs/blocklayout/blocklayout.c
> +++ b/fs/nfs/blocklayout/blocklayout.c
> @@ -799,7 +799,8 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
> .id = LAYOUT_BLOCK_VOLUME,
> .name = "LAYOUT_BLOCK_VOLUME",
> .owner = THIS_MODULE,
> - .flags = PNFS_READ_WHOLE_PAGE,
> + .flags = PNFS_LAYOUTRET_ON_SETATTR |
> + PNFS_READ_WHOLE_PAGE,
> .read_pagelist = bl_read_pagelist,
> .write_pagelist = bl_write_pagelist,
> .alloc_layout_hdr = bl_alloc_layout_hdr,
> --


Holding this pending resolution of issues with "[PATCH 07/19] pnfs:
add flag to force read-modify-write in ->write_begin" on which there
is a dependency.

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 04:52:05

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 19/19] pnfs/blocklayout: allocate separate pages for the layoutcommit payload

On Thu, Aug 21, 2014 at 9:09 AM, Christoph Hellwig <[email protected]> wrote:
> Instead of overflowing the XDR send buffer with our extent list, allocate
> pages and pre-encode the layoutupdate payload into them. We optimistically
> allocate a single page using alloc_page and only switch to vmalloc when we
> have more extents outstanding. Currently there is only a single test case
> (xfstests generic/113) that can reproduce large enough extent lists for
> this to occur.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> fs/nfs/blocklayout/blocklayout.c | 15 ++----
> fs/nfs/blocklayout/blocklayout.h | 8 ++--
> fs/nfs/blocklayout/extent_tree.c | 100 +++++++++++++++++++++++++++++++--------
> 3 files changed, 90 insertions(+), 33 deletions(-)
>

Holding due to dependencies on unapplied patches.

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 14:41:32

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 07/19] pnfs: add flag to force read-modify-write in ->write_begin

On Mon, Sep 8, 2014 at 10:53 PM, Christoph Hellwig <[email protected]> wrote:
> On Mon, Sep 08, 2014 at 08:50:13PM -0700, Trond Myklebust wrote:
>> This doesn't look as if it will compile without IS_ENABLED(CONFIG_NFS_V4).
>
> Indeed it doesn't. I've added an inline wrapper similar to the other
> PNFS layoutdriver checks.
>

Cool. Thanks!

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 14:38:02

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 03/19] pnfs: force a layout commit when encountering busy segments during recall

On Mon, Sep 8, 2014 at 10:49 PM, Christoph Hellwig <[email protected]> wrote:
> On Mon, Sep 08, 2014 at 05:37:42PM -0700, Trond Myklebust wrote:
>> Why wouldn't it make more sense to call pnfs_layoutcommit_inode()
>> unconditionally before the call to pnfs_mark_matching_lsegs_invalid()?
>
> It would minimally reduce the latency, but otherwise not be very different.
> The downside is that we'll now need two i_lock round trips per recall.
>
> But if this is your preference I can easily fix it up.

Please do. It would make the code a little easier on the eye.

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2014-09-09 05:54:04

by Christoph Hellwig

[permalink] [raw]
Subject: Re: pnfs block layout driver fixes V2

On Mon, Sep 08, 2014 at 09:12:01PM -0700, Trond Myklebust wrote:
> Applying all patches except the ones for which I explicitly sent you
> an email with questions and/or a notification of a dependency on a
> questioned patch.

Might be easier to just resend the whole series as there's quite a few
dependencies. But if you push out a tree with the applied patches I can
just rebase against that.


2014-09-09 04:48:04

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCH 15/19] pnfs/blocklayout: don't set pages uptodate

On Thu, Aug 21, 2014 at 9:09 AM, Christoph Hellwig <[email protected]> wrote:
> The core nfs code handles setting pages uptodate on reads, no need to mess
> with the pageflags ourselves. Also remove a debug function to dump page
> flags.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> fs/nfs/blocklayout/blocklayout.c | 24 +-----------------------
> 1 file changed, 1 insertion(+), 23 deletions(-)
>
> diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
> index 6484b9f..92be984 100644
> --- a/fs/nfs/blocklayout/blocklayout.c
> +++ b/fs/nfs/blocklayout/blocklayout.c
> @@ -49,20 +49,6 @@ MODULE_LICENSE("GPL");
> MODULE_AUTHOR("Andy Adamson <[email protected]>");
> MODULE_DESCRIPTION("The NFSv4.1 pNFS Block layout driver");
>
> -static void print_page(struct page *page)
> -{
> - dprintk("PRINTPAGE page %p\n", page);
> - dprintk(" PagePrivate %d\n", PagePrivate(page));
> - dprintk(" PageUptodate %d\n", PageUptodate(page));
> - dprintk(" PageError %d\n", PageError(page));
> - dprintk(" PageDirty %d\n", PageDirty(page));
> - dprintk(" PageReferenced %d\n", PageReferenced(page));
> - dprintk(" PageLocked %d\n", PageLocked(page));
> - dprintk(" PageWriteback %d\n", PageWriteback(page));
> - dprintk(" PageMappedToDisk %d\n", PageMappedToDisk(page));
> - dprintk("\n");
> -}
> -
> /* Given the be associated with isect, determine if page data needs to be
> * initialized.
> */
> @@ -187,16 +173,9 @@ retry:
> return bio;
> }
>
> -/* This is basically copied from mpage_end_io_read */
> static void bl_end_io_read(struct bio *bio, int err)
> {
> struct parallel_io *par = bio->bi_private;
> - struct bio_vec *bvec;
> - int i;
> -
> - if (!err)
> - bio_for_each_segment_all(bvec, bio, i)
> - SetPageUptodate(bvec->bv_page);
>
> if (err) {
> struct nfs_pgio_header *header = par->data;
> @@ -205,6 +184,7 @@ static void bl_end_io_read(struct bio *bio, int err)
> header->pnfs_error = -EIO;
> pnfs_set_lo_fail(header->lseg);
> }
> +
> bio_put(bio);
> put_parallel(par);
> }
> @@ -304,8 +284,6 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
> /* Fill hole w/ zeroes w/o accessing device */
> dprintk("%s Zeroing page for hole\n", __func__);
> zero_user_segment(pages[i], pg_offset, pg_len);
> - print_page(pages[i]);
> - SetPageUptodate(pages[i]);
> } else {
> struct pnfs_block_extent *be_read;
>
> --
> 1.9.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html

Depends on "[PATCH 14/19] pnfs/blocklayout: remove read-modify-write
handling in bl_write_pagelist". Holding for now.

--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

[email protected]

2015-02-09 06:01:48

by NeilBrown

[permalink] [raw]
Subject: Re: [PATCH 13/19] pnfs/blocklayout: correctly decrement extent length

On Thu, 21 Aug 2014 11:09:29 -0500 Christoph Hellwig <[email protected]> wrote:

> When we do non-page sized reads we can underflow the extent_length variable
> and read incorrect data. Fix the extent_length calculation and change to
> defensive <= checks for the extent length in the read and write path.
>
> Signed-off-by: Christoph Hellwig <[email protected]>

Hi Christoph,
I was reviewing this patch for possible backport.
As 'extent_length' is sector_t, it is unsigned (either u64 or unsigned long).

So comparing "<= 0" has the same effect as comparing "== 0".
So the new checks are not "defensive".

That doesn't mean they are wrong, but they could be misleading...

There may be nothing that needs to be done here, but I thought I should let
you know.

NeilBrown

> ---
> fs/nfs/blocklayout/blocklayout.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
> index 5427ae7..87a633d 100644
> --- a/fs/nfs/blocklayout/blocklayout.c
> +++ b/fs/nfs/blocklayout/blocklayout.c
> @@ -272,7 +272,7 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
> isect = (sector_t) (f_offset >> SECTOR_SHIFT);
> /* Code assumes extents are page-aligned */
> for (i = pg_index; i < hdr->page_array.npages; i++) {
> - if (!extent_length) {
> + if (extent_length <= 0) {
> /* We've used up the previous extent */
> bl_put_extent(be);
> bl_put_extent(cow_read);
> @@ -303,6 +303,7 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
> f_offset += pg_len;
> bytes_left -= pg_len;
> isect += (pg_offset >> SECTOR_SHIFT);
> + extent_length -= (pg_offset >> SECTOR_SHIFT);
> } else {
> pg_offset = 0;
> pg_len = PAGE_CACHE_SIZE;
> @@ -333,7 +334,7 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
> }
> }
> isect += (pg_len >> SECTOR_SHIFT);
> - extent_length -= PAGE_CACHE_SECTORS;
> + extent_length -= (pg_len >> SECTOR_SHIFT);
> }
> if ((isect << SECTOR_SHIFT) >= header->inode->i_size) {
> hdr->res.eof = 1;
> @@ -797,7 +798,7 @@ next_page:
> /* Middle pages */
> pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
> for (i = pg_index; i < header->page_array.npages; i++) {
> - if (!extent_length) {
> + if (extent_length <= 0) {
> /* We've used up the previous extent */
> bl_put_extent(be);
> bl_put_extent(cow_read);



2015-02-09 18:24:36

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 13/19] pnfs/blocklayout: correctly decrement extent length

On Mon, Feb 09, 2015 at 05:01:40PM +1100, NeilBrown wrote:
> Hi Christoph,
> I was reviewing this patch for possible backport.
> As 'extent_length' is sector_t, it is unsigned (either u64 or unsigned long).
>
> So comparing "<= 0" has the same effect as comparing "== 0".
> So the new checks are not "defensive".
>
> That doesn't mean they are wrong, but they could be misleading...
>
> There may be nothing that needs to be done here, but I thought I should let
> you know.

I think this is just an artefact of my earlier unsuccessful attempts at
solving the problem. The real fix is the part that calculates
the "extent_length" variable correctly.
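
Neil's point is easy to demonstrate in isolation with a standalone
program (assuming a 64-bit sector_t, as on typical configurations):

	#include <stdio.h>

	typedef unsigned long long sector_t;	/* stand-in for the kernel type */

	int main(void)
	{
		sector_t extent_length = 4;

		extent_length -= 8;	/* underflows: wraps around, never negative */
		printf("%llu\n", extent_length);	/* 18446744073709551612 */
		printf("%d\n", extent_length <= 0);	/* 0, i.e. "<= 0" behaves as "== 0" */
		return 0;
	}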