From: Octavian Purdila <octavian.purdila@intel.com>
To: xfs@oss.sgi.com
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Octavian Purdila <octavian.purdila@intel.com>
Subject: [RFC PATCH] xfs: support for non-mmu architectures
Date: Wed, 18 Nov 2015 00:46:21 +0200
Message-Id: <1447800381-20167-1-git-send-email-octavian.purdila@intel.com>
X-Mailer: git-send-email 1.9.1

Naive implementation for non-mmu architectures: allocate physically
contiguous xfs buffers with alloc_pages. This is terribly inefficient
in memory use and fragmentation under high I/O loads, but it may be
good enough for basic usage (which is all most non-mmu architectures
need).

This patch was tested with lklfuse [1], and basic operations seem to
work even with only 16MB allocated for LKL.
[1] https://github.com/lkl/linux

Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
---
 fs/xfs/xfs_buf.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 8ecffb3..50b5246 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -261,6 +261,7 @@ xfs_buf_free(
 	ASSERT(list_empty(&bp->b_lru));

 	if (bp->b_flags & _XBF_PAGES) {
+#ifdef CONFIG_MMU
 		uint		i;

 		if (xfs_buf_is_vmapped(bp))
@@ -272,6 +273,10 @@ xfs_buf_free(
 			__free_page(page);
 		}
+#else
+		free_pages((unsigned long)page_to_virt(bp->b_pages[0]),
+			   order_base_2(bp->b_page_count));
+#endif
 	} else if (bp->b_flags & _XBF_KMEM)
 		kmem_free(bp->b_addr);
 	_xfs_buf_free_pages(bp);
@@ -338,7 +343,14 @@ use_alloc_page:
 		struct page	*page;
 		uint		retries = 0;
 retry:
+#ifdef CONFIG_MMU
 		page = alloc_page(gfp_mask);
+#else
+		if (i == 0)
+			page = alloc_pages(gfp_mask, order_base_2(page_count));
+		else
+			page = bp->b_pages[0] + i;
+#endif
 		if (unlikely(page == NULL)) {
 			if (flags & XBF_READ_AHEAD) {
 				bp->b_page_count = i;
@@ -372,8 +384,11 @@ retry:
 	return 0;

 out_free_pages:
+#ifdef CONFIG_MMU
 	for (i = 0; i < bp->b_page_count; i++)
 		__free_page(bp->b_pages[i]);
+#endif
+
 	return error;
 }

@@ -392,6 +407,7 @@ _xfs_buf_map_pages(
 	} else if (flags & XBF_UNMAPPED) {
 		bp->b_addr = NULL;
 	} else {
+#ifdef CONFIG_MMU
 		int retried = 0;
 		unsigned noio_flag;

@@ -412,6 +428,9 @@ _xfs_buf_map_pages(
 			vm_unmap_aliases();
 		} while (retried++ <= 1);
 		memalloc_noio_restore(noio_flag);
+#else
+		bp->b_addr = page_to_virt(bp->b_pages[0]);
+#endif

 		if (!bp->b_addr)
 			return -ENOMEM;
@@ -816,11 +835,19 @@ xfs_buf_get_uncached(
 	if (error)
 		goto fail_free_buf;

+#ifdef CONFIG_MMU
 	for (i = 0; i < page_count; i++) {
 		bp->b_pages[i] = alloc_page(xb_to_gfp(flags));
 		if (!bp->b_pages[i])
 			goto fail_free_mem;
 	}
+#else
+	bp->b_pages[0] = alloc_pages(xb_to_gfp(flags), order_base_2(page_count));
+	if (!bp->b_pages[0])
+		goto fail_free_buf;
+	for (i = 1; i < page_count; i++)
+		bp->b_pages[i] = bp->b_pages[i-1] + 1;
+#endif
 	bp->b_flags |= _XBF_PAGES;

 	error = _xfs_buf_map_pages(bp, 0);
--
1.9.1