2013-05-08 22:38:01

by Seth Jennings

Subject: [PATCHv10 0/4] zswap: compressed swap caching

This is the latest version of the zswap patchset for compressed swap caching.
This is submitted for merging into linux-next and inclusion in v3.11.

New in this Version:

This version of the patchset contains significant changes from past versions.

At the top of the list of changes, zbud replaced zsmalloc as the allocator for
compressed page storage. The zbud allocator in this patchset is a generic
rewrite of the zcache specific zbud implementation found in
drivers/staging/zcache/zbud.c.

This switch was made to satisfy some determinism concerns raised at LSF/MM
surrounding reclaim/writeback I/O delays with zsmalloc and the effectiveness of
the reclaim heuristic to free entire page frames. zbud, while having worse
best case density than zsmalloc, is fast, simple, and deterministic when it
comes to reclaiming page frames.

The I/O reduction and performance gains with zbud are the same as with zsmalloc
while the pool is not full. However, with zbud, the pool fills more quickly and
reaches its full point at a lower memory overcommit level. The other side of
that coin is that zswap+zbud performs less writeback than zswap+zsmalloc once
the pool is full, since zbud precisely evicts allocations that will result in
the freeing of a page.

A side effect of the allocator change is that the LRU logic has been moved from
zswap to zbud, which simplifies the zswap code. The mempool and tmppage code
was also removed in the refactoring.

Two patches that changed some existing swap code have already been merged
and are no longer included in the patchset.

Future development includes equipping zsmalloc with the LRU logic it needs to
do reclaim well and then adapting zswap with a pluggable allocator interface,
enabling the user to choose between zbud and zsmalloc (or others yet unknown),
with zbud as a safe default.

Useful References:

LSFMM: In-kernel memory compression
https://lwn.net/Articles/548109/

The zswap compressed swap cache
https://lwn.net/Articles/537422/

Zswap Overview:

Zswap is a lightweight compressed cache for swap pages. It takes
pages that are in the process of being swapped out and attempts to
compress them into a dynamically allocated RAM-based memory pool.
If this process is successful, the writeback to the swap device is
deferred and, in many cases, avoided completely. This results in
a significant I/O reduction and performance gains for systems that
are swapping.

The results of a kernel building benchmark indicate a
runtime reduction of 53% and an I/O reduction of 76% with zswap vs normal
swapping under heavy memory pressure (see Performance section for more).

Some additional performance metrics regarding the performance
improvements and I/O reductions that can be achieved using zswap, as
measured by SPECjbb, are provided here:
http://ibm.co/VCgHvM

These results include runs on x86 and new results on Power7+ with
hardware compression acceleration.

Of particular note is that zswap is able to evict pages from the compressed
cache, on an LRU basis, to the backing swap device when the compressed pool
reaches its size limit or the pool is unable to obtain additional pages
from the buddy allocator. This eviction functionality had been identified
as a requirement in prior community discussions.

Rationale:

Zswap provides compressed swap caching that basically trades CPU cycles
for reduced swap I/O. This trade-off can result in a significant
performance improvement, as reads from and writes to the compressed
cache are almost always faster than reading from a swap device,
which incurs the latency of an asynchronous block I/O read.

Some potential benefits:
* Desktop/laptop users with limited RAM capacities can mitigate the
performance impact of swapping.
* Overcommitted guests that share a common I/O resource can
dramatically reduce their swap I/O pressure, avoiding heavy-handed
I/O throttling by the hypervisor. This allows more work to get done
with less impact to the guest workload and guests sharing the I/O
subsystem.
* Users with SSDs as swap devices can extend the life of the device by
drastically reducing life-shortening writes.

Compressed swap is also provided in zcache, along with page cache
compression and RAM clustering through RAMster. Zswap seeks to deliver
the benefit of swap compression to users in a discrete function.
This design decision is akin to the Unix design philosophy of doing one
thing well; it leaves file cache compression and other features
for separate code.

Design:

Zswap receives pages for compression through the Frontswap API and
is able to evict pages from its own compressed pool on an LRU basis
and write them back to the backing swap device in the case that the
compressed pool is full or unable to secure additional pages from
the buddy allocator.

Zswap makes use of zbud for managing the compressed memory pool.
Each allocation in zbud is not directly accessible by address. Rather,
a handle is returned by the allocation routine and that handle must be
mapped before being accessed. The compressed memory pool grows on
demand and shrinks as compressed pages are freed. The pool is not
preallocated.

When a swap page is passed from frontswap to zswap, zswap maintains
a mapping of the swap entry, a combination of the swap type and swap
offset, to the zbud handle that references that compressed swap
page. This mapping is achieved with a red-black tree per swap type.
The swap offset is the search key for the tree nodes.
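
As a condensed view of that lookup path (taken from the zswap patch later
in this series), the swap type selects the tree and the swap offset is the
rbtree search key; swpentry here is the swp_entry_t handed in by frontswap:

  /* condensed from mm/zswap.c below; not a complete excerpt */
  struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
  pgoff_t offset = swp_offset(swpentry);
  struct zswap_entry *entry;

  spin_lock(&tree->lock);
  entry = zswap_rb_search(&tree->rbroot, offset); /* NULL if nothing stored */
  spin_unlock(&tree->lock);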

Zswap seeks to be simple in its policies. Sysfs attributes allow for
two user controlled policies:
* max_compression_ratio - Maximum compression ratio, as a percentage,
for an acceptable compressed page. Any page that does not compress
by at least this ratio will be rejected.
* max_pool_percent - The maximum percentage of memory that the compressed
pool can occupy.

To enable zswap, the "enabled" attribute must be set to 1 at boot time.

Zswap allows the compressor to be selected at kernel boot time by
setting the "compressor" attribute. The default compressor is lzo.

A debugfs interface is provided for various statistics about pool size,
number of pages stored, and various counters for the reasons pages
are rejected.
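
For example, on a kernel built with CONFIG_ZSWAP, zswap might be enabled and
tuned with boot parameters like the following (20 is the built-in default for
max_pool_percent; the two policy attributes also appear under
/sys/module/zswap/parameters/ at runtime):

  zswap.enabled=1 zswap.compressor=lzo zswap.max_pool_percent=20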

Changelog:

v10:
* replace zsmalloc with zbud (zsmalloc to come back as option in future dev)
* lru logic moved out of zswap into allocator
* simplified and improved writeback logic
* removed memory pool and tmppage pool as part of refactoring
* Rebase to (almost) v3.10-rc1

v9:
* Fix load-during-writeback race; double lru add (for real this time)
* checkpatch and comment fixes
* Fix __swap_writepage() return value check
* Move check for max outstanding writebacks (dedup some code)
* Rebase to v3.9-rc6

v8:
* Fix load-during-writeback race; double lru add
* checkpatch fixups
* s/NOWAIT/ATOMIC for tree allocation (Dave)
* Check __swap_writepage() for error before incr outstanding write count (Rob)
* Convert pcpu compression buffer alloc from alloc_page() to kmalloc() (Dave)
* Rebase to v3.9-rc5

v7:
* Decrease zswap_stored_pages during tree cleanup (Joonsoo)
* Move zswap_entry_cache_alloc() earlier during store (Joonsoo)
* Move type field from struct zswap_entry to struct zswap_tree
* Change to swapper_space array (-rc1 change)
* s/reset_page_mapcount/page_mapcount_reset in zsmalloc (-rc1 change)
* Rebase to v3.9-rc1

v6:
* fix access-after-free regression introduced in v5
(rb_erase() outside the lock)
* fix improper freeing of rbtree (Cody)
* fix comment typo (Ric)
* add comments about ZS_MM_WO usage and page mapping mode (Joonsoo)
* don't use page->object (Joonsoo)
* remove DEBUG (Joonsoo)
* rebase to v3.8

v5:
* zsmalloc patch converted from promotion to "new code" (for review only,
see note in [1/8])
* promote zsmalloc to mm/ instead of /lib
* add more documentation everywhere
* convert USE_PGTABLE_MAPPING to kconfig option, thanks to Minchan
* s/flush/writeback/
* #define pr_fmt() for formatting messages (Joe)
* checkpatch fixups
* lots of changes suggested by Minchan

v4:
* Added Acks (Minchan)
* Separated flushing functionality into standalone patch
for easier review (Minchan)
* fix comment on zswap enabled attribute (Minchan)
* add TODO for dynamic mempool size (Minchan)
* add check for NULL in zswap_free_page() (Minchan)
* add missing zs_free() in error path (Minchan)
* TODO: add comments for flushing/refcounting (Minchan)

v3:
* Dropped the zsmalloc patches from the set, except the promotion patch
which has been converted to a rename patch (vs full diff). The dropped
patches have been Acked and are going into Greg's staging tree soon.
* Separated [PATCHv2 7/9] into two patches since it makes changes for two
different reasons (Minchan)
* Moved ZSWAP_MAX_OUTSTANDING_FLUSHES near the top in zswap.c (Rik)
* Rebase to v3.8-rc5. linux-next is a little volatile with the
swapper_space per type changes which will affect this patchset.
* TODO: Move some stats from debugfs to sysfs. Which ones? (Rik)

v2:
* Rename zswap_fs_* functions to zswap_frontswap_* to avoid
confusion with "filesystem"
* Add comment about what the tree lock protects
* Remove "#if 0" code (should have been done before)
* Break out changes to existing swap code into separate patch
* Fix blank line EOF warning on documentation file
* Rebase to next-20130107

Performance, Kernel Building:

Setup
========
Gentoo w/ kernel v3.7-rc7
Quad-core i5-2500 @ 3.3GHz
512MB DDR3 1600MHz (limited with mem=512m on boot)
Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
majflt are major page faults reported by the time command
pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
the make -jN

Summary
========
* Zswap reduces I/O and improves performance at all swap pressure levels.

* Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
over 1.5GB of I/O, and cut runtime in half.

Details
========
I/O (in pages)
                 base                              zswap                   change
 N  pswpin  pswpout  majflt  I/O sum   pswpin  pswpout  majflt  I/O sum   %I/O     MB
 8       1      335     291      627        0        0     249      249   -60%      1
12    3688    14315    5290    23293      123      860    5954     6937   -70%     64
16   12711    46179   16803    75693     2936     7390   46092    56418   -25%     75
20   42178   133781   49898   225857     9460    28382   92951   130793   -42%    371
24   96079   357280  105242   558601     7719    18484  109309   135512   -76%   1653

Runtime (in seconds)
N base zswap %change
8 107 107 0%
12 128 110 -14%
16 191 179 -6%
20 371 240 -35%
24 570 267 -53%

%CPU utilization (out of 400% on 4 cpus)
N base zswap %change
8 317 319 1%
12 267 311 16%
16 179 191 7%
20 94 143 52%
24 60 128 113%

Seth Jennings (4):
debugfs: add get/set for atomic types
zbud: add to mm/
zswap: add to mm/
zswap: add documentation

Documentation/vm/zswap.txt | 72 ++++
fs/debugfs/file.c | 42 ++
include/linux/debugfs.h | 2 +
include/linux/zbud.h | 22 ++
mm/Kconfig | 25 ++
mm/Makefile | 2 +
mm/zbud.c | 564 +++++++++++++++++++++++++++
mm/zswap.c | 952 +++++++++++++++++++++++++++++++++++++++++++++
8 files changed, 1681 insertions(+)
create mode 100644 Documentation/vm/zswap.txt
create mode 100644 include/linux/zbud.h
create mode 100644 mm/zbud.c
create mode 100644 mm/zswap.c

--
1.8.2.2


2013-05-08 22:38:07

by Seth Jennings

Subject: [PATCHv10 1/4] debugfs: add get/set for atomic types

debugfs currently lack the ability to create attributes
that set/get atomic_t values.

This patch adds support for this through a new
debugfs_create_atomic_t() function.
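
As an illustration (not part of this patch), a driver could expose an
atomic_t counter read-only with the new helper. The "example" directory
and "events" counter names below are hypothetical:

  /* assumes <linux/debugfs.h> and <linux/atomic.h> are already included */
  static atomic_t example_events = ATOMIC_INIT(0);
  static struct dentry *example_dir;

  static int __init example_debugfs_init(void)
  {
          example_dir = debugfs_create_dir("example", NULL);
          if (!example_dir)
                  return -ENOMEM;
          /* no write bits in the mode, so the read-only fops are used */
          debugfs_create_atomic_t("events", S_IRUGO, example_dir,
                                  &example_events);
          return 0;
  }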

Signed-off-by: Seth Jennings <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
Acked-by: Mel Gorman <[email protected]>
---
fs/debugfs/file.c | 42 ++++++++++++++++++++++++++++++++++++++++++
include/linux/debugfs.h | 2 ++
2 files changed, 44 insertions(+)

diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
index c5ca6ae..fa26d5b 100644
--- a/fs/debugfs/file.c
+++ b/fs/debugfs/file.c
@@ -21,6 +21,7 @@
#include <linux/debugfs.h>
#include <linux/io.h>
#include <linux/slab.h>
+#include <linux/atomic.h>

static ssize_t default_read_file(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
@@ -403,6 +404,47 @@ struct dentry *debugfs_create_size_t(const char *name, umode_t mode,
}
EXPORT_SYMBOL_GPL(debugfs_create_size_t);

+static int debugfs_atomic_t_set(void *data, u64 val)
+{
+ atomic_set((atomic_t *)data, val);
+ return 0;
+}
+static int debugfs_atomic_t_get(void *data, u64 *val)
+{
+ *val = atomic_read((atomic_t *)data);
+ return 0;
+}
+DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t, debugfs_atomic_t_get,
+ debugfs_atomic_t_set, "%llu\n");
+DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t_ro, debugfs_atomic_t_get, NULL, "%llu\n");
+DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t_wo, NULL, debugfs_atomic_t_set, "%llu\n");
+
+/**
+ * debugfs_create_atomic_t - create a debugfs file that is used to read and
+ * write an atomic_t value
+ * @name: a pointer to a string containing the name of the file to create.
+ * @mode: the permission that the file should have
+ * @parent: a pointer to the parent dentry for this file. This should be a
+ * directory dentry if set. If this parameter is %NULL, then the
+ * file will be created in the root of the debugfs filesystem.
+ * @value: a pointer to the variable that the file should read to and write
+ * from.
+ */
+struct dentry *debugfs_create_atomic_t(const char *name, umode_t mode,
+ struct dentry *parent, atomic_t *value)
+{
+ /* if there are no write bits set, make read only */
+ if (!(mode & S_IWUGO))
+ return debugfs_create_file(name, mode, parent, value,
+ &fops_atomic_t_ro);
+ /* if there are no read bits set, make write only */
+ if (!(mode & S_IRUGO))
+ return debugfs_create_file(name, mode, parent, value,
+ &fops_atomic_t_wo);
+
+ return debugfs_create_file(name, mode, parent, value, &fops_atomic_t);
+}
+EXPORT_SYMBOL_GPL(debugfs_create_atomic_t);

static ssize_t read_file_bool(struct file *file, char __user *user_buf,
size_t count, loff_t *ppos)
diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
index 63f2465..d68b4ea 100644
--- a/include/linux/debugfs.h
+++ b/include/linux/debugfs.h
@@ -79,6 +79,8 @@ struct dentry *debugfs_create_x64(const char *name, umode_t mode,
struct dentry *parent, u64 *value);
struct dentry *debugfs_create_size_t(const char *name, umode_t mode,
struct dentry *parent, size_t *value);
+struct dentry *debugfs_create_atomic_t(const char *name, umode_t mode,
+ struct dentry *parent, atomic_t *value);
struct dentry *debugfs_create_bool(const char *name, umode_t mode,
struct dentry *parent, u32 *value);

--
1.8.2.2

2013-05-08 22:38:14

by Seth Jennings

Subject: [PATCHv10 3/4] zswap: add to mm/

zswap is a thin compression backend for frontswap. It receives pages from
frontswap and attempts to store them in a compressed memory pool, resulting in
an effective partial memory reclaim and dramatically reduced swap device I/O.

Additionally, in most cases, pages can be retrieved from this compressed store
much more quickly than reading from traditional swap devices, resulting in faster
performance for many workloads.

It also has support for evicting swap pages that are currently compressed in
zswap to the swap device on an LRU(ish) basis. This functionality is very
important and makes zswap a true cache in that, once the cache is full or can't
grow due to memory pressure, the oldest pages can be moved out of zswap to the
swap device so newer pages can be compressed and stored in zswap.
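
As a condensed illustration of how that eviction is wired up in the code
below: zswap registers a writeback callback with zbud when it creates the
pool, and the store path asks zbud to reclaim a page when the pool is full:

  /* condensed from mm/zswap.c in this patch; not a complete excerpt */
  static struct zbud_ops zswap_zbud_ops = {
          .evict = zswap_writeback_entry  /* decompress + write back to swap */
  };

  /* in zswap_frontswap_store(), before allocating from the pool: */
  if (zswap_is_full()) {
          zswap_pool_limit_hit++;
          if (zbud_reclaim_page(tree->pool, 8)) { /* evict LRU entries, up to 8 tries */
                  zswap_reject_reclaim_fail++;
                  return -ENOMEM;  /* the actual code uses a goto to the reject label */
          }
  }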

This patch adds the zswap driver to mm/.

Signed-off-by: Seth Jennings <[email protected]>
---
mm/Kconfig | 15 +
mm/Makefile | 1 +
mm/zswap.c | 952 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 968 insertions(+)
create mode 100644 mm/zswap.c

diff --git a/mm/Kconfig b/mm/Kconfig
index 908f41b..4042e07 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -487,3 +487,18 @@ config ZBUD
While this design limits storage density, it has simple and
deterministic reclaim properties that make it preferable to a higher
density approach when reclaim will be used.
+
+config ZSWAP
+ bool "In-kernel swap page compression"
+ depends on FRONTSWAP && CRYPTO
+ select CRYPTO_LZO
+ select ZBUD
+ default n
+ help
+ Zswap is a backend for the frontswap mechanism in the VMM.
+ It receives pages from frontswap and attempts to store them
+ in a compressed memory pool, resulting in an effective
+ partial memory reclaim. In addition, pages and be retrieved
+ from this compressed store much faster than most tradition
+ swap devices resulting in reduced I/O and faster performance
+ for many workloads.
diff --git a/mm/Makefile b/mm/Makefile
index 95f0197..f008033 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o
obj-$(CONFIG_BOUNCE) += bounce.o
obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o
obj-$(CONFIG_FRONTSWAP) += frontswap.o
+obj-$(CONFIG_ZSWAP) += zswap.o
obj-$(CONFIG_HAS_DMA) += dmapool.o
obj-$(CONFIG_HUGETLBFS) += hugetlb.o
obj-$(CONFIG_NUMA) += mempolicy.o
diff --git a/mm/zswap.c b/mm/zswap.c
new file mode 100644
index 0000000..b1070ca
--- /dev/null
+++ b/mm/zswap.c
@@ -0,0 +1,952 @@
+/*
+ * zswap.c - zswap driver file
+ *
+ * zswap is a backend for frontswap that takes pages that are in the
+ * process of being swapped out and attempts to compress them and store
+ * them in a RAM-based memory pool. This results in a significant I/O
+ * reduction on the real swap device and, in the case of a slow swap
+ * device, can also improve workload performance.
+ *
+ * Copyright (C) 2012 Seth Jennings <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+*/
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/cpu.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/frontswap.h>
+#include <linux/rbtree.h>
+#include <linux/swap.h>
+#include <linux/crypto.h>
+#include <linux/mempool.h>
+#include <linux/zbud.h>
+
+#include <linux/mm_types.h>
+#include <linux/page-flags.h>
+#include <linux/swapops.h>
+#include <linux/writeback.h>
+#include <linux/pagemap.h>
+
+/*********************************
+* statistics
+**********************************/
+/* Number of memory pages used by the compressed pool */
+static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
+/* The number of compressed pages currently stored in zswap */
+static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
+
+/*
+ * The statistics below are not protected from concurrent access for
+ * performance reasons so they may not be 100% accurate. However,
+ * they do provide useful information on roughly how many times a
+ * certain event is occurring.
+*/
+static u64 zswap_pool_limit_hit;
+static u64 zswap_written_back_pages;
+static u64 zswap_reject_reclaim_fail;
+static u64 zswap_reject_compress_poor;
+static u64 zswap_reject_alloc_fail;
+static u64 zswap_reject_kmemcache_fail;
+static u64 zswap_duplicate_entry;
+
+/*********************************
+* tunables
+**********************************/
+/* Enable/disable zswap (disabled by default, fixed at boot for now) */
+static bool zswap_enabled;
+module_param_named(enabled, zswap_enabled, bool, 0);
+
+/* Compressor to be used by zswap (fixed at boot for now) */
+#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
+static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
+module_param_named(compressor, zswap_compressor, charp, 0);
+
+/* The maximum percentage of memory that the compressed pool can occupy */
+static unsigned int zswap_max_pool_percent = 20;
+module_param_named(max_pool_percent,
+ zswap_max_pool_percent, uint, 0644);
+
+/*
+ * Maximum compression ratio, as a percentage, for an acceptable
+ * compressed page. Any pages that do not compress by at least
+ * this ratio will be rejected.
+*/
+static unsigned int zswap_max_compression_ratio = 80;
+module_param_named(max_compression_ratio,
+ zswap_max_compression_ratio, uint, 0644);
+
+/*********************************
+* compression functions
+**********************************/
+/* per-cpu compression transforms */
+static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms;
+
+enum comp_op {
+ ZSWAP_COMPOP_COMPRESS,
+ ZSWAP_COMPOP_DECOMPRESS
+};
+
+static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
+{
+ struct crypto_comp *tfm;
+ int ret;
+
+ tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu());
+ switch (op) {
+ case ZSWAP_COMPOP_COMPRESS:
+ ret = crypto_comp_compress(tfm, src, slen, dst, dlen);
+ break;
+ case ZSWAP_COMPOP_DECOMPRESS:
+ ret = crypto_comp_decompress(tfm, src, slen, dst, dlen);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ put_cpu();
+ return ret;
+}
+
+static int __init zswap_comp_init(void)
+{
+ if (!crypto_has_comp(zswap_compressor, 0, 0)) {
+ pr_info("%s compressor not available\n", zswap_compressor);
+ /* fall back to default compressor */
+ zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
+ if (!crypto_has_comp(zswap_compressor, 0, 0))
+ /* can't even load the default compressor */
+ return -ENODEV;
+ }
+ pr_info("using %s compressor\n", zswap_compressor);
+
+ /* alloc percpu transforms */
+ zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *);
+ if (!zswap_comp_pcpu_tfms)
+ return -ENOMEM;
+ return 0;
+}
+
+static void zswap_comp_exit(void)
+{
+ /* free percpu transforms */
+ if (zswap_comp_pcpu_tfms)
+ free_percpu(zswap_comp_pcpu_tfms);
+}
+
+/*********************************
+* data structures
+**********************************/
+/*
+ * struct zswap_entry
+ *
+ * This structure contains the metadata for tracking a single compressed
+ * page within zswap.
+ *
+ * rbnode - links the entry into red-black tree for the appropriate swap type
+ * refcount - the number of outstanding references to the entry. This is needed
+ * to protect against premature freeing of the entry by
+ * concurrent calls to load, invalidate, and writeback. The lock
+ * for the zswap_tree structure that contains the entry must
+ * be held while changing the refcount. Since the lock must
+ * be held, there is no reason to also make refcount atomic.
+ * type - the swap type for the entry. Used to map back to the zswap_tree
+ * structure that contains the entry.
+ * offset - the swap offset for the entry. Index into the red-black tree.
+ * handle - zbud allocation handle that stores the compressed page data
+ * length - the length in bytes of the compressed page data. Needed during
+ * decompression
+ */
+struct zswap_entry {
+ struct rb_node rbnode;
+ pgoff_t offset;
+ int refcount;
+ unsigned int length;
+ unsigned long handle;
+};
+
+struct zswap_header {
+ swp_entry_t swpentry;
+};
+
+/*
+ * The tree lock in the zswap_tree struct protects a few things:
+ * - the rbtree
+ * - the refcount field of each entry in the tree
+ */
+struct zswap_tree {
+ struct rb_root rbroot;
+ spinlock_t lock;
+ struct zbud_pool *pool;
+ unsigned type;
+};
+
+static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
+
+/*********************************
+* zswap entry functions
+**********************************/
+#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache"
+static struct kmem_cache *zswap_entry_cache;
+
+static inline int zswap_entry_cache_create(void)
+{
+ zswap_entry_cache =
+ kmem_cache_create(ZSWAP_KMEM_CACHE_NAME,
+ sizeof(struct zswap_entry), 0, 0, NULL);
+ return (zswap_entry_cache == NULL);
+}
+
+static inline void zswap_entry_cache_destory(void)
+{
+ kmem_cache_destroy(zswap_entry_cache);
+}
+
+static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
+{
+ struct zswap_entry *entry;
+ entry = kmem_cache_alloc(zswap_entry_cache, gfp);
+ if (!entry)
+ return NULL;
+ entry->refcount = 1;
+ return entry;
+}
+
+static inline void zswap_entry_cache_free(struct zswap_entry *entry)
+{
+ kmem_cache_free(zswap_entry_cache, entry);
+}
+
+static inline void zswap_entry_get(struct zswap_entry *entry)
+{
+ entry->refcount++;
+}
+
+static inline int zswap_entry_put(struct zswap_entry *entry)
+{
+ entry->refcount--;
+ return entry->refcount;
+}
+
+/*********************************
+* rbtree functions
+**********************************/
+static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
+{
+ struct rb_node *node = root->rb_node;
+ struct zswap_entry *entry;
+
+ while (node) {
+ entry = rb_entry(node, struct zswap_entry, rbnode);
+ if (entry->offset > offset)
+ node = node->rb_left;
+ else if (entry->offset < offset)
+ node = node->rb_right;
+ else
+ return entry;
+ }
+ return NULL;
+}
+
+/*
+ * In the case that an entry with the same offset is found, a pointer to
+ * the existing entry is stored in dupentry and the function returns -EEXIST
+*/
+static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
+ struct zswap_entry **dupentry)
+{
+ struct rb_node **link = &root->rb_node, *parent = NULL;
+ struct zswap_entry *myentry;
+
+ while (*link) {
+ parent = *link;
+ myentry = rb_entry(parent, struct zswap_entry, rbnode);
+ if (myentry->offset > entry->offset)
+ link = &(*link)->rb_left;
+ else if (myentry->offset < entry->offset)
+ link = &(*link)->rb_right;
+ else {
+ *dupentry = myentry;
+ return -EEXIST;
+ }
+ }
+ rb_link_node(&entry->rbnode, parent, link);
+ rb_insert_color(&entry->rbnode, root);
+ return 0;
+}
+
+/*********************************
+* per-cpu code
+**********************************/
+static DEFINE_PER_CPU(u8 *, zswap_dstmem);
+
+static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
+{
+ struct crypto_comp *tfm;
+ u8 *dst;
+
+ switch (action) {
+ case CPU_UP_PREPARE:
+ tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
+ if (IS_ERR(tfm)) {
+ pr_err("can't allocate compressor transform\n");
+ return NOTIFY_BAD;
+ }
+ *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
+ dst = kmalloc(PAGE_SIZE * 2, GFP_KERNEL);
+ if (!dst) {
+ pr_err("can't allocate compressor buffer\n");
+ crypto_free_comp(tfm);
+ *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
+ return NOTIFY_BAD;
+ }
+ per_cpu(zswap_dstmem, cpu) = dst;
+ break;
+ case CPU_DEAD:
+ case CPU_UP_CANCELED:
+ tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu);
+ if (tfm) {
+ crypto_free_comp(tfm);
+ *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
+ }
+ dst = per_cpu(zswap_dstmem, cpu);
+ kfree(dst);
+ per_cpu(zswap_dstmem, cpu) = NULL;
+ break;
+ default:
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static int zswap_cpu_notifier(struct notifier_block *nb,
+ unsigned long action, void *pcpu)
+{
+ unsigned long cpu = (unsigned long)pcpu;
+ return __zswap_cpu_notifier(action, cpu);
+}
+
+static struct notifier_block zswap_cpu_notifier_block = {
+ .notifier_call = zswap_cpu_notifier
+};
+
+static int zswap_cpu_init(void)
+{
+ unsigned long cpu;
+
+ get_online_cpus();
+ for_each_online_cpu(cpu)
+ if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK)
+ goto cleanup;
+ register_cpu_notifier(&zswap_cpu_notifier_block);
+ put_online_cpus();
+ return 0;
+
+cleanup:
+ for_each_online_cpu(cpu)
+ __zswap_cpu_notifier(CPU_UP_CANCELED, cpu);
+ put_online_cpus();
+ return -ENOMEM;
+}
+
+/*********************************
+* helpers
+**********************************/
+static inline bool zswap_is_full(void)
+{
+ int pool_pages = atomic_read(&zswap_pool_pages);
+ return (totalram_pages * zswap_max_pool_percent / 100 < pool_pages);
+}
+
+/*
+ * Carries out the common pattern of freeing an entry's zbud allocation,
+ * freeing the entry itself, and decrementing the number of stored pages.
+ */
+static void zswap_free_entry(struct zswap_tree *tree, struct zswap_entry *entry)
+{
+ zbud_free(tree->pool, entry->handle);
+ zswap_entry_cache_free(entry);
+ atomic_dec(&zswap_stored_pages);
+ atomic_set(&zswap_pool_pages, zbud_get_pool_size(tree->pool));
+}
+
+/*********************************
+* writeback code
+**********************************/
+/* return enum for zswap_get_swap_cache_page */
+enum zswap_get_swap_ret {
+ ZSWAP_SWAPCACHE_NEW,
+ ZSWAP_SWAPCACHE_EXIST,
+ ZSWAP_SWAPCACHE_NOMEM
+};
+
+/*
+ * zswap_get_swap_cache_page
+ *
+ * This is an adaptation of read_swap_cache_async()
+ *
+ * This function tries to find a page with the given swap entry
+ * in the swapper_space address space (the swap cache). If the page
+ * is found, it is returned in retpage. Otherwise, a page is allocated,
+ * added to the swap cache, and returned in retpage.
+ *
+ * On success, the swap cache page is returned in retpage
+ * Returns 0 if page was already in the swap cache, page is not locked
+ * Returns 1 if the new page needs to be populated, page is locked
+ * Returns <0 on error
+ */
+static int zswap_get_swap_cache_page(swp_entry_t entry,
+ struct page **retpage)
+{
+ struct page *found_page, *new_page = NULL;
+ struct address_space *swapper_space = &swapper_spaces[swp_type(entry)];
+ int err;
+
+ *retpage = NULL;
+ do {
+ /*
+ * First check the swap cache. Since this is normally
+ * called after lookup_swap_cache() failed, re-calling
+ * that would confuse statistics.
+ */
+ found_page = find_get_page(swapper_space, entry.val);
+ if (found_page)
+ break;
+
+ /*
+ * Get a new page to read into from swap.
+ */
+ if (!new_page) {
+ new_page = alloc_page(GFP_KERNEL);
+ if (!new_page)
+ break; /* Out of memory */
+ }
+
+ /*
+ * call radix_tree_preload() while we can wait.
+ */
+ err = radix_tree_preload(GFP_KERNEL);
+ if (err)
+ break;
+
+ /*
+ * Swap entry may have been freed since our caller observed it.
+ */
+ err = swapcache_prepare(entry);
+ if (err == -EEXIST) { /* seems racy */
+ radix_tree_preload_end();
+ continue;
+ }
+ if (err) { /* swp entry is obsolete ? */
+ radix_tree_preload_end();
+ break;
+ }
+
+ /* May fail (-ENOMEM) if radix-tree node allocation failed. */
+ __set_page_locked(new_page);
+ SetPageSwapBacked(new_page);
+ err = __add_to_swap_cache(new_page, entry);
+ if (likely(!err)) {
+ radix_tree_preload_end();
+ lru_cache_add_anon(new_page);
+ *retpage = new_page;
+ return ZSWAP_SWAPCACHE_NEW;
+ }
+ radix_tree_preload_end();
+ ClearPageSwapBacked(new_page);
+ __clear_page_locked(new_page);
+ /*
+ * add_to_swap_cache() doesn't return -EEXIST, so we can safely
+ * clear SWAP_HAS_CACHE flag.
+ */
+ swapcache_free(entry, NULL);
+ } while (err != -ENOMEM);
+
+ if (new_page)
+ page_cache_release(new_page);
+ if (!found_page)
+ return ZSWAP_SWAPCACHE_NOMEM;
+ *retpage = found_page;
+ return ZSWAP_SWAPCACHE_EXIST;
+}
+
+/*
+ * Attempts to free an entry by adding a page to the swap cache,
+ * decompressing the entry data into the page, and issuing a
+ * bio write to write the page back to the swap device.
+ *
+ * This can be thought of as a "resumed writeback" of the page
+ * to the swap device. We are basically resuming the same swap
+ * writeback path that was intercepted with the frontswap_store()
+ * in the first place. After the page has been decompressed into
+ * the swap cache, the compressed version stored by zswap can be
+ * freed.
+ */
+static int zswap_writeback_entry(struct zbud_pool *pool, unsigned long handle)
+{
+ struct zswap_header *zhdr;
+ swp_entry_t swpentry;
+ struct zswap_tree *tree;
+ pgoff_t offset;
+ struct zswap_entry *entry;
+ struct page *page;
+ u8 *src, *dst;
+ unsigned int dlen;
+ int ret, refcount;
+ struct writeback_control wbc = {
+ .sync_mode = WB_SYNC_NONE,
+ };
+
+ /* extract swpentry from data */
+ zhdr = zbud_map(pool, handle);
+ swpentry = zhdr->swpentry; /* here */
+ zbud_unmap(pool, handle);
+ tree = zswap_trees[swp_type(swpentry)];
+ offset = swp_offset(swpentry);
+ BUG_ON(pool != tree->pool);
+
+ /* find and ref zswap entry */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, offset);
+ if (!entry) {
+ /* entry was invalidated */
+ spin_unlock(&tree->lock);
+ return 0;
+ }
+ zswap_entry_get(entry);
+ spin_unlock(&tree->lock);
+ BUG_ON(offset != entry->offset);
+
+ /* try to allocate swap cache page */
+ switch (zswap_get_swap_cache_page(swpentry, &page)) {
+ case ZSWAP_SWAPCACHE_NOMEM: /* no memory */
+ ret = -ENOMEM;
+ goto fail;
+
+ case ZSWAP_SWAPCACHE_EXIST: /* page is unlocked */
+ /* page is already in the swap cache, ignore for now */
+ page_cache_release(page);
+ ret = -EEXIST;
+ goto fail;
+
+ case ZSWAP_SWAPCACHE_NEW: /* page is locked */
+ /* decompress */
+ dlen = PAGE_SIZE;
+ src = (u8 *)zbud_map(tree->pool, entry->handle) +
+ sizeof(struct zswap_header);
+ dst = kmap_atomic(page);
+ ret = zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src,
+ entry->length, dst, &dlen);
+ kunmap_atomic(dst);
+ zbud_unmap(tree->pool, entry->handle);
+ BUG_ON(ret);
+ BUG_ON(dlen != PAGE_SIZE);
+
+ /* page is up to date */
+ SetPageUptodate(page);
+ }
+
+ /* start writeback */
+ SetPageReclaim(page);
+ __swap_writepage(page, &wbc, end_swap_bio_write);
+ page_cache_release(page);
+ zswap_written_back_pages++;
+
+ spin_lock(&tree->lock);
+
+ /* drop local reference */
+ zswap_entry_put(entry);
+ /* drop the initial reference from entry creation */
+ refcount = zswap_entry_put(entry);
+
+ /*
+ * There are three possible values for refcount here:
+ * (1) refcount is 1, load is in progress, unlink from rbtree,
+ * load will free
+ * (2) refcount is 0, (normal case) entry is valid,
+ * remove from rbtree and free entry
+ * (3) refcount is -1, invalidate happened during writeback,
+ * free entry
+ */
+ if (refcount >= 0) {
+ /* no invalidate yet, remove from rbtree */
+ rb_erase(&entry->rbnode, &tree->rbroot);
+ }
+ spin_unlock(&tree->lock);
+ if (refcount <= 0) {
+ /* free the entry */
+ zswap_free_entry(tree, entry);
+ return 0;
+ }
+ return -EAGAIN;
+
+fail:
+ spin_lock(&tree->lock);
+ zswap_entry_put(entry);
+ spin_unlock(&tree->lock);
+ return ret;
+}
+
+/*********************************
+* frontswap hooks
+**********************************/
+/* attempts to compress and store a single page */
+static int zswap_frontswap_store(unsigned type, pgoff_t offset,
+ struct page *page)
+{
+ struct zswap_tree *tree = zswap_trees[type];
+ struct zswap_entry *entry, *dupentry;
+ int ret;
+ unsigned int dlen = PAGE_SIZE, len;
+ unsigned long handle;
+ char *buf;
+ u8 *src, *dst;
+ struct zswap_header *zhdr;
+
+ if (!tree) {
+ ret = -ENODEV;
+ goto reject;
+ }
+
+ /* reclaim space if needed */
+ if (zswap_is_full()) {
+ zswap_pool_limit_hit++;
+ if (zbud_reclaim_page(tree->pool, 8)) {
+ zswap_reject_reclaim_fail++;
+ ret = -ENOMEM;
+ goto reject;
+ }
+ }
+
+ /* allocate entry */
+ entry = zswap_entry_cache_alloc(GFP_KERNEL);
+ if (!entry) {
+ zswap_reject_kmemcache_fail++;
+ ret = -ENOMEM;
+ goto reject;
+ }
+
+ /* compress */
+ dst = get_cpu_var(zswap_dstmem);
+ src = kmap_atomic(page);
+ ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen);
+ kunmap_atomic(src);
+ if (ret) {
+ ret = -EINVAL;
+ goto freepage;
+ }
+ len = dlen + sizeof(struct zswap_header);
+ if ((len * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
+ zswap_reject_compress_poor++;
+ ret = -E2BIG;
+ goto freepage;
+ }
+
+ /* store */
+ ret = zbud_alloc(tree->pool, len, __GFP_NORETRY | __GFP_NOWARN,
+ &handle);
+ if (ret) {
+ zswap_reject_alloc_fail++;
+ goto freepage;
+ }
+ zhdr = zbud_map(tree->pool, handle);
+ zhdr->swpentry = swp_entry(type, offset);
+ buf = (u8 *)(zhdr + 1);
+ memcpy(buf, dst, dlen);
+ zbud_unmap(tree->pool, handle);
+ put_cpu_var(zswap_dstmem);
+
+ /* populate entry */
+ entry->offset = offset;
+ entry->handle = handle;
+ entry->length = dlen;
+
+ /* map */
+ spin_lock(&tree->lock);
+ do {
+ ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
+ if (ret == -EEXIST) {
+ zswap_duplicate_entry++;
+ /* remove from rbtree */
+ rb_erase(&dupentry->rbnode, &tree->rbroot);
+ if (!zswap_entry_put(dupentry)) {
+ /* free */
+ zswap_free_entry(tree, dupentry);
+ }
+ }
+ } while (ret == -EEXIST);
+ spin_unlock(&tree->lock);
+
+ /* update stats */
+ atomic_inc(&zswap_stored_pages);
+ atomic_set(&zswap_pool_pages, zbud_get_pool_size(tree->pool));
+
+ return 0;
+
+freepage:
+ put_cpu_var(zswap_dstmem);
+ zswap_entry_cache_free(entry);
+reject:
+ return ret;
+}
+
+/*
+ * returns 0 if the page was successfully decompressed
+ * returns -1 on entry not found or error
+*/
+static int zswap_frontswap_load(unsigned type, pgoff_t offset,
+ struct page *page)
+{
+ struct zswap_tree *tree = zswap_trees[type];
+ struct zswap_entry *entry;
+ u8 *src, *dst;
+ unsigned int dlen;
+ int refcount, ret;
+
+ /* find */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, offset);
+ if (!entry) {
+ /* entry was written back */
+ spin_unlock(&tree->lock);
+ return -1;
+ }
+ zswap_entry_get(entry);
+ spin_unlock(&tree->lock);
+
+ /* decompress */
+ dlen = PAGE_SIZE;
+ src = (u8 *)zbud_map(tree->pool, entry->handle) +
+ sizeof(struct zswap_header);
+ dst = kmap_atomic(page);
+ ret = zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
+ dst, &dlen);
+ kunmap_atomic(dst);
+ zbud_unmap(tree->pool, entry->handle);
+ BUG_ON(ret);
+
+ spin_lock(&tree->lock);
+ refcount = zswap_entry_put(entry);
+ if (likely(refcount)) {
+ spin_unlock(&tree->lock);
+ return 0;
+ }
+ spin_unlock(&tree->lock);
+
+ /*
+ * We don't have to unlink from the rbtree because
+ * zswap_writeback_entry() or zswap_frontswap_invalidate_page()
+ * has already done this for us if we are the last reference.
+ */
+ /* free */
+
+ zswap_free_entry(tree, entry);
+
+ return 0;
+}
+
+/* invalidates a single page */
+static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
+{
+ struct zswap_tree *tree = zswap_trees[type];
+ struct zswap_entry *entry;
+ int refcount;
+
+ /* find */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, offset);
+ if (!entry) {
+ /* entry was written back */
+ spin_unlock(&tree->lock);
+ return;
+ }
+
+ /* remove from rbtree */
+ rb_erase(&entry->rbnode, &tree->rbroot);
+
+ /* drop the initial reference from entry creation */
+ refcount = zswap_entry_put(entry);
+
+ spin_unlock(&tree->lock);
+
+ if (refcount) {
+ /* writeback in progress, writeback will free */
+ return;
+ }
+
+ /* free */
+ zswap_free_entry(tree, entry);
+}
+
+/* invalidates all pages for the given swap type */
+static void zswap_frontswap_invalidate_area(unsigned type)
+{
+ struct zswap_tree *tree = zswap_trees[type];
+ struct rb_node *node;
+ struct zswap_entry *entry;
+
+ if (!tree)
+ return;
+
+ /* walk the tree and free everything */
+ spin_lock(&tree->lock);
+ /*
+ * TODO: Even though this code should not be executed because
+ * the try_to_unuse() in swapoff should have emptied the tree,
+ * it is very wasteful to rebalance the tree after every
+ * removal when we are freeing the whole tree.
+ *
+ * If post-order traversal code is ever added to the rbtree
+ * implementation, it should be used here.
+ */
+ while ((node = rb_first(&tree->rbroot))) {
+ entry = rb_entry(node, struct zswap_entry, rbnode);
+ rb_erase(&entry->rbnode, &tree->rbroot);
+ zbud_free(tree->pool, entry->handle);
+ zswap_entry_cache_free(entry);
+ atomic_dec(&zswap_stored_pages);
+ }
+ tree->rbroot = RB_ROOT;
+ spin_unlock(&tree->lock);
+}
+
+static struct zbud_ops zswap_zbud_ops = {
+ .evict = zswap_writeback_entry
+};
+
+/* NOTE: this is called in atomic context from swapon and must not sleep */
+static void zswap_frontswap_init(unsigned type)
+{
+ struct zswap_tree *tree;
+
+ tree = kzalloc(sizeof(struct zswap_tree), GFP_ATOMIC);
+ if (!tree)
+ goto err;
+ tree->pool = zbud_create_pool(GFP_NOWAIT, &zswap_zbud_ops);
+ if (!tree->pool)
+ goto freetree;
+ tree->rbroot = RB_ROOT;
+ spin_lock_init(&tree->lock);
+ tree->type = type;
+ zswap_trees[type] = tree;
+ return;
+
+freetree:
+ kfree(tree);
+err:
+ pr_err("alloc failed, zswap disabled for swap type %d\n", type);
+}
+
+static struct frontswap_ops zswap_frontswap_ops = {
+ .store = zswap_frontswap_store,
+ .load = zswap_frontswap_load,
+ .invalidate_page = zswap_frontswap_invalidate_page,
+ .invalidate_area = zswap_frontswap_invalidate_area,
+ .init = zswap_frontswap_init
+};
+
+/*********************************
+* debugfs functions
+**********************************/
+#ifdef CONFIG_DEBUG_FS
+#include <linux/debugfs.h>
+
+static struct dentry *zswap_debugfs_root;
+
+static int __init zswap_debugfs_init(void)
+{
+ if (!debugfs_initialized())
+ return -ENODEV;
+
+ zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
+ if (!zswap_debugfs_root)
+ return -ENOMEM;
+
+ debugfs_create_u64("pool_limit_hit", S_IRUGO,
+ zswap_debugfs_root, &zswap_pool_limit_hit);
+ debugfs_create_u64("reject_reclaim_fail", S_IRUGO,
+ zswap_debugfs_root, &zswap_reject_reclaim_fail);
+ debugfs_create_u64("reject_alloc_fail", S_IRUGO,
+ zswap_debugfs_root, &zswap_reject_alloc_fail);
+ debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
+ zswap_debugfs_root, &zswap_reject_kmemcache_fail);
+ debugfs_create_u64("reject_compress_poor", S_IRUGO,
+ zswap_debugfs_root, &zswap_reject_compress_poor);
+ debugfs_create_u64("written_back_pages", S_IRUGO,
+ zswap_debugfs_root, &zswap_written_back_pages);
+ debugfs_create_u64("duplicate_entry", S_IRUGO,
+ zswap_debugfs_root, &zswap_duplicate_entry);
+ debugfs_create_atomic_t("pool_pages", S_IRUGO,
+ zswap_debugfs_root, &zswap_pool_pages);
+ debugfs_create_atomic_t("stored_pages", S_IRUGO,
+ zswap_debugfs_root, &zswap_stored_pages);
+
+ return 0;
+}
+
+static void __exit zswap_debugfs_exit(void)
+{
+ debugfs_remove_recursive(zswap_debugfs_root);
+}
+#else
+static inline int __init zswap_debugfs_init(void)
+{
+ return 0;
+}
+
+static inline void __exit zswap_debugfs_exit(void) { }
+#endif
+
+/*********************************
+* module init and exit
+**********************************/
+static int __init init_zswap(void)
+{
+ if (!zswap_enabled)
+ return 0;
+
+ pr_info("loading zswap\n");
+ if (zswap_entry_cache_create()) {
+ pr_err("entry cache creation failed\n");
+ goto error;
+ }
+ if (zswap_comp_init()) {
+ pr_err("compressor initialization failed\n");
+ goto compfail;
+ }
+ if (zswap_cpu_init()) {
+ pr_err("per-cpu initialization failed\n");
+ goto pcpufail;
+ }
+ frontswap_register_ops(&zswap_frontswap_ops);
+ if (zswap_debugfs_init())
+ pr_warn("debugfs initialization failed\n");
+ return 0;
+pcpufail:
+ zswap_comp_exit();
+compfail:
+ zswap_entry_cache_destory();
+error:
+ return -ENOMEM;
+}
+/* must be late so crypto has time to come up */
+late_initcall(init_zswap);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Seth Jennings <[email protected]>");
+MODULE_DESCRIPTION("Compressed cache for swap pages");
--
1.8.2.2

2013-05-08 22:38:33

by Seth Jennings

Subject: [PATCHv10 4/4] zswap: add documentation

This patch adds the documentation file for the zswap functionality.

Signed-off-by: Seth Jennings <[email protected]>
---
Documentation/vm/zswap.txt | 72 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 72 insertions(+)
create mode 100644 Documentation/vm/zswap.txt

diff --git a/Documentation/vm/zswap.txt b/Documentation/vm/zswap.txt
new file mode 100644
index 0000000..88384b3
--- /dev/null
+++ b/Documentation/vm/zswap.txt
@@ -0,0 +1,72 @@
+Overview:
+
+Zswap is a lightweight compressed cache for swap pages. It takes pages that are
+in the process of being swapped out and attempts to compress them into a
+dynamically allocated RAM-based memory pool. If this process is successful,
+the writeback to the swap device is deferred and, in many cases, avoided
+completely.  This results in a significant I/O reduction and performance gains
+for systems that are swapping.
+
+Zswap provides compressed swap caching that basically trades CPU cycles for
+reduced swap I/O.  This trade-off can result in a significant performance
+improvement as reads from and writes to the compressed cache are almost always
+faster than reading from a swap device, which incurs the latency of an
+asynchronous block I/O read.
+
+Some potential benefits:
+* Desktop/laptop users with limited RAM capacities can mitigate the
+    performance impact of swapping.
+* Overcommitted guests that share a common I/O resource can
+    dramatically reduce their swap I/O pressure, avoiding heavy-handed I/O
+ throttling by the hypervisor. This allows more work to get done with less
+ impact to the guest workload and guests sharing the I/O subsystem.
+* Users with SSDs as swap devices can extend the life of the device by
+    drastically reducing life-shortening writes.
+
+Zswap evicts pages from compressed cache on an LRU basis to the backing swap
+device when the compressed pool reaches its size limit. This requirement had
+been identified in prior community discussions.
+
+To enable zswap, the "enabled" attribute must be set to 1 at boot time. e.g.
+zswap.enabled=1
+
+Design:
+
+Zswap receives pages for compression through the Frontswap API and is able to
+evict pages from its own compressed pool on an LRU basis and write them back to
+the backing swap device in the case that the compressed pool is full.
+
+Zswap makes use of zbud for managing the compressed memory pool. Each
+allocation in zbud is not directly accessible by address. Rather, a handle is
+returned by the allocation routine and that handle must be mapped before being
+accessed. The compressed memory pool grows on demand and shrinks as compressed
+pages are freed. The pool is not preallocated.
+
+When a swap page is passed from frontswap to zswap, zswap maintains a mapping
+of the swap entry, a combination of the swap type and swap offset, to the zbud
+handle that references that compressed swap page. This mapping is achieved
+with a red-black tree per swap type. The swap offset is the search key for the
+tree nodes.
+
+During a page fault on a PTE that is a swap entry, frontswap calls the zswap
+load function to decompress the page into the page allocated by the page fault
+handler.
+
+Once there are no PTEs referencing a swap page stored in zswap (i.e. the count
+in the swap_map goes to 0) the swap code calls the zswap invalidate function,
+via frontswap, to free the compressed entry.
+
+Zswap seeks to be simple in its policies. Sysfs attributes allow for two user
+controlled policies:
+* max_compression_ratio - Maximum compression ratio, as a percentage,
+ for an acceptable compressed page. Any page that does not compress by at
+ least this ratio will be rejected.
+* max_pool_percent - The maximum percentage of memory that the compressed
+ pool can occupy.
+
+Zswap allows the compressor to be selected at kernel boot time by setting the
+"compressor" attribute. The default compressor is lzo. e.g.
+zswap.compressor=deflate
+
+A debugfs interface is provided for various statistics about pool size, number
+of pages stored, and various counters for the reasons pages are rejected.
--
1.8.2.2

2013-05-08 22:38:37

by Seth Jennings

Subject: [PATCHv10 2/4] zbud: add to mm/

zbud is a special purpose allocator for storing compressed pages. It is
designed to store up to two compressed pages per physical page. While this
design limits storage density, it has simple and deterministic reclaim
properties that make it preferable to a higher density approach when reclaim
will be used.

zbud works by storing compressed pages, or "zpages", together in pairs in a
single memory page called a "zbud page". The first buddy is "left
justified" at the beginning of the zbud page, and the last buddy is "right
justified" at the end of the zbud page. The benefit is that if either
buddy is freed, the freed buddy space, coalesced with whatever slack space
that existed between the buddies, results in the largest possible free region
within the zbud page.

zbud also provides an attractive lower bound on density. The ratio of zpages
to zbud pages can not be less than 1. This ensures that zbud can never "do
harm" by using more pages to store zpages than the uncompressed zpages would
have used on their own.

This patch adds zbud to mm/ for later use by zswap.
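
For orientation (not part of the patch), a typical calling sequence against
the API declared in include/linux/zbud.h below might look like this;
example_store(), my_evict(), src_buf and src_len are made-up names used only
for illustration:

  /* refuse eviction in this trivial example; zswap supplies a real callback */
  static int my_evict(struct zbud_pool *pool, unsigned long handle)
  {
          return -EINVAL;
  }

  static struct zbud_ops my_ops = { .evict = my_evict };

  static int example_store(void *src_buf, int src_len)
  {
          struct zbud_pool *pool;
          unsigned long handle;
          void *addr;
          int ret;

          pool = zbud_create_pool(GFP_KERNEL, &my_ops);
          if (!pool)
                  return -ENOMEM;
          ret = zbud_alloc(pool, src_len, GFP_KERNEL, &handle); /* 0 on success */
          if (!ret) {
                  addr = zbud_map(pool, handle);  /* handle -> usable pointer */
                  memcpy(addr, src_buf, src_len);
                  zbud_unmap(pool, handle);       /* unmap when done accessing */
                  zbud_free(pool, handle);
          }
          zbud_destroy_pool(pool);
          return ret;
  }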

Signed-off-by: Seth Jennings <[email protected]>
---
include/linux/zbud.h | 22 ++
mm/Kconfig | 10 +
mm/Makefile | 1 +
mm/zbud.c | 564 +++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 597 insertions(+)
create mode 100644 include/linux/zbud.h
create mode 100644 mm/zbud.c

diff --git a/include/linux/zbud.h b/include/linux/zbud.h
new file mode 100644
index 0000000..954252b
--- /dev/null
+++ b/include/linux/zbud.h
@@ -0,0 +1,22 @@
+#ifndef _ZBUD_H_
+#define _ZBUD_H_
+
+#include <linux/types.h>
+
+struct zbud_pool;
+
+struct zbud_ops {
+ int (*evict)(struct zbud_pool *pool, unsigned long handle);
+};
+
+struct zbud_pool *zbud_create_pool(gfp_t gfp, struct zbud_ops *ops);
+void zbud_destroy_pool(struct zbud_pool *pool);
+int zbud_alloc(struct zbud_pool *pool, int size, gfp_t gfp,
+ unsigned long *handle);
+void zbud_free(struct zbud_pool *pool, unsigned long handle);
+int zbud_reclaim_page(struct zbud_pool *pool, unsigned int retries);
+void *zbud_map(struct zbud_pool *pool, unsigned long handle);
+void zbud_unmap(struct zbud_pool *pool, unsigned long handle);
+int zbud_get_pool_size(struct zbud_pool *pool);
+
+#endif /* _ZBUD_H_ */
diff --git a/mm/Kconfig b/mm/Kconfig
index e742d06..908f41b 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -477,3 +477,13 @@ config FRONTSWAP
and swap data is stored as normal on the matching swap device.

If unsure, say Y to enable frontswap.
+
+config ZBUD
+ tristate "Buddy allocator for compressed pages"
+ default n
+ help
+ zbud is a special purpose allocator for storing compressed pages.
+ It is designed to store up to two compressed pages per physical page.
+ While this design limits storage density, it has simple and
+ deterministic reclaim properties that make it preferable to a higher
+ density approach when reclaim will be used.
diff --git a/mm/Makefile b/mm/Makefile
index 72c5acb..95f0197 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
obj-$(CONFIG_CLEANCACHE) += cleancache.o
obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
+obj-$(CONFIG_ZBUD) += zbud.o
diff --git a/mm/zbud.c b/mm/zbud.c
new file mode 100644
index 0000000..e5bd0e6
--- /dev/null
+++ b/mm/zbud.c
@@ -0,0 +1,564 @@
+/*
+ * zbud.c - Buddy Allocator for Compressed Pages
+ *
+ * Copyright (C) 2013, Seth Jennings, IBM
+ *
+ * Concepts based on zcache internal zbud allocator by Dan Magenheimer.
+ *
+ * zbud is a special purpose allocator for storing compressed pages. It is
+ * designed to store up to two compressed pages per physical page. While this
+ * design limits storage density, it has simple and deterministic reclaim
+ * properties that make it preferable to a higher density approach when reclaim
+ * will be used.
+ *
+ * zbud works by storing compressed pages, or "zpages", together in pairs in a
+ * single memory page called a "zbud page". The first buddy is "left
+ * justifed" at the beginning of the zbud page, and the last buddy is "right
+ * justified" at the end of the zbud page. The benefit is that if either
+ * buddy is freed, the freed buddy space, coalesced with whatever slack space
+ * that existed between the buddies, results in the largest possible free region
+ * within the zbud page.
+ *
+ * zbud also provides an attractive lower bound on density. The ratio of zpages
+ * to zbud pages can not be less than 1. This ensures that zbud can never "do
+ * harm" by using more pages to store zpages than the uncompressed zpages would
+ * have used on their own.
+ *
+ * zbud pages are divided into "chunks". The size of the chunks is fixed at
+ * compile time and determined by NCHUNKS_ORDER below. Dividing zbud pages
+ * into chunks allows organizing unbuddied zbud pages into a manageable number
+ * of unbuddied lists according to the number of free chunks available in the
+ * zbud page.
+ *
+ * The zbud API differs from that of conventional allocators in that the
+ * allocation function, zbud_alloc(), returns an opaque handle to the user,
+ * not a dereferenceable pointer. The user must map the handle using
+ * zbud_map() in order to get a usable pointer by which to access the
+ * allocation data and unmap the handle with zbud_unmap() when operations
+ * on the allocation data are complete.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/atomic.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/preempt.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/zbud.h>
+
+/*****************
+ * Structures
+*****************/
+/**
+ * struct zbud_page - zbud page metadata overlay
+ * @page: typed reference to the underlying struct page
+ * @donotuse: this overlays the page flags and should not be used
+ * @first_chunks: the size of the first buddy in chunks, 0 if free
+ * @last_chunks: the size of the last buddy in chunks, 0 if free
+ * @buddy: links the zbud page into the unbuddied/buddied lists in the pool
+ * @lru: links the zbud page into the lru list in the pool
+ *
+ * This structure overlays the struct page to store metadata needed for a
+ * single storage page in zbud. There is a BUILD_BUG_ON in zbud_init()
+ * that ensures this structure is not larger than struct page.
+ *
+ * The PG_reclaim flag of the underlying page is used for indicating
+ * that this zbud page is under reclaim (see zbud_reclaim_page())
+ */
+struct zbud_page {
+ union {
+ struct page page;
+ struct {
+ unsigned long donotuse;
+ u16 first_chunks;
+ u16 last_chunks;
+ struct list_head buddy;
+ struct list_head lru;
+ };
+ };
+};
+
+/*
+ * NCHUNKS_ORDER determines the internal allocation granularity, effectively
+ * adjusting internal fragmentation. It also determines the number of
+ * freelists maintained in each pool. NCHUNKS_ORDER of 6 means that the
+ * allocation granularity will be in chunks of size PAGE_SIZE/64, and there
+ * will be 64 freelists per pool.
+ */
+#define NCHUNKS_ORDER 6
+
+#define CHUNK_SHIFT (PAGE_SHIFT - NCHUNKS_ORDER)
+#define CHUNK_SIZE (1 << CHUNK_SHIFT)
+#define NCHUNKS (PAGE_SIZE >> CHUNK_SHIFT)
+
+/**
+ * struct zbud_pool - stores metadata for each zbud pool
+ * @lock: protects all pool lists and first|last_chunk fields of any
+ * zbud page in the pool
+ * @unbuddied: array of lists tracking zbud pages that only contain one buddy;
+ * the lists each zbud page is added to depends on the size of
+ * its free region.
+ * @buddied: list tracking the zbud pages that contain two buddies;
+ * these zbud pages are full
+ * @pages_nr: number of zbud pages in the pool.
+ * @ops: pointer to a structure of user defined operations specified at
+ * pool creation time.
+ *
+ * This structure is allocated at pool creation time and maintains metadata
+ * pertaining to a particular zbud pool.
+ */
+struct zbud_pool {
+ spinlock_t lock;
+ struct list_head unbuddied[NCHUNKS];
+ struct list_head buddied;
+ struct list_head lru;
+ atomic_t pages_nr;
+ struct zbud_ops *ops;
+};
+
+/*****************
+ * Helpers
+*****************/
+/* Just to make the code easier to read */
+enum buddy {
+ FIRST,
+ LAST
+};
+
+/* Converts an allocation size in bytes to size in zbud chunks */
+static inline int size_to_chunks(int size)
+{
+ return (size + CHUNK_SIZE - 1) >> CHUNK_SHIFT;
+}
+
+#define for_each_unbuddied_list(_iter, _begin) \
+ for ((_iter) = (_begin); (_iter) < NCHUNKS; (_iter)++)
+
+/* Initializes a zbud page from a newly allocated page */
+static inline struct zbud_page *init_zbud_page(struct page *page)
+{
+ struct zbud_page *zbpage = (struct zbud_page *)page;
+ zbpage->first_chunks = 0;
+ zbpage->last_chunks = 0;
+ INIT_LIST_HEAD(&zbpage->buddy);
+ INIT_LIST_HEAD(&zbpage->lru);
+ return zbpage;
+}
+
+/* Resets a zbud page so that it can be properly freed */
+static inline struct page *reset_zbud_page(struct zbud_page *zbpage)
+{
+ struct page *page = &zbpage->page;
+ set_page_private(page, 0);
+ page->mapping = NULL;
+ page->index = 0;
+ page_mapcount_reset(page);
+ init_page_count(page);
+ INIT_LIST_HEAD(&page->lru);
+ return page;
+}
+
+/*
+ * Encodes the handle of a particular buddy within a zbud page
+ * Pool lock should be held as this function accesses first|last_chunks
+ */
+static inline unsigned long encode_handle(struct zbud_page *zbpage,
+ enum buddy bud)
+{
+ unsigned long handle;
+
+ /*
+ * For now, the encoded handle is actually just the pointer to the data
+ * but this might not always be the case. A little information hiding.
+ */
+ handle = (unsigned long)page_address(&zbpage->page);
+ if (bud == FIRST)
+ return handle;
+ handle += PAGE_SIZE - (zbpage->last_chunks << CHUNK_SHIFT);
+ return handle;
+}
+
+/* Returns the zbud page where a given handle is stored */
+static inline struct zbud_page *handle_to_zbud_page(unsigned long handle)
+{
+ return (struct zbud_page *)(virt_to_page(handle));
+}
+
+/* Returns the number of free chunks in a zbud page */
+static inline int num_free_chunks(struct zbud_page *zbpage)
+{
+ /*
+ * Rather than branch for different situations, just use the fact that
+ * free buddies have a length of zero to simplify everything.
+ */
+ return NCHUNKS - zbpage->first_chunks - zbpage->last_chunks;
+}
+
+/*****************
+ * API Functions
+*****************/
+/**
+ * zbud_create_pool() - create a new zbud pool
+ * @gfp: gfp flags when allocating the zbud pool structure
+ * @ops: user-defined operations for the zbud pool
+ *
+ * Return: pointer to the new zbud pool or NULL if the metadata allocation
+ * failed.
+ */
+struct zbud_pool *zbud_create_pool(gfp_t gfp, struct zbud_ops *ops)
+{
+ struct zbud_pool *pool;
+ int i;
+
+ pool = kmalloc(sizeof(struct zbud_pool), gfp);
+ if (!pool)
+ return NULL;
+ spin_lock_init(&pool->lock);
+ for_each_unbuddied_list(i, 0)
+ INIT_LIST_HEAD(&pool->unbuddied[i]);
+ INIT_LIST_HEAD(&pool->buddied);
+ INIT_LIST_HEAD(&pool->lru);
+ atomic_set(&pool->pages_nr, 0);
+ pool->ops = ops;
+ return pool;
+}
+EXPORT_SYMBOL_GPL(zbud_create_pool);
+
+/**
+ * zbud_destroy_pool() - destroys an existing zbud pool
+ * @pool: the zbud pool to be destroyed
+ */
+void zbud_destroy_pool(struct zbud_pool *pool)
+{
+ kfree(pool);
+}
+EXPORT_SYMBOL_GPL(zbud_destroy_pool);
+
+/**
+ * zbud_alloc() - allocates a region of a given size
+ * @pool: zbud pool from which to allocate
+ * @size: size in bytes of the desired allocation
+ * @gfp: gfp flags used if the pool needs to grow
+ * @handle: handle of the new allocation
+ *
+ * This function will attempt to find a free region in the pool large
+ * enough to satisfy the allocation request. First, it tries to use
+ * free space in the most recently used zbud page, at the beginning of
+ * the pool LRU list. If that zbud page is full or doesn't have the
+ * required free space, a best fit search of the unbuddied lists is
+ * performed. If no suitable free region is found, then a new page
+ * is allocated and added to the pool to satisfy the request.
+ *
+ * gfp should not set __GFP_HIGHMEM as highmem pages cannot be used
+ * as zbud pool pages.
+ *
+ * Return: 0 if success and handle is set, otherwise -EINVAL if the size or
+ * gfp arguments are invalid, or -ENOMEM if the pool was unable to allocate
+ * a new page.
+ */
+int zbud_alloc(struct zbud_pool *pool, int size, gfp_t gfp,
+ unsigned long *handle)
+{
+ int chunks, i, freechunks;
+ struct zbud_page *zbpage = NULL;
+ enum buddy bud;
+ struct page *page;
+
+ if (size <= 0 || size > PAGE_SIZE || gfp & __GFP_HIGHMEM)
+ return -EINVAL;
+ chunks = size_to_chunks(size);
+ spin_lock(&pool->lock);
+
+ /*
+ * First, try to use the zbpage we last used (at the head of the
+ * LRU) to increase LRU locality of the buddies. This is first fit.
+ */
+ if (!list_empty(&pool->lru)) {
+ zbpage = list_first_entry(&pool->lru, struct zbud_page, lru);
+ if (num_free_chunks(zbpage) >= chunks) {
+ if (zbpage->first_chunks == 0) {
+ list_del(&zbpage->buddy);
+ bud = FIRST;
+ goto found;
+ }
+ if (zbpage->last_chunks == 0) {
+ list_del(&zbpage->buddy);
+ bud = LAST;
+ goto found;
+ }
+ }
+ }
+
+ /* Second, try to find an unbuddied zbpage. This is best fit. */
+ zbpage = NULL;
+ for_each_unbuddied_list(i, chunks) {
+ if (!list_empty(&pool->unbuddied[i])) {
+ zbpage = list_first_entry(&pool->unbuddied[i],
+ struct zbud_page, buddy);
+ list_del(&zbpage->buddy);
+ if (zbpage->first_chunks == 0)
+ bud = FIRST;
+ else
+ bud = LAST;
+ goto found;
+ }
+ }
+
+ /* Lastly, couldn't find unbuddied zbpage, create new one */
+ spin_unlock(&pool->lock);
+ page = alloc_page(gfp);
+ if (!page)
+ return -ENOMEM;
+ spin_lock(&pool->lock);
+ atomic_inc(&pool->pages_nr);
+ zbpage = init_zbud_page(page);
+ bud = FIRST;
+
+found:
+ if (bud == FIRST)
+ zbpage->first_chunks = chunks;
+ else
+ zbpage->last_chunks = chunks;
+
+ if (zbpage->first_chunks == 0 || zbpage->last_chunks == 0) {
+ /* Add to unbuddied list */
+ freechunks = num_free_chunks(zbpage);
+ list_add(&zbpage->buddy, &pool->unbuddied[freechunks]);
+ } else {
+ /* Add to buddied list */
+ list_add(&zbpage->buddy, &pool->buddied);
+ }
+
+ /* Add/move zbpage to beginning of LRU */
+ if (!list_empty(&zbpage->lru))
+ list_del(&zbpage->lru);
+ list_add(&zbpage->lru, &pool->lru);
+
+ *handle = encode_handle(zbpage, bud);
+ spin_unlock(&pool->lock);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(zbud_alloc);
+
+/**
+ * zbud_free() - frees the allocation associated with the given handle
+ * @pool: pool in which the allocation resided
+ * @handle: handle associated with the allocation returned by zbud_alloc()
+ *
+ * In the case that the zbud page in which the allocation resides is under
+ * reclaim, as indicated by the PG_reclaim flag being set, this function
+ * only sets the first|last_chunks to 0. The page is actually freed
+ * once both buddies are evicted (see zbud_reclaim_page() below).
+ */
+void zbud_free(struct zbud_pool *pool, unsigned long handle)
+{
+ struct zbud_page *zbpage;
+ int freechunks;
+
+ spin_lock(&pool->lock);
+ zbpage = handle_to_zbud_page(handle);
+
+ /* If first buddy, handle will be page aligned */
+ if (handle & ~PAGE_MASK)
+ zbpage->last_chunks = 0;
+ else
+ zbpage->first_chunks = 0;
+
+ if (PageReclaim(&zbpage->page)) {
+ /* zbpage is under reclaim, reclaim will free */
+ spin_unlock(&pool->lock);
+ return;
+ }
+
+ /* Remove from existing buddy list */
+ list_del(&zbpage->buddy);
+
+ if (zbpage->first_chunks == 0 && zbpage->last_chunks == 0) {
+ /* zbpage is empty, free */
+ list_del(&zbpage->lru);
+ __free_page(reset_zbud_page(zbpage));
+ atomic_dec(&pool->pages_nr);
+ } else {
+ /* Add to unbuddied list */
+ freechunks = num_free_chunks(zbpage);
+ list_add(&zbpage->buddy, &pool->unbuddied[freechunks]);
+ }
+
+ spin_unlock(&pool->lock);
+}
+EXPORT_SYMBOL_GPL(zbud_free);
+
+#define list_tail_entry(ptr, type, member) \
+ list_entry((ptr)->prev, type, member)
+
+/**
+ * zbud_reclaim_page() - evicts allocations from a pool page and frees it
+ * @pool: pool from which to attempt page eviction
+ * @retries: number of pages on the LRU list for which eviction will
+ * be attempted before failing
+ *
+ * zbud reclaim is different from normal system reclaim in that the reclaim is
+ * done from the bottom, up. This is because only the bottom layer, zbud, has
+ * information on how the allocations are organized within each zbud page. This
+ * has the potential to create interesting locking situations between zbud and
+ * the user, however.
+ *
+ * To avoid these, this is how zbud_reclaim_page() should be called:
+ *
+ * The user detects a page should be reclaimed and calls zbud_reclaim_page().
+ * zbud_reclaim_page() will remove a zbud page from the pool LRU list and call
+ * the user-defined eviction handler with the pool and handle as arguments.
+ *
+ * If the handle cannot be evicted, the eviction handler should return
+ * non-zero. zbud_reclaim_page() will add the zbud page back to the
+ * appropriate list and try the next zbud page on the LRU up to
+ * a user defined number of retries.
+ *
+ * If the handle is successfully evicted, the eviction handler should
+ * return 0 _and_ should have called zbud_free() on the handle. zbud_free()
+ * contains logic to delay freeing the page if the page is under reclaim,
+ * as indicated by the setting of the PG_reclaim flag on the underlying page.
+ *
+ * If all buddies in the zbud page are successfully evicted, then the
+ * zbud page can be freed.
+ *
+ * Returns: 0 if page is successfully freed, otherwise -EINVAL if there are
+ * no pages to evict or an eviction handler is not registered, or -EAGAIN if
+ * the retry limit was hit.
+ */
+int zbud_reclaim_page(struct zbud_pool *pool, unsigned int retries)
+{
+ int i, ret, freechunks;
+ struct zbud_page *zbpage;
+ unsigned long first_handle = 0, last_handle = 0;
+
+ spin_lock(&pool->lock);
+ if (!pool->ops || !pool->ops->evict || list_empty(&pool->lru) ||
+ retries == 0) {
+ spin_unlock(&pool->lock);
+ return -EINVAL;
+ }
+ for (i = 0; i < retries; i++) {
+ zbpage = list_tail_entry(&pool->lru, struct zbud_page, lru);
+ list_del(&zbpage->lru);
+ list_del(&zbpage->buddy);
+ /* Protect zbpage against free */
+ SetPageReclaim(&zbpage->page);
+ /*
+	 * We need to encode the handles before unlocking, since we can
+ * race with free that will set (first|last)_chunks to 0
+ */
+ first_handle = 0;
+ last_handle = 0;
+ if (zbpage->first_chunks)
+ first_handle = encode_handle(zbpage, FIRST);
+ if (zbpage->last_chunks)
+ last_handle = encode_handle(zbpage, LAST);
+ spin_unlock(&pool->lock);
+
+ /* Issue the eviction callback(s) */
+ if (first_handle) {
+ ret = pool->ops->evict(pool, first_handle);
+ if (ret)
+ goto next;
+ }
+ if (last_handle) {
+ ret = pool->ops->evict(pool, last_handle);
+ if (ret)
+ goto next;
+ }
+next:
+ spin_lock(&pool->lock);
+ ClearPageReclaim(&zbpage->page);
+ if (zbpage->first_chunks == 0 && zbpage->last_chunks == 0) {
+ /*
+ * Both buddies are now free, free the zbpage and
+ * return success.
+ */
+ __free_page(reset_zbud_page(zbpage));
+ atomic_dec(&pool->pages_nr);
+ spin_unlock(&pool->lock);
+ return 0;
+ } else if (zbpage->first_chunks == 0 ||
+ zbpage->last_chunks == 0) {
+ /* add to unbuddied list */
+ freechunks = num_free_chunks(zbpage);
+ list_add(&zbpage->buddy, &pool->unbuddied[freechunks]);
+ } else {
+ /* add to buddied list */
+ list_add(&zbpage->buddy, &pool->buddied);
+ }
+
+ /* add to beginning of LRU */
+ list_add(&zbpage->lru, &pool->lru);
+ }
+ spin_unlock(&pool->lock);
+ return -EAGAIN;
+}
+EXPORT_SYMBOL_GPL(zbud_reclaim_page);
+
+/**
+ * zbud_map() - maps the allocation associated with the given handle
+ * @pool: pool in which the allocation resides
+ * @handle: handle associated with the allocation to be mapped
+ *
+ * While trivial for zbud, the mapping functions for other allocators
+ * implementing this allocation API could have more complex information encoded
+ * in the handle and could create temporary mappings to make the data
+ * accessible to the user.
+ *
+ * Returns: a pointer to the mapped allocation
+ */
+void *zbud_map(struct zbud_pool *pool, unsigned long handle)
+{
+ return (void *)(handle);
+}
+EXPORT_SYMBOL_GPL(zbud_map);
+
+/**
+ * zbud_unmap() - unmaps the allocation associated with the given handle
+ * @pool: pool in which the allocation resides
+ * @handle: handle associated with the allocation to be unmapped
+ */
+void zbud_unmap(struct zbud_pool *pool, unsigned long handle)
+{
+}
+EXPORT_SYMBOL_GPL(zbud_unmap);
+
+/**
+ * zbud_get_pool_size() - gets the zbud pool size in pages
+ * @pool: pool whose size is being queried
+ *
+ * Returns: size in pages of the given pool
+ */
+int zbud_get_pool_size(struct zbud_pool *pool)
+{
+ return atomic_read(&pool->pages_nr);
+}
+EXPORT_SYMBOL_GPL(zbud_get_pool_size);
+
+static int __init init_zbud(void)
+{
+ /* Make sure we aren't overflowing the underlying struct page */
+ BUILD_BUG_ON(sizeof(struct zbud_page) != sizeof(struct page));
+ /* Make sure we can represent any chunk offset with a u16 */
+ BUILD_BUG_ON(sizeof(u16) * BITS_PER_BYTE < PAGE_SHIFT - CHUNK_SHIFT);
+ pr_info("loaded\n");
+ return 0;
+}
+
+static void __exit exit_zbud(void)
+{
+ pr_info("unloaded\n");
+}
+
+module_init(init_zbud);
+module_exit(exit_zbud);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Seth Jennings <[email protected]>");
+MODULE_DESCRIPTION("Buddy Allocator for Compressed Pages");
--
1.8.2.2
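
As an aside for readers following the API above: here is a minimal,
illustrative sketch (not part of the patch) of how a caller such as zswap
might drive zbud. The header name <linux/zbud.h>, the example_* names and
the shrink policy are assumptions made purely for illustration.

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/string.h>
#include <linux/zbud.h>		/* assumed location of the API in this patchset */

/*
 * Eviction callback invoked by zbud_reclaim_page(). A real user would
 * write the data back to its backing store (and drop its own reference
 * to the handle) before freeing it.
 */
static int example_evict(struct zbud_pool *pool, unsigned long handle)
{
	zbud_free(pool, handle);
	return 0;
}

static struct zbud_ops example_ops = {
	.evict = example_evict,
};

static struct zbud_pool *example_pool;

static int example_init(void)
{
	example_pool = zbud_create_pool(GFP_KERNEL, &example_ops);
	return example_pool ? 0 : -ENOMEM;
}

/* Store an already-compressed buffer; the caller keeps the handle */
static int example_store(const void *buf, int len, unsigned long *handle)
{
	void *dst;
	int ret;

	ret = zbud_alloc(example_pool, len, GFP_KERNEL, handle);
	if (ret)
		return ret;

	dst = zbud_map(example_pool, *handle);
	memcpy(dst, buf, len);
	zbud_unmap(example_pool, *handle);
	return 0;
}

/* Called under memory pressure: walk up to 8 LRU pages */
static void example_shrink(void)
{
	zbud_reclaim_page(example_pool, 8);
}

The evict callback is what makes writeback work: zbud_reclaim_page() hands
each live buddy of the victim page to it, and the page itself is only freed
once both buddies have been released via zbud_free().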

2013-05-09 18:58:54

by Greg Thelen

[permalink] [raw]
Subject: Re: [PATCHv10 1/4] debugfs: add get/set for atomic types

On Wed, May 08 2013, Seth Jennings wrote:

> debugfs currently lacks the ability to create attributes
> that set/get atomic_t values.
>
> This patch adds support for this through a new
> debugfs_create_atomic_t() function.
>
> Signed-off-by: Seth Jennings <[email protected]>
> Acked-by: Greg Kroah-Hartman <[email protected]>
> Acked-by: Mel Gorman <[email protected]>
> ---
> fs/debugfs/file.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> include/linux/debugfs.h | 2 ++
> 2 files changed, 44 insertions(+)
>
> diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
> index c5ca6ae..fa26d5b 100644
> --- a/fs/debugfs/file.c
> +++ b/fs/debugfs/file.c
> @@ -21,6 +21,7 @@
> #include <linux/debugfs.h>
> #include <linux/io.h>
> #include <linux/slab.h>
> +#include <linux/atomic.h>
>
> static ssize_t default_read_file(struct file *file, char __user *buf,
> size_t count, loff_t *ppos)
> @@ -403,6 +404,47 @@ struct dentry *debugfs_create_size_t(const char *name, umode_t mode,
> }
> EXPORT_SYMBOL_GPL(debugfs_create_size_t);
>
> +static int debugfs_atomic_t_set(void *data, u64 val)
> +{
> + atomic_set((atomic_t *)data, val);
> + return 0;
> +}
> +static int debugfs_atomic_t_get(void *data, u64 *val)
> +{
> + *val = atomic_read((atomic_t *)data);
> + return 0;
> +}
> +DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t, debugfs_atomic_t_get,
> + debugfs_atomic_t_set, "%llu\n");
> +DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t_ro, debugfs_atomic_t_get, NULL, "%llu\n");
> +DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t_wo, NULL, debugfs_atomic_t_set, "%llu\n");
> +
> +/**
> + * debugfs_create_atomic_t - create a debugfs file that is used to read and
> + * write an atomic_t value
> + * @name: a pointer to a string containing the name of the file to create.
> + * @mode: the permission that the file should have
> + * @parent: a pointer to the parent dentry for this file. This should be a
> + * directory dentry if set. If this parameter is %NULL, then the
> + * file will be created in the root of the debugfs filesystem.
> + * @value: a pointer to the variable that the file should read to and write
> + * from.
> + */
> +struct dentry *debugfs_create_atomic_t(const char *name, umode_t mode,
> + struct dentry *parent, atomic_t *value)
> +{
> + /* if there are no write bits set, make read only */
> + if (!(mode & S_IWUGO))
> + return debugfs_create_file(name, mode, parent, value,
> + &fops_atomic_t_ro);
> + /* if there are no read bits set, make write only */
> + if (!(mode & S_IRUGO))
> + return debugfs_create_file(name, mode, parent, value,
> + &fops_atomic_t_wo);
> +
> + return debugfs_create_file(name, mode, parent, value, &fops_atomic_t);
> +}
> +EXPORT_SYMBOL_GPL(debugfs_create_atomic_t);
>
> static ssize_t read_file_bool(struct file *file, char __user *user_buf,
> size_t count, loff_t *ppos)
> diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
> index 63f2465..d68b4ea 100644
> --- a/include/linux/debugfs.h
> +++ b/include/linux/debugfs.h
> @@ -79,6 +79,8 @@ struct dentry *debugfs_create_x64(const char *name, umode_t mode,
> struct dentry *parent, u64 *value);
> struct dentry *debugfs_create_size_t(const char *name, umode_t mode,
> struct dentry *parent, size_t *value);
> +struct dentry *debugfs_create_atomic_t(const char *name, umode_t mode,
> + struct dentry *parent, atomic_t *value);
> struct dentry *debugfs_create_bool(const char *name, umode_t mode,
> struct dentry *parent, u32 *value);

Looking at v3.9 I see a conflicting definition of
debugfs_create_atomic_t() in lib/fault-inject.c. A kernel with this
patch and CONFIG_FAULT_INJECTION=y and CONFIG_FAULT_INJECTION_DEBUG_FS=y
will not build:

lib/fault-inject.c:196:23: error: static declaration of 'debugfs_create_atomic_t' follows non-static declaration
include/linux/debugfs.h:87:16: note: previous declaration of 'debugfs_create_atomic_t' was here

2013-05-09 19:26:09

by Seth Jennings

[permalink] [raw]
Subject: Re: [PATCHv10 1/4] debugfs: add get/set for atomic types

On Thu, May 09, 2013 at 11:58:49AM -0700, Greg Thelen wrote:
> Looking at v3.9 I see a conflicting definition of
> debugfs_create_atomic_t() in lib/fault-inject.c. A kernel with this
> patch and CONFIG_FAULT_INJECTION=y and CONFIG_FAULT_INJECTION_DEBUG_FS=y
> will not build:
>
> lib/fault-inject.c:196:23: error: static declaration of 'debugfs_create_atomic_t' follows non-static declaration
> include/linux/debugfs.h:87:16: note: previous declaration of 'debugfs_create_atomic_t' was here
>

Wow... didn't think to look for debugfs functions _outside_ the debugfs code :-/

Great find though, thanks Greg!

Seth

2013-05-17 22:26:43

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCHv10 1/4] debugfs: add get/set for atomic types

On 05/08/2013 03:37 PM, Seth Jennings wrote:
> +struct dentry *debugfs_create_atomic_t(const char *name, umode_t mode,
> + struct dentry *parent, atomic_t *value)
> +{

lib/fault-inject.c has something that looks pretty similar:

static struct dentry *debugfs_create_atomic_t(const char *name,
		umode_t mode, struct dentry *parent, atomic_t *value)

should add even more of an argument to merge this patch _now_, and
separately from the rest of zswap.