2013-04-10 18:19:46

by Seth Jennings

[permalink] [raw]
Subject: [PATCHv9 0/8] zswap: compressed swap caching

I wasn't looking to post this set again before the summit, but I
messed up v8 and didn't include one of the fixes claimed in the
changelog :-/ Namely "Fix load-during-writeback race; double lru add",
which is a stability issue, as discovered by Heesub Shin (thanks for
noticing!). There were also a couple of other fixups that didn't make
it in that should have. I rebased to v3.9-rc6 while I was at it.

zswap greatly improves performance and reduces swap I/O on systems
in a state of VM thrashing (see details below). While this might not
seem a likely scenario to those that have full control over the
workloads that run on their systems, it can be very valuable to IaaS
providers that have workloads running in customer managed guests with
undersized RAM allocations. It is also beneficial in virtualized
environments where the hypervisor either can't do or is not configured
to do I/O QoS and heavy paging by a single guest can drastically increase
I/O latency for all users of the shared I/O. zswap also helps the
overcommitted guest as well by avoiding throttled swap I/O.

I'll be attending the LSF/MM summit where there (hopefully) will be a
discussion this patchset and memory compression in general.

Zswap Overview:

Zswap is a lightweight compressed cache for swap pages. It takes
pages that are in the process of being swapped out and attempts to
compress them into a dynamically allocated RAM-based memory pool.
If this process is successful, the writeback to the swap device is
deferred and, in many cases, avoided completely. This results in
a significant I/O reduction and performance gains for systems that
are swapping.

The results of a kernel building benchmark indicate a
runtime reduction of 53% and an I/O reduction 76% with zswap vs normal
swapping with a kernel build under heavy memory pressure (see
Performance section for more).

Some addition performance metrics regarding the performance
improvements and I/O reductions that can be achieved using zswap as
measured by SPECjbb are provided here:
http://ibm.co/VCgHvM

These results include runs on x86 and new results on Power7+ with
hardware compression acceleration.

Of particular note is that zswap is able to evict pages from the compressed
cache, on an LRU basis, to the backing swap device when the compressed pool
reaches it size limit or the pool is unable to obtain additional pages
from the buddy allocator. This eviction functionality had been identified
as a requirement in prior community discussions.

Patchset Structure:
1-2: add zsmalloc and documentation
3: add atomic_t get/set to debugfs
4: add basic zswap functionality
4,5: changes to existing swap code for zswap
6,7: add zswap writeback support
8: add zswap documentation

Rationale:

Zswap provides compressed swap caching that basically trades CPU cycles
for reduced swap I/O. This trade-off can result in a significant
performance improvement as reads to/writes from to the compressed
cache almost always faster that reading from a swap device
which incurs the latency of an asynchronous block I/O read.

Some potential benefits:
* Desktop/laptop users with limited RAM capacities can mitigate the
performance impact of swapping.
* Overcommitted guests that share a common I/O resource can
dramatically reduce their swap I/O pressure, avoiding heavy
handed I/O throttling by the hypervisor. This allows more work
to get done with less impact to the guest workload and guests
sharing the I/O subsystem
* Users with SSDs as swap devices can extend the life of the device by
drastically reducing life-shortening writes.

Compressed swap is also provided in zcache, along with page cache
compression and RAM clustering through RAMSter. Zswap seeks to deliver
the benefit of swap compression to users in a discrete function.
This design decision is akin to Unix design philosophy of doing one
thing well, it leaves file cache compression and other features
for separate code.

Design:

Zswap receives pages for compression through the Frontswap API and
is able to evict pages from its own compressed pool on an LRU basis
and write them back to the backing swap device in the case that the
compressed pool is full or unable to secure additional pages from
the buddy allocator.

Zswap makes use of zsmalloc for the managing the compressed memory
pool. This is because zsmalloc is specifically designed to minimize
fragmentation on large (> PAGE_SIZE/2) allocation sizes. Each
allocation in zsmalloc is not directly accessible by address.
Rather, a handle is return by the allocation routine and that handle
must be mapped before being accessed. The compressed memory pool grows
on demand and shrinks as compressed pages are freed. The pool is
not preallocated.

When a swap page is passed from frontswap to zswap, zswap maintains
a mapping of the swap entry, a combination of the swap type and swap
offset, to the zsmalloc handle that references that compressed swap
page. This mapping is achieved with a red-black tree per swap type.
The swap offset is the search key for the tree nodes.

Zswap seeks to be simple in its policies. Sysfs attributes allow for
two user controlled policies:
* max_compression_ratio - Maximum compression ratio, as as percentage,
for an acceptable compressed page. Any page that does not compress
by at least this ratio will be rejected.
* max_pool_percent - The maximum percentage of memory that the compressed
pool can occupy.

To enabled zswap, the "enabled" attribute must be set to 1 at boot time.

Zswap allows the compressor to be selected at kernel boot time by
setting the “compressor” attribute. The default compressor is lzo.

A debugfs interface is provided for various statistic about pool size,
number of pages stored, and various counters for the reasons pages
are rejected.

Changelog:

v9:
* Fix load-during-writeback race; double lru add (for real this time)
* checkpatch and comment fixes
* Fix __swap_writepage() return value check
* Move check for max outstanding writebacks (dedup some code)
* Rebase to v3.9-rc6

v8:
* Fix load-during-writeback race; double lru add
* checkpatch fixups
* s/NOWAIT/ATOMIC for tree allocation (Dave)
* Check __swap_writepage() for error before incr outstanding write count (Rob)
* Convert pcpu compression buffer alloc from alloc_page() to kmalloc() (Dave)
* Rebase to v3.9-rc5

v7:
* Decrease zswap_stored_pages during tree cleanup (Joonsoo)
* Move zswap_entry_cache_alloc() earlier during store (Joonsoo)
* Move type field from struct zswap_entry to struct zswap_tree
* Change to swapper_space array (-rc1 change)
* s/reset_page_mapcount/page_mapcount_reset in zsmalloc (-rc1 change)
* Rebase to v3.9-rc1

v6:
* fix access-after-free regression introduced in v5
(rb_erase() outside the lock)
* fix improper freeing of rbtree (Cody)
* fix comment typo (Ric)
* add comments about ZS_MM_WO usage and page mapping mode (Joonsoo)
* don't use page->object (Joonsoo)
* remove DEBUG (Joonsoo)
* rebase to v3.8

v5:
* zsmalloc patch converted from promotion to "new code" (for review only,
see note in [1/8])
* promote zsmalloc to mm/ instead of /lib
* add more documentation everywhere
* convert USE_PGTABLE_MAPPING to kconfig option, thanks to Minchan
* s/flush/writeback/
* #define pr_fmt() for formatting messages (Joe)
* checkpatch fixups
* lots of changes suggested Minchan

v4:
* Added Acks (Minchan)
* Separated flushing functionality into standalone patch
for easier review (Minchan)
* fix comment on zswap enabled attribute (Minchan)
* add TODO for dynamic mempool size (Minchan)
* and check for NULL in zswap_free_page() (Minchan)
* add missing zs_free() in error path (Minchan)
* TODO: add comments for flushing/refcounting (Minchan)

v3:
* Dropped the zsmalloc patches from the set, except the promotion patch
which has be converted to a rename patch (vs full diff). The dropped
patches have been Acked and are going into Greg's staging tree soon.
* Separated [PATCHv2 7/9] into two patches since it makes changes for two
different reasons (Minchan)
* Moved ZSWAP_MAX_OUTSTANDING_FLUSHES near the top in zswap.c (Rik)
* Rebase to v3.8-rc5. linux-next is a little volatile with the
swapper_space per type changes which will effect this patchset.
* TODO: Move some stats from debugfs to sysfs. Which ones? (Rik)

v2:
* Rename zswap_fs_* functions to zswap_frontswap_* to avoid
confusion with "filesystem"
* Add comment about what the tree lock protects
* Remove "#if 0" code (should have been done before)
* Break out changes to existing swap code into separate patch
* Fix blank line EOF warning on documentation file
* Rebase to next-20130107

Performance, Kernel Building:

Setup
========
Gentoo w/ kernel v3.7-rc7
Quad-core i5-2500 @ 3.3GHz
512MB DDR3 1600MHz (limited with mem=512m on boot)
Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
majflt are major page faults reported by the time command
pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
the make -jN

Summary
========
* Zswap reduces I/O and improves performance at all swap pressure levels.

* Under heavy swaping at 24 threads, zswap reduced I/O by 76%, saving
over 1.5GB of I/O, and cut runtime in half.

Details
========
I/O (in pages)
base zswap change change
N pswpin pswpout majflt I/O sum pswpin pswpout majflt I/O sum %I/O MB
8 1 335 291 627 0 0 249 249 -60% 1
12 3688 14315 5290 23293 123 860 5954 6937 -70% 64
16 12711 46179 16803 75693 2936 7390 46092 56418 -25% 75
20 42178 133781 49898 225857 9460 28382 92951 130793 -42% 371
24 96079 357280 105242 558601 7719 18484 109309 135512 -76% 1653

Runtime (in seconds)
N base zswap %change
8 107 107 0%
12 128 110 -14%
16 191 179 -6%
20 371 240 -35%
24 570 267 -53%

%CPU utilization (out of 400% on 4 cpus)
N base zswap %change
8 317 319 1%
12 267 311 16%
16 179 191 7%
20 94 143 52%
24 60 128 113%

Seth Jennings (8):
zsmalloc: add to mm/
zsmalloc: add documentation
debugfs: add get/set for atomic types
zswap: add to mm/
mm: break up swap_writepage() for frontswap backends
mm: allow for outstanding swap writeback accounting
zswap: add swap page writeback support
zswap: add documentation

Documentation/vm/zsmalloc.txt | 68 +++
Documentation/vm/zswap.txt | 82 +++
fs/debugfs/file.c | 42 ++
include/linux/debugfs.h | 2 +
include/linux/swap.h | 4 +
include/linux/zsmalloc.h | 56 ++
mm/Kconfig | 39 ++
mm/Makefile | 2 +
mm/page_io.c | 22 +-
mm/swap_state.c | 2 +-
mm/zsmalloc.c | 1117 +++++++++++++++++++++++++++++++++++++++
mm/zswap.c | 1151 +++++++++++++++++++++++++++++++++++++++++
12 files changed, 2581 insertions(+), 6 deletions(-)
create mode 100644 Documentation/vm/zsmalloc.txt
create mode 100644 Documentation/vm/zswap.txt
create mode 100644 include/linux/zsmalloc.h
create mode 100644 mm/zsmalloc.c
create mode 100644 mm/zswap.c

--
1.8.2.1


2013-04-10 18:19:33

by Seth Jennings

[permalink] [raw]
Subject: [PATCHv9 1/8] zsmalloc: add to mm/

=========
DO NOT MERGE, FOR REVIEW ONLY
This patch introduces zsmalloc as new code, however, it already
exists in drivers/staging. In order to build successfully, you
must select EITHER to driver/staging version OR this version.
Once zsmalloc is reviewed in this format (and hopefully accepted),
I will create a new patchset that properly promotes zsmalloc from
staging.
=========

This patchset introduces a new slab-based memory allocator,
zsmalloc, for storing compressed pages. It is designed for
low fragmentation and high allocation success rate on
large object, but <= PAGE_SIZE allocations.

zsmalloc differs from the kernel slab allocator in two primary
ways to achieve these design goals.

zsmalloc never requires high order page allocations to back
slabs, or "size classes" in zsmalloc terms. Instead it allows
multiple single-order pages to be stitched together into a
"zspage" which backs the slab. This allows for higher allocation
success rate under memory pressure.

Also, zsmalloc allows objects to span page boundaries within the
zspage. This allows for lower fragmentation than could be had
with the kernel slab allocator for objects between PAGE_SIZE/2
and PAGE_SIZE. With the kernel slab allocator, if a page compresses
to 60% of it original size, the memory savings gained through
compression is lost in fragmentation because another object of
the same size can't be stored in the leftover space.

This ability to span pages results in zsmalloc allocations not being
directly addressable by the user. The user is given an
non-dereferencable handle in response to an allocation request.
That handle must be mapped, using zs_map_object(), which returns
a pointer to the mapped region that can be used. The mapping is
necessary since the object data may reside in two different
noncontigious pages.

zsmalloc fulfills the allocation needs for zram and zswap.

Acked-by: Nitin Gupta <[email protected]>
Acked-by: Minchan Kim <[email protected]>
Signed-off-by: Seth Jennings <[email protected]>
---
include/linux/zsmalloc.h | 56 +++
mm/Kconfig | 24 +
mm/Makefile | 1 +
mm/zsmalloc.c | 1117 ++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 1198 insertions(+)
create mode 100644 include/linux/zsmalloc.h
create mode 100644 mm/zsmalloc.c

diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
new file mode 100644
index 0000000..398dae3
--- /dev/null
+++ b/include/linux/zsmalloc.h
@@ -0,0 +1,56 @@
+/*
+ * zsmalloc memory allocator
+ *
+ * Copyright (C) 2011 Nitin Gupta
+ *
+ * This code is released using a dual license strategy: BSD/GPL
+ * You can choose the license that better fits your requirements.
+ *
+ * Released under the terms of 3-clause BSD License
+ * Released under the terms of GNU General Public License Version 2.0
+ */
+
+#ifndef _ZS_MALLOC_H_
+#define _ZS_MALLOC_H_
+
+#include <linux/types.h>
+#include <linux/mm_types.h>
+
+/*
+ * zsmalloc mapping modes
+ *
+ * NOTE: These only make a difference when a mapped object spans pages.
+ * They also have no effect when PGTABLE_MAPPING is selected.
+*/
+enum zs_mapmode {
+ ZS_MM_RW, /* normal read-write mapping */
+ ZS_MM_RO, /* read-only (no copy-out at unmap time) */
+ ZS_MM_WO /* write-only (no copy-in at map time) */
+ /*
+ * NOTE: ZS_MM_WO should only be used for initializing new
+ * (uninitialized) allocations. Partial writes to already
+ * initialized allocations should use ZS_MM_RW to preserve the
+ * existing data.
+ */
+};
+
+struct zs_ops {
+ struct page * (*alloc)(gfp_t);
+ void (*free)(struct page *);
+};
+
+struct zs_pool;
+
+struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops);
+void zs_destroy_pool(struct zs_pool *pool);
+
+unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags);
+void zs_free(struct zs_pool *pool, unsigned long obj);
+
+void *zs_map_object(struct zs_pool *pool, unsigned long handle,
+ enum zs_mapmode mm);
+void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
+
+u64 zs_get_total_size_bytes(struct zs_pool *pool);
+
+#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index 3bea74f..aa054fc 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -471,3 +471,27 @@ config FRONTSWAP
and swap data is stored as normal on the matching swap device.

If unsure, say Y to enable frontswap.
+
+config ZSMALLOC
+ tristate "Memory allocator for compressed pages"
+ default n
+ help
+ zsmalloc is a slab-based memory allocator designed to store
+ compressed RAM pages. zsmalloc uses virtual memory mapping
+ in order to reduce fragmentation. However, this results in a
+ non-standard allocator interface where a handle, not a pointer, is
+ returned by an alloc(). This handle must be mapped in order to
+ access the allocated space.
+
+config PGTABLE_MAPPING
+ bool "Use page table mapping to access object in zsmalloc"
+ depends on ZSMALLOC
+ help
+ By default, zsmalloc uses a copy-based object mapping method to
+ access allocations that span two pages. However, if a particular
+ architecture (ex, ARM) performs VM mapping faster than copying,
+ then you should select this. This causes zsmalloc to use page table
+ mapping rather than copying for object mapping.
+
+ You can check speed with zsmalloc benchmark[1].
+ [1] https://github.com/spartacus06/zsmalloc
diff --git a/mm/Makefile b/mm/Makefile
index 3a46287..0f6ef0a 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
obj-$(CONFIG_CLEANCACHE) += cleancache.o
obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
+obj-$(CONFIG_ZSMALLOC) += zsmalloc.o
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
new file mode 100644
index 0000000..adaeee5
--- /dev/null
+++ b/mm/zsmalloc.c
@@ -0,0 +1,1117 @@
+/*
+ * zsmalloc memory allocator
+ *
+ * Copyright (C) 2011 Nitin Gupta
+ *
+ * This code is released using a dual license strategy: BSD/GPL
+ * You can choose the license that better fits your requirements.
+ *
+ * Released under the terms of 3-clause BSD License
+ * Released under the terms of GNU General Public License Version 2.0
+ */
+
+
+/*
+ * This allocator is designed for use with zcache and zram. Thus, the
+ * allocator is supposed to work well under low memory conditions. In
+ * particular, it never attempts higher order page allocation which is
+ * very likely to fail under memory pressure. On the other hand, if we
+ * just use single (0-order) pages, it would suffer from very high
+ * fragmentation -- any object of size PAGE_SIZE/2 or larger would occupy
+ * an entire page. This was one of the major issues with its predecessor
+ * (xvmalloc).
+ *
+ * To overcome these issues, zsmalloc allocates a bunch of 0-order pages
+ * and links them together using various 'struct page' fields. These linked
+ * pages act as a single higher-order page i.e. an object can span 0-order
+ * page boundaries. The code refers to these linked pages as a single entity
+ * called zspage.
+ *
+ * For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
+ * since this satisfies the requirements of all its current users (in the
+ * worst case, page is incompressible and is thus stored "as-is" i.e. in
+ * uncompressed form). For allocation requests larger than this size, failure
+ * is returned (see zs_malloc).
+ *
+ * Additionally, zs_malloc() does not return a dereferenceable pointer.
+ * Instead, it returns an opaque handle (unsigned long) which encodes actual
+ * location of the allocated object. The reason for this indirection is that
+ * zsmalloc does not keep zspages permanently mapped since that would cause
+ * issues on 32-bit systems where the VA region for kernel space mappings
+ * is very small. So, before using the allocating memory, the object has to
+ * be mapped using zs_map_object() to get a usable pointer and subsequently
+ * unmapped using zs_unmap_object().
+ *
+ * Following is how we use various fields and flags of underlying
+ * struct page(s) to form a zspage.
+ *
+ * Usage of struct page fields:
+ * page->first_page: points to the first component (0-order) page
+ * page->index (union with page->freelist): offset of the first object
+ * starting in this page. For the first page, this is
+ * always 0, so we use this field (aka freelist) to point
+ * to the first free object in zspage.
+ * page->lru: links together all component pages (except the first page)
+ * of a zspage
+ *
+ * For _first_ page only:
+ *
+ * page->private (union with page->first_page): refers to the
+ * component page after the first page
+ * page->freelist: points to the first free object in zspage.
+ * Free objects are linked together using in-place
+ * metadata.
+ * page->lru: links together first pages of various zspages.
+ * Basically forming list of zspages in a fullness group.
+ * page->mapping: class index and fullness group of the zspage
+ *
+ * Usage of struct page flags:
+ * PG_private: identifies the first component page
+ * PG_private2: identifies the last component page
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/bitops.h>
+#include <linux/errno.h>
+#include <linux/highmem.h>
+#include <linux/init.h>
+#include <linux/string.h>
+#include <linux/slab.h>
+#include <asm/tlbflush.h>
+#include <asm/pgtable.h>
+#include <linux/cpumask.h>
+#include <linux/cpu.h>
+#include <linux/vmalloc.h>
+#include <linux/hardirq.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#include <linux/zsmalloc.h>
+
+/*
+ * This must be power of 2 and greater than of equal to sizeof(link_free).
+ * These two conditions ensure that any 'struct link_free' itself doesn't
+ * span more than 1 page which avoids complex case of mapping 2 pages simply
+ * to restore link_free pointer values.
+ */
+#define ZS_ALIGN 8
+
+/*
+ * A single 'zspage' is composed of up to 2^N discontiguous 0-order (single)
+ * pages. ZS_MAX_ZSPAGE_ORDER defines upper limit on N.
+ */
+#define ZS_MAX_ZSPAGE_ORDER 2
+#define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
+
+/*
+ * Object location (<PFN>, <obj_idx>) is encoded as
+ * as single (unsigned long) handle value.
+ *
+ * Note that object index <obj_idx> is relative to system
+ * page <PFN> it is stored in, so for each sub-page belonging
+ * to a zspage, obj_idx starts with 0.
+ *
+ * This is made more complicated by various memory models and PAE.
+ */
+
+#ifndef MAX_PHYSMEM_BITS
+#ifdef CONFIG_HIGHMEM64G
+#define MAX_PHYSMEM_BITS 36
+#else /* !CONFIG_HIGHMEM64G */
+/*
+ * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
+ * be PAGE_SHIFT
+ */
+#define MAX_PHYSMEM_BITS BITS_PER_LONG
+#endif
+#endif
+#define _PFN_BITS (MAX_PHYSMEM_BITS - PAGE_SHIFT)
+#define OBJ_INDEX_BITS (BITS_PER_LONG - _PFN_BITS)
+#define OBJ_INDEX_MASK ((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
+
+#define MAX(a, b) ((a) >= (b) ? (a) : (b))
+/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
+#define ZS_MIN_ALLOC_SIZE \
+ MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
+#define ZS_MAX_ALLOC_SIZE PAGE_SIZE
+
+/*
+ * On systems with 4K page size, this gives 254 size classes! There is a
+ * trader-off here:
+ * - Large number of size classes is potentially wasteful as free page are
+ * spread across these classes
+ * - Small number of size classes causes large internal fragmentation
+ * - Probably its better to use specific size classes (empirically
+ * determined). NOTE: all those class sizes must be set as multiple of
+ * ZS_ALIGN to make sure link_free itself never has to span 2 pages.
+ *
+ * ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
+ * (reason above)
+ */
+#define ZS_SIZE_CLASS_DELTA (PAGE_SIZE >> 8)
+#define ZS_SIZE_CLASSES ((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
+ ZS_SIZE_CLASS_DELTA + 1)
+
+/*
+ * We do not maintain any list for completely empty or full pages
+ */
+enum fullness_group {
+ ZS_ALMOST_FULL,
+ ZS_ALMOST_EMPTY,
+ _ZS_NR_FULLNESS_GROUPS,
+
+ ZS_EMPTY,
+ ZS_FULL
+};
+
+/*
+ * We assign a page to ZS_ALMOST_EMPTY fullness group when:
+ * n <= N / f, where
+ * n = number of allocated objects
+ * N = total number of objects zspage can store
+ * f = 1/fullness_threshold_frac
+ *
+ * Similarly, we assign zspage to:
+ * ZS_ALMOST_FULL when n > N / f
+ * ZS_EMPTY when n == 0
+ * ZS_FULL when n == N
+ *
+ * (see: fix_fullness_group())
+ */
+static const int fullness_threshold_frac = 4;
+
+struct size_class {
+ /*
+ * Size of objects stored in this class. Must be multiple
+ * of ZS_ALIGN.
+ */
+ int size;
+ unsigned int index;
+
+ /* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
+ int pages_per_zspage;
+
+ spinlock_t lock;
+
+ /* stats */
+ u64 pages_allocated;
+
+ struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
+};
+
+/*
+ * Placed within free objects to form a singly linked list.
+ * For every zspage, first_page->freelist gives head of this list.
+ *
+ * This must be power of 2 and less than or equal to ZS_ALIGN
+ */
+struct link_free {
+ /* Handle of next free chunk (encodes <PFN, obj_idx>) */
+ void *next;
+};
+
+struct zs_pool {
+ struct size_class size_class[ZS_SIZE_CLASSES];
+
+ struct zs_ops *ops;
+};
+
+/*
+ * A zspage's class index and fullness group
+ * are encoded in its (first)page->mapping
+ */
+#define CLASS_IDX_BITS 28
+#define FULLNESS_BITS 4
+#define CLASS_IDX_MASK ((1 << CLASS_IDX_BITS) - 1)
+#define FULLNESS_MASK ((1 << FULLNESS_BITS) - 1)
+
+struct mapping_area {
+#ifdef CONFIG_PGTABLE_MAPPING
+ struct vm_struct *vm; /* vm area for mapping object that span pages */
+#else
+ char *vm_buf; /* copy buffer for objects that span pages */
+#endif
+ char *vm_addr; /* address of kmap_atomic()'ed pages */
+ enum zs_mapmode vm_mm; /* mapping mode */
+};
+
+/* default page alloc/free ops */
+struct page *zs_alloc_page(gfp_t flags)
+{
+ return alloc_page(flags);
+}
+
+void zs_free_page(struct page *page)
+{
+ __free_page(page);
+}
+
+struct zs_ops zs_default_ops = {
+ .alloc = zs_alloc_page,
+ .free = zs_free_page
+};
+
+/* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
+static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
+
+static int is_first_page(struct page *page)
+{
+ return PagePrivate(page);
+}
+
+static int is_last_page(struct page *page)
+{
+ return PagePrivate2(page);
+}
+
+static void get_zspage_mapping(struct page *page, unsigned int *class_idx,
+ enum fullness_group *fullness)
+{
+ unsigned long m;
+ BUG_ON(!is_first_page(page));
+
+ m = (unsigned long)page->mapping;
+ *fullness = m & FULLNESS_MASK;
+ *class_idx = (m >> FULLNESS_BITS) & CLASS_IDX_MASK;
+}
+
+static void set_zspage_mapping(struct page *page, unsigned int class_idx,
+ enum fullness_group fullness)
+{
+ unsigned long m;
+ BUG_ON(!is_first_page(page));
+
+ m = ((class_idx & CLASS_IDX_MASK) << FULLNESS_BITS) |
+ (fullness & FULLNESS_MASK);
+ page->mapping = (struct address_space *)m;
+}
+
+/*
+ * zsmalloc divides the pool into various size classes where each
+ * class maintains a list of zspages where each zspage is divided
+ * into equal sized chunks. Each allocation falls into one of these
+ * classes depending on its size. This function returns index of the
+ * size class which has chunk size big enough to hold the give size.
+ */
+static int get_size_class_index(int size)
+{
+ int idx = 0;
+
+ if (likely(size > ZS_MIN_ALLOC_SIZE))
+ idx = DIV_ROUND_UP(size - ZS_MIN_ALLOC_SIZE,
+ ZS_SIZE_CLASS_DELTA);
+
+ return idx;
+}
+
+/*
+ * For each size class, zspages are divided into different groups
+ * depending on how "full" they are. This was done so that we could
+ * easily find empty or nearly empty zspages when we try to shrink
+ * the pool (not yet implemented). This function returns fullness
+ * status of the given page.
+ */
+static enum fullness_group get_fullness_group(struct page *page,
+ struct size_class *class)
+{
+ int inuse, max_objects;
+ enum fullness_group fg;
+ BUG_ON(!is_first_page(page));
+
+ inuse = page->inuse;
+ max_objects = class->pages_per_zspage * PAGE_SIZE / class->size;
+
+ if (inuse == 0)
+ fg = ZS_EMPTY;
+ else if (inuse == max_objects)
+ fg = ZS_FULL;
+ else if (inuse <= max_objects / fullness_threshold_frac)
+ fg = ZS_ALMOST_EMPTY;
+ else
+ fg = ZS_ALMOST_FULL;
+
+ return fg;
+}
+
+/*
+ * Each size class maintains various freelists and zspages are assigned
+ * to one of these freelists based on the number of live objects they
+ * have. This functions inserts the given zspage into the freelist
+ * identified by <class, fullness_group>.
+ */
+static void insert_zspage(struct page *page, struct size_class *class,
+ enum fullness_group fullness)
+{
+ struct page **head;
+
+ BUG_ON(!is_first_page(page));
+
+ if (fullness >= _ZS_NR_FULLNESS_GROUPS)
+ return;
+
+ head = &class->fullness_list[fullness];
+ if (*head)
+ list_add_tail(&page->lru, &(*head)->lru);
+
+ *head = page;
+}
+
+/*
+ * This function removes the given zspage from the freelist identified
+ * by <class, fullness_group>.
+ */
+static void remove_zspage(struct page *page, struct size_class *class,
+ enum fullness_group fullness)
+{
+ struct page **head;
+
+ BUG_ON(!is_first_page(page));
+
+ if (fullness >= _ZS_NR_FULLNESS_GROUPS)
+ return;
+
+ head = &class->fullness_list[fullness];
+ BUG_ON(!*head);
+ if (list_empty(&(*head)->lru))
+ *head = NULL;
+ else if (*head == page)
+ *head = (struct page *)list_entry((*head)->lru.next,
+ struct page, lru);
+
+ list_del_init(&page->lru);
+}
+
+/*
+ * Each size class maintains zspages in different fullness groups depending
+ * on the number of live objects they contain. When allocating or freeing
+ * objects, the fullness status of the page can change, say, from ALMOST_FULL
+ * to ALMOST_EMPTY when freeing an object. This function checks if such
+ * a status change has occurred for the given page and accordingly moves the
+ * page from the freelist of the old fullness group to that of the new
+ * fullness group.
+ */
+static enum fullness_group fix_fullness_group(struct zs_pool *pool,
+ struct page *page)
+{
+ int class_idx;
+ struct size_class *class;
+ enum fullness_group currfg, newfg;
+
+ BUG_ON(!is_first_page(page));
+
+ get_zspage_mapping(page, &class_idx, &currfg);
+ class = &pool->size_class[class_idx];
+ newfg = get_fullness_group(page, class);
+ if (newfg == currfg)
+ goto out;
+
+ remove_zspage(page, class, currfg);
+ insert_zspage(page, class, newfg);
+ set_zspage_mapping(page, class_idx, newfg);
+
+out:
+ return newfg;
+}
+
+/*
+ * We have to decide on how many pages to link together
+ * to form a zspage for each size class. This is important
+ * to reduce wastage due to unusable space left at end of
+ * each zspage which is given as:
+ * wastage = Zp - Zp % size_class
+ * where Zp = zspage size = k * PAGE_SIZE where k = 1, 2, ...
+ *
+ * For example, for size class of 3/8 * PAGE_SIZE, we should
+ * link together 3 PAGE_SIZE sized pages to form a zspage
+ * since then we can perfectly fit in 8 such objects.
+ */
+static int get_pages_per_zspage(int class_size)
+{
+ int i, max_usedpc = 0;
+ /* zspage order which gives maximum used size per KB */
+ int max_usedpc_order = 1;
+
+ for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
+ int zspage_size;
+ int waste, usedpc;
+
+ zspage_size = i * PAGE_SIZE;
+ waste = zspage_size % class_size;
+ usedpc = (zspage_size - waste) * 100 / zspage_size;
+
+ if (usedpc > max_usedpc) {
+ max_usedpc = usedpc;
+ max_usedpc_order = i;
+ }
+ }
+
+ return max_usedpc_order;
+}
+
+/*
+ * A single 'zspage' is composed of many system pages which are
+ * linked together using fields in struct page. This function finds
+ * the first/head page, given any component page of a zspage.
+ */
+static struct page *get_first_page(struct page *page)
+{
+ if (is_first_page(page))
+ return page;
+ else
+ return page->first_page;
+}
+
+static struct page *get_next_page(struct page *page)
+{
+ struct page *next;
+
+ if (is_last_page(page))
+ next = NULL;
+ else if (is_first_page(page))
+ next = (struct page *)page->private;
+ else
+ next = list_entry(page->lru.next, struct page, lru);
+
+ return next;
+}
+
+/* Encode <page, obj_idx> as a single handle value */
+static void *obj_location_to_handle(struct page *page, unsigned long obj_idx)
+{
+ unsigned long handle;
+
+ if (!page) {
+ BUG_ON(obj_idx);
+ return NULL;
+ }
+
+ handle = page_to_pfn(page) << OBJ_INDEX_BITS;
+ handle |= (obj_idx & OBJ_INDEX_MASK);
+
+ return (void *)handle;
+}
+
+/* Decode <page, obj_idx> pair from the given object handle */
+static void obj_handle_to_location(unsigned long handle, struct page **page,
+ unsigned long *obj_idx)
+{
+ *page = pfn_to_page(handle >> OBJ_INDEX_BITS);
+ *obj_idx = handle & OBJ_INDEX_MASK;
+}
+
+static unsigned long obj_idx_to_offset(struct page *page,
+ unsigned long obj_idx, int class_size)
+{
+ unsigned long off = 0;
+
+ if (!is_first_page(page))
+ off = page->index;
+
+ return off + obj_idx * class_size;
+}
+
+static void reset_page(struct page *page)
+{
+ clear_bit(PG_private, &page->flags);
+ clear_bit(PG_private_2, &page->flags);
+ set_page_private(page, 0);
+ page->mapping = NULL;
+ page->freelist = NULL;
+ page_mapcount_reset(page);
+}
+
+static void free_zspage(struct zs_ops *ops, struct page *first_page)
+{
+ struct page *nextp, *tmp, *head_extra;
+
+ BUG_ON(!is_first_page(first_page));
+ BUG_ON(first_page->inuse);
+
+ head_extra = (struct page *)page_private(first_page);
+
+ reset_page(first_page);
+ ops->free(first_page);
+
+ /* zspage with only 1 system page */
+ if (!head_extra)
+ return;
+
+ list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) {
+ list_del(&nextp->lru);
+ reset_page(nextp);
+ ops->free(nextp);
+ }
+ reset_page(head_extra);
+ ops->free(head_extra);
+}
+
+/* Initialize a newly allocated zspage */
+static void init_zspage(struct page *first_page, struct size_class *class)
+{
+ unsigned long off = 0;
+ struct page *page = first_page;
+
+ BUG_ON(!is_first_page(first_page));
+ while (page) {
+ struct page *next_page;
+ struct link_free *link;
+ unsigned int i, objs_on_page;
+
+ /*
+ * page->index stores offset of first object starting
+ * in the page. For the first page, this is always 0,
+ * so we use first_page->index (aka ->freelist) to store
+ * head of corresponding zspage's freelist.
+ */
+ if (page != first_page)
+ page->index = off;
+
+ link = (struct link_free *)kmap_atomic(page) +
+ off / sizeof(*link);
+ objs_on_page = (PAGE_SIZE - off) / class->size;
+
+ for (i = 1; i <= objs_on_page; i++) {
+ off += class->size;
+ if (off < PAGE_SIZE) {
+ link->next = obj_location_to_handle(page, i);
+ link += class->size / sizeof(*link);
+ }
+ }
+
+ /*
+ * We now come to the last (full or partial) object on this
+ * page, which must point to the first object on the next
+ * page (if present)
+ */
+ next_page = get_next_page(page);
+ link->next = obj_location_to_handle(next_page, 0);
+ kunmap_atomic(link);
+ page = next_page;
+ off = (off + class->size) % PAGE_SIZE;
+ }
+}
+
+/*
+ * Allocate a zspage for the given size class
+ */
+static struct page *alloc_zspage(struct zs_ops *ops, struct size_class *class,
+ gfp_t flags)
+{
+ int i, error;
+ struct page *first_page = NULL, *uninitialized_var(prev_page);
+
+ /*
+ * Allocate individual pages and link them together as:
+ * 1. first page->private = first sub-page
+ * 2. all sub-pages are linked together using page->lru
+ * 3. each sub-page is linked to the first page using page->first_page
+ *
+ * For each size class, First/Head pages are linked together using
+ * page->lru. Also, we set PG_private to identify the first page
+ * (i.e. no other sub-page has this flag set) and PG_private_2 to
+ * identify the last page.
+ */
+ error = -ENOMEM;
+ for (i = 0; i < class->pages_per_zspage; i++) {
+ struct page *page;
+
+ page = ops->alloc(flags);
+ if (!page)
+ goto cleanup;
+
+ INIT_LIST_HEAD(&page->lru);
+ if (i == 0) { /* first page */
+ SetPagePrivate(page);
+ set_page_private(page, 0);
+ first_page = page;
+ first_page->inuse = 0;
+ }
+ if (i == 1)
+ first_page->private = (unsigned long)page;
+ if (i >= 1)
+ page->first_page = first_page;
+ if (i >= 2)
+ list_add(&page->lru, &prev_page->lru);
+ if (i == class->pages_per_zspage - 1) /* last page */
+ SetPagePrivate2(page);
+ prev_page = page;
+ }
+
+ init_zspage(first_page, class);
+
+ first_page->freelist = obj_location_to_handle(first_page, 0);
+
+ error = 0; /* Success */
+
+cleanup:
+ if (unlikely(error) && first_page) {
+ free_zspage(ops, first_page);
+ first_page = NULL;
+ }
+
+ return first_page;
+}
+
+static struct page *find_get_zspage(struct size_class *class)
+{
+ int i;
+ struct page *page;
+
+ for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
+ page = class->fullness_list[i];
+ if (page)
+ break;
+ }
+
+ return page;
+}
+
+#ifdef CONFIG_PGTABLE_MAPPING
+static inline int __zs_cpu_up(struct mapping_area *area)
+{
+ /*
+ * Make sure we don't leak memory if a cpu UP notification
+ * and zs_init() race and both call zs_cpu_up() on the same cpu
+ */
+ if (area->vm)
+ return 0;
+ area->vm = alloc_vm_area(PAGE_SIZE * 2, NULL);
+ if (!area->vm)
+ return -ENOMEM;
+ return 0;
+}
+
+static inline void __zs_cpu_down(struct mapping_area *area)
+{
+ if (area->vm)
+ free_vm_area(area->vm);
+ area->vm = NULL;
+}
+
+static inline void *__zs_map_object(struct mapping_area *area,
+ struct page *pages[2], int off, int size)
+{
+ BUG_ON(map_vm_area(area->vm, PAGE_KERNEL, &pages));
+ area->vm_addr = area->vm->addr;
+ return area->vm_addr + off;
+}
+
+static inline void __zs_unmap_object(struct mapping_area *area,
+ struct page *pages[2], int off, int size)
+{
+ unsigned long addr = (unsigned long)area->vm_addr;
+ unsigned long end = addr + (PAGE_SIZE * 2);
+
+ flush_cache_vunmap(addr, end);
+ unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
+ flush_tlb_kernel_range(addr, end);
+}
+
+#else /* CONFIG_PGTABLE_MAPPING*/
+
+static inline int __zs_cpu_up(struct mapping_area *area)
+{
+ /*
+ * Make sure we don't leak memory if a cpu UP notification
+ * and zs_init() race and both call zs_cpu_up() on the same cpu
+ */
+ if (area->vm_buf)
+ return 0;
+ area->vm_buf = (char *)__get_free_page(GFP_KERNEL);
+ if (!area->vm_buf)
+ return -ENOMEM;
+ return 0;
+}
+
+static inline void __zs_cpu_down(struct mapping_area *area)
+{
+ if (area->vm_buf)
+ free_page((unsigned long)area->vm_buf);
+ area->vm_buf = NULL;
+}
+
+static void *__zs_map_object(struct mapping_area *area,
+ struct page *pages[2], int off, int size)
+{
+ int sizes[2];
+ void *addr;
+ char *buf = area->vm_buf;
+
+ /* disable page faults to match kmap_atomic() return conditions */
+ pagefault_disable();
+
+ /* no read fastpath */
+ if (area->vm_mm == ZS_MM_WO)
+ goto out;
+
+ sizes[0] = PAGE_SIZE - off;
+ sizes[1] = size - sizes[0];
+
+ /* copy object to per-cpu buffer */
+ addr = kmap_atomic(pages[0]);
+ memcpy(buf, addr + off, sizes[0]);
+ kunmap_atomic(addr);
+ addr = kmap_atomic(pages[1]);
+ memcpy(buf + sizes[0], addr, sizes[1]);
+ kunmap_atomic(addr);
+out:
+ return area->vm_buf;
+}
+
+static void __zs_unmap_object(struct mapping_area *area,
+ struct page *pages[2], int off, int size)
+{
+ int sizes[2];
+ void *addr;
+ char *buf = area->vm_buf;
+
+ /* no write fastpath */
+ if (area->vm_mm == ZS_MM_RO)
+ goto out;
+
+ sizes[0] = PAGE_SIZE - off;
+ sizes[1] = size - sizes[0];
+
+ /* copy per-cpu buffer to object */
+ addr = kmap_atomic(pages[0]);
+ memcpy(addr + off, buf, sizes[0]);
+ kunmap_atomic(addr);
+ addr = kmap_atomic(pages[1]);
+ memcpy(addr, buf + sizes[0], sizes[1]);
+ kunmap_atomic(addr);
+
+out:
+ /* enable page faults to match kunmap_atomic() return conditions */
+ pagefault_enable();
+}
+
+#endif /* CONFIG_PGTABLE_MAPPING */
+
+static int zs_cpu_notifier(struct notifier_block *nb, unsigned long action,
+ void *pcpu)
+{
+ int ret, cpu = (long)pcpu;
+ struct mapping_area *area;
+
+ switch (action) {
+ case CPU_UP_PREPARE:
+ area = &per_cpu(zs_map_area, cpu);
+ ret = __zs_cpu_up(area);
+ if (ret)
+ return notifier_from_errno(ret);
+ break;
+ case CPU_DEAD:
+ case CPU_UP_CANCELED:
+ area = &per_cpu(zs_map_area, cpu);
+ __zs_cpu_down(area);
+ break;
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block zs_cpu_nb = {
+ .notifier_call = zs_cpu_notifier
+};
+
+static void zs_exit(void)
+{
+ int cpu;
+
+ for_each_online_cpu(cpu)
+ zs_cpu_notifier(NULL, CPU_DEAD, (void *)(long)cpu);
+ unregister_cpu_notifier(&zs_cpu_nb);
+}
+
+static int zs_init(void)
+{
+ int cpu, ret;
+
+ register_cpu_notifier(&zs_cpu_nb);
+ for_each_online_cpu(cpu) {
+ ret = zs_cpu_notifier(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
+ if (notifier_to_errno(ret))
+ goto fail;
+ }
+ return 0;
+fail:
+ zs_exit();
+ return notifier_to_errno(ret);
+}
+
+/**
+ * zs_create_pool - Creates an allocation pool to work from.
+ * @flags: allocation flags used to allocate pool metadata
+ * @ops: allocation/free callbacks for expanding the pool
+ *
+ * This function must be called before anything when using
+ * the zsmalloc allocator.
+ *
+ * On success, a pointer to the newly created pool is returned,
+ * otherwise NULL.
+ */
+struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops)
+{
+ int i, ovhd_size;
+ struct zs_pool *pool;
+
+ ovhd_size = roundup(sizeof(*pool), PAGE_SIZE);
+ pool = kzalloc(ovhd_size, flags);
+ if (!pool)
+ return NULL;
+
+ for (i = 0; i < ZS_SIZE_CLASSES; i++) {
+ int size;
+ struct size_class *class;
+
+ size = ZS_MIN_ALLOC_SIZE + i * ZS_SIZE_CLASS_DELTA;
+ if (size > ZS_MAX_ALLOC_SIZE)
+ size = ZS_MAX_ALLOC_SIZE;
+
+ class = &pool->size_class[i];
+ class->size = size;
+ class->index = i;
+ spin_lock_init(&class->lock);
+ class->pages_per_zspage = get_pages_per_zspage(size);
+
+ }
+
+ if (ops)
+ pool->ops = ops;
+ else
+ pool->ops = &zs_default_ops;
+
+ return pool;
+}
+EXPORT_SYMBOL_GPL(zs_create_pool);
+
+void zs_destroy_pool(struct zs_pool *pool)
+{
+ int i;
+
+ for (i = 0; i < ZS_SIZE_CLASSES; i++) {
+ int fg;
+ struct size_class *class = &pool->size_class[i];
+
+ for (fg = 0; fg < _ZS_NR_FULLNESS_GROUPS; fg++) {
+ if (class->fullness_list[fg]) {
+ pr_info("Freeing non-empty class with size "
+ "%db, fullness group %d\n",
+ class->size, fg);
+ }
+ }
+ }
+ kfree(pool);
+}
+EXPORT_SYMBOL_GPL(zs_destroy_pool);
+
+/**
+ * zs_malloc - Allocate block of given size from pool.
+ * @pool: pool to allocate from
+ * @size: size of block to allocate
+ *
+ * On success, handle to the allocated object is returned,
+ * otherwise 0.
+ * Allocation requests with size > ZS_MAX_ALLOC_SIZE will fail.
+ */
+unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags)
+{
+ unsigned long obj;
+ struct link_free *link;
+ int class_idx;
+ struct size_class *class;
+
+ struct page *first_page, *m_page;
+ unsigned long m_objidx, m_offset;
+
+ if (unlikely(!size || size > ZS_MAX_ALLOC_SIZE))
+ return 0;
+
+ class_idx = get_size_class_index(size);
+ class = &pool->size_class[class_idx];
+ BUG_ON(class_idx != class->index);
+
+ spin_lock(&class->lock);
+ first_page = find_get_zspage(class);
+
+ if (!first_page) {
+ spin_unlock(&class->lock);
+ first_page = alloc_zspage(pool->ops, class, flags);
+ if (unlikely(!first_page))
+ return 0;
+
+ set_zspage_mapping(first_page, class->index, ZS_EMPTY);
+ spin_lock(&class->lock);
+ class->pages_allocated += class->pages_per_zspage;
+ }
+
+ obj = (unsigned long)first_page->freelist;
+ obj_handle_to_location(obj, &m_page, &m_objidx);
+ m_offset = obj_idx_to_offset(m_page, m_objidx, class->size);
+
+ link = (struct link_free *)kmap_atomic(m_page) +
+ m_offset / sizeof(*link);
+ first_page->freelist = link->next;
+ memset(link, POISON_INUSE, sizeof(*link));
+ kunmap_atomic(link);
+
+ first_page->inuse++;
+ /* Now move the zspage to another fullness group, if required */
+ fix_fullness_group(pool, first_page);
+ spin_unlock(&class->lock);
+
+ return obj;
+}
+EXPORT_SYMBOL_GPL(zs_malloc);
+
+void zs_free(struct zs_pool *pool, unsigned long obj)
+{
+ struct link_free *link;
+ struct page *first_page, *f_page;
+ unsigned long f_objidx, f_offset;
+
+ int class_idx;
+ struct size_class *class;
+ enum fullness_group fullness;
+
+ if (unlikely(!obj))
+ return;
+
+ obj_handle_to_location(obj, &f_page, &f_objidx);
+ first_page = get_first_page(f_page);
+
+ get_zspage_mapping(first_page, &class_idx, &fullness);
+ class = &pool->size_class[class_idx];
+ f_offset = obj_idx_to_offset(f_page, f_objidx, class->size);
+
+ spin_lock(&class->lock);
+
+ /* Insert this object in containing zspage's freelist */
+ link = (struct link_free *)((unsigned char *)kmap_atomic(f_page)
+ + f_offset);
+ link->next = first_page->freelist;
+ kunmap_atomic(link);
+ first_page->freelist = (void *)obj;
+
+ first_page->inuse--;
+ fullness = fix_fullness_group(pool, first_page);
+
+ if (fullness == ZS_EMPTY)
+ class->pages_allocated -= class->pages_per_zspage;
+
+ spin_unlock(&class->lock);
+
+ if (fullness == ZS_EMPTY)
+ free_zspage(pool->ops, first_page);
+}
+EXPORT_SYMBOL_GPL(zs_free);
+
+/**
+ * zs_map_object - get address of allocated object from handle.
+ * @pool: pool from which the object was allocated
+ * @handle: handle returned from zs_malloc
+ *
+ * Before using an object allocated from zs_malloc, it must be mapped using
+ * this function. When done with the object, it must be unmapped using
+ * zs_unmap_object.
+ *
+ * Only one object can be mapped per cpu at a time. There is no protection
+ * against nested mappings.
+ *
+ * This function returns with preemption and page faults disabled.
+*/
+void *zs_map_object(struct zs_pool *pool, unsigned long handle,
+ enum zs_mapmode mm)
+{
+ struct page *page;
+ unsigned long obj_idx, off;
+
+ unsigned int class_idx;
+ enum fullness_group fg;
+ struct size_class *class;
+ struct mapping_area *area;
+ struct page *pages[2];
+
+ BUG_ON(!handle);
+
+ /*
+ * Because we use per-cpu mapping areas shared among the
+ * pools/users, we can't allow mapping in interrupt context
+ * because it can corrupt another users mappings.
+ */
+ BUG_ON(in_interrupt());
+
+ obj_handle_to_location(handle, &page, &obj_idx);
+ get_zspage_mapping(get_first_page(page), &class_idx, &fg);
+ class = &pool->size_class[class_idx];
+ off = obj_idx_to_offset(page, obj_idx, class->size);
+
+ area = &get_cpu_var(zs_map_area);
+ area->vm_mm = mm;
+ if (off + class->size <= PAGE_SIZE) {
+ /* this object is contained entirely within a page */
+ area->vm_addr = kmap_atomic(page);
+ return area->vm_addr + off;
+ }
+
+ /* this object spans two pages */
+ pages[0] = page;
+ pages[1] = get_next_page(page);
+ BUG_ON(!pages[1]);
+
+ return __zs_map_object(area, pages, off, class->size);
+}
+EXPORT_SYMBOL_GPL(zs_map_object);
+
+void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
+{
+ struct page *page;
+ unsigned long obj_idx, off;
+
+ unsigned int class_idx;
+ enum fullness_group fg;
+ struct size_class *class;
+ struct mapping_area *area;
+
+ BUG_ON(!handle);
+
+ obj_handle_to_location(handle, &page, &obj_idx);
+ get_zspage_mapping(get_first_page(page), &class_idx, &fg);
+ class = &pool->size_class[class_idx];
+ off = obj_idx_to_offset(page, obj_idx, class->size);
+
+ area = &__get_cpu_var(zs_map_area);
+ if (off + class->size <= PAGE_SIZE)
+ kunmap_atomic(area->vm_addr);
+ else {
+ struct page *pages[2];
+
+ pages[0] = page;
+ pages[1] = get_next_page(page);
+ BUG_ON(!pages[1]);
+
+ __zs_unmap_object(area, pages, off, class->size);
+ }
+ put_cpu_var(zs_map_area);
+}
+EXPORT_SYMBOL_GPL(zs_unmap_object);
+
+u64 zs_get_total_size_bytes(struct zs_pool *pool)
+{
+ int i;
+ u64 npages = 0;
+
+ for (i = 0; i < ZS_SIZE_CLASSES; i++)
+ npages += pool->size_class[i].pages_allocated;
+
+ return npages << PAGE_SHIFT;
+}
+EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
+
+module_init(zs_init);
+module_exit(zs_exit);
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Nitin Gupta <[email protected]>");
--
1.8.2.1

2013-04-10 18:19:44

by Seth Jennings

[permalink] [raw]
Subject: [PATCHv9 7/8] zswap: add swap page writeback support

This patch adds support for evicting swap pages that are currently
compressed in zswap to the swap device. This functionality is very
important and make zswap a true cache in that, once the cache is full
or can't grow due to memory pressure, the oldest pages can be moved
out of zswap to the swap device so newer pages can be compressed and
stored in zswap.

This introduces a good amount of new code to guarantee coherency.
Most notably, and LRU list is added to the zswap_tree structure,
and refcounts are added to each entry to ensure that one code path
doesn't free then entry while another code path is operating on it.

Signed-off-by: Seth Jennings <[email protected]>
---
mm/zswap.c | 530 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 508 insertions(+), 22 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index db283c4..edb354b 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -36,6 +36,12 @@
#include <linux/mempool.h>
#include <linux/zsmalloc.h>

+#include <linux/mm_types.h>
+#include <linux/page-flags.h>
+#include <linux/swapops.h>
+#include <linux/writeback.h>
+#include <linux/pagemap.h>
+
/*********************************
* statistics
**********************************/
@@ -43,6 +49,8 @@
static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
/* The number of compressed pages currently stored in zswap */
static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
+/* The number of outstanding pages awaiting writeback */
+static atomic_t zswap_outstanding_writebacks = ATOMIC_INIT(0);

/*
* The statistics below are not protected from concurrent access for
@@ -51,9 +59,13 @@ static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
* certain event is occurring.
*/
static u64 zswap_pool_limit_hit;
+static u64 zswap_written_back_pages;
static u64 zswap_reject_compress_poor;
+static u64 zswap_writeback_attempted;
+static u64 zswap_reject_tmppage_fail;
static u64 zswap_reject_zsmalloc_fail;
static u64 zswap_reject_kmemcache_fail;
+static u64 zswap_saved_by_writeback;
static u64 zswap_duplicate_entry;

/*********************************
@@ -82,6 +94,14 @@ static unsigned int zswap_max_compression_ratio = 80;
module_param_named(max_compression_ratio,
zswap_max_compression_ratio, uint, 0644);

+/*
+ * Maximum number of outstanding writebacks allowed at any given time.
+ * This is to prevent decompressing an unbounded number of compressed
+ * pages into the swap cache all at once, and to help with writeback
+ * congestion.
+*/
+#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
+
/*********************************
* compression functions
**********************************/
@@ -144,18 +164,49 @@ static void zswap_comp_exit(void)
/*********************************
* data structures
**********************************/
+
+/*
+ * struct zswap_entry
+ *
+ * This structure contains the metadata for tracking a single compressed
+ * page within zswap.
+ *
+ * rbnode - links the entry into red-black tree for the appropriate swap type
+ * lru - links the entry into the lru list for the appropriate swap type
+ * refcount - the number of outstanding reference to the entry. This is needed
+ * to protect against premature freeing of the entry by code
+ * concurent calls to load, invalidate, and writeback. The lock
+ * for the zswap_tree structure that contains the entry must
+ * be held while changing the refcount. Since the lock must
+ * be held, there is no reason to also make refcount atomic.
+ * type - the swap type for the entry. Used to map back to the zswap_tree
+ * structure that contains the entry.
+ * offset - the swap offset for the entry. Index into the red-black tree.
+ * handle - zsmalloc allocation handle that stores the compressed page data
+ * length - the length in bytes of the compressed page data. Needed during
+ * decompression
+ */
struct zswap_entry {
struct rb_node rbnode;
- unsigned type;
+ struct list_head lru;
+ int refcount;
pgoff_t offset;
unsigned long handle;
unsigned int length;
};

+/*
+ * The tree lock in the zswap_tree struct protects a few things:
+ * - the rbtree
+ * - the lru list
+ * - the refcount field of each entry in the tree
+ */
struct zswap_tree {
struct rb_root rbroot;
+ struct list_head lru;
spinlock_t lock;
struct zs_pool *pool;
+ unsigned type;
};

static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
@@ -185,6 +236,8 @@ static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
entry = kmem_cache_alloc(zswap_entry_cache, gfp);
if (!entry)
return NULL;
+ INIT_LIST_HEAD(&entry->lru);
+ entry->refcount = 1;
return entry;
}

@@ -193,6 +246,17 @@ static inline void zswap_entry_cache_free(struct zswap_entry *entry)
kmem_cache_free(zswap_entry_cache, entry);
}

+static inline void zswap_entry_get(struct zswap_entry *entry)
+{
+ entry->refcount++;
+}
+
+static inline int zswap_entry_put(struct zswap_entry *entry)
+{
+ entry->refcount--;
+ return entry->refcount;
+}
+
/*********************************
* rbtree functions
**********************************/
@@ -367,6 +431,328 @@ static struct zs_ops zswap_zs_ops = {
.free = zswap_free_page
};

+
+/*********************************
+* helpers
+**********************************/
+
+/*
+ * Carries out the common pattern of freeing and entry's zsmalloc allocation,
+ * freeing the entry itself, and decrementing the number of stored pages.
+ */
+static void zswap_free_entry(struct zswap_tree *tree, struct zswap_entry *entry)
+{
+ zs_free(tree->pool, entry->handle);
+ zswap_entry_cache_free(entry);
+ atomic_dec(&zswap_stored_pages);
+}
+
+/*********************************
+* writeback code
+**********************************/
+static void zswap_end_swap_write(struct bio *bio, int err)
+{
+ end_swap_bio_write(bio, err);
+ atomic_dec(&zswap_outstanding_writebacks);
+ zswap_written_back_pages++;
+}
+
+/* return enum for zswap_get_swap_cache_page */
+enum zswap_get_swap_ret {
+ ZSWAP_SWAPCACHE_NEW,
+ ZSWAP_SWAPCACHE_EXIST,
+ ZSWAP_SWAPCACHE_NOMEM
+};
+
+/*
+ * zswap_get_swap_cache_page
+ *
+ * This is an adaption of read_swap_cache_async()
+ *
+ * This function tries to find a page with the given swap entry
+ * in the swapper_space address space (the swap cache). If the page
+ * is found, it is returned in retpage. Otherwise, a page is allocated,
+ * added to the swap cache, and returned in retpage.
+ *
+ * If success, the swap cache page is returned in retpage
+ * Returns 0 if page was already in the swap cache, page is not locked
+ * Returns 1 if the new page needs to be populated, page is locked
+ * Returns <0 on error
+ */
+static int zswap_get_swap_cache_page(swp_entry_t entry,
+ struct page **retpage)
+{
+ struct page *found_page, *new_page = NULL;
+ struct address_space *swapper_space = &swapper_spaces[swp_type(entry)];
+ int err;
+
+ *retpage = NULL;
+ do {
+ /*
+ * First check the swap cache. Since this is normally
+ * called after lookup_swap_cache() failed, re-calling
+ * that would confuse statistics.
+ */
+ found_page = find_get_page(swapper_space, entry.val);
+ if (found_page)
+ break;
+
+ /*
+ * Get a new page to read into from swap.
+ */
+ if (!new_page) {
+ new_page = alloc_page(GFP_KERNEL);
+ if (!new_page)
+ break; /* Out of memory */
+ }
+
+ /*
+ * call radix_tree_preload() while we can wait.
+ */
+ err = radix_tree_preload(GFP_KERNEL);
+ if (err)
+ break;
+
+ /*
+ * Swap entry may have been freed since our caller observed it.
+ */
+ err = swapcache_prepare(entry);
+ if (err == -EEXIST) { /* seems racy */
+ radix_tree_preload_end();
+ continue;
+ }
+ if (err) { /* swp entry is obsolete ? */
+ radix_tree_preload_end();
+ break;
+ }
+
+ /* May fail (-ENOMEM) if radix-tree node allocation failed. */
+ __set_page_locked(new_page);
+ SetPageSwapBacked(new_page);
+ err = __add_to_swap_cache(new_page, entry);
+ if (likely(!err)) {
+ radix_tree_preload_end();
+ lru_cache_add_anon(new_page);
+ *retpage = new_page;
+ return ZSWAP_SWAPCACHE_NEW;
+ }
+ radix_tree_preload_end();
+ ClearPageSwapBacked(new_page);
+ __clear_page_locked(new_page);
+ /*
+ * add_to_swap_cache() doesn't return -EEXIST, so we can safely
+ * clear SWAP_HAS_CACHE flag.
+ */
+ swapcache_free(entry, NULL);
+ } while (err != -ENOMEM);
+
+ if (new_page)
+ page_cache_release(new_page);
+ if (!found_page)
+ return ZSWAP_SWAPCACHE_NOMEM;
+ *retpage = found_page;
+ return ZSWAP_SWAPCACHE_EXIST;
+}
+
+/*
+ * Attempts to free and entry by adding a page to the swap cache,
+ * decompressing the entry data into the page, and issuing a
+ * bio write to write the page back to the swap device.
+ *
+ * This can be thought of as a "resumed writeback" of the page
+ * to the swap device. We are basically resuming the same swap
+ * writeback path that was intercepted with the frontswap_store()
+ * in the first place. After the page has been decompressed into
+ * the swap cache, the compressed version stored by zswap can be
+ * freed.
+ */
+static int zswap_writeback_entry(struct zswap_tree *tree,
+ struct zswap_entry *entry)
+{
+ unsigned long type = tree->type;
+ struct page *page;
+ swp_entry_t swpentry;
+ u8 *src, *dst;
+ unsigned int dlen;
+ int ret;
+ struct writeback_control wbc = {
+ .sync_mode = WB_SYNC_NONE,
+ };
+
+ /* get/allocate page in the swap cache */
+ swpentry = swp_entry(type, entry->offset);
+
+ /* try to allocate swap cache page */
+ switch (zswap_get_swap_cache_page(swpentry, &page)) {
+
+ case ZSWAP_SWAPCACHE_NOMEM: /* no memory */
+ return -ENOMEM;
+ break; /* not reached */
+
+ case ZSWAP_SWAPCACHE_EXIST: /* page is unlocked */
+ /* page is already in the swap cache, ignore for now */
+ return -EEXIST;
+ break; /* not reached */
+
+ case ZSWAP_SWAPCACHE_NEW: /* page is locked */
+ /* decompress */
+ dlen = PAGE_SIZE;
+ src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
+ dst = kmap_atomic(page);
+ ret = zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
+ dst, &dlen);
+ kunmap_atomic(dst);
+ zs_unmap_object(tree->pool, entry->handle);
+ BUG_ON(ret);
+ BUG_ON(dlen != PAGE_SIZE);
+
+ /* page is up to date */
+ SetPageUptodate(page);
+ }
+
+ /* start writeback */
+ SetPageReclaim(page);
+ if (!__swap_writepage(page, &wbc, zswap_end_swap_write))
+ atomic_inc(&zswap_outstanding_writebacks);
+ page_cache_release(page);
+
+ return 0;
+}
+
+/*
+ * Attempts to free nr of entries via writeback to the swap device.
+ * The number of entries that were actually freed is returned.
+ */
+static int zswap_writeback_entries(struct zswap_tree *tree, int nr)
+{
+ struct zswap_entry *entry;
+ int i, ret, refcount, freed_nr = 0;
+
+ for (i = 0; i < nr; i++) {
+ /*
+ * This limits is arbitrary for now until a better
+ * policy can be implemented. This is so we don't
+ * eat all of RAM decompressing pages for writeback.
+ */
+ if (atomic_read(&zswap_outstanding_writebacks) >
+ ZSWAP_MAX_OUTSTANDING_FLUSHES)
+ break;
+
+ spin_lock(&tree->lock);
+
+ /* dequeue from lru */
+ if (list_empty(&tree->lru)) {
+ spin_unlock(&tree->lock);
+ break;
+ }
+ entry = list_first_entry(&tree->lru,
+ struct zswap_entry, lru);
+ list_del_init(&entry->lru);
+
+ /* so invalidate doesn't free the entry from under us */
+ zswap_entry_get(entry);
+
+ spin_unlock(&tree->lock);
+
+ /* attempt writeback */
+ ret = zswap_writeback_entry(tree, entry);
+
+ spin_lock(&tree->lock);
+
+ /* drop reference from above */
+ refcount = zswap_entry_put(entry);
+
+ if (!ret)
+ /* drop the initial reference from entry creation */
+ refcount = zswap_entry_put(entry);
+
+ /*
+ * There are four possible values for refcount here:
+ * (1) refcount is 2, writeback failed and load is in progress;
+ * do nothing, load will add us back to the LRU
+ * (2) refcount is 1, writeback failed; do not free entry,
+ * add back to LRU
+ * (3) refcount is 0, (normal case) not invalidate yet;
+ * remove from rbtree and free entry
+ * (4) refcount is -1, invalidate happened during writeback;
+ * free entry
+ */
+ if (refcount == 1)
+ list_add(&entry->lru, &tree->lru);
+
+ if (refcount == 0) {
+ /* no invalidate yet, remove from rbtree */
+ rb_erase(&entry->rbnode, &tree->rbroot);
+ }
+ spin_unlock(&tree->lock);
+ if (refcount <= 0) {
+ /* free the entry */
+ zswap_free_entry(tree, entry);
+ freed_nr++;
+ }
+ }
+ return freed_nr;
+}
+
+/*******************************************
+* page pool for temporary compression result
+********************************************/
+#define ZSWAP_TMPPAGE_POOL_PAGES 16
+static LIST_HEAD(zswap_tmppage_list);
+static DEFINE_SPINLOCK(zswap_tmppage_lock);
+
+static void zswap_tmppage_pool_destroy(void)
+{
+ struct page *page, *tmppage;
+
+ spin_lock(&zswap_tmppage_lock);
+ list_for_each_entry_safe(page, tmppage, &zswap_tmppage_list, lru) {
+ list_del(&page->lru);
+ __free_pages(page, 1);
+ }
+ spin_unlock(&zswap_tmppage_lock);
+}
+
+static int zswap_tmppage_pool_create(void)
+{
+ int i;
+ struct page *page;
+
+ for (i = 0; i < ZSWAP_TMPPAGE_POOL_PAGES; i++) {
+ page = alloc_pages(GFP_KERNEL, 1);
+ if (!page) {
+ zswap_tmppage_pool_destroy();
+ return -ENOMEM;
+ }
+ spin_lock(&zswap_tmppage_lock);
+ list_add(&page->lru, &zswap_tmppage_list);
+ spin_unlock(&zswap_tmppage_lock);
+ }
+ return 0;
+}
+
+static inline struct page *zswap_tmppage_alloc(void)
+{
+ struct page *page;
+
+ spin_lock(&zswap_tmppage_lock);
+ if (list_empty(&zswap_tmppage_list)) {
+ spin_unlock(&zswap_tmppage_lock);
+ return NULL;
+ }
+ page = list_first_entry(&zswap_tmppage_list, struct page, lru);
+ list_del(&page->lru);
+ spin_unlock(&zswap_tmppage_lock);
+ return page;
+}
+
+static inline void zswap_tmppage_free(struct page *page)
+{
+ spin_lock(&zswap_tmppage_lock);
+ list_add(&page->lru, &zswap_tmppage_list);
+ spin_unlock(&zswap_tmppage_lock);
+}
+
/*********************************
* frontswap hooks
**********************************/
@@ -380,7 +766,9 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
unsigned int dlen = PAGE_SIZE;
unsigned long handle;
char *buf;
- u8 *src, *dst;
+ u8 *src, *dst, *tmpdst;
+ struct page *tmppage;
+ bool writeback_attempted = 0;

if (!tree) {
ret = -ENODEV;
@@ -402,12 +790,12 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
kunmap_atomic(src);
if (ret) {
ret = -EINVAL;
- goto putcpu;
+ goto freepage;
}
if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
zswap_reject_compress_poor++;
ret = -E2BIG;
- goto putcpu;
+ goto freepage;
}

/* store */
@@ -415,18 +803,48 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
__GFP_NOWARN);
if (!handle) {
- zswap_reject_zsmalloc_fail++;
- ret = -ENOMEM;
- goto putcpu;
+ zswap_writeback_attempted++;
+ /*
+ * Copy compressed buffer out of per-cpu storage so
+ * we can re-enable preemption.
+ */
+ tmppage = zswap_tmppage_alloc();
+ if (!tmppage) {
+ zswap_reject_tmppage_fail++;
+ ret = -ENOMEM;
+ goto freepage;
+ }
+ writeback_attempted = 1;
+ tmpdst = page_address(tmppage);
+ memcpy(tmpdst, dst, dlen);
+ dst = tmpdst;
+ put_cpu_var(zswap_dstmem);
+
+ /* try to free up some space */
+ /* TODO: replace with more targeted policy */
+ zswap_writeback_entries(tree, 16);
+ /* try again, allowing wait */
+ handle = zs_malloc(tree->pool, dlen,
+ __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
+ __GFP_NOWARN);
+ if (!handle) {
+ /* still no space, fail */
+ zswap_reject_zsmalloc_fail++;
+ ret = -ENOMEM;
+ goto freepage;
+ }
+ zswap_saved_by_writeback++;
}

buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
memcpy(buf, dst, dlen);
zs_unmap_object(tree->pool, handle);
- put_cpu_var(zswap_dstmem);
+ if (writeback_attempted)
+ zswap_tmppage_free(tmppage);
+ else
+ put_cpu_var(zswap_dstmem);

/* populate entry */
- entry->type = type;
entry->offset = offset;
entry->handle = handle;
entry->length = dlen;
@@ -437,16 +855,17 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
if (ret == -EEXIST) {
zswap_duplicate_entry++;
-
- /* remove from rbtree */
+ /* remove from rbtree and lru */
rb_erase(&dupentry->rbnode, &tree->rbroot);
-
- /* free */
- zs_free(tree->pool, dupentry->handle);
- zswap_entry_cache_free(dupentry);
- atomic_dec(&zswap_stored_pages);
+ if (!list_empty(&dupentry->lru))
+ list_del_init(&dupentry->lru);
+ if (!zswap_entry_put(dupentry)) {
+ /* free */
+ zswap_free_entry(tree, dupentry);
+ }
}
} while (ret == -EEXIST);
+ list_add_tail(&entry->lru, &tree->lru);
spin_unlock(&tree->lock);

/* update stats */
@@ -454,8 +873,11 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,

return 0;

-putcpu:
- put_cpu_var(zswap_dstmem);
+freepage:
+ if (writeback_attempted)
+ zswap_tmppage_free(tmppage);
+ else
+ put_cpu_var(zswap_dstmem);
zswap_entry_cache_free(entry);
reject:
return ret;
@@ -472,10 +894,21 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
struct zswap_entry *entry;
u8 *src, *dst;
unsigned int dlen;
+ int refcount;

/* find */
spin_lock(&tree->lock);
entry = zswap_rb_search(&tree->rbroot, offset);
+ if (!entry) {
+ /* entry was written back */
+ spin_unlock(&tree->lock);
+ return -1;
+ }
+ zswap_entry_get(entry);
+
+ /* remove from lru */
+ if (!list_empty(&entry->lru))
+ list_del_init(&entry->lru);
spin_unlock(&tree->lock);

/* decompress */
@@ -487,6 +920,24 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
kunmap_atomic(dst);
zs_unmap_object(tree->pool, entry->handle);

+ spin_lock(&tree->lock);
+ refcount = zswap_entry_put(entry);
+ if (likely(refcount)) {
+ list_add_tail(&entry->lru, &tree->lru);
+ spin_unlock(&tree->lock);
+ return 0;
+ }
+ spin_unlock(&tree->lock);
+
+ /*
+ * We don't have to unlink from the rbtree because
+ * zswap_writeback_entry() or zswap_frontswap_invalidate page()
+ * has already done this for us if we are the last reference.
+ */
+ /* free */
+
+ zswap_free_entry(tree, entry);
+
return 0;
}

@@ -495,19 +946,34 @@ static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
{
struct zswap_tree *tree = zswap_trees[type];
struct zswap_entry *entry;
+ int refcount;

/* find */
spin_lock(&tree->lock);
entry = zswap_rb_search(&tree->rbroot, offset);
+ if (!entry) {
+ /* entry was written back */
+ spin_unlock(&tree->lock);
+ return;
+ }

- /* remove from rbtree */
+ /* remove from rbtree and lru */
rb_erase(&entry->rbnode, &tree->rbroot);
+ if (!list_empty(&entry->lru))
+ list_del_init(&entry->lru);
+
+ /* drop the initial reference from entry creation */
+ refcount = zswap_entry_put(entry);
+
spin_unlock(&tree->lock);

+ if (refcount) {
+ /* writeback in progress, writeback will free */
+ return;
+ }
+
/* free */
- zs_free(tree->pool, entry->handle);
- zswap_entry_cache_free(entry);
- atomic_dec(&zswap_stored_pages);
+ zswap_free_entry(tree, entry);
}

/* invalidates all pages for the given swap type */
@@ -536,8 +1002,10 @@ static void zswap_frontswap_invalidate_area(unsigned type)
rb_erase(&entry->rbnode, &tree->rbroot);
zs_free(tree->pool, entry->handle);
zswap_entry_cache_free(entry);
+ atomic_dec(&zswap_stored_pages);
}
tree->rbroot = RB_ROOT;
+ INIT_LIST_HEAD(&tree->lru);
spin_unlock(&tree->lock);
}

@@ -553,7 +1021,9 @@ static void zswap_frontswap_init(unsigned type)
if (!tree->pool)
goto freetree;
tree->rbroot = RB_ROOT;
+ INIT_LIST_HEAD(&tree->lru);
spin_lock_init(&tree->lock);
+ tree->type = type;
zswap_trees[type] = tree;
return;

@@ -588,20 +1058,30 @@ static int __init zswap_debugfs_init(void)
if (!zswap_debugfs_root)
return -ENOMEM;

+ debugfs_create_u64("saved_by_writeback", S_IRUGO,
+ zswap_debugfs_root, &zswap_saved_by_writeback);
debugfs_create_u64("pool_limit_hit", S_IRUGO,
zswap_debugfs_root, &zswap_pool_limit_hit);
+ debugfs_create_u64("reject_writeback_attempted", S_IRUGO,
+ zswap_debugfs_root, &zswap_writeback_attempted);
+ debugfs_create_u64("reject_tmppage_fail", S_IRUGO,
+ zswap_debugfs_root, &zswap_reject_tmppage_fail);
debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
zswap_debugfs_root, &zswap_reject_kmemcache_fail);
debugfs_create_u64("reject_compress_poor", S_IRUGO,
zswap_debugfs_root, &zswap_reject_compress_poor);
+ debugfs_create_u64("written_back_pages", S_IRUGO,
+ zswap_debugfs_root, &zswap_written_back_pages);
debugfs_create_u64("duplicate_entry", S_IRUGO,
zswap_debugfs_root, &zswap_duplicate_entry);
debugfs_create_atomic_t("pool_pages", S_IRUGO,
zswap_debugfs_root, &zswap_pool_pages);
debugfs_create_atomic_t("stored_pages", S_IRUGO,
zswap_debugfs_root, &zswap_stored_pages);
+ debugfs_create_atomic_t("outstanding_writebacks", S_IRUGO,
+ zswap_debugfs_root, &zswap_outstanding_writebacks);

return 0;
}
@@ -636,6 +1116,10 @@ static int __init init_zswap(void)
pr_err("page pool initialization failed\n");
goto pagepoolfail;
}
+ if (zswap_tmppage_pool_create()) {
+ pr_err("workmem pool initialization failed\n");
+ goto tmppoolfail;
+ }
if (zswap_comp_init()) {
pr_err("compressor initialization failed\n");
goto compfail;
@@ -651,6 +1135,8 @@ static int __init init_zswap(void)
pcpufail:
zswap_comp_exit();
compfail:
+ zswap_tmppage_pool_destroy();
+tmppoolfail:
zswap_page_pool_destroy();
pagepoolfail:
zswap_entry_cache_destory();
--
1.8.2.1

2013-04-10 18:20:10

by Seth Jennings

[permalink] [raw]
Subject: [PATCHv9 8/8] zswap: add documentation

This patch adds the documentation file for the zswap functionality

Signed-off-by: Seth Jennings <[email protected]>
---
Documentation/vm/zsmalloc.txt | 2 +-
Documentation/vm/zswap.txt | 82 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 83 insertions(+), 1 deletion(-)
create mode 100644 Documentation/vm/zswap.txt

diff --git a/Documentation/vm/zsmalloc.txt b/Documentation/vm/zsmalloc.txt
index 85aa617..4133ade 100644
--- a/Documentation/vm/zsmalloc.txt
+++ b/Documentation/vm/zsmalloc.txt
@@ -65,4 +65,4 @@ zs_unmap_object(pool, handle);
zs_free(pool, handle);

/* destroy the pool */
-zs_destroy_pool(pool);
+zs_destroy_pool(pool);
diff --git a/Documentation/vm/zswap.txt b/Documentation/vm/zswap.txt
new file mode 100644
index 0000000..f29b82f
--- /dev/null
+++ b/Documentation/vm/zswap.txt
@@ -0,0 +1,82 @@
+Overview:
+
+Zswap is a lightweight compressed cache for swap pages. It takes
+pages that are in the process of being swapped out and attempts to
+compress them into a dynamically allocated RAM-based memory pool.
+If this process is successful, the writeback to the swap device is
+deferred and, in many cases, avoided completely.  This results in
+a significant I/O reduction and performance gains for systems that
+are swapping.
+
+Zswap provides compressed swap caching that basically trades CPU cycles
+for reduced swap I/O.  This trade-off can result in a significant
+performance improvement as reads to/writes from to the compressed
+cache almost always faster that reading from a swap device
+which incurs the latency of an asynchronous block I/O read.
+
+Some potential benefits:
+* Desktop/laptop users with limited RAM capacities can mitigate the
+    performance impact of swapping.
+* Overcommitted guests that share a common I/O resource can
+    dramatically reduce their swap I/O pressure, avoiding heavy
+    handed I/O throttling by the hypervisor.  This allows more work
+    to get done with less impact to the guest workload and guests
+    sharing the I/O subsystem
+* Users with SSDs as swap devices can extend the life of the device by
+    drastically reducing life-shortening writes.
+
+Zswap evicts pages from compressed cache on an LRU basis to the backing
+swap device when the compress pool reaches it size limit or the pool is
+unable to obtain additional pages from the buddy allocator.  This
+requirement had been identified in prior community discussions.
+
+To enabled zswap, the "enabled" attribute must be set to 1 at boot time.
+e.g. zswap.enabled=1
+
+Design:
+
+Zswap receives pages for compression through the Frontswap API and
+is able to evict pages from its own compressed pool on an LRU basis
+and write them back to the backing swap device in the case that the
+compressed pool is full or unable to secure additional pages from
+the buddy allocator.
+
+Zswap makes use of zsmalloc for the managing the compressed memory
+pool. This is because zsmalloc is specifically designed to minimize
+fragmentation on large (> PAGE_SIZE/2) allocation sizes. Each
+allocation in zsmalloc is not directly accessible by address.
+Rather, a handle is return by the allocation routine and that handle
+must be mapped before being accessed. The compressed memory pool grows
+on demand and shrinks as compressed pages are freed. The pool is
+not preallocated.
+
+When a swap page is passed from frontswap to zswap, zswap maintains
+a mapping of the swap entry, a combination of the swap type and swap
+offset, to the zsmalloc handle that references that compressed swap
+page. This mapping is achieved with a red-black tree per swap type.
+The swap offset is the search key for the tree nodes.
+
+During a page fault on a PTE that is a swap entry, frontswap calls
+the zswap load function to decompress the page into the page
+allocated by the page fault handler.
+
+Once there are no PTEs referencing a swap page stored in zswap
+(i.e. the count in the swap_map goes to 0) the swap code calls
+the zswap invalidate function, via frontswap, to free the compressed
+entry.
+
+Zswap seeks to be simple in its policies. Sysfs attributes allow for
+two user controlled policies:
+* max_compression_ratio - Maximum compression ratio, as as percentage,
+ for an acceptable compressed page. Any page that does not compress
+ by at least this ratio will be rejected.
+* max_pool_percent - The maximum percentage of memory that the compressed
+ pool can occupy.
+
+Zswap allows the compressor to be selected at kernel boot time by
+setting the “compressor” attribute. The default compressor is lzo.
+e.g. zswap.compressor=deflate
+
+A debugfs interface is provided for various statistic about pool size,
+number of pages stored, and various counters for the reasons pages
+are rejected.
--
1.8.2.1

2013-04-10 18:20:30

by Seth Jennings

[permalink] [raw]
Subject: [PATCHv9 6/8] mm: allow for outstanding swap writeback accounting

To prevent flooding the swap device with writebacks, frontswap
backends need to count and limit the number of outstanding
writebacks. The incrementing of the counter can be done before
the call to __swap_writepage(). However, the caller must receive
a notification when the writeback completes in order to decrement
the counter.

To achieve this functionality, this patch modifies
__swap_writepage() to take the bio completion callback function
as an argument.

end_swap_bio_write(), the normal bio completion function, is also
made non-static so that code doing the accounting can call it
after the accounting is done.

Acked-by: Minchan Kim <[email protected]>
Signed-off-by: Seth Jennings <[email protected]>
---
include/linux/swap.h | 4 +++-
mm/page_io.c | 12 +++++++-----
2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 76f6c3b..b5b12c7 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -330,7 +330,9 @@ static inline void mem_cgroup_uncharge_swap(swp_entry_t ent)
/* linux/mm/page_io.c */
extern int swap_readpage(struct page *);
extern int swap_writepage(struct page *page, struct writeback_control *wbc);
-extern int __swap_writepage(struct page *page, struct writeback_control *wbc);
+extern void end_swap_bio_write(struct bio *bio, int err);
+extern int __swap_writepage(struct page *page, struct writeback_control *wbc,
+ void (*end_write_func)(struct bio *, int));
extern int swap_set_page_dirty(struct page *page);
extern void end_swap_bio_read(struct bio *bio, int err);

diff --git a/mm/page_io.c b/mm/page_io.c
index 1cb382d..56276fe 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -42,7 +42,7 @@ static struct bio *get_swap_bio(gfp_t gfp_flags,
return bio;
}

-static void end_swap_bio_write(struct bio *bio, int err)
+void end_swap_bio_write(struct bio *bio, int err)
{
const int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
struct page *page = bio->bi_io_vec[0].bv_page;
@@ -179,7 +179,8 @@ bad_bmap:
goto out;
}

-int __swap_writepage(struct page *page, struct writeback_control *wbc);
+int __swap_writepage(struct page *page, struct writeback_control *wbc,
+ void (*end_write_func)(struct bio *, int));

/*
* We may have stale swap cache pages in memory: notice
@@ -199,12 +200,13 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
end_page_writeback(page);
goto out;
}
- ret = __swap_writepage(page, wbc);
+ ret = __swap_writepage(page, wbc, end_swap_bio_write);
out:
return ret;
}

-int __swap_writepage(struct page *page, struct writeback_control *wbc)
+int __swap_writepage(struct page *page, struct writeback_control *wbc,
+ void (*end_write_func)(struct bio *, int))
{
struct bio *bio;
int ret = 0, rw = WRITE;
@@ -236,7 +238,7 @@ int __swap_writepage(struct page *page, struct writeback_control *wbc)
return ret;
}

- bio = get_swap_bio(GFP_NOIO, page, end_swap_bio_write);
+ bio = get_swap_bio(GFP_NOIO, page, end_write_func);
if (bio == NULL) {
set_page_dirty(page);
unlock_page(page);
--
1.8.2.1

2013-04-10 18:20:28

by Seth Jennings

[permalink] [raw]
Subject: [PATCHv9 4/8] zswap: add to mm/

zswap is a thin compression backend for frontswap. It receives
pages from frontswap and attempts to store them in a compressed
memory pool, resulting in an effective partial memory reclaim and
dramatically reduced swap device I/O.

Additionally, in most cases, pages can be retrieved from this
compressed store much more quickly than reading from tradition
swap devices resulting in faster performance for many workloads.

This patch adds the zswap driver to mm/

Signed-off-by: Seth Jennings <[email protected]>
---
mm/Kconfig | 15 ++
mm/Makefile | 1 +
mm/zswap.c | 665 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 681 insertions(+)
create mode 100644 mm/zswap.c

diff --git a/mm/Kconfig b/mm/Kconfig
index aa054fc..36d93b0 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -495,3 +495,18 @@ config PGTABLE_MAPPING

You can check speed with zsmalloc benchmark[1].
[1] https://github.com/spartacus06/zsmalloc
+
+config ZSWAP
+ bool "In-kernel swap page compression"
+ depends on FRONTSWAP && CRYPTO
+ select CRYPTO_LZO
+ select ZSMALLOC
+ default n
+ help
+ Zswap is a backend for the frontswap mechanism in the VMM.
+ It receives pages from frontswap and attempts to store them
+ in a compressed memory pool, resulting in an effective
+ partial memory reclaim. In addition, pages and be retrieved
+ from this compressed store much faster than most tradition
+ swap devices resulting in reduced I/O and faster performance
+ for many workloads.
diff --git a/mm/Makefile b/mm/Makefile
index 0f6ef0a..1e0198f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o
obj-$(CONFIG_BOUNCE) += bounce.o
obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o
obj-$(CONFIG_FRONTSWAP) += frontswap.o
+obj-$(CONFIG_ZSWAP) += zswap.o
obj-$(CONFIG_HAS_DMA) += dmapool.o
obj-$(CONFIG_HUGETLBFS) += hugetlb.o
obj-$(CONFIG_NUMA) += mempolicy.o
diff --git a/mm/zswap.c b/mm/zswap.c
new file mode 100644
index 0000000..db283c4
--- /dev/null
+++ b/mm/zswap.c
@@ -0,0 +1,665 @@
+/*
+ * zswap.c - zswap driver file
+ *
+ * zswap is a backend for frontswap that takes pages that are in the
+ * process of being swapped out and attempts to compress them and store
+ * them in a RAM-based memory pool. This results in a significant I/O
+ * reduction on the real swap device and, in the case of a slow swap
+ * device, can also improve workload performance.
+ *
+ * Copyright (C) 2012 Seth Jennings <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+*/
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/cpu.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/frontswap.h>
+#include <linux/rbtree.h>
+#include <linux/swap.h>
+#include <linux/crypto.h>
+#include <linux/mempool.h>
+#include <linux/zsmalloc.h>
+
+/*********************************
+* statistics
+**********************************/
+/* Number of memory pages used by the compressed pool */
+static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
+/* The number of compressed pages currently stored in zswap */
+static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
+
+/*
+ * The statistics below are not protected from concurrent access for
+ * performance reasons so they may not be a 100% accurate. However,
+ * they do provide useful information on roughly how many times a
+ * certain event is occurring.
+*/
+static u64 zswap_pool_limit_hit;
+static u64 zswap_reject_compress_poor;
+static u64 zswap_reject_zsmalloc_fail;
+static u64 zswap_reject_kmemcache_fail;
+static u64 zswap_duplicate_entry;
+
+/*********************************
+* tunables
+**********************************/
+/* Enable/disable zswap (disabled by default, fixed at boot for now) */
+static bool zswap_enabled;
+module_param_named(enabled, zswap_enabled, bool, 0);
+
+/* Compressor to be used by zswap (fixed at boot for now) */
+#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
+static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
+module_param_named(compressor, zswap_compressor, charp, 0);
+
+/* The maximum percentage of memory that the compressed pool can occupy */
+static unsigned int zswap_max_pool_percent = 20;
+module_param_named(max_pool_percent,
+ zswap_max_pool_percent, uint, 0644);
+
+/*
+ * Maximum compression ratio, as as percentage, for an acceptable
+ * compressed page. Any pages that do not compress by at least
+ * this ratio will be rejected.
+*/
+static unsigned int zswap_max_compression_ratio = 80;
+module_param_named(max_compression_ratio,
+ zswap_max_compression_ratio, uint, 0644);
+
+/*********************************
+* compression functions
+**********************************/
+/* per-cpu compression transforms */
+static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms;
+
+enum comp_op {
+ ZSWAP_COMPOP_COMPRESS,
+ ZSWAP_COMPOP_DECOMPRESS
+};
+
+static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
+{
+ struct crypto_comp *tfm;
+ int ret;
+
+ tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu());
+ switch (op) {
+ case ZSWAP_COMPOP_COMPRESS:
+ ret = crypto_comp_compress(tfm, src, slen, dst, dlen);
+ break;
+ case ZSWAP_COMPOP_DECOMPRESS:
+ ret = crypto_comp_decompress(tfm, src, slen, dst, dlen);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ put_cpu();
+ return ret;
+}
+
+static int __init zswap_comp_init(void)
+{
+ if (!crypto_has_comp(zswap_compressor, 0, 0)) {
+ pr_info("%s compressor not available\n", zswap_compressor);
+ /* fall back to default compressor */
+ zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
+ if (!crypto_has_comp(zswap_compressor, 0, 0))
+ /* can't even load the default compressor */
+ return -ENODEV;
+ }
+ pr_info("using %s compressor\n", zswap_compressor);
+
+ /* alloc percpu transforms */
+ zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *);
+ if (!zswap_comp_pcpu_tfms)
+ return -ENOMEM;
+ return 0;
+}
+
+static void zswap_comp_exit(void)
+{
+ /* free percpu transforms */
+ if (zswap_comp_pcpu_tfms)
+ free_percpu(zswap_comp_pcpu_tfms);
+}
+
+/*********************************
+* data structures
+**********************************/
+struct zswap_entry {
+ struct rb_node rbnode;
+ unsigned type;
+ pgoff_t offset;
+ unsigned long handle;
+ unsigned int length;
+};
+
+struct zswap_tree {
+ struct rb_root rbroot;
+ spinlock_t lock;
+ struct zs_pool *pool;
+};
+
+static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
+
+/*********************************
+* zswap entry functions
+**********************************/
+#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache"
+static struct kmem_cache *zswap_entry_cache;
+
+static inline int zswap_entry_cache_create(void)
+{
+ zswap_entry_cache =
+ kmem_cache_create(ZSWAP_KMEM_CACHE_NAME,
+ sizeof(struct zswap_entry), 0, 0, NULL);
+ return (zswap_entry_cache == NULL);
+}
+
+static inline void zswap_entry_cache_destory(void)
+{
+ kmem_cache_destroy(zswap_entry_cache);
+}
+
+static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
+{
+ struct zswap_entry *entry;
+ entry = kmem_cache_alloc(zswap_entry_cache, gfp);
+ if (!entry)
+ return NULL;
+ return entry;
+}
+
+static inline void zswap_entry_cache_free(struct zswap_entry *entry)
+{
+ kmem_cache_free(zswap_entry_cache, entry);
+}
+
+/*********************************
+* rbtree functions
+**********************************/
+static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
+{
+ struct rb_node *node = root->rb_node;
+ struct zswap_entry *entry;
+
+ while (node) {
+ entry = rb_entry(node, struct zswap_entry, rbnode);
+ if (entry->offset > offset)
+ node = node->rb_left;
+ else if (entry->offset < offset)
+ node = node->rb_right;
+ else
+ return entry;
+ }
+ return NULL;
+}
+
+/*
+ * In the case that a entry with the same offset is found, it a pointer to
+ * the existing entry is stored in dupentry and the function returns -EEXIST
+*/
+static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
+ struct zswap_entry **dupentry)
+{
+ struct rb_node **link = &root->rb_node, *parent = NULL;
+ struct zswap_entry *myentry;
+
+ while (*link) {
+ parent = *link;
+ myentry = rb_entry(parent, struct zswap_entry, rbnode);
+ if (myentry->offset > entry->offset)
+ link = &(*link)->rb_left;
+ else if (myentry->offset < entry->offset)
+ link = &(*link)->rb_right;
+ else {
+ *dupentry = myentry;
+ return -EEXIST;
+ }
+ }
+ rb_link_node(&entry->rbnode, parent, link);
+ rb_insert_color(&entry->rbnode, root);
+ return 0;
+}
+
+/*********************************
+* per-cpu code
+**********************************/
+static DEFINE_PER_CPU(u8 *, zswap_dstmem);
+
+static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
+{
+ struct crypto_comp *tfm;
+ u8 *dst;
+
+ switch (action) {
+ case CPU_UP_PREPARE:
+ tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
+ if (IS_ERR(tfm)) {
+ pr_err("can't allocate compressor transform\n");
+ return NOTIFY_BAD;
+ }
+ *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
+ dst = kmalloc(PAGE_SIZE * 2, GFP_KERNEL);
+ if (!dst) {
+ pr_err("can't allocate compressor buffer\n");
+ crypto_free_comp(tfm);
+ *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
+ return NOTIFY_BAD;
+ }
+ per_cpu(zswap_dstmem, cpu) = dst;
+ break;
+ case CPU_DEAD:
+ case CPU_UP_CANCELED:
+ tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu);
+ if (tfm) {
+ crypto_free_comp(tfm);
+ *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
+ }
+ dst = per_cpu(zswap_dstmem, cpu);
+ if (dst) {
+ kfree(dst);
+ per_cpu(zswap_dstmem, cpu) = NULL;
+ }
+ break;
+ default:
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static int zswap_cpu_notifier(struct notifier_block *nb,
+ unsigned long action, void *pcpu)
+{
+ unsigned long cpu = (unsigned long)pcpu;
+ return __zswap_cpu_notifier(action, cpu);
+}
+
+static struct notifier_block zswap_cpu_notifier_block = {
+ .notifier_call = zswap_cpu_notifier
+};
+
+static int zswap_cpu_init(void)
+{
+ unsigned long cpu;
+
+ get_online_cpus();
+ for_each_online_cpu(cpu)
+ if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK)
+ goto cleanup;
+ register_cpu_notifier(&zswap_cpu_notifier_block);
+ put_online_cpus();
+ return 0;
+
+cleanup:
+ for_each_online_cpu(cpu)
+ __zswap_cpu_notifier(CPU_UP_CANCELED, cpu);
+ put_online_cpus();
+ return -ENOMEM;
+}
+
+/*********************************
+* zsmalloc callbacks
+**********************************/
+static mempool_t *zswap_page_pool;
+
+static inline unsigned int zswap_max_pool_pages(void)
+{
+ return zswap_max_pool_percent * totalram_pages / 100;
+}
+
+static inline int zswap_page_pool_create(void)
+{
+ /* TODO: dynamically size mempool */
+ zswap_page_pool = mempool_create_page_pool(256, 0);
+ if (!zswap_page_pool)
+ return -ENOMEM;
+ return 0;
+}
+
+static inline void zswap_page_pool_destroy(void)
+{
+ mempool_destroy(zswap_page_pool);
+}
+
+static struct page *zswap_alloc_page(gfp_t flags)
+{
+ struct page *page;
+
+ if (atomic_read(&zswap_pool_pages) >= zswap_max_pool_pages()) {
+ zswap_pool_limit_hit++;
+ return NULL;
+ }
+ page = mempool_alloc(zswap_page_pool, flags);
+ if (page)
+ atomic_inc(&zswap_pool_pages);
+ return page;
+}
+
+static void zswap_free_page(struct page *page)
+{
+ if (!page)
+ return;
+ mempool_free(page, zswap_page_pool);
+ atomic_dec(&zswap_pool_pages);
+}
+
+static struct zs_ops zswap_zs_ops = {
+ .alloc = zswap_alloc_page,
+ .free = zswap_free_page
+};
+
+/*********************************
+* frontswap hooks
+**********************************/
+/* attempts to compress and store an single page */
+static int zswap_frontswap_store(unsigned type, pgoff_t offset,
+ struct page *page)
+{
+ struct zswap_tree *tree = zswap_trees[type];
+ struct zswap_entry *entry, *dupentry;
+ int ret;
+ unsigned int dlen = PAGE_SIZE;
+ unsigned long handle;
+ char *buf;
+ u8 *src, *dst;
+
+ if (!tree) {
+ ret = -ENODEV;
+ goto reject;
+ }
+
+ /* allocate entry */
+ entry = zswap_entry_cache_alloc(GFP_KERNEL);
+ if (!entry) {
+ zswap_reject_kmemcache_fail++;
+ ret = -ENOMEM;
+ goto reject;
+ }
+
+ /* compress */
+ dst = get_cpu_var(zswap_dstmem);
+ src = kmap_atomic(page);
+ ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen);
+ kunmap_atomic(src);
+ if (ret) {
+ ret = -EINVAL;
+ goto putcpu;
+ }
+ if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
+ zswap_reject_compress_poor++;
+ ret = -E2BIG;
+ goto putcpu;
+ }
+
+ /* store */
+ handle = zs_malloc(tree->pool, dlen,
+ __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
+ __GFP_NOWARN);
+ if (!handle) {
+ zswap_reject_zsmalloc_fail++;
+ ret = -ENOMEM;
+ goto putcpu;
+ }
+
+ buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
+ memcpy(buf, dst, dlen);
+ zs_unmap_object(tree->pool, handle);
+ put_cpu_var(zswap_dstmem);
+
+ /* populate entry */
+ entry->type = type;
+ entry->offset = offset;
+ entry->handle = handle;
+ entry->length = dlen;
+
+ /* map */
+ spin_lock(&tree->lock);
+ do {
+ ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
+ if (ret == -EEXIST) {
+ zswap_duplicate_entry++;
+
+ /* remove from rbtree */
+ rb_erase(&dupentry->rbnode, &tree->rbroot);
+
+ /* free */
+ zs_free(tree->pool, dupentry->handle);
+ zswap_entry_cache_free(dupentry);
+ atomic_dec(&zswap_stored_pages);
+ }
+ } while (ret == -EEXIST);
+ spin_unlock(&tree->lock);
+
+ /* update stats */
+ atomic_inc(&zswap_stored_pages);
+
+ return 0;
+
+putcpu:
+ put_cpu_var(zswap_dstmem);
+ zswap_entry_cache_free(entry);
+reject:
+ return ret;
+}
+
+/*
+ * returns 0 if the page was successfully decompressed
+ * return -1 on entry not found or error
+*/
+static int zswap_frontswap_load(unsigned type, pgoff_t offset,
+ struct page *page)
+{
+ struct zswap_tree *tree = zswap_trees[type];
+ struct zswap_entry *entry;
+ u8 *src, *dst;
+ unsigned int dlen;
+
+ /* find */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, offset);
+ spin_unlock(&tree->lock);
+
+ /* decompress */
+ dlen = PAGE_SIZE;
+ src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
+ dst = kmap_atomic(page);
+ zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
+ dst, &dlen);
+ kunmap_atomic(dst);
+ zs_unmap_object(tree->pool, entry->handle);
+
+ return 0;
+}
+
+/* invalidates a single page */
+static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
+{
+ struct zswap_tree *tree = zswap_trees[type];
+ struct zswap_entry *entry;
+
+ /* find */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, offset);
+
+ /* remove from rbtree */
+ rb_erase(&entry->rbnode, &tree->rbroot);
+ spin_unlock(&tree->lock);
+
+ /* free */
+ zs_free(tree->pool, entry->handle);
+ zswap_entry_cache_free(entry);
+ atomic_dec(&zswap_stored_pages);
+}
+
+/* invalidates all pages for the given swap type */
+static void zswap_frontswap_invalidate_area(unsigned type)
+{
+ struct zswap_tree *tree = zswap_trees[type];
+ struct rb_node *node;
+ struct zswap_entry *entry;
+
+ if (!tree)
+ return;
+
+ /* walk the tree and free everything */
+ spin_lock(&tree->lock);
+ /*
+ * TODO: Even though this code should not be executed because
+ * the try_to_unuse() in swapoff should have emptied the tree,
+ * it is very wasteful to rebalance the tree after every
+ * removal when we are freeing the whole tree.
+ *
+ * If post-order traversal code is ever added to the rbtree
+ * implementation, it should be used here.
+ */
+ while ((node = rb_first(&tree->rbroot))) {
+ entry = rb_entry(node, struct zswap_entry, rbnode);
+ rb_erase(&entry->rbnode, &tree->rbroot);
+ zs_free(tree->pool, entry->handle);
+ zswap_entry_cache_free(entry);
+ }
+ tree->rbroot = RB_ROOT;
+ spin_unlock(&tree->lock);
+}
+
+/* NOTE: this is called in atomic context from swapon and must not sleep */
+static void zswap_frontswap_init(unsigned type)
+{
+ struct zswap_tree *tree;
+
+ tree = kzalloc(sizeof(struct zswap_tree), GFP_ATOMIC);
+ if (!tree)
+ goto err;
+ tree->pool = zs_create_pool(GFP_NOWAIT, &zswap_zs_ops);
+ if (!tree->pool)
+ goto freetree;
+ tree->rbroot = RB_ROOT;
+ spin_lock_init(&tree->lock);
+ zswap_trees[type] = tree;
+ return;
+
+freetree:
+ kfree(tree);
+err:
+ pr_err("alloc failed, zswap disabled for swap type %d\n", type);
+}
+
+static struct frontswap_ops zswap_frontswap_ops = {
+ .store = zswap_frontswap_store,
+ .load = zswap_frontswap_load,
+ .invalidate_page = zswap_frontswap_invalidate_page,
+ .invalidate_area = zswap_frontswap_invalidate_area,
+ .init = zswap_frontswap_init
+};
+
+/*********************************
+* debugfs functions
+**********************************/
+#ifdef CONFIG_DEBUG_FS
+#include <linux/debugfs.h>
+
+static struct dentry *zswap_debugfs_root;
+
+static int __init zswap_debugfs_init(void)
+{
+ if (!debugfs_initialized())
+ return -ENODEV;
+
+ zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
+ if (!zswap_debugfs_root)
+ return -ENOMEM;
+
+ debugfs_create_u64("pool_limit_hit", S_IRUGO,
+ zswap_debugfs_root, &zswap_pool_limit_hit);
+ debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
+ zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
+ debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
+ zswap_debugfs_root, &zswap_reject_kmemcache_fail);
+ debugfs_create_u64("reject_compress_poor", S_IRUGO,
+ zswap_debugfs_root, &zswap_reject_compress_poor);
+ debugfs_create_u64("duplicate_entry", S_IRUGO,
+ zswap_debugfs_root, &zswap_duplicate_entry);
+ debugfs_create_atomic_t("pool_pages", S_IRUGO,
+ zswap_debugfs_root, &zswap_pool_pages);
+ debugfs_create_atomic_t("stored_pages", S_IRUGO,
+ zswap_debugfs_root, &zswap_stored_pages);
+
+ return 0;
+}
+
+static void __exit zswap_debugfs_exit(void)
+{
+ debugfs_remove_recursive(zswap_debugfs_root);
+}
+#else
+static inline int __init zswap_debugfs_init(void)
+{
+ return 0;
+}
+
+static inline void __exit zswap_debugfs_exit(void) { }
+#endif
+
+/*********************************
+* module init and exit
+**********************************/
+static int __init init_zswap(void)
+{
+ if (!zswap_enabled)
+ return 0;
+
+ pr_info("loading zswap\n");
+ if (zswap_entry_cache_create()) {
+ pr_err("entry cache creation failed\n");
+ goto error;
+ }
+ if (zswap_page_pool_create()) {
+ pr_err("page pool initialization failed\n");
+ goto pagepoolfail;
+ }
+ if (zswap_comp_init()) {
+ pr_err("compressor initialization failed\n");
+ goto compfail;
+ }
+ if (zswap_cpu_init()) {
+ pr_err("per-cpu initialization failed\n");
+ goto pcpufail;
+ }
+ frontswap_register_ops(&zswap_frontswap_ops);
+ if (zswap_debugfs_init())
+ pr_warn("debugfs initialization failed\n");
+ return 0;
+pcpufail:
+ zswap_comp_exit();
+compfail:
+ zswap_page_pool_destroy();
+pagepoolfail:
+ zswap_entry_cache_destory();
+error:
+ return -ENOMEM;
+}
+/* must be late so crypto has time to come up */
+late_initcall(init_zswap);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Seth Jennings <[email protected]>");
+MODULE_DESCRIPTION("Compressed cache for swap pages");
--
1.8.2.1

2013-04-10 18:19:43

by Seth Jennings

[permalink] [raw]
Subject: [PATCHv9 5/8] mm: break up swap_writepage() for frontswap backends

swap_writepage() is currently where frontswap hooks into the swap
write path to capture pages with the frontswap_store() function.
However, if a frontswap backend wants to "resume" the writeback of
a page to the swap device, it can't call swap_writepage() as
the page will simply reenter the backend.

This patch separates swap_writepage() into a top and bottom half, the
bottom half named __swap_writepage() to allow a frontswap backend,
like zswap, to resume writeback beyond the frontswap_store() hook.

__add_to_swap_cache() is also made non-static so that the page for
which writeback is to be resumed can be added to the swap cache.

Acked-by: Minchan Kim <[email protected]>
Signed-off-by: Seth Jennings <[email protected]>
---
include/linux/swap.h | 2 ++
mm/page_io.c | 16 +++++++++++++---
mm/swap_state.c | 2 +-
3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 2818a12..76f6c3b 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -330,6 +330,7 @@ static inline void mem_cgroup_uncharge_swap(swp_entry_t ent)
/* linux/mm/page_io.c */
extern int swap_readpage(struct page *);
extern int swap_writepage(struct page *page, struct writeback_control *wbc);
+extern int __swap_writepage(struct page *page, struct writeback_control *wbc);
extern int swap_set_page_dirty(struct page *page);
extern void end_swap_bio_read(struct bio *bio, int err);

@@ -345,6 +346,7 @@ extern unsigned long total_swapcache_pages(void);
extern void show_swap_cache_info(void);
extern int add_to_swap(struct page *);
extern int add_to_swap_cache(struct page *, swp_entry_t, gfp_t);
+extern int __add_to_swap_cache(struct page *page, swp_entry_t entry);
extern void __delete_from_swap_cache(struct page *);
extern void delete_from_swap_cache(struct page *);
extern void free_page_and_swap_cache(struct page *);
diff --git a/mm/page_io.c b/mm/page_io.c
index 78eee32..1cb382d 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -179,15 +179,15 @@ bad_bmap:
goto out;
}

+int __swap_writepage(struct page *page, struct writeback_control *wbc);
+
/*
* We may have stale swap cache pages in memory: notice
* them here and get rid of the unnecessary final write.
*/
int swap_writepage(struct page *page, struct writeback_control *wbc)
{
- struct bio *bio;
- int ret = 0, rw = WRITE;
- struct swap_info_struct *sis = page_swap_info(page);
+ int ret = 0;

if (try_to_free_swap(page)) {
unlock_page(page);
@@ -199,6 +199,16 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
end_page_writeback(page);
goto out;
}
+ ret = __swap_writepage(page, wbc);
+out:
+ return ret;
+}
+
+int __swap_writepage(struct page *page, struct writeback_control *wbc)
+{
+ struct bio *bio;
+ int ret = 0, rw = WRITE;
+ struct swap_info_struct *sis = page_swap_info(page);

if (sis->flags & SWP_FILE) {
struct kiocb kiocb;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 7efcf15..fe43fd5 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -78,7 +78,7 @@ void show_swap_cache_info(void)
* __add_to_swap_cache resembles add_to_page_cache_locked on swapper_space,
* but sets SwapCache flag and private instead of mapping and index.
*/
-static int __add_to_swap_cache(struct page *page, swp_entry_t entry)
+int __add_to_swap_cache(struct page *page, swp_entry_t entry)
{
int error;
struct address_space *address_space;
--
1.8.2.1

2013-04-10 18:24:34

by Seth Jennings

[permalink] [raw]
Subject: [PATCHv9 2/8] zsmalloc: add documentation

This patch adds a documentation file for zsmalloc at
Documentation/vm/zsmalloc.txt

Signed-off-by: Seth Jennings <[email protected]>
---
Documentation/vm/zsmalloc.txt | 68 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 Documentation/vm/zsmalloc.txt

diff --git a/Documentation/vm/zsmalloc.txt b/Documentation/vm/zsmalloc.txt
new file mode 100644
index 0000000..85aa617
--- /dev/null
+++ b/Documentation/vm/zsmalloc.txt
@@ -0,0 +1,68 @@
+zsmalloc Memory Allocator
+
+Overview
+
+zmalloc a new slab-based memory allocator,
+zsmalloc, for storing compressed pages. It is designed for
+low fragmentation and high allocation success rate on
+large object, but <= PAGE_SIZE allocations.
+
+zsmalloc differs from the kernel slab allocator in two primary
+ways to achieve these design goals.
+
+zsmalloc never requires high order page allocations to back
+slabs, or "size classes" in zsmalloc terms. Instead it allows
+multiple single-order pages to be stitched together into a
+"zspage" which backs the slab. This allows for higher allocation
+success rate under memory pressure.
+
+Also, zsmalloc allows objects to span page boundaries within the
+zspage. This allows for lower fragmentation than could be had
+with the kernel slab allocator for objects between PAGE_SIZE/2
+and PAGE_SIZE. With the kernel slab allocator, if a page compresses
+to 60% of it original size, the memory savings gained through
+compression is lost in fragmentation because another object of
+the same size can't be stored in the leftover space.
+
+This ability to span pages results in zsmalloc allocations not being
+directly addressable by the user. The user is given an
+non-dereferencable handle in response to an allocation request.
+That handle must be mapped, using zs_map_object(), which returns
+a pointer to the mapped region that can be used. The mapping is
+necessary since the object data may reside in two different
+noncontigious pages.
+
+For 32-bit systems, zsmalloc has the added benefit of being
+able to back slabs with HIGHMEM pages, something not possible
+with the kernel slab allocators (SLAB or SLUB).
+
+Usage:
+
+#include <linux/zsmalloc.h>
+
+/* create a new pool */
+struct zs_pool *pool = zs_create_pool("mypool", GFP_KERNEL);
+
+/* allocate a 256 byte object */
+unsigned long handle = zs_malloc(pool, 256);
+
+/*
+ * Map the object to get a dereferenceable pointer in "read-write mode"
+ * (see zsmalloc.h for additional modes)
+ */
+void *ptr = zs_map_object(pool, handle, ZS_MM_RW);
+
+/* do something with ptr */
+
+/*
+ * Unmap the object when done dealing with it. You should try to
+ * minimize the time for which the object is mapped since preemption
+ * is disabled during the mapped period.
+ */
+zs_unmap_object(pool, handle);
+
+/* free the object */
+zs_free(pool, handle);
+
+/* destroy the pool */
+zs_destroy_pool(pool);
--
1.8.2.1

2013-04-10 18:24:48

by Seth Jennings

[permalink] [raw]
Subject: [PATCHv9 3/8] debugfs: add get/set for atomic types

debugfs currently lack the ability to create attributes
that set/get atomic_t values.

This patch adds support for this through a new
debugfs_create_atomic_t() function.

Acked-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Seth Jennings <[email protected]>
---
fs/debugfs/file.c | 42 ++++++++++++++++++++++++++++++++++++++++++
include/linux/debugfs.h | 2 ++
2 files changed, 44 insertions(+)

diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
index c5ca6ae..fa26d5b 100644
--- a/fs/debugfs/file.c
+++ b/fs/debugfs/file.c
@@ -21,6 +21,7 @@
#include <linux/debugfs.h>
#include <linux/io.h>
#include <linux/slab.h>
+#include <linux/atomic.h>

static ssize_t default_read_file(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
@@ -403,6 +404,47 @@ struct dentry *debugfs_create_size_t(const char *name, umode_t mode,
}
EXPORT_SYMBOL_GPL(debugfs_create_size_t);

+static int debugfs_atomic_t_set(void *data, u64 val)
+{
+ atomic_set((atomic_t *)data, val);
+ return 0;
+}
+static int debugfs_atomic_t_get(void *data, u64 *val)
+{
+ *val = atomic_read((atomic_t *)data);
+ return 0;
+}
+DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t, debugfs_atomic_t_get,
+ debugfs_atomic_t_set, "%llu\n");
+DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t_ro, debugfs_atomic_t_get, NULL, "%llu\n");
+DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t_wo, NULL, debugfs_atomic_t_set, "%llu\n");
+
+/**
+ * debugfs_create_atomic_t - create a debugfs file that is used to read and
+ * write an atomic_t value
+ * @name: a pointer to a string containing the name of the file to create.
+ * @mode: the permission that the file should have
+ * @parent: a pointer to the parent dentry for this file. This should be a
+ * directory dentry if set. If this parameter is %NULL, then the
+ * file will be created in the root of the debugfs filesystem.
+ * @value: a pointer to the variable that the file should read to and write
+ * from.
+ */
+struct dentry *debugfs_create_atomic_t(const char *name, umode_t mode,
+ struct dentry *parent, atomic_t *value)
+{
+ /* if there are no write bits set, make read only */
+ if (!(mode & S_IWUGO))
+ return debugfs_create_file(name, mode, parent, value,
+ &fops_atomic_t_ro);
+ /* if there are no read bits set, make write only */
+ if (!(mode & S_IRUGO))
+ return debugfs_create_file(name, mode, parent, value,
+ &fops_atomic_t_wo);
+
+ return debugfs_create_file(name, mode, parent, value, &fops_atomic_t);
+}
+EXPORT_SYMBOL_GPL(debugfs_create_atomic_t);

static ssize_t read_file_bool(struct file *file, char __user *user_buf,
size_t count, loff_t *ppos)
diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
index 63f2465..d68b4ea 100644
--- a/include/linux/debugfs.h
+++ b/include/linux/debugfs.h
@@ -79,6 +79,8 @@ struct dentry *debugfs_create_x64(const char *name, umode_t mode,
struct dentry *parent, u64 *value);
struct dentry *debugfs_create_size_t(const char *name, umode_t mode,
struct dentry *parent, size_t *value);
+struct dentry *debugfs_create_atomic_t(const char *name, umode_t mode,
+ struct dentry *parent, atomic_t *value);
struct dentry *debugfs_create_bool(const char *name, umode_t mode,
struct dentry *parent, u32 *value);

--
1.8.2.1

2013-04-11 01:43:38

by Rob Landley

[permalink] [raw]
Subject: Re: [PATCHv9 8/8] zswap: add documentation

On 04/10/2013 01:19:00 PM, Seth Jennings wrote:
> This patch adds the documentation file for the zswap functionality
>
> Signed-off-by: Seth Jennings <[email protected]>
> ---
> Documentation/vm/zsmalloc.txt | 2 +-
> Documentation/vm/zswap.txt | 82
> +++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 83 insertions(+), 1 deletion(-)
> create mode 100644 Documentation/vm/zswap.txt

Acked-by: Rob Landley <[email protected]>

Minor kibbitzing anyway:

> diff --git a/Documentation/vm/zsmalloc.txt
> b/Documentation/vm/zsmalloc.txt
> index 85aa617..4133ade 100644
> --- a/Documentation/vm/zsmalloc.txt
> +++ b/Documentation/vm/zsmalloc.txt
> @@ -65,4 +65,4 @@ zs_unmap_object(pool, handle);
> zs_free(pool, handle);
>
> /* destroy the pool */
> -zs_destroy_pool(pool);
> +zs_destroy_pool(pool);
> diff --git a/Documentation/vm/zswap.txt b/Documentation/vm/zswap.txt
> new file mode 100644
> index 0000000..f29b82f
> --- /dev/null
> +++ b/Documentation/vm/zswap.txt
> @@ -0,0 +1,82 @@
> +Overview:
> +
> +Zswap is a lightweight compressed cache for swap pages. It takes
> +pages that are in the process of being swapped out and attempts to
> +compress them into a dynamically allocated RAM-based memory pool.
> +If this process is successful, the writeback to the swap device is
> +deferred and, in many cases, avoided completely.  This results in
> +a significant I/O reduction and performance gains for systems that
> +are swapping.
> +
> +Zswap provides compressed swap caching that basically trades CPU
> cycles
> +for reduced swap I/O.  This trade-off can result in a significant
> +performance improvement as reads to/writes from to the compressed

writes from to?

> +cache almost always faster that reading from a swap device

are almost

> +which incurs the latency of an asynchronous block I/O read.
> +
> +Some potential benefits:
> +* Desktop/laptop users with limited RAM capacities can mitigate the
> +    performance impact of swapping.
> +* Overcommitted guests that share a common I/O resource can
> +    dramatically reduce their swap I/O pressure, avoiding heavy
> +    handed I/O throttling by the hypervisor.  This allows more work
> +    to get done with less impact to the guest workload and guests
> +    sharing the I/O subsystem
> +* Users with SSDs as swap devices can extend the life of the device
> by
> +    drastically reducing life-shortening writes.

Does it work even if you have no actual swap mounted? And if you swap
to NBD in a cluster it can keep network traffic down.

> +Zswap evicts pages from compressed cache on an LRU basis to the
> backing
> +swap device when the compress pool reaches it size limit or the pool
> is
> +unable to obtain additional pages from the buddy allocator.  This
> +requirement had been identified in prior community discussions.

I do not understand the "this requirement" sentence: aren't you just
describing the design here? Memory evicts to the compressed cache,
which evicts to persistent storage? What do historical community
discussions have to do with it? "We designed this feature based on user
feedback" is pretty much like saying "and this was developed in an open
source manner"...

> +To enabled zswap, the "enabled" attribute must be set to 1 at boot
> time.
> +e.g. zswap.enabled=1

So if you configure it in, nothing happens. You have to press an extra
button on the command line to have anything actually happen.

Why? (And why can't swapon do this? I dunno, swapon /dev/null or
something, which the swapon guys can make a nice flag for later.)

> +Design:
> +
> +Zswap receives pages for compression through the Frontswap API and
> +is able to evict pages from its own compressed pool on an LRU basis
> +and write them back to the backing swap device in the case that the
> +compressed pool is full or unable to secure additional pages from
> +the buddy allocator.
> +
> +Zswap makes use of zsmalloc for the managing the compressed memory
> +pool. This is because zsmalloc is specifically designed to minimize

s/. This is because zsmalloc/, which/

> +fragmentation on large (> PAGE_SIZE/2) allocation sizes. Each
> +allocation in zsmalloc is not directly accessible by address.
> +Rather, a handle is return by the allocation routine and that handle

returned

> +must be mapped before being accessed. The compressed memory pool
> grows
> +on demand and shrinks as compressed pages are freed. The pool is
> +not preallocated.
> +
> +When a swap page is passed from frontswap to zswap, zswap maintains
> +a mapping of the swap entry, a combination of the swap type and swap
> +offset, to the zsmalloc handle that references that compressed swap
> +page. This mapping is achieved with a red-black tree per swap type.
> +The swap offset is the search key for the tree nodes.
> +
> +During a page fault on a PTE that is a swap entry, frontswap calls
> +the zswap load function to decompress the page into the page
> +allocated by the page fault handler.
> +
> +Once there are no PTEs referencing a swap page stored in zswap
> +(i.e. the count in the swap_map goes to 0) the swap code calls
> +the zswap invalidate function, via frontswap, to free the compressed
> +entry.
> +
> +Zswap seeks to be simple in its policies.

Does that last sentence actually provide any information, or can it go?

> Sysfs attributes allow for two user controlled policies:
> +* max_compression_ratio - Maximum compression ratio, as as
> percentage,
> + for an acceptable compressed page. Any page that does not
> compress
> + by at least this ratio will be rejected.
> +* max_pool_percent - The maximum percentage of memory that the
> compressed
> + pool can occupy.

Personally I'd put the user-visible control knobs earlier in the file,
before implementation details.

> +Zswap allows the compressor to be selected at kernel boot time by
> +setting the “compressor” attribute. The default compressor is lzo.
> +e.g. zswap.compressor=deflate

Can we hardwire in one at compile time and not have to do this?

> +A debugfs interface is provided for various statistic about pool
> size,

statistics

> +number of pages stored, and various counters for the reasons pages
> +are rejected.
> --
> 1.8.2.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe
> linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>

2013-04-11 01:55:25

by Rob Landley

[permalink] [raw]
Subject: Re: [PATCHv9 2/8] zsmalloc: add documentation

On 04/10/2013 01:18:54 PM, Seth Jennings wrote:
> This patch adds a documentation file for zsmalloc at
> Documentation/vm/zsmalloc.txt

Docs acked-by: Rob Landley <[email protected]>

Literary criticism below:

> Signed-off-by: Seth Jennings <[email protected]>
> ---
> Documentation/vm/zsmalloc.txt | 68
> +++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 68 insertions(+)
> create mode 100644 Documentation/vm/zsmalloc.txt
>
> diff --git a/Documentation/vm/zsmalloc.txt
> b/Documentation/vm/zsmalloc.txt
> new file mode 100644
> index 0000000..85aa617
> --- /dev/null
> +++ b/Documentation/vm/zsmalloc.txt
> @@ -0,0 +1,68 @@
> +zsmalloc Memory Allocator
> +
> +Overview
> +
> +zmalloc a new slab-based memory allocator,
> +zsmalloc, for storing compressed pages.

zmalloc a new slab-based memory allocator, zsmalloc? (How does one
zmalloc zsmalloc?)

Out of curiosity, what does zsmalloc stand for, anyway?

> It is designed for
> +low fragmentation and high allocation success rate on
> +large object, but <= PAGE_SIZE allocations.

1) objects

2) maybe "large objects for <= PAGE_SIZE"...

> +zsmalloc differs from the kernel slab allocator in two primary
> +ways to achieve these design goals.
> +
> +zsmalloc never requires high order page allocations to back
> +slabs, or "size classes" in zsmalloc terms. Instead it allows
> +multiple single-order pages to be stitched together into a
> +"zspage" which backs the slab. This allows for higher allocation
> +success rate under memory pressure.
> +
> +Also, zsmalloc allows objects to span page boundaries within the
> +zspage. This allows for lower fragmentation than could be had
> +with the kernel slab allocator for objects between PAGE_SIZE/2
> +and PAGE_SIZE. With the kernel slab allocator, if a page compresses
> +to 60% of it original size, the memory savings gained through
> +compression is lost in fragmentation because another object of

I lean towards "are lost", but it's debatable. (Savings are plural, but
savings could also be treated as a mass noun like water/air/bison that
doesn't get pluralized because you can't count instances of a liquid.
No idea which is more common.)

> +the same size can't be stored in the leftover space.
> +
> +This ability to span pages results in zsmalloc allocations not being
> +directly addressable by the user. The user is given an
> +non-dereferencable handle in response to an allocation request.
> +That handle must be mapped, using zs_map_object(), which returns
> +a pointer to the mapped region that can be used. The mapping is
> +necessary since the object data may reside in two different
> +noncontigious pages.

Presumably this allows packing of unmapped entities if you detect
fragmentation and are up for a latency spike?

Rob-

2013-04-13 06:26:57

by Suleiman Souhlal

[permalink] [raw]
Subject: Re: [PATCHv9 4/8] zswap: add to mm/

Hello,

On Apr 10, 2013, at 11:18 , Seth Jennings wrote:

> +/* invalidates all pages for the given swap type */
> +static void zswap_frontswap_invalidate_area(unsigned type)
> +{
> + struct zswap_tree *tree = zswap_trees[type];
> + struct rb_node *node;
> + struct zswap_entry *entry;
> +
> + if (!tree)
> + return;
> +
> + /* walk the tree and free everything */
> + spin_lock(&tree->lock);
> + /*
> + * TODO: Even though this code should not be executed because
> + * the try_to_unuse() in swapoff should have emptied the tree,
> + * it is very wasteful to rebalance the tree after every
> + * removal when we are freeing the whole tree.
> + *
> + * If post-order traversal code is ever added to the rbtree
> + * implementation, it should be used here.
> + */
> + while ((node = rb_first(&tree->rbroot))) {
> + entry = rb_entry(node, struct zswap_entry, rbnode);
> + rb_erase(&entry->rbnode, &tree->rbroot);
> + zs_free(tree->pool, entry->handle);
> + zswap_entry_cache_free(entry);
> + }
> + tree->rbroot = RB_ROOT;
> + spin_unlock(&tree->lock);
> +}

Should both the pool and the tree also be freed, here?

-- Suleiman

2013-04-14 00:55:00

by Mel Gorman

[permalink] [raw]
Subject: Re: [PATCHv9 1/8] zsmalloc: add to mm/

I no longer remember any of the previous z* discussions, including my
own review and I was not online as I wrote this. I may repeat myself,
contradict myself or rehash topics that were visited already and have
been concluded. If I do any of that then sorry.

On Wed, Apr 10, 2013 at 01:18:53PM -0500, Seth Jennings wrote:
> <SNIP>
>
> Also, zsmalloc allows objects to span page boundaries within the
> zspage. This allows for lower fragmentation than could be had
> with the kernel slab allocator for objects between PAGE_SIZE/2
> and PAGE_SIZE.

Be aware that this reduces *internal* fragmentation but not necessarily
external fragmentation. If a page portion cannot be freed for some reason
then the entire page cannot be freed. If it is possible for a page fragment
to be pinned then it is potentially a serious problem because the zswap
portion of memory does not necessarily shrink forever. This means that a
large process exiting that had been pushed to swap may not free any
physical memory due to fragmentation within zsmalloc which might be a
big surprise to the OOM killer.

Even assuming though that a page can be forcibly evicted then moving data
from zswap to disk has two strange effects.

1. Reclaiming a single page requires an unpredictable amount of
page frames to be uncompressed and written to swap. Swapout times may
vary considerably as a result.

2. It make cause aging inversions. If an old page fragment and new page
fragment are co-located then a new page can be written to swap before
there was an opportunity to refault it.

Both yield unpredictable performance characteristics for zswap.
zbud conceptually (I can't remember any of the code details) suffers from
internal fragmentation wastage but it would have more predictable performance
characterisics. The worst of the fragmentaiton problems may be mitigated
if a zero-filled page was special cased (if it hasn't already). If the
compressed page cannot fit into PAGE_SIZE/2 then too bad, dump it to swap.
It still would suffer from an age inversion but at worst it only affects
one other swap page so at least it's bound to a known value.

I think I said it before but I worry that testing has seen the ideal
behaviour for zsmalloc because it is based on kernel compiles which has
data that compresses easily and processes that are relatively short lived.

I recognise that a lot of work has gone into zsmalloc and that it exists
for a reason. I'm not going to make it a blocker for merging because frankly
I'm not familiar enough with zbud to know it actually can be used by zswap
and my performance characterisic objections have not been proven. However,
my gut feeling says that the allocators should have had compatible APIs
or an operations struct with a default to zbud for predictable performance
characterisics (assuming zbud is not completely broken of course).

Furthermore if any of this is accurate then the limitations of the
allocator should be described in the changelog (copy and paste this if
you wish). When/if this gets deployed and a vendor is handed a bug about
unpredictable performance characteristics of zswap then there is a remote
chance they learn why.

> With the kernel slab allocator, if a page compresses
> to 60% of it original size, the memory savings gained through
> compression is lost in fragmentation because another object of
> the same size can't be stored in the leftover space.
>
> This ability to span pages results in zsmalloc allocations not being
> directly addressable by the user. The user is given an
> non-dereferencable handle in response to an allocation request.
> That handle must be mapped, using zs_map_object(), which returns
> a pointer to the mapped region that can be used. The mapping is
> necessary since the object data may reside in two different
> noncontigious pages.
>
> zsmalloc fulfills the allocation needs for zram and zswap.
>
> Acked-by: Nitin Gupta <[email protected]>
> Acked-by: Minchan Kim <[email protected]>
> Signed-off-by: Seth Jennings <[email protected]>
> ---
> include/linux/zsmalloc.h | 56 +++
> mm/Kconfig | 24 +
> mm/Makefile | 1 +
> mm/zsmalloc.c | 1117 ++++++++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 1198 insertions(+)
> create mode 100644 include/linux/zsmalloc.h
> create mode 100644 mm/zsmalloc.c
>
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> new file mode 100644
> index 0000000..398dae3
> --- /dev/null
> +++ b/include/linux/zsmalloc.h
> @@ -0,0 +1,56 @@
> +/*
> + * zsmalloc memory allocator
> + *
> + * Copyright (C) 2011 Nitin Gupta
> + *

git blame indicates there are more people than Nitin involved although
the bulk of the code does appear to be his.

> + * This code is released using a dual license strategy: BSD/GPL
> + * You can choose the license that better fits your requirements.
> + *
> + * Released under the terms of 3-clause BSD License
> + * Released under the terms of GNU General Public License Version 2.0
> + */
> +
> +#ifndef _ZS_MALLOC_H_
> +#define _ZS_MALLOC_H_
> +
> +#include <linux/types.h>
> +#include <linux/mm_types.h>
> +
> +/*
> + * zsmalloc mapping modes
> + *
> + * NOTE: These only make a difference when a mapped object spans pages.
> + * They also have no effect when PGTABLE_MAPPING is selected.
> +*/
> +enum zs_mapmode {
> + ZS_MM_RW, /* normal read-write mapping */
> + ZS_MM_RO, /* read-only (no copy-out at unmap time) */
> + ZS_MM_WO /* write-only (no copy-in at map time) */
> + /*
> + * NOTE: ZS_MM_WO should only be used for initializing new
> + * (uninitialized) allocations. Partial writes to already
> + * initialized allocations should use ZS_MM_RW to preserve the
> + * existing data.
> + */
> +};
> +
> +struct zs_ops {
> + struct page * (*alloc)(gfp_t);
> + void (*free)(struct page *);
> +};
> +


Hmm, zs_ops deserves a comment! It's quite curious because the zsmalloc
implies it is an allocator but the user of zsmalloc is expected to allocate
and free the physical memory. That looks like a layering inversion.

I suspect the motivation is because only the user of zsmalloc can sensibly
decide what the pool size should be, particularly if it's dynamically
sized. If this is the case then a more appropriate callback interface
may be to inform it when the physical page pool shrinks (instead of free)
and a request to increase the size of the pool by one page (instead of alloc.

> +struct zs_pool;
> +
> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops);
> +void zs_destroy_pool(struct zs_pool *pool);
> +
> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags);
> +void zs_free(struct zs_pool *pool, unsigned long obj);
> +
> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> + enum zs_mapmode mm);
> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
> +
> +u64 zs_get_total_size_bytes(struct zs_pool *pool);
> +
> +#endif
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 3bea74f..aa054fc 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -471,3 +471,27 @@ config FRONTSWAP
> and swap data is stored as normal on the matching swap device.
>
> If unsure, say Y to enable frontswap.
> +
> +config ZSMALLOC
> + tristate "Memory allocator for compressed pages"
> + default n
> + help
> + zsmalloc is a slab-based memory allocator designed to store
> + compressed RAM pages. zsmalloc uses virtual memory mapping
> + in order to reduce fragmentation. However, this results in a
> + non-standard allocator interface where a handle, not a pointer, is
> + returned by an alloc(). This handle must be mapped in order to
> + access the allocated space.
> +
> +config PGTABLE_MAPPING
> + bool "Use page table mapping to access object in zsmalloc"
> + depends on ZSMALLOC
> + help
> + By default, zsmalloc uses a copy-based object mapping method to
> + access allocations that span two pages. However, if a particular
> + architecture (ex, ARM) performs VM mapping faster than copying,
> + then you should select this. This causes zsmalloc to use page table
> + mapping rather than copying for object mapping.
> +
> + You can check speed with zsmalloc benchmark[1].
> + [1] https://github.com/spartacus06/zsmalloc

Should PGTABLE_MAPPING be selected by the architecture instead of the
user configuring the kernel?

> diff --git a/mm/Makefile b/mm/Makefile
> index 3a46287..0f6ef0a 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
> obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
> obj-$(CONFIG_CLEANCACHE) += cleancache.o
> obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
> +obj-$(CONFIG_ZSMALLOC) += zsmalloc.o
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> new file mode 100644
> index 0000000..adaeee5
> --- /dev/null
> +++ b/mm/zsmalloc.c
> @@ -0,0 +1,1117 @@
> +/*
> + * zsmalloc memory allocator
> + *
> + * Copyright (C) 2011 Nitin Gupta
> + *
> + * This code is released using a dual license strategy: BSD/GPL
> + * You can choose the license that better fits your requirements.
> + *
> + * Released under the terms of 3-clause BSD License
> + * Released under the terms of GNU General Public License Version 2.0
> + */
> +
> +
> +/*
> + * This allocator is designed for use with zcache and zram. Thus, the
> + * allocator is supposed to work well under low memory conditions. In
> + * particular, it never attempts higher order page allocation which is
> + * very likely to fail under memory pressure. On the other hand, if we
> + * just use single (0-order) pages, it would suffer from very high
> + * fragmentation -- any object of size PAGE_SIZE/2 or larger would occupy
> + * an entire page. This was one of the major issues with its predecessor
> + * (xvmalloc).
> + *
> + * To overcome these issues, zsmalloc allocates a bunch of 0-order pages
> + * and links them together using various 'struct page' fields. These linked
> + * pages act as a single higher-order page i.e. an object can span 0-order
> + * page boundaries. The code refers to these linked pages as a single entity
> + * called zspage.
> + *
> + * For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
> + * since this satisfies the requirements of all its current users (in the
> + * worst case, page is incompressible and is thus stored "as-is" i.e. in
> + * uncompressed form). For allocation requests larger than this size, failure
> + * is returned (see zs_malloc).
> + *
> + * Additionally, zs_malloc() does not return a dereferenceable pointer.
> + * Instead, it returns an opaque handle (unsigned long) which encodes actual

There are places where it's assumed that an unsigned long is an address
that can be used. It's a nit-pick but it might be worth explicitly declaring
an opaque type that just happens to be unsigned long.

> + * location of the allocated object. The reason for this indirection is that
> + * zsmalloc does not keep zspages permanently mapped since that would cause
> + * issues on 32-bit systems where the VA region for kernel space mappings
> + * is very small. So, before using the allocating memory, the object has to
> + * be mapped using zs_map_object() to get a usable pointer and subsequently
> + * unmapped using zs_unmap_object().
> + *
> + * Following is how we use various fields and flags of underlying
> + * struct page(s) to form a zspage.
> + *
> + * Usage of struct page fields:
> + * page->first_page: points to the first component (0-order) page
> + * page->index (union with page->freelist): offset of the first object
> + * starting in this page. For the first page, this is
> + * always 0, so we use this field (aka freelist) to point
> + * to the first free object in zspage.
> + * page->lru: links together all component pages (except the first page)
> + * of a zspage
> + *
> + * For _first_ page only:
> + *
> + * page->private (union with page->first_page): refers to the
> + * component page after the first page
> + * page->freelist: points to the first free object in zspage.
> + * Free objects are linked together using in-place
> + * metadata.
> + * page->lru: links together first pages of various zspages.
> + * Basically forming list of zspages in a fullness group.
> + * page->mapping: class index and fullness group of the zspage
> + *

Heh, that's some packing.

> + * Usage of struct page flags:
> + * PG_private: identifies the first component page
> + * PG_private2: identifies the last component page
> + *
> + */
> +
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/bitops.h>
> +#include <linux/errno.h>
> +#include <linux/highmem.h>
> +#include <linux/init.h>
> +#include <linux/string.h>
> +#include <linux/slab.h>
> +#include <asm/tlbflush.h>
> +#include <asm/pgtable.h>
> +#include <linux/cpumask.h>
> +#include <linux/cpu.h>
> +#include <linux/vmalloc.h>
> +#include <linux/hardirq.h>
> +#include <linux/spinlock.h>
> +#include <linux/types.h>
> +
> +#include <linux/zsmalloc.h>
> +
> +/*
> + * This must be power of 2 and greater than of equal to sizeof(link_free).
> + * These two conditions ensure that any 'struct link_free' itself doesn't
> + * span more than 1 page which avoids complex case of mapping 2 pages simply
> + * to restore link_free pointer values.
> + */
> +#define ZS_ALIGN 8
> +
> +/*
> + * A single 'zspage' is composed of up to 2^N discontiguous 0-order (single)
> + * pages. ZS_MAX_ZSPAGE_ORDER defines upper limit on N.
> + */
> +#define ZS_MAX_ZSPAGE_ORDER 2
> +#define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
> +
> +/*
> + * Object location (<PFN>, <obj_idx>) is encoded as
> + * as single (unsigned long) handle value.
> + *
> + * Note that object index <obj_idx> is relative to system
> + * page <PFN> it is stored in, so for each sub-page belonging
> + * to a zspage, obj_idx starts with 0.
> + *
> + * This is made more complicated by various memory models and PAE.
> + */
> +
> +#ifndef MAX_PHYSMEM_BITS
> +#ifdef CONFIG_HIGHMEM64G
> +#define MAX_PHYSMEM_BITS 36
> +#else /* !CONFIG_HIGHMEM64G */
> +/*
> + * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
> + * be PAGE_SHIFT
> + */
> +#define MAX_PHYSMEM_BITS BITS_PER_LONG
> +#endif
> +#endif
> +#define _PFN_BITS (MAX_PHYSMEM_BITS - PAGE_SHIFT)
> +#define OBJ_INDEX_BITS (BITS_PER_LONG - _PFN_BITS)
> +#define OBJ_INDEX_MASK ((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
> +
> +#define MAX(a, b) ((a) >= (b) ? (a) : (b))
> +/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
> +#define ZS_MIN_ALLOC_SIZE \
> + MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
> +#define ZS_MAX_ALLOC_SIZE PAGE_SIZE
> +
> +/*
> + * On systems with 4K page size, this gives 254 size classes! There is a
> + * trader-off here:
> + * - Large number of size classes is potentially wasteful as free page are
> + * spread across these classes
> + * - Small number of size classes causes large internal fragmentation
> + * - Probably its better to use specific size classes (empirically
> + * determined). NOTE: all those class sizes must be set as multiple of
> + * ZS_ALIGN to make sure link_free itself never has to span 2 pages.
> + *
> + * ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
> + * (reason above)
> + */
> +#define ZS_SIZE_CLASS_DELTA (PAGE_SIZE >> 8)
> +#define ZS_SIZE_CLASSES ((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
> + ZS_SIZE_CLASS_DELTA + 1)
> +
> +/*
> + * We do not maintain any list for completely empty or full pages
> + */
> +enum fullness_group {
> + ZS_ALMOST_FULL,
> + ZS_ALMOST_EMPTY,
> + _ZS_NR_FULLNESS_GROUPS,
> +
> + ZS_EMPTY,
> + ZS_FULL
> +};

Ok, I see that you then use the fullness class to try and pack new
allocations into "almost full" zspages. This could mean that a zspage
spans an unpredictable number of physical pages but no idea if that's a
problem or not.

> +
> +/*
> + * We assign a page to ZS_ALMOST_EMPTY fullness group when:
> + * n <= N / f, where
> + * n = number of allocated objects
> + * N = total number of objects zspage can store
> + * f = 1/fullness_threshold_frac
> + *
> + * Similarly, we assign zspage to:
> + * ZS_ALMOST_FULL when n > N / f
> + * ZS_EMPTY when n == 0
> + * ZS_FULL when n == N
> + *
> + * (see: fix_fullness_group())
> + */
> +static const int fullness_threshold_frac = 4;
> +
> +struct size_class {
> + /*
> + * Size of objects stored in this class. Must be multiple
> + * of ZS_ALIGN.
> + */
> + int size;
> + unsigned int index;
> +

You can drop index and use a lookup that calculates it as

index = size_class - zs_pool->size_class;

> + /* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
> + int pages_per_zspage;
> +
> + spinlock_t lock;
> +
> + /* stats */
> + u64 pages_allocated;
> +
> + struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];

The fact that you don't track full pages is curious. It may imply that
it's not possible to forcibly reclaim a full zspage or maybe it's just
not implemented.

I initially worrised that ZS_EMPTY pages leaked but it looks like such
pages are always freed

> +};
> +
> +/*
> + * Placed within free objects to form a singly linked list.
> + * For every zspage, first_page->freelist gives head of this list.
> + *
> + * This must be power of 2 and less than or equal to ZS_ALIGN
> + */
> +struct link_free {
> + /* Handle of next free chunk (encodes <PFN, obj_idx>) */
> + void *next;
> +};
> +
> +struct zs_pool {
> + struct size_class size_class[ZS_SIZE_CLASSES];
> +
> + struct zs_ops *ops;
> +};
> +
> +/*
> + * A zspage's class index and fullness group
> + * are encoded in its (first)page->mapping
> + */
> +#define CLASS_IDX_BITS 28
> +#define FULLNESS_BITS 4
> +#define CLASS_IDX_MASK ((1 << CLASS_IDX_BITS) - 1)
> +#define FULLNESS_MASK ((1 << FULLNESS_BITS) - 1)
> +
> +struct mapping_area {
> +#ifdef CONFIG_PGTABLE_MAPPING
> + struct vm_struct *vm; /* vm area for mapping object that span pages */
> +#else
> + char *vm_buf; /* copy buffer for objects that span pages */
> +#endif
> + char *vm_addr; /* address of kmap_atomic()'ed pages */
> + enum zs_mapmode vm_mm; /* mapping mode */
> +};
> +
> +/* default page alloc/free ops */
> +struct page *zs_alloc_page(gfp_t flags)
> +{
> + return alloc_page(flags);
> +}
> +
> +void zs_free_page(struct page *page)
> +{
> + __free_page(page);
> +}
> +
> +struct zs_ops zs_default_ops = {
> + .alloc = zs_alloc_page,
> + .free = zs_free_page
> +};
> +
> +/* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
> +static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
> +
> +static int is_first_page(struct page *page)
> +{
> + return PagePrivate(page);
> +}
> +
> +static int is_last_page(struct page *page)
> +{
> + return PagePrivate2(page);
> +}
> +
> +static void get_zspage_mapping(struct page *page, unsigned int *class_idx,
> + enum fullness_group *fullness)
> +{
> + unsigned long m;
> + BUG_ON(!is_first_page(page));
> +
> + m = (unsigned long)page->mapping;
> + *fullness = m & FULLNESS_MASK;
> + *class_idx = (m >> FULLNESS_BITS) & CLASS_IDX_MASK;
> +}
> +
> +static void set_zspage_mapping(struct page *page, unsigned int class_idx,
> + enum fullness_group fullness)
> +{
> + unsigned long m;
> + BUG_ON(!is_first_page(page));
> +
> + m = ((class_idx & CLASS_IDX_MASK) << FULLNESS_BITS) |
> + (fullness & FULLNESS_MASK);
> + page->mapping = (struct address_space *)m;
> +}
> +
> +/*
> + * zsmalloc divides the pool into various size classes where each
> + * class maintains a list of zspages where each zspage is divided
> + * into equal sized chunks. Each allocation falls into one of these
> + * classes depending on its size. This function returns index of the
> + * size class which has chunk size big enough to hold the give size.
> + */
> +static int get_size_class_index(int size)
> +{
> + int idx = 0;
> +
> + if (likely(size > ZS_MIN_ALLOC_SIZE))
> + idx = DIV_ROUND_UP(size - ZS_MIN_ALLOC_SIZE,
> + ZS_SIZE_CLASS_DELTA);
> +
> + return idx;
> +}
> +
> +/*
> + * For each size class, zspages are divided into different groups
> + * depending on how "full" they are. This was done so that we could
> + * easily find empty or nearly empty zspages when we try to shrink
> + * the pool (not yet implemented). This function returns fullness
> + * status of the given page.
> + */

We can't forcibly shrink this thing? I'll be curious to see what happens
when zswap is full then.

> +static enum fullness_group get_fullness_group(struct page *page,
> + struct size_class *class)
> +{
> + int inuse, max_objects;
> + enum fullness_group fg;
> + BUG_ON(!is_first_page(page));
> +
> + inuse = page->inuse;
> + max_objects = class->pages_per_zspage * PAGE_SIZE / class->size;
> +

As class->size must be a multiple of ZS_ALIGN which is a power-of-two
then this calculation could be done as bit shifts if class->size was a
shift instead of a size. Not that important.

> + if (inuse == 0)
> + fg = ZS_EMPTY;
> + else if (inuse == max_objects)
> + fg = ZS_FULL;
> + else if (inuse <= max_objects / fullness_threshold_frac)
> + fg = ZS_ALMOST_EMPTY;
> + else
> + fg = ZS_ALMOST_FULL;
> +
> + return fg;
> +}
> +
> +/*
> + * Each size class maintains various freelists and zspages are assigned
> + * to one of these freelists based on the number of live objects they
> + * have. This functions inserts the given zspage into the freelist
> + * identified by <class, fullness_group>.
> + */
> +static void insert_zspage(struct page *page, struct size_class *class,
> + enum fullness_group fullness)
> +{
> + struct page **head;
> +
> + BUG_ON(!is_first_page(page));
> +
> + if (fullness >= _ZS_NR_FULLNESS_GROUPS)
> + return;
> +
> + head = &class->fullness_list[fullness];
> + if (*head)
> + list_add_tail(&page->lru, &(*head)->lru);
> +
> + *head = page;
> +}
> +
> +/*
> + * This function removes the given zspage from the freelist identified
> + * by <class, fullness_group>.
> + */
> +static void remove_zspage(struct page *page, struct size_class *class,
> + enum fullness_group fullness)
> +{
> + struct page **head;
> +
> + BUG_ON(!is_first_page(page));
> +
> + if (fullness >= _ZS_NR_FULLNESS_GROUPS)
> + return;
> +
> + head = &class->fullness_list[fullness];
> + BUG_ON(!*head);
> + if (list_empty(&(*head)->lru))
> + *head = NULL;
> + else if (*head == page)
> + *head = (struct page *)list_entry((*head)->lru.next,
> + struct page, lru);
> +
> + list_del_init(&page->lru);
> +}
> +
> +/*
> + * Each size class maintains zspages in different fullness groups depending
> + * on the number of live objects they contain. When allocating or freeing
> + * objects, the fullness status of the page can change, say, from ALMOST_FULL
> + * to ALMOST_EMPTY when freeing an object. This function checks if such
> + * a status change has occurred for the given page and accordingly moves the
> + * page from the freelist of the old fullness group to that of the new
> + * fullness group.
> + */
> +static enum fullness_group fix_fullness_group(struct zs_pool *pool,
> + struct page *page)
> +{
> + int class_idx;
> + struct size_class *class;
> + enum fullness_group currfg, newfg;
> +
> + BUG_ON(!is_first_page(page));
> +
> + get_zspage_mapping(page, &class_idx, &currfg);
> + class = &pool->size_class[class_idx];
> + newfg = get_fullness_group(page, class);
> + if (newfg == currfg)
> + goto out;
> +
> + remove_zspage(page, class, currfg);
> + insert_zspage(page, class, newfg);
> + set_zspage_mapping(page, class_idx, newfg);
> +
> +out:
> + return newfg;
> +}
> +
> +/*
> + * We have to decide on how many pages to link together
> + * to form a zspage for each size class. This is important
> + * to reduce wastage due to unusable space left at end of
> + * each zspage which is given as:
> + * wastage = Zp - Zp % size_class
> + * where Zp = zspage size = k * PAGE_SIZE where k = 1, 2, ...
> + *
> + * For example, for size class of 3/8 * PAGE_SIZE, we should
> + * link together 3 PAGE_SIZE sized pages to form a zspage
> + * since then we can perfectly fit in 8 such objects.
> + */
> +static int get_pages_per_zspage(int class_size)
> +{
> + int i, max_usedpc = 0;
> + /* zspage order which gives maximum used size per KB */
> + int max_usedpc_order = 1;
> +
> + for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
> + int zspage_size;
> + int waste, usedpc;
> +
> + zspage_size = i * PAGE_SIZE;
> + waste = zspage_size % class_size;
> + usedpc = (zspage_size - waste) * 100 / zspage_size;
> +
> + if (usedpc > max_usedpc) {
> + max_usedpc = usedpc;
> + max_usedpc_order = i;
> + }
> + }
> +
> + return max_usedpc_order;
> +}
> +
> +/*
> + * A single 'zspage' is composed of many system pages which are
> + * linked together using fields in struct page. This function finds
> + * the first/head page, given any component page of a zspage.
> + */
> +static struct page *get_first_page(struct page *page)
> +{
> + if (is_first_page(page))
> + return page;
> + else
> + return page->first_page;
> +}
> +
> +static struct page *get_next_page(struct page *page)
> +{
> + struct page *next;
> +
> + if (is_last_page(page))
> + next = NULL;
> + else if (is_first_page(page))
> + next = (struct page *)page->private;
> + else
> + next = list_entry(page->lru.next, struct page, lru);
> +
> + return next;
> +}
> +
> +/* Encode <page, obj_idx> as a single handle value */
> +static void *obj_location_to_handle(struct page *page, unsigned long obj_idx)
> +{
> + unsigned long handle;
> +
> + if (!page) {
> + BUG_ON(obj_idx);
> + return NULL;
> + }
> +
> + handle = page_to_pfn(page) << OBJ_INDEX_BITS;
> + handle |= (obj_idx & OBJ_INDEX_MASK);
> +
> + return (void *)handle;
> +}
> +
> +/* Decode <page, obj_idx> pair from the given object handle */
> +static void obj_handle_to_location(unsigned long handle, struct page **page,
> + unsigned long *obj_idx)
> +{
> + *page = pfn_to_page(handle >> OBJ_INDEX_BITS);
> + *obj_idx = handle & OBJ_INDEX_MASK;
> +}
> +
> +static unsigned long obj_idx_to_offset(struct page *page,
> + unsigned long obj_idx, int class_size)
> +{
> + unsigned long off = 0;
> +
> + if (!is_first_page(page))
> + off = page->index;
> +
> + return off + obj_idx * class_size;
> +}
> +
> +static void reset_page(struct page *page)
> +{
> + clear_bit(PG_private, &page->flags);
> + clear_bit(PG_private_2, &page->flags);
> + set_page_private(page, 0);
> + page->mapping = NULL;
> + page->freelist = NULL;
> + page_mapcount_reset(page);
> +}
> +
> +static void free_zspage(struct zs_ops *ops, struct page *first_page)
> +{
> + struct page *nextp, *tmp, *head_extra;
> +
> + BUG_ON(!is_first_page(first_page));
> + BUG_ON(first_page->inuse);
> +
> + head_extra = (struct page *)page_private(first_page);
> +
> + reset_page(first_page);
> + ops->free(first_page);
> +
> + /* zspage with only 1 system page */
> + if (!head_extra)
> + return;
> +
> + list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) {
> + list_del(&nextp->lru);
> + reset_page(nextp);
> + ops->free(nextp);
> + }
> + reset_page(head_extra);
> + ops->free(head_extra);
> +}
> +
> +/* Initialize a newly allocated zspage */
> +static void init_zspage(struct page *first_page, struct size_class *class)
> +{
> + unsigned long off = 0;
> + struct page *page = first_page;
> +
> + BUG_ON(!is_first_page(first_page));
> + while (page) {
> + struct page *next_page;
> + struct link_free *link;
> + unsigned int i, objs_on_page;
> +
> + /*
> + * page->index stores offset of first object starting
> + * in the page. For the first page, this is always 0,
> + * so we use first_page->index (aka ->freelist) to store
> + * head of corresponding zspage's freelist.
> + */
> + if (page != first_page)
> + page->index = off;
> +
> + link = (struct link_free *)kmap_atomic(page) +
> + off / sizeof(*link);
> + objs_on_page = (PAGE_SIZE - off) / class->size;
> +
> + for (i = 1; i <= objs_on_page; i++) {
> + off += class->size;
> + if (off < PAGE_SIZE) {
> + link->next = obj_location_to_handle(page, i);
> + link += class->size / sizeof(*link);
> + }
> + }
> +
> + /*
> + * We now come to the last (full or partial) object on this
> + * page, which must point to the first object on the next
> + * page (if present)
> + */
> + next_page = get_next_page(page);
> + link->next = obj_location_to_handle(next_page, 0);
> + kunmap_atomic(link);
> + page = next_page;
> + off = (off + class->size) % PAGE_SIZE;
> + }
> +}
> +
> +/*
> + * Allocate a zspage for the given size class
> + */
> +static struct page *alloc_zspage(struct zs_ops *ops, struct size_class *class,
> + gfp_t flags)
> +{
> + int i, error;
> + struct page *first_page = NULL, *uninitialized_var(prev_page);
> +
> + /*
> + * Allocate individual pages and link them together as:
> + * 1. first page->private = first sub-page
> + * 2. all sub-pages are linked together using page->lru
> + * 3. each sub-page is linked to the first page using page->first_page
> + *
> + * For each size class, First/Head pages are linked together using
> + * page->lru. Also, we set PG_private to identify the first page
> + * (i.e. no other sub-page has this flag set) and PG_private_2 to
> + * identify the last page.
> + */
> + error = -ENOMEM;
> + for (i = 0; i < class->pages_per_zspage; i++) {
> + struct page *page;
> +
> + page = ops->alloc(flags);
> + if (!page)
> + goto cleanup;
> +

After this point, error is guaranteed to be set to 0. It looks like the
free_zspage code at cleanup: is necessary. If we goto cleanup from here,
first_page is NULL and after here error is always 0.

> + INIT_LIST_HEAD(&page->lru);
> + if (i == 0) { /* first page */
> + SetPagePrivate(page);
> + set_page_private(page, 0);
> + first_page = page;
> + first_page->inuse = 0;
> + }
> + if (i == 1)
> + first_page->private = (unsigned long)page;
> + if (i >= 1)
> + page->first_page = first_page;
> + if (i >= 2)
> + list_add(&page->lru, &prev_page->lru);
> + if (i == class->pages_per_zspage - 1) /* last page */
> + SetPagePrivate2(page);
> + prev_page = page;
> + }
> +
> + init_zspage(first_page, class);
> +
> + first_page->freelist = obj_location_to_handle(first_page, 0);
> +
> + error = 0; /* Success */
> +
> +cleanup:
> + if (unlikely(error) && first_page) {
> + free_zspage(ops, first_page);
> + first_page = NULL;
> + }
> +
> + return first_page;
> +}
> +
> +static struct page *find_get_zspage(struct size_class *class)
> +{
> + int i;
> + struct page *page;
> +
> + for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
> + page = class->fullness_list[i];
> + if (page)
> + break;
> + }
> +
> + return page;
> +}
> +

Just an observation but the locking around this is for the entire size
class and not for a given zspage. However, I also doubt that this lock
is a heavily contended one. I'd expect any contention to be negligible
in comparison to the cost of compressing a page.

> +#ifdef CONFIG_PGTABLE_MAPPING
> +static inline int __zs_cpu_up(struct mapping_area *area)
> +{
> + /*
> + * Make sure we don't leak memory if a cpu UP notification
> + * and zs_init() race and both call zs_cpu_up() on the same cpu
> + */
> + if (area->vm)
> + return 0;
> + area->vm = alloc_vm_area(PAGE_SIZE * 2, NULL);
> + if (!area->vm)
> + return -ENOMEM;
> + return 0;
> +}
> +
> +static inline void __zs_cpu_down(struct mapping_area *area)
> +{
> + if (area->vm)
> + free_vm_area(area->vm);
> + area->vm = NULL;
> +}
> +
> +static inline void *__zs_map_object(struct mapping_area *area,
> + struct page *pages[2], int off, int size)
> +{
> + BUG_ON(map_vm_area(area->vm, PAGE_KERNEL, &pages));
> + area->vm_addr = area->vm->addr;
> + return area->vm_addr + off;
> +}
> +
> +static inline void __zs_unmap_object(struct mapping_area *area,
> + struct page *pages[2], int off, int size)
> +{
> + unsigned long addr = (unsigned long)area->vm_addr;
> + unsigned long end = addr + (PAGE_SIZE * 2);
> +
> + flush_cache_vunmap(addr, end);
> + unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> + flush_tlb_kernel_range(addr, end);
> +}
> +
> +#else /* CONFIG_PGTABLE_MAPPING*/
> +
> +static inline int __zs_cpu_up(struct mapping_area *area)
> +{
> + /*
> + * Make sure we don't leak memory if a cpu UP notification
> + * and zs_init() race and both call zs_cpu_up() on the same cpu
> + */
> + if (area->vm_buf)
> + return 0;
> + area->vm_buf = (char *)__get_free_page(GFP_KERNEL);
> + if (!area->vm_buf)
> + return -ENOMEM;
> + return 0;
> +}
> +
> +static inline void __zs_cpu_down(struct mapping_area *area)
> +{
> + if (area->vm_buf)
> + free_page((unsigned long)area->vm_buf);
> + area->vm_buf = NULL;
> +}
> +
> +static void *__zs_map_object(struct mapping_area *area,
> + struct page *pages[2], int off, int size)
> +{
> + int sizes[2];
> + void *addr;
> + char *buf = area->vm_buf;
> +
> + /* disable page faults to match kmap_atomic() return conditions */
> + pagefault_disable();
> +
> + /* no read fastpath */
> + if (area->vm_mm == ZS_MM_WO)
> + goto out;
> +
> + sizes[0] = PAGE_SIZE - off;
> + sizes[1] = size - sizes[0];
> +
> + /* copy object to per-cpu buffer */
> + addr = kmap_atomic(pages[0]);
> + memcpy(buf, addr + off, sizes[0]);
> + kunmap_atomic(addr);
> + addr = kmap_atomic(pages[1]);
> + memcpy(buf + sizes[0], addr, sizes[1]);
> + kunmap_atomic(addr);
> +out:
> + return area->vm_buf;
> +}
> +
> +static void __zs_unmap_object(struct mapping_area *area,
> + struct page *pages[2], int off, int size)
> +{
> + int sizes[2];
> + void *addr;
> + char *buf = area->vm_buf;
> +
> + /* no write fastpath */
> + if (area->vm_mm == ZS_MM_RO)
> + goto out;
> +
> + sizes[0] = PAGE_SIZE - off;
> + sizes[1] = size - sizes[0];
> +
> + /* copy per-cpu buffer to object */
> + addr = kmap_atomic(pages[0]);
> + memcpy(addr + off, buf, sizes[0]);
> + kunmap_atomic(addr);
> + addr = kmap_atomic(pages[1]);
> + memcpy(addr, buf + sizes[0], sizes[1]);
> + kunmap_atomic(addr);
> +
> +out:
> + /* enable page faults to match kunmap_atomic() return conditions */
> + pagefault_enable();
> +}
> +
> +#endif /* CONFIG_PGTABLE_MAPPING */
> +
> +static int zs_cpu_notifier(struct notifier_block *nb, unsigned long action,
> + void *pcpu)
> +{
> + int ret, cpu = (long)pcpu;
> + struct mapping_area *area;
> +
> + switch (action) {
> + case CPU_UP_PREPARE:
> + area = &per_cpu(zs_map_area, cpu);
> + ret = __zs_cpu_up(area);
> + if (ret)
> + return notifier_from_errno(ret);
> + break;
> + case CPU_DEAD:
> + case CPU_UP_CANCELED:
> + area = &per_cpu(zs_map_area, cpu);
> + __zs_cpu_down(area);
> + break;
> + }
> +
> + return NOTIFY_OK;
> +}
> +
> +static struct notifier_block zs_cpu_nb = {
> + .notifier_call = zs_cpu_notifier
> +};
> +
> +static void zs_exit(void)
> +{
> + int cpu;
> +
> + for_each_online_cpu(cpu)
> + zs_cpu_notifier(NULL, CPU_DEAD, (void *)(long)cpu);
> + unregister_cpu_notifier(&zs_cpu_nb);
> +}
> +
> +static int zs_init(void)
> +{
> + int cpu, ret;
> +
> + register_cpu_notifier(&zs_cpu_nb);
> + for_each_online_cpu(cpu) {
> + ret = zs_cpu_notifier(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
> + if (notifier_to_errno(ret))
> + goto fail;
> + }
> + return 0;
> +fail:
> + zs_exit();
> + return notifier_to_errno(ret);
> +}
> +
> +/**
> + * zs_create_pool - Creates an allocation pool to work from.
> + * @flags: allocation flags used to allocate pool metadata
> + * @ops: allocation/free callbacks for expanding the pool
> + *
> + * This function must be called before anything when using
> + * the zsmalloc allocator.
> + *
> + * On success, a pointer to the newly created pool is returned,
> + * otherwise NULL.
> + */
> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops)
> +{
> + int i, ovhd_size;
> + struct zs_pool *pool;
> +
> + ovhd_size = roundup(sizeof(*pool), PAGE_SIZE);
> + pool = kzalloc(ovhd_size, flags);
> + if (!pool)
> + return NULL;
> +
> + for (i = 0; i < ZS_SIZE_CLASSES; i++) {
> + int size;
> + struct size_class *class;
> +
> + size = ZS_MIN_ALLOC_SIZE + i * ZS_SIZE_CLASS_DELTA;
> + if (size > ZS_MAX_ALLOC_SIZE)
> + size = ZS_MAX_ALLOC_SIZE;
> +
> + class = &pool->size_class[i];
> + class->size = size;
> + class->index = i;
> + spin_lock_init(&class->lock);
> + class->pages_per_zspage = get_pages_per_zspage(size);
> +
> + }
> +
> + if (ops)
> + pool->ops = ops;
> + else
> + pool->ops = &zs_default_ops;
> +
> + return pool;
> +}
> +EXPORT_SYMBOL_GPL(zs_create_pool);
> +
> +void zs_destroy_pool(struct zs_pool *pool)
> +{
> + int i;
> +
> + for (i = 0; i < ZS_SIZE_CLASSES; i++) {
> + int fg;
> + struct size_class *class = &pool->size_class[i];
> +
> + for (fg = 0; fg < _ZS_NR_FULLNESS_GROUPS; fg++) {
> + if (class->fullness_list[fg]) {
> + pr_info("Freeing non-empty class with size "
> + "%db, fullness group %d\n",
> + class->size, fg);
> + }
> + }
> + }
> + kfree(pool);
> +}
> +EXPORT_SYMBOL_GPL(zs_destroy_pool);
> +
> +/**
> + * zs_malloc - Allocate block of given size from pool.
> + * @pool: pool to allocate from
> + * @size: size of block to allocate
> + *
> + * On success, handle to the allocated object is returned,
> + * otherwise 0.
> + * Allocation requests with size > ZS_MAX_ALLOC_SIZE will fail.
> + */
> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags)
> +{
> + unsigned long obj;
> + struct link_free *link;
> + int class_idx;
> + struct size_class *class;
> +
> + struct page *first_page, *m_page;
> + unsigned long m_objidx, m_offset;
> +
> + if (unlikely(!size || size > ZS_MAX_ALLOC_SIZE))
> + return 0;
> +
> + class_idx = get_size_class_index(size);
> + class = &pool->size_class[class_idx];
> + BUG_ON(class_idx != class->index);
> +
> + spin_lock(&class->lock);
> + first_page = find_get_zspage(class);
> +
> + if (!first_page) {
> + spin_unlock(&class->lock);
> + first_page = alloc_zspage(pool->ops, class, flags);
> + if (unlikely(!first_page))
> + return 0;
> +
> + set_zspage_mapping(first_page, class->index, ZS_EMPTY);
> + spin_lock(&class->lock);
> + class->pages_allocated += class->pages_per_zspage;
> + }
> +
> + obj = (unsigned long)first_page->freelist;
> + obj_handle_to_location(obj, &m_page, &m_objidx);
> + m_offset = obj_idx_to_offset(m_page, m_objidx, class->size);
> +
> + link = (struct link_free *)kmap_atomic(m_page) +
> + m_offset / sizeof(*link);
> + first_page->freelist = link->next;
> + memset(link, POISON_INUSE, sizeof(*link));
> + kunmap_atomic(link);
> +

Pity about the kmap_atomic but I guess it doesn't matter as the lock is
serialising the entire size class anyway.

> + first_page->inuse++;
> + /* Now move the zspage to another fullness group, if required */
> + fix_fullness_group(pool, first_page);
> + spin_unlock(&class->lock);
> +
> + return obj;
> +}
> +EXPORT_SYMBOL_GPL(zs_malloc);
> +
> +void zs_free(struct zs_pool *pool, unsigned long obj)
> +{
> + struct link_free *link;
> + struct page *first_page, *f_page;
> + unsigned long f_objidx, f_offset;
> +
> + int class_idx;
> + struct size_class *class;
> + enum fullness_group fullness;
> +
> + if (unlikely(!obj))
> + return;
> +
> + obj_handle_to_location(obj, &f_page, &f_objidx);
> + first_page = get_first_page(f_page);
> +
> + get_zspage_mapping(first_page, &class_idx, &fullness);
> + class = &pool->size_class[class_idx];
> + f_offset = obj_idx_to_offset(f_page, f_objidx, class->size);
> +
> + spin_lock(&class->lock);
> +
> + /* Insert this object in containing zspage's freelist */
> + link = (struct link_free *)((unsigned char *)kmap_atomic(f_page)
> + + f_offset);
> + link->next = first_page->freelist;
> + kunmap_atomic(link);
> + first_page->freelist = (void *)obj;
> +
> + first_page->inuse--;
> + fullness = fix_fullness_group(pool, first_page);
> +
> + if (fullness == ZS_EMPTY)
> + class->pages_allocated -= class->pages_per_zspage;
> +
> + spin_unlock(&class->lock);
> +
> + if (fullness == ZS_EMPTY)
> + free_zspage(pool->ops, first_page);
> +}
> +EXPORT_SYMBOL_GPL(zs_free);
> +
> +/**
> + * zs_map_object - get address of allocated object from handle.
> + * @pool: pool from which the object was allocated
> + * @handle: handle returned from zs_malloc
> + *
> + * Before using an object allocated from zs_malloc, it must be mapped using
> + * this function. When done with the object, it must be unmapped using
> + * zs_unmap_object.
> + *
> + * Only one object can be mapped per cpu at a time. There is no protection
> + * against nested mappings.
> + *
> + * This function returns with preemption and page faults disabled.
> +*/
> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> + enum zs_mapmode mm)
> +{
> + struct page *page;
> + unsigned long obj_idx, off;
> +
> + unsigned int class_idx;
> + enum fullness_group fg;
> + struct size_class *class;
> + struct mapping_area *area;
> + struct page *pages[2];
> +
> + BUG_ON(!handle);
> +
> + /*
> + * Because we use per-cpu mapping areas shared among the
> + * pools/users, we can't allow mapping in interrupt context
> + * because it can corrupt another users mappings.
> + */
> + BUG_ON(in_interrupt());
> +
> + obj_handle_to_location(handle, &page, &obj_idx);
> + get_zspage_mapping(get_first_page(page), &class_idx, &fg);
> + class = &pool->size_class[class_idx];
> + off = obj_idx_to_offset(page, obj_idx, class->size);
> +
> + area = &get_cpu_var(zs_map_area);
> + area->vm_mm = mm;
> + if (off + class->size <= PAGE_SIZE) {
> + /* this object is contained entirely within a page */
> + area->vm_addr = kmap_atomic(page);
> + return area->vm_addr + off;
> + }
> +
> + /* this object spans two pages */
> + pages[0] = page;
> + pages[1] = get_next_page(page);
> + BUG_ON(!pages[1]);
> +
> + return __zs_map_object(area, pages, off, class->size);
> +}
> +EXPORT_SYMBOL_GPL(zs_map_object);
> +
> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
> +{
> + struct page *page;
> + unsigned long obj_idx, off;
> +
> + unsigned int class_idx;
> + enum fullness_group fg;
> + struct size_class *class;
> + struct mapping_area *area;
> +
> + BUG_ON(!handle);
> +
> + obj_handle_to_location(handle, &page, &obj_idx);
> + get_zspage_mapping(get_first_page(page), &class_idx, &fg);
> + class = &pool->size_class[class_idx];
> + off = obj_idx_to_offset(page, obj_idx, class->size);
> +
> + area = &__get_cpu_var(zs_map_area);
> + if (off + class->size <= PAGE_SIZE)
> + kunmap_atomic(area->vm_addr);
> + else {
> + struct page *pages[2];
> +
> + pages[0] = page;
> + pages[1] = get_next_page(page);
> + BUG_ON(!pages[1]);
> +
> + __zs_unmap_object(area, pages, off, class->size);
> + }
> + put_cpu_var(zs_map_area);
> +}
> +EXPORT_SYMBOL_GPL(zs_unmap_object);
> +
> +u64 zs_get_total_size_bytes(struct zs_pool *pool)
> +{
> + int i;
> + u64 npages = 0;
> +
> + for (i = 0; i < ZS_SIZE_CLASSES; i++)
> + npages += pool->size_class[i].pages_allocated;
> +
> + return npages << PAGE_SHIFT;
> +}
> +EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
> +
> +module_init(zs_init);
> +module_exit(zs_exit);
> +
> +MODULE_LICENSE("Dual BSD/GPL");
> +MODULE_AUTHOR("Nitin Gupta <[email protected]>");
> --
> 1.8.2.1
>

--
Mel Gorman
SUSE Labs

2013-04-14 00:55:20

by Mel Gorman

[permalink] [raw]
Subject: Re: [PATCHv9 3/8] debugfs: add get/set for atomic types

On Wed, Apr 10, 2013 at 01:18:55PM -0500, Seth Jennings wrote:
> debugfs currently lack the ability to create attributes
> that set/get atomic_t values.
>
> This patch adds support for this through a new
> debugfs_create_atomic_t() function.
>
> Acked-by: Greg Kroah-Hartman <[email protected]>
> Signed-off-by: Seth Jennings <[email protected]>

Acked-by: Mel Gorman <[email protected]>

--
Mel Gorman
SUSE Labs

2013-04-14 00:56:28

by Mel Gorman

[permalink] [raw]
Subject: Re: [PATCHv9 4/8] zswap: add to mm/

On Wed, Apr 10, 2013 at 01:18:56PM -0500, Seth Jennings wrote:
> zswap is a thin compression backend for frontswap. It receives
> pages from frontswap and attempts to store them in a compressed
> memory pool, resulting in an effective partial memory reclaim and
> dramatically reduced swap device I/O.
>
> Additionally, in most cases, pages can be retrieved from this
> compressed store much more quickly than reading from tradition
> swap devices resulting in faster performance for many workloads.
>

Except in the case where the zswap pool is externally fragmented, occupies
its maximum configured size and a workload that would otherwise have fit
in memory gets pushed to swap.

Yes, it's a corner case but the changelog portrays zswap as an unconditional
win and while it certainly is going to help some cases, it won't help
them all.

> This patch adds the zswap driver to mm/
>
> Signed-off-by: Seth Jennings <[email protected]>
> ---
> mm/Kconfig | 15 ++
> mm/Makefile | 1 +
> mm/zswap.c | 665 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 681 insertions(+)
> create mode 100644 mm/zswap.c
>
> diff --git a/mm/Kconfig b/mm/Kconfig
> index aa054fc..36d93b0 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -495,3 +495,18 @@ config PGTABLE_MAPPING
>
> You can check speed with zsmalloc benchmark[1].
> [1] https://github.com/spartacus06/zsmalloc
> +
> +config ZSWAP
> + bool "In-kernel swap page compression"
> + depends on FRONTSWAP && CRYPTO
> + select CRYPTO_LZO
> + select ZSMALLOC
> + default n
> + help
> + Zswap is a backend for the frontswap mechanism in the VMM.
> + It receives pages from frontswap and attempts to store them
> + in a compressed memory pool, resulting in an effective
> + partial memory reclaim. In addition, pages and be retrieved
> + from this compressed store much faster than most tradition
> + swap devices resulting in reduced I/O and faster performance
> + for many workloads.
> diff --git a/mm/Makefile b/mm/Makefile
> index 0f6ef0a..1e0198f 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o
> obj-$(CONFIG_BOUNCE) += bounce.o
> obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o
> obj-$(CONFIG_FRONTSWAP) += frontswap.o
> +obj-$(CONFIG_ZSWAP) += zswap.o
> obj-$(CONFIG_HAS_DMA) += dmapool.o
> obj-$(CONFIG_HUGETLBFS) += hugetlb.o
> obj-$(CONFIG_NUMA) += mempolicy.o
> diff --git a/mm/zswap.c b/mm/zswap.c
> new file mode 100644
> index 0000000..db283c4
> --- /dev/null
> +++ b/mm/zswap.c
> @@ -0,0 +1,665 @@
> +/*
> + * zswap.c - zswap driver file
> + *
> + * zswap is a backend for frontswap that takes pages that are in the
> + * process of being swapped out and attempts to compress them and store
> + * them in a RAM-based memory pool. This results in a significant I/O
> + * reduction on the real swap device and, in the case of a slow swap
> + * device, can also improve workload performance.
> + *
> + * Copyright (C) 2012 Seth Jennings <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version 2
> + * of the License, or (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> +*/
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/module.h>
> +#include <linux/cpu.h>
> +#include <linux/highmem.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/types.h>
> +#include <linux/atomic.h>
> +#include <linux/frontswap.h>
> +#include <linux/rbtree.h>
> +#include <linux/swap.h>
> +#include <linux/crypto.h>
> +#include <linux/mempool.h>
> +#include <linux/zsmalloc.h>
> +
> +/*********************************
> +* statistics
> +**********************************/
> +/* Number of memory pages used by the compressed pool */
> +static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
> +/* The number of compressed pages currently stored in zswap */
> +static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> +
> +/*
> + * The statistics below are not protected from concurrent access for
> + * performance reasons so they may not be a 100% accurate. However,
> + * they do provide useful information on roughly how many times a
> + * certain event is occurring.
> +*/
> +static u64 zswap_pool_limit_hit;
> +static u64 zswap_reject_compress_poor;
> +static u64 zswap_reject_zsmalloc_fail;
> +static u64 zswap_reject_kmemcache_fail;
> +static u64 zswap_duplicate_entry;
> +

Ok. Initially I thought "vmstat" but it would be overkill in this case
and the fact zswap can be a module would be a problem.

> +/*********************************
> +* tunables
> +**********************************/
> +/* Enable/disable zswap (disabled by default, fixed at boot for now) */
> +static bool zswap_enabled;
> +module_param_named(enabled, zswap_enabled, bool, 0);
> +
> +/* Compressor to be used by zswap (fixed at boot for now) */
> +#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
> +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
> +module_param_named(compressor, zswap_compressor, charp, 0);
> +
> +/* The maximum percentage of memory that the compressed pool can occupy */
> +static unsigned int zswap_max_pool_percent = 20;
> +module_param_named(max_pool_percent,
> + zswap_max_pool_percent, uint, 0644);
> +

This has some potentially interesting NUMA characteristics.
The location of the allocated pages will depend on the process that first
allocated the page. As the pages can then be used by remote processes,
there may be increased remote accesses when accessing zswap.
Furthermore, if zone_reclaim_mode is enabled and allowed to swap it
could setup a weird situation whereby a process pushes itself fully into
zswap trying to reclaim local memory and instead pushing itself into
zswap on the local node.

If this is every reported as a problem then a workaround is to always
allocate zswap pages round-robin between online nodes.

> +/*
> + * Maximum compression ratio, as as percentage, for an acceptable

s/as as/as a/

> + * compressed page. Any pages that do not compress by at least
> + * this ratio will be rejected.
> +*/
> +static unsigned int zswap_max_compression_ratio = 80;
> +module_param_named(max_compression_ratio,
> + zswap_max_compression_ratio, uint, 0644);
> +
> +/*********************************
> +* compression functions
> +**********************************/
> +/* per-cpu compression transforms */
> +static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms;
> +
> +enum comp_op {
> + ZSWAP_COMPOP_COMPRESS,
> + ZSWAP_COMPOP_DECOMPRESS
> +};
> +
> +static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen,
> + u8 *dst, unsigned int *dlen)
> +{
> + struct crypto_comp *tfm;
> + int ret;
> +
> + tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu());

It's always the local CPU so why not get_cpu_var()?

> + switch (op) {
> + case ZSWAP_COMPOP_COMPRESS:
> + ret = crypto_comp_compress(tfm, src, slen, dst, dlen);
> + break;
> + case ZSWAP_COMPOP_DECOMPRESS:
> + ret = crypto_comp_decompress(tfm, src, slen, dst, dlen);
> + break;
> + default:
> + ret = -EINVAL;
> + }
> +
> + put_cpu();
> + return ret;
> +}
> +
> +static int __init zswap_comp_init(void)
> +{
> + if (!crypto_has_comp(zswap_compressor, 0, 0)) {
> + pr_info("%s compressor not available\n", zswap_compressor);
> + /* fall back to default compressor */
> + zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
> + if (!crypto_has_comp(zswap_compressor, 0, 0))
> + /* can't even load the default compressor */
> + return -ENODEV;
> + }
> + pr_info("using %s compressor\n", zswap_compressor);
> +
> + /* alloc percpu transforms */
> + zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *);
> + if (!zswap_comp_pcpu_tfms)
> + return -ENOMEM;
> + return 0;
> +}
> +
> +static void zswap_comp_exit(void)
> +{
> + /* free percpu transforms */
> + if (zswap_comp_pcpu_tfms)
> + free_percpu(zswap_comp_pcpu_tfms);
> +}
> +
> +/*********************************
> +* data structures
> +**********************************/
> +struct zswap_entry {
> + struct rb_node rbnode;
> + unsigned type;
> + pgoff_t offset;
> + unsigned long handle;
> + unsigned int length;
> +};

Document that the types and offset are from frontswap and the handle is
an opaque type from zsmalloc. This indicates that zswap is hard-coded
against zsmalloc but so far I do not believe I have seen anything that
forces it to be and allow either zbud or zsmalloc to be pluggable.

> +
> +struct zswap_tree {
> + struct rb_root rbroot;
> + spinlock_t lock;
> + struct zs_pool *pool;
> +};
> +
> +static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
> +
> +/*********************************
> +* zswap entry functions
> +**********************************/
> +#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache"

heh, it's only used once and it's not exactly a magic number. Seems
overkill for a #define

> +static struct kmem_cache *zswap_entry_cache;
> +
> +static inline int zswap_entry_cache_create(void)
> +{

No need to declare it inline, compiler will figure it out. Also should
return bool.

> + zswap_entry_cache =
> + kmem_cache_create(ZSWAP_KMEM_CACHE_NAME,
> + sizeof(struct zswap_entry), 0, 0, NULL);
> + return (zswap_entry_cache == NULL);
> +}
> +
> +static inline void zswap_entry_cache_destory(void)
> +{
> + kmem_cache_destroy(zswap_entry_cache);
> +}
> +
> +static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> +{
> + struct zswap_entry *entry;
> + entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> + if (!entry)
> + return NULL;

No need to check !entry, just return it.

> + return entry;
> +}
> +
> +static inline void zswap_entry_cache_free(struct zswap_entry *entry)
> +{
> + kmem_cache_free(zswap_entry_cache, entry);
> +}
> +
> +/*********************************
> +* rbtree functions
> +**********************************/
> +static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
> +{
> + struct rb_node *node = root->rb_node;
> + struct zswap_entry *entry;
> +
> + while (node) {
> + entry = rb_entry(node, struct zswap_entry, rbnode);
> + if (entry->offset > offset)
> + node = node->rb_left;
> + else if (entry->offset < offset)
> + node = node->rb_right;
> + else
> + return entry;
> + }
> + return NULL;
> +}
> +
> +/*
> + * In the case that a entry with the same offset is found, it a pointer to
> + * the existing entry is stored in dupentry and the function returns -EEXIST
> +*/
> +static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
> + struct zswap_entry **dupentry)
> +{
> + struct rb_node **link = &root->rb_node, *parent = NULL;
> + struct zswap_entry *myentry;
> +
> + while (*link) {
> + parent = *link;
> + myentry = rb_entry(parent, struct zswap_entry, rbnode);
> + if (myentry->offset > entry->offset)
> + link = &(*link)->rb_left;
> + else if (myentry->offset < entry->offset)
> + link = &(*link)->rb_right;
> + else {
> + *dupentry = myentry;
> + return -EEXIST;
> + }
> + }
> + rb_link_node(&entry->rbnode, parent, link);
> + rb_insert_color(&entry->rbnode, root);
> + return 0;
> +}
> +
> +/*********************************
> +* per-cpu code
> +**********************************/
> +static DEFINE_PER_CPU(u8 *, zswap_dstmem);
> +

That deserves a comment. It per-cpu buffers for compressing or
decompressing data.


> +static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
> +{
> + struct crypto_comp *tfm;
> + u8 *dst;
> +
> + switch (action) {
> + case CPU_UP_PREPARE:
> + tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
> + if (IS_ERR(tfm)) {
> + pr_err("can't allocate compressor transform\n");
> + return NOTIFY_BAD;
> + }
> + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
> + dst = kmalloc(PAGE_SIZE * 2, GFP_KERNEL);
> + if (!dst) {
> + pr_err("can't allocate compressor buffer\n");
> + crypto_free_comp(tfm);
> + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
> + return NOTIFY_BAD;
> + }
> + per_cpu(zswap_dstmem, cpu) = dst;
> + break;
> + case CPU_DEAD:
> + case CPU_UP_CANCELED:
> + tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu);
> + if (tfm) {
> + crypto_free_comp(tfm);
> + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
> + }
> + dst = per_cpu(zswap_dstmem, cpu);
> + if (dst) {
> + kfree(dst);
> + per_cpu(zswap_dstmem, cpu) = NULL;
> + }
> + break;
> + default:
> + break;
> + }
> + return NOTIFY_OK;
> +}
> +
> +static int zswap_cpu_notifier(struct notifier_block *nb,
> + unsigned long action, void *pcpu)
> +{
> + unsigned long cpu = (unsigned long)pcpu;
> + return __zswap_cpu_notifier(action, cpu);
> +}
> +
> +static struct notifier_block zswap_cpu_notifier_block = {
> + .notifier_call = zswap_cpu_notifier
> +};
> +
> +static int zswap_cpu_init(void)
> +{
> + unsigned long cpu;
> +
> + get_online_cpus();
> + for_each_online_cpu(cpu)
> + if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK)
> + goto cleanup;
> + register_cpu_notifier(&zswap_cpu_notifier_block);
> + put_online_cpus();
> + return 0;
> +
> +cleanup:
> + for_each_online_cpu(cpu)
> + __zswap_cpu_notifier(CPU_UP_CANCELED, cpu);
> + put_online_cpus();
> + return -ENOMEM;
> +}
> +
> +/*********************************
> +* zsmalloc callbacks
> +**********************************/
> +static mempool_t *zswap_page_pool;
> +
> +static inline unsigned int zswap_max_pool_pages(void)
> +{
> + return zswap_max_pool_percent * totalram_pages / 100;
> +}
> +
> +static inline int zswap_page_pool_create(void)
> +{
> + /* TODO: dynamically size mempool */
> + zswap_page_pool = mempool_create_page_pool(256, 0);
> + if (!zswap_page_pool)
> + return -ENOMEM;
> + return 0;
> +}
> +
> +static inline void zswap_page_pool_destroy(void)
> +{
> + mempool_destroy(zswap_page_pool);
> +}
> +
> +static struct page *zswap_alloc_page(gfp_t flags)
> +{
> + struct page *page;
> +
> + if (atomic_read(&zswap_pool_pages) >= zswap_max_pool_pages()) {
> + zswap_pool_limit_hit++;
> + return NULL;
> + }
> + page = mempool_alloc(zswap_page_pool, flags);
> + if (page)
> + atomic_inc(&zswap_pool_pages);
> + return page;
> +}
> +
> +static void zswap_free_page(struct page *page)
> +{
> + if (!page)
> + return;
> + mempool_free(page, zswap_page_pool);
> + atomic_dec(&zswap_pool_pages);
> +}

Again I find it odd that the mempool is here instead of within zsmalloc
itself. It's also not superclear why you used mempool instead of just
alloc_page/free_page


> +
> +static struct zs_ops zswap_zs_ops = {
> + .alloc = zswap_alloc_page,
> + .free = zswap_free_page
> +};
> +
> +/*********************************
> +* frontswap hooks
> +**********************************/
> +/* attempts to compress and store an single page */
> +static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> + struct page *page)
> +{
> + struct zswap_tree *tree = zswap_trees[type];
> + struct zswap_entry *entry, *dupentry;
> + int ret;
> + unsigned int dlen = PAGE_SIZE;
> + unsigned long handle;
> + char *buf;
> + u8 *src, *dst;
> +
> + if (!tree) {
> + ret = -ENODEV;
> + goto reject;
> + }
> +
> + /* allocate entry */
> + entry = zswap_entry_cache_alloc(GFP_KERNEL);
> + if (!entry) {
> + zswap_reject_kmemcache_fail++;
> + ret = -ENOMEM;
> + goto reject;
> + }
> +
> + /* compress */
> + dst = get_cpu_var(zswap_dstmem);
> + src = kmap_atomic(page);

Why kmap_atomic? We do not appear to be in an atomic context here and
you've already disabled preempt for the compression op.

> + ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen);
> + kunmap_atomic(src);
> + if (ret) {
> + ret = -EINVAL;
> + goto putcpu;
> + }
> + if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
> + zswap_reject_compress_poor++;
> + ret = -E2BIG;
> + goto putcpu;
> + }
> +
> + /* store */
> + handle = zs_malloc(tree->pool, dlen,
> + __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> + __GFP_NOWARN);
> + if (!handle) {
> + zswap_reject_zsmalloc_fail++;
> + ret = -ENOMEM;
> + goto putcpu;
> + }
> +

This is an aging inversion problem. Once zswap is full, the newest pages
are written to swap instead of old zswap pages and freeing up some
space. This means two things

1. Once zswap is full, it's performance degrades immediately to just
being even worst than traditional swap except we're doing all the
swapping but have 20% (by default) less physical memory to work with.

2. zswap is vunerable to a DOS by a process starting, allocating a
buffer that is RAM + max zswap size to fill zswap, freeing its remaining
in-core pages and then sleeping forever

zswap pages should also be maintained on a LRU with old zswap pages written
to backing storage when it's full. A logical follow-on then would be that
the size of the zswap pool can be dynamically shrunk to free physical RAM
if the refault rate between zswap and normal RAM is low.

> + buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
> + memcpy(buf, dst, dlen);
> + zs_unmap_object(tree->pool, handle);
> + put_cpu_var(zswap_dstmem);
> +
> + /* populate entry */
> + entry->type = type;
> + entry->offset = offset;
> + entry->handle = handle;
> + entry->length = dlen;
> +
> + /* map */
> + spin_lock(&tree->lock);
> + do {
> + ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
> + if (ret == -EEXIST) {
> + zswap_duplicate_entry++;
> +
> + /* remove from rbtree */
> + rb_erase(&dupentry->rbnode, &tree->rbroot);
> +
> + /* free */
> + zs_free(tree->pool, dupentry->handle);
> + zswap_entry_cache_free(dupentry);
> + atomic_dec(&zswap_stored_pages);
> + }
> + } while (ret == -EEXIST);
> + spin_unlock(&tree->lock);
> +
> + /* update stats */
> + atomic_inc(&zswap_stored_pages);
> +
> + return 0;
> +
> +putcpu:
> + put_cpu_var(zswap_dstmem);
> + zswap_entry_cache_free(entry);
> +reject:
> + return ret;
> +}
> +
> +/*
> + * returns 0 if the page was successfully decompressed
> + * return -1 on entry not found or error
> +*/
> +static int zswap_frontswap_load(unsigned type, pgoff_t offset,
> + struct page *page)
> +{
> + struct zswap_tree *tree = zswap_trees[type];
> + struct zswap_entry *entry;
> + u8 *src, *dst;
> + unsigned int dlen;
> +
> + /* find */
> + spin_lock(&tree->lock);
> + entry = zswap_rb_search(&tree->rbroot, offset);
> + spin_unlock(&tree->lock);
> +
> + /* decompress */
> + dlen = PAGE_SIZE;
> + src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
> + dst = kmap_atomic(page);
> + zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
> + dst, &dlen);
> + kunmap_atomic(dst);
> + zs_unmap_object(tree->pool, entry->handle);
> +
> + return 0;
> +}
> +
> +/* invalidates a single page */
> +static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
> +{
> + struct zswap_tree *tree = zswap_trees[type];
> + struct zswap_entry *entry;
> +
> + /* find */
> + spin_lock(&tree->lock);
> + entry = zswap_rb_search(&tree->rbroot, offset);
> +
> + /* remove from rbtree */
> + rb_erase(&entry->rbnode, &tree->rbroot);
> + spin_unlock(&tree->lock);
> +
> + /* free */
> + zs_free(tree->pool, entry->handle);
> + zswap_entry_cache_free(entry);
> + atomic_dec(&zswap_stored_pages);
> +}
> +
> +/* invalidates all pages for the given swap type */
> +static void zswap_frontswap_invalidate_area(unsigned type)
> +{
> + struct zswap_tree *tree = zswap_trees[type];
> + struct rb_node *node;
> + struct zswap_entry *entry;
> +
> + if (!tree)
> + return;
> +
> + /* walk the tree and free everything */
> + spin_lock(&tree->lock);
> + /*
> + * TODO: Even though this code should not be executed because
> + * the try_to_unuse() in swapoff should have emptied the tree,
> + * it is very wasteful to rebalance the tree after every
> + * removal when we are freeing the whole tree.
> + *
> + * If post-order traversal code is ever added to the rbtree
> + * implementation, it should be used here.
> + */
> + while ((node = rb_first(&tree->rbroot))) {
> + entry = rb_entry(node, struct zswap_entry, rbnode);
> + rb_erase(&entry->rbnode, &tree->rbroot);
> + zs_free(tree->pool, entry->handle);
> + zswap_entry_cache_free(entry);
> + }
> + tree->rbroot = RB_ROOT;
> + spin_unlock(&tree->lock);
> +}
> +
> +/* NOTE: this is called in atomic context from swapon and must not sleep */
> +static void zswap_frontswap_init(unsigned type)
> +{
> + struct zswap_tree *tree;
> +
> + tree = kzalloc(sizeof(struct zswap_tree), GFP_ATOMIC);
> + if (!tree)
> + goto err;
> + tree->pool = zs_create_pool(GFP_NOWAIT, &zswap_zs_ops);
> + if (!tree->pool)
> + goto freetree;
> + tree->rbroot = RB_ROOT;
> + spin_lock_init(&tree->lock);
> + zswap_trees[type] = tree;
> + return;
> +

Ok I think. I didn't read this as carefully because I assumed that it
would either work or blow up spectacularly and there was little scope
for being clever.

> +freetree:
> + kfree(tree);
> +err:
> + pr_err("alloc failed, zswap disabled for swap type %d\n", type);
> +}
> +
> +static struct frontswap_ops zswap_frontswap_ops = {
> + .store = zswap_frontswap_store,
> + .load = zswap_frontswap_load,
> + .invalidate_page = zswap_frontswap_invalidate_page,
> + .invalidate_area = zswap_frontswap_invalidate_area,
> + .init = zswap_frontswap_init
> +};
> +
> +/*********************************
> +* debugfs functions
> +**********************************/
> +#ifdef CONFIG_DEBUG_FS
> +#include <linux/debugfs.h>
> +
> +static struct dentry *zswap_debugfs_root;
> +
> +static int __init zswap_debugfs_init(void)
> +{
> + if (!debugfs_initialized())
> + return -ENODEV;
> +
> + zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
> + if (!zswap_debugfs_root)
> + return -ENOMEM;
> +
> + debugfs_create_u64("pool_limit_hit", S_IRUGO,
> + zswap_debugfs_root, &zswap_pool_limit_hit);
> + debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
> + zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
> + debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
> + zswap_debugfs_root, &zswap_reject_kmemcache_fail);
> + debugfs_create_u64("reject_compress_poor", S_IRUGO,
> + zswap_debugfs_root, &zswap_reject_compress_poor);
> + debugfs_create_u64("duplicate_entry", S_IRUGO,
> + zswap_debugfs_root, &zswap_duplicate_entry);
> + debugfs_create_atomic_t("pool_pages", S_IRUGO,
> + zswap_debugfs_root, &zswap_pool_pages);
> + debugfs_create_atomic_t("stored_pages", S_IRUGO,
> + zswap_debugfs_root, &zswap_stored_pages);
> +
> + return 0;
> +}
> +
> +static void __exit zswap_debugfs_exit(void)
> +{
> + debugfs_remove_recursive(zswap_debugfs_root);
> +}
> +#else
> +static inline int __init zswap_debugfs_init(void)
> +{
> + return 0;
> +}
> +
> +static inline void __exit zswap_debugfs_exit(void) { }
> +#endif
> +
> +/*********************************
> +* module init and exit
> +**********************************/
> +static int __init init_zswap(void)
> +{
> + if (!zswap_enabled)
> + return 0;
> +
> + pr_info("loading zswap\n");
> + if (zswap_entry_cache_create()) {
> + pr_err("entry cache creation failed\n");
> + goto error;
> + }
> + if (zswap_page_pool_create()) {
> + pr_err("page pool initialization failed\n");
> + goto pagepoolfail;
> + }
> + if (zswap_comp_init()) {
> + pr_err("compressor initialization failed\n");
> + goto compfail;
> + }
> + if (zswap_cpu_init()) {
> + pr_err("per-cpu initialization failed\n");
> + goto pcpufail;
> + }
> + frontswap_register_ops(&zswap_frontswap_ops);
> + if (zswap_debugfs_init())
> + pr_warn("debugfs initialization failed\n");
> + return 0;
> +pcpufail:
> + zswap_comp_exit();
> +compfail:
> + zswap_page_pool_destroy();
> +pagepoolfail:
> + zswap_entry_cache_destory();
> +error:
> + return -ENOMEM;
> +}
> +/* must be late so crypto has time to come up */
> +late_initcall(init_zswap);
> +
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Seth Jennings <[email protected]>");
> +MODULE_DESCRIPTION("Compressed cache for swap pages");

Ok, so there are some problems in there. For me, the zsmalloc fragmentation
issues are potentially the far scarier problem because unpredictable
performance characteristics tend to generate really painful bug reports
with difficult (if not impossible) to replicate problems. Those reports
are so painful in fact that I'm inclined to dig my heels in and make loud
noises unless an allocator with predictable performance characteritics
can also be used (presumably zbud) -- as a comparison point if nothing
else but also to have as a workaround for performance problems in zsmalloc.

It also looks like performance will fall off a cliff when zswap is full
but at least that's a predictable problem and easily explained to a
user. An LRU for zswap pages could always be implemented later with
bonus points if it uses refault rates to judge when the pool can be
shrunk more agressively to free physical RAM.

--
Mel Gorman
SUSE Labs

2013-04-14 00:56:57

by Mel Gorman

[permalink] [raw]
Subject: Re: [PATCHv9 7/8] zswap: add swap page writeback support

On Wed, Apr 10, 2013 at 01:18:59PM -0500, Seth Jennings wrote:
> This patch adds support for evicting swap pages that are currently
> compressed in zswap to the swap device. This functionality is very
> important and make zswap a true cache in that, once the cache is full
> or can't grow due to memory pressure, the oldest pages can be moved
> out of zswap to the swap device so newer pages can be compressed and
> stored in zswap.
>

Oh great, this may cover one of my larger objections from an earlier patch!
I had not guessed from the leader mail or the subject that this patch
implemented zswap page aging of some sort.

> This introduces a good amount of new code to guarantee coherency.
> Most notably, and LRU list is added to the zswap_tree structure,
> and refcounts are added to each entry to ensure that one code path
> doesn't free then entry while another code path is operating on it.
>
> Signed-off-by: Seth Jennings <[email protected]>
> ---
> mm/zswap.c | 530 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 508 insertions(+), 22 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index db283c4..edb354b 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -36,6 +36,12 @@
> #include <linux/mempool.h>
> #include <linux/zsmalloc.h>
>
> +#include <linux/mm_types.h>
> +#include <linux/page-flags.h>
> +#include <linux/swapops.h>
> +#include <linux/writeback.h>
> +#include <linux/pagemap.h>
> +
> /*********************************
> * statistics
> **********************************/
> @@ -43,6 +49,8 @@
> static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
> /* The number of compressed pages currently stored in zswap */
> static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> +/* The number of outstanding pages awaiting writeback */
> +static atomic_t zswap_outstanding_writebacks = ATOMIC_INIT(0);
>
> /*
> * The statistics below are not protected from concurrent access for
> @@ -51,9 +59,13 @@ static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> * certain event is occurring.
> */
> static u64 zswap_pool_limit_hit;
> +static u64 zswap_written_back_pages;
> static u64 zswap_reject_compress_poor;
> +static u64 zswap_writeback_attempted;
> +static u64 zswap_reject_tmppage_fail;
> static u64 zswap_reject_zsmalloc_fail;
> static u64 zswap_reject_kmemcache_fail;
> +static u64 zswap_saved_by_writeback;
> static u64 zswap_duplicate_entry;
>

At some point it would be nice to document what these mean. I know what
they mean now because I read the code recently but I'll have forgotten in
6 months time.

> /*********************************
> @@ -82,6 +94,14 @@ static unsigned int zswap_max_compression_ratio = 80;
> module_param_named(max_compression_ratio,
> zswap_max_compression_ratio, uint, 0644);
>
> +/*
> + * Maximum number of outstanding writebacks allowed at any given time.
> + * This is to prevent decompressing an unbounded number of compressed
> + * pages into the swap cache all at once, and to help with writeback
> + * congestion.
> +*/
> +#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
> +

Why 64?

> /*********************************
> * compression functions
> **********************************/
> @@ -144,18 +164,49 @@ static void zswap_comp_exit(void)
> /*********************************
> * data structures
> **********************************/
> +
> +/*
> + * struct zswap_entry
> + *
> + * This structure contains the metadata for tracking a single compressed
> + * page within zswap.
> + *
> + * rbnode - links the entry into red-black tree for the appropriate swap type
> + * lru - links the entry into the lru list for the appropriate swap type
> + * refcount - the number of outstanding reference to the entry. This is needed
> + * to protect against premature freeing of the entry by code
> + * concurent calls to load, invalidate, and writeback. The lock

s/concurent/concurrent/

> + * for the zswap_tree structure that contains the entry must
> + * be held while changing the refcount. Since the lock must
> + * be held, there is no reason to also make refcount atomic.
> + * type - the swap type for the entry. Used to map back to the zswap_tree
> + * structure that contains the entry.
> + * offset - the swap offset for the entry. Index into the red-black tree.
> + * handle - zsmalloc allocation handle that stores the compressed page data
> + * length - the length in bytes of the compressed page data. Needed during
> + * decompression
> + */

It's good that you document the fields but from a review perspective it
would be easier if the documentation was introduced in an earlier patch
and then update it here. Note for example that you document "type" here
even though this patch removes it.

> struct zswap_entry {
> struct rb_node rbnode;
> - unsigned type;
> + struct list_head lru;
> + int refcount;

Any particular reason you did not use struct kref (include/linux/kref.h)
for the refcount? I suppose it's because your refcount is protected by
the lock and the atomics are unnecessary but it seems unfortunate to
roll your own refcounting unless there is a good reason for it.

There is a place later where the refcount looks like it's used like a
state machine which is a bit weird.

> pgoff_t offset;
> unsigned long handle;
> unsigned int length;
> };
>
> +/*
> + * The tree lock in the zswap_tree struct protects a few things:
> + * - the rbtree
> + * - the lru list
> + * - the refcount field of each entry in the tree
> + */
> struct zswap_tree {
> struct rb_root rbroot;
> + struct list_head lru;
> spinlock_t lock;
> struct zs_pool *pool;
> + unsigned type;
> };
>
> static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
> @@ -185,6 +236,8 @@ static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> if (!entry)
> return NULL;
> + INIT_LIST_HEAD(&entry->lru);
> + entry->refcount = 1;
> return entry;
> }
>
> @@ -193,6 +246,17 @@ static inline void zswap_entry_cache_free(struct zswap_entry *entry)
> kmem_cache_free(zswap_entry_cache, entry);
> }
>
> +static inline void zswap_entry_get(struct zswap_entry *entry)
> +{
> + entry->refcount++;
> +}
> +
> +static inline int zswap_entry_put(struct zswap_entry *entry)
> +{
> + entry->refcount--;
> + return entry->refcount;
> +}
> +

I find it surprising to have a put-like interface that returns the
count. Ordinarily this would raise alarm bells because a decision could
be made based on a stale read of a refcount. In this case I expect you
are protected by the tree lock but if you ever want to make that lock
more fine-grained then you are already backed into a corner.

> /*********************************
> * rbtree functions
> **********************************/
> @@ -367,6 +431,328 @@ static struct zs_ops zswap_zs_ops = {
> .free = zswap_free_page
> };
>
> +
> +/*********************************
> +* helpers
> +**********************************/
> +
> +/*
> + * Carries out the common pattern of freeing and entry's zsmalloc allocation,
> + * freeing the entry itself, and decrementing the number of stored pages.
> + */
> +static void zswap_free_entry(struct zswap_tree *tree, struct zswap_entry *entry)
> +{
> + zs_free(tree->pool, entry->handle);
> + zswap_entry_cache_free(entry);
> + atomic_dec(&zswap_stored_pages);
> +}
> +
> +/*********************************
> +* writeback code
> +**********************************/
> +static void zswap_end_swap_write(struct bio *bio, int err)
> +{
> + end_swap_bio_write(bio, err);
> + atomic_dec(&zswap_outstanding_writebacks);
> + zswap_written_back_pages++;
> +}
> +
> +/* return enum for zswap_get_swap_cache_page */
> +enum zswap_get_swap_ret {
> + ZSWAP_SWAPCACHE_NEW,
> + ZSWAP_SWAPCACHE_EXIST,
> + ZSWAP_SWAPCACHE_NOMEM
> +};
> +
> +/*
> + * zswap_get_swap_cache_page
> + *
> + * This is an adaption of read_swap_cache_async()
> + *

This in fact looks almost identical to read_swap_cache_async(). Can the
code not be reused or the function split up in some fashion to it can be
shared between swap_state.c and zswap.c? As it is, this is just begging
to get out of sync if read_swap_cache_async() ever gets any sort of
enhancement or fix.

> + * This function tries to find a page with the given swap entry
> + * in the swapper_space address space (the swap cache). If the page
> + * is found, it is returned in retpage. Otherwise, a page is allocated,
> + * added to the swap cache, and returned in retpage.
> + *
> + * If success, the swap cache page is returned in retpage
> + * Returns 0 if page was already in the swap cache, page is not locked
> + * Returns 1 if the new page needs to be populated, page is locked
> + * Returns <0 on error
> + */
> +static int zswap_get_swap_cache_page(swp_entry_t entry,
> + struct page **retpage)
> +{
> + struct page *found_page, *new_page = NULL;
> + struct address_space *swapper_space = &swapper_spaces[swp_type(entry)];
> + int err;
> +
> + *retpage = NULL;
> + do {
> + /*
> + * First check the swap cache. Since this is normally
> + * called after lookup_swap_cache() failed, re-calling
> + * that would confuse statistics.
> + */
> + found_page = find_get_page(swapper_space, entry.val);
> + if (found_page)
> + break;
> +
> + /*
> + * Get a new page to read into from swap.
> + */
> + if (!new_page) {
> + new_page = alloc_page(GFP_KERNEL);
> + if (!new_page)
> + break; /* Out of memory */
> + }
> +
> + /*
> + * call radix_tree_preload() while we can wait.
> + */
> + err = radix_tree_preload(GFP_KERNEL);
> + if (err)
> + break;
> +
> + /*
> + * Swap entry may have been freed since our caller observed it.
> + */
> + err = swapcache_prepare(entry);
> + if (err == -EEXIST) { /* seems racy */
> + radix_tree_preload_end();
> + continue;
> + }
> + if (err) { /* swp entry is obsolete ? */
> + radix_tree_preload_end();
> + break;
> + }
> +
> + /* May fail (-ENOMEM) if radix-tree node allocation failed. */
> + __set_page_locked(new_page);
> + SetPageSwapBacked(new_page);
> + err = __add_to_swap_cache(new_page, entry);
> + if (likely(!err)) {
> + radix_tree_preload_end();
> + lru_cache_add_anon(new_page);
> + *retpage = new_page;
> + return ZSWAP_SWAPCACHE_NEW;
> + }
> + radix_tree_preload_end();
> + ClearPageSwapBacked(new_page);
> + __clear_page_locked(new_page);
> + /*
> + * add_to_swap_cache() doesn't return -EEXIST, so we can safely
> + * clear SWAP_HAS_CACHE flag.
> + */
> + swapcache_free(entry, NULL);
> + } while (err != -ENOMEM);
> +
> + if (new_page)
> + page_cache_release(new_page);
> + if (!found_page)
> + return ZSWAP_SWAPCACHE_NOMEM;
> + *retpage = found_page;
> + return ZSWAP_SWAPCACHE_EXIST;
> +}
> +
> +/*
> + * Attempts to free and entry by adding a page to the swap cache,
> + * decompressing the entry data into the page, and issuing a
> + * bio write to write the page back to the swap device.
> + *
> + * This can be thought of as a "resumed writeback" of the page
> + * to the swap device. We are basically resuming the same swap
> + * writeback path that was intercepted with the frontswap_store()
> + * in the first place. After the page has been decompressed into
> + * the swap cache, the compressed version stored by zswap can be
> + * freed.
> + */
> +static int zswap_writeback_entry(struct zswap_tree *tree,
> + struct zswap_entry *entry)
> +{
> + unsigned long type = tree->type;
> + struct page *page;
> + swp_entry_t swpentry;
> + u8 *src, *dst;
> + unsigned int dlen;
> + int ret;
> + struct writeback_control wbc = {
> + .sync_mode = WB_SYNC_NONE,
> + };
> +
> + /* get/allocate page in the swap cache */
> + swpentry = swp_entry(type, entry->offset);
> +
> + /* try to allocate swap cache page */
> + switch (zswap_get_swap_cache_page(swpentry, &page)) {
> +
> + case ZSWAP_SWAPCACHE_NOMEM: /* no memory */
> + return -ENOMEM;
> + break; /* not reached */
> +

Can leave out the break

> + case ZSWAP_SWAPCACHE_EXIST: /* page is unlocked */
> + /* page is already in the swap cache, ignore for now */
> + return -EEXIST;
> + break; /* not reached */
> +

Can leave out the break.

> + case ZSWAP_SWAPCACHE_NEW: /* page is locked */
> + /* decompress */
> + dlen = PAGE_SIZE;
> + src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
> + dst = kmap_atomic(page);
> + ret = zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
> + dst, &dlen);
> + kunmap_atomic(dst);
> + zs_unmap_object(tree->pool, entry->handle);
> + BUG_ON(ret);
> + BUG_ON(dlen != PAGE_SIZE);
> +
> + /* page is up to date */
> + SetPageUptodate(page);
> + }
> +
> + /* start writeback */
> + SetPageReclaim(page);
> + if (!__swap_writepage(page, &wbc, zswap_end_swap_write))
> + atomic_inc(&zswap_outstanding_writebacks);

humm, you queue something and then increment the writebacks. That looks
like it would be vulnerable to a race of

1. 10 processes read counter see it's ZSWAP_MAX_OUTSTANDING_FLUSHES-1
2. 10 processes queue IO
3. 10 processes increment the counter

We've now gone over the max number of outstanding flushes. It still
sortof gets limited but it's still a bit sloppy.

> + page_cache_release(page);
> +
> + return 0;
> +}
> +
> +/*
> + * Attempts to free nr of entries via writeback to the swap device.
> + * The number of entries that were actually freed is returned.
> + */
> +static int zswap_writeback_entries(struct zswap_tree *tree, int nr)
> +{
> + struct zswap_entry *entry;
> + int i, ret, refcount, freed_nr = 0;
> +
> + for (i = 0; i < nr; i++) {
> + /*
> + * This limits is arbitrary for now until a better
> + * policy can be implemented. This is so we don't
> + * eat all of RAM decompressing pages for writeback.
> + */
> + if (atomic_read(&zswap_outstanding_writebacks) >
> + ZSWAP_MAX_OUTSTANDING_FLUSHES)
> + break;
> +

This is handled a bit badly. If the max outstanding flushes are reached at
i == 0 then we return. The caller does not check the return value and if
it fails the zsmalloc again then it just fails to store the page entirely.
This means that when the maximum allowed number of pages are in flight that
new pages go straight to swap and it's again an age inversion problem. The
performance cliff when zswap full is still there although it may be harder
to hit.

You need to go on a waitqueue here until the in-flight pages are written
and it gets woken up. You will likely need to make sure it can make forward
progress even if it's crude as just sleeping here until it's woken and
returning. If it fails still then zswap writeback is congested and it
might as well just go straight to swap and take the age inversion hit.

> + spin_lock(&tree->lock);
> +
> + /* dequeue from lru */
> + if (list_empty(&tree->lru)) {
> + spin_unlock(&tree->lock);
> + break;
> + }
> + entry = list_first_entry(&tree->lru,
> + struct zswap_entry, lru);
> + list_del_init(&entry->lru);
> +
> + /* so invalidate doesn't free the entry from under us */
> + zswap_entry_get(entry);
> +
> + spin_unlock(&tree->lock);
> +
> + /* attempt writeback */
> + ret = zswap_writeback_entry(tree, entry);
> +
> + spin_lock(&tree->lock);
> +
> + /* drop reference from above */
> + refcount = zswap_entry_put(entry);
> +
> + if (!ret)
> + /* drop the initial reference from entry creation */
> + refcount = zswap_entry_put(entry);
> +

zswap_writeback_entry returns an enum but here you make assumptions on
the meaning of 0. While it's correct, it's vulnerable to bugs if someone
adds a new enum state at position 0. Compare ret to ZSWAP_SWAPCACHE_NEW.

> + /*
> + * There are four possible values for refcount here:
> + * (1) refcount is 2, writeback failed and load is in progress;
> + * do nothing, load will add us back to the LRU
> + * (2) refcount is 1, writeback failed; do not free entry,
> + * add back to LRU
> + * (3) refcount is 0, (normal case) not invalidate yet;
> + * remove from rbtree and free entry
> + * (4) refcount is -1, invalidate happened during writeback;
> + * free entry
> + */
> + if (refcount == 1)
> + list_add(&entry->lru, &tree->lru);
> +
> + if (refcount == 0) {
> + /* no invalidate yet, remove from rbtree */
> + rb_erase(&entry->rbnode, &tree->rbroot);
> + }
> + spin_unlock(&tree->lock);
> + if (refcount <= 0) {
> + /* free the entry */
> + zswap_free_entry(tree, entry);
> + freed_nr++;
> + }
> + }
> + return freed_nr;
> +}
> +
> +/*******************************************
> +* page pool for temporary compression result
> +********************************************/
> +#define ZSWAP_TMPPAGE_POOL_PAGES 16

It's strange to me that the number of pages queued in a writeback at the
same time, the size of the the tmp page pool and the maximum allowed number
of pages under writeback are independent. It seems like it should be

MAX_FLUSHES 64
WRITEBACK_BATCH (MAX_FLUSHES >> 2)
TMPPAGE_PAGES MAX_FLUSHES / WRITEBACK_BATCH

or something similar.

> +static LIST_HEAD(zswap_tmppage_list);
> +static DEFINE_SPINLOCK(zswap_tmppage_lock);
> +
> +static void zswap_tmppage_pool_destroy(void)
> +{
> + struct page *page, *tmppage;
> +
> + spin_lock(&zswap_tmppage_lock);
> + list_for_each_entry_safe(page, tmppage, &zswap_tmppage_list, lru) {
> + list_del(&page->lru);
> + __free_pages(page, 1);
> + }
> + spin_unlock(&zswap_tmppage_lock);
> +}

This looks very like a mempool but is a custom implementation. It could
have been a kref-counted pool size with an alloc function that returns
NULL if kref == ZSWAP_TMPPAGE_POOL_PAGES. Granted, that would introduce
atomics into this path but the use of a custom mempool here does not appear
justified. Any particular reason a mempool was avoided?

> +
> +static int zswap_tmppage_pool_create(void)
> +{
> + int i;
> + struct page *page;
> +
> + for (i = 0; i < ZSWAP_TMPPAGE_POOL_PAGES; i++) {
> + page = alloc_pages(GFP_KERNEL, 1);

The per-cpu compression/decompression buffers are allocated from kmalloc
as kmalloc(PAGE_SIZE*2) but here we allocate an order-1 page. Functionally
there is no difference but it seems like the order of the buffer be #defined
somewhere and them allocated with one interface or the other, not a mix.

> + if (!page) {
> + zswap_tmppage_pool_destroy();
> + return -ENOMEM;
> + }
> + spin_lock(&zswap_tmppage_lock);
> + list_add(&page->lru, &zswap_tmppage_list);
> + spin_unlock(&zswap_tmppage_lock);
> + }
> + return 0;
> +}
> +
> +static inline struct page *zswap_tmppage_alloc(void)
> +{
> + struct page *page;
> +
> + spin_lock(&zswap_tmppage_lock);
> + if (list_empty(&zswap_tmppage_list)) {
> + spin_unlock(&zswap_tmppage_lock);
> + return NULL;
> + }
> + page = list_first_entry(&zswap_tmppage_list, struct page, lru);
> + list_del(&page->lru);
> + spin_unlock(&zswap_tmppage_lock);
> + return page;
> +}
> +
> +static inline void zswap_tmppage_free(struct page *page)
> +{
> + spin_lock(&zswap_tmppage_lock);
> + list_add(&page->lru, &zswap_tmppage_list);
> + spin_unlock(&zswap_tmppage_lock);
> +}
> +
> /*********************************
> * frontswap hooks
> **********************************/
> @@ -380,7 +766,9 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> unsigned int dlen = PAGE_SIZE;
> unsigned long handle;
> char *buf;
> - u8 *src, *dst;
> + u8 *src, *dst, *tmpdst;
> + struct page *tmppage;
> + bool writeback_attempted = 0;

bool = false.

>
> if (!tree) {
> ret = -ENODEV;
> @@ -402,12 +790,12 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> kunmap_atomic(src);
> if (ret) {
> ret = -EINVAL;
> - goto putcpu;
> + goto freepage;
> }
> if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
> zswap_reject_compress_poor++;
> ret = -E2BIG;
> - goto putcpu;
> + goto freepage;
> }
>
> /* store */
> @@ -415,18 +803,48 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> __GFP_NOWARN);
> if (!handle) {
> - zswap_reject_zsmalloc_fail++;
> - ret = -ENOMEM;
> - goto putcpu;
> + zswap_writeback_attempted++;
> + /*
> + * Copy compressed buffer out of per-cpu storage so
> + * we can re-enable preemption.
> + */
> + tmppage = zswap_tmppage_alloc();
> + if (!tmppage) {
> + zswap_reject_tmppage_fail++;
> + ret = -ENOMEM;
> + goto freepage;
> + }

Similar to ZSWAP_MAX_OUTSTANDING_FLUSHES, a failure to allocate a tmppage
should result in the process waiting or it's back again to the age
inversion problem.

> + writeback_attempted = 1;
> + tmpdst = page_address(tmppage);
> + memcpy(tmpdst, dst, dlen);
> + dst = tmpdst;
> + put_cpu_var(zswap_dstmem);
> +
> + /* try to free up some space */
> + /* TODO: replace with more targeted policy */
> + zswap_writeback_entries(tree, 16);
> + /* try again, allowing wait */
> + handle = zs_malloc(tree->pool, dlen,
> + __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> + __GFP_NOWARN);
> + if (!handle) {
> + /* still no space, fail */
> + zswap_reject_zsmalloc_fail++;
> + ret = -ENOMEM;
> + goto freepage;
> + }
> + zswap_saved_by_writeback++;
> }
>
> buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
> memcpy(buf, dst, dlen);
> zs_unmap_object(tree->pool, handle);
> - put_cpu_var(zswap_dstmem);
> + if (writeback_attempted)
> + zswap_tmppage_free(tmppage);
> + else
> + put_cpu_var(zswap_dstmem);
>
> /* populate entry */
> - entry->type = type;
> entry->offset = offset;
> entry->handle = handle;
> entry->length = dlen;
> @@ -437,16 +855,17 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
> if (ret == -EEXIST) {
> zswap_duplicate_entry++;
> -
> - /* remove from rbtree */
> + /* remove from rbtree and lru */
> rb_erase(&dupentry->rbnode, &tree->rbroot);
> -
> - /* free */
> - zs_free(tree->pool, dupentry->handle);
> - zswap_entry_cache_free(dupentry);
> - atomic_dec(&zswap_stored_pages);
> + if (!list_empty(&dupentry->lru))
> + list_del_init(&dupentry->lru);
> + if (!zswap_entry_put(dupentry)) {
> + /* free */
> + zswap_free_entry(tree, dupentry);
> + }
> }
> } while (ret == -EEXIST);
> + list_add_tail(&entry->lru, &tree->lru);
> spin_unlock(&tree->lock);
>
> /* update stats */
> @@ -454,8 +873,11 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>
> return 0;
>
> -putcpu:
> - put_cpu_var(zswap_dstmem);
> +freepage:
> + if (writeback_attempted)
> + zswap_tmppage_free(tmppage);
> + else
> + put_cpu_var(zswap_dstmem);
> zswap_entry_cache_free(entry);
> reject:
> return ret;
> @@ -472,10 +894,21 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
> struct zswap_entry *entry;
> u8 *src, *dst;
> unsigned int dlen;
> + int refcount;
>
> /* find */
> spin_lock(&tree->lock);
> entry = zswap_rb_search(&tree->rbroot, offset);
> + if (!entry) {
> + /* entry was written back */
> + spin_unlock(&tree->lock);
> + return -1;
> + }
> + zswap_entry_get(entry);
> +
> + /* remove from lru */
> + if (!list_empty(&entry->lru))
> + list_del_init(&entry->lru);
> spin_unlock(&tree->lock);
>
> /* decompress */
> @@ -487,6 +920,24 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
> kunmap_atomic(dst);
> zs_unmap_object(tree->pool, entry->handle);
>
> + spin_lock(&tree->lock);
> + refcount = zswap_entry_put(entry);
> + if (likely(refcount)) {
> + list_add_tail(&entry->lru, &tree->lru);
> + spin_unlock(&tree->lock);
> + return 0;
> + }
> + spin_unlock(&tree->lock);
> +
> + /*
> + * We don't have to unlink from the rbtree because
> + * zswap_writeback_entry() or zswap_frontswap_invalidate page()
> + * has already done this for us if we are the last reference.
> + */
> + /* free */
> +
> + zswap_free_entry(tree, entry);
> +
> return 0;
> }
>
> @@ -495,19 +946,34 @@ static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
> {
> struct zswap_tree *tree = zswap_trees[type];
> struct zswap_entry *entry;
> + int refcount;
>
> /* find */
> spin_lock(&tree->lock);
> entry = zswap_rb_search(&tree->rbroot, offset);
> + if (!entry) {
> + /* entry was written back */
> + spin_unlock(&tree->lock);
> + return;
> + }
>
> - /* remove from rbtree */
> + /* remove from rbtree and lru */
> rb_erase(&entry->rbnode, &tree->rbroot);
> + if (!list_empty(&entry->lru))
> + list_del_init(&entry->lru);
> +
> + /* drop the initial reference from entry creation */
> + refcount = zswap_entry_put(entry);
> +
> spin_unlock(&tree->lock);
>
> + if (refcount) {
> + /* writeback in progress, writeback will free */
> + return;
> + }
> +

I'm not keen on a refcount check, lock drop and then action based on
the stale read even though I do not see it causing a problem here as
such. Still, if at all possible I'd prefer to see the usual pattern of an
atomic_sub_and_test calling a release function like what kref_sub does to
avoid any potential problems with stale refcount checks in the future.


> /* free */
> - zs_free(tree->pool, entry->handle);
> - zswap_entry_cache_free(entry);
> - atomic_dec(&zswap_stored_pages);
> + zswap_free_entry(tree, entry);
> }
>
> /* invalidates all pages for the given swap type */
> @@ -536,8 +1002,10 @@ static void zswap_frontswap_invalidate_area(unsigned type)
> rb_erase(&entry->rbnode, &tree->rbroot);
> zs_free(tree->pool, entry->handle);
> zswap_entry_cache_free(entry);
> + atomic_dec(&zswap_stored_pages);
> }
> tree->rbroot = RB_ROOT;
> + INIT_LIST_HEAD(&tree->lru);
> spin_unlock(&tree->lock);
> }
>
> @@ -553,7 +1021,9 @@ static void zswap_frontswap_init(unsigned type)
> if (!tree->pool)
> goto freetree;
> tree->rbroot = RB_ROOT;
> + INIT_LIST_HEAD(&tree->lru);
> spin_lock_init(&tree->lock);
> + tree->type = type;
> zswap_trees[type] = tree;
> return;
>
> @@ -588,20 +1058,30 @@ static int __init zswap_debugfs_init(void)
> if (!zswap_debugfs_root)
> return -ENOMEM;
>
> + debugfs_create_u64("saved_by_writeback", S_IRUGO,
> + zswap_debugfs_root, &zswap_saved_by_writeback);
> debugfs_create_u64("pool_limit_hit", S_IRUGO,
> zswap_debugfs_root, &zswap_pool_limit_hit);
> + debugfs_create_u64("reject_writeback_attempted", S_IRUGO,
> + zswap_debugfs_root, &zswap_writeback_attempted);
> + debugfs_create_u64("reject_tmppage_fail", S_IRUGO,
> + zswap_debugfs_root, &zswap_reject_tmppage_fail);
> debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
> zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
> debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
> zswap_debugfs_root, &zswap_reject_kmemcache_fail);
> debugfs_create_u64("reject_compress_poor", S_IRUGO,
> zswap_debugfs_root, &zswap_reject_compress_poor);
> + debugfs_create_u64("written_back_pages", S_IRUGO,
> + zswap_debugfs_root, &zswap_written_back_pages);
> debugfs_create_u64("duplicate_entry", S_IRUGO,
> zswap_debugfs_root, &zswap_duplicate_entry);
> debugfs_create_atomic_t("pool_pages", S_IRUGO,
> zswap_debugfs_root, &zswap_pool_pages);
> debugfs_create_atomic_t("stored_pages", S_IRUGO,
> zswap_debugfs_root, &zswap_stored_pages);
> + debugfs_create_atomic_t("outstanding_writebacks", S_IRUGO,
> + zswap_debugfs_root, &zswap_outstanding_writebacks);
>
> return 0;
> }
> @@ -636,6 +1116,10 @@ static int __init init_zswap(void)
> pr_err("page pool initialization failed\n");
> goto pagepoolfail;
> }
> + if (zswap_tmppage_pool_create()) {
> + pr_err("workmem pool initialization failed\n");
> + goto tmppoolfail;
> + }
> if (zswap_comp_init()) {
> pr_err("compressor initialization failed\n");
> goto compfail;
> @@ -651,6 +1135,8 @@ static int __init init_zswap(void)
> pcpufail:
> zswap_comp_exit();
> compfail:
> + zswap_tmppage_pool_destroy();
> +tmppoolfail:
> zswap_page_pool_destroy();
> pagepoolfail:
> zswap_entry_cache_destory();

--
Mel Gorman
SUSE Labs

2013-04-17 17:17:47

by Seth Jennings

[permalink] [raw]
Subject: Re: [PATCHv9 7/8] zswap: add swap page writeback support

On Sun, Apr 14, 2013 at 01:45:28AM +0100, Mel Gorman wrote:
> On Wed, Apr 10, 2013 at 01:18:59PM -0500, Seth Jennings wrote:
> > This patch adds support for evicting swap pages that are currently
> > compressed in zswap to the swap device. This functionality is very
> > important and make zswap a true cache in that, once the cache is full
> > or can't grow due to memory pressure, the oldest pages can be moved
> > out of zswap to the swap device so newer pages can be compressed and
> > stored in zswap.
> >
>
> Oh great, this may cover one of my larger objections from an earlier patch!
> I had not guessed from the leader mail or the subject that this patch
> implemented zswap page aging of some sort.
>

Glad this satisfies some objections :)

> > This introduces a good amount of new code to guarantee coherency.
> > Most notably, and LRU list is added to the zswap_tree structure,
> > and refcounts are added to each entry to ensure that one code path
> > doesn't free then entry while another code path is operating on it.
> >
> > Signed-off-by: Seth Jennings <[email protected]>
> > ---
> > mm/zswap.c | 530 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
> > 1 file changed, 508 insertions(+), 22 deletions(-)
> >
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index db283c4..edb354b 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -36,6 +36,12 @@
> > #include <linux/mempool.h>
> > #include <linux/zsmalloc.h>
> >
> > +#include <linux/mm_types.h>
> > +#include <linux/page-flags.h>
> > +#include <linux/swapops.h>
> > +#include <linux/writeback.h>
> > +#include <linux/pagemap.h>
> > +
> > /*********************************
> > * statistics
> > **********************************/
> > @@ -43,6 +49,8 @@
> > static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
> > /* The number of compressed pages currently stored in zswap */
> > static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> > +/* The number of outstanding pages awaiting writeback */
> > +static atomic_t zswap_outstanding_writebacks = ATOMIC_INIT(0);
> >
> > /*
> > * The statistics below are not protected from concurrent access for
> > @@ -51,9 +59,13 @@ static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> > * certain event is occurring.
> > */
> > static u64 zswap_pool_limit_hit;
> > +static u64 zswap_written_back_pages;
> > static u64 zswap_reject_compress_poor;
> > +static u64 zswap_writeback_attempted;
> > +static u64 zswap_reject_tmppage_fail;
> > static u64 zswap_reject_zsmalloc_fail;
> > static u64 zswap_reject_kmemcache_fail;
> > +static u64 zswap_saved_by_writeback;
> > static u64 zswap_duplicate_entry;
> >
>
> At some point it would be nice to document what these mean. I know what
> they mean now because I read the code recently but I'll have forgotten in
> 6 months time.

Will do.

>
> > /*********************************
> > @@ -82,6 +94,14 @@ static unsigned int zswap_max_compression_ratio = 80;
> > module_param_named(max_compression_ratio,
> > zswap_max_compression_ratio, uint, 0644);
> >
> > +/*
> > + * Maximum number of outstanding writebacks allowed at any given time.
> > + * This is to prevent decompressing an unbounded number of compressed
> > + * pages into the swap cache all at once, and to help with writeback
> > + * congestion.
> > +*/
> > +#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
> > +
>
> Why 64?

Right now, it's rather arbitrary. Ideally, it would check for underlying I/O
congestion rather than setting this hard limit. I did this as a quick way to
bound the amount of pages that can be decompressed at once. We don't want
writeback to cause too much addition memory pressure.

>
> > /*********************************
> > * compression functions
> > **********************************/
> > @@ -144,18 +164,49 @@ static void zswap_comp_exit(void)
> > /*********************************
> > * data structures
> > **********************************/
> > +
> > +/*
> > + * struct zswap_entry
> > + *
> > + * This structure contains the metadata for tracking a single compressed
> > + * page within zswap.
> > + *
> > + * rbnode - links the entry into red-black tree for the appropriate swap type
> > + * lru - links the entry into the lru list for the appropriate swap type
> > + * refcount - the number of outstanding reference to the entry. This is needed
> > + * to protect against premature freeing of the entry by code
> > + * concurent calls to load, invalidate, and writeback. The lock
>
> s/concurent/concurrent/

Yes.

>
> > + * for the zswap_tree structure that contains the entry must
> > + * be held while changing the refcount. Since the lock must
> > + * be held, there is no reason to also make refcount atomic.
> > + * type - the swap type for the entry. Used to map back to the zswap_tree
> > + * structure that contains the entry.
> > + * offset - the swap offset for the entry. Index into the red-black tree.
> > + * handle - zsmalloc allocation handle that stores the compressed page data
> > + * length - the length in bytes of the compressed page data. Needed during
> > + * decompression
> > + */
>
> It's good that you document the fields but from a review perspective it
> would be easier if the documentation was introduced in an earlier patch
> and then update it here. Note for example that you document "type" here
> even though this patch removes it.

I'll fix it up.

>
> > struct zswap_entry {
> > struct rb_node rbnode;
> > - unsigned type;
> > + struct list_head lru;
> > + int refcount;
>
> Any particular reason you did not use struct kref (include/linux/kref.h)
> for the refcount? I suppose it's because your refcount is protected by
> the lock and the atomics are unnecessary but it seems unfortunate to
> roll your own refcounting unless there is a good reason for it.

kref uses atomic refcounters. However, because this design requires a lock be
held for other operations that occur at the same time as the refcount changes,
there was no reason to make the refcounts atomic as it would be redundantly
atomic. That was my rationale.

>
> There is a place later where the refcount looks like it's used like a
> state machine which is a bit weird.

Yes... I'm happy to entertain thoughts on how to do that in a cleaner way.

>
> > pgoff_t offset;
> > unsigned long handle;
> > unsigned int length;
> > };
> >
> > +/*
> > + * The tree lock in the zswap_tree struct protects a few things:
> > + * - the rbtree
> > + * - the lru list
> > + * - the refcount field of each entry in the tree
> > + */
> > struct zswap_tree {
> > struct rb_root rbroot;
> > + struct list_head lru;
> > spinlock_t lock;
> > struct zs_pool *pool;
> > + unsigned type;
> > };
> >
> > static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
> > @@ -185,6 +236,8 @@ static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> > entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> > if (!entry)
> > return NULL;
> > + INIT_LIST_HEAD(&entry->lru);
> > + entry->refcount = 1;
> > return entry;
> > }
> >
> > @@ -193,6 +246,17 @@ static inline void zswap_entry_cache_free(struct zswap_entry *entry)
> > kmem_cache_free(zswap_entry_cache, entry);
> > }
> >
> > +static inline void zswap_entry_get(struct zswap_entry *entry)
> > +{
> > + entry->refcount++;
> > +}
> > +
> > +static inline int zswap_entry_put(struct zswap_entry *entry)
> > +{
> > + entry->refcount--;
> > + return entry->refcount;
> > +}
> > +
>
> I find it surprising to have a put-like interface that returns the
> count. Ordinarily this would raise alarm bells because a decision could
> be made based on a stale read of a refcount. In this case I expect you
> are protected by the tree lock but if you ever want to make that lock
> more fine-grained then you are already backed into a corner.
>
> > /*********************************
> > * rbtree functions
> > **********************************/
> > @@ -367,6 +431,328 @@ static struct zs_ops zswap_zs_ops = {
> > .free = zswap_free_page
> > };
> >
> > +
> > +/*********************************
> > +* helpers
> > +**********************************/
> > +
> > +/*
> > + * Carries out the common pattern of freeing and entry's zsmalloc allocation,
> > + * freeing the entry itself, and decrementing the number of stored pages.
> > + */
> > +static void zswap_free_entry(struct zswap_tree *tree, struct zswap_entry *entry)
> > +{
> > + zs_free(tree->pool, entry->handle);
> > + zswap_entry_cache_free(entry);
> > + atomic_dec(&zswap_stored_pages);
> > +}
> > +
> > +/*********************************
> > +* writeback code
> > +**********************************/
> > +static void zswap_end_swap_write(struct bio *bio, int err)
> > +{
> > + end_swap_bio_write(bio, err);
> > + atomic_dec(&zswap_outstanding_writebacks);
> > + zswap_written_back_pages++;
> > +}
> > +
> > +/* return enum for zswap_get_swap_cache_page */
> > +enum zswap_get_swap_ret {
> > + ZSWAP_SWAPCACHE_NEW,
> > + ZSWAP_SWAPCACHE_EXIST,
> > + ZSWAP_SWAPCACHE_NOMEM
> > +};
> > +
> > +/*
> > + * zswap_get_swap_cache_page
> > + *
> > + * This is an adaption of read_swap_cache_async()
> > + *
>
> This in fact looks almost identical to read_swap_cache_async(). Can the
> code not be reused or the function split up in some fashion to it can be
> shared between swap_state.c and zswap.c? As it is, this is just begging
> to get out of sync if read_swap_cache_async() ever gets any sort of
> enhancement or fix.

So I'll try to make this quick. The main differences from
read_swap_cache_async() are:

1) zswap needs to know if the page already existed in the swap cache or a new
page had to be added. Right now zswap avoids writeback on pages that are
still in the swapcache to avoid yet-to-be-explored races.

2) read_swap_cache_async() assumes (and reasonably so) that it is being cause as
a result of a fault on a swaped out PTE that corresponds to a vma in a
userspace program. In our case, there is no fault causing the operation.

3) read_swap_cache_async() calls swap_readpage() to fill the page if it isn't in
the swapcache. While the frontswap_load() hook is in the swap_readpage()
function, I tired to short-circuit the lookup in the frontswap_map since we
_know_ that the entry resides in zswap.

I'm definitely in favor of reusing code wherever possible if there is a way to
do it cleanly. I'm open to suggestions.

>
> > + * This function tries to find a page with the given swap entry
> > + * in the swapper_space address space (the swap cache). If the page
> > + * is found, it is returned in retpage. Otherwise, a page is allocated,
> > + * added to the swap cache, and returned in retpage.
> > + *
> > + * If success, the swap cache page is returned in retpage
> > + * Returns 0 if page was already in the swap cache, page is not locked
> > + * Returns 1 if the new page needs to be populated, page is locked
> > + * Returns <0 on error
> > + */
> > +static int zswap_get_swap_cache_page(swp_entry_t entry,
> > + struct page **retpage)
> > +{
> > + struct page *found_page, *new_page = NULL;
> > + struct address_space *swapper_space = &swapper_spaces[swp_type(entry)];
> > + int err;
> > +
> > + *retpage = NULL;
> > + do {
> > + /*
> > + * First check the swap cache. Since this is normally
> > + * called after lookup_swap_cache() failed, re-calling
> > + * that would confuse statistics.
> > + */
> > + found_page = find_get_page(swapper_space, entry.val);
> > + if (found_page)
> > + break;
> > +
> > + /*
> > + * Get a new page to read into from swap.
> > + */
> > + if (!new_page) {
> > + new_page = alloc_page(GFP_KERNEL);
> > + if (!new_page)
> > + break; /* Out of memory */
> > + }
> > +
> > + /*
> > + * call radix_tree_preload() while we can wait.
> > + */
> > + err = radix_tree_preload(GFP_KERNEL);
> > + if (err)
> > + break;
> > +
> > + /*
> > + * Swap entry may have been freed since our caller observed it.
> > + */
> > + err = swapcache_prepare(entry);
> > + if (err == -EEXIST) { /* seems racy */
> > + radix_tree_preload_end();
> > + continue;
> > + }
> > + if (err) { /* swp entry is obsolete ? */
> > + radix_tree_preload_end();
> > + break;
> > + }
> > +
> > + /* May fail (-ENOMEM) if radix-tree node allocation failed. */
> > + __set_page_locked(new_page);
> > + SetPageSwapBacked(new_page);
> > + err = __add_to_swap_cache(new_page, entry);
> > + if (likely(!err)) {
> > + radix_tree_preload_end();
> > + lru_cache_add_anon(new_page);
> > + *retpage = new_page;
> > + return ZSWAP_SWAPCACHE_NEW;
> > + }
> > + radix_tree_preload_end();
> > + ClearPageSwapBacked(new_page);
> > + __clear_page_locked(new_page);
> > + /*
> > + * add_to_swap_cache() doesn't return -EEXIST, so we can safely
> > + * clear SWAP_HAS_CACHE flag.
> > + */
> > + swapcache_free(entry, NULL);
> > + } while (err != -ENOMEM);
> > +
> > + if (new_page)
> > + page_cache_release(new_page);
> > + if (!found_page)
> > + return ZSWAP_SWAPCACHE_NOMEM;
> > + *retpage = found_page;
> > + return ZSWAP_SWAPCACHE_EXIST;
> > +}
> > +
> > +/*
> > + * Attempts to free and entry by adding a page to the swap cache,
> > + * decompressing the entry data into the page, and issuing a
> > + * bio write to write the page back to the swap device.
> > + *
> > + * This can be thought of as a "resumed writeback" of the page
> > + * to the swap device. We are basically resuming the same swap
> > + * writeback path that was intercepted with the frontswap_store()
> > + * in the first place. After the page has been decompressed into
> > + * the swap cache, the compressed version stored by zswap can be
> > + * freed.
> > + */
> > +static int zswap_writeback_entry(struct zswap_tree *tree,
> > + struct zswap_entry *entry)
> > +{
> > + unsigned long type = tree->type;
> > + struct page *page;
> > + swp_entry_t swpentry;
> > + u8 *src, *dst;
> > + unsigned int dlen;
> > + int ret;
> > + struct writeback_control wbc = {
> > + .sync_mode = WB_SYNC_NONE,
> > + };
> > +
> > + /* get/allocate page in the swap cache */
> > + swpentry = swp_entry(type, entry->offset);
> > +
> > + /* try to allocate swap cache page */
> > + switch (zswap_get_swap_cache_page(swpentry, &page)) {
> > +
> > + case ZSWAP_SWAPCACHE_NOMEM: /* no memory */
> > + return -ENOMEM;
> > + break; /* not reached */
> > +
>
> Can leave out the break

Ok

>
> > + case ZSWAP_SWAPCACHE_EXIST: /* page is unlocked */
> > + /* page is already in the swap cache, ignore for now */
> > + return -EEXIST;
> > + break; /* not reached */
> > +
>
> Can leave out the break.

Ok

>
> > + case ZSWAP_SWAPCACHE_NEW: /* page is locked */
> > + /* decompress */
> > + dlen = PAGE_SIZE;
> > + src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
> > + dst = kmap_atomic(page);
> > + ret = zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
> > + dst, &dlen);
> > + kunmap_atomic(dst);
> > + zs_unmap_object(tree->pool, entry->handle);
> > + BUG_ON(ret);
> > + BUG_ON(dlen != PAGE_SIZE);
> > +
> > + /* page is up to date */
> > + SetPageUptodate(page);
> > + }
> > +
> > + /* start writeback */
> > + SetPageReclaim(page);
> > + if (!__swap_writepage(page, &wbc, zswap_end_swap_write))
> > + atomic_inc(&zswap_outstanding_writebacks);
>
> humm, you queue something and then increment the writebacks. That looks
> like it would be vulnerable to a race of
>
> 1. 10 processes read counter see it's ZSWAP_MAX_OUTSTANDING_FLUSHES-1
> 2. 10 processes queue IO
> 3. 10 processes increment the counter
>
> We've now gone over the max number of outstanding flushes. It still
> sortof gets limited but it's still a bit sloppy.

Yes, ZSWAP_MAX_OUTSTANDING_FLUSHES is not a hard limit on the number of page
that can be in writeback at any particular time. And yes, it's not the best.
As I mentioned earlier, I did spend to much time thinking about how to make
this approach hard bounded since such an approach would 1) require additional
serialization and hurt performance and 2) be replaced soon by a mechanism based
on congestion rather than some hard-coded limit.

>
> > + page_cache_release(page);
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * Attempts to free nr of entries via writeback to the swap device.
> > + * The number of entries that were actually freed is returned.
> > + */
> > +static int zswap_writeback_entries(struct zswap_tree *tree, int nr)
> > +{
> > + struct zswap_entry *entry;
> > + int i, ret, refcount, freed_nr = 0;
> > +
> > + for (i = 0; i < nr; i++) {
> > + /*
> > + * This limits is arbitrary for now until a better
> > + * policy can be implemented. This is so we don't
> > + * eat all of RAM decompressing pages for writeback.
> > + */
> > + if (atomic_read(&zswap_outstanding_writebacks) >
> > + ZSWAP_MAX_OUTSTANDING_FLUSHES)
> > + break;
> > +
>
> This is handled a bit badly. If the max outstanding flushes are reached at
> i == 0 then we return. The caller does not check the return value and if
> it fails the zsmalloc again then it just fails to store the page entirely.
> This means that when the maximum allowed number of pages are in flight that
> new pages go straight to swap and it's again an age inversion problem. The
> performance cliff when zswap full is still there although it may be harder
> to hit.

Yes, once you overflow zswap, you can hit the age inversion situation again.
This mainly stems from the frontswap API hooks in past the point in pageout()
that checks for congestion. We currently can't rate limit the number of pages
coming into zswap...

>
> You need to go on a waitqueue here until the in-flight pages are written
> and it gets woken up. You will likely need to make sure it can make forward
> progress even if it's crude as just sleeping here until it's woken and
> returning. If it fails still then zswap writeback is congested and it
> might as well just go straight to swap and take the age inversion hit.

except of course by blocking :)

>
> > + spin_lock(&tree->lock);
> > +
> > + /* dequeue from lru */
> > + if (list_empty(&tree->lru)) {
> > + spin_unlock(&tree->lock);
> > + break;
> > + }
> > + entry = list_first_entry(&tree->lru,
> > + struct zswap_entry, lru);
> > + list_del_init(&entry->lru);
> > +
> > + /* so invalidate doesn't free the entry from under us */
> > + zswap_entry_get(entry);
> > +
> > + spin_unlock(&tree->lock);
> > +
> > + /* attempt writeback */
> > + ret = zswap_writeback_entry(tree, entry);
> > +
> > + spin_lock(&tree->lock);
> > +
> > + /* drop reference from above */
> > + refcount = zswap_entry_put(entry);
> > +
> > + if (!ret)
> > + /* drop the initial reference from entry creation */
> > + refcount = zswap_entry_put(entry);
> > +
>
> zswap_writeback_entry returns an enum but here you make assumptions on
> the meaning of 0. While it's correct, it's vulnerable to bugs if someone
> adds a new enum state at position 0. Compare ret to ZSWAP_SWAPCACHE_NEW.

zswap_writebck_entry() returns an int.

>
> > + /*
> > + * There are four possible values for refcount here:
> > + * (1) refcount is 2, writeback failed and load is in progress;
> > + * do nothing, load will add us back to the LRU
> > + * (2) refcount is 1, writeback failed; do not free entry,
> > + * add back to LRU
> > + * (3) refcount is 0, (normal case) not invalidate yet;
> > + * remove from rbtree and free entry
> > + * (4) refcount is -1, invalidate happened during writeback;
> > + * free entry
> > + */
> > + if (refcount == 1)
> > + list_add(&entry->lru, &tree->lru);
> > +
> > + if (refcount == 0) {
> > + /* no invalidate yet, remove from rbtree */
> > + rb_erase(&entry->rbnode, &tree->rbroot);
> > + }
> > + spin_unlock(&tree->lock);
> > + if (refcount <= 0) {
> > + /* free the entry */
> > + zswap_free_entry(tree, entry);
> > + freed_nr++;
> > + }
> > + }
> > + return freed_nr;
> > +}
> > +
> > +/*******************************************
> > +* page pool for temporary compression result
> > +********************************************/
> > +#define ZSWAP_TMPPAGE_POOL_PAGES 16
>
> It's strange to me that the number of pages queued in a writeback at the
> same time, the size of the the tmp page pool and the maximum allowed number
> of pages under writeback are independent. It seems like it should be
>
> MAX_FLUSHES 64
> WRITEBACK_BATCH (MAX_FLUSHES >> 2)
> TMPPAGE_PAGES MAX_FLUSHES / WRITEBACK_BATCH
>
> or something similar.

See next comment

>
> > +static LIST_HEAD(zswap_tmppage_list);
> > +static DEFINE_SPINLOCK(zswap_tmppage_lock);
> > +
> > +static void zswap_tmppage_pool_destroy(void)
> > +{
> > + struct page *page, *tmppage;
> > +
> > + spin_lock(&zswap_tmppage_lock);
> > + list_for_each_entry_safe(page, tmppage, &zswap_tmppage_list, lru) {
> > + list_del(&page->lru);
> > + __free_pages(page, 1);
> > + }
> > + spin_unlock(&zswap_tmppage_lock);
> > +}
>
> This looks very like a mempool but is a custom implementation. It could
> have been a kref-counted pool size with an alloc function that returns
> NULL if kref == ZSWAP_TMPPAGE_POOL_PAGES. Granted, that would introduce
> atomics into this path but the use of a custom mempool here does not appear
> justified. Any particular reason a mempool was avoided?

Yes, it is similar with a subtle difference: that the pool is statically sized.
mempool will try to maintain a fixed number of free pages available in the
pool. Also we are allocating order-1 pages.

This whole tmppage mechanism was designed to avoid recompression during the
store in the case we had to re-enable preemption for writeback. I'm hoping
that with improvements in the writeback mechanism, we might remove the need for
this.

>
> > +
> > +static int zswap_tmppage_pool_create(void)
> > +{
> > + int i;
> > + struct page *page;
> > +
> > + for (i = 0; i < ZSWAP_TMPPAGE_POOL_PAGES; i++) {
> > + page = alloc_pages(GFP_KERNEL, 1);
>
> The per-cpu compression/decompression buffers are allocated from kmalloc
> as kmalloc(PAGE_SIZE*2) but here we allocate an order-1 page. Functionally
> there is no difference but it seems like the order of the buffer be #defined
> somewhere and them allocated with one interface or the other, not a mix.

Yes. That difference looks wonky. I'll fix it up.

>
> > + if (!page) {
> > + zswap_tmppage_pool_destroy();
> > + return -ENOMEM;
> > + }
> > + spin_lock(&zswap_tmppage_lock);
> > + list_add(&page->lru, &zswap_tmppage_list);
> > + spin_unlock(&zswap_tmppage_lock);
> > + }
> > + return 0;
> > +}
> > +
> > +static inline struct page *zswap_tmppage_alloc(void)
> > +{
> > + struct page *page;
> > +
> > + spin_lock(&zswap_tmppage_lock);
> > + if (list_empty(&zswap_tmppage_list)) {
> > + spin_unlock(&zswap_tmppage_lock);
> > + return NULL;
> > + }
> > + page = list_first_entry(&zswap_tmppage_list, struct page, lru);
> > + list_del(&page->lru);
> > + spin_unlock(&zswap_tmppage_lock);
> > + return page;
> > +}
> > +
> > +static inline void zswap_tmppage_free(struct page *page)
> > +{
> > + spin_lock(&zswap_tmppage_lock);
> > + list_add(&page->lru, &zswap_tmppage_list);
> > + spin_unlock(&zswap_tmppage_lock);
> > +}
> > +
> > /*********************************
> > * frontswap hooks
> > **********************************/
> > @@ -380,7 +766,9 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> > unsigned int dlen = PAGE_SIZE;
> > unsigned long handle;
> > char *buf;
> > - u8 *src, *dst;
> > + u8 *src, *dst, *tmpdst;
> > + struct page *tmppage;
> > + bool writeback_attempted = 0;
>
> bool = false.

Ok

>
> >
> > if (!tree) {
> > ret = -ENODEV;
> > @@ -402,12 +790,12 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> > kunmap_atomic(src);
> > if (ret) {
> > ret = -EINVAL;
> > - goto putcpu;
> > + goto freepage;
> > }
> > if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
> > zswap_reject_compress_poor++;
> > ret = -E2BIG;
> > - goto putcpu;
> > + goto freepage;
> > }
> >
> > /* store */
> > @@ -415,18 +803,48 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> > __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> > __GFP_NOWARN);
> > if (!handle) {
> > - zswap_reject_zsmalloc_fail++;
> > - ret = -ENOMEM;
> > - goto putcpu;
> > + zswap_writeback_attempted++;
> > + /*
> > + * Copy compressed buffer out of per-cpu storage so
> > + * we can re-enable preemption.
> > + */
> > + tmppage = zswap_tmppage_alloc();
> > + if (!tmppage) {
> > + zswap_reject_tmppage_fail++;
> > + ret = -ENOMEM;
> > + goto freepage;
> > + }
>
> Similar to ZSWAP_MAX_OUTSTANDING_FLUSHES, a failure to allocate a tmppage
> should result in the process waiting or it's back again to the age
> inversion problem.

Ok, I really didn't think about blocking here as a mechanism for throttling
page-out rate. I'll put some thought into it.

>
> > + writeback_attempted = 1;
> > + tmpdst = page_address(tmppage);
> > + memcpy(tmpdst, dst, dlen);
> > + dst = tmpdst;
> > + put_cpu_var(zswap_dstmem);
> > +
> > + /* try to free up some space */
> > + /* TODO: replace with more targeted policy */
> > + zswap_writeback_entries(tree, 16);
> > + /* try again, allowing wait */
> > + handle = zs_malloc(tree->pool, dlen,
> > + __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> > + __GFP_NOWARN);
> > + if (!handle) {
> > + /* still no space, fail */
> > + zswap_reject_zsmalloc_fail++;
> > + ret = -ENOMEM;
> > + goto freepage;
> > + }
> > + zswap_saved_by_writeback++;
> > }
> >
> > buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
> > memcpy(buf, dst, dlen);
> > zs_unmap_object(tree->pool, handle);
> > - put_cpu_var(zswap_dstmem);
> > + if (writeback_attempted)
> > + zswap_tmppage_free(tmppage);
> > + else
> > + put_cpu_var(zswap_dstmem);
> >
> > /* populate entry */
> > - entry->type = type;
> > entry->offset = offset;
> > entry->handle = handle;
> > entry->length = dlen;
> > @@ -437,16 +855,17 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> > ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
> > if (ret == -EEXIST) {
> > zswap_duplicate_entry++;
> > -
> > - /* remove from rbtree */
> > + /* remove from rbtree and lru */
> > rb_erase(&dupentry->rbnode, &tree->rbroot);
> > -
> > - /* free */
> > - zs_free(tree->pool, dupentry->handle);
> > - zswap_entry_cache_free(dupentry);
> > - atomic_dec(&zswap_stored_pages);
> > + if (!list_empty(&dupentry->lru))
> > + list_del_init(&dupentry->lru);
> > + if (!zswap_entry_put(dupentry)) {
> > + /* free */
> > + zswap_free_entry(tree, dupentry);
> > + }
> > }
> > } while (ret == -EEXIST);
> > + list_add_tail(&entry->lru, &tree->lru);
> > spin_unlock(&tree->lock);
> >
> > /* update stats */
> > @@ -454,8 +873,11 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> >
> > return 0;
> >
> > -putcpu:
> > - put_cpu_var(zswap_dstmem);
> > +freepage:
> > + if (writeback_attempted)
> > + zswap_tmppage_free(tmppage);
> > + else
> > + put_cpu_var(zswap_dstmem);
> > zswap_entry_cache_free(entry);
> > reject:
> > return ret;
> > @@ -472,10 +894,21 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
> > struct zswap_entry *entry;
> > u8 *src, *dst;
> > unsigned int dlen;
> > + int refcount;
> >
> > /* find */
> > spin_lock(&tree->lock);
> > entry = zswap_rb_search(&tree->rbroot, offset);
> > + if (!entry) {
> > + /* entry was written back */
> > + spin_unlock(&tree->lock);
> > + return -1;
> > + }
> > + zswap_entry_get(entry);
> > +
> > + /* remove from lru */
> > + if (!list_empty(&entry->lru))
> > + list_del_init(&entry->lru);
> > spin_unlock(&tree->lock);
> >
> > /* decompress */
> > @@ -487,6 +920,24 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
> > kunmap_atomic(dst);
> > zs_unmap_object(tree->pool, entry->handle);
> >
> > + spin_lock(&tree->lock);
> > + refcount = zswap_entry_put(entry);
> > + if (likely(refcount)) {
> > + list_add_tail(&entry->lru, &tree->lru);
> > + spin_unlock(&tree->lock);
> > + return 0;
> > + }
> > + spin_unlock(&tree->lock);
> > +
> > + /*
> > + * We don't have to unlink from the rbtree because
> > + * zswap_writeback_entry() or zswap_frontswap_invalidate page()
> > + * has already done this for us if we are the last reference.
> > + */
> > + /* free */
> > +
> > + zswap_free_entry(tree, entry);
> > +
> > return 0;
> > }
> >
> > @@ -495,19 +946,34 @@ static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
> > {
> > struct zswap_tree *tree = zswap_trees[type];
> > struct zswap_entry *entry;
> > + int refcount;
> >
> > /* find */
> > spin_lock(&tree->lock);
> > entry = zswap_rb_search(&tree->rbroot, offset);
> > + if (!entry) {
> > + /* entry was written back */
> > + spin_unlock(&tree->lock);
> > + return;
> > + }
> >
> > - /* remove from rbtree */
> > + /* remove from rbtree and lru */
> > rb_erase(&entry->rbnode, &tree->rbroot);
> > + if (!list_empty(&entry->lru))
> > + list_del_init(&entry->lru);
> > +
> > + /* drop the initial reference from entry creation */
> > + refcount = zswap_entry_put(entry);
> > +
> > spin_unlock(&tree->lock);
> >
> > + if (refcount) {
> > + /* writeback in progress, writeback will free */
> > + return;
> > + }
> > +
>
> I'm not keen on a refcount check, lock drop and then action based on
> the stale read even though I do not see it causing a problem here as
> such. Still, if at all possible I'd prefer to see the usual pattern of an
> atomic_sub_and_test calling a release function like what kref_sub does to
> avoid any potential problems with stale refcount checks in the future.

I'll look again at using kref's here. The reason it isn't a problem here
is that the entry has been remove from both the rb tree and the lru list at
this point. Even once we release the lock, there is no way to gain a reference
to the entry.

>
>
> > /* free */
> > - zs_free(tree->pool, entry->handle);
> > - zswap_entry_cache_free(entry);
> > - atomic_dec(&zswap_stored_pages);
> > + zswap_free_entry(tree, entry);
> > }
> >
> > /* invalidates all pages for the given swap type */
> > @@ -536,8 +1002,10 @@ static void zswap_frontswap_invalidate_area(unsigned type)
> > rb_erase(&entry->rbnode, &tree->rbroot);
> > zs_free(tree->pool, entry->handle);
> > zswap_entry_cache_free(entry);
> > + atomic_dec(&zswap_stored_pages);
> > }
> > tree->rbroot = RB_ROOT;
> > + INIT_LIST_HEAD(&tree->lru);
> > spin_unlock(&tree->lock);
> > }
> >
> > @@ -553,7 +1021,9 @@ static void zswap_frontswap_init(unsigned type)
> > if (!tree->pool)
> > goto freetree;
> > tree->rbroot = RB_ROOT;
> > + INIT_LIST_HEAD(&tree->lru);
> > spin_lock_init(&tree->lock);
> > + tree->type = type;
> > zswap_trees[type] = tree;
> > return;
> >
> > @@ -588,20 +1058,30 @@ static int __init zswap_debugfs_init(void)
> > if (!zswap_debugfs_root)
> > return -ENOMEM;
> >
> > + debugfs_create_u64("saved_by_writeback", S_IRUGO,
> > + zswap_debugfs_root, &zswap_saved_by_writeback);
> > debugfs_create_u64("pool_limit_hit", S_IRUGO,
> > zswap_debugfs_root, &zswap_pool_limit_hit);
> > + debugfs_create_u64("reject_writeback_attempted", S_IRUGO,
> > + zswap_debugfs_root, &zswap_writeback_attempted);
> > + debugfs_create_u64("reject_tmppage_fail", S_IRUGO,
> > + zswap_debugfs_root, &zswap_reject_tmppage_fail);
> > debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
> > zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
> > debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
> > zswap_debugfs_root, &zswap_reject_kmemcache_fail);
> > debugfs_create_u64("reject_compress_poor", S_IRUGO,
> > zswap_debugfs_root, &zswap_reject_compress_poor);
> > + debugfs_create_u64("written_back_pages", S_IRUGO,
> > + zswap_debugfs_root, &zswap_written_back_pages);
> > debugfs_create_u64("duplicate_entry", S_IRUGO,
> > zswap_debugfs_root, &zswap_duplicate_entry);
> > debugfs_create_atomic_t("pool_pages", S_IRUGO,
> > zswap_debugfs_root, &zswap_pool_pages);
> > debugfs_create_atomic_t("stored_pages", S_IRUGO,
> > zswap_debugfs_root, &zswap_stored_pages);
> > + debugfs_create_atomic_t("outstanding_writebacks", S_IRUGO,
> > + zswap_debugfs_root, &zswap_outstanding_writebacks);
> >
> > return 0;
> > }
> > @@ -636,6 +1116,10 @@ static int __init init_zswap(void)
> > pr_err("page pool initialization failed\n");
> > goto pagepoolfail;
> > }
> > + if (zswap_tmppage_pool_create()) {
> > + pr_err("workmem pool initialization failed\n");
> > + goto tmppoolfail;
> > + }
> > if (zswap_comp_init()) {
> > pr_err("compressor initialization failed\n");
> > goto compfail;
> > @@ -651,6 +1135,8 @@ static int __init init_zswap(void)
> > pcpufail:
> > zswap_comp_exit();
> > compfail:
> > + zswap_tmppage_pool_destroy();
> > +tmppoolfail:
> > zswap_page_pool_destroy();
> > pagepoolfail:
> > zswap_entry_cache_destory();
>
> --
> Mel Gorman
> SUSE Labs
>

2013-04-17 17:46:29

by Seth Jennings

[permalink] [raw]
Subject: Re: [PATCHv9 4/8] zswap: add to mm/

On Sun, Apr 14, 2013 at 01:45:06AM +0100, Mel Gorman wrote:
> On Wed, Apr 10, 2013 at 01:18:56PM -0500, Seth Jennings wrote:
> > zswap is a thin compression backend for frontswap. It receives
> > pages from frontswap and attempts to store them in a compressed
> > memory pool, resulting in an effective partial memory reclaim and
> > dramatically reduced swap device I/O.
> >
> > Additionally, in most cases, pages can be retrieved from this
> > compressed store much more quickly than reading from tradition
> > swap devices resulting in faster performance for many workloads.
> >
>
> Except in the case where the zswap pool is externally fragmented, occupies
> its maximum configured size and a workload that would otherwise have fit
> in memory gets pushed to swap.
>
> Yes, it's a corner case but the changelog portrays zswap as an unconditional
> win and while it certainly is going to help some cases, it won't help
> them all.

Yes, and I (and others) are already working on improvements to the allocator to
make this corner case even smaller.

>
> > This patch adds the zswap driver to mm/
> >
> > Signed-off-by: Seth Jennings <[email protected]>
> > ---
> > mm/Kconfig | 15 ++
> > mm/Makefile | 1 +
> > mm/zswap.c | 665 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > 3 files changed, 681 insertions(+)
> > create mode 100644 mm/zswap.c
> >
> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index aa054fc..36d93b0 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -495,3 +495,18 @@ config PGTABLE_MAPPING
> >
> > You can check speed with zsmalloc benchmark[1].
> > [1] https://github.com/spartacus06/zsmalloc
> > +
> > +config ZSWAP
> > + bool "In-kernel swap page compression"
> > + depends on FRONTSWAP && CRYPTO
> > + select CRYPTO_LZO
> > + select ZSMALLOC
> > + default n
> > + help
> > + Zswap is a backend for the frontswap mechanism in the VMM.
> > + It receives pages from frontswap and attempts to store them
> > + in a compressed memory pool, resulting in an effective
> > + partial memory reclaim. In addition, pages and be retrieved
> > + from this compressed store much faster than most tradition
> > + swap devices resulting in reduced I/O and faster performance
> > + for many workloads.
> > diff --git a/mm/Makefile b/mm/Makefile
> > index 0f6ef0a..1e0198f 100644
> > --- a/mm/Makefile
> > +++ b/mm/Makefile
> > @@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o
> > obj-$(CONFIG_BOUNCE) += bounce.o
> > obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o
> > obj-$(CONFIG_FRONTSWAP) += frontswap.o
> > +obj-$(CONFIG_ZSWAP) += zswap.o
> > obj-$(CONFIG_HAS_DMA) += dmapool.o
> > obj-$(CONFIG_HUGETLBFS) += hugetlb.o
> > obj-$(CONFIG_NUMA) += mempolicy.o
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > new file mode 100644
> > index 0000000..db283c4
> > --- /dev/null
> > +++ b/mm/zswap.c
> > @@ -0,0 +1,665 @@
> > +/*
> > + * zswap.c - zswap driver file
> > + *
> > + * zswap is a backend for frontswap that takes pages that are in the
> > + * process of being swapped out and attempts to compress them and store
> > + * them in a RAM-based memory pool. This results in a significant I/O
> > + * reduction on the real swap device and, in the case of a slow swap
> > + * device, can also improve workload performance.
> > + *
> > + * Copyright (C) 2012 Seth Jennings <[email protected]>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License
> > + * as published by the Free Software Foundation; either version 2
> > + * of the License, or (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > + * GNU General Public License for more details.
> > +*/
> > +
> > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> > +
> > +#include <linux/module.h>
> > +#include <linux/cpu.h>
> > +#include <linux/highmem.h>
> > +#include <linux/slab.h>
> > +#include <linux/spinlock.h>
> > +#include <linux/types.h>
> > +#include <linux/atomic.h>
> > +#include <linux/frontswap.h>
> > +#include <linux/rbtree.h>
> > +#include <linux/swap.h>
> > +#include <linux/crypto.h>
> > +#include <linux/mempool.h>
> > +#include <linux/zsmalloc.h>
> > +
> > +/*********************************
> > +* statistics
> > +**********************************/
> > +/* Number of memory pages used by the compressed pool */
> > +static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
> > +/* The number of compressed pages currently stored in zswap */
> > +static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> > +
> > +/*
> > + * The statistics below are not protected from concurrent access for
> > + * performance reasons so they may not be a 100% accurate. However,
> > + * they do provide useful information on roughly how many times a
> > + * certain event is occurring.
> > +*/
> > +static u64 zswap_pool_limit_hit;
> > +static u64 zswap_reject_compress_poor;
> > +static u64 zswap_reject_zsmalloc_fail;
> > +static u64 zswap_reject_kmemcache_fail;
> > +static u64 zswap_duplicate_entry;
> > +
>
> Ok. Initially I thought "vmstat" but it would be overkill in this case
> and the fact zswap can be a module would be a problem.

Agreed.

>
> > +/*********************************
> > +* tunables
> > +**********************************/
> > +/* Enable/disable zswap (disabled by default, fixed at boot for now) */
> > +static bool zswap_enabled;
> > +module_param_named(enabled, zswap_enabled, bool, 0);
> > +
> > +/* Compressor to be used by zswap (fixed at boot for now) */
> > +#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
> > +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
> > +module_param_named(compressor, zswap_compressor, charp, 0);
> > +
> > +/* The maximum percentage of memory that the compressed pool can occupy */
> > +static unsigned int zswap_max_pool_percent = 20;
> > +module_param_named(max_pool_percent,
> > + zswap_max_pool_percent, uint, 0644);
> > +
>
> This has some potentially interesting NUMA characteristics.
> The location of the allocated pages will depend on the process that first
> allocated the page. As the pages can then be used by remote processes,
> there may be increased remote accesses when accessing zswap.
> Furthermore, if zone_reclaim_mode is enabled and allowed to swap it
> could setup a weird situation whereby a process pushes itself fully into
> zswap trying to reclaim local memory and instead pushing itself into
> zswap on the local node.

Yes, there are some NUMA ramifications to flush out here. None of them
completely detrimental I don't think, but sub-optimal yes.

>
> If this is every reported as a problem then a workaround is to always
> allocate zswap pages round-robin between online nodes.

Even the round-robin approach could have a caveat if the nodes are not
identically sized. I guess you could do a weighted round-robin proportional to
node size in that case.

>
> > +/*
> > + * Maximum compression ratio, as as percentage, for an acceptable
>
> s/as as/as a/

Yes.

>
> > + * compressed page. Any pages that do not compress by at least
> > + * this ratio will be rejected.
> > +*/
> > +static unsigned int zswap_max_compression_ratio = 80;
> > +module_param_named(max_compression_ratio,
> > + zswap_max_compression_ratio, uint, 0644);
> > +
> > +/*********************************
> > +* compression functions
> > +**********************************/
> > +/* per-cpu compression transforms */
> > +static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms;
> > +
> > +enum comp_op {
> > + ZSWAP_COMPOP_COMPRESS,
> > + ZSWAP_COMPOP_DECOMPRESS
> > +};
> > +
> > +static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen,
> > + u8 *dst, unsigned int *dlen)
> > +{
> > + struct crypto_comp *tfm;
> > + int ret;
> > +
> > + tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu());
>
> It's always the local CPU so why not get_cpu_var()?

Hmm... I'm not sure :) Let me take another look.

>
> > + switch (op) {
> > + case ZSWAP_COMPOP_COMPRESS:
> > + ret = crypto_comp_compress(tfm, src, slen, dst, dlen);
> > + break;
> > + case ZSWAP_COMPOP_DECOMPRESS:
> > + ret = crypto_comp_decompress(tfm, src, slen, dst, dlen);
> > + break;
> > + default:
> > + ret = -EINVAL;
> > + }
> > +
> > + put_cpu();
> > + return ret;
> > +}
> > +
> > +static int __init zswap_comp_init(void)
> > +{
> > + if (!crypto_has_comp(zswap_compressor, 0, 0)) {
> > + pr_info("%s compressor not available\n", zswap_compressor);
> > + /* fall back to default compressor */
> > + zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
> > + if (!crypto_has_comp(zswap_compressor, 0, 0))
> > + /* can't even load the default compressor */
> > + return -ENODEV;
> > + }
> > + pr_info("using %s compressor\n", zswap_compressor);
> > +
> > + /* alloc percpu transforms */
> > + zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *);
> > + if (!zswap_comp_pcpu_tfms)
> > + return -ENOMEM;
> > + return 0;
> > +}
> > +
> > +static void zswap_comp_exit(void)
> > +{
> > + /* free percpu transforms */
> > + if (zswap_comp_pcpu_tfms)
> > + free_percpu(zswap_comp_pcpu_tfms);
> > +}
> > +
> > +/*********************************
> > +* data structures
> > +**********************************/
> > +struct zswap_entry {
> > + struct rb_node rbnode;
> > + unsigned type;
> > + pgoff_t offset;
> > + unsigned long handle;
> > + unsigned int length;
> > +};
>
> Document that the types and offset are from frontswap and the handle is
> an opaque type from zsmalloc. This indicates that zswap is hard-coded
> against zsmalloc but so far I do not believe I have seen anything that
> forces it to be and allow either zbud or zsmalloc to be pluggable.

Ok.

>
> > +
> > +struct zswap_tree {
> > + struct rb_root rbroot;
> > + spinlock_t lock;
> > + struct zs_pool *pool;
> > +};
> > +
> > +static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
> > +
> > +/*********************************
> > +* zswap entry functions
> > +**********************************/
> > +#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache"
>
> heh, it's only used once and it's not exactly a magic number. Seems
> overkill for a #define

Yes.

>
> > +static struct kmem_cache *zswap_entry_cache;
> > +
> > +static inline int zswap_entry_cache_create(void)
> > +{
>
> No need to declare it inline, compiler will figure it out. Also should
> return bool.

Yes.

>
> > + zswap_entry_cache =
> > + kmem_cache_create(ZSWAP_KMEM_CACHE_NAME,
> > + sizeof(struct zswap_entry), 0, 0, NULL);
> > + return (zswap_entry_cache == NULL);
> > +}
> > +
> > +static inline void zswap_entry_cache_destory(void)
> > +{
> > + kmem_cache_destroy(zswap_entry_cache);
> > +}
> > +
> > +static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> > +{
> > + struct zswap_entry *entry;
> > + entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> > + if (!entry)
> > + return NULL;
>
> No need to check !entry, just return it.

Yes.

>
> > + return entry;
> > +}
> > +
> > +static inline void zswap_entry_cache_free(struct zswap_entry *entry)
> > +{
> > + kmem_cache_free(zswap_entry_cache, entry);
> > +}
> > +
> > +/*********************************
> > +* rbtree functions
> > +**********************************/
> > +static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
> > +{
> > + struct rb_node *node = root->rb_node;
> > + struct zswap_entry *entry;
> > +
> > + while (node) {
> > + entry = rb_entry(node, struct zswap_entry, rbnode);
> > + if (entry->offset > offset)
> > + node = node->rb_left;
> > + else if (entry->offset < offset)
> > + node = node->rb_right;
> > + else
> > + return entry;
> > + }
> > + return NULL;
> > +}
> > +
> > +/*
> > + * In the case that a entry with the same offset is found, it a pointer to
> > + * the existing entry is stored in dupentry and the function returns -EEXIST
> > +*/
> > +static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
> > + struct zswap_entry **dupentry)
> > +{
> > + struct rb_node **link = &root->rb_node, *parent = NULL;
> > + struct zswap_entry *myentry;
> > +
> > + while (*link) {
> > + parent = *link;
> > + myentry = rb_entry(parent, struct zswap_entry, rbnode);
> > + if (myentry->offset > entry->offset)
> > + link = &(*link)->rb_left;
> > + else if (myentry->offset < entry->offset)
> > + link = &(*link)->rb_right;
> > + else {
> > + *dupentry = myentry;
> > + return -EEXIST;
> > + }
> > + }
> > + rb_link_node(&entry->rbnode, parent, link);
> > + rb_insert_color(&entry->rbnode, root);
> > + return 0;
> > +}
> > +
> > +/*********************************
> > +* per-cpu code
> > +**********************************/
> > +static DEFINE_PER_CPU(u8 *, zswap_dstmem);
> > +
>
> That deserves a comment. It per-cpu buffers for compressing or
> decompressing data.

Will do.

>
>
> > +static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
> > +{
> > + struct crypto_comp *tfm;
> > + u8 *dst;
> > +
> > + switch (action) {
> > + case CPU_UP_PREPARE:
> > + tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
> > + if (IS_ERR(tfm)) {
> > + pr_err("can't allocate compressor transform\n");
> > + return NOTIFY_BAD;
> > + }
> > + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
> > + dst = kmalloc(PAGE_SIZE * 2, GFP_KERNEL);
> > + if (!dst) {
> > + pr_err("can't allocate compressor buffer\n");
> > + crypto_free_comp(tfm);
> > + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
> > + return NOTIFY_BAD;
> > + }
> > + per_cpu(zswap_dstmem, cpu) = dst;
> > + break;
> > + case CPU_DEAD:
> > + case CPU_UP_CANCELED:
> > + tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu);
> > + if (tfm) {
> > + crypto_free_comp(tfm);
> > + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
> > + }
> > + dst = per_cpu(zswap_dstmem, cpu);
> > + if (dst) {
> > + kfree(dst);
> > + per_cpu(zswap_dstmem, cpu) = NULL;
> > + }
> > + break;
> > + default:
> > + break;
> > + }
> > + return NOTIFY_OK;
> > +}
> > +
> > +static int zswap_cpu_notifier(struct notifier_block *nb,
> > + unsigned long action, void *pcpu)
> > +{
> > + unsigned long cpu = (unsigned long)pcpu;
> > + return __zswap_cpu_notifier(action, cpu);
> > +}
> > +
> > +static struct notifier_block zswap_cpu_notifier_block = {
> > + .notifier_call = zswap_cpu_notifier
> > +};
> > +
> > +static int zswap_cpu_init(void)
> > +{
> > + unsigned long cpu;
> > +
> > + get_online_cpus();
> > + for_each_online_cpu(cpu)
> > + if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK)
> > + goto cleanup;
> > + register_cpu_notifier(&zswap_cpu_notifier_block);
> > + put_online_cpus();
> > + return 0;
> > +
> > +cleanup:
> > + for_each_online_cpu(cpu)
> > + __zswap_cpu_notifier(CPU_UP_CANCELED, cpu);
> > + put_online_cpus();
> > + return -ENOMEM;
> > +}
> > +
> > +/*********************************
> > +* zsmalloc callbacks
> > +**********************************/
> > +static mempool_t *zswap_page_pool;
> > +
> > +static inline unsigned int zswap_max_pool_pages(void)
> > +{
> > + return zswap_max_pool_percent * totalram_pages / 100;
> > +}
> > +
> > +static inline int zswap_page_pool_create(void)
> > +{
> > + /* TODO: dynamically size mempool */
> > + zswap_page_pool = mempool_create_page_pool(256, 0);
> > + if (!zswap_page_pool)
> > + return -ENOMEM;
> > + return 0;
> > +}
> > +
> > +static inline void zswap_page_pool_destroy(void)
> > +{
> > + mempool_destroy(zswap_page_pool);
> > +}
> > +
> > +static struct page *zswap_alloc_page(gfp_t flags)
> > +{
> > + struct page *page;
> > +
> > + if (atomic_read(&zswap_pool_pages) >= zswap_max_pool_pages()) {
> > + zswap_pool_limit_hit++;
> > + return NULL;
> > + }
> > + page = mempool_alloc(zswap_page_pool, flags);
> > + if (page)
> > + atomic_inc(&zswap_pool_pages);
> > + return page;
> > +}
> > +
> > +static void zswap_free_page(struct page *page)
> > +{
> > + if (!page)
> > + return;
> > + mempool_free(page, zswap_page_pool);
> > + atomic_dec(&zswap_pool_pages);
> > +}
>
> Again I find it odd that the mempool is here instead of within zsmalloc
> itself. It's also not superclear why you used mempool instead of just
> alloc_page/free_page

I am reexamining this approach since the removal of __GFP_NOMEMALLOC from the
mask has bascially done away with the issue of zsmalloc being able to allocate
pages for the pool.

This could potentially be done away with.

>
>
> > +
> > +static struct zs_ops zswap_zs_ops = {
> > + .alloc = zswap_alloc_page,
> > + .free = zswap_free_page
> > +};
> > +
> > +/*********************************
> > +* frontswap hooks
> > +**********************************/
> > +/* attempts to compress and store an single page */
> > +static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> > + struct page *page)
> > +{
> > + struct zswap_tree *tree = zswap_trees[type];
> > + struct zswap_entry *entry, *dupentry;
> > + int ret;
> > + unsigned int dlen = PAGE_SIZE;
> > + unsigned long handle;
> > + char *buf;
> > + u8 *src, *dst;
> > +
> > + if (!tree) {
> > + ret = -ENODEV;
> > + goto reject;
> > + }
> > +
> > + /* allocate entry */
> > + entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > + if (!entry) {
> > + zswap_reject_kmemcache_fail++;
> > + ret = -ENOMEM;
> > + goto reject;
> > + }
> > +
> > + /* compress */
> > + dst = get_cpu_var(zswap_dstmem);
> > + src = kmap_atomic(page);
>
> Why kmap_atomic? We do not appear to be in an atomic context here and
> you've already disabled preempt for the compression op.

To support swapping out highmem pages in 32-bit systems. Is this not needed
for that?

>
> > + ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen);
> > + kunmap_atomic(src);
> > + if (ret) {
> > + ret = -EINVAL;
> > + goto putcpu;
> > + }
> > + if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
> > + zswap_reject_compress_poor++;
> > + ret = -E2BIG;
> > + goto putcpu;
> > + }
> > +
> > + /* store */
> > + handle = zs_malloc(tree->pool, dlen,
> > + __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> > + __GFP_NOWARN);
> > + if (!handle) {
> > + zswap_reject_zsmalloc_fail++;
> > + ret = -ENOMEM;
> > + goto putcpu;
> > + }
> > +
>
> This is an aging inversion problem. Once zswap is full, the newest pages
> are written to swap instead of old zswap pages and freeing up some
> space. This means two things

Regarding the age inversion, this is addressed, except in the case where zswap
is full, by writeback.

>
> 1. Once zswap is full, it's performance degrades immediately to just
> being even worst than traditional swap except we're doing all the
> swapping but have 20% (by default) less physical memory to work with.

zswap can still help in this case but yes, the performance will converge on the
normal off-the-cliff thrashing performance.

>
> 2. zswap is vunerable to a DOS by a process starting, allocating a
> buffer that is RAM + max zswap size to fill zswap, freeing its remaining
> in-core pages and then sleeping forever

I think writeback addresses this issue. If it doesn't let me know.

Additionally, I'm working on code to introduce "periodic writeback" where, if
zpages are not accessed in zswap within a certain time, they are written back
to free up memory. This prevents zswap from occupying memory forever on
systems that use swap in the perferred manner: to contain pages that are
unlikely to swapped back in any time soon.

>
> zswap pages should also be maintained on a LRU with old zswap pages written
> to backing storage when it's full. A logical follow-on then would be that
> the size of the zswap pool can be dynamically shrunk to free physical RAM
> if the refault rate between zswap and normal RAM is low.

I think this is fully addressed by the writeback patch.

>
> > + buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
> > + memcpy(buf, dst, dlen);
> > + zs_unmap_object(tree->pool, handle);
> > + put_cpu_var(zswap_dstmem);
> > +
> > + /* populate entry */
> > + entry->type = type;
> > + entry->offset = offset;
> > + entry->handle = handle;
> > + entry->length = dlen;
> > +
> > + /* map */
> > + spin_lock(&tree->lock);
> > + do {
> > + ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
> > + if (ret == -EEXIST) {
> > + zswap_duplicate_entry++;
> > +
> > + /* remove from rbtree */
> > + rb_erase(&dupentry->rbnode, &tree->rbroot);
> > +
> > + /* free */
> > + zs_free(tree->pool, dupentry->handle);
> > + zswap_entry_cache_free(dupentry);
> > + atomic_dec(&zswap_stored_pages);
> > + }
> > + } while (ret == -EEXIST);
> > + spin_unlock(&tree->lock);
> > +
> > + /* update stats */
> > + atomic_inc(&zswap_stored_pages);
> > +
> > + return 0;
> > +
> > +putcpu:
> > + put_cpu_var(zswap_dstmem);
> > + zswap_entry_cache_free(entry);
> > +reject:
> > + return ret;
> > +}
> > +
> > +/*
> > + * returns 0 if the page was successfully decompressed
> > + * return -1 on entry not found or error
> > +*/
> > +static int zswap_frontswap_load(unsigned type, pgoff_t offset,
> > + struct page *page)
> > +{
> > + struct zswap_tree *tree = zswap_trees[type];
> > + struct zswap_entry *entry;
> > + u8 *src, *dst;
> > + unsigned int dlen;
> > +
> > + /* find */
> > + spin_lock(&tree->lock);
> > + entry = zswap_rb_search(&tree->rbroot, offset);
> > + spin_unlock(&tree->lock);
> > +
> > + /* decompress */
> > + dlen = PAGE_SIZE;
> > + src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
> > + dst = kmap_atomic(page);
> > + zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
> > + dst, &dlen);
> > + kunmap_atomic(dst);
> > + zs_unmap_object(tree->pool, entry->handle);
> > +
> > + return 0;
> > +}
> > +
> > +/* invalidates a single page */
> > +static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
> > +{
> > + struct zswap_tree *tree = zswap_trees[type];
> > + struct zswap_entry *entry;
> > +
> > + /* find */
> > + spin_lock(&tree->lock);
> > + entry = zswap_rb_search(&tree->rbroot, offset);
> > +
> > + /* remove from rbtree */
> > + rb_erase(&entry->rbnode, &tree->rbroot);
> > + spin_unlock(&tree->lock);
> > +
> > + /* free */
> > + zs_free(tree->pool, entry->handle);
> > + zswap_entry_cache_free(entry);
> > + atomic_dec(&zswap_stored_pages);
> > +}
> > +
> > +/* invalidates all pages for the given swap type */
> > +static void zswap_frontswap_invalidate_area(unsigned type)
> > +{
> > + struct zswap_tree *tree = zswap_trees[type];
> > + struct rb_node *node;
> > + struct zswap_entry *entry;
> > +
> > + if (!tree)
> > + return;
> > +
> > + /* walk the tree and free everything */
> > + spin_lock(&tree->lock);
> > + /*
> > + * TODO: Even though this code should not be executed because
> > + * the try_to_unuse() in swapoff should have emptied the tree,
> > + * it is very wasteful to rebalance the tree after every
> > + * removal when we are freeing the whole tree.
> > + *
> > + * If post-order traversal code is ever added to the rbtree
> > + * implementation, it should be used here.
> > + */
> > + while ((node = rb_first(&tree->rbroot))) {
> > + entry = rb_entry(node, struct zswap_entry, rbnode);
> > + rb_erase(&entry->rbnode, &tree->rbroot);
> > + zs_free(tree->pool, entry->handle);
> > + zswap_entry_cache_free(entry);
> > + }
> > + tree->rbroot = RB_ROOT;
> > + spin_unlock(&tree->lock);
> > +}
> > +
> > +/* NOTE: this is called in atomic context from swapon and must not sleep */
> > +static void zswap_frontswap_init(unsigned type)
> > +{
> > + struct zswap_tree *tree;
> > +
> > + tree = kzalloc(sizeof(struct zswap_tree), GFP_ATOMIC);
> > + if (!tree)
> > + goto err;
> > + tree->pool = zs_create_pool(GFP_NOWAIT, &zswap_zs_ops);
> > + if (!tree->pool)
> > + goto freetree;
> > + tree->rbroot = RB_ROOT;
> > + spin_lock_init(&tree->lock);
> > + zswap_trees[type] = tree;
> > + return;
> > +
>
> Ok I think. I didn't read this as carefully because I assumed that it
> would either work or blow up spectacularly and there was little scope
> for being clever.
>
> > +freetree:
> > + kfree(tree);
> > +err:
> > + pr_err("alloc failed, zswap disabled for swap type %d\n", type);
> > +}
> > +
> > +static struct frontswap_ops zswap_frontswap_ops = {
> > + .store = zswap_frontswap_store,
> > + .load = zswap_frontswap_load,
> > + .invalidate_page = zswap_frontswap_invalidate_page,
> > + .invalidate_area = zswap_frontswap_invalidate_area,
> > + .init = zswap_frontswap_init
> > +};
> > +
> > +/*********************************
> > +* debugfs functions
> > +**********************************/
> > +#ifdef CONFIG_DEBUG_FS
> > +#include <linux/debugfs.h>
> > +
> > +static struct dentry *zswap_debugfs_root;
> > +
> > +static int __init zswap_debugfs_init(void)
> > +{
> > + if (!debugfs_initialized())
> > + return -ENODEV;
> > +
> > + zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
> > + if (!zswap_debugfs_root)
> > + return -ENOMEM;
> > +
> > + debugfs_create_u64("pool_limit_hit", S_IRUGO,
> > + zswap_debugfs_root, &zswap_pool_limit_hit);
> > + debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
> > + zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
> > + debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
> > + zswap_debugfs_root, &zswap_reject_kmemcache_fail);
> > + debugfs_create_u64("reject_compress_poor", S_IRUGO,
> > + zswap_debugfs_root, &zswap_reject_compress_poor);
> > + debugfs_create_u64("duplicate_entry", S_IRUGO,
> > + zswap_debugfs_root, &zswap_duplicate_entry);
> > + debugfs_create_atomic_t("pool_pages", S_IRUGO,
> > + zswap_debugfs_root, &zswap_pool_pages);
> > + debugfs_create_atomic_t("stored_pages", S_IRUGO,
> > + zswap_debugfs_root, &zswap_stored_pages);
> > +
> > + return 0;
> > +}
> > +
> > +static void __exit zswap_debugfs_exit(void)
> > +{
> > + debugfs_remove_recursive(zswap_debugfs_root);
> > +}
> > +#else
> > +static inline int __init zswap_debugfs_init(void)
> > +{
> > + return 0;
> > +}
> > +
> > +static inline void __exit zswap_debugfs_exit(void) { }
> > +#endif
> > +
> > +/*********************************
> > +* module init and exit
> > +**********************************/
> > +static int __init init_zswap(void)
> > +{
> > + if (!zswap_enabled)
> > + return 0;
> > +
> > + pr_info("loading zswap\n");
> > + if (zswap_entry_cache_create()) {
> > + pr_err("entry cache creation failed\n");
> > + goto error;
> > + }
> > + if (zswap_page_pool_create()) {
> > + pr_err("page pool initialization failed\n");
> > + goto pagepoolfail;
> > + }
> > + if (zswap_comp_init()) {
> > + pr_err("compressor initialization failed\n");
> > + goto compfail;
> > + }
> > + if (zswap_cpu_init()) {
> > + pr_err("per-cpu initialization failed\n");
> > + goto pcpufail;
> > + }
> > + frontswap_register_ops(&zswap_frontswap_ops);
> > + if (zswap_debugfs_init())
> > + pr_warn("debugfs initialization failed\n");
> > + return 0;
> > +pcpufail:
> > + zswap_comp_exit();
> > +compfail:
> > + zswap_page_pool_destroy();
> > +pagepoolfail:
> > + zswap_entry_cache_destory();
> > +error:
> > + return -ENOMEM;
> > +}
> > +/* must be late so crypto has time to come up */
> > +late_initcall(init_zswap);
> > +
> > +MODULE_LICENSE("GPL");
> > +MODULE_AUTHOR("Seth Jennings <[email protected]>");
> > +MODULE_DESCRIPTION("Compressed cache for swap pages");
>
> Ok, so there are some problems in there. For me, the zsmalloc fragmentation
> issues are potentially the far scarier problem because unpredictable
> performance characteristics tend to generate really painful bug reports
> with difficult (if not impossible) to replicate problems. Those reports
> are so painful in fact that I'm inclined to dig my heels in and make loud
> noises unless an allocator with predictable performance characteritics
> can also be used (presumably zbud) -- as a comparison point if nothing
> else but also to have as a workaround for performance problems in zsmalloc.

Yes, that allocator continues to be a point of discussion, which I find
unfortunate since it is just a means to an end; that end being compressed swap.
However, it is important to get somewhat right from the beginning to avoid
throwing large amounts of code out of the mm/ later.

>
> It also looks like performance will fall off a cliff when zswap is full
> but at least that's a predictable problem and easily explained to a
> user. An LRU for zswap pages could always be implemented later with
> bonus points if it uses refault rates to judge when the pool can be
> shrunk more agressively to free physical RAM.

Hopefully addressed by writeback patch.

>
> --
> Mel Gorman
> SUSE Labs
>

2013-04-17 18:05:24

by Seth Jennings

[permalink] [raw]
Subject: Re: [PATCHv9 1/8] zsmalloc: add to mm/

On Sun, Apr 14, 2013 at 01:43:22AM +0100, Mel Gorman wrote:
> I no longer remember any of the previous z* discussions, including my
> own review and I was not online as I wrote this. I may repeat myself,
> contradict myself or rehash topics that were visited already and have
> been concluded. If I do any of that then sorry.

Great! That means you'll have the most fresh perspective :)

I very much appreciate you taking your valuable time to understand and review
the code!

>
> On Wed, Apr 10, 2013 at 01:18:53PM -0500, Seth Jennings wrote:
> > <SNIP>
> >
> > Also, zsmalloc allows objects to span page boundaries within the
> > zspage. This allows for lower fragmentation than could be had
> > with the kernel slab allocator for objects between PAGE_SIZE/2
> > and PAGE_SIZE.
>
> Be aware that this reduces *internal* fragmentation but not necessarily
> external fragmentation. If a page portion cannot be freed for some reason
> then the entire page cannot be freed. If it is possible for a page fragment
> to be pinned then it is potentially a serious problem because the zswap
> portion of memory does not necessarily shrink forever. This means that a
> large process exiting that had been pushed to swap may not free any
> physical memory due to fragmentation within zsmalloc which might be a
> big surprise to the OOM killer.

Yes. This has been something Dan mentioned as well. This design element of
zsmalloc was derived from 1) slab design and 2) need to efficiently store large
objects. The kernel SLAB/SLUB allocators have that same issue for, say, a kmem
cache with objects 3k in size. A high-order page allocation is needed to back
the slab and even if there is only one object in the slab none of the pages can
be freed.

But this does point out the desired behavior of both LRU and process locality
in the compressed pool pages so that when a process exits, the invalidating of
the unshared zpages frees entire underlying pages from the compressed pool.

>
> Even assuming though that a page can be forcibly evicted then moving data
> from zswap to disk has two strange effects.
>
> 1. Reclaiming a single page requires an unpredictable amount of
> page frames to be uncompressed and written to swap. Swapout times may
> vary considerably as a result.

Yes, this is less than ideal and stems from the fact that zswap has LRU
knowledge but does not have knowledge of how zpages are arranged in compressed
pool pages. This is something Dan and I have been discussing: 1) should the
job of reclaim be done by the zsmalloc user or zsmalloc itself 2) what are the
API ramifications of each option.

>
> 2. It make cause aging inversions. If an old page fragment and new page
> fragment are co-located then a new page can be written to swap before
> there was an opportunity to refault it.
>
> Both yield unpredictable performance characteristics for zswap.
> zbud conceptually (I can't remember any of the code details) suffers from
> internal fragmentation wastage but it would have more predictable performance
> characterisics. The worst of the fragmentaiton problems may be mitigated
> if a zero-filled page was special cased (if it hasn't already). If the
> compressed page cannot fit into PAGE_SIZE/2 then too bad, dump it to swap.
> It still would suffer from an age inversion but at worst it only affects
> one other swap page so at least it's bound to a known value.
>
> I think I said it before but I worry that testing has seen the ideal
> behaviour for zsmalloc because it is based on kernel compiles which has
> data that compresses easily and processes that are relatively short lived.
>
> I recognise that a lot of work has gone into zsmalloc and that it exists
> for a reason. I'm not going to make it a blocker for merging because frankly
> I'm not familiar enough with zbud to know it actually can be used by zswap
> and my performance characterisic objections have not been proven. However,
> my gut feeling says that the allocators should have had compatible APIs
> or an operations struct with a default to zbud for predictable performance
> characterisics (assuming zbud is not completely broken of course).
>
> Furthermore if any of this is accurate then the limitations of the
> allocator should be described in the changelog (copy and paste this if
> you wish). When/if this gets deployed and a vendor is handed a bug about
> unpredictable performance characteristics of zswap then there is a remote
> chance they learn why.
>
> > With the kernel slab allocator, if a page compresses
> > to 60% of it original size, the memory savings gained through
> > compression is lost in fragmentation because another object of
> > the same size can't be stored in the leftover space.
> >
> > This ability to span pages results in zsmalloc allocations not being
> > directly addressable by the user. The user is given an
> > non-dereferencable handle in response to an allocation request.
> > That handle must be mapped, using zs_map_object(), which returns
> > a pointer to the mapped region that can be used. The mapping is
> > necessary since the object data may reside in two different
> > noncontigious pages.
> >
> > zsmalloc fulfills the allocation needs for zram and zswap.
> >
> > Acked-by: Nitin Gupta <[email protected]>
> > Acked-by: Minchan Kim <[email protected]>
> > Signed-off-by: Seth Jennings <[email protected]>
> > ---
> > include/linux/zsmalloc.h | 56 +++
> > mm/Kconfig | 24 +
> > mm/Makefile | 1 +
> > mm/zsmalloc.c | 1117 ++++++++++++++++++++++++++++++++++++++++++++++
> > 4 files changed, 1198 insertions(+)
> > create mode 100644 include/linux/zsmalloc.h
> > create mode 100644 mm/zsmalloc.c
> >
> > diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> > new file mode 100644
> > index 0000000..398dae3
> > --- /dev/null
> > +++ b/include/linux/zsmalloc.h
> > @@ -0,0 +1,56 @@
> > +/*
> > + * zsmalloc memory allocator
> > + *
> > + * Copyright (C) 2011 Nitin Gupta
> > + *
>
> git blame indicates there are more people than Nitin involved although
> the bulk of the code does appear to be his.
>
> > + * This code is released using a dual license strategy: BSD/GPL
> > + * You can choose the license that better fits your requirements.
> > + *
> > + * Released under the terms of 3-clause BSD License
> > + * Released under the terms of GNU General Public License Version 2.0
> > + */
> > +
> > +#ifndef _ZS_MALLOC_H_
> > +#define _ZS_MALLOC_H_
> > +
> > +#include <linux/types.h>
> > +#include <linux/mm_types.h>
> > +
> > +/*
> > + * zsmalloc mapping modes
> > + *
> > + * NOTE: These only make a difference when a mapped object spans pages.
> > + * They also have no effect when PGTABLE_MAPPING is selected.
> > +*/
> > +enum zs_mapmode {
> > + ZS_MM_RW, /* normal read-write mapping */
> > + ZS_MM_RO, /* read-only (no copy-out at unmap time) */
> > + ZS_MM_WO /* write-only (no copy-in at map time) */
> > + /*
> > + * NOTE: ZS_MM_WO should only be used for initializing new
> > + * (uninitialized) allocations. Partial writes to already
> > + * initialized allocations should use ZS_MM_RW to preserve the
> > + * existing data.
> > + */
> > +};
> > +
> > +struct zs_ops {
> > + struct page * (*alloc)(gfp_t);
> > + void (*free)(struct page *);
> > +};
> > +
>
>
> Hmm, zs_ops deserves a comment! It's quite curious because the zsmalloc
> implies it is an allocator but the user of zsmalloc is expected to allocate
> and free the physical memory. That looks like a layering inversion.

Yes, it might be. It was added so that zswap can enforce the pool limit
accurately. There might be a better way (i.e. a zsmalloc_pool_size()
function).

>
> I suspect the motivation is because only the user of zsmalloc can sensibly
> decide what the pool size should be, particularly if it's dynamically
> sized. If this is the case then a more appropriate callback interface
> may be to inform it when the physical page pool shrinks (instead of free)
> and a request to increase the size of the pool by one page (instead of alloc.

That would work too :)

>
> > +struct zs_pool;
> > +
> > +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops);
> > +void zs_destroy_pool(struct zs_pool *pool);
> > +
> > +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags);
> > +void zs_free(struct zs_pool *pool, unsigned long obj);
> > +
> > +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> > + enum zs_mapmode mm);
> > +void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
> > +
> > +u64 zs_get_total_size_bytes(struct zs_pool *pool);
> > +
> > +#endif
> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index 3bea74f..aa054fc 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -471,3 +471,27 @@ config FRONTSWAP
> > and swap data is stored as normal on the matching swap device.
> >
> > If unsure, say Y to enable frontswap.
> > +
> > +config ZSMALLOC
> > + tristate "Memory allocator for compressed pages"
> > + default n
> > + help
> > + zsmalloc is a slab-based memory allocator designed to store
> > + compressed RAM pages. zsmalloc uses virtual memory mapping
> > + in order to reduce fragmentation. However, this results in a
> > + non-standard allocator interface where a handle, not a pointer, is
> > + returned by an alloc(). This handle must be mapped in order to
> > + access the allocated space.
> > +
> > +config PGTABLE_MAPPING
> > + bool "Use page table mapping to access object in zsmalloc"
> > + depends on ZSMALLOC
> > + help
> > + By default, zsmalloc uses a copy-based object mapping method to
> > + access allocations that span two pages. However, if a particular
> > + architecture (ex, ARM) performs VM mapping faster than copying,
> > + then you should select this. This causes zsmalloc to use page table
> > + mapping rather than copying for object mapping.
> > +
> > + You can check speed with zsmalloc benchmark[1].
> > + [1] https://github.com/spartacus06/zsmalloc
>
> Should PGTABLE_MAPPING be selected by the architecture instead of the
> user configuring the kernel?

This has seen some thrashing lately. I'd perfer it be selected automatically by
arch, but I think that approach had some objections that I'm not recalling at
the time. Minchan might know.

>
> > diff --git a/mm/Makefile b/mm/Makefile
> > index 3a46287..0f6ef0a 100644
> > --- a/mm/Makefile
> > +++ b/mm/Makefile
> > @@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
> > obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
> > obj-$(CONFIG_CLEANCACHE) += cleancache.o
> > obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
> > +obj-$(CONFIG_ZSMALLOC) += zsmalloc.o
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > new file mode 100644
> > index 0000000..adaeee5
> > --- /dev/null
> > +++ b/mm/zsmalloc.c
> > @@ -0,0 +1,1117 @@
> > +/*
> > + * zsmalloc memory allocator
> > + *
> > + * Copyright (C) 2011 Nitin Gupta
> > + *
> > + * This code is released using a dual license strategy: BSD/GPL
> > + * You can choose the license that better fits your requirements.
> > + *
> > + * Released under the terms of 3-clause BSD License
> > + * Released under the terms of GNU General Public License Version 2.0
> > + */
> > +
> > +
> > +/*
> > + * This allocator is designed for use with zcache and zram. Thus, the
> > + * allocator is supposed to work well under low memory conditions. In
> > + * particular, it never attempts higher order page allocation which is
> > + * very likely to fail under memory pressure. On the other hand, if we
> > + * just use single (0-order) pages, it would suffer from very high
> > + * fragmentation -- any object of size PAGE_SIZE/2 or larger would occupy
> > + * an entire page. This was one of the major issues with its predecessor
> > + * (xvmalloc).
> > + *
> > + * To overcome these issues, zsmalloc allocates a bunch of 0-order pages
> > + * and links them together using various 'struct page' fields. These linked
> > + * pages act as a single higher-order page i.e. an object can span 0-order
> > + * page boundaries. The code refers to these linked pages as a single entity
> > + * called zspage.
> > + *
> > + * For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
> > + * since this satisfies the requirements of all its current users (in the
> > + * worst case, page is incompressible and is thus stored "as-is" i.e. in
> > + * uncompressed form). For allocation requests larger than this size, failure
> > + * is returned (see zs_malloc).
> > + *
> > + * Additionally, zs_malloc() does not return a dereferenceable pointer.
> > + * Instead, it returns an opaque handle (unsigned long) which encodes actual
>
> There are places where it's assumed that an unsigned long is an address
> that can be used. It's a nit-pick but it might be worth explicitly declaring
> an opaque type that just happens to be unsigned long.

Ok.

>
> > + * location of the allocated object. The reason for this indirection is that
> > + * zsmalloc does not keep zspages permanently mapped since that would cause
> > + * issues on 32-bit systems where the VA region for kernel space mappings
> > + * is very small. So, before using the allocating memory, the object has to
> > + * be mapped using zs_map_object() to get a usable pointer and subsequently
> > + * unmapped using zs_unmap_object().
> > + *
> > + * Following is how we use various fields and flags of underlying
> > + * struct page(s) to form a zspage.
> > + *
> > + * Usage of struct page fields:
> > + * page->first_page: points to the first component (0-order) page
> > + * page->index (union with page->freelist): offset of the first object
> > + * starting in this page. For the first page, this is
> > + * always 0, so we use this field (aka freelist) to point
> > + * to the first free object in zspage.
> > + * page->lru: links together all component pages (except the first page)
> > + * of a zspage
> > + *
> > + * For _first_ page only:
> > + *
> > + * page->private (union with page->first_page): refers to the
> > + * component page after the first page
> > + * page->freelist: points to the first free object in zspage.
> > + * Free objects are linked together using in-place
> > + * metadata.
> > + * page->lru: links together first pages of various zspages.
> > + * Basically forming list of zspages in a fullness group.
> > + * page->mapping: class index and fullness group of the zspage
> > + *
>
> Heh, that's some packing.

Yes, yes it is.

>
> > + * Usage of struct page flags:
> > + * PG_private: identifies the first component page
> > + * PG_private2: identifies the last component page
> > + *
> > + */
> > +
> > +#include <linux/module.h>
> > +#include <linux/kernel.h>
> > +#include <linux/bitops.h>
> > +#include <linux/errno.h>
> > +#include <linux/highmem.h>
> > +#include <linux/init.h>
> > +#include <linux/string.h>
> > +#include <linux/slab.h>
> > +#include <asm/tlbflush.h>
> > +#include <asm/pgtable.h>
> > +#include <linux/cpumask.h>
> > +#include <linux/cpu.h>
> > +#include <linux/vmalloc.h>
> > +#include <linux/hardirq.h>
> > +#include <linux/spinlock.h>
> > +#include <linux/types.h>
> > +
> > +#include <linux/zsmalloc.h>
> > +
> > +/*
> > + * This must be power of 2 and greater than of equal to sizeof(link_free).
> > + * These two conditions ensure that any 'struct link_free' itself doesn't
> > + * span more than 1 page which avoids complex case of mapping 2 pages simply
> > + * to restore link_free pointer values.
> > + */
> > +#define ZS_ALIGN 8
> > +
> > +/*
> > + * A single 'zspage' is composed of up to 2^N discontiguous 0-order (single)
> > + * pages. ZS_MAX_ZSPAGE_ORDER defines upper limit on N.
> > + */
> > +#define ZS_MAX_ZSPAGE_ORDER 2
> > +#define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
> > +
> > +/*
> > + * Object location (<PFN>, <obj_idx>) is encoded as
> > + * as single (unsigned long) handle value.
> > + *
> > + * Note that object index <obj_idx> is relative to system
> > + * page <PFN> it is stored in, so for each sub-page belonging
> > + * to a zspage, obj_idx starts with 0.
> > + *
> > + * This is made more complicated by various memory models and PAE.
> > + */
> > +
> > +#ifndef MAX_PHYSMEM_BITS
> > +#ifdef CONFIG_HIGHMEM64G
> > +#define MAX_PHYSMEM_BITS 36
> > +#else /* !CONFIG_HIGHMEM64G */
> > +/*
> > + * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
> > + * be PAGE_SHIFT
> > + */
> > +#define MAX_PHYSMEM_BITS BITS_PER_LONG
> > +#endif
> > +#endif
> > +#define _PFN_BITS (MAX_PHYSMEM_BITS - PAGE_SHIFT)
> > +#define OBJ_INDEX_BITS (BITS_PER_LONG - _PFN_BITS)
> > +#define OBJ_INDEX_MASK ((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
> > +
> > +#define MAX(a, b) ((a) >= (b) ? (a) : (b))
> > +/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
> > +#define ZS_MIN_ALLOC_SIZE \
> > + MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
> > +#define ZS_MAX_ALLOC_SIZE PAGE_SIZE
> > +
> > +/*
> > + * On systems with 4K page size, this gives 254 size classes! There is a
> > + * trader-off here:
> > + * - Large number of size classes is potentially wasteful as free page are
> > + * spread across these classes
> > + * - Small number of size classes causes large internal fragmentation
> > + * - Probably its better to use specific size classes (empirically
> > + * determined). NOTE: all those class sizes must be set as multiple of
> > + * ZS_ALIGN to make sure link_free itself never has to span 2 pages.
> > + *
> > + * ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
> > + * (reason above)
> > + */
> > +#define ZS_SIZE_CLASS_DELTA (PAGE_SIZE >> 8)
> > +#define ZS_SIZE_CLASSES ((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
> > + ZS_SIZE_CLASS_DELTA + 1)
> > +
> > +/*
> > + * We do not maintain any list for completely empty or full pages
> > + */
> > +enum fullness_group {
> > + ZS_ALMOST_FULL,
> > + ZS_ALMOST_EMPTY,
> > + _ZS_NR_FULLNESS_GROUPS,
> > +
> > + ZS_EMPTY,
> > + ZS_FULL
> > +};
>
> Ok, I see that you then use the fullness class to try and pack new
> allocations into "almost full" zspages. This could mean that a zspage
> spans an unpredictable number of physical pages but no idea if that's a
> problem or not.
>
> > +
> > +/*
> > + * We assign a page to ZS_ALMOST_EMPTY fullness group when:
> > + * n <= N / f, where
> > + * n = number of allocated objects
> > + * N = total number of objects zspage can store
> > + * f = 1/fullness_threshold_frac
> > + *
> > + * Similarly, we assign zspage to:
> > + * ZS_ALMOST_FULL when n > N / f
> > + * ZS_EMPTY when n == 0
> > + * ZS_FULL when n == N
> > + *
> > + * (see: fix_fullness_group())
> > + */
> > +static const int fullness_threshold_frac = 4;
> > +
> > +struct size_class {
> > + /*
> > + * Size of objects stored in this class. Must be multiple
> > + * of ZS_ALIGN.
> > + */
> > + int size;
> > + unsigned int index;
> > +
>
> You can drop index and use a lookup that calculates it as
>
> index = size_class - zs_pool->size_class;

Clever :)

>
> > + /* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
> > + int pages_per_zspage;
> > +
> > + spinlock_t lock;
> > +
> > + /* stats */
> > + u64 pages_allocated;
> > +
> > + struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
>
> The fact that you don't track full pages is curious. It may imply that
> it's not possible to forcibly reclaim a full zspage or maybe it's just
> not implemented.

Yes, that is the reason we don't track them. If evict functionally were added
to zsmalloc, then we'd have to start tracking them.

>
> I initially worrised that ZS_EMPTY pages leaked but it looks like such
> pages are always freed
>
> > +};
> > +
> > +/*
> > + * Placed within free objects to form a singly linked list.
> > + * For every zspage, first_page->freelist gives head of this list.
> > + *
> > + * This must be power of 2 and less than or equal to ZS_ALIGN
> > + */
> > +struct link_free {
> > + /* Handle of next free chunk (encodes <PFN, obj_idx>) */
> > + void *next;
> > +};
> > +
> > +struct zs_pool {
> > + struct size_class size_class[ZS_SIZE_CLASSES];
> > +
> > + struct zs_ops *ops;
> > +};
> > +
> > +/*
> > + * A zspage's class index and fullness group
> > + * are encoded in its (first)page->mapping
> > + */
> > +#define CLASS_IDX_BITS 28
> > +#define FULLNESS_BITS 4
> > +#define CLASS_IDX_MASK ((1 << CLASS_IDX_BITS) - 1)
> > +#define FULLNESS_MASK ((1 << FULLNESS_BITS) - 1)
> > +
> > +struct mapping_area {
> > +#ifdef CONFIG_PGTABLE_MAPPING
> > + struct vm_struct *vm; /* vm area for mapping object that span pages */
> > +#else
> > + char *vm_buf; /* copy buffer for objects that span pages */
> > +#endif
> > + char *vm_addr; /* address of kmap_atomic()'ed pages */
> > + enum zs_mapmode vm_mm; /* mapping mode */
> > +};
> > +
> > +/* default page alloc/free ops */
> > +struct page *zs_alloc_page(gfp_t flags)
> > +{
> > + return alloc_page(flags);
> > +}
> > +
> > +void zs_free_page(struct page *page)
> > +{
> > + __free_page(page);
> > +}
> > +
> > +struct zs_ops zs_default_ops = {
> > + .alloc = zs_alloc_page,
> > + .free = zs_free_page
> > +};
> > +
> > +/* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
> > +static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
> > +
> > +static int is_first_page(struct page *page)
> > +{
> > + return PagePrivate(page);
> > +}
> > +
> > +static int is_last_page(struct page *page)
> > +{
> > + return PagePrivate2(page);
> > +}
> > +
> > +static void get_zspage_mapping(struct page *page, unsigned int *class_idx,
> > + enum fullness_group *fullness)
> > +{
> > + unsigned long m;
> > + BUG_ON(!is_first_page(page));
> > +
> > + m = (unsigned long)page->mapping;
> > + *fullness = m & FULLNESS_MASK;
> > + *class_idx = (m >> FULLNESS_BITS) & CLASS_IDX_MASK;
> > +}
> > +
> > +static void set_zspage_mapping(struct page *page, unsigned int class_idx,
> > + enum fullness_group fullness)
> > +{
> > + unsigned long m;
> > + BUG_ON(!is_first_page(page));
> > +
> > + m = ((class_idx & CLASS_IDX_MASK) << FULLNESS_BITS) |
> > + (fullness & FULLNESS_MASK);
> > + page->mapping = (struct address_space *)m;
> > +}
> > +
> > +/*
> > + * zsmalloc divides the pool into various size classes where each
> > + * class maintains a list of zspages where each zspage is divided
> > + * into equal sized chunks. Each allocation falls into one of these
> > + * classes depending on its size. This function returns index of the
> > + * size class which has chunk size big enough to hold the give size.
> > + */
> > +static int get_size_class_index(int size)
> > +{
> > + int idx = 0;
> > +
> > + if (likely(size > ZS_MIN_ALLOC_SIZE))
> > + idx = DIV_ROUND_UP(size - ZS_MIN_ALLOC_SIZE,
> > + ZS_SIZE_CLASS_DELTA);
> > +
> > + return idx;
> > +}
> > +
> > +/*
> > + * For each size class, zspages are divided into different groups
> > + * depending on how "full" they are. This was done so that we could
> > + * easily find empty or nearly empty zspages when we try to shrink
> > + * the pool (not yet implemented). This function returns fullness
> > + * status of the given page.
> > + */
>
> We can't forcibly shrink this thing? I'll be curious to see what happens
> when zswap is full then.
>
> > +static enum fullness_group get_fullness_group(struct page *page,
> > + struct size_class *class)
> > +{
> > + int inuse, max_objects;
> > + enum fullness_group fg;
> > + BUG_ON(!is_first_page(page));
> > +
> > + inuse = page->inuse;
> > + max_objects = class->pages_per_zspage * PAGE_SIZE / class->size;
> > +
>
> As class->size must be a multiple of ZS_ALIGN which is a power-of-two
> then this calculation could be done as bit shifts if class->size was a
> shift instead of a size. Not that important.

Would be cleaner that way.

>
> > + if (inuse == 0)
> > + fg = ZS_EMPTY;
> > + else if (inuse == max_objects)
> > + fg = ZS_FULL;
> > + else if (inuse <= max_objects / fullness_threshold_frac)
> > + fg = ZS_ALMOST_EMPTY;
> > + else
> > + fg = ZS_ALMOST_FULL;
> > +
> > + return fg;
> > +}
> > +
> > +/*
> > + * Each size class maintains various freelists and zspages are assigned
> > + * to one of these freelists based on the number of live objects they
> > + * have. This functions inserts the given zspage into the freelist
> > + * identified by <class, fullness_group>.
> > + */
> > +static void insert_zspage(struct page *page, struct size_class *class,
> > + enum fullness_group fullness)
> > +{
> > + struct page **head;
> > +
> > + BUG_ON(!is_first_page(page));
> > +
> > + if (fullness >= _ZS_NR_FULLNESS_GROUPS)
> > + return;
> > +
> > + head = &class->fullness_list[fullness];
> > + if (*head)
> > + list_add_tail(&page->lru, &(*head)->lru);
> > +
> > + *head = page;
> > +}
> > +
> > +/*
> > + * This function removes the given zspage from the freelist identified
> > + * by <class, fullness_group>.
> > + */
> > +static void remove_zspage(struct page *page, struct size_class *class,
> > + enum fullness_group fullness)
> > +{
> > + struct page **head;
> > +
> > + BUG_ON(!is_first_page(page));
> > +
> > + if (fullness >= _ZS_NR_FULLNESS_GROUPS)
> > + return;
> > +
> > + head = &class->fullness_list[fullness];
> > + BUG_ON(!*head);
> > + if (list_empty(&(*head)->lru))
> > + *head = NULL;
> > + else if (*head == page)
> > + *head = (struct page *)list_entry((*head)->lru.next,
> > + struct page, lru);
> > +
> > + list_del_init(&page->lru);
> > +}
> > +
> > +/*
> > + * Each size class maintains zspages in different fullness groups depending
> > + * on the number of live objects they contain. When allocating or freeing
> > + * objects, the fullness status of the page can change, say, from ALMOST_FULL
> > + * to ALMOST_EMPTY when freeing an object. This function checks if such
> > + * a status change has occurred for the given page and accordingly moves the
> > + * page from the freelist of the old fullness group to that of the new
> > + * fullness group.
> > + */
> > +static enum fullness_group fix_fullness_group(struct zs_pool *pool,
> > + struct page *page)
> > +{
> > + int class_idx;
> > + struct size_class *class;
> > + enum fullness_group currfg, newfg;
> > +
> > + BUG_ON(!is_first_page(page));
> > +
> > + get_zspage_mapping(page, &class_idx, &currfg);
> > + class = &pool->size_class[class_idx];
> > + newfg = get_fullness_group(page, class);
> > + if (newfg == currfg)
> > + goto out;
> > +
> > + remove_zspage(page, class, currfg);
> > + insert_zspage(page, class, newfg);
> > + set_zspage_mapping(page, class_idx, newfg);
> > +
> > +out:
> > + return newfg;
> > +}
> > +
> > +/*
> > + * We have to decide on how many pages to link together
> > + * to form a zspage for each size class. This is important
> > + * to reduce wastage due to unusable space left at end of
> > + * each zspage which is given as:
> > + * wastage = Zp - Zp % size_class
> > + * where Zp = zspage size = k * PAGE_SIZE where k = 1, 2, ...
> > + *
> > + * For example, for size class of 3/8 * PAGE_SIZE, we should
> > + * link together 3 PAGE_SIZE sized pages to form a zspage
> > + * since then we can perfectly fit in 8 such objects.
> > + */
> > +static int get_pages_per_zspage(int class_size)
> > +{
> > + int i, max_usedpc = 0;
> > + /* zspage order which gives maximum used size per KB */
> > + int max_usedpc_order = 1;
> > +
> > + for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
> > + int zspage_size;
> > + int waste, usedpc;
> > +
> > + zspage_size = i * PAGE_SIZE;
> > + waste = zspage_size % class_size;
> > + usedpc = (zspage_size - waste) * 100 / zspage_size;
> > +
> > + if (usedpc > max_usedpc) {
> > + max_usedpc = usedpc;
> > + max_usedpc_order = i;
> > + }
> > + }
> > +
> > + return max_usedpc_order;
> > +}
> > +
> > +/*
> > + * A single 'zspage' is composed of many system pages which are
> > + * linked together using fields in struct page. This function finds
> > + * the first/head page, given any component page of a zspage.
> > + */
> > +static struct page *get_first_page(struct page *page)
> > +{
> > + if (is_first_page(page))
> > + return page;
> > + else
> > + return page->first_page;
> > +}
> > +
> > +static struct page *get_next_page(struct page *page)
> > +{
> > + struct page *next;
> > +
> > + if (is_last_page(page))
> > + next = NULL;
> > + else if (is_first_page(page))
> > + next = (struct page *)page->private;
> > + else
> > + next = list_entry(page->lru.next, struct page, lru);
> > +
> > + return next;
> > +}
> > +
> > +/* Encode <page, obj_idx> as a single handle value */
> > +static void *obj_location_to_handle(struct page *page, unsigned long obj_idx)
> > +{
> > + unsigned long handle;
> > +
> > + if (!page) {
> > + BUG_ON(obj_idx);
> > + return NULL;
> > + }
> > +
> > + handle = page_to_pfn(page) << OBJ_INDEX_BITS;
> > + handle |= (obj_idx & OBJ_INDEX_MASK);
> > +
> > + return (void *)handle;
> > +}
> > +
> > +/* Decode <page, obj_idx> pair from the given object handle */
> > +static void obj_handle_to_location(unsigned long handle, struct page **page,
> > + unsigned long *obj_idx)
> > +{
> > + *page = pfn_to_page(handle >> OBJ_INDEX_BITS);
> > + *obj_idx = handle & OBJ_INDEX_MASK;
> > +}
> > +
> > +static unsigned long obj_idx_to_offset(struct page *page,
> > + unsigned long obj_idx, int class_size)
> > +{
> > + unsigned long off = 0;
> > +
> > + if (!is_first_page(page))
> > + off = page->index;
> > +
> > + return off + obj_idx * class_size;
> > +}
> > +
> > +static void reset_page(struct page *page)
> > +{
> > + clear_bit(PG_private, &page->flags);
> > + clear_bit(PG_private_2, &page->flags);
> > + set_page_private(page, 0);
> > + page->mapping = NULL;
> > + page->freelist = NULL;
> > + page_mapcount_reset(page);
> > +}
> > +
> > +static void free_zspage(struct zs_ops *ops, struct page *first_page)
> > +{
> > + struct page *nextp, *tmp, *head_extra;
> > +
> > + BUG_ON(!is_first_page(first_page));
> > + BUG_ON(first_page->inuse);
> > +
> > + head_extra = (struct page *)page_private(first_page);
> > +
> > + reset_page(first_page);
> > + ops->free(first_page);
> > +
> > + /* zspage with only 1 system page */
> > + if (!head_extra)
> > + return;
> > +
> > + list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) {
> > + list_del(&nextp->lru);
> > + reset_page(nextp);
> > + ops->free(nextp);
> > + }
> > + reset_page(head_extra);
> > + ops->free(head_extra);
> > +}
> > +
> > +/* Initialize a newly allocated zspage */
> > +static void init_zspage(struct page *first_page, struct size_class *class)
> > +{
> > + unsigned long off = 0;
> > + struct page *page = first_page;
> > +
> > + BUG_ON(!is_first_page(first_page));
> > + while (page) {
> > + struct page *next_page;
> > + struct link_free *link;
> > + unsigned int i, objs_on_page;
> > +
> > + /*
> > + * page->index stores offset of first object starting
> > + * in the page. For the first page, this is always 0,
> > + * so we use first_page->index (aka ->freelist) to store
> > + * head of corresponding zspage's freelist.
> > + */
> > + if (page != first_page)
> > + page->index = off;
> > +
> > + link = (struct link_free *)kmap_atomic(page) +
> > + off / sizeof(*link);
> > + objs_on_page = (PAGE_SIZE - off) / class->size;
> > +
> > + for (i = 1; i <= objs_on_page; i++) {
> > + off += class->size;
> > + if (off < PAGE_SIZE) {
> > + link->next = obj_location_to_handle(page, i);
> > + link += class->size / sizeof(*link);
> > + }
> > + }
> > +
> > + /*
> > + * We now come to the last (full or partial) object on this
> > + * page, which must point to the first object on the next
> > + * page (if present)
> > + */
> > + next_page = get_next_page(page);
> > + link->next = obj_location_to_handle(next_page, 0);
> > + kunmap_atomic(link);
> > + page = next_page;
> > + off = (off + class->size) % PAGE_SIZE;
> > + }
> > +}
> > +
> > +/*
> > + * Allocate a zspage for the given size class
> > + */
> > +static struct page *alloc_zspage(struct zs_ops *ops, struct size_class *class,
> > + gfp_t flags)
> > +{
> > + int i, error;
> > + struct page *first_page = NULL, *uninitialized_var(prev_page);
> > +
> > + /*
> > + * Allocate individual pages and link them together as:
> > + * 1. first page->private = first sub-page
> > + * 2. all sub-pages are linked together using page->lru
> > + * 3. each sub-page is linked to the first page using page->first_page
> > + *
> > + * For each size class, First/Head pages are linked together using
> > + * page->lru. Also, we set PG_private to identify the first page
> > + * (i.e. no other sub-page has this flag set) and PG_private_2 to
> > + * identify the last page.
> > + */
> > + error = -ENOMEM;
> > + for (i = 0; i < class->pages_per_zspage; i++) {
> > + struct page *page;
> > +
> > + page = ops->alloc(flags);
> > + if (!page)
> > + goto cleanup;
> > +
>
> After this point, error is guaranteed to be set to 0. It looks like the
> free_zspage code at cleanup: is necessary. If we goto cleanup from here,
> first_page is NULL and after here error is always 0.

I have to say, I'm not following here. error is init'ed in -ENOMEM before the
for loop and is only set to 0 outside the look if each page allocation succeed.

I'm not clear on what optimization you are suggesting.

>
> > + INIT_LIST_HEAD(&page->lru);
> > + if (i == 0) { /* first page */
> > + SetPagePrivate(page);
> > + set_page_private(page, 0);
> > + first_page = page;
> > + first_page->inuse = 0;
> > + }
> > + if (i == 1)
> > + first_page->private = (unsigned long)page;
> > + if (i >= 1)
> > + page->first_page = first_page;
> > + if (i >= 2)
> > + list_add(&page->lru, &prev_page->lru);
> > + if (i == class->pages_per_zspage - 1) /* last page */
> > + SetPagePrivate2(page);
> > + prev_page = page;
> > + }
> > +
> > + init_zspage(first_page, class);
> > +
> > + first_page->freelist = obj_location_to_handle(first_page, 0);
> > +
> > + error = 0; /* Success */
> > +
> > +cleanup:
> > + if (unlikely(error) && first_page) {
> > + free_zspage(ops, first_page);
> > + first_page = NULL;
> > + }
> > +
> > + return first_page;
> > +}
> > +
> > +static struct page *find_get_zspage(struct size_class *class)
> > +{
> > + int i;
> > + struct page *page;
> > +
> > + for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
> > + page = class->fullness_list[i];
> > + if (page)
> > + break;
> > + }
> > +
> > + return page;
> > +}
> > +
>
> Just an observation but the locking around this is for the entire size
> class and not for a given zspage. However, I also doubt that this lock
> is a heavily contended one. I'd expect any contention to be negligible
> in comparison to the cost of compressing a page.

Yes, the locking is coarse, but in practice, contention at higher levels in the
swap subsystem make this a non-issue (for better or worse).

>
> > +#ifdef CONFIG_PGTABLE_MAPPING
> > +static inline int __zs_cpu_up(struct mapping_area *area)
> > +{
> > + /*
> > + * Make sure we don't leak memory if a cpu UP notification
> > + * and zs_init() race and both call zs_cpu_up() on the same cpu
> > + */
> > + if (area->vm)
> > + return 0;
> > + area->vm = alloc_vm_area(PAGE_SIZE * 2, NULL);
> > + if (!area->vm)
> > + return -ENOMEM;
> > + return 0;
> > +}
> > +
> > +static inline void __zs_cpu_down(struct mapping_area *area)
> > +{
> > + if (area->vm)
> > + free_vm_area(area->vm);
> > + area->vm = NULL;
> > +}
> > +
> > +static inline void *__zs_map_object(struct mapping_area *area,
> > + struct page *pages[2], int off, int size)
> > +{
> > + BUG_ON(map_vm_area(area->vm, PAGE_KERNEL, &pages));
> > + area->vm_addr = area->vm->addr;
> > + return area->vm_addr + off;
> > +}
> > +
> > +static inline void __zs_unmap_object(struct mapping_area *area,
> > + struct page *pages[2], int off, int size)
> > +{
> > + unsigned long addr = (unsigned long)area->vm_addr;
> > + unsigned long end = addr + (PAGE_SIZE * 2);
> > +
> > + flush_cache_vunmap(addr, end);
> > + unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> > + flush_tlb_kernel_range(addr, end);
> > +}
> > +
> > +#else /* CONFIG_PGTABLE_MAPPING*/
> > +
> > +static inline int __zs_cpu_up(struct mapping_area *area)
> > +{
> > + /*
> > + * Make sure we don't leak memory if a cpu UP notification
> > + * and zs_init() race and both call zs_cpu_up() on the same cpu
> > + */
> > + if (area->vm_buf)
> > + return 0;
> > + area->vm_buf = (char *)__get_free_page(GFP_KERNEL);
> > + if (!area->vm_buf)
> > + return -ENOMEM;
> > + return 0;
> > +}
> > +
> > +static inline void __zs_cpu_down(struct mapping_area *area)
> > +{
> > + if (area->vm_buf)
> > + free_page((unsigned long)area->vm_buf);
> > + area->vm_buf = NULL;
> > +}
> > +
> > +static void *__zs_map_object(struct mapping_area *area,
> > + struct page *pages[2], int off, int size)
> > +{
> > + int sizes[2];
> > + void *addr;
> > + char *buf = area->vm_buf;
> > +
> > + /* disable page faults to match kmap_atomic() return conditions */
> > + pagefault_disable();
> > +
> > + /* no read fastpath */
> > + if (area->vm_mm == ZS_MM_WO)
> > + goto out;
> > +
> > + sizes[0] = PAGE_SIZE - off;
> > + sizes[1] = size - sizes[0];
> > +
> > + /* copy object to per-cpu buffer */
> > + addr = kmap_atomic(pages[0]);
> > + memcpy(buf, addr + off, sizes[0]);
> > + kunmap_atomic(addr);
> > + addr = kmap_atomic(pages[1]);
> > + memcpy(buf + sizes[0], addr, sizes[1]);
> > + kunmap_atomic(addr);
> > +out:
> > + return area->vm_buf;
> > +}
> > +
> > +static void __zs_unmap_object(struct mapping_area *area,
> > + struct page *pages[2], int off, int size)
> > +{
> > + int sizes[2];
> > + void *addr;
> > + char *buf = area->vm_buf;
> > +
> > + /* no write fastpath */
> > + if (area->vm_mm == ZS_MM_RO)
> > + goto out;
> > +
> > + sizes[0] = PAGE_SIZE - off;
> > + sizes[1] = size - sizes[0];
> > +
> > + /* copy per-cpu buffer to object */
> > + addr = kmap_atomic(pages[0]);
> > + memcpy(addr + off, buf, sizes[0]);
> > + kunmap_atomic(addr);
> > + addr = kmap_atomic(pages[1]);
> > + memcpy(addr, buf + sizes[0], sizes[1]);
> > + kunmap_atomic(addr);
> > +
> > +out:
> > + /* enable page faults to match kunmap_atomic() return conditions */
> > + pagefault_enable();
> > +}
> > +
> > +#endif /* CONFIG_PGTABLE_MAPPING */
> > +
> > +static int zs_cpu_notifier(struct notifier_block *nb, unsigned long action,
> > + void *pcpu)
> > +{
> > + int ret, cpu = (long)pcpu;
> > + struct mapping_area *area;
> > +
> > + switch (action) {
> > + case CPU_UP_PREPARE:
> > + area = &per_cpu(zs_map_area, cpu);
> > + ret = __zs_cpu_up(area);
> > + if (ret)
> > + return notifier_from_errno(ret);
> > + break;
> > + case CPU_DEAD:
> > + case CPU_UP_CANCELED:
> > + area = &per_cpu(zs_map_area, cpu);
> > + __zs_cpu_down(area);
> > + break;
> > + }
> > +
> > + return NOTIFY_OK;
> > +}
> > +
> > +static struct notifier_block zs_cpu_nb = {
> > + .notifier_call = zs_cpu_notifier
> > +};
> > +
> > +static void zs_exit(void)
> > +{
> > + int cpu;
> > +
> > + for_each_online_cpu(cpu)
> > + zs_cpu_notifier(NULL, CPU_DEAD, (void *)(long)cpu);
> > + unregister_cpu_notifier(&zs_cpu_nb);
> > +}
> > +
> > +static int zs_init(void)
> > +{
> > + int cpu, ret;
> > +
> > + register_cpu_notifier(&zs_cpu_nb);
> > + for_each_online_cpu(cpu) {
> > + ret = zs_cpu_notifier(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
> > + if (notifier_to_errno(ret))
> > + goto fail;
> > + }
> > + return 0;
> > +fail:
> > + zs_exit();
> > + return notifier_to_errno(ret);
> > +}
> > +
> > +/**
> > + * zs_create_pool - Creates an allocation pool to work from.
> > + * @flags: allocation flags used to allocate pool metadata
> > + * @ops: allocation/free callbacks for expanding the pool
> > + *
> > + * This function must be called before anything when using
> > + * the zsmalloc allocator.
> > + *
> > + * On success, a pointer to the newly created pool is returned,
> > + * otherwise NULL.
> > + */
> > +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops)
> > +{
> > + int i, ovhd_size;
> > + struct zs_pool *pool;
> > +
> > + ovhd_size = roundup(sizeof(*pool), PAGE_SIZE);
> > + pool = kzalloc(ovhd_size, flags);
> > + if (!pool)
> > + return NULL;
> > +
> > + for (i = 0; i < ZS_SIZE_CLASSES; i++) {
> > + int size;
> > + struct size_class *class;
> > +
> > + size = ZS_MIN_ALLOC_SIZE + i * ZS_SIZE_CLASS_DELTA;
> > + if (size > ZS_MAX_ALLOC_SIZE)
> > + size = ZS_MAX_ALLOC_SIZE;
> > +
> > + class = &pool->size_class[i];
> > + class->size = size;
> > + class->index = i;
> > + spin_lock_init(&class->lock);
> > + class->pages_per_zspage = get_pages_per_zspage(size);
> > +
> > + }
> > +
> > + if (ops)
> > + pool->ops = ops;
> > + else
> > + pool->ops = &zs_default_ops;
> > +
> > + return pool;
> > +}
> > +EXPORT_SYMBOL_GPL(zs_create_pool);
> > +
> > +void zs_destroy_pool(struct zs_pool *pool)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ZS_SIZE_CLASSES; i++) {
> > + int fg;
> > + struct size_class *class = &pool->size_class[i];
> > +
> > + for (fg = 0; fg < _ZS_NR_FULLNESS_GROUPS; fg++) {
> > + if (class->fullness_list[fg]) {
> > + pr_info("Freeing non-empty class with size "
> > + "%db, fullness group %d\n",
> > + class->size, fg);
> > + }
> > + }
> > + }
> > + kfree(pool);
> > +}
> > +EXPORT_SYMBOL_GPL(zs_destroy_pool);
> > +
> > +/**
> > + * zs_malloc - Allocate block of given size from pool.
> > + * @pool: pool to allocate from
> > + * @size: size of block to allocate
> > + *
> > + * On success, handle to the allocated object is returned,
> > + * otherwise 0.
> > + * Allocation requests with size > ZS_MAX_ALLOC_SIZE will fail.
> > + */
> > +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags)
> > +{
> > + unsigned long obj;
> > + struct link_free *link;
> > + int class_idx;
> > + struct size_class *class;
> > +
> > + struct page *first_page, *m_page;
> > + unsigned long m_objidx, m_offset;
> > +
> > + if (unlikely(!size || size > ZS_MAX_ALLOC_SIZE))
> > + return 0;
> > +
> > + class_idx = get_size_class_index(size);
> > + class = &pool->size_class[class_idx];
> > + BUG_ON(class_idx != class->index);
> > +
> > + spin_lock(&class->lock);
> > + first_page = find_get_zspage(class);
> > +
> > + if (!first_page) {
> > + spin_unlock(&class->lock);
> > + first_page = alloc_zspage(pool->ops, class, flags);
> > + if (unlikely(!first_page))
> > + return 0;
> > +
> > + set_zspage_mapping(first_page, class->index, ZS_EMPTY);
> > + spin_lock(&class->lock);
> > + class->pages_allocated += class->pages_per_zspage;
> > + }
> > +
> > + obj = (unsigned long)first_page->freelist;
> > + obj_handle_to_location(obj, &m_page, &m_objidx);
> > + m_offset = obj_idx_to_offset(m_page, m_objidx, class->size);
> > +
> > + link = (struct link_free *)kmap_atomic(m_page) +
> > + m_offset / sizeof(*link);
> > + first_page->freelist = link->next;
> > + memset(link, POISON_INUSE, sizeof(*link));
> > + kunmap_atomic(link);
> > +
>
> Pity about the kmap_atomic but I guess it doesn't matter as the lock is
> serialising the entire size class anyway.

Yes. This is really only to support highmem pages. We could #ifdef
CONFIG_64BIT and not disable preemption for 64-bit but that makes for
inconsistent (and confusing) state on return.

>
> > + first_page->inuse++;
> > + /* Now move the zspage to another fullness group, if required */
> > + fix_fullness_group(pool, first_page);
> > + spin_unlock(&class->lock);
> > +
> > + return obj;
> > +}
> > +EXPORT_SYMBOL_GPL(zs_malloc);
> > +
> > +void zs_free(struct zs_pool *pool, unsigned long obj)
> > +{
> > + struct link_free *link;
> > + struct page *first_page, *f_page;
> > + unsigned long f_objidx, f_offset;
> > +
> > + int class_idx;
> > + struct size_class *class;
> > + enum fullness_group fullness;
> > +
> > + if (unlikely(!obj))
> > + return;
> > +
> > + obj_handle_to_location(obj, &f_page, &f_objidx);
> > + first_page = get_first_page(f_page);
> > +
> > + get_zspage_mapping(first_page, &class_idx, &fullness);
> > + class = &pool->size_class[class_idx];
> > + f_offset = obj_idx_to_offset(f_page, f_objidx, class->size);
> > +
> > + spin_lock(&class->lock);
> > +
> > + /* Insert this object in containing zspage's freelist */
> > + link = (struct link_free *)((unsigned char *)kmap_atomic(f_page)
> > + + f_offset);
> > + link->next = first_page->freelist;
> > + kunmap_atomic(link);
> > + first_page->freelist = (void *)obj;
> > +
> > + first_page->inuse--;
> > + fullness = fix_fullness_group(pool, first_page);
> > +
> > + if (fullness == ZS_EMPTY)
> > + class->pages_allocated -= class->pages_per_zspage;
> > +
> > + spin_unlock(&class->lock);
> > +
> > + if (fullness == ZS_EMPTY)
> > + free_zspage(pool->ops, first_page);
> > +}
> > +EXPORT_SYMBOL_GPL(zs_free);
> > +
> > +/**
> > + * zs_map_object - get address of allocated object from handle.
> > + * @pool: pool from which the object was allocated
> > + * @handle: handle returned from zs_malloc
> > + *
> > + * Before using an object allocated from zs_malloc, it must be mapped using
> > + * this function. When done with the object, it must be unmapped using
> > + * zs_unmap_object.
> > + *
> > + * Only one object can be mapped per cpu at a time. There is no protection
> > + * against nested mappings.
> > + *
> > + * This function returns with preemption and page faults disabled.
> > +*/
> > +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> > + enum zs_mapmode mm)
> > +{
> > + struct page *page;
> > + unsigned long obj_idx, off;
> > +
> > + unsigned int class_idx;
> > + enum fullness_group fg;
> > + struct size_class *class;
> > + struct mapping_area *area;
> > + struct page *pages[2];
> > +
> > + BUG_ON(!handle);
> > +
> > + /*
> > + * Because we use per-cpu mapping areas shared among the
> > + * pools/users, we can't allow mapping in interrupt context
> > + * because it can corrupt another users mappings.
> > + */
> > + BUG_ON(in_interrupt());
> > +
> > + obj_handle_to_location(handle, &page, &obj_idx);
> > + get_zspage_mapping(get_first_page(page), &class_idx, &fg);
> > + class = &pool->size_class[class_idx];
> > + off = obj_idx_to_offset(page, obj_idx, class->size);
> > +
> > + area = &get_cpu_var(zs_map_area);
> > + area->vm_mm = mm;
> > + if (off + class->size <= PAGE_SIZE) {
> > + /* this object is contained entirely within a page */
> > + area->vm_addr = kmap_atomic(page);
> > + return area->vm_addr + off;
> > + }
> > +
> > + /* this object spans two pages */
> > + pages[0] = page;
> > + pages[1] = get_next_page(page);
> > + BUG_ON(!pages[1]);
> > +
> > + return __zs_map_object(area, pages, off, class->size);
> > +}
> > +EXPORT_SYMBOL_GPL(zs_map_object);
> > +
> > +void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
> > +{
> > + struct page *page;
> > + unsigned long obj_idx, off;
> > +
> > + unsigned int class_idx;
> > + enum fullness_group fg;
> > + struct size_class *class;
> > + struct mapping_area *area;
> > +
> > + BUG_ON(!handle);
> > +
> > + obj_handle_to_location(handle, &page, &obj_idx);
> > + get_zspage_mapping(get_first_page(page), &class_idx, &fg);
> > + class = &pool->size_class[class_idx];
> > + off = obj_idx_to_offset(page, obj_idx, class->size);
> > +
> > + area = &__get_cpu_var(zs_map_area);
> > + if (off + class->size <= PAGE_SIZE)
> > + kunmap_atomic(area->vm_addr);
> > + else {
> > + struct page *pages[2];
> > +
> > + pages[0] = page;
> > + pages[1] = get_next_page(page);
> > + BUG_ON(!pages[1]);
> > +
> > + __zs_unmap_object(area, pages, off, class->size);
> > + }
> > + put_cpu_var(zs_map_area);
> > +}
> > +EXPORT_SYMBOL_GPL(zs_unmap_object);
> > +
> > +u64 zs_get_total_size_bytes(struct zs_pool *pool)
> > +{
> > + int i;
> > + u64 npages = 0;
> > +
> > + for (i = 0; i < ZS_SIZE_CLASSES; i++)
> > + npages += pool->size_class[i].pages_allocated;
> > +
> > + return npages << PAGE_SHIFT;
> > +}
> > +EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
> > +
> > +module_init(zs_init);
> > +module_exit(zs_exit);
> > +
> > +MODULE_LICENSE("Dual BSD/GPL");
> > +MODULE_AUTHOR("Nitin Gupta <[email protected]>");
> > --
> > 1.8.2.1
> >
>
> --
> Mel Gorman
> SUSE Labs
>
>