Changelog:
v1 -> v2:
* fix bisect issue
* fix an issue in patch "staging: ramster: Provide accessory functions for counter decrease"
* drop patch "staging: zcache: remove zcache_freeze"
* Add Dan's Acked-by
This series fixes bugs in zcache, rips the debug counters out of ramster.c
and sticks them in a debug.c file. It introduces accessory functions for
counter increase/decrease; they are real counters when CONFIG_RAMSTER_DEBUG
is set and empty non-debug stubs otherwise. The debugfs attributes are now
initialized/used via an array, which makes them neater. Dan Magenheimer
confirmed this work is needed:
http://marc.info/?l=linux-mm&m=136535713106882&w=2
A minimal sketch of the resulting accessor pattern follows the patch list
below.

Patches 1-2 fix bugs in zcache.
Patches 3-8 rip the debug counters out of ramster.c and stick them
in a debug.c file.
Patch 9 fixes a coding style issue introduced in the zcache2 cleanups
(s/int/bool + debugfs movement) patchset.
Patch 10 adds a how-to for ramster.
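For reference, a minimal sketch of the accessor pattern the series ends up
with, lifted from the ramster/debug.h changes below (the CONFIG_RAMSTER_DEBUG
guard only takes effect once the "Add RAMSTER_DEBUG Kconfig entry" patch is
applied):

/* from ramster/debug.h; needs <linux/atomic.h> */
#ifdef CONFIG_RAMSTER_DEBUG
extern ssize_t ramster_foreign_pers_pages;
extern ssize_t ramster_foreign_pers_pages_max;
static atomic_t ramster_foreign_pers_pages_atomic = ATOMIC_INIT(0);

static inline void inc_ramster_foreign_pers_pages(void)
{
	/* track both the current value and the high-water mark */
	ramster_foreign_pers_pages = atomic_inc_return(
				&ramster_foreign_pers_pages_atomic);
	if (ramster_foreign_pers_pages > ramster_foreign_pers_pages_max)
		ramster_foreign_pers_pages_max = ramster_foreign_pers_pages;
}
static inline void dec_ramster_foreign_pers_pages(void)
{
	ramster_foreign_pers_pages = atomic_dec_return(
				&ramster_foreign_pers_pages_atomic);
}
#else
/* debug off: callers still compile, the accessors are empty stubs */
static inline void inc_ramster_foreign_pers_pages(void) { }
static inline void dec_ramster_foreign_pers_pages(void) { }
#endif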
Dan Magenheimer (1):
staging: ramster: add how-to for ramster
Wanpeng Li (6):
staging: ramster: decrease foreign pers pages when count < 0
staging: ramster: Move debugfs code out of ramster.c files
staging: ramster/debug: Use an array to initialize/use debugfs attributes
staging: ramster/debug: Add RAMSTER_DEBUG Kconfig entry
staging: ramster: Add incremental accessory counters
staging: zcache/debug: fix coding style
drivers/staging/zcache/Kconfig | 8 +
drivers/staging/zcache/Makefile | 1 +
drivers/staging/zcache/debug.h | 95 ++++++++---
drivers/staging/zcache/ramster/HOWTO.txt | 257 ++++++++++++++++++++++++++++++
drivers/staging/zcache/ramster/debug.c | 66 ++++++++
drivers/staging/zcache/ramster/debug.h | 143 +++++++++++++++++
drivers/staging/zcache/ramster/ramster.c | 148 +++--------------
7 files changed, 573 insertions(+), 145 deletions(-)
create mode 100644 drivers/staging/zcache/ramster/HOWTO.txt
create mode 100644 drivers/staging/zcache/ramster/debug.c
create mode 100644 drivers/staging/zcache/ramster/debug.h
--
1.7.10.4
Commit 9a5c59687ad ("staging: ramster: Provide accessory functions for
counter decrease") forgot to decrease foreign pers pages; this patch fixes
it.
Acked-by: Dan Magenheimer <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
drivers/staging/zcache/ramster/ramster.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/staging/zcache/ramster/ramster.c b/drivers/staging/zcache/ramster/ramster.c
index c3d7f96..444189e 100644
--- a/drivers/staging/zcache/ramster/ramster.c
+++ b/drivers/staging/zcache/ramster/ramster.c
@@ -508,6 +508,7 @@ void ramster_count_foreign_pages(bool eph, int count)
if (count > 0) {
inc_ramster_foreign_pers_pages();
} else {
+ dec_ramster_foreign_pers_pages();
WARN_ON_ONCE(ramster_foreign_pers_pages < 0);
}
}
--
1.7.10.4
Add how-to for ramster.
Acked-by: Dan Magenheimer <[email protected]>
Signed-off-by: Dan Magenheimer <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
drivers/staging/zcache/ramster/HOWTO.txt | 257 ++++++++++++++++++++++++++++++
1 file changed, 257 insertions(+)
create mode 100644 drivers/staging/zcache/ramster/HOWTO.txt
diff --git a/drivers/staging/zcache/ramster/HOWTO.txt b/drivers/staging/zcache/ramster/HOWTO.txt
new file mode 100644
index 0000000..a4ee979
--- /dev/null
+++ b/drivers/staging/zcache/ramster/HOWTO.txt
@@ -0,0 +1,257 @@
+Version: 130309
+ Dan Magenheimer <[email protected]>
+
+This is a how-to document for RAMster. It applies to the March 9, 2013
+version of RAMster, re-merged with the new zcache codebase, built and tested
+on the 3.9 tree and submitted for the staging tree for 3.9.
+
+Note that this document was created from notes taken earlier. I would
+appreciate any feedback from anyone who follows the process as described
+to confirm that it works and to clarify any possible misunderstandings,
+or to report problems.
+
+A. PRELIMINARY
+
+1) Install two or more Linux systems that are known to work when upgraded
+ to a recent upstream Linux kernel version (e.g. v3.9). I used Oracle
+ Linux 6 ("OL6") on two Dell Optiplex 790s. Note that it should be possible
+ to use ocfs2 as a filesystem on your systems but this hasn't been
+ tested thoroughly, so if you do use ocfs2 and run into problems, please
+ report them. Up to eight nodes should work, but not much testing has
+ been done with more than three nodes.
+
+On each system:
+
+2) Configure, build and install then boot Linux (e.g. 3.9), just to ensure it
+ can be done with an unmodified upstream kernel. Confirm you booted
+ the upstream kernel with "uname -a".
+
+3) Install ramster-tools. The src.rpm and an OL6 rpm are available
+ in this directory. I'm not very good at userspace stuff and
+ would welcome any help in turning ramster-tools into more
+ distributable rpms/debs for a wider range of distros.
+
+B. BUILDING RAMSTER INTO THE KERNEL
+
+Do the following on each system:
+
+1) Ensure you have the new codebase for drivers/staging/zcache in your source.
+
+2) Change your .config to have:
+
+ CONFIG_CLEANCACHE=y
+ CONFIG_FRONTSWAP=y
+ CONFIG_STAGING=y
+ CONFIG_ZCACHE=y
+ CONFIG_RAMSTER=y
+
+ You may have to reconfigure your kernel multiple times to ensure
+ all of these are set properly. I use:
+
+ # yes "" | make oldconfig
+
+ and then manually check the .config file to ensure my selections
+ have "taken".
+
+ Do not bother to build the kernel until you are certain all of
+ the above config selections will stick for the build.
+
+3) Build this kernel and "make install" so that you have a new kernel
+ in /etc/grub.conf
+
+4) Add "ramster" to the kernel boot line in /etc/grub.conf.
+
+5) Reboot and check dmesg to ensure there are some messages from ramster
+ and that "ramster_enabled=1" appears.
+
+ # dmesg | grep ramster
+
+ You should also see a lot of files in:
+
+ # ls /sys/kernel/debug/zcache
+ # ls /sys/kernel/debug/ramster
+
+ and a few files in:
+
+ # ls /sys/kernel/mm/ramster
+
+ RAMster now will act as a single-system zcache but doesn't yet
+ know anything about the cluster so can't do anything remotely.
+
+C. BUILDING THE RAMSTER CLUSTER
+
+This is the error-prone part unless you are a clustering expert. We need
+to describe the cluster in the /etc/ramster.conf file, and the init scripts
+that parse it are extremely picky about the syntax.
+
+1) Create the /etc/ramster.conf file and ensure it is identical
+ on both systems. There is a good amount of similar documentation
+ for ocfs2 /etc/cluster.conf that can be googled for this, but I use:
+
+ cluster:
+ name = ramster
+ node_count = 2
+ node:
+ name = system1
+ cluster = ramster
+ number = 0
+ ip_address = my.ip.ad.r1
+ ip_port = 7777
+ node:
+ name = system2
+ cluster = ramster
+            number = 1
+ ip_address = my.ip.ad.r2
+ ip_port = 7777
+
+ You must ensure that the "name" field in the file exactly matches
+ the output of "hostname" on each system. The following assumes
+ you use "ramster" as the name of your cluster.
+
+2) Enable the ramster service and configure it:
+
+ # chkconfig --add ramster
+ # service ramster configure
+
+ Set "load on boot" to "y", cluster to start is "ramster" (or whatever
+ name you chose in ramster.conf), heartbeat dead threshold as "500",
+ network idle timeout as "1000000". Leave the others as default.
+
+4) Reboot. After reboot, try:
+
+ # service ramster status
+
+ You should see "Checking ramster cluster ramster: Online". If you do
+ not, something is wrong and RAMster will not work. Note that you
+ should also see that the driver for "configfs" is loaded and mounted,
+ the driver for ocfs2_dlmfs is not loaded, and some numbers for network
+ parameters. You will also see "Checking ramster heartbeat: Not active".
+ That's all OK.
+
+5) Now you need to start the cluster heartbeat; the cluster is not "up"
+ until all nodes detect a heartbeat. Normally this is done via
+ a cluster filesystem, but you don't have one. Some hack-y
+ code in RAMster can start it for you though if you tell it what
+ nodes are "up". To enable it for nodes 0 and 1, do:
+
+ # echo 0 > /sys/kernel/mm/ramster/manual_node_up
+ # echo 1 > /sys/kernel/mm/ramster/manual_node_up
+
+ This must be done on ALL nodes. I usually put these lines
+ in /etc/rc.local as otherwise I forget. To confirm that
+ the cluster is now up, on both systems do:
+
+ # dmesg | grep ramster
+
+ You should see "Accepted connection" messages in dmesg after this.
+
+6) You must tell each node the node to which it should "remotify" pages.
+ For example if you have a three-node cluster and you want nodes
+ 1 and 2 to be "clients" and node 0 to be the "memory server", then
+ on nodes 1 and 2, you do:
+
+ # echo 0 > /sys/kernel/mm/ramster/remote_target_nodenum
+
+ You should see "ramster: node N set as remotification target"
+ in dmesg. Again, /etc/rc.local is a good place to put this
+ so you don't forget to do it at each boot.
+
+7) One more step: By default, the RAMster code does not "remotify" any
+ pages; this is primarily for testing purposes, but sometimes it is
+ useful. This may change in the future, but for now, you must:
+
+ # echo 1 > /sys/kernel/mm/ramster/pers_remotify_enable
+ # echo 1 > /sys/kernel/mm/ramster/eph_remotify_enable
+
+ The first enables remotifying swap (persistent, aka frontswap) pages,
+ the second enables remotifying of page cache (ephemeral, cleancache)
+ pages.
+
+ These lines can also be put in /etc/rc.local (AFTER the node_up
+ lines), or I often just put them at the beginning of my script that
+ runs a workload.
+
+8) Most testing has been done with both/all machines booted roughly
+ simultaneously. Ideally, you should do this too unless you are
+ trying to break RAMster rather than just use it. ;-)
+
+D. TESTING RAMSTER
+
+1) Note that RAMster has no value unless pages get "remotified". For
+ swap/frontswap/persistent pages, this doesn't happen unless/until
+ the workload would cause swapping to occur, at which point pages
+ are put into frontswap/zcache, and the remotification thread starts
+ working. To get to the point where the system swaps, you either
+ need a workload for which the working set exceeds the RAM in the
+ system; or you need to somehow reduce the amount of RAM one of
+ the system sees. This latter is easy when testing in a VM, but
+   the systems sees. This latter is easy when testing in a VM, but
+ kernel command line restricts memory, but for some values of xxx
+ my kernel fails to boot. I may also try creating a fixed RAMdisk,
+ doing nothing with it, but ensuring that it eats up a fixed
+ amount of RAM.
+2) To see if RAMster is working, on the remote system, I do:
+
+ # watch -d 'cat /sys/kernel/debug/ramster/foreign_*'
+
+ to monitor the number (and max) ephemeral and persistent pages
+ that RAMster has sent. If these stay at 0, RAMster is not working
+ either because the workload isn't creating enough memory pressure
+ or because "remotifying" isn't working. On the system with the
+ workload, you can watch lots of useful information also, but beware
+ that you may be affecting the workload and performance. I use
+ # watch ./watchme
+ where the watchme file contains:
+
+ for i in /sys/kernel/debug/zcache/evicted_buddied_pages \
+ /sys/kernel/debug/zcache/evicted_raw_pages \
+ /sys/kernel/debug/zcache/evicted_unbuddied_pages \
+ /sys/kernel/debug/zcache/zbud_curr_raw_pages \
+ /sys/kernel/debug/zcache/zbud_curr_zbytes \
+ /sys/kernel/debug/zcache/zbud_curr_zpages \
+ /sys/kernel/debug/ramster/eph_pages_remoted \
+ /sys/kernel/debug/ramster/remote_eph_pages_succ_get \
+ /sys/kernel/debug/ramster/remote_pers_pages_succ_get \
+ /sys/kernel/debug/frontswap/succ_puts
+ do
+ echo $i ": " $(cat $i)
+ done
+ And if you have debugfs mounted (as /sys/kernel/debug), you can
+ add to the watchme script some interesting counters in
+ /sys/kernel/debug/cleancache/* and /sys/kernel/debug/frontswap/*
+
+3) In v4, there are known issues in counting certain values. As a result
+ you may see periodic warnings from the kernel. Almost always you
+ will see "ramster: bad accounting for XXX". There are also "WARN_ONCE"
+ messages. If you see kernel warnings with a tombstone, please report
+ them. They are harmless but reflect bugs that need to be eventually fixed.
+
+AUTOMATIC SWAP REPATRIATION
+
+You may notice that while the systems are idle, the foreign persistent
+page count on the remote machine slowly decreases. This is because
+RAMster implements "frontswap selfshrinking": When possible, swap
+pages that have been remotified are slowly repatriated to the local
+machine. This is so that local RAM can be used when possible and
+so that, in case of remote machine crash, the probability of loss
+of data is reduced.
+
+REBOOTING / POWEROFF
+
+If a system is shut down while some of its swap pages still reside
+on a remote system, the system may lock up partially through the shutdown
+sequence. This is because the network is shut down before the
+swap mechanism is shut down. To avoid this annoying problem, simply
+shut off the swap subsystem before starting the shutdown sequence, e.g.:
+
+ # swapoff -a
+ # reboot
+
+
+CHANGELOG:
+v5-120214->120817: updated for merge into new zcache codebase
+v4-120126->v5-120214: updated for V5
+111227->v4-120126: added info on selfshrinking and rebooting
+111227->v4-120126: added more info for tracking RAMster stats
+111227->v4-120126: CONFIG_PREEMPT_NONE no longer necessary
+111227->v4-120126: cleancache now works completely so no need to disable it
--
1.7.10.4
Fix coding style issues: ERROR: space prohibited before that '++' (ctx:WxO),
and lines over 80 characters.
Acked-by: Dan Magenheimer <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
drivers/staging/zcache/debug.h | 95 ++++++++++++++++++++++++++++++++--------
1 file changed, 76 insertions(+), 19 deletions(-)
diff --git a/drivers/staging/zcache/debug.h b/drivers/staging/zcache/debug.h
index ddad92f..8088d28 100644
--- a/drivers/staging/zcache/debug.h
+++ b/drivers/staging/zcache/debug.h
@@ -174,26 +174,83 @@ extern ssize_t zcache_writtenback_pages;
extern ssize_t zcache_outstanding_writeback_pages;
#endif
-static inline void inc_zcache_flush_total(void) { zcache_flush_total ++; };
-static inline void inc_zcache_flush_found(void) { zcache_flush_found ++; };
-static inline void inc_zcache_flobj_total(void) { zcache_flobj_total ++; };
-static inline void inc_zcache_flobj_found(void) { zcache_flobj_found ++; };
-static inline void inc_zcache_failed_eph_puts(void) { zcache_failed_eph_puts ++; };
-static inline void inc_zcache_failed_pers_puts(void) { zcache_failed_pers_puts ++; };
-static inline void inc_zcache_failed_getfreepages(void) { zcache_failed_getfreepages ++; };
-static inline void inc_zcache_failed_alloc(void) { zcache_failed_alloc ++; };
-static inline void inc_zcache_put_to_flush(void) { zcache_put_to_flush ++; };
-static inline void inc_zcache_compress_poor(void) { zcache_compress_poor ++; };
-static inline void inc_zcache_mean_compress_poor(void) { zcache_mean_compress_poor ++; };
-static inline void inc_zcache_eph_ate_tail(void) { zcache_eph_ate_tail ++; };
-static inline void inc_zcache_eph_ate_tail_failed(void) { zcache_eph_ate_tail_failed ++; };
-static inline void inc_zcache_pers_ate_eph(void) { zcache_pers_ate_eph ++; };
-static inline void inc_zcache_pers_ate_eph_failed(void) { zcache_pers_ate_eph_failed ++; };
-static inline void inc_zcache_evicted_eph_zpages(unsigned zpages) { zcache_evicted_eph_zpages += zpages; };
-static inline void inc_zcache_evicted_eph_pageframes(void) { zcache_evicted_eph_pageframes ++; };
+static inline void inc_zcache_flush_total(void)
+{
+ zcache_flush_total++;
+};
+static inline void inc_zcache_flush_found(void)
+{
+ zcache_flush_found++;
+};
+static inline void inc_zcache_flobj_total(void)
+{
+ zcache_flobj_total++;
+};
+static inline void inc_zcache_flobj_found(void)
+{
+ zcache_flobj_found++;
+};
+static inline void inc_zcache_failed_eph_puts(void)
+{
+ zcache_failed_eph_puts++;
+};
+static inline void inc_zcache_failed_pers_puts(void)
+{
+ zcache_failed_pers_puts++;
+};
+static inline void inc_zcache_failed_getfreepages(void)
+{
+ zcache_failed_getfreepages++;
+};
+static inline void inc_zcache_failed_alloc(void)
+{
+ zcache_failed_alloc++;
+};
+static inline void inc_zcache_put_to_flush(void)
+{
+ zcache_put_to_flush++;
+};
+static inline void inc_zcache_compress_poor(void)
+{
+ zcache_compress_poor++;
+};
+static inline void inc_zcache_mean_compress_poor(void)
+{
+ zcache_mean_compress_poor++;
+};
+static inline void inc_zcache_eph_ate_tail(void)
+{
+ zcache_eph_ate_tail++;
+};
+static inline void inc_zcache_eph_ate_tail_failed(void)
+{
+ zcache_eph_ate_tail_failed++;
+};
+static inline void inc_zcache_pers_ate_eph(void)
+{
+ zcache_pers_ate_eph++;
+};
+static inline void inc_zcache_pers_ate_eph_failed(void)
+{
+ zcache_pers_ate_eph_failed++;
+};
+static inline void inc_zcache_evicted_eph_zpages(unsigned zpages)
+{
+ zcache_evicted_eph_zpages += zpages;
+};
+static inline void inc_zcache_evicted_eph_pageframes(void)
+{
+ zcache_evicted_eph_pageframes++;
+};
-static inline void inc_zcache_eph_nonactive_puts_ignored(void) { zcache_eph_nonactive_puts_ignored ++; };
-static inline void inc_zcache_pers_nonactive_puts_ignored(void) { zcache_pers_nonactive_puts_ignored ++; };
+static inline void inc_zcache_eph_nonactive_puts_ignored(void)
+{
+ zcache_eph_nonactive_puts_ignored++;
+};
+static inline void inc_zcache_pers_nonactive_puts_ignored(void)
+{
+ zcache_pers_nonactive_puts_ignored++;
+};
int zcache_debugfs_init(void);
#else
--
1.7.10.4
Use an array to initialize/use the debugfs attributes; this makes them
neater, as zcache/debug.c already does.
Acked-by: Dan Magenheimer <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
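For illustration only (the counter name below is hypothetical, not part of
this patch): with the table in place, adding a counter later needs just its
definition plus one ATTR() line, and the init loop registers it in debugfs
automatically.

ssize_t ramster_eph_pages_remoted;	/* existing counter */
ssize_t ramster_pers_pages_dropped;	/* hypothetical new counter */

#define ATTR(x) { .name = #x, .val = &ramster_##x, }
static struct debug_entry {
	const char *name;
	ssize_t *val;
} attrs[] = {
	ATTR(eph_pages_remoted),	/* existing entries stay as-is */
	ATTR(pers_pages_dropped),	/* the only new line needed */
};
#undef ATTR

/*
 * ramster_debugfs_init() then exposes every entry via:
 *	debugfs_create_size_t(attrs[i].name, S_IRUGO, root, attrs[i].val);
 */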
drivers/staging/zcache/ramster/debug.c | 68 +++++++++++++++-----------------
1 file changed, 32 insertions(+), 36 deletions(-)
diff --git a/drivers/staging/zcache/ramster/debug.c b/drivers/staging/zcache/ramster/debug.c
index 76861e4..bf34133 100644
--- a/drivers/staging/zcache/ramster/debug.c
+++ b/drivers/staging/zcache/ramster/debug.c
@@ -3,8 +3,6 @@
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
-#define zdfs debugfs_create_size_t
-#define zdfs64 debugfs_create_u64
ssize_t ramster_eph_pages_remoted;
ssize_t ramster_pers_pages_remoted;
@@ -20,48 +18,46 @@ ssize_t ramster_remote_object_flushes_failed;
ssize_t ramster_remote_pages_flushed;
ssize_t ramster_remote_page_flushes_failed;
+#define ATTR(x) { .name = #x, .val = &ramster_##x, }
+static struct debug_entry {
+ const char *name;
+ ssize_t *val;
+} attrs[] = {
+ ATTR(eph_pages_remoted),
+ ATTR(pers_pages_remoted),
+ ATTR(eph_pages_remote_failed),
+ ATTR(pers_pages_remote_failed),
+ ATTR(remote_eph_pages_succ_get),
+ ATTR(remote_pers_pages_succ_get),
+ ATTR(remote_eph_pages_unsucc_get),
+ ATTR(remote_pers_pages_unsucc_get),
+ ATTR(pers_pages_remote_nomem),
+ ATTR(remote_objects_flushed),
+ ATTR(remote_pages_flushed),
+ ATTR(remote_object_flushes_failed),
+ ATTR(remote_page_flushes_failed),
+ ATTR(foreign_eph_pages),
+ ATTR(foreign_eph_pages_max),
+ ATTR(foreign_pers_pages),
+ ATTR(foreign_pers_pages_max),
+};
+#undef ATTR
+
int __init ramster_debugfs_init(void)
{
+ int i;
struct dentry *root = debugfs_create_dir("ramster", NULL);
if (root == NULL)
return -ENXIO;
- zdfs("eph_pages_remoted", S_IRUGO, root, &ramster_eph_pages_remoted);
- zdfs("pers_pages_remoted", S_IRUGO, root, &ramster_pers_pages_remoted);
- zdfs("eph_pages_remote_failed", S_IRUGO, root,
- &ramster_eph_pages_remote_failed);
- zdfs("pers_pages_remote_failed", S_IRUGO, root,
- &ramster_pers_pages_remote_failed);
- zdfs("remote_eph_pages_succ_get", S_IRUGO, root,
- &ramster_remote_eph_pages_succ_get);
- zdfs("remote_pers_pages_succ_get", S_IRUGO, root,
- &ramster_remote_pers_pages_succ_get);
- zdfs("remote_eph_pages_unsucc_get", S_IRUGO, root,
- &ramster_remote_eph_pages_unsucc_get);
- zdfs("remote_pers_pages_unsucc_get", S_IRUGO, root,
- &ramster_remote_pers_pages_unsucc_get);
- zdfs("pers_pages_remote_nomem", S_IRUGO, root,
- &ramster_pers_pages_remote_nomem);
- zdfs("remote_objects_flushed", S_IRUGO, root,
- &ramster_remote_objects_flushed);
- zdfs("remote_pages_flushed", S_IRUGO, root,
- &ramster_remote_pages_flushed);
- zdfs("remote_object_flushes_failed", S_IRUGO, root,
- &ramster_remote_object_flushes_failed);
- zdfs("remote_page_flushes_failed", S_IRUGO, root,
- &ramster_remote_page_flushes_failed);
- zdfs("foreign_eph_pages", S_IRUGO, root,
- &ramster_foreign_eph_pages);
- zdfs("foreign_eph_pages_max", S_IRUGO, root,
- &ramster_foreign_eph_pages_max);
- zdfs("foreign_pers_pages", S_IRUGO, root,
- &ramster_foreign_pers_pages);
- zdfs("foreign_pers_pages_max", S_IRUGO, root,
- &ramster_foreign_pers_pages_max);
+ for (i = 0; i < ARRAY_SIZE(attrs); i++)
+ if (!debugfs_create_size_t(attrs[i].name,
+ S_IRUGO, root, attrs[i].val))
+ goto out;
return 0;
+out:
+ return -ENODEV;
}
-#undef zdebugfs
-#undef zdfs64
#else
static inline int ramster_debugfs_init(void)
{
--
1.7.10.4
Note that at this point there is no CONFIG_RAMSTER_DEBUG
option in the Kconfig. So in effect all of the counters
are nop until that option gets re-introduced in:
zcache/ramster/debug: Add RAMSTE_DEBUG Kconfig entry
Acked-by: Dan Magenheimer <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
drivers/staging/zcache/Makefile | 1 +
drivers/staging/zcache/ramster/debug.c | 70 ++++++++++++++++++
drivers/staging/zcache/ramster/debug.h | 76 ++++++++++++++++++++
drivers/staging/zcache/ramster/ramster.c | 115 ++----------------------------
4 files changed, 152 insertions(+), 110 deletions(-)
create mode 100644 drivers/staging/zcache/ramster/debug.c
create mode 100644 drivers/staging/zcache/ramster/debug.h
diff --git a/drivers/staging/zcache/Makefile b/drivers/staging/zcache/Makefile
index 24fd6aa..4956fa0 100644
--- a/drivers/staging/zcache/Makefile
+++ b/drivers/staging/zcache/Makefile
@@ -1,5 +1,6 @@
zcache-y := zcache-main.o tmem.o zbud.o
zcache-$(CONFIG_ZCACHE_DEBUG) += debug.o
+zcache-$(CONFIG_RAMSTER) += ramster/debug.o
zcache-$(CONFIG_RAMSTER) += ramster/ramster.o ramster/r2net.o
zcache-$(CONFIG_RAMSTER) += ramster/nodemanager.o ramster/tcp.o
zcache-$(CONFIG_RAMSTER) += ramster/heartbeat.o ramster/masklog.o
diff --git a/drivers/staging/zcache/ramster/debug.c b/drivers/staging/zcache/ramster/debug.c
new file mode 100644
index 0000000..76861e4
--- /dev/null
+++ b/drivers/staging/zcache/ramster/debug.c
@@ -0,0 +1,70 @@
+#include <linux/atomic.h>
+#include "debug.h"
+
+#ifdef CONFIG_DEBUG_FS
+#include <linux/debugfs.h>
+#define zdfs debugfs_create_size_t
+#define zdfs64 debugfs_create_u64
+
+ssize_t ramster_eph_pages_remoted;
+ssize_t ramster_pers_pages_remoted;
+ssize_t ramster_eph_pages_remote_failed;
+ssize_t ramster_pers_pages_remote_failed;
+ssize_t ramster_remote_eph_pages_succ_get;
+ssize_t ramster_remote_pers_pages_succ_get;
+ssize_t ramster_remote_eph_pages_unsucc_get;
+ssize_t ramster_remote_pers_pages_unsucc_get;
+ssize_t ramster_pers_pages_remote_nomem;
+ssize_t ramster_remote_objects_flushed;
+ssize_t ramster_remote_object_flushes_failed;
+ssize_t ramster_remote_pages_flushed;
+ssize_t ramster_remote_page_flushes_failed;
+
+int __init ramster_debugfs_init(void)
+{
+ struct dentry *root = debugfs_create_dir("ramster", NULL);
+ if (root == NULL)
+ return -ENXIO;
+
+ zdfs("eph_pages_remoted", S_IRUGO, root, &ramster_eph_pages_remoted);
+ zdfs("pers_pages_remoted", S_IRUGO, root, &ramster_pers_pages_remoted);
+ zdfs("eph_pages_remote_failed", S_IRUGO, root,
+ &ramster_eph_pages_remote_failed);
+ zdfs("pers_pages_remote_failed", S_IRUGO, root,
+ &ramster_pers_pages_remote_failed);
+ zdfs("remote_eph_pages_succ_get", S_IRUGO, root,
+ &ramster_remote_eph_pages_succ_get);
+ zdfs("remote_pers_pages_succ_get", S_IRUGO, root,
+ &ramster_remote_pers_pages_succ_get);
+ zdfs("remote_eph_pages_unsucc_get", S_IRUGO, root,
+ &ramster_remote_eph_pages_unsucc_get);
+ zdfs("remote_pers_pages_unsucc_get", S_IRUGO, root,
+ &ramster_remote_pers_pages_unsucc_get);
+ zdfs("pers_pages_remote_nomem", S_IRUGO, root,
+ &ramster_pers_pages_remote_nomem);
+ zdfs("remote_objects_flushed", S_IRUGO, root,
+ &ramster_remote_objects_flushed);
+ zdfs("remote_pages_flushed", S_IRUGO, root,
+ &ramster_remote_pages_flushed);
+ zdfs("remote_object_flushes_failed", S_IRUGO, root,
+ &ramster_remote_object_flushes_failed);
+ zdfs("remote_page_flushes_failed", S_IRUGO, root,
+ &ramster_remote_page_flushes_failed);
+ zdfs("foreign_eph_pages", S_IRUGO, root,
+ &ramster_foreign_eph_pages);
+ zdfs("foreign_eph_pages_max", S_IRUGO, root,
+ &ramster_foreign_eph_pages_max);
+ zdfs("foreign_pers_pages", S_IRUGO, root,
+ &ramster_foreign_pers_pages);
+ zdfs("foreign_pers_pages_max", S_IRUGO, root,
+ &ramster_foreign_pers_pages_max);
+ return 0;
+}
+#undef zdebugfs
+#undef zdfs64
+#else
+static inline int ramster_debugfs_init(void)
+{
+ return 0;
+}
+#endif
diff --git a/drivers/staging/zcache/ramster/debug.h b/drivers/staging/zcache/ramster/debug.h
new file mode 100644
index 0000000..17a8435
--- /dev/null
+++ b/drivers/staging/zcache/ramster/debug.h
@@ -0,0 +1,76 @@
+#ifdef CONFIG_RAMSTER
+
+extern long ramster_flnodes;
+static atomic_t ramster_flnodes_atomic = ATOMIC_INIT(0);
+extern unsigned long ramster_flnodes_max;
+static inline void inc_ramster_flnodes(void)
+{
+ ramster_flnodes = atomic_inc_return(&ramster_flnodes_atomic);
+ if (ramster_flnodes > ramster_flnodes_max)
+ ramster_flnodes_max = ramster_flnodes;
+}
+static inline void dec_ramster_flnodes(void)
+{
+ ramster_flnodes = atomic_dec_return(&ramster_flnodes_atomic);
+}
+extern ssize_t ramster_foreign_eph_pages;
+static atomic_t ramster_foreign_eph_pages_atomic = ATOMIC_INIT(0);
+extern ssize_t ramster_foreign_eph_pages_max;
+static inline void inc_ramster_foreign_eph_pages(void)
+{
+ ramster_foreign_eph_pages = atomic_inc_return(
+ &ramster_foreign_eph_pages_atomic);
+ if (ramster_foreign_eph_pages > ramster_foreign_eph_pages_max)
+ ramster_foreign_eph_pages_max = ramster_foreign_eph_pages;
+}
+static inline void dec_ramster_foreign_eph_pages(void)
+{
+ ramster_foreign_eph_pages = atomic_dec_return(
+ &ramster_foreign_eph_pages_atomic);
+}
+extern ssize_t ramster_foreign_pers_pages;
+static atomic_t ramster_foreign_pers_pages_atomic = ATOMIC_INIT(0);
+extern ssize_t ramster_foreign_pers_pages_max;
+static inline void inc_ramster_foreign_pers_pages(void)
+{
+ ramster_foreign_pers_pages = atomic_inc_return(
+ &ramster_foreign_pers_pages_atomic);
+ if (ramster_foreign_pers_pages > ramster_foreign_pers_pages_max)
+ ramster_foreign_pers_pages_max = ramster_foreign_pers_pages;
+}
+static inline void dec_ramster_foreign_pers_pages(void)
+{
+ ramster_foreign_pers_pages = atomic_dec_return(
+ &ramster_foreign_pers_pages_atomic);
+}
+
+extern ssize_t ramster_eph_pages_remoted;
+extern ssize_t ramster_pers_pages_remoted;
+extern ssize_t ramster_eph_pages_remote_failed;
+extern ssize_t ramster_pers_pages_remote_failed;
+extern ssize_t ramster_remote_eph_pages_succ_get;
+extern ssize_t ramster_remote_pers_pages_succ_get;
+extern ssize_t ramster_remote_eph_pages_unsucc_get;
+extern ssize_t ramster_remote_pers_pages_unsucc_get;
+extern ssize_t ramster_pers_pages_remote_nomem;
+extern ssize_t ramster_remote_objects_flushed;
+extern ssize_t ramster_remote_object_flushes_failed;
+extern ssize_t ramster_remote_pages_flushed;
+extern ssize_t ramster_remote_page_flushes_failed;
+
+int ramster_debugfs_init(void);
+
+#else
+
+static inline void inc_ramster_flnodes(void) { };
+static inline void dec_ramster_flnodes(void) { };
+static inline void inc_ramster_foreign_eph_pages(void) { };
+static inline void dec_ramster_foreign_eph_pages(void) { };
+static inline void inc_ramster_foreign_pers_pages(void) { };
+static inline void dec_ramster_foreign_pers_pages(void) { };
+
+static inline int ramster_debugfs_init(void)
+{
+ return 0;
+}
+#endif
diff --git a/drivers/staging/zcache/ramster/ramster.c b/drivers/staging/zcache/ramster/ramster.c
index 444189e..1d29f5b 100644
--- a/drivers/staging/zcache/ramster/ramster.c
+++ b/drivers/staging/zcache/ramster/ramster.c
@@ -42,6 +42,7 @@
#include "ramster.h"
#include "ramster_nodemanager.h"
#include "tcp.h"
+#include "debug.h"
#define RAMSTER_TESTING
@@ -63,118 +64,12 @@ static atomic_t ramster_remote_pers_pages = ATOMIC_INIT(0);
static bool ramster_nodes_manual_up[MANUAL_NODES] __read_mostly;
static int ramster_remote_target_nodenum __read_mostly = -1;
-/* these counters are made available via debugfs */
-static long ramster_flnodes;
-static atomic_t ramster_flnodes_atomic = ATOMIC_INIT(0);
-static unsigned long ramster_flnodes_max;
-static inline void inc_ramster_flnodes(void)
-{
- ramster_flnodes = atomic_inc_return(&ramster_flnodes_atomic);
- if (ramster_flnodes > ramster_flnodes_max)
- ramster_flnodes_max = ramster_flnodes;
-}
-static inline void dec_ramster_flnodes(void)
-{
- ramster_flnodes = atomic_dec_return(&ramster_flnodes_atomic);
-}
-static ssize_t ramster_foreign_eph_pages;
-static atomic_t ramster_foreign_eph_pages_atomic = ATOMIC_INIT(0);
-static ssize_t ramster_foreign_eph_pages_max;
-static inline void inc_ramster_foreign_eph_pages(void)
-{
- ramster_foreign_eph_pages = atomic_inc_return(
- &ramster_foreign_eph_pages_atomic);
- if (ramster_foreign_eph_pages > ramster_foreign_eph_pages_max)
- ramster_foreign_eph_pages_max = ramster_foreign_eph_pages;
-}
-static inline void dec_ramster_foreign_eph_pages(void)
-{
- ramster_foreign_eph_pages = atomic_dec_return(
- &ramster_foreign_eph_pages_atomic);
-}
-static ssize_t ramster_foreign_pers_pages;
-static atomic_t ramster_foreign_pers_pages_atomic = ATOMIC_INIT(0);
-static ssize_t ramster_foreign_pers_pages_max;
-static inline void inc_ramster_foreign_pers_pages(void)
-{
- ramster_foreign_pers_pages = atomic_inc_return(
- &ramster_foreign_pers_pages_atomic);
- if (ramster_foreign_pers_pages > ramster_foreign_pers_pages_max)
- ramster_foreign_pers_pages_max = ramster_foreign_pers_pages;
-}
-static inline void dec_ramster_foreign_pers_pages(void)
-{
- ramster_foreign_pers_pages = atomic_dec_return(
- &ramster_foreign_pers_pages_atomic);
-}
-static ssize_t ramster_eph_pages_remoted;
-static ssize_t ramster_pers_pages_remoted;
-static ssize_t ramster_eph_pages_remote_failed;
-static ssize_t ramster_pers_pages_remote_failed;
-static ssize_t ramster_remote_eph_pages_succ_get;
-static ssize_t ramster_remote_pers_pages_succ_get;
-static ssize_t ramster_remote_eph_pages_unsucc_get;
-static ssize_t ramster_remote_pers_pages_unsucc_get;
-static ssize_t ramster_pers_pages_remote_nomem;
-static ssize_t ramster_remote_objects_flushed;
-static ssize_t ramster_remote_object_flushes_failed;
-static ssize_t ramster_remote_pages_flushed;
-static ssize_t ramster_remote_page_flushes_failed;
+/* Used by this code. */
+long ramster_flnodes;
+ssize_t ramster_foreign_eph_pages;
+ssize_t ramster_foreign_pers_pages;
/* FIXME frontswap selfshrinking knobs in debugfs? */
-#ifdef CONFIG_DEBUG_FS
-#include <linux/debugfs.h>
-#define zdfs debugfs_create_size_t
-#define zdfs64 debugfs_create_u64
-static int __init ramster_debugfs_init(void)
-{
- struct dentry *root = debugfs_create_dir("ramster", NULL);
- if (root == NULL)
- return -ENXIO;
-
- zdfs("eph_pages_remoted", S_IRUGO, root, &ramster_eph_pages_remoted);
- zdfs("pers_pages_remoted", S_IRUGO, root, &ramster_pers_pages_remoted);
- zdfs("eph_pages_remote_failed", S_IRUGO, root,
- &ramster_eph_pages_remote_failed);
- zdfs("pers_pages_remote_failed", S_IRUGO, root,
- &ramster_pers_pages_remote_failed);
- zdfs("remote_eph_pages_succ_get", S_IRUGO, root,
- &ramster_remote_eph_pages_succ_get);
- zdfs("remote_pers_pages_succ_get", S_IRUGO, root,
- &ramster_remote_pers_pages_succ_get);
- zdfs("remote_eph_pages_unsucc_get", S_IRUGO, root,
- &ramster_remote_eph_pages_unsucc_get);
- zdfs("remote_pers_pages_unsucc_get", S_IRUGO, root,
- &ramster_remote_pers_pages_unsucc_get);
- zdfs("pers_pages_remote_nomem", S_IRUGO, root,
- &ramster_pers_pages_remote_nomem);
- zdfs("remote_objects_flushed", S_IRUGO, root,
- &ramster_remote_objects_flushed);
- zdfs("remote_pages_flushed", S_IRUGO, root,
- &ramster_remote_pages_flushed);
- zdfs("remote_object_flushes_failed", S_IRUGO, root,
- &ramster_remote_object_flushes_failed);
- zdfs("remote_page_flushes_failed", S_IRUGO, root,
- &ramster_remote_page_flushes_failed);
- zdfs("foreign_eph_pages", S_IRUGO, root,
- &ramster_foreign_eph_pages);
- zdfs("foreign_eph_pages_max", S_IRUGO, root,
- &ramster_foreign_eph_pages_max);
- zdfs("foreign_pers_pages", S_IRUGO, root,
- &ramster_foreign_pers_pages);
- zdfs("foreign_pers_pages_max", S_IRUGO, root,
- &ramster_foreign_pers_pages_max);
- return 0;
-}
-#undef zdebugfs
-#undef zdfs64
-#else
-static inline int ramster_debugfs_init(void)
-{
- return 0;
-}
-#endif
-
static LIST_HEAD(ramster_rem_op_list);
static DEFINE_SPINLOCK(ramster_rem_op_list_lock);
static DEFINE_PER_CPU(struct ramster_preload, ramster_preloads);
--
1.7.10.4
Add RAMSTER_DEBUG Kconfig entry.
Acked-by: Dan Magenheimer <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
drivers/staging/zcache/Kconfig | 8 ++++++++
drivers/staging/zcache/Makefile | 2 +-
drivers/staging/zcache/ramster/debug.h | 2 +-
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/zcache/Kconfig b/drivers/staging/zcache/Kconfig
index c3b8a10..05e87a1 100644
--- a/drivers/staging/zcache/Kconfig
+++ b/drivers/staging/zcache/Kconfig
@@ -33,6 +33,14 @@ config RAMSTER
zcache2, compresses swap pages into local RAM, but then remotifies
the compressed pages to another node in the RAMster cluster.
+config RAMSTER_DEBUG
+ bool "Enable ramster debug statistics"
+ depends on DEBUG_FS && RAMSTER
+ default n
+ help
+	  This is used to provide a debugfs directory with counters of
+ how ramster is doing. You probably want to set this to 'N'.
+
# Depends on not-yet-upstreamed mm patches to export end_swap_bio_write and
# __add_to_swap_cache, and implement __swap_writepage (which is swap_writepage
# without the frontswap call. When these are in-tree, the dependency on
diff --git a/drivers/staging/zcache/Makefile b/drivers/staging/zcache/Makefile
index 4956fa0..845a5c2 100644
--- a/drivers/staging/zcache/Makefile
+++ b/drivers/staging/zcache/Makefile
@@ -1,6 +1,6 @@
zcache-y := zcache-main.o tmem.o zbud.o
zcache-$(CONFIG_ZCACHE_DEBUG) += debug.o
-zcache-$(CONFIG_RAMSTER) += ramster/debug.o
+zcache-$(CONFIG_RAMSTER_DEBUG) += ramster/debug.o
zcache-$(CONFIG_RAMSTER) += ramster/ramster.o ramster/r2net.o
zcache-$(CONFIG_RAMSTER) += ramster/nodemanager.o ramster/tcp.o
zcache-$(CONFIG_RAMSTER) += ramster/heartbeat.o ramster/masklog.o
diff --git a/drivers/staging/zcache/ramster/debug.h b/drivers/staging/zcache/ramster/debug.h
index 7b2deaa..7f80dd4 100644
--- a/drivers/staging/zcache/ramster/debug.h
+++ b/drivers/staging/zcache/ramster/debug.h
@@ -1,4 +1,4 @@
-#ifdef CONFIG_RAMSTER
+#ifdef CONFIG_RAMSTER_DEBUG
extern long ramster_flnodes;
static atomic_t ramster_flnodes_atomic = ATOMIC_INIT(0);
--
1.7.10.4
Add incremental accessory counters that will be used for debugfs entries.
Acked-by: Dan Magenheimer <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
drivers/staging/zcache/ramster/debug.h | 67 ++++++++++++++++++++++++++++++
drivers/staging/zcache/ramster/ramster.c | 32 +++++++-------
2 files changed, 83 insertions(+), 16 deletions(-)
diff --git a/drivers/staging/zcache/ramster/debug.h b/drivers/staging/zcache/ramster/debug.h
index 17a8435..7b2deaa 100644
--- a/drivers/staging/zcache/ramster/debug.h
+++ b/drivers/staging/zcache/ramster/debug.h
@@ -60,6 +60,59 @@ extern ssize_t ramster_remote_page_flushes_failed;
int ramster_debugfs_init(void);
+static inline void inc_ramster_eph_pages_remoted(void)
+{
+ ramster_eph_pages_remoted++;
+};
+static inline void inc_ramster_pers_pages_remoted(void)
+{
+ ramster_pers_pages_remoted++;
+};
+static inline void inc_ramster_eph_pages_remote_failed(void)
+{
+ ramster_eph_pages_remote_failed++;
+};
+static inline void inc_ramster_pers_pages_remote_failed(void)
+{
+ ramster_pers_pages_remote_failed++;
+};
+static inline void inc_ramster_remote_eph_pages_succ_get(void)
+{
+ ramster_remote_eph_pages_succ_get++;
+};
+static inline void inc_ramster_remote_pers_pages_succ_get(void)
+{
+ ramster_remote_pers_pages_succ_get++;
+};
+static inline void inc_ramster_remote_eph_pages_unsucc_get(void)
+{
+ ramster_remote_eph_pages_unsucc_get++;
+};
+static inline void inc_ramster_remote_pers_pages_unsucc_get(void)
+{
+ ramster_remote_pers_pages_unsucc_get++;
+};
+static inline void inc_ramster_pers_pages_remote_nomem(void)
+{
+ ramster_pers_pages_remote_nomem++;
+};
+static inline void inc_ramster_remote_objects_flushed(void)
+{
+ ramster_remote_objects_flushed++;
+};
+static inline void inc_ramster_remote_object_flushes_failed(void)
+{
+ ramster_remote_object_flushes_failed++;
+};
+static inline void inc_ramster_remote_pages_flushed(void)
+{
+ ramster_remote_pages_flushed++;
+};
+static inline void inc_ramster_remote_page_flushes_failed(void)
+{
+ ramster_remote_page_flushes_failed++;
+};
+
#else
static inline void inc_ramster_flnodes(void) { };
@@ -69,6 +122,20 @@ static inline void dec_ramster_foreign_eph_pages(void) { };
static inline void inc_ramster_foreign_pers_pages(void) { };
static inline void dec_ramster_foreign_pers_pages(void) { };
+static inline void inc_ramster_eph_pages_remoted(void) { };
+static inline void inc_ramster_pers_pages_remoted(void) { };
+static inline void inc_ramster_eph_pages_remote_failed(void) { };
+static inline void inc_ramster_pers_pages_remote_failed(void) { };
+static inline void inc_ramster_remote_eph_pages_succ_get(void) { };
+static inline void inc_ramster_remote_pers_pages_succ_get(void) { };
+static inline void inc_ramster_remote_eph_pages_unsucc_get(void) { };
+static inline void inc_ramster_remote_pers_pages_unsucc_get(void) { };
+static inline void inc_ramster_pers_pages_remote_nomem(void) { };
+static inline void inc_ramster_remote_objects_flushed(void) { };
+static inline void inc_ramster_remote_object_flushes_failed(void) { };
+static inline void inc_ramster_remote_pages_flushed(void) { };
+static inline void inc_ramster_remote_page_flushes_failed(void) { };
+
static inline int ramster_debugfs_init(void)
{
return 0;
diff --git a/drivers/staging/zcache/ramster/ramster.c b/drivers/staging/zcache/ramster/ramster.c
index 1d29f5b..8781627 100644
--- a/drivers/staging/zcache/ramster/ramster.c
+++ b/drivers/staging/zcache/ramster/ramster.c
@@ -156,9 +156,9 @@ int ramster_localify(int pool_id, struct tmem_oid *oidp, uint32_t index,
pr_err("UNTESTED pampd==NULL in ramster_localify\n");
#endif
if (eph)
- ramster_remote_eph_pages_unsucc_get++;
+ inc_ramster_remote_eph_pages_unsucc_get();
else
- ramster_remote_pers_pages_unsucc_get++;
+ inc_ramster_remote_pers_pages_unsucc_get();
obj = NULL;
goto finish;
} else if (unlikely(!pampd_is_remote(pampd))) {
@@ -167,9 +167,9 @@ int ramster_localify(int pool_id, struct tmem_oid *oidp, uint32_t index,
pr_err("UNTESTED dup while waiting in ramster_localify\n");
#endif
if (eph)
- ramster_remote_eph_pages_unsucc_get++;
+ inc_ramster_remote_eph_pages_unsucc_get();
else
- ramster_remote_pers_pages_unsucc_get++;
+ inc_ramster_remote_pers_pages_unsucc_get();
obj = NULL;
pampd = NULL;
ret = -EEXIST;
@@ -178,7 +178,7 @@ int ramster_localify(int pool_id, struct tmem_oid *oidp, uint32_t index,
/* no remote data, delete the local is_remote pampd */
pampd = NULL;
if (eph)
- ramster_remote_eph_pages_unsucc_get++;
+ inc_ramster_remote_eph_pages_unsucc_get();
else
BUG();
delete = true;
@@ -209,9 +209,9 @@ int ramster_localify(int pool_id, struct tmem_oid *oidp, uint32_t index,
BUG_ON(extra == NULL);
zcache_decompress_to_page(data, size, (struct page *)extra);
if (eph)
- ramster_remote_eph_pages_succ_get++;
+ inc_ramster_remote_eph_pages_succ_get();
else
- ramster_remote_pers_pages_succ_get++;
+ inc_ramster_remote_pers_pages_succ_get();
ret = 0;
finish:
tmem_localify_finish(obj, index, pampd, saved_hb, delete);
@@ -296,7 +296,7 @@ void *ramster_pampd_repatriate_preload(void *pampd, struct tmem_pool *pool,
c = atomic_dec_return(&ramster_remote_pers_pages);
WARN_ON_ONCE(c < 0);
} else {
- ramster_pers_pages_remote_nomem++;
+ inc_ramster_pers_pages_remote_nomem();
}
local_irq_restore(flags);
out:
@@ -435,9 +435,9 @@ static void ramster_remote_flush_page(struct flushlist_node *flnode)
remotenode = flnode->xh.client_id;
ret = r2net_remote_flush(xh, remotenode);
if (ret >= 0)
- ramster_remote_pages_flushed++;
+ inc_ramster_remote_pages_flushed();
else
- ramster_remote_page_flushes_failed++;
+ inc_ramster_remote_page_flushes_failed();
preempt_enable_no_resched();
ramster_flnode_free(flnode, NULL);
}
@@ -452,9 +452,9 @@ static void ramster_remote_flush_object(struct flushlist_node *flnode)
remotenode = flnode->xh.client_id;
ret = r2net_remote_flush_object(xh, remotenode);
if (ret >= 0)
- ramster_remote_objects_flushed++;
+ inc_ramster_remote_objects_flushed();
else
- ramster_remote_object_flushes_failed++;
+ inc_ramster_remote_object_flushes_failed();
preempt_enable_no_resched();
ramster_flnode_free(flnode, NULL);
}
@@ -505,18 +505,18 @@ int ramster_remotify_pageframe(bool eph)
* But count them so we know if it becomes a problem.
*/
if (eph)
- ramster_eph_pages_remote_failed++;
+ inc_ramster_eph_pages_remote_failed();
else
- ramster_pers_pages_remote_failed++;
+ inc_ramster_pers_pages_remote_failed();
break;
} else {
if (!eph)
atomic_inc(&ramster_remote_pers_pages);
}
if (eph)
- ramster_eph_pages_remoted++;
+ inc_ramster_eph_pages_remoted();
else
- ramster_pers_pages_remoted++;
+ inc_ramster_pers_pages_remoted();
/*
* data was successfully remoted so change the local version to
* point to the remote node where it landed
--
1.7.10.4
On Fri, Apr 12, 2013 at 09:31:22AM +0800, Wanpeng Li wrote:
> Note that at this point there is no CONFIG_RAMSTER_DEBUG
> option in the Kconfig. So in effect all of the counters
> are nop until that option gets re-introduced in:
> zcache/ramster/debug: Add RAMSTE_DEBUG Kconfig entry
RAMSTE_DEBUG? :)
On Fri, Apr 12, 2013 at 03:16:03PM -0700, Greg Kroah-Hartman wrote:
> On Fri, Apr 12, 2013 at 09:31:22AM +0800, Wanpeng Li wrote:
> > Note that at this point there is no CONFIG_RAMSTER_DEBUG
> > option in the Kconfig. So in effect all of the counters
> > are nop until that option gets re-introduced in:
> > zcache/ramster/debug: Add RAMSTE_DEBUG Kconfig entry
>
> RAMSTE_DEBUG? :)
>
And I fat-fingered my scripts, and deleted this email, sorry.
Can you send the 2-7 patches again, it's my fault.
greg k-h
On Sat, Apr 13, 2013 at 08:29:39AM +0800, Wanpeng Li wrote:
> On Fri, Apr 12, 2013 at 03:17:44PM -0700, Greg Kroah-Hartman wrote:
> >On Fri, Apr 12, 2013 at 03:16:03PM -0700, Greg Kroah-Hartman wrote:
> >> On Fri, Apr 12, 2013 at 09:31:22AM +0800, Wanpeng Li wrote:
> >> > Note that at this point there is no CONFIG_RAMSTER_DEBUG
> >> > option in the Kconfig. So in effect all of the counters
> >> > are nop until that option gets re-introduced in:
> >> > zcache/ramster/debug: Add RAMSTE_DEBUG Kconfig entry
> >>
> >> RAMSTE_DEBUG? :)
> >>
> >
> >And I fat-fingered my scripts, and deleted this email, sorry.
> >
>
> No problem, I will send 2-7 ASAP. ;-)
Thanks. 5 years since my last email deletion, not that bad :)
greg k-h