From: SeongJae Park <[email protected]>
DAMON[1] programming interface users can extend DAMON for any address space by
configuring the address space specific low level primitives with appropriate
ones, including their own implementations. However, because only the
implementation for the virtual address space is available now, users should
implement their own primitives for other address spaces. Worse yet, user space
users, who rely on the debugfs interface and the user space tool, cannot
implement their own at all.
This patchset implements another reference implementation of the low level
primitives, for the physical memory address space. With this change, kernel
space users can monitor both the virtual and the physical address spaces by
simply changing the configuration at runtime. Further, this patchset links the
implementation to the debugfs interface and the user space tool, for the user
space users.
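For example, with the full series applied, a user space user could monitor the
physical address space via either interface as below (illustrative only; the
exact usages are introduced and documented by the later patches of this
series):
    # echo paddr > <debugfs>/damon/pids    # debugfs interface
    # damo record paddr                    # user space tool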
Note that the implementation supports only user memory, same as the idle page
access tracking feature.
[1] https://lore.kernel.org/linux-mm/[email protected]/
Baseline and Complete Git Trees
===============================
The patches are based on the v5.7 plus DAMON v17 patchset[1] and DAMOS RFC v13
patchset[2]. You can also clone the complete git tree:
$ git clone git://github.com/sjp38/linux -b cdamon/rfc/v5
You can also browse it on the web:
https://github.com/sjp38/linux/releases/tag/cdamon/rfc/v5
[1] https://lore.kernel.org/linux-mm/[email protected]/
[2] https://lore.kernel.org/linux-mm/[email protected]/
Sequence of Patches
===================
The sequence of patches is as follows.
The first five patches allow user space users to manually set the monitoring
target regions. The 1st and 2nd patches implement the feature in the debugfs
interface and the user space tool, respectively. The following two patches
implement unit tests (the 3rd patch) and selftests (the 4th patch) for the new
feature. Finally, the 5th patch documents the feature.
The following six patches implement the physical memory monitoring. The 6th
patch exports essential rmap functions to GPL modules, as those will be used by
DAMON's implementation of the low level primitives for the physical memory
address space. The 7th patch implements the low level primitives. The 8th and
9th patches link the feature to the debugfs interface and the user space tool,
respectively. The 10th patch further implements a handy NUMA specific memory
monitoring feature in the user space tool. Finally, the 11th patch documents
these new features.
Patch History
=============
Changes from RFC v4
(https://lore.kernel.org/linux-mm/[email protected]/)
- Support NUMA specific physical memory monitoring
Changes from RFC v3
(https://lore.kernel.org/linux-mm/[email protected]/)
- Export rmap functions
- Reorganize for physical memory monitoring support only
- Clean up debugfs code
Changes from RFC v2
(https://lore.kernel.org/linux-mm/[email protected]/)
- Support the physical memory monitoring with the user space tool
- Use 'pfn_to_online_page()' (David Hildenbrand)
- Document more detail on random 'pfn' and its safeness (David Hildenbrand)
Changes from RFC v1
(https://lore.kernel.org/linux-mm/[email protected]/)
- Provide the reference primitive implementations for the physical memory
- Connect the extensions with the debugfs interface
SeongJae Park (11):
mm/damon/debugfs: Allow users to set initial monitoring target regions
tools/damon: Support init target regions specification
mm/damon-test: Add more unit tests for 'init_regions'
selftests/damon/_chk_record: Do not check number of gaps
Docs/damon: Document 'initial_regions' feature
mm/rmap: Export essential functions for rmap_walk
mm/damon: Implement callbacks for physical memory monitoring
mm/damon/debugfs: Support physical memory monitoring
tools/damon/record: Support physical memory monitoring
tools/damon/record: Support NUMA specific recording
Docs/damon: Document physical memory monitoring support
Documentation/admin-guide/mm/damon/faq.rst | 7 +-
Documentation/admin-guide/mm/damon/index.rst | 1 -
.../admin-guide/mm/damon/mechanisms.rst | 29 +-
Documentation/admin-guide/mm/damon/plans.rst | 7 -
Documentation/admin-guide/mm/damon/usage.rst | 80 +++-
include/linux/damon.h | 5 +
mm/damon-test.h | 53 +++
mm/damon.c | 374 +++++++++++++++++-
mm/rmap.c | 2 +
mm/util.c | 1 +
tools/damon/_damon.py | 41 ++
tools/damon/_paddr_layout.py | 158 ++++++++
tools/damon/heats.py | 2 +-
tools/damon/record.py | 60 ++-
tools/damon/schemes.py | 12 +-
tools/testing/selftests/damon/_chk_record.py | 6 -
16 files changed, 783 insertions(+), 55 deletions(-)
delete mode 100644 Documentation/admin-guide/mm/damon/plans.rst
create mode 100644 tools/damon/_paddr_layout.py
--
2.17.1
From: SeongJae Park <[email protected]>
Some users would want to monitor only a part of the entire virtual
memory address space. The '->init_target_regions' callback is therefore
provided, but only the programming interface users can use it.
For this reason, this commit introduces a new debugfs file,
'init_regions'. Users can specify the initial monitoring target address
regions they want by writing a special input to the file. The input
should describe each region, one per line, in the below form:
<pid> <start address> <end address>
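For example, the below commands would limit the monitoring of processes having
pids 42 and 4242 to a few address ranges (the address values are arbitrary
examples; the same usage is documented by the 5th patch of this series):
    # cd <debugfs>/damon
    # echo "42 1 100
    42 100 200
    4242 20 40
    4242 50 100" > init_regions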
This commit also makes the default '->init_target_regions' callback,
'kdamond_init_vm_regions()', do nothing if the user has already set the
initial target regions.
Note that the regions will be updated to cover the entire memory mapped
regions after one 'regions update interval'. If you do not want the
regions to be updated after the initial setting, you can set the
interval to a very long time, say, a few decades.
Signed-off-by: SeongJae Park <[email protected]>
---
mm/damon.c | 156 +++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 152 insertions(+), 4 deletions(-)
diff --git a/mm/damon.c b/mm/damon.c
index 937b6bccb7b8..3aecdef4c841 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -1800,6 +1800,147 @@ static ssize_t debugfs_record_write(struct file *file,
return ret;
}
+static ssize_t sprint_init_regions(struct damon_ctx *c, char *buf, ssize_t len)
+{
+ struct damon_task *t;
+ struct damon_region *r;
+ int written = 0;
+ int rc;
+
+ damon_for_each_task(t, c) {
+ damon_for_each_region(r, t) {
+ rc = snprintf(&buf[written], len - written,
+ "%d %lu %lu\n",
+ t->pid, r->ar.start, r->ar.end);
+ if (!rc)
+ return -ENOMEM;
+ written += rc;
+ }
+ }
+ return written;
+}
+
+static ssize_t debugfs_init_regions_read(struct file *file, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ struct damon_ctx *ctx = &damon_user_ctx;
+ char *kbuf;
+ ssize_t len;
+
+ kbuf = kmalloc(count, GFP_KERNEL);
+ if (!kbuf)
+ return -ENOMEM;
+
+ mutex_lock(&ctx->kdamond_lock);
+ if (ctx->kdamond) {
+ mutex_unlock(&ctx->kdamond_lock);
+ return -EBUSY;
+ }
+
+ len = sprint_init_regions(ctx, kbuf, count);
+ mutex_unlock(&ctx->kdamond_lock);
+ if (len < 0)
+ goto out;
+ len = simple_read_from_buffer(buf, count, ppos, kbuf, len);
+
+out:
+ kfree(kbuf);
+ return len;
+}
+
+static int add_init_region(struct damon_ctx *c,
+ int pid, struct damon_addr_range *ar)
+{
+ struct damon_task *t;
+ struct damon_region *r, *prev;
+ int rc = -EINVAL;
+
+ if (ar->start >= ar->end)
+ return -EINVAL;
+
+ damon_for_each_task(t, c) {
+ if (t->pid == pid) {
+ r = damon_new_region(ar->start, ar->end);
+ if (!r)
+ return -ENOMEM;
+ damon_add_region(r, t);
+ if (nr_damon_regions(t) > 1) {
+ prev = damon_prev_region(r);
+ if (prev->ar.end > r->ar.start) {
+ damon_destroy_region(r);
+ return -EINVAL;
+ }
+ }
+ rc = 0;
+ }
+ }
+ return rc;
+}
+
+static int set_init_regions(struct damon_ctx *c, const char *str, ssize_t len)
+{
+ struct damon_task *t;
+ struct damon_region *r, *next;
+ int pos = 0, parsed, ret;
+ int pid;
+ struct damon_addr_range ar;
+ int err;
+
+ damon_for_each_task(t, c) {
+ damon_for_each_region_safe(r, next, t)
+ damon_destroy_region(r);
+ }
+
+ while (pos < len) {
+ ret = sscanf(&str[pos], "%d %lu %lu%n",
+ &pid, &ar.start, &ar.end, &parsed);
+ if (ret != 3)
+ break;
+ err = add_init_region(c, pid, &ar);
+ if (err)
+ goto fail;
+ pos += parsed;
+ }
+
+ return 0;
+
+fail:
+ damon_for_each_task(t, c) {
+ damon_for_each_region_safe(r, next, t)
+ damon_destroy_region(r);
+ }
+ return err;
+}
+
+static ssize_t debugfs_init_regions_write(struct file *file,
+ const char __user *buf, size_t count,
+ loff_t *ppos)
+{
+ struct damon_ctx *ctx = &damon_user_ctx;
+ char *kbuf;
+ ssize_t ret = count;
+ int err;
+
+ kbuf = user_input_str(buf, count, ppos);
+ if (IS_ERR(kbuf))
+ return PTR_ERR(kbuf);
+
+ mutex_lock(&ctx->kdamond_lock);
+ if (ctx->kdamond) {
+ ret = -EBUSY;
+ goto unlock_out;
+ }
+
+ err = set_init_regions(ctx, kbuf, ret);
+ if (err)
+ ret = err;
+
+unlock_out:
+ mutex_unlock(&ctx->kdamond_lock);
+ kfree(kbuf);
+ return ret;
+}
+
static ssize_t debugfs_attrs_read(struct file *file,
char __user *buf, size_t count, loff_t *ppos)
{
@@ -1876,6 +2017,12 @@ static const struct file_operations record_fops = {
.write = debugfs_record_write,
};
+static const struct file_operations init_regions_fops = {
+ .owner = THIS_MODULE,
+ .read = debugfs_init_regions_read,
+ .write = debugfs_init_regions_write,
+};
+
static const struct file_operations attrs_fops = {
.owner = THIS_MODULE,
.read = debugfs_attrs_read,
@@ -1886,10 +2033,11 @@ static struct dentry *debugfs_root;
static int __init damon_debugfs_init(void)
{
- const char * const file_names[] = {"attrs", "record", "schemes",
- "pids", "monitor_on"};
- const struct file_operations *fops[] = {&attrs_fops, &record_fops,
- &schemes_fops, &pids_fops, &monitor_on_fops};
+ const char * const file_names[] = {"attrs", "init_regions", "record",
+ "schemes", "pids", "monitor_on"};
+ const struct file_operations *fops[] = {&attrs_fops,
+ &init_regions_fops, &record_fops, &schemes_fops, &pids_fops,
+ &monitor_on_fops};
int i;
debugfs_root = debugfs_create_dir("damon", NULL);
--
2.17.1
From: SeongJae Park <[email protected]>
This commit documents the 'initial_regions' feature.
Signed-off-by: SeongJae Park <[email protected]>
---
Documentation/admin-guide/mm/damon/usage.rst | 35 ++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/Documentation/admin-guide/mm/damon/usage.rst b/Documentation/admin-guide/mm/damon/usage.rst
index 153f07da9368..573fcb4c57a7 100644
--- a/Documentation/admin-guide/mm/damon/usage.rst
+++ b/Documentation/admin-guide/mm/damon/usage.rst
@@ -315,6 +315,41 @@ having pids 42 and 4242 as the processes to be monitored and check it again::
Note that setting the pids doesn't start the monitoring.
+Initial Monitoring Target Regions
+---------------------------------
+
+In case of the debugfs based monitoring, DAMON automatically sets and updates
+the monitoring target regions so that entire memory mappings of target
+processes can be covered. However, users might want to limit the monitoring
+region to specific address ranges, such as the heap, the stack, or specific
+file-mapped area. Or, some users might know the initial access pattern of their
+workloads and therefore want to set optimal initial regions for the 'adaptive
+regions adjustment'.
+
+In such cases, users can explicitly set the initial monitoring target regions
+as they want, by writing proper values to the ``init_regions`` file. Each line
+of the input should represent one region in below form.::
+
+ <pid> <start address> <end address>
+
+The ``pid`` should already be in the ``pids`` file, and the regions should be
+passed in address order. For example, the below commands will set a couple of
+address ranges, ``1-100`` and ``100-200`` as the initial monitoring target
+region of process 42, and another couple of address ranges, ``20-40`` and
+``50-100`` as that of process 4242.::
+
+ # cd <debugfs>/damon
+ # echo "42 1 100
+ 42 100 200
+ 4242 20 40
+ 4242 50 100" > init_regions
+
+Note that this sets the initial monitoring target regions only. DAMON will
+automatically update the boundary of the regions after one ``regions update
+interval``. Therefore, users should set the ``regions update interval`` large
+enough.
+
+
Record
------
--
2.17.1
From: SeongJae Park <[email protected]>
This commit makes the debugfs interface support the physical memory
monitoring, in addition to the virtual memory monitoring.
Users can do the physical memory monitoring by writing a special
keyword, 'paddr\n', to the 'pids' debugfs file. Then, DAMON will check
the special keyword and configure the callbacks of the debugfs user's
monitoring context for the physical memory. This will internally add one
fake monitoring target process, which has -1 as its pid.
Unlike the virtual memory monitoring, DAMON debugfs will not
automatically set the monitoring target regions. Therefore, users should
also set the monitoring target address regions using the 'init_regions'
debugfs file. While doing this, the 'pid' in the input should be '-1'.
Finally, the physical memory monitoring will not be automatically
terminated, because it has a fake monitoring target process. The user
should explicitly turn off the monitoring by writing 'off' to the
'monitor_on' debugfs file.
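For example, monitoring accesses to an arbitrary physical address range could
be done as below (the address values are only examples and should be given in
decimal, as the 'init_regions' parser of this series expects):
    # cd <debugfs>/damon
    # echo paddr > pids
    # echo "-1 1073741824 2147483648" > init_regions
    # echo on > monitor_on
    ... monitor as usual ...
    # echo off > monitor_on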
Signed-off-by: SeongJae Park <[email protected]>
---
mm/damon.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/mm/damon.c b/mm/damon.c
index fb533b2ee4bf..34c418ef4e5f 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -1928,6 +1928,23 @@ static ssize_t debugfs_pids_write(struct file *file,
if (IS_ERR(kbuf))
return PTR_ERR(kbuf);
+ if (!strncmp(kbuf, "paddr\n", count)) {
+ /* Configure the context for physical memory monitoring */
+ ctx->init_target_regions = kdamond_init_phys_regions;
+ ctx->update_target_regions = kdamond_update_phys_regions;
+ ctx->prepare_access_checks = kdamond_prepare_phys_access_checks;
+ ctx->check_accesses = kdamond_check_phys_accesses;
+
+ /* Set the fake target task pid as -1 */
+ snprintf(kbuf, count, "-1 ");
+ } else {
+ /* Configure the context for virtual memory monitoring */
+ ctx->init_target_regions = kdamond_init_vm_regions;
+ ctx->update_target_regions = kdamond_update_vm_regions;
+ ctx->prepare_access_checks = kdamond_prepare_vm_access_checks;
+ ctx->check_accesses = kdamond_check_vm_accesses;
+ }
+
targets = str_to_pids(kbuf, ret, &nr_targets);
if (!targets) {
ret = -ENOMEM;
--
2.17.1
From: SeongJae Park <[email protected]>
This commit adds description for the physical memory monitoring usage in
the DAMON document.
Signed-off-by: SeongJae Park <[email protected]>
---
Documentation/admin-guide/mm/damon/faq.rst | 7 ++-
Documentation/admin-guide/mm/damon/index.rst | 1 -
.../admin-guide/mm/damon/mechanisms.rst | 29 +++++-----
Documentation/admin-guide/mm/damon/plans.rst | 7 ---
Documentation/admin-guide/mm/damon/usage.rst | 53 ++++++++++++++-----
5 files changed, 60 insertions(+), 37 deletions(-)
delete mode 100644 Documentation/admin-guide/mm/damon/plans.rst
diff --git a/Documentation/admin-guide/mm/damon/faq.rst b/Documentation/admin-guide/mm/damon/faq.rst
index f55d1d719999..ff630cf5fce1 100644
--- a/Documentation/admin-guide/mm/damon/faq.rst
+++ b/Documentation/admin-guide/mm/damon/faq.rst
@@ -44,10 +44,9 @@ constructions and actual access checks can be implemented and configured on the
DAMON core by the users. In this way, DAMON users can monitor any address
space with any access check technique.
-Nonetheless, DAMON provides a vma tracking and PTE Accessed bit check based
-implementation of the address space dependent functions for the virtual memory
-by default, for a reference and convenient use. In near future, we will also
-provide that for physical memory address space.
+Nonetheless, DAMON provides vma/rmap tracking and PTE Accessed bit check based
+implementations of the address space dependent functions for the virtual memory
+and the physical memory by default, for a reference and convenient use.
Can I simply monitor page granularity?
diff --git a/Documentation/admin-guide/mm/damon/index.rst b/Documentation/admin-guide/mm/damon/index.rst
index c6e657f8e90c..6e36149053fa 100644
--- a/Documentation/admin-guide/mm/damon/index.rst
+++ b/Documentation/admin-guide/mm/damon/index.rst
@@ -32,4 +32,3 @@ workloads and systems.
faq
mechanisms
eval
- plans
diff --git a/Documentation/admin-guide/mm/damon/mechanisms.rst b/Documentation/admin-guide/mm/damon/mechanisms.rst
index 16066477bb2c..fb33d8d8a09c 100644
--- a/Documentation/admin-guide/mm/damon/mechanisms.rst
+++ b/Documentation/admin-guide/mm/damon/mechanisms.rst
@@ -25,9 +25,11 @@ files, and backing devices would be supportable. Also, if some architectures
or kernel module support special access check primitives for specific address
space, those will be easily configurable.
-DAMON currently provides an implementation of the primitives for the virtual
-address space. It uses VMA for the target address range identification and PTE
-Accessed bit for the access check.
+DAMON currently provides implementations of the primitives for both the
+physical and the virtual address spaces. The implementation for the physical
+address space asks users to manually set the monitoring target address ranges,
+while that for the virtual address space uses VMAs for the target address
+range identification. Both use the PTE Accessed bit for the access check.
Below four sections describe the address independent core mechanisms and the
five knobs for tuning, that is, ``sampling interval``, ``aggregation
@@ -113,26 +115,29 @@ memory mapping changes and applies it to the abstracted target area only for
each of a user-specified time interval (``regions update interval``).
-Virtual Address Space Specific Low Primitives
-=============================================
+Address Space Specific Low Primitives
+=====================================
-This is for the DAMON's reference implementation of the virtual memory address
-specific low level primitive only.
+This is for the DAMON's reference implementation of the address space specific
+low level primitive only.
PTE Accessed-bit Based Access Check
-----------------------------------
-The implementation uses PTE Accessed-bit for basic access checks. That is, it
-clears the bit for next sampling target page and checks whether it set again
-after one sampling period. To avoid disturbing other Accessed bit users such
-as the reclamation logic, this implementation adjusts the ``PG_Idle`` and
-``PG_Young`` appropriately, as same to the 'Idle Page Tracking'.
+Both of the implementations for the physical and the virtual address spaces
+use the PTE Accessed-bit for basic access checks. That is, they clear the bit
+for the next sampling target page and check whether it is set again after one
+sampling period. To avoid disturbing other Accessed bit users such as the
+reclamation logic, the implementations adjust the ``PG_Idle`` and ``PG_Young``
+flags appropriately, in the same way as 'Idle Page Tracking'.
VMA-based Target Address Range Construction
-------------------------------------------
+This is for the virtual address space specific primitives implementation.
+
Only small parts in the super-huge virtual address space of the processes are
mapped to the physical memory and accessed. Thus, tracking the unmapped
address regions is just wasteful. However, because DAMON can deal with some
diff --git a/Documentation/admin-guide/mm/damon/plans.rst b/Documentation/admin-guide/mm/damon/plans.rst
deleted file mode 100644
index 765344f02eb3..000000000000
--- a/Documentation/admin-guide/mm/damon/plans.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0
-
-============
-Future Plans
-============
-
-TBD.
diff --git a/Documentation/admin-guide/mm/damon/usage.rst b/Documentation/admin-guide/mm/damon/usage.rst
index 573fcb4c57a7..356281078d4d 100644
--- a/Documentation/admin-guide/mm/damon/usage.rst
+++ b/Documentation/admin-guide/mm/damon/usage.rst
@@ -10,15 +10,16 @@ DAMON provides below three interfaces for different users.
This is for privileged people such as system administrators who want a
just-working human-friendly interface. Using this, users can use the DAMON’s
major features in a human-friendly way. It may not be highly tuned for
- special cases, though. It supports virtual address space monitoring only.
+ special cases, though. It supports both virtual and physical address spaces
+ monitoring.
- *debugfs interface.*
This is for privileged user space programmers who want more optimized use of
DAMON. Using this, users can use DAMON’s major features by reading
from and writing to special debugfs files. Therefore, you can write and use
your personalized DAMON debugfs wrapper programs that reads/writes the
debugfs files instead of you. The DAMON user space tool is also a reference
- implementation of such programs. It supports virtual address space
- monitoring only.
+ implementation of such programs. It supports both virtual and physical
+ address spaces monitoring.
- *Kernel Space Programming Interface.*
This is for kernel space programmers. Using this, users can utilize every
feature of DAMON most flexibly and efficiently by writing kernel space
@@ -48,9 +49,11 @@ Recording Data Access Pattern
-----------------------------
The ``record`` subcommand records the data access pattern of target workloads
-in a file (``./damon.data`` by default). You can specify the target as either
-process id of running target or a command for execution of it. Below example
-shows a command target usage::
+in a file (``./damon.data`` by default). You can specify the target with 1)
+the command for execution of the monitoring target process, 2) pid of running
+target process, or 3) the special keyword, 'paddr', if you want to monitor the
+system's physical memory address space. Below example shows a command target
+usage::
# cd <kernel>/tools/damon/
# damo record "sleep 5"
@@ -61,6 +64,15 @@ of the process. Below example shows a pid target usage::
# sleep 5 &
# damo record `pidof sleep`
+Finally, below example shows the use of the special keyword, 'paddr'::
+
+ # damo record paddr
+
+In this case, the monitoring target region defaults to the largest 'System
+RAM' region specified in the '/proc/iomem' file. Note that the initial
+monitoring target region is maintained rather than dynamically updated, unlike
+the virtual memory address space monitoring case.
+
The location of the recorded file can be explicitly set using ``-o`` option.
You can further tune this by setting the monitoring attributes. To know about
the monitoring attributes in detail, please refer to :doc:`mechanisms`.
@@ -303,15 +315,25 @@ check it again::
Target PIDs
-----------
-Users can get and set the pids of monitoring target processes by reading from
-and writing to the ``pids`` file. For example, below commands set processes
-having pids 42 and 4242 as the processes to be monitored and check it again::
+To monitor the virtual memory address spaces of specific processes, users can
+get and set the pids of monitoring target processes by reading from and writing
+to the ``pids`` file. For example, below commands set processes having pids 42
+and 4242 as the processes to be monitored and check it again::
# cd <debugfs>/damon
# echo 42 4242 > pids
# cat pids
42 4242
+Users can also monitor the physical memory address space of the system by
+writing a special keyword, "``paddr\n``" to the file. In this case, reading the
+file will show ``-1``, as below::
+
+ # cd <debugfs>/damon
+ # echo paddr > pids
+ # cat pids
+ -1
+
Note that setting the pids doesn't start the monitoring.
@@ -326,6 +348,10 @@ file-mapped area. Or, some users might know the initial access pattern of their
workloads and therefore want to set optimal initial regions for the 'adaptive
regions adjustment'.
+In contrast, DAMON does not automatically set and update the monitoring target
+regions in the case of physical memory monitoring. Therefore, users should set
+the monitoring target regions by themselves.
+
In such cases, users can explicitly set the initial monitoring target regions
as they want, by writing proper values to the ``init_regions`` file. Each line
of the input should represent one region in below form.::
@@ -344,10 +370,11 @@ region of process 42, and another couple of address ranges, ``20-40`` and
4242 20 40
4242 50 100" > init_regions
-Note that this sets the initial monitoring target regions only. DAMON will
-automatically update the boundary of the regions after one ``regions update
-interval``. Therefore, users should set the ``regions update interval`` large
-enough.
+Note that this sets the initial monitoring target regions only. In the case of
+virtual memory monitoring, DAMON will automatically update the boundary of the
+regions after one ``regions update interval``. Therefore, users should set the
+``regions update interval`` large enough in this case, if they don't want the
+update.
Record
--
2.17.1
From: SeongJae Park <[email protected]>
This commit exports the three functions essential for rmap walk,
'page_lock_anon_vma_read()', 'rmap_walk()', and 'page_rmapping()', to
GPL modules. Those will be used by DAMON for the physical memory
address based access monitoring in the following commit.
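For reference, a GPL module could use the exported functions roughly as below
(a minimal sketch against the v5.7 rmap interface, following the usage in the
next patch of this series; the 'my_' names are hypothetical, and the page
locking that the next patch does around the walk is omitted for brevity):

    #include <linux/mm.h>
    #include <linux/rmap.h>

    /* Called for each mapping of the page; return true to keep walking. */
    static bool my_rmap_one(struct page *page, struct vm_area_struct *vma,
                            unsigned long addr, void *arg)
    {
            return true;
    }

    static void my_walk_mappings(struct page *page)
    {
            struct rmap_walk_control rwc = {
                    .rmap_one = my_rmap_one,
                    .anon_lock = page_lock_anon_vma_read,
            };

            if (!page_mapped(page) || !page_rmapping(page))
                    return;
            rmap_walk(page, &rwc);
    }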
Signed-off-by: SeongJae Park <[email protected]>
---
mm/rmap.c | 2 ++
mm/util.c | 1 +
2 files changed, 3 insertions(+)
diff --git a/mm/rmap.c b/mm/rmap.c
index f79a206b271a..20ac37b27a7d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -579,6 +579,7 @@ struct anon_vma *page_lock_anon_vma_read(struct page *page)
rcu_read_unlock();
return anon_vma;
}
+EXPORT_SYMBOL_GPL(page_lock_anon_vma_read);
void page_unlock_anon_vma_read(struct anon_vma *anon_vma)
{
@@ -1934,6 +1935,7 @@ void rmap_walk(struct page *page, struct rmap_walk_control *rwc)
else
rmap_walk_file(page, rwc, false);
}
+EXPORT_SYMBOL_GPL(rmap_walk);
/* Like rmap_walk, but caller holds relevant rmap lock */
void rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc)
diff --git a/mm/util.c b/mm/util.c
index 988d11e6c17c..1df32546fe28 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -620,6 +620,7 @@ void *page_rmapping(struct page *page)
page = compound_head(page);
return __page_rmapping(page);
}
+EXPORT_SYMBOL_GPL(page_rmapping);
/*
* Return true if this page is mapped into pagetables.
--
2.17.1
From: SeongJae Park <[email protected]>
This commit implements the four callbacks (->init_target_regions,
->update_target_regions, ->prepare_access_checks, and ->check_accesses)
for the basic access monitoring of the physical memory address space.
By setting the callback pointers of a monitoring context to point to
those, users can easily monitor accesses to the physical memory.
Internally, it uses the PTE Accessed bit, similar to the virtual memory
support. Also, it supports only user memory pages, as the idle page
tracking feature does, for the same reason. If the monitoring target
physical memory address range contains non-user memory pages, the access
check simply treats those pages as not accessed.
Users who want to use other access check primitives and/or monitor
non-user memory regions could implement and use their own callbacks.
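For example, a kernel space user could configure a monitoring context to use
these primitives roughly as below (a minimal sketch; the function name is
hypothetical, it only shows the callback assignments, and it assumes the caller
allocates the context and starts the monitoring separately, as the debugfs
patch of this series does):

    #include <linux/damon.h>

    static void damon_use_phys_primitives(struct damon_ctx *ctx)
    {
            /* Monitor the physical address space instead of virtual ones */
            ctx->init_target_regions = kdamond_init_phys_regions;
            ctx->update_target_regions = kdamond_update_phys_regions;
            ctx->prepare_access_checks = kdamond_prepare_phys_access_checks;
            ctx->check_accesses = kdamond_check_phys_accesses;
    }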
Signed-off-by: SeongJae Park <[email protected]>
---
include/linux/damon.h | 5 ++
mm/damon.c | 201 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 206 insertions(+)
diff --git a/include/linux/damon.h b/include/linux/damon.h
index f176a2b6e67c..eb7a5595b616 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -227,6 +227,11 @@ void kdamond_update_vm_regions(struct damon_ctx *ctx);
void kdamond_prepare_vm_access_checks(struct damon_ctx *ctx);
unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx);
+void kdamond_init_phys_regions(struct damon_ctx *ctx);
+void kdamond_update_phys_regions(struct damon_ctx *ctx);
+void kdamond_prepare_phys_access_checks(struct damon_ctx *ctx);
+unsigned int kdamond_check_phys_accesses(struct damon_ctx *ctx);
+
int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids);
int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
unsigned long aggr_int, unsigned long regions_update_int,
diff --git a/mm/damon.c b/mm/damon.c
index 3aecdef4c841..fb533b2ee4bf 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -27,10 +27,13 @@
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/kthread.h>
+#include <linux/memory_hotplug.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/page_idle.h>
+#include <linux/pagemap.h>
#include <linux/random.h>
+#include <linux/rmap.h>
#include <linux/sched/mm.h>
#include <linux/sched/task.h>
#include <linux/slab.h>
@@ -535,6 +538,18 @@ void kdamond_init_vm_regions(struct damon_ctx *ctx)
}
}
+/*
+ * The initial regions construction function for the physical address space.
+ *
+ * This default version actually does nothing. Users should set the initial
+ * regions by themselves before passing their damon_ctx to 'start_damon()',
+ * or implement their own version of this and set '->init_target_regions' of
+ * their damon_ctx to point to it.
+ */
+void kdamond_init_phys_regions(struct damon_ctx *ctx)
+{
+}
+
/*
* Functions for the dynamic monitoring target regions update
*/
@@ -618,6 +633,19 @@ void kdamond_update_vm_regions(struct damon_ctx *ctx)
}
}
+/*
+ * The dynamic monitoring target regions update function for the physical
+ * address space.
+ *
+ * This default version actually does nothing. Users should update the
+ * regions in other callbacks such as '->aggregate_cb', or implement their
+ * own version of this and set '->update_target_regions' of their damon_ctx
+ * to point to it.
+ */
+void kdamond_update_phys_regions(struct damon_ctx *ctx)
+{
+}
+
/*
* Functions for the access checking of the regions
*/
@@ -753,6 +781,179 @@ unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx)
return max_nr_accesses;
}
+/* access check functions for physical address based regions */
+
+/*
+ * Get a page by pfn if it is in the LRU list. Otherwise, returns NULL.
+ *
+ * The body of this function is stolen from 'page_idle_get_page()'. We steal
+ * it rather than reusing it because the code is quite simple.
+ */
+static struct page *damon_phys_get_page(unsigned long pfn)
+{
+ struct page *page = pfn_to_online_page(pfn);
+ pg_data_t *pgdat;
+
+ if (!page || !PageLRU(page) ||
+ !get_page_unless_zero(page))
+ return NULL;
+
+ pgdat = page_pgdat(page);
+ spin_lock_irq(&pgdat->lru_lock);
+ if (unlikely(!PageLRU(page))) {
+ put_page(page);
+ page = NULL;
+ }
+ spin_unlock_irq(&pgdat->lru_lock);
+ return page;
+}
+
+static bool damon_page_mkold(struct page *page, struct vm_area_struct *vma,
+ unsigned long addr, void *arg)
+{
+ damon_mkold(vma->vm_mm, addr);
+ return true;
+}
+
+static void damon_phys_mkold(unsigned long paddr)
+{
+ struct page *page = damon_phys_get_page(PHYS_PFN(paddr));
+ struct rmap_walk_control rwc = {
+ .rmap_one = damon_page_mkold,
+ .anon_lock = page_lock_anon_vma_read,
+ };
+ bool need_lock;
+
+ if (!page)
+ return;
+
+ if (!page_mapped(page) || !page_rmapping(page))
+ return;
+
+ need_lock = !PageAnon(page) || PageKsm(page);
+ if (need_lock && !trylock_page(page))
+ return;
+
+ rmap_walk(page, &rwc);
+
+ if (need_lock)
+ unlock_page(page);
+ put_page(page);
+}
+
+static void damon_prepare_phys_access_check(struct damon_ctx *ctx,
+ struct damon_region *r)
+{
+ r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
+
+ damon_phys_mkold(r->sampling_addr);
+}
+
+void kdamond_prepare_phys_access_checks(struct damon_ctx *ctx)
+{
+ struct damon_task *t;
+ struct damon_region *r;
+
+ damon_for_each_task(t, ctx) {
+ damon_for_each_region(r, t)
+ damon_prepare_phys_access_check(ctx, r);
+ }
+}
+
+struct damon_phys_access_chk_result {
+ unsigned long page_sz;
+ bool accessed;
+};
+
+static bool damon_page_accessed(struct page *page, struct vm_area_struct *vma,
+ unsigned long addr, void *arg)
+{
+ struct damon_phys_access_chk_result *result = arg;
+
+ result->accessed = damon_young(vma->vm_mm, addr, &result->page_sz);
+
+ /* If accessed, stop walking */
+ return !result->accessed;
+}
+
+static bool damon_phys_young(unsigned long paddr, unsigned long *page_sz)
+{
+ struct page *page = damon_phys_get_page(PHYS_PFN(paddr));
+ struct damon_phys_access_chk_result result = {
+ .page_sz = PAGE_SIZE,
+ .accessed = false,
+ };
+ struct rmap_walk_control rwc = {
+ .arg = &result,
+ .rmap_one = damon_page_accessed,
+ .anon_lock = page_lock_anon_vma_read,
+ };
+ bool need_lock;
+
+ if (!page)
+ return false;
+
+ if (!page_mapped(page) || !page_rmapping(page))
+ return false;
+
+ need_lock = !PageAnon(page) || PageKsm(page);
+ if (need_lock && !trylock_page(page))
+ return false;
+
+ rmap_walk(page, &rwc);
+
+ if (need_lock)
+ unlock_page(page);
+ put_page(page);
+
+ *page_sz = result.page_sz;
+ return result.accessed;
+}
+
+/*
+ * Check whether the region was accessed after the last preparation
+ *
+ * ctx 'damon_ctx' of the monitoring
+ * r the region of physical address space that needs to be checked
+ */
+static void damon_check_phys_access(struct damon_ctx *ctx,
+ struct damon_region *r)
+{
+ static unsigned long last_addr;
+ static unsigned long last_page_sz = PAGE_SIZE;
+ static bool last_accessed;
+
+ /* If the region is in the last checked page, reuse the result */
+ if (ALIGN_DOWN(last_addr, last_page_sz) ==
+ ALIGN_DOWN(r->sampling_addr, last_page_sz)) {
+ if (last_accessed)
+ r->nr_accesses++;
+ return;
+ }
+
+ last_accessed = damon_phys_young(r->sampling_addr, &last_page_sz);
+ if (last_accessed)
+ r->nr_accesses++;
+
+ last_addr = r->sampling_addr;
+}
+
+unsigned int kdamond_check_phys_accesses(struct damon_ctx *ctx)
+{
+ struct damon_task *t;
+ struct damon_region *r;
+ unsigned int max_nr_accesses = 0;
+
+ damon_for_each_task(t, ctx) {
+ damon_for_each_region(r, t) {
+ damon_check_phys_access(ctx, r);
+ max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
+ }
+ }
+
+ return max_nr_accesses;
+}
+
/*
* Functions for DAMON core logics and features
*/
--
2.17.1
From: SeongJae Park <[email protected]>
This commit allows users to record the data accesses to the physical
memory address space by passing 'paddr' as the target to 'damo record'.
If the init regions are given, those regions will be monitored.
Otherwise, it will monitor the biggest contiguous 'System RAM' region in
'/proc/iomem'.
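For example (as also documented by the last patch of this series):
    # cd <kernel>/tools/damon/
    # damo record paddr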
Signed-off-by: SeongJae Park <[email protected]>
---
tools/damon/_damon.py | 2 ++
tools/damon/heats.py | 2 +-
tools/damon/record.py | 29 ++++++++++++++++++++++++++++-
3 files changed, 31 insertions(+), 2 deletions(-)
diff --git a/tools/damon/_damon.py b/tools/damon/_damon.py
index ad476cc61421..95d23c2ab6ee 100644
--- a/tools/damon/_damon.py
+++ b/tools/damon/_damon.py
@@ -27,6 +27,8 @@ def set_target(pid, init_regions=[]):
if not os.path.exists(debugfs_init_regions):
return 0
+ if pid == 'paddr':
+ pid = -1
string = ' '.join(['%s %d %d' % (pid, r[0], r[1]) for r in init_regions])
return subprocess.call('echo "%s" > %s' % (string, debugfs_init_regions),
shell=True, executable='/bin/bash')
diff --git a/tools/damon/heats.py b/tools/damon/heats.py
index 99837083874e..34dbcf1a839d 100644
--- a/tools/damon/heats.py
+++ b/tools/damon/heats.py
@@ -307,7 +307,7 @@ def plot_heatmap(data_file, output_file):
set xrange [0:];
set yrange [0:];
set xlabel 'Time (ns)';
- set ylabel 'Virtual Address (bytes)';
+ set ylabel 'Address (bytes)';
plot '%s' using 1:2:3 with image;""" % (terminal, output_file, data_file)
subprocess.call(['gnuplot', '-e', gnuplot_cmd])
os.remove(data_file)
diff --git a/tools/damon/record.py b/tools/damon/record.py
index 6ce8721d782a..416dca940c1d 100644
--- a/tools/damon/record.py
+++ b/tools/damon/record.py
@@ -73,6 +73,29 @@ def set_argparser(parser):
parser.add_argument('-o', '--out', metavar='<file path>', type=str,
default='damon.data', help='output file path')
+def default_paddr_region():
+ "Largest System RAM region becomes the default"
+ ret = []
+ with open('/proc/iomem', 'r') as f:
+ # example of the line: '100000000-42b201fff : System RAM'
+ for line in f:
+ fields = line.split(':')
+ if len(fields) != 2:
+ continue
+ name = fields[1].strip()
+ if name != 'System RAM':
+ continue
+ addrs = fields[0].split('-')
+ if len(addrs) != 2:
+ continue
+ start = int(addrs[0], 16)
+ end = int(addrs[1], 16)
+
+ sz_region = end - start
+ if not ret or sz_region > (ret[1] - ret[0]):
+ ret = [start, end]
+ return ret
+
def main(args=None):
global orig_attrs
if not args:
@@ -93,7 +116,11 @@ def main(args=None):
target = args.target
target_fields = target.split()
- if not subprocess.call('which %s > /dev/null' % target_fields[0],
+ if target == 'paddr': # physical memory address space
+ if not init_regions:
+ init_regions = [default_paddr_region()]
+ do_record(target, False, init_regions, new_attrs, orig_attrs)
+ elif not subprocess.call('which %s > /dev/null' % target_fields[0],
shell=True, executable='/bin/bash'):
do_record(target, True, init_regions, new_attrs, orig_attrs)
else:
--
2.17.1
From: SeongJae Park <[email protected]>
This commit updates the DAMON user space tool ('damo record') for NUMA
specific physical memory monitoring. With this change, users can
monitor accesses to the physical memory of a specific NUMA node.
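For example, recording accesses to the physical memory of NUMA node 1 could be
done as below (the node id is only an example):
    # cd <kernel>/tools/damon/
    # damo record paddr --numa_node 1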
Signed-off-by: SeongJae Park <[email protected]>
---
tools/damon/_paddr_layout.py | 158 +++++++++++++++++++++++++++++++++++
tools/damon/record.py | 21 ++++-
2 files changed, 178 insertions(+), 1 deletion(-)
create mode 100644 tools/damon/_paddr_layout.py
diff --git a/tools/damon/_paddr_layout.py b/tools/damon/_paddr_layout.py
new file mode 100644
index 000000000000..10056172db21
--- /dev/null
+++ b/tools/damon/_paddr_layout.py
@@ -0,0 +1,158 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+import os
+
+class PaddrRange:
+ start = None
+ end = None
+ nid = None
+ state = None
+ name = None
+
+ def __init__(self, start, end, nid, state, name):
+ self.start = start
+ self.end = end
+ self.nid = nid
+ self.state = state
+ self.name = name
+
+ def interleaved(self, prange):
+ if self.end <= prange.start:
+ return None
+ if prange.end <= self.start:
+ return None
+ return [max(self.start, prange.start), min(self.end, prange.end)]
+
+ def __str__(self):
+ return '%x-%x, nid %s, state %s, name %s' % (self.start, self.end,
+ self.nid, self.state, self.name)
+
+class MemBlock:
+ nid = None
+ index = None
+ state = None
+
+ def __init__(self, nid, index, state):
+ self.nid = nid
+ self.index = index
+ self.state = state
+
+ def __str__(self):
+ return '%d (%s)' % (self.index, self.state)
+
+ def __repr__(self):
+ return self.__str__()
+
+def readfile(file_path):
+ with open(file_path, 'r') as f:
+ return f.read()
+
+def collapse_ranges(ranges):
+ ranges = sorted(ranges, key=lambda x: x.start)
+ merged = []
+ for r in ranges:
+ if not merged:
+ merged.append(r)
+ continue
+ last = merged[-1]
+ if last.end != r.start or last.nid != r.nid or last.state != r.state:
+ merged.append(r)
+ else:
+ last.end = r.end
+ return merged
+
+def memblocks_to_ranges(blocks, block_size):
+ ranges = []
+ for b in blocks:
+ ranges.append(PaddrRange(b.index * block_size,
+ (b.index + 1) * block_size, b.nid, b.state, None))
+
+ return collapse_ranges(ranges)
+
+def memblock_ranges():
+ SYSFS='/sys/devices/system/node'
+ sz_block = int(readfile('/sys/devices/system/memory/block_size_bytes'), 16)
+ sys_nodes = [x for x in os.listdir(SYSFS) if x.startswith('node')]
+
+ blocks = []
+ for sys_node in sys_nodes:
+ nid = int(sys_node[4:])
+
+ sys_node_files = os.listdir(os.path.join(SYSFS, sys_node))
+ for f in sys_node_files:
+ if not f.startswith('memory'):
+ continue
+ index = int(f[6:])
+ sys_state = os.path.join(SYSFS, sys_node, f, 'state')
+ state = readfile(sys_state).strip()
+
+ blocks.append(MemBlock(nid, index, state))
+
+ return memblocks_to_ranges(blocks, sz_block)
+
+def iomem_ranges():
+ ranges = []
+
+ with open('/proc/iomem', 'r') as f:
+ # example of the line: '100000000-42b201fff : System RAM'
+ for line in f:
+ fields = line.split(':')
+ if len(fields) < 2:
+ continue
+ name = ':'.join(fields[1:]).strip()
+ addrs = fields[0].split('-')
+ if len(addrs) != 2:
+ continue
+ start = int(addrs[0], 16)
+ end = int(addrs[1], 16) + 1
+ ranges.append(PaddrRange(start, end, None, None, name))
+
+ return ranges
+
+def paddr_ranges():
+ ranges1 = memblock_ranges()
+ ranges2 = iomem_ranges()
+ merged = []
+
+ for r in ranges1:
+ subsets = []
+ for r2 in ranges2:
+ interleaved = r.interleaved(r2)
+ if interleaved == None:
+ continue
+
+ start, end = interleaved
+ left = None
+ if start > r.start:
+ left = PaddrRange(r.start, start, r.nid, r.state, r.name)
+ subsets.append(left)
+
+ middle = PaddrRange(start, end, r.nid, r.state, r.name)
+ if r2.nid:
+ middle.nid = r2.nid
+ if r2.state:
+ middle.state = r2.state
+ if r2.name:
+ middle.name = r2.name
+ subsets.append(middle)
+ r.start = end
+ if r.start < r.end:
+ subsets = [r]
+
+ merged += subsets
+ return merged
+
+def pr_ranges(ranges):
+ print('#%12s %13s\tnode\tstate\tresource\tsize' % ('start', 'end'))
+ for r in ranges:
+ print('%13d %13d\t%s\t%s\t%s\t%d' % (r.start, r.end, r.nid,
+ r.state, r.name, r.end - r.start))
+
+def main():
+ ranges = paddr_ranges()
+
+ pr_ranges(ranges)
+
+if __name__ == '__main__':
+ main()
diff --git a/tools/damon/record.py b/tools/damon/record.py
index 416dca940c1d..8440a9818810 100644
--- a/tools/damon/record.py
+++ b/tools/damon/record.py
@@ -12,6 +12,7 @@ import subprocess
import time
import _damon
+import _paddr_layout
def do_record(target, is_target_cmd, init_regions, attrs, old_attrs):
if os.path.isfile(attrs.rfile_path):
@@ -70,6 +71,8 @@ def set_argparser(parser):
help='the target command or the pid to record')
parser.add_argument('-l', '--rbuf', metavar='<len>', type=int,
default=1024*1024, help='length of record result buffer')
+ parser.add_argument('--numa_node', metavar='<node id>', type=int,
+ help='if target is \'paddr\', limit it to the numa node')
parser.add_argument('-o', '--out', metavar='<file path>', type=str,
default='damon.data', help='output file path')
@@ -96,6 +99,18 @@ def default_paddr_region():
ret = [start, end]
return ret
+def paddr_region_of(numa_node):
+ regions = []
+ default_region = default_paddr_region()
+ paddr_ranges = _paddr_layout.paddr_ranges()
+ for r in paddr_ranges:
+ if r.end <= default_region[0] or default_region[1] <= r.start:
+ continue
+ if r.nid == numa_node and r.name == 'System RAM':
+ regions.append([r.start, r.end])
+
+ return regions
+
def main(args=None):
global orig_attrs
if not args:
@@ -113,12 +128,16 @@ def main(args=None):
args.schemes = ''
new_attrs = _damon.cmd_args_to_attrs(args)
init_regions = _damon.cmd_args_to_init_regions(args)
+ numa_node = args.numa_node
target = args.target
target_fields = target.split()
if target == 'paddr': # physical memory address space
if not init_regions:
- init_regions = [default_paddr_region()]
+ if numa_node:
+ init_regions = paddr_region_of(numa_node)
+ else:
+ init_regions = [default_paddr_region()]
do_record(target, False, init_regions, new_attrs, orig_attrs)
elif not subprocess.call('which %s > /dev/null' % target_fields[0],
shell=True, executable='/bin/bash'):
--
2.17.1