From: SeongJae Park <sjpark@amazon.com>
To: ...
CC: SeongJae Park, ...
Subject: [RFC v4 5/8] mm/damon: Implement callbacks for physical memory monitoring
Date: Tue, 16 Jun 2020 16:08:10 +0200
Message-ID: <20200616140813.17863-6-sjpark@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200616140813.17863-1-sjpark@amazon.com>
References: <20200616140813.17863-1-sjpark@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

From: SeongJae Park <sjpark@amazon.com>

This commit implements the four callbacks (->init_target_regions,
->update_target_regions, ->prepare_access_checks, and ->check_accesses)
for basic access monitoring of the physical memory address space.  By
setting the callback pointers of a damon_ctx to point to these
functions, users can easily monitor accesses to the physical memory
address space.
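For example, the callback pointers could be wired up as below.  This is
only a minimal sketch, not part of the patch: the wrapper function name
is made up for illustration, and the construction of the initial
monitoring target regions is left out, since the default
'->init_target_regions' callback for the physical address space
intentionally does nothing (see the patch below).

	/*
	 * Hypothetical example: configure a damon_ctx to monitor the
	 * physical address space.  Only the four callback assignments
	 * come from this patchset; the function itself is made up for
	 * illustration.
	 */
	static void example_phys_setup(struct damon_ctx *ctx)
	{
		ctx->init_target_regions = kdamond_init_phys_regions;
		ctx->update_target_regions = kdamond_update_phys_regions;
		ctx->prepare_access_checks = kdamond_prepare_phys_access_checks;
		ctx->check_accesses = kdamond_check_phys_accesses;

		/*
		 * kdamond_init_phys_regions() is a no-op, so the initial
		 * regions covering the physical address range of interest
		 * must be set by the caller before the monitoring starts.
		 */
	}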
Internally, it uses the PTE Accessed bit, similarly to the virtual
memory support.  Also, it supports only user memory pages, as idle page
tracking does, for the same reason.  If the monitoring target physical
memory address range contains non-user memory pages, the access check
simply treats those pages as not accessed.  Users who want to use other
access check primitives and/or monitor non-user memory regions could
implement and use their own callbacks.

Signed-off-by: SeongJae Park <sjpark@amazon.com>
---
 include/linux/damon.h |   5 ++
 mm/damon.c            | 201 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 206 insertions(+)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 076852bab7aa..6c0e9bb35a1f 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -227,6 +227,11 @@ void kdamond_update_vm_regions(struct damon_ctx *ctx);
 void kdamond_prepare_vm_access_checks(struct damon_ctx *ctx);
 unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx);
 
+void kdamond_init_phys_regions(struct damon_ctx *ctx);
+void kdamond_update_phys_regions(struct damon_ctx *ctx);
+void kdamond_prepare_phys_access_checks(struct damon_ctx *ctx);
+unsigned int kdamond_check_phys_accesses(struct damon_ctx *ctx);
+
 int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids);
 int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
 		unsigned long aggr_int, unsigned long regions_update_int,
diff --git a/mm/damon.c b/mm/damon.c
index ab115db1f20c..fdf3425befb2 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -27,10 +27,13 @@
 #include <linux/debugfs.h>
 #include <linux/delay.h>
 #include <linux/kthread.h>
+#include <linux/memory_hotplug.h>
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/page_idle.h>
+#include <linux/pagemap.h>
 #include <linux/random.h>
+#include <linux/rmap.h>
 #include <linux/sched/mm.h>
 #include <linux/sched/task.h>
 #include <linux/slab.h>
@@ -534,6 +537,18 @@ void kdamond_init_vm_regions(struct damon_ctx *ctx)
 	}
 }
 
+/*
+ * The initial regions construction function for the physical address space.
+ *
+ * This default version does nothing in practice.  Users should set the
+ * initial regions by themselves before passing their damon_ctx to
+ * 'start_damon()', or implement their version of this and set the
+ * '->init_target_regions' of their damon_ctx to point to it.
+ */
+void kdamond_init_phys_regions(struct damon_ctx *ctx)
+{
+}
+
 /*
  * Functions for the dynamic monitoring target regions update
  */
@@ -617,6 +632,19 @@ void kdamond_update_vm_regions(struct damon_ctx *ctx)
 	}
 }
 
+/*
+ * The dynamic monitoring target regions update function for the physical
+ * address space.
+ *
+ * This default version does nothing in practice.  Users should update the
+ * regions in other callbacks such as '->aggregate_cb', or implement their
+ * version of this and set the '->update_target_regions' of their damon_ctx
+ * to point to it.
+ */
+void kdamond_update_phys_regions(struct damon_ctx *ctx)
+{
+}
+
 /*
  * Functions for the access checking of the regions
  */
@@ -752,6 +780,179 @@ unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx)
 	return max_nr_accesses;
 }
 
+/* access check functions for physical address based regions */
+
+/*
+ * Get a page by pfn if it is in the LRU list.  Otherwise, returns NULL.
+ *
+ * The body of this function is stolen from 'page_idle_get_page()'.  We steal
+ * it rather than reuse it because the code is quite simple.
+ */
+static struct page *damon_phys_get_page(unsigned long pfn)
+{
+	struct page *page = pfn_to_online_page(pfn);
+	pg_data_t *pgdat;
+
+	if (!page || !PageLRU(page) ||
+	    !get_page_unless_zero(page))
+		return NULL;
+
+	pgdat = page_pgdat(page);
+	spin_lock_irq(&pgdat->lru_lock);
+	if (unlikely(!PageLRU(page))) {
+		put_page(page);
+		page = NULL;
+	}
+	spin_unlock_irq(&pgdat->lru_lock);
+	return page;
+}
+
+static bool damon_page_mkold(struct page *page, struct vm_area_struct *vma,
+		unsigned long addr, void *arg)
+{
+	damon_mkold(vma->vm_mm, addr);
+	return true;
+}
+
+static void damon_phys_mkold(unsigned long paddr)
+{
+	struct page *page = damon_phys_get_page(PHYS_PFN(paddr));
+	struct rmap_walk_control rwc = {
+		.rmap_one = damon_page_mkold,
+		.anon_lock = page_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!page)
+		return;
+
+	if (!page_mapped(page) || !page_rmapping(page))
+		return;
+
+	need_lock = !PageAnon(page) || PageKsm(page);
+	if (need_lock && !trylock_page(page))
+		return;
+
+	rmap_walk(page, &rwc);
+
+	if (need_lock)
+		unlock_page(page);
+	put_page(page);
+}
+
+static void damon_prepare_phys_access_check(struct damon_ctx *ctx,
+		struct damon_region *r)
+{
+	r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
+
+	damon_phys_mkold(r->sampling_addr);
+}
+
+void kdamond_prepare_phys_access_checks(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+
+	damon_for_each_task(t, ctx) {
+		damon_for_each_region(r, t)
+			damon_prepare_phys_access_check(ctx, r);
+	}
+}
+
+struct damon_phys_access_chk_result {
+	unsigned long page_sz;
+	bool accessed;
+};
+
+static bool damon_page_accessed(struct page *page, struct vm_area_struct *vma,
+		unsigned long addr, void *arg)
+{
+	struct damon_phys_access_chk_result *result = arg;
+
+	result->accessed = damon_young(vma->vm_mm, addr, &result->page_sz);
+
+	/* If accessed, stop walking */
+	return !result->accessed;
+}
+
+static bool damon_phys_young(unsigned long paddr, unsigned long *page_sz)
+{
+	struct page *page = damon_phys_get_page(PHYS_PFN(paddr));
+	struct damon_phys_access_chk_result result = {
+		.page_sz = PAGE_SIZE,
+		.accessed = false,
+	};
+	struct rmap_walk_control rwc = {
+		.arg = &result,
+		.rmap_one = damon_page_accessed,
+		.anon_lock = page_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!page)
+		return false;
+
+	if (!page_mapped(page) || !page_rmapping(page))
+		return false;
+
+	need_lock = !PageAnon(page) || PageKsm(page);
+	if (need_lock && !trylock_page(page))
+		return false;
+
+	rmap_walk(page, &rwc);
+
+	if (need_lock)
+		unlock_page(page);
+	put_page(page);
+
+	*page_sz = result.page_sz;
+	return result.accessed;
+}
+
+/*
+ * Check whether the region was accessed after the last preparation
+ *
+ * ctx	the damon_ctx that this check is done for
+ * r	the region of the physical address space that needs to be checked
+ */
+static void damon_check_phys_access(struct damon_ctx *ctx,
+		struct damon_region *r)
+{
+	static unsigned long last_addr;
+	static unsigned long last_page_sz = PAGE_SIZE;
+	static bool last_accessed;
+
+	/* If the region is in the last checked page, reuse the result */
+	if (ALIGN_DOWN(last_addr, last_page_sz) ==
+			ALIGN_DOWN(r->sampling_addr, last_page_sz)) {
+		if (last_accessed)
+			r->nr_accesses++;
+		return;
+	}
+
+	last_accessed = damon_phys_young(r->sampling_addr, &last_page_sz);
+	if (last_accessed)
+		r->nr_accesses++;
+
+	last_addr = r->sampling_addr;
+}
+
+unsigned int kdamond_check_phys_accesses(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+	unsigned int max_nr_accesses = 0;
+
+	damon_for_each_task(t, ctx) {
+		damon_for_each_region(r, t) {
+			damon_check_phys_access(ctx, r);
+			max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
+		}
+	}
+
+	return max_nr_accesses;
+}
+
 /*
  * Functions for DAMON core logics and features
  */
-- 
2.17.1