From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
	jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Cc: daniel.vetter@intel.com, andr2000@gmail.com, dongwon.kim@intel.com,
	matthew.d.roper@intel.com, Oleksandr Andrushchenko
Subject: [PATCH v5 2/8] xen/balloon: Share common memory reservation routines
Date: Fri, 20 Jul 2018 12:01:44 +0300
Message-Id: <20180720090150.24560-3-andr2000@gmail.com>
In-Reply-To: <20180720090150.24560-1-andr2000@gmail.com>
References: <20180720090150.24560-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko

Memory {increase|decrease}_reservation and VA mapping update/reset code
used in the balloon driver can be made common, so other drivers can
re-use the same functionality without open-coding it.

Create a dedicated file for the shared code and export the corresponding
symbols for use by other kernel modules.
Signed-off-by: Oleksandr Andrushchenko
Reviewed-by: Boris Ostrovsky
---
 drivers/xen/Makefile          |   1 +
 drivers/xen/balloon.c         |  75 ++-------------------
 drivers/xen/mem-reservation.c | 118 ++++++++++++++++++++++++++++++++++
 include/xen/mem-reservation.h |  59 +++++++++++++++++
 4 files changed, 184 insertions(+), 69 deletions(-)
 create mode 100644 drivers/xen/mem-reservation.c
 create mode 100644 include/xen/mem-reservation.h

diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 48b154276179..129dd1cc1b83 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,6 +2,7 @@
 obj-$(CONFIG_HOTPLUG_CPU)	+= cpu_hotplug.o
 obj-$(CONFIG_X86)		+= fallback.o
 obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o time.o
+obj-y	+= mem-reservation.o
 obj-y	+= events/
 obj-y	+= xenbus/
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 065f0b607373..e12bb256036f 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -71,6 +71,7 @@
 #include <xen/balloon.h>
 #include <xen/features.h>
 #include <xen/page.h>
+#include <xen/mem-reservation.h>
 
 static int xen_hotplug_unpopulated;
 
@@ -157,13 +158,6 @@ static DECLARE_DELAYED_WORK(balloon_worker, balloon_process);
 #define GFP_BALLOON \
 	(GFP_HIGHUSER | __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC)
 
-static void scrub_page(struct page *page)
-{
-#ifdef CONFIG_XEN_SCRUB_PAGES
-	clear_highpage(page);
-#endif
-}
-
 /* balloon_append: add the given page to the balloon. */
 static void __balloon_append(struct page *page)
 {
@@ -463,11 +457,6 @@ static enum bp_state increase_reservation(unsigned long nr_pages)
 	int rc;
 	unsigned long i;
 	struct page *page;
-	struct xen_memory_reservation reservation = {
-		.address_bits = 0,
-		.extent_order = EXTENT_ORDER,
-		.domid        = DOMID_SELF
-	};
 
 	if (nr_pages > ARRAY_SIZE(frame_list))
 		nr_pages = ARRAY_SIZE(frame_list);
@@ -479,16 +468,11 @@ static enum bp_state increase_reservation(unsigned long nr_pages)
 			break;
 		}
 
-		/* XENMEM_populate_physmap requires a PFN based on Xen
-		 * granularity.
-		 */
 		frame_list[i] = page_to_xen_pfn(page);
 		page = balloon_next_page(page);
 	}
 
-	set_xen_guest_handle(reservation.extent_start, frame_list);
-	reservation.nr_extents = nr_pages;
-	rc = HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
+	rc = xenmem_reservation_increase(nr_pages, frame_list);
 	if (rc <= 0)
 		return BP_EAGAIN;
 
@@ -496,29 +480,7 @@ static enum bp_state increase_reservation(unsigned long nr_pages)
 		page = balloon_retrieve(false);
 		BUG_ON(page == NULL);
 
-#ifdef CONFIG_XEN_HAVE_PVMMU
-		/*
-		 * We don't support PV MMU when Linux and Xen is using
-		 * different page granularity.
-		 */
-		BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
-
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			unsigned long pfn = page_to_pfn(page);
-
-			set_phys_to_machine(pfn, frame_list[i]);
-
-			/* Link back into the page tables if not highmem. */
-			if (!PageHighMem(page)) {
-				int ret;
-				ret = HYPERVISOR_update_va_mapping(
-						(unsigned long)__va(pfn << PAGE_SHIFT),
-						mfn_pte(frame_list[i], PAGE_KERNEL),
-						0);
-				BUG_ON(ret);
-			}
-		}
-#endif
+		xenmem_reservation_va_mapping_update(1, &page, &frame_list[i]);
 
 		/* Relinquish the page back to the allocator. */
 		free_reserved_page(page);
@@ -535,11 +497,6 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
 	unsigned long i;
 	struct page *page, *tmp;
 	int ret;
-	struct xen_memory_reservation reservation = {
-		.address_bits = 0,
-		.extent_order = EXTENT_ORDER,
-		.domid        = DOMID_SELF
-	};
 	LIST_HEAD(pages);
 
 	if (nr_pages > ARRAY_SIZE(frame_list))
@@ -553,7 +510,7 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
 			break;
 		}
 		adjust_managed_page_count(page, -1);
-		scrub_page(page);
+		xenmem_reservation_scrub_page(page);
 		list_add(&page->lru, &pages);
 	}
 
@@ -572,28 +529,10 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
 	 */
 	i = 0;
 	list_for_each_entry_safe(page, tmp, &pages, lru) {
-		/* XENMEM_decrease_reservation requires a GFN */
 		frame_list[i++] = xen_page_to_gfn(page);
 
-#ifdef CONFIG_XEN_HAVE_PVMMU
-		/*
-		 * We don't support PV MMU when Linux and Xen is using
-		 * different page granularity.
-		 */
-		BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
-
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			unsigned long pfn = page_to_pfn(page);
+		xenmem_reservation_va_mapping_reset(1, &page);
 
-			if (!PageHighMem(page)) {
-				ret = HYPERVISOR_update_va_mapping(
-						(unsigned long)__va(pfn << PAGE_SHIFT),
-						__pte_ma(0), 0);
-				BUG_ON(ret);
-			}
-			__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
-		}
-#endif
 		list_del(&page->lru);
 
 		balloon_append(page);
@@ -601,9 +540,7 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
 
 	flush_tlb_all();
 
-	set_xen_guest_handle(reservation.extent_start, frame_list);
-	reservation.nr_extents = nr_pages;
-	ret = HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
+	ret = xenmem_reservation_decrease(nr_pages, frame_list);
 	BUG_ON(ret != nr_pages);
 
 	balloon_stats.current_pages -= nr_pages;
diff --git a/drivers/xen/mem-reservation.c b/drivers/xen/mem-reservation.c
new file mode 100644
index 000000000000..084799c6180e
--- /dev/null
+++ b/drivers/xen/mem-reservation.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/******************************************************************************
+ * Xen memory reservation utilities.
+ *
+ * Copyright (c) 2003, B Dragovic
+ * Copyright (c) 2003-2004, M Williamson, K Fraser
+ * Copyright (c) 2005 Dan M. Smith, IBM Corporation
+ * Copyright (c) 2010 Daniel Kiper
+ * Copyright (c) 2018 Oleksandr Andrushchenko, EPAM Systems Inc.
+ */
+
+#include <asm/xen/hypercall.h>
+
+#include <xen/interface/memory.h>
+#include <xen/mem-reservation.h>
+
+/*
+ * Use one extent per PAGE_SIZE to avoid to break down the page into
+ * multiple frame.
+ */
+#define EXTENT_ORDER (fls(XEN_PFN_PER_PAGE) - 1)
+
+#ifdef CONFIG_XEN_HAVE_PVMMU
+void __xenmem_reservation_va_mapping_update(unsigned long count,
+					    struct page **pages,
+					    xen_pfn_t *frames)
+{
+	int i;
+
+	for (i = 0; i < count; i++) {
+		struct page *page = pages[i];
+		unsigned long pfn = page_to_pfn(page);
+
+		BUG_ON(!page);
+
+		/*
+		 * We don't support PV MMU when Linux and Xen is using
+		 * different page granularity.
+		 */
+		BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
+
+		set_phys_to_machine(pfn, frames[i]);
+
+		/* Link back into the page tables if not highmem. */
+		if (!PageHighMem(page)) {
+			int ret;
+
+			ret = HYPERVISOR_update_va_mapping(
+					(unsigned long)__va(pfn << PAGE_SHIFT),
+					mfn_pte(frames[i], PAGE_KERNEL),
+					0);
+			BUG_ON(ret);
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(__xenmem_reservation_va_mapping_update);
+
+void __xenmem_reservation_va_mapping_reset(unsigned long count,
+					   struct page **pages)
+{
+	int i;
+
+	for (i = 0; i < count; i++) {
+		struct page *page = pages[i];
+		unsigned long pfn = page_to_pfn(page);
+
+		/*
+		 * We don't support PV MMU when Linux and Xen are using
+		 * different page granularity.
+		 */
+		BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
+
+		if (!PageHighMem(page)) {
+			int ret;
+
+			ret = HYPERVISOR_update_va_mapping(
+					(unsigned long)__va(pfn << PAGE_SHIFT),
+					__pte_ma(0), 0);
+			BUG_ON(ret);
+		}
+		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
+	}
+}
+EXPORT_SYMBOL_GPL(__xenmem_reservation_va_mapping_reset);
+#endif /* CONFIG_XEN_HAVE_PVMMU */
+
+/* @frames is an array of PFNs */
+int xenmem_reservation_increase(int count, xen_pfn_t *frames)
+{
+	struct xen_memory_reservation reservation = {
+		.address_bits = 0,
+		.extent_order = EXTENT_ORDER,
+		.domid        = DOMID_SELF
+	};
+
+	/* XENMEM_populate_physmap requires a PFN based on Xen granularity. */
+	set_xen_guest_handle(reservation.extent_start, frames);
+	reservation.nr_extents = count;
+	return HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
+}
+EXPORT_SYMBOL_GPL(xenmem_reservation_increase);
+
+/* @frames is an array of GFNs */
+int xenmem_reservation_decrease(int count, xen_pfn_t *frames)
+{
+	struct xen_memory_reservation reservation = {
+		.address_bits = 0,
+		.extent_order = EXTENT_ORDER,
+		.domid        = DOMID_SELF
+	};
+
+	/* XENMEM_decrease_reservation requires a GFN */
+	set_xen_guest_handle(reservation.extent_start, frames);
+	reservation.nr_extents = count;
+	return HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
+}
+EXPORT_SYMBOL_GPL(xenmem_reservation_decrease);
diff --git a/include/xen/mem-reservation.h b/include/xen/mem-reservation.h
new file mode 100644
index 000000000000..80b52b4945e9
--- /dev/null
+++ b/include/xen/mem-reservation.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Xen memory reservation utilities.
+ *
+ * Copyright (c) 2003, B Dragovic
+ * Copyright (c) 2003-2004, M Williamson, K Fraser
+ * Copyright (c) 2005 Dan M. Smith, IBM Corporation
+ * Copyright (c) 2010 Daniel Kiper
+ * Copyright (c) 2018 Oleksandr Andrushchenko, EPAM Systems Inc.
+ */
+
+#ifndef _XENMEM_RESERVATION_H
+#define _XENMEM_RESERVATION_H
+
+#include <linux/highmem.h>
+
+#include <xen/page.h>
+
+static inline void xenmem_reservation_scrub_page(struct page *page)
+{
+#ifdef CONFIG_XEN_SCRUB_PAGES
+	clear_highpage(page);
+#endif
+}
+
+#ifdef CONFIG_XEN_HAVE_PVMMU
+void __xenmem_reservation_va_mapping_update(unsigned long count,
+					    struct page **pages,
+					    xen_pfn_t *frames);
+
+void __xenmem_reservation_va_mapping_reset(unsigned long count,
+					   struct page **pages);
+#endif
+
+static inline void xenmem_reservation_va_mapping_update(unsigned long count,
+							struct page **pages,
+							xen_pfn_t *frames)
+{
+#ifdef CONFIG_XEN_HAVE_PVMMU
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		__xenmem_reservation_va_mapping_update(count, pages, frames);
+#endif
+}
+
+static inline void xenmem_reservation_va_mapping_reset(unsigned long count,
+						       struct page **pages)
+{
+#ifdef CONFIG_XEN_HAVE_PVMMU
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		__xenmem_reservation_va_mapping_reset(count, pages);
+#endif
+}
+
+int xenmem_reservation_increase(int count, xen_pfn_t *frames);
+
+int xenmem_reservation_decrease(int count, xen_pfn_t *frames);
+
+#endif
-- 
2.18.0