From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne , Juergen Gross , Boris Ostrovsky , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H.
Peter Anvin" , Stefano Stabellini , Oleksandr Tyshchenko , linux-kernel@vger.kernel.org, Jan Beulich , Andrew Cooper
Subject: [PATCH RFC] x86/xen: attempt to inflate the memory balloon on PVH
Date: Tue, 20 Feb 2024 18:43:41 +0100
Message-ID: <20240220174341.56131-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.43.0

When running as PVH or HVM Linux will use holes in the memory map as scratch
space to map grants, foreign domain pages and possibly other miscellaneous
stuff.  However the usage of such memory map holes for Xen purposes can be
problematic.

Xen's requests for holes happen quite early in the kernel boot process (grant
table setup already uses scratch map space), and it's possible that by then
not all devices have reclaimed their MMIO space.  It's not unlikely for
chunks of Xen scratch map space to end up using PCI bridge MMIO window
memory, which (as expected) causes quite a lot of issues in the system.

At least for PVH dom0 we have the possibility of using regions marked as
UNUSABLE in the e820 memory map.  Whether the region is UNUSABLE in the
native memory map, or has been converted into UNUSABLE in order to hide RAM
regions from dom0, the second stage translation page-tables can populate
those areas without issues.

PV already has this kind of logic, where the balloon driver is inflated at
boot.  Re-use the current logic in order to also inflate it when running as
PVH.

Convert UNUSABLE regions up to the ratio specified in EXTRA_MEM_RATIO to RAM,
while reserving them using xen_add_extra_mem() (which is also moved so it's
no longer tied to CONFIG_PV).
Signed-off-by: Roger Pau Monné
---
RFC reasons:

 * Note that it would be preferred for the hypervisor to provide an
   explicit range to be used as scratch mapping space, but that requires
   changes to Xen, and it's not fully clear whether Xen can figure out the
   position of all MMIO regions at boot in order to suggest a scratch
   mapping region for dom0.

 * Should the whole set of xen_{add,del,chk,inv}_extra_mem() functions be
   moved to a different file?  For the purposes of PVH only
   xen_add_extra_mem() is moved; the chk and inv ones are PV specific and
   might not be worth moving to a separate file just to guard them with
   CONFIG_PV.
---
 arch/x86/include/asm/xen/hypervisor.h |  1 +
 arch/x86/platform/pvh/enlighten.c     |  3 ++
 arch/x86/xen/enlighten.c              | 32 +++++++++++++
 arch/x86/xen/enlighten_pvh.c          | 68 +++++++++++++++++++++++++++
 arch/x86/xen/setup.c                  | 44 -----------------
 arch/x86/xen/xen-ops.h                | 14 ++++++
 drivers/xen/balloon.c                 |  2 -
 7 files changed, 118 insertions(+), 46 deletions(-)

diff --git a/arch/x86/include/asm/xen/hypervisor.h b/arch/x86/include/asm/xen/hypervisor.h
index a9088250770f..31e2bf8d5db7 100644
--- a/arch/x86/include/asm/xen/hypervisor.h
+++ b/arch/x86/include/asm/xen/hypervisor.h
@@ -62,6 +62,7 @@ void xen_arch_unregister_cpu(int num);
 #ifdef CONFIG_PVH
 void __init xen_pvh_init(struct boot_params *boot_params);
 void __init mem_map_via_hcall(struct boot_params *boot_params_p);
+void __init xen_reserve_extra_memory(struct boot_params *bootp);
 #endif
 
 /* Lazy mode for batching updates / context switch */
diff --git a/arch/x86/platform/pvh/enlighten.c b/arch/x86/platform/pvh/enlighten.c
index 00a92cb2c814..a12117f3d4de 100644
--- a/arch/x86/platform/pvh/enlighten.c
+++ b/arch/x86/platform/pvh/enlighten.c
@@ -74,6 +74,9 @@ static void __init init_pvh_bootparams(bool xen_guest)
 	} else
 		xen_raw_printk("Warning: Can fit ISA range into e820\n");
 
+	if (xen_guest)
+		xen_reserve_extra_memory(&pvh_bootparams);
+
 	pvh_bootparams.hdr.cmd_line_ptr = pvh_start_info.cmdline_paddr;
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 3c61bb98c10e..a01ca255b0c6 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -350,3 +351,34 @@ void xen_arch_unregister_cpu(int num)
 }
 EXPORT_SYMBOL(xen_arch_unregister_cpu);
 #endif
+
+/* Amount of extra memory space we add to the e820 ranges */
+struct xen_memory_region xen_extra_mem[XEN_EXTRA_MEM_MAX_REGIONS] __initdata;
+
+void __init xen_add_extra_mem(unsigned long start_pfn, unsigned long n_pfns)
+{
+	unsigned int i;
+
+	/*
+	 * No need to check for zero size, should happen rarely and will only
+	 * write a new entry regarded to be unused due to zero size.
+	 */
+	for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
+		/* Add new region. */
+		if (xen_extra_mem[i].n_pfns == 0) {
+			xen_extra_mem[i].start_pfn = start_pfn;
+			xen_extra_mem[i].n_pfns = n_pfns;
+			break;
+		}
+		/* Append to existing region. */
+		if (xen_extra_mem[i].start_pfn + xen_extra_mem[i].n_pfns ==
+		    start_pfn) {
+			xen_extra_mem[i].n_pfns += n_pfns;
+			break;
+		}
+	}
+	if (i == XEN_EXTRA_MEM_MAX_REGIONS)
+		printk(KERN_WARNING "Warning: not enough extra memory regions\n");
+
+	memblock_reserve(PFN_PHYS(start_pfn), PFN_PHYS(n_pfns));
+}
diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
index ada3868c02c2..c28f073c1df5 100644
--- a/arch/x86/xen/enlighten_pvh.c
+++ b/arch/x86/xen/enlighten_pvh.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #include
 #include
+#include
 
 #include
 
@@ -72,3 +73,70 @@ void __init mem_map_via_hcall(struct boot_params *boot_params_p)
 	}
 	boot_params_p->e820_entries = memmap.nr_entries;
 }
+
+/*
+ * Reserve e820 UNUSABLE regions to inflate the memory balloon.
+ *
+ * On PVH dom0 the host memory map is used, RAM regions available to dom0
+ * are located at the same place as in the native memory map, but since dom0
+ * gets less memory than the total amount of host RAM the ranges that can't
+ * be populated are converted from RAM -> UNUSABLE.  Use such regions (up to
+ * the ratio signaled in EXTRA_MEM_RATIO) in order to inflate the balloon
+ * driver at boot.  Doing so prevents the guest (even if just temporarily)
+ * from using holes in the memory map in order to map grants or foreign
+ * addresses, and hopefully limits the risk of a clash with a device MMIO
+ * region.  Ideally the hypervisor should notify us which memory ranges are
+ * suitable for creating foreign mappings, but that's not yet implemented.
+ */
+void __init xen_reserve_extra_memory(struct boot_params *bootp)
+{
+	unsigned int i, ram_pages = 0, extra_pages;
+
+	for (i = 0; i < bootp->e820_entries; i++) {
+		struct boot_e820_entry *e = &bootp->e820_table[i];
+
+		if (e->type != E820_TYPE_RAM)
+			continue;
+		ram_pages += PFN_DOWN(e->addr + e->size) - PFN_UP(e->addr);
+	}
+
+	/* Max amount of extra memory. */
+	extra_pages = EXTRA_MEM_RATIO * ram_pages;
+
+	/*
+	 * Convert UNUSABLE ranges to RAM and reserve them for foreign mapping
+	 * purposes.
+	 */
+	for (i = 0; i < bootp->e820_entries && extra_pages; i++) {
+		struct boot_e820_entry *e = &bootp->e820_table[i];
+		unsigned long pages;
+
+		if (e->type != E820_TYPE_UNUSABLE)
+			continue;
+
+		pages = min(extra_pages,
+			    PFN_DOWN(e->addr + e->size) - PFN_UP(e->addr));
+
+		if (pages != (PFN_DOWN(e->addr + e->size) - PFN_UP(e->addr))) {
+			struct boot_e820_entry *next;
+
+			if (bootp->e820_entries ==
+			    ARRAY_SIZE(bootp->e820_table))
+				/* No space left to split - skip region. */
+				continue;
+
+			/* Split entry. */
+			next = e + 1;
+			memmove(next, e,
+				(bootp->e820_entries - i) * sizeof(*e));
+			bootp->e820_entries++;
+			next->addr = PAGE_ALIGN(e->addr) + PFN_PHYS(pages);
+			e->size = next->addr - e->addr;
+			next->size -= e->size;
+		}
+		e->type = E820_TYPE_RAM;
+		extra_pages -= pages;
+
+		xen_add_extra_mem(PFN_UP(e->addr), pages);
+	}
+}
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index b3e37961065a..380591028cb8 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -38,9 +38,6 @@
 
 #define GB(x) ((uint64_t)(x) * 1024 * 1024 * 1024)
 
-/* Amount of extra memory space we add to the e820 ranges */
-struct xen_memory_region xen_extra_mem[XEN_EXTRA_MEM_MAX_REGIONS] __initdata;
-
 /* Number of pages released from the initial allocation. */
 unsigned long xen_released_pages;
@@ -64,18 +61,6 @@ static struct {
 } xen_remap_buf __initdata __aligned(PAGE_SIZE);
 static unsigned long xen_remap_mfn __initdata = INVALID_P2M_ENTRY;
 
-/*
- * The maximum amount of extra memory compared to the base size. The
- * main scaling factor is the size of struct page. At extreme ratios
- * of base:extra, all the base memory can be filled with page
- * structures for the extra memory, leaving no space for anything
- * else.
- *
- * 10x seems like a reasonable balance between scaling flexibility and
- * leaving a practically usable system.
- */
-#define EXTRA_MEM_RATIO (10)
-
 static bool xen_512gb_limit __initdata = IS_ENABLED(CONFIG_XEN_512GB);
 
 static void __init xen_parse_512gb(void)
@@ -96,35 +81,6 @@ static void __init xen_parse_512gb(void)
 	xen_512gb_limit = val;
 }
 
-static void __init xen_add_extra_mem(unsigned long start_pfn,
-				     unsigned long n_pfns)
-{
-	int i;
-
-	/*
-	 * No need to check for zero size, should happen rarely and will only
-	 * write a new entry regarded to be unused due to zero size.
-	 */
-	for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
-		/* Add new region. */
-		if (xen_extra_mem[i].n_pfns == 0) {
-			xen_extra_mem[i].start_pfn = start_pfn;
-			xen_extra_mem[i].n_pfns = n_pfns;
-			break;
-		}
-		/* Append to existing region. */
-		if (xen_extra_mem[i].start_pfn + xen_extra_mem[i].n_pfns ==
-		    start_pfn) {
-			xen_extra_mem[i].n_pfns += n_pfns;
-			break;
-		}
-	}
-	if (i == XEN_EXTRA_MEM_MAX_REGIONS)
-		printk(KERN_WARNING "Warning: not enough extra memory regions\n");
-
-	memblock_reserve(PFN_PHYS(start_pfn), PFN_PHYS(n_pfns));
-}
-
 static void __init xen_del_extra_mem(unsigned long start_pfn,
 				     unsigned long n_pfns)
 {
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index a87ab36889e7..79cf93f2c92f 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -163,4 +163,18 @@ void xen_hvm_post_suspend(int suspend_cancelled);
 static inline void xen_hvm_post_suspend(int suspend_cancelled) {}
 #endif
 
+/*
+ * The maximum amount of extra memory compared to the base size. The
+ * main scaling factor is the size of struct page. At extreme ratios
+ * of base:extra, all the base memory can be filled with page
+ * structures for the extra memory, leaving no space for anything
+ * else.
+ *
+ * 10x seems like a reasonable balance between scaling flexibility and
+ * leaving a practically usable system.
+ */
+#define EXTRA_MEM_RATIO (10)
+
+void xen_add_extra_mem(unsigned long start_pfn, unsigned long n_pfns);
+
 #endif /* XEN_OPS_H */
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 976c6cdf9ee6..aaf2514fcfa4 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -672,7 +672,6 @@ EXPORT_SYMBOL(xen_free_ballooned_pages);
 
 static void __init balloon_add_regions(void)
 {
-#if defined(CONFIG_XEN_PV)
 	unsigned long start_pfn, pages;
 	unsigned long pfn, extra_pfn_end;
 	unsigned int i;
@@ -696,7 +695,6 @@ static void __init balloon_add_regions(void)
 		balloon_stats.total_pages += extra_pfn_end - start_pfn;
 	}
-#endif
 }
 
 static int __init balloon_init(void)
-- 
2.43.0