Date: Sat, 13 Aug 2022 08:34:10 +0800
From: Baoquan He
To: Eric DeVolder
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org,
	ebiederm@xmission.com, dyoung@redhat.com, vgoyal@redhat.com,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, hpa@zytor.com, nramas@linux.microsoft.com,
	thomas.lendacky@amd.com, robh@kernel.org, efault@gmx.de,
	rppt@kernel.org, david@redhat.com, sourabhjain@linux.ibm.com,
	konrad.wilk@oracle.com, boris.ostrovsky@oracle.com
Subject: Re: [PATCH v10 8/8] x86/crash: Add x86 crash hotplug support
Message-ID:
References: <20220721181747.1640-1-eric.devolder@oracle.com>
 <20220721181747.1640-9-eric.devolder@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220721181747.1640-9-eric.devolder@oracle.com>

On 07/21/22 at 02:17pm, Eric DeVolder wrote:
...snip...
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index e58798f636d4..bb59596c8bea 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -2065,6 +2065,17 @@ config CRASH_DUMP
> 	  (CONFIG_RELOCATABLE=y).
> 	  For more details see Documentation/admin-guide/kdump/kdump.rst
>
> +config CRASH_MAX_MEMORY_RANGES
> +	depends on CRASH_DUMP && KEXEC_FILE && (HOTPLUG_CPU || MEMORY_HOTPLUG)
> +	int
> +	default 32768

Do we need to enforce page alignment and a minimum size for this value?
I checked crash_load_segments() in arch/x86/kernel/crash.c: the
page-size alignment is done in kexec_add_buffer(). And in
load_crashdump_segments() of kexec-tools/kexec/arch/i386/crashdump-x86.c,
the elfcorehdr is created by the code below with an align of 1024, while
the generic add_buffer() implementation enforces a page-aligned memsz and
raises the passed-in align to page alignment.

	elfcorehdr = add_buffer(info, tmp, bufsz, memsz, align,
				min_base, max_addr, -1);

Maybe we should at least mention this in the help text to make people
aware of it.

> +	help
> +	  For the kexec_file_load path, specify the maximum number of
> +	  memory regions, eg. as represented by the 'System RAM' entries
> +	  in /proc/iomem, that the elfcorehdr buffer/segment can accommodate.
> +	  This value is combined with NR_CPUS and multiplied by Elf64_Phdr
> +	  size to determine the final buffer size.
> +
>  config KEXEC_JUMP
> 	bool "kexec jump"
> 	depends on KEXEC && HIBERNATION
> diff --git a/arch/x86/include/asm/crash.h b/arch/x86/include/asm/crash.h
> index 8b6bd63530dc..96051d8e4b45 100644
> --- a/arch/x86/include/asm/crash.h
> +++ b/arch/x86/include/asm/crash.h
> @@ -9,4 +9,24 @@ int crash_setup_memmap_entries(struct kimage *image,
> 		struct boot_params *params);
>  void crash_smp_send_stop(void);
>
> +void *arch_map_crash_pages(unsigned long paddr, unsigned long size);
> +#define arch_map_crash_pages arch_map_crash_pages
> +
> +void arch_unmap_crash_pages(void **ptr);
> +#define arch_unmap_crash_pages arch_unmap_crash_pages
> +
> +void arch_crash_handle_hotplug_event(struct kimage *image,
> +		unsigned int hp_action, unsigned int cpu);
> +#define arch_crash_handle_hotplug_event arch_crash_handle_hotplug_event
> +
> +#ifdef CONFIG_HOTPLUG_CPU
> +static inline int crash_hotplug_cpu_support(void) { return 1; }
> +#define crash_hotplug_cpu_support crash_hotplug_cpu_support
> +#endif
> +
> +#ifdef CONFIG_MEMORY_HOTPLUG
> +static inline int crash_hotplug_memory_support(void) { return 1; }
> +#define crash_hotplug_memory_support crash_hotplug_memory_support
> +#endif
> +
>  #endif /* _ASM_X86_CRASH_H */
> diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
> index 9ceb93c176a6..55dda4fcde6e 100644
> --- a/arch/x86/kernel/crash.c
> +++ b/arch/x86/kernel/crash.c
> @@ -25,6 +25,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>  #include
> @@ -397,7 +398,17 @@ int crash_load_segments(struct kimage *image)
> 	image->elf_headers = kbuf.buffer;
> 	image->elf_headers_sz = kbuf.bufsz;
>
> +#if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_MEMORY_HOTPLUG)
> +	/* Ensure elfcorehdr segment large enough for hotplug changes */
> +	kbuf.memsz = (CONFIG_NR_CPUS_DEFAULT + CONFIG_CRASH_MAX_MEMORY_RANGES) * sizeof(Elf64_Phdr);

Do we need to break this line at 80 characters?
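Just to illustrate what I mean, a small sketch only; the helper name
crash_hotplug_elfcorehdr_sz() is made up by me and is not in the patch,
and ALIGN()/PAGE_SIZE/Elf64_Phdr are the usual kernel definitions already
visible in arch/x86/kernel/crash.c:

	/*
	 * Hypothetical helper (not in the patch): worst-case elfcorehdr
	 * segment size for the hotplug case, computed in one place.
	 */
	static unsigned long crash_hotplug_elfcorehdr_sz(void)
	{
		unsigned long sz;

		/* one Elf64_Phdr per possible CPU note and memory range */
		sz = (CONFIG_NR_CPUS_DEFAULT + CONFIG_CRASH_MAX_MEMORY_RANGES)
		     * sizeof(Elf64_Phdr);

		/* kexec_add_buffer() will page-align memsz anyway */
		return ALIGN(sz, PAGE_SIZE);
	}

Then the assignment above becomes a short
"kbuf.memsz = crash_hotplug_elfcorehdr_sz();" and stays within 80
columns.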
> +	/* For marking as usable to crash kernel */
> +	image->elf_headers_sz = kbuf.memsz;

Do we need this code comment?

> +	/* Record the index of the elfcorehdr segment */
> +	image->elfcorehdr_index = image->nr_segments;

And this one?

> +	image->elfcorehdr_index_valid = true;
> +#else
> 	kbuf.memsz = kbuf.bufsz;
> +#endif
> 	kbuf.buf_align = ELF_CORE_HEADER_ALIGN;
> 	kbuf.mem = KEXEC_BUF_MEM_UNKNOWN;
> 	ret = kexec_add_buffer(&kbuf);
> @@ -412,3 +423,107 @@ int crash_load_segments(struct kimage *image)
> 	return ret;
>  }
>  #endif /* CONFIG_KEXEC_FILE */
> +
> +#if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_MEMORY_HOTPLUG)
> +void *arch_map_crash_pages(unsigned long paddr, unsigned long size)
> +{
> +	/*
> +	 * NOTE: The addresses and sizes passed to this routine have
> +	 * already been fully aligned on page boundaries. There is no
> +	 * need for massaging the address or size.
> +	 */

Can we move this code comment above the function definition?

> +	void *ptr = NULL;
> +
> +	/* NOTE: requires arch_kexec_[un]protect_crashkres() for write access */

Do we need this code comment? On architectures where the crashkernel
region is protected, we surely need the protect/unprotect calls.

> +	if (size > 0) {
> +		struct page *page = pfn_to_page(paddr >> PAGE_SHIFT);
> +
> +		ptr = kmap_local_page(page);
> +	}
> +
> +	return ptr;
> +}
> +
> +void arch_unmap_crash_pages(void **ptr)
> +{
> +	if (ptr) {
> +		if (*ptr)
> +			kunmap_local(*ptr);
> +		*ptr = NULL;
> +	}
> +}
> +
> +/**
> + * arch_crash_handle_hotplug_event() - Handle hotplug elfcorehdr changes
> + * @image: the active struct kimage
> + * @hp_action: the hot un/plug action being handled
> + * @cpu: when KEXEC_CRASH_HP_ADD/REMOVE_CPU, the cpu affected
> + *
> + * To accurately reflect hot un/plug changes, the elfcorehdr (which
> + * is passed to the crash kernel via the elfcorehdr= parameter)
> + * must be updated with the new list of CPUs and memories. The new
> + * elfcorehdr is prepared in a kernel buffer, and then it is
> + * written on top of the existing/old elfcorehdr.
> + *
> + * For hotplug changes to elfcorehdr to work, two conditions are
> + * needed:
> + * First, the segment containing the elfcorehdr must be large enough
> + * to permit a growing number of resources. See the
> + * CONFIG_CRASH_MAX_MEMORY_RANGES description.
> + * Second, purgatory must explicitly exclude the elfcorehdr from the
> + * list of segments it checks (since the elfcorehdr changes and thus
> + * would require an update to purgatory itself to update the digest).

Isn't this a generic concept for crash hotplug? Should we move it out
to some generic place?

> + *
> + */
> +void arch_crash_handle_hotplug_event(struct kimage *image,
> +		unsigned int hp_action, unsigned int cpu)

The passed-in 'cpu' is not used at all; what is it added for? I didn't
see an explanation for it.

> +{
> +	struct kexec_segment *ksegment;
> +	unsigned char *ptr = NULL;
> +	unsigned long elfsz = 0;
> +	void *elfbuf = NULL;
> +	unsigned long mem, memsz;
> +
> +	/*
> +	 * Elfcorehdr_index_valid checked in crash_core:handle_hotplug_event()
> +	 */
> +	ksegment = &image->segment[image->elfcorehdr_index];
> +	mem = ksegment->mem;
> +	memsz = ksegment->memsz;
> +
> +	/*
> +	 * Create the new elfcorehdr reflecting the changes to CPU and/or
> +	 * memory resources.
> +	 */
> +	if (prepare_elf_headers(image, &elfbuf, &elfsz)) {
> +		pr_err("crash hp: unable to prepare elfcore headers");
> +		goto out;
> +	}
> +	if (elfsz > memsz) {
> +		pr_err("crash hp: update elfcorehdr elfsz %lu > memsz %lu",
> +			elfsz, memsz);
> +		goto out;
> +	}
> +
> +	/*
> +	 * At this point, we are all but assured of success.
> +	 * Copy new elfcorehdr into destination.
> +	 */
> +	ptr = arch_map_crash_pages(mem, memsz);
> +	if (ptr) {
> +		/*
> +		 * Temporarily invalidate the crash image while the
> +		 * elfcorehdr is updated.
> +		 */
> +		xchg(&kexec_crash_image, NULL);
> +		memcpy_flushcache((void *)ptr, elfbuf, elfsz);
> +		xchg(&kexec_crash_image, image);
> +	}
> +	arch_unmap_crash_pages((void **)&ptr);
> +	pr_debug("crash hp: re-loaded elfcorehdr at 0x%lx\n", mem);
> +
> +out:
> +	if (elfbuf)
> +		vfree(elfbuf);
> +}
> +#endif
> --
> 2.31.1
>
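
One more thought on the arch_kexec_[un]protect_crashkres() comment
above: on architectures that do protect the crashkernel region, I would
expect the caller of this update path to bracket it roughly like below.
This is only a sketch of the idea, not code from this patch:

	arch_kexec_unprotect_crashkres();

	ptr = arch_map_crash_pages(mem, memsz);
	if (ptr) {
		/* hide the image while the elfcorehdr is being rewritten */
		xchg(&kexec_crash_image, NULL);
		memcpy_flushcache((void *)ptr, elfbuf, elfsz);
		xchg(&kexec_crash_image, image);
	}
	arch_unmap_crash_pages((void **)&ptr);

	arch_kexec_protect_crashkres();

Otherwise writing through the mapping would presumably fault on those
architectures.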