From: Li Zhijian <lizhijian@fujitsu.com>
To: x86@kernel.org, nvdimm@lists.linux.dev, kexec@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, y-goto@fujitsu.com, yangx.jy@fujitsu.com, ruansy.fnst@fujitsu.com
Subject: [RFC PATCH v2 0/3] pmem memmap dump support
Date: Thu, 27 Apr 2023 18:18:31 +0800
Message-Id: <20230427101838.12267-1-lizhijian@fujitsu.com>

Hello folks,

About 2 months ago, we posted our first RFC[3] and received your kind
feedback. Thank you :) Now, I'm back with the code.

Currently, this RFC already implements support for case D*; cases A and B
are deliberately disabled in makedumpfile. It includes changes to 3
projects, as shown below:

-----------+------------------------------------------------------------------------
 Source    | Changes
-----------+------------------------------------------------------------------------
 I.        | 1. export a linked list (devm_memmap_vmcore) to vmcoreinfo
 kernel    | 2. link the metadata region into the linked list
           | 3. mark the whole pmem as a PT_LOAD for the kexec_file_load(2) syscall
-----------+------------------------------------------------------------------------
 II. kexec-| 1. mark the whole pmem as a PT_LOAD for the kexec_load(2) syscall
 tools     |
-----------+------------------------------------------------------------------------
 III.      | 1. restore the linked list from devm_memmap_vmcore in vmcoreinfo
 makedump- | 2. skip the pmem userdata region (relies on I.3 and II.1)
 file      | 3. exclude the pmem metadata region if needed
-----------+------------------------------------------------------------------------

* Refer to the following section for a description of the cases.

In the RFC stage, I have folded these 3 projects into this same cover letter
for reviewing convenience.
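To make part I easier to follow before reading the patches, here is a minimal
sketch of what the kernel-side export (I.1 and I.2) boils down to. The field
layout and helper names below are illustrative assumptions for this cover
letter, not the exact code in the patches:

    /* Illustrative sketch only; layout and helper names are assumptions. */
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/crash_core.h>

    struct devm_memmap_vmcore {
            struct list_head entry;
            unsigned long start;    /* first PFN of the metadata (memmap) range */
            unsigned long end;      /* last PFN of the range */
    };

    static LIST_HEAD(devm_memmap_vmcore_head);

    /* I.2: record a namespace whose memmap (struct page array) lives on pmem */
    static void devm_memmap_vmcore_add(unsigned long start_pfn, unsigned long end_pfn)
    {
            struct devm_memmap_vmcore *e = kmalloc(sizeof(*e), GFP_KERNEL);

            if (!e)
                    return;
            e->start = start_pfn;
            e->end = end_pfn;
            list_add_tail(&e->entry, &devm_memmap_vmcore_head);
    }

    /* I.1: export the list head plus size/offsets so makedumpfile can walk it */
    static void devm_memmap_vmcore_update_vmcoreinfo(void)
    {
            VMCOREINFO_SYMBOL(devm_memmap_vmcore_head);
            VMCOREINFO_STRUCT_SIZE(devm_memmap_vmcore);
            VMCOREINFO_OFFSET(devm_memmap_vmcore, entry);
            VMCOREINFO_OFFSET(devm_memmap_vmcore, start);
            VMCOREINFO_OFFSET(devm_memmap_vmcore, end);
    }

The VMCOREINFO entries produced this way correspond to the
SYMBOL(devm_memmap_vmcore_head), SIZE(devm_memmap_vmcore) and
OFFSET(devm_memmap_vmcore.*) lines visible in the case B makedumpfile output
further below.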
kernel:
  crash: export dev memmap header to vmcoreinfo
  drivers/nvdimm: export memmap of namespace to vmcoreinfo
  resource, crash: Make kexec_file_load support pmem

kexec-tools:
  kexec: Add and mark pmem region into PT_LOADs

makedumpfile:
  elf_info.c: Introduce is_pmem_pt_load_range
  makedumpfile.c: Exclude all pmem pages
  makedumpfile.c: Allow excluding metadata of pmem region

---
pmem memmap can also be called pmem metadata here.

### Background and motivation overview ###
---
Crash dump is an important feature for troubleshooting the kernel. It is the
final way to chase what happened at a kernel panic, slowdown, and so on, and
it is the most important tool for customer support. However, part of the data
on pmem is not included in the crash dump, which can make it difficult to
analyze trouble around pmem (especially Filesystem-DAX).

A pmem namespace in "fsdax" or "devdax" mode requires the allocation of
per-page metadata[1]. The allocation can be drawn from either mem (system
memory) or dev (the pmem device itself); see `ndctl help create-namespace`
for more details. In fsdax, the struct page array becomes very important; it
is one of the key pieces of data for finding the status of the reverse map.
So when the metadata is stored on pmem, even the pmem's per-page metadata
will not be dumped. That means troubleshooters are unable to check further
details about pmem from the dumpfile.

### Make pmem memmap dump support ###
---
Our goal is that, no matter whether the metadata is stored in mem or on pmem,
the metadata can be dumped so that the crash utilities can read more details
about the pmem. Of course, this feature can be enabled/disabled.

First, based on our previous investigation, according to the location of the
metadata and the scope of the dump, we can divide the problem into the
following four cases: A, B, C, D. It should be noted that although cases A
and B are mentioned below, we do not want these two cases to be part of this
feature, because dumping the entire pmem would consume a lot of space and,
more importantly, it may contain sensitive user data.

+-------------+---------------------+
|             | metadata location   |
|             +----------+----------+
| dump scope  |   mem    |   PMEM   |
+-------------+----------+----------+
| entire pmem |    A     |    B     |
+-------------+----------+----------+
| metadata    |    C     |    D     |
+-------------+----------+----------+

### Testing ###
Only x86_64 has been tested. Please note that we have to disable the 2nd
kernel's libnvdimm to ensure the metadata in the 2nd kernel will not be
touched again. The 2 commits below use sha256 to check the metadata, in the
1st kernel during panic and in makedumpfile in the 2nd kernel:
https://github.com/zhijianli88/makedumpfile/commit/91a135be6980e6e87b9e00b909aaaf8ef9566ec0
https://github.com/zhijianli88/linux/commit/55bef07f8f0b2e587737b796e73b92f242947e5a

### TODO ###
Only x86 is fully supported for both kexec_load(2) and kexec_file_load(2);
kexec_file_load(2) on the other architectures remains a TODO.
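For III.1, the following is a minimal sketch of how makedumpfile could walk
the exported list in the capture environment, using the SYMBOL/SIZE/OFFSET
values published in VMCOREINFO (see the case B output below for the concrete
numbers). read_vmcore_u64() and record_pmem_metadata_range() are hypothetical
helpers used only for this example; the actual patches are expected to use
makedumpfile's existing symbol tables and memory accessors:

    /*
     * Illustrative sketch only: walk devm_memmap_vmcore in the capture
     * kernel. The two helpers below are assumptions for this example.
     */
    #include <stdbool.h>
    #include <stdint.h>

    bool read_vmcore_u64(uint64_t kvaddr, uint64_t *val);   /* assumed helper */

    /* Record one pmem metadata (memmap) range so its pages can be kept. */
    void record_pmem_metadata_range(uint64_t start_pfn, uint64_t end_pfn);

    void walk_devm_memmap_vmcore(uint64_t head_vaddr, uint64_t off_entry,
                                 uint64_t off_start, uint64_t off_end)
    {
            uint64_t node;

            /* list_head.next occupies the first 8 bytes of the head */
            if (!read_vmcore_u64(head_vaddr, &node))
                    return;

            while (node != head_vaddr) {    /* the circular list ends at the head */
                    uint64_t obj = node - off_entry;   /* container_of(node, ..., entry) */
                    uint64_t start, end;

                    if (!read_vmcore_u64(obj + off_start, &start) ||
                        !read_vmcore_u64(obj + off_end, &end))
                            break;

                    record_pmem_metadata_range(start, end);

                    if (!read_vmcore_u64(node, &node))  /* advance to entry.next */
                            break;
            }
    }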
---
[1] Pmem region layout:

^<--namespace0.0---->^<--namespace0.1------>^
|                    |                      |
+--+m----------------+--+m------------------+---------------------+-+a
|++|e                |++|e                  |                     |+|l
|++|t                |++|t                  |                     |+|i
|++|a                |++|a                  |                     |+|g
|++|d namespace0.0   |++|d namespace0.1     |     un-allocated    |+|n
|++|a fsdax          |++|a devdax           |                     |+|m
|++|t                |++|t                  |                     |+|e
+--+a----------------+--+a------------------+---------------------+-+n
|                                                                   |t
v<------------------------pmem region------------------------------->v

[2] https://lore.kernel.org/linux-mm/70F971CF-1A96-4D87-B70C-B971C2A1747C@roc.cs.umass.edu/T/
[3] https://lore.kernel.org/linux-mm/3c752fc2-b6a0-2975-ffec-dba3edcf4155@fujitsu.com/

### makedumpfile output in case B ###
kdump.sh[224]: makedumpfile: version 1.7.2++ (released on 20 Oct 2022)
kdump.sh[224]: command line: makedumpfile -l --message-level 31 -d 31 /proc/vmcore /sysroot/var/crash/127.0.0.1-2023-04-21-02:50:57//vmcore-incomplete
kdump.sh[224]: sadump: does not have partition header
kdump.sh[224]: sadump: read dump device as unknown format
kdump.sh[224]: sadump: unknown format
kdump.sh[224]:          phys_start       phys_end         virt_start       virt_end         is_pmem
kdump.sh[224]: LOAD[ 0] 1000000          3c26000          ffffffff81000000 ffffffff83c26000 false
kdump.sh[224]: LOAD[ 1] 100000           7f000000         ffff888000100000 ffff88807f000000 false
kdump.sh[224]: LOAD[ 2] bf000000         bffd7000         ffff8880bf000000 ffff8880bffd7000 false
kdump.sh[224]: LOAD[ 3] 100000000        140000000        ffff888100000000 ffff888140000000 false
kdump.sh[224]: LOAD[ 4] 140000000        23e200000        ffff888140000000 ffff88823e200000 true
kdump.sh[224]: Linux kdump
kdump.sh[224]: VMCOREINFO   :
kdump.sh[224]:   OSRELEASE=6.3.0-rc3-pmem-bad+
kdump.sh[224]:   BUILD-ID=0546bd82db93706799d3eea38194ac648790aa85
kdump.sh[224]:   PAGESIZE=4096
kdump.sh[224]: page_size    : 4096
kdump.sh[224]:   SYMBOL(init_uts_ns)=ffffffff82671300
kdump.sh[224]:   OFFSET(uts_namespace.name)=0
kdump.sh[224]:   SYMBOL(node_online_map)=ffffffff826bbe08
kdump.sh[224]:   SYMBOL(swapper_pg_dir)=ffffffff82446000
kdump.sh[224]:   SYMBOL(_stext)=ffffffff81000000
kdump.sh[224]:   SYMBOL(vmap_area_list)=ffffffff82585fb0
kdump.sh[224]:   SYMBOL(devm_memmap_vmcore_head)=ffffffff825603c0
kdump.sh[224]:   SIZE(devm_memmap_vmcore)=40
kdump.sh[224]:   OFFSET(devm_memmap_vmcore.entry)=0
kdump.sh[224]:   OFFSET(devm_memmap_vmcore.start)=16
kdump.sh[224]:   OFFSET(devm_memmap_vmcore.end)=24
kdump.sh[224]:   SYMBOL(mem_section)=ffff88813fff4000
kdump.sh[224]:   LENGTH(mem_section)=2048
kdump.sh[224]:   SIZE(mem_section)=16
kdump.sh[224]:   OFFSET(mem_section.section_mem_map)=0
...
kdump.sh[224]: STEP [Checking for memory holes  ] : 0.012699 seconds
kdump.sh[224]: STEP [Excluding unnecessary pages] : 0.538059 seconds
kdump.sh[224]: STEP [Copying data               ] : 0.995418 seconds
kdump.sh[224]: STEP [Copying data               ] : 0.000067 seconds
kdump.sh[224]: Writing erase info...
kdump.sh[224]: offset_eraseinfo: 5d02266, size_eraseinfo: 0
kdump.sh[224]: Original pages  : 0x00000000001c0cfd
kdump.sh[224]:   Excluded pages   : 0x00000000001a58d2
kdump.sh[224]:     Pages filled with zero  : 0x0000000000006805
kdump.sh[224]:     Non-private cache pages : 0x0000000000019e93
kdump.sh[224]:     Private cache pages     : 0x0000000000077572
kdump.sh[224]:     User process data pages : 0x0000000000002c3b
kdump.sh[224]:     Free pages              : 0x0000000000010e8d
kdump.sh[224]:     Hwpoison pages          : 0x0000000000000000
kdump.sh[224]:     Offline pages           : 0x0000000000000000
kdump.sh[224]:     pmem metadata pages     : 0x0000000000000000
kdump.sh[224]:     pmem userdata pages     : 0x00000000000fa200
kdump.sh[224]:   Remaining pages  : 0x000000000001b42b
kdump.sh[224]:   (The number of pages is reduced to 6%.)
kdump.sh[224]: Memory Hole     : 0x000000000007d503
kdump.sh[224]: --------------------------------------------------
kdump.sh[224]: Total pages     : 0x000000000023e200
kdump.sh[224]: Write bytes     : 97522590
kdump.sh[224]: Cache hit: 191669, miss: 292, hit rate: 99.8%
kdump.sh[224]: The dumpfile is saved to /sysroot/var/crash/127.0.0.1-2023-04-21-02:50:57//vmcore-incomplete.
kdump.sh[224]: makedumpfile Completed.
--
2.29.2