References: <20190510102051.25647-1-kasong@redhat.com> <20190513015241.GA8515@dhcp-128-65.nay.redhat.com>
In-Reply-To: <20190513015241.GA8515@dhcp-128-65.nay.redhat.com>
From: Kairui Song
Date: Mon, 13 May 2019 10:19:09 +0800
Subject: Re: [RFC PATCH] vmcore: Add a kernel cmdline device_dump_limit
To: Dave Young
Cc: Linux Kernel Mailing List, Rahul Lakkireddy, Ganesh Goudar, "David S. Miller", Eric Biederman, Alexey Dobriyan, Andrew Morton, kexec@lists.infradead.org, Bhupesh Sharma

On Mon, May 13, 2019 at 9:52 AM Dave Young wrote:
>
> On 05/10/19 at 06:20pm, Kairui Song wrote:
> > Device dump allows drivers to add device-related dump data to vmcore
> > as they want. This has a potential issue: the data is stored in
> > memory, and drivers may append too much data and use too much memory.
> > The vmcore is typically used in a kdump kernel, which runs in a small
> > pre-reserved chunk of memory, so this can make kdump unusable at all
> > due to OOM issues.
> >
> > So introduce a new device_dump_limit= kernel parameter, with the
> > default limit set to 0, so device dump is not enabled unless the user
> > specifies an acceptable maximum memory usage for device dump data.
> > This way the user also has the chance to adjust the kdump reserved
> > memory accordingly.
>
> The device dump only takes effect in the kdump 2nd kernel, so adding
> the limit seems not useful. It is hard to know the correct size unless
> one does a crash test. If one did the test and wants to enable device
> dump, one needs to increase the crashkernel= size in the 1st kernel and
> add the limit param in the 2nd kernel.
>
> So a global on/off param sounds easier and better, something like
> vmcore_device_dump=on (default is off)

Yes, on/off could be another way to solve this issue. The size limit
could bring more flexibility: if device dump is not asking for too much
memory it would just work. But it does bring extra complexity indeed.
Considering it's actually hard to know how much memory is needed for the
device dump drivers to work, I'll update to use the on/off cmdline then.

> >
> > Signed-off-by: Kairui Song
> > ---
> >  fs/proc/vmcore.c | 20 ++++++++++++++++++++
> >  1 file changed, 20 insertions(+)
> >
> > diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> > index 3fe90443c1bb..e28695ef2439 100644
> > --- a/fs/proc/vmcore.c
> > +++ b/fs/proc/vmcore.c
> > @@ -53,6 +53,9 @@ static struct proc_dir_entry *proc_vmcore;
> >  /* Device Dump list and mutex to synchronize access to list */
> >  static LIST_HEAD(vmcoredd_list);
> >  static DEFINE_MUTEX(vmcoredd_mutex);
> > +
> > +/* Device Dump Limit */
> > +static size_t vmcoredd_limit;
> >  #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
> >
> >  /* Device Dump Size */
> > @@ -1465,6 +1468,11 @@ int vmcore_add_device_dump(struct vmcoredd_data *data)
> >  	data_size = roundup(sizeof(struct vmcoredd_header) + data->size,
> >  			    PAGE_SIZE);
> >
> > +	if (vmcoredd_orig_sz + data_size >= vmcoredd_limit) {
> > +		ret = -ENOMEM;
> > +		goto out_err;
> > +	}
> > +
> >  	/* Allocate buffer for driver's to write their dumps */
> >  	buf = vmcore_alloc_buf(data_size);
> >  	if (!buf) {
> > @@ -1502,6 +1510,18 @@ int vmcore_add_device_dump(struct vmcoredd_data *data)
> >  	return ret;
> >  }
> >  EXPORT_SYMBOL(vmcore_add_device_dump);
> > +
> > +static int __init parse_vmcoredd_limit(char *arg)
> > +{
> > +	char *end;
> > +
> > +	if (!arg)
> > +		return -EINVAL;
> > +	vmcoredd_limit = memparse(arg, &end);
> > +	return end > arg ? 0 : -EINVAL;
> > +}
> > +__setup("device_dump_limit=", parse_vmcoredd_limit);
> >  #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
> >
> >  /* Free all dumps in vmcore device dump list */
> > --
> > 2.20.1
> >
>
> Thanks
> Dave

--
Best Regards,
Kairui Song