Date: Tue, 21 Jun 2022 19:04:22 +0100
From: Catalin Marinas
To: Kefeng Wang
Cc: Baoquan He, Zhen Lei, Ard Biesheuvel, Mark Rutland, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, x86@kernel.org,
    H. Peter Anvin, Eric Biederman, Rob Herring, Frank Rowand,
    devicetree@vger.kernel.org, Dave Young, Vivek Goyal,
    kexec@lists.infradead.org, linux-kernel@vger.kernel.org, Will Deacon,
    linux-arm-kernel@lists.infradead.org, Jonathan Corbet,
    linux-doc@vger.kernel.org, Randy Dunlap, Feng Zhou, Chen Zhou,
    John Donnelly, Dave Kleikamp, liushixin
Subject: Re: [PATCH 5/5] arm64: kdump: Don't defer the reservation of crash high memory
References: <20220613080932.663-1-thunder.leizhen@huawei.com>
 <20220613080932.663-6-thunder.leizhen@huawei.com>
 <3f66323d-f371-b931-65fb-edfae0f01c88@huawei.com>
In-Reply-To: <3f66323d-f371-b931-65fb-edfae0f01c88@huawei.com>

On Tue, Jun 21, 2022 at 02:24:01PM +0800, Kefeng Wang wrote:
> On 2022/6/21 13:33, Baoquan He wrote:
> > On 06/13/22 at 04:09pm, Zhen Lei wrote:
> > > If the crashkernel has both high memory above DMA zones and low memory
> > > in DMA zones, kexec always loads the content such as Image and dtb to the
> > > high memory instead of the low memory. This means that only high memory
> > > requires write protection based on page-level mapping. The allocation of
> > > high memory does not depend on the DMA boundary. So we can reserve the
> > > high memory first even if the crashkernel reservation is deferred.
> > >
> > > This means that the block mapping can still be performed on other kernel
> > > linear address spaces, the TLB miss rate can be reduced and the system
> > > performance will be improved.
> >
> > Ugh, this looks a little ugly, honestly.
> >
> > If that's for sure arm64 can't split large page mapping of linear
> > region, this patch is one way to optimize linear mapping. Given kdump
> > setting is necessary on arm64 server, the booting speed is truly
> > impacted heavily.
>
> Is there some conclusion or discussion that arm64 can't split large page
> mapping?
>
> Could the crashkernel reservation (and the KFENCE pool) be split dynamically?
>
> I found Mark's reply to "arm64: remove page granularity limitation from
> KFENCE"[1]:
>
>   "We also avoid live changes from block<->table mappings, since the
>   architecture gives us very weak guarantees there and generally requires
>   a Break-Before-Make sequence (though IIRC this was tightened up
>   somewhat, so maybe going one way is supposed to work). Unless it's
>   really necessary, I'd rather not split these block mappings while
>   they're live."

The problem with splitting is that you can end up with two entries in
the TLB for the same VA->PA mapping (e.g. one for a 4KB page and
another for a 2MB block). In the lucky case, the CPU will trigger a TLB
conflict abort (but it can be worse, like loss of coherency). Prior to
FEAT_BBM (added in ARMv8.4), such a scenario was not allowed at all:
the software would have to unmap the range, TLBI, then remap. With
FEAT_BBM (level 2), we can do this without tearing the mapping down,
but we still need to handle the potential TLB conflict abort. The
handler only needs a TLBI, but if it touches the memory range being
changed it risks faulting again.
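For illustration only, a minimal sketch of the pre-FEAT_BBM
break-before-make sequence described above, splitting one 2MB kernel
block PMD into 4KB page mappings. This is not the actual arm64 mm code:
the function name is hypothetical, locking is omitted and PAGE_KERNEL is
used instead of carrying over the block entry's attributes. It only
shows why the range is briefly unmapped, and therefore cannot be one we
are currently executing from or holding our stack in:

#include <linux/mm.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>

/* Hypothetical sketch: split one 2MB kernel block mapping into pages. */
static int split_kernel_pmd(pmd_t *pmdp, unsigned long addr)
{
	unsigned long pfn = pmd_pfn(*pmdp);
	pte_t *ptep = (pte_t *)get_zeroed_page(GFP_KERNEL);
	int i;

	if (!ptep)
		return -ENOMEM;

	/*
	 * Pre-build the 512 page entries covering the same 2MB of physical
	 * space. Real code would preserve the block entry's attributes;
	 * PAGE_KERNEL is a simplification here.
	 */
	for (i = 0; i < PTRS_PER_PTE; i++)
		set_pte(ptep + i, pfn_pte(pfn + i, PAGE_KERNEL));

	/* Break: invalidate the block entry... */
	pmd_clear(pmdp);

	/*
	 * ...and flush the old translation so a stale block TLB entry can
	 * never coexist with the new page entries.
	 */
	flush_tlb_kernel_range(addr, addr + PMD_SIZE);

	/*
	 * Make: install the table entry. Any access to
	 * [addr, addr + PMD_SIZE) between "break" and "make" faults.
	 */
	__pmd_populate(pmdp, __pa(ptep), PMD_TYPE_TABLE);

	return 0;
}

With FEAT_BBM level 2 the "break" step above can be relaxed, at the cost
of having to deal with a possible TLB conflict abort instead.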
With vmap stacks and the kernel image mapped in the vmalloc space, we
have a small window where this could be handled, but we probably can't
go into the C part of the exception handling (tracing etc. may access a
kmalloc'ed object, for example). Another option is to do a
stop_machine() (if multi-processor at that point), disable the MMU,
modify the page tables and re-enable the MMU, but that is also
complicated.

--
Catalin