Subject: Re: [PATCH v14 01/11] x86: kdump: replace the hard-coded alignment with macro CRASH_ALIGN
From: chenzhou
To: Baoquan He, Catalin Marinas
Date: Fri, 26 Feb 2021 14:45:25 +0800
Message-ID: <121fa1e6-f1a3-d47f-bb1d-baaacf96fddc@huawei.com>
In-Reply-To: <20210225072426.GH3553@MiWiFi-R3L-srv>
References: <20210130071025.65258-1-chenzhou10@huawei.com> <20210130071025.65258-2-chenzhou10@huawei.com> <20210224141939.GA28965@arm.com> <20210225072426.GH3553@MiWiFi-R3L-srv>
On 2021/2/25 15:25, Baoquan He wrote:
> On 02/24/21 at 02:19pm, Catalin Marinas wrote:
>> On Sat, Jan 30, 2021 at 03:10:15PM +0800, Chen Zhou wrote:
>>> Move CRASH_ALIGN to the header asm/kexec.h for later use. Besides, the
>>> alignment of crash kernel regions on x86 is 16M (CRASH_ALIGN), but the
>>> function reserve_crashkernel() also uses a 1M alignment. So just
>>> replace the hard-coded 1M alignment with the macro CRASH_ALIGN.
>> [...]
>>> @@ -510,7 +507,7 @@ static void __init reserve_crashkernel(void)
>>> 	} else {
>>> 		unsigned long long start;
>>>
>>> -		start = memblock_phys_alloc_range(crash_size, SZ_1M, crash_base,
>>> +		start = memblock_phys_alloc_range(crash_size, CRASH_ALIGN, crash_base,
>>> 						  crash_base + crash_size);
>>> 		if (start != crash_base) {
>>> 			pr_info("crashkernel reservation failed - memory is in use.\n");
>>
>> There is a small functional change here for x86. Prior to this patch,
>> a crash_base passed by the user on the command line is allowed to be
>> 1MB aligned. With this patch, such a reservation will fail.
>>
>> Is the current behaviour a bug in the existing x86 code, or does it
>> deliberately allow 1MB-aligned reservations?
>
> Hmm, you are right. Here we should keep the 1MB alignment as is, because
> users specify the address and size, and their intention should be
> respected. The 1MB alignment for fixed memory region reservations was
> introduced in the commit below, but it doesn't say what Eric's request
> was at that time; I guess it meant respecting the user's choice.

I think we could unify the alignments. Why should the alignment differ
between system-reserved and user-specified regions? Besides, the 1MB
alignment is not documented anywhere. How about documenting the
alignment size (16MB) for the user-specified start address, as arm64 does?
Thanks,
Chen Zhou

>
> commit 44280733e71ad15377735b42d8538c109c94d7e3
> Author: Yinghai Lu
> Date:   Sun Nov 22 17:18:49 2009 -0800
>
>     x86: Change crash kernel to reserve via reserve_early()
>
>     use find_e820_area()/reserve_early() instead.
>
>     -v2: address Eric's request, to restore original semantics.
>          will fail, if the provided address can not be used.
>
> .