Date: Mon, 30 Apr 2018 10:03:26 -0400 (EDT)
From: Dave Anderson
To: Laura Abbott
Cc: Kees Cook, Ard Biesheuvel, Linux Kernel Mailing List, Ingo Molnar, Andi Kleen
Message-ID: <964518723.25675338.1525097006267.JavaMail.zimbra@redhat.com>
In-Reply-To: <7ab806fe-ee36-59ad-483b-d6734fcd3451@redhat.com>
References: <981100282.24860394.1524770798522.JavaMail.zimbra@redhat.com>
 <823082096.24861749.1524771086176.JavaMail.zimbra@redhat.com>
 <7ab806fe-ee36-59ad-483b-d6734fcd3451@redhat.com>
Subject: Re: BUG: /proc/kcore does not export direct-mapped memory on arm64 (and presumably some other architectures)
X-Mailing-List: linux-kernel@vger.kernel.org

----- Original Message -----
> On 04/26/2018 02:16 PM, Kees Cook wrote:
> > On Thu, Apr 26, 2018 at 12:31 PM, Dave Anderson wrote:
> >>
> >> While testing /proc/kcore as the live memory source for the crash utility,
> >> it fails on arm64.  The failure on arm64 occurs because only the
> >> vmalloc/module space segments are exported in PT_LOAD segments,
> >> and it's missing all of the PT_LOAD segments for the generic
> >> unity-mapped regions of physical memory, as well as their associated
> >> vmemmap sections.
> >>
> >> The mapping of unity-mapped RAM segments in fs/proc/kcore.c is
> >> architecture-neutral, and after debugging it, I found this as the
> >> problem.  For each chunk of physical memory, kcore_update_ram()
> >> calls walk_system_ram_range(), passing kclist_add_private() as a
> >> callback function to add the chunk to the kclist, eventually
> >> leading to the creation of a PT_LOAD segment.
> >>
> >> kclist_add_private() does some verification of the memory region,
> >> but this one below is bogus for arm64:
> >>
> >>     static int
> >>     kclist_add_private(unsigned long pfn, unsigned long nr_pages, void *arg)
> >>     {
> >>             ... [ cut ] ...
> >>             ent->addr = (unsigned long)__va((pfn << PAGE_SHIFT));
> >>             ... [ cut ] ...
> >>
> >>             /* Sanity check: Can happen in 32bit arch...maybe */
> >>             if (ent->addr < (unsigned long) __va(0))
> >>                     goto free_out;
> >>
> >> And that's because __va(0) is a bogus check for arm64.  It is checking
> >> whether the ent->addr value is less than the lowest possible unity-mapped
> >> address.  But "0" should not be used as a physical address on arm64; the
> >> lowest legitimate physical address for this __va() check would be the
> >> arm64 PHYS_OFFSET, or memstart_addr:
> >>
> >> Here's the arm64 __va() and PHYS_OFFSET:
> >>
> >>     #define __va(x)            ((void *)__phys_to_virt((phys_addr_t)(x)))
> >>     #define __phys_to_virt(x)  ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
> >>
> >>     extern s64                 memstart_addr;
> >>     /* PHYS_OFFSET - the physical address of the start of memory. */
> >>     #define PHYS_OFFSET        ({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
> >>
> >> If PHYS_OFFSET/memstart_addr is anything other than 0 (it is
> >> 0x4000000000 on my test system), the __va(0) calculation goes negative
> >> and creates a bogus, very large, virtual address.  And since the
> >> ent->addr virtual address is less than the bogus __va(0) address, the
> >> test fails, and the memory chunk is rejected.
> >>
> >> Looking at the kernel sources, it seems that this would affect other
> >> architectures as well, i.e., the ones whose __va() is not a simple
> >> addition of the physical address with PAGE_OFFSET.
> >>
> >> Anyway, I don't know what the best approach for an architecture-neutral
> >> fix would be in this case.  So I figured I'd throw it out to you guys for
> >> some ideas.
> >
> > I'm not as familiar with this code, but I've added Ard and Laura to CC
> > here, as this feels like something they'd be able to comment on. :)
> >
> > -Kees
>
> It seems backwards that we're converting a physical address to
> a virtual address and then validating that.
> I think checking against pfn_valid (to ensure there is a valid memmap
> entry) and then checking page_to_virt against virt_addr_valid to catch
> other cases (e.g. highmem or holes in the space) seems cleaner.

Hi Laura,

Thanks a lot for looking into this -- I couldn't find a maintainer for kcore.

The patch looks good to me, as long as virt_addr_valid() fails on 32-bit
arches when page_to_virt() creates an invalid address from a highmem
physical address.

Thanks again,
  Dave

> Maybe something like:
>
> diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
> index d1e82761de81..e64ecb9f2720 100644
> --- a/fs/proc/kcore.c
> +++ b/fs/proc/kcore.c
> @@ -209,25 +209,34 @@ kclist_add_private(unsigned long pfn, unsigned long nr_pages, void *arg)
>  {
>  	struct list_head *head = (struct list_head *)arg;
>  	struct kcore_list *ent;
> +	struct page *p;
> +
> +	if (!pfn_valid(pfn))
> +		return 1;
> +
> +	p = pfn_to_page(pfn);
> +	if (!memmap_valid_within(pfn, p, page_zone(p)))
> +		return 1;
>
>  	ent = kmalloc(sizeof(*ent), GFP_KERNEL);
>  	if (!ent)
>  		return -ENOMEM;
> -	ent->addr = (unsigned long)__va((pfn << PAGE_SHIFT));
> +	ent->addr = (unsigned long)page_to_virt(p);
>  	ent->size = nr_pages << PAGE_SHIFT;
>
> -	/* Sanity check: Can happen in 32bit arch...maybe */
> -	if (ent->addr < (unsigned long) __va(0))
> +	if (!virt_addr_valid(ent->addr))
>  		goto free_out;
>
>  	/* cut not-mapped area. ....from ppc-32 code. */
>  	if (ULONG_MAX - ent->addr < ent->size)
>  		ent->size = ULONG_MAX - ent->addr;
>
> -	/* cut when vmalloc() area is higher than direct-map area */
> -	if (VMALLOC_START > (unsigned long)__va(0)) {
> -		if (ent->addr > VMALLOC_START)
> -			goto free_out;
> +	/*
> +	 * We've already checked virt_addr_valid so we know this address
> +	 * is a valid pointer, therefore we can check against it to determine
> +	 * if we need to trim
> +	 */
> +	if (VMALLOC_START > ent->addr) {
>  		if (VMALLOC_START - ent->addr < ent->size)
>  			ent->size = VMALLOC_START - ent->addr;
>  	}