Date: Mon, 14 Oct 2019 16:43:59 +0100
From: Mark Rutland
To: Daniel Axtens
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
	aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
	linux-kernel@vger.kernel.org, dvyukov@google.com,
	christophe.leroy@c-s.fr, linuxppc-dev@lists.ozlabs.org,
	gor@linux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory
Message-ID: <20191014154359.GC20438@lakrids.cambridge.arm.com>
References: <20191001065834.8880-1-dja@axtens.net> <20191001065834.8880-2-dja@axtens.net>
In-Reply-To: <20191001065834.8880-2-dja@axtens.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.11.1+11 (2f07cb52) (2018-12-01)
On Tue, Oct 01, 2019 at 04:58:30PM +1000, Daniel Axtens wrote:
> Hook into vmalloc and vmap, and dynamically allocate real shadow
> memory to back the mappings.
>
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
>
> Instead, share backing space across multiple mappings. Allocate a
> backing page when a mapping in vmalloc space uses a particular page of
> the shadow region. This page can be shared by other vmalloc mappings
> later on.
>
> We hook in to the vmap infrastructure to lazily clean up unused shadow
> memory.
>
> To avoid the difficulties around swapping mappings around, this code
> expects that the part of the shadow region that covers the vmalloc
> space will not be covered by the early shadow page, but will be left
> unmapped. This will require changes in arch-specific code.
>
> This allows KASAN with VMAP_STACK, and may be helpful for architectures
> that do not have a separate module space (e.g. powerpc64, which I am
> currently working on). It also allows relaxing the module alignment
> back to PAGE_SIZE.
>
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
> Acked-by: Vasily Gorbik
> Signed-off-by: Daniel Axtens
> [Mark: rework shadow allocation]
> Signed-off-by: Mark Rutland

Sorry to point this out so late, but your S-o-B should come last in the
chain, per Documentation/process/submitting-patches.rst. Judging by the
rest of that document, I think you want something like:

  Co-developed-by: Mark Rutland
  Signed-off-by: Mark Rutland [shadow rework]
  Signed-off-by: Daniel Axtens

... leaving yourself as the Author in the headers.

Sorry to have made that more complicated!

[...]
> +static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
> +					void *unused)
> +{
> +	unsigned long page;
> +
> +	page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);
> +
> +	spin_lock(&init_mm.page_table_lock);
> +
> +	if (likely(!pte_none(*ptep))) {
> +		pte_clear(&init_mm, addr, ptep);
> +		free_page(page);
> +	}

There should be TLB maintenance between clearing the PTE and freeing the
page here.

Thanks,
Mark.
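Untested, but something along these lines would address that; a per-PTE
flush_tlb_kernel_range() is shown here for clarity, though batching one
flush over the whole depopulated region outside the page-table walk may
well be cheaper:

 	if (likely(!pte_none(*ptep))) {
 		pte_clear(&init_mm, addr, ptep);
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
 		free_page(page);
 	}

Without the flush, another CPU could still have a stale translation for
addr and access the page after it has been freed and reused.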