From: Christoph Hellwig
To: Andrew Morton, "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu, x86@kernel.org, David Airlie, Daniel Vetter, Laura Abbott, Sumit Semwal, Sakari Ailus, Minchan Kim, Nitin Gupta
Cc: Robin Murphy, Christophe Leroy, Peter Zijlstra, linuxppc-dev@lists.ozlabs.org, linux-hyperv@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-s390@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 11/29] mm: only allow page table mappings for built-in zsmalloc
Date: Tue, 14 Apr 2020 15:13:30 +0200
Message-Id: <20200414131348.444715-12-hch@lst.de>
In-Reply-To: <20200414131348.444715-1-hch@lst.de>
References: <20200414131348.444715-1-hch@lst.de>

This allows us to unexport map_vm_area and unmap_kernel_range, which are rather deep vmalloc internals and should not be available to modules: they allow, for example, fine-grained control of mapping permissions, and they allow splitting the setup of a vmalloc area from the actual mapping, which exposes vmalloc internals.

zsmalloc is typically built-in and continues to work (just like the percpu-vm code, which uses a similar pattern), while modular zsmalloc also continues to work, but must use the copy-based object mapping method.
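For reference, the split setup/mapping pattern that remains available to built-in code (and that modules lose with this change) looks roughly like the sketch below. The function names are the real vmalloc interfaces touched by this patch; the surrounding control flow and error handling are abbreviated for illustration, not lifted from zsmalloc:

	/*
	 * Illustrative sketch only, not a buildable example: reserve
	 * vmalloc address space first, map caller-chosen pages with
	 * caller-chosen permissions later.
	 */
	struct vm_struct *area;

	/* reserve address space without mapping anything yet */
	area = get_vm_area(2 * PAGE_SIZE, VM_ALLOC);
	if (!area)
		return -ENOMEM;

	/* later: map caller-provided pages with a chosen pgprot */
	if (map_vm_area(area, PAGE_KERNEL, pages)) {
		free_vm_area(area);
		return -ENOMEM;
	}

	/* ... use the mapping, then tear it down ... */
	unmap_kernel_range((unsigned long)area->addr, 2 * PAGE_SIZE);

With the exports removed, only built-in users can reach this pattern; modules are limited to the regular vmalloc/vmap interfaces.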
Signed-off-by: Christoph Hellwig
Acked-by: Peter Zijlstra (Intel)
---
 mm/Kconfig   | 2 +-
 mm/vmalloc.c | 2 --
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 09a9edfb8461..5c0362bd8d56 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -707,7 +707,7 @@ config ZSMALLOC
 
 config ZSMALLOC_PGTABLE_MAPPING
 	bool "Use page table mapping to access object in zsmalloc"
-	depends on ZSMALLOC
+	depends on ZSMALLOC=y
 	help
 	  By default, zsmalloc uses a copy-based object mapping method to
 	  access allocations that span two pages. However, if a particular
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3375f9508ef6..9183fc0d365a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2046,7 +2046,6 @@ void unmap_kernel_range(unsigned long addr, unsigned long size)
 	vunmap_page_range(addr, end);
 	flush_tlb_kernel_range(addr, end);
 }
-EXPORT_SYMBOL_GPL(unmap_kernel_range);
 
 int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages)
 {
@@ -2058,7 +2057,6 @@ int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages)
 
 	return err > 0 ? 0 : err;
 }
-EXPORT_SYMBOL_GPL(map_vm_area);
 
 static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
 	struct vmap_area *va, unsigned long flags, const void *caller)
-- 
2.25.1