From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de, broonie@kernel.org,
	christophe.leroy@csgroup.eu, dave.hansen@linux.intel.com, debug@rivosinc.com,
	hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com,
	luto@kernel.org, mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de,
	x86@kernel.org
Cc: rick.p.edgecombe@intel.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 03/14] mm: Introduce arch_get_unmapped_area_vmflags()
Date: Mon, 25 Mar 2024 19:16:45 -0700
Message-Id: <20240326021656.202649-4-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
References: <20240326021656.202649-1-rick.p.edgecombe@intel.com>

When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOW_STACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

 1. That the new mapping isn't placed in any existing mapping's guard
    gaps.
 2. That the new mapping isn't placed such that any existing mappings
    end up in *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not to take any
care around 2. So for example, if there is a PAGE_SIZE free area, and an
mmap() of PAGE_SIZE for a type that has a guard gap is being placed,
mmap() may place the shadow stack in that free area. Then the mapping
that is supposed to have a guard gap will not have a gap to the adjacent
VMA.

In order to take the start gap into account, the maple tree search needs
to know the size of the start gap the new mapping will need. The call
chain from do_mmap() to the actual maple tree search looks like this:

	do_mmap(size, vm_flags, map_flags, ..)
		mm/mmap.c:get_unmapped_area(size, map_flags, ...)
			arch_get_unmapped_area(size, map_flags, ...)
				vm_unmapped_area(struct vm_unmapped_area_info)

One option would be to add another MAP_ flag to mean a one page start gap
(as is needed for shadow stack), but this consumes a flag unnecessarily.
Another option could be to simply increase the size passed in do_mmap()
by the start gap size, and adjust after the fact, but this would
interfere with the alignment requirements passed in struct
vm_unmapped_area_info, which are unknown to mmap.c. Instead, introduce
variants of arch_get_unmapped_area/_topdown() that take vm_flags.
In future changes, these variants can be used in mmap.c:get_unmapped_area()
to allow the vm_flags to be passed through to vm_unmapped_area(), while
preserving the normal arch_get_unmapped_area/_topdown() for the existing
callers.

Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
v4:
 - Remove externs (Christophe Leroy)
---
 include/linux/sched/mm.h | 17 +++++++++++++++++
 mm/mmap.c                | 28 ++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index ed1caa26c8be..91546493c43d 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -191,6 +191,23 @@ unsigned long mm_get_unmapped_area(struct mm_struct *mm, struct file *filp,
 				   unsigned long addr, unsigned long len,
 				   unsigned long pgoff, unsigned long flags);
 
+unsigned long
+arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+			       unsigned long len, unsigned long pgoff,
+			       unsigned long flags, vm_flags_t vm_flags);
+unsigned long
+arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr,
+				       unsigned long len, unsigned long pgoff,
+				       unsigned long flags, vm_flags_t);
+
+unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm,
+					    struct file *filp,
+					    unsigned long addr,
+					    unsigned long len,
+					    unsigned long pgoff,
+					    unsigned long flags,
+					    vm_flags_t vm_flags);
+
 unsigned long
 generic_get_unmapped_area(struct file *filp, unsigned long addr,
 			  unsigned long len, unsigned long pgoff,
diff --git a/mm/mmap.c b/mm/mmap.c
index 224e9ce1e2fd..2bd7580b8f0b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1808,6 +1808,34 @@ arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 }
 #endif
 
+#ifndef HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
+unsigned long
+arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr, unsigned long len,
+			       unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
+{
+	return arch_get_unmapped_area(filp, addr, len, pgoff, flags);
+}
+
+unsigned long
+arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr,
+				       unsigned long len, unsigned long pgoff,
+				       unsigned long flags, vm_flags_t vm_flags)
+{
+	return arch_get_unmapped_area_topdown(filp, addr, len, pgoff, flags);
+}
+#endif
+
+unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm, struct file *filp,
+					   unsigned long addr, unsigned long len,
+					   unsigned long pgoff, unsigned long flags,
+					   vm_flags_t vm_flags)
+{
+	if (test_bit(MMF_TOPDOWN, &mm->flags))
+		return arch_get_unmapped_area_topdown_vmflags(filp, addr, len, pgoff,
+							      flags, vm_flags);
+	return arch_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags, vm_flags);
+}
+
 unsigned long
 get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		unsigned long pgoff, unsigned long flags)
-- 
2.34.1
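
For illustration only, not part of the patch above: a minimal sketch of how a
caller that already holds the computed vm_flags (as do_mmap() does) could hand
them down through the new helper, so the area search can later account for the
mapping's own start gap. The function name place_mapping_example() and its
signature are assumptions for this sketch; the real wiring into
mmap.c:get_unmapped_area() happens in later changes, as noted in the commit
message.

/*
 * Hypothetical example only: pass the already-computed vm_flags down
 * alongside the MAP_ flags instead of dropping them at the area search.
 */
static unsigned long place_mapping_example(struct file *file,
					    unsigned long addr,
					    unsigned long len,
					    unsigned long pgoff,
					    unsigned long map_flags,
					    vm_flags_t vm_flags)
{
	struct mm_struct *mm = current->mm;

	/*
	 * mm_get_unmapped_area_vmflags() (added by this patch) picks the
	 * bottom-up or top-down search based on MMF_TOPDOWN and forwards
	 * vm_flags, so an architecture that defines
	 * HAVE_ARCH_UNMAPPED_AREA_VMFLAGS can reserve a start gap for
	 * e.g. VM_SHADOW_STACK mappings.
	 */
	return mm_get_unmapped_area_vmflags(mm, file, addr, len, pgoff,
					    map_flags, vm_flags);
}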