From: Kees Cook
To: Eric Biederman
Cc: Kees Cook, Sebastian Ott, Thomas Weißschuh, Pedro Falcato, Al Viro,
	Christian Brauner, Andrew Morton, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-hardening@vger.kernel.org
Subject: [PATCH v4 1/6] binfmt_elf: Support segments with 0 filesz and misaligned starts
Date: Thu, 28 Sep 2023 20:24:29 -0700
Message-Id: <20230929032435.2391507-1-keescook@chromium.org>
In-Reply-To: <20230929031716.it.155-kees@kernel.org>
References: <20230929031716.it.155-kees@kernel.org>

From: "Eric W. Biederman"

Implement a helper elf_load() that wraps elf_map() and performs all of the
necessary work to ensure that when "memsz > filesz" the extra bytes beyond
filesz are zeroed.

An outstanding issue remains when the first segment has a filesz of 0 and a
randomized location, but that behavior is unchanged from today.

In this change I replaced an open-coded padzero() that did not clear all of
the way to the end of the page with a call to padzero() that does.

I also stopped checking the return of padzero(), as there is at least one
known case where testing for failure is the wrong thing to do. It looks like
binfmt_elf_fdpic may have the proper set of tests for when error handling
can be safely completed.

I found a couple of commits in the old history,
https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git, that are
very helpful for understanding this code; they are listed after the sketch
below.
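For illustration, here is a minimal, hypothetical user-space sketch (not part
of this patch; it assumes a native 64-bit ELF, and the program name and
validation choices are mine) that walks a binary's PT_LOAD program headers
and prints the p_filesz..p_memsz tail that the loader must supply as
zero-filled (bss) memory, i.e. exactly the "memsz > filesz" condition that
elf_load() handles:

/*
 * Illustrative sketch only (not kernel code): report, for each PT_LOAD
 * program header, the extra p_memsz - p_filesz tail that the loader has
 * to provide as zeroed (bss) memory. Assumes a 64-bit ELF matching the
 * host byte order; error handling is deliberately minimal.
 */
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct stat st;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(argv[1]);
		return 1;
	}

	void *image = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (image == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	Elf64_Ehdr *ehdr = image;
	if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0 ||
	    ehdr->e_ident[EI_CLASS] != ELFCLASS64) {
		fprintf(stderr, "%s: not a 64-bit ELF file\n", argv[1]);
		return 1;
	}

	Elf64_Phdr *phdr = (Elf64_Phdr *)((char *)image + ehdr->e_phoff);

	for (int i = 0; i < ehdr->e_phnum; i++) {
		if (phdr[i].p_type != PT_LOAD)
			continue;

		unsigned long long file_end = phdr[i].p_vaddr + phdr[i].p_filesz;
		unsigned long long mem_end  = phdr[i].p_vaddr + phdr[i].p_memsz;

		printf("PT_LOAD %d: vaddr 0x%llx filesz 0x%llx memsz 0x%llx\n",
		       i, (unsigned long long)phdr[i].p_vaddr,
		       (unsigned long long)phdr[i].p_filesz,
		       (unsigned long long)phdr[i].p_memsz);

		/* The p_memsz > p_filesz tail is what elf_load() must zero. */
		if (mem_end > file_end)
			printf("  loader must zero [0x%llx, 0x%llx)\n",
			       file_end, mem_end);
	}

	munmap(image, st.st_size);
	close(fd);
	return 0;
}

Run against an ordinary dynamically linked binary this typically shows one
PT_LOAD (the data segment) with such a tail; a segment with p_filesz == 0 is
entirely anonymous, which is the special case the new elf_load() handles
without calling elf_map() at all.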
commit 39b56d902bf3 ("[PATCH] binfmt_elf: clearing bss may fail")
commit c6e2227e4a3e ("[SPARC64]: Missing user access return value checks in fs/binfmt_elf.c and fs/compat.c")
commit 5bf3be033f50 ("v2.4.10.1 -> v2.4.10.2")

Looking at commit 39b56d902bf3 ("[PATCH] binfmt_elf: clearing bss may fail"):

> commit 39b56d902bf35241e7cba6cc30b828ed937175ad
> Author: Pavel Machek
> Date:   Wed Feb 9 22:40:30 2005 -0800
>
>     [PATCH] binfmt_elf: clearing bss may fail
>
>     So we discover that Borland's Kylix application builder emits weird elf
>     files which describe a non-writeable bss segment.
>
>     So remove the clear_user() check at the place where we zero out the bss. I
>     don't _think_ there are any security implications here (plus we've never
>     checked that clear_user() return value, so whoops if it is a problem).
>
>     Signed-off-by: Pavel Machek
>     Signed-off-by: Andrew Morton
>     Signed-off-by: Linus Torvalds

It seems pretty clear that the binfmt_elf_fdpic approach, skipping
clear_user() for non-writable segments, otherwise calling clear_user(),
aka padzero(), and checking its return code, is the right thing to do.
I just skipped the error checking here, as that avoids breaking things.

Notably, it looks like Borland's Kylix died in 2005, so it might be safe to
just consider read-only segments with memsz > filesz an error.

Reported-by: Sebastian Ott
Reported-by: Thomas Weißschuh
Closes: https://lkml.kernel.org/r/20230914-bss-alloc-v1-1-78de67d2c6dd@weissschuh.net
Signed-off-by: "Eric W. Biederman"
Link: https://lore.kernel.org/r/87sf71f123.fsf@email.froward.int.ebiederm.org
Signed-off-by: Kees Cook
---
 fs/binfmt_elf.c | 111 +++++++++++++++++++++---------------------------
 1 file changed, 48 insertions(+), 63 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 7b3d2d491407..2a615f476e44 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -110,25 +110,6 @@ static struct linux_binfmt elf_format = {
 
 #define BAD_ADDR(x) (unlikely((unsigned long)(x) >= TASK_SIZE))
 
-static int set_brk(unsigned long start, unsigned long end, int prot)
-{
-	start = ELF_PAGEALIGN(start);
-	end = ELF_PAGEALIGN(end);
-	if (end > start) {
-		/*
-		 * Map the last of the bss segment.
-		 * If the header is requesting these pages to be
-		 * executable, honour that (ppc32 needs this).
-		 */
-		int error = vm_brk_flags(start, end - start,
-				prot & PROT_EXEC ? VM_EXEC : 0);
-		if (error)
-			return error;
-	}
-	current->mm->start_brk = current->mm->brk = end;
-	return 0;
-}
-
 /* We need to explicitly zero any fractional pages
    after the data section (i.e. bss).  This would
    contain the junk from the file that should not
@@ -406,6 +387,51 @@ static unsigned long elf_map(struct file *filep, unsigned long addr,
 	return(map_addr);
 }
 
+static unsigned long elf_load(struct file *filep, unsigned long addr,
+		const struct elf_phdr *eppnt, int prot, int type,
+		unsigned long total_size)
+{
+	unsigned long zero_start, zero_end;
+	unsigned long map_addr;
+
+	if (eppnt->p_filesz) {
+		map_addr = elf_map(filep, addr, eppnt, prot, type, total_size);
+		if (BAD_ADDR(map_addr))
+			return map_addr;
+		if (eppnt->p_memsz > eppnt->p_filesz) {
+			zero_start = map_addr + ELF_PAGEOFFSET(eppnt->p_vaddr) +
+				eppnt->p_filesz;
+			zero_end = map_addr + ELF_PAGEOFFSET(eppnt->p_vaddr) +
+				eppnt->p_memsz;
+
+			/* Zero the end of the last mapped page */
+			padzero(zero_start);
+		}
+	} else {
+		map_addr = zero_start = ELF_PAGESTART(addr);
+		zero_end = zero_start + ELF_PAGEOFFSET(eppnt->p_vaddr) +
+			eppnt->p_memsz;
+	}
+	if (eppnt->p_memsz > eppnt->p_filesz) {
+		/*
+		 * Map the last of the segment.
+		 * If the header is requesting these pages to be
+		 * executable, honour that (ppc32 needs this).
+		 */
+		int error;
+
+		zero_start = ELF_PAGEALIGN(zero_start);
+		zero_end = ELF_PAGEALIGN(zero_end);
+
+		error = vm_brk_flags(zero_start, zero_end - zero_start,
+				prot & PROT_EXEC ? VM_EXEC : 0);
+		if (error)
+			map_addr = error;
+	}
+	return map_addr;
+}
+
+
 static unsigned long total_mapping_size(const struct elf_phdr *phdr, int nr)
 {
 	elf_addr_t min_addr = -1;
@@ -829,7 +855,6 @@ static int load_elf_binary(struct linux_binprm *bprm)
 	struct elf_phdr *elf_ppnt, *elf_phdata, *interp_elf_phdata = NULL;
 	struct elf_phdr *elf_property_phdata = NULL;
 	unsigned long elf_bss, elf_brk;
-	int bss_prot = 0;
 	int retval, i;
 	unsigned long elf_entry;
 	unsigned long e_entry;
@@ -1040,33 +1065,6 @@ static int load_elf_binary(struct linux_binprm *bprm)
 		if (elf_ppnt->p_type != PT_LOAD)
 			continue;
 
-		if (unlikely (elf_brk > elf_bss)) {
-			unsigned long nbyte;
-
-			/* There was a PT_LOAD segment with p_memsz > p_filesz
-			   before this one. Map anonymous pages, if needed,
-			   and clear the area.  */
-			retval = set_brk(elf_bss + load_bias,
-					 elf_brk + load_bias,
-					 bss_prot);
-			if (retval)
-				goto out_free_dentry;
-			nbyte = ELF_PAGEOFFSET(elf_bss);
-			if (nbyte) {
-				nbyte = ELF_MIN_ALIGN - nbyte;
-				if (nbyte > elf_brk - elf_bss)
-					nbyte = elf_brk - elf_bss;
-				if (clear_user((void __user *)elf_bss +
-						load_bias, nbyte)) {
-					/*
-					 * This bss-zeroing can fail if the ELF
-					 * file specifies odd protections. So
-					 * we don't check the return value
-					 */
-				}
-			}
-		}
-
 		elf_prot = make_prot(elf_ppnt->p_flags, &arch_state,
 				     !!interpreter, false);
@@ -1162,7 +1160,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 			}
 		}
 
-		error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt,
+		error = elf_load(bprm->file, load_bias + vaddr, elf_ppnt,
 				elf_prot, elf_flags, total_size);
 		if (BAD_ADDR(error)) {
 			retval = IS_ERR_VALUE(error) ?
@@ -1217,10 +1215,8 @@ static int load_elf_binary(struct linux_binprm *bprm)
 		if (end_data < k)
 			end_data = k;
 		k = elf_ppnt->p_vaddr + elf_ppnt->p_memsz;
-		if (k > elf_brk) {
-			bss_prot = elf_prot;
+		if (k > elf_brk)
 			elf_brk = k;
-		}
 	}
 
 	e_entry = elf_ex->e_entry + load_bias;
@@ -1232,18 +1228,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 	start_data += load_bias;
 	end_data += load_bias;
 
-	/* Calling set_brk effectively mmaps the pages that we need
-	 * for the bss and break sections. We must do this before
-	 * mapping in the interpreter, to make sure it doesn't wind
-	 * up getting placed where the bss needs to go.
-	 */
-	retval = set_brk(elf_bss, elf_brk, bss_prot);
-	if (retval)
-		goto out_free_dentry;
-	if (likely(elf_bss != elf_brk) && unlikely(padzero(elf_bss))) {
-		retval = -EFAULT; /* Nobody gets to see this, but.. */
-		goto out_free_dentry;
-	}
+	current->mm->start_brk = current->mm->brk = ELF_PAGEALIGN(elf_brk);
 
 	if (interpreter) {
 		elf_entry = load_elf_interp(interp_elf_ex,
-- 
2.34.1
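As a closing illustration of why elf_load() keeps the padzero() call for the
last file-backed page, here is a small, hypothetical user-space demonstration
(unrelated to the kernel code itself; the temp-file template and sizes are
invented): when a segment's file-backed portion ends in the middle of a page,
a mapping of that page also exposes the file bytes that follow it, so the
loader has to clear the tail up to the page boundary.

/*
 * Illustrative sketch only: a file-backed private mapping whose "segment"
 * ends mid-page also maps the junk bytes that follow in the file, until
 * they are explicitly zeroed, which is what padzero()/clear_user() does
 * in the kernel. Sizes and the temp-file template are made up.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t filesz = 100;	/* pretend p_filesz ends mid-page */
	char path[] = "/tmp/padzero-demo-XXXXXX";
	char buf[200];

	int fd = mkstemp(path);
	if (fd < 0) {
		perror("mkstemp");
		return 1;
	}

	/* "Segment" bytes followed by junk that must never reach memory. */
	memset(buf, 'A', filesz);
	memset(buf + filesz, 'J', sizeof(buf) - filesz);
	if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
		perror("write");
		return 1;
	}

	char *map = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	printf("byte just past filesz before zeroing: '%c'\n", map[filesz]);

	/* The user-space analogue of padzero() clearing the page tail. */
	memset(map + filesz, 0, page - filesz);
	printf("byte just past filesz after zeroing:  %d\n", map[filesz]);

	munmap(map, page);
	close(fd);
	unlink(path);
	return 0;
}

In the kernel, clear_user() plays the role of the memset() here; as the
quoted Kylix commit shows, it can legitimately fail when the segment is
mapped without write permission, which is why elf_load() does not check its
return value.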