From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Kees Cook, Hector Marco-Gisbert,
    Ismael Ripoll Ripoll, Alexander Viro,
    "Kirill A. Shutemov", Oleg Nesterov, Chen Gang, Michal Hocko,
    Konstantin Khlebnikov, Andrea Arcangeli, Andrey Ryabinin, Andrew Morton,
    Linus Torvalds, Ben Hutchings, Sasha Levin
Subject: [PATCH 4.4 113/160] binfmt_elf: fix calculations for bss padding
Date: Mon, 19 Nov 2018 17:29:12 +0100
Message-Id: <20181119162641.731743738@linuxfoundation.org>
X-Mailer: git-send-email 2.19.1
In-Reply-To: <20181119162630.031306128@linuxfoundation.org>
References: <20181119162630.031306128@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

commit 0036d1f7eb95bcc52977f15507f00dd07018e7e2 upstream.

A double-bug exists in the bss calculation code, where an overflow can
happen in the "last_bss - elf_bss" calculation, but vm_brk() internally
aligns the argument, underflowing it and wrapping it back around to a
safe value.  We should not depend on these two bugs staying in sync, so
this cleans up the bss padding handling to avoid the overflow.

This moves the bss padzero() before the last_bss > elf_bss case, since
the zero-filling of the ELF_PAGE should have nothing to do with the
relationship of last_bss and elf_bss: any trailing portion should be
zeroed, and a zero size is already handled by padzero().

Then it handles the math on elf_bss vs last_bss correctly.  These both
need to be ELF_PAGE aligned for the comparison to be correct, since
that is the expected granularity of the mappings.  Since elf_bss already
had alignment-based padding applied in padzero(), the "start" of the new
vm_brk() should be moved forward as in the original code.  However,
since the "end" of the vm_brk() area will already become PAGE_ALIGNed
inside vm_brk(), last_bss should be aligned here as well, rather than
hiding that alignment as a side effect.

Additionally, this makes a cosmetic change to the initial last_bss
calculation so it is easier to read in comparison to the load_addr
calculation above it (i.e. the only difference is p_filesz vs p_memsz).

Link: http://lkml.kernel.org/r/1468014494-25291-2-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook
Reported-by: Hector Marco-Gisbert
Cc: Ismael Ripoll Ripoll
Cc: Alexander Viro
Cc: "Kirill A. Shutemov"
Cc: Oleg Nesterov
Cc: Chen Gang
Cc: Michal Hocko
Cc: Konstantin Khlebnikov
Cc: Andrea Arcangeli
Cc: Andrey Ryabinin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Ben Hutchings
Signed-off-by: Sasha Levin
---
 fs/binfmt_elf.c | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 70ea4b9c6dd9..2963a23f7a80 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -604,28 +604,30 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
 			 * Do the same thing for the memory mapping - between
 			 * elf_bss and last_bss is the bss section.
 			 */
-			k = load_addr + eppnt->p_memsz + eppnt->p_vaddr;
+			k = load_addr + eppnt->p_vaddr + eppnt->p_memsz;
 			if (k > last_bss)
 				last_bss = k;
 		}
 	}
 
+	/*
+	 * Now fill out the bss section: first pad the last page from
+	 * the file up to the page boundary, and zero it from elf_bss
+	 * up to the end of the page.
+	 */
+	if (padzero(elf_bss)) {
+		error = -EFAULT;
+		goto out;
+	}
+	/*
+	 * Next, align both the file and mem bss up to the page size,
+	 * since this is where elf_bss was just zeroed up to, and where
+	 * last_bss will end after the vm_brk() below.
+	 */
+	elf_bss = ELF_PAGEALIGN(elf_bss);
+	last_bss = ELF_PAGEALIGN(last_bss);
+	/* Finally, if there is still more bss to allocate, do it. */
 	if (last_bss > elf_bss) {
-		/*
-		 * Now fill out the bss section. First pad the last page up
-		 * to the page boundary, and then perform a mmap to make sure
-		 * that there are zero-mapped pages up to and including the
-		 * last bss page.
-		 */
-		if (padzero(elf_bss)) {
-			error = -EFAULT;
-			goto out;
-		}
-
-		/* What we have mapped so far */
-		elf_bss = ELF_PAGESTART(elf_bss + ELF_MIN_ALIGN - 1);
-
-		/* Map the last of the bss segment */
 		error = vm_brk(elf_bss, last_bss - elf_bss);
 		if (BAD_ADDR(error))
 			goto out;
-- 
2.17.1
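
For reference, a minimal stand-alone sketch of the arithmetic the changelog
describes (not part of the patch, and not kernel code): ELF_MIN_ALIGN,
ELF_PAGESTART and ELF_PAGEALIGN are re-declared locally with an assumed
4 KiB ELF page, and the addresses are made up, purely to show how the old
start-only rounding lets "last_bss - elf_bss" wrap while aligning both ends
first makes the "anything left to map?" check come out right.

/*
 * Userspace illustration only; macros and addresses are assumptions,
 * not the kernel's definitions.
 */
#include <stdio.h>

#define ELF_MIN_ALIGN     4096UL
#define ELF_PAGESTART(v)  ((v) & ~(ELF_MIN_ALIGN - 1))
#define ELF_PAGEALIGN(v)  (((v) + ELF_MIN_ALIGN - 1) & ~(ELF_MIN_ALIGN - 1))

int main(void)
{
	/* Both ends fall in the same page: the in-memory image (last_bss)
	 * ends only a few bytes past the file-backed image (elf_bss).   */
	unsigned long elf_bss  = 0x40a010;  /* end of p_filesz mapping */
	unsigned long last_bss = 0x40a030;  /* end of p_memsz mapping  */

	/* Old code: only the start was rounded up, so the length handed to
	 * vm_brk() could wrap around to a huge unsigned value.           */
	unsigned long old_start = ELF_PAGESTART(elf_bss + ELF_MIN_ALIGN - 1);
	unsigned long old_len   = last_bss - old_start;     /* underflows */

	/* New code: align both ends first, then only map if anything is
	 * actually left beyond the page that padzero() already zeroed.  */
	unsigned long new_bss  = ELF_PAGEALIGN(elf_bss);
	unsigned long new_last = ELF_PAGEALIGN(last_bss);

	printf("old: vm_brk(%#lx, %#lx)  <- wrapped length\n", old_start, old_len);
	if (new_last > new_bss)
		printf("new: vm_brk(%#lx, %#lx)\n", new_bss, new_last - new_bss);
	else
		printf("new: nothing left to map\n");
	return 0;
}

With these example addresses the old calculation passes vm_brk() a wrapped
length and is only rescued by vm_brk()'s internal page alignment, whereas the
new calculation sees that nothing is left to map.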