From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Linus Torvalds,
    Oleg Nesterov, Srikar Dronamraju, Christian Borntraeger,
    Sven Schnelle, Steven Rostedt
Subject: [PATCH 5.7 24/24] uprobes: ensure that uprobe->offset and ->ref_ctr_offset are properly aligned
Date: Tue, 9 Jun 2020 19:45:55 +0200
Message-Id: <20200609174151.286066479@linuxfoundation.org>
In-Reply-To: <20200609174149.255223112@linuxfoundation.org>
References: <20200609174149.255223112@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: linux-kernel@vger.kernel.org

From: Oleg Nesterov

commit 013b2deba9a6b80ca02f4fafd7dedf875e9b4450 upstream.

uprobe_write_opcode() must not cross a page boundary; prepare_uprobe()
relies on arch_uprobe_analyze_insn(), which should validate "vaddr",
but some architectures (csky, s390, and sparc) don't do this.

We can remove the BUG_ON() check in prepare_uprobe() and validate the
offset early in __uprobe_register(). The new IS_ALIGNED() check matches
the alignment check in arch_prepare_kprobe() on supported
architectures, so I think that all insns must be aligned to
UPROBE_SWBP_INSN_SIZE.

Another problem is __update_ref_ctr(), which was wrong from the very
beginning: it can read/write outside of the kmap'ed page unless "vaddr"
is aligned to sizeof(short). __uprobe_register() should check this too.
Reported-by: Linus Torvalds
Suggested-by: Linus Torvalds
Signed-off-by: Oleg Nesterov
Reviewed-by: Srikar Dronamraju
Acked-by: Christian Borntraeger
Tested-by: Sven Schnelle
Cc: Steven Rostedt
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 kernel/events/uprobes.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -867,10 +867,6 @@ static int prepare_uprobe(struct uprobe
 	if (ret)
 		goto out;
 
-	/* uprobe_write_opcode() assumes we don't cross page boundary */
-	BUG_ON((uprobe->offset & ~PAGE_MASK) +
-			UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);
-
 	smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */
 	set_bit(UPROBE_COPY_INSN, &uprobe->flags);
 
@@ -1166,6 +1162,15 @@ static int __uprobe_register(struct inod
 	if (offset > i_size_read(inode))
 		return -EINVAL;
 
+	/*
+	 * This ensures that copy_from_page(), copy_to_page() and
+	 * __update_ref_ctr() can't cross page boundary.
+	 */
+	if (!IS_ALIGNED(offset, UPROBE_SWBP_INSN_SIZE))
+		return -EINVAL;
+	if (!IS_ALIGNED(ref_ctr_offset, sizeof(short)))
+		return -EINVAL;
+
  retry:
 	uprobe = alloc_uprobe(inode, offset, ref_ctr_offset);
 	if (!uprobe)
@@ -2014,6 +2019,9 @@ static int is_trap_at_addr(struct mm_str
 	uprobe_opcode_t opcode;
 	int result;
 
+	if (WARN_ON_ONCE(!IS_ALIGNED(vaddr, UPROBE_SWBP_INSN_SIZE)))
+		return -EINVAL;
+
 	pagefault_disable();
 	result = __get_user(opcode, (uprobe_opcode_t __user *)vaddr);
 	pagefault_enable();