From: Cheng Jian <cj.chengjian@huawei.com>
Subject: [PATCH] futex: use fault_in to avoid infinite loop
Date: Wed, 6 Dec 2017 22:21:07 +0800
Message-ID: <1512570067-79946-1-git-send-email-cj.chengjian@huawei.com>

A misaligned address registered with SYS_set_robust_list from user
space causes a softlockup (an infinite loop in kernel space) when the
kernel later walks the robust list at process exit. It can be
triggered by the following demo:

// futex_align.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

int main()
{
	char *p = malloc(128);
	struct robust_list_head *ro1;
	struct robust_list *entry;
	struct robust_list *pending;
	int ret = 0;
	pid_t pid = getpid();

	printf("size = %zu, p %p pid [%d]\n",
	       sizeof(struct robust_list_head), p, pid);

	ro1 = (struct robust_list_head *)p;
	entry = (struct robust_list *)(p + 20);
	pending = (struct robust_list *)(p + 40);

	ro1->list.next = entry;
	ro1->list_op_pending = pending;
	entry->next = &(ro1->list);
	/* odd offset: the futex word at entry + 41 is misaligned */
	ro1->futex_offset = 41;

	/*
	 * Store our pid at the misaligned futex word so that
	 * handle_futex_death() attempts the cmpxchg on it at exit.
	 */
	*((int *)((char *)entry + 41)) = pid;
	printf(" entry + offset [%p] [%d]\n",
	       (int *)((char *)entry + 41),
	       *((int *)((char *)entry + 41)));

	ret = syscall(SYS_set_robust_list, ro1,
		      sizeof(struct robust_list_head));
	printf("ret = [%d]\n", ret);

	return 0;
}

The reason is that on arm64 the exclusive load/store instructions
(LDXR/STXR) used by cmpxchg_futex_value_locked() require an aligned
address. A misaligned futex word triggers an alignment fault, so
cmpxchg_futex_value_locked() keeps returning -EFAULT:

int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int pi)
{
retry:
	//......
	/* returns -EFAULT because of the alignment fault */
	if (cmpxchg_futex_value_locked(&nval, uaddr, uval, mval)) {
		/* always returns 0: the page is present and writable */
		if (fault_in_user_writeable(uaddr))
			return -1;	/* never reached */
		goto retry;		/* so we always retry */
	}
	//......
}

So the code keeps bouncing between retry and goto retry:

	retry -> goto retry -> retry -> goto retry -> ...

which is an endless loop.

So use a fault_in flag to avoid it: with the flag set, this branch
cannot take the retry label a second time.
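To make the bounded retry concrete, here is a small user-space model of
the loop (only a sketch, not kernel code): cmpxchg_fails() and
fault_in_ok() are hypothetical stand-ins for cmpxchg_futex_value_locked()
always returning -EFAULT on the misaligned word and
fault_in_user_writeable() always succeeding. Without the fault_in flag
the branch loops forever; with it, the retry is taken at most once.

#include <stdbool.h>
#include <stdio.h>

/* stand-in: the cmpxchg always faults on the misaligned futex word */
static int cmpxchg_fails(void)
{
	return -1;
}

/* stand-in: faulting the page in always succeeds (returns 0) */
static int fault_in_ok(void)
{
	return 0;
}

static int handle_death_model(void)
{
	bool fault_in = false;	/* the flag added by this patch */
	int spins = 0;

retry:
	spins++;
	if (cmpxchg_fails()) {
		/*
		 * Without the flag this is an endless loop: fault-in
		 * "succeeds", so we unconditionally goto retry.
		 * With the flag we give up on the second pass and
		 * report how many passes we made.
		 */
		if (fault_in || fault_in_ok())
			return spins;
		fault_in = true;
		goto retry;
	}
	return 0;
}

int main(void)
{
	printf("gave up after %d attempts\n", handle_death_model());
	return 0;
}

Running it prints "gave up after 2 attempts"; dropping the fault_in
check turns it back into an endless loop.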
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 kernel/futex.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/futex.c b/kernel/futex.c
index 76ed592..bc0b14f 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -3327,6 +3327,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
 int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int pi)
 {
 	u32 uval, uninitialized_var(nval), mval;
+	int fault_in = false;
 
 retry:
 	if (get_user(uval, uaddr))
@@ -3351,11 +3352,15 @@ int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int pi)
 	 * access fails we try to fault in the futex with R/W
 	 * verification via get_user_pages. get_user() above
 	 * does not guarantee R/W access. If that fails we
-	 * give up and leave the futex locked.
+	 * give up and leave the futex locked. Use fault_in
+	 * to avoid an infinite loop on other exceptions.
 	 */
 	if (cmpxchg_futex_value_locked(&nval, uaddr, uval, mval)) {
-		if (fault_in_user_writeable(uaddr))
+		if (unlikely(fault_in) ||
+		    fault_in_user_writeable(uaddr)) {
 			return -1;
+		}
+		fault_in = true;
 		goto retry;
 	}
 	if (nval != uval)
-- 
1.8.3.1