From: Mattias Nissler
Date: Sun, 6 Oct 2019 23:28:40 -0700
Subject: Re: [PATCH 4.9 30/47] ANDROID: binder: remove waitqueue when thread exits.
To: Greg Kroah-Hartman
Cc: Todd Kjos, Martijn Coenen, linux-kernel@vger.kernel.org, stable@vger.kernel.org, syzbot
In-Reply-To: <20191006182433.GA217738@kroah.com>

(resend, apologies for accidental HTML reply)

On Sun, Oct 6, 2019 at 11:24 AM Greg Kroah-Hartman wrote:
>
> On Sun, Oct 06, 2019 at 10:32:02AM -0700, Eric Biggers wrote:
> > On Sun, Oct 06, 2019 at 07:21:17PM +0200, Greg Kroah-Hartman wrote:
> > > From: Martijn Coenen
> > >
> > > commit f5cb779ba16334b45ba8946d6bfa6d9834d1527f upstream.
> > >
> > > binder_poll() passes the thread->wait waitqueue that
> > > can be slept on for work.
> > > When a thread that uses epoll explicitly exits using
> > > BINDER_THREAD_EXIT, the waitqueue is freed, but it is never
> > > removed from the corresponding epoll data structure. When
> > > the process subsequently exits, the epoll cleanup code tries
> > > to access the waitlist, which results in a use-after-free.
> > >
> > > Prevent this by using POLLFREE when the thread exits.
> > >
> > > Signed-off-by: Martijn Coenen
> > > Reported-by: syzbot
> > > Cc: stable # 4.14
> > > [backport BINDER_LOOPER_STATE_POLL logic as well]
> > > Signed-off-by: Mattias Nissler
> > > Signed-off-by: Greg Kroah-Hartman
> > > ---
> > >  drivers/android/binder.c | 17 ++++++++++++++++-
> > >  1 file changed, 16 insertions(+), 1 deletion(-)
> > >
> > > --- a/drivers/android/binder.c
> > > +++ b/drivers/android/binder.c
> > > @@ -334,7 +334,8 @@ enum {
> > >  	BINDER_LOOPER_STATE_EXITED = 0x04,
> > >  	BINDER_LOOPER_STATE_INVALID = 0x08,
> > >  	BINDER_LOOPER_STATE_WAITING = 0x10,
> > > -	BINDER_LOOPER_STATE_NEED_RETURN = 0x20
> > > +	BINDER_LOOPER_STATE_NEED_RETURN = 0x20,
> > > +	BINDER_LOOPER_STATE_POLL = 0x40,
> > >  };
> > >
> > >  struct binder_thread {
> > > @@ -2628,6 +2629,18 @@ static int binder_free_thread(struct bin
> > >  		} else
> > >  			BUG();
> > >  	}
> > > +
> > > +	/*
> > > +	 * If this thread used poll, make sure we remove the waitqueue
> > > +	 * from any epoll data structures holding it with POLLFREE.
> > > +	 * waitqueue_active() is safe to use here because we're holding
> > > +	 * the inner lock.
> > > +	 */
> > > +	if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
> > > +	    waitqueue_active(&thread->wait)) {
> > > +		wake_up_poll(&thread->wait, POLLHUP | POLLFREE);
> > > +	}
> > > +
> > >  	if (send_reply)
> > >  		binder_send_failed_reply(send_reply, BR_DEAD_REPLY);
> > >  	binder_release_work(&thread->todo);
> > > @@ -2651,6 +2664,8 @@ static unsigned int binder_poll(struct f
> > >  		return POLLERR;
> > >  	}
> > >
> > > +	thread->looper |= BINDER_LOOPER_STATE_POLL;
> > > +
> > >  	wait_for_proc_work = thread->transaction_stack == NULL &&
> > >  		list_empty(&thread->todo) && thread->return_error == BR_OK;
> > >
> >
> > Are you sure this backport is correct, given that in 4.9, binder_poll()
> > sometimes uses proc->wait instead of thread->wait?:

Jann's PoC calls the BINDER_THREAD_EXIT ioctl to free the binder_thread,
which then causes the UAF, and that path is cut off by the patch. IIUC,
you are worried about a similar UAF on the proc->wait access. I am not
100% sure, but I think the binder_proc lifetime matches that of the
corresponding struct file instance, so it shouldn't be possible for the
binder_proc to be deallocated while it is still reachable via
filp->private_data.
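
To illustrate the lifetime argument, here is a rough sketch of the
open/release pairing I have in mind (simplified from memory, not the
literal 4.9 code; struct binder_proc and the deferred-release helper are
as in drivers/android/binder.c):

/*
 * Rough sketch, not the literal 4.9 code: binder_proc is allocated in
 * binder_open() and stored in filp->private_data; it is only torn down
 * from binder_release(), i.e. when the struct file itself goes away.
 */
static int binder_open(struct inode *nodp, struct file *filp)
{
	struct binder_proc *proc;

	proc = kzalloc(sizeof(*proc), GFP_KERNEL);
	if (proc == NULL)
		return -ENOMEM;
	INIT_LIST_HEAD(&proc->todo);
	init_waitqueue_head(&proc->wait);
	/* ... other bookkeeping elided ... */
	filp->private_data = proc;
	return 0;
}

static int binder_release(struct inode *nodp, struct file *filp)
{
	struct binder_proc *proc = filp->private_data;

	/*
	 * Only reached when the file itself is being destroyed, so nothing
	 * can still reach this proc through filp->private_data afterwards;
	 * the actual teardown is deferred.
	 */
	binder_defer_work(proc, BINDER_DEFERRED_RELEASE);
	return 0;
}

In other words, the proc can only go away once the file itself is torn
down, whereas a binder_thread (and its thread->wait) can be freed by an
ioctl while the file is still open, which is exactly the case the patch
handles.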
> >
> >         wait_for_proc_work = thread->transaction_stack == NULL &&
> >                 list_empty(&thread->todo) && thread->return_error == BR_OK;
> >
> >         binder_unlock(__func__);
> >
> >         if (wait_for_proc_work) {
> >                 if (binder_has_proc_work(proc, thread))
> >                         return POLLIN;
> >                 poll_wait(filp, &proc->wait, wait);
> >                 if (binder_has_proc_work(proc, thread))
> >                         return POLLIN;
> >         } else {
> >                 if (binder_has_thread_work(thread))
> >                         return POLLIN;
> >                 poll_wait(filp, &thread->wait, wait);
> >                 if (binder_has_thread_work(thread))
> >                         return POLLIN;
> >         }
> >         return 0;
>
> I _think_ the backport is correct, and I know someone has verified that
> the 4.4.y backport works properly, and I don't see much difference here
> from that version.
>
> But I will defer to Todd and Martijn here, as they know this code _WAY_
> better than I do. The codebase has changed a lot from 4.9.y to 4.14.y,
> which makes it hard to do direct comparisons.
>
> Todd and Martijn, thoughts?
>
> thanks,
>
> greg k-h