From: Axel Rasmussen <axelrasmussen@google.com>
To: Andrew Morton, Chinwen Chang, Daniel Jordan, David Rientjes,
    Davidlohr Bueso, Ingo Molnar, Jann Horn, Laurent Dufour,
    Michel Lespinasse, Stephen Rothwell, Steven Rostedt, Vlastimil Babka
Cc: Yafang Shao, "David S. Miller", dsahern@kernel.org,
    Greg Kroah-Hartman, Jakub Kicinski, liuhangbin@gmail.com, Tejun Heo,
    Shakeel Butt, Greg Thelen, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen
Subject: [PATCH v3] mm: mmap_lock: fix use-after-free race and css ref leak in tracepoints
Date: Mon, 7 Dec 2020 13:33:58 -0800
Message-Id: <20201207213358.573750-1-axelrasmussen@google.com>
X-Mailer: git-send-email 2.29.2.576.ga3fc446d84-goog
X-Mailing-List: linux-kernel@vger.kernel.org

syzbot reported[1] a use-after-free introduced in 0f818c4bc1f3. The bug
is that an ongoing trace event might race with the tracepoint being
disabled (and therefore the _unreg() callback being called). Consider
this ordering:

  T1: trace event fires, get_mm_memcg_path() is called
  T1: get_memcg_path_buf() returns a buffer pointer
  T2: trace_mmap_lock_unreg() is called, buffers are freed
  T1: cgroup_path() is called with the now-freed buffer

The solution in this commit is to switch to mutex + RCU. With the RCU
API we can first stop new buffers from being handed out, then wait for
existing users to finish, and *then* free the buffers.

I have a simple reproducer program which spins up two pools of threads,
doing the following in a tight loop (a rough sketch of such a program
is included below):

  Pool 1:
    mmap(NULL, 4096, PROT_READ | PROT_WRITE,
         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
    munmap()

  Pool 2:
    echo 1 > /sys/kernel/debug/tracing/events/mmap_lock/enable
    echo 0 > /sys/kernel/debug/tracing/events/mmap_lock/enable

This triggers the use-after-free very quickly. With this patch, I let
it run for an hour without any BUGs.

While fixing this, I also noticed and fixed a css ref leak. Previously
we called get_mem_cgroup_from_mm(), but we never called css_put() to
release that reference. get_mm_memcg_path() now does this properly.
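
For illustration, a minimal userspace sketch of such a reproducer might
look like the code below. This is a hypothetical sketch, not the exact
program used: the thread counts are assumptions, it must run as root,
and it assumes tracefs is available at /sys/kernel/debug/tracing.

/* Build: gcc -O2 -pthread repro.c -o repro */
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define NR_MMAP_THREADS   4
#define NR_TOGGLE_THREADS 2

static const char *enable_path =
	"/sys/kernel/debug/tracing/events/mmap_lock/enable";

/* Pool 1: hammer mmap()/munmap() so mmap_lock trace events fire constantly. */
static void *mmap_loop(void *arg)
{
	for (;;) {
		void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p != MAP_FAILED)
			munmap(p, 4096);
	}
	return NULL;
}

/* Pool 2: toggle the trace event so _reg()/_unreg() race with the events. */
static void *toggle_loop(void *arg)
{
	for (;;) {
		int fd = open(enable_path, O_WRONLY);

		if (fd < 0)
			exit(1);
		(void)write(fd, "1", 1);
		(void)write(fd, "0", 1);
		close(fd);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	int i;

	for (i = 0; i < NR_MMAP_THREADS; i++)
		pthread_create(&t, NULL, mmap_loop, NULL);
	for (i = 0; i < NR_TOGGLE_THREADS; i++)
		pthread_create(&t, NULL, toggle_loop, NULL);
	pause();	/* let the pools spin until interrupted */
	return 0;
}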

[1]: https://syzkaller.appspot.com/bug?extid=19e6dd9943972fa1c58a

Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
---
 mm/mmap_lock.c | 123 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 83 insertions(+), 40 deletions(-)

diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
index 12af8f1b8a14..dcdde4f722a4 100644
--- a/mm/mmap_lock.c
+++ b/mm/mmap_lock.c
@@ -6,9 +6,10 @@
 #include <linux/cgroup.h>
 #include <linux/memcontrol.h>
 #include <linux/mmap_lock.h>
+#include <linux/mutex.h>
 #include <linux/percpu.h>
+#include <linux/rcupdate.h>
 #include <linux/smp.h>
-#include <linux/spinlock.h>
 #include <linux/trace_events.h>
 
 EXPORT_TRACEPOINT_SYMBOL(mmap_lock_start_locking);
@@ -23,8 +24,8 @@ EXPORT_TRACEPOINT_SYMBOL(mmap_lock_released);
  * concurrent _reg() and _unreg() calls, and count how many _reg() calls have
  * been made.
  */
-static DEFINE_SPINLOCK(reg_lock);
-static int reg_refcount;
+static DEFINE_MUTEX(reg_lock);
+static int reg_refcount; /* Protected by reg_lock. */
 
 /*
  * Size of the buffer for memcg path names. Ignoring stack trace support,
@@ -38,99 +39,141 @@ static int reg_refcount;
  */
 #define CONTEXT_COUNT 4
 
-DEFINE_PER_CPU(char *, memcg_path_buf);
-DEFINE_PER_CPU(int, memcg_path_buf_idx);
+static DEFINE_PER_CPU(char __rcu *, memcg_path_buf);
+static char **tmp_bufs;
+static DEFINE_PER_CPU(int, memcg_path_buf_idx);
+
+/* Called with reg_lock held. */
+static void free_memcg_path_bufs(void)
+{
+	int cpu;
+	char **old = tmp_bufs;
+
+	for_each_possible_cpu(cpu) {
+		*(old++) = rcu_dereference_protected(
+			per_cpu(memcg_path_buf, cpu),
+			lockdep_is_held(&reg_lock));
+		rcu_assign_pointer(per_cpu(memcg_path_buf, cpu), NULL);
+	}
+
+	/* Wait for inflight memcg_path_buf users to finish. */
+	synchronize_rcu();
+
+	old = tmp_bufs;
+	for_each_possible_cpu(cpu) {
+		kfree(*(old++));
+	}
+
+	kfree(tmp_bufs);
+	tmp_bufs = NULL;
+}
 
 int trace_mmap_lock_reg(void)
 {
-	unsigned long flags;
 	int cpu;
+	char *new;
 
-	spin_lock_irqsave(&reg_lock, flags);
+	mutex_lock(&reg_lock);
 
+	/* If the refcount is going 0->1, proceed with allocating buffers. */
 	if (reg_refcount++)
 		goto out;
 
+	tmp_bufs = kmalloc_array(num_possible_cpus(), sizeof(*tmp_bufs),
+				 GFP_KERNEL);
+	if (tmp_bufs == NULL)
+		goto out_fail;
+
 	for_each_possible_cpu(cpu) {
-		per_cpu(memcg_path_buf, cpu) = NULL;
-	}
-	for_each_possible_cpu(cpu) {
-		per_cpu(memcg_path_buf, cpu) = kmalloc(
-			MEMCG_PATH_BUF_SIZE * CONTEXT_COUNT, GFP_NOWAIT);
-		if (per_cpu(memcg_path_buf, cpu) == NULL)
-			goto out_fail;
-		per_cpu(memcg_path_buf_idx, cpu) = 0;
+		new = kmalloc(MEMCG_PATH_BUF_SIZE * CONTEXT_COUNT, GFP_KERNEL);
+		if (new == NULL)
+			goto out_fail_free;
+		rcu_assign_pointer(per_cpu(memcg_path_buf, cpu), new);
+		/* Don't need to wait for inflights, they'd have gotten NULL. */
 	}
 
 out:
-	spin_unlock_irqrestore(&reg_lock, flags);
+	mutex_unlock(&reg_lock);
 
 	return 0;
 
+out_fail_free:
+	free_memcg_path_bufs();
 out_fail:
-	for_each_possible_cpu(cpu) {
-		if (per_cpu(memcg_path_buf, cpu) != NULL)
-			kfree(per_cpu(memcg_path_buf, cpu));
-		else
-			break;
-	}
-
+	/* Since we failed, undo the earlier ref increment. */
 	--reg_refcount;
 
-	spin_unlock_irqrestore(&reg_lock, flags);
+	mutex_unlock(&reg_lock);
 
 	return -ENOMEM;
 }
 
 void trace_mmap_lock_unreg(void)
 {
-	unsigned long flags;
-	int cpu;
-
-	spin_lock_irqsave(&reg_lock, flags);
+	mutex_lock(&reg_lock);
 
+	/* If the refcount is going 1->0, proceed with freeing buffers. */
 	if (--reg_refcount)
 		goto out;
 
-	for_each_possible_cpu(cpu) {
-		kfree(per_cpu(memcg_path_buf, cpu));
-	}
+	free_memcg_path_bufs();
 
 out:
-	spin_unlock_irqrestore(&reg_lock, flags);
+	mutex_unlock(&reg_lock);
 }
 
 static inline char *get_memcg_path_buf(void)
 {
+	char *buf;
 	int idx;
 
+	rcu_read_lock();
+	buf = rcu_dereference(*this_cpu_ptr(&memcg_path_buf));
+	if (buf == NULL) {
+		rcu_read_unlock();
+		return NULL;
+	}
 	idx = this_cpu_add_return(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE) -
 	      MEMCG_PATH_BUF_SIZE;
-	return &this_cpu_read(memcg_path_buf)[idx];
+	return &buf[idx];
 }
 
 static inline void put_memcg_path_buf(void)
 {
 	this_cpu_sub(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE);
+	rcu_read_unlock();
 }
 
 /*
  * Write the given mm_struct's memcg path to a percpu buffer, and return a
- * pointer to it. If the path cannot be determined, NULL is returned.
+ * pointer to it. If the path cannot be determined, or no buffer was available
+ * (because the trace event is being unregistered), NULL is returned.
  *
  * Note: buffers are allocated per-cpu to avoid locking, so preemption must be
  * disabled by the caller before calling us, and re-enabled only after the
  * caller is done with the pointer.
+ *
+ * The caller must call put_memcg_path_buf() once the buffer is no longer
+ * needed. This must be done while preemption is still disabled.
  */
 static const char *get_mm_memcg_path(struct mm_struct *mm)
 {
+	char *buf = NULL;
 	struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
 
-	if (memcg != NULL && likely(memcg->css.cgroup != NULL)) {
-		char *buf = get_memcg_path_buf();
+	if (memcg == NULL)
+		goto out;
+	if (unlikely(memcg->css.cgroup == NULL))
+		goto out_put;
 
-		cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
-		return buf;
-	}
-	return NULL;
+	buf = get_memcg_path_buf();
+	if (buf == NULL)
+		goto out_put;
+
+	cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
+
+out_put:
+	css_put(&memcg->css);
+out:
+	return buf;
 }
 
 #define TRACE_MMAP_LOCK_EVENT(type, mm, ...)                                   \
-- 
2.29.2.576.ga3fc446d84-goog
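
For readers less familiar with the RCU pattern relied on above, the
scheme can be modeled in a few lines of userspace code. The sketch below
is illustrative only and uses liburcu (an assumption; the kernel code
above uses the in-kernel RCU and per-CPU APIs, and the single global
pointer here stands in for the per-CPU memcg_path_buf). Build with:
gcc model.c -o model -lurcu

#include <stdlib.h>
#include <string.h>
#include <urcu.h>

static char *path_buf;	/* stands in for one per-CPU memcg_path_buf slot */

/* Reader: gets either a valid buffer, or NULL while unreg is in progress. */
static void use_buf(void)
{
	char *buf;

	rcu_read_lock();
	buf = rcu_dereference(path_buf);
	if (buf)
		strcpy(buf, "fake/cgroup/path");
	rcu_read_unlock();
}

static void reg(void)
{
	rcu_assign_pointer(path_buf, calloc(1, 256));
}

static void unreg(void)
{
	char *old = path_buf;	/* single updater here, a plain load is fine */

	rcu_assign_pointer(path_buf, NULL);	/* stop handing the buffer out */
	synchronize_rcu();			/* wait for existing users */
	free(old);				/* no reader can still hold it */
}

int main(void)
{
	rcu_register_thread();
	reg();
	use_buf();
	unreg();
	use_buf();	/* safely sees NULL after unreg() */
	rcu_unregister_thread();
	return 0;
}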