From: Benjamin Tissoires <bentiss@kernel.org>
Date: Fri, 15 Mar 2024 15:29:25 +0100
Subject: [PATCH bpf-next v4 1/6] bpf/helpers: introduce sleepable bpf_timers
Message-Id: <20240315-hid-bpf-sleepable-v4-1-5658f2540564@kernel.org>
References: <20240315-hid-bpf-sleepable-v4-0-5658f2540564@kernel.org>
In-Reply-To: <20240315-hid-bpf-sleepable-v4-0-5658f2540564@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
    Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, KP Singh,
    Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko, Shuah Khan
Cc: Benjamin Tissoires, bpf@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org
X-Mailer: b4 0.12.4

Sleepable bpf_timers are implemented as a workqueue, which means that
there are no guarantees of timing nor ordering.
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>

---
changes in v4:
- dropped __bpf_timer_compute_key()
- use a spin_lock instead of a semaphore
- ensure bpf_timer_cancel_and_free is not complaining about non sleepable
  context and use cancel_work() instead of cancel_work_sync()
- return -EINVAL if a delay is given to bpf_timer_start() with
  BPF_F_TIMER_SLEEPABLE

changes in v3:
- extracted the implementation in bpf_timer only, without
  bpf_timer_set_sleepable_cb()
- rely on schedule_work() only, from bpf_timer_start()
- add semaphore to ensure bpf_timer_work_cb() is accessing consistent data

changes in v2 (compared to the one attached to v1 0/9):
- make use of a kfunc
- add a (non-used) BPF_F_TIMER_SLEEPABLE
- the callback is *not* called, it makes the kernel crash
---
 include/uapi/linux/bpf.h |  4 +++
 kernel/bpf/helpers.c     | 86 ++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 88 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 3c42b9f1bada..b90def29d796 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -7461,10 +7461,14 @@ struct bpf_core_relo {
  *	- BPF_F_TIMER_ABS: Timeout passed is absolute time, by default it is
  *	  relative to current time.
  *	- BPF_F_TIMER_CPU_PIN: Timer will be pinned to the CPU of the caller.
+ *	- BPF_F_TIMER_SLEEPABLE: Timer will run in a sleepable context, with
+ *	  no guarantees of ordering nor timing (consider this as being just
+ *	  offloaded immediately).
  */
 enum {
 	BPF_F_TIMER_ABS = (1ULL << 0),
 	BPF_F_TIMER_CPU_PIN = (1ULL << 1),
+	BPF_F_TIMER_SLEEPABLE = (1ULL << 2),
 };
 
 /* BPF numbers iterator state */
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index a89587859571..38de73a9df83 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1094,14 +1094,20 @@ const struct bpf_func_proto bpf_snprintf_proto = {
  * bpf_timer_cancel() cancels the timer and decrements prog's refcnt.
  * Inner maps can contain bpf timers as well. ops->map_release_uref is
  * freeing the timers when inner map is replaced or deleted by user space.
+ *
+ * sleepable_lock protects only the setup of the workqueue, not the callback
+ * itself. This is done to ensure we don't run concurrently a free of the
+ * callback or the associated program.
  */
 struct bpf_hrtimer {
 	struct hrtimer timer;
+	struct work_struct work;
 	struct bpf_map *map;
 	struct bpf_prog *prog;
 	void __rcu *callback_fn;
 	void *value;
 	struct rcu_head rcu;
+	spinlock_t sleepable_lock;
 };
 
 /* the actual struct hidden inside uapi struct bpf_timer */
@@ -1114,6 +1120,49 @@ struct bpf_timer_kern {
 	struct bpf_spin_lock lock;
 } __attribute__((aligned(8)));
 
+static void bpf_timer_work_cb(struct work_struct *work)
+{
+	struct bpf_hrtimer *t = container_of(work, struct bpf_hrtimer, work);
+	struct bpf_map *map = t->map;
+	bpf_callback_t callback_fn;
+	void *value = t->value;
+	unsigned long flags;
+	void *key;
+	u32 idx;
+
+	BTF_TYPE_EMIT(struct bpf_timer);
+
+	spin_lock_irqsave(&t->sleepable_lock, flags);
+
+	callback_fn = READ_ONCE(t->callback_fn);
+	if (!callback_fn) {
+		spin_unlock_irqrestore(&t->sleepable_lock, flags);
+		return;
+	}
+
+	if (map->map_type == BPF_MAP_TYPE_ARRAY) {
+		struct bpf_array *array = container_of(map, struct bpf_array, map);
+
+		/* compute the key */
+		idx = ((char *)value - array->value) / array->elem_size;
+		key = &idx;
+	} else { /* hash or lru */
+		key = value - round_up(map->key_size, 8);
+	}
+
+	/* prevent the callback to be freed by bpf_timer_cancel() while running
+	 * so we can release the sleepable lock
+	 */
+	bpf_prog_inc(t->prog);
+
+	spin_unlock_irqrestore(&t->sleepable_lock, flags);
+
+	callback_fn((u64)(long)map, (u64)(long)key, (u64)(long)value, 0, 0);
+	/* The verifier checked that return value is zero. */
+
+	bpf_prog_put(t->prog);
+}
+
 static DEFINE_PER_CPU(struct bpf_hrtimer *, hrtimer_running);
 
 static enum hrtimer_restart bpf_timer_cb(struct hrtimer *hrtimer)
@@ -1192,6 +1241,8 @@ BPF_CALL_3(bpf_timer_init, struct bpf_timer_kern *, timer, struct bpf_map *, map
 	t->prog = NULL;
 	rcu_assign_pointer(t->callback_fn, NULL);
 	hrtimer_init(&t->timer, clockid, HRTIMER_MODE_REL_SOFT);
+	INIT_WORK(&t->work, bpf_timer_work_cb);
+	spin_lock_init(&t->sleepable_lock);
 	t->timer.function = bpf_timer_cb;
 	WRITE_ONCE(timer->timer, t);
 	/* Guarantee the order between timer->timer and map->usercnt. So
@@ -1237,6 +1288,7 @@ BPF_CALL_3(bpf_timer_set_callback, struct bpf_timer_kern *, timer, void *, callb
 		ret = -EINVAL;
 		goto out;
 	}
+	spin_lock(&t->sleepable_lock);
 	if (!atomic64_read(&t->map->usercnt)) {
 		/* maps with timers must be either held by user space
 		 * or pinned in bpffs. Otherwise timer might still be
@@ -1263,6 +1315,8 @@ BPF_CALL_3(bpf_timer_set_callback, struct bpf_timer_kern *, timer, void *, callb
 	}
 	rcu_assign_pointer(t->callback_fn, callback_fn);
 out:
+	if (t)
+		spin_unlock(&t->sleepable_lock);
 	__bpf_spin_unlock_irqrestore(&timer->lock);
 	return ret;
 }
@@ -1283,8 +1337,12 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
 	if (in_nmi())
 		return -EOPNOTSUPP;
-	if (flags & ~(BPF_F_TIMER_ABS | BPF_F_TIMER_CPU_PIN))
+	if (flags & ~(BPF_F_TIMER_ABS | BPF_F_TIMER_CPU_PIN | BPF_F_TIMER_SLEEPABLE))
 		return -EINVAL;
+
+	if ((flags & BPF_F_TIMER_SLEEPABLE) && nsecs)
+		return -EINVAL;
+
 	__bpf_spin_lock_irqsave(&timer->lock);
 	t = timer->timer;
 	if (!t || !t->prog) {
@@ -1300,7 +1358,10 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
 	if (flags & BPF_F_TIMER_CPU_PIN)
 		mode |= HRTIMER_MODE_PINNED;
 
-	hrtimer_start(&t->timer, ns_to_ktime(nsecs), mode);
+	if (flags & BPF_F_TIMER_SLEEPABLE)
+		schedule_work(&t->work);
+	else
+		hrtimer_start(&t->timer, ns_to_ktime(nsecs), mode);
 out:
 	__bpf_spin_unlock_irqrestore(&timer->lock);
 	return ret;
@@ -1348,13 +1409,22 @@ BPF_CALL_1(bpf_timer_cancel, struct bpf_timer_kern *, timer)
 		ret = -EDEADLK;
 		goto out;
 	}
+	spin_lock(&t->sleepable_lock);
 	drop_prog_refcnt(t);
+	spin_unlock(&t->sleepable_lock);
 out:
 	__bpf_spin_unlock_irqrestore(&timer->lock);
 	/* Cancel the timer and wait for associated callback to finish
 	 * if it was running.
 	 */
 	ret = ret ?: hrtimer_cancel(&t->timer);
+
+	/* also cancel the sleepable work, but *do not* wait for
+	 * it to finish if it was running as we might not be in a
+	 * sleepable context
+	 */
+	ret = ret ?: cancel_work(&t->work);
+
 	rcu_read_unlock();
 	return ret;
 }
@@ -1383,11 +1453,13 @@ void bpf_timer_cancel_and_free(void *val)
 	t = timer->timer;
 	if (!t)
 		goto out;
+	spin_lock(&t->sleepable_lock);
 	drop_prog_refcnt(t);
 	/* The subsequent bpf_timer_start/cancel() helpers won't be able to use
 	 * this timer, since it won't be initialized.
 	 */
 	WRITE_ONCE(timer->timer, NULL);
+	spin_unlock(&t->sleepable_lock);
 out:
 	__bpf_spin_unlock_irqrestore(&timer->lock);
 	if (!t)
@@ -1410,6 +1482,16 @@ void bpf_timer_cancel_and_free(void *val)
 	 */
 	if (this_cpu_read(hrtimer_running) != t)
 		hrtimer_cancel(&t->timer);
+
+	/* also cancel the sleepable work, but *do not* wait for
+	 * it to finish if it was running as we might not be in a
+	 * sleepable context. Same reason as above, it's fine to
+	 * free 't': the subprog callback will never access it anymore
+	 * and can not reschedule itself since timer->timer = NULL was
+	 * already done.
+	 */
+	cancel_work(&t->work);
+
 	kfree_rcu(t, rcu);
 }

-- 
2.44.0