2016-04-12 05:29:25

by Bill Huey (hui)

Subject: [PATCH RFC v0 00/12] Cyclic Scheduler Against RTC

Hi,

This is a crude cyclic scheduler implementation. It uses SCHED_FIFO tasks
and runs them according to a map pattern specified by a 64 bit mask. Each
bit corresponds to an entry in a 64 entry array of
'struct task_struct'. This works on a single core, CPU 0 only, for now.

Threads are 'admitted' to this map by an extension to the ioctl() interface
of the (rtc) real-time clock driver. The bit pattern then determines when
the task will run or activate next.
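
As a rough sketch (not a definitive interface description), admitting the
calling thread from userspace might look like the following, assuming the
patched uapi header that adds RTC_OV_ADMIT and a thread that is already
running SCHED_FIFO (error handling mostly omitted):

    #include <linux/types.h>
    #include <linux/rtc.h>      /* patched header providing RTC_OV_ADMIT */
    #include <sys/ioctl.h>
    #include <fcntl.h>

    int fd = open("/dev/rtc0", O_RDONLY);
    __aligned_u64 slots = 0x1111111111111111ULL; /* run every 4th slot */

    if (ioctl(fd, RTC_OV_ADMIT, &slots) == -1)
        perror("RTC_OV_ADMIT");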

The /dev/rtc interface is chosen for this purpose because of its
accessibility to userspace. For example, the mplayer program already uses
it as a timer source and could possibly benefit from being synced to a
vertical retrace interrupt during decoding. It could also be an OpenGL
program needing precise scheduler support for handling those same vertical
retrace interrupts, low latency audio and timely handling of touch
events, amongst other uses.

There is also a need for some kind of blocking/yielding interface that can
return an overrun count for when the thread utilizes more time than
allocated for that frame. The read() function in rtc is overloaded for this
purpose and reports overrun events. Yield functionality has yet to be fully
tested.
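
Continuing the sketch above, a client loop might look roughly like this;
reduce_workload() and do_frame_work() are hypothetical application hooks,
and each read() on an admitted thread blocks (or yields, if requested)
until it is woken for its slot, returning the accumulated overrun count:

    unsigned long overruns;

    ioctl(fd, RTC_PIE_ON, 0);           /* enable the periodic interrupt */
    for (;;) {
        if (read(fd, &overruns, sizeof(overruns)) == -1)
            break;
        if (overruns)                   /* ran past the slot last time */
            reduce_workload();          /* hypothetical */
        do_frame_work();                /* hypothetical */
    }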

I apologize for any informal or misused terminology as I haven't fully
reviewed all of the academic literature regarding these kinds of schedulers.
I welcome suggestions, corrections, etc.

Special thanks to includes...

Peter Zijlstra (Intel), Steve Rostedt (Red Hat), Rik van Riel (Red Hat) for
encouraging me to continue working in the Linux kernel community and being
generally positive and supportive.

KY Srinivasan (formerly Novell now Microsoft) for discussion of real-time
schedulers and pointers to specifics on that topic. It was just a single
discussion but was basically the inspiration for this kind of work.

Amir Frenkel (Palm), Kenneth Albanowski (Palm), Bdale Garbee (HP) for the
amazing place that was Palm, and Kenneth for being a co-conspirator with this
scheduler. This scheduler was inspired by performance work that I did
in Palm's kernel group, along with discussions with the multimedia team,
before HP killed webOS off. A sad and infuriating moment.

Maybe, in a short while, the community will understand the value of these
patches for -rt and start solving the general problem of high performance
multi-media and user interactivity more properly, with both a scheduler
like this and -rt shipped as default in the near future.

[Also, I'd love some kind of sponsorship to continue what I think is
critical work versus heading back into the valley]

---

Bill Huey (hui) (12):
Kconfig change
Reroute rtc update irqs to the cyclic scheduler handler
Add cyclic support to rtc-dev.c
Anonymous struct initialization
Task tracking per file descriptor
Add anonymous struct to sched_rt_entity
kernel/userspace additions for additional ioctl() support for rtc
Compilation support
Add priority support for the cyclic scheduler
Export SCHED_FIFO/RT requeuing functions
Cyclic scheduler support
Cyclic/rtc documentation

Documentation/scheduler/sched-cyclic-rtc.txt | 468 ++++++++++++++++++++
drivers/rtc/Kconfig | 5 +
drivers/rtc/class.c | 3 +
drivers/rtc/interface.c | 23 +
drivers/rtc/rtc-dev.c | 161 +++++++
include/linux/init_task.h | 18 +
include/linux/rtc.h | 3 +
include/linux/sched.h | 15 +
include/uapi/linux/rtc.h | 4 +
kernel/sched/Makefile | 1 +
kernel/sched/core.c | 13 +
kernel/sched/cyclic.c | 620 +++++++++++++++++++++++++++
kernel/sched/cyclic.h | 86 ++++
kernel/sched/cyclic_rt.h | 7 +
kernel/sched/rt.c | 41 ++
15 files changed, 1468 insertions(+)
create mode 100644 Documentation/scheduler/sched-cyclic-rtc.txt
create mode 100644 kernel/sched/cyclic.c
create mode 100644 kernel/sched/cyclic.h
create mode 100644 kernel/sched/cyclic_rt.h

--
2.5.0


2016-04-12 05:29:31

by Bill Huey (hui)

Subject: [PATCH RFC v0 04/12] Anonymous struct initialization

Anonymous struct initialization

Signed-off-by: Bill Huey (hui) <[email protected]>
---
include/linux/init_task.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index f2cb8d4..308caf6 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -183,6 +183,23 @@ extern struct task_group root_task_group;
# define INIT_KASAN(tsk)
#endif

+#ifdef CONFIG_RTC_CYCLIC
+# define INIT_RT_OVERRUN(tsk) \
+ .rt_overrun = { \
+ .count = 0, \
+ .task_list = LIST_HEAD_INIT(tsk.rt.rt_overrun.task_list), \
+ .type = 0, \
+ .color = 0, \
+ .slots = 0, \
+ .yield = 0, \
+ .machine_state = 0, \
+ .last_machine_state = 0, \
+ .last_task_state = 0, \
+ },
+#else
+# define INIT_RT_OVERRUN(tsk)
+#endif
+
/*
* INIT_TASK is used to set up the first task table, touch at
* your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -210,6 +227,7 @@ extern struct task_group root_task_group;
.rt = { \
.run_list = LIST_HEAD_INIT(tsk.rt.run_list), \
.time_slice = RR_TIMESLICE, \
+ INIT_RT_OVERRUN(tsk) \
}, \
.tasks = LIST_HEAD_INIT(tsk.tasks), \
INIT_PUSHABLE_TASKS(tsk) \
--
2.5.0

2016-04-12 05:29:46

by Bill Huey (hui)

Subject: [PATCH RFC v0 11/12] Cyclic scheduler support

Core implementation of the cyclic scheduler, including admittance
handling, thread death support, the cyclic timer tick handler, a primitive
proc debugging interface, and wait-queue modifications.

Signed-off-by: Bill Huey (hui) <[email protected]>
---
kernel/sched/cyclic.c | 620 +++++++++++++++++++++++++++++++++++++++++++++++
kernel/sched/cyclic.h | 86 +++++++
kernel/sched/cyclic_rt.h | 7 +
3 files changed, 713 insertions(+)
create mode 100644 kernel/sched/cyclic.c
create mode 100644 kernel/sched/cyclic.h
create mode 100644 kernel/sched/cyclic_rt.h

diff --git a/kernel/sched/cyclic.c b/kernel/sched/cyclic.c
new file mode 100644
index 0000000..8ce34bd
--- /dev/null
+++ b/kernel/sched/cyclic.c
@@ -0,0 +1,620 @@
+/*
+ * cyclic scheduler for rtc support
+ *
+ * Copyright (C) Bill Huey
+ * Author: Bill Huey <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/module.h>
+#include <linux/rtc.h>
+#include <linux/sched.h>
+#include "sched.h"
+#include "cyclic.h"
+#include "cyclic_rt.h"
+
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+DEFINE_RAW_SPINLOCK(rt_overrun_lock);
+struct rb_root rt_overrun_tree = RB_ROOT;
+
+#define MASK2 0xFFFFffffFFFFfff0
+
+/* must revisit again when I get more time to fix the possibility of
+ * overflow here and 32 bit portability */
+static int cmp_ptr_unsigned_long(long *p, long *q)
+{
+ int result = ((unsigned long)p & MASK2) - ((unsigned long)q & MASK2);
+
+ WARN_ON(sizeof(long *) != 8);
+
+ if (!result)
+ return 0;
+ else if (result > 0)
+ return 1;
+ else
+ return -1;
+}
+
+static int eq_ptr_unsigned_long(long *p, long *q)
+{
+ return (((long)p & MASK2) == ((long)q & MASK2));
+}
+
+#define CMP_PTR_LONG(p,q) cmp_ptr_unsigned_long((long *)p, (long *)q)
+
+static
+struct task_struct *_rt_overrun_entry_find(struct rb_root *root,
+ struct task_struct *p)
+{
+ struct task_struct *ret = NULL;
+ struct rb_node *node = root->rb_node;
+
+ while (node) { // double_rq_lock(struct rq *, struct rq *) cpu_rq
+ struct task_struct *task = container_of(node,
+ struct task_struct, rt.rt_overrun.node);
+
+ int result = CMP_PTR_LONG(p, task);
+
+ if (result < 0)
+ node = node->rb_left;
+ else if (result > 0)
+ node = node->rb_right;
+ else {
+ ret = task;
+ goto exit;
+ }
+ }
+exit:
+ return ret;
+}
+
+static int rt_overrun_task_runnable(struct task_struct *p)
+{
+ return task_on_rq_queued(p);
+}
+
+/* avoiding excessive debug printing, splitting the entry point */
+static
+struct task_struct *rt_overrun_entry_find(struct rb_root *root,
+ struct task_struct *p)
+{
+printk("%s: \n", __func__);
+ return _rt_overrun_entry_find(root, p);
+}
+
+static int _rt_overrun_entry_insert(struct rb_root *root, struct task_struct *p)
+{
+ struct rb_node **new = &(root->rb_node), *parent = NULL;
+
+printk("%s: \n", __func__);
+ while (*new) {
+ struct task_struct *task = container_of(*new,
+ struct task_struct, rt.rt_overrun.node);
+
+ int result = CMP_PTR_LONG(p, task);
+
+ parent = *new;
+ if (result < 0)
+ new = &((*new)->rb_left);
+ else if (result > 0)
+ new = &((*new)->rb_right);
+ else
+ return 0;
+ }
+
+ /* Add new node and rebalance tree. */
+ rb_link_node(&p->rt.rt_overrun.node, parent, new);
+ rb_insert_color(&p->rt.rt_overrun.node, root);
+
+ return 1;
+}
+
+static void _rt_overrun_entry_delete(struct task_struct *p)
+{
+ struct task_struct *task;
+ int i;
+
+ task = rt_overrun_entry_find(&rt_overrun_tree, p);
+
+ if (task) {
+ printk("%s: p color %d - comm %s - slots 0x%016llx\n",
+ __func__, task->rt.rt_overrun.color, task->comm,
+ task->rt.rt_overrun.slots);
+
+ rb_erase(&task->rt.rt_overrun.node, &rt_overrun_tree);
+ list_del(&task->rt.rt_overrun.task_list);
+ for (i = 0; i < SLOTS; ++i) {
+ if (rt_admit_rq.curr[i] == p)
+ rt_admit_rq.curr[i] = NULL;
+ }
+
+ if (rt_admit_curr == p)
+ rt_admit_curr = NULL;
+ }
+}
+
+void rt_overrun_entry_delete(struct task_struct *p)
+{
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+ _rt_overrun_entry_delete(p);
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+}
+
+/* forward */
+int rt_overrun_task_active(struct task_struct *p);
+
+#define PROCFS_MAX_SIZE 2048
+static char chunk[PROCFS_MAX_SIZE]; // lock this
+
+static
+ssize_t rt_overrun_proc_write(struct file *file, const char *buffer, size_t len,
+ loff_t * off)
+{
+ unsigned long end;
+
+ if (len > PROCFS_MAX_SIZE)
+ end = PROCFS_MAX_SIZE;
+ else
+ end = len;
+
+ if (copy_from_user(chunk, buffer, end))
+ return -EFAULT;
+
+ printk(KERN_INFO "%s: write %lu bytes, s = %s \n", __func__, end,
+ (char *) &chunk[0]);
+ return end;
+}
+
+static int rt_overrun_proc_show(struct seq_file *m, void *v);
+
+static int rt_overrun_proc_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, rt_overrun_proc_show, NULL);
+}
+
+static const struct file_operations rt_overrun_proc_fops = {
+ .owner = THIS_MODULE,
+ .open = rt_overrun_proc_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+ .write = rt_overrun_proc_write,
+};
+
+static int __init rt_overrun_proc_init(void) {
+ proc_create("rt_overrun_proc", 0, NULL, &rt_overrun_proc_fops);
+ return 0;
+}
+
+static void __exit rt_overrun_proc_exit(void) {
+ remove_proc_entry("rt_overrun_proc", NULL);
+}
+
+struct rt_overrun_admit_rq rt_admit_rq;
+
+/*static*/
+void init_rt_overrun(void)
+{
+ rt_overrun_proc_init();
+ reset_rt_overrun();
+}
+
+void reset_rt_overrun(void)
+{
+ int i;
+
+ for (i = 0; i < SLOTS; i++)
+ rt_admit_rq.curr[i] = NULL;
+
+ rt_admit_rq.slot = 0;
+ rt_admit_rq.end = SLOTS;
+}
+
+static int rt_overrun_proc_show(struct seq_file *m, void *v) {
+ int i;
+ unsigned long flags;
+ u64 slots = 0;
+ struct task_struct *task;
+
+ seq_printf(m, "%s: \n", __func__);
+ seq_printf(m, "\n\t");
+
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+
+ if (rt_admit_curr)
+ slots = rt_admit_curr->rt.rt_overrun.slots;
+
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+
+ for (i = 0; i < SLOTS; i++) {
+ if ((i % 4) == 0 )
+ seq_printf(m, "\n\t");
+
+ task = rt_admit_rq.curr[i];
+ if (task)
+ seq_printf(m, " %d", task->rt.rt_overrun.color);
+ else
+ seq_printf(m, " 0");
+
+ if (task)
+ seq_printf(m, " (%d)",
+ task->rt.rt_overrun.color);
+ else
+ seq_printf(m, " (0)");
+ }
+ seq_printf(m, "\ncurr\n");
+
+ seq_printf(m, "\n\t");
+ for (i = 0; i < SLOTS; ++i) {
+ if (test_bit(i, (unsigned long *) &slots))
+ seq_printf(m, "1");
+ else
+ seq_printf(m, "0");
+
+ if (((i+1) % 16) == 0)
+ seq_printf(m, "\n\t");
+ else if (((i +1) % 4) == 0)
+ seq_printf(m, " ");
+ }
+ seq_printf(m, "\n");
+
+ return 0;
+}
+
+
+static void _rt_overrun_task_replenish(struct task_struct *p)
+{
+ WARN_ONCE(!rt_overrun_entry_find(&rt_overrun_tree, p), "\n");
+
+ rt_admit_curr = p;
+ rt_admit_rq.debug = p;
+ WARN_ONCE(!(CMP_PTR_LONG(rt_admit_curr, p)),
+ "not equal \b");
+}
+
+void rt_overrun_task_replenish(struct task_struct *p)
+{
+ unsigned long flags;
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+ _rt_overrun_task_replenish(p);
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+}
+
+static void _rt_overrun_task_expire(struct task_struct *p)
+{
+printk("%s: \n", __func__);
+ WARN_ONCE(!rt_overrun_entry_find(&rt_overrun_tree, p), "\n");
+
+ rt_admit_curr = NULL;
+}
+
+static void rt_overrun_task_expire(struct task_struct *p)
+{
+ unsigned long flags;
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+ _rt_overrun_task_expire(p);
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+}
+
+static int rt_overrun_slot_color_next(void)
+{
+ int color = rt_admit_rq.color;
+
+ ++rt_admit_rq.color;
+
+ return color;
+}
+
+/* potential security problems */
+int rt_overrun_task_admit(struct task_struct *p, u64 slots)
+{
+ int i, ret = 0;
+ unsigned long flags;
+
+ printk("%s: slot = 0x%016llx\n", __func__, slots);
+
+ get_task_struct(p);
+ if (p->policy != SCHED_FIFO) {
+ printk("%s: policy, admittance failed \n", __func__);
+ put_task_struct(p);
+ return 1;
+ }
+
+ if (p->sched_class != &rt_sched_class) {
+ printk("%s: sched_class, admittance failed \n", __func__);
+ put_task_struct(p);
+ return 1;
+ }
+
+ /* grabs the rq lock here, CPU 0 only */
+ set_cpus_allowed_ptr(p, get_cpu_mask(0));
+
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+ if (!p->rt.rt_overrun.color) {
+ p->rt.rt_overrun.color = rt_overrun_slot_color_next();
+ printk("%s: color = %d \n", __func__, p->rt.rt_overrun.color);
+ }
+
+
+ p->rt.rt_overrun.slots = slots;
+
+ WARN_ONCE(rt_admit_rq.active < 0, "\n");
+ WARN_ONCE(_rt_overrun_entry_find(&rt_overrun_tree, p), "\n");
+
+ p->rt.rt_overrun.count = 0;
+
+ _rt_overrun_entry_insert(&rt_overrun_tree, p);
+ _rt_overrun_task_replenish(p);
+ ++rt_admit_rq.active;
+
+ if ((cpumask_weight(&p->cpus_allowed) != 1) ||
+ !cpumask_test_cpu(0, &p->cpus_allowed)) {
+ printk("%s: failed \n", __func__);
+ ret = 1;
+ } else
+ printk("%s: success \n", __func__);
+
+ for (i = 0; i < SLOTS; ++i) {
+ /* slots is a u64, ignore the pointer type */
+ if (test_bit(i, (unsigned long *) &slots))
+ rt_admit_rq.curr[i] = p;
+ }
+
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+ put_task_struct(p);
+
+ return ret;
+}
+
+static void rt_overrun_task_discharge(struct task_struct *p)
+{
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+printk("%s: \n", __func__);
+ WARN_ONCE(rt_admit_rq.active <= 0, "\n");
+ WARN_ONCE(!_rt_overrun_entry_find(&rt_overrun_tree, p), "\n");
+ --rt_admit_rq.active;
+
+ /* assert */
+ _rt_overrun_task_expire(p);
+ _rt_overrun_entry_delete(p);
+
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+}
+
+void rt_overrun_entries_delete_all(struct rtc_device *rtc)
+{
+ unsigned long flags;
+ struct task_struct *task;
+
+ struct list_head *pos;
+
+printk("%s: \n", __func__);
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+ list_for_each (pos, &rtc->rt_overrun_tasks) {
+ task = list_entry(pos, struct task_struct,
+ rt.rt_overrun.task_list);
+printk("%s: rt_overrun_tasks p 0x%016llx - comm %s\n", __func__, (u64) task->rt.rt_overrun.color,
+ task->comm);
+ _rt_overrun_entry_delete(task);
+ }
+
+ rt_admit_rq.active = 0;
+ rt_admit_rq.color = 0;
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+}
+
+/* must think about locking here, nothing for now BUG */
+int rt_overrun_task_admitted1(struct rq *rq, struct task_struct *p)
+{
+ int ret = 0;
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+ if (rt_admit_rq.active) { // --billh
+ if ((rt_admit_curr == p)
+ || _on_rt_overrun_admitted(p)
+ || _rt_overrun_entry_find(&rt_overrun_tree, p))
+ ret = 1;
+ }
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+
+ return ret;
+}
+
+void rt_overrun_check(struct rq *rq, struct task_struct *p)
+{
+ get_task_struct(p);
+ WARN_ONCE(rt_overrun_task_admitted1(rq, p) &&
+ cpumask_equal(get_cpu_mask(0), &p->cpus_allowed),
+ "not bounded to CPU 0\n");
+ put_task_struct(p);
+}
+
+
+/* must think about locking here, nothing for now BUG */
+int rt_overrun_rq_admitted(void)
+{
+ return rt_admit_rq.active;
+}
+
+/* must think about locking here, nothing for now BUG */
+int rt_overrun_task_active(struct task_struct *p)
+{
+ if (CMP_PTR_LONG(rt_admit_curr, p))
+ return 1;
+ else
+ return 0;
+}
+
+static struct task_struct *rt_overrun_get_next_task(void)
+{
+ /* return the next slot, advance the cursor */
+ WARN_ONCE(!rt_admit_rq.active, "\n");
+
+ if (rt_admit_rq.slot < (SLOTS -1)) {
+ ++rt_admit_rq.slot;
+ } else {
+// printk("%s: slot wrap = 0 \n", __func__);
+ rt_admit_rq.slot = 0;
+ }
+
+ return rt_admit_curr;
+}
+
+#define PRT_RUNNABLE() \
+ if (tail == 1) \
+ printk("on rq \n"); \
+ else if (tail == 0) \
+ printk("not on rq \n"); \
+ else \
+ printk("\n");
+
+/* rq lock already grabbed, interrupts off */
+void rt_overrun_timer_handler(struct rtc_device *rtc)
+{
+ int cpu = smp_processor_id();
+ struct rq *rq = cpu_rq(cpu);
+ unsigned long irq_data;
+
+ struct task_struct *curr_slot, *next_slot;
+ int tail = 2;
+ int wake_next = 0, curr_runnable = 0;
+ int same;
+
+ WARN_ON(!irqs_disabled());
+
+ printk("%s: ---\n", __func__);
+
+ /* this is incorrect, but is working for now */
+ WARN_ON((rq->cpu != 0));
+ raw_spin_lock(&rq->lock);
+ raw_spin_lock(&rt_overrun_lock);
+
+ curr_slot = rt_admit_curr;
+ irq_data = rtc->irq_data;
+
+ /* suppress rtc_read_dev wake if curr_slot == NULL */
+ if (curr_slot) {
+ if (rt_overrun_task_runnable(curr_slot))
+ curr_runnable = tail = 1;
+ else
+ curr_runnable = tail = 0;
+ }
+
+ if (curr_slot)
+ printk("%s: curr_slot %d ", __func__, curr_slot->rt.rt_overrun.color);
+ PRT_RUNNABLE();
+
+ next_slot = rt_overrun_get_next_task();
+ tail = 2;
+
+ /* */
+ if (curr_slot == next_slot) {
+ same = 1;
+ } else {
+ same = 0;
+ /* deactivate edge, runnable case */
+ if (curr_slot && curr_runnable) {
+ requeue_task_rt2(rq, curr_slot, 0); // tail
+ resched_curr(rq);
+ }
+ }
+
+ /* transition edge, record per task overrun */
+ if (curr_slot && !same) {
+ ++curr_slot->rt.rt_overrun.count;
+ printk("%s: overrun inc %ld\n", __func__,
+ curr_slot->rt.rt_overrun.count);
+ }
+
+ /* activate edge, wake/top next_slot */
+ if (next_slot) {
+ if (!same) {
+ printk("%s: ~same\n", __func__);
+ if (rt_overrun_task_runnable(next_slot)) {
+ printk("%s: next runnable requeue top\n", __func__);
+ requeue_task_rt2(rq, next_slot, 1); // head
+ resched_curr(rq);
+ tail = 1;
+ } else {
+ printk("%s: ~next runnable\n", __func__);
+ tail = 0;
+ wake_next = 1;
+ }
+ } /* same, then chain the activations */
+ }
+
+ if (next_slot)
+ printk("%s: next_slot %d ", __func__, next_slot->rt.rt_overrun.color);
+ PRT_RUNNABLE();
+
+ rt_admit_curr = next_slot;
+
+ raw_spin_unlock(&rt_overrun_lock);
+ raw_spin_unlock(&rq->lock);
+
+ /* set to reschedule at interrupt return, wake attempt should
+ * do this for us already */
+ if (wake_next) {
+ wake_up_interruptible_sync_poll(&rtc->irq_queue, next_slot);
+ if (same) {
+ printk("%s: same\n", __func__);
+ }
+ }
+ else
+ printk("%s: pass\n", __func__);
+}
+
+int rt_overrun_task_yield(struct task_struct *p)
+{
+ return rt_task_yield(p);
+}
+
+// default_wake_function wake_up_state list_head rtc_device
+int single_default_wake_function(wait_queue_t *curr, unsigned mode,
+ int wake_flags, void *key)
+{
+ unsigned long flags;
+ struct task_struct *p = key, *task = curr->private;
+ int on = 0;
+
+ /* If not admitted to rt_overrun, then wake it normally with at the
+ * normal timer interrupt handler */
+
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+ if (p) on = _on_rt_overrun_admitted(p);
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+
+ /* wake only one thread for this case */
+ if (key == NULL) {
+ printk("%s: wake 0 p 0x%08llx, task 0x%08llx, admit %d,"
+ " flags %ld\n",
+ __func__, (u64) p, (u64) task, on, flags);
+ return wake_up_state(task, mode);
+ } else if (key == task) {
+ if (on) {
+ printk("%s: wake 1 p 0x%08llx, task 0x%08llx, "
+ "admit %d, flags %ld\n",
+ __func__, (u64) p, (u64) task, on, flags);
+ return wake_up_state(task, mode);
+ } else {
+ printk("%s: ignore 0 p 0x%08llx, task 0x%08llx, "
+ "flags %ld\n",
+ __func__, (u64) p, (u64) task, flags);
+ return 0;
+ }
+ } else {
+ printk("%s: ignore 1 p 0x%08llx, task 0x%08llx, flags %ld\n",
+ __func__, (u64) p, (u64) task, flags);
+ return 0;
+ }
+}
diff --git a/kernel/sched/cyclic.h b/kernel/sched/cyclic.h
new file mode 100644
index 0000000..829ad6d
--- /dev/null
+++ b/kernel/sched/cyclic.h
@@ -0,0 +1,86 @@
+/* cyclic.h rt_overrun */
+
+//#ifndef _CYCLIC_H
+//#define _CYCLIC_H
+//#else
+
+#include <linux/spinlock.h>
+#include <linux/rbtree.h>
+#include <linux/rtc.h>
+
+extern int rt_overrun_task_active(struct task_struct *p);
+extern void rt_overrun_stat_tick_blocked(void);
+
+extern void rt_overrun_stat_dequeue(void);
+extern void rt_overrun_stat_enqueue(struct task_struct *p);
+
+extern void rt_overrun_timer_handler(struct rtc_device *rtc);
+extern int rt_overrun_rq_admitted(void);
+extern void init_rt_overrun(void);
+extern void rt_overrun_entry_delete(struct task_struct *p);
+
+extern void rt_overrun_task_replenish(struct task_struct *p);
+extern int rt_overrun_task_admit(struct task_struct *p, u64 slots);
+
+extern int rt_overrun_task_yield(struct task_struct *p);
+extern void rt_overrun_entries_delete_all(struct rtc_device *);
+extern void reset_rt_overrun(void);
+
+extern struct raw_spinlock rt_overrun_lock;
+extern int single_default_wake_function(wait_queue_t *curr, unsigned mode,
+ int wake_flags, void *key);
+#define SLOTS 64
+
+#define rt_admit_curr (rt_admit_rq.curr[rt_admit_rq.slot])
+#define rt_task_count(a) (a->rt.rt_overrun.count)
+#define rt_task_yield(a) (a->rt.rt_overrun.yield)
+
+/* slot admittance queue */
+struct rt_overrun_admit_rq {
+ int active;
+ int slot, end;
+ struct task_struct *curr[SLOTS];
+ struct task_struct *debug;
+ int color;
+};
+
+extern struct rt_overrun_admit_rq rt_admit_rq;
+
+static inline int rt_overrun_policy(struct task_struct *p, int policy)
+{
+ int ret;
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+ ret = RB_EMPTY_NODE(&p->rt.rt_overrun.node);
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+
+ return ret;
+}
+
+static inline int _on_rt_overrun_admitted(struct task_struct *p)
+{
+ struct sched_rt_entity *rt_se = &p->rt;
+ struct rb_node *node = &rt_se->rt_overrun.node;
+
+ if (node)
+ return !RB_EMPTY_NODE(node);
+ else
+ return 0;
+}
+
+static inline int on_rt_overrun_admitted(struct task_struct *p)
+{
+ int ret;
+
+ raw_spin_lock(&rt_overrun_lock);
+ ret = _on_rt_overrun_admitted(p);
+ raw_spin_unlock(&rt_overrun_lock);
+
+ return ret;
+}
+
+extern int single_default_wake_function(wait_queue_t *curr, unsigned mode,
+ int wake_flags, void *key);
+
+//#endif
diff --git a/kernel/sched/cyclic_rt.h b/kernel/sched/cyclic_rt.h
new file mode 100644
index 0000000..c6c89bf
--- /dev/null
+++ b/kernel/sched/cyclic_rt.h
@@ -0,0 +1,7 @@
+/*
+ */
+
+#ifdef CONFIG_RTC_CYCLIC
+extern void dequeue_task_rt2(struct rq *rq, struct task_struct *p, int flags);
+extern void requeue_task_rt2(struct rq *rq, struct task_struct *p, int head);
+#endif
--
2.5.0

2016-04-12 05:29:41

by Bill Huey (hui)

Subject: [PATCH RFC v0 09/12] Add priority support for the cyclic scheduler

Initial bits to prevent priority changing of cyclic scheduler tasks by
only allowing them to be SCHED_FIFO. Fairly hacky at this time and will need
revisiting because of the security concerns.

This also affects task death handling, since an additional scheduler class
hook is used for clean up at death. Admitted tasks must be SCHED_FIFO.

Signed-off-by: Bill Huey (hui) <[email protected]>
---
kernel/sched/core.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 44db0ff..cf6cf57 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -87,6 +87,10 @@
#include "../workqueue_internal.h"
#include "../smpboot.h"

+#ifdef CONFIG_RTC_CYCLIC
+#include "cyclic.h"
+#endif
+
#define CREATE_TRACE_POINTS
#include <trace/events/sched.h>

@@ -2074,6 +2078,10 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
memset(&p->se.statistics, 0, sizeof(p->se.statistics));
#endif

+#ifdef CONFIG_RTC_CYCLIC
+ RB_CLEAR_NODE(&p->rt.rt_overrun.node);
+#endif
+
RB_CLEAR_NODE(&p->dl.rb_node);
init_dl_task_timer(&p->dl);
__dl_clear_params(p);
@@ -3881,6 +3889,11 @@ recheck:
if (dl_policy(policy))
return -EPERM;

+#ifdef CONFIG_RTC_CYCLIC
+ if (rt_overrun_policy(p, policy))
+ return -EPERM;
+#endif
+
/*
* Treat SCHED_IDLE as nice 20. Only allow a switch to
* SCHED_NORMAL if the RLIMIT_NICE would normally permit it.
--
2.5.0

2016-04-12 05:29:39

by Bill Huey (hui)

Subject: [PATCH RFC v0 07/12] kernel/userspace additions for additional ioctl() support for rtc

Add additional ioctl() values to rtc so that a caller can 'admit' the
calling thread into a red-black tree for tracking, set the execution slot
pattern, and set whether read() will yield or block.

Signed-off-by: Bill Huey (hui) <[email protected]>
---
include/uapi/linux/rtc.h | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/include/uapi/linux/rtc.h b/include/uapi/linux/rtc.h
index f8c82e6..76c9254 100644
--- a/include/uapi/linux/rtc.h
+++ b/include/uapi/linux/rtc.h
@@ -94,6 +94,10 @@ struct rtc_pll_info {
#define RTC_VL_READ _IOR('p', 0x13, int) /* Voltage low detector */
#define RTC_VL_CLR _IO('p', 0x14) /* Clear voltage low information */

+#define RTC_OV_ADMIT _IOW('p', 0x15, unsigned long) /* Set test */
+#define RTC_OV_REPLEN _IOW('p', 0x16, unsigned long) /* Set test */
+#define RTC_OV_YIELD _IOW('p', 0x17, unsigned long) /* Set test */
+
/* interrupt flags */
#define RTC_IRQF 0x80 /* Any of the following is active */
#define RTC_PF 0x40 /* Periodic interrupt */
--
2.5.0

2016-04-12 05:30:51

by Bill Huey (hui)

Subject: [PATCH RFC v0 12/12] Cyclic/rtc documentation

Initial attempt at documentation with a test program

Signed-off-by: Bill Huey (hui) <[email protected]>
---
Documentation/scheduler/sched-cyclic-rtc.txt | 468 +++++++++++++++++++++++++++
1 file changed, 468 insertions(+)
create mode 100644 Documentation/scheduler/sched-cyclic-rtc.txt

diff --git a/Documentation/scheduler/sched-cyclic-rtc.txt b/Documentation/scheduler/sched-cyclic-rtc.txt
new file mode 100644
index 0000000..4d22381
--- /dev/null
+++ b/Documentation/scheduler/sched-cyclic-rtc.txt
@@ -0,0 +1,468 @@
+[in progress]
+
+"Work Conserving"
+
+When a task is active and calls read(), it will block or yield depending on
+what was requested from the cyclic scheduler. An RTC_OV_YIELD call to ioctl()
+specifies the behavior for the calling thread.
+
+In the case where read() is called before the time slice is over, it will
+allow other tasks to run with the leftover time.
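+
+As a minimal sketch of selecting that behavior (fd is an open /dev/rtc
+descriptor; a nonzero value requests yield rather than block, matching the
+sample test program further below):
+
+    __aligned_u64 yield = 1;    /* 1 = yield, 0 = block (default) */
+
+    if (ioctl(fd, RTC_OV_YIELD, &yield) == -1)
+        perror("RTC_OV_YIELD");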
+
+"Overrun Reporting/Apps"
+
+Calls to read() will return the overrun count and zero the counter. This
+can be used to adjust the execution time of the thread so that it runs
+within its slot and can meet some deadline constraint.
+
+[no decision has been made to return a more meaningful set of numbers as
+you can just get time stamps and do the math in userspace but it could
+be changed to do so]
+
+The behavior of read() depends on whether the thread has been admitted or
+not via an ioctl() using RTC_OV_ADMIT. If it has been, read() returns the
+overrun count. If it has not been admitted, read() returns the value
+corresponding to the default rtc read() behavior.
+
+See the sample test sources for details.
+
+Using a video game as an example, a rendering engine overrunning its
+slot, driven by a vertical retrace interrupt, can cause visual skipping and
+hurt interactivity. Adapting the computation based on the read() result can
+allow the frame buffer swap to happen at the frame interrupt. If read()
+reports overruns, the program can simplify its calculations and adapt to fit
+within that slot. It would then allow the program to respond to events
+(touches, buttons), minimizing the possibility of perceived pauses.
+
+The slot allocation scheme for the video game must have some inherent
+definition of interactivity. That determines the appropriate slot allocation
+amongst a mixture of soft/hard real-time tasks. A general policy must be
+created for the system, and all programs, to meet real-time criteria.
+
+"Admittance"
+
+Admittance of a task is done through an ioctl() call using RTC_OV_ADMIT.
+This passes a 64 bit wide bitmap that maps onto entries in the slot map.
+
+(slot map of two threads)
+execution direction ->
+
+1000 1000 1000 1000...
+0100 0100 0100 0100...
+
+(bit pattern of two threads)
+0001 0001 0001 0001...
+0010 0010 0010 0010...
+
+(hex)
+0x1111
+0x2222
+
+The slot map is an array of 64 thread entries. An index is incremented
+through it to determine what the next active thread-slot will be. The end of
+the index is set in /proc/rt_overrun_proc.
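+
+As a sketch, the first thread of the example above (the 0x1111... pattern)
+could build its mask and admit itself like this (fd is an open /dev/rtc
+descriptor and the calling thread is SCHED_FIFO):
+
+    __aligned_u64 slots = 0;
+    int i;
+
+    for (i = 0; i < 64; i += 4)
+        slots |= 1ULL << i;     /* 0x1111111111111111 */
+
+    if (ioctl(fd, RTC_OV_ADMIT, &slots) == -1)
+        perror("RTC_OV_ADMIT");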
+
+"Slot/slice activation"
+
+Move the task to the front of the SCHED_FIFO list when active, the tail when
+inactive.
+
+"RTC Infrastructure and Interrupt Routing"
+
+The cyclic scheduler is driven by the update interrupt in the RTC
+infrastructure but can be rerouted to any periodic interrupt source.
+
+One of those applications could be driving it from display refresh
+interrupts, or from some interval generated by an external controller such
+as a drum pad, touch events or the like.
+
+"Embedded Environments"
+
+This is single run queue only and targets embedded scenarios where not all
+cores are guaranteed to be available. Older Qualcomm MSM kernels have very
+aggressive cpu hotplug as a means of fully powering off cores. The only
+CPU guaranteed to be running is CPU 0.
+
+"Project History"
+
+This was originally created when I was at HP/Palm to solve issues related
+to touch event handling and lag when working with the real-time media
+subsystem. The typical workaround used to prevent skipping is to use large
+buffers to prevent data underruns. Those programs run at SCHED_FIFO, which
+can starve the system from handling external events in a timely manner, like
+buttons or touch events. The lack of a globally defined policy on how to
+use real-time resources can cause long pauses between handling touch
+events and other kinds of implicit deadline misses.
+
+By choosing some kind of slot execution pattern, it was hoped that it
+could be controlled globally across the system so that some basic interactive
+guarantees can be met. Whether the tasks are some combination of soft or
+hard real-time, a mechanism like this can help guide how SCHED_FIFO tasks
+are run versus letting SCHED_FIFO tasks run wildly.
+
+"Future work"
+
+Possible integration with the deadline scheduler. Power management
+awareness, CPU clock governor. Turning off the scheduler tick when there
+are no runnable tasks, other things...
+
+"Power management"
+
+Governor awareness...
+
+[more]
+
+----------------------------
+
+/*
+ * Based on the:
+ *
+ * Real Time Clock Driver Test/Example Program
+ * by Copyright (C) 1996, Paul Gortmaker.
+ *
+ * Simplification and multi-threading support for interrupt event testing
+ * by Bill Huey at <[email protected]>
+ *
+ * Released under the GNU General Public License, version 2,
+ * included herein by reference.
+ */
+
+#include <stdio.h>
+#include <linux/rtc.h>
+#include <sys/ioctl.h>
+#include <sys/time.h>
+#include <sys/types.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <pthread.h>
+#include <linux/types.h>
+#include <sys/mman.h>
+#include <sched.h>
+#include <errno.h>
+
+/*
+ * This expects the new RTC class driver framework, working with
+ * clocks that will often not be clones of what the PC-AT had.
+ * Use the command line to specify another RTC if you need one.
+ */
+static const char default_rtc[] = "/dev/rtc0";
+
+#define RTC_OV_ADMIT _IOW('p', 0x15, unsigned long)
+#define RTC_OV_REPLEN _IOW('p', 0x16, unsigned long)
+#define RTC_OV_YIELD _IOW('p', 0x17, unsigned long)
+
+//#if 0
+#define THREADS (3)
+#define handle_error_en(en, msg) \
+ do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0)
+
+#define OBJ (THREADS + 1)
+
+//#define LARGE (1000 * 1000)
+
+pthread_mutex_t mutex[OBJ];
+pthread_cond_t condvar[OBJ];
+
+pthread_mutex_t start_mutex = PTHREAD_MUTEX_INITIALIZER;
+pthread_cond_t start_condvar = PTHREAD_COND_INITIALIZER;
+int start[OBJ];
+
+pthread_t threads[OBJ];
+volatile int die = 0;
+int fd;
+
+#define ioctl_enotty_err(a, b, c, label, ret) \
+ retval = ioctl(fd, a, ret); \
+ if (retval == -1) { \
+ if (errno == ENOTTY) { \
+ fprintf(stderr, b); \
+ goto label; \
+ } \
+ perror(c); \
+ exit(errno); \
+ }
+
+#define ioctl_err(a, b, c) \
+ retval = ioctl(fd, a, c); \
+ if (retval == -1) { \
+ perror(b); \
+ exit(errno); \
+ }
+
+#define read_err(a) \
+ retval = read(fd, &data, sizeof(unsigned long)); \
+ if (retval == -1) { \
+ perror("read"); \
+ exit(errno); \
+ }
+
+void init_pthreads(void)
+{
+ int i;
+
+ for (i = 0; i < OBJ; ++i) {
+ start[i] = 1;
+ pthread_mutex_init(&mutex[i], NULL);
+ pthread_cond_init(&condvar[i], NULL);
+ }
+}
+
+/* Just loop and exit */
+void *thread(void *threadid)
+{
+ long tid = (unsigned long)threadid;
+ pthread_t self = pthread_self();
+ unsigned int i,j;
+ pid_t pid;
+ struct sched_param param;
+ int retval;
+ __aligned_u64 slots = 0;
+ unsigned long data;
+
+ fprintf(stderr, "\tthread id = %ld\n", tid);
+ param.sched_priority = sched_get_priority_min(SCHED_RR);
+ if (sched_setscheduler( 0, SCHED_RR, &param) == -1)
+ perror("sched_setscheduler failed\n");
+
+ pid = getpid();
+
+ if (start) {
+ start[tid] = 0;
+ pthread_mutex_lock(&start_mutex);
+ pthread_cond_signal(&start_condvar);
+ pthread_mutex_unlock(&start_mutex);
+ }
+
+ /* admit the task before doing yields */
+// fprintf(stderr, "\n");
+ for (i = 0; i < 64; ++i) {
+ if (((tid + i) % THREADS) == 0) {
+// fprintf(stderr, "%d\n", i);
+ slots |= ((long long unsigned) 1 << i);
+ }
+ }
+
+ fprintf(stderr, "slots = 0x%016llx\n", slots);
+ ioctl_err(RTC_OV_ADMIT, "RTC_OV_ADMIT ioctl", &slots);
+
+// slots = 1; /* set yield instead of block */
+// ioctl_err(RTC_OV_YIELD, "RTC_OV_YIELD ioctl", &slots);
+
+ while (!die)
+ ;
+
+ read_err();
+ fprintf(stderr, "tid %ld, 0x%04lx\n", tid, data);
+
+#if 0
+ ioctl_enotty_err(RTC_IRQP_SET,
+ "\n...Periodic IRQ rate is fixed\n",
+ "RTC_IRQP_SET ioctl",
+ done, (unsigned long ) slots);
+
+ ioctl_err(RTC_PIE_ON, "RTC_PIE_ON ioctl", 0);
+
+ while (!die)
+ ;
+
+ ioctl_err(RTC_PIE_OFF, "RTC_PIE_OFF ioctl", 0);
+#endif
+ /* body */
+
+ fprintf(stderr, "\tthread exited running SCHED_RR = %ld\n", tid);
+ pthread_exit(NULL);
+}
+
+void thread_spawn(int val)
+{
+ int result, retval;
+ long tid;
+ int i;
+ pthread_attr_t threads_attr[OBJ];
+ cpu_set_t cpuset;
+
+ struct sched_param schedparam;
+
+ schedparam.sched_priority = 3;
+
+ init_pthreads();
+
+ tid = 0;
+ while(tid < THREADS) {
+ fprintf(stderr, "\ncreated thread %ld\n", tid);
+ pthread_attr_init(&threads_attr[tid]);
+ pthread_attr_setinheritsched(&threads_attr[tid], PTHREAD_EXPLICIT_SCHED);
+ pthread_attr_setschedpolicy(&threads_attr[tid], SCHED_RR);
+ pthread_attr_setschedparam(&threads_attr[tid], &schedparam);
+ pthread_attr_destroy(&threads_attr[tid]);
+
+ pthread_mutex_lock(&start_mutex);
+ result = pthread_create(&threads[tid], &threads_attr[tid], thread, (void *)tid);
+ pthread_cond_wait(&start_condvar, &start_mutex);
+ pthread_mutex_unlock(&start_mutex);
+
+ if (result != 0)
+ handle_error_en(result, "pthread_create");
+ ++tid;
+ }
+
+ ioctl_err(RTC_PIE_ON, "RTC_PIE_ON ioctl", 0);
+
+ sleep(3);
+
+// 10/val; // deliberate divide by zero
+
+ sleep(1);
+ die = 1;
+
+ for (i = 0; i < THREADS; i++)
+ pthread_join(threads[i], NULL);
+
+ ioctl_err(RTC_PIE_OFF, "RTC_PIE_OFF ioctl", 0);
+
+ fprintf(stderr, "pthread done\n");
+}
+//#endif
+
+int main(int argc, char **argv)
+{
+ int i,j,k,
+ blocking,
+ delay,
+ retval, irqcount = 0;
+ unsigned long data;
+ __aligned_u64 slots;
+ struct rtc_time rtc_tm;
+ const char *rtc = default_rtc;
+ struct timeval start, end, diff;
+
+ struct sched_param param;
+ pid_t pid = getpid();
+
+ /* testing thread should be SCHED_FIFO or RR */
+ param.sched_priority = sched_get_priority_min(SCHED_RR);
+
+ if (sched_setscheduler(pid, SCHED_RR, &param) == -1)
+
+ perror("sched_setscheduler failed\n");
+
+ switch (argc) {
+ case 2:
+ rtc = argv[1];
+ /* FALLTHROUGH */
+ case 1:
+ break;
+ default:
+ fprintf(stderr, "usage: rtctest [rtcdev]\n");
+ return 1;
+ }
+
+ fd = open(rtc, O_RDONLY);
+ if (fd == -1) {
+ perror(rtc);
+ exit(errno);
+ }
+
+ fprintf(stderr, "\n\t\t\tRTC Driver Test Example.\n\n");
+
+ /* Admit this task, enable tick tracking and set the slots */
+// slots = 0xFFFFffffFFFFffff;
+ slots = 0x3333333333333333;
+ ioctl_err(RTC_OV_ADMIT, "RTC_OV_ADMIT ioctl", &slots);
+
+#if 0
+#endif
+test_PIE:
+ /* Read periodic IRQ rate */
+ ioctl_enotty_err(RTC_IRQP_READ,
+ "\nNo periodic IRQ support\n",
+ "RTC_IRQP_READ ioctl",
+ done, &slots);
+
+ fprintf(stderr, "\nPeriodic IRQ rate is %ldHz.\n", (unsigned long) slots);
+
+ fprintf(stderr, "Counting 20 interrupts at:");
+ fflush(stderr);
+
+ /* The frequencies 128Hz, 256Hz, ... 8192Hz are only allowed for root. */
+ for (slots=2; slots<=64; slots*=2) {
+ /* not all RTCs can change their periodic IRQ rate */
+ ioctl_enotty_err(RTC_IRQP_SET,
+ "\n...Periodic IRQ rate is fixed\n",
+ "RTC_IRQP_SET ioctl",
+ done, (unsigned long ) slots);
+
+ fprintf(stderr, "\n%ldHz:\t", (unsigned long ) slots);
+ fflush(stderr);
+
+ /* Enable periodic interrupts */
+ ioctl_err(RTC_PIE_ON, "RTC_PIE_ON ioctl", 0);
+
+// blocking = 0; delay = 0;
+ blocking = 0; delay = 1;
+// blocking = 1; delay = 0;
+// blocking = 1; delay = 1;
+ for (i=1; i<6; i++) {
+
+#define LARGE 5000
+#define LARGE2 50000
+#define work() \
+ \
+ /* This blocks */ \
+ if (blocking) { \
+ if (delay) \
+ fprintf(stderr, " ignoring delay "); \
+ \
+ gettimeofday(&start, NULL); \
+ fprintf(stderr, " "); \
+ read_err() \
+ } else { \
+ /* delay for testing yield only */ \
+ if (delay) { \
+ fprintf(stderr, "."); \
+ for(j = LARGE; j > 0; --j) \
+ for(k = LARGE2; k > 0; --k) \
+ ; \
+ } else \
+ fprintf(stderr, "`"); \
+ \
+ /* really a yield */ \
+ read_err() \
+ /* fake diff values on a yield */ \
+ gettimeofday(&start, NULL); \
+ } \
+ \
+ gettimeofday(&end, NULL); \
+ \
+ timersub(&end, &start, &diff); \
+ if (!blocking && (diff.tv_sec > 0 || \
+ diff.tv_usec > ((1000000L / slots) * 1.10))) { \
+ fprintf(stderr, \
+ "\nPIE delta error: %ld.%06ld should be close to 0.%06ld\n", \
+ diff.tv_sec, diff.tv_usec, \
+ (1000000L / (unsigned long) slots)); \
+ fflush(stdout); \
+ exit(-1); \
+ } \
+ \
+ fprintf(stderr, "%d 0x%04lx,", i, data); \
+ fflush(stderr); \
+ irqcount++;
+
+ work()
+ }
+ /* Disable periodic interrupts */
+ ioctl_err(RTC_PIE_OFF, "RTC_PIE_OFF ioctl", 0);
+ }
+
+done:
+ fprintf(stderr, "\n\n\t\t\t *** Test complete ***\n");
+
+ thread_spawn(0);
+
+ close(fd);
+
+ return 0;
+}
--
2.5.0

2016-04-12 05:29:37

by Bill Huey (hui)

Subject: [PATCH RFC v0 08/12] Compilation support

Makefile changes to support the menuconfig option

Signed-off-by: Bill Huey (hui) <[email protected]>
---
kernel/sched/Makefile | 1 +
1 file changed, 1 insertion(+)

diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 302d6eb..df8e131 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -19,4 +19,5 @@ obj-$(CONFIG_SCHED_AUTOGROUP) += auto_group.o
obj-$(CONFIG_SCHEDSTATS) += stats.o
obj-$(CONFIG_SCHED_DEBUG) += debug.o
obj-$(CONFIG_CGROUP_CPUACCT) += cpuacct.o
+obj-$(CONFIG_RTC_CYCLIC) += cyclic.o
obj-$(CONFIG_CPU_FREQ) += cpufreq.o
--
2.5.0

2016-04-12 05:31:27

by Bill Huey (hui)

Subject: [PATCH RFC v0 10/12] Export SCHED_FIFO/RT requeuing functions

SCHED_FIFO/RT tail/head runqueue insertion support, and initial thread death
support via a hook to the scheduler class. Thread death must include
additional semantics to remove/discharge an admitted task properly.

Signed-off-by: Bill Huey (hui) <[email protected]>
---
kernel/sched/rt.c | 41 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 41 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index c41ea7a..1d77adc 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -8,6 +8,11 @@
#include <linux/slab.h>
#include <linux/irq_work.h>

+#ifdef CONFIG_RTC_CYCLIC
+#include "cyclic.h"
+extern int rt_overrun_task_admitted1(struct rq *rq, struct task_struct *p);
+#endif
+
int sched_rr_timeslice = RR_TIMESLICE;

static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun);
@@ -1321,8 +1326,18 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)

if (flags & ENQUEUE_WAKEUP)
rt_se->timeout = 0;
+#ifdef CONFIG_RTC_CYCLIC
+ /* if admitted and the current slot then head, otherwise tail */
+ if (rt_overrun_task_admitted1(rq, p)) {
+ if (rt_overrun_task_active(p)) {
+ flags |= ENQUEUE_HEAD;
+ }
+ }

enqueue_rt_entity(rt_se, flags);
+#else
+ enqueue_rt_entity(rt_se, flags & ENQUEUE_HEAD);
+#endif

if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
enqueue_pushable_task(rq, p);
@@ -1367,6 +1382,18 @@ static void requeue_task_rt(struct rq *rq, struct task_struct *p, int head)
}
}

+#ifdef CONFIG_RTC_CYCLIC
+void dequeue_task_rt2(struct rq *rq, struct task_struct *p, int flags)
+{
+ dequeue_task_rt(rq, p, flags);
+}
+
+void requeue_task_rt2(struct rq *rq, struct task_struct *p, int head)
+{
+ requeue_task_rt(rq, p, head);
+}
+#endif
+
static void yield_task_rt(struct rq *rq)
{
requeue_task_rt(rq, rq->curr, 0);
@@ -2177,6 +2204,10 @@ void __init init_sched_rt_class(void)
zalloc_cpumask_var_node(&per_cpu(local_cpu_mask, i),
GFP_KERNEL, cpu_to_node(i));
}
+
+#ifdef CONFIG_RTC_CYCLIC
+ init_rt_overrun();
+#endif
}
#endif /* CONFIG_SMP */

@@ -2322,6 +2353,13 @@ static unsigned int get_rr_interval_rt(struct rq *rq, struct task_struct *task)
return 0;
}

+#ifdef CONFIG_RTC_CYCLIC
+static void task_dead_rt(struct task_struct *p)
+{
+ rt_overrun_entry_delete(p);
+}
+#endif
+
const struct sched_class rt_sched_class = {
.next = &fair_sched_class,
.enqueue_task = enqueue_task_rt,
@@ -2344,6 +2382,9 @@ const struct sched_class rt_sched_class = {
#endif

.set_curr_task = set_curr_task_rt,
+#ifdef CONFIG_RTC_CYCLIC
+ .task_dead = task_dead_rt,
+#endif
.task_tick = task_tick_rt,

.get_rr_interval = get_rr_interval_rt,
--
2.5.0

2016-04-12 05:31:28

by Bill Huey (hui)

Subject: [PATCH RFC v0 06/12] Add anonymous struct to sched_rt_entity

Add an anonymous struct to support admittance using a red-black tree,
overrun tracking, state for whether or not to yield or block, debugging
support, and the execution slot pattern for the scheduler.

Signed-off-by: Bill Huey (hui) <[email protected]>
---
include/linux/sched.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 084ed9f..cff56c6 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1305,6 +1305,21 @@ struct sched_rt_entity {
/* rq "owned" by this entity/group: */
struct rt_rq *my_q;
#endif
+#ifdef CONFIG_RTC_CYCLIC
+ struct {
+ struct rb_node node; /* admittance structure */
+ struct list_head task_list;
+ unsigned long count; /* overrun count per slot */
+ int type, color, yield;
+ u64 slots;
+
+ /* debug */
+ unsigned long last_task_state;
+
+ /* instrumentation */
+ unsigned int machine_state, last_machine_state;
+ } rt_overrun;
+#endif
};

struct sched_dl_entity {
--
2.5.0

2016-04-12 05:29:29

by Bill Huey (hui)

Subject: [PATCH RFC v0 02/12] Reroute rtc update irqs to the cyclic scheduler handler

Redirect rtc update irqs so that they drive the cyclic scheduler timer
handler instead. Let the handler determine which slot to activate next.
This is similar to scheduler tick handling, but just for the cyclic scheduler.

Signed-off-by: Bill Huey (hui) <[email protected]>
---
drivers/rtc/interface.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)

diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
index 9ef5f6f..6d39d40 100644
--- a/drivers/rtc/interface.c
+++ b/drivers/rtc/interface.c
@@ -17,6 +17,10 @@
#include <linux/log2.h>
#include <linux/workqueue.h>

+#ifdef CONFIG_RTC_CYCLIC
+#include "../kernel/sched/cyclic.h"
+#endif
+
static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer);
static void rtc_timer_remove(struct rtc_device *rtc, struct rtc_timer *timer);

@@ -488,6 +492,9 @@ EXPORT_SYMBOL_GPL(rtc_update_irq_enable);
void rtc_handle_legacy_irq(struct rtc_device *rtc, int num, int mode)
{
unsigned long flags;
+#ifdef CONFIG_RTC_CYCLIC
+ int handled = 0;
+#endif

/* mark one irq of the appropriate mode */
spin_lock_irqsave(&rtc->irq_lock, flags);
@@ -500,7 +507,23 @@ void rtc_handle_legacy_irq(struct rtc_device *rtc, int num, int mode)
rtc->irq_task->func(rtc->irq_task->private_data);
spin_unlock_irqrestore(&rtc->irq_task_lock, flags);

+#ifdef CONFIG_RTC_CYCLIC
+ /* wake up slot_curr if overrun task */
+ if (RTC_PF) {
+ if (rt_overrun_rq_admitted()) {
+ /* advance the cursor, overrun report */
+ rt_overrun_timer_handler(rtc);
+ handled = 1;
+ }
+ }
+
+ if (!handled) {
+ wake_up_interruptible(&rtc->irq_queue);
+ }
+#else
wake_up_interruptible(&rtc->irq_queue);
+#endif
+
kill_fasync(&rtc->async_queue, SIGIO, POLL_IN);
}

--
2.5.0

2016-04-12 05:32:22

by Bill Huey (hui)

Subject: [PATCH RFC v0 05/12] Task tracking per file descriptor

Task tracking per file descriptor for thread death clean up.

Signed-off-by: Bill Huey (hui) <[email protected]>
---
drivers/rtc/class.c | 3 +++
include/linux/rtc.h | 3 +++
2 files changed, 6 insertions(+)

diff --git a/drivers/rtc/class.c b/drivers/rtc/class.c
index 74fd974..ad570b9 100644
--- a/drivers/rtc/class.c
+++ b/drivers/rtc/class.c
@@ -201,6 +201,9 @@ struct rtc_device *rtc_device_register(const char *name, struct device *dev,
rtc->irq_freq = 1;
rtc->max_user_freq = 64;
rtc->dev.parent = dev;
+#ifdef CONFIG_RTC_CYCLIC
+ INIT_LIST_HEAD(&rtc->rt_overrun_tasks); //struct list_head
+#endif
rtc->dev.class = rtc_class;
rtc->dev.groups = rtc_get_dev_attribute_groups();
rtc->dev.release = rtc_device_release;
diff --git a/include/linux/rtc.h b/include/linux/rtc.h
index b693ada..1424550 100644
--- a/include/linux/rtc.h
+++ b/include/linux/rtc.h
@@ -114,6 +114,9 @@ struct rtc_timer {
struct rtc_device {
struct device dev;
struct module *owner;
+#ifdef CONFIG_RTC_CYCLIC
+ struct list_head rt_overrun_tasks;
+#endif

int id;
char name[RTC_DEVICE_NAME_SIZE];
--
2.5.0

2016-04-12 05:32:44

by Bill Huey (hui)

Subject: [PATCH RFC v0 03/12] Add cyclic support to rtc-dev.c

Wait-queue changes to rtc_dev_read() so that it can support overrun count
reporting when multiple threads are blocked against a single wait object.

ioctl() additions to allow callers to admit the calling thread to the
cyclic scheduler.

Signed-off-by: Bill Huey (hui) <[email protected]>
---
drivers/rtc/rtc-dev.c | 161 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 161 insertions(+)

diff --git a/drivers/rtc/rtc-dev.c b/drivers/rtc/rtc-dev.c
index a6d9434..0fc9a8c 100644
--- a/drivers/rtc/rtc-dev.c
+++ b/drivers/rtc/rtc-dev.c
@@ -18,6 +18,15 @@
#include <linux/sched.h>
#include "rtc-core.h"

+#ifdef CONFIG_RTC_CYCLIC
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+#include <../kernel/sched/sched.h>
+#include <../kernel/sched/cyclic.h>
+//#include <../kernel/sched/cyclic_rt.h>
+#endif
+
static dev_t rtc_devt;

#define RTC_DEV_MAX 16 /* 16 RTCs should be enough for everyone... */
@@ -29,6 +38,10 @@ static int rtc_dev_open(struct inode *inode, struct file *file)
struct rtc_device, char_dev);
const struct rtc_class_ops *ops = rtc->ops;

+#ifdef CONFIG_RTC_CYCLIC
+ reset_rt_overrun();
+#endif
+
if (test_and_set_bit_lock(RTC_DEV_BUSY, &rtc->flags))
return -EBUSY;

@@ -153,13 +166,26 @@ rtc_dev_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
{
struct rtc_device *rtc = file->private_data;

+#ifdef CONFIG_RTC_CYCLIC
+ DEFINE_WAIT_FUNC(wait, single_default_wake_function);
+#else
DECLARE_WAITQUEUE(wait, current);
+#endif
unsigned long data;
+ unsigned long flags;
+#ifdef CONFIG_RTC_CYCLIC
+ int wake = 0, block = 0;
+#endif
ssize_t ret;

if (count != sizeof(unsigned int) && count < sizeof(unsigned long))
return -EINVAL;

+#ifdef CONFIG_RTC_CYCLIC
+ if (rt_overrun_task_yield(current))
+ goto yield;
+#endif
+printk("%s: 0 color = %d \n", __func__, current->rt.rt_overrun.color);
add_wait_queue(&rtc->irq_queue, &wait);
do {
__set_current_state(TASK_INTERRUPTIBLE);
@@ -169,23 +195,59 @@ rtc_dev_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
rtc->irq_data = 0;
spin_unlock_irq(&rtc->irq_lock);

+if (block) {
+ block = 0;
+ if (wake) {
+ printk("%s: wake \n", __func__);
+ wake = 0;
+ } else {
+ printk("%s: ~wake \n", __func__);
+ }
+}
if (data != 0) {
+#ifdef CONFIG_RTC_CYCLIC
+ /* overrun reporting */
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+ if (_on_rt_overrun_admitted(current)) {
+ /* pass back to userspace */
+ data = rt_task_count(current);
+ rt_task_count(current) = 0;
+ }
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+ ret = 0;
+printk("%s: 1 color = %d \n", __func__, current->rt.rt_overrun.color);
+ break;
+ }
+#else
ret = 0;
break;
}
+#endif
if (file->f_flags & O_NONBLOCK) {
ret = -EAGAIN;
+printk("%s: 2 color = %d \n", __func__, current->rt.rt_overrun.color);
break;
}
if (signal_pending(current)) {
+printk("%s: 3 color = %d \n", __func__, current->rt.rt_overrun.color);
ret = -ERESTARTSYS;
break;
}
+#ifdef CONFIG_RTC_CYCLIC
+ block = 1;
+#endif
schedule();
+#ifdef CONFIG_RTC_CYCLIC
+ /* debugging */
+ wake = 1;
+#endif
} while (1);
set_current_state(TASK_RUNNING);
remove_wait_queue(&rtc->irq_queue, &wait);

+#ifdef CONFIG_RTC_CYCLIC
+ret:
+#endif
if (ret == 0) {
/* Check for any data updates */
if (rtc->ops->read_callback)
@@ -201,6 +263,29 @@ rtc_dev_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
sizeof(unsigned long);
}
return ret;
+
+#ifdef CONFIG_RTC_CYCLIC
+yield:
+
+ spin_lock_irq(&rtc->irq_lock);
+ data = rtc->irq_data;
+ rtc->irq_data = 0;
+ spin_unlock_irq(&rtc->irq_lock);
+
+ raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+ if (_on_rt_overrun_admitted(current)) {
+ /* pass back to userspace */
+ data = rt_task_count(current);
+ rt_task_count(current) = 0;
+ }
+ else {
+ }
+
+ raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+ ret = 0;
+
+ goto ret;
+#endif
}

static unsigned int rtc_dev_poll(struct file *file, poll_table *wait)
@@ -215,6 +300,56 @@ static unsigned int rtc_dev_poll(struct file *file, poll_table *wait)
return (data != 0) ? (POLLIN | POLLRDNORM) : 0;
}

+#ifdef CONFIG_RTC_CYCLIC
+extern asmlinkage __visible void __sched notrace preempt_schedule(void);
+
+/* yield behavior * /
+int rt_overrun_task_yield_block(struct task_struct *p)
+{
+ struct rq *rq = task_rq(p);
+ unsigned int block = 1;
+
+ if (test_case)
+ else
+ return 1;
+
+ if (rt_overrun_task_is_best_effort(p)) {
+ // assert that it should be on the rq
+ // move to the end, let pick_next_task_rt() deal with the next runnable task
+ requeue_task_rt2(rq, p, false);
+
+ //clear_overrun_log();
+
+ if (_cond_resched()) {
+ // we reschedule here
+ }
+
+ block = 0;
+ }
+
+ return block;
+} */
+
+int test_admit(u64 slots)
+{
+ /* Only allow the current task to be admitted for now
+ * and allow for /proc to show the slot pattern
+ * in a global fashion */
+ return rt_overrun_task_admit(current, slots);
+}
+
+int test_yield(u64 slots)
+{
+ rt_task_yield(current) = slots;
+ return 0;
+}
+
+void test_replenish(void)
+{
+ rt_overrun_task_replenish(current);
+}
+#endif
+
static long rtc_dev_ioctl(struct file *file,
unsigned int cmd, unsigned long arg)
{
@@ -223,6 +358,9 @@ static long rtc_dev_ioctl(struct file *file,
const struct rtc_class_ops *ops = rtc->ops;
struct rtc_time tm;
struct rtc_wkalrm alarm;
+#ifdef CONFIG_RTC_CYCLIC
+ u64 slots;
+#endif
void __user *uarg = (void __user *) arg;

err = mutex_lock_interruptible(&rtc->ops_lock);
@@ -250,6 +388,12 @@ static long rtc_dev_ioctl(struct file *file,
!capable(CAP_SYS_RESOURCE))
err = -EACCES;
break;
+#ifdef CONFIG_RTC_CYCLIC
+ case RTC_OV_REPLEN:
+ test_replenish();
+ err = -EACCES;
+ break;
+#endif
}

if (err)
@@ -380,7 +524,21 @@ static long rtc_dev_ioctl(struct file *file,
case RTC_IRQP_READ:
err = put_user(rtc->irq_freq, (unsigned long __user *)uarg);
break;
+#ifdef CONFIG_RTC_CYCLIC
+ case RTC_OV_YIELD:
+ mutex_unlock(&rtc->ops_lock);
+ if (copy_from_user(&slots, uarg, sizeof(u64)))
+ return -EFAULT;
+
+ return test_yield(slots);
+
+ case RTC_OV_ADMIT:
+ mutex_unlock(&rtc->ops_lock);
+ if (copy_from_user(&slots, uarg, sizeof(u64)))
+ return -EFAULT;

+ return test_admit(slots);
+#endif
case RTC_WKALM_SET:
mutex_unlock(&rtc->ops_lock);
if (copy_from_user(&alarm, uarg, sizeof(alarm)))
@@ -424,6 +582,9 @@ static int rtc_dev_release(struct inode *inode, struct file *file)
{
struct rtc_device *rtc = file->private_data;

+#ifdef CONFIG_RTC_CYCLIC
+ rt_overrun_entries_delete_all(rtc);
+#endif
/* We shut down the repeating IRQs that userspace enabled,
* since nothing is listening to them.
* - Update (UIE) ... currently only managed through ioctls
--
2.5.0

2016-04-12 05:33:00

by Bill Huey (hui)

Subject: [PATCH RFC v0 01/12] Kconfig change

Add the selection options for the cyclic scheduler

Signed-off-by: Bill Huey (hui) <[email protected]>
---
drivers/rtc/Kconfig | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
index 544bd34..8a1b704 100644
--- a/drivers/rtc/Kconfig
+++ b/drivers/rtc/Kconfig
@@ -73,6 +73,11 @@ config RTC_DEBUG
Say yes here to enable debugging support in the RTC framework
and individual RTC drivers.

+config RTC_CYCLIC
+ bool "RTC cyclic executive scheduler support"
+ help
+ Frame/Cyclic executive scheduler support through the RTC interface
+
comment "RTC interfaces"

config RTC_INTF_SYSFS
--
2.5.0

2016-04-12 05:58:40

by Mike Galbraith

Subject: Re: [PATCH RFC v0 00/12] Cyclic Scheduler Against RTC

On Mon, 2016-04-11 at 22:29 -0700, Bill Huey (hui) wrote:
> Hi,
>
> This a crude cyclic scheduler implementation. It uses SCHED_FIFO tasks
> and runs them according to a map pattern specified by a 64 bit mask. Each
> bit corresponds to an entry into an 64 entry array of
> 'struct task_struct'. This works single core CPU 0 only for now.
>
> Threads are 'admitted' to this map by an extension to the ioctl() via the
> of (rtc) real-time clock interface. The bit pattern then determines when
> the task will run or activate next.
>
> The /dev/rtc interface is choosen for this purpose because of its
> accessibilty to userspace. For example, the mplayer program already use
> it as a timer source and could possibly benefit from being sync to a
> vertical retrace interrupt during decoding. Could be an OpenGL program
> needing precisely scheduler support for those same handling vertical
> retrace interrupts, low latency audio and timely handling of touch
> events amognst other uses.

Sounds like you want SGI's frame rate scheduler.

-Mike

2016-04-12 06:05:20

by Mike Galbraith

Subject: Re: [PATCH RFC v0 00/12] Cyclic Scheduler Against RTC

On Mon, 2016-04-11 at 22:59 -0700, Bill Huey (hui) wrote:
>
>
> On Mon, Apr 11, 2016 at 10:58 PM, Mike Galbraith <[email protected]> wrote:
> > On Mon, 2016-04-11 at 22:29 -0700, Bill Huey (hui) wrote:
> > > Hi,
> > >
> > > This a crude cyclic scheduler implementation. It uses SCHED_FIFO tasks
> > > and runs them according to a map pattern specified by a 64 bit mask. Each
> > > bit corresponds to an entry into an 64 entry array of
> > > 'struct task_struct'. This works single core CPU 0 only for now.
> > >
> > > Threads are 'admitted' to this map by an extension to the ioctl() via the
> > > of (rtc) real-time clock interface. The bit pattern then determines when
> > > the task will run or activate next.
> > >
> > > The /dev/rtc interface is choosen for this purpose because of its
> > > accessibilty to userspace. For example, the mplayer program already use
> > > it as a timer source and could possibly benefit from being sync to a
> > > vertical retrace interrupt during decoding. Could be an OpenGL program
> > > needing precisely scheduler support for those same handling vertical
> > > retrace interrupts, low latency audio and timely handling of touch
> > > events amognst other uses.
> >
> > Sounds like you want SGI's frame rate scheduler.

And an echo free mailer :)

2016-04-13 08:57:36

by Juri Lelli

[permalink] [raw]
Subject: Re: [PATCH RFC v0 00/12] Cyclic Scheduler Against RTC

[+Luca, as he might be interested]

Hi,

On 11/04/16 22:29, Bill Huey (hui) wrote:
> Hi,
>
> This a crude cyclic scheduler implementation. It uses SCHED_FIFO tasks
> and runs them according to a map pattern specified by a 64 bit mask. Each
> bit corresponds to an entry into an 64 entry array of
> 'struct task_struct'. This works single core CPU 0 only for now.
>
> Threads are 'admitted' to this map by an extension to the ioctl() via the
> of (rtc) real-time clock interface. The bit pattern then determines when
> the task will run or activate next.
>
> The /dev/rtc interface is choosen for this purpose because of its
> accessibilty to userspace. For example, the mplayer program already use
> it as a timer source and could possibly benefit from being sync to a
> vertical retrace interrupt during decoding. Could be an OpenGL program
> needing precisely scheduler support for those same handling vertical
> retrace interrupts, low latency audio and timely handling of touch
> events amognst other uses.
>

Interesting! I read the doc patch and only skimmed through the others,
but I already seem to have a general question.

Since you seem familiar with SCHED_DEADLINE [1] (you refer to deadline
scheduling in the doc patch and Dario is in CC :-)), what do you think
is wrong with just using that for this type of workload?

AFAIK, mplayer-like workloads already play well with deadline
scheduling, and SCHED_DEADLINE is mainline and actively maintained and
developed [2].
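
Just to make that concrete, here is a minimal sketch of what admission
to SCHED_DEADLINE looks like. The numbers are made up for something
like a 30Hz decoder thread, glibc has no wrapper yet so the raw syscall
is used, and kernel headers recent enough to define __NR_sched_setattr
are assumed:

/*
 * Sketch only: admit the calling thread to SCHED_DEADLINE with a 3ms
 * budget every 33ms (~30Hz). Parameters are illustrative, not tuned.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sched.h>
#include <sys/syscall.h>

#define SCHED_DEADLINE 6   /* from include/uapi/linux/sched.h */

struct sched_attr {        /* layout as documented in sched_setattr(2) */
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;    /* these three are in nanoseconds */
        uint64_t sched_deadline;
        uint64_t sched_period;
};

int main(void)
{
        struct sched_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size           = sizeof(attr);
        attr.sched_policy   = SCHED_DEADLINE;
        attr.sched_runtime  =  3 * 1000 * 1000;   /*  3ms budget   */
        attr.sched_deadline = 33 * 1000 * 1000;   /* 33ms deadline */
        attr.sched_period   = 33 * 1000 * 1000;   /* ~30Hz period  */

        if (syscall(__NR_sched_setattr, 0, &attr, 0)) {
                perror("sched_setattr");
                return 1;
        }

        for (;;) {
                /* decode/render one frame's worth of work here */
                sched_yield();  /* finished early: sleep until next period */
        }
}

A sched_yield() from a DEADLINE task gives up whatever is left of the
current instance's runtime, so a thread that finishes a frame early
just sleeps until its next period.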

Best,

- Juri

[1] http://onlinelibrary.wiley.com/doi/10.1002/spe.2335/abstract
[2] http://events.linuxfoundation.org/sites/events/files/slides/SCHED_DEADLINE-20160404.pdf

> There is also a need for some kind of blocking/yielding interface that can
> return an overrun count for when the thread utilizes more time than
> allocated for that frame. The read() function in rtc is overloaded for this
> purpose and reports overrun events. Yield functionality has yet to be fully
> tested.
>
> I apologize for any informal or misused of terminology as I haven't fully
> reviewed all of the academic literature regarding these kind of schedulers.
> I welcome suggestions and corrects etc
>
> Special thanks to includes...
>
> Peter Ziljstra (Intel), Steve Rostedt (Red Hat), Rik van Riel (Red Hat) for
> encouraging me to continue working in the Linux kernel community and being
> generally positive and supportive.
>
> KY Srinivasan (formerly Novell now Microsoft) for discussion of real-time
> schedulers and pointers to specifics on that topic. It was just a single
> discussion but was basically the inspiration for this kind of work.
>
> Amir Frenkel (Palm), Kenneth Albanowski (Palm), Bdale Garbee (HP) for the
> amazing place that was Palm, Kenneth for being a co-conspirator with this
> scheduler. This scheduler was inspired by performance work that I did
> at Palm's kernel group along with discussions with the multimedia team
> before HP kill webOS off. Sad and infuriating moment.
>
> Maybe, in a short while, the community will understand the value of these
> patches for -rt and start solving the general phenomenon of high performance
> multi-media and user interactivity problems more properly with both a
> scheduler like this and -rt shipped as default in the near future.
>
> [Also, I'd love some kind of sponsorship to continue what I think is
> critical work versus heading back into the valley]
>
> ---
>
> Bill Huey (hui) (12):
> Kconfig change
> Reroute rtc update irqs to the cyclic scheduler handler
> Add cyclic support to rtc-dev.c
> Anonymous struct initialization
> Task tracking per file descriptor
> Add anonymous struct to sched_rt_entity
> kernel/userspace additions for addition ioctl() support for rtc
> Compilation support
> Add priority support for the cyclic scheduler
> Export SCHED_FIFO/RT requeuing functions
> Cyclic scheduler support
> Cyclic/rtc documentation
>
> Documentation/scheduler/sched-cyclic-rtc.txt | 468 ++++++++++++++++++++
> drivers/rtc/Kconfig | 5 +
> drivers/rtc/class.c | 3 +
> drivers/rtc/interface.c | 23 +
> drivers/rtc/rtc-dev.c | 161 +++++++
> include/linux/init_task.h | 18 +
> include/linux/rtc.h | 3 +
> include/linux/sched.h | 15 +
> include/uapi/linux/rtc.h | 4 +
> kernel/sched/Makefile | 1 +
> kernel/sched/core.c | 13 +
> kernel/sched/cyclic.c | 620 +++++++++++++++++++++++++++
> kernel/sched/cyclic.h | 86 ++++
> kernel/sched/cyclic_rt.h | 7 +
> kernel/sched/rt.c | 41 ++
> 15 files changed, 1468 insertions(+)
> create mode 100644 Documentation/scheduler/sched-cyclic-rtc.txt
> create mode 100644 kernel/sched/cyclic.c
> create mode 100644 kernel/sched/cyclic.h
> create mode 100644 kernel/sched/cyclic_rt.h
>
> --
> 2.5.0
>

2016-04-13 09:37:30

by Bill Huey (hui)

[permalink] [raw]
Subject: Re: [PATCH RFC v0 00/12] Cyclic Scheduler Against RTC

[Trying to resend this so that linux-kernel mailer doesn't reject it.
ok just found plain text mode. Will cull the CC list in future
responses]

Hi Juri,

It's not for replacing deadline, first of all. I'm not fully aware of the
kinds of things being done with deadline, and I would like links so that
I have some kind of reference.

The original motivation for doing this was for a number of reasons:

1) Current FIFO/RR policies aren't exact enough for a lot of the mixed
modern multimedia scenarios I saw while working on a real-world load on
an Android-like system. There is insufficient responsiveness for
interactive UX tasks, including things like jackd and pulseaudio for low
latency applications (music, keyboard controllers, touch events...),
spanning tasks across the system.

Deadline seems to be more localized to a specific application's needs and
seems hard to use, but I'm inexperienced with it. These problems would
benefit from a simpler solution.

2) The need for a scheduler to be driven directly by an external
interrupt from a number of sources.

3) The need for a global view of the system so that power management
decisions can be made sensibly in multicore systems. It's not a scheduler
concern alone, but ideally it would have more influence over power
management decisions on battery-powered devices, etc...

4) other reasons that should be in the docs but I got sick of writing
exhaustive documentation on the matter...

That's the best I can do for now. I need to post a new version with
compilation fixes. There are a lot of problems with the code regarding
portability and other issues in the initial revision.

bill

2016-04-13 10:07:04

by Juri Lelli

[permalink] [raw]
Subject: Re: [PATCH RFC v0 00/12] Cyclic Scheduler Against RTC

On 13/04/16 02:37, Bill Huey (hui) wrote:
> [Trying to resend this so that linux-kernel mailer doesn't reject it.
> ok just found plain text mode. Will cull the CC list in future
> responses]
>
> Hi Juri,
>
> It's not for replacing deadline, first of all. I'm not fully aware of the
> kinds of things being done with deadline, and I would like links so that
> I have some kind of reference.
>

OK. You can find references in Documentation, in my first reply and
embedded in the ELC slides as well. Please, let me know if you need more
:-).

> The original motivation for doing this was for a number of reasons:
>
> 1) Current FIFO/RR policies aren't exact enough for a lot of the mixed
> modern multimedia scenarios I saw while working on a real-world load on
> an Android-like system. There is insufficient responsiveness for
> interactive UX tasks, including things like jackd and pulseaudio for low
> latency applications (music, keyboard controllers, touch events...),
> spanning tasks across the system.
>
> Deadline seems to be more localized to a specific application's needs and
> seems hard to use, but I'm inexperienced with it. These problems would
> benefit from a simpler solution.
>

I'm not sure what you mean by "localized", but I believe DEADLINE should
be used more widely to service the same kind of applications you are
referring to. It's still quite a new addition to the scheduler, so it is
understandable that we still have some legacy to fight. But we can get
better in the future.

> 2) The need for a scheduler to be driven directly by an external
> interrupt from a number of sources.
>

If you use DEADLINE to service the activity an interrupt source might
trigger, I think you can already do this.

> 3) The need for a global view of the system so that power management
> decisions can be made sensibly in multicore systems. It's not a scheduler
> concern alone, but ideally it would have more influence over power
> management decisions on battery-powered devices, etc...
>

That's true. But it is also something we are already working on.
I don't know if you are following the schedfreq/schedutil threads [1], for
example, but there we are discussing how to integrate scheduler and
cpufreq more closely. And you might also be interested in the EAS effort
[2].

> 4) other reasons that should be in the docs but I got sick of writing
> exhaustive documentation on the matter...
>

:-)

> That's the best I can do for now. I need to post a new version with
> compilation fixes. There are a lot of problems with the code regarding
> portability and other issues in the initial revision.
>

OK. Feel free to ask if you also decide to experiment with DEADLINE and
find any problem with it.

Best,

- Juri

[1] https://lkml.org/lkml/2016/3/17/420
https://lkml.org/lkml/2016/2/22/1037
[2] https://lkml.org/lkml/2015/7/7/754

2016-04-13 10:35:19

by Bill Huey (hui)

[permalink] [raw]
Subject: Re: [PATCH RFC v0 00/12] Cyclic Scheduler Against RTC

Hi,

On Wed, Apr 13, 2016 at 3:08 AM, Juri Lelli <[email protected]> wrote:
> I'm not sure what you mean by "localized", but I believe DEADLINE should
> be used more widely to service the same kind of applications you are
> referring to. It's still quite a new addition to the scheduler, so it is
> understandable that we still have some legacy to fight. But we can get
> better in the future.

Yeah, I've known about it for a while, but it's just so hard for me to
imagine using that for the kinds of cases that I'm thinking about for
mixed tasks. I just don't have an example in my head of how that would
work, since I don't have a view of how something like EDF would solve
some of the basic cases. That's mostly my ignorance.

The original inspiration for this was problems where FIFO tasks running
for long periods of time would stall touch event handlers. The common
fix in multimedia circles seemed to be using larger buffers to avoid
dropouts, which only caused starvation and other problems for other
important threads.

There has to be some kind of global view of how a system should run.
It's hard for me (my own ignorance) to see how something like deadline
would handle continuously running tasks like that under those scenarios
and have that define some kind of global running policy for the system.

That's why I created this as a brain-dead take on how to hack this
stuff, with some kind of crude execution pattern, to somehow get an
acceptable level of interactivity yet still meet basic hard requirements
for audio etc.

There might be a scenario where one would use sched_switch data to help
with deciding that. We ran into a lot of problems with the Qualcomm MSM
architecture and their power management code. Some of the hacks were
pretty brutal and wasted processor time polling the second core
aggressively.
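
(To make the sched_switch point concrete, here is a hypothetical sketch
of streaming those events out of tracefs; the paths assume the usual
debugfs mount and root privileges, and none of this is part of the
patch set:)

/*
 * Enable the sched_switch tracepoint and stream its events; each line
 * names the prev/next task, priority and state. Needs root or a
 * relaxed tracefs permission setup.
 */
#include <stdio.h>

int main(void)
{
        const char *ev =
            "/sys/kernel/debug/tracing/events/sched/sched_switch/enable";
        const char *pipe = "/sys/kernel/debug/tracing/trace_pipe";
        char line[512];
        FILE *f;

        f = fopen(ev, "w");            /* turn the tracepoint on */
        if (!f)
                return 1;
        fputs("1", f);
        fclose(f);

        f = fopen(pipe, "r");          /* blocking stream of events */
        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
        fclose(f);
        return 0;
}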

I wanted to solve all of these problems more completely, and outside of
the current work being done, for better or worse.

>> 2) The need for a scheduler to be driven directly by an external
>> interrupt from a number of sources.
>
> If you use DEADLINE to service the activity an interrupt source might
> trigger, I think you can already do this.

I'll have to think about this. Having a simple example here would help.
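
Something along these lines is roughly what I imagine you mean -- a
minimal sketch only, with made-up budget numbers, using the stock
/dev/rtc0 update interrupt as the external source and SCHED_DEADLINE
just policing the handler thread's CPU time (again assuming headers
that define __NR_sched_setattr):

/*
 * Sketch: a SCHED_DEADLINE thread woken by the plain RTC update
 * interrupt (1Hz); the interrupt source drives the task while
 * DEADLINE enforces its budget.
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/rtc.h>

#define SCHED_DEADLINE 6   /* from include/uapi/linux/sched.h */

struct sched_attr {        /* layout as documented in sched_setattr(2) */
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;
        uint64_t sched_deadline;
        uint64_t sched_period;
};

int main(void)
{
        struct sched_attr attr = {
                .size           = sizeof(attr),
                .sched_policy   = SCHED_DEADLINE,
                .sched_runtime  =    5ULL * 1000 * 1000,   /*   5ms budget */
                .sched_deadline =  100ULL * 1000 * 1000,   /* 100ms        */
                .sched_period   = 1000ULL * 1000 * 1000,   /* 1s, UIE rate */
        };
        unsigned long data;
        int fd = open("/dev/rtc0", O_RDONLY);

        if (fd < 0 || syscall(__NR_sched_setattr, 0, &attr, 0)) {
                perror("setup");
                return 1;
        }

        ioctl(fd, RTC_UIE_ON, 0);   /* one update interrupt per second */

        for (;;) {
                /* block until the next RTC interrupt fires */
                if (read(fd, &data, sizeof(data)) != sizeof(data))
                        break;
                /* low byte is the irq type, the rest is the count */
                printf("woke up, %lu interrupt(s)\n", data >> 8);
        }

        ioctl(fd, RTC_UIE_OFF, 0);
        close(fd);
        return 0;
}

For anything faster than 1Hz I would presumably switch to RTC_IRQP_SET
plus RTC_PIE_ON for the periodic interrupt instead of the update
interrupt.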

>> 3) The need for a global view of the system so that power management
>> decisions can be made sensibly in multicore systems. It's not a scheduler
>> concern alone, but ideally it would have more influence over power
>> management decisions on battery-powered devices, etc...
>
> That's true. But it is also something we are already working on.
> I don't know if you are following the schedfreq/schedutil threads [1], for
> example, but there we are discussing how to integrate scheduler and
> cpufreq more closely. And you might also be interested in the EAS effort
> [2].

Not yet but I'll look for them.

> OK. Feel free to ask if you also decide to experiment with DEADLINE and
> find any problem with it.

> [1] https://lkml.org/lkml/2016/3/17/420
> https://lkml.org/lkml/2016/2/22/1037
> [2] https://lkml.org/lkml/2015/7/7/754

Thanks, reading them now, but they're quite complicated and the threads
are quite long. It'll take time to digest it all.

Thanks

bill