The Linux Plumbers 2019 Scheduler Microconference covers all scheduler topics that are not real-time:
- Load Balancer Rework - prototype
- Idle Balance optimizations
- Flattening the group scheduling hierarchy
- Core scheduling
- Proxy Execution for CFS
- Improving scheduling latency with SCHED_IDLE tasks
- Scheduler tunables - Mobile vs Server
- LISA for scheduler verification
We plan to continue the discussions that started at OSPM in May '19 and reach a wider audience beyond the core scheduler developers at LPC.
Rik van Riel
This microconference picks up the scheduler topics that are not RT, so it should take place either immediately before or after that MC.
Juri Lelli email@example.com, Vincent Guittot firstname.lastname@example.org, Daniel Bristot de Oliveira email@example.com, Subhra Mazumdar firstname.lastname@example.org, Dhaval Giani email@example.com
There have been two different approaches to core scheduling proposed on the LKML over the past year. One was the coscheduling approach by Jan Schönherr, originally posted at https://lkml.org/lkml/2018/9/7/1521 with the next version posted at https://lkml.org/lkml/2018/10/19/859
Upstream chose a different route and decided to modify CFS and only do "core scheduling". Vineeth picked up the...
Proxy execution can be considered a generalization of the real-time priority inheritance mechanism. With proxy execution, a task can run using the context of some other task that is "willing" to let the first task run, as this improves performance for both. With this topic I'd like to detail the progress that has been made since the initial RFC posting on LKML and discuss open problems...
Dmitry Vyukov's testing work identified some (ab)uses of sched_setattr() that can result in SCHED_DEADLINE tasks starving RCU's kthreads for extended time periods, not milliseconds, not seconds, not minutes, not even hours, but days. Given that RCU CPU stall warnings are issued whenever an RCU grace period fails to complete within a few tens of seconds, the system did not suffer silently. ...
The CFS load_balance code has become more and more complex over the years and has reached a point where its policy sometimes can't be explained. Furthermore, the available metrics have evolved, and load balancing doesn't always take full advantage of them when calculating the imbalance. It's probably a good time to rework the load balance code as proposed in this...
There is a presentation in the refereed track on flattening the CPU controller runqueue hierarchy, but it may be useful to have a discussion on the same topic in the scheduler microconference.
The Linux Kernel scheduler represents a system's topology by the means of
scheduler domains. In the common case, these domains map to the cache topology
of the system.
The Cavium ThunderX is an ARMv8-A 2-node NUMA system, each node containing
48 CPUs (no hyperthreading). Each CPU has its own L1 cache, and CPUs within
the same node share the same L2 cache.
Running some memory-intensive...
Turbosched is a proposed scheduler enhancement that aims to sustain turbo frequencies for a longer duration by explicitly marking small tasks that are known to cause jitter and packing them on a smaller number of cores. This ensures that the other cores remain idle, and the energy thus saved can be used by CPU-intensive tasks to sustain higher frequencies for longer.
Currently there is no user control over how much time the scheduler should spend searching for a CPU when placing a task. The logic is hardcoded, based on heuristics that don't work well in many cases, e.g. for very short-running tasks. We propose a new latency-nice property that users can set per task (similar to the nice value) to control the search time and potentially also the preemption logic. Also...