Evaluating mixed criticality scheduling algorithms with realistic workloads.

David Griffin, Iain Bate, Benjamin Lesage, Frank Soboczenski

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Most work on mixed-criticality scheduling has
considered timing-related failures to be independent of one
another. In reality this is not true: in many systems the state
that caused the original failure will be similar to the state at
the next release (job) of the task. Therefore, when arguing
about the number of jobs that do not meet their deadlines, it is
crucial that an appropriate fault model is incorporated into
the tool framework (i.e. the task set generators and simulators)
used to evaluate scheduling policies. The second issue that
affects the tool framework is the choice of Worst-Case
Execution Times (WCETs) for the different criticality modes of
tasks. The current literature has argued that a
WCET should be chosen that would be exceeded only
incredibly rarely, e.g. in 1 in 10^16 jobs. This leads to WCET
values much greater than the High Water-Mark (HWM). The
needs of certification, and consideration of how safety is
argued, lead to the conclusion that the acceptable probability
of a job not meeting its deadline can be much greater. This
greatly impacts the WCETs and hence the results of the
evaluation. The contributions of this paper are thus a more
realistic tool framework, and hence more realistic results than
those previously reported, which we claim give better insight
into how the scheduling policies would behave in practice and
hence better evidence for any safety case.
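The abstract's central point — that a failure-inducing state tends to persist into a task's next release, so deadline misses cluster rather than occurring independently — can be illustrated with a minimal simulation sketch. This is not the paper's tool framework; it is a hypothetical two-state Markov ("burst") fault model, with all names (`simulate_overruns`, `longest_burst`) and parameter values invented for illustration:

```python
import random

def simulate_overruns(n_jobs, p_overrun, p_stay, seed=0):
    """Simulate which of n_jobs exceed their execution-time budget.

    Independent model: each job overruns with probability p_overrun.
    Correlated model (two-state Markov chain): once a job overruns,
    the state that caused it tends to persist, so the next job
    overruns with the much higher probability p_stay.
    """
    rng = random.Random(seed)
    independent, correlated = [], []
    prev = False  # did the previous job in the correlated model overrun?
    for _ in range(n_jobs):
        independent.append(rng.random() < p_overrun)
        p = p_stay if prev else p_overrun
        prev = rng.random() < p
        correlated.append(prev)
    return independent, correlated

def longest_burst(overruns):
    """Length of the longest run of consecutive overrunning jobs."""
    best = cur = 0
    for o in overruns:
        cur = cur + 1 if o else 0
        best = max(best, cur)
    return best
```

Under an independent model with a small per-job overrun probability, long bursts of consecutive misses are vanishingly rare; under the correlated model they are routine, which is why a simulator's fault model materially changes the evaluation of a scheduling policy.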
Original language: English
Title of host publication: Proc. 3rd Workshop on Mixed Criticality Systems (WMC), RTSS
Publisher: IEEE Real-Time Systems Symposium (RTSS)
Pages: 24-29
Publication status: Published - 2016
