Real-Time and Distributed Real-Time Systems: Theory and Applications

Virtual Deadlock

In most real-time systems, shared resources, particularly I/O devices, are handled in a mutually exclusive manner; that is, a higher-priority task contending for a shared resource is not allowed to preempt a lower-priority task while the latter is accessing the same resource. Note that allowing such preemption could also have served as a workaround for the priority inversion problem illustrated in Figure 1.17, but it is not adopted, in order to preclude a virtual deadlock condition.

The virtual deadlock can be understood by examining a situation where two tasks T1 and T2 try to access a common I/O device, such as external firmware, which requires a finite sequence of commands. A typical case is one in which neither task gets to complete initialization of the I/O device, because each task is forced by the other to relinquish the shared resource before the initialization is complete. Such a situation may lead to a virtual deadlock.
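The failure mode above can be avoided only if a task holds the device for its entire command sequence. A minimal sketch in Python (the device, the command names, and the three-step initialization are hypothetical stand-ins) shows mutual exclusion over the whole sequence, rather than per command:

```python
import threading

# Hypothetical device: initialization requires a finite command sequence.
# Holding the lock across the WHOLE sequence ensures no other task can
# interleave its own commands mid-initialization -- releasing it between
# commands is what lets each task undo the other's partial setup, which
# is the virtual deadlock scenario described above.
device_lock = threading.Lock()
command_log = []  # stands in for the device's command register

def init_device(task_name):
    with device_lock:  # acquired once for the full command sequence
        for cmd in ("RESET", "CONFIGURE", "START"):
            command_log.append((task_name, cmd))

t1 = threading.Thread(target=init_device, args=("T1",))
t2 = threading.Thread(target=init_device, args=("T2",))
t1.start(); t2.start(); t1.join(); t2.join()

# Each task's three commands appear contiguously, never interleaved.
for i in range(0, 6, 3):
    assert len({name for name, _ in command_log[i:i + 3]}) == 1
```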

Scheduling in a Distributed Real-Time System Environment

Scheduling in a distributed real-time system (DRTS) environment is more complicated than scheduling in an RTOS running on a single processor, for two reasons: tasks must be scheduled over multiple processors, and message interchange between tasks introduces dependencies. If T = {T1, T2, ..., Tn} is a set of tasks with periods P1, P2, ..., Pn and WCETs C1, C2, ..., Cn scheduled over m processors, then a necessary condition for the set of tasks to be schedulable is

\sum_{i=1}^{n} \frac{C_i}{P_i} \le m

that is, the total processor utilization of the task set cannot exceed the number of processors.
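The necessary condition — that the total utilization sum(Ci/Pi) of the task set cannot exceed the number of processors m — can be checked directly. A minimal sketch, with a hypothetical task set:

```python
from fractions import Fraction

def necessary_condition(wcets, periods, m):
    """Necessary (but not sufficient) schedulability test for n periodic
    tasks on m processors: total utilization sum(C_i / P_i) <= m.
    Fractions avoid floating-point error in the comparison."""
    utilization = sum(Fraction(c, p) for c, p in zip(wcets, periods))
    return utilization <= m

# Hypothetical task set: U = 1/3 + 2/5 + 3/8 = 133/120 (about 1.108)
print(necessary_condition([1, 2, 3], [3, 5, 8], 1))  # False: U > 1 processor
print(necessary_condition([1, 2, 3], [3, 5, 8], 2))  # True: U <= 2 processors
```

Note the test is only necessary: a task set passing it may still be unschedulable once precedence and message constraints are considered.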

With messages involved, task scheduling has to take message scheduling into account, and this may be done either dynamically or statically. It is difficult to guarantee tight deadlines with dynamic scheduling techniques, even in a uniprocessor system, once mutual exclusion and task precedence constraints are considered.

Static scheduling in a DRTS environment, on the other hand, involves computing a trace of tasks and messages before execution, which is stored and dispatched during run time. It is assumed that all clocks are globally synchronized, so that the dispatchers on all processors share a common time reference. The trace is called the schedule, and the quantum of time after which the schedule repeats is called the schedule period. One way of computing a static schedule is task and message coscheduling, which assumes a definite transmission time for each message, a CPU time for sending and receiving messages, and nonpreemptive use of the shared network connecting the processors. Figure 1.18 illustrates static coscheduling in a DRTS with three tasks (T1, T2, T3) assigned to three processors (p1, p2, p3) interconnected over a network, with message dependencies M13 and M21, assuming that the task periods P1, P2, P3 are each equal to the schedule period Ps. Again, the relative deadline of each task is assumed to be its period.


Figure 1.18: (See color insert.) Coscheduling in DRTS.
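A static schedule of this kind amounts to a per-processor dispatch table replayed every schedule period by globally synchronized dispatchers. The sketch below is illustrative only — the task, processor, and message names follow Figure 1.18, but the start times and the value of Ps are assumed, not taken from the figure:

```python
# Hypothetical precomputed trace: each processor stores (start_time,
# activity) pairs; the dispatcher merely replays them every schedule
# period Ps against the globally synchronized clock. All times below
# are illustrative, not taken from Figure 1.18.
SCHEDULE_PERIOD = 10  # Ps, in arbitrary time units

static_schedule = {
    "p1": [(0, "run T1"), (3, "send M13")],
    "p2": [(1, "run T2"), (4, "send M21")],
    "p3": [(4, "recv M13"), (5, "run T3")],
}

def activities_at(processor, t):
    """Return the activities dispatched on `processor` at global time t;
    the schedule repeats with period Ps, so only the phase matters."""
    phase = t % SCHEDULE_PERIOD
    return [act for (start, act) in static_schedule[processor] if start == phase]

print(activities_at("p1", 0))   # ['run T1']
print(activities_at("p1", 20))  # same phase one schedule period later: ['run T1']
```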

However, it might be possible to meet the dependency and deadline requirements of the tasks by an alternate schedule, as shown in Figure 1.19.

Computing a static schedule in a distributed environment is known to be an NP-complete problem [13], so for small task sets a heuristic search using a search tree is often applied. The search tree starts with an empty schedule at the start node of the precedence graph for the given task set, which describes the dependencies associated with the transmitted messages. Each level of the search tree corresponds to an instant at which a task and/or message is scheduled. Outward edges from a node in the search tree point to the tasks/messages


Figure 1.19: (See color insert.) An alternate schedule.

that can be scheduled next while fulfilling the dependency constraints. A path from the root node to a particular node at level l records the sequence of scheduling decisions made up to level l. The search completes when the set of unscheduled tasks and messages is empty; a path from the root to a leaf of the search tree therefore describes a complete schedule. A complete schedule that meets the deadlines is said to be feasible. For example, the schedules shown in Figures 1.18 and 1.19 are both feasible for the task set considered. Branch-and-bound heuristics [13] are often employed to find the feasible schedule with the shortest length; an examination of Figures 1.18 and 1.19 shows that the schedule in Figure 1.18 is shorter than that in Figure 1.19. In the general case where the task periods are unequal, the schedule period is the least common multiple (LCM) of the individual task periods.
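The branch-and-bound search over such a tree can be sketched as follows. The instance is hypothetical (tasks pinned to processors as in the figures, with a single message dependency standing in for M13, and illustrative WCETs and delays): the search enumerates dependency-respecting scheduling orders and prunes any branch whose partial schedule length already equals or exceeds the best complete schedule found so far.

```python
import math

# Hypothetical instance: tasks pinned to processors; one message
# dependency (T3 needs T1's result after a 1-unit transmission).
WCET = {"T1": 2, "T2": 2, "T3": 3}
PROC = {"T1": "p1", "T2": "p2", "T3": "p3"}
DEPS = {"T3": [("T1", 1)]}  # (predecessor, message transmission delay)

def branch_and_bound():
    best = {"makespan": math.inf, "order": None}

    def dfs(order, finish, proc_free, makespan):
        if makespan >= best["makespan"]:
            return  # bound: this branch cannot beat the best schedule found
        if len(order) == len(WCET):
            best["makespan"], best["order"] = makespan, list(order)
            return
        for t in WCET:  # branch: every task schedulable next
            if t in finish:
                continue  # already scheduled on this path
            deps = DEPS.get(t, [])
            if any(d not in finish for d, _ in deps):
                continue  # precedence constraint not yet satisfied
            ready = max((finish[d] + delay for d, delay in deps), default=0)
            start = max(ready, proc_free.get(PROC[t], 0))
            finish[t] = start + WCET[t]
            saved = proc_free.get(PROC[t], 0)
            proc_free[PROC[t]] = finish[t]
            dfs(order + [t], finish, proc_free, max(makespan, finish[t]))
            del finish[t]           # backtrack
            proc_free[PROC[t]] = saved

    dfs([], {}, {}, 0)
    return best

result = branch_and_bound()
print(result["makespan"], result["order"])  # shortest schedule length is 6
```

For this instance the shortest complete schedule has length 6 (T1 and T2 run in parallel; T3 must wait for T1's finish plus the message delay). A real coscheduler would also schedule the messages on the shared network nonpreemptively, which this sketch omits.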


  • 1.1. An RT system consists of four scanner tasks that scan sensor data from the field in a real-life plant and write onto a buffer that is read by a data handler once every 100 ms. Each scanner takes 5 ms, and the data handler takes a further 20 ms. If the whole system is controlled by a heartbeat timer that takes 20 ms each time it is triggered, what should be the maximum periodicity of the timer so that no overflow occurs? Assume RM scheduling for the tasks.
  • 1.2. In Problem 1.1, if it is assumed that one of the scanners updates the data at a higher periodicity, that is, once in 50 ms and the data handling load for this increases by 40%, what should be the new period for the heartbeat timer?
  • 1.3. An RT embedded system is developed around an Intel 8088-based system with a single 8259. It comprises three tasks—T1, T2, T3—with priorities in the same sequence, which are connected to IRQs 4, 5, and 6, respectively, with the default priority assignment for the IRQs. If the tasks have execution times of 20, 25, and 30 ms, respectively, and the interrupt latency is 5 μs, what is the response time of the system, if interrupt-driven scheduling is assumed? If the period at which the triggers arrive is the same for all tasks and equals P, what should be the minimum value of P so that no overflow occurs, assuming a scheduling overhead of 10%? Sketch the execution profile for two cycles with this value of P.
  • 1.4. Three tasks—T1, T2, T3—with priorities in the same sequence are synchronized using a flag semaphore. Initially, T1 and T2 are in the blocked state when T3 runs for 50 ms after having set the semaphore, whereupon it is preempted by T2. T2 runs for 30 ms more before it is blocked again while trying to set the same semaphore, and T3 runs for a further period of 10 ms before it is preempted by T1, which runs for 20 ms and gets blocked when it tries to set the semaphore set by T3. T3 then runs for 15 ms and resets the semaphore. If T2 and T1 take a further 20 ms each to complete their activities for the particular cycle, draw the execution profile and calculate the time spent by T1 in blocked mode assuming (a) a burst mode semaphore and (b) a FIFO mode semaphore. What is the processor utilization in this case? Does this depend on the execution profile or the semaphore type?
  • 1.5. For Problem 1.2 derive a suitable synchronization mechanism for the application and explain your answer with appropriate pseudocodes.
  • 1.6. Investigate whether EDF can schedule a set of tasks with the {C, Lt} tuples defined as [{3, 1}, {5, 2}, {8, 3}], and state what happens if an overflow occurs.
