Burst and FIFO Mode Semaphores

Some RTOSs allow flag and counting semaphores to operate in either a burst mode or a FIFO mode. In burst mode, when several processes wait on a semaphore, tasks get unblocked in decreasing sequence of their priorities, that is, the process with the highest priority gets unblocked first, followed by the next, and so on. In the case of a FIFO mode semaphore, the processes get unblocked in the sequence in which they got blocked, that is, the process entering the queue first gets unblocked first. The execution profiles for a four-task system with burst and FIFO mode semaphores are illustrated in Figures 1.15 and 1.16, respectively.

FIGURE 1.15 (See color insert.) Burst mode semaphore.

FIGURE 1.16 (See color insert.) FIFO mode semaphore.

A comparison of Figures 1.15 and 1.16 makes it clear that the response time of higher priority tasks is lower with burst mode semaphores.
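The text does not tie these modes to a particular RTOS. As one concrete illustration, and assuming VxWorks (whose semLib supports both disciplines), the queuing mode is fixed when the semaphore is created: SEM_Q_PRIORITY gives the burst mode behaviour of Figure 1.15 and SEM_Q_FIFO the FIFO mode behaviour of Figure 1.16.

```c
/* Sketch assuming VxWorks semLib: the unblocking discipline is an option
 * passed at creation time and cannot be changed afterwards. */
#include <vxWorks.h>
#include <semLib.h>

SEM_ID make_burst_mode_sem(void)
{
    /* SEM_Q_PRIORITY: pended tasks are released in decreasing order of
     * priority -- the burst mode of Figure 1.15. */
    return semBCreate(SEM_Q_PRIORITY, SEM_EMPTY);
}

SEM_ID make_fifo_mode_sem(void)
{
    /* SEM_Q_FIFO: pended tasks are released in the order in which they
     * blocked -- the FIFO mode of Figure 1.16. */
    return semBCreate(SEM_Q_FIFO, SEM_EMPTY);
}
```

Counting semaphores are created analogously with semCCreate, which takes the same queuing option together with an initial count.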

Priority Inversion

Priority inversion is a situation in which a medium priority task preempts a lower priority task while the latter holds a shared resource for which a higher priority task is waiting. The situation is explained in Figure 1.17.

In Figure 1.17, it is assumed that the priority sequence of the tasks is T1 > T2 > T3. The task T3 sets a semaphore and accesses a shared resource when it is preempted by the task T1, which runs for some time and gets blocked while trying to access the same shared resource. Now, even though T1 is otherwise ready, it will be displaced by a lower priority task T2 (if it is ready) and a priority inversion is said to have occurred.

FIGURE 1.17 (See color insert.) Execution profile under priority inversion.
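The scenario of Figure 1.17 can be reproduced on any preemptive priority scheduler. The following sketch does so with POSIX threads; the thread functions, priority values, and single-core pinning are assumptions made for the demonstration (Linux with SCHED_FIFO, which requires suitable privileges), not details taken from the text, and the mutex deliberately uses no priority protocol so that the inversion is visible.

```c
/* Sketch: T3 (low) holds a resource, T1 (high) blocks on it, and T2 (medium)
 * keeps the CPU, so T1 is delayed for the whole of T2's execution. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t shared_resource = PTHREAD_MUTEX_INITIALIZER;

static void busy_work(long iterations)
{
    for (volatile long i = 0; i < iterations; i++)
        ;                                    /* burn CPU to stand in for useful work */
}

static void *t3_low(void *arg)               /* lowest priority, holds the resource */
{
    (void)arg;
    pthread_mutex_lock(&shared_resource);
    busy_work(2000000000L);                  /* preempted by T1, then outrun by T2 */
    pthread_mutex_unlock(&shared_resource);
    return NULL;
}

static void *t2_medium(void *arg)            /* medium priority, purely CPU bound */
{
    (void)arg;
    busy_work(2000000000L);                  /* runs while T1 is blocked: the inversion */
    return NULL;
}

static void *t1_high(void *arg)              /* highest priority, needs the resource */
{
    (void)arg;
    pthread_mutex_lock(&shared_resource);    /* blocks behind T3, which T2 keeps off the CPU */
    pthread_mutex_unlock(&shared_resource);
    printf("T1 finally acquired the resource\n");
    return NULL;
}

static pthread_t spawn(void *(*entry)(void *), int priority)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = priority };
    cpu_set_t cpus;

    CPU_ZERO(&cpus);
    CPU_SET(0, &cpus);                       /* one core, so preemption is strict */
    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

    pthread_t tid;
    pthread_create(&tid, &attr, entry, NULL);
    return tid;
}

int main(void)
{
    pthread_t t3 = spawn(t3_low, 10);        /* T3 runs first and locks the resource */
    sleep(1);                                /* let T3 enter its critical section */
    pthread_t t1 = spawn(t1_high, 30);       /* T1 preempts T3 and blocks on the mutex */
    pthread_t t2 = spawn(t2_medium, 20);     /* T2 now runs ahead of the blocked T1 */

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    return 0;
}
```

With the plain mutex, T1 prints its message only after T2 has finished its entire computation; with a priority inheritance mutex, as sketched after the discussion of workarounds below, it resumes as soon as T3 leaves its critical section.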

Priority inversion can lead to dangerous situations in mission critical applications. The most famous recorded instance of priority inversion is associated with the NASA Mars Pathfinder mission [10]. The problem occurred on the Mars Sojourner. The system used two high-priority tasks, which controlled the activity over the data bus through which the devices communicated. A lower priority meteorological data collection task received data from one of the high priority bus manager tasks and was preempted by medium priority tasks accessing the communication resource. The high priority bus manager thus got blocked while the medium priority tasks continued to run. The situation was detected by a watchdog process, which triggered a series of system resets.

The problem of priority inversion can be handled by a number of workarounds, namely, priority inheritance, priority association, and the use of critical sections. Priority inheritance elevates the priority of the lower priority task to a level higher than that of the medium priority task while it is accessing the resource. That is, for the case shown in Figure 1.17, the priority of T3 must be raised above that of T2 while it accesses the shared resource so that T1 can resume as soon as T3 releases the resource. Priority association, on the other hand, is a resource-specific strategy that associates a priority level with a resource, equal to the priority level of its highest priority contender plus one. When a task accesses this resource, it inherits the priority of the resource and is thus protected from preemption. Introduction of a critical section in the lowest priority task while it accesses the shared resource is the third workaround. A critical section defines a portion of code within which the executing task cannot be preempted. Thus, for the case illustrated in Figure 1.17, the task T3 may access the shared resource within a critical section; it will then not be preempted before releasing the resource, preventing priority inversion.
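The first two workarounds map directly onto the mutex protocols defined by POSIX. The sketch below is one possible realization, assuming a POSIX-compliant RTOS or Linux with real-time threads: PTHREAD_PRIO_INHERIT implements priority inheritance, while PTHREAD_PRIO_PROTECT with a ceiling of the highest contender priority plus one approximates what the text calls priority association. The function name and the way the ceiling is passed in are illustrative.

```c
/* Sketch: two mutexes protecting the shared resource of Figure 1.17, one per
 * workaround. Either prevents T2 from delaying T1 indefinitely. */
#include <pthread.h>

static pthread_mutex_t res_inherit;   /* guarded by priority inheritance */
static pthread_mutex_t res_ceiling;   /* guarded by a priority ceiling */

int init_resource_mutexes(int highest_contender_prio)
{
    pthread_mutexattr_t attr;

    /* Priority inheritance: while T3 holds the mutex and T1 is blocked on it,
     * T3 runs at T1's priority, so T2 cannot preempt it. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (pthread_mutex_init(&res_inherit, &attr) != 0)
        return -1;
    pthread_mutexattr_destroy(&attr);

    /* Priority ceiling: the mutex itself carries a priority; following the
     * text's "priority association", it is set to the highest contender's
     * priority plus one, and any holder runs at that level. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, highest_contender_prio + 1);
    if (pthread_mutex_init(&res_ceiling, &attr) != 0)
        return -1;
    pthread_mutexattr_destroy(&attr);

    return 0;
}
```

The third workaround, a non-preemptible critical section, is usually obtained through an RTOS-specific call, such as taskLock()/taskUnlock() on VxWorks, around the accesses made by T3.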

In the computation of the least upper bound of maximum processor utilization with RM scheduling, it has been assumed that the tasks in a given task set are independent and do not interact. However, tasks do interact, and while a critical section in a lower priority task extends the execution time of a higher priority task by a finite quantum of time, a priority inversion can cause an unbounded extension. Thus, if task Ti is ready but is blocked for a finite quantum of time Bi by a lower priority task entering a critical section, for example, then its utilization may be assumed to be extended [11] by an amount Bi/Ti, and the set of n tasks remains schedulable by RM scheduling if

\[
\sum_{k=1}^{i} \frac{C_k}{T_k} + \frac{B_i}{T_i} \leq i\left(2^{1/i} - 1\right), \qquad 1 \leq i \leq n \tag{1.34}
\]

Again, in a practical RT system, there are sporadic tasks, for example, special recovery routines that are activated by exceptions. Augmentation of Equation 1.34 to handle sporadic tasks is based on a conservative approach by assigning the highest periodicity to such a task in the computation of maximum processor utilization [12].
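A worked check of the bound may make the test concrete. The sketch below applies the sufficient per-task condition from [11] as reconstructed in Equation 1.34, with tasks indexed in order of decreasing rate; the function name and the use of plain arrays for Ci, Ti, and Bi are illustrative.

```c
/* Sketch: RM schedulability test with blocking terms, per Equation 1.34.
 * C[i], T[i], B[i] are the execution time, period, and worst-case blocking
 * of task i; tasks are assumed sorted so that T[0] has the shortest period. */
#include <math.h>
#include <stdbool.h>

bool rm_schedulable_with_blocking(const double *C, const double *T,
                                  const double *B, int n)
{
    double utilization = 0.0;

    for (int i = 0; i < n; i++) {
        utilization += C[i] / T[i];                     /* sum of Ck/Tk for k = 1..i */
        double bound = (i + 1) * (pow(2.0, 1.0 / (i + 1)) - 1.0);
        if (utilization + B[i] / T[i] > bound)          /* add the blocking term Bi/Ti */
            return false;                               /* task i may miss its deadline */
    }
    return true;                                        /* sufficient condition holds */
}
```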

Timer Functions in a Real-Time Operating System (RTOS)

Many RTOSs provide timer functions available as RT system calls to the applications programmer. Typical timer functionalities provided are

Delay function—Delays the execution of a task by a specified delay interval.

Calendar clock execution—Specifies execution of a task at a specific time instant.

Periodic execution—Specifies execution of a task periodically with a specified interval.

The implementation and syntax of the calls are RTOS specific and are usually accessible to the application programmer through an application programming interface (API).
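As an illustration only, since the text leaves the API unspecified, the three services map naturally onto the POSIX clock_nanosleep family; the function names and parameters below are placeholders.

```c
/* Sketch of the three timer services using POSIX calls; an RTOS would expose
 * equivalent functionality under its own names. */
#include <time.h>

/* Delay function: suspend the calling task for a relative interval. */
void delay_task(long seconds, long nanoseconds)
{
    struct timespec interval = { .tv_sec = seconds, .tv_nsec = nanoseconds };
    clock_nanosleep(CLOCK_MONOTONIC, 0, &interval, NULL);
}

/* Calendar clock execution: resume the calling task at an absolute time. */
void run_at(const struct timespec *wakeup_time)
{
    clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, wakeup_time, NULL);
}

/* Periodic execution: run a job every period_ns nanoseconds without drift,
 * by advancing an absolute deadline rather than sleeping relatively. */
void run_periodically(void (*job)(void), long period_ns)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        job();
        next.tv_nsec += period_ns;
        while (next.tv_nsec >= 1000000000L) {   /* normalize the timespec */
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```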

Intertask Communication in an RTOS

Most RTOSs provide standard intertask communication structures for data transfer, such as shared variables, bounded buffers, message queues, mailboxes, and FIFOs. The concepts are similar to those in a GPOS.
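As one concrete realization of a message queue (the text describes the structures generically), the sketch below uses the POSIX mqueue API; the queue name, message layout, and sizes are illustrative.

```c
/* Sketch: a message queue between a producer task and a consumer task,
 * using POSIX mqueue as one possible intertask communication mechanism. */
#include <mqueue.h>
#include <fcntl.h>

#define SENSOR_QUEUE "/sensor_q"      /* illustrative queue name */

typedef struct {
    int    sensor_id;
    double value;
} sample_t;

/* Producer task: post one sample; higher msg_prio values are dequeued first. */
int post_sample(const sample_t *s)
{
    struct mq_attr attr = { .mq_maxmsg = 16, .mq_msgsize = sizeof(sample_t) };
    mqd_t q = mq_open(SENSOR_QUEUE, O_CREAT | O_WRONLY, 0600, &attr);
    if (q == (mqd_t)-1)
        return -1;
    int rc = mq_send(q, (const char *)s, sizeof(*s), 0);
    mq_close(q);
    return rc;
}

/* Consumer task: block until a sample arrives. */
int wait_for_sample(sample_t *out)
{
    mqd_t q = mq_open(SENSOR_QUEUE, O_RDONLY);
    if (q == (mqd_t)-1)
        return -1;
    ssize_t n = mq_receive(q, (char *)out, sizeof(*out), NULL);
    mq_close(q);
    return n == (ssize_t)sizeof(*out) ? 0 : -1;
}
```

A mailbox can be viewed as the special case of a queue whose depth is one message, and the msg_prio argument of mq_send lets urgent messages overtake ordinary ones in the queue.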
