

Selected Protocols

In this section, a discussion covering different aspects of a few popular protocols is presented, focusing on the lowest two layers of the protocol stack represented in Figure 3.5. The protocols are discussed from three aspects: media access, efficiency, and determinism. The protocols covered are Switched Ethernet with User Datagram Protocol/Internet Protocol (UDP/IP), Controller Area Network (CAN), Time Division Multiple Access (TDMA), Token Bus and Token Ring, FlexRay, and the Proximity-1 Space Link.

Switched Ethernet with User Datagram Protocol/Internet Protocol (UDP/IP)

Ethernet used for real-time communication commonly uses the 10 Mb/s standard (e.g., Modbus/TCP). Higher-speed (100 Mb/s or even 1 Gb/s) Ethernet is mainly used in data networks. Standard Ethernet is normally avoided for real-time applications because of the nondeterministic communication delay caused by collisions when multiple stations try to access the physical medium simultaneously. In this case, a source waits for the communication line to become idle before transmitting. If a collision is detected during the transmission, the source interrupts the transmission and broadcasts a jam signal to notify the other stations of the collision. The jamming sequence is a 32-bit sequence prefixed with a preamble. Thus, in the case of a collision, a transmitter sends a minimum of 96 bits (64-bit preamble plus 32-bit jamming sequence); such a truncated frame is called a runt frame.
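The transmit-side behaviour described above can be summarised in the following control-flow sketch. The helper names and collision probabilities are invented simulation stubs so the sketch runs standalone; they are not part of any real MAC driver.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Simulation stubs: a real MAC would drive the PHY here. The collision
 * odds below are invented purely so this sketch runs standalone. */
static bool medium_idle(void)        { return (rand() % 4) != 0; }
static bool collision_detected(void) { return (rand() % 3) == 0; }
static void send_jam_sequence(void)  { puts("  jam signal sent (runt frame)"); }
static void wait_backoff(int k)      { printf("  backoff after collision %d\n", k); }

/* Simplified CSMA/CD transmit flow for one frame, following the description
 * above: defer while the line is busy, transmit, jam on collision, back off,
 * and give up after 16 retries. */
static bool csma_cd_transmit(void)
{
    for (int k = 0; k <= 16; k++) {
        while (!medium_idle())
            ;                          /* defer until the line is idle    */
        if (!collision_detected())
            return true;               /* frame went through undisturbed  */
        send_jam_sequence();           /* abort and notify other stations */
        wait_backoff(k + 1);           /* (k+1)-th consecutive collision  */
    }
    return false;                      /* transmission withdrawn          */
}

int main(void)
{
    printf("frame %s\n", csma_cd_transmit() ? "delivered" : "withdrawn");
    return 0;
}
```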

In fact, there exists a minimum size of an Ethernet frame, and this is 512 bits. The existence of a minimum frame size can be understood from the following considerations. Consider a case where a node n_i starts transmitting. A node n_j located at a distance d receives the first bit of the transmission after a time of T_P s, where T_P denotes the propagation time of the electrical signal from node n_i to node n_j. Now, if it is assumed that node n_j starts transmitting its own frame just before the first bit from node n_i reaches n_j, then node n_i will come to know of the collision only after a further T_P s. Therefore, node n_i must keep transmitting for a minimum interval

T_min = 2 T_P

where T_P is the propagation delay computed using the largest permissible value of d, which is 2500 m; the corresponding value of T_min becomes 51.2 μs. Considering a transfer speed of 10 Mbps, this yields a minimum frame size of 512 bits.
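The arithmetic is simple enough to check directly. The following sketch takes the 51.2 μs interval and the 10 Mb/s rate quoted above as its only inputs and reproduces the 512-bit figure; it is an illustration, not part of the referenced material.

```c
#include <stdio.h>

/* Derives the minimum Ethernet frame size from the minimum transmit
 * interval T_min and the data rate, following the reasoning above. */
int main(void)
{
    const double t_min_s  = 51.2e-6;   /* minimum transmit interval T_min (s) */
    const double rate_bps = 10e6;      /* 10 Mb/s Ethernet data rate          */

    double min_frame_bits = t_min_s * rate_bps;   /* bits that must be on the wire */

    printf("Minimum frame size: %.0f bits (%.0f bytes)\n",
           min_frame_bits, min_frame_bits / 8.0);
    return 0;
}
```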

After the collision of a frame, the source station waits for a backoff time

T_BEB = rnd[0, 2^k − 1] × T_min          (3.2)

based on the truncated binary exponential backoff (BEB) algorithm, and then retries the aforementioned transmission procedure. In Equation 3.2, k denotes the number of collisions in a row and, for q, r ∈ Z+, rnd[q, r] denotes a random integer in the interval [q, r]. It is clearly seen that when two nodes are waiting for a third node to finish its transmission, they will first collide with probability 1, then with probability 1/2 for k = 1, then with probability 1/4 for k = 2, and so on. This retry is continued up to 16 times, after which transmission of the corresponding frame is withdrawn.
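A minimal sketch of the backoff computation is given below. The constants, the use of rand(), and the function name are illustrative assumptions rather than any particular MAC implementation; the 51.2 μs slot time is taken to equal T_min as derived above, and the truncation of the exponent at 10 follows the usual truncated BEB convention.

```c
#include <stdint.h>
#include <stdlib.h>

#define SLOT_TIME_US    51.2   /* slot time, equal to T_min for 10 Mb/s Ethernet */
#define MAX_BACKOFF_EXP 10     /* truncation: the exponent k is capped at 10     */
#define MAX_ATTEMPTS    16     /* frame is withdrawn after 16 retries            */

/* Returns the backoff delay in microseconds after the k-th consecutive
 * collision, or a negative value when the frame must be withdrawn.
 * rnd[0, 2^k - 1] is modelled here with rand(); a real MAC uses a
 * hardware pseudo-random source. */
double beb_backoff_us(int k)
{
    if (k > MAX_ATTEMPTS)
        return -1.0;                                     /* give up             */

    int exp = (k < MAX_BACKOFF_EXP) ? k : MAX_BACKOFF_EXP;
    uint32_t slots = (uint32_t)(rand() % (1u << exp));   /* rnd[0, 2^k - 1]     */

    return slots * SLOT_TIME_US;                         /* delay before retry  */
}
```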

A switch is an active device that receives the frames from any node n_i and forwards them to their destination n_j. A switch provides a path {n_i, n_j} between any two nodes n_i, n_j on the network. Collision occurs only if simultaneous communication is attempted along any two paths that are not disjoint, for example, along the paths {n_i, n_k} and {n_j, n_k} simultaneously, and not otherwise. The implicit assumption is that the peak demand is less than the maximum sustained capacity of the switched network. The fact that switched Ethernet has a high message efficiency, owing to the extremely small bandwidth usage for media access, has motivated some researchers to consider it for real-time communication requirements. Lee and Lee [3] have presented an estimate of the maximum latency of standard and switched Ethernet; the result for switched Ethernet is presented here. As reported by Lee and Lee, the total latency for a message consisting of a single frame is

T_latency = T_PS + 2 T_P + T_Q + T_PR          (3.3)

where T_PS and T_PR represent processing delays for transmission at the source and destination, respectively, and T_P is the propagation delay for the electrical signal to propagate from the source to the switch, which is proportional to the length of cable connecting the node and the switch. The factor 2 arises because of the path from the switch to the destination, which is assumed to be of the same length.

The term T_Q in Equation 3.3 represents the total frame transmission delay across the switch, expressed as

T_Q = 2 (T_F + T_IF)

where T_F is the frame transmission delay, defined as the number of bits of the frame divided by the data rate, and T_IF is the interframe delay for which the source waits between two successively transmitted frames. It is defined as 96 bit times in the 10BASE-T Ethernet standard.
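To make the terms concrete, the following sketch evaluates Equation 3.3 for a single minimum-size frame crossing one switch. Every delay value below is an assumed placeholder chosen only to show the computation; none of the numbers is taken from Lee and Lee [3].

```c
#include <stdio.h>

/* Illustrative evaluation of Equation 3.3 for one frame and one switch.
 * T_PS, T_PR, and T_P are assumed values, not measured or published data. */
int main(void)
{
    const double rate_bps   = 10e6;    /* 10 Mb/s link rate                              */
    const double frame_bits = 512.0;   /* minimum-size frame                             */
    const double t_ps = 50e-6;         /* T_PS: processing delay at source (assumed)     */
    const double t_pr = 50e-6;         /* T_PR: processing delay at destination (assumed)*/
    const double t_p  = 0.5e-6;        /* T_P: node-to-switch propagation (assumed ~100 m)*/

    double t_f  = frame_bits / rate_bps;   /* T_F: frame transmission delay              */
    double t_if = 96.0 / rate_bps;         /* T_IF: 96-bit interframe gap                */
    double t_q  = 2.0 * (t_f + t_if);      /* T_Q: transmission delay across the switch  */

    double latency = t_ps + 2.0 * t_p + t_q + t_pr;   /* Equation 3.3 */

    printf("Estimated latency: %.1f us\n", latency * 1e6);
    return 0;
}
```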

Applications layered on Ethernet use either Transmission Control Protocol/Internet Protocol (TCP/IP) or User Datagram Protocol/Internet Protocol (UDP/IP); TCP and UDP occupy the transport (fourth) layer and IP the network (third) layer of the OSI model, as shown in Figure 3.7.

Because of its explicit flow control, the use of TCP/IP for real-time communication is rather restricted, except in some protocols, for example, MODBUS-TCP [4]. UDP/IP offers a limited service in which messages are exchanged between computers using a transfer unit called a datagram. The messages are divided into packets at the sending end and reassembled at the receiving end. The flow control is implicit and the service is unreliable: packets may be dropped. The frame-oriented nature of the protocol makes it ideal for automation applications that exchange small amounts of data at frequent intervals. Figure 3.8 shows a UDP/IP frame embedded in a standard Ethernet frame.
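A minimal sketch of the kind of periodic, small-payload UDP traffic described above is shown below, using the standard POSIX socket API. The destination address, port, payload layout, and 10 ms cycle are illustrative assumptions, not part of any standard or of the referenced protocols.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);   /* UDP socket */
    if (sock < 0)
        return 1;

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof dest);
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(5000);               /* assumed controller port    */
    inet_pton(AF_INET, "192.168.1.10", &dest.sin_addr);   /* assumed address   */

    for (int i = 0; i < 10; i++) {
        uint16_t sample = (uint16_t)i;           /* small datagram payload     */
        sendto(sock, &sample, sizeof sample, 0,
               (struct sockaddr *)&dest, sizeof dest);
        usleep(10000);                           /* 10 ms send cycle (assumed) */
    }

    close(sock);
    return 0;
}
```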


FIGURE 3.7

Ethernet over TCP/IP.

 