The algorithms in this category compute paths using one or more of the following oracles: contacts summary, contacts, and queuing. Further, each message is routed independently of future demand because the traffic oracle is not used. These algorithms are all based upon assigning costs to edges and computing a form of minimum-cost ("shortest") path. Costs are assigned to edges (by consulting the available oracles) to reflect the estimated delay of the message in taking that edge. The challenge and sophistication lie in assigning costs such that the assigned costs are close to the delay that will actually be encountered when a message is forwarded across the DTN (delay-tolerant network).

The reasons for considering only cost-based algorithms in this class are twofold. First, they provide a convenient and common way to utilize the different knowledge oracles (thereby identifying to what extent global knowledge is necessary). Second, they correspond naturally to traditional shortest-path routing problems, which are well understood and for which simple, computationally efficient distributed algorithms are known. This simplicity, however, comes at the price of imposing certain restrictions on the nature of the routing paths determined. One key limitation is that only a single path to a destination is derived. As argued earlier, for DTNs it may be important to use multiple paths (with splitting) to achieve near-optimal performance. Interestingly, the basic ideas introduced here can be used to find multiple routes and good split sizes. This is discussed briefly at the end of this section.
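To make the cost-assignment idea concrete, the sketch below runs a Dijkstra-style minimum-cost search in which each edge's cost depends on the time a message would arrive at the edge, which is how oracle-derived delays (waiting for the next contact, plus transmission) naturally enter the computation. This is a minimal illustration under our own interface assumptions; in particular, the edge_delay callback standing in for the oracles is hypothetical, not an interface from the work described here.

```python
import heapq

def min_delay_path(graph, source, dest, t0, edge_delay):
    """Time-dependent Dijkstra: find the earliest-arrival path.

    graph      -- dict mapping node -> list of neighbor nodes
    edge_delay -- callback (u, v, t) -> estimated delay over edge
                  (u, v) for a message ready at u at time t; this is
                  where the contacts/queuing oracles would be consulted
    Returns (arrival_time, path) or (inf, None) if unreachable.
    """
    best = {source: t0}          # earliest known arrival time per node
    prev = {}                    # predecessor links for path recovery
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > best.get(u, float("inf")):
            continue             # stale heap entry
        if u == dest:
            path = [u]
            while u in prev:     # walk predecessors back to the source
                u = prev[u]
                path.append(u)
            return t, path[::-1]
        for v in graph.get(u, []):
            t_arrive = t + edge_delay(u, v, t)
            if t_arrive < best.get(v, float("inf")):
                best[v] = t_arrive
                prev[v] = u
                heapq.heappush(heap, (t_arrive, v))
    return float("inf"), None
```

Note that this greedy search remains correct only when waiting longer at a node can never yield an earlier arrival (the FIFO property), which is why the sophistication lies in the cost assignment rather than in the search itself.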
 
A memory management unit’s (MMU) main responsibility is to define and enforce the boundaries that separate different tasks or processes. To understand the effect of separating tasks, consider two different office environments. In the first one (the formal environment), there is an individual office for each employee. Each employee has a key to the door of his or her office and is considered the owner of that space. The second environment (the informal environment) uses a bullpen setup. There are no walls between employees, and everyone is on the same floor in the same air space. Each of these configurations has its particular advantages and disadvantages. In the formal environment, you don’t have to worry about one employee bothering another because there are walls between them. No matter how noisy one employee gets, the neighboring worker does not hear the disturbance. This is not the case for the informal setup. If an employee is talking loudly on the phone in the open environment, this inconsiderate individual distracts the other workers. If, on the other hand, the employees are considerate, two employees can quickly communicate with each other without needing to leave their seats.
Similarly, if one employee needs to borrow another employee’s stapler, it’s just a toss away. In the formal environment, each time communication needs to take place, the communicating employee needs to go through some series of steps, like making a phone call or walking to the other office.

Each employee can be compared to a block of code (or task) that is designed to perform some specified job in an embedded systems program. The air space equates to the memory space of the target system, and the noise generated by a misbehaving employee can be equated to a bug that corrupts the memory space used by some other task. The MMU in hardware, if configured properly, can be compared to the walls of the formal office environment. Each task is bounded by the limitations placed on it by the MMU, which means that a task that somehow has a bad pointer cannot corrupt the memory space of any other task. However, it also means that when one process wants to talk to another process, more overhead is required.

Later in this book, you will see that code bounded by MMU protection is called a process and code not bounded by an MMU is called a task. A few years ago, the full use of an MMU in an embedded system was rare. Even today, most embedded systems don’t fully use the capabilities of the MMU; however, as embedded systems become more complex and CPU speeds continue to rise, the use of an MMU to provide walls between processes is becoming more common.
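The process/task distinction can be demonstrated from user space. In the sketch below (a minimal illustration we constructed for this analogy, not an MMU configuration example), a thread shares the bullpen, so its bad write is visible to everyone, while a child process lives behind the MMU's walls and can only scribble on its own copy of memory.

```python
import multiprocessing
import threading

counter = {"value": 0}

def misbehave():
    # The "noisy employee": scribble over the shared structure.
    counter["value"] = -999

if __name__ == "__main__":
    # A thread (a "task") shares the address space -- the bullpen.
    t = threading.Thread(target=misbehave)
    t.start()
    t.join()
    print("after thread: ", counter["value"])   # -999: state corrupted

    counter["value"] = 0

    # A process runs in its own MMU-enforced address space -- the
    # walled office. The child writes only to its own copy.
    p = multiprocessing.Process(target=misbehave)
    p.start()
    p.join()
    print("after process:", counter["value"])   # 0: parent unaffected
```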
 
One approach to reliable transport that can tolerate extremely long and variable round-trip latency is reflected in the design of the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol (CFDP). CFDP can operate in either acknowledged (reliable) or unacknowledged mode; in acknowledged mode, lost or corrupted data are automatically re-transmitted. CFDP’s design includes a number of measures adopted to enable robust operation of its ARQ system in high-latency environments:

  1. Because the time required to establish a connection might exceed the duration of a communication opportunity, there is no connection protocol; communication parameters are managed (configured in advance) rather than negotiated.
  2. Because round-trip latency may far exceed the time required to transmit a given file, CFDP never waits for acknowledgment of one transmission before beginning another. Re-transmitted data for one file may therefore arrive long after the originally transmitted data for a subsequently issued file, so CFDP must attach a common transaction identifier to all messages pertaining to a given file transmission (a sketch of this tagging follows the list).
  3. Because a large number of file transmissions may concurrently be in various stages of transmission, re-transmission buffers typically must be retained in nonvolatile storage; this can help prevent catastrophic communications failure in the event of an unplanned power cycle at either the sender or the receiver.
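The transaction-identifier idea from item 2 can be sketched as follows. The class and field names here are illustrative stand-ins, not CFDP's actual PDU format; the point is only that every segment carries its transaction's identity, so a late retransmission for an old file can never be confused with data for a newer one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    """Illustrative stand-in for a CFDP file-data PDU."""
    transaction_id: tuple    # e.g., (sender entity id, sequence number)
    offset: int              # byte offset of this segment in the file
    data: bytes

class Receiver:
    def __init__(self):
        # One reassembly buffer per in-flight transaction; in a real
        # deployment these would live in nonvolatile storage (item 3).
        self.buffers = {}

    def on_segment(self, seg):
        segments = self.buffers.setdefault(seg.transaction_id, {})
        segments[seg.offset] = seg.data

rx = Receiver()
rx.on_segment(Segment(("nodeA", 1), 0, b"first file, segment 0"))
rx.on_segment(Segment(("nodeA", 2), 0, b"second file, segment 0"))
# A late retransmission for transaction 1 lands in the right buffer:
rx.on_segment(Segment(("nodeA", 1), 100, b"first file, resent segment"))
```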
 
The Super Database Computer (SDC) project at the University of Tokyo presents an interesting contrast to other database system projects. This system takes a combined hardware and software approach to the performance problem. The basic unit, called a processing module (PM), consists of one or more processors with a shared memory. These processors are augmented by a special-purpose sorting engine that sorts at high speed (3 MB/s at present) and by a disk subsystem. Clusters of processing modules are connected via an omega network, which not only provides non-blocking NxN interconnect but also provides some dynamic routing to support data distribution during hash joins. The SDC is designed to scale to thousands of PMs, so considerable attention is paid to the problem of data skew. Data is declustered among the PMs by hashing, as sketched below.
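Hash declustering is simple to state: a tuple's partitioning key is hashed, and the hash value selects the processing module that stores the tuple. The sketch below is our own illustration of the general technique, not the SDC's actual placement function.

```python
import hashlib

NUM_PMS = 16  # number of processing modules (illustrative)

def home_pm(key: str, num_pms: int = NUM_PMS) -> int:
    """Map a tuple's partitioning key to a processing module.

    A stable hash spreads keys roughly uniformly, so each PM holds
    about 1/num_pms of the data -- unless the key distribution is
    skewed, which is exactly the data-skew problem that matters at
    thousands-of-PM scale.
    """
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_pms

# Tuples with equal join keys land on the same PM, which is what lets
# hash joins proceed locally after redistribution over the network.
assert home_pm("customer-42") == home_pm("customer-42")
```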

The SDC software includes a unique operating system and a relational database query executor. Most published work so far has been on query execution and on efficient algorithms to execute the relational operations, rather than on query planning or data declustering. The SDC is a shared-nothing design with a software dataflow architecture. This is consistent with our assertion that current parallel database systems use conventional hardware. But the special-purpose design of the omega network and of the hardware sorter clearly contradicts the thesis that special-purpose hardware is not a good investment of development resources. Time will tell whether these special-purpose components offer better price/performance or peak performance than shared-nothing designs built of conventional hardware.
 
The “account locking” feature enables denial-of-service attacks against users. These attacks are mounted by attempting to log in to a user’s account several times with invalid passwords, thus causing the account to be blocked. Yahoo!, for example, reports that users who compete in auctions use this method to block the accounts of other users who compete in the same auctions. This attack should be especially worrisome for mission-critical applications, for example for enterprises whose employees and customers use the web to log in to their accounts.

One could even imagine a distributed denial-of-service (DDoS) attack against servers that implement the “account locking” feature. As in other DDoS attacks, the attacker could plant hidden agents around the web. All the agents could start operating at a specific time, trying to log in to accounts on a specific server using random passwords (or a dictionary attack). Such an attack could block a large proportion of the accounts on the attacked server.
 
The flexibility offered by cross-layer design has been exploited in a number of research efforts. Joint optimization of power allocation at the physical layer, link scheduling at the MAC layer, network-layer flow assignment, and transport-layer congestion control has been investigated with convex optimization formulations. Our own cross-layer design framework attempts to maintain a layered architecture while exchanging key parameters between adjacent protocol layers. The framework allows enough flexibility for significant performance gains while keeping protocol design tractable within the layered structure, as demonstrated by preliminary results exploring adaptive link-layer techniques, joint capacity and flow assignment, media-aware packet scheduling, and congestion-aware video rate allocation.
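A representative convex formulation of such a joint design is network utility maximization; the version below is a generic illustration of the pattern, not the specific formulation used in the works cited above:

\begin{align*}
\max_{x,\,p} \quad & \sum_{s} U_s(x_s) \\
\text{subject to} \quad & \sum_{s:\, l \in L(s)} x_s \le c_l(p) \quad \forall\, l, \\
& 0 \le p \le p^{\max},
\end{align*}

where x_s is the rate of source s (set by transport-layer congestion control), U_s is a concave utility, L(s) is the set of links on the path of s (network-layer flow assignment), and c_l(p) is the capacity of link l as a function of the physical-layer power allocation p. When each U_s is concave and each c_l is concave in p, the layers are coupled in a single convex program.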


Figure 1: Channel time allocation for three video streams, all Crew, sharing a single-hop network.

 
The National Institute of Standards and Technology has presented the clearest and most comprehensive definition of cloud computing. It distinguishes cloud characteristics, delivery models, and deployment methods. The Institute identifies five key features of cloud computing: on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid elasticity, and measured service.

Computing can take the form of software-as-a-service (running specific applications through a cloud), platform-as-a-service (using a suite of applications, programming languages, and user tools), or infrastructure-as-a-service (relying on remote data storage networks). Deployment depends on whether the cloud is a private, community, public, or hybrid one. Private clouds are operated for a specific organization, for example, whereas community clouds are shared by a number of organizations. Public clouds are available to the general public or large groups of agencies, while hybrid clouds combine public and private elements in the same data center.