P1394 Mail Archive: Re: P1394> unordered execution

Re: P1394> unordered execution

Akihiro Shimura (shimura@pure.cpdc.canon.co.jp)
Mon, 9 Mar 1998 18:41:46 +0900 (JST)

On Mon, 9 Mar 1998 10:14:15 +0900
Nagasaka Fumio <Nagasaka.Fumio@exc.epson.co.jp> wrote:

> I am curious why unordered execution is required for ‘HPT’.
> Maybe there is something I am missing.

The HPT (High Performance Transport) essentially requires the unordered
execution model in order to multiplex two independent sequences of
operation requests (a read request sequence and a write request
sequence) into a single linked list of ORBs. In other words, the HPT
provides a full-duplex communication path (an independent flow in each
direction) by using the unordered execution model with one fetch agent
(one login).
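The multiplexing described above can be sketched as a minimal Python
simulation. This is not HPT code; the ORB fields and the alternating
interleave policy are illustrative assumptions. The point is that any
interleaving of the two streams into one linked list is legal under the
unordered execution model, as long as each stream's own order survives:

```python
from collections import deque

class ORB:
    """One operation request block in the singly linked list (sketch)."""
    def __init__(self, direction, seq):
        self.direction = direction  # "read" or "write"
        self.seq = seq              # position within its own stream
        self.next = None            # next_ORB pointer

def multiplex(reads, writes):
    """Interleave two internally ordered streams into one ORB list.

    Any interleaving is allowed as long as each stream's own order is
    preserved; here we simply alternate between the two queues.
    """
    head = tail = None
    a, b = deque(reads), deque(writes)
    while a or b:
        for q in (a, b):
            if q:
                orb = q.popleft()
                if head is None:
                    head = tail = orb
                else:
                    tail.next = orb
                    tail = orb
    return head

reads = [ORB("read", i) for i in range(3)]
writes = [ORB("write", i) for i in range(3)]
head = multiplex(reads, writes)

# Walk the single list: per-direction sequence numbers must ascend,
# even though the two directions are freely interleaved.
seen = {"read": -1, "write": -1}
node = head
while node:
    assert node.seq == seen[node.direction] + 1
    seen[node.direction] = node.seq
    node = node.next
```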

The unordered model under the basic task management model allows the
target to reorder the active tasks without restriction, but does not
allow a new task to be appended to the task list until all tasks in
the current task set have completed.
Though this model is suitable for multi-cluster or multi-track access
to a disk drive, since the drive may optimize track seeking and so on,
it does not meet the requirement of multiplexing two ordered request
sequences, because it prohibits appending a new task while previous
tasks are in progress.
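That restriction can be sketched as a small simulation. This is a
hypothetical Python model, not the SBP-2 state machine; the class and
method names are invented. The target may complete active tasks in any
order, but new tasks are refused until the whole set drains:

```python
class BasicTaskSet:
    """Sketch of the basic unordered task-management model: the target
    may complete active tasks in any order, but the initiator may not
    append new tasks until the current task set has fully completed."""
    def __init__(self):
        self.active = set()

    def submit(self, tasks):
        if self.active:
            raise RuntimeError("task set in progress; cannot append")
        self.active = set(tasks)

    def complete(self, task):
        self.active.discard(task)

ts = BasicTaskSet()
ts.submit(["t0", "t1", "t2"])
ts.complete("t2")            # unrestricted reordering: t2 finishes first
blocked = False
try:
    ts.submit(["t3"])        # new work is refused mid-set
except RuntimeError:
    blocked = True
assert blocked
ts.complete("t0")
ts.complete("t1")
ts.submit(["t3"])            # allowed only after the set drains
```

This is exactly the property that makes the basic model unsuitable for
multiplexing two continuous request streams: each stream would stall
whenever the other's tasks were still in flight.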

SBP-2 includes the following note:

"NOTE - In multitasking operating system environments,
independent execution threads may generate tasks that have
ordering constraints within each thread but not with respect
to other threads. If this is the case, an initiator may
manage the constraints of each thread yet still keep the
target substantially busy. This avoids the undesirable
latencies that occur if the target is allowed to become idle
before new ORB's are signaled."

The HPT uses this model. In the HPT, the independent execution threads
are the read thread and the write thread. Each thread generates tasks
that have ordering constraints within that thread but not with respect
to the other thread. Thus, the HPT provides in-order delivery in each
direction while keeping the two directions independent.

A certain operating system uses two I/O request queues, one per
direction, to serve full-duplex operation. The requests in these I/O
request queues are multiplexed into a single linked list of ORBs by
message (ORB or status_block, but NOT the data itself) flow control in
the HPT. The ordering constraints are maintained within these two I/O
request queues as usual. The status_block returned by the target
indicates which of the queue heads the completed I/O request belongs
to. Since the message flow control does not require an additional
packet exchange with the peer in the HPT, the flow control incurs no
overhead from the peer's latency.
(The HPT also uses the same mechanism to multiplex multiple channels.)
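The demultiplexing step can be sketched as follows. This is a
hypothetical Python model; the queue names and status handling are
illustrative, not from the HPT specification. The status_block need
only identify the direction, which tells the initiator that the head
request of that direction's queue has completed:

```python
from collections import deque

# One ordered I/O request queue per direction, as described above.
read_q = deque(["R0", "R1"])
write_q = deque(["W0", "W1"])
queues = {"read": read_q, "write": write_q}

completed = []

def on_status_block(direction):
    """The status_block names the direction; that identifies the queue
    whose head request has just completed, so the initiator pops it."""
    completed.append(queues[direction].popleft())

# Completions may interleave across the two directions in any order...
for d in ("write", "read", "read", "write"):
    on_status_block(d)

# ...but within each direction they arrive strictly in order.
assert [c for c in completed if c.startswith("R")] == ["R0", "R1"]
assert [c for c in completed if c.startswith("W")] == ["W0", "W1"]
```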

There are three places to put multiplexing information for
independent data paths when using SBP-2:

a) Data buffer
b) ORB
c) 1394 bus address

A protocol that uses a) would be 1284.4 over SBP-2.
(Single fetch agent, single execution agent, data flow control)
A protocol that uses b) would be the HPT.
(Single fetch agent, independent execution agents, message flow control)
A protocol that uses c) would be dual login or cross login.
(Independent fetch agents, independent execution agents)

Since plain SBP-2 provides a master-controlled half-duplex path, the
profile (rev. 0.1d) may fall between the half-duplex "plain SBP-2" and
the full-duplex "SBP-2 dual or cross login", because it may require
some sort of retries (re-ordering in the initiator) in the case of
asynchronous back-channel data, even though it uses the ordered model.
(Three-quarters duplex?)

 Akihiro Shimura (shimura@pure.cpdc.canon.co.jp)
 Office Imaging Products Development Center 3