As we close in on the definition of the protocol, there
are still some things that worry me.
1) Will our proposed use of http handle the transport of
very large (multi-gigabyte) files efficiently and with sufficient
recovery? I know you have all tried to convince me that tcp/ip
provides everything that is necessary to guarantee delivery,
but I am still concerned that we may have left holes: a
connection remains open because of some failure, some
component times out, the print file does not get delivered, and
no one knows why or how to recover, because there is no ipp
level notification of what has happened.
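To make the concern in point 1 concrete, here is a hypothetical sketch (not part of any proposal) of the failure mode: a server that accepts the connection but then hangs, so the only signal the client ever gets is a low-level socket timeout, with nothing at the ipp level explaining what happened or how to recover. The function and variable names are illustrative only.

```python
import socket
import threading

def silent_server(ports, ready):
    """Accept one connection, then hang -- a stand-in for a failed component."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    ports.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    threading.Event().wait(5)   # never read, never reply
    conn.close()
    srv.close()

def submit_job(port, data, timeout=0.5):
    """Send print data; return the reply, or None if the transfer stalled."""
    with socket.create_connection(("127.0.0.1", port), timeout=timeout) as s:
        s.sendall(data)
        try:
            return s.recv(4096)
        except socket.timeout:
            # tcp delivered whatever it could, but the client cannot tell
            # how much the printer consumed or why the job was lost.
            return None

ports, ready = [], threading.Event()
threading.Thread(target=silent_server, args=(ports, ready), daemon=True).start()
ready.wait()
result = submit_job(ports[0], b"print-job-data")
print(result)  # None -- a bare timeout, with no ipp-level diagnosis
```

The point of the sketch is that the timeout is the *only* information available: there is no application-level status to say whether the job printed, stalled, or vanished.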
2) Have we sufficiently handled the case where a print driver is
generating the print output and putting it on the wire a piece at
a time? Does chunking solve this problem?
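For reference on point 2, a minimal sketch of what HTTP/1.1 chunked transfer encoding actually puts on the wire: each piece is framed with a hex length and CRLFs, and a zero-length chunk terminates the body, so a driver can transmit as it rasterizes without knowing the total length in advance. The helper names here are illustrative, not from any draft.

```python
def encode_chunk(piece: bytes) -> bytes:
    """Frame one piece as an HTTP/1.1 chunk: hex size, CRLF, data, CRLF."""
    return b"%x\r\n%s\r\n" % (len(piece), piece)

def chunked_body(pieces) -> bytes:
    """Encode a sequence of pieces, ending with the zero-length terminator."""
    body = b"".join(encode_chunk(p) for p in pieces if p)
    return body + b"0\r\n\r\n"

# A driver emitting three pieces as it generates them:
wire = chunked_body([b"page1", b"page2-data", b"trailer"])
```

Note that chunking solves the framing problem (no Content-Length needed up front) but not the recovery problem raised in point 1: a truncated chunked body still gives the sender no ipp-level account of what was consumed.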
3) Have we sufficiently handled the case where ipp is implemented
in an output device that has no capability to spool the data, and so
must start printing while receiving it? What happens when a printer
failure occurs during transmission in this case?
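A hypothetical sketch of the non-spooling case in point 3: the device feeds data to the print engine as it arrives, so a failure mid-transmission leaves pages already on paper, and the device-side state (how far it got) has no standard way to reach the client. Names and the failure model are illustrative assumptions only.

```python
def print_while_receiving(chunks):
    """Feed chunks to the engine as they arrive; report progress on failure.

    A chunk of None stands in for a dropped connection mid-job.
    """
    pages_printed = 0
    try:
        for chunk in chunks:
            if chunk is None:
                raise ConnectionError("link lost mid-job")
            pages_printed += 1   # the engine marks paper immediately
    except ConnectionError:
        # The device knows exactly how far it got, but there is no
        # ipp-level notification to tell the client where to resume.
        return ("incomplete", pages_printed)
    return ("complete", pages_printed)
```

For example, `print_while_receiving([b"p1", None, b"p3"])` returns `("incomplete", 1)`: one page is irrevocably printed, and only the device knows it.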
Roger K deBry
Senior Technical Staff Member
Architecture and Technology
IBM Printing Systems
email: rdebry at us.ibm.com