IPP Mail Archive: RE: IPP>MOD Should IPP notification use http as the transport

RE: IPP>MOD Should IPP notification use http as the transport

Carl-Uno Manros (carl@manros.com)
Tue, 17 Aug 1999 06:50:41 -0700

Carl,

I have a number of last-minute questions about the proposed HTTP solution in
time for our meeting tomorrow; I hope you have time to respond today:

1) Our current solution assumes a URI address for delivery of notifications.
In your case you don't have one, as far as I can tell? What do we put in
the URI attribute?

2) How does the IPP client signal to the IPP server that it is ready to get
notifications? A new Send-Notifications operation, to which the server can
then respond with a multipart MIME object (see the sketch after these
questions)?

3) Are not all our operations synchronous? Does that not mean that while you
are waiting for the potentially long response to your Send-Notifications
request, you cannot send any other operations? Can you abort the
Send-Notifications operation in order to do something else and then start a
new Send-Notifications operation afterwards, or do you always need a
separate HTTP connection for the Send-Notifications operation?

4) If the IPP client wants to go away before all notifications are
delivered, what happens to the rest? Do they just get discarded, or are
they stored and delivered the next time you start up the Send-Notifications
operation?
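
For question 2, the exchange I have in mind would look something like the
sketch below; the operation name, URL, and MIME types are only placeholders,
not taken from any draft:

(connection opened)
>>> POST /printers/printer1 HTTP/1.1
>>> Content-type: application/ipp
>>>
>>> (IPP Send-Notifications request body)
<<< HTTP/1.1 200 OK
<<< Content-type: multipart/mixed; boundary=xyz
<<<
<<< --xyz
<<< (notification event 1)
(pause)
<<< --xyz
<<< (notification event 2)
(pause)
<<< --xyz--
(connection closed)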

Carl-Uno

> -----Original Message-----
> From: owner-ipp@pwg.org [mailto:owner-ipp@pwg.org]On Behalf Of
> kugler@us.ibm.com
> Sent: Monday, August 16, 1999 3:14 PM
> To: ipp@pwg.org
> Subject: Re: IPP>MOD Should IPP notification use http as the transport
>
>
> original article: http://www.egroups.com/group/ipp/?start=6169
> > > I asked you about using http as the transport. You suggested that I
> > > send you email to get your comments on the pros and cons of http as
> > > the event transport.
> >
> > basically, you don't want to use http (or any tcp based protocol)
> > to send small payloads, because the connection setup overhead
> > is fairly large, and also because tcp connections that are being
> > established do not compete fairly with tcp connections that are
> > already established (under very loaded conditions, attempts to
> > establish new connections can steal bandwidth from
> > already-established connections)
> >
> > but a lot depends on how you use http or tcp.
> >
> > if you know you're going to be sending several short status updates
> > (say, per-page acknowledgements) and the duration of the session is
> > such that you can probably keep a single tcp connection open for the
> > entire time, and especially if you need reliable delivery of those
> > short messages (i.e. you can't afford to have a status update dropped
> > or delivered out-of-order) it's probably reasonable to open a tcp
> > connection and deliver each update over the connection. but you
> > want to try to do this not only in a single tcp session, but also
> > in a single http transaction. i.e. you should avoid introducing
> > extra http round-trips. e.g. do something like this:
> >
> > (connection opened)
> > >>> GET /printer/status/job#2343 HTTP/1.1
> > >>>
> > <<< Content-type: application/ipp-status-messages
> > <<<
> > (pause)
> > <<< status: page 1 printed
> > (pause)
> > <<< status: page 2 printed
> > (pause)
> > <<< status: page 3 printed
> > (pause)
> > <<< status: out of paper
> > (pause)
> > <<< status: resumed
> > (pause)
> > <<< status: page 4 printed
> > <<< status: job complete
> > (connection closed)
> >
> > rather than sending a separate GET or POST or whatever for each
> > transaction reported. and you really want to avoid
> > opening up a new tcp connection for every page printed.
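>
> For what it's worth, a rough sketch of the client side of that
> single-transaction stream (the host name, path, and line-oriented
> message format here are made up for illustration):
>
>     import http.client
>
>     conn = http.client.HTTPConnection("printer.example.com")
>     conn.request("GET", "/printer/status/job2343")
>     resp = conn.getresponse()          # headers arrive right away
>
>     # the body then trickles in as the printer emits each update:
>     # one tcp connection, one http transaction, many status lines.
>     while True:
>         line = resp.readline()         # blocks until the next update
>         if not line:                   # server closed: job complete
>             break
>         print("status:", line.decode().strip())
>
>     conn.close()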
> >
> > you can save a bit more bandwidth by batching status updates,
> > and only sending a single update (perhaps for multiple pages)
> > every N seconds, or when an exception occurs, whichever
> > is sooner.
>
> Can't we depend on TCP's Nagle algorithm to do this coalescing for us?
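>
> (Though as I understand it, Nagle only delays small segments while
> earlier data is still unacknowledged, so it won't coalesce updates that
> arrive seconds apart on an otherwise idle connection. Doing the
> batching explicitly could look roughly like this sketch, where the
> class name and default interval are invented:)
>
>     import time
>
>     class StatusBatcher:
>         # collect updates; flush every `interval` seconds, or at
>         # once when an urgent event (e.g. "out of paper") arrives.
>         def __init__(self, send, interval=5.0):
>             self.send = send        # callable that writes one payload
>             self.interval = interval
>             self.pending = []
>             self.last_flush = time.monotonic()
>
>         def update(self, msg, urgent=False):
>             self.pending.append(msg)
>             overdue = time.monotonic() - self.last_flush >= self.interval
>             if urgent or overdue:
>                 self.flush()
>
>         def flush(self):
>             if self.pending:
>                 self.send("\n".join(self.pending))
>                 self.pending = []
>             self.last_flush = time.monotonic()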
>
> >
> > if you're doing per-page status updates, you're going to
> > be sending at least one packet per page regardless of whether
> > you use TCP or UDP. the difference would be that each packet
> > sent in TCP would be acknowledged by the receiver, in
> > most cases doubling the number of packets sent over the link.
>
> Why would the receiver acknowledge each packet? I thought the receiver
> was supposed to ack the longest contiguous prefix of the stream that
> has been received correctly -- a cumulative acknowledgement scheme.
> Therefore there may only be one ACK for multiple segments transmitted.
> I thought one ACK for two segments is typical.
>
> > for most kinds of links you are concerned about minimizing
> > bandwidth (rather than packets) or delay (to improve response
> > time for interactive applications). the latter doesn't apply
> > in this case. as for users of links that charge per-packet,
> > they can avoid the problem by not requesting per-page status updates.
> >
> > if you're doing less frequent notifications (like job completion
> > notifications), the volume of data is less, but in this case you probably
> > do want reliable delivery and acknowledgement. an RPC-like protocol
> > layered on UDP would work. but if you're only sending a single
> > notification per print job, it's probably not worth the savings.
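>
> Such an RPC-like exchange over UDP could be as simple as this sketch
> (the port number, payload, and retry policy are all invented):
>
>     import socket
>
>     def notify(host, payload, port=5050, retries=3, timeout=2.0):
>         s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>         s.settimeout(timeout)
>         try:
>             for _ in range(retries):
>                 s.sendto(payload, (host, port))
>                 try:
>                     ack, _ = s.recvfrom(512)
>                     if ack == b"ACK":
>                         return True    # delivered and acknowledged
>                 except socket.timeout:
>                     pass               # no ack in time: resend
>             return False               # give up after all retries
>         finally:
>             s.close()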
> >
> > so offhand, as long as IPP notifications can nearly always occur in a
> > single TCP session and a single HTTP transaction per print job
> > (and especially if it can be piggybacked in the same HTTP/TCP session
> > used to submit the print job), I can't make a strong case for using
> > UDP for IPP notifications, or even for supporting it as an option.
> >
> > of course, if there's an important case that I'm missing, let me know.
> >
> > (just in case the http session gets disconnected, you probably will
> > want to have the capability of reconnecting, but it should be
> > the exception rather than the rule. and you need to define what
> > happens in this case - do you replay all status messages since
> > the start of the print job or do you just replay those issued after
> > the start of that http session?)
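>
> One way to pin that down: have the client remember the sequence number
> of the last status message it saw and ask for a replay from there when
> it reconnects. The X-Last-Status-Seen header below is hypothetical,
> just to show the shape of the idea:
>
>     import http.client
>
>     def resume_status(host, job_path, last_seen):
>         conn = http.client.HTTPConnection(host)
>         conn.request("GET", job_path,
>                      headers={"X-Last-Status-Seen": str(last_seen)})
>         resp = conn.getresponse()
>         # the server would replay only messages numbered above last_seen
>         while True:
>             line = resp.readline()
>             if not line:
>                 break
>             last_seen += 1
>             print("status:", line.decode().strip())
>         conn.close()
>         return last_seen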
> >
> > hope this helps,
> >
> > Keith