IPP> machine readable etc. - why Harry is right

pmoore at peerless.com
Tue Aug 22 16:20:11 EDT 2000


It wasn't so much that it was 'bad' HTTP - it was more like 'bad' IPP. IPP feels
like an RPC implementation to me - send a request, get the reply, return to the
caller. Allowing things to dribble back over the next few days changes this
aspect of the model: do I return to the caller or not? How do I tell the caller
that something just arrived? And so on.

These are, I know, implementation issues rather than protocol issues, but at the
same time I am certain that the dribbling-back method would break existing
implementations, and we should bear that in mind.
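
To make that concrete, here is a rough sketch of the two client models
(Python, purely illustrative - none of these names come from any real IPP
library):

    import threading

    def print_job_rpc(conn, job_data):
        # RPC-style model: one request, one complete response, then
        # return to the caller.
        conn.send_request("Print-Job", job_data)
        return conn.read_response()

    def print_job_with_dribble(conn, job_data, on_notification):
        # Dribbling-back model: the first chunk is the Print-Job
        # response, but the response stays open and later chunks carry
        # notifications.  Someone has to keep reading, possibly for
        # days, and decide how to hand each late arrival to the caller.
        conn.send_request("Print-Job", job_data)
        first_chunk = conn.read_chunk()

        def pump():
            while True:
                chunk = conn.read_chunk()
                if chunk is None:        # server finally ended the response
                    break
                on_notification(chunk)

        threading.Thread(target=pump, daemon=True).start()
        return first_chunk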

Proxies et al:

I agree that pure HTTP proxies would have trouble with my proposed mechanism. I
did not envisage it being carried on HTTP; rather, I expected it to be carried
'raw'.




"Carl Kugler/Boulder/IBM" <kugler at us.ibm.com> on 08/22/2000 12:52:10 PM

To:   pmoore at peerless.com
cc:   "Harry Lewis/Boulder/IBM" <harryl at us.ibm.com>, bwagner at digprod.com,
      ipp at pwg.org, "Herriot, Robert" <Robert.Herriot at pahv.xerox.com> (bcc: Paul
      Moore/AUCO/US)

Subject:  RE: IPP> machine readable etc. - why Harry is right




Paul wrote:
>
>I felt that the nail in the coffin of multi-part response was the radical
>change in the data flow - it was too much of a change (at least to my
>mental model of IPP).
>
Hmmm, I'm not sure what you mean by a radical change in the data flow.
It's a perfectly normal HTTP request/response that happens to have
"unusual" (but still legal) timing in delivery of the response.  For
example, you do a Print-Job request, and get back a chunked response.  The
first chunk contains the normal Print-Job response.  Additional chunks
follow (later), containing notifications about the job progress.  This is
efficient enough for page-by-page notifications.
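
To make that concrete, here is a small sketch (Python, illustrative only) of
what such a response could look like on the wire; readable placeholders stand
in for the binary application/ipp chunk bodies:

    def chunk(payload):
        # One HTTP/1.1 chunk: hex size, CRLF, data, CRLF.
        return b"%x\r\n%s\r\n" % (len(payload), payload)

    response = (
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: application/ipp\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        # First chunk: the normal Print-Job response, sent right away.
        + chunk(b"<Print-Job response: successful-ok, job-id, job-uri>")
        # Later chunks, sent whenever something happens (possibly hours apart).
        + chunk(b"<notification: job-state = processing, page 1 complete>")
        + chunk(b"<notification: job-state = completed>")
        # Zero-length chunk terminates the response.
        + b"0\r\n\r\n"
    )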

>Regarding the problems you cite
>
>1. What is a buffering proxy? If it doesn't allow what I described then it will
>break instant messenger systems and hence won't survive in the marketplace very
>long. In fact this proposal has the same requirements as instant messenger
>systems (and they definitely work).
>
It's a mythical beast, but theoretically a standard-compliant HTTP proxy
implementation.  A proxy might buffer up a complete response from a server
before relaying it to a client.  For example, a proxy sitting between an
HTTP/1.1 server and an HTTP/1.0 client might use this algorithm (from
section 19.4.6 of RFC 2616) to buffer up a chunked response in order to
relay it to the client with Content-Length.

A process for decoding the "chunked" transfer-coding (section 3.6) can be
represented in pseudo-code as:

       length := 0
       read chunk-size, chunk-extension (if any) and CRLF
       while (chunk-size > 0) {
          read chunk-data and CRLF
          append chunk-data to entity-body
          length := length + chunk-size
          read chunk-size and CRLF
       }
       read entity-header
       while (entity-header not empty) {
          append entity-header to existing header fields
          read entity-header
       }
       Content-Length := length
       Remove "chunked" from Transfer-Encoding
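
That pseudo-code maps almost line for line onto a small routine.  Here is a
sketch (Python; rfile is assumed to be a file-like object positioned at the
start of the chunked body, and headers a dict of the response headers).  Note
that nothing can be relayed until the final zero-length chunk arrives:

    def dechunk(rfile, headers):
        # Buffer every chunk per RFC 2616 19.4.6, then replace
        # Transfer-Encoding: chunked with a Content-Length.
        body = b""
        # read chunk-size, chunk-extension (if any) and CRLF
        size = int(rfile.readline().split(b";")[0], 16)
        while size > 0:
            body += rfile.read(size)        # read chunk-data
            rfile.readline()                # and the CRLF after it
            size = int(rfile.readline().split(b";")[0], 16)
        # read trailer entity-headers until the blank line
        line = rfile.readline()
        while line not in (b"", b"\r\n", b"\n"):
            name, _, value = line.decode("latin-1").partition(":")
            headers[name.strip()] = value.strip()
            line = rfile.readline()
        headers["Content-Length"] = str(len(body))
        headers.pop("Transfer-Encoding", None)  # simplification: drop it
        return body    # nothing reaches the client before this returns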


The reason I say that this kind of proxy doesn't exist is also to be found
in the HTTP spec:  "Note: HTTP/1.1 does not define any means to limit the
size of a chunked response such that a client can be assured of buffering
the entire response."  (That, and the extremely poor responsiveness for
interactive applications.)

>2. Ditto
>
Some HTTP proxies will drop an idle connection after 5-15 minutes.  There
is no spec for this, but it seems to be common practice.  (The spec just
says, in essence, anyone can drop a connection any time for any reason.)

>3. What does that mean - proxies are limited to 2 per server (by spec)? Do you
>mean HTTP proxies?
>
Re: 2 per server, I just looked it up, and it's not quite as bad as I
thought.  From the HTTP spec: "A proxy SHOULD use up to 2*N connections to
another server or proxy, where N is the number of simultaneously active
users."  So each user is limited to two connections through a proxy to a
particular Printer.

>Well this won't be using HTTP proxies - they will have to be
>direct (SOCKS or Winsock) proxies.
>
So much for "native" IPP notifications!  I agree, though, if you get away from
HTTP, most of these problems go away.


>4. is a good point. But then the printer probably isn't going to have a lot of
>clients talking directly to it.
>
We have this recurring scenario of a lightweight printer shared by 1000
users, each of whom wants to get Printer notifications (out of paper, out
of toner, paper jam, etc.).  I don't believe in it, myself.

>
>
>
>
>"Carl Kugler/Boulder/IBM" <kugler at us.ibm.com> on 08/22/2000 09:35:44 AM
>
>To:   pmoore at peerless.com
>cc:   "Harry Lewis/Boulder/IBM" <harryl at us.ibm.com>, bwagner at digprod.com,
>      ipp at pwg.org, "Herriot, Robert" <Robert.Herriot at pahv.xerox.com> (bcc: Paul
>      Moore/AUCO/US)
>
>Subject:  RE: IPP> machine readable etc. - why Harry is right
>
>
>
>Paul wrote:
>>
>..
>>
>>There are two workable possibilities - one is ipp-get. This already exists but
>>is polling - and people have a visceral dislike of polling to say the least!
>>
>It is a little better than polling, in that it introduces some message
>queuing.  With Get-Job-Attributes polling, you just get a periodic snapshot
>of the state.  With ipp-get, you can reconstruct the sequence of events
>(e.g., Job history), even if you don't get them in real time.
>
>The other alternative is a tweak to indp. Instead of the client sending in a
>url and the printer connecting to that URL we could have a client-opened
>connection.
>>
>>i.e. the client opens the connection
>>it subscribes on the 'normal' operation url,
>in the subscription the notification url is 'indp:'
>the lack of an address in the url causes the printer to use the existing
>>connection rather than open a new one.
>>
>This solution has all the problems that killed the multi-part response
>solution:
>
>1.  Buffering proxies. (Although I strongly doubt their existence.)
>2.  Proxy time outs.  (Although this could be handled with reconnects and
>sequence numbers.)
>3.  Proxies being limited (by spec) to two(?) connections to any particular
>server.
>4.  Tiny printers that can't handle more than a few active connections at a
>time.
>
>Ipp-get might be as good as it gets.
>
>     -Carl
>
>
>
>
>







