IPP> PRO> Requirements on use of HTTP/1.1 in IPP

Carl Kugler kugler at us.ibm.com
Mon Sep 28 15:36:41 EDT 1998


Carl-Uno-

As you requested at the bake-off, I am putting down some of my thoughts about
IPP's restricted use of HTTP/1.1.

It was clear at the bake-off that many implementors are using pre-existing HTTP
frameworks, software development kits, or protocol stacks.  These are provided
by operating systems, development frameworks, web servers, etc.  I have yet to
see an off-the-shelf HTTP implementation that is not somehow broken in its
support for HTTP/1.1.  Yet many of these implementations manage to get
the job done for millions of users, daily, around the globe, in a variety of
HTTP applications.

Philosophically, I believe that the IPP spec should be as unambiguous and
rigorously defined as possible.  This improves interoperability.  However, it
does not improve interoperability when IPP tries to remove some of the
ambiguity in the HTTP/1.1 spec.  HTTP-WG members admit that the HTTP/1.1 spec
is intentionally vague in some areas, because it is felt that those areas are
not understood well enough to be fully nailed down, and only experience with
what works in practice will provide the understanding.  A good example of this
is connection management.  The HTTP/1.1 spec is silent about who closes
connections and when and why.  Also, the HTTP/1.1 spec itself provides for a
lot of leniency and backward compatibility with HTTP/1.0.

Therefore, I think the IPP specs should avoid putting restrictions on the
HTTP/1.1 transport layer.  It would be a good idea to recommend use of HTTP/1.1
features like persistent connections, but this should not be an absolute
requirement of the IPP specification.  We should make our recommendations, but
defer to the HTTP/1.1 spec when it comes to MUSTs and SHOULDs about the
transport layer.  If HTTP/1.1 allows for some vestiges of HTTP/1.0 for
compatibility's sake, we should too.  That way, generally available HTTP stacks
that work fine for other applications will be likely to work fine for IPP,
too.

I presume that IPP tries to subset HTTP/1.1 for the benefit of implementors
building their own HTTP layers.  But I think the practical reality is that some
variations of HTTP/1.1 will have to be accommodated in any implementation
that wants to interoperate widely with others.  Also, the amount of HTTP
functionality needed for an IPP implementation is pretty lightweight, even
allowing for backward compatibility.  The real weight is in areas like the
encryption layer.

Specifics
-------------
Below are some specific areas where IPP puts restrictions on the HTTP transport
layer that go beyond what's in the HTTP/1.1 spec.

PRO> "HTTP/1.1 is the transport layer for this protocol. "

We should allow that to be interpreted as saying that the transport layer is
defined by draft-ietf-http-v11-spec-rev-05 (or whatever), with all of its
vagueness, leniency, and backward compatibility, not as saying that a message
must be rejected if it says HTTP/1.0 in the message header.
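
For illustration only, a lenient receiver could accept either version token in
the request line instead of rejecting HTTP/1.0 outright.  A minimal Python
sketch (the function name and the exact policy are mine, not anything from the
drafts):

    # Tolerate HTTP/1.0 as well as HTTP/1.1 in the request line, rather than
    # rejecting the message outright.  Illustrative only.
    def parse_request_line(line):
        method, target, version = line.strip().split(" ", 2)
        if version not in ("HTTP/1.0", "HTTP/1.1"):
            raise ValueError("unsupported HTTP version: " + version)
        # An HTTP/1.0 sender gets HTTP/1.0 semantics (no chunking, no
        # persistent connection by default); an HTTP/1.1 sender gets the
        # full feature set.
        return method, target, version

    print(parse_request_line("POST /printers/lp1 HTTP/1.0"))
    print(parse_request_line("POST /printers/lp1 HTTP/1.1"))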

PRO>  "The IPP layer doesn't have to deal with chunking.  In the context of CGI
scripts, the HTTP layer removes any chunking information in the received
data."

This statement is irrelevant and confusing.  Any HTTP/1.1 application
(including an IPP implementation) must be able to receive and decode the
chunked encoding.  That's what the ietf-http-v11-spec says, and we should leave
it at that.
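
Decoding the chunked encoding is, in any case, not much code.  A rough Python
sketch of the receiving side (it ignores chunk extensions and trailer headers
for brevity, and is not a complete HTTP parser):

    import io

    def decode_chunked(stream):
        # Each chunk is a hex size line, the chunk data, and a trailing CRLF;
        # a zero-length chunk ends the body.
        body = b""
        while True:
            size_line = stream.readline().strip()
            size = int(size_line.split(b";")[0], 16)
            if size == 0:
                break
            body += stream.read(size)
            stream.readline()        # consume the CRLF after the chunk data
        return body

    example = b"5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n"
    print(decode_chunked(io.BytesIO(example)))   # b'hello world'
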
PRO>  "A client MUST NOT expect a response from an IPP server until after the
client has sent the entire response.  But a client MAY listen for an error
response that an IPP server MAY send before it receives all the data.  In this
case a client, if chunking the data, can send a premature zero-length chunk to
end the request before sending all the data. If the request is blocked for some
reason, a client MAY determine the reason by opening another connection to
query the server."

The ietf-http-v11-spec says that the client MAY expect a response from an HTTP
server before the client has sent the entire request, IF it announces this
intention with the "Expect: 100-Continue" HTTP header.  "The purpose of the
100 (Continue) status (see section 10.1.1) is to allow an end-client that is
sending a request message with a request body to determine if the origin server
is willing to accept the request (based on the request headers) before the
client sends the request body. In some cases, it might either be inappropriate
or highly inefficient for the client to send the body if the server will reject
the message without looking at the body."  I don't see why IPP should prohibit
this.
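
For what it's worth, the client side of this is also small.  A rough Python
sketch (the host, path, and timeout values are placeholders of my own; the
socket handling is simplified and not a complete implementation):

    import socket

    def post_with_expect(host, port, path, body):
        # Send the headers with "Expect: 100-continue", wait briefly for an
        # interim response, and send the body (already encoded as bytes) only
        # if the server has not already rejected the request.
        sock = socket.create_connection((host, port), timeout=10)
        headers = ("POST %s HTTP/1.1\r\n"
                   "Host: %s\r\n"
                   "Content-Type: application/ipp\r\n"
                   "Content-Length: %d\r\n"
                   "Expect: 100-continue\r\n"
                   "\r\n") % (path, host, len(body))
        sock.sendall(headers.encode("ascii"))
        sock.settimeout(2)
        try:
            interim = sock.recv(4096).decode("ascii", "replace")
        except socket.timeout:
            interim = ""     # no reply yet; the spec lets the client proceed
        if interim == "" or interim.startswith("HTTP/1.1 100"):
            sock.sendall(body)   # server is willing (or silent); send the body
        return sock
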
PRO> (in table):  "General-Header:  Connection:  "close" only. Both client
and server SHOULD keep a connection for the duration of a sequence of
operations. The client and server MUST include this header for the last
operation in such a sequence.  The client or server MUST send the header when
this condition is met."

This is connection management.  This use of persistent connections should be a
recommendation for performance enhancement, not an absolute requirement of the
IPP spec.  Also, why say "'close' only" and effectively prohibit the use of
"Connection: Keep-Alive", which is commonly used by HTTP implementations
because it was the HTTP/1.0 extension that allowed persistent connections
before HTTP/1.1 made them the default?  Can the client or server always
determine whether or not the current operation is the last operation in a
sequence of operations?
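
As a sketch of the alternative, a client can treat connection reuse purely as
an optimization: reuse one connection for a series of operations and add
"Connection: close" only when it happens to know an operation is its last.  A
rough Python illustration (the host, path, and header choices are mine):

    import http.client

    def send_operations(host, path, operations):
        # Reuse one persistent connection for a series of IPP operations,
        # marking only the known-last operation with "Connection: close".
        conn = http.client.HTTPConnection(host)
        responses = []
        for i, op in enumerate(operations):
            headers = {"Content-Type": "application/ipp"}
            if i == len(operations) - 1:
                headers["Connection"] = "close"
            conn.request("POST", path, body=op, headers=headers)
            responses.append(conn.getresponse().read())
        conn.close()
        return responses
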
Finally, the requirement to send Cache-Control headers is redundant since the
HTTP spec prohibits caching for POST requests anyway.
  -Carl


