IPP> Chunked POST

Carl Kugler kugler at us.ibm.com
Wed Dec 16 15:46:58 EST 1998


I've stumbled across an apparent conflict between the CGI spec (see
http://www.ietf.org/internet-drafts/draft-coar-cgi-v11-00.txt) and the
HTTP/1.1 spec (see
http://www.ietf.org/internet-drafts/draft-ietf-http-v11-spec-rev-06.txt)
which is compounded by the Internet Printing Protocol (IPP) Spec (see
http://www.ietf.org//internet-drafts/draft-ietf-ipp-protocol-07.txt).  

HTTP/1.1 says that "All HTTP/1.1 applications MUST be able to receive
and decode the "chunked" transfer-coding" and "The presence of a
message-body in a request is signaled by the inclusion of a
Content-Length or Transfer-Encoding header field in the request's
message-headers".  Futhermore, the Content-Length header field MUST NOT
be sent if a Transfer-Encoding header field is present. 
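
To make "receive and decode the chunked transfer-coding" concrete, here is
a minimal decoder sketch (Python; it ignores chunk extensions and trailers,
and the sample wire bytes are just an illustration, not spec text):

    import io

    def decode_chunked(stream):
        """Decode a "chunked" message-body from a binary file-like object."""
        body = b""
        while True:
            size_line = stream.readline().strip()     # e.g. b"1a" or b"1a;ext=val"
            size = int(size_line.split(b";")[0], 16)  # chunk-size is hexadecimal
            if size == 0:                             # last-chunk ends the body
                break
            body += stream.read(size)                 # chunk-data
            stream.read(2)                            # CRLF after each chunk
        return body

    # Two chunks ("Hello, " and "world") followed by the last-chunk:
    wire = b"7\r\nHello, \r\n5\r\nworld\r\n0\r\n\r\n"
    print(decode_chunked(io.BytesIO(wire)))           # b'Hello, world'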

CGI/1.1 specifies the CONTENT_LENGTH meta-variable to indicate the size
of the entity attached to the request, if any.  If no data are attached,
CONTENT_LENGTH is NULL.  The script must not attempt to read more than
CONTENT_LENGTH bytes, even if more data are available. 
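
For comparison, a minimal sketch of a CGI script that follows that rule
(Python; the response it writes is just a placeholder):

    import os, sys

    length = os.environ.get("CONTENT_LENGTH")
    if length:                                      # NULL/absent: no attached data
        body = sys.stdin.buffer.read(int(length))   # never read past CONTENT_LENGTH
    else:
        body = b""

    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write("Read %d bytes of entity data\r\n" % len(body))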

Many http server implementors seem to have interpreted the combination
of these requirements to imply that a POST request without a
Content-Length HTTP header cannot have a message-body, which in turn
implies that a POST using Transfer-Encoding: chunked cannot have a
message-body.  Indeed, I have tried several commercial web servers, and
in every case a servlet or CGI program gets end-of-file as soon as it
tries to read the message-body input stream of a POST request with
chunked transfer-coding.  However, there are some web servers out there
that do work with chunked POSTs; two that I have found are Jigsaw and
Acme.
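
A test along these lines is easy to hand-roll.  The sketch below (Python,
with a placeholder host and CGI path, not the exact test I ran) sends a
chunked POST with no Content-Length and checks what comes back:

    import http.client

    conn = http.client.HTTPConnection("server.example.com", 80)
    conn.putrequest("POST", "/cgi-bin/echo")
    conn.putheader("Transfer-Encoding", "chunked")   # note: no Content-Length
    conn.endheaders()

    for piece in (b"first part of the request body, ", b"second part"):
        conn.send(b"%x\r\n" % len(piece))            # chunk-size in hex, then CRLF
        conn.send(piece + b"\r\n")                   # chunk-data, then CRLF
    conn.send(b"0\r\n\r\n")                          # last-chunk, no trailers

    response = conn.getresponse()
    print(response.status, response.read())          # does the body ever reach the script?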

IPP/1.0 uses HTTP/1.1 as a "substrate", and requires the use of the POST
method for submitting print jobs.  Chunking is essential for IPP, since
the POST requests might contain dynamically generated multi-megabyte
document files (like the 25MB PostScript files I've seen come out of
CASE tools).

I can think of a couple of possible solutions to this problem:

1.  No spec changes.  HTTP/1.1 servers that don't support chunked POSTs
are considered broken, and have to be fixed to buffer a chunked request
message-body in order to generate a CONTENT_LENGTH (see the sketch after
this list).  This works in theory, but is impractical because system
resources are limited and there is no guarantee that the server can
buffer all of the chunked data.  It's also inefficient: with chunked
encoding, for example, an IPP printer could begin printing a job while
the request is still arriving if the request didn't have to be buffered
in the http server.

2.  Change the CGI spec to remove the dependency on CONTENT_LENGTH (and
change http servers accordingly).
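
To make option 1 concrete, here is a rough sketch of the buffering step
(Python, reusing the decode_chunked() helper sketched earlier; the script
path is a placeholder, not any real server's interface):

    import os, subprocess

    def run_cgi_buffered(chunked_stream, script="/cgi-bin/ipp-print"):
        # The whole request body must be de-chunked and held before the
        # script can be started, because CONTENT_LENGTH isn't known sooner.
        body = decode_chunked(chunked_stream)
        env = dict(os.environ)
        env["CONTENT_LENGTH"] = str(len(body))
        # Only now does the script (and hence the printer) see any data.
        return subprocess.run([script], input=body, env=env)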

In any case, I think we need some clarification on this issue.  Are
POSTs with message bodies and without Content-Length headers legal?  If
so, should an http server pass the message body of such a request to the
service layer?  If so, must the http server generate a CONTENT_LENGTH
before doing so?  Should the http server decode (or filter) the chunked
transfer-coding before (or while) passing the message-body to the
service layer?

Carl Kugler
kugler at us.ibm.com


