Maybe this is something we could discuss at the next telecon.
Here is our situation. For reasons beyond the scope of this discussion, we need to build a client-side API that supports an open, write, write, ..., close|cancel paradigm for sending the document data. This is easy to do with chunking, but may be impossible to do with Content-Length (due to buffering limitations), so our client will normally use chunking. We'd really like to avoid having to design it to fall back to Content-Length when it encounters an HTTP/1.0 IPP server, because that would be complicated and expensive, and won't always work. On the other hand, we don't want to paint ourselves into a corner by building a client that relies heavily on chunking, only to find out that many server implementations can't receive and decode the chunked transfer coding.
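To make the trade-off concrete, here is a minimal sketch (in Python, with illustrative names; this is not code from any real IPP client) of why the open/write/.../close paradigm maps naturally onto HTTP/1.1 chunked transfer coding: each write() can be framed and sent immediately, and the total body length never needs to be known up front.

```python
def encode_chunk(data: bytes) -> bytes:
    """Frame one chunk: hex length, CRLF, data, CRLF (per HTTP/1.1)."""
    return b"%x\r\n%s\r\n" % (len(data), data)

def end_chunks() -> bytes:
    """A zero-length chunk terminates the chunked body."""
    return b"0\r\n\r\n"

class ChunkedDocumentWriter:
    """Hypothetical writer: each write() goes out as its own chunk,
    so nothing has to be buffered to compute a Content-Length."""

    def __init__(self, send):
        self.send = send  # e.g. socket.sendall on an open connection

    def write(self, data: bytes):
        if data:
            self.send(encode_chunk(data))

    def close(self):
        self.send(end_chunks())
```

A Content-Length fallback, by contrast, would force the client to buffer (or spool) the entire document before the first byte of the body could be sent, which is exactly the limitation described above.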
Is it safe to assume that all IPP 1.0 products will support HTTP/1.1 (and therefore chunking), even if prototype implementations don't?