IPP Mail Archive: Re[2]: IPP> Common concerns regarding the SWP proposal

Re[2]: IPP> Common concerns regarding the SWP proposal

Bill Wagner (bwagner@digprod.com)
Mon, 9 Jun 1997 13:15:13 -0400

Carl-Uno and Scott appear to suggest that the IPP standard should
include all the features the group feels are desirable, and that
specific implementations may have "null" implementations of these
features. Aside from the time necessary to properly define this
complete set of features, and the time necessary to get approval, the
following message on the IETF-FAX list (in response to perhaps a
similar situation in Internet fax) points out a potential problem with
this approach which should be addressed. It might be appropriate for
the IPP's IETF guides to comment on this.

Bill Wagner, Osicom/DPI

______________________________ Forward Header _____________________________
Subject: Re: JWK and Wordcraft support for legacy fax machines - UniB
Author: Ned Freed <Ned.Freed@innosoft.com> at Internet
Date: 6/9/97 9:13 AM

>Confusion of standards and implementations again. Standards define the set
>of functions, implementations may opt for a sub-set. The key is to define a
>full set to start with and then let people do their own thing in terms of
>the sub-set they select.

I agree that this is how the ISO and ITU standards processes work. It is
not, however, how the IETF standards process works. IETF standards attempt
to standardize the minimum amount of stuff needed to get the job done and
no more. And if subsequent implementation experience shows that some
features end up being unimplemented, and hence don't interoperate, they
will be removed from the standard as it proceeds down the standards path.

At the heart of all this is the IETF requirement that each feature of a
standard has to be shown to

(1) interoperate between multiple implementations and
(2) not have any known interoperability problems

before the standard can advance in grade.

Fine, you say, but why does this rule preclude me from writing a standard
with lots of stuff in it and then trimming the stuff that sees insufficient
implementation later on? The answer is that while the process may formally
allow this, the long-time participants in the process have a strong bias
against it, and you will have the devil's own job getting rough consensus
on documents that are written this way. The "rough consensus and running
code" line often sounds silly to people who hail from the ISO or ITU
communities (I used to be such a person, BTW), but here these are words to
live by.

When MIME was first designed there were no fewer than three separate
implementations of every feature in the standard (one by me, one by
Marshall Rose, and one by Nathaniel Borenstein) that were done as the
standard was being written and were kept up to date as the standard
changed. And even though we effectively met the implementation requirements
for a move to full standard status before RFC1341 even came out, we've
still had to trim any number of features out of MIME, and I suspect that
several more will have to go before we are allowed to move it to full
standard.

Ned

______________________________ Reply Separator _________________________________
Subject: Re: IPP> Common concerns regarding the SWP proposal
Author: Scott Isaacson <SISAACSON@novell.com> at Internet
Date: 6/6/97 1:43 PM

Good summary Jay..

************************************************************
Scott A. Isaacson
Print Services Consulting Engineer
Novell Inc., 122 E 1700 S, Provo, UT 84606
V: (801) 861-7366, (800) 453-1267 x17366
F: (801) 861-4025, E: scott_isaacson@novell.com
W: http://www.novell.com
************************************************************

>>> JK Martin <jkm@underscore.com> 06/05 6:47 PM >>>

> 1. Does not support multiple documents

No mandatory support for all implementations. Add a simple attribute to the
printer that is a boolean: "multiple-documents-supported" = True or False.
If False, expect an error code back on Create-Job or Send-Document. Maybe
even put this in the directory schema. Do end users want to find printers
that can print multiple documents per job? I don't know.

> 2. Does not support Print-by-Reference

Add a simple attribute to the printer that says
"print-by-reference-supported" = True or False. If False, don't expect a
URL as document content to work. If True, what gets printed is what gets
printed, with no guarantee; and if you send in an FTP URL and the given
instance of a Printer does not support it, you get an error message.
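The capability-attribute pattern proposed in the last two points can be sketched as follows. This is an illustrative sketch only (in Python, though the prototype under discussion was Java); the attribute names come from the message above, but the function, error strings, and flags are hypothetical placeholders, not anything from an IPP draft.

```python
# Hypothetical printer capability attributes, per the proposal above.
printer_attributes = {
    "multiple-documents-supported": False,
    "print-by-reference-supported": False,
}

def send_document(doc, is_reference=False, is_additional=False):
    """Reject operations the printer has declared it does not support.

    is_additional: True when this is a second/later document in one job.
    is_reference:  True when doc is a URL to fetch rather than content.
    """
    if is_additional and not printer_attributes["multiple-documents-supported"]:
        return "client-error: multiple documents not supported"
    if is_reference and not printer_attributes["print-by-reference-supported"]:
        return "client-error: print-by-reference not supported"
    return "ok"
```

A client that checks these booleans up front can fail fast instead of discovering the limitation mid-job.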

> 3. Does not support job status queries
A basic query to a Printer or a Job has always sounded essential to me. I
believe this is fundamental. Sure, you could always do a GET on a URL with
an Accept header of text/html and get back a browser view, but this need
not be a requirement. SWP does not mandate that this be a requirement.
However, a requirement would be to support a POST with an application/ipp
PDU in it. The number of MANDATORY requirements is SO small here, I don't
see a problem with supporting the operation for all implementations, given
that you already have a web server (doing the GET accepting text/html,
that is).
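The shape of such a POST can be sketched without committing to any PDU layout. In this sketch the host, printer path, and payload bytes are hypothetical placeholders, since the actual application/ipp PDU format was still being defined at the time; only the HTTP framing is shown.

```python
def build_ipp_post(host: str, printer_path: str, body: bytes) -> bytes:
    """Assemble a raw HTTP/1.1 POST carrying an application/ipp payload."""
    headers = (
        f"POST {printer_path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/ipp\r\n"
        f"Content-Length: {len(body)}\r\n"
        f"\r\n"
    )
    return headers.encode("ascii") + body

# Placeholder payload; a real PDU would encode a status-query operation.
request = build_ipp_post("printer.example.com", "/printers/lp1",
                         b"get-job-attributes")
```

The point of the argument above is that a server already parsing GET requests needs very little extra machinery to dispatch on this one additional Content-Type.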

> 4. Does not support job cancellation by the requesting user
Seems fundamental to me. If you support an HTML form version of this, it
would be literally 10 lines of code to add the POST application/ipp
version.

> 5. Uses binary-encoded data for simple information normally encoded
> in text for HTTP-related transactions

In our prototype, we just finished up a Java application that tried to
process attributes encoded as:

<length>name<length>value (case 2)

rather than as

name ":" value CRLF (case 1)
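The two cases can be made concrete with a short sketch. This is illustrative only (Python rather than the Java of the prototype), and it assumes single-byte length prefixes for case 2, which the message does not specify.

```python
def encode_text(name: str, value: str) -> bytes:
    """Case 1: text encoding, 'name: value' terminated by CRLF."""
    return f"{name}: {value}\r\n".encode("ascii")

def encode_binary(name: str, value: str) -> bytes:
    """Case 2: length-prefixed binary, <length>name<length>value.

    Assumes each length fits in one byte (an assumption of this sketch).
    """
    n, v = name.encode("ascii"), value.encode("ascii")
    return bytes([len(n)]) + n + bytes([len(v)]) + v

def decode_binary(data: bytes) -> tuple:
    """Inverse of encode_binary for a single attribute."""
    nlen = data[0]
    name = data[1:1 + nlen].decode("ascii")
    vlen = data[1 + nlen]
    value = data[2 + nlen:2 + nlen + vlen].decode("ascii")
    return name, value
```

Note that the text form can be emitted in one pass, while the binary form must know each field's length before writing it, which bears on the encoding-time observation below.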

We ran both versions on 3 attributes, 30 attributes, and 300 attributes.
The goal was to measure the actual difference in processing time for a
dumb, interpreted Java application to do one or the other. The times seem
slow (we just did a quick and dirty hack with no optimizations, kept the
I/O buffering the same, and just modified the algorithm).

Encoding
30 attributes (varying name lengths, varying value lengths), 50 runs, avg.

Case 1: .1375 seconds
Case 2: .1525 seconds

Decoding
30 attributes (varying name lengths, varying value lengths), 50 runs, avg.

Case 1: .0975 seconds
Case 2: .0825 seconds

Sure enough, case 2 was "quicker", but as has been said many times before:
"This is printing we are talking about, and at MOST 30 attributes we are
talking about." Ease of use, extensibility, portability, etc. are so much
easier with a non-binary encoding of lengths. The binary encoding just
doesn't buy you very much.

Interesting note: encoding actually took longer with case 2. You actually
have to compute the length in order to encode the length! Looking
at both sides (encoder/decoder), it seems to come out a wash with
binary encoding or not.
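A rough re-creation of the measurement is easy to sketch. This is not the prototype described above (that was Java); it is a minimal Python harness with made-up attribute data, so the absolute numbers will differ from the figures quoted, and only the shape of the comparison carries over.

```python
import time

def encode_text(name, value):
    # Case 1: "name: value" terminated by CRLF.
    return f"{name}: {value}\r\n".encode("ascii")

def encode_binary(name, value):
    # Case 2: single-byte length prefix before name and value (assumed).
    n, v = name.encode("ascii"), value.encode("ascii")
    return bytes([len(n)]) + n + bytes([len(v)]) + v

# 30 attributes with varying name and value lengths, as in the test above.
attrs = [(f"attribute-name-{i}", "v" * (i % 17 + 1)) for i in range(30)]

def bench(encode, runs=50):
    """Average wall-clock time to encode all 30 attributes once."""
    start = time.perf_counter()
    for _ in range(runs):
        for name, value in attrs:
            encode(name, value)
    return (time.perf_counter() - start) / runs

avg_text = bench(encode_text)
avg_binary = bench(encode_binary)
print(f"case 1 (text): {avg_text:.6f}s  case 2 (binary): {avg_binary:.6f}s")
```

Whichever case wins on a given machine, the gap per run is microseconds, which is the thrust of the paragraph that follows.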

The point in posting this data is not to get into a war about "well, I
could do it faster ..." but just to look at the RELATIVELY MINOR overhead
of 30 attributes compared to a 4 meg color RIP process!!!

Please, let's not have two conformance levels. Let's NOT force everyone
to do everything, but let's not fragment the standard. We have
all failed if we do.

Scott Isaacson