> From hastings at cp10.es.xerox.com Wed May 21 21:42:58 1997
>>> Why can't we specify UTF8 (not Unicode) in the Model document as the
> coded character set for values? Then we avoid a lot of messy coded
> character set mapping rules in the encoding and protocol documents.
> Believe me, coded character set mapping is very hard! Most character sets
> don't have all the same characters, and/or have overlapping characters.
> Let's avoid character set mapping altogether in our documents.
> We specify USASCII for the attribute name keywords in the model document;
> why not specify UTF8 as well (and follow the recommendations of the IETF here)?
It is the duty of the encoding document to specify the encoding. In the
model document, we could say any character in Unicode, and in the encoding
document we could specify the UTF-8 encoding of those Unicode characters. But in any
case, unless a server supports the entire Unicode character set,
something gets dropped somewhere, and if the server uses some other
encoding, it has to convert at some point and drop whatever characters
the new encoding doesn't support. So, I am still not sure that all
implementations will want to restrict the protocol to UTF-8 only. I
could see a Japanese site choosing EUC or SJIS to avoid the conversion
overhead. It's nice and simple to pick Unicode and UTF-8, but I don't
think it is realistic until Unicode is more pervasive.
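The lossiness argued above is easy to demonstrate. The following sketch (Python, with a made-up attribute value) shows that UTF-8 round-trips any Unicode text, while converting to a smaller coded character set such as US-ASCII must drop or replace characters that the target set lacks:

```python
# A client might submit an attribute value containing non-ASCII characters.
value = "naïve café"

# UTF-8 covers all of Unicode, so encoding and decoding is lossless.
assert value.encode("utf-8").decode("utf-8") == value

# A server that stores values in a smaller coded character set (here
# US-ASCII) cannot represent every character; conversion substitutes
# a replacement character and the original text is lost.
ascii_text = value.encode("ascii", errors="replace").decode("ascii")
print(ascii_text)  # prints "na?ve caf?"
```

The same effect occurs converting between any two character sets whose repertoires only partially overlap, which is the mapping problem the quoted message warns about.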
> > 17. We had a 3 hour discussion on best-effort. The final
> > proposal was to do away with any notion of "substitute"
> > values when there is a conflict between what is requested
> > and what is supported. The expectation is that if a client
> > specifies something and it is not supported, return an error
> > so that the client can query what is supported. best-effort
> > will now apply to only conflicts between IPP attributes and
> > what is in the document stream (PDL) itself. Proposed
> > values are: SHALL_HONOR_IPP_ATTR (IPP attribute values take
> > precedence over PDL instructions), SHOULD_HONOR_IPP_ATTR
> > (same as SHALL with no guarantees), and
> > NEED_NOT_HONOR_IPP_ATTR (PDL takes precedence over IPP
> > attributes).
We dropped NEED_NOT_HONOR_IPP_ATTR because it was the case where a client
could ask for an attribute that the Printer doesn't support. Both the
SHALL_HONOR_IPP_ATTR and SHOULD_HONOR_IPP_ATTR provide for the precedence
IPP > embedded, and embedded > Printer-default. The distinction between
them is in the probability of success.
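The precedence chain described above (IPP attribute > value embedded in the PDL > Printer default) can be sketched as a simple lookup. This is only an illustration with hypothetical names, not protocol machinery; note that SHALL and SHOULD differ in the guarantee offered, not in the precedence order itself:

```python
def resolve(attr, ipp_request, pdl_values, printer_defaults):
    """Return the effective value for one Job Template attribute,
    applying the agreed precedence: IPP > embedded PDL > default."""
    if attr in ipp_request:
        return ipp_request[attr]       # IPP attribute takes precedence
    if attr in pdl_values:
        return pdl_values[attr]        # then the value embedded in the PDL
    return printer_defaults[attr]      # finally the Printer default

# The client supplied "copies" but not "sides", so the embedded
# PDL instruction wins for "sides".
effective = resolve(
    "sides",
    ipp_request={"copies": 2},
    pdl_values={"sides": "two-sided-long-edge"},
    printer_defaults={"sides": "one-sided"},
)
print(effective)  # prints "two-sided-long-edge"
```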
> > 19. When a Job object is created only supplied attributes
> > will be copied from the request into the Job object. In
> > order to determine the defaults that will be applied, query
> > the Printer Job Template attributes to get defaults.
>> We need to clarify in the above that:
> The Printer object applies the default values when there is no value supplied
> by either the client in the IPP protocol or by the document in the PDL.
> Presumably when the Printer object applies the defaults is implementation
> dependent. Some implementations for some attributes may scan the PDL at job
> submission time, shortly thereafter, or just before processing. Some
> implementations may wait until during interpretation of the PDL. Some
> implementations may set the output device to the default values just prior
> to sending the document PDL to the output device. If the PDL changes the
> output device, the PDL overrode the defaults.
I think we should specify expected behavior rather than possible implementations.
> I think that we did not agree to add "time-since-processing" on the grounds
> that that is an attribute in the Job Monitoring MIB for accounting and
> system utilization, so we didn't need to add it to IPP, since IPP isn't
> for accounting and utilization analysis.
I thought we did agree to have a 'time-since' for each state
attribute. Though I think that we also agreed that the clock for such
a time stopped when the job moved to the next state. So the names should
be time-<job-state-name>, that is: time-pending (i.e., the time that the
job was in the pending state), time-processing, time-completed, and so on.
> My notes said the units were milliseconds. But wouldn't seconds be enough?
We picked milliseconds so that a server with many jobs arriving at
subsecond intervals could distinguish order.
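A minimal sketch of the per-state timing just described, with hypothetical names: each job records, in milliseconds, how long it spent in each state, and the clock for a state stops when the job leaves it. Millisecond resolution lets jobs that arrive within the same second still be distinguished:

```python
import time

class Job:
    """Illustrative only: track time-<job-state-name> values in
    milliseconds; the clock stops when the job leaves the state."""

    def __init__(self):
        self.state = "pending"
        self._entered = time.monotonic()
        self.time_in_state_ms = {}   # e.g. {"pending": 12, "processing": 840}

    def transition(self, new_state):
        now = time.monotonic()
        # Freeze the elapsed time for the state being left.
        self.time_in_state_ms[self.state] = int((now - self._entered) * 1000)
        self.state = new_state
        self._entered = now

job = Job()
job.transition("processing")
job.transition("completed")
# job.time_in_state_ms now holds time-pending and time-processing,
# both in milliseconds.
```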
>>> > 7. Define SHALL and SHOULD get rid of NEED_NOT
> > Who: Scott
>> Why get rid of "need not"? It's needed when the standard has to say that an
> implementor does not need to do something. It's too confusing to say
> "may not", since that sounds like a prohibition rather than a relaxation
> of requirements. POSIX and ISO standards are quite clear on this terminology.
See my comment above for why to get rid of "need not".