> It seems to me that having major printer vendors like Lexmark
> and HP as well as the OS and NOS providers like Microsoft,
> IBM and Novell behind an HTTP-based solution will almost
> guarantee a commercial success.
What I want is a well-designed, reliable, and ubiquitous printing
protocol that makes life easier for users and system administrators.
If this group produces such a thing, I'm happy. And if they do, I'm
quite confident that printer vendors will sell them like hotcakes.
The purpose of my earlier rebuttal was to caution against hubris.
I've seen lots of well-supported standards efforts produce really
complicated, expensive, and unreliable protocols.
Yes, the support of these vendors will help a lot. But it's also
important to do good protocol engineering.
> While working within the IETF to define this would be nice, it is
> not necessary. As Vice-Chair of the IEEE Computer Society's
> Microprocessor Standard Committee, I believe we could be chartered
> by the IEEE easily.
Yes, I'm sure you could. But neither IEEE's nor IETF's imprimatur is
any assurance of success. The quality of the work is also important.
That's what I'm trying to address.
If users get a really good printing protocol out of this effort, I
don't care whose umbrella it happens under!
> If the IETF doesn't want us to create a commercial success then
> let's move on and do what we, the printing experts, need to do to
> provide what our customers want and need.
IETF is interested in producing beneficial results in finite time.
"Beneficial" means that we try really hard to make sure that our
efforts solve a real problem, that the result works well, is low cost,
is easy to use and manage, etc.
"Finite time" means we have a well-defined time frame, realistic and
narrowly focused goals, and that we try to avoid ratholes.
This is just good engineering common-sense.
Of course, the group needs the wisdom of "printing experts" to produce
beneficial results. It also needs the wisdom of "networking experts".
The latter group cares very much about minimizing operational
complexity and having things interoperate.
The whole HTTP vs. HTTP-lite debate is about how to minimize this
complexity.
Some people feel that simply reusing HTTP accomplishes this, because
there are already tools and libraries available that can be reused.
I believe that this approach minimizes prototyping complexity and
client implementation complexity on some platforms, at the expense of
implementation complexity on *most* of the target platforms
(especially embedded environments), operational complexity, product
cost, and interoperability.
I've got at least half a dozen groups (in IETF) that want to reuse
HTTP for something besides the web. I'd much rather give them a small
subset to let them embellish, than give them all of the HTTP/1.1 rope
with which to hang themselves.
But there's no point in having a prolonged discussion about it until
there's a spec for HTTP-lite that people can look at. So I'll try to
get that going. Once it materializes, people can see whether they
like it.
Meanwhile, I suggest that the group focus on the payload, rather than
the RPC mechanism.