Well, I don't think it's worth spending too much time designing an
elaborate polling-based solution, because it won't scale. What Harry & I
originally proposed (see …) was limited in scope: mainly job-progress-type
notifications back to the original sender only. For multi-recipient
notifications, Printer status change notifications, etc., one of the other
delivery methods is probably more appropriate.
I mean, just think about hundreds of desktop clients, in a corporate
"distributed printing" scenario, all polling the poor printer every 15
seconds. 15 seconds is frequent enough to make persistent connections
worthwhile, but now you have hundreds of simultaneous connections. Or, you
refuse persistent connections and deal with dozens of new connections per
second, possibly including Digest authentication challenge/responses or TLS
handshakes for each one. Won't scale.
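To put rough numbers on it (a back-of-the-envelope sketch; the client count
is an illustrative assumption, not from any spec):

```python
# Back-of-the-envelope estimate of polling load on a shared printer.
# The client count is an illustrative assumption.
clients = 300          # desktop clients in a "distributed printing" scenario
poll_interval_s = 15   # polling period per client, in seconds

# With persistent connections: every client holds a connection open.
simultaneous_connections = clients

# Without persistent connections: each poll is a fresh TCP setup (possibly
# plus a Digest challenge/response or TLS handshake), arriving continuously.
new_connections_per_s = clients / poll_interval_s

print(simultaneous_connections)   # 300 open connections
print(new_connections_per_s)      # 20.0 new connections per second
```

Either way the printer pays: hundreds of held connections, or a steady
stream of expensive connection setups.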
You're right, I certainly haven't solved the
backhoe-through-the-network-cable scenario. While there isn't a good way
to handle an indefinite time until the cable is fixed, perhaps I can
suggest a reasonable solution.
Maybe there should be 3 time values: 1) how often polling should be done to
be a good network citizen (such as 15 seconds), 2) the minimum number of
seconds that must pass before an event can be deleted (such as 3 minutes,
to allow clients to listen in on existing subscriptions without losing
events), and 3) the maximum number of seconds before an event is deleted
(such as an hour).
On a low-end implementation, the 2nd and 3rd numbers might be the same.
On a high-end implementation, however, it could keep track of the listeners
of each subscription, and if the 2nd number of seconds has elapsed and all
listeners have received an event, the event can be deleted, instead of
lasting until the 3rd number of seconds has elapsed. Maybe there should be
a 4th number: the number of seconds since last polling before a listener is
assumed to be dead and no longer listening. Since a polling listener won't
inform the receiver that it is going to stop polling, this 4th number keeps
events in the subscriptions that a now-silent listener had previously
polled from having to survive the full 3rd number of seconds. If we decide
on this type of solution, we'll have to figure out how listeners are
uniquely identified, since requesting-user-name is optional.
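One way the four time values could fit together is sketched below. This is
only an illustration of the scheme described above; the class, field names,
and constants are all made up, and a real implementation would live inside
the Printer's notification subsystem:

```python
import time

# Hypothetical retention policy built from the four time values discussed
# above. All names and numbers are illustrative, not from any IPP spec.
POLL_INTERVAL_S = 15           # 1) polite polling interval for clients
MIN_EVENT_LIFETIME_S = 180     # 2) event must survive at least this long
MAX_EVENT_LIFETIME_S = 3600    # 3) event is deleted after this, regardless
LISTENER_TIMEOUT_S = 600       # 4) listener presumed dead after this silence

class Subscription:
    def __init__(self):
        self.last_poll = {}    # listener id -> time of last poll
        self.events = []       # list of (created_at, set_of_listeners_served)

    def poll(self, listener_id, now=None):
        """Record a poll and return events this listener hasn't yet seen."""
        if now is None:
            now = time.time()
        self.last_poll[listener_id] = now
        fresh = []
        for created_at, delivered_to in self.events:
            if listener_id not in delivered_to:
                delivered_to.add(listener_id)
                fresh.append(created_at)
        return fresh

    def live_listeners(self, now):
        """Listeners that have polled within the liveness timeout (rule 4)."""
        return {lid for lid, t in self.last_poll.items()
                if now - t <= LISTENER_TIMEOUT_S}

    def prune(self, now=None):
        """Delete events according to the min/max lifetime rules."""
        if now is None:
            now = time.time()
        live = self.live_listeners(now)
        kept = []
        for created_at, delivered_to in self.events:
            age = now - created_at
            if age >= MAX_EVENT_LIFETIME_S:
                continue               # rule 3: hard expiry
            if age >= MIN_EVENT_LIFETIME_S and live <= delivered_to:
                continue               # rule 2 met and all live listeners served
            kept.append((created_at, delivered_to))
        self.events = kept
```

A low-end implementation could skip `live_listeners` entirely and rely only
on the hard expiry, which is the "2nd and 3rd numbers are the same" case.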
"Carl Kugler" <firstname.lastname@example.org>@pwg.org on 08/14/2001 10:18:21 AM
Sent by: email@example.com
Subject: Re: IFX> ippget and Lost Events
I don't think you've solved the problem. You still can't predict how long
it will take for the connection to be reestablished, and there is still a
limit to how long an event will be retained, so you can still lose events.
You can't beat the backhoe-through-the-network-cable scenario with
minimum and maximum event lifetimes.
P.S. An earlier version of this spec …
I'm concerned that in the non-wait mode of Get-Notifications, events could
be lost. The problem is that the sender cannot always predict how long it
will take to connect to the receiver, or, once connected, how long the
receiver will take to process the request. The sender knows how long the
receiver retains events before deleting them, so it knows it must connect
and have its request processed within that number of seconds, but
connection times can vary, as can printer responsiveness under load.
The following is not a proposal, so much as it is a possible solution to
the lost event problem, and is intended to lead to discussion.
If there were a one-to-one relationship between subscriptions and senders,
then the receiver could keep events for a long period of time and delete
each one once it has been retrieved. The problem arises when multiple
senders want to receive the same events. Because subscriptions can be
created when a job is created, a second sender wanting the same events
couldn't create a subscription soon enough to avoid losing per-job events.
If the receiver knew how many senders there were for each subscription, it
could keep track of how many senders received an event, and delete the
event once all senders have received it. If the event contained a list of
the senders, and they were uniquely identified, not only could the event be
deleted once all senders have received it, but this could solve the
duplicate event problem. Maybe instead of a lifetime value, there should
be a minimum and a maximum lifetime value. An event would live until its
minimum lifetime even if all senders that have registered for that event
have already received the event. This would give a new sender that also
wants that event a chance to retrieve it. An event would live no longer
than its maximum lifetime value; this way, if a sender loses interest, the
event still gets deleted eventually. I'm sure we would want to dictate
some minimum value for the minimum lifetime, and suggest some minimum
difference between the minimum and maximum lifetimes.
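The deletion rule just described can be stated compactly. The function
below is only an illustrative sketch of it (the names and the example
numbers are made up), not part of any proposal:

```python
def may_delete(age_s, min_lifetime_s, max_lifetime_s,
               registered_senders, senders_that_received):
    """Decide whether an event may be deleted under the min/max rule.

    Sketch of the rule described above: an event survives at least
    min_lifetime_s even if every sender has it (so a late-joining sender
    can still retrieve it), and never longer than max_lifetime_s (so a
    sender that lost interest can't pin it forever).
    """
    if age_s >= max_lifetime_s:
        return True                    # maximum lifetime always wins
    if age_s < min_lifetime_s:
        return False                   # minimum lifetime always holds
    # In between: delete only once every registered sender has received it.
    return registered_senders <= senders_that_received

# Example: min 180 s, max 3600 s, two registered senders "A" and "B".
assert may_delete(60,   180, 3600, {"A", "B"}, {"A", "B"}) is False
assert may_delete(300,  180, 3600, {"A", "B"}, {"A"}) is False
assert may_delete(300,  180, 3600, {"A", "B"}, {"A", "B"}) is True
assert may_delete(4000, 180, 3600, {"A", "B"}, set()) is True
```

Note this still presumes senders are uniquely identifiable, which runs into
the same requesting-user-name problem mentioned earlier in the thread.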
This archive was generated by hypermail 2b29 : Wed Aug 15 2001 - 13:02:43 EDT