Ayende @ Rahien


Pull vs. Push models for UI updates

I thought that I would give this topic a bit of discussion, because it seems to me that a lot of people are making false assumptions here.

Let us consider a push model for an application UI. Assuming that data binding works, and we have implemented INotifyPropertyChanged and INotifyCollectionChanged, push model is actually the easiest to work with. Change the model, and the UI is updated automatically.

Very often, this can look like magic. And it is a damn simple model to work with.
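The push idea can be sketched outside of .NET data binding. Here is a minimal, illustrative Python version of the pattern (the `Observable` class and its method names are mine, not a real binding API): every change to the model is immediately pushed to whoever is listening, which is exactly what makes it feel like magic.

```python
class Observable:
    """Model object that notifies subscribers on every property change."""

    def __init__(self):
        self._subscribers = []
        self._values = {}

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set(self, name, value):
        self._values[name] = value
        # Push: every single change is forwarded to the UI, right now.
        for callback in self._subscribers:
            callback(name, value)


updates = []                      # stands in for the UI
model = Observable()
model.subscribe(lambda name, value: updates.append((name, value)))

model.set("sessions", 15)
model.set("alerts", 312)
# The "UI" saw each change, in order, as it happened.
```

Note that the UI does work once per change, which is the property that stops scaling when changes arrive faster than a user can perceive them.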

It also breaks down completely when the number of updates that you have to deal with goes up beyond what is humanly perceivable.

Let us imagine that we have to build a system that needs to provide the user with a real time view of a system that generates 5,000 events per second.

Guess what? There is a serious issue in the actual problem definition. No human can deal with 5,000 events per second. More than that, even if you are displaying an aggregation of the data, it is still not going to work. Having to deal with 5,000 UI updates per second will usually kill the system. It also means that the system is doing way too much work. Just processing those UI events is going to take a lot of time, time that we could dedicate to actually making the UI work for the user.

A better model in this scenario is to split the work. We have a backend that deals with the data aggregation, and we have the UI poll the backend for a snapshot of the information periodically (every 0.2 - 0.5 seconds).

The advantage is that now we have far less UI work to do. More than that, we don't have to process every intermediate change just to show the current state.

Let us assume that we have 5,000 events per second, that we update the UI every 0.5 seconds and that only the last 500 events are actually relevant for the UI.

Using the push model, we would have to actually process all 5,000 changes. Using the pull model, every time we pull a new snapshot, we get the already processed data. So in this case, we throw away all the data that is not relevant to us (everything not in the last 500 events that showed up since the last batch). Trying to implement the same for a push model requires a fairly complex system, especially if you have a rich model and you don't have a good way of telling what the implications of a change would be.
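The split described above can be sketched like this; a hypothetical `Backend` (the class and its numbers are illustrative, not from the post's actual system) absorbs events as fast as they arrive, and the UI only ever asks for a finished snapshot.

```python
import threading

class Backend:
    """Aggregates events cheaply; the UI polls snapshot() on a timer."""

    def __init__(self, keep_last=500):
        self._lock = threading.Lock()
        self._events = []
        self._keep_last = keep_last

    def record(self, event):
        # Cheap backend work: no UI is touched here.
        with self._lock:
            self._events.append(event)
            # Only the most recent events matter to the UI; drop the rest.
            del self._events[:-self._keep_last]

    def snapshot(self):
        # The UI pulls a ready-made snapshot every 0.5 s. Intermediate
        # states are simply never seen, so they cost no UI work at all.
        with self._lock:
            return {"count": len(self._events), "last": list(self._events)}


backend = Backend(keep_last=500)
for i in range(5000):          # one second's worth of events
    backend.record(i)

snap = backend.snapshot()      # what the UI would do on its timer
```

The point of the sketch: 5,000 `record` calls happened, but the UI pays for exactly one snapshot of 500 items.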

The reason that this matters is that UI operations are expensive, while backend operations tend to be very cheap.

Add that to the human perception time lag, and you get a smoother user experience. And since now the UI work that needs to be done is so much less, and since the polling happens when the UI isn't busy doing other things, we also have a more responsive UI.

Comments

configurator, 02/16/2009 03:34 AM

So now the user has an interface that is changed every .5 seconds? How does that work, in terms of usability?

Ayende Rahien, 02/16/2009 03:45 AM

The UI is updated every .5 seconds.

Think about a reporting interface, where the values are constantly updating.

For example:

number of sessions: 15

number of alerts: 312

avg. statements per session: 23.4

avg. alerts per session: 12.1

Or seeing new rows added to grid

Mike Rettig, 02/16/2009 04:00 AM

In other words, this is producer versus consumer pace. The UI wants the events at an appropriate pace for human consumption while the producer of the information simply wants to publish any and all updates.

To plug Retlang one more time.... this is why Retlang works so well in the UI. Producers publish information at any rate that is appropriate, while consumers (UI threads) can choose the rate of delivery. Subscriptions can be for the last message, for all messages over an interval, or for all unique events over an interval. With multiple subscriptions, some parts of the UI are updated immediately while other information can operate on a delay.

I just added an example earlier today for WPF in SVN.
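[Editor's sketch: the pacing Mike describes, producers publishing at full speed while consumers choose their delivery rate, can be illustrated with a plain queue that the UI timer drains once per interval. This is not the Retlang API, just the idea behind its interval subscriptions.]

```python
from collections import deque

class BatchedSubscription:
    """Producer publishes freely; consumer drains one batch per interval."""

    def __init__(self):
        self._pending = deque()

    def publish(self, message):
        # Producer side: never blocks, never touches the UI.
        self._pending.append(message)

    def drain(self):
        # Consumer side: called by a UI timer once per interval.
        # Everything published since the last drain arrives as one batch.
        batch = list(self._pending)
        self._pending.clear()
        return batch


sub = BatchedSubscription()
for i in range(100):
    sub.publish(i)             # 100 messages at producer pace

first = sub.drain()            # one UI update handles all 100
second = sub.drain()           # nothing new since the last interval
```

Variants like "last message only" or "unique keys per interval" are the same shape with a different `drain`.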


Peter Morris, 02/16/2009 09:50 AM

Mike: That's exactly what I am trying to suggest Oren achieves by using a manual reset event, a thread pool job queued for that event, and the GUI requeuing the next wait when it has finished updating the GUI.

The first data will come through immediately, subsequent data will come through as quickly as the GUI can handle it without crippling Windows.

Ayende Rahien, 02/16/2009 12:12 PM

Peter,

But I don't want the data as fast as possible.

I want the data in human relative time.

Because it doesn't mean anything to display all the data.

Jonathan, 02/16/2009 01:01 PM

One thing that you have probably already implemented--or at least considered--is dumping all SQL statements to some kind of log file. If the info is coming too fast for the UI, you can display a portion of the queries in the UI as mentioned in the post, but the user could easily go to a log file for the more complete account, e.g. perhaps to see if and how a single row of 10,000 was properly inserted.

Steve, 02/16/2009 01:38 PM

Thanks for the Retlang plug, I went to check it out and it is very good!

Can this be used in Silverlight apps? (or is there any thought to include it in Silverlight?)

Sorry to hijack your post, Ayende!

Peter Morris, 02/16/2009 02:02 PM

Just to be clear, my aim isn't "fast" but "immediate". I just think that stepping through code and waiting up to 0.5 seconds for the log to come through is sluggish, especially if you are stepping through multiple lines of code and waiting for each log in order to find a specific problem.

Another way of putting it: you could still throttle the updates to no more than X lines per second, but use a variable sleep based on the last update time plus the amount of data retrieved last time.
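[Editor's sketch: Peter's variable-sleep idea, immediate delivery while traffic is light and a longer delay as batches grow, might look like this. The function and the scaling constant are illustrative assumptions, not from his actual code.]

```python
def next_delay(items_last_batch, min_delay=0.0, max_delay=0.5):
    """Sleep before the next poll: zero while traffic is light,
    backing off toward max_delay as the last batch gets bigger."""
    if items_last_batch == 0:
        return min_delay                 # nothing arrived: stay immediate
    # Scale the delay with the batch size, capped at max_delay.
    return min(max_delay, items_last_batch / 1000.0)


# Stepping through code, one log line at a time: near-immediate polls.
light = next_delay(1)
# A flood of 5,000 events: back off to the full half-second interval.
heavy = next_delay(5000)
```

This gives Peter's "immediate first, throttled under load" behaviour with one tunable curve.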

Ayende Rahien, 02/16/2009 02:19 PM

Peter,

0.5 seconds is not something that you would notice, in most cases.

Joe Gutierrez, 02/16/2009 04:26 PM

Why change from a push to a pull? As you stated, a push is much simpler than a pull. Keep it! Just add an event filtering mechanism, this time a temporal filter.

I like this because when you register for the events you are already registering a filter of sorts: the events that your presentation is interested in. So you might as well add a temporal filter too.

Dag Christensen, 02/16/2009 04:51 PM

Isn't 0.5 just a configurable number, like the low | normal | high update speed modes of the Windows Task Manager? On my work laptop I'd probably prefer "low" in a WPF app, since otherwise the fan goes through the roof from the high load of updating my GUI.

Ayende Rahien, 02/16/2009 04:57 PM

Joe,

The problem is that the events that are going on are complex; anything can happen via the push model.

Filtering on that is hard.

The pull method is generating a snapshot, so there is a lot less complexity.

Gena, 02/16/2009 05:18 PM

We also use a push driven (or event bubbling) approach and sometimes have the same situation, when the number of changes is too big. Since we use PostSharp to subscribe/bubble events, I'm considering grouping them inside the aspect when the corresponding parameter is used, e.g.:

[BubbleEvents(Delay=0.5)] // or also group-by parameters
public class A
{
    ...
}

For INotifyCollectionChanged it may work using OldItems and NewItems, but for INotifyPropertyChanged something else should be invented ... INotifyPropertiesChanged, to specify the many properties changed in the last 0.5 sec, grouped by senders.

... or to define something like MaxCapacity=100 [events/sec] :)

Torkel, 02/16/2009 05:23 PM

I came to the same conclusion in my last WPF app, that is, I moved some stuff (like updating stats) to a pull model that updated at regular intervals (I chose 0.5s too).

Joe Gutierrez, 02/16/2009 06:27 PM

The complexity doesn't go away whether you use push or pull; it's just a question of where you encapsulate it.

In "The Cost of Messaging" you used a pull method on the authentication service.

I think the polling mechanism required for a pull is complex. Which object is responsible for the polling? The UI?

I think you're breaking separation of responsibilities, if the UI is responsible for polling.

I think this is true: ". . . UI operations are expensive, while backend operations tend to be very cheap."

So try to keep the operations on the back-end.

Frans Bouma, 02/17/2009 12:22 PM

Sounds familiar (the problem). If you have an application which visualizes a model in various ways and reflects changes immediately, it can be slow when a lot of events are raised because of a single change (e.g. check a checkbox, which changes a property, which triggers a 'I'm changed!' event which bubbles through a model).

You can solve it in different ways. The approach you took is OK if the UI has a single area which suffers from a lot of updates, so polling can make things simpler. It has a disadvantage: it only works in some situations, namely the ones where you know you'll get a lot of updates and you want to poll for them instead of observing the changes as they come in. It can be awkward to use in other situations, as the observer setup is more cumbersome (i.e. with timers, instead of an event handler).

Another approach, which I took in llblgen pro v2, is the introduction of a different set of events for visualization. This too has downsides, though it makes writing UI code simpler: you write UI code to update itself against the visualization oriented events, not the regular changed events. It gives a much more fine-grained way to fine-tune performance with larger sets of events, as it's not the event system itself which is slow, but the message pump wrapping in .NET.

Another approach is to use a flag for the event handling, and only accept calls to the event handlers if the flag is false, and it's managed by a timer.
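[Editor's sketch: Frans's last suggestion, a flag checked by the event handlers and reset by a timer, can be illustrated like this. The class and names are hypothetical, not from llblgen.]

```python
class ThrottledView:
    """Event handlers only run while the flag is clear; a timer resets it."""

    def __init__(self):
        self.blocked = False
        self.refreshes = 0

    def on_model_changed(self):
        # Only accept the call if the flag is false, as Frans describes.
        if self.blocked:
            return
        self.blocked = True        # ignore further events until the timer fires
        self.refreshes += 1        # the one (expensive) UI refresh

    def on_timer_tick(self):
        # The timer periodically re-opens the gate.
        self.blocked = False


view = ThrottledView()
for _ in range(5000):              # a burst of change events
    view.on_model_changed()        # only the first one gets through
view.on_timer_tick()               # timer clears the flag
view.on_model_changed()            # next burst triggers one more refresh
```

However many events fire between ticks, the UI does at most one refresh per timer interval, which is the same human-paced behaviour the post's pull model achieves.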

Comments have been closed on this topic.