Re: [Q] Why is the threading and UI updating designed to be done only on a main thread?
- Subject: Re: [Q] Why is the threading and UI updating designed to be done only on a main thread?
- From: Alex Zavatone <email@hidden>
- Date: Wed, 14 Mar 2012 18:09:07 -0400
Back in the olden days in Director (1995-2001), I actually built an xplat pseudo threading engine in the Lingo language for Director and Shockwave for this exact purpose.
There was no robust model for thread locking; I simply said to myself, "don't do anything stupid" and "handle your callbacks so that you can't create a locked-up or out-of-control case".
In my experience, IF you can stick to this simple approach, properly handle success and failure cases, and focus on keeping your operations simple and limited, you've really got multiple async processes that will all eventually phone back home, and you can get a lot of things done in parallel.
But the only reason I think I was comfortable doing this was that I wrote the system and drove it through a throttled master service call on an idle interrupt, which then ran through the master manager, the instantiated sub-managers, and then the list of async processes to be iterated on. Frame dropping of animations was also supported if an iteration took more than the requested time to process. This meant that performance stayed the same on faster and slower boxes; even under load, tasks still completed in the specified amount of time and only that time. However, if I wanted to fire off 1000 HTTP GETs and perform 1000 async animations at the same time, I COULD, but the performance implications would be my own fault for being an idiot. Even that would never hang the system.
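The core of that design — a time-budgeted service loop that steps every registered task and drops work it can't fit into the budget — can be sketched like this. This is a hypothetical reconstruction in Python, not the original Lingo; the class names, `budget_ms`, and the `step()` protocol are all my own assumptions:

```python
import time

class PseudoThread:
    """One cooperative task: step() is called once per service tick."""
    def __init__(self, steps):
        self.steps = steps          # remaining iterations before completion
        self.done = False

    def step(self):
        self.steps -= 1
        if self.steps <= 0:
            self.done = True

class MasterService:
    """Throttled service loop: each tick gets a fixed time budget, and
    tasks that don't fit inside it are skipped this tick (frame dropping),
    so wall-clock behavior stays constant on faster and slower machines."""
    def __init__(self, budget_ms=16):
        self.budget = budget_ms / 1000.0
        self.tasks = []

    def spawn(self, task):
        self.tasks.append(task)

    def tick(self):
        deadline = time.monotonic() + self.budget
        for task in self.tasks:
            if time.monotonic() >= deadline:
                break               # out of budget: drop remaining frames
            task.step()
        # completed tasks clean themselves out of the queue
        self.tasks = [t for t in self.tasks if not t.done]
        return len(self.tasks)
```

Spawning a thousand tasks here would still never hang the loop; each tick simply services as many as the budget allows and returns.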
In one popular use case, glowing, pulsing button graphics needed to be created and animated asynchronously to indicate the currently active UI button for a shortcut keypress. For this task the pseudo-threading/async process manager was ideal, since Director's Lingo engine was mostly a synchronous, single-threaded system. I created an animated pulsing effect by blending the rollover state of an alpha-channeled graphic over the active state, to indicate the UI element that would be activated if Return was pressed. The up and down arrows moved the indicator to the next or previous clickable button, and the pulsing would move to the next button while the previous animation thread cleaned up after itself and restored the graphic state.
This was simply one thread/async item at a time, all run through the master service throttle onto the async animation manager through the master iterator.
But stopping it abruptly didn't look good. The pulsing itself looked great and organic, but simply halting the pulse effect and moving it to another button was very abrupt and unpleasant. The previously active button(s) needed to fade back to the original state. What was needed was to catch the pulse at whatever % of fade-in it was, dynamically create a fade-out array from the current value minus the standard interval down to a value of 0, and spawn another thread with these graphic properties when focus moved from one clickable button to another. As the user navigated through the interface with the shortcut keys, at times the system would have 4 to 10 threads handling the pulse and the fade-outs without any problem. Pressing Return or Escape in the middle of a fade up/fade down issued a thread disposal, where the thread would handle its own housekeeping, including deallocation of the graphic sprite through the sprite server and disposal of itself; the rest of the memory management was cleaned up by Director itself.
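That fade-out array — descending from wherever the pulse was caught, by the standard interval, down to 0 — is simple to sketch. The function name and the blend-percentage representation are my own assumptions, not the original Lingo:

```python
def fade_out_ramp(current_blend, interval):
    """Build the descending list of blend values (as percentages) from
    wherever the pulse was caught, stepping down by the standard interval,
    ending exactly at 0 so the graphic lands on its original state."""
    ramp = []
    v = current_blend - interval
    while v > 0:
        ramp.append(v)
        v -= interval
    ramp.append(0)                  # always finish fully faded out
    return ramp

# A fade-out thread would then consume one entry per service tick:
# fade_out_ramp(70, 20) yields [50, 30, 10, 0]
```

A new thread spawned with this array plays it back one value per tick, so a pulse interrupted at 70% fades smoothly instead of snapping to 0.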
If this was the last thread running, the sub manager would remove itself from the master iterator and if the master iterator had no managers in it, it would remove itself from the timeOut servicer as well.
If desired, callbacks could be executed at the end of a thread's execution, or at an interval during it, to a universal system-wide method or a scoped method within an object.
The thread manager was subclassed into two self starting and self disposing managers, one for graphic animations and one for get and post network operations.
Prototype thread classes were made for all basic animation and network operations; they could be executed with one line of code and were easily subclassable.
Each thread was inherently self-terminating (unless otherwise specified), and the thread manager and sub-managers would instantly come into existence upon the first thread's issuance. Auto cleanup, removal of their references, and disposal of their instances upon completion of the last thread in the queue were built into the system. By that, I mean all the housekeeping was magically taken care of by the system itself and its components.
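The lifecycle described here — manager auto-instantiated by the first thread, end-of-thread callbacks fired, manager auto-disposed after the last thread completes — can be sketched as follows. Again a hypothetical Python reconstruction; `spawn`, `service`, and the convention that a task function returns True while still running are all my assumptions:

```python
class ThreadManager:
    """Manager that comes into existence with the first spawned task and
    disposes of itself when the last task completes."""
    _instance = None

    def __init__(self):
        self._tasks = []

    @classmethod
    def spawn(cls, fn, on_done=None):
        if cls._instance is None:
            cls._instance = cls()       # auto-instantiate on first thread
        cls._instance._tasks.append((fn, on_done))

    @classmethod
    def service(cls):
        """One master-service tick. Returns True while the manager lives."""
        mgr = cls._instance
        if mgr is None:
            return False
        still_running = []
        for fn, on_done in mgr._tasks:
            if fn():                    # task returns True while running
                still_running.append((fn, on_done))
            elif on_done:
                on_done()               # end-of-thread callback
        mgr._tasks = still_running
        if not mgr._tasks:
            cls._instance = None        # auto-dispose after last thread
        return cls._instance is not None
```

The caller never creates or destroys the manager explicitly; spawning the first thread brings it into existence, and servicing the last one removes it.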
If needed, at any time in execution, the threads could be referenced through several means, including case insensitive name strings, and could be told to finish up or have their instruction sets and intervals modified.
There was no artificial thread limit or variable tracking the amount of idle ms available for threads, but one could have been implemented to prevent runaway thread allocation or congestion.
Threads could not be locked, since there was no need for it at the time, as long as I followed my "don't be stupid" rule. What was very nice, though, was that this, combined with a 1000-channel graphic sprite server, enabled me to lay down robust, responsive, animated, 32-bit interfaces with dialogs, menus, and 5-state buttons without resorting to synchronous loops that would lock up the inherently single-threaded Director application just to get pleasing animations.
So, unless I'm terribly far off, if you want to thread some of your app's tasks, and you've got a clean model for your callbacks and have properly modeled all potential circumstances, this should be very doable.
I remember this system running very well, with no performance lag at all, on 80 MHz Pentium IIIs and on my super-fast G3 Wallstreet PB with a 233 MHz processor and 128 MB of 100 MHz RAM.
The main approach in keeping these pseudo threads from getting out of control was the auto cleaning design of the system AND the premise that I would know all the possible outcomes of threads being spawned, completing and being disposed of.
I don't see why, with a modified approach and clean memory management, this couldn't be done today in Xcode projects.
(edited many times - I hope this reads cleanly)
- Alex Zavatone
On Mar 14, 2012, at 4:12 PM, Per Bull Holmen wrote:
> At 20:00 on 14 March 2012, Wade Tregaskis <email@hidden> wrote:
>
>> On the other hand, you could have an event handling framework which dispatched events to any of multiple threads/queues/whatever for you. For example, each window might have its own queue. This actually makes a lot more sense in many cases, as a default, since many actions within a single window/document are successive, not concurrent. If they are concurrent, you could then go to the trouble of manually dispatching things to other queues, or otherwise realising that concurrency.
>
> I guess most seasoned Cocoa programmers are familiar with this
> approach, because they have probably not been doing everything in Cocoa
> all their lives. It's just a matter of taste, really. I think it's
> best to have it single threaded by default. I don't like the idea of a
> multithreaded approach by default, because as a general rule, you
> should not make your application multithreaded unless you have a good
> reason. So, it's just a matter of when do you want to do the extra
> work? When you actually wanted to do it single threaded, but the
> framework has a multithreaded approach by default, or the other way
> around? Because each document or window is likely to need to access
> shared data anyway. Also, whatever model comes with the framework is
> likely to be not quite the best thing for your specific needs. I see
> the point of not having one document obstruct the work of other docs,
> but in many cases the best and easiest remedy to this might be to not
> let any document block its thread at all.
>
> Therefore, I think it's better that they focus their efforts on good
> abstractions that make concurrent programming in general easier and
> more efficient, like NSOperationQueue etc...
>
> Per
_______________________________________________
Cocoa-dev mailing list (email@hidden)
Please do not post admin requests or moderator comments to the list.
Contact the moderators at cocoa-dev-admins(at)lists.apple.com
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden