Re: Inserting a task into the run loop
- Subject: Re: Inserting a task into the run loop
- From: Graham Cox <email@hidden>
- Date: Mon, 23 Mar 2015 10:28:41 +1100
> On 23 Mar 2015, at 9:52 am, Ken Thomases <email@hidden> wrote:
>
> I'd be curious to know how "run all the time", "run as often as possible", and "an endless loop" jibe with "not heavyweight processing" and "isn't going to be a huge drain on anything" in your mind.
>
> Processor intensive code is not code which does "hard" work. It's just code that runs all the time.
>
> I think you need to think more about what you need and whether it's reasonable.
>
Yes, I'm thinking about that :)
The code in question is the data model for a digital logic simulation. Each cycle of the loop, it looks at whether the state of any "device" has changed and propagates the change. If no state has changed it doesn't really do anything - it just iterates over a bunch of devices checking a flag. If some state has changed it still may not really do anything - some state changes ultimately get reflected in the UI in various ways, but many are simply internal changes that have no UI to update (the user gets to decide, because they can build anything). The model is hierarchical, so if there's no state change to any device the root of the system doesn't have its flag set either, and the root level, which implements the whole simulation cycle, just sits and spins waiting on that flag.
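Roughly, the scheme looks something like this (a sketch in Swift with made-up names, not my actual code):

protocol SimDevice: AnyObject {
    var needsUpdate: Bool { get set }   // the "state changed" flag
    func propagate()                    // push a changed output to downstream devices
}

final class SimRoot {
    var needsUpdate = false             // set whenever any device below sets its own flag
    private var devices: [SimDevice] = []

    /// One simulation cycle. If nothing has changed this is just a cheap flag check.
    /// Returns true if any propagation actually happened.
    @discardableResult
    func runOneCycle() -> Bool {
        guard needsUpdate else { return false }
        needsUpdate = false
        for device in devices where device.needsUpdate {
            device.needsUpdate = false
            device.propagate()          // may set needsUpdate on other devices, and back on the root
        }
        return true
    }
}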
What I do want (I think) is that the simulation is very fine-grained compared with user interaction with it. Essentially, each cycle of the simulation is equivalent to the propagation delay of a "device" (I use the term "device" to mean anything that has some logical function, as opposed to something that just propagates a signal from one point to another, i.e. a wire). To allow realistic timing these delays want to be very small. I'm obviously not expecting to achieve real-time simulation with propagation delays on the order of nanoseconds, and the cycle time itself can pretend to be whatever I want, but I do want these times to be realistically short with respect to user interaction. So for example, if the user flips a switch to change the state of an input, the entire circuit will respond to that change with realistic delays, so it appears to operate in real time, and an analysis of the timing will show realistic propagation delays which appear to be reported in nanoseconds.
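In other words, time is kept in simulated nanoseconds rather than wall-clock time - something along these lines (again just a sketch; the 1 ns tick is arbitrary, picked purely for illustration):

struct SimClock {
    private(set) var nanoseconds: UInt64 = 0   // simulated time, not wall-clock time
    let nanosecondsPerCycle: UInt64 = 1        // one cycle == one device propagation delay

    mutating func tick() { nanoseconds += nanosecondsPerCycle }
}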
The cycle-based approach works really well because it allows feedback loops around devices to work correctly without getting stuck in an infinite loop. But if I allow the cycle loop to sleep and wait for user input, my cycle timing ends up off, because it has no idea how many faked cycles it should account for; and if I instead use the true measured time, the propagation delays are revealed as unrealistically long - many milliseconds rather than some faked nanoseconds. Perhaps all I need to do is think about these timing calculations, rather than trying to make my loop run faster? Not sure...
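Maybe the answer is just to advance the simulated clock only when a cycle actually runs, and never derive the delays from measured wall-clock time. Using the SimRoot and SimClock sketches above, plus an assumed waitForChange() that blocks until something pokes the model, the loop might look like:

func driveSimulation(root: SimRoot, clock: inout SimClock, waitForChange: () -> Void) {
    while true {
        if root.runOneCycle() {
            clock.tick()       // one cycle's worth of faked propagation delay
        } else {
            // Blocking here costs real time but no *simulated* time, so the
            // analysed delays stay in nanoseconds however long the user idles.
            waitForChange()
        }
    }
}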
I believe I want my simulation to run "as fast as possible", but because an idle circuit may simply sit there doing nothing, it shouldn't be burning up a lot of processing time as such.
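One way I'm considering doing that (rather than inserting a task into the run loop at all) is to run the cycle loop on its own dispatch queue and park it on a semaphore whenever nothing has changed, signalling the semaphore from the main thread whenever user interaction touches a device. A rough sketch, ignoring the thread-safety of the model itself:

import Foundation

final class SimulationDriver {
    private let wakeUp = DispatchSemaphore(value: 0)
    private let queue = DispatchQueue(label: "simulation")

    /// cycle() runs one simulation cycle and returns true if any state changed.
    func start(cycle: @escaping () -> Bool) {
        queue.async {
            while true {
                while cycle() {}     // run cycles back-to-back while there's work to do
                self.wakeUp.wait()   // otherwise park; burns no CPU while idle
            }
        }
    }

    /// Call whenever user interaction changes a device's state (e.g. flipping a switch).
    func poke() { wakeUp.signal() }
}

So start() would be called with { root.runOneCycle() }, and flipping a switch in the UI would set that device's flag and then call poke().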
--Graham