Use UI or data for logic? (hijacked from Two NSRects adjacent/touching?)
- Subject: Use UI or data for logic? (hijacked from Two NSRects adjacent/touching?)
- From: Paul Bruneau <email@hidden>
- Date: Wed, 27 Jun 2007 08:59:17 -0400
I am planning a scheduling application where the schedule is similar
to a Gantt chart (but with many projects visible on the same chart).
I expect to create an NSView with NSRects very similar to what Brad
describes for his game, so I had to take the opportunity to ask a
couple "planning stage" questions.
> When the game's being played, and the user clicks
> somewhere in the view, I iterate through the tile
> array to see which one the user clicked, using a
> simple NSPointInRect() call.
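For reference, the hit-test loop Brad describes can be sketched in plain C (the `Point`/`Rect` structs and function names below are hypothetical stand-ins for NSPoint/NSRect; the containment test mirrors what NSPointInRect() does):

```c
#include <stdbool.h>
#include <stddef.h>

/* Plain-C stand-ins for NSPoint and NSRect (names hypothetical) */
typedef struct { double x, y; } Point;
typedef struct { double x, y, w, h; } Rect;

/* Same containment test NSPointInRect() performs */
static bool point_in_rect(Point p, Rect r) {
    return p.x >= r.x && p.x < r.x + r.w &&
           p.y >= r.y && p.y < r.y + r.h;
}

/* Return the index of the first tile containing p, or -1 if none */
static int hit_test(Point p, const Rect *tiles, size_t count) {
    for (size_t i = 0; i < count; i++)
        if (point_in_rect(p, tiles[i]))
            return (int)i;
    return -1;
}
```

In a Cocoa view the same loop would live in -mouseDown:, after converting the event location into view coordinates.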
This is how I am used to doing things procedurally (my only
programming experience to date), but I was thinking that with Cocoa
things would be a little different. I expected my rect shapes to
generate their own messages upon being clicked. Don't the elements of
an NSView get "told" by the OS that they have been clicked? Then,
based on their outlets (if I am using the term correctly), the
desired code gets called?
> This is where I'm running into trouble: only some
> tiles are clickable at any given moment. Part of what
> determines clickability is whether the tile has any
> sides that aren't touching another tile.
> So, I was hoping to find a way to determine this
> programmatically by feeding the clicked tile's rect
> into a function which could then quickly compare that
> rect against those of the other tiles.
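One way to sketch that comparison, again in plain C with hypothetical names: a tile is clickable if at least one of its four sides touches no other tile. (Real code would use NSRect's CGFloat fields and probably compare edges with a small tolerance rather than `==`.)

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct { double x, y, w, h; } Rect;

/* True if the half-open intervals [a0,a1) and [b0,b1) overlap */
static bool spans_overlap(double a0, double a1, double b0, double b1) {
    return a0 < b1 && b0 < a1;
}

/* True if b is flush against the given side of a (edge contact, not overlap) */
static bool touches_left(Rect a, Rect b)   { return b.x + b.w == a.x && spans_overlap(a.y, a.y + a.h, b.y, b.y + b.h); }
static bool touches_right(Rect a, Rect b)  { return a.x + a.w == b.x && spans_overlap(a.y, a.y + a.h, b.y, b.y + b.h); }
static bool touches_bottom(Rect a, Rect b) { return b.y + b.h == a.y && spans_overlap(a.x, a.x + a.w, b.x, b.x + b.w); }
static bool touches_top(Rect a, Rect b)    { return a.y + a.h == b.y && spans_overlap(a.x, a.x + a.w, b.x, b.x + b.w); }

/* Clickable if at least one side is not touched by any other tile */
static bool has_free_side(Rect tile, const Rect *others, size_t count) {
    bool left = false, right = false, top = false, bottom = false;
    for (size_t i = 0; i < count; i++) {
        left   |= touches_left(tile, others[i]);
        right  |= touches_right(tile, others[i]);
        top    |= touches_top(tile, others[i]);
        bottom |= touches_bottom(tile, others[i]);
    }
    return !(left && right && top && bottom);
}
```

The per-click cost is linear in the number of tiles, which is plenty fast for a board-sized array.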
This part of Brad's question also has me wondering about something. I
am torn and ignorant about whether to use the NSView and its elements
to determine decisions, or whether to use the underlying data to make
my decisions.
For instance, the user will want to drag a rect (representing a task
to be completed) either forward or backward on the "timeline" (left
or right). But she might be limited by other tasks being "in the
way". Or maybe I want the user to be able to "nudge" other rects out
of the way with the one that she is dragging around. Should I use the
actual rendered graphic rects to determine when such collisions
occur, or should I rely only on the underlying data to find these
collisions?
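If the collision test runs against the underlying data, it can work entirely in time units. A minimal sketch, assuming each task is just a start time and a duration (names hypothetical), that clamps a dragged task so it cannot overlap a neighbor:

```c
#include <stdbool.h>
#include <stddef.h>

/* Model-level task: times, not pixels (names hypothetical) */
typedef struct { double start, duration; } Task;

/* True if the two tasks' time spans overlap */
static bool tasks_overlap(Task a, Task b) {
    return a.start < b.start + b.duration &&
           b.start < a.start + a.duration;
}

/* Clamp a proposed new start so `moving` collides with no other task.
   Single pass: a blocking neighbor stops the drag at its near edge. */
static double clamp_start(Task moving, double proposed,
                          const Task *others, size_t count) {
    Task trial = { proposed, moving.duration };
    for (size_t i = 0; i < count; i++) {
        if (!tasks_overlap(trial, others[i])) continue;
        if (proposed > moving.start)
            trial.start = others[i].start - moving.duration; /* dragged right */
        else
            trial.start = others[i].start + others[i].duration; /* dragged left */
    }
    return trial.start;
}
```

"Nudging" would be the same test run the other way: instead of clamping the dragged task, push the blocking task's start by the overlap amount and repeat for anything it then hits.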
I want the user to be able to zoom in and out of time as desired.
Since the view is already scaled correctly and seems very handy to
use, I would like to go that way, but that seems to run counter to
the idea of separation between the UI and the data--part of me thinks
I should do all computations purely in the data, and use the UI only
for UI things, not for determining logic.
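The usual way to keep that separation and still get zooming is to confine the time-to-pixels mapping to one scale factor at the drawing/event boundary, so all scheduling logic stays in time units. A sketch (field names are hypothetical):

```c
/* One zoomable mapping between model time and view x-coordinates.
   All collision logic works in time; only drawing and mouse handling
   convert through these two functions. Names are hypothetical. */
typedef struct { double origin_time, points_per_hour; } TimeScale;

/* Model time -> view x (used when drawing task rects) */
static double time_to_x(TimeScale s, double t) {
    return (t - s.origin_time) * s.points_per_hour;
}

/* View x -> model time (used when interpreting mouse events) */
static double x_to_time(TimeScale s, double x) {
    return s.origin_time + x / s.points_per_hour;
}
```

Zooming then just changes `points_per_hour` and redraws; none of the data-level collision code needs to know about it.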
Is anything I am saying making any sense? All comments welcome.
_______________________________________________
Cocoa-dev mailing list (email@hidden)