Re: An architectural question for iOS app
- Subject: Re: An architectural question for iOS app
- From: Brian Bruinewoud <email@hidden>
- Date: Mon, 30 May 2011 22:03:28 +1000
One issue I forgot to mention: when I zoom in, the gesture recognisers on the objects stop recognising my attempts to drag and the scroll view pans instead - I have to zoom back out to nearly 1:1 before I can start dragging the objects again.
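What I'm planning to try, though I haven't verified it, is to stop the scroll view from cancelling touches that land on the object views and to let each object's pan recogniser run alongside the scroll view's own panning. Roughly this (contentScrollView is just a made-up name for my scroll view outlet):

    // Untested idea: don't let the scroll view swallow drags that start on an object.
    contentScrollView.canCancelContentTouches = NO;
    contentScrollView.delaysContentTouches = NO;

    // And, as the delegate of each object view's pan recogniser:
    - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
            shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
    {
        return YES;
    }

If someone knows whether that's the right direction, or whether it falls apart once the scroll view is zoomed, please say so.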
On 30/05/2011, at 21:55 , Brian Bruinewoud wrote:
> Hi all,
>
> I'm about to start writing an app (the target is an iPad running iOS 4.3 or later), but I've reached the point where I'm not sure of the best way to proceed with the architecture. I thought I'd better ask now before I paint myself into a corner.
>
> The parts of the app I'm having trouble with will be similar to a basic vector-graphics or CAD program. The user will be able to add objects to the document/screen from a palette and then move them around and edit them in various ways. Most documents will have around 25 objects; 50 would be a large number. The objects will exist in a document-level coordinate system which should be mapped to the view coordinate system. When the user does a pinch gesture, the objects will change in size, but at the end of the gesture they should all be redrawn at the new size so that they look neat and so that the amount of detail displayed within them can be adjusted. Finally, when the iPad is rotated I would like to keep the graphical aspects of the document un-rotated but rotate the UI and any text on the document.
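> The hook I was imagining for the end-of-pinch redraw is the scroll view delegate, something like the sketch below - completely untested, and contentView is just a made-up name for the UIView that holds the object views:
>
>     - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
>     {
>         return self.contentView;
>     }
>
>     - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
>                            withView:(UIView *)view
>                             atScale:(float)scale
>     {
>         // Once the pinch ends, ask each object view to redraw itself
>         // crisply at the new scale and adjust its level of detail.
>         for (UIView *objectView in self.contentView.subviews)
>         {
>             objectView.contentScaleFactor = scale * [UIScreen mainScreen].scale;
>             [objectView setNeedsDisplay];
>         }
>     }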
>
> So, given the above, I'm wondering how to represent the objects. Currently I just have them as subviews within the main view (which is a UIView within a UIScrollView); I can drag them around (thanks to a gesture recogniser attached to each object's view) and I can pinch to zoom (thanks to the scroll view). I haven't worked out how to do the level-of-detail stuff. I read that CATiledLayer might be of assistance, but I don't think it plays well with the UIView-per-object architecture I currently have. The views give me a fair bit of functionality - hit testing, animation, gesture recognition (of which I'll need more than the current drag-around) - but I haven't worked out how to do the nice zoom-and-redraw stuff.
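> For reference, the drag handling I have now is roughly this (simplified, with made-up names like objectView and handlePan:):
>
>     UIPanGestureRecognizer *pan =
>         [[UIPanGestureRecognizer alloc] initWithTarget:self
>                                                 action:@selector(handlePan:)];
>     [objectView addGestureRecognizer:pan];
>     [pan release];
>
>     - (void)handlePan:(UIPanGestureRecognizer *)recogniser
>     {
>         UIView *view = recogniser.view;
>         // Move the object view by the pan translation, then reset the
>         // translation so the next callback delivers a fresh delta.
>         CGPoint translation = [recogniser translationInView:view.superview];
>         view.center = CGPointMake(view.center.x + translation.x,
>                                   view.center.y + translation.y);
>         [recogniser setTranslation:CGPointZero inView:view.superview];
>     }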
>
> Other issues I'm having (in the Simulator):
>
> Sometimes the scroll view doesn't scroll and then randomly starts scrolling. Sometimes it scrolls even though I haven't zoomed; sometimes it doesn't even though I have zoomed. There are no gesture recognisers on the larger views that might be interfering, and I'm not accidentally touching the object views.
>
> I would have liked to use the transform property of the main view to map the object views' coordinates to the main UIView's coordinates. It seems like it should work, but I've had no success with it, and in the small proof of concept I did I just resorted to manipulating the coordinates directly.
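> What I tried is roughly the first part below; the second part is what the proof of concept actually does (documentScale is a made-up name for my document-to-view scale factor):
>
>     // What I expected to work - scale the whole content view:
>     self.contentView.transform =
>         CGAffineTransformMakeScale(documentScale, documentScale);
>
>     // What I fell back to - converting each point by hand:
>     CGPoint viewPoint =
>         CGPointApplyAffineTransform(documentPoint,
>             CGAffineTransformMakeScale(documentScale, documentScale));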
>
> I haven't started with the rotation thing yet so I don't know if that will confuse me.
>
> Anyway, I would like comments/suggestions. Is the view-per-object approach reasonable, and can you suggest solutions to my zoom/level-of-detail and other issues? Or should I do the whole thing in one view and do manual hit testing, dragging, etc.?
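> By manual hit testing I mean something along these lines - very rough, and DocObject, self.document.objects and self.draggedObject are all made up for the example:
>
>     - (void)drawRect:(CGRect)rect
>     {
>         CGContextRef context = UIGraphicsGetCurrentContext();
>         for (DocObject *object in self.document.objects)
>             [object drawInContext:context];
>     }
>
>     - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
>     {
>         CGPoint point = [[touches anyObject] locationInView:self];
>         // Walk the objects from front to back and remember the first hit.
>         for (DocObject *object in [self.document.objects reverseObjectEnumerator])
>         {
>             if (CGRectContainsPoint(object.frame, point))
>             {
>                 self.draggedObject = object;   // move it in touchesMoved:
>                 break;
>             }
>         }
>     }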
>
> Thanks,
> Brian.
_______________________________________________
Cocoa-dev mailing list (email@hidden)
Please do not post admin requests or moderator comments to the list.
Contact the moderators at cocoa-dev-admins(at)lists.apple.com
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden