This topic overlaps Cocoa and Darwin, so bear with me a minute: I would like to make some serious use of interprocess shared memory in Cocoa applications. I don't mean pipelines or their equivalents; my project involves having multiple different applications all making asynchronous reads and writes to different parts of a large (gigabyte) area of shared memory. (Details later, for the curious.) I would be using shmget, shmat, and so on, as low-level primitives to obtain and access the shared memory.

I have figured out that OS X doesn't support that much shared memory right out of the box. (BTW, I am running 10.4.10 on a 13-inch Intel MacBook with 1 GByte of physical memory.) I have done enough research to know that some of the parameters that control shared memory are set in /etc/rc, which is read at boot time, and I even managed to find some documentation on the web for what kern.sysv.shm<whatever> means. (The documentation I found was in Spanish, which I don't read or speak, but fortunately the Google translation was usable -- the search experience was sufficiently Monty-Pythonesque that I found myself lifting my feet in case of killer rabbits.) I also found a lot of comments from people who had tried to get shared memory to work on the Mac and had not been as successful as they had hoped.

I hope that is enough homework to warrant an appeal to this group for further pointers. Specifically:

1) Is there any thorough documentation anywhere about how shared memory on the Mac actually works? I need all the background information I can get, and I certainly need to know whether there are hard or soft range limits for the kern.sysv.<whatever> parameters, and what happens if you crowd those limits. The man pages alone aren't nearly enough, and googling and searching the Xcode documentation haven't turned up anything interesting yet.

2) What am I likely to break elsewhere in the Mac architecture if I start increasing the shared-memory limits?

3) How do the layers of software architecture upon which a Cocoa application runs interact with the shared-memory machinery? I haven't tried messing with /etc/rc yet, but in my application I could not successfully create even a 1 MByte shared-memory segment. I suspect that something else is using enough of the default shared-memory resources that there weren't enough left for my test, and I need to know how much to allow for any such uses. (Sketches of the failing test, and of how I am reading the current limits, appear at the end of this message.)

(If this question should go to another group, tell me and I will take it there.)

If you are still reading and are curious, what I have is a functioning Lisp system (Wraith Scheme -- see my web site if you are terminally curious (URL below)). I think I have a way to add an access-rights system to it such that multiple copies of the Lisp interpreter can interact with the same heap without stepping on each other's toes. The heap is the big shared-memory object I spoke of. (Lisps eat memory as if there were no tomorrow; 1 GByte is not a lot...) Using separate threads, one for each Lisp interpreter, isn't appropriate, because each interpreter has substantial amounts of truly private data. Plain fork (without exec) won't work in a Mac app, at least not if you want both parent and child to have a functioning GUI. Small numbers of cooperating parallel Lisp processes would be fun, and would be a natural fit for present and likely near-future Mac processor hardware.

Enough blathering. Thanks for any hints or suggestions you may have.
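P.S. Here, for concreteness, is roughly the test that fails for me at 1 MByte. It is a minimal sketch, not the real application code; the IPC_PRIVATE key and the error handling are just for illustration:

    /* shmtest.c -- try to create and attach a 1 MByte SysV segment. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        size_t size = 1024 * 1024;   /* 1 MByte for the test */
        int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);

        if (id == -1) {
            /* EINVAL usually means the requested size exceeds shmmax;
               ENOSPC usually means the system-wide segment or page
               limits are exhausted. */
            fprintf(stderr, "shmget failed: %s\n", strerror(errno));
            return 1;
        }

        void *addr = shmat(id, NULL, 0);
        if (addr == (void *)-1) {
            fprintf(stderr, "shmat failed: %s\n", strerror(errno));
        } else {
            printf("attached %zu bytes at %p\n", size, addr);
            shmdt(addr);
        }

        /* Mark the segment for removal so the test does not leak it. */
        shmctl(id, IPC_RMID, NULL);
        return 0;
    }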
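And here is how I am reading the current kern.sysv limits, in case the numbers help anyone spot the problem. This too is just a sketch; I am not sure the list of names is complete, and I do not know whether every release reports them as 32-bit or 64-bit values, so the code checks the length it gets back:

    /* shmlimits.c -- print the current SysV shared-memory sysctl limits. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        const char *names[] = {
            "kern.sysv.shmmax",   /* max bytes per segment */
            "kern.sysv.shmmin",   /* min bytes per segment */
            "kern.sysv.shmmni",   /* max segments, system-wide */
            "kern.sysv.shmseg",   /* max segments per process */
            "kern.sysv.shmall"    /* max total shared memory, in pages */
        };
        size_t i;

        for (i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
            unsigned char buf[16];
            size_t len = sizeof(buf);

            if (sysctlbyname(names[i], buf, &len, NULL, 0) != 0) {
                perror(names[i]);
                continue;
            }
            if (len == sizeof(int64_t)) {
                int64_t v;
                memcpy(&v, buf, sizeof(v));
                printf("%s = %lld\n", names[i], (long long)v);
            } else {
                int32_t v;
                memcpy(&v, buf, sizeof(v));
                printf("%s = %d\n", names[i], (int)v);
            }
        }
        return 0;
    }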
--
Jay Reynolds Freeman
Jay_Reynolds_Freeman@mac.com
http://web.mac.com/jay_reynolds_freeman (personal web site)