Re: Shared Memory and Distributed Objects
- Subject: Re: Shared Memory and Distributed Objects
- From: publiclook <email@hidden>
- Date: Sun, 16 Feb 2003 11:56:41 -0500
On Sunday, February 16, 2003, at 06:37 AM, Dan Bernstein wrote:
Hi,
I'm taking my first steps with distributed objects, and I'm looking
for an *efficient* way to make a large (~1MB) area of one process's
memory available to a second process for reading. Here's what I've
tried so far:
1. Using a shared memory object, opened with shm_open() and mapped
with mmap().
Problem: no matter what I did, I couldn't get mmap() to accept the fd
shm_open() had returned.
2. Using a shared memory area obtained with shmget() and mapped with
shmat().
Problems: a. shmget() didn't work with large values of size. I don't
want to separate the memory into chunks.
b. How am I supposed to pick a value for key? Try a random value and
hope it's free?
It would be even better if I could wrap my memory in an NSData and
vend it, however I'm concerned that this is implemented in such a
(clever) way that even though the entire area of memory isn't copied
as soon as the object is vended, whenever the vending process writes
to the area afterwards the relevant memory page is duplicated and the
writing appears only in that process's space, so that the client keeps
seeing the memory as it was at the time the NSData was vended.
Is there a way to do what I want using DOs? If not, can the problems
in 1. or 2. above be overcome?
It is likely that problems 1 and 2 can be overcome. It is also likely
that DO will work for you. When an object such as an NSData instance is
passed between processes via DO, the normal behavior is that one
process has the actual instance and the other has only a proxy. If you
pass an NSData instance this way, the storage of the NSData instance is
not copied. When the process that has the proxy asks for the NSData
instance's contents with a message, the message is delivered to the
actual object in another process and the data (probably a subset of all
of the data) is transmitted over the connection only then. Of course,
if you use the bycopy keyword, you will get copies on both sides of
the connection.
If you need very fast random access to 1M of memory shared by two or
more processes, I think shared memory is the way to go. I have some
experience with shared memory on other Unix-like platforms. I think
Darwin provides an emulation layer for Sys V style shared memory, but I
haven't tried it. There are certainly other applications on the system
using similar features. Make sure that Darwin's limit for shared memory
sizes is not too small.
I did a quick check of the man pages and there are no ipcs or ipcrm
commands. That makes me worry a little about how good Darwin's
emulation of Sys V shared memory is.
Many thanks in advance,
-- Dan Bernstein
_______________________________________________
cocoa-dev mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/cocoa-dev
Do not post admin requests to the list. They will be ignored.