Re: shm_open and mmap: Invalid argument?
- Subject: Re: shm_open and mmap: Invalid argument?
- From: Ethan Tira-Thompson <email@hidden>
- Date: Mon, 21 Feb 2005 01:23:08 -0500
Thanks to everyone for their insight... I've gotten things working, but
there may be more here for you to work on amongst yourselves (assuming
there are a few kernel hackers on here...). Actually, I think
documentation or a code sample was the real shortcoming, and that's not
limited to Darwin (I did extensive web searching before winding up
here).
So, ftruncate does seem to be the key to creating a shared memory
region - call it after shm_open and before mmap. However, it should
only be called on a *new* region at its creation... it will return an
error if you're connecting to a pre-existing region. That seems
reasonable enough, but I found this aspect to be rather undocumented.
(I suppose the size/len argument of mmap is meant for mapping a
subsection of a file into memory, so you have to use ftruncate to
pre-size the shared memory region, and can then selectively map
portions of it into the process. I initially expected mmap to do the
region sizing itself...)
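For concreteness, here's a minimal creation sketch (not my actual code -
the "/my_region" name and 4096-byte size are just placeholders):

/* Creation side: size the brand-new region with ftruncate, then map it. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* O_EXCL makes this fail if a region of the same name already exists */
    int fd = shm_open("/my_region", O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); exit(1); }

    /* Only at creation time: set the region's size. */
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); exit(1); }

    void *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); exit(1); }

    close(fd);  /* the mapping stays valid after closing the descriptor */
    /* ... use mem ... */
    return 0;
}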
The next step is shm_unlink. It seems it should *not* be called right
after creation as a way of scheduling removal for later (i.e. shm_open,
ftruncate, mmap, close, shm_unlink). shm_unlink removes the name
immediately, thus blocking other processes from accessing the region
since it's on the way out. However, I suppose it's fine to call it
immediately following the last attach you are expecting, to guarantee
that a sudden crash will still cause the region to be removed.
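For reference, the attach side looks like this (same placeholder name;
this assumes fstat reports the size the creator set with ftruncate -
otherwise the size has to be agreed on out of band):

/* Attach side: no O_CREAT and no ftruncate - the region must already
 * exist and already be sized. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int fd = shm_open("/my_region", O_RDWR, 0);
    if (fd < 0) { perror("shm_open"); exit(1); }

    /* Recover the size the creator set. */
    struct stat sb;
    if (fstat(fd, &sb) < 0) { perror("fstat"); exit(1); }

    void *mem = mmap(NULL, (size_t)sb.st_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); exit(1); }

    close(fd);
    /* ... use mem ... */
    return 0;
}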
The safest general behavior (if you don't know when an attach is the
last one) seems to be to wait until the last reference is being
removed, and then call shm_unlink. This is directly analogous to using
shmctl to delete a region under SysV. (Initially I had called it on
*every* dereference, so I'm not completely sure what happens if it's
not called for the last dereference, or whether the errors I got were
from the first call or the last call (i.e. did the first call succeed
and following ones fail, or did all fail except the last?))
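So the detach step is just a sketch like the following, where
is_last_reference is a hypothetical flag - in practice the reference
count would live inside the shared region itself, protected by a
semaphore or an atomic operation:

#include <sys/mman.h>
#include <stddef.h>
#include <stdio.h>

void detach_region(const char *name, void *mem, size_t len,
                   int is_last_reference) {
    if (munmap(mem, len) < 0)
        perror("munmap");
    /* Only the final detacher removes the name, mirroring SysV
     * shmctl(IPC_RMID) semantics. */
    if (is_last_reference && shm_unlink(name) < 0)
        perror("shm_unlink");
}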
This brings up an interesting problem, however. If you shm_open a bunch
of regions and never unlink them, where do they go when the process
gets shut down? My experience from the initial email implies they stick
around (blocking further attempts to make new regions of the same
name). Is there a system limit on these resources? Is it possible to
get a list of current regions? Is it possible to clean them out
without rebooting (yes if we can get the list of names, but
otherwise...)? I could see some havoc being caused by a program which
leaks regions under random or unpredictable names, which could then
never be reclaimed. Surely that's not the case? (say it ain't so!)
For the region names, it doesn't seem to matter whether they start with
a '/' or not. I'm not going to bother for now; I'll write back if I run
into portability issues on other platforms. I haven't checked whether
it's a problem if a file has the same name. (implied by #2 from Tim?)
Thus, for future reference, correct (or at least functional) usage
appears to be:
CREATION: (initial setup)
shm_open
ftruncate
mmap
close
ACCESS: (from another process)
shm_open
mmap
close
REMOVAL: (when done)
munmap
if(wasLastReference) shm_unlink;
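Putting it together, a process that doesn't know in advance whether it
will be the creator or just an attacher could do something like the
sketch below: try to create with O_EXCL, and fall back to a plain open
on EEXIST. (Again, the name and size are placeholders, and this doesn't
synchronize against the creator still being mid-ftruncate - a real
implementation would need that.)

#include <errno.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

void *open_region(const char *name, size_t size) {
    int creator = 1;
    int fd = shm_open(name, O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd < 0 && errno == EEXIST) {   /* someone beat us to creation */
        creator = 0;
        fd = shm_open(name, O_RDWR, 0);
    }
    if (fd < 0) { perror("shm_open"); return NULL; }

    /* CREATION step: size the region exactly once. */
    if (creator && ftruncate(fd, size) < 0) {
        perror("ftruncate");
        close(fd);
        return NULL;
    }

    void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                         /* descriptor not needed once mapped */
    return (mem == MAP_FAILED) ? NULL : mem;
}

int main(void) {
    void *mem = open_region("/my_region", 4096);
    if (!mem) exit(1);
    /* ... communicate through mem, then munmap and (for the last user)
     * shm_unlink as in the REMOVAL step above ... */
    munmap(mem, 4096);
    return 0;
}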
My OS is Mac OS X 10.3.8.
My code for this can be found at:
http://cvs.tekkotsu.org/cgi-bin/viewcvs.cgi/Tekkotsu/Shared/Attic/RCRegion.cc?rev=1.1.2.4&content-type=text/vnd.viewcvs-markup
This class contains code for either SysV or POSIX style shared memory,
and can be switched back and forth using a #define. (TEKKOTSU_SHM_STYLE
set to either SYSV_SHM or POSIX_SHM)
This is part of a framework for robotics software, which currently runs
on the Sony Aibo, but we are trying to port to desktop computers for
local development and simulation. The Aibo uses processes with shared
memory regions for communication, and in order to allow the most
accurate local simulation possible, we are doing the same for the
desktop. (threads would've been a little easier to port initially, but
would wind up sharing global values, not just specified objects,
leading to different behavior between code on the desktop and code on
the actual robot. In general, this architecture actually seems quite
nice for preventing unanticipated interaction between different threads
of execution, but still allowing full interaction in specified key
shared regions.)
Thanks!
-ethan