Re: preventing bad memory access
- Subject: Re: preventing bad memory access
- From: Don Quixote de la Mancha <email@hidden>
- Date: Thu, 20 Oct 2011 20:50:35 -0700
On Thu, Oct 20, 2011 at 8:19 PM, Wilker <email@hidden> wrote:
> But they are really safe? There is anything that I can do for extreme
> situations in case to avoid bad memory access?
Yes, they are safe. The "f" I/O calls go back to the very beginnings
of the C standard library in the 1970s. I myself have been using
them since 1988, on many different platforms.
By "bad memory access" are you concerned about excessive file block
storage caching, or crashes due to bad memory accesses such as stray
pointers?
If you're concerned about the performance hit of caching, the best
thing you can do is to get your application at least mostly working,
then profile it with Instruments.
If this is on Mac OS X, run the top command in the Terminal while you
are testing your app - see "man top" for what the fields mean. Run
top for a little while on a system with a normal load, then test your
app and check how many more buffers there are in the kernel page
cache.
If you're concerned about crashing due to bad memory access, and your
application is on Mac OS X, run it with Guard Malloc enabled. Guard
Malloc uses the hardware memory management unit to catch bad memory
references.
Guard Malloc is not supported on iOS devices, but you can use it in
the iOS Simulator.
You can also enable malloc() diagnostics in Xcode, such as pre-filling
new memory with garbage, as well as post-filling free blocks.
In the same place where you enable malloc() diagnostics you can have
stack traces recorded whenever a malloc is done, to track down what
parts of your code do the most allocation.
If you are not using garbage collection, a very powerful tool is the
Valgrind Memory Debugger. It not only validates all memory
references, but it also checks the input parameters to most OS X API
calls.
Unfortunately I understand Valgrind does not yet support garbage
collection. I don't see why it couldn't support ARC but I really
don't know whether it does.
Finally, use assert(), which I call "The Test That Keeps On Testing".
Whenever your functions or methods have particular requirements of
their input parameters, use assert() to validate them. Whenever any
of your functions or methods make any particular guarantee about the
state of your data upon their return, again use assert() to validate
that the guarantee holds.
For example, the strlen() function must not be passed a NULL char
pointer. If I were writing my own implementation of the C standard
library, I would write it something like this:

#include <assert.h>
#include <stddef.h>

size_t strlen( const char *inStr )
{
	size_t result = 0;

	assert( NULL != inStr && "inStr must not be NULL" );

	while ( *inStr++ )
		++result;

	return result;
}
assert() is a macro that is stripped out when NDEBUG is #defined,
which is done for you in release and profile builds. If you use it
properly, it does not slow your debug builds down much at all, and
because it is stripped out of release builds, there is no slow-down
for the end user.
Class invariants are a form of assertion that relies on the fact that
many classes have a condition which always holds true. The condition
may be temporarily broken within class member functions or methods,
but it is always restored before they return.
Not every class has a sensible invariant, but many of them do. If you
design your classes so they definitely have some manner of invariant,
they will likely be easier for you and your colleagues to understand.
You would use them like this, in C++:
// MyInvariant.h
//
// In debug builds, DeclareInvariant declares an Invariant() member
// function for a class, and CheckInvariant( obj ) asserts that the
// invariant holds. When NDEBUG is #defined, both expand to nothing.
#include <assert.h>

#ifndef NDEBUG
#define DeclareInvariant virtual bool Invariant() const;
#define CheckInvariant( obj ) assert( (obj).Invariant() )
#else
#define DeclareInvariant
#define CheckInvariant( obj )
#endif

class Foo
{
public:
	Foo();
	virtual ~Foo();

	DeclareInvariant // NOTE: No semicolon!

	virtual int bar();
};

int Foo::bar()
{
	CheckInvariant( *this );

	// Do some real computation here that messes up the invariant -
	// but only temporarily.

	// Just before the member function exits, restore the invariant.
	CheckInvariant( *this );

	return 0;
}

Two macros are needed here: the declaration macro supplies its own
semicolon, so no stray semicolon is left behind in the class
declaration when NDEBUG is defined, and CheckInvariant() expands to a
complete assert() statement in debug builds but to nothing at all in
release builds.
You also want other classes to be able to check any class' invariant
at any time, from outside that class' member functions or methods:

void Baz::boo()
{
	Foo *theFoo = new Foo();

	CheckInvariant( *theFoo );

	delete theFoo; // don't leak the Foo
}
There are also thread-safety considerations: either the invariant
check itself must be thread safe, or you must ensure the invariant is
never checked while another thread has it temporarily broken.
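One hypothetical sketch of the first approach - the class name and
members below are invented for illustration - is to examine the
invariant only while holding the same mutex that protects the
object's state:

```cpp
#include <assert.h>
#include <mutex>

// Toy class whose invariant (balance equals the sum of deposits) is
// only ever checked while the state mutex is held, so no thread can
// observe it mid-update.
class BankAccount
{
public:
	void deposit( int amount )
	{
		std::lock_guard< std::mutex > lock( mMutex );
		mBalance += amount;
		mDeposited += amount;
		assert( InvariantLocked() ); // safe: we hold the lock
	}

	int balance() const
	{
		std::lock_guard< std::mutex > lock( mMutex );
		return mBalance;
	}

private:
	// Precondition: the caller must hold mMutex.
	bool InvariantLocked() const { return mBalance == mDeposited; }

	mutable std::mutex mMutex;
	int mBalance = 0;
	int mDeposited = 0;
};
```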
Class invariants, asserted preconditions and asserted postconditions
are together known as "Programming by Contract". Programming by
Contract is built into the Eiffel programming language, which
unfortunately is not as popular as I think it should be.
But Programming by Contract is not hard at all to implement as a macro
package for most other programming languages.
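Here is a minimal sketch of such a macro package in C++ - the
REQUIRE and ENSURE names and the safe_divide() example are my own
hypothetical choices, not from any existing library:

```cpp
#include <assert.h>

// A minimal Programming by Contract macro package. REQUIRE checks
// preconditions and ENSURE checks postconditions; like assert()
// itself, both strip away to nothing when NDEBUG is defined.
#ifndef NDEBUG
#define REQUIRE( cond ) assert( (cond) && "precondition violated" )
#define ENSURE( cond )  assert( (cond) && "postcondition violated" )
#else
#define REQUIRE( cond ) ((void) 0)
#define ENSURE( cond )  ((void) 0)
#endif

static int safe_divide( int numerator, int denominator )
{
	REQUIRE( denominator != 0 );

	int quotient = numerator / denominator;

	// Postcondition: quotient and remainder recombine to the
	// original numerator.
	ENSURE( quotient * denominator + numerator % denominator
	        == numerator );
	return quotient;
}
```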
--
Don Quixote de la Mancha
Dulcinea Technologies Corporation
Software of Elegance and Beauty
http://www.dulcineatech.com
email@hidden
_______________________________________________
Cocoa-dev mailing list (email@hidden)