Re: malloc was optimized out
- Subject: Re: malloc was optimized out
- From: Dmitry Markman <email@hidden>
- Date: Mon, 04 Jul 2016 22:11:48 -0400
Hi Jens
I think your explanation makes sense, thank you very much
I’m asking for 0x1000000000000 bytes (281474976710656 B, about 256 TiB) on my MacBook Pro with a 1 TB SSD and 16 GB of RAM, and in the release build malloc doesn’t return NULL
In our case, a customer is trying to build a huge Simulink model, and we’d like to error out at the earliest possible stage (allocation)
and notify the customer that it’s not possible to create such a model.
In our real case we’re trying to allocate about 10 chunks of memory, 200 GB each, so it’s far less than in my example
thanks again
> On Jul 4, 2016, at 9:35 PM, Jens Alfke <email@hidden> wrote:
>
> I think there are two different things going on.
>
> (1) In your contrived example, the malloc/free calls are optimized away entirely by the compiler. First the compiler optimizes away the “data != NULL” test, replacing it with “true” on the assumption malloc can’t fail; then it sees that there are no remaining uses of “data” so it optimizes out the malloc and free calls.
>
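[Editor’s note: for reference, a minimal sketch of the kind of contrived example being discussed; the original code isn’t quoted in this message, so the exact shape here is a reconstruction using the size from the reply above. Built with clang -O2, the unused malloc/free pair can be removed entirely, and the NULL check disappears with it.]

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* 0x1000000000000 bytes, as in the example above */
        char *data = malloc((size_t)1 << 48);
        if (data == NULL) {
            /* In a release (-O2) build this branch can be deleted: the
               compiler sees that "data" is never used, removes the
               malloc/free pair, and the test folds away with them. */
            fprintf(stderr, "malloc failed\n");
            return 1;
        }
        free(data);
        printf("malloc returned a non-NULL pointer\n");
        return 0;
    }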
> I think you’ve found an edge case here — in general malloc will not return NULL whether or not it can supply that much memory (see below); however, it appears that given a sufficiently ridiculous size (you asked for something like 150 terabytes!) it _will_ return NULL. I don’t know how “sufficiently ridiculous” is determined; maybe there’s a hardcoded limit?
> I am not a compiler engineer, but my guess is that optimizing away mallocs is sufficiently useful in real code that they’ve decided to ignore the edge case where someone passes a ridiculously large size.
>
> (2) In your real code, which you said crashes, what’s probably going on is that malloc is overcommitting — it allocates as much address space as you asked for, but doesn’t map it to any actual RAM or backing store. Then as the address space is used, the page faults trigger allocation of actual RAM and backing storage, probably by growing the swap file. At some point it becomes unable to allocate (probably because the boot disk filled up?), and the page-fault fails with a segfault.
>
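[Editor’s note: a small sketch of that failure mode, using the 200 GB chunk size from the real case described above. The malloc itself succeeds because only address space is reserved; the trouble starts once the pages are actually touched. The volatile qualifier is only there to keep the optimizer from discarding the stores, per point (1).]

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t size = (size_t)200 * 1024 * 1024 * 1024;   /* 200 GB */
        volatile char *data = malloc(size);
        if (data == NULL) {
            /* Rarely taken on an overcommitting system. */
            fprintf(stderr, "malloc failed\n");
            return 1;
        }
        /* Touch one byte per 4 KB page so real memory has to be committed.
           This loop, not the malloc above, is where the swap file grows and
           where the process eventually dies if backing store runs out. */
        for (size_t i = 0; i < size; i += 4096)
            data[i] = 1;
        free((void *)data);
        return 0;
    }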
> Operating systems behave differently with regard to overcommitting. I’m not very familiar with Linux and almost totally unfamiliar with Windows. My understanding is that Linux has some kind of “OOM Killer” process that will kill any process that’s using too much memory; presumably this happens before that process would run out of allocatable space. At the other extreme, iOS doesn’t use swap space at all and will kill a process that tries to use too much of physical RAM.
>
> I don’t know what the best way is to ask for huge amounts of address space such that it’s all pre-mapped and can all safely be used without segfaulting. There may be an option to vm_allocate that does this. (Calling calloc won’t help; for large allocations that fall through to vm_allocate, calloc and malloc are equivalent.) You may want to ask on the darwin_userlevel or darwin_dev mailing lists.
>
> —Jens
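[Editor’s note: one possible approach to that last point, not confirmed anywhere in this thread: mmap the region and then mlock it, so the kernel has to back it immediately and failure shows up at allocation time instead of as a later segfault. mlock is limited by RLIMIT_MEMLOCK and by physical RAM, so for 200 GB chunks this is a sketch of the idea, not a drop-in fix; the helper name below is hypothetical.]

    #include <stdio.h>
    #include <sys/mman.h>

    /* Hypothetical helper: reserve "size" bytes and force them to be
       backed right away, so failure is reported here rather than as a
       segfault when the memory is first used. */
    static void *alloc_prefaulted(size_t size) {
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            return NULL;                  /* no address space available */
        if (mlock(p, size) != 0) {        /* can't wire that much memory */
            munmap(p, size);
            return NULL;
        }
        return p;
    }

    int main(void) {
        size_t size = (size_t)200 * 1024 * 1024 * 1024;   /* one 200 GB chunk */
        void *p = alloc_prefaulted(size);
        printf("%s\n", p ? "allocation is fully backed"
                         : "allocation cannot be backed");
        return 0;
    }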
Dmitry Markman