About memory allocation on HiKey (OOM)

I use the following program to test memory allocation on the HiKey board:

#include <stdio.h>
#include <stdlib.h>
 
#define MEGABYTE (1024*1024)
 
int main(int argc, char *argv[])
{
        void *myblock = NULL;
        int count = 0;
 
        while (1)
        {
                myblock = (void *) calloc(1,MEGABYTE);
                if (!myblock) break;
                printf("Currently allocating %d MB\n", ++count);
        }
 
        exit(0);
}

I also disable the OOM killer by using echo 2 > /proc/sys/vm/overcommit_memory,
and then I find it can only allocate about 240MB:

Currently allocating 239 MB
Currently allocating 240 MB
root@linaro-alip:/home/linaro/test#
while the HiKey has nearly 900MB of memory:
root@linaro-alip:/home/linaro/test# free -h
             total       used       free     shared    buffers     cached
Mem:          894M       125M       769M       5.2M       4.7M        31M
-/+ buffers/cache:         89M       805M
Swap:           0B         0B         0B

Does anyone know the reason? I am using kernel 4.1 + HiKey + Debian. Thanks.

This doesn’t make much sense - it seems to me you might be mixing concepts (virtual memory vs physical memory).

Of course if you pin the pages in your program (i.e. mlockall()), then you will reach the expected ~700MB.

what release are you using? I can’t reproduce what you see on 16.03 - I managed to allocate around 68GB.

Out of curiosity, can you try running without disabling the OOM killer and using mlockall(MCL_CURRENT|MCL_FUTURE)? Just add that call at the start of your program.
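Something like this, for example - just a sketch, where the only additions relative to your original test are the <sys/mman.h> header and the mlockall() call:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define MEGABYTE (1024*1024)

int main(int argc, char *argv[])
{
        void *myblock = NULL;
        int count = 0;

        /* Lock current and future pages into RAM, so every allocation
           has to be backed by physical memory straight away. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
                perror("mlockall");

        while (1)
        {
                myblock = (void *) calloc(1, MEGABYTE);
                if (!myblock) break;
                printf("Currently allocating %d MB\n", ++count);
        }

        exit(0);
}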

OK, after enabling swap space and disabling overcommit I do get your numbers. I'll look into this.

So:

root@linaro-alip:/proc/sys/vm# free -lm
             total       used       free     shared    buffers     cached
Mem:           891        154        737          2          4         44
Low:           891        154        737
High:            0          0          0
-/+ buffers/cache:        105        786
Swap:           49          6         43

From the kernel docs:

2 - Don’t overcommit. The total address space commit
for the system is not permitted to exceed swap + a
configurable amount (default is 50%) of physical RAM.
Depending on the amount you use, in most situations
this means a process will not be killed while accessing
pages but will receive errors on memory allocation as
appropriate.

If we do some maths: 891MB/2 ≈ 445MB, and 445MB - 154MB ≈ 291MB.
I think this is pretty much what we are seeing, +/- a factor which I can't explain without looking at the implementation (maybe some of the memory gets immediately pinned by running processes, I guess).
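If you want to see the limit the kernel is actually enforcing here, /proc/meminfo exposes it as CommitLimit (with the current commitment in Committed_AS). A quick throwaway helper I sketched to print both - not part of your test:

#include <stdio.h>
#include <string.h>

/* Print the kernel's commit limit and current commitment from /proc/meminfo.
   With overcommit_memory=2 (and no overcommit_kbytes set), CommitLimit is
   roughly swap + overcommit_ratio% of RAM, and allocations start failing
   once Committed_AS would exceed it. */
int main(void)
{
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) {
                perror("fopen");
                return 1;
        }

        while (fgets(line, sizeof(line), f)) {
                if (!strncmp(line, "CommitLimit:", 12) ||
                    !strncmp(line, "Committed_AS:", 13))
                        fputs(line, stdout);
        }
        fclose(f);
        return 0;
}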

does this help?

This doesn’t make much sense – it seems to me you might be mixing concepts (virtual memory vs physical memory).
Of course if you pin the pages in your program (ie mlockall()), then you will reach the expected ~700MB.
what release are you using? I can’t reproduce what you see on 16.03 – I managed to allocate around 68GB.

Firstly, thanks for your reply, very impressive! Yes, I lack memory knowledge. I use 15.03 (I just replaced the kernel with 4.1, but the same phenomenon happens on 3.18). If I do not disable the OOM killer, this test triggers it, so I tried disabling the OOM killer and then found I can only get about 240MB, which made me a little curious. I am running a test on HiKey which allocates memory until it cannot get any more; this test passes on another platform (kernel 3.8), but always triggers the OOM killer on HiKey (and kills all the processes), so I dug into this issue, but I have no idea currently. Could you give me any suggestions about this problem? Thanks.

I also tried mlockall(MCL_CURRENT|MCL_FUTURE), and yes, I got nearly 836MB:

Currently allocating 835 MB
Currently allocating 836 MB

Message from syslogd@linaro-alip at Mar 21 03:27:28 …
kernel:[ 2096.498532] Call trace:
Killed


BTW, in my understanding, this test:


#include <stdio.h>
#include <stdlib.h>

#define MEGABYTE (1024*1024)

int main(int argc, char *argv[])
{
        void *myblock = NULL;
        int count = 0;

        while (1)
        {
                myblock = (void *) malloc(MEGABYTE);
                if (!myblock) break;
                printf("Currently allocating %d MB\n", ++count);
        }

        exit(0);
}

should not trigger the OOM killer, because it only allocates memory but never actually uses (touches) it.
Very strange: I run this test on HiKey and it triggers the OOM killer, but on another board, which is 32-bit, it exits because of a failed malloc() rather than being OOM-killed.
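For example, a small sketch like this (just to illustrate what I mean, the 100MB size is arbitrary) shows VmSize growing as soon as malloc() returns, while VmRSS only grows after the memory is actually written:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MEGABYTE (1024*1024)

/* Print VmSize (virtual address space) and VmRSS (resident physical
   memory) of the current process from /proc/self/status. */
static void show_mem(const char *tag)
{
        char line[256];
        FILE *f = fopen("/proc/self/status", "r");
        if (!f) return;
        while (fgets(line, sizeof(line), f))
                if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
                        printf("%s %s", tag, line);
        fclose(f);
}

int main(void)
{
        show_mem("before      ");
        void *p = malloc(100 * MEGABYTE);      /* reserve address space only */
        show_mem("after malloc");
        if (p)
                memset(p, 1, 100 * MEGABYTE);  /* touch it: now RSS grows too */
        show_mem("after memset");
        free(p);
        return 0;
}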

Not being a memory expert myself I still think the behavior you observe makes sense:

  1. When overcommit is disabled, even if you don’t use the memory, you can’t overcommit. Period. Moreover the client will not be able to allocate all the free physical memory in the system (roughly 50% as per the kernel docs). To me that seems sensible and this is what we observe (see the sketch at the end of this post).

  2. When every page allocated is pinned to the process (mlock), the client can allocate all the free memory but no more than that (the kernel wouldn’t otherwise be able to guarantee that the pages are resident in RAM).

As to why other systems behave differently I can’t really comment but if there are any kernel vm experts please feel free to chip in.
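To make point 1 above concrete, here is a minimal sketch (essentially your original test, just reporting errno): with overcommit_memory set to 2, the loop stops with ENOMEM at malloc() time, before any page is touched and without the OOM killer getting involved, exactly as the kernel docs quoted earlier describe.

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>

#define MEGABYTE (1024*1024)

/* With /proc/sys/vm/overcommit_memory set to 2, the commit accounting is
   charged when malloc() asks for the address space, so the loop ends with
   ENOMEM long before any page is written or the OOM killer wakes up. */
int main(void)
{
        int count = 0;

        while (malloc(MEGABYTE))
                count++;

        printf("stopped after %d MB: %s\n", count, strerror(errno));
        return 0;
}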