Hi all:
I am hitting a kernel problem with random segmentation faults on x86_64. In my testcase, the size of the local variables exceeds 20MB.
When the testcase is run with the default stack size limit of 8192KB, it segfaults, as expected.
When I increase the stack size limit to 1024000KB (ulimit -s 1024000), the testcase passes.

But when I run the testcase 100 times, it occasionally hits a random segmentation fault.

The commit fee7e49d45149fba60156f5b59014f764d3e3728 ("mm: propagate error from stack expansion even for guard page")
may be the cause of this problem: when I revert it, the testcase no longer produces the random segmentation fault.
Can anyone give some ideas about this problem?
Best Regards
Wang Long
############ Test Environment #############
# uname -a
Linux ivybridge 4.1.0-rc2+ #3 SMP PREEMPT Wed May 6 10:46:57 CST 2015 x86_64 x86_64 x86_64 GNU/Linux
############ The Testcase ################
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#define KB *1024
#define MB *(1024*1024)
#define GB *(1024*1024*1024)
int main(int argc, char** argv)
{
int ret;
struct rlimit rlim;
rlim.rlim_cur=20 MB;
rlim.rlim_max=20 MB;
ret = setrlimit(RLIMIT_AS, &rlim);
if ( 0 > ret)
{
perror("setrlimit failed");
exit(1);
}
printf("setrlimit success\n");
char tmp[20 MB];
int i = 0;
for (i = 0; i < 20 MB; i++)
{
tmp[i]=1;
}
printf("test success\n");
exit(1);
}
# My config
>
> Hi all:
>
> I am hitting a kernel problem with random segmentation faults on x86_64. In my testcase, the size of the local variables exceeds 20MB.
> When the testcase is run with the default stack size limit of 8192KB, it segfaults, as expected.
> When I increase the stack size limit to 1024000KB (ulimit -s 1024000), the testcase passes.
>
> But when I run the testcase 100 times, it occasionally hits a random segmentation fault.
>
> The commit fee7e49d45149fba60156f5b59014f764d3e3728 ("mm: propagate error from stack expansion even for guard page")
> may be the cause of this problem: when I revert it, the testcase no longer produces the random segmentation fault.
>
> Can anyone give some ideas about this problem?
>
> Best Regards
> Wang Long
>
> ############ Test Environment #############
>
> # uname -a
> Linux ivybridge 4.1.0-rc2+ #3 SMP PREEMPT Wed May 6 10:46:57 CST 2015 x86_64 x86_64 x86_64 GNU/Linux
>
>
> ############ The Testcase ################
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <sys/resource.h>
>
> #define KB *1024
> #define MB *(1024*1024)
> #define GB *(1024*1024*1024)
>
> int main(int argc, char** argv)
> {
> int ret;
> struct rlimit rlim;
>
> rlim.rlim_cur=20 MB;
> rlim.rlim_max=20 MB;
Can you please get rlimit before setting it?
And try again without reverting fee7e49d45?
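Something like this (an untested sketch; only the getrlimit() call is new, inserted
before the existing setrlimit() call, with the rest of your testcase unchanged):

	/* Read the limits currently in effect before overwriting them. */
	if (getrlimit(RLIMIT_AS, &rlim) < 0) {
		perror("getrlimit failed");
		exit(1);
	}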
> ret = setrlimit(RLIMIT_AS, &rlim);
> if ( 0 > ret)
> {
> perror("setrlimit failed");
> exit(1);
> }
>
> printf("setrlimit success\n");
>
> char tmp[20 MB];
> int i = 0;
>
> for (i = 0; i < 20 MB; i++)
> {
> tmp[i]=1;
> }
>
> printf("test success\n");
> exit(1);
> }
>
On 2015/5/6 16:20, Hillf Danton wrote:
>>
>> Hi all:
>>
>> I am hitting a kernel problem with random segmentation faults on x86_64. In my testcase, the size of the local variables exceeds 20MB.
>> When the testcase is run with the default stack size limit of 8192KB, it segfaults, as expected.
>> When I increase the stack size limit to 1024000KB (ulimit -s 1024000), the testcase passes.
>>
>> But when I run the testcase 100 times, it occasionally hits a random segmentation fault.
>>
>> The commit fee7e49d45149fba60156f5b59014f764d3e3728 ("mm: propagate error from stack expansion even for guard page")
>> may be the cause of this problem: when I revert it, the testcase no longer produces the random segmentation fault.
>>
>> Can anyone give some ideas about this problem?
>>
>> Best Regards
>> Wang Long
>>
>> ############ Test Environment #############
>>
>> # uname -a
>> Linux ivybridge 4.1.0-rc2+ #3 SMP PREEMPT Wed May 6 10:46:57 CST 2015 x86_64 x86_64 x86_64 GNU/Linux
>>
>>
>> ############ The Testcase ################
>>
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <sys/resource.h>
>>
>> #define KB *1024
>> #define MB *(1024*1024)
>> #define GB *(1024*1024*1024)
>>
>> int main(int argc, char** argv)
>> {
>> int ret;
>> struct rlimit rlim;
>>
>> rlim.rlim_cur=20 MB;
>> rlim.rlim_max=20 MB;
>
> Can you please get rlimit before setting it?
> And try again without reverting fee7e49d45?
>
Hi, Hillf Danton,

After adding a getrlimit() call before setting the limit, the testcase no longer
triggers the random segmentation fault.

Could you please explain why? Are there any special considerations when we
use the setrlimit() and getrlimit() functions?
Best Regards
Wang Long
>> ret = setrlimit(RLIMIT_AS, &rlim);
>> if ( 0 > ret)
>> {
>> perror("setrlimit failed");
>> exit(1);
>> }
>>
>> printf("setrlimit success\n");
>>
>> char tmp[20 MB];
>> int i = 0;
>>
>> for (i = 0; i < 20 MB; i++)
>> {
>> tmp[i]=1;
>> }
>>
>> printf("test success\n");
>> exit(1);
>> }
>>
>
>
>
On Wednesday 2015-05-06 05:46, long.wanglong wrote:
>
>int main(int argc, char** argv)
>{
> rlim.rlim_cur=20 MB;
> rlim.rlim_max=20 MB;
> ret = setrlimit(RLIMIT_AS, &rlim);
> [...]
> char tmp[20 MB];
> for (i = 0; i < 20 MB; i++)
> tmp[i]=1;
If tmp alone takes 20 MB, where is the rest of the program (text, libraries, heap,
and the existing stack) supposed to find space when you only allow 20 MB in total?
RLIMIT_AS limits the entire virtual address space, not just the stack, so this is
bound to fail under normal circumstances.
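To make that concrete, here is a rough, untested sketch of a setup that leaves the
address-space limit with headroom above the array (the extra 64 MB is only an
illustrative guess, not a measured figure; the stack limit still has to be raised,
e.g. ulimit -s 1024000, because the 20 MB array lives on the stack):

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

#define MB (1024 * 1024)

int main(void)
{
	struct rlimit rlim;
	char tmp[20 * MB];
	size_t i;

	/* Read the current limits so rlim_max is preserved unchanged. */
	if (getrlimit(RLIMIT_AS, &rlim) < 0) {
		perror("getrlimit failed");
		exit(1);
	}

	/*
	 * RLIMIT_AS covers the whole virtual address space (text, libraries,
	 * heap, stack), so leave headroom above the 20 MB array.  The 64 MB
	 * of extra room is only an illustrative guess.
	 */
	rlim.rlim_cur = (rlim_t)(20 + 64) * MB;
	if (setrlimit(RLIMIT_AS, &rlim) < 0) {
		perror("setrlimit failed");
		exit(1);
	}

	/* Touch every page of the on-stack array, as in the original testcase. */
	for (i = 0; i < sizeof(tmp); i++)
		tmp[i] = 1;

	printf("test success\n");
	return 0;
}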