--- linux-2.6.24-rc3/fs/exec.c.orig
+++ linux-2.6.24-rc3/fs/exec.c
@@ -658,7 +658,8 @@ struct file *open_exec(const char *name)
 			int err = vfs_permission(&nd, MAY_EXEC);
 			file = ERR_PTR(err);
 			if (!err) {
-				file = nameidata_to_filp(&nd, O_RDONLY);
+				file = nameidata_to_filp(&nd, force_o_largefile() ?
+						O_RDONLY|O_LARGEFILE : O_RDONLY);
 				if (!IS_ERR(file)) {
 					err = deny_write_access(file);
 					if (err) {
Dave Anderson <[email protected]> writes:
> When execution of an executable that is greater than 2GB in size is
> attempted on a 64-bit system, on a file system that calls
> generic_file_open() or uses it as its open handler, it fails with an
> EOVERFLOW error. This patch adds a force_o_largefile() call in
> open_exec(), as is done in sys_open() and sys_openat().
Wouldn't it be better to just always pass O_LARGEFILE unconditionally
there? e.g. in theory a 2.5GB executable should work on i386 and binfmt_*
shouldn't have any problems with a large file.
That would simplify your patch.
-Andi
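For context, the EOVERFLOW here comes from generic_file_open(), and the
generic force_o_largefile() fallback only looks at the word size, which is
also why the conditional form of the patch would still leave 32-bit systems
out. In rough outline (a sketch of the 2.6.24-era code, not a verbatim
quote):

/* fs/open.c (sketch): the source of the EOVERFLOW for files larger
 * than MAX_NON_LFS opened without O_LARGEFILE */
int generic_file_open(struct inode *inode, struct file *filp)
{
	if (!(filp->f_flags & O_LARGEFILE) && i_size_read(inode) > MAX_NON_LFS)
		return -EOVERFLOW;
	return 0;
}

/* include/linux/fcntl.h (sketch): the generic fallback; some
 * architectures override this, e.g. based on the task personality */
#ifndef force_o_largefile
#define force_o_largefile()	(BITS_PER_LONG != 32)
#endif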
Andi Kleen wrote:
> Dave Anderson <[email protected]> writes:
>
>
>>When execution of an executable that is greater than 2GB in size is
>>attempted on a 64-bit system, on a file system that calls
>>generic_file_open() or uses it as its open handler, it fails with an
>>EOVERFLOW error. This patch adds a force_o_largefile() call in
>>open_exec(), as is done in sys_open() and sys_openat().
>
>
> Wouldn't it be better to just always pass O_LARGEFILE unconditionally
> there? e.g. in theory a 2.5GB executable should work on i386 and binfmt_*
> shouldn't have any problems with a large file.
> That would simplify your patch.
>
> -Andi
>
I agree in theory. We've only seen instances on 64-bitters...
Dave
> I agree in theory. We've only seen instances on 64-bitters...
I think that's because gcc does not support the medium/large code models
for i386. Although in theory someone could create an executable with
a large enough .data.
-Andi
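(Purely as a hypothetical illustration, not something from this thread: the
simplest way to get a huge .data is a single large initialized object. On
x86-64 this needs -mcmodel=medium or -mcmodel=large so that data beyond the
low 2GB can be addressed; 32-bit gcc has no such model and rejects an object
this big outright.)

/* huge_data.c (hypothetical): a ~3GB initialized array forces a
 * multi-gigabyte .data section in the on-disk executable.
 * x86-64: gcc -mcmodel=medium huge_data.c   (the default small model
 * fails to link with relocation overflow errors)
 * i386: gcc refuses an object of this size altogether. */
static char blob[3UL * 1024 * 1024 * 1024] = { 1 };

int main(void)
{
	return blob[0];	/* reference it so it is not discarded */
}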
Andi Kleen wrote:
>>I agree in theory. We've only seen instances on 64-bitters...
>
>
> I think that's because gcc does not support the medium/large code models
> for i386. Although in theory someone could create an executable with
> a large enough .data.
>
> -Andi
Or perhaps huge debuginfo section(s)? The x86_64 instance
had a 2.5GB .debug_macinfo DWARF section.
Dave
Since Dave didn't post an updated patch, this is what I think the patch
should be. I also changed sys_uselib just to be complete.
----
Always use O_LARGEFILE for opening executables
This allows using executables >2GB.
Based on a patch by Dave Anderson
Signed-off-by: Andi Kleen <[email protected]>
Index: linux-2.6.24-rc3/fs/exec.c
===================================================================
--- linux-2.6.24-rc3.orig/fs/exec.c
+++ linux-2.6.24-rc3/fs/exec.c
@@ -119,7 +119,7 @@ asmlinkage long sys_uselib(const char __
 	if (error)
 		goto exit;
 
-	file = nameidata_to_filp(&nd, O_RDONLY);
+	file = nameidata_to_filp(&nd, O_RDONLY|O_LARGEFILE);
 	error = PTR_ERR(file);
 	if (IS_ERR(file))
 		goto out;
@@ -658,7 +658,8 @@ struct file *open_exec(const char *name)
 			int err = vfs_permission(&nd, MAY_EXEC);
 			file = ERR_PTR(err);
 			if (!err) {
-				file = nameidata_to_filp(&nd, O_RDONLY);
+				file = nameidata_to_filp(&nd,
+						O_RDONLY|O_LARGEFILE);
 				if (!IS_ERR(file)) {
 					err = deny_write_access(file);
 					if (err) {
Andi Kleen wrote:
> Since Dave didn't post an updated patch, this is what I think the patch
> should be. I also changed sys_uselib just to be complete.
>
Thanks Andi -- I just tested open_exec() w/O_LARGEFILE on an
i386 with a 2.5GB+ binary (mostly debuginfo), and it works as
expected. Interesting to note that the test binary couldn't
be compiled with i386 gcc, but it could be built with x86_64
gcc -m32.
Dave
> ----
>
>
> Always use O_LARGEFILE for opening executables
>
> This allows using executables >2GB.
>
> Based on a patch by Dave Anderson
>
> Signed-off-by: Andi Kleen <[email protected]>
>
> Index: linux-2.6.24-rc3/fs/exec.c
> ===================================================================
> --- linux-2.6.24-rc3.orig/fs/exec.c
> +++ linux-2.6.24-rc3/fs/exec.c
> @@ -119,7 +119,7 @@ asmlinkage long sys_uselib(const char __
>  	if (error)
>  		goto exit;
>  
> -	file = nameidata_to_filp(&nd, O_RDONLY);
> +	file = nameidata_to_filp(&nd, O_RDONLY|O_LARGEFILE);
>  	error = PTR_ERR(file);
>  	if (IS_ERR(file))
>  		goto out;
> @@ -658,7 +658,8 @@ struct file *open_exec(const char *name)
>  			int err = vfs_permission(&nd, MAY_EXEC);
>  			file = ERR_PTR(err);
>  			if (!err) {
> -				file = nameidata_to_filp(&nd, O_RDONLY);
> +				file = nameidata_to_filp(&nd,
> +						O_RDONLY|O_LARGEFILE);
>  				if (!IS_ERR(file)) {
>  					err = deny_write_access(file);
>  					if (err) {
> Thanks Andi -- I just tested open_exec() w/O_LARGEFILE on an
> i386 with a 2.5GB+ binary (mostly debuginfo), and it works as
> expected. Interesting to note that the test binary couldn't
> be compiled with i386 gcc, but it could be built with x86_64
> gcc -m32.
I guess the i386 binutils or gcc don't use O_LARGEFILE. They probably
just need to be rebuilt with -D_FILE_OFFSET_BITS=64.
-Andi
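(A quick userspace check, assuming glibc; the file name below is made up.
Build the snippet with and without -D_FILE_OFFSET_BITS=64 and run each
against a >2GB file: with a 64-bit off_t, glibc passes O_LARGEFILE for you
and the open stops failing with EOVERFLOW.)

/* lfs_check.c (hypothetical):
 *   gcc -m32 lfs_check.c -o no_lfs
 *   gcc -m32 -D_FILE_OFFSET_BITS=64 lfs_check.c -o lfs
 */
#include <stdio.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	printf("sizeof(off_t) = %zu\n", sizeof(off_t));	/* 4 without LFS, 8 with it */
	if (argc > 1) {
		int fd = open(argv[1], O_RDONLY);
		if (fd < 0)
			perror("open");	/* EOVERFLOW on >2GB files without LFS */
		else
			close(fd);
	}
	return 0;
}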