The arm64 Linux kernel recently added support for the Armv8.3-A Pointer
Authentication feature. If this feature is enabled in the kernel and the
hardware supports address authentication, then return addresses are
signed and stored on the stack to prevent ROP-style attacks. The kdump
tool will now dump the kernel with signed LR values on the stack.

Any user analysis tool for this kernel dump may need the kernel PAC mask
information in vmcoreinfo to generate correct return addresses for stack
tracing as well as to resolve symbol names.
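As an illustration, a dump-analysis tool could consume the exported mask
roughly as below. This is only a sketch of a hypothetical consumer (all
names here are invented; it is not the actual crash-utility code).
Kernel virtual addresses are ones-extended at the top, so stripping the
PAC from a kernel LR means forcing the masked bits back to 1:

  #include <stdint.h>

  /* Parsed from the "NUMBER(KERNELPACMASK)=..." vmcoreinfo line;
   * stays 0 if the field is absent or pointer auth is disabled. */
  static uint64_t kernel_pac_mask;

  /* Recover the real kernel return address from a PAC-signed LR. */
  static inline uint64_t strip_kernel_pac(uint64_t lr)
  {
          return lr | kernel_pac_mask;
  }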
This patch is similar to commit ec6e822d1a22d0eef ("arm64: expose user
PAC bit positions via ptrace"), which exposes the PAC mask information
via the ptrace interface.
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
Changes since v1:
* Rebased to kernel 5.7-rc3.
* commit log change.
An implementation of this new KERNELPACMASK vmcoreinfo field, used by
the crash tool, can be found here [1]. This change has been accepted by
the crash utility maintainer [2].
[1]: https://www.redhat.com/archives/crash-utility/2020-April/msg00095.html
[2]: https://www.redhat.com/archives/crash-utility/2020-April/msg00099.html
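On the parsing side, the value could be pulled out of the vmcoreinfo
note text along these lines (a hypothetical helper, not the actual crash
code):

  #include <inttypes.h>
  #include <stdio.h>
  #include <string.h>

  /* Scan the "NUMBER(KERNELPACMASK)=0x..." line out of the vmcoreinfo
   * text; returns 0 if the field is absent (e.g. an older kernel). */
  static uint64_t parse_kernel_pac_mask(const char *vmcoreinfo)
  {
          const char *p = strstr(vmcoreinfo, "NUMBER(KERNELPACMASK)=");
          uint64_t mask = 0;

          if (p)
                  sscanf(p, "NUMBER(KERNELPACMASK)=0x%" SCNx64, &mask);
          return mask;
  }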
arch/arm64/include/asm/compiler.h | 3 +++
arch/arm64/kernel/crash_core.c | 4 ++++
2 files changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
index eece20d..32d5900 100644
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -19,6 +19,9 @@
#define __builtin_return_address(val) \
(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
+#else /* !CONFIG_ARM64_PTR_AUTH */
+#define ptrauth_user_pac_mask() 0ULL
+#define ptrauth_kernel_pac_mask() 0ULL
#endif /* CONFIG_ARM64_PTR_AUTH */
#endif /* __ASM_COMPILER_H */
diff --git a/arch/arm64/kernel/crash_core.c b/arch/arm64/kernel/crash_core.c
index ca4c3e1..25cf2ce 100644
--- a/arch/arm64/kernel/crash_core.c
+++ b/arch/arm64/kernel/crash_core.c
@@ -6,6 +6,7 @@
#include <linux/crash_core.h>
#include <asm/memory.h>
+#include <asm/pointer_auth.h>
void arch_crash_save_vmcoreinfo(void)
{
@@ -16,4 +17,7 @@ void arch_crash_save_vmcoreinfo(void)
vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
PHYS_OFFSET);
vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
+ vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
+ system_supports_address_auth() ?
+ ptrauth_kernel_pac_mask() : 0);
}
--
2.7.4
Add documentation for the KERNELPACMASK variable being added to
vmcoreinfo.

It indicates the PAC bits mask of signed kernel pointers when the
Armv8.3-A Pointer Authentication feature is present.
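For example, on a system with 48-bit kernel virtual addresses the
exported entry would look like this (illustrative value; the mask covers
bits 63 down to the virtual address width):

  NUMBER(KERNELPACMASK)=0xffff000000000000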
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Baoquan He <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
Documentation/admin-guide/kdump/vmcoreinfo.rst | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
index 007a6b8..5cc3ee6 100644
--- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
+++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -393,6 +393,12 @@ KERNELOFFSET
The kernel randomization offset. Used to compute the page offset. If
KASLR is disabled, this value is zero.
+KERNELPACMASK
+-------------
+
+Indicates the PAC bits mask information if Pointer Authentication is
+enabled and address authentication feature is present.
+
arm
===
--
2.7.4
Hi Will/Catalin,
On 4/27/20 11:55 AM, Amit Daniel Kachhap wrote:
> The arm64 Linux kernel recently added support for the Armv8.3-A Pointer
> Authentication feature. If this feature is enabled in the kernel and the
> hardware supports address authentication, then return addresses are
> signed and stored on the stack to prevent ROP-style attacks. The kdump
> tool will now dump the kernel with signed LR values on the stack.
>
> Any user analysis tool for this kernel dump may need the kernel PAC mask
> information in vmcoreinfo to generate correct return addresses for stack
> tracing as well as to resolve symbol names.
>
> This patch is similar to commit ec6e822d1a22d0eef ("arm64: expose user
> PAC bit positions via ptrace"), which exposes the PAC mask information
> via the ptrace interface.
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
The user-side changes for this patch have been accepted by the
crash-utility maintainer [1], so I think this is in good shape to go in.
Thanks,
Amit Daniel
[1]: https://www.redhat.com/archives/crash-utility/2020-April/msg00099.html
> ---
> Changes since v1:
> * Rebased to kernel 5.7-rc3.
> * commit log change.
>
> An implementation of this new KERNELPACMASK vmcoreinfo field, used by
> the crash tool, can be found here [1]. This change has been accepted by
> the crash utility maintainer [2].
>
> [1]: https://www.redhat.com/archives/crash-utility/2020-April/msg00095.html
> [2]: https://www.redhat.com/archives/crash-utility/2020-April/msg00099.html
>
> arch/arm64/include/asm/compiler.h | 3 +++
> arch/arm64/kernel/crash_core.c | 4 ++++
> 2 files changed, 7 insertions(+)
>
> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> index eece20d..32d5900 100644
> --- a/arch/arm64/include/asm/compiler.h
> +++ b/arch/arm64/include/asm/compiler.h
> @@ -19,6 +19,9 @@
> #define __builtin_return_address(val) \
> (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
>
> +#else /* !CONFIG_ARM64_PTR_AUTH */
> +#define ptrauth_user_pac_mask() 0ULL
> +#define ptrauth_kernel_pac_mask() 0ULL
> #endif /* CONFIG_ARM64_PTR_AUTH */
>
> #endif /* __ASM_COMPILER_H */
> diff --git a/arch/arm64/kernel/crash_core.c b/arch/arm64/kernel/crash_core.c
> index ca4c3e1..25cf2ce 100644
> --- a/arch/arm64/kernel/crash_core.c
> +++ b/arch/arm64/kernel/crash_core.c
> @@ -6,6 +6,7 @@
>
> #include <linux/crash_core.h>
> #include <asm/memory.h>
> +#include <asm/pointer_auth.h>
>
> void arch_crash_save_vmcoreinfo(void)
> {
> @@ -16,4 +17,7 @@ void arch_crash_save_vmcoreinfo(void)
> vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
> PHYS_OFFSET);
> vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
> + vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
> + system_supports_address_auth() ?
> + ptrauth_kernel_pac_mask() : 0);
> }
>
Hi Will/Catalin,
Sorry, resending with the correct To: list.
On 4/27/20 11:55 AM, Amit Daniel Kachhap wrote:
> The arm64 Linux kernel recently added support for the Armv8.3-A Pointer
> Authentication feature. If this feature is enabled in the kernel and the
> hardware supports address authentication, then return addresses are
> signed and stored on the stack to prevent ROP-style attacks. The kdump
> tool will now dump the kernel with signed LR values on the stack.
>
> Any user analysis tool for this kernel dump may need the kernel PAC mask
> information in vmcoreinfo to generate correct return addresses for stack
> tracing as well as to resolve symbol names.
>
> This patch is similar to commit ec6e822d1a22d0eef ("arm64: expose user
> PAC bit positions via ptrace"), which exposes the PAC mask information
> via the ptrace interface.
The user-side changes for this patch have been accepted by the
crash-utility maintainer [1], so I think this is in good shape to go in.
Thanks,
Amit Daniel
[1]: https://www.redhat.com/archives/crash-utility/2020-April/msg00095.html
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
> ---
> Changes since v1:
> * Rebased to kernel 5.7-rc3.
> * commit log change.
>
> An implementation of this new KERNELPACMASK vmcoreinfo field, used by
> the crash tool, can be found here [1]. This change has been accepted by
> the crash utility maintainer [2].
>
> [1]: https://www.redhat.com/archives/crash-utility/2020-April/msg00095.html
> [2]: https://www.redhat.com/archives/crash-utility/2020-April/msg00099.html
>
> arch/arm64/include/asm/compiler.h | 3 +++
> arch/arm64/kernel/crash_core.c | 4 ++++
> 2 files changed, 7 insertions(+)
>
> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> index eece20d..32d5900 100644
> --- a/arch/arm64/include/asm/compiler.h
> +++ b/arch/arm64/include/asm/compiler.h
> @@ -19,6 +19,9 @@
> #define __builtin_return_address(val) \
> (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
>
> +#else /* !CONFIG_ARM64_PTR_AUTH */
> +#define ptrauth_user_pac_mask() 0ULL
> +#define ptrauth_kernel_pac_mask() 0ULL
> #endif /* CONFIG_ARM64_PTR_AUTH */
>
> #endif /* __ASM_COMPILER_H */
> diff --git a/arch/arm64/kernel/crash_core.c b/arch/arm64/kernel/crash_core.c
> index ca4c3e1..25cf2ce 100644
> --- a/arch/arm64/kernel/crash_core.c
> +++ b/arch/arm64/kernel/crash_core.c
> @@ -6,6 +6,7 @@
>
> #include <linux/crash_core.h>
> #include <asm/memory.h>
> +#include <asm/pointer_auth.h>
>
> void arch_crash_save_vmcoreinfo(void)
> {
> @@ -16,4 +17,7 @@ void arch_crash_save_vmcoreinfo(void)
> vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
> PHYS_OFFSET);
> vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
> + vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
> + system_supports_address_auth() ?
> + ptrauth_kernel_pac_mask() : 0);
> }
>
On Mon, Apr 27, 2020 at 11:55:01AM +0530, Amit Daniel Kachhap wrote:
> The arm64 Linux kernel recently added support for the Armv8.3-A Pointer
> Authentication feature. If this feature is enabled in the kernel and the
> hardware supports address authentication, then return addresses are
> signed and stored on the stack to prevent ROP-style attacks. The kdump
> tool will now dump the kernel with signed LR values on the stack.
>
> Any user analysis tool for this kernel dump may need the kernel PAC mask
> information in vmcoreinfo to generate correct return addresses for stack
> tracing as well as to resolve symbol names.
>
> This patch is similar to commit ec6e822d1a22d0eef ("arm64: expose user
> PAC bit positions via ptrace"), which exposes the PAC mask information
> via the ptrace interface.
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
> ---
> Changes since v1:
> * Rebased to kernel 5.7-rc3.
> * commit log change.
>
> An implementation of this new KERNELPACMASK vmcoreinfo field, used by
> the crash tool, can be found here [1]. This change has been accepted by
> the crash utility maintainer [2].
>
> [1]: https://www.redhat.com/archives/crash-utility/2020-April/msg00095.html
> [2]: https://www.redhat.com/archives/crash-utility/2020-April/msg00099.html
>
> arch/arm64/include/asm/compiler.h | 3 +++
> arch/arm64/kernel/crash_core.c | 4 ++++
> 2 files changed, 7 insertions(+)
>
> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> index eece20d..32d5900 100644
> --- a/arch/arm64/include/asm/compiler.h
> +++ b/arch/arm64/include/asm/compiler.h
> @@ -19,6 +19,9 @@
> #define __builtin_return_address(val) \
> (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
>
> +#else /* !CONFIG_ARM64_PTR_AUTH */
> +#define ptrauth_user_pac_mask() 0ULL
> +#define ptrauth_kernel_pac_mask() 0ULL
This doesn't look quite right to me, since you still have to take into
account the case where CONFIG_ARM64_PTR_AUTH=y but the feature is not
available at runtime:
> @@ -16,4 +17,7 @@ void arch_crash_save_vmcoreinfo(void)
> vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
> PHYS_OFFSET);
> vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
> + vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
> + system_supports_address_auth() ?
> + ptrauth_kernel_pac_mask() : 0);
In which case, would it make more sense to define
ptrauth_{kernel,user}_pac_mask() unconditionally? In fact, I'd probably
just remove the guards completely from asm/compiler.h because I think
they're misleading.
Will
--->8
diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
index eece20d2c55f..51a7ce87cdfe 100644
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -2,8 +2,6 @@
#ifndef __ASM_COMPILER_H
#define __ASM_COMPILER_H
-#if defined(CONFIG_ARM64_PTR_AUTH)
-
/*
* The EL0/EL1 pointer bits used by a pointer authentication code.
* This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
@@ -19,6 +17,4 @@
#define __builtin_return_address(val) \
(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
-#endif /* CONFIG_ARM64_PTR_AUTH */
-
#endif /* __ASM_COMPILER_H */
On Mon, Apr 27, 2020 at 11:55:02AM +0530, Amit Daniel Kachhap wrote:
> Add documentation for the KERNELPACMASK variable being added to
> vmcoreinfo.
>
> It indicates the PAC bits mask of signed kernel pointers when the
> Armv8.3-A Pointer Authentication feature is present.
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Cc: Dave Young <[email protected]>
> Cc: Baoquan He <[email protected]>
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
> ---
> Documentation/admin-guide/kdump/vmcoreinfo.rst | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
> index 007a6b8..5cc3ee6 100644
> --- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
> +++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
> @@ -393,6 +393,12 @@ KERNELOFFSET
> The kernel randomization offset. Used to compute the page offset. If
> KASLR is disabled, this value is zero.
>
> +KERNELPACMASK
> +-------------
> +
> +Indicates the PAC bits mask information if Pointer Authentication is
> +enabled and address authentication feature is present.
This is a bit cryptic. How about:
The mask to extract the Pointer Authentication Code from a kernel virtual
address.
Will
Hi Will,
On 5/4/20 10:47 PM, Will Deacon wrote:
> On Mon, Apr 27, 2020 at 11:55:01AM +0530, Amit Daniel Kachhap wrote:
>> The arm64 Linux kernel recently added support for the Armv8.3-A Pointer
>> Authentication feature. If this feature is enabled in the kernel and the
>> hardware supports address authentication, then return addresses are
>> signed and stored on the stack to prevent ROP-style attacks. The kdump
>> tool will now dump the kernel with signed LR values on the stack.
>>
>> Any user analysis tool for this kernel dump may need the kernel PAC mask
>> information in vmcoreinfo to generate correct return addresses for stack
>> tracing as well as to resolve symbol names.
>>
>> This patch is similar to commit ec6e822d1a22d0eef ("arm64: expose user
>> PAC bit positions via ptrace"), which exposes the PAC mask information
>> via the ptrace interface.
>>
>> Cc: Catalin Marinas <[email protected]>
>> Cc: Will Deacon <[email protected]>
>> Cc: Mark Rutland <[email protected]>
>> Signed-off-by: Amit Daniel Kachhap <[email protected]>
>> ---
>> Changes since v1:
>> * Rebased to kernel 5.7-rc3.
>> * commit log change.
>>
>> An implementation of this new KERNELPACMASK vmcoreinfo field, used by
>> the crash tool, can be found here [1]. This change has been accepted by
>> the crash utility maintainer [2].
>>
>> [1]: https://www.redhat.com/archives/crash-utility/2020-April/msg00095.html
>> [2]: https://www.redhat.com/archives/crash-utility/2020-April/msg00099.html
>>
>> arch/arm64/include/asm/compiler.h | 3 +++
>> arch/arm64/kernel/crash_core.c | 4 ++++
>> 2 files changed, 7 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
>> index eece20d..32d5900 100644
>> --- a/arch/arm64/include/asm/compiler.h
>> +++ b/arch/arm64/include/asm/compiler.h
>> @@ -19,6 +19,9 @@
>> #define __builtin_return_address(val) \
>> (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
>>
>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>> +#define ptrauth_user_pac_mask() 0ULL
>> +#define ptrauth_kernel_pac_mask() 0ULL
>
> This doesn't look quite right to me, since you still have to take into
> account the case where CONFIG_ARM64_PTR_AUTH=y but the feature is not
> available at runtime:
Yes, I agree with you here. However, the config guard saves some extra
computation in __builtin_return_address. There is some compiler support
being added to __builtin_extract_return_address to mask the PAC;
hopefully that will improve this code. In the meantime, let it be like this.

I can remove this else case and have the other users of
ptrauth_{kernel,user}_pac_mask() (ptrace.c) protect them with a config
guard there.
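For reference, the computation in question is the PAC strip performed on
every __builtin_return_address() call. Sketched from the same
asm/compiler.h region (a reconstruction, so check the tree for the exact
form):

  /* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers: bit 55
   * selects the user or kernel half of the address space. */
  #define ptrauth_clear_pac(ptr)                                       \
          ((ptr & BIT_ULL(55)) ? (ptr | ptrauth_kernel_pac_mask()) :   \
                                 (ptr & ~ptrauth_user_pac_mask()))

so the overhead is essentially a bit test plus one OR or AND-NOT.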
>
>> @@ -16,4 +17,7 @@ void arch_crash_save_vmcoreinfo(void)
>> vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
>> PHYS_OFFSET);
>> vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
>> + vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
>> + system_supports_address_auth() ?
>> + ptrauth_kernel_pac_mask() : 0);
>
> In which case, would it make more sense to define
> ptrauth_{kernel,user}_pac_mask() unconditionally? In fact, I'd probably
> just remove the guards completely from asm/compiler.h because I think
> they're misleading.
As answered above. Let me know your opinion, although I have no strong
reservation about keeping the config guard.
Thanks,
Amit Daniel
>
> Will
>
> --->8
>
> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> index eece20d2c55f..51a7ce87cdfe 100644
> --- a/arch/arm64/include/asm/compiler.h
> +++ b/arch/arm64/include/asm/compiler.h
> @@ -2,8 +2,6 @@
> #ifndef __ASM_COMPILER_H
> #define __ASM_COMPILER_H
>
> -#if defined(CONFIG_ARM64_PTR_AUTH)
> -
> /*
> * The EL0/EL1 pointer bits used by a pointer authentication code.
> * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
> @@ -19,6 +17,4 @@
> #define __builtin_return_address(val) \
> (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
>
> -#endif /* CONFIG_ARM64_PTR_AUTH */
> -
> #endif /* __ASM_COMPILER_H */
>
Hi,
On 5/4/20 11:04 PM, Will Deacon wrote:
> On Mon, Apr 27, 2020 at 11:55:02AM +0530, Amit Daniel Kachhap wrote:
>> Add documentation for KERNELPACMASK variable being added to the vmcoreinfo.
>>
>> It indicates the PAC bits mask information of signed kernel pointers if
>> Armv8.3-A Pointer Authentication feature is present.
>>
>> Cc: Catalin Marinas <[email protected]>
>> Cc: Will Deacon <[email protected]>
>> Cc: Mark Rutland <[email protected]>
>> Cc: Dave Young <[email protected]>
>> Cc: Baoquan He <[email protected]>
>> Signed-off-by: Amit Daniel Kachhap <[email protected]>
>> ---
>> Documentation/admin-guide/kdump/vmcoreinfo.rst | 6 ++++++
>> 1 file changed, 6 insertions(+)
>>
>> diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
>> index 007a6b8..5cc3ee6 100644
>> --- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
>> +++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
>> @@ -393,6 +393,12 @@ KERNELOFFSET
>> The kernel randomization offset. Used to compute the page offset. If
>> KASLR is disabled, this value is zero.
>>
>> +KERNELPACMASK
>> +-------------
>> +
>> +Indicates the PAC bits mask information if Pointer Authentication is
>> +enabled and address authentication feature is present.
>
> This is a bit cryptic. How about:
>
> The mask to extract the Pointer Authentication Code from a kernel virtual
> address.
OK, sure. I will update it like this in the v3 version.
Cheers,
Amit Daniel
>
> Will
>
On Wed, May 06, 2020 at 05:32:56PM +0530, Amit Kachhap wrote:
> On 5/4/20 10:47 PM, Will Deacon wrote:
> > On Mon, Apr 27, 2020 at 11:55:01AM +0530, Amit Daniel Kachhap wrote:
> > > diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> > > index eece20d..32d5900 100644
> > > --- a/arch/arm64/include/asm/compiler.h
> > > +++ b/arch/arm64/include/asm/compiler.h
> > > @@ -19,6 +19,9 @@
> > > #define __builtin_return_address(val) \
> > > (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
> > > +#else /* !CONFIG_ARM64_PTR_AUTH */
> > > +#define ptrauth_user_pac_mask() 0ULL
> > > +#define ptrauth_kernel_pac_mask() 0ULL
> >
> > This doesn't look quite right to me, since you still have to take into
> > account the case where CONFIG_ARM64_PTR_AUTH=y but the feature is not
> > available at runtime:
>
> Yes, I agree with you here. However, the config guard saves some extra
> computation in __builtin_return_address. There is some compiler support
> being added to __builtin_extract_return_address to mask the PAC;
> hopefully that will improve this code. In the meantime, let it be like this.
Does the extra computation matter? Isn't it just a couple of instructions?
Will
Hi,
On 5/6/20 6:01 PM, Will Deacon wrote:
> On Wed, May 06, 2020 at 05:32:56PM +0530, Amit Kachhap wrote:
>> On 5/4/20 10:47 PM, Will Deacon wrote:
>>> On Mon, Apr 27, 2020 at 11:55:01AM +0530, Amit Daniel Kachhap wrote:
>>>> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
>>>> index eece20d..32d5900 100644
>>>> --- a/arch/arm64/include/asm/compiler.h
>>>> +++ b/arch/arm64/include/asm/compiler.h
>>>> @@ -19,6 +19,9 @@
>>>> #define __builtin_return_address(val) \
>>>> (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
>>>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>>>> +#define ptrauth_user_pac_mask() 0ULL
>>>> +#define ptrauth_kernel_pac_mask() 0ULL
>>>
>>> This doesn't look quite right to me, since you still have to take into
>>> account the case where CONFIG_ARM64_PTR_AUTH=y but the feature is not
>>> available at runtime:
>>
>> Yes, I agree with you here. However, the config guard saves some extra
>> computation in __builtin_return_address. There is some compiler support
>> being added to __builtin_extract_return_address to mask the PAC;
>> hopefully that will improve this code. In the meantime, let it be like this.
>
> Does the extra computation matter? Isn't it just a couple of instructions?
OK, sure. I will push v3 as you suggested.
Thanks,
Amit
>
> Will
>