Last sent 23 Nov 2016.
The following 23 patches are rebased and resent. They rewrite the arm
and arm64 vDSO in C, add support for AArch32 (32-bit user space hosted
on a 64-bit kernel), and move the result into a common library that
other (arm or non-arm) architectures may utilize.
[PATCH v5 01/12] arm: vdso: rename vdso_datapage variables
[PATCH v5 02/12] arm: vdso: add include file defining __get_datapage()
[PATCH v5 03/12] arm: vdso: inline assembler operations to compiler.h
[PATCH v5 04/12] arm: vdso: do calculations outside reader loops
[PATCH v6 05/12] arm: vdso: Add support for CLOCK_MONOTONIC_RAW
[PATCH v5 06/12] arm: vdso: add support for clock_getres
[PATCH v5 07/12] arm: vdso: disable profiling
[PATCH v5 08/12] arm: vdso: Add ARCH_CLOCK_FIXED_MASK
[PATCH v5 09/12] arm: vdso: move vgettimeofday.c to lib/vdso/
[PATCH v5 10/12] arm64: vdso: replace gettimeofday.S with global vgettimeofday.c
[PATCH v6 11/12] lib: vdso: Add support for CLOCK_BOOTTIME
[PATCH v5 12/12] lib: vdso: do not expose gettimeofday, if no arch supported timer
[PATCH] lib: vdso: add support for time
[PATCH v2 1/3] arm64: compat: Split the sigreturn trampolines and kuser helpers (C sources)
[PATCH v2 2/3] arm64: compat: Split the sigreturn trampolines and kuser helpers (assembler sources)
[PATCH v2 3/3] arm64: compat: Add CONFIG_KUSER_HELPERS
[PATCH] arm64: compat: Expose offset to registers in sigframes
[PATCH 1/6] arm64: compat: Use vDSO sigreturn trampolines if available
[PATCH 2/6] arm64: elf: Set AT_SYSINFO_EHDR in compat processes
[PATCH 3/6] arm64: Refactor vDSO init/setup
[PATCH v2 4/6] arm64: compat: Add a 32-bit vDSO
[PATCH 5/6] arm64: compat: 32-bit vDSO setup
[PATCH 6/6] arm64: Wire up and expose the new compat vDSO
The patch series above has been applied to the latest Pixel phones,
resulting in a 0.4% battery-life improvement.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
Take the effort to recode the arm64 vdso from assembler to C,
previously submitted by Andrew Pinski <[email protected]>, and rework
it for use in both arm and arm64, keeping the optimizations for each
architecture. But instead of landing it in arm64, land the result in
lib/vdso and unify both implementations to simplify future
maintenance.
Rename seq_count to tb_seq_count, tk_is_cntvct to use_syscall, and
cs_mult to cs_mono_mult, aligning the names with the variables in the
arm64 vdso datapage. Rework the vdso_read_begin() and
vdso_read_retry() functions to reflect modern access patterns for the
tb_seq_count field. Update the copyright message to reflect the start
of the contributions in this series.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- split first CL into 1 of 7 pieces
v4:
- update commit message to reflect reasoning
v5:
- rebase
---
arch/arm/include/asm/vdso_datapage.h | 6 +--
arch/arm/kernel/vdso.c | 17 ++++----
arch/arm/vdso/vgettimeofday.c | 61 +++++++++++++++-------------
3 files changed, 45 insertions(+), 39 deletions(-)
diff --git a/arch/arm/include/asm/vdso_datapage.h b/arch/arm/include/asm/vdso_datapage.h
index 9be259442fca..fa3e1856244d 100644
--- a/arch/arm/include/asm/vdso_datapage.h
+++ b/arch/arm/include/asm/vdso_datapage.h
@@ -29,8 +29,8 @@
* 32 bytes.
*/
struct vdso_data {
- u32 seq_count; /* sequence count - odd during updates */
- u16 tk_is_cntvct; /* fall back to syscall if false */
+ u32 tb_seq_count; /* sequence count - odd during updates */
+ u16 use_syscall; /* fall back to syscall if true */
u16 cs_shift; /* clocksource shift */
u32 xtime_coarse_sec; /* coarse time */
u32 xtime_coarse_nsec;
@@ -38,7 +38,7 @@ struct vdso_data {
u32 wtm_clock_sec; /* wall to monotonic offset */
u32 wtm_clock_nsec;
u32 xtime_clock_sec; /* CLOCK_REALTIME - seconds */
- u32 cs_mult; /* clocksource multiplier */
+ u32 cs_mono_mult; /* clocksource multiplier */
u64 cs_cycle_last; /* last cycle value */
u64 cs_mask; /* clocksource mask */
diff --git a/arch/arm/kernel/vdso.c b/arch/arm/kernel/vdso.c
index f4dd7f9663c1..c2c57f6b8c60 100644
--- a/arch/arm/kernel/vdso.c
+++ b/arch/arm/kernel/vdso.c
@@ -276,14 +276,14 @@ void arm_install_vdso(struct mm_struct *mm, unsigned long addr)
static void vdso_write_begin(struct vdso_data *vdata)
{
- ++vdso_data->seq_count;
+ ++vdso_data->tb_seq_count;
smp_wmb(); /* Pairs with smp_rmb in vdso_read_retry */
}
static void vdso_write_end(struct vdso_data *vdata)
{
smp_wmb(); /* Pairs with smp_rmb in vdso_read_begin */
- ++vdso_data->seq_count;
+ ++vdso_data->tb_seq_count;
}
static bool tk_is_cntvct(const struct timekeeper *tk)
@@ -307,10 +307,10 @@ static bool tk_is_cntvct(const struct timekeeper *tk)
* counter again, making it even, indicating to userspace that the
* update is finished.
*
- * Userspace is expected to sample seq_count before reading any other
- * fields from the data page. If seq_count is odd, userspace is
+ * Userspace is expected to sample tb_seq_count before reading any other
+ * fields from the data page. If tb_seq_count is odd, userspace is
* expected to wait until it becomes even. After copying data from
- * the page, userspace must sample seq_count again; if it has changed
+ * the page, userspace must sample tb_seq_count again; if it has changed
* from its previous value, userspace must retry the whole sequence.
*
* Calls to update_vsyscall are serialized by the timekeeping core.
@@ -328,18 +328,19 @@ void update_vsyscall(struct timekeeper *tk)
vdso_write_begin(vdso_data);
- vdso_data->tk_is_cntvct = tk_is_cntvct(tk);
+ vdso_data->use_syscall = !tk_is_cntvct(tk);
vdso_data->xtime_coarse_sec = tk->xtime_sec;
vdso_data->xtime_coarse_nsec = (u32)(tk->tkr_mono.xtime_nsec >>
tk->tkr_mono.shift);
vdso_data->wtm_clock_sec = wtm->tv_sec;
vdso_data->wtm_clock_nsec = wtm->tv_nsec;
- if (vdso_data->tk_is_cntvct) {
+ if (!vdso_data->use_syscall) {
vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last;
vdso_data->xtime_clock_sec = tk->xtime_sec;
vdso_data->xtime_clock_snsec = tk->tkr_mono.xtime_nsec;
- vdso_data->cs_mult = tk->tkr_mono.mult;
+ vdso_data->cs_mono_mult = tk->tkr_mono.mult;
+ /* tkr_mono.shift == tkr_raw.shift */
vdso_data->cs_shift = tk->tkr_mono.shift;
vdso_data->cs_mask = tk->tkr_mono.mask;
}
diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
index a9dd619c6c29..8cf13af1323c 100644
--- a/arch/arm/vdso/vgettimeofday.c
+++ b/arch/arm/vdso/vgettimeofday.c
@@ -1,18 +1,25 @@
/*
- * Copyright 2015 Mentor Graphics Corporation.
+ * Userspace implementations of gettimeofday() and friends.
*
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; version 2 of the
- * License.
+ * Copyright (C) 2017 Cavium, Inc.
+ * Copyright (C) 2015 Mentor Graphics Corporation
+ * Copyright (C) 2012 ARM Limited
*
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Will Deacon <[email protected]>
+ * Rewritten from the arm64 version into C by: Andrew Pinski <[email protected]>
+ * Reworked and rebased over arm version by: Mark Salyzyn <[email protected]>
*/
#include <linux/compiler.h>
@@ -31,32 +38,30 @@
extern struct vdso_data *__get_datapage(void);
-static notrace u32 __vdso_read_begin(const struct vdso_data *vdata)
-{
- u32 seq;
-repeat:
- seq = READ_ONCE(vdata->seq_count);
- if (seq & 1) {
- cpu_relax();
- goto repeat;
- }
- return seq;
-}
-
static notrace u32 vdso_read_begin(const struct vdso_data *vdata)
{
u32 seq;
- seq = __vdso_read_begin(vdata);
+ do {
+ seq = READ_ONCE(vdata->tb_seq_count);
+
+ if ((seq & 1) == 0)
+ break;
- smp_rmb(); /* Pairs with smp_wmb in vdso_write_end */
+ cpu_relax();
+ } while (true);
+
+ smp_rmb(); /* Pairs with second smp_wmb in update_vsyscall */
return seq;
}
static notrace int vdso_read_retry(const struct vdso_data *vdata, u32 start)
{
- smp_rmb(); /* Pairs with smp_wmb in vdso_write_begin */
- return vdata->seq_count != start;
+ u32 seq;
+
+ smp_rmb(); /* Pairs with first smp_wmb in update_vsyscall */
+ seq = READ_ONCE(vdata->tb_seq_count);
+ return seq != start;
}
static notrace long clock_gettime_fallback(clockid_t _clkid,
@@ -127,7 +132,7 @@ static notrace u64 get_ns(struct vdso_data *vdata)
cycle_delta = (cycle_now - vdata->cs_cycle_last) & vdata->cs_mask;
- nsec = (cycle_delta * vdata->cs_mult) + vdata->xtime_clock_snsec;
+ nsec = (cycle_delta * vdata->cs_mono_mult) + vdata->xtime_clock_snsec;
nsec >>= vdata->cs_shift;
return nsec;
@@ -141,7 +146,7 @@ static notrace int do_realtime(struct timespec *ts, struct vdso_data *vdata)
do {
seq = vdso_read_begin(vdata);
- if (!vdata->tk_is_cntvct)
+ if (vdata->use_syscall)
return -1;
ts->tv_sec = vdata->xtime_clock_sec;
@@ -164,7 +169,7 @@ static notrace int do_monotonic(struct timespec *ts, struct vdso_data *vdata)
do {
seq = vdso_read_begin(vdata);
- if (!vdata->tk_is_cntvct)
+ if (vdata->use_syscall)
return -1;
ts->tv_sec = vdata->xtime_clock_sec;
--
2.19.0.605.g01d371f741-goog
Take the effort to recode the arm64 vdso from assembler to C,
previously submitted by Andrew Pinski <[email protected]>, and rework
it for use in both arm and arm64, keeping the optimizations for each
architecture. But instead of landing it in arm64, land the result in
lib/vdso and unify both implementations to simplify future
maintenance.
Define the prototype for __get_datapage() in a local datapage.h
header. Rename all vdata variables that point to the datapage to the
shorter vd, to reflect a consistent and concise style. Make sure that
all references to the datapage in vdso operations are read-only
(const), and that the datapage is the first parameter to all
subroutines, also for consistency.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- split first CL into 2 of 7 pieces
v4:
- update commit message to reflect overall reasoning
v5:
- rebase
---
arch/arm/vdso/datapage.h | 25 +++++++++
arch/arm/vdso/vgettimeofday.c | 99 +++++++++++++++++------------------
2 files changed, 73 insertions(+), 51 deletions(-)
create mode 100644 arch/arm/vdso/datapage.h
diff --git a/arch/arm/vdso/datapage.h b/arch/arm/vdso/datapage.h
new file mode 100644
index 000000000000..e3088bdfb946
--- /dev/null
+++ b/arch/arm/vdso/datapage.h
@@ -0,0 +1,25 @@
+/*
+ * Userspace implementations of __get_datapage
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __VDSO_DATAPAGE_H
+#define __VDSO_DATAPAGE_H
+
+#include <linux/types.h>
+#include <asm/vdso_datapage.h>
+
+extern const struct vdso_data *__get_datapage(void);
+
+#endif /* __VDSO_DATAPAGE_H */
diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
index 8cf13af1323c..2474c17dc356 100644
--- a/arch/arm/vdso/vgettimeofday.c
+++ b/arch/arm/vdso/vgettimeofday.c
@@ -30,20 +30,19 @@
#include <asm/bug.h>
#include <asm/page.h>
#include <asm/unistd.h>
-#include <asm/vdso_datapage.h>
#ifndef CONFIG_AEABI
#error This code depends on AEABI system call conventions
#endif
-extern struct vdso_data *__get_datapage(void);
+#include "datapage.h"
-static notrace u32 vdso_read_begin(const struct vdso_data *vdata)
+static notrace u32 vdso_read_begin(const struct vdso_data *vd)
{
u32 seq;
do {
- seq = READ_ONCE(vdata->tb_seq_count);
+ seq = READ_ONCE(vd->tb_seq_count);
if ((seq & 1) == 0)
break;
@@ -55,12 +54,12 @@ static notrace u32 vdso_read_begin(const struct vdso_data *vdata)
return seq;
}
-static notrace int vdso_read_retry(const struct vdso_data *vdata, u32 start)
+static notrace int vdso_read_retry(const struct vdso_data *vd, u32 start)
{
u32 seq;
smp_rmb(); /* Pairs with first smp_wmb in update_vsyscall */
- seq = READ_ONCE(vdata->tb_seq_count);
+ seq = READ_ONCE(vd->tb_seq_count);
return seq != start;
}
@@ -81,38 +80,38 @@ static notrace long clock_gettime_fallback(clockid_t _clkid,
return ret;
}
-static notrace int do_realtime_coarse(struct timespec *ts,
- struct vdso_data *vdata)
+static notrace int do_realtime_coarse(const struct vdso_data *vd,
+ struct timespec *ts)
{
u32 seq;
do {
- seq = vdso_read_begin(vdata);
+ seq = vdso_read_begin(vd);
- ts->tv_sec = vdata->xtime_coarse_sec;
- ts->tv_nsec = vdata->xtime_coarse_nsec;
+ ts->tv_sec = vd->xtime_coarse_sec;
+ ts->tv_nsec = vd->xtime_coarse_nsec;
- } while (vdso_read_retry(vdata, seq));
+ } while (vdso_read_retry(vd, seq));
return 0;
}
-static notrace int do_monotonic_coarse(struct timespec *ts,
- struct vdso_data *vdata)
+static notrace int do_monotonic_coarse(const struct vdso_data *vd,
+ struct timespec *ts)
{
struct timespec tomono;
u32 seq;
do {
- seq = vdso_read_begin(vdata);
+ seq = vdso_read_begin(vd);
- ts->tv_sec = vdata->xtime_coarse_sec;
- ts->tv_nsec = vdata->xtime_coarse_nsec;
+ ts->tv_sec = vd->xtime_coarse_sec;
+ ts->tv_nsec = vd->xtime_coarse_nsec;
- tomono.tv_sec = vdata->wtm_clock_sec;
- tomono.tv_nsec = vdata->wtm_clock_nsec;
+ tomono.tv_sec = vd->wtm_clock_sec;
+ tomono.tv_nsec = vd->wtm_clock_nsec;
- } while (vdso_read_retry(vdata, seq));
+ } while (vdso_read_retry(vd, seq));
ts->tv_sec += tomono.tv_sec;
timespec_add_ns(ts, tomono.tv_nsec);
@@ -122,7 +121,7 @@ static notrace int do_monotonic_coarse(struct timespec *ts,
#ifdef CONFIG_ARM_ARCH_TIMER
-static notrace u64 get_ns(struct vdso_data *vdata)
+static notrace u64 get_ns(const struct vdso_data *vd)
{
u64 cycle_delta;
u64 cycle_now;
@@ -130,29 +129,29 @@ static notrace u64 get_ns(struct vdso_data *vdata)
cycle_now = arch_counter_get_cntvct();
- cycle_delta = (cycle_now - vdata->cs_cycle_last) & vdata->cs_mask;
+ cycle_delta = (cycle_now - vd->cs_cycle_last) & vd->cs_mask;
- nsec = (cycle_delta * vdata->cs_mono_mult) + vdata->xtime_clock_snsec;
- nsec >>= vdata->cs_shift;
+ nsec = (cycle_delta * vd->cs_mono_mult) + vd->xtime_clock_snsec;
+ nsec >>= vd->cs_shift;
return nsec;
}
-static notrace int do_realtime(struct timespec *ts, struct vdso_data *vdata)
+static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
{
u64 nsecs;
u32 seq;
do {
- seq = vdso_read_begin(vdata);
+ seq = vdso_read_begin(vd);
- if (vdata->use_syscall)
+ if (vd->use_syscall)
return -1;
- ts->tv_sec = vdata->xtime_clock_sec;
- nsecs = get_ns(vdata);
+ ts->tv_sec = vd->xtime_clock_sec;
+ nsecs = get_ns(vd);
- } while (vdso_read_retry(vdata, seq));
+ } while (vdso_read_retry(vd, seq));
ts->tv_nsec = 0;
timespec_add_ns(ts, nsecs);
@@ -160,25 +159,25 @@ static notrace int do_realtime(struct timespec *ts, struct vdso_data *vdata)
return 0;
}
-static notrace int do_monotonic(struct timespec *ts, struct vdso_data *vdata)
+static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
{
struct timespec tomono;
u64 nsecs;
u32 seq;
do {
- seq = vdso_read_begin(vdata);
+ seq = vdso_read_begin(vd);
- if (vdata->use_syscall)
+ if (vd->use_syscall)
return -1;
- ts->tv_sec = vdata->xtime_clock_sec;
- nsecs = get_ns(vdata);
+ ts->tv_sec = vd->xtime_clock_sec;
+ nsecs = get_ns(vd);
- tomono.tv_sec = vdata->wtm_clock_sec;
- tomono.tv_nsec = vdata->wtm_clock_nsec;
+ tomono.tv_sec = vd->wtm_clock_sec;
+ tomono.tv_nsec = vd->wtm_clock_nsec;
- } while (vdso_read_retry(vdata, seq));
+ } while (vdso_read_retry(vd, seq));
ts->tv_sec += tomono.tv_sec;
ts->tv_nsec = 0;
@@ -189,12 +188,12 @@ static notrace int do_monotonic(struct timespec *ts, struct vdso_data *vdata)
#else /* CONFIG_ARM_ARCH_TIMER */
-static notrace int do_realtime(struct timespec *ts, struct vdso_data *vdata)
+static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
{
return -1;
}
-static notrace int do_monotonic(struct timespec *ts, struct vdso_data *vdata)
+static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
{
return -1;
}
@@ -203,23 +202,22 @@ static notrace int do_monotonic(struct timespec *ts, struct vdso_data *vdata)
notrace int __vdso_clock_gettime(clockid_t clkid, struct timespec *ts)
{
- struct vdso_data *vdata;
int ret = -1;
- vdata = __get_datapage();
+ const struct vdso_data *vd = __get_datapage();
switch (clkid) {
case CLOCK_REALTIME_COARSE:
- ret = do_realtime_coarse(ts, vdata);
+ ret = do_realtime_coarse(vd, ts);
break;
case CLOCK_MONOTONIC_COARSE:
- ret = do_monotonic_coarse(ts, vdata);
+ ret = do_monotonic_coarse(vd, ts);
break;
case CLOCK_REALTIME:
- ret = do_realtime(ts, vdata);
+ ret = do_realtime(vd, ts);
break;
case CLOCK_MONOTONIC:
- ret = do_monotonic(ts, vdata);
+ ret = do_monotonic(vd, ts);
break;
default:
break;
@@ -251,12 +249,11 @@ static notrace long gettimeofday_fallback(struct timeval *_tv,
notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
{
struct timespec ts;
- struct vdso_data *vdata;
int ret;
- vdata = __get_datapage();
+ const struct vdso_data *vd = __get_datapage();
- ret = do_realtime(&ts, vdata);
+ ret = do_realtime(vd, &ts);
if (ret)
return gettimeofday_fallback(tv, tz);
@@ -265,8 +262,8 @@ notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
tv->tv_usec = ts.tv_nsec / 1000;
}
if (tz) {
- tz->tz_minuteswest = vdata->tz_minuteswest;
- tz->tz_dsttime = vdata->tz_dsttime;
+ tz->tz_minuteswest = vd->tz_minuteswest;
+ tz->tz_dsttime = vd->tz_dsttime;
}
return ret;
--
2.19.0.605.g01d371f741-goog
Take the effort to recode the arm64 vdso from assembler to C,
previously submitted by Andrew Pinski <[email protected]>, and rework
it for use in both arm and arm64, keeping the optimizations for each
architecture. But instead of landing it in arm64, land the result in
lib/vdso and unify both implementations to simplify future
maintenance.
Move compiler-specific code to a local compiler.h file:
- the CONFIG_AEABI dependency check;
- the system call fallback functions, standardized into a
  DEFINE_FALLBACK macro;
- arch_counter_get_cntvct(), replaced with arch_vdso_read_counter();
- handling of the architecture-specific unresolved references emitted
  by GCC.
Optimize the handling of fallback calls in the callers:
- for time functions that always return success, do not waste time
  checking the return value before switching to the fallback;
- optimize the unlikely NULL-pointer check in __vdso_gettimeofday:
  if tv is NULL there is no need to proceed to the fallback, as the
  vdso is still capable of filling in the tz values.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- split first CL into 3 of 7 pieces
v4:
- switch to arch_vdso_read_counter as common API
- update commit message to reflect overall reasoning
v5:
- comment about asm/arch_timer.h and asm/processor.h regarding why
they have been added in compiler.h for porting clarity.
- add linux/compiler.h in vgettimeofday.c because of notrace.
- remove unnecessary dependency on linux/hrtimer.h
---
arch/arm/vdso/compiler.h | 69 +++++++++++++++++++++
arch/arm/vdso/vgettimeofday.c | 109 +++++++++-------------------------
2 files changed, 96 insertions(+), 82 deletions(-)
create mode 100644 arch/arm/vdso/compiler.h
diff --git a/arch/arm/vdso/compiler.h b/arch/arm/vdso/compiler.h
new file mode 100644
index 000000000000..af24502797e8
--- /dev/null
+++ b/arch/arm/vdso/compiler.h
@@ -0,0 +1,69 @@
+/*
+ * Userspace implementations of fallback calls
+ *
+ * Copyright (C) 2017 Cavium, Inc.
+ * Copyright (C) 2012 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Will Deacon <[email protected]>
+ * Rewritten into C by: Andrew Pinski <[email protected]>
+ */
+
+#ifndef __VDSO_COMPILER_H
+#define __VDSO_COMPILER_H
+
+#include <asm/arch_timer.h> /* for arch_counter_get_cntvct() */
+#include <asm/processor.h> /* for cpu_relax() */
+#include <asm/unistd.h>
+#include <linux/compiler.h>
+
+#ifndef CONFIG_AEABI
+#error This code depends on AEABI system call conventions
+#endif
+
+#define DEFINE_FALLBACK(name, type_arg1, name_arg1, type_arg2, name_arg2) \
+static notrace long name##_fallback(type_arg1 _##name_arg1, \
+ type_arg2 _##name_arg2) \
+{ \
+ register type_arg1 name_arg1 asm("r0") = _##name_arg1; \
+ register type_arg2 name_arg2 asm("r1") = _##name_arg2; \
+ register long ret asm ("r0"); \
+ register long nr asm("r7") = __NR_##name; \
+ \
+ asm volatile( \
+ " swi #0\n" \
+ : "=r" (ret) \
+ : "r" (name_arg1), "r" (name_arg2), "r" (nr) \
+ : "memory"); \
+ \
+ return ret; \
+}
+
+#define arch_vdso_read_counter() arch_counter_get_cntvct()
+
+/* Avoid unresolved references emitted by GCC */
+
+void __aeabi_unwind_cpp_pr0(void)
+{
+}
+
+void __aeabi_unwind_cpp_pr1(void)
+{
+}
+
+void __aeabi_unwind_cpp_pr2(void)
+{
+}
+
+#endif /* __VDSO_COMPILER_H */
diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
index 2474c17dc356..522094b147a2 100644
--- a/arch/arm/vdso/vgettimeofday.c
+++ b/arch/arm/vdso/vgettimeofday.c
@@ -22,21 +22,16 @@
* Reworked and rebased over arm version by: Mark Salyzyn <[email protected]>
*/
-#include <linux/compiler.h>
-#include <linux/hrtimer.h>
-#include <linux/time.h>
-#include <asm/arch_timer.h>
#include <asm/barrier.h>
-#include <asm/bug.h>
-#include <asm/page.h>
-#include <asm/unistd.h>
-
-#ifndef CONFIG_AEABI
-#error This code depends on AEABI system call conventions
-#endif
+#include <linux/compiler.h> /* for notrace */
+#include <linux/time.h>
+#include "compiler.h"
#include "datapage.h"
+DEFINE_FALLBACK(gettimeofday, struct timeval *, tv, struct timezone *, tz)
+DEFINE_FALLBACK(clock_gettime, clockid_t, clock, struct timespec *, ts)
+
static notrace u32 vdso_read_begin(const struct vdso_data *vd)
{
u32 seq;
@@ -63,23 +58,6 @@ static notrace int vdso_read_retry(const struct vdso_data *vd, u32 start)
return seq != start;
}
-static notrace long clock_gettime_fallback(clockid_t _clkid,
- struct timespec *_ts)
-{
- register struct timespec *ts asm("r1") = _ts;
- register clockid_t clkid asm("r0") = _clkid;
- register long ret asm ("r0");
- register long nr asm("r7") = __NR_clock_gettime;
-
- asm volatile(
- " swi #0\n"
- : "=r" (ret)
- : "r" (clkid), "r" (ts), "r" (nr)
- : "memory");
-
- return ret;
-}
-
static notrace int do_realtime_coarse(const struct vdso_data *vd,
struct timespec *ts)
{
@@ -127,7 +105,7 @@ static notrace u64 get_ns(const struct vdso_data *vd)
u64 cycle_now;
u64 nsec;
- cycle_now = arch_counter_get_cntvct();
+ cycle_now = arch_vdso_read_counter();
cycle_delta = (cycle_now - vd->cs_cycle_last) & vd->cs_mask;
@@ -200,85 +178,52 @@ static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
#endif /* CONFIG_ARM_ARCH_TIMER */
-notrace int __vdso_clock_gettime(clockid_t clkid, struct timespec *ts)
+notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
{
- int ret = -1;
-
const struct vdso_data *vd = __get_datapage();
- switch (clkid) {
+ switch (clock) {
case CLOCK_REALTIME_COARSE:
- ret = do_realtime_coarse(vd, ts);
+ do_realtime_coarse(vd, ts);
break;
case CLOCK_MONOTONIC_COARSE:
- ret = do_monotonic_coarse(vd, ts);
+ do_monotonic_coarse(vd, ts);
break;
case CLOCK_REALTIME:
- ret = do_realtime(vd, ts);
+ if (do_realtime(vd, ts))
+ goto fallback;
break;
case CLOCK_MONOTONIC:
- ret = do_monotonic(vd, ts);
+ if (do_monotonic(vd, ts))
+ goto fallback;
break;
default:
- break;
+ goto fallback;
}
- if (ret)
- ret = clock_gettime_fallback(clkid, ts);
-
- return ret;
-}
-
-static notrace long gettimeofday_fallback(struct timeval *_tv,
- struct timezone *_tz)
-{
- register struct timezone *tz asm("r1") = _tz;
- register struct timeval *tv asm("r0") = _tv;
- register long ret asm ("r0");
- register long nr asm("r7") = __NR_gettimeofday;
-
- asm volatile(
- " swi #0\n"
- : "=r" (ret)
- : "r" (tv), "r" (tz), "r" (nr)
- : "memory");
-
- return ret;
+ return 0;
+fallback:
+ return clock_gettime_fallback(clock, ts);
}
notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
{
- struct timespec ts;
- int ret;
-
const struct vdso_data *vd = __get_datapage();
- ret = do_realtime(vd, &ts);
- if (ret)
- return gettimeofday_fallback(tv, tz);
+ if (likely(tv != NULL)) {
+ struct timespec ts;
+
+ if (do_realtime(vd, &ts))
+ return gettimeofday_fallback(tv, tz);
- if (tv) {
tv->tv_sec = ts.tv_sec;
tv->tv_usec = ts.tv_nsec / 1000;
}
- if (tz) {
+
+ if (unlikely(tz != NULL)) {
tz->tz_minuteswest = vd->tz_minuteswest;
tz->tz_dsttime = vd->tz_dsttime;
}
- return ret;
-}
-
-/* Avoid unresolved references emitted by GCC */
-
-void __aeabi_unwind_cpp_pr0(void)
-{
-}
-
-void __aeabi_unwind_cpp_pr1(void)
-{
-}
-
-void __aeabi_unwind_cpp_pr2(void)
-{
+ return 0;
}
--
2.19.0.605.g01d371f741-goog
Take the effort to recode the arm64 vdso from assembler to C,
previously submitted by Andrew Pinski <[email protected]>, and rework
it for use in both arm and arm64, keeping the optimizations for each
architecture. But instead of landing it in arm64, land the result in
lib/vdso and unify both implementations to simplify future
maintenance.
In the timer-reading loops, sample just the datapage values until they
are all consistent; then, outside the loop, read cntvct and perform
the offset, multiply, and shift calculations that produce the final
output value. This replaces get_ns() with get_clock_shifted_nsec() as
the cntvct reader.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- split first CL into 5 of 7 pieces
v4:
- split into two, moving ARCH_CLOCK_FIXED_MASK to later
- update commit message to reflect overall reasoning
- adjust for dropping forced inline
- replace typeof() with types vdso_wtm_clock_nsec_t,
vdso_xtime_clock_sec and __kernel_time_t.
v5:
- drop including linux/time.h in favour of uapi/linux/time.h in
vgettimeofday.c to limit architectural includes.
- add linux/time.h to compiler.h for NSEC_PER_SEC definition.
- replace last timespec_add_ns with __iter_div_u64_rem.
---
arch/arm/include/asm/vdso_datapage.h | 18 +++++-
arch/arm/vdso/compiler.h | 1 +
arch/arm/vdso/vgettimeofday.c | 90 ++++++++++++++++++----------
3 files changed, 76 insertions(+), 33 deletions(-)
diff --git a/arch/arm/include/asm/vdso_datapage.h b/arch/arm/include/asm/vdso_datapage.h
index fa3e1856244d..8dd7303db4ec 100644
--- a/arch/arm/include/asm/vdso_datapage.h
+++ b/arch/arm/include/asm/vdso_datapage.h
@@ -24,6 +24,16 @@
#include <asm/page.h>
+#ifndef _VDSO_WTM_CLOCK_SEC_T
+#define _VDSO_WTM_CLOCK_SEC_T
+typedef u32 vdso_wtm_clock_nsec_t;
+#endif
+
+#ifndef _VDSO_XTIME_CLOCK_SEC_T
+#define _VDSO_XTIME_CLOCK_SEC_T
+typedef u32 vdso_xtime_clock_sec_t;
+#endif
+
/* Try to be cache-friendly on systems that don't implement the
* generic timer: fit the unconditionally updated fields in the first
* 32 bytes.
@@ -35,9 +45,11 @@ struct vdso_data {
u32 xtime_coarse_sec; /* coarse time */
u32 xtime_coarse_nsec;
- u32 wtm_clock_sec; /* wall to monotonic offset */
- u32 wtm_clock_nsec;
- u32 xtime_clock_sec; /* CLOCK_REALTIME - seconds */
+ /* wall to monotonic offset */
+ u32 wtm_clock_sec;
+ vdso_wtm_clock_nsec_t wtm_clock_nsec;
+ /* CLOCK_REALTIME - seconds */
+ vdso_xtime_clock_sec_t xtime_clock_sec;
u32 cs_mono_mult; /* clocksource multiplier */
u64 cs_cycle_last; /* last cycle value */
diff --git a/arch/arm/vdso/compiler.h b/arch/arm/vdso/compiler.h
index af24502797e8..3edddb705a1b 100644
--- a/arch/arm/vdso/compiler.h
+++ b/arch/arm/vdso/compiler.h
@@ -27,6 +27,7 @@
#include <asm/processor.h> /* for cpu_relax() */
#include <asm/unistd.h>
#include <linux/compiler.h>
+#include <linux/time.h> /* for NSEC_PER_SEC */
#ifndef CONFIG_AEABI
#error This code depends on AEABI system call conventions
diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
index 522094b147a2..59893fca03b3 100644
--- a/arch/arm/vdso/vgettimeofday.c
+++ b/arch/arm/vdso/vgettimeofday.c
@@ -24,7 +24,8 @@
#include <asm/barrier.h>
#include <linux/compiler.h> /* for notrace */
-#include <linux/time.h>
+#include <linux/math64.h> /* for __iter_div_u64_rem() */
+#include <uapi/linux/time.h> /* for struct timespec */
#include "compiler.h"
#include "datapage.h"
@@ -79,6 +80,7 @@ static notrace int do_monotonic_coarse(const struct vdso_data *vd,
{
struct timespec tomono;
u32 seq;
+ u64 nsec;
do {
seq = vdso_read_begin(vd);
@@ -92,33 +94,41 @@ static notrace int do_monotonic_coarse(const struct vdso_data *vd,
} while (vdso_read_retry(vd, seq));
ts->tv_sec += tomono.tv_sec;
- timespec_add_ns(ts, tomono.tv_nsec);
+ /* open coding timespec_add_ns */
+ ts->tv_sec += __iter_div_u64_rem(ts->tv_nsec + tomono.tv_nsec,
+ NSEC_PER_SEC, &nsec);
+ ts->tv_nsec = nsec;
return 0;
}
#ifdef CONFIG_ARM_ARCH_TIMER
-static notrace u64 get_ns(const struct vdso_data *vd)
+/*
+ * Returns the clock delta, in nanoseconds left-shifted by the clock
+ * shift.
+ */
+static notrace u64 get_clock_shifted_nsec(const u64 cycle_last,
+ const u32 mult,
+ const u64 mask)
{
- u64 cycle_delta;
- u64 cycle_now;
- u64 nsec;
-
- cycle_now = arch_vdso_read_counter();
+ u64 res;
- cycle_delta = (cycle_now - vd->cs_cycle_last) & vd->cs_mask;
+ /* Read the virtual counter. */
+ res = arch_vdso_read_counter();
- nsec = (cycle_delta * vd->cs_mono_mult) + vd->xtime_clock_snsec;
- nsec >>= vd->cs_shift;
+ res = res - cycle_last;
- return nsec;
+ res &= mask;
+ return res * mult;
}
static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
{
- u64 nsecs;
- u32 seq;
+ u32 seq, mult, shift;
+ u64 nsec, cycle_last;
+ u64 mask;
+ vdso_xtime_clock_sec_t sec;
do {
seq = vdso_read_begin(vd);
@@ -126,22 +136,33 @@ static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
if (vd->use_syscall)
return -1;
- ts->tv_sec = vd->xtime_clock_sec;
- nsecs = get_ns(vd);
+ cycle_last = vd->cs_cycle_last;
- } while (vdso_read_retry(vd, seq));
+ mult = vd->cs_mono_mult;
+ shift = vd->cs_shift;
+ mask = vd->cs_mask;
+
+ sec = vd->xtime_clock_sec;
+ nsec = vd->xtime_clock_snsec;
+
+ } while (unlikely(vdso_read_retry(vd, seq)));
- ts->tv_nsec = 0;
- timespec_add_ns(ts, nsecs);
+ nsec += get_clock_shifted_nsec(cycle_last, mult, mask);
+ nsec >>= shift;
+ /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
+ ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
+ ts->tv_nsec = nsec;
return 0;
}
static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
{
- struct timespec tomono;
- u64 nsecs;
- u32 seq;
+ u32 seq, mult, shift;
+ u64 nsec, cycle_last;
+ u64 mask;
+ vdso_wtm_clock_nsec_t wtm_nsec;
+ __kernel_time_t sec;
do {
seq = vdso_read_begin(vd);
@@ -149,17 +170,26 @@ static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
if (vd->use_syscall)
return -1;
- ts->tv_sec = vd->xtime_clock_sec;
- nsecs = get_ns(vd);
+ cycle_last = vd->cs_cycle_last;
- tomono.tv_sec = vd->wtm_clock_sec;
- tomono.tv_nsec = vd->wtm_clock_nsec;
+ mult = vd->cs_mono_mult;
+ shift = vd->cs_shift;
+ mask = vd->cs_mask;
- } while (vdso_read_retry(vd, seq));
+ sec = vd->xtime_clock_sec;
+ nsec = vd->xtime_clock_snsec;
- ts->tv_sec += tomono.tv_sec;
- ts->tv_nsec = 0;
- timespec_add_ns(ts, nsecs + tomono.tv_nsec);
+ sec += vd->wtm_clock_sec;
+ wtm_nsec = vd->wtm_clock_nsec;
+
+ } while (unlikely(vdso_read_retry(vd, seq)));
+
+ nsec += get_clock_shifted_nsec(cycle_last, mult, mask);
+ nsec >>= shift;
+ nsec += wtm_nsec;
+ /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
+ ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
+ ts->tv_nsec = nsec;
return 0;
}
--
2.19.0.605.g01d371f741-goog
Continue the effort to recode the arm64 vDSO from assembler to C,
previously submitted by Andrew Pinski <[email protected]>, reworking
it for use on both arm and arm64 and carrying over the optimizations
for each architecture. Instead of landing it in arm64, land the
result in lib/vdso and unify both implementations to simplify
future maintenance.
Add a case for CLOCK_MONOTONIC_RAW to match the support already
available in arm64's vDSO.
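The C readers added by this rewrite all share the same lockless sequence-count snapshot pattern: copy the vdso_data fields, then retry if the writer bumped the sequence count mid-read. A stand-alone, single-threaded sketch of that pattern (simplified; the real code uses vdso_read_begin()/vdso_read_retry() with the appropriate memory barriers, and the struct below is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

struct vdso_data_sketch {
	uint32_t seq;            /* odd while the writer is updating */
	uint64_t sec, nsec;
};

static uint32_t read_begin(const struct vdso_data_sketch *vd)
{
	uint32_t seq;

	/* Spin while an update is in flight (odd count). The real vDSO
	 * adds an acquire barrier here. */
	while ((seq = vd->seq) & 1)
		;
	return seq;
}

static int read_retry(const struct vdso_data_sketch *vd, uint32_t start)
{
	/* Retry if the writer bumped the count while we were reading. */
	return vd->seq != start;
}

static void snapshot(const struct vdso_data_sketch *vd,
		     uint64_t *sec, uint64_t *nsec)
{
	uint32_t seq;

	do {
		seq = read_begin(vd);
		*sec = vd->sec;
		*nsec = vd->nsec;
	} while (read_retry(vd, seq));
}
```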
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- split first CL into 6 of 7 pieces
v4:
- Move out ARCH_CLOCK_FIXED_MASK to later adjustment.
- update commit message to reflect overall reasoning.
- replace typeof() with type vdso_raw_time_sec_t.
v5:
- replace erroneous tk->raw_time.shift with tk->tkr_raw.shift
v6:
- fixup raw_time_sec and raw_time_nsec in vdso.c
---
arch/arm/include/asm/vdso_datapage.h | 11 +++++++
arch/arm/kernel/vdso.c | 3 ++
arch/arm/vdso/vgettimeofday.c | 44 ++++++++++++++++++++++++++++
3 files changed, 58 insertions(+)
diff --git a/arch/arm/include/asm/vdso_datapage.h b/arch/arm/include/asm/vdso_datapage.h
index 8dd7303db4ec..1c6e6a5d5d9d 100644
--- a/arch/arm/include/asm/vdso_datapage.h
+++ b/arch/arm/include/asm/vdso_datapage.h
@@ -34,6 +34,11 @@ typedef u32 vdso_wtm_clock_nsec_t;
typedef u32 vdso_xtime_clock_sec_t;
#endif
+#ifndef _VDSO_RAW_TIME_SEC_T
+#define _VDSO_RAW_TIME_SEC_T
+typedef u32 vdso_raw_time_sec_t;
+#endif
+
/* Try to be cache-friendly on systems that don't implement the
* generic timer: fit the unconditionally updated fields in the first
* 32 bytes.
@@ -58,6 +63,12 @@ struct vdso_data {
u64 xtime_clock_snsec; /* CLOCK_REALTIME sub-ns base */
u32 tz_minuteswest; /* timezone info for gettimeofday(2) */
u32 tz_dsttime;
+
+ /* Raw clocksource multiplier */
+ u32 cs_raw_mult;
+ /* Raw time */
+ vdso_raw_time_sec_t raw_time_sec;
+ u32 raw_time_nsec;
};
union vdso_data_store {
diff --git a/arch/arm/kernel/vdso.c b/arch/arm/kernel/vdso.c
index c2c57f6b8c60..9c13a32fa5f0 100644
--- a/arch/arm/kernel/vdso.c
+++ b/arch/arm/kernel/vdso.c
@@ -337,9 +337,12 @@ void update_vsyscall(struct timekeeper *tk)
if (!vdso_data->use_syscall) {
vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last;
+ vdso_data->raw_time_sec = tk->raw_sec;
+ vdso_data->raw_time_nsec = tk->tkr_raw.xtime_nsec;
vdso_data->xtime_clock_sec = tk->xtime_sec;
vdso_data->xtime_clock_snsec = tk->tkr_mono.xtime_nsec;
vdso_data->cs_mono_mult = tk->tkr_mono.mult;
+ vdso_data->cs_raw_mult = tk->tkr_raw.mult;
/* tkr_mono.shift == tkr_raw.shift */
vdso_data->cs_shift = tk->tkr_mono.shift;
vdso_data->cs_mask = tk->tkr_mono.mask;
diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
index 59893fca03b3..a2c4db83edc4 100644
--- a/arch/arm/vdso/vgettimeofday.c
+++ b/arch/arm/vdso/vgettimeofday.c
@@ -194,6 +194,40 @@ static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
return 0;
}
+static notrace int do_monotonic_raw(const struct vdso_data *vd,
+ struct timespec *ts)
+{
+ u32 seq, mult, shift;
+ u64 nsec, cycle_last;
+ u64 mask;
+ vdso_raw_time_sec_t sec;
+
+ do {
+ seq = vdso_read_begin(vd);
+
+ if (vd->use_syscall)
+ return -1;
+
+ cycle_last = vd->cs_cycle_last;
+
+ mult = vd->cs_raw_mult;
+ shift = vd->cs_shift;
+ mask = vd->cs_mask;
+
+ sec = vd->raw_time_sec;
+ nsec = vd->raw_time_nsec;
+
+ } while (unlikely(vdso_read_retry(vd, seq)));
+
+ nsec += get_clock_shifted_nsec(cycle_last, mult, mask);
+ nsec >>= shift;
+ /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
+ ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
+ ts->tv_nsec = nsec;
+
+ return 0;
+}
+
#else /* CONFIG_ARM_ARCH_TIMER */
static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
@@ -206,6 +240,12 @@ static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
return -1;
}
+static notrace int do_monotonic_raw(const struct vdso_data *vd,
+ struct timespec *ts)
+{
+ return -1;
+}
+
#endif /* CONFIG_ARM_ARCH_TIMER */
notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
@@ -227,6 +267,10 @@ notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
if (do_monotonic(vd, ts))
goto fallback;
break;
+ case CLOCK_MONOTONIC_RAW:
+ if (do_monotonic_raw(vd, ts))
+ goto fallback;
+ break;
default:
goto fallback;
}
--
2.19.0.605.g01d371f741-goog
Continue the effort to recode the arm64 vDSO from assembler to C,
previously submitted by Andrew Pinski <[email protected]>, reworking
it for use on both arm and arm64 and carrying over the optimizations
for each architecture. Instead of landing it in arm64, land the
result in lib/vdso and unify both implementations to simplify
future maintenance.
Add ARCH_CLOCK_FIXED_MASK as an optimization, since arm64 has no
use for the cs_mask vdso_data variable.
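To illustrate the intent: with a compile-time constant mask the compiler can fold the AND into the surrounding arithmetic instead of loading cs_mask from the data page on every call. A sketch using the GENMASK_ULL value the arm64 datapage.h in this series defines for its 56-bit counter guarantee (the helper function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* GENMASK_ULL(h, l): bits h..l set, as in include/linux/bits.h */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

/* arm64 guarantees only 56 bits of virtual counter precision. */
#define ARCH_CLOCK_FIXED_MASK GENMASK_ULL(55, 0)

static uint64_t masked_delta(uint64_t now, uint64_t last)
{
	/* Constant mask: no cs_mask load from the vdso data page, and
	 * the AND is a candidate for folding by the compiler. */
	return (now - last) & ARCH_CLOCK_FIXED_MASK;
}
```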
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v4:
- New in series, split off from earlier.
v5:
- rebase
---
arch/arm/vdso/vgettimeofday.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
index a354586f8a65..3005479efbe8 100644
--- a/arch/arm/vdso/vgettimeofday.c
+++ b/arch/arm/vdso/vgettimeofday.c
@@ -128,7 +128,11 @@ static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
{
u32 seq, mult, shift;
u64 nsec, cycle_last;
+#ifdef ARCH_CLOCK_FIXED_MASK
+ static const u64 mask = ARCH_CLOCK_FIXED_MASK;
+#else
u64 mask;
+#endif
vdso_xtime_clock_sec_t sec;
do {
@@ -141,7 +145,9 @@ static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
mult = vd->cs_mono_mult;
shift = vd->cs_shift;
+#ifndef ARCH_CLOCK_FIXED_MASK
mask = vd->cs_mask;
+#endif
sec = vd->xtime_clock_sec;
nsec = vd->xtime_clock_snsec;
@@ -161,7 +167,11 @@ static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
{
u32 seq, mult, shift;
u64 nsec, cycle_last;
+#ifdef ARCH_CLOCK_FIXED_MASK
+ static const u64 mask = ARCH_CLOCK_FIXED_MASK;
+#else
u64 mask;
+#endif
vdso_wtm_clock_nsec_t wtm_nsec;
__kernel_time_t sec;
@@ -175,7 +185,9 @@ static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
mult = vd->cs_mono_mult;
shift = vd->cs_shift;
+#ifndef ARCH_CLOCK_FIXED_MASK
mask = vd->cs_mask;
+#endif
sec = vd->xtime_clock_sec;
nsec = vd->xtime_clock_snsec;
@@ -200,7 +212,11 @@ static notrace int do_monotonic_raw(const struct vdso_data *vd,
{
u32 seq, mult, shift;
u64 nsec, cycle_last;
+#ifdef ARCH_CLOCK_FIXED_MASK
+ static const u64 mask = ARCH_CLOCK_FIXED_MASK;
+#else
u64 mask;
+#endif
vdso_raw_time_sec_t sec;
do {
@@ -213,7 +229,9 @@ static notrace int do_monotonic_raw(const struct vdso_data *vd,
mult = vd->cs_raw_mult;
shift = vd->cs_shift;
+#ifndef ARCH_CLOCK_FIXED_MASK
mask = vd->cs_mask;
+#endif
sec = vd->raw_time_sec;
nsec = vd->raw_time_nsec;
--
2.19.0.605.g01d371f741-goog
Building on the previous 9 patches, continue the effort to recode
the arm64 vDSO from assembler to C, previously submitted by
Andrew Pinski <[email protected]>, reworking it for use on both arm
and arm64 and carrying over the optimizations for each architecture.
Instead of landing it in arm64, land the result in lib/vdso and
unify both implementations to simplify future maintenance.
[email protected] makes the following claims in the original patch:
This allows the compiler to optimize the divide by 1000 and remove
the other divides.
On ThunderX, gettimeofday improves by 32%. On ThunderX 2,
gettimeofday improves by 18%.
Note: I noticed a bug in the old (arm64) implementation of
__kernel_clock_getres; it checked only the lower 32 bits of the
pointer, which works in most cases but can fail in a few.
<end of claim>
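The core conversion both the removed assembler and the new C perform is the standard fixed-point clocksource arithmetic: (delta & mask) * mult >> shift. A minimal sketch of that step, matching get_clock_shifted_nsec() in this series followed by the shift (the mult/shift values below are illustrative, not real clocksource parameters):

```c
#include <assert.h>
#include <stdint.h>

/* Convert a counter delta to nanoseconds using the clocksource's
 * fixed-point multiplier and shift: (delta & mask) * mult >> shift. */
static uint64_t cycles_to_ns(uint64_t now, uint64_t cycle_last,
			     uint64_t mask, uint32_t mult, uint32_t shift)
{
	uint64_t delta = (now - cycle_last) & mask;

	return (delta * mult) >> shift;
}
```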
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- turned off profiling, restored quiet_cmd_vdsoas.
v3:
- move arch/arm/vdso/vgettimeofday.c to lib/vdso/vgettimeofday.c.
- adjust vgettimeofday.c to be a better global candidate, switch to
using ARCH_PROVIDES_TIMER and __arch_counter_get() as more generic.
v4:
- simplify arch_vdso_read_counter, use read_sysreg.
- Use GENMASK_ULL macro for ARCH_CLOCK_FIXED_MASK.
- update commit message.
- add vdso_wtm_clock_nsec_t, vdso_xtime_clock_sec_t and
vdso_raw_time_nsec_t.
v5:
- comment on why asm/processor.h and asm/sysreg.h are included in
compiler.h, for porting clarity.
- add linux/hrtimer.h to compiler.h and comment for porting clarity.
- strip out excess unused definitions in asm-offsets.c.
---
arch/arm64/include/asm/vdso_datapage.h | 23 +-
arch/arm64/kernel/asm-offsets.c | 36 ---
arch/arm64/kernel/vdso.c | 2 +-
arch/arm64/kernel/vdso/Makefile | 29 ++-
arch/arm64/kernel/vdso/compiler.h | 69 ++++++
arch/arm64/kernel/vdso/datapage.h | 59 +++++
arch/arm64/kernel/vdso/gettimeofday.S | 328 -------------------------
arch/arm64/kernel/vdso/vgettimeofday.c | 3 +
8 files changed, 175 insertions(+), 374 deletions(-)
create mode 100644 arch/arm64/kernel/vdso/compiler.h
create mode 100644 arch/arm64/kernel/vdso/datapage.h
delete mode 100644 arch/arm64/kernel/vdso/gettimeofday.S
create mode 100644 arch/arm64/kernel/vdso/vgettimeofday.c
diff --git a/arch/arm64/include/asm/vdso_datapage.h b/arch/arm64/include/asm/vdso_datapage.h
index 2b9a63771eda..95f4a7abab80 100644
--- a/arch/arm64/include/asm/vdso_datapage.h
+++ b/arch/arm64/include/asm/vdso_datapage.h
@@ -20,16 +20,31 @@
#ifndef __ASSEMBLY__
+#ifndef _VDSO_WTM_CLOCK_SEC_T
+#define _VDSO_WTM_CLOCK_SEC_T
+typedef __u64 vdso_wtm_clock_nsec_t;
+#endif
+
+#ifndef _VDSO_XTIME_CLOCK_SEC_T
+#define _VDSO_XTIME_CLOCK_SEC_T
+typedef __u64 vdso_xtime_clock_sec_t;
+#endif
+
+#ifndef _VDSO_RAW_TIME_SEC_T
+#define _VDSO_RAW_TIME_SEC_T
+typedef __u64 vdso_raw_time_sec_t;
+#endif
+
struct vdso_data {
__u64 cs_cycle_last; /* Timebase at clocksource init */
- __u64 raw_time_sec; /* Raw time */
+ vdso_raw_time_sec_t raw_time_sec; /* Raw time */
__u64 raw_time_nsec;
- __u64 xtime_clock_sec; /* Kernel time */
- __u64 xtime_clock_nsec;
+ vdso_xtime_clock_sec_t xtime_clock_sec; /* Kernel time */
+ __u64 xtime_clock_snsec;
__u64 xtime_coarse_sec; /* Coarse time */
__u64 xtime_coarse_nsec;
__u64 wtm_clock_sec; /* Wall to monotonic time */
- __u64 wtm_clock_nsec;
+ vdso_wtm_clock_nsec_t wtm_clock_nsec;
__u32 tb_seq_count; /* Timebase sequence counter */
/* cs_* members must be adjacent and in this order (ldp accesses) */
__u32 cs_mono_mult; /* NTP-adjusted clocksource multiplier */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 323aeb5f2fe6..8938a4223690 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -94,42 +94,6 @@ int main(void)
DEFINE(DMA_TO_DEVICE, DMA_TO_DEVICE);
DEFINE(DMA_FROM_DEVICE, DMA_FROM_DEVICE);
BLANK();
- DEFINE(PREEMPT_DISABLE_OFFSET, PREEMPT_DISABLE_OFFSET);
- BLANK();
- DEFINE(CLOCK_REALTIME, CLOCK_REALTIME);
- DEFINE(CLOCK_MONOTONIC, CLOCK_MONOTONIC);
- DEFINE(CLOCK_MONOTONIC_RAW, CLOCK_MONOTONIC_RAW);
- DEFINE(CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC);
- DEFINE(CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE);
- DEFINE(CLOCK_MONOTONIC_COARSE,CLOCK_MONOTONIC_COARSE);
- DEFINE(CLOCK_COARSE_RES, LOW_RES_NSEC);
- DEFINE(NSEC_PER_SEC, NSEC_PER_SEC);
- BLANK();
- DEFINE(VDSO_CS_CYCLE_LAST, offsetof(struct vdso_data, cs_cycle_last));
- DEFINE(VDSO_RAW_TIME_SEC, offsetof(struct vdso_data, raw_time_sec));
- DEFINE(VDSO_RAW_TIME_NSEC, offsetof(struct vdso_data, raw_time_nsec));
- DEFINE(VDSO_XTIME_CLK_SEC, offsetof(struct vdso_data, xtime_clock_sec));
- DEFINE(VDSO_XTIME_CLK_NSEC, offsetof(struct vdso_data, xtime_clock_nsec));
- DEFINE(VDSO_XTIME_CRS_SEC, offsetof(struct vdso_data, xtime_coarse_sec));
- DEFINE(VDSO_XTIME_CRS_NSEC, offsetof(struct vdso_data, xtime_coarse_nsec));
- DEFINE(VDSO_WTM_CLK_SEC, offsetof(struct vdso_data, wtm_clock_sec));
- DEFINE(VDSO_WTM_CLK_NSEC, offsetof(struct vdso_data, wtm_clock_nsec));
- DEFINE(VDSO_TB_SEQ_COUNT, offsetof(struct vdso_data, tb_seq_count));
- DEFINE(VDSO_CS_MONO_MULT, offsetof(struct vdso_data, cs_mono_mult));
- DEFINE(VDSO_CS_RAW_MULT, offsetof(struct vdso_data, cs_raw_mult));
- DEFINE(VDSO_CS_SHIFT, offsetof(struct vdso_data, cs_shift));
- DEFINE(VDSO_TZ_MINWEST, offsetof(struct vdso_data, tz_minuteswest));
- DEFINE(VDSO_TZ_DSTTIME, offsetof(struct vdso_data, tz_dsttime));
- DEFINE(VDSO_USE_SYSCALL, offsetof(struct vdso_data, use_syscall));
- BLANK();
- DEFINE(TVAL_TV_SEC, offsetof(struct timeval, tv_sec));
- DEFINE(TVAL_TV_USEC, offsetof(struct timeval, tv_usec));
- DEFINE(TSPEC_TV_SEC, offsetof(struct timespec, tv_sec));
- DEFINE(TSPEC_TV_NSEC, offsetof(struct timespec, tv_nsec));
- BLANK();
- DEFINE(TZ_MINWEST, offsetof(struct timezone, tz_minuteswest));
- DEFINE(TZ_DSTTIME, offsetof(struct timezone, tz_dsttime));
- BLANK();
DEFINE(CPU_BOOT_STACK, offsetof(struct secondary_data, stack));
DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task));
BLANK();
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 2d419006ad43..59f150c25889 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -238,7 +238,7 @@ void update_vsyscall(struct timekeeper *tk)
vdso_data->raw_time_sec = tk->raw_sec;
vdso_data->raw_time_nsec = tk->tkr_raw.xtime_nsec;
vdso_data->xtime_clock_sec = tk->xtime_sec;
- vdso_data->xtime_clock_nsec = tk->tkr_mono.xtime_nsec;
+ vdso_data->xtime_clock_snsec = tk->tkr_mono.xtime_nsec;
vdso_data->cs_mono_mult = tk->tkr_mono.mult;
vdso_data->cs_raw_mult = tk->tkr_raw.mult;
/* tkr_mono.shift == tkr_raw.shift */
diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
index b215c712d897..21cad81a4f40 100644
--- a/arch/arm64/kernel/vdso/Makefile
+++ b/arch/arm64/kernel/vdso/Makefile
@@ -6,18 +6,32 @@
# Heavily based on the vDSO Makefiles for other archs.
#
-obj-vdso := gettimeofday.o note.o sigreturn.o
+obj-vdso-s := note.o sigreturn.o
+obj-vdso-c := vgettimeofday.o
# Build rules
-targets := $(obj-vdso) vdso.so vdso.so.dbg
-obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+targets := $(obj-vdso-s) $(obj-vdso-c) vdso.so vdso.so.dbg
+obj-vdso-s := $(addprefix $(obj)/, $(obj-vdso-s))
+obj-vdso-c := $(addprefix $(obj)/, $(obj-vdso-c))
+obj-vdso := $(obj-vdso-c) $(obj-vdso-s)
-ccflags-y := -shared -fno-common -fno-builtin
+ccflags-y := -shared -fno-common -fno-builtin -fno-stack-protector
+ccflags-y += -DDISABLE_BRANCH_PROFILING
ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \
$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+# Force -O2 to avoid libgcc dependencies
+CFLAGS_REMOVE_vgettimeofday.o = -pg -Os
+CFLAGS_vgettimeofday.o = -O2 -fPIC
+ifneq ($(cc-name),clang)
+CFLAGS_vgettimeofday.o += -mcmodel=tiny
+endif
+
# Disable gcov profiling for VDSO code
GCOV_PROFILE := n
+KASAN_SANITIZE := n
+UBSAN_SANITIZE := n
+KCOV_INSTRUMENT := n
# Workaround for bare-metal (ELF) toolchains that neglect to pass -shared
# down to collect2, resulting in silent corruption of the vDSO image.
@@ -50,12 +64,17 @@ include/generated/vdso-offsets.h: $(obj)/vdso.so.dbg FORCE
$(call if_changed,vdsosym)
# Assembly rules for the .S files
-$(obj-vdso): %.o: %.S FORCE
+$(obj-vdso-s): %.o: %.S FORCE
$(call if_changed_dep,vdsoas)
+$(obj-vdso-c): %.o: %.c FORCE
+ $(call if_changed_dep,vdsocc)
+
# Actual build commands
quiet_cmd_vdsold = VDSOL $@
cmd_vdsold = $(CC) $(c_flags) -Wl,-n -Wl,-T $^ -o $@
+quiet_cmd_vdsocc = VDSOC $@
+ cmd_vdsocc = ${CC} $(c_flags) -c -o $@ $<
quiet_cmd_vdsoas = VDSOA $@
cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $<
diff --git a/arch/arm64/kernel/vdso/compiler.h b/arch/arm64/kernel/vdso/compiler.h
new file mode 100644
index 000000000000..921a7191b497
--- /dev/null
+++ b/arch/arm64/kernel/vdso/compiler.h
@@ -0,0 +1,69 @@
+/*
+ * Userspace implementations of fallback calls
+ *
+ * Copyright (C) 2017 Cavium, Inc.
+ * Copyright (C) 2012 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Will Deacon <[email protected]>
+ * Rewritten into C by: Andrew Pinski <[email protected]>
+ */
+
+#ifndef __VDSO_COMPILER_H
+#define __VDSO_COMPILER_H
+
+#include <asm/processor.h> /* for cpu_relax() */
+#include <asm/sysreg.h> /* for read_sysreg() */
+#include <asm/unistd.h>
+#include <linux/compiler.h>
+#include <linux/hrtimer.h> /* for LOW_RES_NSEC and MONOTONIC_RES_NSEC */
+
+#ifdef CONFIG_ARM_ARCH_TIMER
+#define ARCH_PROVIDES_TIMER
+#endif
+
+#define DEFINE_FALLBACK(name, type_arg1, name_arg1, type_arg2, name_arg2) \
+static notrace long name##_fallback(type_arg1 _##name_arg1, \
+ type_arg2 _##name_arg2) \
+{ \
+ register type_arg1 name_arg1 asm("x0") = _##name_arg1; \
+ register type_arg2 name_arg2 asm("x1") = _##name_arg2; \
+ register long ret asm ("x0"); \
+ register long nr asm("x8") = __NR_##name; \
+ \
+ asm volatile( \
+ " svc #0\n" \
+ : "=r" (ret) \
+ : "r" (name_arg1), "r" (name_arg2), "r" (nr) \
+ : "memory"); \
+ \
+ return ret; \
+}
+
+/*
+ * AArch64 implementation of arch_counter_get_cntvct() suitable for vdso
+ */
+static __always_inline notrace u64 arch_vdso_read_counter(void)
+{
+ /* Read the virtual counter. */
+ isb();
+ return read_sysreg(cntvct_el0);
+}
+
+/* Rename exported vdso functions */
+#define __vdso_clock_gettime __kernel_clock_gettime
+#define __vdso_gettimeofday __kernel_gettimeofday
+#define __vdso_clock_getres __kernel_clock_getres
+
+#endif /* __VDSO_COMPILER_H */
diff --git a/arch/arm64/kernel/vdso/datapage.h b/arch/arm64/kernel/vdso/datapage.h
new file mode 100644
index 000000000000..be86a6074cf8
--- /dev/null
+++ b/arch/arm64/kernel/vdso/datapage.h
@@ -0,0 +1,59 @@
+/*
+ * Userspace implementations of __get_datapage
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __VDSO_DATAPAGE_H
+#define __VDSO_DATAPAGE_H
+
+#include <linux/bitops.h>
+#include <linux/types.h>
+#include <asm/vdso_datapage.h>
+
+/*
+ * We use the hidden visibility to prevent the compiler from generating a GOT
+ * relocation. Not only is going through a GOT useless (the entry couldn't and
+ * mustn't be overridden by another library), it does not even work: the linker
+ * cannot generate an absolute address to the data page.
+ *
+ * With the hidden visibility, the compiler simply generates a PC-relative
+ * relocation (R_ARM_REL32), and this is what we need.
+ */
+extern const struct vdso_data _vdso_data __attribute__((visibility("hidden")));
+
+static inline const struct vdso_data *__get_datapage(void)
+{
+ const struct vdso_data *ret;
+ /*
+ * This simply puts &_vdso_data into ret. The reason why we don't use
+ * `ret = &_vdso_data` is that the compiler tends to optimise this in a
+ * very suboptimal way: instead of keeping &_vdso_data in a register,
+ * it goes through a relocation almost every time _vdso_data must be
+ * accessed (even in subfunctions). This is both time and space
+ * consuming: each relocation uses a word in the code section, and it
+ * has to be loaded at runtime.
+ *
+ * This trick hides the assignment from the compiler. Since it cannot
+ * track where the pointer comes from, it will only use one relocation
+ * where __get_datapage() is called, and then keep the result in a
+ * register.
+ */
+ asm("" : "=r"(ret) : "0"(&_vdso_data));
+ return ret;
+}
+
+/* We can only guarantee 56 bits of precision. */
+#define ARCH_CLOCK_FIXED_MASK GENMASK_ULL(55, 0)
+
+#endif /* __VDSO_DATAPAGE_H */
diff --git a/arch/arm64/kernel/vdso/gettimeofday.S b/arch/arm64/kernel/vdso/gettimeofday.S
deleted file mode 100644
index c39872a7b03c..000000000000
--- a/arch/arm64/kernel/vdso/gettimeofday.S
+++ /dev/null
@@ -1,328 +0,0 @@
-/*
- * Userspace implementations of gettimeofday() and friends.
- *
- * Copyright (C) 2012 ARM Limited
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- *
- * Author: Will Deacon <[email protected]>
- */
-
-#include <linux/linkage.h>
-#include <asm/asm-offsets.h>
-#include <asm/unistd.h>
-
-#define NSEC_PER_SEC_LO16 0xca00
-#define NSEC_PER_SEC_HI16 0x3b9a
-
-vdso_data .req x6
-seqcnt .req w7
-w_tmp .req w8
-x_tmp .req x8
-
-/*
- * Conventions for macro arguments:
- * - An argument is write-only if its name starts with "res".
- * - All other arguments are read-only, unless otherwise specified.
- */
-
- .macro seqcnt_acquire
-9999: ldr seqcnt, [vdso_data, #VDSO_TB_SEQ_COUNT]
- tbnz seqcnt, #0, 9999b
- dmb ishld
- .endm
-
- .macro seqcnt_check fail
- dmb ishld
- ldr w_tmp, [vdso_data, #VDSO_TB_SEQ_COUNT]
- cmp w_tmp, seqcnt
- b.ne \fail
- .endm
-
- .macro syscall_check fail
- ldr w_tmp, [vdso_data, #VDSO_USE_SYSCALL]
- cbnz w_tmp, \fail
- .endm
-
- .macro get_nsec_per_sec res
- mov \res, #NSEC_PER_SEC_LO16
- movk \res, #NSEC_PER_SEC_HI16, lsl #16
- .endm
-
- /*
- * Returns the clock delta, in nanoseconds left-shifted by the clock
- * shift.
- */
- .macro get_clock_shifted_nsec res, cycle_last, mult
- /* Read the virtual counter. */
- isb
- mrs x_tmp, cntvct_el0
- /* Calculate cycle delta and convert to ns. */
- sub \res, x_tmp, \cycle_last
- /* We can only guarantee 56 bits of precision. */
- movn x_tmp, #0xff00, lsl #48
- and \res, x_tmp, \res
- mul \res, \res, \mult
- .endm
-
- /*
- * Returns in res_{sec,nsec} the REALTIME timespec, based on the
- * "wall time" (xtime) and the clock_mono delta.
- */
- .macro get_ts_realtime res_sec, res_nsec, \
- clock_nsec, xtime_sec, xtime_nsec, nsec_to_sec
- add \res_nsec, \clock_nsec, \xtime_nsec
- udiv x_tmp, \res_nsec, \nsec_to_sec
- add \res_sec, \xtime_sec, x_tmp
- msub \res_nsec, x_tmp, \nsec_to_sec, \res_nsec
- .endm
-
- /*
- * Returns in res_{sec,nsec} the timespec based on the clock_raw delta,
- * used for CLOCK_MONOTONIC_RAW.
- */
- .macro get_ts_clock_raw res_sec, res_nsec, clock_nsec, nsec_to_sec
- udiv \res_sec, \clock_nsec, \nsec_to_sec
- msub \res_nsec, \res_sec, \nsec_to_sec, \clock_nsec
- .endm
-
- /* sec and nsec are modified in place. */
- .macro add_ts sec, nsec, ts_sec, ts_nsec, nsec_to_sec
- /* Add timespec. */
- add \sec, \sec, \ts_sec
- add \nsec, \nsec, \ts_nsec
-
- /* Normalise the new timespec. */
- cmp \nsec, \nsec_to_sec
- b.lt 9999f
- sub \nsec, \nsec, \nsec_to_sec
- add \sec, \sec, #1
-9999:
- cmp \nsec, #0
- b.ge 9998f
- add \nsec, \nsec, \nsec_to_sec
- sub \sec, \sec, #1
-9998:
- .endm
-
- .macro clock_gettime_return, shift=0
- .if \shift == 1
- lsr x11, x11, x12
- .endif
- stp x10, x11, [x1, #TSPEC_TV_SEC]
- mov x0, xzr
- ret
- .endm
-
- .macro jump_slot jumptable, index, label
- .if (. - \jumptable) != 4 * (\index)
- .error "Jump slot index mismatch"
- .endif
- b \label
- .endm
-
- .text
-
-/* int __kernel_gettimeofday(struct timeval *tv, struct timezone *tz); */
-ENTRY(__kernel_gettimeofday)
- .cfi_startproc
- adr vdso_data, _vdso_data
- /* If tv is NULL, skip to the timezone code. */
- cbz x0, 2f
-
- /* Compute the time of day. */
-1: seqcnt_acquire
- syscall_check fail=4f
- ldr x10, [vdso_data, #VDSO_CS_CYCLE_LAST]
- /* w11 = cs_mono_mult, w12 = cs_shift */
- ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
- ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
- seqcnt_check fail=1b
-
- get_nsec_per_sec res=x9
- lsl x9, x9, x12
-
- get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
- get_ts_realtime res_sec=x10, res_nsec=x11, \
- clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
-
- /* Convert ns to us. */
- mov x13, #1000
- lsl x13, x13, x12
- udiv x11, x11, x13
- stp x10, x11, [x0, #TVAL_TV_SEC]
-2:
- /* If tz is NULL, return 0. */
- cbz x1, 3f
- ldp w4, w5, [vdso_data, #VDSO_TZ_MINWEST]
- stp w4, w5, [x1, #TZ_MINWEST]
-3:
- mov x0, xzr
- ret
-4:
- /* Syscall fallback. */
- mov x8, #__NR_gettimeofday
- svc #0
- ret
- .cfi_endproc
-ENDPROC(__kernel_gettimeofday)
-
-#define JUMPSLOT_MAX CLOCK_MONOTONIC_COARSE
-
-/* int __kernel_clock_gettime(clockid_t clock_id, struct timespec *tp); */
-ENTRY(__kernel_clock_gettime)
- .cfi_startproc
- cmp w0, #JUMPSLOT_MAX
- b.hi syscall
- adr vdso_data, _vdso_data
- adr x_tmp, jumptable
- add x_tmp, x_tmp, w0, uxtw #2
- br x_tmp
-
- ALIGN
-jumptable:
- jump_slot jumptable, CLOCK_REALTIME, realtime
- jump_slot jumptable, CLOCK_MONOTONIC, monotonic
- b syscall
- b syscall
- jump_slot jumptable, CLOCK_MONOTONIC_RAW, monotonic_raw
- jump_slot jumptable, CLOCK_REALTIME_COARSE, realtime_coarse
- jump_slot jumptable, CLOCK_MONOTONIC_COARSE, monotonic_coarse
-
- .if (. - jumptable) != 4 * (JUMPSLOT_MAX + 1)
- .error "Wrong jumptable size"
- .endif
-
- ALIGN
-realtime:
- seqcnt_acquire
- syscall_check fail=syscall
- ldr x10, [vdso_data, #VDSO_CS_CYCLE_LAST]
- /* w11 = cs_mono_mult, w12 = cs_shift */
- ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
- ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
- seqcnt_check fail=realtime
-
- /* All computations are done with left-shifted nsecs. */
- get_nsec_per_sec res=x9
- lsl x9, x9, x12
-
- get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
- get_ts_realtime res_sec=x10, res_nsec=x11, \
- clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
- clock_gettime_return, shift=1
-
- ALIGN
-monotonic:
- seqcnt_acquire
- syscall_check fail=syscall
- ldr x10, [vdso_data, #VDSO_CS_CYCLE_LAST]
- /* w11 = cs_mono_mult, w12 = cs_shift */
- ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
- ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
- ldp x3, x4, [vdso_data, #VDSO_WTM_CLK_SEC]
- seqcnt_check fail=monotonic
-
- /* All computations are done with left-shifted nsecs. */
- lsl x4, x4, x12
- get_nsec_per_sec res=x9
- lsl x9, x9, x12
-
- get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
- get_ts_realtime res_sec=x10, res_nsec=x11, \
- clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
-
- add_ts sec=x10, nsec=x11, ts_sec=x3, ts_nsec=x4, nsec_to_sec=x9
- clock_gettime_return, shift=1
-
- ALIGN
-monotonic_raw:
- seqcnt_acquire
- syscall_check fail=syscall
- ldr x10, [vdso_data, #VDSO_CS_CYCLE_LAST]
- /* w11 = cs_raw_mult, w12 = cs_shift */
- ldp w12, w11, [vdso_data, #VDSO_CS_SHIFT]
- ldp x13, x14, [vdso_data, #VDSO_RAW_TIME_SEC]
- seqcnt_check fail=monotonic_raw
-
- /* All computations are done with left-shifted nsecs. */
- get_nsec_per_sec res=x9
- lsl x9, x9, x12
-
- get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
- get_ts_clock_raw res_sec=x10, res_nsec=x11, \
- clock_nsec=x15, nsec_to_sec=x9
-
- add_ts sec=x10, nsec=x11, ts_sec=x13, ts_nsec=x14, nsec_to_sec=x9
- clock_gettime_return, shift=1
-
- ALIGN
-realtime_coarse:
- seqcnt_acquire
- ldp x10, x11, [vdso_data, #VDSO_XTIME_CRS_SEC]
- seqcnt_check fail=realtime_coarse
- clock_gettime_return
-
- ALIGN
-monotonic_coarse:
- seqcnt_acquire
- ldp x10, x11, [vdso_data, #VDSO_XTIME_CRS_SEC]
- ldp x13, x14, [vdso_data, #VDSO_WTM_CLK_SEC]
- seqcnt_check fail=monotonic_coarse
-
- /* Computations are done in (non-shifted) nsecs. */
- get_nsec_per_sec res=x9
- add_ts sec=x10, nsec=x11, ts_sec=x13, ts_nsec=x14, nsec_to_sec=x9
- clock_gettime_return
-
- ALIGN
-syscall: /* Syscall fallback. */
- mov x8, #__NR_clock_gettime
- svc #0
- ret
- .cfi_endproc
-ENDPROC(__kernel_clock_gettime)
-
-/* int __kernel_clock_getres(clockid_t clock_id, struct timespec *res); */
-ENTRY(__kernel_clock_getres)
- .cfi_startproc
- cmp w0, #CLOCK_REALTIME
- ccmp w0, #CLOCK_MONOTONIC, #0x4, ne
- ccmp w0, #CLOCK_MONOTONIC_RAW, #0x4, ne
- b.ne 1f
-
- ldr x2, 5f
- b 2f
-1:
- cmp w0, #CLOCK_REALTIME_COARSE
- ccmp w0, #CLOCK_MONOTONIC_COARSE, #0x4, ne
- b.ne 4f
- ldr x2, 6f
-2:
- cbz x1, 3f
- stp xzr, x2, [x1]
-
-3: /* res == NULL. */
- mov w0, wzr
- ret
-
-4: /* Syscall fallback. */
- mov x8, #__NR_clock_getres
- svc #0
- ret
-5:
- .quad CLOCK_REALTIME_RES
-6:
- .quad CLOCK_COARSE_RES
- .cfi_endproc
-ENDPROC(__kernel_clock_getres)
diff --git a/arch/arm64/kernel/vdso/vgettimeofday.c b/arch/arm64/kernel/vdso/vgettimeofday.c
new file mode 100644
index 000000000000..b73d4011993d
--- /dev/null
+++ b/arch/arm64/kernel/vdso/vgettimeofday.c
@@ -0,0 +1,3 @@
+#include "compiler.h"
+#include "datapage.h"
+#include "../../../../lib/vdso/vgettimeofday.c"
--
2.19.0.605.g01d371f741-goog
Recode the arm64 vdso from assembler to C, building on the version
previously submitted by Andrew Pinski <[email protected]>, and rework
it for use on both arm and arm64, preserving the optimizations made
for each architecture. Instead of landing the result in arm64, land
it in lib/vdso and unify both implementations to simplify future
maintenance.
Add clock_getres() vdso support to match the existing support in
the arm64 vdso.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- split first CL into 7 of 7 pieces
v4:
- update commit message to reflect overall reasoning.
- replace typeof() with long for nsec
- replace clock_id with clock to match usage elsewhere.
v5:
- add linux/hrtimer.h to compiler.h to supply for LOW_RES_NSEC and
MONOTONIC_RES_NSEC.
---
arch/arm/kernel/vdso.c | 1 +
arch/arm/vdso/compiler.h | 1 +
arch/arm/vdso/vdso.lds.S | 1 +
arch/arm/vdso/vgettimeofday.c | 23 +++++++++++++++++++++++
4 files changed, 26 insertions(+)
diff --git a/arch/arm/kernel/vdso.c b/arch/arm/kernel/vdso.c
index 9c13a32fa5f0..c299967df63c 100644
--- a/arch/arm/kernel/vdso.c
+++ b/arch/arm/kernel/vdso.c
@@ -191,6 +191,7 @@ static void __init patch_vdso(void *ehdr)
if (!cntvct_ok) {
vdso_nullpatch_one(&einfo, "__vdso_gettimeofday");
vdso_nullpatch_one(&einfo, "__vdso_clock_gettime");
+ vdso_nullpatch_one(&einfo, "__vdso_clock_getres");
}
}
diff --git a/arch/arm/vdso/compiler.h b/arch/arm/vdso/compiler.h
index 3edddb705a1b..c7751019246a 100644
--- a/arch/arm/vdso/compiler.h
+++ b/arch/arm/vdso/compiler.h
@@ -27,6 +27,7 @@
#include <asm/processor.h> /* for cpu_relax() */
#include <asm/unistd.h>
#include <linux/compiler.h>
+#include <linux/hrtimer.h> /* for LOW_RES_NSEC and MONOTONIC_RES_NSEC */
#include <linux/time.h> /* for NSEC_PER_SEC */
#ifndef CONFIG_AEABI
diff --git a/arch/arm/vdso/vdso.lds.S b/arch/arm/vdso/vdso.lds.S
index 89ca89f12d23..1d81e8c3acf6 100644
--- a/arch/arm/vdso/vdso.lds.S
+++ b/arch/arm/vdso/vdso.lds.S
@@ -82,6 +82,7 @@ VERSION
global:
__vdso_clock_gettime;
__vdso_gettimeofday;
+ __vdso_clock_getres;
local: *;
};
}
diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
index a2c4db83edc4..a354586f8a65 100644
--- a/arch/arm/vdso/vgettimeofday.c
+++ b/arch/arm/vdso/vgettimeofday.c
@@ -32,6 +32,7 @@
DEFINE_FALLBACK(gettimeofday, struct timeval *, tv, struct timezone *, tz)
DEFINE_FALLBACK(clock_gettime, clockid_t, clock, struct timespec *, ts)
+DEFINE_FALLBACK(clock_getres, clockid_t, clock, struct timespec *, ts)
static notrace u32 vdso_read_begin(const struct vdso_data *vd)
{
@@ -301,3 +302,25 @@ notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
return 0;
}
+
+int __vdso_clock_getres(clockid_t clock, struct timespec *res)
+{
+ long nsec;
+
+ if (clock == CLOCK_REALTIME ||
+ clock == CLOCK_MONOTONIC ||
+ clock == CLOCK_MONOTONIC_RAW)
+ nsec = MONOTONIC_RES_NSEC;
+ else if (clock == CLOCK_REALTIME_COARSE ||
+ clock == CLOCK_MONOTONIC_COARSE)
+ nsec = LOW_RES_NSEC;
+ else
+ return clock_getres_fallback(clock, res);
+
+ if (likely(res != NULL)) {
+ res->tv_sec = 0;
+ res->tv_nsec = nsec;
+ }
+
+ return 0;
+}
--
2.19.0.605.g01d371f741-goog
Recode the arm64 vdso from assembler to C, building on the version
previously submitted by Andrew Pinski <[email protected]>, and rework
it for use on both arm and arm64, preserving the optimizations made
for each architecture. Instead of landing the result in arm64, land
it in lib/vdso and unify both implementations to simplify future
maintenance.
Declare arch/arm/vdso/vgettimeofday.c to be a candidate for a global
implementation of the vdso timer calls. The hope is that new
architectures can take advantage of the current unification of the
arm and arm64 implementations. We urge future efforts to merge their
implementations into the global vgettimeofday.c file and thus provide
functional parity.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v3:
- added this change
- move arch/arm/vdso/vgettimeofday.c to lib/vdso/vgettimeofday.c
- adjust vgettimeofday.c to be a better global candidate, switch to
using ARCH_PROVIDES_TIMER and __arch_counter_get() as more generic.
v4:
- update commit message to reflect overall reasoning
- adjust to reflect dropping of any forced inline
v5:
- rebase
---
arch/arm/vdso/compiler.h | 4 +
arch/arm/vdso/vgettimeofday.c | 343 +--------------------------------
lib/vdso/compiler.h | 24 +++
lib/vdso/datapage.h | 24 +++
lib/vdso/vgettimeofday.c | 344 ++++++++++++++++++++++++++++++++++
5 files changed, 397 insertions(+), 342 deletions(-)
create mode 100644 lib/vdso/compiler.h
create mode 100644 lib/vdso/datapage.h
create mode 100644 lib/vdso/vgettimeofday.c
diff --git a/arch/arm/vdso/compiler.h b/arch/arm/vdso/compiler.h
index c7751019246a..6fd88be2ff0e 100644
--- a/arch/arm/vdso/compiler.h
+++ b/arch/arm/vdso/compiler.h
@@ -34,6 +34,10 @@
#error This code depends on AEABI system call conventions
#endif
+#ifdef CONFIG_ARM_ARCH_TIMER
+#define ARCH_PROVIDES_TIMER
+#endif
+
#define DEFINE_FALLBACK(name, type_arg1, name_arg1, type_arg2, name_arg2) \
static notrace long name##_fallback(type_arg1 _##name_arg1, \
type_arg2 _##name_arg2) \
diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
index 3005479efbe8..4b241fe60d17 100644
--- a/arch/arm/vdso/vgettimeofday.c
+++ b/arch/arm/vdso/vgettimeofday.c
@@ -1,344 +1,3 @@
-/*
- * Userspace implementations of gettimeofday() and friends.
- *
- * Copyright (C) 2017 Cavium, Inc.
- * Copyright (C) 2015 Mentor Graphics Corporation
- * Copyright (C) 2012 ARM Limited
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- *
- * Author: Will Deacon <[email protected]>
- * Rewriten from arch64 version into C by: Andrew Pinski <[email protected]>
- * Reworked and rebased over arm version by: Mark Salyzyn <[email protected]>
- */
-
-#include <asm/barrier.h>
-#include <linux/compiler.h> /* for notrace */
-#include <linux/math64.h> /* for __iter_div_u64_rem() */
-#include <uapi/linux/time.h> /* for struct timespec */
-
#include "compiler.h"
#include "datapage.h"
-
-DEFINE_FALLBACK(gettimeofday, struct timeval *, tv, struct timezone *, tz)
-DEFINE_FALLBACK(clock_gettime, clockid_t, clock, struct timespec *, ts)
-DEFINE_FALLBACK(clock_getres, clockid_t, clock, struct timespec *, ts)
-
-static notrace u32 vdso_read_begin(const struct vdso_data *vd)
-{
- u32 seq;
-
- do {
- seq = READ_ONCE(vd->tb_seq_count);
-
- if ((seq & 1) == 0)
- break;
-
- cpu_relax();
- } while (true);
-
- smp_rmb(); /* Pairs with second smp_wmb in update_vsyscall */
- return seq;
-}
-
-static notrace int vdso_read_retry(const struct vdso_data *vd, u32 start)
-{
- u32 seq;
-
- smp_rmb(); /* Pairs with first smp_wmb in update_vsyscall */
- seq = READ_ONCE(vd->tb_seq_count);
- return seq != start;
-}
-
-static notrace int do_realtime_coarse(const struct vdso_data *vd,
- struct timespec *ts)
-{
- u32 seq;
-
- do {
- seq = vdso_read_begin(vd);
-
- ts->tv_sec = vd->xtime_coarse_sec;
- ts->tv_nsec = vd->xtime_coarse_nsec;
-
- } while (vdso_read_retry(vd, seq));
-
- return 0;
-}
-
-static notrace int do_monotonic_coarse(const struct vdso_data *vd,
- struct timespec *ts)
-{
- struct timespec tomono;
- u32 seq;
- u64 nsec;
-
- do {
- seq = vdso_read_begin(vd);
-
- ts->tv_sec = vd->xtime_coarse_sec;
- ts->tv_nsec = vd->xtime_coarse_nsec;
-
- tomono.tv_sec = vd->wtm_clock_sec;
- tomono.tv_nsec = vd->wtm_clock_nsec;
-
- } while (vdso_read_retry(vd, seq));
-
- ts->tv_sec += tomono.tv_sec;
- /* open coding timespec_add_ns */
- ts->tv_sec += __iter_div_u64_rem(ts->tv_nsec + tomono.tv_nsec,
- NSEC_PER_SEC, &nsec);
- ts->tv_nsec = nsec;
-
- return 0;
-}
-
-#ifdef CONFIG_ARM_ARCH_TIMER
-
-/*
- * Returns the clock delta, in nanoseconds left-shifted by the clock
- * shift.
- */
-static notrace u64 get_clock_shifted_nsec(const u64 cycle_last,
- const u32 mult,
- const u64 mask)
-{
- u64 res;
-
- /* Read the virtual counter. */
- res = arch_vdso_read_counter();
-
- res = res - cycle_last;
-
- res &= mask;
- return res * mult;
-}
-
-static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
-{
- u32 seq, mult, shift;
- u64 nsec, cycle_last;
-#ifdef ARCH_CLOCK_FIXED_MASK
- static const u64 mask = ARCH_CLOCK_FIXED_MASK;
-#else
- u64 mask;
-#endif
- vdso_xtime_clock_sec_t sec;
-
- do {
- seq = vdso_read_begin(vd);
-
- if (vd->use_syscall)
- return -1;
-
- cycle_last = vd->cs_cycle_last;
-
- mult = vd->cs_mono_mult;
- shift = vd->cs_shift;
-#ifndef ARCH_CLOCK_FIXED_MASK
- mask = vd->cs_mask;
-#endif
-
- sec = vd->xtime_clock_sec;
- nsec = vd->xtime_clock_snsec;
-
- } while (unlikely(vdso_read_retry(vd, seq)));
-
- nsec += get_clock_shifted_nsec(cycle_last, mult, mask);
- nsec >>= shift;
- /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
- ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
- ts->tv_nsec = nsec;
-
- return 0;
-}
-
-static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
-{
- u32 seq, mult, shift;
- u64 nsec, cycle_last;
-#ifdef ARCH_CLOCK_FIXED_MASK
- static const u64 mask = ARCH_CLOCK_FIXED_MASK;
-#else
- u64 mask;
-#endif
- vdso_wtm_clock_nsec_t wtm_nsec;
- __kernel_time_t sec;
-
- do {
- seq = vdso_read_begin(vd);
-
- if (vd->use_syscall)
- return -1;
-
- cycle_last = vd->cs_cycle_last;
-
- mult = vd->cs_mono_mult;
- shift = vd->cs_shift;
-#ifndef ARCH_CLOCK_FIXED_MASK
- mask = vd->cs_mask;
-#endif
-
- sec = vd->xtime_clock_sec;
- nsec = vd->xtime_clock_snsec;
-
- sec += vd->wtm_clock_sec;
- wtm_nsec = vd->wtm_clock_nsec;
-
- } while (unlikely(vdso_read_retry(vd, seq)));
-
- nsec += get_clock_shifted_nsec(cycle_last, mult, mask);
- nsec >>= shift;
- nsec += wtm_nsec;
- /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
- ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
- ts->tv_nsec = nsec;
-
- return 0;
-}
-
-static notrace int do_monotonic_raw(const struct vdso_data *vd,
- struct timespec *ts)
-{
- u32 seq, mult, shift;
- u64 nsec, cycle_last;
-#ifdef ARCH_CLOCK_FIXED_MASK
- static const u64 mask = ARCH_CLOCK_FIXED_MASK;
-#else
- u64 mask;
-#endif
- vdso_raw_time_sec_t sec;
-
- do {
- seq = vdso_read_begin(vd);
-
- if (vd->use_syscall)
- return -1;
-
- cycle_last = vd->cs_cycle_last;
-
- mult = vd->cs_raw_mult;
- shift = vd->cs_shift;
-#ifndef ARCH_CLOCK_FIXED_MASK
- mask = vd->cs_mask;
-#endif
-
- sec = vd->raw_time_sec;
- nsec = vd->raw_time_nsec;
-
- } while (unlikely(vdso_read_retry(vd, seq)));
-
- nsec += get_clock_shifted_nsec(cycle_last, mult, mask);
- nsec >>= shift;
- /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
- ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
- ts->tv_nsec = nsec;
-
- return 0;
-}
-
-#else /* CONFIG_ARM_ARCH_TIMER */
-
-static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
-{
- return -1;
-}
-
-static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
-{
- return -1;
-}
-
-static notrace int do_monotonic_raw(const struct vdso_data *vd,
- struct timespec *ts)
-{
- return -1;
-}
-
-#endif /* CONFIG_ARM_ARCH_TIMER */
-
-notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
-{
- const struct vdso_data *vd = __get_datapage();
-
- switch (clock) {
- case CLOCK_REALTIME_COARSE:
- do_realtime_coarse(vd, ts);
- break;
- case CLOCK_MONOTONIC_COARSE:
- do_monotonic_coarse(vd, ts);
- break;
- case CLOCK_REALTIME:
- if (do_realtime(vd, ts))
- goto fallback;
- break;
- case CLOCK_MONOTONIC:
- if (do_monotonic(vd, ts))
- goto fallback;
- break;
- case CLOCK_MONOTONIC_RAW:
- if (do_monotonic_raw(vd, ts))
- goto fallback;
- break;
- default:
- goto fallback;
- }
-
- return 0;
-fallback:
- return clock_gettime_fallback(clock, ts);
-}
-
-notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
-{
- const struct vdso_data *vd = __get_datapage();
-
- if (likely(tv != NULL)) {
- struct timespec ts;
-
- if (do_realtime(vd, &ts))
- return gettimeofday_fallback(tv, tz);
-
- tv->tv_sec = ts.tv_sec;
- tv->tv_usec = ts.tv_nsec / 1000;
- }
-
- if (unlikely(tz != NULL)) {
- tz->tz_minuteswest = vd->tz_minuteswest;
- tz->tz_dsttime = vd->tz_dsttime;
- }
-
- return 0;
-}
-
-int __vdso_clock_getres(clockid_t clock, struct timespec *res)
-{
- long nsec;
-
- if (clock == CLOCK_REALTIME ||
- clock == CLOCK_MONOTONIC ||
- clock == CLOCK_MONOTONIC_RAW)
- nsec = MONOTONIC_RES_NSEC;
- else if (clock == CLOCK_REALTIME_COARSE ||
- clock == CLOCK_MONOTONIC_COARSE)
- nsec = LOW_RES_NSEC;
- else
- return clock_getres_fallback(clock, res);
-
- if (likely(res != NULL)) {
- res->tv_sec = 0;
- res->tv_nsec = nsec;
- }
-
- return 0;
-}
+#include "../../../lib/vdso/vgettimeofday.c"
diff --git a/lib/vdso/compiler.h b/lib/vdso/compiler.h
new file mode 100644
index 000000000000..0e618b73e064
--- /dev/null
+++ b/lib/vdso/compiler.h
@@ -0,0 +1,24 @@
+/*
+ * Userspace implementations of fallback calls
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __VDSO_COMPILER_H
+#define __VDSO_COMPILER_H
+
+#error "vdso: Provide architectural overrides such as ARCH_PROVIDES_TIMER,"
+#error " DEFINE_FALLBACK and __arch_counter_get or any overrides. eg:"
+#error " vdso entry points or compilation time helpers."
+
+#endif /* __VDSO_COMPILER_H */
diff --git a/lib/vdso/datapage.h b/lib/vdso/datapage.h
new file mode 100644
index 000000000000..df4427e42d51
--- /dev/null
+++ b/lib/vdso/datapage.h
@@ -0,0 +1,24 @@
+/*
+ * Userspace implementations of __get_datapage
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __VDSO_DATAPAGE_H
+#define __VDSO_DATAPAGE_H
+
+#error "vdso: Provide a user space architecture specific definition or"
+#error "prototype for struct vdso_data *__get_datapage(void). Also define"
+#error "ARCH_CLOCK_FIXED_MASK if not provided by cs_mask."
+
+#endif /* __VDSO_DATAPAGE_H */
diff --git a/lib/vdso/vgettimeofday.c b/lib/vdso/vgettimeofday.c
new file mode 100644
index 000000000000..33c5917fe9f8
--- /dev/null
+++ b/lib/vdso/vgettimeofday.c
@@ -0,0 +1,344 @@
+/*
+ * Userspace implementations of gettimeofday() and friends.
+ *
+ * Copyright (C) 2017 Cavium, Inc.
+ * Copyright (C) 2015 Mentor Graphics Corporation
+ * Copyright (C) 2012 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Will Deacon <[email protected]>
+ * Rewritten from the arm64 version into C by: Andrew Pinski <[email protected]>
+ * Reworked and rebased over arm version by: Mark Salyzyn <[email protected]>
+ */
+
+#include <asm/barrier.h>
+#include <linux/compiler.h> /* for notrace */
+#include <linux/math64.h> /* for __iter_div_u64_rem() */
+#include <uapi/linux/time.h> /* for struct timespec */
+
+#include "compiler.h"
+#include "datapage.h"
+
+DEFINE_FALLBACK(gettimeofday, struct timeval *, tv, struct timezone *, tz)
+DEFINE_FALLBACK(clock_gettime, clockid_t, clock, struct timespec *, ts)
+DEFINE_FALLBACK(clock_getres, clockid_t, clock, struct timespec *, ts)
+
+static notrace u32 vdso_read_begin(const struct vdso_data *vd)
+{
+ u32 seq;
+
+ do {
+ seq = READ_ONCE(vd->tb_seq_count);
+
+ if ((seq & 1) == 0)
+ break;
+
+ cpu_relax();
+ } while (true);
+
+ smp_rmb(); /* Pairs with second smp_wmb in update_vsyscall */
+ return seq;
+}
+
+static notrace int vdso_read_retry(const struct vdso_data *vd, u32 start)
+{
+ u32 seq;
+
+ smp_rmb(); /* Pairs with first smp_wmb in update_vsyscall */
+ seq = READ_ONCE(vd->tb_seq_count);
+ return seq != start;
+}
+
+static notrace int do_realtime_coarse(const struct vdso_data *vd,
+ struct timespec *ts)
+{
+ u32 seq;
+
+ do {
+ seq = vdso_read_begin(vd);
+
+ ts->tv_sec = vd->xtime_coarse_sec;
+ ts->tv_nsec = vd->xtime_coarse_nsec;
+
+ } while (vdso_read_retry(vd, seq));
+
+ return 0;
+}
+
+static notrace int do_monotonic_coarse(const struct vdso_data *vd,
+ struct timespec *ts)
+{
+ struct timespec tomono;
+ u32 seq;
+ u64 nsec;
+
+ do {
+ seq = vdso_read_begin(vd);
+
+ ts->tv_sec = vd->xtime_coarse_sec;
+ ts->tv_nsec = vd->xtime_coarse_nsec;
+
+ tomono.tv_sec = vd->wtm_clock_sec;
+ tomono.tv_nsec = vd->wtm_clock_nsec;
+
+ } while (vdso_read_retry(vd, seq));
+
+ ts->tv_sec += tomono.tv_sec;
+ /* open coding timespec_add_ns */
+ ts->tv_sec += __iter_div_u64_rem(ts->tv_nsec + tomono.tv_nsec,
+ NSEC_PER_SEC, &nsec);
+ ts->tv_nsec = nsec;
+
+ return 0;
+}
+
+#ifdef ARCH_PROVIDES_TIMER
+
+/*
+ * Returns the clock delta, in nanoseconds left-shifted by the clock
+ * shift.
+ */
+static notrace u64 get_clock_shifted_nsec(const u64 cycle_last,
+ const u32 mult,
+ const u64 mask)
+{
+ u64 res;
+
+ /* Read the virtual counter. */
+ res = arch_vdso_read_counter();
+
+ res = res - cycle_last;
+
+ res &= mask;
+ return res * mult;
+}
+
+static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
+{
+ u32 seq, mult, shift;
+ u64 nsec, cycle_last;
+#ifdef ARCH_CLOCK_FIXED_MASK
+ static const u64 mask = ARCH_CLOCK_FIXED_MASK;
+#else
+ u64 mask;
+#endif
+ vdso_xtime_clock_sec_t sec;
+
+ do {
+ seq = vdso_read_begin(vd);
+
+ if (vd->use_syscall)
+ return -1;
+
+ cycle_last = vd->cs_cycle_last;
+
+ mult = vd->cs_mono_mult;
+ shift = vd->cs_shift;
+#ifndef ARCH_CLOCK_FIXED_MASK
+ mask = vd->cs_mask;
+#endif
+
+ sec = vd->xtime_clock_sec;
+ nsec = vd->xtime_clock_snsec;
+
+ } while (unlikely(vdso_read_retry(vd, seq)));
+
+ nsec += get_clock_shifted_nsec(cycle_last, mult, mask);
+ nsec >>= shift;
+ /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
+ ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
+ ts->tv_nsec = nsec;
+
+ return 0;
+}
+
+static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
+{
+ u32 seq, mult, shift;
+ u64 nsec, cycle_last;
+#ifdef ARCH_CLOCK_FIXED_MASK
+ static const u64 mask = ARCH_CLOCK_FIXED_MASK;
+#else
+ u64 mask;
+#endif
+ vdso_wtm_clock_nsec_t wtm_nsec;
+ __kernel_time_t sec;
+
+ do {
+ seq = vdso_read_begin(vd);
+
+ if (vd->use_syscall)
+ return -1;
+
+ cycle_last = vd->cs_cycle_last;
+
+ mult = vd->cs_mono_mult;
+ shift = vd->cs_shift;
+#ifndef ARCH_CLOCK_FIXED_MASK
+ mask = vd->cs_mask;
+#endif
+
+ sec = vd->xtime_clock_sec;
+ nsec = vd->xtime_clock_snsec;
+
+ sec += vd->wtm_clock_sec;
+ wtm_nsec = vd->wtm_clock_nsec;
+
+ } while (unlikely(vdso_read_retry(vd, seq)));
+
+ nsec += get_clock_shifted_nsec(cycle_last, mult, mask);
+ nsec >>= shift;
+ nsec += wtm_nsec;
+ /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
+ ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
+ ts->tv_nsec = nsec;
+
+ return 0;
+}
+
+static notrace int do_monotonic_raw(const struct vdso_data *vd,
+ struct timespec *ts)
+{
+ u32 seq, mult, shift;
+ u64 nsec, cycle_last;
+#ifdef ARCH_CLOCK_FIXED_MASK
+ static const u64 mask = ARCH_CLOCK_FIXED_MASK;
+#else
+ u64 mask;
+#endif
+ vdso_raw_time_sec_t sec;
+
+ do {
+ seq = vdso_read_begin(vd);
+
+ if (vd->use_syscall)
+ return -1;
+
+ cycle_last = vd->cs_cycle_last;
+
+ mult = vd->cs_raw_mult;
+ shift = vd->cs_shift;
+#ifndef ARCH_CLOCK_FIXED_MASK
+ mask = vd->cs_mask;
+#endif
+
+ sec = vd->raw_time_sec;
+ nsec = vd->raw_time_nsec;
+
+ } while (unlikely(vdso_read_retry(vd, seq)));
+
+ nsec += get_clock_shifted_nsec(cycle_last, mult, mask);
+ nsec >>= shift;
+ /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
+ ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
+ ts->tv_nsec = nsec;
+
+ return 0;
+}
+
+#else /* ARCH_PROVIDES_TIMER */
+
+static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
+{
+ return -1;
+}
+
+static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
+{
+ return -1;
+}
+
+static notrace int do_monotonic_raw(const struct vdso_data *vd,
+ struct timespec *ts)
+{
+ return -1;
+}
+
+#endif /* ARCH_PROVIDES_TIMER */
+
+notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
+{
+ const struct vdso_data *vd = __get_datapage();
+
+ switch (clock) {
+ case CLOCK_REALTIME_COARSE:
+ do_realtime_coarse(vd, ts);
+ break;
+ case CLOCK_MONOTONIC_COARSE:
+ do_monotonic_coarse(vd, ts);
+ break;
+ case CLOCK_REALTIME:
+ if (do_realtime(vd, ts))
+ goto fallback;
+ break;
+ case CLOCK_MONOTONIC:
+ if (do_monotonic(vd, ts))
+ goto fallback;
+ break;
+ case CLOCK_MONOTONIC_RAW:
+ if (do_monotonic_raw(vd, ts))
+ goto fallback;
+ break;
+ default:
+ goto fallback;
+ }
+
+ return 0;
+fallback:
+ return clock_gettime_fallback(clock, ts);
+}
+
+notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
+{
+ const struct vdso_data *vd = __get_datapage();
+
+ if (likely(tv != NULL)) {
+ struct timespec ts;
+
+ if (do_realtime(vd, &ts))
+ return gettimeofday_fallback(tv, tz);
+
+ tv->tv_sec = ts.tv_sec;
+ tv->tv_usec = ts.tv_nsec / 1000;
+ }
+
+ if (unlikely(tz != NULL)) {
+ tz->tz_minuteswest = vd->tz_minuteswest;
+ tz->tz_dsttime = vd->tz_dsttime;
+ }
+
+ return 0;
+}
+
+int __vdso_clock_getres(clockid_t clock, struct timespec *res)
+{
+ long nsec;
+
+ if (clock == CLOCK_REALTIME ||
+ clock == CLOCK_MONOTONIC ||
+ clock == CLOCK_MONOTONIC_RAW)
+ nsec = MONOTONIC_RES_NSEC;
+ else if (clock == CLOCK_REALTIME_COARSE ||
+ clock == CLOCK_MONOTONIC_COARSE)
+ nsec = LOW_RES_NSEC;
+ else
+ return clock_getres_fallback(clock, res);
+
+ if (likely(res != NULL)) {
+ res->tv_sec = 0;
+ res->tv_nsec = nsec;
+ }
+
+ return 0;
+}
--
2.19.0.605.g01d371f741-goog
Recode the arm64 vdso from assembler to C, building on the version
previously submitted by Andrew Pinski <[email protected]>, and rework
it for use on both arm and arm64, preserving the optimizations made
for each architecture. Instead of landing the result in arm64, land
it in lib/vdso and unify both implementations to simplify future
maintenance.
Make sure kasan and ubsan sanitization, and kcov instrumentation,
are turned off for VDSO code.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- Added this to the split up of first CL, based on comments in second CL
v4:
- update commit message to reflect overall reasoning
v5:
- rebase
---
arch/arm/vdso/Makefile | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
index f4efff9d3afb..1125e3fb8006 100644
--- a/arch/arm/vdso/Makefile
+++ b/arch/arm/vdso/Makefile
@@ -27,8 +27,11 @@ CFLAGS_REMOVE_vdso.o = -pg
CFLAGS_REMOVE_vgettimeofday.o = -pg -Os
CFLAGS_vgettimeofday.o = -O2
-# Disable gcov profiling for VDSO code
+# Disable gcov, kasan, ubsan and kcov profiling for VDSO code
GCOV_PROFILE := n
+KASAN_SANITIZE := n
+UBSAN_SANITIZE := n
+KCOV_INSTRUMENT := n
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
--
2.19.0.605.g01d371f741-goog
From: Mark Salyzyn <[email protected]>
Add time() vdso support to match the existing support in the x86
vdso. This currently benefits arm and arm64, which use the common
vgettimeofday.c implementation. On arm it provides roughly a 14-fold
improvement in speed over the straight syscall, and roughly a 5-fold
improvement over an alternate library implementation that relies on
the vdso call to gettimeofday() to fulfill the request.
We can provide __vdso_time even when we cannot provide a
speed-enhanced __vdso_gettimeofday (e.g. on arm when the virtual
counter is not usable).
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
---
arch/arm/kernel/vdso.c | 1 +
arch/arm/vdso/vdso.lds.S | 1 +
arch/arm64/kernel/vdso/compiler.h | 1 +
arch/arm64/kernel/vdso/vdso.lds.S | 1 +
lib/vdso/vgettimeofday.c | 10 ++++++++++
5 files changed, 14 insertions(+)
diff --git a/arch/arm/kernel/vdso.c b/arch/arm/kernel/vdso.c
index 51d8dcbd9952..6721854c5ae6 100644
--- a/arch/arm/kernel/vdso.c
+++ b/arch/arm/kernel/vdso.c
@@ -192,6 +192,7 @@ static void __init patch_vdso(void *ehdr)
vdso_nullpatch_one(&einfo, "__vdso_gettimeofday");
vdso_nullpatch_one(&einfo, "__vdso_clock_gettime");
vdso_nullpatch_one(&einfo, "__vdso_clock_getres");
+ /* do not zero out __vdso_time, no cntvct_ok dependency */
}
}
diff --git a/arch/arm/vdso/vdso.lds.S b/arch/arm/vdso/vdso.lds.S
index 1d81e8c3acf6..1eb577091d1f 100644
--- a/arch/arm/vdso/vdso.lds.S
+++ b/arch/arm/vdso/vdso.lds.S
@@ -83,6 +83,7 @@ VERSION
__vdso_clock_gettime;
__vdso_gettimeofday;
__vdso_clock_getres;
+ __vdso_time;
local: *;
};
}
diff --git a/arch/arm64/kernel/vdso/compiler.h b/arch/arm64/kernel/vdso/compiler.h
index 921a7191b497..fb27545640f2 100644
--- a/arch/arm64/kernel/vdso/compiler.h
+++ b/arch/arm64/kernel/vdso/compiler.h
@@ -65,5 +65,6 @@ static __always_inline notrace u64 arch_vdso_read_counter(void)
#define __vdso_clock_gettime __kernel_clock_gettime
#define __vdso_gettimeofday __kernel_gettimeofday
#define __vdso_clock_getres __kernel_clock_getres
+#define __vdso_time __kernel_time
#endif /* __VDSO_COMPILER_H */
diff --git a/arch/arm64/kernel/vdso/vdso.lds.S b/arch/arm64/kernel/vdso/vdso.lds.S
index beca249bc2f3..9de0ffc369c5 100644
--- a/arch/arm64/kernel/vdso/vdso.lds.S
+++ b/arch/arm64/kernel/vdso/vdso.lds.S
@@ -88,6 +88,7 @@ VERSION
__kernel_gettimeofday;
__kernel_clock_gettime;
__kernel_clock_getres;
+ __kernel_time;
local: *;
};
}
diff --git a/lib/vdso/vgettimeofday.c b/lib/vdso/vgettimeofday.c
index 54e519c99c4b..dfced9608cd3 100644
--- a/lib/vdso/vgettimeofday.c
+++ b/lib/vdso/vgettimeofday.c
@@ -386,3 +386,13 @@ int __vdso_clock_getres(clockid_t clock, struct timespec *res)
return 0;
}
+
+notrace time_t __vdso_time(time_t *t)
+{
+ const struct vdso_data *vd = __get_datapage();
+ time_t result = READ_ONCE(vd->xtime_coarse_sec);
+
+ if (t)
+ *t = result;
+ return result;
+}
--
2.19.0.605.g01d371f741-goog
From: Kevin Brodsky <[email protected]>
AArch32 processes currently have a special [vectors] page installed that
contains the sigreturn trampolines and the kuser helpers, at the fixed
address mandated by the kuser helpers ABI.
Having both functionalities in the same page has become problematic,
because:
* It makes it impossible to disable the kuser helpers (since the
sigreturn trampolines cannot be removed with them), which is possible on arm.
* A future 32-bit vDSO would provide the sigreturn trampolines itself,
making those in [vectors] redundant.
This patch addresses the problem by moving the sigreturn trampolines to
a separate [sigpage] page, mirroring [sigpage] on arm.
Even though [vectors] has always been a misnomer on arm64/compat (there
are no AArch32 vectors there, and the page now contains only the kuser
helpers), its name has been left unchanged for compatibility with arm
(there are reports of software relying on [vectors] being the last
mapping in /proc/maps).
mm->context.vdso used to point to the [vectors] page, which is
unnecessary (as its address is fixed). It now points to the [sigpage]
page (whose address is randomized like a vDSO).
Signed-off-by: Kevin Brodsky <[email protected]>
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- reduce churniness (and defer later to vDSO patches)
- vectors_page and compat_vdso_spec as array of 2
- free sigpage if vectors allocation failed
v3:
- rebase
---
arch/arm64/include/asm/processor.h | 4 +-
arch/arm64/include/asm/signal32.h | 2 -
arch/arm64/kernel/signal32.c | 5 +-
arch/arm64/kernel/vdso.c | 82 ++++++++++++++++++++----------
4 files changed, 60 insertions(+), 33 deletions(-)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 79657ad91397..bc6bb256ea4c 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -66,9 +66,9 @@
#define STACK_TOP_MAX TASK_SIZE_64
#ifdef CONFIG_COMPAT
-#define AARCH32_VECTORS_BASE 0xffff0000
+#define AARCH32_KUSER_HELPERS_BASE 0xffff0000
#define STACK_TOP (test_thread_flag(TIF_32BIT) ? \
- AARCH32_VECTORS_BASE : STACK_TOP_MAX)
+ AARCH32_KUSER_HELPERS_BASE : STACK_TOP_MAX)
#else
#define STACK_TOP STACK_TOP_MAX
#endif /* CONFIG_COMPAT */
diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
index 81abea0b7650..58e288aaf0ba 100644
--- a/arch/arm64/include/asm/signal32.h
+++ b/arch/arm64/include/asm/signal32.h
@@ -20,8 +20,6 @@
#ifdef CONFIG_COMPAT
#include <linux/compat.h>
-#define AARCH32_KERN_SIGRET_CODE_OFFSET 0x500
-
int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
struct pt_regs *regs);
int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index 24b09003f821..52f0d44417c8 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -398,14 +398,13 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
retcode = ptr_to_compat(ka->sa.sa_restorer);
} else {
/* Set up sigreturn pointer */
+ void *sigreturn_base = current->mm->context.vdso;
unsigned int idx = thumb << 1;
if (ka->sa.sa_flags & SA_SIGINFO)
idx += 3;
- retcode = AARCH32_VECTORS_BASE +
- AARCH32_KERN_SIGRET_CODE_OFFSET +
- (idx << 2) + thumb;
+ retcode = ptr_to_compat(sigreturn_base) + (idx << 2) + thumb;
}
regs->regs[0] = usig;
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 8dd2ad220a0f..5398f6454ce1 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -1,5 +1,7 @@
/*
- * VDSO implementation for AArch64 and vector page setup for AArch32.
+ * Additional userspace pages setup for AArch64 and AArch32.
+ * - AArch64: vDSO pages setup, vDSO data page update.
+ * - AArch32: sigreturn and kuser helpers pages setup.
*
* Copyright (C) 2012 ARM Limited
*
@@ -53,32 +55,51 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
/*
* Create and map the vectors page for AArch32 tasks.
*/
-static struct page *vectors_page[1] __ro_after_init;
+static struct page *vectors_page[] __ro_after_init;
+static const struct vm_special_mapping compat_vdso_spec[] = {
+ {
+ /* Must be named [sigpage] for compatibility with arm. */
+ .name = "[sigpage]",
+ .pages = &vectors_page[0],
+ },
+ {
+ .name = "[kuserhelpers]",
+ .pages = &vectors_page[1],
+ },
+};
+static struct page *vectors_page[ARRAY_SIZE(compat_vdso_spec)] __ro_after_init;
static int __init alloc_vectors_page(void)
{
extern char __kuser_helper_start[], __kuser_helper_end[];
- extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
+ size_t kuser_sz = __kuser_helper_end - __kuser_helper_start;
+ unsigned long kuser_vpage;
- int kuser_sz = __kuser_helper_end - __kuser_helper_start;
- int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
- unsigned long vpage;
-
- vpage = get_zeroed_page(GFP_ATOMIC);
+ extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
+ size_t sigret_sz =
+ __aarch32_sigret_code_end - __aarch32_sigret_code_start;
+ unsigned long sigret_vpage;
- if (!vpage)
+ sigret_vpage = get_zeroed_page(GFP_ATOMIC);
+ if (!sigret_vpage)
return -ENOMEM;
- /* kuser helpers */
- memcpy((void *)vpage + 0x1000 - kuser_sz, __kuser_helper_start,
- kuser_sz);
+ kuser_vpage = get_zeroed_page(GFP_ATOMIC);
+ if (!kuser_vpage) {
+ free_page(sigret_vpage);
+ return -ENOMEM;
+ }
/* sigreturn code */
- memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET,
- __aarch32_sigret_code_start, sigret_sz);
+ memcpy((void *)sigret_vpage, __aarch32_sigret_code_start, sigret_sz);
+ flush_icache_range(sigret_vpage, sigret_vpage + PAGE_SIZE);
+ vectors_page[0] = virt_to_page(sigret_vpage);
- flush_icache_range(vpage, vpage + PAGE_SIZE);
- vectors_page[0] = virt_to_page(vpage);
+ /* kuser helpers */
+ memcpy((void *)kuser_vpage + 0x1000 - kuser_sz, __kuser_helper_start,
+ kuser_sz);
+ flush_icache_range(kuser_vpage, kuser_vpage + PAGE_SIZE);
+ vectors_page[1] = virt_to_page(kuser_vpage);
return 0;
}
@@ -87,23 +108,32 @@ arch_initcall(alloc_vectors_page);
int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
{
struct mm_struct *mm = current->mm;
- unsigned long addr = AARCH32_VECTORS_BASE;
- static const struct vm_special_mapping spec = {
- .name = "[vectors]",
- .pages = vectors_page,
-
- };
+ unsigned long addr;
void *ret;
if (down_write_killable(&mm->mmap_sem))
return -EINTR;
- current->mm->context.vdso = (void *)addr;
+ addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
+ if (IS_ERR_VALUE(addr)) {
+ ret = ERR_PTR(addr);
+ goto out;
+ }
- /* Map vectors page at the high address. */
ret = _install_special_mapping(mm, addr, PAGE_SIZE,
- VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
- &spec);
+ VM_READ|VM_EXEC|
+ VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
+ &compat_vdso_spec[0]);
+ if (IS_ERR(ret))
+ goto out;
+ current->mm->context.vdso = (void *)addr;
+
+ /* Map the kuser helpers at the ABI-defined high address. */
+ ret = _install_special_mapping(mm, AARCH32_KUSER_HELPERS_BASE,
+ PAGE_SIZE,
+ VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
+ &compat_vdso_spec[1]);
+out:
up_write(&mm->mmap_sem);
return PTR_ERR_OR_ZERO(ret);
--
2.19.0.605.g01d371f741-goog
From: Kevin Brodsky <[email protected]>
AArch32 processes currently have a special [vectors] page installed that
contains the sigreturn trampolines and the kuser helpers, at the fixed
address mandated by the kuser helpers ABI.
Having both functionalities in the same page has become problematic,
because:
* It makes it impossible to disable the kuser helpers (since the
sigreturn trampolines cannot be removed with them), which is possible on arm.
* A future 32-bit vDSO would provide the sigreturn trampolines itself,
making those in [vectors] redundant.
This patch addresses the problem by moving the sigreturn trampoline
sources into their own file. The comments are wrapped to appease
checkpatch.pl.
Signed-off-by: Kevin Brodsky <[email protected]>
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- split off from previous v1 'arm64: compat: Add CONFIG_KUSER_HELPERS'
- adjust makefile so one line for each of the assembler source modules
v3:
- rebase
---
arch/arm64/kernel/Makefile | 4 +-
arch/arm64/kernel/kuser32.S | 48 ++---------------------
arch/arm64/kernel/sigreturn32.S | 67 +++++++++++++++++++++++++++++++++
3 files changed, 73 insertions(+), 46 deletions(-)
create mode 100644 arch/arm64/kernel/sigreturn32.S
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 4c8b13bede80..b89a79424912 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -27,8 +27,10 @@ OBJCOPYFLAGS := --prefix-symbols=__efistub_
$(obj)/%.stub.o: $(obj)/%.o FORCE
$(call if_changed,objcopy)
-arm64-obj-$(CONFIG_COMPAT) += sys32.o kuser32.o signal32.o \
+arm64-obj-$(CONFIG_COMPAT) += sys32.o signal32.o \
sys_compat.o
+arm64-obj-$(CONFIG_COMPAT) += sigreturn32.o
+arm64-obj-$(CONFIG_COMPAT) += kuser32.o
arm64-obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o
arm64-obj-$(CONFIG_MODULES) += arm64ksyms.o module.o
arm64-obj-$(CONFIG_ARM64_MODULE_PLTS) += module-plts.o
diff --git a/arch/arm64/kernel/kuser32.S b/arch/arm64/kernel/kuser32.S
index 997e6b27ff6a..d15b5c2935b3 100644
--- a/arch/arm64/kernel/kuser32.S
+++ b/arch/arm64/kernel/kuser32.S
@@ -20,16 +20,13 @@
*
* AArch32 user helpers.
*
- * Each segment is 32-byte aligned and will be moved to the top of the high
- * vector page. New segments (if ever needed) must be added in front of
- * existing ones. This mechanism should be used only for things that are
- * really small and justified, and not be abused freely.
+ * These helpers are provided for compatibility with AArch32 binaries that
+ * still need them. They are installed at a fixed address by
+ * aarch32_setup_additional_pages().
*
* See Documentation/arm/kernel_user_helpers.txt for formal definitions.
*/
-#include <asm/unistd.h>
-
.align 5
.globl __kuser_helper_start
__kuser_helper_start:
@@ -77,42 +74,3 @@ __kuser_helper_version: // 0xffff0ffc
.word ((__kuser_helper_end - __kuser_helper_start) >> 5)
.globl __kuser_helper_end
__kuser_helper_end:
-
-/*
- * AArch32 sigreturn code
- *
- * For ARM syscalls, the syscall number has to be loaded into r7.
- * We do not support an OABI userspace.
- *
- * For Thumb syscalls, we also pass the syscall number via r7. We therefore
- * need two 16-bit instructions.
- */
- .globl __aarch32_sigret_code_start
-__aarch32_sigret_code_start:
-
- /*
- * ARM Code
- */
- .byte __NR_compat_sigreturn, 0x70, 0xa0, 0xe3 // mov r7, #__NR_compat_sigreturn
- .byte __NR_compat_sigreturn, 0x00, 0x00, 0xef // svc #__NR_compat_sigreturn
-
- /*
- * Thumb code
- */
- .byte __NR_compat_sigreturn, 0x27 // svc #__NR_compat_sigreturn
- .byte __NR_compat_sigreturn, 0xdf // mov r7, #__NR_compat_sigreturn
-
- /*
- * ARM code
- */
- .byte __NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3 // mov r7, #__NR_compat_rt_sigreturn
- .byte __NR_compat_rt_sigreturn, 0x00, 0x00, 0xef // svc #__NR_compat_rt_sigreturn
-
- /*
- * Thumb code
- */
- .byte __NR_compat_rt_sigreturn, 0x27 // svc #__NR_compat_rt_sigreturn
- .byte __NR_compat_rt_sigreturn, 0xdf // mov r7, #__NR_compat_rt_sigreturn
-
- .globl __aarch32_sigret_code_end
-__aarch32_sigret_code_end:
diff --git a/arch/arm64/kernel/sigreturn32.S b/arch/arm64/kernel/sigreturn32.S
new file mode 100644
index 000000000000..6ecda4d84cd5
--- /dev/null
+++ b/arch/arm64/kernel/sigreturn32.S
@@ -0,0 +1,67 @@
+/*
+ * sigreturn trampolines for AArch32.
+ *
+ * Copyright (C) 2005-2011 Nicolas Pitre <[email protected]>
+ * Copyright (C) 2012 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ *
+ * AArch32 sigreturn code
+ *
+ * For ARM syscalls, the syscall number has to be loaded into r7.
+ * We do not support an OABI userspace.
+ *
+ * For Thumb syscalls, we also pass the syscall number via r7. We therefore
+ * need two 16-bit instructions.
+ */
+
+#include <asm/unistd.h>
+
+ .globl __aarch32_sigret_code_start
+__aarch32_sigret_code_start:
+
+ /*
+ * ARM Code
+ */
+ // mov r7, #__NR_compat_sigreturn
+ .byte __NR_compat_sigreturn, 0x70, 0xa0, 0xe3
+ // svc #__NR_compat_sigreturn
+ .byte __NR_compat_sigreturn, 0x00, 0x00, 0xef
+
+ /*
+ * Thumb code
+ */
+ // mov r7, #__NR_compat_sigreturn
+ .byte __NR_compat_sigreturn, 0x27
+ // svc #__NR_compat_sigreturn
+ .byte __NR_compat_sigreturn, 0xdf
+
+ /*
+ * ARM code
+ */
+ // mov r7, #__NR_compat_rt_sigreturn
+ .byte __NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3
+ // svc #__NR_compat_rt_sigreturn
+ .byte __NR_compat_rt_sigreturn, 0x00, 0x00, 0xef
+
+ /*
+ * Thumb code
+ */
+ // mov r7, #__NR_compat_rt_sigreturn
+ .byte __NR_compat_rt_sigreturn, 0x27
+ // svc #__NR_compat_rt_sigreturn
+ .byte __NR_compat_rt_sigreturn, 0xdf
+
+ .globl __aarch32_sigret_code_end
+__aarch32_sigret_code_end:
--
2.19.0.605.g01d371f741-goog
Take the effort to recode the arm64 vdso from assembler to C,
previously submitted by Andrew Pinski <[email protected]>, and rework
it for use on both arm and arm64, retaining the optimizations
for each architecture. Instead of landing it in arm64, land the
result in lib/vdso and unify both implementations to simplify
future maintenance.
Add a case for CLOCK_BOOTTIME, as it is popular for measuring
relative time on systems expected to suspend() or hibernate().
Android uses CLOCK_BOOTTIME for all relative time measurements
and timeouts; switching to the vdso reduces CPU utilization and
improves accuracy. There is also a desire by some partners to switch
all logging over to CLOCK_BOOTTIME, in which case this operation alone
would account for nearly a percent of CPU load.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- rebased and changed from 3/3 to 10/10, fortified commit message.
v3:
- move arch/arm/vdso/vgettimeofday.c to lib/vdso/vgettimeofday.c.
v4:
- update commit message to reflect specific, and overall reasoning
of patch series.
- drop forced inline operations.
- switch typeof() with __kernel_time_t.
v5:
- added comment about open coded timespec_add_ns() for clarity.
v6:
- fix issue with __iter_div_u64_rem scaling by splitting sec & nsec
---
arch/arm/include/asm/vdso_datapage.h | 2 +
arch/arm/kernel/vdso.c | 4 ++
arch/arm64/include/asm/vdso_datapage.h | 2 +
arch/arm64/kernel/vdso.c | 4 ++
lib/vdso/vgettimeofday.c | 56 ++++++++++++++++++++++++++
5 files changed, 68 insertions(+)
diff --git a/arch/arm/include/asm/vdso_datapage.h b/arch/arm/include/asm/vdso_datapage.h
index 1c6e6a5d5d9d..0120852b6b12 100644
--- a/arch/arm/include/asm/vdso_datapage.h
+++ b/arch/arm/include/asm/vdso_datapage.h
@@ -64,6 +64,8 @@ struct vdso_data {
u32 tz_minuteswest; /* timezone info for gettimeofday(2) */
u32 tz_dsttime;
+ u32 btm_sec; /* monotonic to boot time */
+ u32 btm_nsec;
/* Raw clocksource multipler */
u32 cs_raw_mult;
/* Raw time */
diff --git a/arch/arm/kernel/vdso.c b/arch/arm/kernel/vdso.c
index c299967df63c..51d8dcbd9952 100644
--- a/arch/arm/kernel/vdso.c
+++ b/arch/arm/kernel/vdso.c
@@ -337,6 +337,8 @@ void update_vsyscall(struct timekeeper *tk)
vdso_data->wtm_clock_nsec = wtm->tv_nsec;
if (!vdso_data->use_syscall) {
+ struct timespec btm = ktime_to_timespec(tk->offs_boot);
+
vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last;
vdso_data->raw_time_sec = tk->raw_sec;
vdso_data->raw_time_nsec = tk->tkr_raw.xtime_nsec;
@@ -347,6 +349,8 @@ void update_vsyscall(struct timekeeper *tk)
/* tkr_mono.shift == tkr_raw.shift */
vdso_data->cs_shift = tk->tkr_mono.shift;
vdso_data->cs_mask = tk->tkr_mono.mask;
+ vdso_data->btm_sec = btm.tv_sec;
+ vdso_data->btm_nsec = btm.tv_nsec;
}
vdso_write_end(vdso_data);
diff --git a/arch/arm64/include/asm/vdso_datapage.h b/arch/arm64/include/asm/vdso_datapage.h
index 95f4a7abab80..348b9be9efe7 100644
--- a/arch/arm64/include/asm/vdso_datapage.h
+++ b/arch/arm64/include/asm/vdso_datapage.h
@@ -45,6 +45,8 @@ struct vdso_data {
__u64 xtime_coarse_nsec;
__u64 wtm_clock_sec; /* Wall to monotonic time */
vdso_wtm_clock_nsec_t wtm_clock_nsec;
+ __u32 btm_sec; /* monotonic to boot time */
+ __u32 btm_nsec;
__u32 tb_seq_count; /* Timebase sequence counter */
/* cs_* members must be adjacent and in this order (ldp accesses) */
__u32 cs_mono_mult; /* NTP-adjusted clocksource multiplier */
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 59f150c25889..8dd2ad220a0f 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -233,6 +233,8 @@ void update_vsyscall(struct timekeeper *tk)
vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec;
if (!use_syscall) {
+ struct timespec btm = ktime_to_timespec(tk->offs_boot);
+
/* tkr_mono.cycle_last == tkr_raw.cycle_last */
vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last;
vdso_data->raw_time_sec = tk->raw_sec;
@@ -243,6 +245,8 @@ void update_vsyscall(struct timekeeper *tk)
vdso_data->cs_raw_mult = tk->tkr_raw.mult;
/* tkr_mono.shift == tkr_raw.shift */
vdso_data->cs_shift = tk->tkr_mono.shift;
+ vdso_data->btm_sec = btm.tv_sec;
+ vdso_data->btm_nsec = btm.tv_nsec;
}
smp_wmb();
diff --git a/lib/vdso/vgettimeofday.c b/lib/vdso/vgettimeofday.c
index 33c5917fe9f8..4c3af7bc6499 100644
--- a/lib/vdso/vgettimeofday.c
+++ b/lib/vdso/vgettimeofday.c
@@ -247,6 +247,51 @@ static notrace int do_monotonic_raw(const struct vdso_data *vd,
return 0;
}
+static notrace int do_boottime(const struct vdso_data *vd, struct timespec *ts)
+{
+ u32 seq, mult, shift;
+ u64 nsec, cycle_last;
+ vdso_wtm_clock_nsec_t wtm_nsec;
+#ifdef ARCH_CLOCK_FIXED_MASK
+ static const u64 mask = ARCH_CLOCK_FIXED_MASK;
+#else
+ u64 mask;
+#endif
+ __kernel_time_t sec;
+
+ do {
+ seq = vdso_read_begin(vd);
+
+ if (vd->use_syscall)
+ return -1;
+
+ cycle_last = vd->cs_cycle_last;
+
+ mult = vd->cs_mono_mult;
+ shift = vd->cs_shift;
+#ifndef ARCH_CLOCK_FIXED_MASK
+ mask = vd->cs_mask;
+#endif
+
+ sec = vd->xtime_clock_sec;
+ nsec = vd->xtime_clock_snsec;
+
+ sec += vd->wtm_clock_sec + vd->btm_sec;
+ wtm_nsec = vd->wtm_clock_nsec + vd->btm_nsec;
+
+ } while (unlikely(vdso_read_retry(vd, seq)));
+
+ nsec += get_clock_shifted_nsec(cycle_last, mult, mask);
+ nsec >>= shift;
+ nsec += wtm_nsec;
+
+ /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
+ ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
+ ts->tv_nsec = nsec;
+
+ return 0;
+}
+
#else /* ARCH_PROVIDES_TIMER */
static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
@@ -265,6 +310,12 @@ static notrace int do_monotonic_raw(const struct vdso_data *vd,
return -1;
}
+static notrace int do_boottime(const struct vdso_data *vd,
+ struct timespec *ts)
+{
+ return -1;
+}
+
#endif /* ARCH_PROVIDES_TIMER */
notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
@@ -290,6 +341,10 @@ notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
if (do_monotonic_raw(vd, ts))
goto fallback;
break;
+ case CLOCK_BOOTTIME:
+ if (do_boottime(vd, ts))
+ goto fallback;
+ break;
default:
goto fallback;
}
@@ -326,6 +381,7 @@ int __vdso_clock_getres(clockid_t clock, struct timespec *res)
long nsec;
if (clock == CLOCK_REALTIME ||
+ clock == CLOCK_BOOTTIME ||
clock == CLOCK_MONOTONIC ||
clock == CLOCK_MONOTONIC_RAW)
nsec = MONOTONIC_RES_NSEC;
--
2.19.0.605.g01d371f741-goog
From: Kevin Brodsky <[email protected]>
This will be needed to provide unwinding information in compat
sigreturn trampolines, part of the future compat vDSO. There is no
obvious header the compat_sig* struct's should be moved to, so let's
put them in signal32.h.
Also fix minor style issues reported by checkpatch.
Signed-off-by: Kevin Brodsky <[email protected]>
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Dave Martin <[email protected]>
Cc: Eric W. Biederman <[email protected]>
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
NB:
Basically unchanged as part of a vDSO32 effort through 4 revisions,
Resubmitted as a standalone change for quicker approval.
---
arch/arm64/include/asm/signal32.h | 46 +++++++++++++++++++++++++++++++
arch/arm64/kernel/asm-offsets.c | 13 +++++++++
arch/arm64/kernel/signal32.c | 46 -------------------------------
3 files changed, 59 insertions(+), 46 deletions(-)
diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
index 58e288aaf0ba..bcd0e139ee4a 100644
--- a/arch/arm64/include/asm/signal32.h
+++ b/arch/arm64/include/asm/signal32.h
@@ -20,6 +20,52 @@
#ifdef CONFIG_COMPAT
#include <linux/compat.h>
+struct compat_sigcontext {
+ /* We always set these two fields to 0 */
+ compat_ulong_t trap_no;
+ compat_ulong_t error_code;
+
+ compat_ulong_t oldmask;
+ compat_ulong_t arm_r0;
+ compat_ulong_t arm_r1;
+ compat_ulong_t arm_r2;
+ compat_ulong_t arm_r3;
+ compat_ulong_t arm_r4;
+ compat_ulong_t arm_r5;
+ compat_ulong_t arm_r6;
+ compat_ulong_t arm_r7;
+ compat_ulong_t arm_r8;
+ compat_ulong_t arm_r9;
+ compat_ulong_t arm_r10;
+ compat_ulong_t arm_fp;
+ compat_ulong_t arm_ip;
+ compat_ulong_t arm_sp;
+ compat_ulong_t arm_lr;
+ compat_ulong_t arm_pc;
+ compat_ulong_t arm_cpsr;
+ compat_ulong_t fault_address;
+};
+
+struct compat_ucontext {
+ compat_ulong_t uc_flags;
+ compat_uptr_t uc_link;
+ compat_stack_t uc_stack;
+ struct compat_sigcontext uc_mcontext;
+ compat_sigset_t uc_sigmask;
+ int __unused[32 - (sizeof(compat_sigset_t) / sizeof(int))];
+ compat_ulong_t uc_regspace[128] __aligned(8);
+};
+
+struct compat_sigframe {
+ struct compat_ucontext uc;
+ compat_ulong_t retcode[2];
+};
+
+struct compat_rt_sigframe {
+ struct compat_siginfo info;
+ struct compat_sigframe sig;
+};
+
int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
struct pt_regs *regs);
int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 8938a4223690..a79507c5d845 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -29,6 +29,7 @@
#include <asm/fixmap.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
+#include <asm/signal32.h>
#include <asm/smp_plat.h>
#include <asm/suspend.h>
#include <asm/vdso_datapage.h>
@@ -81,6 +82,18 @@ int main(void)
DEFINE(S_STACKFRAME, offsetof(struct pt_regs, stackframe));
DEFINE(S_FRAME_SIZE, sizeof(struct pt_regs));
BLANK();
+#ifdef CONFIG_COMPAT
+ DEFINE(COMPAT_SIGFRAME_REGS_OFFSET,
+ offsetof(struct compat_sigframe, uc) +
+ offsetof(struct compat_ucontext, uc_mcontext) +
+ offsetof(struct compat_sigcontext, arm_r0));
+ DEFINE(COMPAT_RT_SIGFRAME_REGS_OFFSET,
+ offsetof(struct compat_rt_sigframe, sig) +
+ offsetof(struct compat_sigframe, uc) +
+ offsetof(struct compat_ucontext, uc_mcontext) +
+ offsetof(struct compat_sigcontext, arm_r0));
+ BLANK();
+#endif
DEFINE(MM_CONTEXT_ID, offsetof(struct mm_struct, context.id.counter));
BLANK();
DEFINE(VMA_VM_MM, offsetof(struct vm_area_struct, vm_mm));
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index 52f0d44417c8..6b421666b5b8 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -30,42 +30,6 @@
#include <linux/uaccess.h>
#include <asm/unistd.h>
-struct compat_sigcontext {
- /* We always set these two fields to 0 */
- compat_ulong_t trap_no;
- compat_ulong_t error_code;
-
- compat_ulong_t oldmask;
- compat_ulong_t arm_r0;
- compat_ulong_t arm_r1;
- compat_ulong_t arm_r2;
- compat_ulong_t arm_r3;
- compat_ulong_t arm_r4;
- compat_ulong_t arm_r5;
- compat_ulong_t arm_r6;
- compat_ulong_t arm_r7;
- compat_ulong_t arm_r8;
- compat_ulong_t arm_r9;
- compat_ulong_t arm_r10;
- compat_ulong_t arm_fp;
- compat_ulong_t arm_ip;
- compat_ulong_t arm_sp;
- compat_ulong_t arm_lr;
- compat_ulong_t arm_pc;
- compat_ulong_t arm_cpsr;
- compat_ulong_t fault_address;
-};
-
-struct compat_ucontext {
- compat_ulong_t uc_flags;
- compat_uptr_t uc_link;
- compat_stack_t uc_stack;
- struct compat_sigcontext uc_mcontext;
- compat_sigset_t uc_sigmask;
- int __unused[32 - (sizeof (compat_sigset_t) / sizeof (int))];
- compat_ulong_t uc_regspace[128] __attribute__((__aligned__(8)));
-};
-
struct compat_vfp_sigframe {
compat_ulong_t magic;
compat_ulong_t size;
@@ -92,16 +56,6 @@ struct compat_aux_sigframe {
unsigned long end_magic;
} __attribute__((__aligned__(8)));
-struct compat_sigframe {
- struct compat_ucontext uc;
- compat_ulong_t retcode[2];
-};
-
-struct compat_rt_sigframe {
- struct compat_siginfo info;
- struct compat_sigframe sig;
-};
-
#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
static inline int put_sigset_t(compat_sigset_t __user *uset, sigset_t *set)
--
2.19.0.605.g01d371f741-goog
From: Kevin Brodsky <[email protected]>
If the compat vDSO is enabled, it replaces the sigreturn page.
Therefore, we use the sigreturn trampolines the vDSO provides instead.
Signed-off-by: Kevin Brodsky <[email protected]>
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/vdso.h | 3 +++
arch/arm64/kernel/signal32.c | 15 +++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/arch/arm64/include/asm/vdso.h b/arch/arm64/include/asm/vdso.h
index 839ce0031bd5..f2a952338f1e 100644
--- a/arch/arm64/include/asm/vdso.h
+++ b/arch/arm64/include/asm/vdso.h
@@ -28,6 +28,9 @@
#ifndef __ASSEMBLY__
#include <generated/vdso-offsets.h>
+#ifdef CONFIG_VDSO32
+#include <generated/vdso32-offsets.h>
+#endif
#define VDSO_SYMBOL(base, name) \
({ \
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index 6b421666b5b8..c3d9a74e3945 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -29,6 +29,7 @@
#include <asm/traps.h>
#include <linux/uaccess.h>
#include <asm/unistd.h>
+#include <asm/vdso.h>
struct compat_vfp_sigframe {
compat_ulong_t magic;
@@ -352,6 +353,19 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
retcode = ptr_to_compat(ka->sa.sa_restorer);
} else {
/* Set up sigreturn pointer */
+#ifdef CONFIG_VDSO32
+ void *vdso_base = current->mm->context.vdso;
+ void *trampoline =
+ (ka->sa.sa_flags & SA_SIGINFO
+ ? (thumb
+ ? VDSO_SYMBOL(vdso_base, compat_rt_sigreturn_thumb)
+ : VDSO_SYMBOL(vdso_base, compat_rt_sigreturn_arm))
+ : (thumb
+ ? VDSO_SYMBOL(vdso_base, compat_sigreturn_thumb)
+ : VDSO_SYMBOL(vdso_base, compat_sigreturn_arm)));
+
+ retcode = ptr_to_compat(trampoline) + thumb;
+#else
void *sigreturn_base = current->mm->context.vdso;
unsigned int idx = thumb << 1;
@@ -359,6 +373,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
idx += 3;
retcode = ptr_to_compat(sigreturn_base) + (idx << 2) + thumb;
+#endif
}
regs->regs[0] = usig;
--
2.19.0.605.g01d371f741-goog
Recode the arm64 vDSO from assembler to C, building on work
previously submitted by Andrew Pinski <[email protected]>, and rework
it for use on both arm and arm64, retaining the optimizations made
for each architecture. But instead of landing it in arm64, land the
result in lib/vdso and unify both implementations to simplify
future maintenance.
If ARCH_PROVIDES_TIMER is not defined, do not expose gettimeofday;
libc will then fall back directly to the syscall. Also ifdef out the
unsupported clock_gettime switch cases, stubs and other unused components.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v3:
- do not expose gettimeofday if arch does not support user space timer.
v4:
- update commit message to reflect overall reasoning of patch series.
v5:
- rebase
---
lib/vdso/vgettimeofday.c | 52 ++++++++++++++++------------------------
1 file changed, 20 insertions(+), 32 deletions(-)
diff --git a/lib/vdso/vgettimeofday.c b/lib/vdso/vgettimeofday.c
index 4c3af7bc6499..54e519c99c4b 100644
--- a/lib/vdso/vgettimeofday.c
+++ b/lib/vdso/vgettimeofday.c
@@ -30,7 +30,9 @@
#include "compiler.h"
#include "datapage.h"
+#ifdef ARCH_PROVIDES_TIMER
DEFINE_FALLBACK(gettimeofday, struct timeval *, tv, struct timezone *, tz)
+#endif
DEFINE_FALLBACK(clock_gettime, clockid_t, clock, struct timespec *, ts)
DEFINE_FALLBACK(clock_getres, clockid_t, clock, struct timespec *, ts)
@@ -292,30 +294,6 @@ static notrace int do_boottime(const struct vdso_data *vd, struct timespec *ts)
return 0;
}
-#else /* ARCH_PROVIDES_TIMER */
-
-static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
-{
- return -1;
-}
-
-static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
-{
- return -1;
-}
-
-static notrace int do_monotonic_raw(const struct vdso_data *vd,
- struct timespec *ts)
-{
- return -1;
-}
-
-static notrace int do_boottime(const struct vdso_data *vd,
- struct timespec *ts)
-{
- return -1;
-}
-
#endif /* ARCH_PROVIDES_TIMER */
notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
@@ -329,6 +307,7 @@ notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
case CLOCK_MONOTONIC_COARSE:
do_monotonic_coarse(vd, ts);
break;
+#ifdef ARCH_PROVIDES_TIMER
case CLOCK_REALTIME:
if (do_realtime(vd, ts))
goto fallback;
@@ -345,6 +324,7 @@ notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
if (do_boottime(vd, ts))
goto fallback;
break;
+#endif
default:
goto fallback;
}
@@ -354,6 +334,7 @@ notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
return clock_gettime_fallback(clock, ts);
}
+#ifdef ARCH_PROVIDES_TIMER
notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
{
const struct vdso_data *vd = __get_datapage();
@@ -375,21 +356,28 @@ notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
return 0;
}
+#endif
int __vdso_clock_getres(clockid_t clock, struct timespec *res)
{
long nsec;
- if (clock == CLOCK_REALTIME ||
- clock == CLOCK_BOOTTIME ||
- clock == CLOCK_MONOTONIC ||
- clock == CLOCK_MONOTONIC_RAW)
- nsec = MONOTONIC_RES_NSEC;
- else if (clock == CLOCK_REALTIME_COARSE ||
- clock == CLOCK_MONOTONIC_COARSE)
+ switch (clock) {
+ case CLOCK_REALTIME_COARSE:
+ case CLOCK_MONOTONIC_COARSE:
nsec = LOW_RES_NSEC;
- else
+ break;
+#ifdef ARCH_PROVIDES_TIMER
+ case CLOCK_REALTIME:
+ case CLOCK_MONOTONIC:
+ case CLOCK_MONOTONIC_RAW:
+ case CLOCK_BOOTTIME:
+ nsec = MONOTONIC_RES_NSEC;
+ break;
+#endif
+ default:
return clock_getres_fallback(clock, res);
+ }
if (likely(res != NULL)) {
res->tv_sec = 0;
--
2.19.0.605.g01d371f741-goog
From: Kevin Brodsky <[email protected]>
Make it possible to disable the kuser helpers by adding a KUSER_HELPERS
config option (enabled by default). When disabled, all
kuser-helper-related code is removed from the kernel and no mapping is done
at the fixed high address (0xffff0000); any attempt to use a kuser
helper from a 32-bit process will result in a segfault.
Signed-off-by: Kevin Brodsky <[email protected]>
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- split off assembler changes to a new previous patch in series to reduce churn
- modify slightly the feature documentation to reduce its reach
- modify slightly the feature documentation to rationalize the yes default.
- There are more ifdefs as a result of the rebase.
v3:
- rebase
---
arch/arm64/Kconfig | 30 ++++++++++++++++++++++++++++++
arch/arm64/kernel/Makefile | 4 ++--
arch/arm64/kernel/vdso.c | 10 ++++++++++
3 files changed, 42 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1b1a0e95c751..6e61f01108cb 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1301,6 +1301,36 @@ config COMPAT
If you want to execute 32-bit userspace applications, say Y.
+config KUSER_HELPERS
+ bool "Enable the kuser helpers page in 32-bit processes"
+ depends on COMPAT
+ default y
+ help
+ Warning: disabling this option may break 32-bit applications.
+
+ Provide kuser helpers in a special purpose fixed-address page. The
+ kernel provides helper code to userspace in read-only form at a fixed
+ location to allow userspace to be independent of the CPU type fitted
+ to the system. This permits 32-bit binaries to be run on ARMv6 through
+ to ARMv8 without modification.
+
+ See Documentation/arm/kernel_user_helpers.txt for details.
+
+ However, the fixed-address nature of these helpers can be used by ROP
+ (return-oriented programming) authors when creating exploits.
+
+ If all of the 32-bit binaries and libraries that run on your platform
+ are built specifically for your platform, and make no use of these
+ helpers, then you can turn this option off to hinder such exploits.
+ However, in that case, if a binary or library relying on those helpers
+ is run, it will receive a SIGSEGV signal, which will terminate the
+ program. Typically, binaries compiled for ARMv7 or later do not use
+ the kuser helpers.
+
+ Say N here only if you are absolutely certain that you do not need
+ these helpers; otherwise, the safe option is to say Y (the default
+ for now).
+
config SYSVIPC_COMPAT
def_bool y
depends on COMPAT && SYSVIPC
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index b89a79424912..1c2bd2210f58 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -27,10 +27,10 @@ OBJCOPYFLAGS := --prefix-symbols=__efistub_
$(obj)/%.stub.o: $(obj)/%.o FORCE
$(call if_changed,objcopy)
-arm64-obj-$(CONFIG_COMPAT) += sys32.o signal32.o \
+arm64-obj-$(CONFIG_COMPAT) += sys32.o signal32.o \
sys_compat.o
arm64-obj-$(CONFIG_COMPAT) += sigreturn32.o
-arm64-obj-$(CONFIG_COMPAT) += kuser32.o
+arm64-obj-$(CONFIG_KUSER_HELPERS) += kuser32.o
arm64-obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o
arm64-obj-$(CONFIG_MODULES) += arm64ksyms.o module.o
arm64-obj-$(CONFIG_ARM64_MODULE_PLTS) += module-plts.o
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 5398f6454ce1..76a94bed4bd5 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -62,18 +62,22 @@ static const struct vm_special_mapping compat_vdso_spec[] = {
.name = "[sigpage]",
.pages = &vectors_page[0],
},
+#ifdef CONFIG_KUSER_HELPERS
{
.name = "[kuserhelpers]",
.pages = &vectors_page[1],
},
+#endif
};
static struct page *vectors_page[ARRAY_SIZE(compat_vdso_spec)] __ro_after_init;
static int __init alloc_vectors_page(void)
{
+#ifdef CONFIG_KUSER_HELPERS
extern char __kuser_helper_start[], __kuser_helper_end[];
size_t kuser_sz = __kuser_helper_end - __kuser_helper_start;
unsigned long kuser_vpage;
+#endif
extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
size_t sigret_sz =
@@ -84,22 +88,26 @@ static int __init alloc_vectors_page(void)
if (!sigret_vpage)
return -ENOMEM;
+#ifdef CONFIG_KUSER_HELPERS
kuser_vpage = get_zeroed_page(GFP_ATOMIC);
if (!kuser_vpage) {
free_page(sigret_vpage);
return -ENOMEM;
}
+#endif
/* sigreturn code */
memcpy((void *)sigret_vpage, __aarch32_sigret_code_start, sigret_sz);
flush_icache_range(sigret_vpage, sigret_vpage + PAGE_SIZE);
vectors_page[0] = virt_to_page(sigret_vpage);
+#ifdef CONFIG_KUSER_HELPERS
/* kuser helpers */
memcpy((void *)kuser_vpage + 0x1000 - kuser_sz, __kuser_helper_start,
kuser_sz);
flush_icache_range(kuser_vpage, kuser_vpage + PAGE_SIZE);
vectors_page[1] = virt_to_page(kuser_vpage);
+#endif
return 0;
}
@@ -128,11 +136,13 @@ int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
current->mm->context.vdso = (void *)addr;
+#ifdef CONFIG_KUSER_HELPERS
/* Map the kuser helpers at the ABI-defined high address. */
ret = _install_special_mapping(mm, AARCH32_KUSER_HELPERS_BASE,
PAGE_SIZE,
VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
&compat_vdso_spec[1]);
+#endif
out:
up_write(&mm->mmap_sem);
--
2.19.0.605.g01d371f741-goog
From: Kevin Brodsky <[email protected]>
If the compat vDSO is enabled, we need to set AT_SYSINFO_EHDR in the
auxiliary vector of compat processes to the address of the vDSO code
page, so that the dynamic linker can find it (just like the regular vDSO).
Note that we cast context.vdso to Elf64_Off, instead of elf_addr_t,
because elf_addr_t is Elf32_Off in compat_binfmt_elf.c, and casting
context.vdso to u32 would trigger a pointer narrowing warning.
Signed-off-by: Kevin Brodsky <[email protected]>
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/elf.h | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 433b9554c6a1..bf4672a0491b 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -147,10 +147,10 @@ typedef struct user_fpsimd_state elf_fpregset_t;
})
/* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
-#define ARCH_DLINFO \
+#define _SET_AUX_ENT_VDSO \
do { \
NEW_AUX_ENT(AT_SYSINFO_EHDR, \
- (elf_addr_t)current->mm->context.vdso); \
+ (Elf64_Off)current->mm->context.vdso); \
\
/* \
* Should always be nonzero unless there's a kernel bug. \
@@ -162,6 +162,7 @@ do { \
else \
NEW_AUX_ENT(AT_IGNORE, 0); \
} while (0)
+#define ARCH_DLINFO _SET_AUX_ENT_VDSO
#define ARCH_HAS_SETUP_ADDITIONAL_PAGES
struct linux_binprm;
@@ -209,7 +210,11 @@ typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG];
({ \
set_thread_flag(TIF_32BIT); \
})
+#ifdef CONFIG_VDSO32
+#define COMPAT_ARCH_DLINFO _SET_AUX_ENT_VDSO
+#else
#define COMPAT_ARCH_DLINFO
+#endif
extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
int uses_interp);
#define compat_arch_setup_additional_pages \
--
2.19.0.605.g01d371f741-goog
From: Kevin Brodsky <[email protected]>
If the compat vDSO is enabled, install it in compat processes. In this
case, the compat vDSO replaces the sigreturn page (it provides its own
sigreturn trampolines).
Signed-off-by: Kevin Brodsky <[email protected]>
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
---
arch/arm64/kernel/vdso.c | 55 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 8529e85a521f..9fb1e0d380ab 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -58,6 +58,7 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
/*
* Create and map the vectors page for AArch32 tasks.
*/
+#if !defined(CONFIG_VDSO32) || defined(CONFIG_KUSER_HELPERS)
static struct page *vectors_page[] __ro_after_init;
static const struct vm_special_mapping compat_vdso_spec[] = {
{
@@ -73,6 +74,7 @@ static const struct vm_special_mapping compat_vdso_spec[] = {
#endif
};
static struct page *vectors_page[ARRAY_SIZE(compat_vdso_spec)] __ro_after_init;
+#endif
static int __init alloc_vectors_page(void)
{
@@ -82,6 +84,7 @@ static int __init alloc_vectors_page(void)
unsigned long kuser_vpage;
#endif
+#ifndef CONFIG_VDSO32
extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
size_t sigret_sz =
__aarch32_sigret_code_end - __aarch32_sigret_code_start;
@@ -90,19 +93,24 @@ static int __init alloc_vectors_page(void)
sigret_vpage = get_zeroed_page(GFP_ATOMIC);
if (!sigret_vpage)
return -ENOMEM;
+#endif
#ifdef CONFIG_KUSER_HELPERS
kuser_vpage = get_zeroed_page(GFP_ATOMIC);
if (!kuser_vpage) {
+#ifndef CONFIG_VDSO32
free_page(sigret_vpage);
+#endif
return -ENOMEM;
}
#endif
+#ifndef CONFIG_VDSO32
/* sigreturn code */
memcpy((void *)sigret_vpage, __aarch32_sigret_code_start, sigret_sz);
flush_icache_range(sigret_vpage, sigret_vpage + PAGE_SIZE);
vectors_page[0] = virt_to_page(sigret_vpage);
+#endif
#ifdef CONFIG_KUSER_HELPERS
/* kuser helpers */
@@ -116,6 +124,7 @@ static int __init alloc_vectors_page(void)
}
arch_initcall(alloc_vectors_page);
+#ifndef CONFIG_VDSO32
int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
{
struct mm_struct *mm = current->mm;
@@ -151,6 +160,7 @@ int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
return PTR_ERR_OR_ZERO(ret);
}
+#endif /* !CONFIG_VDSO32 */
#endif /* CONFIG_COMPAT */
static int vdso_mremap(const struct vm_special_mapping *sm,
@@ -221,6 +231,23 @@ static int __init vdso_mappings_init(const char *name,
return 0;
}
+#ifdef CONFIG_COMPAT
+#ifdef CONFIG_VDSO32
+
+static struct vdso_mappings vdso32_mappings __ro_after_init;
+
+static int __init vdso32_init(void)
+{
+ extern char vdso32_start[], vdso32_end[];
+
+ return vdso_mappings_init("vdso32", vdso32_start, vdso32_end,
+ &vdso32_mappings);
+}
+arch_initcall(vdso32_init);
+
+#endif /* CONFIG_VDSO32 */
+#endif /* CONFIG_COMPAT */
+
static struct vdso_mappings vdso_mappings __ro_after_init;
static int __init vdso_init(void)
@@ -263,6 +290,34 @@ static int vdso_setup(struct mm_struct *mm,
return PTR_ERR_OR_ZERO(ret);
}
+#ifdef CONFIG_COMPAT
+#ifdef CONFIG_VDSO32
+int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
+{
+ struct mm_struct *mm = current->mm;
+ void *ret;
+
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
+
+ ret = ERR_PTR(vdso_setup(mm, &vdso32_mappings));
+#ifdef CONFIG_KUSER_HELPERS
+ if (!IS_ERR(ret))
+ /* Map the kuser helpers at the ABI-defined high address. */
+ ret = _install_special_mapping(mm, AARCH32_KUSER_HELPERS_BASE,
+ PAGE_SIZE,
+ VM_READ|VM_EXEC|
+ VM_MAYREAD|VM_MAYEXEC,
+ &compat_vdso_spec[1]);
+#endif
+
+ up_write(&mm->mmap_sem);
+
+ return PTR_ERR_OR_ZERO(ret);
+}
+#endif /* CONFIG_VDSO32 */
+#endif /* CONFIG_COMPAT */
+
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
{
struct mm_struct *mm = current->mm;
--
2.19.0.605.g01d371f741-goog
From: Kevin Brodsky <[email protected]>
Expose the new compat vDSO via the COMPAT_VDSO config option.
The option is not enabled in defconfig because we really need a 32-bit
compiler this time, and we rely on the user to provide it themselves
by setting CROSS_COMPILE_ARM32. Therefore enabling the option by
default would make little sense, since the user must explicitly set a
non-standard environment variable anyway.
CONFIG_COMPAT_VDSO is not directly used in the code, because we want
to ignore it (build as if it were not set) if the user didn't set
CROSS_COMPILE_ARM32. If the variable has been set to a valid prefix,
CONFIG_VDSO32 will be set; this is the option that the code and
Makefiles test.
For more flexibility, like CROSS_COMPILE, CROSS_COMPILE_ARM32 can also
be set via CONFIG_CROSS_COMPILE_ARM32 (the environment variable
overrides the config option, as expected).
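A hypothetical invocation tying the pieces together (the toolchain prefixes below are examples; substitute your own):

```shell
# Enable the compat vDSO and point the build at a 32-bit toolchain.
# The environment variable takes precedence over the config option.
make ARCH=arm64 \
     CROSS_COMPILE=aarch64-linux-gnu- \
     CROSS_COMPILE_ARM32=arm-linux-gnueabihf- \
     Image

# Alternatively, persist the prefix in the config:
#   CONFIG_COMPAT_VDSO=y
#   CONFIG_CROSS_COMPILE_ARM32="arm-linux-gnueabihf-"
```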
Signed-off-by: Kevin Brodsky <[email protected]>
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2: rebase
---
arch/arm64/Kconfig | 24 ++++++++++++++++++++++++
arch/arm64/Makefile | 36 ++++++++++++++++++++++++++++++++++--
arch/arm64/kernel/Makefile | 3 +++
3 files changed, 61 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6e61f01108cb..4ed2f93a9607 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1335,6 +1335,30 @@ config SYSVIPC_COMPAT
def_bool y
depends on COMPAT && SYSVIPC
+config COMPAT_VDSO
+ bool "32-bit vDSO"
+ depends on COMPAT
+ default n
+ help
+ Warning: a 32-bit toolchain is necessary to build the vDSO. You
+ must explicitly define which toolchain should be used by setting
+ CROSS_COMPILE_ARM32 to the prefix of the 32-bit toolchain (same format
+ as CROSS_COMPILE). If CROSS_COMPILE_ARM32 is empty, a warning will be
+ printed and the kernel will be built as if COMPAT_VDSO had not been
+ set. If CROSS_COMPILE_ARM32 is set to an invalid prefix, compilation
+ will be aborted.
+
+ Provide a vDSO to 32-bit processes. It includes the symbols provided
+ by the vDSO from the 32-bit kernel, so that a 32-bit libc can use
+ the compat vDSO without modification. It also provides sigreturn
+ trampolines, replacing the sigreturn page.
+
+config CROSS_COMPILE_ARM32
+ string "32-bit toolchain prefix"
+ help
+ Same as setting CROSS_COMPILE_ARM32 in the environment, but saved for
+ future builds. The environment variable overrides this config option.
+
menu "Power management options"
source "kernel/power/Kconfig"
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 106039d25e2f..ed6c3c6fb8f8 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -49,9 +49,39 @@ $(warning Detected assembler with broken .inst; disassembly will be unreliable)
endif
endif
-KBUILD_CFLAGS += -mgeneral-regs-only $(lseinstr) $(brokengasinst)
+ifeq ($(CONFIG_COMPAT_VDSO), y)
+ CROSS_COMPILE_ARM32 ?= $(CONFIG_CROSS_COMPILE_ARM32:"%"=%)
+
+ # Check that the user has provided a valid prefix for the 32-bit toolchain.
+ # To prevent selecting the system $(cc-name) by default, the prefix is not
+ # allowed to be empty, unlike CROSS_COMPILE. In the unlikely event that the
+ # system $(cc-name) is actually the 32-bit ARM compiler to be used, the
+ # variable can be set to the dirname (e.g. CROSS_COMPILE_ARM32=/usr/bin/).
+ # Note: this Makefile is read both before and after regenerating the config
+ # (if needed). Any warning appearing before the config has been regenerated
+ # should be ignored. If the error is triggered and you set
+ # CONFIG_CROSS_COMPILE_ARM32, set CROSS_COMPILE_ARM32 to an appropriate value
+ # when invoking make and fix CONFIG_CROSS_COMPILE_ARM32.
+ ifeq ($(CROSS_COMPILE_ARM32),)
+ $(error CROSS_COMPILE_ARM32 not defined or empty, the compat vDSO will not be built)
+ else ifeq ($(cc-name),clang)
+ export CLANG_TRIPLE_ARM32 ?= $(CROSS_COMPILE_ARM32)
+ export CLANG_TARGET_ARM32 := --target=$(notdir $(CLANG_TRIPLE_ARM32:%-=%))
+ export CONFIG_VDSO32 := y
+ vdso32 := -DCONFIG_VDSO32=1
+ else ifeq ($(shell which $(CROSS_COMPILE_ARM32)$(cc-name) 2> /dev/null),)
+ $(error $(CROSS_COMPILE_ARM32)$(cc-name) not found, check CROSS_COMPILE_ARM32)
+ else
+ export CROSS_COMPILE_ARM32
+ export CONFIG_VDSO32 := y
+ vdso32 := -DCONFIG_VDSO32=1
+ endif
+endif
+
+KBUILD_CFLAGS += -mgeneral-regs-only $(lseinstr) $(brokengasinst) $(vdso32)
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
-KBUILD_AFLAGS += $(lseinstr) $(brokengasinst)
+KBUILD_CFLAGS += $(call cc-option, -mpc-relative-literal-loads)
+KBUILD_AFLAGS += $(lseinstr) $(brokengasinst) $(vdso32)
KBUILD_CFLAGS += $(call cc-option,-mabi=lp64)
KBUILD_AFLAGS += $(call cc-option,-mabi=lp64)
@@ -156,6 +186,8 @@ archclean:
prepare: vdso_prepare
vdso_prepare: prepare0
$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso include/generated/vdso-offsets.h
+ $(if $(CONFIG_VDSO32),$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 \
+ include/generated/vdso32-offsets.h)
define archhelp
echo '* Image.gz - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)'
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 1c2bd2210f58..6eca683fc5a8 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -29,7 +29,9 @@ $(obj)/%.stub.o: $(obj)/%.o FORCE
arm64-obj-$(CONFIG_COMPAT) += sys32.o signal32.o \
sys_compat.o
+ifneq ($(CONFIG_VDSO32),y)
arm64-obj-$(CONFIG_COMPAT) += sigreturn32.o
+endif
arm64-obj-$(CONFIG_KUSER_HELPERS) += kuser32.o
arm64-obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o
arm64-obj-$(CONFIG_MODULES) += arm64ksyms.o module.o
@@ -61,6 +63,7 @@ arm64-obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
arm64-obj-$(CONFIG_ARM64_SSBD) += ssbd.o
obj-y += $(arm64-obj-y) vdso/ probes/
+obj-$(CONFIG_VDSO32) += vdso32/
obj-m += $(arm64-obj-m)
head-y := head.o
extra-y += $(head-y) vmlinux.lds
--
2.19.0.605.g01d371f741-goog
From: Kevin Brodsky <[email protected]>
Move the logic for setting up mappings and pages for the vDSO into
static functions. This makes the vDSO setup code more consistent with
the compat side and will allow it to be reused for the future compat vDSO.
Signed-off-by: Kevin Brodsky <[email protected]>
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
---
arch/arm64/kernel/vdso.c | 118 +++++++++++++++++++++++----------------
1 file changed, 70 insertions(+), 48 deletions(-)
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 76a94bed4bd5..8529e85a521f 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -39,8 +39,11 @@
#include <asm/vdso.h>
#include <asm/vdso_datapage.h>
-extern char vdso_start[], vdso_end[];
-static unsigned long vdso_pages __ro_after_init;
+struct vdso_mappings {
+ unsigned long num_code_pages;
+ struct vm_special_mapping data_mapping;
+ struct vm_special_mapping code_mapping;
+};
/*
* The vDSO data page.
@@ -164,95 +167,114 @@ static int vdso_mremap(const struct vm_special_mapping *sm,
return 0;
}
-static struct vm_special_mapping vdso_spec[2] __ro_after_init = {
- {
- .name = "[vvar]",
- },
- {
- .name = "[vdso]",
- .mremap = vdso_mremap,
- },
-};
-
-static int __init vdso_init(void)
+static int __init vdso_mappings_init(const char *name,
+ const char *code_start,
+ const char *code_end,
+ struct vdso_mappings *mappings)
{
- int i;
+ unsigned long i, vdso_page;
struct page **vdso_pagelist;
unsigned long pfn;
- if (memcmp(vdso_start, "\177ELF", 4)) {
- pr_err("vDSO is not a valid ELF object!\n");
+ if (memcmp(code_start, "\177ELF", 4)) {
+ pr_err("%s is not a valid ELF object!\n", name);
return -EINVAL;
}
- vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
- pr_info("vdso: %ld pages (%ld code @ %p, %ld data @ %p)\n",
- vdso_pages + 1, vdso_pages, vdso_start, 1L, vdso_data);
-
- /* Allocate the vDSO pagelist, plus a page for the data. */
- vdso_pagelist = kcalloc(vdso_pages + 1, sizeof(struct page *),
- GFP_KERNEL);
+ vdso_pages = (code_end - code_start) >> PAGE_SHIFT;
+ pr_info("%s: %ld pages (%ld code @ %p, %ld data @ %p)\n",
+ name, vdso_pages + 1, vdso_pages, code_start, 1L,
+ vdso_data);
+
+ /*
+ * Allocate space for storing pointers to the vDSO code pages + the
+ * data page. The pointers must have the same lifetime as the mappings,
+ * which are static, so there is no need to keep track of the pointer
+ * array to free it.
+ */
+ vdso_pagelist = kmalloc_array(vdso_pages + 1, sizeof(struct page *),
+ GFP_KERNEL);
if (vdso_pagelist == NULL)
return -ENOMEM;
/* Grab the vDSO data page. */
vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data));
-
/* Grab the vDSO code pages. */
- pfn = sym_to_pfn(vdso_start);
+ pfn = sym_to_pfn(code_start);
for (i = 0; i < vdso_pages; i++)
vdso_pagelist[i + 1] = pfn_to_page(pfn + i);
- vdso_spec[0].pages = &vdso_pagelist[0];
- vdso_spec[1].pages = &vdso_pagelist[1];
+ /* Populate the special mapping structures */
+ mappings->data_mapping = (struct vm_special_mapping) {
+ .name = "[vvar]",
+ .pages = &vdso_pagelist[0],
+ };
+
+ mappings->code_mapping = (struct vm_special_mapping) {
+ .name = "[vdso]",
+ .pages = &vdso_pagelist[1],
+ };
+ mappings->num_code_pages = vdso_pages;
return 0;
}
+
+static struct vdso_mappings vdso_mappings __ro_after_init;
+
+static int __init vdso_init(void)
+{
+ extern char vdso_start[], vdso_end[];
+
+ return vdso_mappings_init("vdso", vdso_start, vdso_end,
+ &vdso_mappings);
+}
arch_initcall(vdso_init);
-int arch_setup_additional_pages(struct linux_binprm *bprm,
- int uses_interp)
+static int vdso_setup(struct mm_struct *mm,
+ const struct vdso_mappings *mappings)
{
- struct mm_struct *mm = current->mm;
unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
void *ret;
- vdso_text_len = vdso_pages << PAGE_SHIFT;
+ vdso_text_len = mappings->num_code_pages << PAGE_SHIFT;
/* Be sure to map the data page */
vdso_mapping_len = vdso_text_len + PAGE_SIZE;
- if (down_write_killable(&mm->mmap_sem))
- return -EINTR;
vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0);
- if (IS_ERR_VALUE(vdso_base)) {
- ret = ERR_PTR(vdso_base);
- goto up_fail;
- }
+ if (IS_ERR_VALUE(vdso_base))
+ ret = PTR_ERR_OR_ZERO(ERR_PTR(vdso_base));
+
ret = _install_special_mapping(mm, vdso_base, PAGE_SIZE,
VM_READ|VM_MAYREAD,
- &vdso_spec[0]);
+ &mappings->data_mapping);
if (IS_ERR(ret))
- goto up_fail;
+ return PTR_ERR_OR_ZERO(ret);
vdso_base += PAGE_SIZE;
- mm->context.vdso = (void *)vdso_base;
ret = _install_special_mapping(mm, vdso_base, vdso_text_len,
VM_READ|VM_EXEC|
VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
- &vdso_spec[1]);
- if (IS_ERR(ret))
- goto up_fail;
+ &mappings->code_mapping);
+ if (!IS_ERR(ret))
+ mm->context.vdso = (void *)vdso_base;
+
+ return PTR_ERR_OR_ZERO(ret);
+}
+int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+{
+ struct mm_struct *mm = current->mm;
+ int ret;
- up_write(&mm->mmap_sem);
- return 0;
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
+
+ ret = vdso_setup(mm, &vdso_mappings);
-up_fail:
- mm->context.vdso = NULL;
up_write(&mm->mmap_sem);
- return PTR_ERR(ret);
+ return ret;
}
/*
--
2.19.0.605.g01d371f741-goog
From: Kevin Brodsky <[email protected]>
Provide the files necessary for building a compat (AArch32) vDSO in
kernel/vdso32.
This is mostly an adaptation of the arm vDSO. The most significant
change in vgettimeofday.c is the use of the arm64 vdso_data struct,
allowing the vDSO data page to be shared between the 32 and 64-bit
vDSOs. Additionally, a different set of barrier macros is used (see
aarch32-barrier.h), as we want to support old 32-bit compilers that
may not support ARMv8 and its new barrier arguments (*ld).
In addition to the time functions, sigreturn trampolines are also
provided, aiming at replacing those in the sigreturn page as the
latter don't provide any unwinding information (and it's easier to
have just one "user code" page). arm-specific unwinding directives are
used, based on glibc's implementation. Symbol offsets are made
available to the kernel using the same method as the 64-bit vDSO.
There is unfortunately an important caveat: we cannot get away with
hand-coding 32-bit instructions as in kernel/kuser32.S; this time we
really need a 32-bit compiler. The compat vDSO Makefile relies on
CROSS_COMPILE_ARM32 to provide a 32-bit compiler, appropriate logic
will be added to the arm64 Makefile later on to ensure that an attempt
to build the compat vDSO is made only if this variable has been set
properly.
Signed-off-by: Kevin Brodsky <[email protected]>
Recode the arm64 vDSO from assembler to C, building on work
previously submitted by Andrew Pinski <[email protected]>, and rework
it for use on both arm and arm64, retaining the optimizations made
for each architecture.
Signed-off-by: Mark Salyzyn <[email protected]>
Tested-by: Mark Salyzyn <[email protected]>
Cc: James Morse <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Dave Martin <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andy Gross <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Andrew Pinski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Jeremy Linton <[email protected]>
Cc: [email protected]
v2:
- Ensured CONFIG_64BIT is not defined; as a side effect, BITS_PER_LONG
  is correct, adding confidence.
---
arch/arm64/kernel/vdso32/.gitignore | 2 +
arch/arm64/kernel/vdso32/Makefile | 172 +++++++++++++++++++++++
arch/arm64/kernel/vdso32/compiler.h | 122 ++++++++++++++++
arch/arm64/kernel/vdso32/datapage.h | 1 +
arch/arm64/kernel/vdso32/sigreturn.S | 76 ++++++++++
arch/arm64/kernel/vdso32/vdso.S | 32 +++++
arch/arm64/kernel/vdso32/vdso.lds.S | 95 +++++++++++++
arch/arm64/kernel/vdso32/vgettimeofday.c | 3 +
8 files changed, 503 insertions(+)
create mode 100644 arch/arm64/kernel/vdso32/.gitignore
create mode 100644 arch/arm64/kernel/vdso32/Makefile
create mode 100644 arch/arm64/kernel/vdso32/compiler.h
create mode 100644 arch/arm64/kernel/vdso32/datapage.h
create mode 100644 arch/arm64/kernel/vdso32/sigreturn.S
create mode 100644 arch/arm64/kernel/vdso32/vdso.S
create mode 100644 arch/arm64/kernel/vdso32/vdso.lds.S
create mode 100644 arch/arm64/kernel/vdso32/vgettimeofday.c
diff --git a/arch/arm64/kernel/vdso32/.gitignore b/arch/arm64/kernel/vdso32/.gitignore
new file mode 100644
index 000000000000..4fea950fa5ed
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/.gitignore
@@ -0,0 +1,2 @@
+vdso.lds
+vdso.so.raw
diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
new file mode 100644
index 000000000000..6d44d972e89d
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/Makefile
@@ -0,0 +1,172 @@
+#
+# Building a vDSO image for AArch32.
+#
+# Author: Kevin Brodsky <[email protected]>
+# A mix between the arm64 and arm vDSO Makefiles.
+
+ifeq ($(cc-name),clang)
+ CC_ARM32 := $(cc-name) $(CLANG_TARGET_ARM32) -no-integrated-as
+else
+ CC_ARM32 := $(CROSS_COMPILE_ARM32)$(cc-name)
+endif
+
+# Same as cc-*option, but using CC_ARM32 instead of CC
+cc32-option = $(call try-run,\
+ $(CC_ARM32) $(1) -c -x c /dev/null -o "$$TMP",$(1),$(2))
+cc32-disable-warning = $(call try-run,\
+ $(CC_ARM32) -W$(strip $(1)) -c -x c /dev/null -o "$$TMP",-Wno-$(strip $(1)))
+cc32-ldoption = $(call try-run,\
+ $(CC_ARM32) $(1) -nostdlib -x c /dev/null -o "$$TMP",$(1),$(2))
+
+# We cannot use the global flags to compile the vDSO files, the main reason
+# being that the 32-bit compiler may be older than the main (64-bit) compiler
+# and therefore may not understand flags set using $(cc-option ...). Besides,
+# arch-specific options should be taken from the arm Makefile instead of the
+# arm64 one.
+# As a result we set our own flags here.
+
+# From top-level Makefile
+# NOSTDINC_FLAGS
+VDSO_CPPFLAGS := -nostdinc -isystem $(shell $(CC_ARM32) -print-file-name=include)
+VDSO_CPPFLAGS += $(LINUXINCLUDE)
+VDSO_CPPFLAGS += $(KBUILD_CPPFLAGS)
+
+# Common C and assembly flags
+# From top-level Makefile
+VDSO_CAFLAGS := $(VDSO_CPPFLAGS)
+VDSO_CAFLAGS += $(call cc32-option,-fno-PIE)
+ifdef CONFIG_DEBUG_INFO
+VDSO_CAFLAGS += -g
+endif
+ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC_ARM32)), y)
+VDSO_CAFLAGS += -DCC_HAVE_ASM_GOTO
+endif
+
+# From arm Makefile
+VDSO_CAFLAGS += $(call cc32-option,-fno-dwarf2-cfi-asm)
+VDSO_CAFLAGS += -mabi=aapcs-linux -mfloat-abi=soft
+ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
+VDSO_CAFLAGS += -mbig-endian
+else
+VDSO_CAFLAGS += -mlittle-endian
+endif
+
+# From arm vDSO Makefile
+VDSO_CAFLAGS += -fPIC -fno-builtin -fno-stack-protector
+VDSO_CAFLAGS += -DDISABLE_BRANCH_PROFILING
+
+# Try to compile for ARMv8. If the compiler is too old and doesn't support it,
+# fall back to v7. There is no easy way to check for what architecture the code
+# is being compiled, so define a macro specifying that (see arch/arm/Makefile).
+VDSO_CAFLAGS += $(call cc32-option,-march=armv8-a -D__LINUX_ARM_ARCH__=8,\
+ -march=armv7-a -D__LINUX_ARM_ARCH__=7)
+
+VDSO_CFLAGS := $(VDSO_CAFLAGS)
+# KBUILD_CFLAGS from top-level Makefile
+VDSO_CFLAGS += -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ -fno-strict-aliasing -fno-common \
+ -Werror-implicit-function-declaration \
+ -Wno-format-security \
+ -std=gnu89
+VDSO_CFLAGS += -O2
+# Some useful compiler-dependent flags from top-level Makefile
+VDSO_CFLAGS += $(call cc32-option,-Wdeclaration-after-statement,)
+VDSO_CFLAGS += $(call cc32-option,-Wno-pointer-sign)
+VDSO_CFLAGS += $(call cc32-option,-fno-strict-overflow)
+VDSO_CFLAGS += $(call cc32-option,-Werror=strict-prototypes)
+VDSO_CFLAGS += $(call cc32-option,-Werror=date-time)
+VDSO_CFLAGS += $(call cc32-option,-Werror=incompatible-pointer-types)
+
+# The 32-bit compiler does not provide 128-bit integers, which are used in
+# some headers that are indirectly included from the vDSO code.
+# This hack makes the compiler happy and should trigger a warning/error if
+# variables of such type are referenced.
+VDSO_CFLAGS += -D__uint128_t='void*'
+# Silence some warnings coming from headers that operate on longs
+# (on GCC 4.8 or older, there is unfortunately no way to silence this warning)
+VDSO_CFLAGS += $(call cc32-disable-warning,shift-count-overflow)
+VDSO_CFLAGS += -Wno-int-to-pointer-cast
+
+VDSO_AFLAGS := $(VDSO_CAFLAGS)
+VDSO_AFLAGS += -D__ASSEMBLY__
+
+VDSO_LDFLAGS := $(VDSO_CPPFLAGS)
+# From arm vDSO Makefile
+VDSO_LDFLAGS += -Wl,-Bsymbolic -Wl,--no-undefined -Wl,-soname=linux-vdso.so.1
+VDSO_LDFLAGS += -Wl,-z,max-page-size=4096 -Wl,-z,common-page-size=4096
+VDSO_LDFLAGS += -nostdlib -shared -mfloat-abi=soft
+VDSO_LDFLAGS += $(call cc32-ldoption,-Wl$(comma)--hash-style=sysv)
+VDSO_LDFLAGS += $(call cc32-ldoption,-Wl$(comma)--build-id)
+VDSO_LDFLAGS += $(call cc32-ldoption,-fuse-ld=bfd)
+
+
+# Borrow vdsomunge.c from the arm vDSO
+# We have to use a relative path because scripts/Makefile.host prefixes
+# $(hostprogs-y) with $(obj)
+munge := ../../../arm/vdso/vdsomunge
+hostprogs-y := $(munge)
+
+c-obj-vdso := vgettimeofday.o
+asm-obj-vdso := sigreturn.o
+
+# Build rules
+targets := $(c-obj-vdso) $(asm-obj-vdso) vdso.so vdso.so.dbg vdso.so.raw
+c-obj-vdso := $(addprefix $(obj)/, $(c-obj-vdso))
+asm-obj-vdso := $(addprefix $(obj)/, $(asm-obj-vdso))
+obj-vdso := $(c-obj-vdso) $(asm-obj-vdso)
+
+obj-y += vdso.o
+extra-y += vdso.lds
+CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+
+# Force dependency (vdso.s includes vdso.so through incbin)
+$(obj)/vdso.o: $(obj)/vdso.so
+
+include/generated/vdso32-offsets.h: $(obj)/vdso.so.dbg FORCE
+ $(call if_changed,vdsosym)
+
+# Strip rule for vdso.so
+$(obj)/vdso.so: OBJCOPYFLAGS := -S
+$(obj)/vdso.so: $(obj)/vdso.so.dbg FORCE
+ $(call if_changed,objcopy)
+
+$(obj)/vdso.so.dbg: $(obj)/vdso.so.raw $(obj)/$(munge) FORCE
+ $(call if_changed,vdsomunge)
+
+# Link rule for the .so file, .lds has to be first
+$(obj)/vdso.so.raw: $(src)/vdso.lds $(obj-vdso) FORCE
+ $(call if_changed,vdsold)
+
+# Compilation rules for the vDSO sources
+$(filter-out vgettimeofday.o, $(c-obj-vdso)): %.o: %.c FORCE
+ $(call if_changed_dep,vdsocc)
+$(asm-obj-vdso): %.o: %.S FORCE
+ $(call if_changed_dep,vdsoas)
+
+# Actual build commands
+quiet_cmd_vdsold = VDSOL32 $@
+ cmd_vdsold = $(CC_ARM32) -Wp,-MD,$(depfile) $(VDSO_LDFLAGS) \
+ -Wl,-T $(filter %.lds,$^) $(filter %.o,$^) -o $@
+quiet_cmd_vdsocc = VDSOC32 $@
+ cmd_vdsocc = $(CC_ARM32) -Wp,-MD,$(depfile) $(VDSO_CFLAGS) -c -o $@ $<
+quiet_cmd_vdsoas = VDSOA32 $@
+ cmd_vdsoas = $(CC_ARM32) -Wp,-MD,$(depfile) $(VDSO_AFLAGS) -c -o $@ $<
+
+quiet_cmd_vdsomunge = MUNGE $@
+ cmd_vdsomunge = $(obj)/$(munge) $< $@
+
+# Generate vDSO offsets using helper script (borrowed from the 64-bit vDSO)
+gen-vdsosym := $(srctree)/$(src)/../vdso/gen_vdso_offsets.sh
+quiet_cmd_vdsosym = VDSOSYM $@
+# The AArch64 nm should be able to read an AArch32 binary
+ cmd_vdsosym = $(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@
+
+# Install commands for the unstripped file
+quiet_cmd_vdso_install = INSTALL $@
+ cmd_vdso_install = cp $(obj)/[email protected] $(MODLIB)/vdso/vdso32.so
+
+vdso.so: $(obj)/vdso.so.dbg
+ @mkdir -p $(MODLIB)/vdso
+ $(call cmd,vdso_install)
+
+vdso_install: vdso.so
diff --git a/arch/arm64/kernel/vdso32/compiler.h b/arch/arm64/kernel/vdso32/compiler.h
new file mode 100644
index 000000000000..19a43fc37bb9
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/compiler.h
@@ -0,0 +1,122 @@
+/*
+ * Userspace implementations of fallback calls
+ *
+ * Copyright (C) 2017 Cavium, Inc.
+ * Copyright (C) 2012 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Will Deacon <[email protected]>
+ * Rewritten into C by: Andrew Pinski <[email protected]>
+ */
+
+#ifndef __VDSO_COMPILER_H
+#define __VDSO_COMPILER_H
+
+#include <generated/autoconf.h>
+#undef CONFIG_64BIT
+#include <asm/barrier.h> /* for isb() & dmb() */
+#include <asm/param.h> /* for HZ */
+#include <asm/unistd32.h>
+#include <linux/compiler.h>
+
+#ifdef CONFIG_ARM_ARCH_TIMER
+#define ARCH_PROVIDES_TIMER
+#endif
+
+/* can not include linux/time.h because of too much architectural cruft */
+#ifndef NSEC_PER_SEC
+#define NSEC_PER_SEC 1000000000L
+#endif
+
+/* can not include linux/jiffies.h because of too much architectural cruft */
+#ifndef TICK_NSEC
+#define TICK_NSEC ((NSEC_PER_SEC+HZ/2)/HZ)
+#endif
+
+/* can not include linux/hrtimer.h because of too much architectural cruft */
+#ifndef LOW_RES_NSEC
+#define LOW_RES_NSEC TICK_NSEC
+#ifdef ARCH_PROVIDES_TIMER
+#ifdef CONFIG_HIGH_RES_TIMERS
+# define HIGH_RES_NSEC 1
+# define MONOTONIC_RES_NSEC HIGH_RES_NSEC
+#else
+# define MONOTONIC_RES_NSEC LOW_RES_NSEC
+#endif
+#endif
+#endif
+
+#define DEFINE_FALLBACK(name, type_arg1, name_arg1, type_arg2, name_arg2) \
+static notrace long name##_fallback(type_arg1 _##name_arg1, \
+ type_arg2 _##name_arg2) \
+{ \
+ register type_arg1 name_arg1 asm("r0") = _##name_arg1; \
+ register type_arg2 name_arg2 asm("r1") = _##name_arg2; \
+ register long ret asm ("r0"); \
+ register long nr asm("r7") = __NR_##name; \
+ \
+ asm volatile( \
+ " swi #0\n" \
+ : "=r" (ret) \
+ : "r" (name_arg1), "r" (name_arg2), "r" (nr) \
+ : "memory"); \
+ \
+ return ret; \
+}
+
+/*
+ * AArch32 implementation of arch_counter_get_cntvct() suitable for vdso
+ */
+static __always_inline notrace u64 arch_vdso_read_counter(void)
+{
+ u64 res;
+
+ /* Read the virtual counter. */
+ isb();
+ asm volatile("mrrc p15, 1, %Q0, %R0, c14" : "=r" (res));
+
+ return res;
+}
+
+/*
+ * Can not include asm/processor.h to pick this up because of all the
+ * architectural components also included, so we open code a copy.
+ */
+static inline void cpu_relax(void)
+{
+ asm volatile("yield" ::: "memory");
+}
+
+#undef smp_rmb
+#if __LINUX_ARM_ARCH__ >= 8
+#define smp_rmb() dmb(ishld) /* ok on ARMv8 */
+#else
+#define smp_rmb() dmb(ish) /* ishld does not exist on ARMv7 */
+#endif
+
+/* Avoid unresolved references emitted by GCC */
+
+void __aeabi_unwind_cpp_pr0(void)
+{
+}
+
+void __aeabi_unwind_cpp_pr1(void)
+{
+}
+
+void __aeabi_unwind_cpp_pr2(void)
+{
+}
+
+#endif /* __VDSO_COMPILER_H */
diff --git a/arch/arm64/kernel/vdso32/datapage.h b/arch/arm64/kernel/vdso32/datapage.h
new file mode 100644
index 000000000000..fe3e216d94d1
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/datapage.h
@@ -0,0 +1 @@
+#include "../vdso/datapage.h"
diff --git a/arch/arm64/kernel/vdso32/sigreturn.S b/arch/arm64/kernel/vdso32/sigreturn.S
new file mode 100644
index 000000000000..14e5f9ca34f9
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/sigreturn.S
@@ -0,0 +1,76 @@
+/*
+ * Sigreturn trampolines for returning from a signal when the SA_RESTORER
+ * flag is not set.
+ *
+ * Copyright (C) 2016 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Based on glibc's arm sa_restorer. While this is not strictly necessary, we
+ * provide both A32 and T32 versions, in accordance with the arm sigreturn
+ * code.
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm-offsets.h>
+#include <asm/unistd32.h>
+
+.macro sigreturn_trampoline name, syscall, regs_offset
+ /*
+ * We provide directives for enabling stack unwinding through the
+ * trampoline. On arm, CFI directives are only used for debugging (and
+ * the vDSO is stripped of debug information), so only the arm-specific
+ * unwinding directives are useful here.
+ */
+ .fnstart
+ .save {r0-r15}
+ .pad #\regs_offset
+ /*
+ * It is necessary to start the unwind tables at least one instruction
+ * before the trampoline, as the unwinder will assume that the signal
+ * handler has been called from the trampoline, that is just before
+ * where the signal handler returns (mov r7, ...).
+ */
+ nop
+ENTRY(\name)
+ mov r7, #\syscall
+ svc #0
+ .fnend
+ /*
+ * We would like to use ENDPROC, but the macro uses @ which is a
+ * comment symbol for arm assemblers, so directly use .type with %
+ * instead.
+ */
+ .type \name, %function
+END(\name)
+.endm
+
+ .text
+
+ .arm
+ sigreturn_trampoline __kernel_sigreturn_arm, \
+ __NR_sigreturn, \
+ COMPAT_SIGFRAME_REGS_OFFSET
+
+ sigreturn_trampoline __kernel_rt_sigreturn_arm, \
+ __NR_rt_sigreturn, \
+ COMPAT_RT_SIGFRAME_REGS_OFFSET
+
+ .thumb
+ sigreturn_trampoline __kernel_sigreturn_thumb, \
+ __NR_sigreturn, \
+ COMPAT_SIGFRAME_REGS_OFFSET
+
+ sigreturn_trampoline __kernel_rt_sigreturn_thumb, \
+ __NR_rt_sigreturn, \
+ COMPAT_RT_SIGFRAME_REGS_OFFSET
diff --git a/arch/arm64/kernel/vdso32/vdso.S b/arch/arm64/kernel/vdso32/vdso.S
new file mode 100644
index 000000000000..fe19ff70eb76
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/vdso.S
@@ -0,0 +1,32 @@
+/*
+ * Copyright (C) 2012 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Will Deacon <[email protected]>
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/page.h>
+
+ .globl vdso32_start, vdso32_end
+ .section .rodata
+ .balign PAGE_SIZE
+vdso32_start:
+ .incbin "arch/arm64/kernel/vdso32/vdso.so"
+ .balign PAGE_SIZE
+vdso32_end:
+
+ .previous
diff --git a/arch/arm64/kernel/vdso32/vdso.lds.S b/arch/arm64/kernel/vdso32/vdso.lds.S
new file mode 100644
index 000000000000..f95cb1c431fb
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/vdso.lds.S
@@ -0,0 +1,95 @@
+/*
+ * Adapted from arm64 version.
+ *
+ * GNU linker script for the VDSO library.
+ *
+ * Copyright (C) 2012 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Will Deacon <[email protected]>
+ * Heavily based on the vDSO linker scripts for other archs.
+ */
+
+#include <linux/const.h>
+#include <asm/page.h>
+#include <asm/vdso.h>
+
+OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", "elf32-littlearm")
+OUTPUT_ARCH(arm)
+
+SECTIONS
+{
+ PROVIDE_HIDDEN(_vdso_data = . - PAGE_SIZE);
+ . = VDSO_LBASE + SIZEOF_HEADERS;
+
+ .hash : { *(.hash) } :text
+ .gnu.hash : { *(.gnu.hash) }
+ .dynsym : { *(.dynsym) }
+ .dynstr : { *(.dynstr) }
+ .gnu.version : { *(.gnu.version) }
+ .gnu.version_d : { *(.gnu.version_d) }
+ .gnu.version_r : { *(.gnu.version_r) }
+
+ .note : { *(.note.*) } :text :note
+
+ .dynamic : { *(.dynamic) } :text :dynamic
+
+ .rodata : { *(.rodata*) } :text
+
+ .text : { *(.text*) } :text =0xe7f001f2
+
+ .got : { *(.got) }
+ .rel.plt : { *(.rel.plt) }
+
+ /DISCARD/ : {
+ *(.note.GNU-stack)
+ *(.data .data.* .gnu.linkonce.d.* .sdata*)
+ *(.bss .sbss .dynbss .dynsbss)
+ }
+}
+
+/*
+ * We must supply the ELF program headers explicitly to get just one
+ * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+ */
+PHDRS
+{
+ text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
+ dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+ note PT_NOTE FLAGS(4); /* PF_R */
+}
+
+VERSION
+{
+ LINUX_2.6 {
+ global:
+ __vdso_clock_gettime;
+ __vdso_gettimeofday;
+ __vdso_clock_getres;
+ __vdso_time;
+ __kernel_sigreturn_arm;
+ __kernel_sigreturn_thumb;
+ __kernel_rt_sigreturn_arm;
+ __kernel_rt_sigreturn_thumb;
+ local: *;
+ };
+}
+
+/*
+ * Make the sigreturn code visible to the kernel.
+ */
+VDSO_compat_sigreturn_arm = __kernel_sigreturn_arm;
+VDSO_compat_sigreturn_thumb = __kernel_sigreturn_thumb;
+VDSO_compat_rt_sigreturn_arm = __kernel_rt_sigreturn_arm;
+VDSO_compat_rt_sigreturn_thumb = __kernel_rt_sigreturn_thumb;
diff --git a/arch/arm64/kernel/vdso32/vgettimeofday.c b/arch/arm64/kernel/vdso32/vgettimeofday.c
new file mode 100644
index 000000000000..b73d4011993d
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/vgettimeofday.c
@@ -0,0 +1,3 @@
+#include "compiler.h"
+#include "datapage.h"
+#include "../../../../lib/vdso/vgettimeofday.c"
--
2.19.0.605.g01d371f741-goog
On Mon, 1 Oct 2018, Mark Salyzyn wrote:
>
> +static notrace int do_boottime(const struct vdso_data *vd, struct timespec *ts)
> +{
> + u32 seq, mult, shift;
> + u64 nsec, cycle_last;
> + vdso_wtm_clock_nsec_t wtm_nsec;
> +
> + /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
> + ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
> + ts->tv_nsec = nsec;
> +
> + return 0;
> +}
> +
Instead of adding yet another copy of the same code you might want to look
at the rework I did for the x86 vdso in order to support CLOCK_TAI w/o
running into the issue of the clock switch case being compiled into a jump
table and then the compiler asking for retpoline.
http://lkml.kernel.org/r/[email protected]
Thanks,
tglx
On Mon, Oct 1, 2018 at 10:58 AM, Mark Salyzyn <[email protected]> wrote:
> Last sent 23 Nov 2016.
>
> The following 23 patches are rebased and resent, and represent a
> rewrite of the arm and arm64 vDSO into C, adding support for arch32
> (32-bit user space hosted 64-bit kernels) and into a common library
> that other (arm, or non-arm) architectures may utilize.
So I feel like this has gone around a few times w/o much comment from
the arm/arm64 maintainers. I'm not sure if there's a reason?
I worry part of the issue is that the scope of this patch set is a little
unwieldy (covering two architectures + generic code), which might leave
maintainers thinking/hoping someone else should review it.
It seems the patchset is already somewhat broken up into separate
sets, so I might recommend picking just one area and focus on
upstreaming that first. Maybe the in-arch cleanups for arm and then
arm64 and then maybe do the move to lib?
thanks
-john
On 10/01/2018 11:49 AM, John Stultz wrote:
> On Mon, Oct 1, 2018 at 10:58 AM, Mark Salyzyn <[email protected]> wrote:
>> Last sent 23 Nov 2016.
>>
>> The following 23 patches are rebased and resent, and represent a
>> rewrite of the arm and arm64 vDSO into C, adding support for arch32
>> (32-bit user space hosted 64-bit kernels) and into a common library
>> that other (arm, or non-arm) architectures may utilize.
> So I feel like this has gone around a few times w/o much comment from
> the arm/arm64 maintainers. I'm not sure if there's a reason?
I am "forming an opinion"(tm) that ARM is not interested in any work on
32 bit arm architectures. They have no manpower that they are willing to
devote to this.
Despite the gain of 0.4% for screen-on battery life, where Android has a
mix of 64 and 32 bit applications, thus still relevant _today_ on 64 bit
architectures (providing vDSO32 for 32-bit applications).
> I worry part of the issue is that the scope of this patch set is a little
> unwieldy (covering two architectures + generic code), which might leave
> maintainers thinking/hoping someone else should review it.
The original was submitted by an ARM author as a complete patch series.
That failed, so I took it over and broke it up into 5 logical groups of
adjustments to divide and conquer.
It was submitted one group at a time, out of eventually 5, with more than
a month between them and no upstreaming action. The patches were reworked
based on comments and split into smaller pieces (the first 12 were a much
smaller set, for example). Over the years (yes, it has been years) I have
settled on resending the 23 patches, still in 5 groups, with each
individual patch tested one at a time, so they can be taken individually
from each set.
ARM has complained that they want them all at one time because
individually they represent more work. So the whole set is here ready to go.
>
> It seems the patchset is already somewhat broken up into separate
> sets, so I might recommend picking just one area and focus on
> upstreaming that first. Maybe the in-arch cleanups for arm and then
> arm64 and then maybe do the move to lib?
They are in set order. The first 12 can be taken one at a time to
modernize arm so that it is up-to-date with the assembler code for
arm64; more or less the order you just outlined.
TahDah :-)
-- Mark
On 10/01/2018 11:15 AM, Thomas Gleixner wrote:
> On Mon, 1 Oct 2018, Mark Salyzyn wrote:
>>
>> +static notrace int do_boottime(const struct vdso_data *vd, struct timespec *ts)
>> +{
>> + u32 seq, mult, shift;
>> + u64 nsec, cycle_last;
>> + vdso_wtm_clock_nsec_t wtm_nsec;
>> +
>> + /* open coding timespec_add_ns to save a ts->tv_nsec = 0 */
>> + ts->tv_sec = sec + __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec);
>> + ts->tv_nsec = nsec;
>> +
>> + return 0;
>> +}
>> +
> Instead of adding yet another copy of the same code you might want to look
> at the rework I did for the x86 vdso in order to support CLOCK_TAI w/o
> running into the issue of the clock switch case being compiled into a jump
> table and then the compiler asking for retpoline.
>
> http://lkml.kernel.org/r/[email protected]
>
> Thanks,
>
> tglx
Great idea. Thanks!
The point of the first 12 patches is to _align_ the arm code to match
the assembler for arm64 exactly 1:1. Then switch arm64 assembler to use
the _same_ code in 'C' as a library. No performance degradation.
Next extend the vdso framework on arm64 to _also_ use that library for
vDSO32 (arm32 compat on 64-bit). At this point we achieve a 0.4% power
reduction on Android.
At that point, we would be ready for a rework that fixes all three (ARM
vDSO, ARM64 vDSO and ARM64 vDSO32) to get the compiler to handle the
switch statement better. One step at a time.
-- Mark
On Mon, Oct 1, 2018 at 1:44 PM, Mark Salyzyn <[email protected]> wrote:
> On 10/01/2018 11:49 AM, John Stultz wrote:
>> It seems the patchset is already somewhat broken up into separate
>> sets, so I might recommend picking just one area and focus on
>> upstreaming that first. Maybe the in-arch cleanups for arm and then
>> arm64 and then maybe do the move to lib?
>
>
> They are in set-order, The first 12 can be taken one at a time to modernize
> arm so that it is up-to-date with the assembler code for arm64. More or less
> the order you just outlined.
>
> TahDah :-)
Sorry, I appreciate the background. I know this has been something
you've been pushing for quite some time, but it also seems to be
somewhat sporadic, and that makes it hard to keep track of the
narrative.
I'm unfortunately not the right person to queue the arm/arm64 changes,
but let me know if I can help with the generic bits.
thanks
-john
On Mon, Oct 01, 2018 at 01:44:52PM -0700, Mark Salyzyn wrote:
> Despite the gain of 0.4% for screen-on battery life, where Android has a mix
> of 64 and 32 bit applications, thus still relevant _today_ on 64 bit
> architectures (providing vDSO32 for 32-bit applications).
I don't think the issue is what you think it is. 0.4% gain is
equivalent to almost (but not quite) 1 minute extra for a lifetime of
4 hours. Is that really noticeable, and is it worth the churn from
merging this series?
Given that the gain is so marginal, I can see why people find it
difficult to get excited about this series to spend the time reviewing
it.
What I'm saying is that the reason that people should look at this
series hasn't been "sold" particularly well. How does it look from
the system performance point of view - is there a speed-up there
that's more significant?
In any case, I suspect that if you compare the battery life from
kernels from two years ago with modern kernels, you'll see a
degradation over that period just because of the progressive
increase in complexity, and especially things such as the Spectre
work-arounds.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up
On Mon, Oct 01, 2018 at 01:44:52PM -0700, Mark Salyzyn wrote:
> On 10/01/2018 11:49 AM, John Stultz wrote:
> > On Mon, Oct 1, 2018 at 10:58 AM, Mark Salyzyn <[email protected]> wrote:
> > > Last sent 23 Nov 2016.
> > >
> > > The following 23 patches are rebased and resent, and represent a
> > > rewrite of the arm and arm64 vDSO into C, adding support for arch32
> > > (32-bit user space hosted 64-bit kernels) and into a common library
> > > that other (arm, or non-arm) architectures may utilize.
> > So I feel like this has gone around a few times w/o much comment from
> > the arm/arm64 maintainers. I'm not sure if there's a reason?
>
> I am "forming an opinion"(tm) that ARM is not interested in any work on 32
> bit arm architectures. They have no manpower that they are willing to devote
> to this.
Actually, we are interested in this work but, TBH, I find it a bit hard
to read your series and have postponed looking into it in detail. Just
look at the patch numbering/versioning for example:
> [PATCH v5 01/12] arm: vdso: rename vdso_datapage variables
> [PATCH v5 02/12] arm: vdso: add include file defining __get_datapage()
> [PATCH v5 03/12] arm: vdso: inline assembler operations to compiler.h
> [PATCH v5 04/12] arm: vdso: do calculations outside reader loops
> [PATCH v6 05/12] arm: vdso: Add support for CLOCK_MONOTONIC_RAW
> [PATCH v5 06/12] arm: vdso: add support for clock_getres
> [PATCH v5 07/12] arm: vdso: disable profiling
> [PATCH v5 08/12] arm: vdso: Add ARCH_CLOCK_FIXED_MASK
> [PATCH v5 09/12] arm: vdso: move vgettimeofday.c to lib/vdso/
> [PATCH v5 10/12] arm64: vdso: replace gettimeofday.S with global vgettimeofday.C
> [PATCH v6 11/12] lib: vdso: Add support for CLOCK_BOOTTIME
> [PATCH v5 12/12] lib: vdso: do not expose gettimeofday, if no arch supported timer
> [PATCH] lib: vdso: add support for time
> [PATCH v2 1/3] arm64: compat: Split the sigreturn trampolines and kuser helpers (C sources)
> [PATCH v2 2/3] arm64: compat: Split the sigreturn trampolines and kuser helpers (assembler sources)
> [PATCH v2 3/3] arm64: compat: Add CONFIG_KUSER_HELPERS
> [PATCH] arm64: compat: Expose offset to registers in sigframes
> [PATCH 1/6] arm64: compat: Use vDSO sigreturn trampolines if available
> [PATCH 2/6] arm64: elf: Set AT_SYSINFO_EHDR in compat processes
> [PATCH 3/6] arm64: Refactor vDSO init/setup
> [PATCH v2 4/6] arm64: compat: Add a 32-bit vDSO
> [PATCH 5/6] arm64: compat: 32-bit vDSO setup
> [PATCH 6/6] arm64: Wire up and expose the new compat vDSO
The above may look obvious to you as you've worked on it but not to
maintainers who have to read lots of other patchsets.
> Despite the gain of 0.4% for screen-on battery life, where Android has a mix
> of 64 and 32 bit applications, thus still relevant _today_ on 64 bit
> architectures (providing vDSO32 for 32-bit applications).
As Russell said, if that's the only gain, you may need other selling
points.
The main advantage I see is to avoid code duplication, hence a vdso
library that could be shared by arm/arm64/arm64-compat _and_ future or
existing architectures that need vdso support.
> ARM has complained that they want them all at one time because individually
> they represent more work. So the whole set is here ready to go.
Having five separate series without a clear dependency between them was
worse than the current numbering scheme ;).
Anyway, since I still think this series is important, some weeks ago I
assigned Vincenzo Frascino in my team the task of de-cluttering this
patchset and posting it to the list. So we may see a new series later
this month (and any feedback welcome).
--
Catalin
On 10/02/2018 01:50 AM, Russell King - ARM Linux wrote:
> On Mon, Oct 01, 2018 at 01:44:52PM -0700, Mark Salyzyn wrote:
>> Despite the gain of 0.4% for screen-on battery life, where Android has a mix
>> of 64 and 32 bit applications, thus still relevant _today_ on 64 bit
>> architectures (providing vDSO32 for 32-bit applications).
> I don't think the issue is what you think it is. 0.4% gain is
> equivalent to almost (but not quite) 1 minute extra for a lifetime of
> 4 hours. Is that really noticeable, and is it worth the churn from
> merging this series?
This is screen-on battery life, with all other components of the system
active (backlights, touchscreen, sensors, etc.), and it still had a
measurable impact on power from the portion that came from the CPU
complex. This impact came solely from 32-bit applications left over on
the 64-bit platforms that did not formerly have vDSO support. 32-bit
applications are not going away even with the advent of 64-bit (the
sound media framework on Android is 32-bit).
A reduction in power is also an increase in performance. The
microbenchmarks show a ~3-10 fold improvement in acquiring the time of
various sorts for 32-bit applications on 64-bit. A savings in the range
of sub-microseconds each time (sic) adds up to a 0.4% battery
improvement overall.
> Given that the gain is so marginal, I can see why people find it
> difficult to get excited about this series to spend the time reviewing
> it.
The reviews were intense over the years IMHO, appeasing several
stakeholders and testers. It's that final ACK that has been elusive. I
have gotten private emails from ARM many times promising some
engineering effort to look at these, while I waited patiently without
feedback.
The changes, as requested in past reviews, also turned into moving most
vDSO maintenance for all ARM architectures into lib/vdso, with no
reduction in microbenchmark performance for the other configurations;
at least two additional non-arm architectures look like they could
readily switch over to using it as well. This is a major win for
maintenance (and was not part of the original set).
As a result, switching the arm64 vDSO assembler to C is a win for
maintenance if upstreamed.
Given that Android will need to place these patches in its common tree
(not there yet, because I have required upstreaming first, but with
limits; 2 years seems a bit long to wait for a project that started at
ARM and that I was then asked to help push forward), not having them
upstream, tested, and with many eyes on the code would be a pity, and
could result in a maintenance burden across up to 13000 phone devices.
-- Mark
On 10/02/2018 03:00 AM, Catalin Marinas wrote:
> On Mon, Oct 01, 2018 at 01:44:52PM -0700, Mark Salyzyn wrote:
>> On 10/01/2018 11:49 AM, John Stultz wrote:
>>> On Mon, Oct 1, 2018 at 10:58 AM, Mark Salyzyn <[email protected]> wrote:
>>>> Last sent 23 Nov 2016.
>>>>
>>>> The following 23 patches are rebased and resent, and represent a
>>>> rewrite of the arm and arm64 vDSO into C, adding support for arch32
>>>> (32-bit user space hosted 64-bit kernels) and into a common library
>>>> that other (arm, or non-arm) architectures may utilize.
>>> So I feel like this has gone around a few times w/o much comment from
>>> the arm/arm64 maintainers. I'm not sure if there's a reason?
>> I am "forming an opinion"(tm) that ARM is not interested in any work on 32
>> bit arm architectures. They have no manpower that they are willing to devote
>> to this.
> Actually, we are interested in this work but, TBH, I find it a bit hard
> to read your series and have postponed looking into it in detail. Just
> look at the patch numbering/versioning for example:
>
>> [PATCH v5 01/12] arm: vdso: rename vdso_datapage variables
>> [PATCH v5 02/12] arm: vdso: add include file defining __get_datapage()
>> [PATCH v5 03/12] arm: vdso: inline assembler operations to compiler.h
>> [PATCH v5 04/12] arm: vdso: do calculations outside reader loops
>> [PATCH v6 05/12] arm: vdso: Add support for CLOCK_MONOTONIC_RAW
>> [PATCH v5 06/12] arm: vdso: add support for clock_getres
>> [PATCH v5 07/12] arm: vdso: disable profiling
>> [PATCH v5 08/12] arm: vdso: Add ARCH_CLOCK_FIXED_MASK
>> [PATCH v5 09/12] arm: vdso: move vgettimeofday.c to lib/vdso/
>> [PATCH v5 10/12] arm64: vdso: replace gettimeofday.S with global vgettimeofday.C
>> [PATCH v6 11/12] lib: vdso: Add support for CLOCK_BOOTTIME
>> [PATCH v5 12/12] lib: vdso: do not expose gettimeofday, if no arch supported timer
>> [PATCH] lib: vdso: add support for time
>> [PATCH v2 1/3] arm64: compat: Split the sigreturn trampolines and kuser helpers (C sources)
>> [PATCH v2 2/3] arm64: compat: Split the sigreturn trampolines and kuser helpers (assembler sources)
>> [PATCH v2 3/3] arm64: compat: Add CONFIG_KUSER_HELPERS
>> [PATCH] arm64: compat: Expose offset to registers in sigframes
>> [PATCH 1/6] arm64: compat: Use vDSO sigreturn trampolines if available
>> [PATCH 2/6] arm64: elf: Set AT_SYSINFO_EHDR in compat processes
>> [PATCH 3/6] arm64: Refactor vDSO init/setup
>> [PATCH v2 4/6] arm64: compat: Add a 32-bit vDSO
>> [PATCH 5/6] arm64: compat: 32-bit vDSO setup
>> [PATCH 6/6] arm64: Wire up and expose the new compat vDSO
> The above may look obvious to you as you've worked on it but not to
> maintainers who have to read lots of other patchsets.
Because the whole set was not taken, I split it into mostly orthogonal
pieces to divide and conquer, as requested. I feel so betrayed by the
system ;-} :-)
There is an order, but at least
[PATCH v2 1/3] arm64: compat: Split the sigreturn trampolines and kuser helpers (C sources)
[PATCH v2 2/3] arm64: compat: Split the sigreturn trampolines and kuser helpers (assembler sources)
[PATCH v2 3/3] arm64: compat: Add CONFIG_KUSER_HELPERS
can go in first, independently and standalone, providing a much-needed
rework and added security by allowing control over the troublesome
kuser helpers.
>> The gain is 0.4% in screen-on battery life; Android still has a mix
>> of 64- and 32-bit applications, so this remains relevant _today_ on
>> 64-bit architectures (providing vDSO32 for 32-bit applications).
> As Russell said, if that's the only gain, you may need other selling
> points.
0.4% screen-on means all the other components of the phone, including
the backlight, are drawing power, and adding an arm64 vDSO32 for the
32-bit arm applications (a subset of the phone ecosystem) _still_ had a
measurable power impact. There are 64-bit phones running an entirely
32-bit user space that will no doubt gain even more from this.
Microbenchmarks for arm32 applications on arm64 report a ~3-10 fold
improvement in performance (the time() call being the ten-fold case, a
gain for both arm32 and arm64 applications).
> The main advantage I see is to avoid code duplication, hence a vdso
> library that could be shared by arm/arm64/arm64-compat _and_ future or
> existing architectures that need vdso support.
The library was thankfully added after review, but alas it increased
the complexity of the set to fulfill.
>> ARM has complained that they want them all at one time because individually
>> they represent more work. So the whole set is here ready to go.
> Having five separate series without a clear dependency between them was
> worse than the current numbering scheme ;).
For that I apologize; others asked for it to be split up, and I
complied.
> Anyway, since I still think this series is important, some weeks ago I
> assigned Vincenzo Frascino in my team the task of de-cluttering this
> patchset and posting it to the list. So we may see a new series later
> this month (and any feedback welcome).
WooHoo (sorry for being so emotional)
-- Mark