From: Jed Davis <jld@mozilla.com>
To: Peter Zijlstra, Paul Mackerras, Ingo Molnar,
	Arnaldo Carvalho de Melo, Thomas Gleixner, "H. Peter Anvin",
	x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Jed Davis <jld@mozilla.com>
Subject: [PATCH 1/2] perf: Fix handling of arch_perf_out_copy_user return value.
Date: Mon, 29 Jul 2013 18:12:40 -0700
Message-Id: <1375146761-4339-1-git-send-email-jld@mozilla.com>

All architectures except x86 use __copy_from_user_inatomic to provide
arch_perf_out_copy_user; like the other copy_from routines, it returns
the number of bytes not copied.  perf, however, was expecting the
number of bytes that had been copied.  This change corrects that, and
thereby allows PERF_SAMPLE_STACK_USER to be enabled on non-x86
architectures.

x86 uses copy_from_user_nmi, which deviates from the other copy_from
routines by returning the number of bytes copied.  (This cancels out
the effect of perf being backwards; apparently this code has only ever
been tested on x86.)  This change therefore adds a second wrapper to
re-reverse the return value for perf; the next patch in this series
cleans that up.
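To make the two conventions concrete, here is a minimal userspace
sketch (illustration only, not part of the patch; faulting_copy() is a
hypothetical stand-in for a copy routine that can stop early, the way
__copy_from_user_inatomic stops at a fault):

	#include <stdio.h>
	#include <string.h>

	/* Hypothetical stand-in: copies at most the bytes available in
	 * "from" (up to its NUL, standing in for a faulting address)
	 * and, like __copy_from_user_inatomic, returns the number of
	 * bytes NOT copied.
	 */
	static unsigned long faulting_copy(void *to, const char *from,
					   unsigned long n)
	{
		unsigned long avail = strlen(from);
		unsigned long done = n < avail ? n : avail;

		memcpy(to, from, done);
		return n - done;	/* bytes not copied */
	}

	/* Inversion wrapper, the same shape as the wrappers this patch
	 * adds: bytes copied = n - bytes not copied (and vice versa).
	 */
	static unsigned long faulting_copy_copied(void *to,
						  const char *from,
						  unsigned long n)
	{
		return n - faulting_copy(to, from, n);
	}

	int main(void)
	{
		char dst[16];

		/* "hello" has 5 bytes; asking for 8 leaves 3 uncopied */
		printf("not copied: %lu\n",
		       faulting_copy(dst, "hello", 8));
		printf("copied:     %lu\n",
		       faulting_copy_copied(dst, "hello", 8));
		return 0;
	}

Either convention carries the same information; the bug was only that
perf interpreted one as the other.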
Signed-off-by: Jed Davis <jld@mozilla.com>
---
 arch/x86/include/asm/perf_event.h |  9 ++++++++-
 kernel/events/internal.h          | 11 ++++++++++-
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 8249df4..ddae5bd 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -274,6 +274,13 @@ static inline void perf_check_microcode(void) { }
 static inline void amd_pmu_disable_virt(void) { }
 #endif
 
-#define arch_perf_out_copy_user copy_from_user_nmi
+static inline unsigned long copy_from_user_nmi_for_perf(void *to,
+							const void __user *from,
+							unsigned long n)
+{
+	return n - copy_from_user_nmi(to, from, n);
+}
+
+#define arch_perf_out_copy_user copy_from_user_nmi_for_perf
 
 #endif /* _ASM_X86_PERF_EVENT_H */
diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index ca65997..e61b22c 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -81,6 +81,7 @@ static inline unsigned long perf_data_size(struct ring_buffer *rb)
 	return rb->nr_pages << (PAGE_SHIFT + page_order(rb));
 }
 
+/* The memcpy_func must return the number of bytes successfully copied. */
 #define DEFINE_OUTPUT_COPY(func_name, memcpy_func)			\
 static inline unsigned int						\
 func_name(struct perf_output_handle *handle,				\
@@ -122,11 +123,19 @@ DEFINE_OUTPUT_COPY(__output_copy, memcpy_common)
 
 DEFINE_OUTPUT_COPY(__output_skip, MEMCPY_SKIP)
 
+/* arch_perf_out_copy_user must return the number of bytes not copied. */
 #ifndef arch_perf_out_copy_user
 #define arch_perf_out_copy_user __copy_from_user_inatomic
 #endif
 
-DEFINE_OUTPUT_COPY(__output_copy_user, arch_perf_out_copy_user)
+static inline unsigned long perf_memcpy_from_user(void *to,
+						  const void __user *from,
+						  unsigned long n)
+{
+	return n - arch_perf_out_copy_user(to, from, n);
+}
+
+DEFINE_OUTPUT_COPY(__output_copy_user, perf_memcpy_from_user)
 
 /* Callchain handling */
 extern struct perf_callchain_entry *
-- 
1.8.3.2