Date: Fri, 28 Oct 2016 10:51:41 +0100
From: Mark Rutland
To: Pavel Machek
Cc: Kees Cook, Peter Zijlstra, Arnaldo Carvalho de Melo, kernel list,
	Ingo Molnar, Alexander Shishkin, "kernel-hardening@lists.openwall.com"
Subject: Re: [kernel-hardening] rowhammer protection [was Re: Getting interrupt every million cache misses]
Message-ID: <20161028095141.GA5806@leverpostej>
References: <20161026204748.GA11177@amd>
 <20161027082801.GE3568@worktop.programming.kicks-ass.net>
 <20161027091104.GB19469@amd>
 <20161027093334.GK3102@twins.programming.kicks-ass.net>
 <20161027212747.GA18147@amd>
In-Reply-To: <20161027212747.GA18147@amd>

Hi,

I missed the original, so I've lost some context.

Has this been tested on a system vulnerable to rowhammer, and if so, was
it reliable in mitigating the issue?

Which particular attack codebase was it tested against?

On Thu, Oct 27, 2016 at 11:27:47PM +0200, Pavel Machek wrote:
> --- /dev/null
> +++ b/kernel/events/nohammer.c
> @@ -0,0 +1,66 @@
> +/*
> + * Thanks to Peter Zijlstra.
> + */
> +
> +#include <linux/perf_event.h>
> +#include <linux/module.h>
> +#include <linux/delay.h>
> +
> +struct perf_event_attr rh_attr = {
> +	.type		= PERF_TYPE_HARDWARE,
> +	.config		= PERF_COUNT_HW_CACHE_MISSES,
> +	.size		= sizeof(struct perf_event_attr),
> +	.pinned		= 1,
> +	/* FIXME: it is 1000000 per cpu. */
> +	.sample_period	= 500000,
> +};

I'm not sure that this is general enough to live in core code, because:

* there are existing ways around this (e.g. in the drammer case, using a
  non-cacheable mapping, which I don't believe would count as a cache
  miss).

  Given that, I'm very worried that this gives the false impression of
  protection in cases where a software workaround of this sort is
  insufficient or impossible.

* the precise semantics of performance counter events vary drastically
  across implementations. PERF_COUNT_HW_CACHE_MISSES might only map to
  one particular level of cache, and/or may not be implemented on all
  cores.

* on some implementations, it may be that the counters are not
  interchangeable, and for those this would take away
  PERF_COUNT_HW_CACHE_MISSES from existing users.

> +static DEFINE_PER_CPU(struct perf_event *, rh_event);
> +static DEFINE_PER_CPU(u64, rh_timestamp);
> +
> +static void rh_overflow(struct perf_event *event, struct perf_sample_data *data, struct pt_regs *regs)
> +{
> +	u64 *ts = this_cpu_ptr(&rh_timestamp); /* this is NMI context */
> +	u64 now = ktime_get_mono_fast_ns();
> +	s64 delta = now - *ts;
> +
> +	*ts = now;
> +
> +	/* FIXME msec per usec, reverse logic? */
> +	if (delta < 64 * NSEC_PER_MSEC)
> +		mdelay(56);
> +}

If I round-robin my attack across CPUs, how much does this help?

Thanks,
Mark.
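
For concreteness, a rough back-of-the-envelope sketch (plain userspace C,
not kernel code) of what the quoted patch's per-CPU throttle bounds, and
how that bound scales if an attacker round-robins across CPUs. The
sample_period and 64 ms window are taken from the quoted rh_attr and
rh_overflow(); the CPU counts are hypothetical:

#include <stdio.h>

int main(void)
{
	/* Values lifted from the quoted patch. */
	const double sample_period = 500000;	/* misses per overflow (rh_attr.sample_period) */
	const double window_s = 0.064;		/* throttle window: 64 * NSEC_PER_MSEC in rh_overflow() */

	/*
	 * In steady state each 500000-miss burst is followed by an
	 * mdelay(56) stall, so a single CPU is capped at roughly one
	 * sample_period per ~64 ms window.
	 */
	const double per_cpu_rate = sample_period / window_s;

	for (int n_cpus = 1; n_cpus <= 8; n_cpus *= 2) {
		/*
		 * The timestamp (rh_timestamp) and the stall are per-CPU,
		 * so an attacker hopping between CPUs scales the aggregate
		 * miss rate roughly linearly with the number of CPUs used.
		 */
		printf("%d cpu(s): ~%.1fM misses/sec aggregate\n",
		       n_cpus, n_cpus * per_cpu_rate / 1e6);
	}
	return 0;
}

On this rough model the cap is ~7.8M misses/sec per CPU and grows linearly
with every additional CPU an attacker can touch, which is the concern
behind the round-robin question above.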