From: Vegard Nossum
Date: Fri, 28 Oct 2016 11:47:07 +0200
Subject: Re: rowhammer protection [was Re: Getting interrupt every million cache misses]
To: Ingo Molnar
Cc: Peter Zijlstra, Pavel Machek, Kees Cook, Arnaldo Carvalho de Melo, kernel list, Ingo Molnar, Alexander Shishkin, "kernel-hardening@lists.openwall.com"

On 28 October 2016 at 11:35, Ingo Molnar wrote:
>
> * Vegard Nossum wrote:
>
>> Would it make sense to sample the counter on context switch, do some
>> accounting on a per-task cache miss counter, and slow down just the
>> single task(s) with a too high cache miss rate? That way there's no
>> global slowdown (which I assume would be the case here). The task's
>> slice of CPU would have to be taken into account, because otherwise you
>> could have multiple cooperating tasks that each escape the limit but
>> taken together go above it.
>
> Attackers could work around this by splitting the rowhammer workload between
> multiple threads/processes.
>
> I.e. the problem is that the risk may come from any 'unprivileged user-space
> code', where the rowhammer workload might be spread over multiple threads,
> processes or even users.

That's why I emphasised the number of misses per CPU slice rather than
just the total number of misses. I assumed there must be at least one
task continuously hammering memory for a successful attack, in which
case it should be observable with as little as one slice of CPU
(however long that is), no?

Vegard
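
P.S. To make the per-slice accounting more concrete, here is a rough,
untested user-space sketch of the logic I have in mind. All of the
struct fields, the threshold and the numbers below are made up for
illustration; the real thing would hook into the context switch path
and read the PMU's cache miss counter instead of these placeholders:

/*
 * Sketch: sample a per-task cache-miss counter at every "context
 * switch", normalize by the CPU time the task actually got, and
 * throttle only the tasks whose miss rate exceeds a threshold.
 */
#include <stdint.h>
#include <stdio.h>

struct task {
	const char *name;
	uint64_t misses;	/* cache misses seen during the last slice */
	uint64_t slice_ns;	/* CPU time the task got in that slice     */
	uint64_t throttle_ns;	/* forced delay before it may run again    */
};

/* Arbitrary example threshold: misses per millisecond of CPU time. */
#define MISS_RATE_LIMIT	50000ULL

static void account_slice(struct task *t)
{
	/* misses per millisecond of CPU time actually consumed */
	uint64_t rate = t->misses * 1000000ULL / (t->slice_ns ? t->slice_ns : 1);

	if (rate > MISS_RATE_LIMIT) {
		/* Penalize only this task, proportionally to the excess. */
		t->throttle_ns = (rate - MISS_RATE_LIMIT) * 1000ULL;
		printf("%s: %llu misses/ms, throttle for %llu ns\n",
		       t->name, (unsigned long long)rate,
		       (unsigned long long)t->throttle_ns);
	} else {
		t->throttle_ns = 0;
	}
}

int main(void)
{
	struct task normal = { "normal", 20000,    1000000, 0 };
	struct task hammer = { "hammer", 80000000, 1000000, 0 };

	account_slice(&normal);	/* stays below the limit, no slowdown  */
	account_slice(&hammer);	/* way above the limit, gets throttled */
	return 0;
}

The point of dividing by slice_ns rather than using a raw per-task
count is that cooperating tasks can't each stay under a global budget:
every task is judged by the miss rate during the CPU time it actually
consumed, so splitting the hammering across threads doesn't help.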