Date: Fri, 26 Jun 2015 08:56:42 +0200
From: David Jander
To: Ulf Hansson
Cc: Pierre Ossman, Sascha Hauer, Johan Rudholm, Adrian Hunter,
 Javier Martinez Canillas, linux-mmc, "linux-kernel@vger.kernel.org"
Subject: Re: [RFC PATCH] mmc: core: Optimize case for exactly one erase-group budget TRIM
Message-ID: <20150626085642.5cdad2e3@archvile>
In-Reply-To:
References: <1433320469-29453-1-git-send-email-david@protonic.nl>
Organization: Protonic Holland

Dear Ulf,

On Thu, 4 Jun 2015 10:31:59 +0200 Ulf Hansson wrote:

> On 3 June 2015 at 10:34, David Jander wrote:
> > In the (not so unlikely) case that the mmc controller timeout budget is
> > enough for exactly one erase-group, the simplification of allowing one
> > sector has an enormous performance penalty. We optimize this special case
> > by introducing a flag that prohibits erase-group boundary crossing, so
> > that we can allow trimming more than one sector at a time.
> >
> > Signed-off-by: David Jander
>
> Hi David,
>
> Thanks for working on this!

I have since sent an updated patch that includes more comments. It would be
great if you could find the time to review it. I hope the comments are clear
enough.

Best regards,

-- 
David Jander
Protonic Holland.
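
[Editor's note: the following is a minimal, self-contained sketch of the
boundary-clamping idea described in the quoted commit message, not the actual
mmc core code from the patch; the function and variable names below are made
up for illustration. The idea: when the host's timeout budget covers exactly
one erase group, cut a trim request off at the next erase-group boundary
instead of falling back to trimming a single sector at a time.]

#include <stdio.h>

/*
 * Illustrative only -- not the actual mmc core implementation.
 * Clamp a trim request [from, from + nr) so it does not cross an
 * erase-group boundary; the whole remainder of the current group
 * can then be trimmed in one command instead of one sector at a time.
 */
static unsigned int clamp_to_erase_group(unsigned int from, unsigned int nr,
					 unsigned int erase_grp_sectors)
{
	/* First sector of the next erase group after 'from'. */
	unsigned int grp_end = (from / erase_grp_sectors + 1) * erase_grp_sectors;

	/* Trim at most up to the boundary, never across it. */
	if (from + nr > grp_end)
		nr = grp_end - from;

	return nr;
}

int main(void)
{
	/* e.g. 1024-sector erase groups, request starting mid-group at sector 1000 */
	printf("%u sectors\n", clamp_to_erase_group(1000, 4096, 1024)); /* prints 24 */
	return 0;
}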