Message-ID: <1476932346.8590.2.camel@gmail.com>
Subject: Re: [patch v2] drivers/zram: Don't disable preemption in zcomp_stream_get/put()
From: Mike Galbraith
To: Sebastian Andrzej Siewior
Cc: Thomas Gleixner, LKML, linux-rt-users, Steven Rostedt
Date: Thu, 20 Oct 2016 04:59:06 +0200
In-Reply-To: <20161019165419.wh74ibrlbyd5fy3c@linutronix.de>
References: <20161006085228.jl6rpszdp5c2p2nr@linutronix.de>
	 <1476587662.1538.8.camel@gmail.com>
	 <20161017142428.25dyxefk2arjeyfz@linutronix.de>
	 <1476892590.3975.6.camel@gmail.com>
	 <20161019165419.wh74ibrlbyd5fy3c@linutronix.de>

On Wed, 2016-10-19 at 18:54 +0200, Sebastian Andrzej Siewior wrote:
> On 2016-10-19 17:56:30 [+0200], Mike Galbraith wrote:
> > In v4.7, the driver switched to percpu compression streams, disabling
> > preemption via get/put_cpu_ptr(). Use a local lock instead for RT.
> > We also have to fix an RT lock order issue in zram_decompress_page()
> > such that zs_map_object() nests inside of zcomp_stream_put() as it
> > does in zram_bvec_write().
>
> good. I almost had it myself. So let me get that one. And your previous
> one (the spinlock replacement) looks also reasonable.
> With this hunk
>
> @@ -313,6 +314,7 @@ struct mapping_area {
>  #endif
>  	char *vm_addr; /* address of kmap_atomic()'ed pages */
>  	enum zs_mapmode vm_mm; /* mapping mode */
> +	spinlock_t ma_lock;
>  };
>
>  #ifdef CONFIG_COMPACTION
> @@ -1489,6 +1491,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>  	off = (class->size * obj_idx) & ~PAGE_MASK;
>
>  	area = per_cpu_ptr(&zs_map_area, get_cpu_light());
> +	spin_lock(&area->ma_lock);
>  	area->vm_mm = mm;
>  	if (off + class->size <= PAGE_SIZE) {
>  		/* this object is contained entirely within a page */
> @@ -1542,6 +1545,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
>
>  		__zs_unmap_object(area, pages, off, class->size);
>  	}
> +	spin_unlock(&area->ma_lock);
>  	put_cpu_light();
>
>  	migrate_read_unlock(zspage);

Ew, yeah, same as the other one: I thought it was covered, but nope.

	-Mike