Date: Mon, 30 Nov 2015 20:14:24 +0900
From: Sergey Senozhatsky
To: "kyeongdon.kim"
Cc: Minchan Kim, Sergey Senozhatsky, Andrew Morton, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 2/2] zram: try vmalloc() after kmalloc()
Message-ID: <20151130111424.GB1483@swordfish>
In-Reply-To: <565C27FA.407@lge.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On (11/30/15 19:42), kyeongdon.kim wrote:
[..]
> Sorry to have kept you waiting.
> Obviously, I couldn't see the allocation failure message with this patch.
> But there is something causing a delay (not sure yet whether this is normal).

What delay? How significant is it? Do you see it in practice, or is it just a guess?

> static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
> {
>         ...
>         zstrm->private = comp->backend->create();
>         ^^^ // sometimes returns NULL repeatedly (2-5 times)
>
> As you know, if NULL is returned, this function is called again to
> get memory in the while() loop. I just checked this one with printk().

Well, not always:

a) current wait_event()s for an available stream to become idle
b) once current is woken up, it attempts to grab an idle stream
c) if zstrm, then return
d) if there is no idle stream, then goto a)
e) else try to allocate a stream again; if !zstrm goto a), else return

	while (1) {
		spin_lock(&zs->strm_lock);
		if (!list_empty(&zs->idle_strm)) {
			zstrm = list_entry(zs->idle_strm.next,
					struct zcomp_strm, list);
			list_del(&zstrm->list);
			spin_unlock(&zs->strm_lock);
			return zstrm;
		}
		/* zstrm streams limit reached, wait for idle stream */
		if (zs->avail_strm >= zs->max_strm) {
			spin_unlock(&zs->strm_lock);
			wait_event(zs->strm_wait,
					!list_empty(&zs->idle_strm));
			continue;
		}
		/* allocate new zstrm stream */
		zs->avail_strm++;
		spin_unlock(&zs->strm_lock);
		zstrm = zcomp_strm_alloc(comp);
		if (!zstrm) {
			spin_lock(&zs->strm_lock);
			zs->avail_strm--;
			spin_unlock(&zs->strm_lock);
			wait_event(zs->strm_wait,
					!list_empty(&zs->idle_strm));
			continue;
		}
		break;
	}

So it is possible for current to call zcomp_strm_alloc() several times... Do you see the same process doing N zcomp_strm_alloc() calls, or N processes each doing one zcomp_strm_alloc()? I think the latter is more likely: once one zcomp_strm_alloc() has failed, it is quite possible that N concurrent or succeeding IOs will fail the same way.

That's why I proposed decreasing ->max_strm; but we basically don't know when we should roll it back to the original value. I'm not sure I want to do something like: every 42nd IO, try to increment ->max_strm by one until it reaches the original value.

So I'd probably prefer to keep it the way it is; but let's see the numbers from you first.

	-ss