Date: Fri, 21 Jul 2017 14:07:12 +0900
From: Minchan Kim <minchan@kernel.org>
To: Hui Zhu
Cc: Hui Zhu, "ngupta@vflare.org", Sergey Senozhatsky, Linux Memory Management List, "linux-kernel@vger.kernel.org"
Subject: Re: [PATCH] zsmalloc: zs_page_migrate: not check inuse if migrate_mode is not MIGRATE_ASYNC
Message-ID: <20170721050712.GA11758@bbox>
References: <1500018667-30175-1-git-send-email-zhuhui@xiaomi.com> <20170717053941.GA29581@bbox> <20170720084711.GA8355@bbox>

Hi Hui,

On Thu, Jul 20, 2017 at 05:33:45PM +0800, Hui Zhu wrote:

< snip >

> >> >> +++ b/mm/zsmalloc.c
> >> >> @@ -1982,6 +1982,7 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
> >> >>  	unsigned long old_obj, new_obj;
> >> >>  	unsigned int obj_idx;
> >> >>  	int ret = -EAGAIN;
> >> >> +	int inuse;
> >> >>
> >> >>  	VM_BUG_ON_PAGE(!PageMovable(page), page);
> >> >>  	VM_BUG_ON_PAGE(!PageIsolated(page), page);
> >> >> @@ -1996,21 +1997,24 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
> >> >>  	offset = get_first_obj_offset(page);
> >> >>
> >> >>  	spin_lock(&class->lock);
> >> >> -	if (!get_zspage_inuse(zspage)) {
> >> >> +	inuse = get_zspage_inuse(zspage);
> >> >> +	if (mode == MIGRATE_ASYNC && !inuse) {
> >> >>  		ret = -EBUSY;
> >> >>  		goto unlock_class;
> >> >>  	}
> >> >>
> >> >>  	pos = offset;
> >> >>  	s_addr = kmap_atomic(page);
> >> >> -	while (pos < PAGE_SIZE) {
> >> >> -		head = obj_to_head(page, s_addr + pos);
> >> >> -		if (head & OBJ_ALLOCATED_TAG) {
> >> >> -			handle = head & ~OBJ_ALLOCATED_TAG;
> >> >> -			if (!trypin_tag(handle))
> >> >> -				goto unpin_objects;
> >> >> +	if (inuse) {
> >
> > I don't want to add an inuse check to every loop iteration. It might
> > avoid unnecessary looping in zs_page_migrate, so it is an optimization,
> > not a correction. Since I consider it would happen rarely, I think we
> > don't need to add the check. Could you just remove the
> > get_zspage_inuse check instead?
> >
> > Like this:
> >
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 013eea76685e..2d3d75fb0f16 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1980,14 +1980,9 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
> >  	pool = mapping->private_data;
> >  	class = pool->size_class[class_idx];
> >  	offset = get_first_obj_offset(page);
> > +	pos = offset;
> >
> >  	spin_lock(&class->lock);
> > -	if (!get_zspage_inuse(zspage)) {
> > -		ret = -EBUSY;
> > -		goto unlock_class;
> > -	}
> > -
> > -	pos = offset;
> >  	s_addr = kmap_atomic(page);
> >  	while (pos < PAGE_SIZE) {
> >  		head = obj_to_head(page, s_addr + pos);
> >
>
> What about setting pos to avoid the loop?
>
> @@ -1997,8 +1997,10 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
>
>  	spin_lock(&class->lock);
>  	if (!get_zspage_inuse(zspage)) {
> -		ret = -EBUSY;
> -		goto unlock_class;
> +		/* The page is empty.
> +		   Set "offset" to the end of page.
> +		   Then the loops of page will be avoided. */
> +		offset = PAGE_SIZE;

Good idea. Just a nitpick on the comment:

	/*
	 * Set "offset" to the end of the page so that every loop
	 * skips unnecessary object scanning.
	 */

Thanks!
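To make the suggestion concrete: the idea is that instead of bailing out of zs_page_migrate() with -EBUSY when the zspage holds no live objects, the scan cursor is pushed past the end of the page up front, so the object-scanning loop is simply never entered. The following is a minimal standalone sketch of that pattern; `page_inuse` and `scan_page()` are hypothetical stand-ins, not the real zsmalloc code or its helpers.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Hypothetical stand-in for get_zspage_inuse(): live objects in the page. */
static int page_inuse;

/*
 * Hypothetical stand-in for the scan loop in zs_page_migrate().
 * Returns how many loop iterations were performed.
 */
static int scan_page(size_t obj_size)
{
	size_t offset = 0;
	int iterations = 0;

	/*
	 * The trick from the thread: when the page is empty, set the
	 * cursor to the end of the page so that the loop below is
	 * skipped entirely, instead of returning -EBUSY to the caller.
	 */
	if (!page_inuse)
		offset = PAGE_SIZE;

	while (offset < PAGE_SIZE) {
		/* in zsmalloc, obj_to_head()/trypin_tag() happen here */
		iterations++;
		offset += obj_size;
	}
	return iterations;
}
```

An empty page yields zero iterations while a populated page is scanned object by object, which is exactly the behavior the suggested `offset = PAGE_SIZE;` hunk aims for.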