Date: Tue, 24 Jun 2014 16:08:57 -0700
From: Andrew Morton
To: Dan Streetman
Cc: Seth Jennings, Minchan Kim, Weijie Yang, Nitin Gupta, Bob Liu,
	Hugh Dickins, Mel Gorman, Rik van Riel, Johannes Weiner,
	Sergey Senozhatsky, Linux-MM, linux-kernel
Subject: Re: [PATCHv4 3/6] mm/zpool: implement common zpool api to zbud/zsmalloc
Message-Id: <20140624160857.ebb638c4c69c1d290f64d01f@linux-foundation.org>
References: <1400958369-3588-1-git-send-email-ddstreet@ieee.org>
	<1401747586-11861-1-git-send-email-ddstreet@ieee.org>
	<1401747586-11861-4-git-send-email-ddstreet@ieee.org>
	<20140623144614.d4549fa0aecb03b7b8044bc7@linux-foundation.org>

On Tue, 24 Jun 2014 11:39:12 -0400 Dan Streetman wrote:

> On Mon, Jun 23, 2014 at 5:46 PM, Andrew Morton wrote:
> > On Mon, 2 Jun 2014 18:19:43 -0400 Dan Streetman wrote:
> >
> >> Add zpool api.
> >>
> >> zpool provides an interface for memory storage, typically of compressed
> >> memory.  Users can select what backend to use; currently the only
> >> implementations are zbud, a low density implementation with up to
> >> two compressed pages per storage page, and zsmalloc, a higher density
> >> implementation with multiple compressed pages per storage page.
> >>
> >> ...
> >>
> >> +/**
> >> + * zpool_create_pool() - Create a new zpool
> >> + * @type	The type of the zpool to create (e.g. zbud, zsmalloc)
> >> + * @flags	What GFP flags should be used when the zpool allocates memory.
> >> + * @ops	The optional ops callback.
> >> + *
> >> + * This creates a new zpool of the specified type.  The zpool will use the
> >> + * given flags when allocating any memory.  If the ops param is NULL, then
> >> + * the created zpool will not be shrinkable.
> >> + *
> >> + * Returns: New zpool on success, NULL on failure.
> >> + */
> >> +struct zpool *zpool_create_pool(char *type, gfp_t flags,
> >> +			struct zpool_ops *ops);
> >
> > It is unconventional to document the API in the .h file.  It's better
> > to put the documentation where people expect to find it.
> >
> > It's irritating for me (for example) because this kernel convention has
> > permitted me to train my tags system to ignore prototypes in headers.
> > But if I want to find the zpool_create_pool documentation I will need
> > to jump through hoops.
>
> Got it, I will move it to the .c file.
>
> I noticed you pulled these into -mm, do you want me to send follow-on
> patches for these changes, or actually update the original patches and
> resend the patch set?

Full resend, I guess.  I often add things which are not-quite-fully-baked
to give them a bit of testing, check for integration with other changes,
etc.

> >>
> >> ...
> >>
> >> +
> >> +struct zpool *zpool_create_pool(char *type, gfp_t flags,
> >> +			struct zpool_ops *ops)
> >> +{
> >> +	struct zpool_driver *driver;
> >> +	struct zpool *zpool;
> >> +
> >> +	pr_info("creating pool type %s\n", type);
> >> +
> >> +	spin_lock(&drivers_lock);
> >> +	driver = zpool_get_driver(type);
> >> +	spin_unlock(&drivers_lock);
> >
> > Racy against unregister.  Can be solved with a standard get/put
> > refcounting implementation.  Or perhaps a big fat mutex.

Was there a decision here?
> >> +void zpool_destroy_pool(struct zpool *zpool)
> >> +{
> >> +	pr_info("destroying pool type %s\n", zpool->type);
> >> +
> >> +	spin_lock(&pools_lock);
> >> +	list_del(&zpool->list);
> >> +	spin_unlock(&pools_lock);
> >> +	zpool->driver->destroy(zpool->pool);
> >> +	kfree(zpool);
> >> +}
> >
> > What are the lifecycle rules here?  How do we know that nobody else can
> > be concurrently using this pool?
>
> Well I think with zpools, as well as direct use of zsmalloc and zbud
> pools, whoever creates a pool is responsible for making sure it's no
> longer in use before destroying it.

Sounds reasonable.  Perhaps there's some convenient WARN_ON we can put in
here to check that.

> I think in most use cases, pool
> creators won't be sharing their pools, so there should be no issue
> with concurrent use.  In fact, concurrent pool use is probably a bad
> idea in general - zsmalloc for example relies on per-cpu data during
> handle mapping, so concurrent use of a single pool might result in the
> per-cpu data being overwritten if multiple users of a single pool
> tried to map and use different handles from the same cpu.

That's all a bit waffly.  Either we support concurrent use or we don't!

> Should some use/sharing restrictions be added to the zpool documentation?

Sure.  And the code if possible.  If a second user tries to use a pool
which is already in use, that attempt should just fail, with WARN,
printk, return -EBUSY, whatever.