From: Uladzislau Rezki
Date: Wed, 19 May 2021 21:52:14 +0200
To: Mel Gorman, Christoph Hellwig
Cc: Uladzislau Rezki, Christoph Hellwig, Andrew Morton, linux-mm@kvack.org,
    LKML, Matthew Wilcox, Nicholas Piggin, Hillf Danton, Michal Hocko,
    Oleksiy Avramchenko, Steven Rostedt
Subject: Re: [PATCH 2/3] mm/vmalloc: Switch to bulk allocator in __vmalloc_area_node()
Message-ID: <20210519195214.GA2343@pc638.lan>
References: <20210516202056.2120-1-urezki@gmail.com>
 <20210516202056.2120-3-urezki@gmail.com>
 <20210519143900.GA2262@pc638.lan>
 <20210519155630.GD3672@suse.de>
In-Reply-To: <20210519155630.GD3672@suse.de>

> On Wed, May 19, 2021 at 04:39:00PM +0200, Uladzislau Rezki wrote:
> > > > +	/*
> > > > +	 * If not enough pages were obtained to accomplish an
> > > > +	 * allocation request, free them via __vfree() if any.
> > > > +	 */
> > > > +	if (area->nr_pages != nr_small_pages) {
> > > > +		warn_alloc(gfp_mask, NULL,
> > > > +			"vmalloc size %lu allocation failure: "
> > > > +			"page order %u allocation failed",
> > > > +			area->nr_pages * PAGE_SIZE, page_order);
> > > > +		goto fail;
> > > > +	}
> > >
> > > From reading __alloc_pages_bulk, not allocating all pages is something
> > > that can happen fairly easily. Shouldn't we try to allocate the missing
> > > pages manually and/or retry here?
> > >
> >
> > It is a good point. The bulk allocator, as I see it, only tries to access
> > the pcp-list and falls back to a single allocator once that fails, so the
> > array may not be fully populated.
> >
>
> Partially correct. It does allocate via the pcp-list, but the pcp-list will
> be refilled if it is empty, so if the bulk allocator returns fewer pages
> than requested, it may be due to hitting the watermarks or the local zone
> being depleted. It does not take any special action to correct the
> situation or stall, e.g. wake kswapd, enter direct reclaim, allocate from
> a remote node, etc.
>
> If no pages were allocated, it will try to allocate at least one page via a
> single allocation request, in case the bulk failure would push the zone
> over the watermark but one page does not. That path, as a side effect,
> would also wake kswapd.
>
OK. The single-page allocator can enter a slow path, I mean direct reclaim,
etc., to adjust the watermarks.

> > In that case it probably makes sense to manually populate it using the
> > single-page allocator.
> >
> > Mel, could you please also comment on it?
> >
>
> It is by design, because it is unknown whether callers can recover or, if
> so, how they want to recover, and the primary intent behind the bulk
> allocator was speed. In the case of the network, it only wants some pages
> quickly, so as long as it gets one, it makes progress. The sunrpc user is
> willing to wait and retry. For vmalloc, I'm unsure what a suitable recovery
> path should be, as I do not have a good handle on workloads that are
> sensitive to vmalloc performance. The obvious option would be to loop and
> allocate single pages with alloc_pages_node, understanding that the
> additional pages may take longer to allocate.
>
I got it.
At least we should fall back to the single-page allocator, which is how we
used to allocate before (now it is only used for high-order pages). If that
also fails to obtain a page, we are done. Basically, the single-page
allocator is more permissive, so it has a higher chance of success;
therefore a fallback to it makes sense.

Thanks.

--
Vlad Rezki
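
Below is a minimal sketch of the fallback discussed above, assuming the
variable names from the patch hunk (area, area->pages, area->nr_pages,
nr_small_pages, gfp_mask, node) and assuming the bulk path is entered
through alloc_pages_bulk_array_node(); it is an illustration only, not the
code that was merged, and the high-order case is omitted:

	unsigned long i;

	/* Fast path: grab as many order-0 pages as possible in one call. */
	area->nr_pages = alloc_pages_bulk_array_node(gfp_mask, node,
						     nr_small_pages,
						     area->pages);

	/*
	 * The bulk allocator may return fewer pages than requested (empty
	 * pcp lists, watermarks) and does not recover on its own, so fall
	 * back to the single-page allocator, which can wake kswapd and
	 * enter direct reclaim and therefore has a better chance to
	 * succeed.
	 */
	for (i = area->nr_pages; i < nr_small_pages; i++) {
		struct page *page = alloc_pages_node(node, gfp_mask, 0);

		if (!page)
			break;	/* single-page allocator failed too: give up */

		area->pages[i] = page;
		area->nr_pages++;
	}

	if (area->nr_pages != nr_small_pages)
		goto fail;	/* warn_alloc() + __vfree() as in the patch */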