From: Nadav Amit
To: Greg Kroah-Hartman, Arnd Bergmann
CC: Xavier Deguillard, Nadav Amit
Subject: [PATCH v2 07/20] vmw_balloon: treat all refused pages equally
Date: Thu, 20 Sep 2018 10:30:13 -0700
Message-ID: <20180920173026.141333-8-namit@vmware.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180920173026.141333-1-namit@vmware.com>
References: <20180920173026.141333-1-namit@vmware.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Currently, when the hypervisor rejects a page during a lock operation, the VM
treats pages differently according to the error code: in certain cases the
page is freed immediately, and in others it is put on a rejection list and
only freed later. This behavior does not make much sense: a page that is
freed immediately is very likely to be allocated again in the next batch and
rejected again. In addition, to support compaction and OOM notifiers, we wish
to separate the logic that communicates with the hypervisor (and analyzes the
status of each page) from the logic that allocates or frees pages.

Treat all errors the same way, queuing the pages on the refuse list. Move to
the next allocation size (4k) when too many pages are refused. Free the
refused pages when moving to the next size, to avoid situations in which too
much memory is waiting to be freed on the refused list.

Reviewed-by: Xavier Deguillard
Signed-off-by: Nadav Amit
---
 drivers/misc/vmw_balloon.c | 52 +++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 23 deletions(-)

diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
index 96dde120bbd5..4e067d269706 100644
--- a/drivers/misc/vmw_balloon.c
+++ b/drivers/misc/vmw_balloon.c
@@ -543,29 +543,13 @@ static int vmballoon_lock(struct vmballoon *b, unsigned int num_pages,
 		/* Error occurred */
 		STATS_INC(b->stats.refused_alloc[is_2m_pages]);
-		switch (status) {
-		case VMW_BALLOON_ERROR_PPN_PINNED:
-		case VMW_BALLOON_ERROR_PPN_INVALID:
-			/*
-			 * Place page on the list of non-balloonable pages
-			 * and retry allocation, unless we already accumulated
-			 * too many of them, in which case take a breather.
-			 */
-			if (page_size->n_refused_pages
-					< VMW_BALLOON_MAX_REFUSED) {
-				list_add(&p->lru, &page_size->refused_pages);
-				page_size->n_refused_pages++;
-				break;
-			}
-			/* Fallthrough */
-		case VMW_BALLOON_ERROR_RESET:
-		case VMW_BALLOON_ERROR_PPN_NOTNEEDED:
-			vmballoon_free_page(p, is_2m_pages);
-			break;
-		default:
-			/* This should never happen */
-			WARN_ON_ONCE(true);
-		}
+		/*
+		 * Place page on the list of non-balloonable pages
+		 * and retry allocation, unless we already accumulated
+		 * too many of them, in which case take a breather.
+		 */
+		list_add(&p->lru, &page_size->refused_pages);
+		page_size->n_refused_pages++;
 	}
 
 	return batch_status == VMW_BALLOON_SUCCESS ? 0 : -EIO;
@@ -712,9 +696,31 @@ static void vmballoon_inflate(struct vmballoon *b)
 		vmballoon_add_page(b, num_pages++, page);
 		if (num_pages == b->batch_max_pages) {
+			struct vmballoon_page_size *page_size =
+					&b->page_sizes[is_2m_pages];
+
 			error = vmballoon_lock(b, num_pages, is_2m_pages);
 
 			num_pages = 0;
+
+			/*
+			 * Stop allocating this page size if we already
+			 * accumulated too many pages that the hypervisor
+			 * refused.
+			 */
+			if (page_size->n_refused_pages >=
+			    VMW_BALLOON_MAX_REFUSED) {
+				if (!is_2m_pages)
+					break;
+
+				/*
+				 * Release the refused pages as we move to 4k
+				 * pages.
+				 */
+				vmballoon_release_refused_pages(b, true);
+				is_2m_pages = true;
+			}
+
 			if (error)
 				break;
 		}
-- 
2.17.1