From: Nadav Amit <namit@vmware.com>
To: Arnd Bergmann
CC: Xavier Deguillard, Nadav Amit
Subject: [PATCH v3 07/20] vmw_balloon: treat all refused pages equally
Date: Wed, 26 Sep 2018 12:13:23 -0700
Message-ID: <20180926191336.101885-8-namit@vmware.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180926191336.101885-1-namit@vmware.com>
References: <20180926191336.101885-1-namit@vmware.com>
MIME-Version: 1.0
Content-Type: text/plain
Currently, when the hypervisor rejects a page during a lock operation, the
VM treats pages differently according to the error code: in certain cases
the page is freed immediately, while in others it is put on a rejection
list and only freed later.

This behavior does not make much sense: a page that is freed immediately
is very likely to be allocated again in the next batch of allocations, and
to be rejected again. In addition, to support compaction and OOM
notifiers, we wish to separate the logic that communicates with the
hypervisor (and analyzes the status of each page) from the logic that
allocates or frees pages.

Treat all errors the same way, queuing the pages on the refuse list. Move
to the next allocation size (4k) when too many pages are refused. Free the
refused pages when moving to the next size, so that too much memory does
not accumulate on the refused list while waiting to be freed.

Reviewed-by: Xavier Deguillard
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 drivers/misc/vmw_balloon.c | 52 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 29 insertions(+), 23 deletions(-)

diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
index 96dde120bbd5..4e067d269706 100644
--- a/drivers/misc/vmw_balloon.c
+++ b/drivers/misc/vmw_balloon.c
@@ -543,29 +543,13 @@ static int vmballoon_lock(struct vmballoon *b, unsigned int num_pages,
 		/* Error occurred */
 		STATS_INC(b->stats.refused_alloc[is_2m_pages]);
-		switch (status) {
-		case VMW_BALLOON_ERROR_PPN_PINNED:
-		case VMW_BALLOON_ERROR_PPN_INVALID:
-			/*
-			 * Place page on the list of non-balloonable pages
-			 * and retry allocation, unless we already accumulated
-			 * too many of them, in which case take a breather.
-			 */
-			if (page_size->n_refused_pages
-					< VMW_BALLOON_MAX_REFUSED) {
-				list_add(&p->lru, &page_size->refused_pages);
-				page_size->n_refused_pages++;
-				break;
-			}
-			/* Fallthrough */
-		case VMW_BALLOON_ERROR_RESET:
-		case VMW_BALLOON_ERROR_PPN_NOTNEEDED:
-			vmballoon_free_page(p, is_2m_pages);
-			break;
-		default:
-			/* This should never happen */
-			WARN_ON_ONCE(true);
-		}
+		/*
+		 * Place page on the list of non-balloonable pages
+		 * and retry allocation, unless we already accumulated
+		 * too many of them, in which case take a breather.
+		 */
+		list_add(&p->lru, &page_size->refused_pages);
+		page_size->n_refused_pages++;
 	}
 
 	return batch_status == VMW_BALLOON_SUCCESS ? 0 : -EIO;
@@ -712,9 +696,31 @@ static void vmballoon_inflate(struct vmballoon *b)
 		vmballoon_add_page(b, num_pages++, page);
 		if (num_pages == b->batch_max_pages) {
+			struct vmballoon_page_size *page_size =
+						&b->page_sizes[is_2m_pages];
+
 			error = vmballoon_lock(b, num_pages, is_2m_pages);
 			num_pages = 0;
+
+			/*
+			 * Stop allocating this page size if we already
+			 * accumulated too many pages that the hypervisor
+			 * refused.
+			 */
+			if (page_size->n_refused_pages >=
+					VMW_BALLOON_MAX_REFUSED) {
+				if (!is_2m_pages)
+					break;
+
+				/*
+				 * Release the refused pages as we move to 4k
+				 * pages.
+				 */
+				vmballoon_release_refused_pages(b, true);
+				is_2m_pages = false;
+			}
+
 			if (error)
 				break;
 		}
-- 
2.17.1
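
Postscript for readers following the logic rather than the diff: the
refusal-handling pattern this patch introduces reduces to the standalone
sketch below. This is a toy, not the driver code; the names, the
MAX_REFUSED value, the list layout, and the stand-in "hypervisor" are all
hypothetical.

/*
 * toy_refuse.c -- standalone sketch of the refusal-handling pattern.
 * Hypothetical stand-ins throughout; this is not the vmw_balloon code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_REFUSED 16		/* stand-in for VMW_BALLOON_MAX_REFUSED */

struct page {
	struct page *next;
};

struct size_state {
	struct page *refused;	/* pages the "hypervisor" rejected */
	unsigned int n_refused;
};

/* Toy hypervisor: rejects roughly one page in four, for any reason. */
static bool hv_lock_page(const struct page *p)
{
	(void)p;
	return rand() % 4 != 0;
}

/* Queue every rejected page instead of freeing it immediately; a freed
 * page would likely be handed straight back by the allocator and be
 * rejected again on the next batch. */
static void refuse_page(struct size_state *s, struct page *p)
{
	p->next = s->refused;
	s->refused = p;
	s->n_refused++;
}

static void release_refused(struct size_state *s)
{
	while (s->refused) {
		struct page *p = s->refused;

		s->refused = p->next;
		free(p);
	}
	s->n_refused = 0;
}

int main(void)
{
	struct size_state sizes[2] = { { NULL, 0 }, { NULL, 0 } };
	bool is_2m = true;	/* sizes[1] = 2M pages, sizes[0] = 4k pages */

	for (int i = 0; i < 256; i++) {
		struct size_state *s = &sizes[is_2m];
		struct page *p = malloc(sizeof(*p));

		if (!p)
			break;

		if (hv_lock_page(p))
			free(p);	/* "inflated"; freed here only to keep the toy leak-free */
		else
			refuse_page(s, p);

		/*
		 * Too many refusals at this size: fall back from 2M to 4k
		 * and drain the queued pages so memory does not pile up on
		 * the refused list.
		 */
		if (s->n_refused >= MAX_REFUSED) {
			if (!is_2m)
				break;
			release_refused(s);
			is_2m = false;
		}
	}

	printf("stopped with %u refused 4k pages queued\n", sizes[0].n_refused);
	release_refused(&sizes[0]);
	release_refused(&sizes[1]);
	return 0;
}

Both decisions the commit message argues for are visible here: a rejected
page is parked rather than freed, so the allocator cannot immediately hand
it back, and the refused list is drained exactly once, at the fall-back
from 2M to 4k pages.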