Subject: Re: [PATCH 2/2] hv_balloon: do adjust_managed_page_count() when ballooning/un-ballooning
To: Vitaly Kuznetsov, linux-hyperv@vger.kernel.org
Cc: Wei Liu, Stephen Hemminger, Haiyang Zhang, Michael Kelley, Dexuan Cui, linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20201202161245.2406143-1-vkuznets@redhat.com> <20201202161245.2406143-3-vkuznets@redhat.com>
From: David Hildenbrand
Organization: Red Hat GmbH
Message-ID: <9202aafa-f30e-4d96-72a9-3ccd083cc58c@redhat.com>
Date: Thu, 3 Dec 2020 17:13:15 +0100
In-Reply-To: <20201202161245.2406143-3-vkuznets@redhat.com>

On 02.12.20 17:12, Vitaly Kuznetsov wrote:
> Unlike virtio_balloon/virtio_mem/xen balloon drivers, Hyper-V balloon driver
> does not adjust managed pages count when ballooning/un-ballooning and this
> leads to incorrect stats being reported, e.g. unexpected 'free' output.
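
For anyone following along: adjust_managed_page_count() adjusts the zone's
managed_pages counter and totalram_pages, and totalram_pages is what MemTotal
in /proc/meminfo (and therefore 'free') is derived from. From memory it is
roughly the following (mm/page_alloc.c; details may vary between kernel
versions):

	void adjust_managed_page_count(struct page *page, long count)
	{
		/* pages the page allocator manages in this zone */
		atomic_long_add(count, &page_zone(page)->managed_pages);
		/* feeds MemTotal in /proc/meminfo */
		totalram_pages_add(count);
	#ifdef CONFIG_HIGHMEM
		if (PageHighMem(page))
			totalhigh_pages_add(count);
	#endif
	}

So without these calls, ballooned-out pages keep counting towards MemTotal
even though the guest can no longer use them.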
> 
> Note, the calculation in post_status() seems to remain correct: ballooned out
> pages are never 'available' and we manually add dm->num_pages_ballooned to
> 'commited'.
> 
> Suggested-by: David Hildenbrand
> Signed-off-by: Vitaly Kuznetsov
> ---
>  drivers/hv/hv_balloon.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
> index da3b6bd2367c..8c471823a5af 100644
> --- a/drivers/hv/hv_balloon.c
> +++ b/drivers/hv/hv_balloon.c
> @@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
>  		__ClearPageOffline(pg);
>  		__free_page(pg);
>  		dm->num_pages_ballooned--;
> +		adjust_managed_page_count(pg, 1);
>  	}
>  }
>  
> @@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
>  		split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>  
>  		/* mark all pages offline */
> -		for (j = 0; j < alloc_unit; j++)
> +		for (j = 0; j < alloc_unit; j++) {
>  			__SetPageOffline(pg + j);
> +			adjust_managed_page_count(pg + j, -1);
> +		}
>  
>  		bl_resp->range_count++;
>  		bl_resp->range_array[i].finfo.start_page =
> 

I assume this has been properly tested, such that it does not change the
system behavior regarding when/how Hyper-V decides to add/remove memory.

LGTM

Reviewed-by: David Hildenbrand

--
Thanks,

David / dhildenb