Subject: Re: [PATCH 2/2] hv_balloon: do adjust_managed_page_count() when ballooning/un-ballooning
To: Vitaly Kuznetsov, linux-hyperv@vger.kernel.org
Cc: Wei Liu, Stephen Hemminger, Haiyang Zhang, Michael Kelley, Dexuan Cui,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20201202161245.2406143-1-vkuznets@redhat.com>
 <20201202161245.2406143-3-vkuznets@redhat.com>
 <9202aafa-f30e-4d96-72a9-3ccd083cc58c@redhat.com>
 <871rg6ok4v.fsf@vitty.brq.redhat.com>
From: David Hildenbrand
Organization: Red Hat GmbH
Message-ID: <13524c28-dfec-dd21-8a45-216b161deb72@redhat.com>
Date: Thu, 3 Dec 2020 18:49:39 +0100
In-Reply-To: <871rg6ok4v.fsf@vitty.brq.redhat.com>
List-ID: <linux-kernel.vger.kernel.org>

On 03.12.20 18:49, Vitaly Kuznetsov wrote:
> David Hildenbrand writes:
>
>> On 02.12.20 17:12, Vitaly Kuznetsov wrote:
>>> Unlike the virtio_balloon/virtio_mem/Xen balloon drivers, the Hyper-V
>>> balloon driver does not adjust the managed page count when
>>> ballooning/un-ballooning, and this leads to incorrect stats being
>>> reported, e.g. unexpected 'free' output.
>>>
>>> Note, the calculation in post_status() seems to remain correct:
>>> ballooned-out pages are never 'available' and we manually add
>>> dm->num_pages_ballooned to 'committed'.
>>>
>>> Suggested-by: David Hildenbrand
>>> Signed-off-by: Vitaly Kuznetsov
>>> ---
>>>  drivers/hv/hv_balloon.c | 5 ++++-
>>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
>>> index da3b6bd2367c..8c471823a5af 100644
>>> --- a/drivers/hv/hv_balloon.c
>>> +++ b/drivers/hv/hv_balloon.c
>>> @@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
>>>  		__ClearPageOffline(pg);
>>>  		__free_page(pg);
>>>  		dm->num_pages_ballooned--;
>>> +		adjust_managed_page_count(pg, 1);
>>>  	}
>>>  }
>>>
>>> @@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
>>>  	split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>>>
>>>  	/* mark all pages offline */
>>> -	for (j = 0; j < alloc_unit; j++)
>>> +	for (j = 0; j < alloc_unit; j++) {
>>>  		__SetPageOffline(pg + j);
>>> +		adjust_managed_page_count(pg + j, -1);
>>> +	}
>>>
>>>  	bl_resp->range_count++;
>>>  	bl_resp->range_array[i].finfo.start_page =
>>>
>>
>> I assume this has been properly tested such that it does not change the
>> system behavior regarding when/how Hyper-V decides to add/remove memory.
>>
>
> I'm always reluctant to confirm 'proper testing', as no matter how small
> and 'obvious' the change is, regressions keep happening :-) But yes,
> this was tested on a Hyper-V host under 'stress', and I observed 'free'
> with the balloon both inflated and deflated; the values looked sane.

That's what I wanted to hear ;)

-- 
Thanks,

David / dhildenb