Message-ID: <5A69D3C9.9080201@intel.com>
Date: Thu, 25 Jan 2018 20:55:37 +0800
From: Wei Wang
To: Tetsuo Handa, "Michael S. Tsirkin"
CC: virtio-dev@lists.oasis-open.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
    linux-mm@kvack.org, mhocko@kernel.org, akpm@linux-foundation.org,
    pbonzini@redhat.com, liliang.opensource@gmail.com,
    yang.zhang.wz@gmail.com, quan.xu0@gmail.com, nilal@redhat.com,
    riel@redhat.com
Subject: Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
References: <1516790562-37889-1-git-send-email-wei.w.wang@intel.com>
    <1516790562-37889-3-git-send-email-wei.w.wang@intel.com>
    <20180124183349-mutt-send-email-mst@kernel.org>
    <5A694FB5.5090803@intel.com>
    <17068749-d2c7-61bb-4637-a1aee5a0d0fb@I-love.SAKURA.ne.jp>
In-Reply-To: <17068749-d2c7-61bb-4637-a1aee5a0d0fb@I-love.SAKURA.ne.jp>

On 01/25/2018 07:28 PM, Tetsuo Handa wrote:
> On 2018/01/25 12:32, Wei Wang wrote:
>> On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:
>>> On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
>>> +
>>> +static void report_free_page_func(struct work_struct *work)
>>> +{
>>> +	struct virtio_balloon *vb;
>>> +	unsigned long flags;
>>> +
>>> +	vb = container_of(work, struct virtio_balloon, report_free_page_work);
>>> +
>>> +	/* Start by sending the obtained cmd id to the host with an outbuf */
>>> +	send_cmd_id(vb, &vb->start_cmd_id);
>>> +
>>> +	/*
>>> +	 * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
>>> +	 * indicate a new request can be queued.
>>> +	 */
>>> +	spin_lock_irqsave(&vb->stop_update_lock, flags);
>>> +	vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
>>> +				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
>>> +	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
>>> +
>>> +	walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
>>>
>>> Can you teach walk_free_mem_block to return the && of all
>>> return calls, so caller knows whether it completed?
>>
>> There are two cases that can cause walk_free_mem_block to return
>> without completing:
>> 1) the host requests to stop in advance
>> 2) vq->broken
>>
>> How about letting walk_free_mem_block simply return the value returned
>> by its callback (i.e. virtio_balloon_send_free_pages)?
>>
>> When the host requests to stop, it returns "1", and the caller above
>> only bails out when walk_free_mem_block returns a "< 0" value.
>
> I feel that virtio_balloon_send_free_pages is doing too much work.
>
> It can be called many times with IRQs disabled. The number of times
> it is called depends on the amount of free pages (and the fragmentation
> state). Generally, more free pages mean more calls.
>
> Then, why don't you allocate some pages for holding all the pfn values,
> call walk_free_mem_block() only for storing the pfn values, and then
> send the pfn values without disabling IRQs?

We have actually tried many methods for this feature before, and what
you suggest is one of them; you can find the related discussion in
earlier versions of this series. Beyond the complexity of that method
(if you think further along that line), I can share the performance
(live migration time) comparison of that method against the one in this
patch: ~405 ms vs. ~260 ms.

The concerns you raise have also been discussed before. The strategy is
to start with something fundamental and improve incrementally (if you
check the earlier versions, we also had a method that made the lock
finer-grained, but we decided to leave that to future improvement for
the sake of prudence).
If possible, please let Michael review this patch; he already knows all
of this context. We will finish this feature as soon as possible, and
can then discuss the other one with you if you want. Thanks.

Best,
Wei