Subject: Re: [PATCH] KVM: Remove redundant smp_mb() in the kvm_mmu_commit_zap_page()
From: Xiao Guangrong
To: Paolo Bonzini, Lan Tianyu, Thomas Gleixner
Cc: gleb@kernel.org, mingo@redhat.com, hpa@zytor.com, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Fri, 11 Mar 2016 02:08:05 +0800
Message-ID: <56E1B805.3060906@linux.intel.com>
In-Reply-To: <56E1A9D3.7080803@redhat.com>

On 03/11/2016 01:07 AM, Paolo Bonzini wrote:
> On 09/03/2016 08:18, Lan Tianyu wrote:
>> How about the following comments.
>>
>> Log for kvm_mmu_commit_zap_page()
>> /*
>> * We need to make sure everyone sees our modifications to
>> * the page tables and see changes to vcpu->mode here.
>
> Please mention that this pairs with vcpu_enter_guest and
> walk_shadow_page_lockless_begin/end.
>
>> The
>> * barrier in the kvm_flush_remote_tlbs() helps us to achieve
>> * these. Otherwise, wait for all vcpus to exit guest mode
>> * and/or lockless shadow page table walks.
>> */
>> kvm_flush_remote_tlbs(kvm);
>
> The rest of the comment is okay, but please replace "Otherwise" with "In
> addition, we need to".
>
>> Log for kvm_flush_remote_tlbs()
>> /*
>> * We want to publish modifications to the page tables before
>> * reading mode. Pairs with a memory barrier in arch-specific
>> * code.
>> * - x86: smp_mb__after_srcu_read_unlock in vcpu_enter_guest.
>
> ... and smp_mb in walk_shadow_page_lockless_begin/end.
>
>> * - powerpc: smp_mb in kvmppc_prepare_to_enter.
>> */
>> smp_mb__before_atomic();
>
> The comment looks good, but the smp_mb__before_atomic() is not needed.
> As mentioned in the reply to Guangrong, only a smp_load_acquire is required.
>
> So the comment should say something like "There is already an smp_mb()
> before kvm_make_all_cpus_request reads vcpu->mode. We reuse that
> barrier here.".
>
> On top of this there is:
>
> - the change to paging_tmpl.h that Guangrong posted, adding smp_wmb()
> before each increment of vcpu->kvm->tlbs_dirty

Yes, please make that a separate patch.

> - the change to smp_mb__after_atomic() in kvm_make_all_cpus_request
>
> - if you want :) you can also replace the store+mb in
> walk_shadow_page_lockless_begin with smp_store_mb, and the mb+store in
> walk_shadow_page_lockless_end with smp_store_release.

These changes look good to me; rough sketches of all three are below for
reference.

Tianyu, please CC me when you post the new version.
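
Putting the suggestions together, kvm_flush_remote_tlbs() in
virt/kvm/kvm_main.c could end up looking roughly like this. This is an
untested sketch written from memory of the current code, with the comment
merging Tianyu's text and Paolo's wording:

void kvm_flush_remote_tlbs(struct kvm *kvm)
{
	/*
	 * Read tlbs_dirty before flushing. The acquire pairs with the
	 * smp_wmb() before each tlbs_dirty increment in paging_tmpl.h,
	 * so a zapped spte is never missed by the flush.
	 */
	long dirty_count = smp_load_acquire(&kvm->tlbs_dirty);

	/*
	 * We want to publish modifications to the page tables before
	 * reading mode. Pairs with a memory barrier in arch-specific
	 * code:
	 * - x86: smp_mb__after_srcu_read_unlock in vcpu_enter_guest
	 *   and smp_mb in walk_shadow_page_lockless_begin/end.
	 * - powerpc: smp_mb in kvmppc_prepare_to_enter.
	 *
	 * There is already an smp_mb() before kvm_make_all_cpus_request
	 * reads vcpu->mode. We reuse that barrier here.
	 */
	if (kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
		++kvm->stat.remote_tlb_flush;
	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
}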
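
The producer side in paging_tmpl.h (the patch I posted, to go in as a
separate patch) pairs with the acquire above. Sketched from memory rather
than copied from the exact diff, the fragment in FNAME(sync_page) is along
these lines:

		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
			/*
			 * Update the spte before increasing tlbs_dirty, so
			 * that kvm_flush_remote_tlbs() cannot see the counter
			 * bumped while the stale spte is still visible.
			 */
			smp_wmb();
			vcpu->kvm->tlbs_dirty++;
			continue;
		}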
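
And the optional cleanup in arch/x86/kvm/mmu.c that Paolo mentioned would
look roughly like this (again untested; smp_store_mb is a store followed by
a full barrier, smp_store_release keeps all earlier accesses before the
store):

static void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu)
{
	/*
	 * Prevent the spte reads in the walk from being reordered before
	 * the write to vcpu->mode.
	 */
	smp_store_mb(vcpu->mode, READING_SHADOW_PAGE_TABLES);
}

static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
{
	/*
	 * Keep the store to vcpu->mode after all the spte reads, so that
	 * kvm_mmu_commit_zap_page() never sees us outside the walk while
	 * we may still dereference a zapped page table.
	 */
	smp_store_release(&vcpu->mode, OUTSIDE_GUEST_MODE);
}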