Date: Thu, 10 May 2018 21:40:17 +0200
From: Radim Krčmář <rkrcmar@redhat.com>
To: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: kvm@vger.kernel.org, x86@kernel.org, Paolo Bonzini, Roman Kagan, "K. Y.
Srinivasan" , Haiyang Zhang , Stephen Hemminger , "Michael Kelley (EOSG)" , Mohammed Gamal , Cathy Avery , linux-kernel@vger.kernel.org Subject: Re: [PATCH v3 4/6] KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE} implementation Message-ID: <20180510194016.GB3885@flask> References: <20180416110806.4896-1-vkuznets@redhat.com> <20180416110806.4896-5-vkuznets@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20180416110806.4896-5-vkuznets@redhat.com> X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.5]); Thu, 10 May 2018 19:40:20 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.5]); Thu, 10 May 2018 19:40:20 +0000 (UTC) for IP:'10.11.54.4' DOMAIN:'int-mx04.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'rkrcmar@redhat.com' RCPT:'' Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org 2018-04-16 13:08+0200, Vitaly Kuznetsov: > Implement HvFlushVirtualAddress{List,Space} hypercalls in a simplistic way: > do full TLB flush with KVM_REQ_TLB_FLUSH and kick vCPUs which are currently > IN_GUEST_MODE. > > Signed-off-by: Vitaly Kuznetsov > --- > diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c > @@ -1242,6 +1242,65 @@ int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata) > return kvm_hv_get_msr(vcpu, msr, pdata); > } > > +static void ack_flush(void *_completed) > +{ > +} > + > +static u64 kvm_hv_flush_tlb(struct kvm_vcpu *current_vcpu, u64 ingpa, > + u16 rep_cnt) > +{ > + struct kvm *kvm = current_vcpu->kvm; > + struct kvm_vcpu_hv *hv_current = ¤t_vcpu->arch.hyperv; > + struct hv_tlb_flush flush; > + struct kvm_vcpu *vcpu; > + int i, cpu, me; > + > + if (unlikely(kvm_read_guest(kvm, ingpa, &flush, sizeof(flush)))) > + return HV_STATUS_INVALID_HYPERCALL_INPUT; > + > + trace_kvm_hv_flush_tlb(flush.processor_mask, flush.address_space, > + flush.flags); > + > + cpumask_clear(&hv_current->tlb_lush); > + > + me = get_cpu(); > + > + kvm_for_each_vcpu(i, vcpu, kvm) { > + struct kvm_vcpu_hv *hv = &vcpu->arch.hyperv; > + > + if (!(flush.flags & HV_FLUSH_ALL_PROCESSORS) && Please add a check to prevent undefined behavior in C: (hv->vp_index >= 64 || > + !(flush.processor_mask & BIT_ULL(hv->vp_index))) > + continue; It would also fail in the wild as shl only considers the bottom 5 bits. > + /* > + * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we > + * can't analyze it here, flush TLB regardless of the specified > + * address space. > + */ > + kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu); > + > + /* > + * It is possible that vCPU will migrate and we will kick wrong > + * CPU but vCPU's TLB will anyway be flushed upon migration as > + * we already made KVM_REQ_TLB_FLUSH request. > + */ > + cpu = vcpu->cpu; > + if (cpu != -1 && cpu != me && cpu_online(cpu) && > + kvm_arch_vcpu_should_kick(vcpu)) > + cpumask_set_cpu(cpu, &hv_current->tlb_lush); > + } > + > + if (!cpumask_empty(&hv_current->tlb_lush)) > + smp_call_function_many(&hv_current->tlb_lush, ack_flush, > + NULL, true); Hm, quite a lot of code duplication with EX hypercall and also kvm_make_all_cpus_request ... 
Hm, quite a lot of code duplication with the EX hypercall and also with
kvm_make_all_cpus_request ...

I'm thinking about making something like

  kvm_make_some_cpus_request(struct kvm *kvm, unsigned int req,
                             bool (*predicate)(struct kvm_vcpu *vcpu))

or about implementing a vp_index -> vcpu mapping and then using

  kvm_vcpu_request_mask(struct kvm *kvm, unsigned int req, long *vcpu_bitmap)

The latter would probably simplify the logic of the EX hypercall.

What do you think?
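To make the second option concrete, here is a rough and untested sketch
of what the bitmap variant could look like (the name and details are
illustrative, not existing KVM API; the Hyper-V code would first
translate the guest's processor_mask into a bitmap of vCPU indices via
the vp_index -> vcpu mapping):

  static void ack_flush(void *_completed)
  {
  }

  /*
   * Make req on every vCPU whose index is set in vcpu_bitmap (NULL
   * means all vCPUs) and IPI the ones that might be in guest mode so
   * they notice the request, mirroring kvm_make_all_cpus_request().
   */
  void kvm_vcpu_request_mask(struct kvm *kvm, unsigned int req,
                             unsigned long *vcpu_bitmap)
  {
          int i, cpu, me;
          struct kvm_vcpu *vcpu;
          cpumask_var_t cpus;

          zalloc_cpumask_var(&cpus, GFP_ATOMIC);

          me = get_cpu();
          kvm_for_each_vcpu(i, vcpu, kvm) {
                  if (vcpu_bitmap && !test_bit(i, vcpu_bitmap))
                          continue;

                  kvm_make_request(req, vcpu);

                  cpu = vcpu->cpu;
                  if (cpus != NULL && cpu != -1 && cpu != me &&
                      cpu_online(cpu) && kvm_arch_vcpu_should_kick(vcpu))
                          cpumask_set_cpu(cpu, cpus);
          }

          if (unlikely(cpus == NULL))
                  /* Allocation failed, kick all online CPUs to be safe. */
                  smp_call_function_many(cpu_online_mask, ack_flush,
                                         NULL, true);
          else if (!cpumask_empty(cpus))
                  smp_call_function_many(cpus, ack_flush, NULL, true);
          put_cpu();

          free_cpumask_var(cpus);
  }

The EX hypercall could then reuse the same helper with a bitmap built
from the sparse banks.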