From: Emanuele Giuseppe Esposito
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, David Hildenbrand, Maxim Levitsky, x86@kernel.org, "H. Peter Anvin", linux-kernel@vger.kernel.org, Emanuele Giuseppe Esposito
Subject: [RFC PATCH 5/9] kvm_main.c: split __kvm_set_memory_region logic in kvm_check_mem and kvm_prepare_batch
Date: Fri, 9 Sep 2022 06:45:02 -0400
Message-Id: <20220909104506.738478-6-eesposit@redhat.com>
In-Reply-To: <20220909104506.738478-1-eesposit@redhat.com>
References: <20220909104506.738478-1-eesposit@redhat.com>

This is just a function split; no functional change is intended, with
one exception: when batch->change is KVM_MR_DELETE, kvm_prepare_batch()
no longer calls kvm_set_memslot() itself but delegates that call to its
caller, __kvm_set_memory_region().
Signed-off-by: Emanuele Giuseppe Esposito
---
 virt/kvm/kvm_main.c | 120 +++++++++++++++++++++++++++++---------------
 1 file changed, 79 insertions(+), 41 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 17f07546d591..9d917af30593 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1927,19 +1927,9 @@ static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id,
 	return false;
 }
 
-/*
- * Allocate some memory and give it an address in the guest physical address
- * space.
- *
- * Discontiguous memory is allowed, mostly for framebuffers.
- * This function takes also care of initializing batch->new/old/invalid/change
- * fields.
- *
- * Must be called holding kvm->slots_lock for write.
- */
-int __kvm_set_memory_region(struct kvm *kvm,
-			    const struct kvm_userspace_memory_region *mem,
-			    struct kvm_internal_memory_region_list *batch)
+static int kvm_prepare_batch(struct kvm *kvm,
+			     const struct kvm_userspace_memory_region *mem,
+			     struct kvm_internal_memory_region_list *batch)
 {
 	struct kvm_memory_slot *old, *new;
 	struct kvm_memslots *slots;
@@ -1947,34 +1937,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	unsigned long npages;
 	gfn_t base_gfn;
 	int as_id, id;
-	int r;
-
-	r = check_memory_region_flags(mem);
-	if (r)
-		return r;
 
 	as_id = mem->slot >> 16;
 	id = (u16)mem->slot;
 
-	/* General sanity checks */
-	if ((mem->memory_size & (PAGE_SIZE - 1)) ||
-	    (mem->memory_size != (unsigned long)mem->memory_size))
-		return -EINVAL;
-	if (mem->guest_phys_addr & (PAGE_SIZE - 1))
-		return -EINVAL;
-	/* We can read the guest memory with __xxx_user() later on. */
-	if ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
-	    (mem->userspace_addr != untagged_addr(mem->userspace_addr)) ||
-	    !access_ok((void __user *)(unsigned long)mem->userspace_addr,
-			mem->memory_size))
-		return -EINVAL;
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
-		return -EINVAL;
-	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
-		return -EINVAL;
-	if ((mem->memory_size >> PAGE_SHIFT) > KVM_MEM_MAX_NR_PAGES)
-		return -EINVAL;
-
 	slots = __kvm_memslots(kvm, as_id);
 
 	/*
@@ -1993,7 +1959,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		batch->change = KVM_MR_DELETE;
 		batch->new = NULL;
 
-		return kvm_set_memslot(kvm, batch);
+		return 0;
 	}
 
 	base_gfn = (mem->guest_phys_addr >> PAGE_SHIFT);
@@ -2020,7 +1986,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		else if (mem->flags != old->flags)
 			change = KVM_MR_FLAGS_ONLY;
 		else /* Nothing to change. */
-			return 0;
+			return 1;
 	}
 
 	if ((change == KVM_MR_CREATE || change == KVM_MR_MOVE) &&
@@ -2041,12 +2007,81 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	batch->new = new;
 	batch->change = change;
 
+	return 0;
+}
+
+static int kvm_check_mem(const struct kvm_userspace_memory_region *mem)
+{
+	int as_id, id;
+
+	as_id = mem->slot >> 16;
+	id = (u16)mem->slot;
+
+	/* General sanity checks */
+	if ((mem->memory_size & (PAGE_SIZE - 1)) ||
+	    (mem->memory_size != (unsigned long)mem->memory_size))
+		return -EINVAL;
+	if (mem->guest_phys_addr & (PAGE_SIZE - 1))
+		return -EINVAL;
+	/* We can read the guest memory with __xxx_user() later on. */
+	if ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
+	    (mem->userspace_addr != untagged_addr(mem->userspace_addr)) ||
+	    !access_ok((void __user *)(unsigned long)mem->userspace_addr,
+			mem->memory_size))
+		return -EINVAL;
+	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
+		return -EINVAL;
+	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
+		return -EINVAL;
+	if ((mem->memory_size >> PAGE_SHIFT) > KVM_MEM_MAX_NR_PAGES)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int kvm_check_memory_region(struct kvm *kvm,
+				   const struct kvm_userspace_memory_region *mem,
+				   struct kvm_internal_memory_region_list *batch)
+{
+	int r;
+
+	r = check_memory_region_flags(mem);
+	if (r)
+		return r;
 
-	r = kvm_set_memslot(kvm, batch);
+	r = kvm_check_mem(mem);
 	if (r)
-		kfree(new);
+		return r;
+
+	r = kvm_prepare_batch(kvm, mem, batch);
+	if (r && batch->new)
+		kfree(batch->new);
+
 	return r;
 }
+
+/*
+ * Allocate some memory and give it an address in the guest physical address
+ * space.
+ *
+ * Discontiguous memory is allowed, mostly for framebuffers.
+ * This function also takes care of initializing the batch->new/old/invalid/change
+ * fields.
+ *
+ * Must be called holding kvm->slots_lock for write.
+ */
+int __kvm_set_memory_region(struct kvm *kvm,
+			    const struct kvm_userspace_memory_region *mem,
+			    struct kvm_internal_memory_region_list *batch)
+{
+	int r;
+
+	r = kvm_check_memory_region(kvm, mem, batch);
+	if (r)
+		return r;
+
+	return kvm_set_memslot(kvm, batch);
+}
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
 
 static int kvm_set_memory_region(struct kvm *kvm,
@@ -2061,6 +2096,9 @@ static int kvm_set_memory_region(struct kvm *kvm,
 	mutex_lock(&kvm->slots_lock);
 	r = __kvm_set_memory_region(kvm, mem, batch);
 	mutex_unlock(&kvm->slots_lock);
+	/* r == 1 means empty request, nothing to do but no error */
+	if (r == 1)
+		r = 0;
 	return r;
 }
-- 
2.31.1
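P.S. for readers following the series: the control flow this patch produces
(validate the request, prepare a batch, then commit it, with a return value
of 1 acting as a "nothing to do, not an error" sentinel that the outermost
caller translates to 0) can be sketched as a standalone toy model. Every
name and value below is an invented stand-in, not the real KVM code; -22
plays the role of -EINVAL.

```c
/* Toy model of the check / prepare / commit split (hypothetical names). */

enum change { MR_CREATE, MR_DELETE, MR_NOTHING };

struct batch {
	enum change change;
	int prepared;
};

/* Validation only: 0 on success, negative errno-style value on failure. */
int check_mem(int size_ok)
{
	return size_ok ? 0 : -22; /* -EINVAL stand-in */
}

/* Fills the batch; returns 1 for an empty request, 0 otherwise.
 * Note it does not commit anything itself, not even a DELETE:
 * committing is delegated to the caller, as in the patch. */
int prepare_batch(struct batch *b, enum change c)
{
	if (c == MR_NOTHING)
		return 1; /* sentinel: nothing to change, but no error */
	b->change = c;
	b->prepared = 1;
	return 0;
}

/* Commit stand-in for kvm_set_memslot(). */
int commit_memslot(struct batch *b)
{
	return b->prepared ? 0 : -22;
}

/* Mirrors the shape of the new __kvm_set_memory_region(). */
int set_memory_region(enum change c, int size_ok)
{
	struct batch b = { MR_NOTHING, 0 };
	int r;

	r = check_mem(size_ok);
	if (r)
		return r;
	r = prepare_batch(&b, c);
	if (r)
		return r; /* propagates 1; the outer caller maps 1 to 0 */
	return commit_memslot(&b);
}
```

The point of the sentinel is that an empty request short-circuits before
the commit step yet is not reported to userspace as a failure.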