From: Elliot Berman
To: Alex Elder, Srinivas Kandagatla, Elliot Berman, Prakruthi Deepak Heragu
Cc: Murali Nalajala, Trilok Soni, Srivatsa Vaddagiri, Carl van Schaik,
    Dmitry Baryshkov, Bjorn Andersson, Konrad Dybcio, Arnd Bergmann,
    Greg Kroah-Hartman, Rob Herring, Krzysztof Kozlowski, Jonathan Corbet,
    Bagas Sanjaya, Will Deacon, Andy Gross, Catalin Marinas, Jassi Brar
Subject: [PATCH v12 19/25] virt: gunyah: Add resource tickets
Date: Mon, 24 Apr 2023 16:15:52 -0700
Message-ID: <20230424231558.70911-20-quic_eberman@quicinc.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230424231558.70911-1-quic_eberman@quicinc.com>
References: <20230424231558.70911-1-quic_eberman@quicinc.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

Some VM functions need to acquire Gunyah resources. For instance, Gunyah
vCPUs are exposed to the host as a resource. The Gunyah vCPU function will
register a resource ticket and be able to interact with the hypervisor
once the resource ticket is filled.

Resource tickets are the mechanism for functions to acquire ownership of
Gunyah resources. Gunyah functions can be created before the VM's
resources are created and made available to Linux. A resource ticket
identifies the type and label of the resource that the ticket holder is
interested in.

Resources are created by Gunyah as configured in the VM's devicetree
configuration. Gunyah doesn't process the label, which makes it possible
for userspace to create multiple resources with the same label. Resource
ticket owners must therefore be prepared for populate() to be called
multiple times if userspace created multiple resources with the same
label.
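As an illustrative aside (not part of this patch): a minimal sketch of how a
VM function driver might consume this API. gh_vm_add_resource_ticket(),
gh_vm_remove_resource_ticket() and the gh_vm_resource_ticket fields come from
the declarations added below; the "example" function driver and the
GH_RESOURCE_TYPE_VCPU constant are assumed from elsewhere in the series and
are hypothetical here. Because populate() can be called more than once for
the same label, the sketch claims only the first matching resource.

/* Illustrative sketch only -- not part of this patch. The "example"
 * function driver and GH_RESOURCE_TYPE_VCPU are assumed, not defined here.
 */
#include <linux/container_of.h>
#include <linux/gunyah_vm_mgr.h>
#include <linux/module.h>

struct example_function {
	struct gh_vm_resource_ticket ticket;
	struct gh_resource *rsc;	/* filled by example_populate() */
};

static bool example_populate(struct gh_vm_resource_ticket *ticket,
			     struct gh_resource *ghrsc)
{
	struct example_function *f = container_of(ticket, struct example_function, ticket);

	/* populate() may be called multiple times if userspace created
	 * several resources with the same label; claim only the first.
	 * Returning false leaves the resource on the VM's resources list.
	 */
	if (f->rsc)
		return false;

	f->rsc = ghrsc;
	return true;
}

static void example_unpopulate(struct gh_vm_resource_ticket *ticket,
			       struct gh_resource *ghrsc)
{
	struct example_function *f = container_of(ticket, struct example_function, ticket);

	f->rsc = NULL;
}

static int example_bind(struct gh_vm *ghvm, struct example_function *f, u32 label)
{
	f->ticket.resource_type = GH_RESOURCE_TYPE_VCPU;	/* assumed constant */
	f->ticket.label = label;
	f->ticket.owner = THIS_MODULE;
	f->ticket.populate = example_populate;
	f->ticket.unpopulate = example_unpopulate;

	/* Fails with -EEXIST if a ticket with the same type and label exists */
	return gh_vm_add_resource_ticket(ghvm, &f->ticket);
}

static void example_unbind(struct gh_vm *ghvm, struct example_function *f)
{
	/* unpopulate() is called for each resource the ticket still holds */
	gh_vm_remove_resource_ticket(ghvm, &f->ticket);
}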
Signed-off-by: Elliot Berman
---
 drivers/virt/gunyah/vm_mgr.c  | 117 +++++++++++++++++++++++++++++++++-
 drivers/virt/gunyah/vm_mgr.h  |   4 ++
 include/linux/gunyah_vm_mgr.h |  14 ++++
 3 files changed, 134 insertions(+), 1 deletion(-)

diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c
index e4b0a595589c..8fd59203792e 100644
--- a/drivers/virt/gunyah/vm_mgr.c
+++ b/drivers/virt/gunyah/vm_mgr.c
@@ -184,6 +184,99 @@ void gh_vm_function_unregister(struct gh_vm_function *fn)
 }
 EXPORT_SYMBOL_GPL(gh_vm_function_unregister);
 
+int gh_vm_add_resource_ticket(struct gh_vm *ghvm, struct gh_vm_resource_ticket *ticket)
+{
+	struct gh_vm_resource_ticket *iter;
+	struct gh_resource *ghrsc;
+	int ret = 0;
+
+	mutex_lock(&ghvm->resources_lock);
+	list_for_each_entry(iter, &ghvm->resource_tickets, vm_list) {
+		if (iter->resource_type == ticket->resource_type && iter->label == ticket->label) {
+			ret = -EEXIST;
+			goto out;
+		}
+	}
+
+	if (!try_module_get(ticket->owner)) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	list_add(&ticket->vm_list, &ghvm->resource_tickets);
+	INIT_LIST_HEAD(&ticket->resources);
+
+	list_for_each_entry(ghrsc, &ghvm->resources, list) {
+		if (ghrsc->type == ticket->resource_type && ghrsc->rm_label == ticket->label) {
+			if (ticket->populate(ticket, ghrsc))
+				list_move(&ghrsc->list, &ticket->resources);
+		}
+	}
+out:
+	mutex_unlock(&ghvm->resources_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(gh_vm_add_resource_ticket);
+
+void gh_vm_remove_resource_ticket(struct gh_vm *ghvm, struct gh_vm_resource_ticket *ticket)
+{
+	struct gh_resource *ghrsc, *iter;
+
+	mutex_lock(&ghvm->resources_lock);
+	list_for_each_entry_safe(ghrsc, iter, &ticket->resources, list) {
+		ticket->unpopulate(ticket, ghrsc);
+		list_move(&ghrsc->list, &ghvm->resources);
+	}
+
+	module_put(ticket->owner);
+	list_del(&ticket->vm_list);
+	mutex_unlock(&ghvm->resources_lock);
+}
+EXPORT_SYMBOL_GPL(gh_vm_remove_resource_ticket);
+
+static void gh_vm_add_resource(struct gh_vm *ghvm, struct gh_resource *ghrsc)
+{
+	struct gh_vm_resource_ticket *ticket;
+
+	mutex_lock(&ghvm->resources_lock);
+	list_for_each_entry(ticket, &ghvm->resource_tickets, vm_list) {
+		if (ghrsc->type == ticket->resource_type && ghrsc->rm_label == ticket->label) {
+			if (ticket->populate(ticket, ghrsc))
+				list_add(&ghrsc->list, &ticket->resources);
+			else
+				list_add(&ghrsc->list, &ghvm->resources);
+			/* unconditional -- we prevent multiple identical
+			 * resource tickets so there will not be some other
+			 * ticket elsewhere in the list if populate() failed.
+			 */
+			goto found;
+		}
+	}
+	list_add(&ghrsc->list, &ghvm->resources);
+found:
+	mutex_unlock(&ghvm->resources_lock);
+}
+
+static void gh_vm_clean_resources(struct gh_vm *ghvm)
+{
+	struct gh_vm_resource_ticket *ticket, *titer;
+	struct gh_resource *ghrsc, *riter;
+
+	mutex_lock(&ghvm->resources_lock);
+	if (!list_empty(&ghvm->resource_tickets)) {
+		dev_warn(ghvm->parent, "Dangling resource tickets:\n");
+		list_for_each_entry_safe(ticket, titer, &ghvm->resource_tickets, vm_list) {
+			dev_warn(ghvm->parent, " %pS\n", ticket->populate);
+			gh_vm_remove_resource_ticket(ghvm, ticket);
+		}
+	}
+
+	list_for_each_entry_safe(ghrsc, riter, &ghvm->resources, list) {
+		gh_rm_free_resource(ghrsc);
+	}
+	mutex_unlock(&ghvm->resources_lock);
+}
+
 static int gh_vm_rm_notification_status(struct gh_vm *ghvm, void *data)
 {
 	struct gh_rm_vm_status_payload *payload = data;
@@ -266,6 +359,9 @@ static __must_check struct gh_vm *gh_vm_alloc(struct gh_rm *rm)
 	init_waitqueue_head(&ghvm->vm_status_wait);
 	INIT_WORK(&ghvm->free_work, gh_vm_free);
 	kref_init(&ghvm->kref);
+	mutex_init(&ghvm->resources_lock);
+	INIT_LIST_HEAD(&ghvm->resources);
+	INIT_LIST_HEAD(&ghvm->resource_tickets);
 	INIT_LIST_HEAD(&ghvm->functions);
 
 	ghvm->vm_status = GH_RM_VM_STATUS_NO_STATE;
@@ -275,9 +371,11 @@ static __must_check struct gh_vm *gh_vm_alloc(struct gh_rm *rm)
 static int gh_vm_start(struct gh_vm *ghvm)
 {
 	struct gh_vm_mem *mapping;
+	struct gh_rm_hyp_resources *resources;
+	struct gh_resource *ghrsc;
 	u64 dtb_offset;
 	u32 mem_handle;
-	int ret;
+	int ret, i, n;
 
 	down_write(&ghvm->status_lock);
 	if (ghvm->vm_status != GH_RM_VM_STATUS_NO_STATE) {
@@ -337,6 +435,22 @@ static int gh_vm_start(struct gh_vm *ghvm)
 	}
 	ghvm->vm_status = GH_RM_VM_STATUS_READY;
 
+	ret = gh_rm_get_hyp_resources(ghvm->rm, ghvm->vmid, &resources);
+	if (ret) {
+		dev_warn(ghvm->parent, "Failed to get hypervisor resources for VM: %d\n", ret);
+		goto err;
+	}
+
+	for (i = 0, n = le32_to_cpu(resources->n_entries); i < n; i++) {
+		ghrsc = gh_rm_alloc_resource(ghvm->rm, &resources->entries[i]);
+		if (!ghrsc) {
+			ret = -ENOMEM;
+			goto err;
+		}
+
+		gh_vm_add_resource(ghvm, ghrsc);
+	}
+
 	ret = gh_rm_vm_start(ghvm->rm, ghvm->vmid);
 	if (ret) {
 		dev_warn(ghvm->parent, "Failed to start VM: %d\n", ret);
@@ -458,6 +572,7 @@ static void gh_vm_free(struct work_struct *work)
 		gh_vm_stop(ghvm);
 
 	gh_vm_remove_functions(ghvm);
+	gh_vm_clean_resources(ghvm);
 
 	if (ghvm->vm_status != GH_RM_VM_STATUS_NO_STATE &&
 	    ghvm->vm_status != GH_RM_VM_STATUS_LOAD &&
diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h
index c4bec1469ae8..e5e0c92d4cb1 100644
--- a/drivers/virt/gunyah/vm_mgr.h
+++ b/drivers/virt/gunyah/vm_mgr.h
@@ -7,6 +7,7 @@
 #define _GH_VM_MGR_H
 
 #include
+#include
 #include
 #include
 #include
@@ -52,6 +53,9 @@ struct gh_vm {
 	struct list_head memory_mappings;
 	struct mutex fn_lock;
 	struct list_head functions;
+	struct mutex resources_lock;
+	struct list_head resources;
+	struct list_head resource_tickets;
 };
 
 int gh_vm_mem_alloc(struct gh_vm *ghvm, struct gh_userspace_memory_region *region);
diff --git a/include/linux/gunyah_vm_mgr.h b/include/linux/gunyah_vm_mgr.h
index f609c44d2af3..266b24579636 100644
--- a/include/linux/gunyah_vm_mgr.h
+++ b/include/linux/gunyah_vm_mgr.h
@@ -82,4 +82,18 @@ void gh_vm_function_unregister(struct gh_vm_function *f);
 	module_gh_vm_function(_name); \
 	MODULE_ALIAS_GH_VM_FUNCTION(_type, _idx)
 
+struct gh_vm_resource_ticket {
+	struct list_head vm_list; /* for gh_vm's resource tickets list */
+	struct list_head resources; /* resources associated with this ticket */
+	enum gh_resource_type resource_type;
+	u32 label;
+
+	struct module *owner;
+	bool (*populate)(struct gh_vm_resource_ticket *ticket, struct gh_resource *ghrsc);
+	void (*unpopulate)(struct gh_vm_resource_ticket *ticket, struct gh_resource *ghrsc);
+};
+
+int gh_vm_add_resource_ticket(struct gh_vm *ghvm, struct gh_vm_resource_ticket *ticket);
+void gh_vm_remove_resource_ticket(struct gh_vm *ghvm, struct gh_vm_resource_ticket *ticket);
+
 #endif
-- 
2.40.0